article | abstract |
---|---|
the no - cloning theorem states that it is impossible to make perfect copies of an unknown quantum state . at variance with the classical world , where it is possible to duplicate information faithfully, the unitarity of time evolution in quantum mechanics does not allow us to build a perfect quantum copying machine .this no - go theorem is at the root of the security of quantum cryptography , since an eavesdropper is unable to copy the information transmitted through a quantum channel without disturbing the communication itself .although perfect cloning is not allowed , it is , nevertheless , possible to produce several approximate copies of a given state .several works , starting from the seminal paper by buek and m. hillery , have been devoted to find the upper bounds to the fidelity of approximate cloning transformations compatible with the rules of quantum mechanics .besides the theoretical interest on its own , applications of quantum cloning can be found in quantum cryptography , because they allow us to derive bounds for the security in quantum communication , in quantum computation , where quantum cloning can be used to improve the performance of some computational tasks , and in the problem of state estimation .as mentioned above , the efficiency of the cloning transformations is usually quantified in terms of the fidelity of each output cloned state with respect to the input .the largest possible fidelity depends on several parameters and on the characteristics of the input states . for an cloner it depends on the number of the input states and on the number of output copies .it also depends on the dimension of the quantum systems to be copied. moreover , the fidelity increases if some prior knowledge of the input states is assumed . in the universal cloning machinethe input state is unknown .a better fidelity is achieved , for example , in the phase covariant cloner ( pcc ) where the state is known to lie on the equator of the bloch sphere ( in the case of qubits ) .upper bounds to the fidelity for copying a quantum state were obtained in refs . and in the case of universal and state dependent cloning respectively .the more general problem of copying qubits has been also addressed .the pcc has been proposed in ref .several protocols for implementing cloning machines have been already achieved experimentally . in all the above proposalsthe cloning device is described in terms of quantum gates , or otherwise is based on post - selection methods .for example , the quantum network corresponding to the pcc consists of two controlled - not ( c - not ) gates together with a controlled rotation .the implementation of given tasks by means of quantum gates is not the the only way to execute the required quantum protocols .recently it has been realized that there are situations where it is sufficient to find a proper architecture for the qubit network and an appropriate form for the coupling between qubits to achieve the desired task . 
under these conditionsthe execution of a quantum protocol is reached by the time evolution of the quantum - mechanical system .the only required control is on the preparation of the initial state of the network and on the read - out after the evolution .this perspective is certainly less flexible than the traditional approach with quantum gates .nevertheless it offers great advantages as it does _ not _ require any time modulation for the qubits couplings .moreover , among the reasons for this `` no - control '' approach to quantum information is that the system is better isolated from the environment during its evolution .this is because there is no active control on the hamiltonian of the system .actually , after initializing the network one needs only to wait for some time ( to be determined by the particular task ) and then read the output .several examples have been provided so far .a spin network for quantum computation based only on heisenberg interactions has been proposed .another area where this approach is attracting increasing attention is quantum communication , where spin chains have been proposed as natural candidates of quantum channels .an unknown quantum state can be prepared at one end of the chain and then transferred to the other end by simply employing the ability of the chain to propagate the state by means of its dynamical evolution .these proposals seem to be particularly suited for solid state quantum information , where schemes for implementation have already been put forward . stimulated by the above results in quantum communication we have studied quantum cloning in this framework .the main goal is to find a spin network and an interaction hamiltonian such that at the end of its evolution the initial state of a spin is ( imperfectly ) copied on the state of a suitable set of the remaining spins . in this paperwe will show that this is indeed possible and we will analyze various types of quantum cloners based on the procedure just described . we will describe a setup for the pcc and we will show that for and the spin network cloning ( snc ) achieves the optimal bound .we will also describe the more general situation of cloning of qudits , i.e. d - level systems .an important test is to compare the performance of our snc with the traditional approach using quantum gates .we show that in the ( unavoidable ) presence of noise our method is far more robust .some of the results have been already given in ref . . in the present paperwe will give many additional details , not contained in ref . , and extend our approach to cloning to several other situations .we discuss cloning of qutrits , universal cloning machines , and optimization of the model hamiltonian just to mention few extentions .the paper is organized as follows . in section [ sec : pcc ]we review the basic properties of approximate cloning showing the theoretical optimal bounds . in section [ sec : snc ]we present the models and the networks topologies considered in this work . 
in sections [ sec:1mpcc ] and [ sec : nm ] we briefly review and extend the results obtained in .these sections concern the spin network model to implement the and phase covariant cloning transformations respectively .in addition to a more detailed discussion , as compared to , here we present a detailed analysis of the role of static imperfections .we also optimize the cloning protocol over the space of a large class of model hamiltonians which includes the and heisenberg as limiting cases .the effects of noise , included in a fully quantum mechanical approach , are analyzed in section [ sec : noise ] , where we compare our cloning setup with cloning machines based on a gate design . in sect .[ sec : univ ] we study the possibility of achieving universal cloning with the spin network approach . in section [ sec : qudits ] we generalize the snc for qutrits and qudits .finally , in section [ sec : implementation ] we propose a simple josephson junctions circuit that realizes the protocol and in [ sec : conclusion ] we summarize the main results and present our conclusions .most of this paper deals with the case of pcc .we will therefore devote this section to a brief summary of the results known so far for the optimal fidelity achievable in this case .we start our discussion by considering quantum cloning of qubits , whose hilbert space is spanned by the basis states and .the most general state of a qubit can be parametrized by the angles on the bloch sphere as follows quantum cloning was first analyzed , where the universal quantum cloning machine ( uqcm ) was introduced .we remind that the fidelity of a uqcm does not depend on i.e. it is the same for all possible input states .as already mentioned , the quality of the cloner is quantified by means of the fidelity of each output copy , described by the density operator , with respect to the original state the value of the optimal fidelity is achieved by maximizing over all possible cloning transformations . the result forthe uqcm is .the general form of the optimal transformation , which requires an auxiliary qubit , has been explicitly obtained in ref . .when the initial state is known to be in a given subset of the bloch sphere , the value of the optimal fidelity generally increases .for example , in ref . cloning of just two non orthogonal states is studied and it is shown that the fidelity in this case is greater than that for the uqcm .the reason is that now some prior knowledge information on the input state is available. another important class of transformations , which will be largely analyzed in the present paper , is the so called phase covariant cloning . in this type of clonerthe fidelity is optimised equally well for all states belonging to the equator of the bloch sphere : where ] where are the quantum numbers associated to the corresponding operators .the eigenvectors can be found in terms of the clebsch - gordan coefficients .the results for the coefficients are : \label{hbeta1 } \\\beta_2(t ) & = & \beta\frac{\sqrt{2s } } { 1 + 2s}\left[1- e^{i\left(\frac{1}{2}+s\right)t}\right ] \label{hbeta2 } \end{aligned}\ ] ] where .the maximum value for the fidelity }{2 { ( 1 + m ) } ^2}\ ] ] is obtained for the parameters .now let us turn our attention to the model . solving the eigenvalue problem as in one finds fidelity is maximized when and : for versus for the ( solid ) and heisenberg ( dashed ) model are shown .notice that the optimal fidelity for the pcc is exactly that of the model . 
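a minimal numerical check of the claim above can be written in a few lines. the sketch below is an assumption-laden illustration, not the paper's exact parametrization: it takes an excitation-conserving xy (hopping) coupling between the central spin and two blanks prepared in |0>, plus a uniform field B, evolves the network freely, and scans (B, t) for the best single-clone fidelity, which should land within about 1e-4 of the optimal phase-covariant value 1/2 + 1/sqrt(8) ~ 0.8536.

```python
# minimal numerical sketch (assumed conventions, not the paper's exact ones):
# a 1 -> 2 xy spin-star cloner with a uniform field B.  the central qubit
# carries an equatorial input, the outer qubits start as blanks in |0>, and
# we scan (B, t) for the best fidelity of a single clone.
import numpy as np

I2 = np.eye(2)
SZ = np.diag([1.0, -1.0])
SP = np.array([[0.0, 1.0], [0.0, 0.0]])   # maps |1> -> |0>
SM = SP.T                                  # maps |0> -> |1>


def embed(op, site, n):
    """single-qubit operator `op` on qubit `site` of an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out


def star_hamiltonian(n_outer, J, B):
    """central qubit 0 exchange-coupled (xy) to every outer qubit, uniform field B."""
    n = n_outer + 1
    H = sum(0.5 * B * embed(SZ, k, n) for k in range(n))
    for j in range(1, n):
        H = H + J * (embed(SP, 0, n) @ embed(SM, j, n)
                     + embed(SM, 0, n) @ embed(SP, j, n))
    return H


def clone_fidelity(phi, E, V, t, n_outer):
    """fidelity of outer qubit 1 with the equatorial input (|0> + e^{i phi}|1>)/sqrt(2)."""
    n = n_outer + 1
    psi_in = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)
    state = psi_in
    for _ in range(n_outer):
        state = np.kron(state, np.array([1.0, 0.0]))      # blanks in |0>
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T      # e^{-i H t}
    psi = (U @ state).reshape([2] * n)
    rest = [k for k in range(n) if k != 1]                  # trace out all but clone 1
    rho1 = np.tensordot(psi, psi.conj(), axes=(rest, rest))
    return float(np.real(psi_in.conj() @ rho1 @ psi_in))


def best_fidelity(n_outer=2, J=1.0):
    best = 0.0
    for B in np.linspace(0.0, 3.0, 121):
        E, V = np.linalg.eigh(star_hamiltonian(n_outer, J, B))
        for t in np.linspace(0.0, 3.0, 301):
            best = max(best, clone_fidelity(0.7, E, V, t, n_outer))
    return best


if __name__ == "__main__":
    print("best 1->2 star fidelity:", round(best_fidelity(), 4))
    print("optimal pcc benchmark  :", round(0.5 + 1.0 / np.sqrt(8.0), 4))
```

the same scan runs for more blanks (n_outer > 2), although this brute-force sketch builds the full 2^(m+1)-dimensional hamiltonian and so scales poorly with the number of clones.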
] for the fidelity is always greater than for .let us analyze the previous results in two important cases .first let us discuss the case for arbitrary ( see fig.[fig:12cloning ] ) : the fidelity _ coincides _ with the fidelity for the pcc i.e. the snc saturates the optimal bound for the pcc .second let us consider and arbitrary m( see fig.[fig:1mcloning ] ) : for , is always smaller than the optimal fidelity given in ref. .also in this case the xy model is better suited for quantum cloning as compared to the heisenberg case .although for generic the snc does not saturate the optimal bound , there is a very appealing feature of this methods which makes it interesting also in this case . the time required to clone the state_ decreases _ with .this implies that , in the presence of noise , snc may be competitive with the quantum circuit approach , where the number of gates are expected to _ increase _ with .we analyze this point in section [ sec : noise ] .recently a pcc with the star configuration has been proposed also for a multi - qubit cavity . in this proposalthe central spin is replaced by a bosonic mode of the cavity . by restricting the dynamics in the subspace with only one excitation ( one excited qubit or one photon in the cavity )the hamiltonian is equivalent to the spin star network considered here .indeed the optimal fidelities coincide with , eq .. for pcc ( circle ) , ( diamond ) and heisenberg ( triangle ) as functions of for . ]all the results discussed so far have been obtained for the star network .obviously this is not the only choice which fulfills the symmetries of a quantum cloning network . in general oneshould also consider more general topologies and understand to what extent the fidelity depends on the topology .we analyzed this issue by studying the fidelity for the model and for for the graph b of fig.[fig : network ] ( the fidelity for heisenberg model in this case is much worse than in the star configuration ) .we conclude that the maximum fidelity obtained does not depend on the chosen graph .to assess the robustness of our protocol , it is important to analyze the effect of static imperfections in the network . in a nanofabricated network , as for example with josephson nanocircuits, one may expect small variations in the qubit couplings . herewe analyze the cloning assuming that the couplings have a certain degree of randomness .for each configuration of disorder are assigned in an interval of amplitude centered around with a uniform distribution .first we study the case of uncorrelated disorder in different links .the values of and are chosen to be the optimal values of the ideal situation . for a given configuration of the couplingsthe fidelities of each of the clones are different due to the different coupling with the central spin .only the average fidelity is again symmetric under permutation among the clones .we averaged the fidelity over the sites and over realization of disorder . for and the mean fidelity decreases by just less than of the optimal value .it is important to stress that the effect of imperfections is quite weak on the average fidelity .this is because for certain values of , even if the fidelity of a particular site can become much larger than the fidelity in the absence of disorder , at the same time for the same parameters the fidelity in other sites is very small and the average fidelity is weakly affected by imperfections . 
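since the inline expressions were lost in extraction, it may help to restate the standard benchmark values that the comparisons above refer to. these are well-known literature results for qubits (the general 1-to-m phase-covariant expressions can be found in the cited references and are not reproduced here), together with the equatorial-state form used throughout; they are quoted from the cloning literature, not recovered from this paper's own equations:

```latex
% equatorial (phase-covariant) input states and standard cloning benchmarks
% (literature values, not recovered from this paper's equations):
\begin{align}
  |\psi(\phi)\rangle &= \tfrac{1}{\sqrt{2}}\left(|0\rangle + e^{i\phi}|1\rangle\right), \\
  F^{\mathrm{univ}}_{1\to 2} &= \tfrac{5}{6} \approx 0.833, \qquad
  F^{\mathrm{univ}}_{1\to M} = \tfrac{2M+1}{3M}, \\
  F^{\mathrm{pcc}}_{1\to 2} &= \tfrac{1}{2} + \tfrac{1}{\sqrt{8}} \approx 0.854 .
\end{align}
```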
in figure[ fig : imper ] we show the fidelity for the snc with imperfections as a function of the tolerance .we study also the case with correlations between the signs of nearest neighbor bonds : the probability of equal signs is proportional to ] for and ] and .we found the absolute maximum of the fidelity in this interval .the result of this maximization is summarized in table [ tab : nmcloning ] for several values of and .we also calculated the time to reach a value of fidelity slightly lower than .the time needed to reach , , is greatly reduced .indeed the fidelity is a quasi periodic function of time approaching several times values very close to . in table[ tab : nmcloning ] both the absolute maximum ( column 4 ) in the chosen interval and the time ( last column in the table ) are shown .so far we have described the unitary evolution of isolated spin networks .real systems however are always coupled to an environment which destroys their coherence . in this sectionwe will try to understand the effect of noise on the snc .we will also compare the performances of quantum cloning machines implemented with spin networks and with quantum circuits using the same hamiltonian .the effect of the environment can be modeled in different ways .one is to add classical fluctuations to the external magnetic field or the coupling .these random fluctuations can be either time independent or stationary stochastic processes . in both cases one can define an effective field variance and average the resulting fidelity . in fig .[ clnoise ] we compare the fidelity and as a function of for the -model with the optimal average values for fluctuating ( solid ) and ( dashed ) .the probability distributions are chosen to be gaussian .note that the fidelity is more sensitive to fluctuations of . 1 cm model with a classical fluctuating field . is plotted as a function of the variance for fluctuating ( solid ) and ( dashed ) . ]however there are situations in which the environment can not be modeled as classical noise and one has to use a fully quantum mechanical description . following the standard approach ,we model the effects of a quantum environment by coupling the spin network to a bosonic bath .then we describe the time evolution for the reduced density matrix of the spin system alone , after tracing out the bath degrees of freedom in terms of a master equation .the hamiltonian for the whole system is \\h_{r}&=&\sum_{i=1}^{m+1}\sum_{k } \omega_i(k)\,\ , a_i^\dagger(k)\,a_i(k)\end{aligned}\ ] ] where is the spin hamiltonian defined in eq . .the model is presented for generic but we will discuss the results only for and . we suppose that each spin is coupled to a different bath , labeled by , and that all baths are independent , and are the frequency and the coupling constant of the mode of the bath .it is convenient to define the operator ] indicates the fourier transform . in eq . is the mean occupation number of the mode at temperature and is the spectral density .we suppose that the bath is ohmic , as often encountered in several situations , i.e. has a simple linear dependency at low frequencies up to some cut - off : the parameter represents the strength of the noise and is the cut - off frequency . in order to to compare snc with traditional quantum cloning machines we have to consider a specific system where the required gates are performed . 
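for reference, the ohmic spectral density mentioned above is conventionally written with a linear low-frequency behaviour and a cut-off; the symbols below (alpha for the noise strength, omega_c for the cut-off) follow the usual convention and are assumptions here, since the paper's own expression was lost:

```latex
% conventional ohmic spectral density with a hard cut-off (one common choice):
\begin{equation}
  J(\omega) \;=\; \alpha\,\omega\,\Theta(\omega_c - \omega) .
\end{equation}
```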
obviously this can be done in several different ways :we choose the hamiltonian as the model system for both schemes .in particular we compare the two methods for and equatorial qubits . for the quantum circuit approach quantum gates are implemented by a timedependent hamiltonian .it has been shown that the hamiltonian is sufficient to implement both one and two - qubit gates .the elementary two - qubit gate is the iswap : {cccc } 1 & & & \\ & 0 & i & \\ & i & 0 & \\ & & & 1 \end{array } \right)\ ] ] it can be obtained turning on an interaction between the two qubits without external magnetic field and letting them interact for . by applying the iswap gate twice , the cnot operation can be constructed }&{{*+=[o]-[f]{\bullet}}{\ar @{- } [ 2,0 ] } { \ar @{- } [ r ] } } & & & { \ar @{- } [ r ] } & { \hspace{-10pt } \rule{6pt}{.4pt } \hspace{-10pt } { \ar @{- } [ r]}}&{*+[f]{r_z\left(-{\frac{\pi}{2}}\right ) } { \ar @{- } [ r ] } } & { { \phantom{iswap } } { \ar @{- } [ r]}\save[0,0].[2,0]!c * \frm{- } \restore } & { * + [ f]{r_x\left({\frac{\pi}{2}}\right ) } { \ar @{- } [ r]}}&{{\phantom{iswap } } { \ar @{- } [ r]}\save[0,0].[2,0]!c * \frm{- } \restore } & { \hspace{-10pt } \rule{6pt}{.4pt } \hspace{-10pt } { \ar @{- } [ r ] } } & \\ & & & = & & & & \textrm{iswap } & & \textrm{iswap}&\\ { \ar @{- } [ r]}&{*+=[o]=<3mm>[f]{+ } { \ar @{- } [ r ] } } & & & { \ar @{- } [ r ] } & { * + [ f]{r_x\left({\frac{\pi}{2}}\right ) } { \ar @{- } [ r ] } } &{ * + [ f]{r_z\left({\frac{\pi}{2}}\right ) } { \ar @{- } [ r ] } } & { { \phantom{iswap } } { \ar @{- } [ r]}}&{\hspace{-10pt } \rule{6pt}{.4pt } \hspace{-10pt } { \ar @{- } [ r]}}&{{\phantom{iswap } } { \ar @{- } [ r]}}&{*+[f]{r_z\left({\frac{\pi}{2}}\right ) } { \ar @{- } [ r ] } } & } \ ] ] this means that we need two two - qubit operations for each cnot .we simulated the circuits shown in fig .[ fig : circuitpcc ] for and in the presence of noise and we calculated the corresponding fidelities .we neglected the effect of noise during single qubit operations .this is equivalent to assume that the time needed to perform this gates is much smaller than the typical decoherence time .the results are shown in fig.[fig : beta10a ] and fig.[fig : beta10b ] .the fidelity for the quantum gates ( squares ) and that for the snc ( circles ) are compared as functions of the coupling parameter . cloning .comparison of the fidelity obtained by the spin network method and the quantum circuit ( interaction ) discussed in in the presence of an external quantum noise .circles and squares refer to the network and gates case respectively ( . ) .the parameters for the environment are and . ] for the case . 
] even for small the fidelity for the circuits is much worse than that for the network .notice that for , though without noise ( ) the snc fidelity is lower than the ideal one , for the situation is reversed .this shows that our scheme is more efficient than the one based on quantum gates .moreover for the time required for quantum circuit pcc grows with increasing m while , as discussed previously , the optimal of the snc decreases with .this suggests that our proposal is even more efficient for growing .changing the model does not affect these results .indeed the time required to perform a cnot using heisenberg or ising interactions is just half the time required for the model .we also believe that in a real implementation the effect of noise on our system can be very small compared to the that acting on a quantum circuit .this is because during the evolution the spin network can be isolated from the environment .it would be desirable to implement also a universal quantum cloner by the same method illustrated here . in this sectionwe briefly report our attempt to implement the universal cloner . in the previous sections we demonstrated that for the models presented the fidelity is invariant on ( phase covariance ) but still depends on .this axial symmetry relies on the selection of the -axis for the initialization of the blank spins .in order to perform a universal cloner we need a spherical symmetry .this means that both the hamiltonian and the initial state must be isotropic .the first condition is fulfilled using the heisenberg interaction without static magnetic field that would break the spherical symmetry .the second requirement can be obtained using for the initial state of the blank qubits a completely random state . in other words the complete state of the network ( initial state + blanks ) is the maximum fidelity is obtained for and has the value that has to be compared with the value of the optimal universal cloner .our model is the most general time independent network containing three spins and fulfills the required conditions .spin network cloning technique can be generalized to qutrits and qudits .this is what we discuss in this section starting , for simplicity , with the qutrit case .the cloning of qudits is a straightforward generalization .our task is to find an interaction hamiltonian between qutrits able to generate a time evolution as close as possible to the cloning transformation. one obvious generalization of the qubit case is to consider qutrits as spin-1 systems . in this picturethe three basis states could be the eigenstates of the angular momentum with z component ( -1,0,1 ) .the natural interaction hamiltonian would then be the heisenberg or the interaction alternatively one can think to use the state of physical qubits to encode the qutrits .such an encoding , originally proposed in a different context , uses three qubits to encode one single logical qutrit : in ref. it is shown that this encoding , together with a time - dependent interaction , is universal for quantum computing with qutrits . in our workhowever we have restricted ourselves to the use of time - independent interactions with a suitable design of the spin network . for the qubit casethe interaction is able to swap two spins .we know that this is the key to clone qubits and so one could try a similar approach also for qutrits .however , for higher spin , hund s rule forbids the swapping . 
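for concreteness, the spin-1 couplings alluded to above (heisenberg exchange and its xy restriction) are conventionally written as below, with bold S denoting spin-1 operators; this is the standard textbook form, stated as an assumption since the paper's own equation was lost:

```latex
% standard spin-1 exchange couplings between the central site 0 and site j:
\begin{align}
  H_{\mathrm{Heis}} &= J \sum_{j} \mathbf{S}_0 \cdot \mathbf{S}_j , &
  H_{XY} &= J \sum_{j} \left( S^x_0 S^x_j + S^y_0 S^y_j \right) .
\end{align}
```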
for this reasonwe have turned our attention to the encoded qutrits to see if swapping is possible .it is simple to show that the network depicted in fig.[fig : qutritxy ] satisfies our requirements . model . ] in the arrangement each dot represents a spin and three dots inside an ellipse correspond to an encoded qutrit . a static magnetic field pointing in the direction is applied to the first spin .a line connecting two dots means that they interact via an interaction with amplitude j. it can be easily checked that for a single couple of qutrits the exchange processes are possible .this network is the generalization of the spin star that we analyzed before in which a single qutrit interacts with the others .it is easily generalized for the case using three spin stars .the single qutrit hamiltonian is realized applying magnetic fields to the physical qubits .in analogy with the qubit cloner we will prepare qutrit 1 in the original state and initialize the other qutrits in a blank state , for example .now due to the interactions the state will evolve in a restricted subspace of the hilbert space : to find the fidelity of the clones with respect to the state of eq .we need the reduced density matrix of one of the clones ( for example the third ) .the result , in the basis , is in order to find the coefficients and we have to diagonalize the hamiltonian .we consider the double pcc of eq . :our model is automatically invariant on because there is no preferred direction in the space of the qutrits .the maximum fidelity achievable with snc is : this value has been obtained with and .note that this value is very close to the optimal one and the difference is only .pcc in dimensions are compared . ]we calculated also the fidelity for the cloning of qutrits using the star configuration .the maximum fidelity is : obtained for the same value of the star configuration of qubits ( and ) .the generalization to qudits is straightforward .following the same approach we encode qudits using qubits to encode each qudit . after some algebra one finds the general expression for the pcc in d dimensions .the values and are independent from and the expression for the fidelity is : in fig.[fig : qudit ] the optimal and snc fidelities are compared .as we can see , the fidelity of the spin network implementation is very close to the ideal one .the final section of this work is devoted to the possibility of implementing spin network cloning in solid - state devices .besides the great interest in solid state quantum information , nanofabricated devices offer great flexibility in the design and allow to realize the graphs represented in fig.[fig : network ] .we analyze the implementation with josephson nanocircuits which are currently considered among the most promising candidates as building blocks of quantum information processors . herewe discuss only the cloning for qubits .the generalization to the other cases is straightforward . .the device operates in the charging regime , i.e. the josephson couplings of the junction ( crossed box in the figure ) is much smaller than the charging energy .b ) implementation of the spin network cloning by means of josephson qubits .the unknown state to be cloned in stored in the central qubit while the blank qubits and are the ones where the state is cloned .the coupling between the qubits is via the josephson junctions of coupling energy . 
] in the charge regime a josephson qubit can be realized using a cooper pair box ( see fig.[fig : squbit]a ) , the logical state is characterized by the box having zero or one excess charge . among the various ways to couple charge qubits , in order to implement snc the qubits should be coupled via josephson junctions ( see fig.[fig : squbit]b ) .the central qubit ( denoted by in the figure ) will encode the state to be cloned while the upper and lower qubits ( denoted with =up and =down ) are initially in the blank state .all the josephson junctions are assumed to be tunable by local magnetic fluxes .the total hamiltonian of the 3-qubit system is given by the sum of the hamiltonians of the qubits plus the interaction between them . where is the josephson coupling in the cooper pair box and is the energy difference between the two charge states of the computational hilbert space .the coupling hamiltonian for the -qubit system is \label{3clone}\end{aligned}\ ] ] here is the josephson energy of the junctions which couple the different qubits and .if the coupling capacitance between the qubits is very small as compared to the other capacitances one can assume to be negligible . in practice , however the capacitive coupling is always present therefore it is necessary to have .then the dynamics of the system approximates the ideal dynamics required to perform quantum cloning .the protocol to realize the snc requires the preparation of the initial state .this can be achieved by tuning the gate voltages in such a way that the blank qubits are in and the central qubit is in the state to be cloned . during the preparation the coupling between the qubits should be kept zero by piercing the corresponing squid loops of the junctsion with a magnetic field equal to a flux quantum . in the second step, is switched off and the dynamics of the system is entirely governed by . at the optimal time the original state is cloned in the and qubits . . ]as the implementation with superconducting nanocircuits has a slightly different hamiltonian as compared to the ideal model it is important to check for the loss of fidelity due to this difference . as it is shown in fig.[fig : fidelityreal ] , for the maximum fidelity achievable differs at most by from the ideal value .we have demonstrated that quantum cloning , in particular pcc , can be realized using no external control but just with an appropriate design of the system hamiltonian .we considered the heisenberg and coupling between the qubits and we found that the model saturates the optimal value for the fidelity of the pcc . in all other caseswe have analyzed ( pcc , universal cloning , cloning of qudits ) our protocol gives a value of the fidelity of clones that is always within a few percent of the optimal value . as compared to the standard protocol using quantum gates , however , there is a major advantage .our setup is fast and , moreover , its execution time does not increase with the number of qubits to be cloned . 
in the presence of noisethis allows to reach a much better fidelity than the standard protocol even in the presence of a weak coupling to the external environment .in addition we expect that the system in the snc is better isolated from the external environment because no gate pulses are needed .finally we proposed a possible implementation of our scheme using superconducting devices available with present day technology .this would be the first experimental realization of quantum cloning in solid state systems .we want to stress that our results on cloning together with others on communication and computation open new perspectives in the realization of a quantum processor , reducing the effect of noise on the system . it would be interesting to consider if it is possible to realize other quantum information protocols or quantum algorithms , using time independent spin networks .galvo and l. hardy , phys . rev .a * 62 * , 022301 ( 2000 ) . g. m. dariano and p. lo presti , phys .a * 64 * , 042308 ( 2001 ) .d. bru , d. p. divincenzo , a. ekert , c. a. fuchs , c. macchiavello , j. a. smolin , phys .a * 57 * , 2368 ( 1998 ) .n. gisin and s. massar , phys .lett . * 79 * , 2153 - 2156 ( 1997 ) ; d. bruss , a. ekert and c. macchiavello , phys . rev . lett . * 81 * , 2598 ( 1998 ) ; r. f. werner , phys . rev . a*58 * , 1827 ( 1998 ) .d. bru , m. cinchetti , g. m. dariano , c. macchiavello , phys .a * 62 * , 012302 ( 2000 ) . h.k. cummins , c. jones , a. furze , n. f. soffe , m. mosca , j. m. peach , j. a. jones , phys .lett . * 88 * , 187901 ( 2002 ) .a. lama - linares , c. simon , j .- c .howell and d. bouwmeester , science * 296 * , 712 ( 2002 ) .d. pelliccia , v. schettini , f. sciarrino , c. sias and f. de martini , phys .a * 68 * , 042306 ( 2003 ) ; f. de martini , d. pelliccia and f. sciarrino , phys .lett . * 92 * , 067901 ( 2004 ) .j. du , t. durt , p. zou , l.c .kwek , c.h .lai , c.h .oh , and a. ekert , quant - ph/0311010 . c .- s .niu and r.b .griffiths , phys .a * 60 * , 2764 ( 1999 ) .s. c. benjamin and s. bose , phys .. lett . * 90 * , 0247901 ( 2003 ) .yung , d.w .leung and s. bose , quantum inf .* 4 * , 174 ( 2004 ). s. bose , phys .* 91 * , 207901 ( 2003 ) .v. subrahmanyam , phys . rev . a * 69 * , 034304 ( 2004 ) .t. j. osborne and n. linden , phys .a * 69 * , 052315 ( 2004 ) .m. christandl , n. datta , a. ekert , and a. j. landahl , phys .lett . * 92 * , 187902 ( 2004 ) .s. lloyd , phys .90 * , 167902 ( 2003 ) .f. verstraete , m. a. martn - delgado , and j. i. cirac , phys .* 92 * , 087201 ( 2004 ) .v. giovannetti and r. fazio , phys .a * 71 * , 032314 ( 2005 ) .a. romito , r. fazio and c. bruder , phys .b * 71 * 100501 ( 2005 ) .m. paternostro , m.s .kim , g.m .palma , and g. falci , phys .a * 71 * , 042311 ( 2005 ) . g. de chiara , r. fazio , c. macchiavello , s. montangero , and g. m. palma , physa * 70 * , 062308 ( 2004 ) .g. m. dariano and c. macchiavello , phys .a * 67 * , 042306 ( 2003 ) .a. olaya - castro , n. f. johnson , and l. quiroga , phys .* 94 * , 110502 ( 2005 ) .there is not a unique formula for arbitrary .r. f. werner , phys .a * 58 * , 1827 ( 1998 ) . n.j .cerf , t. durt and n. gisin , j. mod ., * 49 * , 1355 ( 2002 ). h. fan , h. imai , k. matsumoto and x .- b .wang , phys .a * 67 * , 022317 ( 2003 ) f. buscemi , g. m. dariano and c. macchiavello , phys .a * 71 * , 042327 ( 2005 ) .j. du , t. durt , p. zou , l.c .kwek , c.h .lai , c.h .oh and a. ekert , phys .* 94 * , 040505 .v. buek , s.l .braunstein , m. hillery and d. 
bru , phys .a * 56 * , 3446 ( 1997 ) .a. hutton and s. bose , phys .a * 69 * , 042312 ( 2002 ) .j. fiurek , phys .a * 67 * , 052314 ( 2003 ) . c. cohen - tannoudji , j. dupont - rac and g. grynberg , _ atom - photon interactions _, john wiley & sons , new york , ( 1992 ) j. kempe and k.b .whaley , phys .a , * 65 * , 052330 ( 2002 ) .n. schuch and j. siewert , phys .a * 67 * , 032301 ( 2003 ) .j. kempe , d. bacon , d. p. divincenzo and k.b .whaley , in _ quantum information and computation _, r. clark _ et al ._ eds . , rinton press , new jersey , vol.1 , 33 ( 2001 ) .makhlin , g. schn , and a. shnirman , rev .phys . * 73 * , 357 ( 2001 ) .d. v. averin , fortschr . phys . * 48 * , 1055 ( 2000 ) . j. siewert , r. fazio , g.m .palma , and e. sciacca , j. low temp .phys . * 118 * , 795 ( 2000 ) .
|
in this paper we present an approach to quantum cloning with unmodulated spin networks . the cloner is realized by a proper design of the network and a choice of the coupling between the qubits . we show that in the case of phase covariant cloner the coupling gives the best results . in the cloning we find that the value for the fidelity of the optimal cloner is achieved , and values comparable to the optimal ones in the general case can be attained . if a suitable set of network symmetries are satisfied , the output fidelity of the clones does not depend on the specific choice of the graph . we show that spin network cloning is robust against the presence of static imperfections . moreover , in the presence of noise , it outperforms the conventional approach . in this case the fidelity exceeds the corresponding value obtained by quantum gates even for a very small amount of noise . furthermore we show how to use this method to clone qutrits and qudits . by means of the heisenberg coupling it is also possible to implement the universal cloner although in this case the fidelity is off that of the optimal cloner .
|
privacy protection in recommender systems is a notoriously challenging problem .there are often two competing goals at stake : similar users are likely to prefer similar products , movies , or locations , hence sharing of preferences between users is desirable . yet , at the same time , this exacerbates the type of privacy sensitive queries , simply since we are now not looking for aggregate properties from a dataset ( such as a classifier ) but for properties and behavior of other users ` just like ' this specific user .such highly individualized behavioral patterns are shown to facilitate provably effective user de - anonymization .consider the case of a couple , both using the same location recommendation service .since both spouses share much of the same location history , it is likely that they will receive similar recommendations , based on other users preferences similar to theirs . in this context sharing of information is desirable , as it improves overall recommendation quality .moreover , since their location history is likely to be very similar , each of them will also receive recommendations to visit the place that their spouse visited ( e.g. including places of ill repute ) , regardless of whether the latter would like to share this information or not .this creates considerable tension in trying to satisfy those two conflicting goals .differential privacy offers tools to overcome these problems . loosely speaking, it offers the participants _ plausible deniability _ in terms of the estimate .that is , it provides guarantees that the recommendation would also have been issued with sufficiently high probability if another specific participant had not taken this action before .this is precisely the type of guarantee suitable to allay the concerns in the above situation . recent work , e.g. by has focused on designing _ custom built _tools for differential private recommendation .many of the design decisions in this context are hand engineered , and it is nontrivial to separate the choices made to obtain a differentially private system from those made to obtain a system that works well .furthermore , none of these systems lead to very fast implementations . in this paperwe show that a large family of recommender systems , namely those using matrix factorization , are well suited to differential privacy .more specifically , we exploit the fact that sampling from the posterior distribution of a bayesian model , e.g. via stochastic gradient langevin dynamics ( sgld ) , can lead to estimates that are sufficiently differentially private . at the same time, their stochastic nature makes them well amenable to efficient implementation .their generality means that we _ need not custom - design a statistical model for differential privacy _but rather that is possible to _ retrofit an existing model _ to satisfy these constraints .the practical importance of this fact can not be overstated it means that no costly re - engineering of deployed statistical models is needed .instead , one can simply reuse the existing inference algorithm with a trivial modification to obtain a differentially private model .this leaves the issue to performance .some of the best reported results are those using graphchi , which show that state - of - the - art recommender systems can be built using just a single pc within a matter of hours , rather than requiring hundreds of computers . 
in this paper , we show that by efficiently exploiting the power law properties inherent in the data ( e.g.most movies are hardly ever reviewed on netflix ) , one can obtain models that achieve peak numerical performance for recommendation .more to the point , they are 3 times faster than graphchi on identical hardware . in summary, this paper describes the by far the fastest matrix factorization based recommender system and it can be made differentially privately using sgld without losing performance .most competing approaches excel at no more than one of those aspects . specifically , 1 .it is efficient at the state of the art relative to other matrix factorization systems . *we develop a cache efficient matrix factorization framework for general sgd updates . *we develop a fast sgld sampling algorithm with bookkeeping to avoid adding the gaussian noise to the whole parameter space at each updates while still maintaining the correctness of the algorithm .it is differentially private . *we show that sampling from a scaled posterior distribution for matrix factorization system can guarantee user - level differential privacy .* we present a personalized differentially private method for calibrating each user s privacy and accuracy .* we only privately release to public , and design a local recommender system for each user .experiments confirm that the algorithm can be implemented with high efficiency , while offering very favorable privacy - accuracy tradeoff that nearly matches systems without differential privacy at meaningful privacy level .we begin with an overview of the relevant ingredients , namely collaborative filtering using matrix factorization , differential privacy and a primer in computer architecture .all three are relevant to the understanding of our approach .in particular , some basic understanding of the cache hierarchy in microprocessors is useful for efficient implementations . in collaborative filteringwe assume that we have a set of users , rating items .we only observe a small number of entries in the rating matrix . here means that user rated item .a popular tool to deal with inferring entries in is to approximate by a low rank factorization , i.e. for some , which denotes the dimensionality of the feature space corresponding to each item and movie . in other words , ( user , item )interactions are modeled via here and denote row - vectors of and respectively , and and are scalar offsets responsible for a specific user or movie respectively . finally , is a common bias .a popular interpretation is that for a given item , the elements of measure the extent to which the item possesses those attributes . for a given user the elements of measure the extent of interest that the user has in items that score highly in the corresponding factors .due to the conditions proposed in the netflix contest , it is common to aim to minimize the mean squared error of deviations between true ratings and estimates . to address overfitting , a norm penaltyis commonly imposed on and .this yields the following optimization problem a large number of extensions have been proposed for this model .for instance , incorporating co - rating information , neighborhoods , or temporal dynamics can lead to improved performance . since we are primarily interested in demonstrating the efficacy of differential privacy and the interaction with efficient systems design , we focus on the simple inner - product model with bias .* bayesian view . 
* note that the above optimization problem can be viewed as an instance of a maximum - a - posteriori estimation problem .that is , one minimizes where , up to a constant offset and and likewise for . in other words , we assume that the ratings are conditionally normal , given the inner product , and the factors and are drawn from a normal distribution . moreover , one can also introduce priors for with a gamma distribution .while this setting is typically just treated as an afterthought of penalized risk minimization , we will explicitly use this when designing differentially private algorithms .the rationale for this is the deep connection between samples from the posterior and differentially private estimates .we will return to this aspect after introducing stochastic gradient langevin dynamics .* stochastic gradient descent . *minimizing the regularized collaborative filtering objective is typically achieved by one of two strategies : alternating least squares ( als ) and stochastic gradient descent ( sgd ) .the advantage of the former is that the problem is biconvex in and respectively , hence minimizing or are convex . on the other hand ,sgd is typically faster to converge and it also affords much better cache locality properties . instead of accessing e.g. all reviews for a given user ( or all reviews for a given movie ) at once , we only need to read the appropriate tuples . in sgd each time we update a randomly chosen rating record by : one problem of sgd is that trivially parallelizing the procedure requires memory locking and synchronization for each rating , which could significantly hamper the performance . shows that a lock - free scheme can achieve nearly optimal solution when the data access is sparse .we build on this _ statistical _ property to obtain a _ fast system _ which is suitable for differential privacy .differential privacy ( dp ) aims to provide means to cryptographically protect personal information in the database , while allowing aggregate - level information to be accurately extracted . in our contextthis means that we protect user - specific sensitive information while using aggregate information to benefit all users .assume the actions of a statistical database are modeled via a randomized algorithm .let the space of data be and data sets .define to be the edit distance or hamming distance between data set and , for instance if and are the same except one data point then we have .[ def : diffp ] we call a randomized algorithm -differentially private if for all measurable sets and for all such that the hamming distance , if we say that is -differential private .the definition states that if we arbitrarily replace any individual data point in a database , the output of the algorithm does nt change much .the parameter in the definition controls the maximum amount of information gain about an individual person in the database given the output of the algorithm .when is small , it prevents any forms of linkage attack to individual data record ( e.g. , linkage of netflix data to imdb data ) .we refer readers to for detailed interpretations of the differential privacy in statistical testing , bayesian inference and information theory .an interesting side - effect of this definition in the context of collaborative filtering is that it also limits the influence of so - called whales , i.e. of users who submit extremely large numbers of reviews .their influence is also curtailed , at least under the assumption of an equal level of differential privacy per user . 
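as a concrete illustration of the update just described, here is a minimal sgd sketch for the biased inner-product model r ~ mu + b_u + b_i + <p_u, q_i>. the rank, step size, regularizer and initialization are illustrative assumptions, not the tuned values used later in the experiments:

```python
# minimal sgd sketch for the biased matrix-factorization model
#   r_hat(u, i) = mu + b_u + b_i + <p_u, q_i>
# hyperparameters below are illustrative assumptions.
import numpy as np

def sgd_mf(ratings, n_users, n_items, rank=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, rank))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, rank))   # item factors
    bu = np.zeros(n_users)                            # user biases
    bi = np.zeros(n_items)                            # item biases
    mu = np.mean([r for _, _, r in ratings])          # global bias
    idx = np.arange(len(ratings))
    for _ in range(epochs):
        rng.shuffle(idx)
        for k in idx:
            u, i, r = ratings[k]
            err = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            # gradient step on the squared error plus the L2 penalty
            bu[u] += lr * (err - reg * bu[u])
            bi[i] += lr * (err - reg * bi[i])
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu   - reg * Q[i])
    return mu, bu, bi, P, Q

# toy usage: three users, three items
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0), (1, 2, 2.0)]
mu, bu, bi, P, Q = sgd_mf(ratings, n_users=3, n_items=3, rank=4, epochs=50)
print("predicted r(0,2):", mu + bu[0] + bi[2] + P[0] @ Q[2])
```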
in other words ,differential privacy confers robustness for collaborative filtering . show that posterior sampling with bounded log - likelihood is essentially exponential mechanism therefore protecting differential privacy for free ( similar observations were made independently in ) . also suggests a recent line of works that use stochastic gradient descent for hybrid monte carlo sampling essentially preserve differential privacy with the same algorithmic procedure .the consequence for our application is very interesting : if we trust that the mcmc sampler has converged , i.e. if we get a sample that is approximately drawn from the posterior distribution , then we can use one sample as the private release . if not , we can calibrate the mcmc procedure itself to provide differential privacy ( typically at the cost of getting a much poorer solution ) . a key difference between generic numerical linear algebra ,as commonly used e.g. for deep networks or generalized linear models , and the methods used for recommender systems is the fact that the access properties regarding users and items are highly nonuniform .this is a significant advantage , since it allows us to exploit the caching hierarchy of modern cpus to benefit from higher bandwidth than what disks or main memory access would permit ..[tb : benchmark ] performance ( single threaded ) on a macbook pro ( 2011 ) using an intel core i7 operating at 2.0 ghz and 160mt / s transfer rate and 2 memory banks .the spread in l1 and l3 bandwidth is due to different packet sizes . [cols="<,>,>,>",options="header " , ] we show the cache efficiency of c - sgd and graphchi in this section .our data access pattern can accelerate the hardward cache prefetching . in the meanwhile we also use software prefetching strategies to prefetch movie factors in advance . butsoftware prefetching is usually dangerous in practice while implementing in practice because we need to know the prefetching stride in advance .that is when to prefetch those movie factors . in our experiments we set prefetching stride to 2 empirically .we set the experiments as follows . in each gradient updatestep given , once the parameters e.g. and in ( [ eq : sgd ] ) been read they will stay in cache for a while until they be flushed away by new parameters .what we really care about in this section is if the first time each parameter be read by cpu is already staying in cache or not .if it is not in cache then there will be a cache miss and will push cpu to idle . after that the succeeding updates ( the specific updates depend on the algorithms e.g. sgd or sgld ) for and will run on cache level .we use cachegrind as a cache profiler and analyze cache miss for this purpose . the result in table [ tb : cache ]shows that our algorithm is quite cache friendly when compared with graphchi on all dimensions .this is likely due to the way graphchi ingests data : it traverses one data and item block at a time . 
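to make the sampling step concrete, the sketch below shows a generic sgld update (in the style of welling and teh) for the same factorization model. the sqrt(step) gaussian noise and the n/|batch| minibatch reweighting are the standard recipe and are assumptions here; the privacy-specific calibration (rescaled posterior, bounded per-user influence) and the bookkeeping trick mentioned above are omitted:

```python
# generic sgld step for the user/item factors of the factorization model.
# note: this naive version perturbs every row of P and Q at each step,
# whereas the text describes bookkeeping that avoids touching the whole
# parameter space; the privacy calibration is likewise omitted here.
import numpy as np

def sgld_step(P, Q, batch, n_total, step, reg, mu, rng):
    """one sgld update on a minibatch of (user, item, rating) triples."""
    scale = n_total / len(batch)               # reweight the minibatch gradient
    gP = np.zeros_like(P)
    gQ = np.zeros_like(Q)
    for u, i, r in batch:
        err = r - (mu + P[u] @ Q[i])           # unit-variance gaussian likelihood assumed
        gP[u] += scale * err * Q[i]
        gQ[i] += scale * err * P[u]
    gP -= reg * P                               # gaussian prior (L2) contribution
    gQ -= reg * Q
    P += 0.5 * step * gP + np.sqrt(step) * rng.standard_normal(P.shape)
    Q += 0.5 * step * gQ + np.sqrt(step) * rng.standard_normal(Q.shape)
    return P, Q
```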
as a resultit has a less efficient portfolio of access frequency and it needs to fetch data from memory more frequently .we believe this to be both the root cause of decreased computational efficiency and slower convergence in the code .we now investigate the influence of privacy loss on accuracy .as discussed previously , a small rescaling factor can help us to get a nice bound on the loss function .for private collaborative filtering purposes , we first trim the training data by setting each user s maximum allowable number of ratings and for the netflix competition dataset and yahoo music data respectively . we set and weight of each user as where is set to 1 .according to different trimming strength we have and for netflix data and yahoo data respectively .note that a maximum allowable rating from to is quite reasonable , since in practice most users rate quite a bit fewer than movies ( due to the power law nature of the rating distribution ) .moreover , for users who have more than ratings , we actually can get a quite a good approximation of their profiles by only using a reasonable size of random samples of these ratings .as such we get a dataset with 33 m ratings for netflix and 100 m ratings for yahoo music data .we study the prediction accuracy , i.e. the utility of our private method by varying the differential privacy budget for fixed model dimensionality .the parameters of the experiment are set as follows . for netflix data , we set , , , . for yahoo data , we set , and , , .in addition , because we are sampling p we fix regularizer parameters which are estimated by a non - private sgld in this section .while we are sampling jointly , we essentially only need to release .users can then apply their own data to get the full model and have a local recommender system : the local predictions , i.e. in our context the utility of differentially private matrix factorization method , along the different privacy loss are shown in figure [ fig : rmse_vs_privacy ] . more specifically , the model ( [ eq : semiprivate ] ) is a _ two - stage _ procedure which first takes the differentially private _ item vectors _ and then use the latter to obtain locally non - private user parameter estimates .this is perfectly admissible since users have no expectation of privacy with regard to their own ratings .on netflix ( top ) and yahoo ( bottom ) .a modest decrease in accuracy affords a useful gain in privacy .[ fig : rmse_vs_privacy],title="fig : " ] on netflix ( top ) and yahoo ( bottom ) .a modest decrease in accuracy affords a useful gain in privacy .[ fig : rmse_vs_privacy],title="fig : " ] interpreting the privacy guarantees can be subtle . a privacy loss of as in figure [ fig : rmse_vs_privacy ] may seem completely meaningless by definition [ def : diffp ] and the corresponding results in may appear much better .we first address the comparison to .it is important to point out that our privacy loss is stated in terms of user level privacy while the results in are stated in terms of rating level privacy , which offers exponentially weaker protection .-user differential privacy translates into -rating differential privacy .since in our case , our results suggest that we almost lose no accuracy at all while preserving rating differential privacy with .this matches ( and slightly improves ) s carefully engineered system . 
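the two-stage local recommender described above can be sketched as follows: the service releases the differentially private item factors, and each user fits their own profile on ratings that never leave their device. the closed-form ridge solve and the regularization value are illustrative assumptions:

```python
# sketch of the local, user-side stage: given the released item factors Q
# (and a global bias mu), a user fits their own vector on their own ratings
# by ridge regression.  regularization and released quantities are assumptions.
import numpy as np

def fit_local_user(Q, my_ratings, mu=0.0, reg=0.1):
    """my_ratings: list of (item_index, rating) pairs held only by the user."""
    rank = Q.shape[1]
    A = reg * np.eye(rank)
    b = np.zeros(rank)
    for i, r in my_ratings:
        A += np.outer(Q[i], Q[i])
        b += (r - mu) * Q[i]
    return np.linalg.solve(A, b)                # local user factor p_u

def predict_local(p_u, Q, item, mu=0.0):
    return mu + p_u @ Q[item]
```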
on the other hand, we note that the plain privacy loss can be a very deceiving measure of its practical level of protection .definition [ def : diffp ] protects privacy of an arbitrary user , who can be a malicious spammer that rates every movie in a completely opposite fashion as what the learned model would predict .this is a truly paranoid requirement , and arguably not the right one , since we probably should not protect these malicious users to begin with .for an average user , the personalized privacy ( definition [ def : dppersonal ] ) guarantee can be much stronger , as the posterior distribution concentrates around models that predict reasonably well for such users . as a result ,the log - likelihood associated with these users will be bounded by a much smaller number with high probability . in the example shown in figure [ fig : rmse_vs_privacy ] , a typical user s personal privacy loss is about , which helps to reduce the essential privacy loss to a meaningful range .in this paper we described an algorithm for efficient collaborative filtering that is compatible with differential privacy .in particular , we showed that it is possible to accomplish all three goals : accuracy , speed and privacy without any significant sacrifice on either end .moreover , we introduced the notion of _ personalized _ differential privacy .that is , we defined ( and proved ) the notion of obtaining estimates that respect different degrees of privacy , as required by individual users .we believe that this notion is highly relevant in today s information economy where the expectation of privacy may be tempered by , e.g. the cost of the service , the quality of the hardware ( cheap netbooks deployed with windows 8.1 with bing ) , and the extent to which we want to incorporate the opinions of users .our implementation takes advantage of the caching properties of modern microprocessors . by careful latency hidingwe are able to obtain near peak performance .in particular , our implementation is approximately 3 times as fast as graphchi , the next - fastest recommender system . in sum , this is a strong endorsement of stochastic gradient langevin dynamics to obtain differentially private estimates in recommender systems while still preserving good utility .* acknowledgments : * parts of this work were supported by a grant of adobe research .z. liu was supported by creative program of ministry of education ( irt13035 ) ; foundation for innovative research groups of nnsf of china ( 61221063 ) ; nsf of china ( 91118005 , 91218301 ) ; pillar program of nst ( 2012bah16f02 ) .wang was supported by nsf award bcs-0941518 to cmu statistics and singapore national research foundation under its international research centre @ singapore funding initiative and administered by the idm programme office .the -dp claim follows by choosing the utility function to be the and apply the exponential mechanism which protects -dp by output with probability proportional to where he sensitivity of function be defined as all we need to do is to work out the sensitivity for here . 
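for completeness, the exponential mechanism and the sensitivity referred to in the proof sketch above are conventionally defined as follows (standard definitions, restated because the inline formulas were lost in extraction):

```latex
% exponential mechanism: release r with probability proportional to
% exp( epsilon * q(D, r) / (2 * Delta q) ), where Delta q is the sensitivity of q.
\begin{align}
  \Pr\big[\mathcal{M}(D) = r\big] &\;\propto\; \exp\!\left(\frac{\epsilon\, q(D,r)}{2\,\Delta q}\right), &
  \Delta q &= \max_{r}\; \max_{d_H(D,D') = 1} \big| q(D,r) - q(D',r) \big| .
\end{align}
```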
by the constraint in and , we know .since one user contributes only one row to the data the trimming / reweighting procedure ensures that for any and any user , the sensitivity of obeys as specified in the algorithm .the claim is simple ( given in proposition 3 of ) and we omit here .lastly , we note that the `` retry if fail '' procedure will always sample from the the correct distribution of conditioned on satisfying our constraint that is bounded , and it does not affect the relative probability ratio of any measurable event in the support of this conditional distribution . for generality , we assume the parameter vector is and all regularizers is capture in prior . the posterior distribution .for any , if we add ( removing has the same proof ) a particular user whose log - likelihood is uniformly bounded by .the probability ratio can be factorized into in algorithm [ alg : dpmf ] , denote .we are sampling from a distribution proportional to .this is equivalent to taking the above posterior to have the log - likelihood of user bounded by , therefore the algorithm obeys personalized differential privacy for user .take to be any customized subset of adjustied using we get the expression as claimed .s. ahn , a. korattikara , and m. welling .bayesian posterior sampling via stochastic gradient fisher scoring . in_ proceedings of the 29th international conference on machine learning ( icml-12 ) _ , pages 15911598 , 2012 .b. mobasher , r. burke , r. bhaumik , and c. williams . toward trustworthy recommender systems : an analysis of attack models and algorithm robustness ._ acm transactions on internet technology ( toit ) _ , 70 ( 4):0 23 , 2007 . i. sato and h. nakagawa . approximation analysis of stochastic gradient langevin dynamics by using fokker - planck equation and ito process . in _ proceedings of the 31st international conference on machine learning ( icml-14 ) _ , pages 982990 , 2014 .
|
differentially private collaborative filtering is a challenging task , both in terms of accuracy and speed . we present a simple algorithm that is provably differentially private , while offering good performance , using a novel connection of differential privacy to bayesian posterior sampling via stochastic gradient langevin dynamics . due to its simplicity the algorithm lends itself to efficient implementation . by careful systems design and by exploiting the power law behavior of the data to maximize cpu cache bandwidth we are able to generate 1024 dimensional models at a rate of 8.5 million recommendations per second on a single pc .
|
the understanding that a quantum system carries information encoded in its quantum state and , thereby , could be used to accomplish information processing tasks gave rise to the fields of quantum information and computation . in order to accomplish such tasks , in the final stepone has to read out the previously processed information , which corresponds to determine the final state of the system by measuring it .however , as the quantum state is not itself an observable , it is not possible to determine it through a single shot measurement , unless it belongs to a _ known _ set of states which are mutually orthogonal .when this is not the case , i.e. , the possible final states are not orthogonal , they can not be , deterministically , discriminated with certainty and without error even if they belong to a known set .this has led to the development of the area known as quantum - state discrimination ( qsd ) , where a measurement strategy is devised in order to discriminate _ optimally _ , according to some figure of merit , among nonorthogonal states . despite that ,originally , the qsd problem has been introduced in the context of quantum detection ( or decision ) theory long before the birth of quantum information and computation , it quickly became a fundamental tool for these fields .for instance , there is an intimate connection among qsd and probabilistic protocols , like entanglement concentration , cloning , and some quantum algorithms . also , there is a connection among qsd and probabilistic realizations of quantum communication protocols like teleportation , entanglement swapping , and superdense coding .finally , the use of nonorthogonal states , and , consequently , the impossibility of perfectly discriminating among them , underlies the security in some quantum key distribution protocols .the problem addressed in qsd can be briefly posed as follows .a quantum system is prepared in one of possible states in the set , with associated _ a priori _ probabilities ( ) .both the set of states and the prior probabilities are _ known _ in advance . as the statesare nonorthogonal , one has to design a measurement strategy which determine optimally which one was actually prepared .the optimality criteria are related with some figure of merit , mathematically formulated , and each figure corresponds to a different strategy .the oldest and , perhaps , simplest criterion comprises a measurement that minimizes the probability of making an error in identifying the state .for this so - called minimum error strategy ( me ) , the necessary and sufficient conditions that must be satisfied by the operators describing the optimized measurement are well known . nevertheless , only for a few special cases the explicit form of such measurementshave been found .a second strategy , first proposed by ivanovic , allows one to identify each state in the set without error but with the possibility of obtaining an inconclusive result .this strategy , called unambiguous discrimination ( ud ) , is optimized by a measurement that minimizes the probability of inconclusive results . restricting to pure states ,the optimal ud problem was completely solved for the case of two states , while for more than two states only few analytical solutions have been derived . 
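as a brief aside, and only as a concrete reference point for the two strategies just mentioned, the two-state case admits textbook closed forms that are not specific to this work: the helstrom bound for minimum-error discrimination and the overlap-limited failure probability of optimal unambiguous discrimination. the sketch below evaluates both for a hypothetical pair of equiprobable pure states.

```python
import numpy as np

def me_error_two_states(psi0, psi1, p0=0.5):
    """Helstrom minimum-error probability for two pure states with priors p0, 1-p0."""
    p1 = 1.0 - p0
    overlap_sq = abs(np.vdot(psi0, psi1)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p0 * p1 * overlap_sq))

def ud_failure_two_states(psi0, psi1, p0=0.5):
    """Minimum inconclusive probability for optimal unambiguous discrimination of
    two pure states, 2*sqrt(p0*p1)*|<psi0|psi1>|, in the regime where this
    standard solution applies (it always does for equal priors)."""
    p1 = 1.0 - p0
    return 2.0 * np.sqrt(p0 * p1) * abs(np.vdot(psi0, psi1))

theta = np.pi / 8                       # hypothetical pair of nonorthogonal qubit states
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
print("ME error prob  :", me_error_two_states(psi0, psi1))
print("UD failure prob:", ud_failure_two_states(psi0, psi1))
```

for more than two states, as the text goes on to explain, such closed forms are the exception rather than the rule.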
in this latter case, chefles showed that ud is applicable only to linearly independent sets .there exist many other measurement strategies for qsd that optimize differently formulated figures of merit .a discussion about them is beyond the scope of the present work . here, we will focus on the recently introduced maximum - confidence ( mc ) strategy , which is an optimized measurement whose outcome leads us to identify a given state in the set with the maximum possible confidence .the mc measurement can be applied to both linearly independent and linearly dependent states and , unlike the previously discussed me and ud , it allows a closed form solution for the operators describing the optimized measurement for an arbitrary set of states .in fact , mc encompasses both ud and me strategies : for linearly independent states it can reduce to ud , where our confidence in identifying the states becomes unity . on the other hand , when the maximum confidence is the same for all states and there is no inconclusive result , mc and me coincide .in the original proposal of mc measurement , croke _ et al . _ have applied it , as an example , to a set of three equiprobable symmetric pure states of individual two - dimensional quantum systems , i.e. , _qubits_. later , this case was experimentally demonstrated using qubits encoded into single - photon polarization . in the presentwork our goal is to extend this study to individual -dimensional quantum systems ( with ) , i.e. , _ qudits_. the nonorthogonal symmetric states of qudits are known to play an important role in qsd and many other quantum information protocols .motivated by this fact , we study the mc strategy applied to a set of linearly dependent symmetric qudit states , prepared with equal prior probabilities .usually , in this problem , an inconclusive outcome is inevitable , and from the conditions established in ref . we find the optimal positive operator valued measure ( povm ) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results .the physical realization of this povm is completely determined and we show that after an inconclusive outcome , the input states may be mapped into a new set of equiprobable symmetric states , restricted , however , to a subspace of the original -dimensional hilbert space . therefore , by applying the mc measurement again onto this subspace , we can still gain some information about the input states , although with less confidence than before .as we will discuss , this process may be iterated in as many stages as allowed by the input set , until no additional information can be extracted from an inconclusive result .we shall establish the conditions in which this _ sequential maximum - confidence measurement _ applies and show that at each stage , our confidence in identifying the input states is higher than or equal to the one achieved by the optimal me measurement applied in that stage. additionally , the more stages we accomplish ( within the maximum allowed ) , the higher will be the probability that this identification was correct .this type of optimized measurement proposed here does not apply for qubits since that after an inconclusive outcome the input states are projected onto a one - dimensional subspace .the optimal sequential mc measurement will be illustrated with an explicit example in the simplest possible case where it applies , which is the discrimination among four symmetric qutrit states . 
in this case , where there is the possibility of performing a two - stage sequential measurement , we propose an experimental procedure which could implement it .our scheme is based on single photons and linear optics .the symmetric states are encoded into the propagation modes of a single photon , and , at each stage , the optimized measurements are carried out by using the polarization degree of freedom as an ancillary system and a multiport optical interferometer with photodetectors at each of its outputs .this scheme is feasible with the current technology .the remainder of the article is organized as follows : in sec .[ sec : mcm ] we review the basic aspects of mc measurements for discriminating nonorthogonal pure states . in sec .[ sec : symm ] this strategy is applied to the problem of discrimination among symmetric qudit states . in sec .[ sec : smc ] we specify its physical implementation and introduce the concept of sequential maximum - confidence measurements . in sec .[ sec : application ] we exemplify the sequential mc measurement by applying it to four qutrit states .in addition , we propose an optical network which could experimentally implement it .finally , a summary of our results and a brief discussion of their potential applications are given in sec .[ sec : conc ] .often , the optimized measurement strategies in the problem of qsd can be treated , mathematically , within the formalism of povms .a povm is a set of operators which , in order to form a physically realizable measurement , must satisfy the conditions where is the identity operator on the hilbert space of the system . each povm element corresponds to a measurement outcome and the probability of obtaining this outcome by measuring a quantum system in the state given by . in the mc measurement for qsd , the figure of merit to be optimized is the probability that the prepared state was , given that the outcome of a measurement was .this conditional probability , , is interpreted as our confidence in taking the outcome to indicate the state , and an optimal measurement should maximize it . using bayes rule and the above observations about measurement , this quantity can be written as in this expression , is the known preparation probability for the state ; , where is the povm element associated with the outcome and is the total probability of occurrence for the outcome , where is the _ a priori _ density operator for the system . as mentioned in the introduction , the mc measurement has a closed form solution for any set of states .in particular , for a set of pure states , the povm element that maximizes the conditional probability in eq .( [ eq : confidence ] ) is with the weighting factor given by the corresponding maximum confidence that the outcome identifies the state becomes {\rm max } = p_{j}{\rm tr}\left(\hat\rho_{j}\hat\rho^{-1}\right).\ ] ] from eq .( [ eq : confidence ] ) one can see that the weighting factor of the optimal povm element ( [ eq : optimal_povm ] ) has no effect on the maximum confidence with which we can identify the state .therefore , we can construct each optimal , independently , up to an arbitrary multiplicative factor , but taking into account the first two constraints of eq .( [ eq : povm_conditions ] ) . in some occasions , there is no choice of factors that enables the set of operators to fulfill the completeness condition in eq .( [ eq : povm_conditions ] ) . 
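the closed-form solution just quoted — optimal elements proportional to rho^{-1}|psi_j><psi_j|rho^{-1} and maximum confidence p_j tr(rho_j rho^{-1}) — can be evaluated directly with a few lines of numpy. the sketch below assumes rho is invertible and uses three symmetric real qubit states with equal priors as a familiar test case; the weighting factors are left arbitrary, exactly as discussed above.

```python
import numpy as np

def max_confidence(states, priors):
    """Maximum confidence P(rho_j|omega_j)_max = p_j * Tr(rho_j rho^{-1}) for pure
    states |psi_j> with priors p_j (rho assumed full rank)."""
    rho = sum(p * np.outer(s, s.conj()) for p, s in zip(priors, states))
    rho_inv = np.linalg.inv(rho)
    return [float((p * s.conj() @ rho_inv @ s).real) for p, s in zip(priors, states)]

def mc_povm_elements(states, priors, weights=None):
    """Unnormalized optimal elements pi_j ~ rho^{-1}|psi_j><psi_j|rho^{-1};
    the multiplicative weights do not affect the confidence."""
    rho = sum(p * np.outer(s, s.conj()) for p, s in zip(priors, states))
    rho_inv = np.linalg.inv(rho)
    weights = weights if weights is not None else [1.0] * len(states)
    return [a * rho_inv @ np.outer(s, s.conj()) @ rho_inv
            for a, s in zip(weights, states)]

# three symmetric real qubit states, equal priors: confidence d/N = 2/3 for each
trine = [np.array([np.cos(2*np.pi*k/3), np.sin(2*np.pi*k/3)]) for k in range(3)]
print(max_confidence(trine, [1/3] * 3))      # ~ [0.667, 0.667, 0.667]
```

as noted above, the arbitrary weights leave the confidence untouched; whether they can also be chosen to satisfy completeness is a separate question, which is what makes the inconclusive outcome discussed next necessary.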
in these cases ,an inconclusive result must be added , with the corresponding povm element subjected to the constraint .an additional criterion of optimality is to choose a set which minimizes the probability of inconclusive result .in this section we apply the mc strategy to discriminate among nonorthogonal symmetric pure states of a _ single - qudit _ system . before doing so ,let us motivate the problem by defining the symmetric states and their importance in the context of qsd and practical applications of quantum information .a set of pure states spanning a -dimensional hilbert space , , is called symmetric if there exists a unitary transformation on such that [ eq : def_sym ] if ( ) is an orthonormal basis in which is diagonal , then from its unitarity and the condition ( [ eq : def_sym_c ] ) , this operator can be written as expanding in the basis and using eqs .( [ eq : def_sym ] ) and ( [ eq : unit ] ) , the symmetric states will be given by and , without loss of generality , we will assume that all of the are nonzero .these states play a very important role in the development of qsd . in general , for a given discrimination strategy , finding the optimal povm for an arbitrary set of states is a highly nontrivial task .however , the equiprobable symmetric states provide analytical solutions for many of those strategies , as , for instance , me and ud .in addition the problem of discriminating among symmetric states of qudits naturally arises in some quantum information protocols like entanglement concentration and quantum teleportation and entanglement swapping via nonmaximally entangled channels .let us now apply the mc strategy to a set of symmetric qudit states , prepared with equal _ a priori _ probabilities .here we consider only the case of linearly dependent states ( ) since for linearly independent ones the problem reduces to ud and has already been solved . using eq .( [ eq : symm_state ] ) , the _ a priori _ density operator for this set will be written as as we assumed that for all , we have , and the maximum confidence calculated from eq .( [ eq : maxconf_prob ] ) will be {\rm max } = \frac{d}{n } \;\;\;\;\;\forall\;j.\ ] ] therefore , our confidence that the symmetric state was indeed present when the outcome is obtained will be for each state in the set .the corresponding povm element that maximizes the confidence is calculated from eq .( [ eq : optimal_povm ] ) to be where are non - normalized states of the form latexmath:[\[\label{eq : reciprocal } states are also linearly dependent and symmetric with respect to the transformation given by eq .( [ eq : unit ] ) .it can be shown that the set of operators ( [ eq : povm_elem ] ) will form a povm only if and for all .this povm is the square - root measurement which is the optimal me measurement .therefore , in this particular case , the inconclusive outcome will not be necessary and mc and me strategies coincide .on the other hand , when the magnitude of the state coefficients are not all the same , those operators will not form a povm for any choice of , and we must include an inconclusive outcome with the povm element given by . for the specific problem under study , this operator can be simplified by noting that the factors , given by eq .( [ eq : a_j ] ) , are proportional to the total probability of occurrence for the outcome . 
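the displayed equation for the symmetric states did not survive extraction; the sketch below uses the standard form |psi_j> = sum_m c_m e^{2 pi i j m / N}|m> (an assumption, but the usual one for states generated by the diagonal unitary introduced above) and checks numerically that, for equal priors and all c_m nonzero, the maximum confidence of eq. ( [ eq : maxconf_prob_symm ] ) equals d/n for every state.

```python
import numpy as np

def symmetric_states(c, N):
    """|psi_j> = sum_m c_m e^{2*pi*i*j*m/N}|m>, j = 0..N-1 -- the standard form of
    N symmetric states generated by a diagonal unitary with phases e^{2*pi*i*m/N}
    (assumed here, since the displayed equation is garbled in the extracted text)."""
    d = len(c)
    return [np.array([c[m] * np.exp(2j * np.pi * j * m / N) for m in range(d)])
            for j in range(N)]

def mc_confidences(states, priors):
    rho = sum(p * np.outer(s, s.conj()) for p, s in zip(priors, states))
    rho_inv = np.linalg.inv(rho)
    return [float((p * s.conj() @ rho_inv @ s).real) for p, s in zip(priors, states)]

d, N = 3, 4
c = np.array([0.7, 0.5, np.sqrt(1 - 0.7**2 - 0.5**2)])   # any normalized, nonzero c_m
print(mc_confidences(symmetric_states(c, N), [1.0 / N] * N))   # -> [0.75, 0.75, 0.75, 0.75]
```

for flat coefficients the elements can be rescaled into a complete povm (the square-root measurement mentioned above); for unequal coefficients they cannot, which is where the inconclusive element enters.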
as the input states ( [ eq : symm_state ] )are symmetric and equally likely , and the measurement states ( [ eq : reciprocal ] ) are also symmetric , this total probability should be the same for each outcome .thus , for some positive constant , we will have for all and the inconclusive povm element will be written as the constraint imposes that for all , and in order to optimize the process we must choose the value of which minimizes the probability of obtaining an inconclusive result . using eqs .( [ eq : rho_prior ] ) and ( [ eq : povm _ ? ] ) , this probability is calculated to be .its minimum value , subject to , will be achieved when , where , and is given by {\rm min } = 1-dc_{\rm min}^2.\ ] ] the corresponding povm element ( [ eq : povm _ ? ] ) becomes therefore , we have determined the maximum possible confidence ( [ eq : maxconf_prob_symm ] ) of identifying each state from a linearly dependent set of equiprobable symmetric states and the minimum probability ( [ eq : min _ ? ] ) of obtaining an inconclusive outcome in the process .the corresponding povm that optimizes this measurement , , is given by eqs .( [ eq : povm_elem ] ) and ( [ eq : povm_?_opt ] ) . for the case of three symmetric qubit states , the above results reproduce those obtained by croke __ in ref . , as it should be .we can draw a comparison between mc and me strategies , regarding the confidence achieved in each one for identifying a state as the result of a measurement . for a set of equiprobable symmetric states ( [ eq : symm_state ] )the optimal me measurement is given by where ) ] that the input state was indeed when the outcome of this measurement is can be calculated with the help of eqs .( [ eq : symm_state ] ) and ( [ eq : rho_prior ] ) , and will be given by {\rm max}^{\rm me } = \frac{1}{n}\left(\sum_{m=0}^{d-1}|c_m|\right)^2 \;\;\;\;\;\forall\;j.\ ] ] employing the lagrange multiplier method , it is possible to show that and , hence , {\rm max}^{\rm me } % = \frac{1}{n}\left(\sum_{m=0}^{d-1}|c_m|\right)^2 \leq\frac{d}{n } .\ ] ] therefore , when the mc measurement is not inconclusive , the confidence that it provides [ eq . ( [ eq : maxconf_prob_symm ] ) ] will always be higher than that achieved in the me measurement .the only exception occurs when for all . in this case , mc and me strategies coincide , as we discussed before , and the equality in eq .( [ eq : mc_in_me ] ) holds . in the next section, we will see that the optimized me measurement ( [ eq : povm_me ] ) also applies in one step of the optimized mc measurement for discriminating equiprobable symmetric qudit states .in this section we describe , abstractly , the implementation of the mc measurements for discriminating symmetric qudit states discussed previously .we begin by noting that , as pointed out by croke _ , the mc strategy can be thought of as a two - step process . in the first step ,a two - outcome measurement is performed where one outcome is associated with the success ( ) and the other with the failure ( ) of the process . if is obtained , the result is interpreted as successful in the sense that the input states undergo a transformation which enables their identification with maximum confidence by a proper measurement implemented in the _ second _ step . 
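before turning to the implementation just outlined, the two closed-form quantities of this section — the minimum inconclusive probability 1 - d c_min^2 of eq. ( [ eq : min _ ? ] ) and the me confidence (sum_m |c_m|)^2 / n of the square-root measurement, to be compared with the mc value d/n — are easy to check numerically. the coefficient sets below are hypothetical.

```python
import numpy as np

def mc_me_summary(c, N):
    """For N equiprobable symmetric states with coefficients c_m (d = len(c)):
    returns the MC confidence d/N, the ME (square-root measurement) confidence
    (sum_m |c_m|)^2 / N, and the minimum inconclusive probability 1 - d*|c_min|^2."""
    c = np.abs(np.asarray(c, dtype=complex))
    d = len(c)
    return d / N, float(c.sum() ** 2 / N), float(1.0 - d * c.min() ** 2)

for c in ([1 / np.sqrt(3)] * 3,                       # flat coefficients: MC = ME, P(?) = 0
          [0.8, 0.5, np.sqrt(1 - 0.8**2 - 0.5**2)]):  # unequal: MC > ME, P(?) > 0
    mc, me, p_fail = mc_me_summary(c, N=4)
    print(f"MC = {mc:.3f}   ME = {me:.3f}   P(?)_min = {p_fail:.3f}")
```

the two-outcome povm that begins the implementation described above is what turns these numbers into an actual measurement; its sequential iteration is illustrated further below.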
on the other hand ,if is obtained , the result is interpreted as failure ( or inconclusive ) in the sense that the transformed input states can not be identified ( at all or with maximum confidence , as we will see later ) by any further measurement .therefore , the outcome occurs with probability and is associated with the povm element .accordingly , the outcome occurs with probability and is associated with the povm element .the whole description of the discrimination process above can be made clearer by resorting to the effect ( or detection ) operators , and .when a measurement associated with the povm is performed on the state and the result is , the postmeasurement state changes as .therefore , the pair of operators and transform the initial state according to the outcome of a measurement . in terms of these operators ,any given povm element can be written as , and the knowledge of allows us to determine the effect operators , up to an arbitrary unitary transformation , through the relation . for our particular problem , in the first step of the process, we have to implement a two - outcome povm given by .the effect operators and associated with the outcomes and , respectively , are obtained from the operator [ eq . ([ eq : povm_?_opt ] ) ] and are given by thanks to our freedom in designing the unitary transformation , a convenient choice for our purposes will be this operator simply removes the relative phases of the postmeasurement states which are associated with the input - state coefficients ( [ eq : symm_state ] ) .as we will see , this simplifies the discrimination measurement to be implemented in the second step .the physical implementation of a povm requires the extension of the hilbert space of the system to be measured .this can be provided either by an ancillary quantum system ( _ ancilla _ ) or by adding unused extra dimensions of the original system ( if they exist ) .the first method is called tensor product extension ( tpe ) and the second direct sum extension ( dse ) . in either case , the povm is implemented through a unitary operation acting on the extended space followed by a projective measurement on the ancilla system ( tpe ) or the entire extended space ( dse ) .this procedure is based on neumark s theorem . to implement the two - outcome povm required in the first step of the mc measurement, we will consider the tpe method .therefore , we introduce a two - dimensional ancillary system whose hilbert space is spanned by the logical basis . in terms of the effect operators ( [ eq : det_op_succ ] ) and ( [ eq : det_op _ ? ] ) , the unitary transformation that couples the original -dimensional system and the ancilla can be written as where is the identity and is the pauli operator , both acting on the ancilla space . without loss of generality ,let us assume that the qudit [ in the state of eq .( [ eq : symm_state ] ) ] and the ancilla are , initially , independent and the latter is prepared in the state .thus , the initial state of the composite system will be . applying the unitary transformation of eq .( [ eq : u ] ) onto this state and using eqs .( [ eq : symm_state ] ) , ( [ eq : min _ ? ] ) , and ( [ eq : det_op_succ])([eq : w_unit ] ) we obtain & = & \sqrt{1-p(?)}|u_j\rangle|0\rangle_{\rm a}+\sqrt{p(?)}|\xi_j\rangle|1\rangle_{\rm a } , \nonumber\\ \label{eq : u2}\end{aligned}\ ] ] where is the ( minimum ) probability of obtaining an inconclusive result given by eq .( [ eq : min _ ? 
] ) .the qudit states and , associated with the transformation of the initial state by the effect operators and , respectively , are given by with .both set of states and are normalized and nonorthogonal . after unitary interaction ( [ eq : u2 ] ) , the povm is accomplished by measuring the ancilla in the logical basis .if it succeeds ( fails ) , i.e. , if the outcome ( ) is obtained , the ancilla and the original system are projected onto and ( and ) , respectively . from eq .( [ eq : min _ ? ] ) this happens with a success ( failure ) probability ( ) , which is the optimal one .next , we will discuss how the mc measurement proceeds after obtaining the outcome or in the realization of the povm .when the outcome is obtained , the input states [ eq . ( [ eq : symm_state ] ) ] are mapped into [ eq . ( [ eq : u_states ] ) ] .this occurs with the same probability ] is given by eq .( [ eq : mc_in_me_general ] ) . therefore , while the mc measurement gives higher confidences in identifying the input states , the optimal me measurement gives , on the average , a larger number of correct identifications .which strategy is more appropriate to apply will depend on the practical situation .the simplest case where the optimal smc measurement has the possibility of being carried out is that for the discrimination among four symmetric qutrit states . in this sectionwe apply the results obtained previously to analyze this particular problem and , additionally , we propose an optical network which could realize this measurement . from the definition given by eq .( [ eq : symm_state ] ) , a set of symmetric states in a three - dimensional hilbert space ( qutrit ) can be written as , and .first , one implements the optimal two - outcome povm as described in sec .[ subsec:2_oc_povm ] , with and given by eqs .( [ eq : det_op_succ ] ) and ( [ eq : det_op _ ? ] ) , respectively . in case of success ,the states are projected onto a four - outcome povm [ see eq .( [ eq : povm_succ ] ) ] then is implemented with these states , and , for each possible outcome , we have the same maximum confidence that the input state was . from eq .( [ eq : maxconf_prob_symm ] ) , it will be given by {\rm max}=\frac{3}{4}.\ ] ] in case of failure , the states are projected onto for all .we note that at least one of the coefficients vanishes . from eq .( [ eq : min _ ? ] ) , the minimum failure probability will be ^ 2.\ ] ] a second stage of mc measurement will be applicable only if the multiplicity of the input - state coefficients with minimum magnitude is one .when this is the case , the failure states in eq .( [ eq : failure_qutrit ] ) will form a set of four nonorthogonal symmetric qubit states . once again , the two - step mc measurement is applied to this new set and , if it succeeds , we can identify the input states with the confidence {\rm max}=\frac{1}{2}.\ ] ] otherwise , if it fails , with the minimum probability ^ 2,\ ] ] there is no chance of gaining more information about the input states , since the failure states will be projected onto a one - dimensional subspace . from eqs .( [ eq : total_prob ] ) and ( [ eq : prob_???_total ] ) , the total probability of correctly identifying the input states , and the probability of obtaining no information at all about them will be given , respectively , by \frac{3}{4}}^{1^{\rm st}\;\rm stage}\ + \\overbrace{p(?)[1-p'(?)]\frac{1}{2}}^{2^{\rm nd}\;\rm stage},\ ] ] with and given by eqs .( [ eq : prob_?_qutrit ] ) and ( [ eq : prob_?_qutrit_2 ] ) , respectively . 
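the two-stage bookkeeping just described for four qutrit states (confidence 3/4 in the first stage, 1/2 in the second, with the total success probability splitting into a first- and a second-stage term) generalizes to any number of stages. the sketch below iterates it under the assumption — consistent with the expressions above, though the displayed equations are garbled in the extracted text — that after an inconclusive outcome the surviving coefficients are proportional to sqrt(|c_m|^2 - c_min^2); the coefficient values are hypothetical.

```python
import numpy as np

def smc_stages(c, N):
    """Iterate the sequential MC measurement for N equiprobable symmetric states
    with coefficients c.  Returns a list of per-stage (confidence, P_fail)."""
    c = np.abs(np.asarray(c, dtype=float))
    stages = []
    while np.count_nonzero(c > 1e-12) >= 2:
        d = int(np.count_nonzero(c > 1e-12))
        cmin = c[c > 1e-12].min()
        conf, p_fail = d / N, 1.0 - d * cmin**2
        stages.append((conf, p_fail))
        if p_fail <= 1e-12:                   # flat coefficients: nothing left to retry
            break
        c = np.sqrt(np.clip(c**2 - cmin**2, 0.0, None))   # assumed failure-state map
        c /= np.linalg.norm(c)
    return stages

c = [0.8, 0.5, np.sqrt(1 - 0.8**2 - 0.5**2)]   # four symmetric qutrit states (N = 4)
stages = smc_stages(c, N=4)
p_corr, p_reach = 0.0, 1.0
for conf, p_fail in stages:                    # accumulate the staged success probability
    p_corr += p_reach * (1.0 - p_fail) * conf
    p_reach *= p_fail
print(stages)                                  # [(0.75, ...), (0.5, ...)]
print("P_corr =", p_corr, "  P(no info) =", p_reach)
```

with three distinct coefficient magnitudes the loop produces exactly two stages, matching the structure of the total-probability expression quoted above.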
for completeness , one can also compute the probability of making an erroneous identification , which is .let us now analyze graphically the above results .first , we compare the confidence achieved in the identification of the input state as the result of a measurement for me and mc strategies .the former is calculated from eq .( [ eq : mc_in_me_general ] ) and the latter is given by eqs .( [ eq : mc_3_4 ] ) ( first stage ) and ( [ eq : mc_1_2 ] ) ( second stage ) . figure [ fig : mc_and_me](a ) shows the confidence of the optimal me measurement applied on the input states in eq .( [ eq : symm_qutrits ] ) as a function of the magnitude of their coefficients and also the confidences of the mc measurement applied in each stage . as discussed before, the mc measurement in the first stage always gives a greater confidence than that found for me ( except when the magnitude of the state coefficients are all equal ) .interestingly , although our confidence in the second stage becomes smaller , it is still larger than that of me for many possible sets of input states , as can be seen in fig .[ fig : mc_and_me](a ) . figure [ fig : mc_and_me](b ) shows the confidence in the second stage of the smc measurement compared with the optimal me , if the latter had been implemented in that stage .these probabilities were plotted as a function of the magnitude of either nonvanishing failure - state coefficient in eq .( [ eq : failure_qutrit ] ) .as expected , the mc measurement gives us greater confidence than the me measurement , except for . ) .( b ) comparison of the maximum - confidence figure of merit for the optimal mc strategy applied in the second stage and the optimal me strategy , if the latter had been applied in the second stage of the smc measurement .these probabilities are plotted as a function of the magnitude of either nonvanishing failure state coefficient ( [ eq : failure_qutrit ] ) ., title="fig:",scaledwidth=44.5% ] ) .( b ) comparison of the maximum - confidence figure of merit for the optimal mc strategy applied in the second stage and the optimal me strategy , if the latter had been applied in the second stage of the smc measurement .these probabilities are plotted as a function of the magnitude of either nonvanishing failure state coefficient ( [ eq : failure_qutrit ] ) ., title="fig:",scaledwidth=42.5% ] in the graphics of figs .[ fig : prob_corr](a)[fig : prob_corr](c ) we plot the probabilities of correctly identifying the input states , as a function of the magnitude of the input - state coefficients [ eq . ( [ eq : symm_qutrits ] ) ] , achieved in the smc measurement and in the optimal me measurement .first , we consider the mc measurement applied only in the first stage , which means that after an inconclusive outcome we make no further measurement .using eq .( [ eq : prob_?_qutrit ] ) and the first term of the right - rand side of eq .( [ eq : prob_smc_qutrit ] ) we obtain the plot shown in fig .[ fig : prob_corr](a ) .now , taking into account the two stages , the probability in eq .( [ eq : prob_smc_qutrit ] ) is plotted in fig .[ fig : prob_corr](b ) . by comparing the graphics of figs .[ fig : prob_corr](a ) and [ fig : prob_corr](b ) , it can be clearly seen that the addition of a second stage significantly increases the chances of gaining information about the input states . for comparison purposes , in fig .[ fig : prob_corr](c ) we plot together with , which is given by eq .( [ eq : mc_in_me_general ] ) with . 
as discussed earlier , the optimal me measurement is , in general , better when we consider this figure of merit for the discrimination protocol , and the graphics of fig .[ fig : prob_corr](c ) corroborate the inequality shown in eq .( [ eq : smc_me_probs ] ) . finally , using eqs .( [ eq : prob_?_qutrit ] ) , ( [ eq : prob_?_qutrit_2 ] ) , and ( [ eq : prob_???_qutrit ] ) , in the graphic of fig .[ fig : prob_corr](d ) we plot the probability of gaining no information about the input states after a two - stage smc measurement .we now propose an experimental procedure to implement a two - stage smc measurement for the discrimination among four equiprobable symmetric qutrit states , discussed above .our scheme is based on a recent proposal by jimnez _ for the experimental realization of ud and me discrimination among linearly symmetric qudit states .it makes use of single photons to encode the input states , and linear optical elements and photodetectors to carry out the proper transformations and measurements .the sketch of our proposed optical network is shown in fig .[ fig : setup ] .the dashed boxes , numbered from i to viii , indicate each step of the protocol from state preparation to measurement , while the dark and light gray shaded regions indicate the first and second stages of the smc measurement , respectively .the qutrit states will be encoded in the propagation modes of a single photon , which could be generated , for example , using either heralding photon - pair sources or on - demand single - photon sources . in this scheme ,the orientations of the half - wave plates ( hwps ) are specified by the angle the fast axis makes with the horizontal , and all the polarizing beam splitters ( pbss ) transmit vertical polarization ( ) and reflect horizontally polarized light ( ) ; the nonpolarizing beam splitters ( bss ) are 50:50 , and the phase - shifters ( pss ) are adjusted to add the proper phases in the preparation and measurement of the qutrit states . finally , the detectors at each output are single - photon counting modules ( spcm ) .( color online ) proposed optical network that implements the optimal two - stage smc measurement for the discrimination among four symmetric qutrit states .each dashed box from i to viii corresponds to a step in the process from state preparation to measurement which is discussed in detail in the text .the dark ( light ) gray shaded region represents the first ( second ) stage of the smc measurement .the inset shows the symbols for the single photon counting modules ( spcm ) as well as the optical elements used in the scheme .abbreviations : hwp , half - wave plate ; pbs , polarizing beam splitter ; ps , phase - shifter ; bs , 50:50 beam splitter ; mirror . ,scaledwidth=48.0% ] box i in fig .[ fig : setup ] illustrates the preparation of the input states ( [ eq : symm_qutrits ] ) . here , the polarization will be used to assist this preparation .the three half - wave plates ( hwps ) are oriented at , ) , and , respectively .let us assume that the photon is , initially , horizontally polarized .its quantum state after box i will be , where is given by eq .( [ eq : symm_qutrits ] ) , with , , and . for simplicity , and without loss of generality , we set and .hence , the real - valued input - state coefficients will be ordered as , which means that . having prepared the input state we now proceed to perform the first stage of the smc measurement in order to identify this state with maximum confidence . 
as discussed in sec .[ sec : smc ] , a mc measurement is a two - step process .the first consists of implementing a two - outcome povm , which in our scheme is accomplished within the boxes ii and iii ( fig .[ fig : setup ] ) . here , the photon polarization will play the role of the required two - dimensional ancilla which will provide the tpe of the qutrit hilbert space . to implement the optimal unitary transformation ( [ eq : u ] ) that couples the original system and ancilla , we make use of hwps in the modes 0 and 1 ( box ii ) oriented at , for . upon the identification and , the system - ancilla state ,after conditional polarization rotations in box ii , will be transformed as {\rm min} ] and , provide the optimal unitary coupling ( [ eq : u ] ) and , hence , transform the system - ancilla state as {\rm min}$ ] and .the ancilla then is measured in the basis with a pbs in the mode 0 ( box vi ) .if it succeeds , the state is projected onto which will be determined , with the maximum possible confidence in this second stage of the smc measurement .the four - outcome povm that identifies ( and hence ) is implemented by a symmetric eight - port interferometer followed by photodetectors at each of its outputs , as sketched in box vii of fig .[ fig : setup ] .now , in order to implement this povm , _ two _ vacuum inputs to the interferometer ( indicated by the dashed arrows in box vii ) are needed for providing the two extra dimensions via dse of the hilbert space .similarly to the first stage , the probability that the photodetector in port clicks if the input state to the interferometer was will be where is the completely antisymmetric levi - civita tensor and indicates addition modulo 4 .therefore , due to the one - to - one correspondence between the states and , if a click in the detector leads us to identify the input state as , the above equation tells us that our confidence in doing so will be , which is the maximum one in the second stage of the smc measurement [ see eq .( [ eq : mc_1_2 ] ) ] . when an inconclusive result is obtained from the two - outcome povm in the second stage of the smc measurement , the photon polarization in step vi is projected onto .in this case , one can see from eq .( [ eq : xi_transf_exp ] ) that the states are projected onto for all .therefore , a click in the photodetector `` ? '' ( box viii in fig .[ fig : setup ] ) gives no information about the input states .the probability that this occurs after the two - stage smc measurement will be , according with eq .( [ eq : prob_???_qutrit ] ) , to conclude this section , we would like to make two remarks : ( i ) the implementation of eight - port interferometers , as those sketched in boxes iv and vii of fig . [fig : setup ] , would be the most challenging steps in a possible realization of the optical scheme proposed here .however , this type of interferometer has been recently implemented for carrying out ud among three nonorthogonal states , which ensures the feasibility of our scheme .( ii ) the optical network proposed here can be , in principle , generalized for more states and higher dimensions . for symmetric states in a -dimensional hilbert space ,the propagation modes are obtained by inserting hwps and pbss at the preparation step ( box i in fig .[ fig : setup ] ) . 
at each possible stage of the smc measurement the two - outcome povmis implemented with the polarization as ancilla .if it succeeds , a -port interferometer followed by photodetectors at each of its outputs will implement the -outcome povm that identifies the input states with maximum confidence in that particular stage .we have investigated a measurement strategy for discriminating among nonorthogonal symmetric qudit states with maximum confidence .our study was restricted to a set of linearly dependent and equally likely pure states . for this problem, we found the optimal povm that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results .the physical implementation of this povm has been completely specified by considering the mc strategy as a two - step process . in the first step ,a two - outcome povm is performed with one outcome associated with the success and the other with the failure ( or an inconclusive answer ) of the process . to implement it, we introduced a two - dimensional ancilla and , in terms of the effect operators associated with each outcome , we prescribed the optimal unitary operation that provides the coupling between this ancilla and the original system .after measuring the ancilla , we showed that , in case of success , the input states are discriminated with maximum confidence in the second step of the mc strategy .this was achieved by applying an inverse discrete fourier transform to the ( transformed ) input states and carrying out a projective measurement in the logical basis of an extended -dimensional hilbert space . on the other hand , in case of failure , it was shown that the input states can be mapped into a new set of equiprobable symmetric states , restricted to a subspace of the original qudit hilbert space .as we discussed , if that was the case , the two - step mc measurement could be applied again onto this new set , and iterated in as many stages as allowed by the input states , until no further information could be extracted from an inconclusive result .we have shown that by implementing such optimized measurement , which we called `` sequential maximum - confidence measurement , '' our confidence in identifying the input states is the highest possible at each stage , although it decreases from one stage to the next .also , the confidence per stage was shown to be higher than the one achieved by the optimal me measurement if it had been applied in that stage .for an -stage smc measurement we demonstrated that the probability of correctly identifying the input states increases the more stages we accomplish within the allowed . finally , we have illustrated the smc measurement in the simplest possible case where it can be applied , which is the discrimination among four qutrit states .for this particular case , we proposed an optical network , feasible with the current technology , which could carry out a two - stage smc measurement and be generalized for more states and higher dimensions .it is important to remark that the smc measurements for state discrimination may be , in principle , applied to an arbitrary set of linearly dependent qudit states . for the equally likely symmetric states studied here ,the task to find the optimized povm at each stage of the smc measurement is facilitated , since the failure states are also symmetric and equiprobable . 
in the case of an arbitrary input set, this task will be certainly more complicated , since the form of the failure states and their associated probability distribution will not be necessarily the same at each stage .fortunately , the mc strategy allows a closed form solution of the optimal povm [ eq .( [ eq : optimal_povm ] ) ] for an arbitrary set of states .the smc measurement applied for symmetric qudit states , as proposed here , may have important applications .for instance , it is well known that after a unsuccessful attempt of unambiguously discriminating among equiprobable symmetric qudit states , the input set is mapped onto a linearly dependent set of states which are also symmetric and equally likely . due to the linear dependence, this set can not be unambiguously discriminated by any further process . however , by carrying out a smc measurement , our confidence in identifying the input states would significantly increase .we also anticipate other applications for the results presented in this article in the quantum communication protocols of entanglement swapping and quantum teleportation for high - dimensional quantum systems .when the quantum channel has nonmaximal schmidt rank , it can be shown that these processes are mapped to the problem of discriminating linearly dependent symmetric states with the maximum confidence . moreover , for some quantum channels it might be possible to implement the smc measurement introduced here , such that successful outcomes at each stage will lead to the highest possible fidelity ( in that stage ) for the protocol . c. h. bennett and g. brassard , in _ proceedings of ieee international conference on computers , systems , and signal processing , bangalore , india_ ( ieee , new york , 1984 ) , pp . 175179 ; c. h bennett , , 3121 ( 1992 ) . for an overview of qsd, there exist in the literature a number of review articles dealing both with theoretical and experimental achievements .in particular , refs . bring the most updated developments .the external pss are adjusted to compensate the relative phases arising from the reflections at the pbss . by setting the internal ps to add a phase of , it can be shown that the eight - port interferometers in fig .[ fig : setup ] implement the inverse fourier transform ( ) , where is given by eq .( [ eq : fourier ] ) with .
|
we study the maximum-confidence (mc) measurement strategy for discriminating among nonorthogonal symmetric qudit states. restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (povm) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. the physical realization of this povm is completely determined, and it is shown that after an inconclusive outcome the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit hilbert space. by applying the mc measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. this leads us to introduce the concept of _ sequential maximum-confidence _ (smc) measurements, where the optimized mc strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. within each stage of this measurement, our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. in addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. we will discuss an explicit example of the optimal smc measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.
|
today we are witnessing an unprecedented worldwide growth of mobile data traffic that is expected to surpass 10 exabytes per month in 2017 . in order to manage this load ,operators deploy small cell base stations ( scbss ) that work in conjunction with the conventional macrocellular base stations ( mbs ) .the scbss increase the area spectral efficiency and serve the users with short range energy - prudent transmission links .the main drawback of this approach however is the high cost incurred by the deployment of the backhaul links that connect the scbss to the core network .local caching of popular files at the scbss has been recently proposed - , so as to reduce the necessary capacity , and hence the cost , of these backhaul links .based on this novel architecture , user requests are served by the scbss , if the latter have cached the respective file , otherwise the mbs is triggered to serve them .the main challenge here is to design the optimal caching policy , i.e. , to determine the files that should be cached in each scbs so as to minimize the cost for serving the requests .however , one aspect of the cellular networks that has not been considered in these previous works , is that operators can employ multicast transmissions to concurrently serve multiple requests of different users .multicast constitutes a promising solution for efficient delivery of multimedia content over cellular networks ( e.g. , see and references therein ) , and has been incorporated in 3gpp specifications .it can be used to deliver content to users that have subscribed to a multicast service , or to users that submit file requests at nearby times and hence can be served via a single multicast transmission .clearly , multicast impacts the caching policies .for example , when mbs multicast is used to deliver a file , there is no need to cache it in any scbs . on the other hand , in order to avoid such mbs transmissions , all the scbss that receive user requests for this file should have it cached . in this paper , we design caching policies for small cell networks when the operators employ multicast .similarly to previous works , our goal is to reduce the servicing cost of the operator by minimizing the volume of the incurred traffic .however , our approach differs substantially from other studies that designed caching policies based solely on content popularity , .namely , multicast transmissions couple caching decisions with the spatiotemporal characteristics of user requests , and renders the problem np - hard even for the simple case of non - overlapping scbs coverage areas .first , we demonstrate through simple examples how multicast affects the optimality of caching policies .accordingly , we introduce a general optimization problem ( which we name macp ) for devising the optimal caching policy under different user requirements .we assume that different users ask for different files in different time instances .the location and the time arrival of these requests determines whether mbs or scbss multicast transmissions are possible which , in turn , affects the design of the caching policies .we prove the complexity of the caching problem and provide a heuristic algorithm that yields remarkable results compared to conventional caching schemes .our main technical contributions are as follows : * _ multicast aware caching problem ( macp)_. 
we introduce the macp problem that derives caching policies which take into account the possibility of multicast transmission from mbs and scbs .this is very important as content delivery via multicast is part of 3gpp standards and gains increasing interest . *_ complexity analysis of macp_. we prove the intractability of the macp problem by reducing it to the set packing problem . that is , we show that macp is np - hard even to approximate within a factor of , where is the number of scbss . *_ heuristic solution algorithm_. we present a heuristic algorithm that provides significant performance gains compared to the existing caching schemes . the problem formulation and the algorithmare generic in the sense that apply for general network parameters such as different servicing cost , coverage areas and user demands . *_ performance evaluation_. we evaluate the proposed scheme in representative scenarios .we show that our algorithm reduces the servicing cost even down to compared to conventional ( multicast - agnostic ) caching schemes and study the impact of several system parameters such as the cache sizes and the user request patterns .the rest of the paper is organized as follows : section [ section:2 ] reviews our contribution compared to the related works , whereas section [ section:3 ] describes the system model and defines the problem formally . in section [ section:4 ] , we show the intractability of the problem and present a heuristic caching algorithm with concerns on multicast transmissions .section [ section:5 ] presents our numerical results , while in section [ section:6 ] we conclude the paper .the idea of leveraging in - network storage for improving network performance is gaining increasing interest and has been recently proposed also for small cell networks - .authors in performed the file placement in storage capable base stations based solely on file popularity . the subsequent work in extended their results for the special case that users request video files encoded into multiple quality levels . in our previous work , we studied the impact of scbss wireless capacity constraints on the caching decisions .in contrast to all these studies , our caching policy is carefully designed with concerns on the multicast which is often used by operators to reduce the servicing cost .it is worth emphasizing that this twist increases significantly the complexity of the caching policy design problem .namely , while for the simple scenario of non - overlapping coverage areas of the scbs the conventional file placement is a trivial problem , we prove that incorporating multicast transmissions into the system makes it np - hard . the caching problem has also been studied in information delivery through broadcasting in conventional cellular networks ( i.e. , without scbss ) . in these systems , users are endowed with caches in order to store in advance broadcasted content and retrieve later when they need it .the closest work to ours is that presented by maddah - ali et al . 
.the authors focus on the joint caching and content delivery problem for the case that there exists a set of users , each one requesting a single file .the goal is to serve them with a single multicast transmission in a way that reduces the peak traffic rates .in contrast to that work , we consider a small cell network setting and aim at deriving the caching policy that minimizes the average cost incurred when serving the user requests .in this section we introduce the system model , we provide a motivating example that explains the problem under consideration and highlights the impact of multicast on caching and , finally , we formally define the multicast - aware caching optimization problem .* system model*. we study the downlink operation of a small cell network like the one depicted in figure [ fig : model ] .a set of small cell base stations ( scbss ) are deployed within the macrocell , serving the requests of the nearby users .each scbs is equipped with a cache of size bytes .the mbs is connected to the core network via a backhaul link .we denote with the average incurred cost per byte ( in monetary units / byte ) when transferring data from the core network to the mbs via it s backahul link .parameter refers to the average cost of the required backhaul capacity in case the link is owned by the operator . when the backhaul link is leased ( e.g. , by a tier-1 isp ) , denotes the average cost per byte paid to the link provider , which depends both on peak traffic and on the volume of the traffic , based on the employed pricing scheme . besides, we denote with the cost per byte incurred when transmitting data directly from mbs to the users in the cell .parameter refers to the average mbs energy consumption when transmitting files to the users .finally , let denote the unit cost incurred when transmitting data from the scbs to it s nearby users . clearly , , , since the scbss are in closer proximity to the users than the mbs .in general , the above cost parameters can be interpreted as the average opex , and average projected capex costs of the operator .we study the system for a certain time interval ( several hours or few days ) , during which the users demand for a set of popular files is assumed to be known in advance , as in , , .let indicate that collection of content files . 
for notational convenience ,we assume that all files have the same size normalized to .this assumption can be easily removed as , in real systems , files can be divided into blocks of the same length , , .users are heterogeneous since they may have different content demands .to facilitate the analysis , we consider the case that the coverage areas of the scbss are non - overlapping , hence each user is in the coverage area of at most one scbs .we want to emphasize at this point that , as it will be explained in the sequel , all the presented results for the complexity of the problem as well as the proposed algorithm hold also for the case of overlapping scbs coverage areas .we denote with the average demand of users for file covered by scbs within the considered time interval .also , denotes the average demand of the users for file that are not in the coverage area of any of the scbss .we assume that file requests must be satisfied within a given time deadline of seconds in order to be acceptable by the users , as in .the multicast service happens every seconds , which ensures that all the requests will be satisfied within the time deadline .we denote with the probability that at least one request for file is generated by users in the coverage area of scbs ( area ) within the time period .similarly , denotes the respective probability for the users that are not in the coverage area of any of the scbss ( area ) .we denote with the probability that at least one request for the file is generated within each one of the areas , within the period .the operator can employ multicast to simultaneously serve many different requests for the _ same file _ that happen within the _ same time interval _ of duration , and thus reduce its servicing cost .we assume that both scbss and mbs can use multicast .namely , each scbs multicast transmissions satisfy the requests of users within its coverage area , while mbs transmissions satisfy requests generated within the coverage areas of different scbss ( and requests from area ) that have not cached the requested file .this latter option induces higher cost since the mbs has higher transmission cost and also needs to fetch the file via its backhaul link .this exactly is the main idea of this work : _ `` to carefully design the caching policy with concerns on the multicast transmissions so as to minimize the servicing cost''_. before we introduce formally the problem , let us provide a simple example that highlights how the consideration of multicast transmissions impacts the caching policy . *motivating example*. consider the scenario depicted in figure [ fig : example ] with two scbss ( and ) .there are three equal sized files ( , and ) .each scbs can cache at most one file because of it s limited cache size .we also set , and .we assume that requests are generated independently among different areas .thus , for each subset of areas it holds that : besides , we assume that the number of requests for each file within the coverage of scbs follows a poisson probability distribution with rate parameter , . 
thus , the probability that at least one request for file is generated within scbs in the time period is : let , , , , , and .then , , , , , and .the optimal caching policy places to and to .then , all the requests for will be served by transferring it via the backhaul link that connects the core network to the mbs and then transmitting it by the mbs ( via a single multicast ) .the requests for the rest files will be satisfied by the accessed scbss ( at zero cost ) .hence , the total servicing cost is : .however , if we ignore the multicast transmissions for aggregated requests when designing the caching policy , ( and thus assume that each request will be served via a separate unicast transmission ) , then _ the optimal caching policy changes _ ; it places file to both scbss ( because is the most popular file according to and ) .then , the requests within for and the requests within for will be served by the mbs .the total servicing cost is : , where the last term in the summation is multiplied by because _ two different files are requested for download _ and thus can not be served with a single multicast transmission ( i.e. , _ two unicast transmissions are required _ ) .this example demonstrates that ignoring the multicast transmissions in cache management decisions fails to fully exploit the multicast opportunities , and hence yields increased network servicing cost . and ) . ] * problem statement*. let us introduce the integer decision variable , which indicates whether file is placed at the cache of scbs or not .we also define the respective caching policy matrix . to facilitate notationwe introduce variable which indicates whether a multicast transmission by the mbs will happen , for a given caching policy , and a subset of areas requesting file : where is the indicator function , i.e. , is equal to one iff condition is true ; otherwise it is equal to zero .for example , if a request is generated in a point that is not in the coverage area of any scbss , i.e. , then a multicast transmission will happen as the requester can not find the file at a scbs cache .thus , and the external max term is equal to 1 .similarly , for the case that a request is generated within the coverage area of _ at least one _ scbs , but the latter has not stored in it s cache the requested file . the problem of determining the caching policy that minimizes the total servicing cost can be written as follows : the above expression in the objective function indicates that for each subset of areas that generate at least one request for file , within the same time period of duration , a single multicast transmission by the mbs happens , if there is at least one requester that is not in range with a scbs having cached the file . in other case ,i.e. , when , all the requests are satisfied by the accessed scbss . constraints ( [ eq : storeconstraint ] ) denote the cache capacity constraints of the scbss , whereas inequalities ( [ eq : constraint ] ) indicate the discrete nature of the optimization variables .we call the above the _ multicast - aware caching problem _ ( macp ). observe that the description of the objective function in ( [ eq : objective ] ) is exponentially long in the number of scbss due to the number of subsets . in practicethough , its description is affordable as the number of scbss in a single cell is typically small ( e.g. 
, a few decades ) .even so , as we prove in the next section , macp is an np - hard problem .in this section we prove the high complexity of the macp problem and present a heuristic algorithm for it s solution .namely , we show that the macp problem is np - hard by proving that the well known set packing problem ( spp ) is polynomial - time reducible to macp . in other words, we prove that spp is a special case of macp .since spp is np - hard it directly follows that macp is also np - hard .therefore , the following theorem holds : _ macp is an np - hard problem .moreover , it is np - hard even to approximate it within a factor of . in order to prove theorem we will consider the corresponding ( and equivalent ) decision problem , called multicast aware caching decision problem ( macdp ) . specifically : _ macdp _ : given a set of scbss , a set of unit - sized files , the vector , the costs and , the time deadline , the request probability matrix , and a real number , _ we ask the following question _ : does there exist a caching policy , such that the value of the objective function in ( [ eq : objective ] ) is less or equal to and constraints ( [ eq : storeconstraint])-([eq : constraint ] ) are satisfied ?we denote this problem instance with .the set packing decision problem is defined as follows : _ spp _ : consider a finite set of elements and a list containing subsets of .we ask : do there exist subsets in that are pairwise disjoint ?let us denote this problem instance by .the set packing problem is polynomial - time reducible to the macdp .consider the decision problem and a specific instance of macdp with scbcs , i.e. , , a file set of unit - sized files , i.e. , , unit - sized caches : , , , and .parameter is any positive number , and the question is if we can satisfy the users requests with cost , where is the parameter from the spp .the important point is that we define the elements of matrix as follows : observe that for given spp instances , we can construct the respective specific macdp in polynomial time .notice that with the previous definitions , is the component of the list and containts a certain subset of elements of . for the macdp , under the above mapping, this correspond to a subset of scbss asking with a non - zero probability file .moreover , with ( [ eq : probs ] ) we assume that these probabilities are equal and have value .if the mbs has to serve all the requests , then the macdp problem has a value ( cost ) of ( the worst case scenario ) . for each file that the operator manages to serve completely through local caching at the scbss , the operator reduces its cost by .this reduction is ensured only if the file is cashed in all the scbss for which .therefore , in order to achieve the desirable value , we need to serve locally the requests for files .that is , to find subsets of scbss where each file should be cached so as to avoid mbs multicasts .notice now that the caches are unit - sized .hence , the caching decisions should be disjoint with respect to the scbss .for example , in figure [ fig : np ] , you can not store in scbs 1 both files 1 and 2 , because .this ensures that you will not pick both the subsets and in the spp problem . in other words , the value of the objective function in ( [ eq : objective ] ) can be less or equal to , if there exist subsets in that are pairwise disjoint . 
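hardness aside, the objective in ( [ eq : objective ] ) is straightforward to evaluate exactly for small instances like the ones used in the motivating example and the reduction above, by enumerating the subsets of areas that request a file within a period and using the independence and poisson assumptions (q = 1 - exp(-lambda * T)). the sketch below does exactly this; all rates, costs and file names are hypothetical, since the example's numerical values did not survive extraction, and scbs transmissions are taken as free, as in the example.

```python
import itertools
import math

def macp_cost(X, lam, T, c_bh, c_mbs, area_of):
    """Expected servicing cost of caching policy X (a set of (scbs, file) pairs).

    lam[a][f] : Poisson request rate for file f in area a; area_of[a] is the SCBS
    covering area a, or None for the uncovered region.  One MBS multicast (cost
    c_bh + c_mbs) is counted per file and period whenever at least one requesting
    area cannot be served from a local cache; SCBS-served requests cost nothing.
    """
    areas = list(lam.keys())
    files = list(next(iter(lam.values())).keys())
    cost = 0.0
    for f in files:
        q = {a: 1.0 - math.exp(-lam[a][f] * T) for a in areas}   # P(>= 1 request)
        for r in range(1, len(areas) + 1):
            for S in itertools.combinations(areas, r):
                p = math.prod(q[a] if a in S else 1.0 - q[a] for a in areas)
                mbs_needed = any(area_of[a] is None or (area_of[a], f) not in X
                                 for a in S)
                cost += p * (c_bh + c_mbs) * mbs_needed
    return cost

# toy instance: two SCBSs (areas 0,1), one uncovered area 2, three files, unit caches
lam = {0: {'f1': 2.0, 'f2': 0.5, 'f3': 0.1},
       1: {'f1': 1.5, 'f2': 0.2, 'f3': 0.8},
       2: {'f1': 3.0, 'f2': 0.05, 'f3': 0.05}}
area_of = {0: 0, 1: 1, 2: None}
popularity_policy = {(0, 'f1'), (1, 'f1')}    # cache the most popular file everywhere
multicast_aware   = {(0, 'f2'), (1, 'f3')}    # let the MBS multicast f1 once instead
for name, X in [('popularity', popularity_policy), ('multicast-aware', multicast_aware)]:
    print(name, round(macp_cost(X, lam, T=1.0, c_bh=1.0, c_mbs=1.0, area_of=area_of), 3))
```

with these (hypothetical) rates the multicast-aware placement comes out cheaper, reproducing the effect illustrated by the motivating example above.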
.in the macdp instance there are n=3 scbss and i=3 files .there is a solution to macdp of cost that places file to scbs 1 and file to scbss 2 and 3 .accordingly , the solution to spp picks the subsets and . ]conversely , if a set packing for some exists , then for each subset that is picked in it , one can place the file to the cache of each one of the scbss corresponding to this subset . at most one file is placed in each cache , since the picked subsets in the list are pairwise disjoint. the cost will be equal to .spp is np - hard and moreover it is inapproximable within .according to the reduction , we create a scbs for each one of the elements in , and hence it holds , which concludes the proof of theorem . at this point , we need to emphasize that theorem holds also for the more general case that the scbss coverage areas are overlapping , which can be directly proved as this is a harder problem than the non - overlapping scbss that we considered in our analysis .this indicates that the multicast - aware problem is very hard even for the more simple non - overlapping coverage areas scenario . * heuristic algorithm*. because of the above hardness results, we propose a light - weight heuristic algorithm for the solution of the macp problem .the proposed iterative algorithm starts with all the caches empty . at each iteration, it places the file to a non - full cache that yields the lowest value of the objective function in ( [ eq : objective ] ) .the algorithm terminates when all the caches become full .this is a greedy ascending procedure that can be summarized in algorithm .input : , , , , , , output : the caching policy + ] , our scheme achieves significant gains compared to the others .this is of major importance considering that the traffic generation in reality follows a zipf distribution with a parameter _ a _ around , .* impact of the time deadline : * finally , figures 4(c ) shows how the performance of the discussed schemes depends on the time parameter .this is the maximum time duration that a request must be satisfied in order to be acceptable by the users ( and/or the service ) .particularly , as the time deadline ( and hence the duration of the time period of service ) becomes larger , more requests are aggregated for the same file within , and thus more requests are served via multicast transmissions .therefore , the performance gap between each one of the schemes that enable mulitcast transmissions ( pac - mt and mac - mt ) and the pac - ut becomes larger . 
besides , increasing the time deadline increases the gap between pac - mt and mac - mt .this is because , more multicast transmissions happen and mac - mt is the only scheme out of the three that is designed with concerns on them .in this paper , we considered storage capable small cell base stations and proposed a novel caching scheme to minimize the cost incurred for serving the file requests of mobile users .this is a topic of major importance nowadays , as the mobile data demand growth challenges the cellular operators .in contrast to the traditional caching schemes that simply bring popular content close to the users , our caching strategy is carefully designed so as to additionally exploit the multicast opportunities .interestingly , we find that a simple ascending greedy algorithm achieves cost reduction up to when compared to the existing schemes that perform only unicast transmissions .even when multicast transmissions are employed by other schemes , our caching policy outperforms them , achieving cost reduction up to .
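As an illustration of the greedy ascending heuristic referred to above (start from empty caches, repeatedly commit the single file placement that yields the lowest value of the objective, stop when every cache is full), here is a minimal sketch. It treats the multicast-aware objective as a black-box callable, and every identifier and data structure below is ours rather than the paper's reference implementation.

```python
import itertools

def greedy_macp(files, cache_capacity, cost_fn):
    """Greedy ascending heuristic for MACP (sketch, not the reference code).

    files          : iterable of file identifiers
    cache_capacity : dict mapping scbs_id -> number of free unit-sized slots
    cost_fn        : callable(placements) -> total servicing cost, i.e. an
                     evaluator of the multicast-aware objective (black box here)
    Returns the set of (file, scbs) placements chosen by the heuristic.
    """
    files = list(files)
    placements = set()
    free = dict(cache_capacity)
    while any(slots > 0 for slots in free.values()):
        best = None
        for f, n in itertools.product(files, free):
            if free[n] == 0 or (f, n) in placements:
                continue
            cost = cost_fn(placements | {(f, n)})   # cost after the tentative placement
            if best is None or cost < best[0]:
                best = (cost, f, n)
        if best is None:                            # no admissible placement left
            break
        _, f, n = best
        placements.add((f, n))                      # commit the cheapest placement
        free[n] -= 1
    return placements
```

Each pass of the outer loop fills one cache slot, so the sketch performs one objective evaluation per candidate (file, cache) pair for every committed placement.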
|
the deployment of small cells is expected to gain huge momentum in the near future , as a solution for managing the skyrocketing mobile data demand growth . local caching of popular files at the small cell base stations has been recently proposed , aiming at reducing the traffic incurred when transferring the requested content from the core network to the users . in this paper , we propose and analyze a novel caching approach that can achieve significantly lower traffic compared to the traditional caching schemes . our cache design policy carefully takes into account the fact that an operator can serve the requests for the same file that happen at nearby times via a single multicast transmission . the latter incurs less traffic as the requested file is transmitted to the users only once , rather than with many unicast transmissions . systematic experiments demonstrate the effectiveness of our approach , as compared to the existing caching schemes .
|
the question of excitations is central to the study of cold atoms .numerous experimental and theoretical investigations have been devoted to the study of a variety of collective and elementary excitations in these gases including vortex states , solitary waves , and normal modes . when determining the properties of an excited state , it is natural to consider two kinds of possible instabilities static and dynamic . in the former case ,one wishes to determine whether a state which extremizes the energy is a genuine local minimum of the energy subject to certain physically motivated constraints .since the wave function extremizes the energy , infinitesimal perturbations will make no first - order change of the energy. the state will be stable provided that an arbitrary infinitesimal variation of the wave function necessarily increases the energy to second order . for the problem of dynamic stability ,one conventionally considers the temporal evolution of arbitrary infinitesimal perturbations to the wave function using approximate linearized equations ( e.g. , the bogoliubov equations ) . while the full time - dependent equations conserve probability , the linearized equations do not , and the resulting eigenvalue problem is non - hermitean . if the corresponding eigenvalues are real , the system is dynamically stable . in this casesmall - amplitude time - dependent perturbations of the solution lead to bounded motion about the original equilibrium position .instability is indicated by complex eigenvalues , and the associated exponential growth of small - amplitude perturbations drives the system to a new state beyond the scope of the linearized equations . here, we will explore the intimate connection between these two seemingly - different criteria for stability .two main points will emerge .first , static stability implies dynamic stability , but dynamic stability can exist even in the absence of static stability .second , the transition from dynamic stability to instability is reflected in the features of the corresponding problem of static stability ( subject to appropriate constraints ) . here , inspired by the experiment of ref. and by the numerical studies of refs . , we will focus on the specific problem of the stability ( static and dynamic ) of a doubly - quantized vortex state .our study is , however , more general , and our results can be applied to the study of the stability of any problem described by the non - linear gross - pitaevskii equation .one remarkable observation obtained from numerical simulations of this problem is that the system alternates between regions of dynamical stability and instability as the strength of the interatomic interaction increases . since some ( and probably all ) regions of dynamical stabilitycoincide with regions of static instability , it is useful to seek a simple description of this surprising phenomenon . in the followingwe first examine the questions of static and dynamic stability separately .we will then demonstrate the connection between them .we consider atoms subject to a spherically - symmetric single - particle hamiltonian , , interacting through a short - range effective interaction , here is the strength of the effective two - body interaction , is the scattering length for elastic atom - atom collisions , and is the atomic mass . 
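The inline expression for the effective interaction appears to have been lost in extraction. A minimal LaTeX rendering of the standard zero-range form consistent with the surrounding description (strength set by the s-wave scattering length and the atomic mass) is given below; the symbols U_0, a and M are the conventional ones and may differ from the authors' notation.

```latex
V_{\mathrm{int}}(\mathbf{r}-\mathbf{r}') = U_0\,\delta(\mathbf{r}-\mathbf{r}'),
\qquad
U_0 = \frac{4\pi\hbar^2 a}{M}.
```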
for simplicitywe assume strong confinement along the axis , which is also chosen as the axis of rotation .this assumption implies that the cloud is in its lowest state of motion in the direction , and the problem thus becomes two - dimensional with higher degrees of freedom along the axis frozen out .we also assume that is rotationally symmetric about the axis ( e.g. , a harmonic oscillator hamiltonian ) with the result that angular momentum is conserved .to be concrete , we will consider the problem of a doubly - quantized vortex state .we start by examining the static stability of this state within the subspace of states of the lowest landau level .we will generalize our arguments below .the corresponding nodeless eigenfunctions of the ( two - dimensional ) single - particle hamiltonian , , will be denoted as , with the angular momentum ; their energy eigenvalues are .we now consider the state which describes a doubly - quantized vortex state when and . since we wish to consider the effect of small admixtures of states with and , we expand the energy to second order in and to obtain the expectation value of the energy per particle where with and .the elements , , are real due to the hermiticity of .the energy is then equal to where .the absence of terms linear in and is a consequence of the spherical symmetry of and implies that is a solution to the variational problem in this restricted space . for fixed and ,it is elementary that has extrema for . in general , the linearly independent curvatures of this quadratic energy surface are obtained as the eigenvalues of the real symmetric matrix with . in this problem , it is evidently of physical interest to consider the stability of the doubly quantized vortex subject to the constraint that angular momentum is conserved .this constraint will be satisfied if , and the energy then becomes the doubly - quantized vortex will be an energy minimum if the coefficient of is positive .in other words , the doubly - quantized vortex will be energetically stable if this condition is not satisfied when only lowest landau levels are retained , and a doubly - quantized vortex state is energetically unstable within this approximation .several comments are in order before turning to the dynamical behavior of the same ( truncated ) problem .we have tested the stability of the state with respect to the admixture of states and .a complete test of stability requires the investigation of the admixture of for all .fortunately , the spherical symmetry of ensures that the generalized quadratic form of eq.([enstab ] ) will only mix states with . with the inclusion of additional values of ,the matrix becomes block diagonal and leads to a sequence of equations for each completely analogous to those considered above .( the choice considered here is known to be the most unstable case . )the problem of finding the eigenvalues of the hermitean matrix , , is subject to familiar variational arguments .expansion of the dimension of this matrix , either by including more values of or by relaxing the restriction to the lowest landau level , can only decrease the smallest eigenvalue of .thus , the fact that a given state is energetically unstable for a given choice of the finite space used to construct is conclusive proof of instability .finally , we note that changing the state under investigation , e.g. , through the inclusion of higher landau levels , leads to fundamental changes in . 
under such circumstances, simple variational arguments can not tell us whether the improved state will be more or less stable .as mentioned above , dynamic stability probes the temporal behavior of the system .we again start with the wave function of eq.([defwf ] ) and construct the lagrangian to second order in the small parameters and .variations with respect to the coefficients and then lead to the equations where the dot denotes a time derivative .we can solve these equations with the ansatz and .( the time dependence of the solution , , is given by the factor with . )the resulting bogoliubov equations can be written in the form this equation can also be written as where is the matrix appearing in the static stability problem . the dynamic problem is thus governed by the eigenvalues of the non - hermitean eq.([bog ] ) , and these eigenvalues can be complex . since is real hermitean ,the dynamic eigenvalues are either real or come in conjugate pairs . for each eigenfunction , , with a complex eigenvalue , , will also be an eigenfunction with eigenvalue . as a result ,the roots move along the real axis , touch ( i.e. , become degenerate ) and then move off in the complex plane .evidently , the existence of complex eigenvalues indicates exponential divergence and dynamic instability . from eqs.([bog ] ) or ( [ bog2 ] ) we see that the bogoliubov eigenvalues are real and the state under investigation is dynamically stable provided that .this condition is identical to the condition for static stability found in eq.([cond ] ) .as we show below in greater generality , static stability always implies dynamic stability , but dynamic stability can occur even in the presence of static instability .at this point it is useful to generalize our static and dynamic formalisms to allow for the inclusion of an arbitrary number of landau levels in the description of both the doubly quantized vortex and the potentially unstable states with and .the matrix appropriate for the static problem now has the hermitean block form with . aside from its dimension ,the only change in the construction of lies in the replacement of the lowest landau state , , by a superposition of landau states , , which satisfies the obvious euler equation here , is a lagrangian multiplier introduced to ensure that is normalized to unity .it is understood that the replacement is to be made as appropriate in the integrals , , contributing to .as before , positive eigenvalues of imply energetic stability .the dynamic bogoliubov equations again assume the form of eq.([bog2 ] ) and can be written as where is again the real hermitean matrix governing energetic stability .the quantities and now represent column vectors ; their dimensions need not be equal .the reality of all eigenvalues , , is again an indication of dynamical stability .as we have seen , is hermitean and spherically symmetric .the time evolution of an arbitrary wave function , , is governed by the full time - dependent gross - pitaevskii equation it thus comes as no surprise that energy and angular momentum are , in general , constants of the motion . 
on the other hand ,given that the ( linearized ) bogoliubov equations do not even conserve probability , it is not obvious that energy and momentum are conserved when these equations are used to describe the temporal evolution of the system .this point merits some attention .consider a dynamic eigenvector of the bogoliubov equations , , and its ( possibly complex ) eigenvalue , .given the hermiticity of , standard arguments reveal that this equation is evidently trivial when is real , but it shows that when is complex .this tells us that the probabilities of finding is equal to that for for all times . in this sense ,angular momentum is rigorously conserved in spite of the approximations leading to the bogoliubov equations .the extension of this familiar conservation law presumes that the trial state is deformed by the inclusion ( with arbitrary amplitude ) of a single bogoliubov eigenvector with complex eigenvalue .angular momentum is not in general conserved with arbitrary deformations of involving either single bogoliubov eigenvectors with real eigenvalues or superpositions of bogoliubov eigenvectors .since the primary rationale for studying the bogoliubov equations at all is to determine the existence or non - existence of complex eigenvalues , this point is of interest in spite of such caveats .further , it suggests that we are most likely to reveal relations between the problems of static and dynamic stability if we consider the question of static stability subject to the constraint of constant angular momentum .indeed , this constraint was imposed trivially in eq.([enstabb ] ) above , where it was crucial in establishing the identity of static and dynamic stability criteria in the simple case of single landau levels . for the more general problem considered here ,it is easiest to proceed by introducing real lagrange multipliers , and , in order to impose the constraints of overall normalization and constant angular momentum , respectively , in the static problem .the constrained static eigenvalue problem then reads in practice , the real parameter is adjusted so that the resulting static eigenfunctions satisfy .the desired extrema of then follow as .the similarity between the constrained static problem of eq.([omegas ] ) and the dynamic eigenvalue problem of eq.([bog3 ] ) is now obvious .we shall exploit this connection below .given the fact that , we see that the inner product of with eq.([bog3 ] ) also yields .if is deformed by the inclusion ( with arbitrary amplitude ) of a single bogoliubov eigenvector with complex eigenvalue , the energy is rigorously conserved and identical to that obtained for the pure state .we can use any with complex eigenvalue as a trial function to provide a variational upper bound of zero for the lowest constrained static eigenvalue , .said somewhat more simply , it is elementary that the smallest can not be positive if there exists a state such that .( henceforth , we will consider only the constrained static problem and will refer to it simply as the `` static problem '' . ) thus , if the bogoliubov problem has complex eigenvalues , the lowest eigenvalue of the static problem must be negative ( or zero ) .conversely , if all , none of the bogoliubov eigenvalues can be complex . in other words ,_ dynamic instability guarantees static instability , and static stability guarantees dynamic stability_. 
this establishes the first rigorous connection between the two stability problems .while this result may appear obvious , a similar argument shows that it is equally impossible to find a state such that if all .this yields the more surprising result that _ complete static instability also leads to dynamic stability_. in order to find additional ties between the static and dynamic stability problems , it is useful to follow the trajectory of two bogoliubov eigenvalues ( as a function of the coupling constant ) as they evolve from distinct real values to a conjugate pair . to see this, it is sufficient to consider the two - dimensional space of and .( here , and are of arbitrary dimension and normalized . )we thus write with all quantities real .the bogoliubov problem of eq.([bog3 ] ) is readily solved with the following results . for ,the are real and distinct .the eigenvectors are real and do not satisfy the constraint of constant angular momentum . for , the eigenvalues are degenerate with .the eigenvectors are identical and given as .( recall that the problem is non - hermitean . ) for , the eigenvalues form a conjugate pair .the eigenvectors also form a conjugate pair with a non - trivial phase , and the magnitudes of their and components are equal ( i.e. , they conserve angular momentum ) .we can also solve the static problem using eq.([omegas ] ) in the same truncated basis .the values of required to impose the constraint of conserved angular momentum are found to be .the corresponding static eigenvalues are with , as expected . for ,the static eigenvalues are positive . for , one of the static eigenvaluesis negative . for ,one static eigenvalue is precisely zero with .the condition for a static eigenvalue of zero is the same as the condition for degenerate dynamic eigenvalues .we also see that the static and dynamic eigenvectors are identical at this point .this should come as no surprise . at , the lagrange multiplier , ,is .equation ( [ omegas ] ) , which determines , becomes identical to the bogoliubov equation , eq.([bog3 ] ) , and identical solutions must result .this result is general .while the form of these two equations is similar , the fact that is a real symmetric matrix forces and to be real ( up to a trivial phase ) . when , it is clear that the associated dynamic eigenvectors , and can not be chosen equal .thus , the phase of is non - trivial , and can not be the desired solution to eq.([omegas ] ) .( the case of real can be dismissed summarily , since the amplitudes of and states are not equal . )when two bogoliubov roots are degenerate , however , satisfies both the constraints of angular momentum conservation and reality . under such conditions , , and . in other words ,the number of conjugate pairs of complex bogoliubov eigenvalues changes by one when a static eigenvalue passes through zero .it might be thought that the number of conjugate pairs of dynamic eigenvalues was equal to the number of negative and that a stronger statement was thus possible .as we will show below , this is not the case .the present statement can nevertheless give us a useful corollary .start from a manifestly stable choice of for which all and dynamical stability is ensured .vary some external parameter ( e.g. , the coupling constant , ) . 
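The root-collision mechanism just described, two real Bogoliubov roots that approach, touch, and move off into the complex plane as a parameter is varied, is easy to reproduce numerically. The sketch below uses the two-mode truncation with illustrative matrix entries of our own choosing; it does not impose the angular-momentum constraint of the full problem and is meant only to visualize the trajectory of the roots.

```python
import numpy as np

# Two-mode toy: B is a real symmetric "static" matrix and sigma = diag(1, -1)
# supplies the non-Hermiticity of the Bogoliubov eigenvalue problem.
sigma = np.diag([1.0, -1.0])

def bogoliubov_eigs(b11, b22, b12):
    B = np.array([[b11, b12],
                  [b12, b22]])
    return np.linalg.eigvals(sigma @ B)

for b12 in np.linspace(0.0, 3.0, 7):            # sweep the off-diagonal coupling
    w = np.sort_complex(bogoliubov_eigs(3.0, 1.0, b12))
    print(f"b12 = {b12:4.2f}   eigenvalues = {np.round(w, 3)}")
# For these illustrative entries the two roots are real and distinct for small
# b12, become degenerate at b12 = 2, and form a complex-conjugate pair beyond
# that point, which is precisely the trajectory described in the text.
```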
as demonstrated above, a single conjugate pair of bogoliubov eigenvalues will appear when the first passes through zero , and the system becomes dynamically unstable .with every subsequent passage of an through zero , the number of conjugate pairs changes ( i.e. , increases or decreases ) by one .it is then clear that an odd number of must correspond to an odd number of conjugate bogoliubov pairs and , hence , dynamic instability .as mentioned earlier , it is possible for the system to be dynamically stable even in the presence of static instability .this effect is more subtle but is also useful in tightening the connection between the problems of static and dynamic stability. a simple calculation will be helpful .start with two solutions to the constrained static problem of eq.([omegas ] ) , and .the corresponding static eigenvalues are , which is assumed to be negative , and , which can pass through zero .approximate the bogoliubov equations by truncating the basis to include these two states and the related states and .( this calculation is exact when the dimension of is equal to four . )since these states are not necessarily orthogonal , the bogoliubov equations assume the form with and .both matrices are real symmetric .the first two diagonal elements of are the constrained static eigenvalues , and .the remaining two diagonal elements , and , will be assumed to be positive .the matrix , , is intimately related to the overlap matrix with elements .since the basis states conserve angular momentum , the diagonal elements of vanish .given the origin of the and as constrained extrema of , it is clear that the elements of and are not independent .we shall ignore this fact in the following qualitative arguments .first , consider the case when these four states are maximally linearly dependent .state has been assumed to be distinct from and is explicitly orthogonal to .let us assume it is identical to .this immediately implies that is identical to .the approximate bogoliubov equation is thus reduced to a matrix equation in the space of and and assumes the form the eigenvalues of this problem are and the corresponding ( unnormalized ) eigenvectors are when , the bogoliubov eigenvalues form a conjugate pair .further , eq.([evecs ] ) shows that acquires a non - trivial phase and that }$ ] is zero .the system is dynamically unstable .when , the bogoliubov eigenvalues are degenerate and real . in this case , is equal to , which is real and conserves angular momentum .these results are all consistent with those found above . 
when , however , the bogoliubov eigenvalues are real and distinct .the corresponding can now be chosen real , and angular momentum is no longer conserved .the presence of two negative static minima of is thus capable of creating dynamic stability in spite of manifest static instability .this argument is actually more detailed than necessary .we have already seen that the number of conjugate pairs of bogoliubov eigenvalues necessarily changes by one every time a static eigenvalue passes through zero .linear dependence has reduced the present problem to one spanned by a two - dimensional space .the fact that ensures that there are initially two complex eigenvalues in the space .we have seen in general that the number of complex eigenvalues must change by two when crosses zero .since all eigenvalues of this problem are already complex , the only possibility is thus that the number of complex eigenvalues is reduced to zero with dynamic stability as a consequence , as we have seen .states and are both essential to this process .now consider the case where the are all mutually orthogonal and maximally linearly independent .partition the matrix into blocks spanned by the states and and the states and , respectively .each of the diagonal sub - blocks of now has a form familiar from eq.([invff ] ) .the diagonal elements are equal in each case .the products of off - diagonal elements are and , respectively .if the off - diagonal blocks of this matrix are sufficiently small , the bogoliubov eigenvalues will be given by the eigenvalues of these two matrices . since by assumption , this block produces the original conjugate pair of dynamical eigenvalues independent of the properties of .as passes through zero , becomes negative , and an additional conjugate pair of eigenvalues appears independent of the properties of .this behaviour will persist whenever the off - diagonal blocks of are sufficiently small .we thus see that there are two possible outcomes when a second static extremum becomes negative .if the corresponding states are `` strongly '' coupled with , the initial dynamic instability will be eliminated .if these states are `` weakly '' coupled with , a second unstable dynamic mode will appear .there is another and potentially fruitful way to distinguish these alternatives .consider the evolution of the closed surfaces with which bound domains within which .[ it is assumed that is explored with real trial vectors subject to the constraint of constant angular momentum .hence , these vectors lie on a ( hyper)torus . ]every negative extremum of is evidently surrounded by such a surface , and every such surface must contain at least one negative extremum . if there is one negative extremum ,there is one surface .there are two possible ways for the topology of these surfaces to change when a second static extremum becomes negative .a second surface , not connected to the first , can appear . in this case, the new negative extremum must be a local minimum .alternatively , the original surface can spread , exploiting the fact that the manifold is periodic , and touch itself .the point of contact necessarily represents a new extremum of , and a few moments thought reveals that this extremum is necessarily a saddle point .( when there are more than two negative static modes , independent surfaces can merge in a similar manner . 
)randomly drawn numerical examples suggest that the appearance of a new surface corresponds to the case of weak coupling discussed above , and a new unstable dynamic mode emerges .the merger of an surface with itself corresponds to the case of strong coupling , and the original unstable dynamic mode disappears . although the evidence is admittedly slim , it is tempting to guess more generally that each closed surface with produces zero unstable dynamic modes when it contains an even number of negative extrema of and one unstable dynamic mode when it contains an odd number of such extrema .as we have shown , the reality and hermiticity of and the non - hermiticity introduced by the specific form of are not sufficient to provide a unique answer to the question of dynamic stability in the presence of an even number of unstable static modes .( as noted above , an odd number of negative static modes always implies dynamic instability . ) the question of what constitutes `` strong '' or `` weak '' coupling would thus seem to require more details regarding the physical system in question .for this reason , we now turn to the question of the stability of vortex solutions to the physically relevant gross - pitaevskii equation .we now study the static and dynamic stability of a doubly quantized vortex numerically .the single - particle hamiltonian , , represents a harmonic oscillator potential of strength . in an anharmonic potential ,a doubly quantized vortex is energetically stable for weak couplings , but in a purely harmonic potential this is known not to be the case . in our simulationseq.([euler ] ) was first solved to find the doubly quantized vortex .this is equivalent to finding a stationary solution to the gross - pitaevskii equation subject to the constraint that the system has a 4 phase singularity at the origin .this minimization was carried out within a truncated basis composed of the lowest radially excited states and is valid for small values of such that .the static and dynamic eigenvalues were then calculated according to eqs.([bog3 ] ) and ( [ omegas ] ) in a basis consisting of radially excited states for both the and components .the computation of the eigenvalues was carried out in matlab .the results of this calculation are shown in fig.[fig : frequencies ] . , computed by means of exact diagonalization in a truncated basis .solid lines represent the imaginary part of the eigenvalues of the dynamic stability matrix , .dots represent the lowest eigenvalues of the static stability matrix , .[ fig : frequencies],width=340,height=302 ] results are shown for , and we have checked convergence with respect to both and for the range of couplings displayed in the figure . the dynamic eigenvalues coincide with those previously found numerically in refs. .the figure clearly shows the correlation between the `` window '' structure of the complex frequencies and the eigenvalues , , of the static problem in agreement with our general arguments above . for small such that , the properties of the system follow from the perturbative analysis of the lowest landau level as described above : the system possesses one negative static eigenvalue and therefore one pair of complex dynamic eigenvalues . as the coupling increases , a second static eigenvalue crosses zero at , and the system becomes dynamically stable despite its manifest static instability . 
as a thirdstatic eigenvalue crosses zero , one pair of dynamic eigenvalues again become complex but become real again when a fourth static eigenvalue becomes negative .the numerical findings indicate that as is further increased , a succession of such alternations occurs with the result that the system is found to be dynamically stable ( unstable ) when there are an even ( odd ) number of negative static eigenvalues . for the range of couplings , , studied here ,there is never more than one complex pair of dynamic eigenvalues .it is also seen that the system is always statically unstable . in the limit of large , well - known results for vortices in an infinite system apply . in that limitthe energy of two vortices increases monotonically with decreasing separation , thereby implying that at least one eigenvalue of the static stability matrix is negative .hence it is reasonable to conclude that the system is statically unstable for all values of .we thus conclude that the `` window '' structure of alternating dynamically stable and unstable regions arises from the non - trivial interplay between negative modes of the static stability matrix .the purpose of this paper has been to point out the intimate connections between the static and dynamic stability of solutions to the gross - pitaevskii equations .we have shown that ( suitably constrained ) static stability necessarily implies dynamic stability , that the number of complex conjugate pairs of dynamic eigenvalues changes by one every time a constrained static eigenvalue passes through zero , and that an odd number of negative static eigenvalues thus implies dynamic instability .numerical investigations revealed that the doubly - quantized vortex solution to the gross - pitaevskii equation is statically unstable over the full range of coupling constants explored but displays windows of dynamic stability .the general nature of the arguments presented here suggests that similar connections between the problems of static and dynamic stability are likely to be wide - spread .further , we believe that additional insight obtained by studying both stability problems is likely to be well worth the minimal additional effort required .gmk acknowledges financial support from the european community project ultra-1d ( nmp4-ct-2003 - 505457 ) .
|
we examine the static and dynamic stability of the solutions of the gross - pitaevskii equation and demonstrate the intimate connection between them . all salient features related to dynamic stability are reflected systematically in static properties . we find , for example , the obvious result that static stability always implies dynamic stability and present a simple explanation of the fact that dynamic stability can exist even in the presence of static instability .
|
global constraints allow users to specify patterns that commonly occur in problems .one of the oldest and most useful is the constraint .this ensures that a set of variables are pairwise different .global constraints can often be decomposed into more primitive constraints .for example , the constraint can be decomposed into a clique of binary inequalities .however , such decompositions usually do not provide a global view and are thus not able to achieve levels of local consistency , such as bound and domain consistency .considerable effort has therefore been invested in developing efficient propagation algorithms to reason globally about such constraints .for instance , several different propagation algorithms have been developed for the constraint . in this paper , we show that several important global constraints including can be decomposed into simple arithmetic constraints whilst still providing a global view since bound consistency can be achieved .there are many reasons why such decompositions are interesting .first , it is very surprising that complex propagation algorithms can be simulated by simple decompositions . in many cases ,we show that reasoning with the decompositions is of similar complexity to existing monolithic propagation algorithms .second , these decompositions can be easily added to a new solver . for example , we report experiments here using these decompositions in a state of the art pseudo - boolean solver .we could just as easily use them in an ilp solver .third , introduced variables in these decompositions give access to the state of the propagator .sharing of such variables between decompositions can increase propagation .fourth , these decomposition provide a fresh perspective to propagating global constraints that may be useful .for instance , our decompositions of the constraint suggest learning nogoods based on small hall intervals .a constraint satisfaction problem ( csp ) consists of a set of variables , each with a finite domain of values , and a set of constraints specifying allowed combinations of values for some subset of variables .we use capitals for variables and lower case for values .we write for the domain of possible values for , for the smallest value in , for the greatest , and for the interval ] constraint ensures that for any .we will assume values range over 1 to .constraint solvers typically use backtracking search to explore the space of partial assignments . after each assignment ,propagation algorithms prune the search space by enforcing local consistency properties like domain or bound consistency .a constraint is _ domain consistent _( _ dc _ ) iff when a variable is assigned any of the values in its domain , there exist compatible values in the domains of all the other variables of the constraint .such an assignment is called a _ support_.a constraint is _ bound consistent _( _ bc _ ) iff when a variable is assigned the minimum or maximum value in its domain , there exist compatible values between the minimum and maximum domain value for all the other variables .such an assignment is called a _ bound support_. 
finally , between domain and bound consistency is range consistency .a constraint is _ range consistent _( _ rc _ ) iff when a variable is assigned any value in its domain , there exists a bound support .constraint solvers usually enforce local consistency after each assignment down any branch in the search tree .for this reason , it is meaningful to compute the total amortised cost of enforcing a local consistency down an entire branch of the search tree so as to capture the incremental cost of propagation .we will compute complexities in this way .the constraint is one of the most useful global constraints available to the constraint programmer .for instance , it can be used to specify that activities sharing the same resource take place at different times . a central concept in propagating the constraintis the notion of a _ hall interval_. this is an interval of domain values which completely contains the domains of variables . ] . in any bound support, the variables whose domains are contained within the hall interval consume all the values in the hall interval , whilst any other variables must find their support outside the hall interval .consider an constraint over the following variables and values : ] from the domains of all the other variables .this leaves with a domain containing values 2,3,4 . ] from the domains of and .this leaves the following range consistent domains : enforcing bound consistency on the same problem does not create holes in domains .that is , it would leave with the values 2,3,4,5 . to identify and prune such hall intervals from the domains of other variables ,leconte has proposed a rc propagator for the constraint that runs in time .we now propose a simple decomposition of the constraint which permits us to enforce rc .the decomposition ensures that no interval can contain more variables than its size .we introduce new 0/1 variables , to represent whether takes a value in the interval ] , ] , ] ) .first take the interval ] , ( [ eqn::firstrange ] ) implies .now from ( [ eqn::lastrange ] ) , .that is , at most one variable can take a value within this interval .this means that . using ( [ eqn::firstrange ] ) and , we get ] , this leaves ] . from ( [ eqn::firstrange ] ) , .now from ( [ eqn::lastrange ] ) , .that is , at most 2 variables can take a value within this interval .this means that . using ( [ eqn::firstrange ] )we get ] .since ] , this leaves and . local reasoning about the decompositionhas thus made the original constraint range consistent .we will prove that enforcing dc on the decomposition enforces rc on the original constraint .we find it surprising that a simple decomposition like this can simulate a complex propagation algorithm like leconte s .in addition , the overall complexity of reasoning with the decomposition is similar to leconte s propagator .[ theorem::range ] enforcing dc on constraints ( [ eqn::secondrange ] ) and ( [ eqn::lastrange ] ) enforces rc on the corresponding constraint in down any branch of the search tree .* proof : * provides a necessary and sufficient condition for rc of the constraint : every hall interval should be removed from the domain of variables whose domains are not fully contained within that hall interval .let ] .constraint ( [ eqn::secondrange ] ) fixes for all .the inequality ( [ eqn::lastrange ] ) with and becomes tight fixing for all .constraint ( [ eqn::secondrange ] ) for , , and removes the interval ] . 
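To make the shape and size of this decomposition concrete, the short script below enumerates the interval cardinality constraints (one per interval [l,u], stating that at most u-l+1 of the variables may take a value inside [l,u]) for a tiny instance. The channelling constraints that tie each 0/1 variable A[i][l][u] to the bounds of X_i are only indicated in a comment, and every identifier here is ours.

```python
from itertools import combinations_with_replacement

def alldifferent_interval_constraints(n, d):
    """Interval-counting part of the AllDifferent decomposition (sketch).

    A 0/1 variable A[i][l][u] stands for 'X_i takes a value in [l, u]'; the
    channelling constraints enforcing  A[i][l][u] = 1  <=>  l <= X_i <= u  are
    not spelled out here.  For every interval [l, u] we emit
        sum_i A[i][l][u]  <=  u - l + 1,
    the constraint that stops more than u - l + 1 variables from crowding into
    a (potential) Hall interval.
    """
    constraints = []
    for l, u in combinations_with_replacement(range(1, d + 1), 2):   # all l <= u
        lhs = " + ".join(f"A[{i}][{l}][{u}]" for i in range(1, n + 1))
        constraints.append(f"{lhs} <= {u - l + 1}")
    return constraints

# A tiny 3-variable, 3-value instance.
for c in alldifferent_interval_constraints(3, 3):
    print(c)
```

For n variables over d values there are d(d+1)/2 intervals, hence one cardinality constraint of n Boolean terms per interval.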
only the bounds that do not have a bound support are shrunk .the complexity reduces as ( [ eqn::secondbc ] ) appears times and is woken times , whilst ( [ eqn::thirdbc ] ) appears times and is woken just time .a special case of is when we have the same number of values as variables , and the values are ordered consecutively .a decomposition of just needs to replace ( [ eqn::lastrange ] ) with the following equality where ( as before ) , and : this can increase propagation. in some cases , dc on constraints ( [ eqn::firstrange ] ) and ( [ eqn::lastrangeequality ] ) will prune values that a rc propagator for would miss .[ permutation - example ] consider a constraint over the following variables and values : these domains are range consistent .however , take the interval ] .this ensures that the value occurs between and times in to .the constraint is useful in resource allocation problems where values represent resources .for instance , in the car sequencing problem ( prob001 at csplib.org ) , we can post a constraint to ensure that the correct number of cars of each type is put on the assembly line .we can decompose in a similar way to but with an additional integer variables , to represent the number of variables using values in each interval ] and .we then post the following constraints for , , : \label{eqn::firstgcc } \\n_{lu } & = & \sum_{i=1}^n a_{ilu } \label{eqn::bigagcc}\\ n_{1u } & = & n_{1k } + n_{(k+1)u } \label{eqn::gcctriangle } \label{eqn::lastgcc } \end{aligned}\ ] ] consider a constraint with the following variables and upper and lower bounds on the occurrences of values : enforcing rc removes 1 and 3 from and and leaves the other domains unchanged .we can derive this from our decomposition . from the lower and upper bounds on the number of occurrences of the values, we have ] and we have , n_{13}\in [ 2,15] ] . by ( [ eqn::firstgcc ] ) , . from ( [ eqn::bigagcc ] ) , ] ( i.e. , upper bound decreased ) because and ] and from that ] and to ] . by ( [ eqn::firstgcc ] ) , , so by ( [ eqn::bigagcc ] ) , and . by ( [ eqn::firstgcc ] ) , this removes 1 and 3 from . local reasoning about the decompositionhas made the original constraint range consistent .we next show that enforcing dc on constraint ( [ eqn::firstgcc ] ) and bc on constraints ( [ eqn::bigagcc ] ) and ( [ eqn::lastgcc ] ) enforces rc on the constraint .[ thm : gcc - bc ] enforcing dc on constraint ( [ eqn::firstgcc ] ) and bc on constraints ( [ eqn::bigagcc ] ) and ( [ eqn::lastgcc ] ) achieves rc on the corresponding constraint in time down any branch of the search tree .* proof : * we use for the number of variables whose range intersects the set of values , and for the number of variables whose range is a subset of .we first show that if rc fails on the , dc on ( [ eqn::firstgcc ] ) and bc on ( [ eqn::bigagcc ] ) and ( [ eqn::lastgcc ] ) will fail .we derive from ( * ? ? ?* lemmas 1 and 2 ) that rc fails on a if and only if there exists a set of values such that or such that .suppose first a set such that . the fact that domains are considered as intervals implies that either includes more variable domains than the sum of the upper bounds ( like ) , or the union of the that are included in lets a hole of unused values in , which implies that there exists an interval \subset v ] .so , in any case , there exists an interval ] . 
by ( [ eqn::firstgcc ] )we have } ] .so bc will fail on .suppose now that a set is such that .the total number of values taken by variables being equal to , the number of variables with not intersecting is greater than , that is , } + s^{}_{[v_1 + 1,v_2 - 1 ] } + \ldots + s^{}_{[v_{k}+1,d ] } > n-\sum_{v_i\in v}l_{v_i} ] .so , . the initial domains of variables also tell us that for every in , .thus , . successively applying bc on , then on , and so on until will successively increase the minimum of these variables and will lead to a failure on .we now show that when dc on ( [ eqn::firstgcc ] ) and bc on ( [ eqn::bigagcc ] ) and ( [ eqn::lastgcc ] ) do not fail , it prunes all values that are pruned when enforcing rc on the constraint .consider a value for some such that does not have any bound support .we derive from ( * ? ? ?* lemmas 1 and 6 ) that a value for a variable does not have a bound support on if and only if there exists a set of values such that either ( i ) , and is not included in , or ( ii ) , and intersects . in case ( i ), contains and the values it contains will be taken by too many variables if is in it . in case ( ii ) , does not contain and its values will be taken by not enough variables if is not in it .consider case ( i ) : since dc did not fail on ( [ eqn::firstgcc ] ) , by a similar reasoning as above for detecting failure , we derive that is composed of intervals ] .consider the interval ] , which is exactly the number of variables with range included in ] by assumption .consider now case ( ii ) : is such that .the total number of values taken by the variables being equal to , the number of variables with not intersecting is equal to , that is } + s^{}_{[v_1 + 1,v_2 - 1 ] } + \ldots + s^{}_{[v_{k+1},d ] } = n-\sum_{v_i\in v}l_{v_i} ] .so , . the initial domains of variables also tell us that for every in , .thus , . successively applying bc on , then on , and so on until will increase all and , to the sum of the minimum values of the variables in the right side of each constraint so that . then , because , bc on will decrease the maximum value of and to their minimum value , bc on will decrease the maximum value of and to their minimum value , and so on until all are forced to the singleton } ] because that interval is saturated by variables in } ] that contains .furthermore , intersects , so it is not included in ] .the complexity reduces to as bc on ( [ eqn::secondbc ] ) and ( [ eqn::thirdbc ] ) is in ( see theorem [ theo : alldiffbc ] ) and bc on ( [ eqn::bigagcc ] ) and ( [ eqn::lastgcc ] ) is in ( see theorem [ thm : gcc - bc ] ) .the best known algorithm for bc on runs in time at each call and can be awaken times down a branch .this gives a total of , which is greater than the here when .our decomposition is also interesting because , as we show in the next section , we can use it to combine together propagators .many other global constraints that count variables or values can be decomposed in a similar way .for example , the global constraint , [ y_1 , \ldots , y_n]) ] and ,[o_1,\ldots , o_m]) ] for then both ,[o_1,o_2,o_3,o_4,o_5]) ] are bc .however , enforcing bc on the decomposition of , [ y_1 , y_2]) ] .the overall consistency achieved is therefore between bc and rc .we denote this encoding . to explore the impact of small hall intervals , we also tried , a pb encoding with only those constraints ( [ eqn::lastrange ] ) for which .this detects hall intervals of size at most . 
finally , we decomposed into a clique of binary inequalities , and used a direct encoding to convert this into sat ( denoted ) .[ [ pigeon - hole - problems . ] ] pigeon hole problems .+ + + + + + + + + + + + + + + + + + + + + table [ t : t1 ] gives results on pigeon hole problems ( php ) with pigeons and holes .our decomposition is both faster and gives a smaller search tree compared to the decomposition . on such problems , detecting large hall intervals is essential . .[t: t1]php problems . is time and is the number of backtracks to solve the problem .[ cols="^ , > , < , > , < , > , < , > , < , > , < , > , < " , ]the constraint first appeared in the alice constraint programming language .regin proposed a dc propagator that runs in time .leconte gave a rc propagator based on hall intervals that runs in time .puget then developed a bc propagator also based on hall intervals that runs in time .this was later improved by melhorn and thiel and then lopez - ortiz _ et al ._ . the global cardinality constraint ,was introduced in the charme language .regin proposed a dc propagator based on network flow that runs in time .katriel and thiel proposed a bc propagator for the constraint ._ proved that enforcing dc on the constraint is np - hard .they also improved the time complexity to enforce dc and gave the first propagator for enforcing rc on .many decompositions have been given for a wide range of global constraint .however , decomposition in general tends to hinder propagation .for instance , shows that the decomposition of constraints into binary inequalities hinders propagation . on the other hand , there are global constraints where decompositions have been given that do not hinder propagation .for example , beldiceanu _ et al ._ identify conditions under which global constraints specified as automata can be decomposed into signature and transition constraints without hindering propagation . as a second example, many global constraints can be decomposed using and which can themselves often be propagated effectively using simple decompositions . as a third example , decompositions of the and constraints have been given that do not hinder propagation .as a fourth example , decompositions of the constraint have been shown to be effective .finally , the constraint can be decomposed into ternary constraints without hindering propagation .we have shown that some common global constraints like and can be decomposed into simple arithmetic constraints whilst still maintaining a global view that achieves range or bound consistency .these decompositions are interesting for a number of reasons .first , we can easily incorporate them into other solvers .second , the decompositions provide other constraints with access to the state of the propagator .third , these decompositions provide a fresh perspective on propagation of global constraints .for instance , our results suggest that it may pay to focus propagation and nogood learning on small hall intervals .finally , these decompositions raise an important question .are there propagation algorithms that can not be efficiently simulated using decompositions ? in , we use circuit complexity to argue that a domain consistency propagator for the constraint can not be simulated using a polynomial sized decomposition .= -1pt n. beldiceanu , i. katriel , and s. thiel . filtering algorithms for the same constraint . in _1st int . conf . on integration of ai and or techniques in cp _ , 6579 , 2004 . n. beldiceanu , i. katriel , and s. 
thiel .reformulation of global constraints based on constraints checkers filtering algorithms for the same constraint . , 10(4 ) : 339362 , 2005 . c. bessiere , e. hebrard , b. hnich , z. kiziltan and t. walsh .the range constraint : algorithms and implementation . in _conf . on integration of ai and or techniques in cp ( cp - ai - or ) _ , 5973 , 2006 .a. oplobedu , j. marcovitch , and y. tourbier .: un langageindustriel de programmation par contraintes , illustre par une application chez renault . in _ 9th int .workshop on expert systems and their applications _ , 1989 .
|
we show that some common and important global constraints like and can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency , and in some cases even greater pruning . these decompositions can be easily added to new solvers . they also provide other constraints with access to the state of the propagator by sharing of variables . such sharing can be used to improve propagation between constraints . we report experiments with our decomposition in a pseudo - boolean solver .
|
adaptive beamforming techniques are widely used in numerous applications such as radar , wireless communications , and sonar - , to detect or improve the reception of a desired signal while suppressing interference at the output of a sensor array . currently , the beamformers designed according to the constrained minimum variance ( cmv ) and the constrained constant modulus ( ccm ) criteria are among the most used criteria due to their simplicity and effectiveness .the cmv criterion aims to minimize the beamformer output power while maintaining the array response on the direction of the desired signal .the ccm criterion is a positive measure ( chapter 6 in ) of the deviation of the beamformer output from a constant modulus ( cm ) condition subject to a constraint on the array response of the desired signal .compared with the cmv , the ccm - based beamformers exhibit superior performance in many severe scenarios ( e.g. , steering vector mismatch ) since the positive measure provides more information for parameter estimation with constant modulus signals . for the design of adaptive beamformers , numerous adaptive filtering algorithmshave been developed using constrained optimization techniques stochastic gradient ( sg ) and recursive least squares ( rls ) , are popular methods with different tradeoffs between performance and complexity .a major drawback is that they require a large number of samples to reach the steady - state when the array size is large . in dynamic scenarios , filters with many elementsusually provide a poor performance in tracking signals embedded in interference and noise . the multistage wiener filter ( mswf ) provides a way out of this dilemma .the mswf employs the minimum mean squared error ( mmse ) criterion and its extended versions with the cmv and the ccm criteria are reported in , .another cost - effective technique is the auxiliary vector filtering ( avf ) algorithm . in this scheme ,an auxiliary vector is calculated by maximizing the cross correlation between the outputs of the reference vector filter and the previously auxiliary vector filters .the weight vector is obtained by subtracting the scaling auxiliary vector from the reference vector . in ,the avf algorithm iteratively generates a sequence of filters that converge to the cmv filter with a small number of samples .its application in adaptive beamforming has been reported in . 
motivated by the fact that the ccm - based beamformers outperform the cmv ones for the cm signals, we propose an avf algorithm based on the ccm design for robust adaptive beamforming .the beamformer structure decomposes the adaptive filter into a constrained ( reference vector filters ) and an unconstrained components ( auxiliary vector filters ) .the constrained component is initialized with the array response of the desired signal to start the iteration and to ensure the ccm constraint , and the auxiliary vector in the unconstrained component can be iterated to meet the cm criterion .the weight vector is computed by means of suppressing the scaling unconstrained component from the constrained part .the main difference from the existing avf technique is that , in the proposed ccm - based algorithm , the auxiliary vector and the scalar factor depend on each other and are jointly calculated according to the cm criterion ( subject to different constraints ) .the proposed method provides an iterative exchange of information between the auxiliary vector and the scalar factor and also exploits the information about the cm signals , which leads to an improved performance .simulations exhibit the robustness of the proposed method in typical scenarios including array mismatches .the rest of this paper is organized as follows : we outline a system model and the problem statement in section 2 .the proposed scheme is introduced and the ccm - avf algorithm is developed in section 3 .simulation results are provided and discussed in section 4 , and conclusions are drawn in section 5 .consider narrowband signals that impinge on a uniform linear array ( ula ) of ( ) sensor elements .the sources are assumed to be in the far field with directions of arrival ( doas ) , , .the received vector can be modeled as where ^{t}\in\mathbb{r}^{q \times 1} ] comprises the signal steering vectors , ^{t}\in\mathbb{c}^{m \times 1} ] is the complex weight vector of the beamformer , and stands for hermitian transpose . with the signals introduced in ( [ 1 ] ) and ( [ 2 ] ), we can present the ccm beamformer design by minimizing the following cost function ^{2}\big\},~~ \textrm{subject~to}~~{\boldsymbol w}^{h}(i){\boldsymbol a}(\theta_{0})=1,\ ] ] where is the direction of the signal of interest ( soi ) and denotes the corresponding steering vector .the cost function is the expected deviation of the squared modulus of the array output to a constant , say .the constraint is set to maintain the power of the soi and to ensure the convexity of the cost function .the weight expression obtained from ( [ 3 ] ) is \boldsymbol a(\theta_0)}{\boldsymbol a^h(\theta_0)\boldsymbol r^{-1}(i)\boldsymbol a(\theta_0)}\big\},\ ] ] where \in\mathbb c^{m\times m} ] , and denotes complex conjugate .note that ( [ 4 ] ) is a function of previous values of ( since ) and thus must be initialized to start the iteration .we keep the time index in and for the same reason .the calculation of the weight vector is costly due to the matrix inversion .the sg or rls type algorithms can be employed to reduce the computational load but suffer from a poor performance when the dimension is large .in this section , we introduce a ccm - based adaptive filtering structure for beamforming and develop an efficient ccm - avf algorithm for robust adaptive beamforming .we define the cost function for the beamformer design , which is ^ 2\big\},\ ] ] where can be viewed as a new received vector to the beamformer and is set in accordance with ( [ 3 ] ) . 
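Before turning to the proposed filtering structure, the received-data model above is easy to mock up for experimentation. The sketch below generates ULA snapshots for unit-power BPSK sources in spatially and temporally white Gaussian noise with half-wavelength element spacing, matching the simulation setup described later; the phase-sign convention of the steering vector and all numeric values are illustrative assumptions, and the final line is only the conventional delay-and-sum output shown as a baseline, not the proposed beamformer.

```python
import numpy as np

def steering_vector(theta_deg, M, spacing=0.5):
    """ULA steering vector (half-wavelength spacing) for a plane wave from theta_deg."""
    k = np.arange(M)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def ula_snapshots(doas_deg, M, N, snr_db=15.0, seed=0):
    """N array snapshots of unit-power BPSK sources plus white Gaussian noise."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([steering_vector(t, M) for t in doas_deg])
    S = rng.choice([-1.0, 1.0], size=(len(doas_deg), N))            # BPSK symbols
    sigma = 10.0 ** (-snr_db / 20.0)                                # noise std for unit-power sources
    noise = sigma * (rng.standard_normal((M, N)) +
                     1j * rng.standard_normal((M, N))) / np.sqrt(2.0)
    return A @ S + noise, A

# Example: desired user at broadside plus two interferers, 16-element ULA.
X, A = ula_snapshots(doas_deg=[0.0, 30.0, -45.0], M=16, N=500)
a0 = steering_vector(0.0, 16)
y_conv = (a0.conj() @ X) / 16      # conventional (delay-and-sum) output, baseline only
```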
to obtain the weight solution for the time index , we start the iteration by initializing the weight vector then , we subtract a scaling auxiliary vector ( unconstrained component ) that is orthogonal to from ( constrained component ) and obtain where with , and is a scalar factor to control the weight of .the auxiliary vector is supposed to capture the signal components in that are not from the direction .the aim of ( [ 7 ] ) is to suppress the disturbance of the unconstrained component while maintaining the contribution of the soi .the cost function in ( [ 5 ] ) appears in unconstrained form since the constraint has been incorporated in the weight adaptation . from ( [ 7 ] ) , it is necessary to determine the auxiliary vector and the scalar factor for the calculation of .assuming is known , can be obtained by minimizing ^ 2\} ] and \in\mathbb c^{m\times 1} ] and .we keep the time index in since it is a function of , which must be initialized to provide an estimation about and to start the iteration .the expression of is utilized to enforce the constraints and solve for and .indeed , we have where denotes the euclidean norm .substitution of and back in ( [ 10 ] ) leads to that satisfies the constraints and minimizes ( with ) the squared deviation of from the cm condition , yielding so far , we have detailed the first iteration of the proposed ccm - avf algorithm for time index , i.e. , in ( [ 6 ] ) , in ( [ 7 ] ) , in ( [ 8 ] ) , and in ( [ 13 ] ) , respectively . in this procedure, can be viewed as a new received vector that is processed by the adaptive filter ( first estimation of ) to generate the output , in which , is determined by minimizing the mean squared error between the output and the desired cm condition .this principle is suitable to the following iterations with .now , we consider the iterations one step further and express the adaptive filter as where and will be calculated based on the previously identified and . ( ) is chosen to minimize the cost function ^ 2\}$ ] under the assumption that is known beforehand .thus , we have the auxiliary vector is calculated by the minimization of the cost function subject to the constraints and + , which is the above iterative procedures are taken place at time index to generate a sequence of filters with being the iteration number .generally , there exists a maximum ( or suitable ) value of , i.e. , , that is determined by a certain rule to stop iterations and achieve satisfactory performance .one simple rule , which is adopted in the proposed ccm - avf algorithm , is to terminate the iteration if is achieved .alternative and more complicated selection rules can be found in . until now, the weight solution at time index can be given by .the proposed ccm - avf algorithm for the design of the ccm beamformer is summarized in table [ tab : avf ] ..proposed ccm - avf algorithm [ cols= " < " , ] there are several points we need to interpret in table [ tab : avf ] .first of all , initialization is important to the realization of the proposed method . is set to estimate and so , , and . is for the activation of the weight adaptation .note that the calculation of the scalar factor , e.g. , in ( [ 8 ] ) , is a function of and the auxiliary vector obtained from ( [ 13 ] ) depends on .it is necessary to initialize one of these quantities to start the iteration .we usually set a small positive scalar value for simplicity . 
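because the closed-form expressions for the scalar factor and the auxiliary vector in (8), (13), (15) and (16) are partly lost in this copy, the following is only a structural sketch of the iteration described above: the constrained component is initialized so that w^h a(theta_0) = 1, each auxiliary direction is projected orthogonal to a(theta_0) so that the constraint is preserved, the constant-modulus error of the batch drives the update, and the loop stops once the auxiliary vector (numerically) vanishes. a fixed step size and a projected sample gradient stand in for the paper's jointly computed scalar factor and auxiliary vector.

```python
import numpy as np

def ccm_avf_sketch(x, a0, k_max=5, mu=0.01, tol=1e-8):
    # x: m x n snapshots, a0: soi steering vector, constraint w^H a0 = 1
    m, n = x.shape
    proj = np.eye(m) - np.outer(a0, a0.conj()) / np.real(a0.conj() @ a0)
    w = a0 / np.real(a0.conj() @ a0)         # constrained component: w^H a0 = 1
    for _ in range(k_max):
        y = w.conj() @ x                     # beamformer outputs for the batch
        cm_err = (np.abs(y) ** 2 - 1.0) * y  # constant-modulus error terms
        grad = x @ cm_err.conj() / n         # sample gradient of the cm cost
        g = proj @ grad                      # auxiliary vector, orthogonal to a0
        if np.linalg.norm(g) < tol:          # stopping rule: auxiliary vector vanishes
            break
        w = w - mu * g                       # subtract the scaled unconstrained part
    return w

# example usage with the toy scenario of the previous sketch:
# w = ccm_avf_sketch(x, a0)
```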
under this condition, the subscript of the scalar factor for the calculation of should be replaced by instead of , as shown in table [ tab : avf ] .second , the expected quantities , , and are not available in practice .we use a sample - average approach to estimate them , i.e. , where , , and are substituted by their estimates in the iterative procedure to generate . to improve the estimation accuracy , the quantities in ( [ 17 ] ) can be refreshed or further regularized during the iterations .specifically , we use in the iteration step instead of in the initialization to generate , and related and , which are employed to update the estimates , , and . compared with , is more efficient to evaluate the desired signal .thus , the refreshment of the estimates based on the current is valuable to calculate the subsequent scalar factor and the auxiliary vector .third , we drop the normalization of the auxiliary vector - .note that the calculated auxiliary vectors are constrained to be orthogonal to .the orthogonality among the auxiliary vectors is not imposed .actually , the successive auxiliary vectors do satisfy the orthogonality as verifies in our numerical results .we omit the analysis about this characteristic here considering the paper length .the proposed ccm - avf beamformer efficiently measures the expected deviation of the beamformer output from the cm condition and provide useful information for the proposed algorithm for dealing with parameter estimation in many severe scenarios including low signal - to - noise ratio ( snr ) or steering vector mismatch .the proposed ccm - avf algorithm employs an iterative procedure to adjust the weight vector for each time instant .the matrix inversion appeared in ( [ 4 ] ) is avoided and thus the computational cost is limited . since the scalar factor and the auxiliary vector depend on each other , the proposed algorithm provides an iterative exchange of information between them , which are jointly employed to update the weight vector .this scheme leads to an improved convergence and the steady - state performance that will be shown in the simulations .simulations are performed for a ula containing sensor elements with half - wavelength interelement spacing .we compare the proposed algorithm ( ccm - avf ) with the sg , rls , mswf , and avf methods . with respect to each method, we consider the cmv and the ccm criteria for beamforming .a total of runs are used to get the curves . in all experiments ,bpsk sources powers ( desired user and interferers ) are and the input snr db with spatially and temporally white gaussian noise .[ fig : avf_sve ] includes two experiments .there are users , including one desired user in the system .the scalar factor is and the iteration number is . in fig.[fig : avf_sve ] ( a ) , the exact doa of the soi is known at the receiver .all output sinr values increase to the steady - state as the increase of the snapshots ( time index ) .the rls - type algorithms enjoy faster convergence and better steady - state performance than the sg - type methods .the proposed ccm - avf algorithm converges rapidly and reaches the steady - state with superior performance .the ccm - based mswf technique with the rls implementation has comparative fast convergence rate but the steady - state performance is not better than the proposed method . 
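as a small companion to the sample-average estimation in (17) and to the performance metric used in the simulations, the two helpers below sketch a running estimate of e[x x^h] and the output sinr of a beamformer; the full set of quantities averaged in (17) also involves cross-terms with the beamformer output that are not reproduced here, so this is only indicative.

```python
import numpy as np

def update_sample_average(r_hat, x_i, i):
    # running sample-average estimate of E[x x^H] after snapshot i (i >= 1);
    # initialize r_hat with np.outer(x_1, x_1.conj()) at i = 1
    return ((i - 1) * r_hat + np.outer(x_i, x_i.conj())) / i

def output_sinr_db(w, a0, sigma_s2, r_in):
    # output sinr = sigma_s^2 |w^H a0|^2 / (w^H R_{i+n} w), in dB, where r_in is
    # the interference-plus-noise covariance
    num = sigma_s2 * np.abs(w.conj() @ a0) ** 2
    den = np.real(w.conj() @ r_in @ w)
    return 10.0 * np.log10(num / den)
```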
in fig .[ fig : avf_sve ] ( b ) , we set the doa of the soi estimated by the receiver to be away from the actual direction .it indicates that the mismatch induces performance degradation to all the analyzed algorithms .the ccm - based methods are more robust to this scenario than the cmv - based ones .the proposed ccm - avf algorithm has faster convergence and better steady - state performance than the other analyzed methods . in fig .[ fig : avf_rank ] , we keep the same scenario as that in fig . [fig : avf_sve ] ( a ) and check the iteration number for the existing and proposed methods .the number of snapshots is fixed to .the most adequate iteration number for the proposed ccm - avf algorithm is , which is comparatively lower than other analyzed algorithms , but reach the preferable performance .we also checked that this value is rather insensitive to the number of users in the system , to the number of sensor elements , and work efficiently for the studied scenarios .we developed an avf algorithm based on the ccm design for robust adaptive beamforming .the algorithm provides a positive measure of the expected deviation of the beamformer output from the cm condition and thus is robust against the severe scenarios .the weight solution is iterated by jointly calculating the auxiliary vector and the scalar factor , which iteratively exchange information between each other and lead to an improved performance over prior art .the selection of the iteration number may be more efficient and adaptive with the change of the system ( e.g. , the number of users change ) if we employ other techniques .we will consider further improvements to the proposed ccm - avf algorithm in the near future .r. c. de lamare and r. sampaio - neto , low - complexity variable step - size mechanisms for stochastic gradient algorithms in minimum variance cdma receivers " , _ ieee trans . signal processing _ , vol .2302 - 2317 , june 2006 .r. c. de lamare and r. sampaio - neto , blind adaptive code - constrained constant modulus algorithms for cdma interference suppression in multipath channels , " _ ieee communications letters _ , vol .334 - 336 , apr . 2005 .r. c. de lamare , m. haardt , and r. sampaio - neto , blind adaptive constrained reduced - rank parameter estimation based on constant modulus design for cdma interference suppression , " _ ieee trans . signal proc .56 , pp . 2470 - 2482 , jun . 2008 .r. fa , r. c. de lamare , and l. wang , reduced - rank stap schemes for airborne radar based on switched joint interpolation , decimation and filtering algorithm , " _ ieee transactions on signal processing _ , vol.58 , no.8 , aug .2010 , pp.4182 - 4194 .
|
this paper proposes an auxiliary vector filtering (avf) algorithm based on a constrained constant modulus (ccm) design for robust adaptive beamforming. this scheme provides an efficient way to deal with filters with a large number of elements. the proposed beamformer decomposes the adaptive filter into a constrained component (the reference vector filter) and an unconstrained component (the auxiliary vector filters). the weight vector is iterated by subtracting the scaled auxiliary vector from the reference vector. the scalar factor and the auxiliary vector depend on each other and are jointly calculated according to the ccm criterion. the proposed robust avf algorithm provides an iterative exchange of information between the scalar factor and the auxiliary vector, and thus leads to fast convergence and improved steady-state performance over existing techniques. simulations are performed to show the performance and robustness of the proposed scheme and algorithm in several scenarios. keywords: beamforming, antenna arrays, constrained constant modulus, auxiliary vector.
|
[ cols= " < , < , < " , ] as it can be seen in figure [ fig : fx_plot]a , function is monotonous decreasing , with one solution ( intersection with the -axis ) at as asserted by theorem [ prop : def1_theorem_alt ] .another example is represented in figure [ fig : fx_plot]b , for a network with reaction rates as in table [ tab : table_1 ] , except for constants and which now take the value .the domain of the function is now constrained to the interval , with and .vector for this parameter set becomes .for this case the function crosses the -axis at .{fx_plot } & \includegraphics[scale=0.33]{fx_plot2 } \end{array} ] , where for and takes , respectively , the values : the set of equilibrium concentrations is then computed by solving . in particular , for this network we can express the first chemical species in terms of species , what leads to a straight line in the -space , which intersects the interior of any positive stoichiometric compatibility class defined in ( [ eq : eqpolyhedron ] ) ( with ) in just one point .this contribution concentrates on the study of feasibility conditions to identify admissible equilibria for weakly reversible mass action law ( mal ) systems . to that purpose ,a flux - based form of the model equations describing the time evolution of the species concentration has been exploited , in combination with results from the theory of linear compartmental systems to develop a canonical representation of the equilibrium set .ingredients of such representation include the so - called family of solutions , with the corresponding positivity conditions , and the feasibility functions employed to characterize the set of feasible ( equilibrium ) solutions .one main result of this contribution is that the introduced feasibility functions are monotonously decreasing on their domain .this allows us to establish connections with classical results in crnt related to the existence and uniqueness of equilibria within positive stoichiometric compatibility classes .in particular , we employ monotonicity to identify regions in the set of possible reaction rate coefficients leading to complex balancing , and to conclude uniqueness of equilibria for a class of positive deficiency networks .it is our hope that the proposed results might support the understanding of the deficiency one theorem from a different point of view , with the possibility of an alternative proof .a number of examples of different complexity are employed to illustrate the notions presented and their relations .as the examples show , all components used for the characterization of equilibria , in particular the family of solutions and the feasibility functions , can be computed efficiently in an algorithmic way , even for large kinetic models .future work will be focused on the constructive application of these functions for the computational search or design of networks with unique equilibria .* acknowledgements * aaa acknowledges partial financial support by grants pie201230e042 and grant salvador de madariaga ( pr2011 - 0363 ) .gs acknowledges the support of the grants nf104706 from the hungarian scientific research fund , and kap-1.1 - 14/029 from pazmany peter catholic university. 10 a. a. alonso and g. szederkenyi . on the geometry of equilibrium solutions in kinetic systems obeying the mass action law . , 2012 .a. a. alonso and b. e. 
ydstie .stabilization of distributed systems using irreversible thermodynamics ., 37:17391755 , 2001 .a proof of the global attractor conjecture in the single linkage class case ., 71:14871508 , 2011 .r. aris .prolegomena to the rational analysis of systems of chemical reactions ., 19(2):8199 , 1965 .r. aris .prolegomena to the rational analysis of systems of chemical reactions ii .some addenda . , 27(5):356364 , 1968. k. j. arrow. a `` dynamic '' proof of the frobenius - perron theorem for metzler matrices .technical report 542 , the economic series , institute for mathematical studies and social sciences . technical reportn0 , 1989 .a. berman and r.j . plemmons . .siam , philadelphia , 1994 .b. boros .notes on the deficiency - one theorem : multiple linkage classes . , 235:110122 , 2012 .v. chellaboina , s.p .bhat , w.m . haddad and d.s .modeling and analysis of mass - action kinetics nonnegativity , realizability , reducibility , and semistability . , 29:6078 , 2009. b. l. clarke .theorems on chemical network stability . , 62:773775 , 1975 .b. l. clarke ., volume xliii of _ advances in chemical physics_. wiley , 1980 . c. conradi and d. flockerzi .multistationarity in mass action networks with applications to erk activation . , 65:107156 , 2012 . c. conradi , d. flockerzi , j. raisch and j. stelling .subnetwork analysis reveals dynamic features of complex ( bio)chemical networks ., 104:1917519180 , 2007 .g. craciun .toric differential inclusions and a proof of the global attractor conjecture . ,g. craciun , a. dickenstein , a. shiu and b. sturmfels .toric dynamical systems . , 44:15511565 , 2009 .g. craciun and m. feinberg .multiple equilibria in complex chemical reaction networks : i. the injectivity property . , 65(5):15261546 , 2005 .g. craciun and m. feinberg .multiple equilibria in complex chemical reaction networks : ii .the species - reaction graph . , 66(4):13211338 , 2006 .g. craciun and c. pantea .identifiability of chemical reaction networks ., 44:244259 , 2008 .p. rdi and j. tth . .manchester university press , manchester , 1989 .l. farina and s. rinaldi . .wiley , 2000 .m. feinberg .complex balancing in general kinetic systems ., 49:187194 , 1972 .m. feinberg .lectures of chemical reaction networks . technical report ,university of wisconsin , 1979 .m. feinberg .chemical reaction network structure and the stability of complex isothermal reactors ., 42:22292268 , 1987 .m. feinberg .the existence and uniqueness of steady states for a class of chemical reaction networks ., 132:311370 , 1995 .m. feinberg and f. horn .chemical mechanism structure and the coincidence of the stoichiometric and kinetic subspaces ., 66:8397 , 1977 .d. fife .which linear compartmental systems contain traps ?, 14:311315 , 1972 .golub and c.f . van loan . .johns hopkins university press , baltimore , 1996 .gorban , e.m .mirkes and g.s .thermodynamics in the limit of irreversible reactions ., 392:13181335 , 2013 .gorban and m. shahzad .the michaelis - menten - stueckelberg theorem ., 13:9661019 , 2011 .gorban and g.s .yablonsky . extended detailed balance for systems with irreversible reactions ., 63:53885399 , 2011 .w. m. haddad , v. chellaboina and q. hui . .princeton university press , 2010 . k. m. hangos .engineering model reduction and entropy - based lyapunov functions in chemical reaction kinetics ., 12(4):772797 , 2010 .b. hernndez - bermejo , v. fairn and l. brenig .algebraic recasting of nonlinear odes into universal formats .31:24152430 , 1998 . f. 
horn .necessary and sufficient conditions for complex balancing in chemical kinetics ., 49:172186 , 1972 . f. horn and r. jackson .general mass action kinetics . , 47:81116 , 1972 .m. d. johnston , d. siegel and g. szederkenyi .computing weakly reversible linearly conjugate chemical reaction networks with minimal deficiency ., 241:8898 , 2013 .n.g . van kampen . .elsevier , 2nd ed ., 1981 .kermack and a.g .mckendrick . a contribution to the mathematical theory of epidemics . , 115(772):700721 , 1927 .h. k. khalil . .prentice - hall , 1996 .the mathematical structure of chemical kinetics in homogeneous single - phase systems ., 38(5):317347 , 1970 .g. liptak , g. sederkenyi and k.m .kinetic feedback design for polynomial systems .41:5666 , 2016 . m. mincheva and g. cracium .multigraph conditions for multistability , oscillations and pattern formation in biochemical reaction networks ., 96(8):12811291 , 2008 .s. muller and g. regensburger .generalized mass action systems : complex balancing equilibria and sign vectors of the stoichiometric and kinetic - order subspaces . , 72:19261947 , 2012 . i. otero - muras , j.r .banga and a.a .alonso . exploring multiplicity conditions in enzymatic reaction networks ., 25(3):619631 , 2009 .i. otero - muras , j.r .banga and a.a .alonso . characterizing multistationarity regimes in biochemical reaction networks . , 7(7):e39194 , 2012 .m. perez - millan , a. dickenstein , a. shiu and c. conradi .chemical reaction systems with toric steady states . , 74:10271065 , 2012 .r. j. plemmons . nonsingular m - matrices . , 18:175188 , 1977 .n. samardzija , l. d. greller and e. wassermann .nonlinear chemical kinetic schemes derived from mechanical and electrical dynamical systems . , 90 ( 4):22962304 , 1989 . n. z. shapiro and l. s. shapley .mass action laws and the gibbs free energy function ., 13(2):353375 , 1965 .g. szederknyi and k. m. hangos .finding complex balanced and detailed balanced realizations of chemical reaction networks .49:11631179 , 2011 . a. van der schaft , s. rao and b. jayawardhana . on the mathematical structure of balanced chemical reaction networks governed by mass action kinetics ., 73(2):953973 , 2013 .a. i. volpert .differential equations on graphs ., 17:571582 , 1972 .y. b. zeldovich . ,chapter proof of the uniqueness of the solution of the equations of the law of mass action , pages 144147 .princeton university press , 2014 .[ lem : hq_result ] let be c - metzler and such that : with .let ( with ) be a vector with positive , zero and negative components satisfying : and then : and moreover : * proof : * multiplying both sides of ( [ eq : h1a ] ) by the scalar and subtracting the result from ( [ eq : ghp ] ) , we get : summing the first elements and reordering terms results in : since is c - metzler , according to definition [ def : c - metzler ] , for every , with , we have that : the first term at the right hand side of ( [ eq : the_equivalence ] ) is non - positive since , by construction , for , and the above summations are non - positive .the second term is non - positive since for every and , and .thus , relation ( [ eq : firstr ] ) follows , since is a nonnegative vector and is positive for , so the third term at the right hand side of ( [ eq : the_equivalence ] ) is also non - positive . in a similar way we prove ( [ eq : secondr ] ) . 
substituting for in , we get : summing the elements of from gives : because is c - metzler , we have that : thus , the first term at the right hand side of of is non - negative , since for .the second term in the expression is also non - negative , since the off - diagonal elements of are non - negative and for . finally , the last term in is non - negative due to the negativity of and the non - negativity of . strict inequalities ( [ eq : defsign_final ] ) can be proven in a straightforward manner from expressions ( [ eq : the_equivalence ] ) and ( [ eq : sumhpl ] ) , if the non - zero components of vector are within the first and last entries .this would be the case , since the last terms at the right hand side in both equations would be strictly negative ( with ) , and positive ( with ) , respectively . if the non - zero components are not within the first , nor within the last entries , the strict inequalities still hold . in order to prove this point, we express as : ,\ ] ] where .let the first components of vector to be zero .then , in ( [ eq : h_factor ] ) must necessarily have at least one positive element ( any non - zero element must be positive because is c - metzler ) .suppose , on the contrary , that is a zero matrix .then , by using ( [ eq : h1a ] ) we have that : which means that , and consequently , are not invertible , contradicting the fact that is c - metzler and therefore , non - singular . since at least one entry of is positive , the second term at the right hand side of ( [ eq : the_equivalence ] ) for must be strictly negative .a similar line of arguments can be employed if the last components of are zero , with matrix and , instead of and .now , we suppose that is a zero matrix , what combined with ( [ eq : h1a ] ) leads to : thus , and consequently , are not invertible , what is in contradiction with the fact that is c - metzler and therefore , non - singular . since at least one entry of must be positive , the second term at the right hand side of ( [ eq : sumhpl ] ) , for , must be strictly positive , completing the proof . [ lem : g_result ] let and consider the function defined as : where are the coordinates of the vector , with and as in lemma [ lem : hq_result ] .for every and , let also have that : then , for every .* proof : * first , we note that ( [ eq : gx ] ) can be re - written as : where implicitly , each is assumed to be a function of . from ( [ eq : putoordenmcged ] ) , we have that for every and , what implies that for every , and for every .thus , from lemma [ lem : hq_result ] , we have that : the signs , as well as inequalities ( [ eq : defsign_final ] ) , from lemma [ lem : hq_result ] , make the right hand side of the above expression strictly negative , what completes the proof .[ prop : defsigns_s_t ] under the conditions of lemma [ lem : hq_result ] , let the positive and the negative components of satisfy : then : for some and .* proof : * the line of arguments is similar to that employed in lemma [ lem : hq_result ] to prove ( [ eq : defsign_final ] ). 
if the non - zero components of vector are within the first and the last entries , it is straightforward to prove strict inequalities from expressions ( [ eq : the_equivalence ] ) and ( [ eq : sumhpl ] ) , for the last terms at the right hand side in both equations ( with and ) are strictly negative , and positive , respectively .if , on the other hand , the first and the last entries of are zero , then matrix , which can be expressed as in ( [ eq : h_factor ] ) with , must have for at least one positive element .otherwise , from ( [ eq : h1a ] ) , we would have that : what contradicts the hypothesis that is c - metzler and therefore , non - singular . since at least one entry of must be positive , and because of ( [ eq : seq_s_positive ] ) , ( for ) the second term at the right hand side of ( [ eq : the_equivalence ] ) , for , is strictly negative , what proves the first inequality in ( [ eq : definitesign_s_t ] ) .in order to prove the second inequality , we make use of a similar argument with in ( [ eq : h_factor ] ) , to show that must have at least one positive entry . from ( [ eq : seq_t_negative ] ) , we also have that ( for ) so the second term at the right hand side of ( [ eq : sumhpl ] ) , for , must be strictly positive .this proves the second inequality in ( [ eq : definitesign_s_t ] ) . [ lem : limits_sequences ] let us consider the following set of ordered parameters , and functions of the form , with being a given parameter within the ordered set .then , we have that : where , , and are constants .* proof : * the limits in ( [ eq : limits1 ] ) follow since increases and decreases monotonically in their respective domains , . in order to prove ( [ eq : limits2 ] ), we have that : which are ( negative ) constants , because and . using ( [ eq : limits1 ] ) , we then get ( [ eq : limits2 ] ) . in order to compute the limits in ( [ eq : limits3 ] ) , we have that : similarly : in proving ( [ eq : limits4 ] ) , we have that : thus , by the theorem of lhopital , we have that : the sake of completeness , here we summarize in the form of propositions , two fundamental results from crnt on uniqueness and stability .the complete set of arguments can be found in .[ lemma : convexity_sh ] let , with its domain , a convex function with continuous derivatives in , and be the gradient of .then , the following inequalities hold for every : * proof : * in order to prove the first part , choose any and construct a function as the difference between and its supporting hyperplane at .the supporting hyperplane is of the form , and . by construction ,the function is strictly positive , i.e. it is positive for all other than , and result ( i ) follows in a straightforward manner , since , what implies that . to prove the second part, we note that is itself a convex function since , so its hessian coincides with that of the convex function . by using the same supporting hyperplane argument, we construct the following strictly positive definite function around some : ^{t } ( x - x_2 ) \geq 0,\ ] ] where the inequality holds for any .in particular , it holds for , and therefore : ^{t } ( x_1 - x_2 ) \leq 0,\ ] ] which implies that ^{t } ( x_2 - x_1)$ ] , and proves ( ii ) . * proof : * in proving uniqueness , suppose that there are two elements : , that belong to the same stoichiometric compatibility class .then , we have that , what implies that is orthogonal to the stoichiometric subspace . 
because and are assumed to be in the same compatibility class , the vector must belong to the stoichiometric subspace , and the following relation hold : using the convex function , with gradient , and applying lemma [ lemma : convexity_sh ] ( condition ( ii ) ) , it follows that equality ( [ eq : lncsortcs ] ) holds if and only if , what proves that the set can have at most one element in each positive stoichiometric compatibility class . as pointed out in ,the question of existence ( i.e. that each ( positive ) stoichiometric compatibility class in fact meets ) is somewhat more difficult to answer than uniqueness .the complete argument can be found in ( proposition 4.13 ) . * proof : * first of all , let us make use of eqn ( [ eq : ak_by_fluxes_a_feinb ] ) to write the right hand side of system ( [ subeq : canonical_form_ak ] ) as a summation over , of functions : select some positive reference ( its associated vector is strictly positive ) and re - write the previous expression in the equivalent form : where . the inner product between and ( [ eq : sff_per_linkage ] ) results into the following scalar function : where . in order to get an upper bound for ( [ eq : function_g ] ) , we make use of lemma [ lemma : convexity_sh ] ( condition ( i ) ) , with the convex function , to obtain : for any scalars and .strict convexity of ensures that the equality holds only if .we also have that : combining ( [ eq : factor_exp ] ) with ( [ eq : ineq_exp ] ) , and substituting the resulting expression in ( [ eq : function_g ] ) , we get : if the reference corresponds with a complex balanced equilibrium , then for every , and so is the right hand side of ( [ eq : upper_bound_g ] ) .note that inequality is strict , in the sense that it holds whenever , for every .local asymptotic stability is proved by the standard lyapunov stability method ( see for instance ) with the following lyapunov function candidate , constructed as in the proof of lemma [ lemma : convexity_sh ] : with , being a convex function of the form : computing the derivative of along ( [ subeq : canonical_form_ak ] ) , and using ( [ eq : upper_bound_g ] ) , we get : the result then follows , since and , with equality only if .
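as a numerical companion to the stability argument above, the sketch below integrates the simplest weakly reversible mass-action network, a <-> b, whose equilibrium is complex balanced, and checks that the pseudo-helmholtz function v(x) = sum_j ( x_j ln(x_j/x_j^*) - x_j + x_j^* ) is non-increasing along the trajectory within its stoichiometric compatibility class. the explicit form of v and the rate constants are assumptions made for illustration, since the corresponding expressions are not legible in this copy.

```python
import numpy as np

# reversible reaction A <-> B with mass-action rates k1 (A -> B) and k2 (B -> A)
k1, k2 = 2.0, 1.0

def rhs(x):
    a, b = x
    v = k1 * a - k2 * b          # net reaction rate
    return np.array([-v, v])

def lyapunov(x, xeq):
    # assumed pseudo-Helmholtz function: sum_j (x_j ln(x_j/x*_j) - x_j + x*_j)
    return float(np.sum(x * np.log(x / xeq) - x + xeq))

x = np.array([1.5, 0.5])                        # initial condition, total mass = 2
total = x.sum()
xeq = np.array([k2, k1]) * total / (k1 + k2)    # equilibrium in the same class

dt, v_prev = 1e-3, lyapunov(x, xeq)
for _ in range(5000):
    x = x + dt * rhs(x)                         # explicit Euler step
    v = lyapunov(x, xeq)
    assert v <= v_prev + 1e-12                  # V is non-increasing along the trajectory
    v_prev = v
print("final state:", x, "equilibrium:", xeq)
```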
|
this paper studies the relations among system parameters, uniqueness, and stability of equilibria for kinetic systems given in the form of polynomial odes. such models are commonly used to describe the dynamics of nonnegative systems, with a wide range of application fields such as chemistry, systems biology, process modeling, or even transportation systems. using a flux-based description of kinetic models, a canonical representation of the set of all possible feasible equilibria is developed. the characterization is made in terms of strictly stable compartmental matrices to define the so-called family of solutions. feasibility is imposed by a set of constraints, which are linear on a log-transformed space of complexes and relate to the kernel of a matrix whose columns span the stoichiometric subspace. one particularly interesting representation of these constraints can be expressed in terms of a class of monotonously decreasing functions. this allows connections to be established with classical results in crnt that relate to the existence and uniqueness of equilibria along positive stoichiometric compatibility classes. in particular, monotonicity can be employed to identify regions in the set of possible reaction rate coefficients leading to complex balancing, and to conclude uniqueness of equilibria for a class of positive deficiency networks. the latter result might support constructing an alternative proof of the well-known deficiency one theorem. the developed notions and results are illustrated through examples. * keywords: * chemical reaction networks, kinetic systems, mass action law, network deficiency, feasible equilibrium, complex balanced equilibrium. this manuscript was published as: a. a. alonso and g. szederkenyi. uniqueness of feasible equilibria for mass action law (mal) kinetic systems. _ journal of process control _, 48: 4171, 2016. doi link: http://dx.doi.org/10.1016/j.jprocont.2016.10.002
|
rna plays a central role in molecular biology .in addition to transmitting genetic information from dna to proteins , rna molecules participate actively in a variety of cellular processes .examples are found in translation ( rrna , trna , and tmrna ) , editing of mrna , intracellular protein targeting , nuclear splicing of pre - mrna , and x - chromosome inactivation .the rna molecules involved in these processes do not code for proteins but act as functional products in their own right .in addition , rna molecules prepared _ in vitro _ can be selected to bind to specific molecules such as atp . in all these cases ,the information encoded in the sequence of nucleotide bases of each rna molecule determines its functional three - dimensional structure . the nucleotide sequence is a kind of genotype , _i.e. _ , hereditary information , while the folded three - dimensional structure represents phenotype , the physical characteristics on which natural selection operates .the mapping from genotype to phenotype bears on how biological systems evolve , and rna folding probably constitutes the simplest example of this mapping . since early lifeis believed to have been rna based , rna folding can provide us with important clues about early life and evolution .rna is a polynucleotide chain consisting of the four bases : a , u , g , and c. complementary base pairs ( a - u and g - c ) can stack to form `` stems '' which are helical segments similar to the double helix of dna .these helices , called secondary structures , are generally arranged in a three - dimensional tertiary structure , stabilized by the much weaker interactions between the helices .representations of secondary structures are shown in fig .1 . the energy contributions of secondary and tertiary structures are hierarchical , with secondary structures largely determining tertiary folding .secondary structure is frequently conserved in evolution , and structural homology has been used successfully to predict function . in this paper , we investigate the role of alphabet size in the statistical mechanics and selection of rna secondary structures .we find pronounced differences between two - letter and four or six - letter alphabets . for sequences constructed with two types of bases ,only a small fraction of sequences have thermodynamically stable ground - state structures ; these structures are also highly designable , _i.e. _ , have a large number of associated sequences .four and six - letter sequences are much more stable on average , but exhibit no strong correlation between designability and thermodynamic stability .we trace this difference to the greater likelihood of competing , alternatively paired configurations when a two - letter alphabet is used . for rna , there already exist algorithms that predict secondary structures .these algorithms are intended to apply to real rna and , consequently , involve a large number of parameters for the different pairing and stacking combinations .using one of these algorithms , fontana _et al._ found a broad distribution of designabilities , _i.e. _ number of sequences per structure , after structures were grouped by topology . in this paper , we present , instead , a much simpler model for rna secondary structure designed to elucidate the role of alphabet size .the organization of this paper is as follows . in sectionii , we present a base - stacking model for rna secondary structure and outline the recursive algorithm used to compute the partition function and ground - state structure . 
in section iii , we employ our model to analyze the stability of folded structures .we find a significant difference in stability between two - letter and four or six - letter sequences due to the greater likelihood of alternative folds in the two - letter case . as a consequence of these alternative folds , in the two - letter case , stability correlates with designability , _i.e. _ , total number of sequences associated with a structure .in addition , we find that rna sequences folding to a given structure form a percolating neutral network . finally , in section iv , we summarize our main conclusions .we introduce a base - stacking model for rna secondary - structure formation .it is known that , within a stem of base pairs , the largest energy contribution is the _ stacking energy _ between two adjacent base pairs ( rather than the base - pairing energy itself ) and the total energy of the stem is the sum of stacking energies over all adjacent base pairs . a single stack ( ) is defined as two adjacent non - overlapping base pairs ( ) and ( ) where . for this stack ( )we assign an energy if ( ) and ( ) are both complementary watson - crick base pairs and zero otherwise .we thus neglect differences in energy between , for example , ( a , a;u , u ) , ( a , g;c , u ) and ( g , g;c , c ) stacks .we also neglect energy contributions from isolated base pairs that are not part of a stack , and , consequently , do not include isolated base pairs in the secondary structure .the largest entropic contribution to an rna structure comes from stretches of unpaired bases .we incorporate a simplified version of this polymer configurational entropy in our model by associating degrees of freedom with every unpaired base .thus , the restricted partition function , corresponding to all micro - states compatible with a given secondary structure is \ ] ] where is the number of unpaired bases , is the number of stacks , and is the temperature .the restricted free energy is . in this model ,since only complementary base pairs can participate in a stack , only a fraction of possible structures are compatible with any given sequence .however , provided the structure is compatible with the sequence , its restricted free energy is independent of the sequence .the change in free energy due to the formation of an isolated stack is ; the first term corresponds to the stacking energy and the second to the loss in configurational entropy ( since four bases participate in the stack ) .for every additional adjacent stack the change in free energy is , since only two bases are added to the stack .if , for example , but ( _ i.e. _ , , then formation of an isolated stack would be unfavorable but formation of a segment consisting of two or more adjacent stacks would be favored by a net decrease in free energy .thus , for an appropriate choice of parameters , the model correctly provides a nucleation cost to the formation of stems . for this paperwe choose and , which are physically motivated and correspond to a nucleation cost for the formation of an isolated stack , with a minimum of two adjacent stacks required to form a stable stem .our results , however , do not depend sensitively on the choice of these parameters . 
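the parameter values chosen in the paper are not legible in this copy, so the sketch below uses illustrative numbers selected only to satisfy the two conditions just stated (an isolated stack is unfavorable, each additional adjacent stack is favorable), and evaluates the restricted free energy f(s) = -n*eps - t*nu*ln(s) for single hairpin stems on a 40-base strand.

```python
import numpy as np

# illustrative parameters (the paper's actual values are not reproduced here):
# stacking energy EPS, degrees of freedom S per unpaired base, temperature T,
# chosen so that -EPS + 4*T*ln(S) > 0 (isolated stack unfavorable) while
# -EPS + 2*T*ln(S) < 0 (each additional adjacent stack favorable).
EPS, S, T = 3.5, np.e, 1.0

def restricted_free_energy(n_unpaired, n_stacks):
    # F(S) = -n_stacks*EPS - T*n_unpaired*ln(S); the same for every sequence
    # compatible with the structure
    return -n_stacks * EPS - T * n_unpaired * np.log(S)

def hairpin_vs_open(n_bases, stem_stacks):
    # free-energy change of forming one hairpin stem of `stem_stacks` adjacent
    # stacks (2*(stem_stacks+1) paired bases) relative to the open chain
    unpaired = n_bases - 2 * (stem_stacks + 1)
    return restricted_free_energy(unpaired, stem_stacks) - restricted_free_energy(n_bases, 0)

for stacks in (1, 2, 3):
    print(stacks, hairpin_vs_open(40, stacks))
# -> +0.5, -1.0, -2.5: one isolated stack costs free energy, while a stem of two
#    or more adjacent stacks is stable, as required by the nucleation argument above
```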
in the secondary structure ,any two base pairs ( ) and ( ) , with , are either nested ( ) or independent ( ) .other possibilities correspond to `` pseudoknots '' , which are energetically and kinetically suppressed .it is customary to regard pseudoknots as part of the tertiary structure , and we do not include them here . in order to compute the ground - state structure and partition function for a given sequence , we make use of the hierarchical nature of secondary structures ( due to the absence of pseudoknots ) .we use a recursive algorithm that is a generalization of the techniques described in refs . and . consider the partition function for a segment of bases from the position to .the base is either unpaired or can be part of a stack with .thus obeys : ,\end{aligned}\ ] ] where equals 1 if both and are complementary base pairs , and equals 0 otherwise ; is defined to equal .we have introduced which is the partition function for the segment with the boundary condition that sites and are paired , implying an energy for the formation of a bond between the bases at sites and .we thus require a second recursion relation for : , \nonumber\end{aligned}\ ] ] where equals 1 if are complementary base pairs and otherwise .the partition function can be computed recursively using ( 2 ) and ( 3 ) in steps .we use a similar recursive algorithm to compute the ground - state structure .we have employed our model to analyze the stability of folded structures corresponding to two , four , and six - letter sequences .the thermodynamic stability is defined as the probability that the sequence will be found in the ground state , where is the free energy associated with the ground state .2 shows a histogram of stability for 40-nucleotide long sequences with ground states containing 12 to 15 stacks .we find four - letter sequences considerably more stable on average than two - letter sequences .allowing only for breaking of base pairs , plotted for 40-nucleotide sequences . s denote rna sequences constructed from two types of bases , s denote those constructed from four , and s denote sequences constructed from six types of bases .the actual probability is averaged over sequences with the same pair - breaking stability ( which is sequence independent).,width=264 ] what is the origin of the difference in stabilities between two - letter and four - letter rna sequences ?in order to address this question , we classify the excited - state structures as ( i ) those formed by breaking existing pairs , and ( ii ) those formed by re - pairing , _ i.e. _ , by forming new pairs in addition to breaking existing pairs .independent of alphabet size , all sequences folding into a given secondary structure have the same set of `` pair - breaking '' excited states .the _ sequence _ dependence of stability for a given ground - state structure results entirely from re - pairings . _the crucial difference between two - letter and four sequences lies in the substantially greater likelihood of `` re - paired '' excited states for two - letter sequences ._ this follows because the number of pairs one can form in a random sequence of two letters is typically much larger than for a four or six - letter sequence of the same length .for example , for a random four - letter sequence of length , the probability of forming a stem involving sites to and to is lower by a factor of as compared to a random two - letter sequence of the same length . 
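the recursion in eqs. (2)-(3) is only partially legible here, so the following is a simplified but self-contained variant of the same o(l^3) dynamic program: pseudoknots are excluded by conditioning on the pairing status of the last base of each segment, every complementary pair receives a fixed boltzmann weight w, and every unpaired base a factor s. the paper's model instead attaches the boltzmann weight to stacks of adjacent pairs, which requires the second, boundary-paired array of eq. (3); the structure of the recursion is otherwise the same, and the weight values here are illustrative.

```python
import numpy as np

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def partition_function(seq, s=np.e, w=20.0):
    # pseudoknot-free partition function by dynamic programming: condition on
    # whether the last base of the segment [i, j] is unpaired or paired with k
    L = len(seq)
    z = [[1.0] * L for _ in range(L)]   # z[i][j]: segment i..j (inclusive)

    def get(i, j):
        return z[i][j] if i <= j else 1.0   # empty segments contribute 1

    for length in range(1, L + 1):
        for i in range(0, L - length + 1):
            j = i + length - 1
            total = s * get(i, j - 1)              # base j unpaired
            for k in range(i, j):                  # base j paired with base k
                if (seq[k], seq[j]) in PAIRS:
                    total += w * get(i, k - 1) * get(k + 1, j - 1)
            z[i][j] = total
    return z[0][L - 1]

print(partition_function("GGGAAAUCCC"))
```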
for the same reason ,the fraction of sequences that have highly stacked ground states is much greater for two - letter sequences than for four - letter sequences , and much greater for four than for six . to demonstrate the importance of re - paired excited states , we first calculate a `` pair - breaking '' stability where is a pair - breaking partition function calculated by considering only pair - broken excited states . gives us an upper bound to the true stability , _ i.e. _ , probability in ground state , that includes competition from re - paired states . in fig .3 , we plot the true average stability against the pair - breaking stability for two , four , and six - letter sequences .as expected , the average stability is much closer to the maximum set by pair breaking in the case of four - letter sequences than in the case of two - letter sequences .thus , structures constructed with four - letter sequences are typically much more than stable than those constructed with two letters , and six - letter sequences are typically more stable than four - letter ones . for folding _, it is these same `` re - paired '' states that act as kinetic traps . due to the lower likelihood of such states , we expect four and six - letter sequences to typically fold faster than two - letter sequences . what determines the average stability , , of two - letter sequences ?we have seen that for four and six - letter sequences the average stability is close to the `` pair - breaking '' stability which is determined largely by the number of stems and loops .insight into the stability of two - letter rna sequences comes from results in protein folding .based on solvation models with differing hydrophobicities of amino acids , a principle of designability has emerged for protein folding .the designability of a structure is measured by the number of sequences folding uniquely into that structure .a small class of protein stuctures emerge as being highly designable ; remarkably , the same class of structures are highly designable whether two or all 20 amino - acid types are used . in a wide range of protein models , sequences associated with highly designable structures are thermodynamically more stable and fold faster than typical sequences .this connection between the designability of a structure and the stability of its associated sequences is referred to as the designability principle .the designability principle reflects a competition among structures . in solvation models, sequences will fold to structures which best match their hydrophobic amino acids to buried sites in the structure ( shielded from water ) .highly designable structures are those with unusual patterns of surface exposure , and therefore few competitors .this lack of competitors also implies that the sequences folding to such structures are thermally stable .we will now show that the designability principle also holds for two - letter rna . versus designability ( in logarithmic scale ) for 24-base rna sequences constructed with two types of bases . in the insetwe plot fraction of compact structures [ 19 ] with designability above versus for two and four - letter rna sequences.,width=264 ] for two base - types ( say a and u ) , we enumerate all sequences and structures of length 24 . 
we find that secondary structures differ considerably in their designability ; there are highly designable structures which are ground states of a large number of sequences , and there are poorly designable structures which are ground states of only a few sequences ( cf .4 inset ) . in this respect , the results for two - letter sequences are similar to those for protein models .however , the histogram is more noisy for rna than it is for proteins ; so we plot the integrated distribution of designabilities .the most designable structure consists of a stem with a hairpin loop , and a dangling end .we have also studied longer sequences , of lengths 40 and 50 , for which we sample sequence space . for 40-nucleotide sequences ,the most designable structures consist of a single hairpin loop and dangling ends ; a number of double hairpin structures are also highly designable ( fig .5 ) . for sequences of length 50 ,double hairpin structures emerge as the most designable . finally , we find a pronounced correlation between designability and stability of rna structures .this is shown in fig .4 for 24-nucleotide sequences .thus , two - lette rna sequences which fold into highly designable secondary structures are unusually thermally stable , verifying the designability principle . versus designability ( in logarithmic scale ) for 24-base rna sequences constructed with four types of bases .we find no significant correlation between designibality and stability for four letters . , width=264 ] in contrast , for four - letter sequences the range of designabilities is narrower and there is only a weak correlation between designability and stability , with highly stable sequences existing for structures of both high and low designability ( fig .the results for six letters are similar .we trace this difference between two and four or six - letter sequences to the likelihood of competing re - paired states . for two letters ,the correlation between designability and stability ( as well as the nontrivial distribution of designabilities ) arises primarily from competing re - paired states .four and six - letter sequences have far fewer competing re - paired states and hence do not demonstrate significant correlation between designability and stability. we plot ( a ) a histogram of the distances to all two - letter sequences with the same ground state structure , and ( b ) a histogram of the distances to all two - letter sequences .histogram ( b ) is independent of the choice of .histogram ( a ) is also roughly independent of sequence provided its ground - state structure is highly designable.,width=264 ] finally we consider the `` neutral network '' of rna sequences which fold to a particular structure .the connectivity within a network and the shortest distance between networks has drawn considerable attention with respect to the evolvability of rna structures . in our model ,the network of sequences which fold to a particular structure is truly `` neutral '' in that all sequences have the same ground - state free energy , albeit with different stabilities because of repairing .( this contrasts with protein solvation models in which , independent of competing structures , there is typically an energy hierarchy of sequences for each structure , determined by the match between hydrophobicity and surface - exposure pattern . ) in our model , rna sequences that fold to a given structure form , in general , a percolating and non - compact network in sequence space . 
in particular ,a histogram of the distances between sequences folding to the same highly designable structure is actually broader than a histogram of the distances between _ all _ sequences ( fig .7) . in this respect ,the rna model differs considerably from protein models .to conclude , in this paper we developed and studied a minimalist base - stacking model of rna secondary structure .we found that sequences constructed with four or six types of bases typically have fewer competing excited states , and , consequently , have greater ground - state stability , compared to sequences constructed with two types -of bases . at the same time , the fraction of sequences with highly stacked ground states is much smaller for four - letter sequences than for two , and much smaller for six letters than for four .it is tempting to speculate that four letters optimizes the stability of structures while maintaining a reasonable probability that a random sequence folds into a highly stacked structure .if , as has been postulated , early life was indeed rna based and double - stranded dna came later in evolution , our observations might plausibly bear on nature s choice of four letters for the genetic code .we use a narrow range of stack numbers to emphasize the dependence of stability on alphabet size . for a wider range of stack numbers ,the increase in stability with stack numbers can obscure the dependence on alphabet sizes . for each structure ,we generate a random sample of sequences that are compatible with the structure and calculate the fraction of such sequences that have this structure as the ground state .we multiply the total number of compatible sequences by this fraction to obtain the designability . in fig . 4, we plot the -th percentile of greatest stability , , rather than average stability of sequences folding to a structure. since sequences folding to a structure have , in general , a wide range of stabilities , the two can be quite different .average stability shows a similar , but less pronounced correlation with designability than .
|
we construct a base-stacking model of rna secondary-structure formation and use it to study the mapping from sequence to structure. there are strong, qualitative differences between two-letter and four- or six-letter alphabets. with only two kinds of bases, most sequences have many alternative folding configurations and are consequently thermally unstable. stable ground states are found only for a small set of structures of high designability, _i.e._ , high total number of associated sequences. in contrast, sequences made from four bases, as found in nature, or from six bases have far fewer competing folding configurations, resulting in a much greater average stability of the ground state.
|
the first part of the title of this paper ( all but the last four words ) is taken from the title of a paper written by aharonov , albert and vaidman ( aav ) over twenty years ago . in that paper aavintroduced the concept of weak values .this concept immediately caused controversy , but over the years it has proved to be a useful paradigm for considering questions related to quantum measurement and the foundations of quantum mechanics .for example , the observation of paradoxical values in a weak - value - type measurement has been linked to the violation of the leggett - garg inequality , which can be used to test realism . in the setup considered by aav, a beam of spin-1/2 particles propagates through a non - uniform magnetic field in a stern - gerlach - type experiment , where the trajectory of a given particle is affected by the spin state of the particle .the modification from the original stern - gerlach experiment is that , in the path of its propagation , the beam encounters two regions in space with magnetic fields .the magnetic field gradient in the first region is designed such that it creates a tendency for particles whose -component of the spin ( which we denote by ) is positive to develop a finite component of the momentum in the positive direction and for particles whose is negative to develop a finite component of the momentum in the negative direction . after exiting this region in space, the beam enters a second region where a -component in the momentum develops based on the -component of the spin ( ) .either one of these stages would constitute a measurement of the spin along some direction : by setting up a screen that the beam hits sufficiently far from the field - gradient region , the position where a given particle hits the screen serves as an indicator of the particle s spin state .when combined , they create a situation where two non - commuting variables are being measured in succession .if ( 1 ) the first measurement stage is designed to be a weak measurement , ( 2 ) the particles in the beam are created in a certain initial state [ e.g. close to being completely polarized along the positive -axis ] and ( 3 ) only those particles for which the second measurement produces a certain outcome [ in this example , a negative -component of the spin ] , then the average value of the spin s -component indicator can suggest values of this component of spin being much larger than 1/2 , a situation that seems paradoxical .a number of studies have already pointed out that since in the aav setup two non - commuting variables are being measured in succession , quantum mechanics forbids treating them as independent measurements whose outcomes do not affect one another . in this paperwe start by presenting an example that demonstrates the role of interpretation in obtaining unphysical results in a weak - measurement - related setup .the setup is chosen to be very simple in order to remove any complications in the analysis related to the successive measurement of non - commuting variables . 
in the second part of the paper , we present the proper analysis ( from the point of view of quantum mechanics ) of the measurement results obtained in an aav setup .let us consider the following situation : an experimenter purchases a device for measuring the -component of a spin-1/2 particle .the device produces one of two readings , 0 or 1 .the experimenter goes to the lab and calibrates the device .the calibration is done by preparing particles in the spin up state , measuring them one by one , and then doing the same for the spin down state .let us say that the result of the calibration procedure is that for the spin up state the device shows the reading `` 1 '' in 50.25% of the experimental runs and the reading `` 0 '' in 49.75% of the runs .for the spin down state , the probabilities are reversed .clearly , the reading of the measurement device is only weakly correlated with the spin state of the measured particle .the experimenter takes this fact into account and reaches the following conclusion : if i have a large number of identically prepared spin-1/2 particles and measure them using this device , i will obtain a probability for the reading `` 1 '' . using the results of the calibration procedure, the expectation value of the spin -component for the prepared state will be given by the formula : if the probability of obtaining the outcome `` 1 '' is 0.5025 , the above formula gives 1/2 .if the probability of obtaining the outcome `` 1 '' is 0.4975 , the above formula gives -1/2 .it looks like the device is ready to be used .the experimenter now performs an experiment that involves , as its final step , a measurement of .surprisingly , the measurement device shows the reading `` 1 '' every time the experiment is repeated , leading the experimenter to conclude that the value of the spin is in fact 100 .thus one has a paradox .the resolution of the paradox in the above story lies in the fact that the device was not a weak - measurement device as the experimenter assumed , but a strong - measurement device whose reading is perfectly correlated with the spin state of the measured particle .the only problem is that at some point before the measurement device was calibrated , its spin - sensing part was rotated from being parallel to the -axis to an axis that makes an angle 89.7135 with the -axis ( note here that ) .not surprisingly , the calibration procedure produced the probabilities 0.5025 and 0.4975 . in the `` real '' experiment ,the spins were all aligned with the measurement axis of the device , and the reading `` 1 '' was observed in all the runs . the paradox is therefore resolved .an unquestioning believer in quantum mechanics might say that the situation discussed in ref . has a large amount of overlap with the story presented above . in both casesa perfectly acceptable measurement is performed .the reason for obtaining a paradoxical measurement result is simply the wrong interpretation of what the measurement device is measuring and the resulting erroneous mapping from measurement outcomes to values of the measured quantity .) for the two states of the measurement basis , and .,width=226 ] we now turn to the question of the correct interpretation of the aav experiment according to quantum mechanics . instead of the original , stern - gerlach - type experiment analyzed by aav, we formulate the problem slightly differently .we consider a spin-1/2 particle that is subjected to two separate measurements . 
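returning briefly to the calibration story told above: the elided inversion formula relating the observed frequency of the reading `1' to the inferred value of the spin must, to be consistent with the numbers quoted (0.5025 -> 1/2, 0.4975 -> -1/2, and certainty of the reading `1' -> 100), be the linear map sketched below; the tilted measurement axis that resolves the paradox follows from the same numbers.

```python
import numpy as np

# calibration of the (supposedly weak) z-measurement device described above
p1_up, p1_down = 0.5025, 0.4975

# linear model assumed by the experimenter: P(1) = 1/2 + c*<S_z>, with <S_z> = +-1/2
c = p1_up - p1_down              # = 0.005

def inferred_sz(p1):
    # invert the calibration to map an observed frequency of "1" to a spin value
    return (p1 - 0.5) / c

print(inferred_sz(0.5025))       # -> 0.5
print(inferred_sz(0.4975))       # -> -0.5
print(inferred_sz(1.0))          # -> 100.0, the "paradoxical" value

# resolution: the device performs a strong measurement along an axis tilted by
# alpha from z, with P(1 | up) = cos(alpha/2)**2 = (1 + cos(alpha)) / 2
alpha = np.arccos(2 * p1_up - 1)     # cos(alpha) = 0.005
print(np.degrees(alpha))             # ~ 89.7135 degrees
```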
as a first step, a weak measurement is performed in the basis , where and the states and are the eigenstates of .this measurement can produce any one of a large number of possible outcomes , with probability distributions as shown in fig . 1 .this measurement constitutes a weak measurement of . as discussed in , each possible outcomeis associated with a measurement matrix , where the index represents the outcome that is observed in a given run of the experiment . if the outcome occurs with probability for the system s maximally mixed state , i.e. when averaged over all possible initial states , and it provides measurement fidelity ( in favor of the state ) , the measurement matrix is given by we shall use the convention where a measurement that favors the state has a negative value of and is given by the same expression as above .it is worth mentioning here that the overall , or average , fidelity of this measurement can be obtained by averaging over all possible initial states and all possible outcomes : after the weak -basis measurement , a strong measurement in the basis is performed .this strong measurement step can be described by two outcomes with corresponding measurement matrices as mentioned above , paradoxes arise if one treats the -basis and -basis measurements as two separate measurements that provide complementary information .instead , one should treat each pair of outcomes as a single _ combined - measurement _ outcome .the maximum amount of information in a given run of the experiment can be extracted as follows : given that the outcome pair was observed , one can construct the combined - measurement matrix from the matrices one can construct a so - called positive operator - valued measure ( povm ) defined by the matrices : where the superscript represents the transpose conjugate of a matrix . in particular , where one can find that with given by the same expression as above . as discussed in ref . , one can obtain the measurement basis and fidelity that correspond to the outcome defined by by diagonalizing the matrix .since is a hermitian matrix , its two eigenvalues ( and , with ) will be real and its two eigenstates ( and ) will be orthogonal quantum states that define a basis ( the measurement basis ) .note that because the second measurement in the problem considered here is a strong measurement , we always have .the different outcomes produce different measurement bases , thus this measurement can not be thought of in the usual sense of measuring with being some fixed direction .therefore , the measurement basis is determined stochastically for each ( combined ) measurement ( note that after the strong -basis measurement , the system always ends up in one of the states , even though the combined - measurement basis can be different from the basis ) . by analyzing all the measurement data, one can perform partial quantum state tomography and determine the and -components in the initial state of the system ( assuming of course that all copies are prepared in the same state , which can be pure or mixed ) . note that in this setup no information about can be obtained from the measurement outcome .we now ask whether information can be extracted from the -basis and -basis measurements separately , i.e. 
by disregarding the outcome of one of the two measurement steps .the answer is yes , provided care is taken in interpreting the results .extracting an -basis measurement from a given measurement outcome is straightforward .all one has to do is disregard the outcome of the -basis measurement , since this measurement is performed after the -basis measurement and can not affect the outcome of the -basis measurement .therefore , by disregarding the outcome of the -basis measurement , one obtains an -basis measurement with overall fidelity .the situation is somewhat trickier if one wants to extract a -basis measurement from the measurement outcome .one can disregard the outcome of the -basis measurement , but one must take into account the fact that this measurement generally changes the state of the system before the -basis measurement is performed . the effect of the -basis measurement is to reduce the fidelity of the -basis measurement .one can calculate this reduced fidelity as follows : let us assume that the system starts in the initial state .after the -basis measurement is performed and the outcome ( with fidelity ) is observed , the state of the system is transformed into a new pure state with . since for any pure state and here we have , we find that after the -basis measurement is reduced from 1 to . if is independent of , one obtains the relation ( in this context , see e.g. ref . ) we now take one final look at the aav gedankenexperiment .we choose a specific form for the -basis measurement , which is essentially the same one used by aav with running over all integers from to and assumed to be a large number .note that the above expression violates the constraint that .however , provided that , the above expression can be treated as a good approximation of the realistic situation for all practical purposes .a simple calculation shows that in this case such that if the measured system is prepared in one of the states , the average value of that is obtained in an ensemble of measurements ( all with the same initial state ) is the small difference between and is the reason why the -basis measurement qualifies as a weak measurement of .we now consider the full measurement procedure .if one prepares the measured system in a state that is very close to , most basis measurements will produce the outcome .only a small fraction of the experimental runs will produce the outcome .if the initial state deviates slightly from , i.e. then outcomes with negative values of and will be suppressed the most ( assuming is positive ) , because these outcomes correspond to states that are orthogonal or almost orthogonal to the initial state ( making their occurrence probabilities particularly small ) .one therefore finds that among the measurements that produced , the average value of can be much larger than for properly chosen parameters .this situation leads to the aav paradox .in conclusion , we have presented explanations according to quantum mechanics of two questions that are relevant to discussions of weak values .first we presented an example that emphasizes the role of interpretation in obtaining unphysical results in an aav setup .we have also presented the correct interpretation ( according to quantum mechanics ) of the measurement results obtained in an aav setup .we believe that our discussion is useful for understanding the origin of the possible observation of unphysical values in a weak - value experimental setup . 
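the postselected amplification described above is easy to reproduce numerically. the sketch below is a minimal model of the two-step procedure, with gaussian measurement operators of width delta centred on the two eigenvalues of the weakly measured spin component; the gaussian form, the width, and the choice of z as the weakly measured component and x as the strongly measured one are assumptions made for illustration, since the extracted text does not reproduce the exact expressions. for an initial state slightly rotated away from the positively polarized x state, the average reading among the runs postselected on the rare x outcome comes out far larger than 1/2, while the unconditioned average stays close to zero.

```python
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)        # s_z = +1/2
down = np.array([0.0, 1.0], dtype=complex)      # s_z = -1/2
x_minus = (up - down) / np.sqrt(2)              # the rarely obtained strong outcome

# weak z measurement: outcomes j = -J..J, gaussian operators of width delta
# centred on the eigenvalues +/- 1/2 (illustrative pointer model).
J, delta = 400, 40.0
j = np.arange(-J, J + 1)
a_up = np.exp(-(j - 0.5) ** 2 / (4 * delta ** 2))
a_down = np.exp(-(j + 0.5) ** 2 / (4 * delta ** 2))
a_up /= np.sqrt(np.sum(a_up ** 2))              # so that sum_j M_j^dag M_j ~ identity
a_down /= np.sqrt(np.sum(a_down ** 2))

def postselected_mean(psi):
    """average weak reading among runs whose final x measurement gave x_minus."""
    amp = np.conj(x_minus[0]) * a_up * psi[0] + np.conj(x_minus[1]) * a_down * psi[1]
    prob = np.abs(amp) ** 2
    return np.sum(j * prob) / np.sum(prob)

def unconditioned_mean(psi):
    prob = np.abs(a_up * psi[0]) ** 2 + np.abs(a_down * psi[1]) ** 2
    return np.sum(j * prob) / np.sum(prob)

eps = 0.05                                      # small tilt away from the +x state
psi0 = np.cos(np.pi / 4 - eps / 2) * up + np.sin(np.pi / 4 - eps / 2) * down

print(unconditioned_mean(psi0))   # close to zero
print(postselected_mean(psi0))    # of order 1/eps, i.e. much larger than 1/2
```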
this work was supported in part by the national security agency ( nsa ) , the laboratory for physical sciences ( lps ) , the army research office ( aro ) and the national science foundation ( nsf ) grant no .
|
we discuss two questions related to the concept of weak values as seen from the standard quantum - mechanics point of view . in the first part of the paper , we describe a scenario where unphysical results similar to those encountered in the study of weak values are obtained using a simple experimental setup that does not involve weak measurements . in the second part of the paper , we discuss the correct physical description , according to quantum mechanics , of what is being measured in a weak - value - type experiment .
|
as we all have experienced , social networks can evolve in convoluted ways .friendships can become estrangements and vice versa .new friendships can be created while existing friends drift apart .how are these changing relations reflected in the structure of social networks ? as a familiar and illustrative example , suppose that you are friendly with a married couple that gets divorced .a dilemma arises if you try to remain friendly with both of the former spouses . you may find yourself in the uncomfortable position of listening to each of the former spouses separately disparaging each other. ultimately you may find it simplest to remain friends with only one of the former spouses and to cut relations with the other ex - spouse . in the language of _ social balance _ , the initially balanced triad became unbalanced when the couple divorced .when you subsequently kept your friendship with only one former spouse , social balance is restored .what happens in a larger social network ?now we need to look at all triads that link individuals , , and .we define the link variable and friends and otherwise . then the triad is balanced if , and is imbalanced otherwise ( fig .[ triads ] ) .a balanced triad therefore fulfills the adage : * a friend of my friend as well as an enemy of my enemy is my friend ; * a friend of my enemy as well as an enemy of my friend is my enemy .a network is balanced if each constituent triad is balanced .a seemingly more general definition of a balanced network is to require that each closed cycle is balanced ; that is , .cartwright and harary showed that a cycle - based definition of balance is equivalent to a triad - based definition for complete graphs .this result can be reformulated as follows : if we detect an imbalanced cycle of any length in a complete graph , there must be an imbalanced triad .balance theory was originally introduced by heider and important contributions were made by many others .cartwright and harary translated heider s ideas into the framework of graph theory , and proved several fundamental theorems about the structure of balanced networks .there is also an extensive literature on balance theory ( see _ e.g. _ , and references therein ) .cartwright and harary showed that on a complete graph balanced societies are remarkably simple : either all individuals are mutual friends ( `` utopia '' ) , or the network segregates into two mutually antagonistic but internally friendly cliques a `` bipolar '' state .however , spontaneously balanced states are rare if one were to assign relationships in a social network at random , the probability that this society is balanced would vanish exponentially with system size . thus to understand how a network reaches a balanced state we need to go beyond static descriptions to investigate how an initially imbalanced society becomes balanced via social dynamics . herewe discuss the evolution of such social networks when we allow the sense of each link to change from friendly to unfriendly or _ vice versa _ to reflect the natural human tendency to reduce imbalanced triads .two such dynamics are considered : _ local triad dynamics _( ltd ) and _ constrained triad dynamics _ ( ctd ) . 
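before specifying these dynamics, the triad-based definition of balance itself translates directly into code. the short sketch below (python/numpy; the symmetric matrix representation and the helper names are implementation choices, not anything prescribed in the text) builds a random signed complete graph with a prescribed density of friendly links and lists the imbalanced triads, i.e. those whose three link signs multiply to -1.

```python
import numpy as np
from itertools import combinations

def random_signed_network(n, rho, seed=0):
    """symmetric matrix of +/-1 links; rho is the probability that a link is friendly (+1)."""
    rng = np.random.default_rng(seed)
    s = np.where(rng.random((n, n)) < rho, 1, -1)
    s = np.triu(s, 1)
    return s + s.T

def imbalanced_triads(s):
    """triads (i, j, k) whose link signs satisfy s_ij * s_jk * s_ki = -1."""
    n = len(s)
    return [(i, j, k) for i, j, k in combinations(range(n), 3)
            if s[i, j] * s[j, k] * s[k, i] == -1]

s = random_signed_network(16, rho=0.5)
bad = imbalanced_triads(s)
print(len(bad), "imbalanced triads out of", 16 * 15 * 14 // 6)
print("balanced" if not bad else "not balanced")
```

as expected from the counting argument mentioned above, a randomly assigned network of even modest size is essentially never balanced: roughly half of its triads come out imbalanced.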
for simplicity ,we consider complete graph networks everyone knows everyone else .we will address the basic question : what is the long - time state of such networks ?in local triad dynamics ( ltd ) , an imbalanced triad is selected at random and the sign of a relationship between two individuals is flipped to restore the triad to balance .this change is made irregardless if other triads become imbalanced as a result .thus ltd can be viewed as the social graces of the clueless such a person makes a relationship change without considering the ramifications on the rest of his social network .we define a triad to be of type if it contains unfriendly links .thus and are balanced , while and are imbalanced . with these definitions ,the ltd rules are ( fig . [ process ] ) : ( left ) and ( right ) by local triad dynamics .solid and dashed lines represent friendly and unfriendly links , respectively . ] 1 .pick a random imbalanced ( frustrated ) triad .if the triad is of type , then : ( i ) with probability , change the unfriendly link to a friendly link ; ( ii ) with probability , change a friendly link to an unfriendly link .3 . if the triad is of type , then change an unfriendly link to a friendly link .after the update , the initial imbalanced target triad becomes balanced , but other previously - balanced triads that share a link with this target may become imbalanced. these triads can subsequently evolve and return to balance , leading to new imbalanced triads .for example , when a married couple breaks up , friends of the former couple that remain friends with the former wife may then redefine their relationships with those who choose to remain friends with the former husband .these redefinitions , may lead to additional relationship shifts , _etc_. we now study ltd on a finite complete graph of nodes , links , and triads .let be the number of triads that contain unfriendly links , with the respective triad densities , and ( ) the number of friendly ( unfriendly ) links .the number of triads and links are related by the numerator counts the number of friendly links in all triads while the denominator appears because each link is counted times .the density of friendly links is therefore , while the density of unfriendly links is .it is useful to introduce the quantities as follows : for each friendly link , count the number of triads of type that are attached to this link .then is the average number of such triads over all friendly links .this number is the factor accounts for the fact that each of the triads of type is attached to friendly links ; dividing by then gives the average number of such triads .analogously , we introduce .since the total number of triads attached to any given link equals , the corresponding triad densities are ( fig .[ triangles - plus ] ) in total ) that are attached to a positive link ( heavy line ) . also shownare the stationary - state probabilities for each triad when the friendly link density is .full and dashed lines represent friendly and unfriendly relations , respectively . 
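the three ltd rules above fit in a short update function. the sketch below is a self-contained python illustration: exhaustive enumeration of triads is adequate for the small complete graphs used here, and the friendly link to flip in step 2(ii) is chosen uniformly at random, an assumption where the text only says "a friendly link". it runs the dynamics until the absorbing balanced state is reached and reports the final density of friendly links.

```python
import numpy as np
from itertools import combinations

def ltd_step(s, p, rng):
    """one local-triad-dynamics update; returns False once the network is balanced."""
    n = len(s)
    bad = [t for t in combinations(range(n), 3)
           if s[t[0], t[1]] * s[t[1], t[2]] * s[t[2], t[0]] == -1]
    if not bad:
        return False
    i, j, k = bad[rng.integers(len(bad))]
    links = [(i, j), (j, k), (k, i)]
    unfriendly = [l for l in links if s[l] == -1]
    friendly = [l for l in links if s[l] == 1]
    if len(unfriendly) == 1:            # one unfriendly link: rule 2
        flip = unfriendly[0] if rng.random() < p else friendly[rng.integers(2)]
    else:                               # three unfriendly links: rule 3
        flip = unfriendly[rng.integers(3)]
    a, b = flip
    s[a, b] = s[b, a] = -s[a, b]
    return True

rng = np.random.default_rng(1)
n, p = 12, 0.6
s = np.triu(np.where(rng.random((n, n)) < 0.5, 1, -1), 1)
s = s + s.T
steps = 0
while ltd_step(s, p, rng):
    steps += 1
rho = (s[np.triu_indices(n, 1)] == 1).mean()
print("balanced after", steps, "updates; friendly-link density", rho)
```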
]we now write rate equations that account for the changes in the triad densities in an update .we choose a triad at random ; if it is imbalanced ( or ) we change one of its links as shown in fig .[ process ] .let be the probability that a link changes from friendly to unfriendly in an update event , and vice versa for .a friendly link changes to unfriendly with probability when , while an unfriendly link changes to friendly with probability if and with probability 1 if .consequently in the special case of , each link of an imbalanced triad is flipped equiprobably .since each update changes triads , and we define one time step as update events . then the rate equations for the triad densities have the size - independent form where the overdot denotes the time derivative .let us determine the stationary solution to these equations . setting the left - hand sides of eqs .( [ ni - rate ] ) to zero and also imposing to ensure a fixed friendship density , we obtain . forming products such as ,these relations are equivalent to furthermore , the stationarity condition , , gives .using these two results , as well as the normalization condition , , in eqs .( [ stati ] ) , we find , after straightforward algebra , that the stationary density of friendly links is & \qquad p\leq 1/2;\cr 1 & \qquad p\geq 1/2.\cr \end{cases}\ ] ] the triad densities of each type become uncorrelated and are given by as shown in fig .[ n - vs - p ] , the stationary density of friendly links monotonically increases with for until utopia is reached .near the phase transition , the density of unfriendly links vanishes as .and the density of friendly links as a function of . simulation results for for ( crosses ) and 256 ( boxes ) are also shown . ] a remarkable feature of the master equations ( [ ni - rate ] ) is that if the initial triad densities are given by eq .( [ stat - nj])uncorrelated densities the densities will remain uncorrelated forever . in this case, it suffices to study the time evolution of the density of friendly links .we determine this time evolution directly by noting that increases if or , and decreases if . since the respective probabilities for these processes are , and , we have solving this equation , the time dependence of the density of friendly links has the following behaviors : where , , and are constants .thus for there is quick approach to a final state .this state is frustrated for and is utopian for . for utopiais reached slowly as a power - law in time .abstractly , ltd represents a stochastic dynamics in a state space in which each network configuration is represented by a point in this space and a link to another point represents an allowed transition by the dynamics . because balanced networks represent absorbing states of this dynamics , a finite network must ultimately fall into a balanced state for all .we now estimate the size dependence of the time to reach a balanced state , , for any value of by probabilistic arguments . .] 
for , we use the following random walk argument ( fig .[ eff - rw ] ) : when a link is flipped on an imbalanced triad on an almost balanced network ( nearly balanced triads ) , then of the order of triads that contain this link will become imbalanced .thus starting near balance , ltd is equivalent to a biased random walk in the state space of all network configurations , with the bias is directed away from balance , and with the bias velocity proportional to .conversely , far from the balanced state , local triad dynamics is diffusive because the number of imbalanced triads changes by of the order of equiprobably in a single update .the corresponding diffusion coefficient is then proportional to .since the total number of triads in a network of nodes is , we therefore expect that the time to reach balance will scale as . for , we define the time to reach the balanced state by the naive criterion ; that is , one unfriendly link remains . from eq ., will then grow logarithmically with . at , using eq .( [ rho - cases ] ) , the criterion now gives .while simulations show that scales algebraically with , the exponent is much smaller than 4 .the source of this smaller exponent is the fact that the number of unfriendly links fluctuates strongly about its mean value when there are few unfriendly links ( see fig .[ rate - eqn ] ) . to determine these fluctuations we writethe number of unfriendly links in the canonical form where is deterministic and is a stochastic variable .both and are size independent in the thermodynamic limit .a detailed argument shows that grows as as .because of the finite - size fluctuations in , the time to reach utopia is determined by the criterion that fluctuations in become of the same order as the average , _ viz ._ , using from eq .( [ rho - cases ] ) , , and , eq .( [ criterion ] ) becomes , from which follows .for an initially antagonistic society ( ) for : ( a ) ; ( b ) ; ( c ) p=3/4 .the line in ( b ) has slope . ]summarizing our results , we have : these are in agreement with our simulation results shown in fig .[ avtime ] .in _ constrained triad dynamics _ ( ctd ) , we first select an imbalanced triad at random and then select a _ random _ link in this triad .we change the sign of the link _ only if the total number of imbalanced triads decreases_. if the total number of imbalanced triads is conserved in an update , then the update occurs with probability 1/2 .ctd can be viewed as the dynamics of a socially aware individual who considers her entire social circle before making any relationship change .because of this global constraint , a network is quickly driven to a balanced state in a time that scales as .a more interesting feature is the existence of a dynamical phase transition in the structure of the final state as a function of the initial friendly link density ( fig .[ bavdi ] ) .we quantify this structural change by the scaled difference in sizes of the two cliques in the final state , .for the cliques in the final state are nearly the same size and .as increases toward , the size difference continuously increases and a sudden change occurs at , beyond which the final state is utopia . since and the density of friendly links are related by in a large balanced society , uncorrelated initial relations generically lead to .thus ctd tends to drive a network into a friendlier final state . for several network sizes . 
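constrained triad dynamics differs from ltd only in the acceptance test, which makes it easy to put next to the previous sketch. the following self-contained python illustration (again with exhaustive triad counting, sensible only for small n) flips a randomly chosen link of a randomly chosen imbalanced triad, accepts the flip only if the number of imbalanced triads does not increase (with probability 1/2 if it stays the same), and then reads off the two cliques of the balanced final state and their scaled size difference.

```python
import numpy as np
from itertools import combinations

def count_imbalanced(s):
    n = len(s)
    return sum(s[i, j] * s[j, k] * s[k, i] == -1
               for i, j, k in combinations(range(n), 3))

def ctd_run(n, rho0, seed=0):
    """constrained triad dynamics on a complete graph; returns the balanced final matrix."""
    rng = np.random.default_rng(seed)
    s = np.triu(np.where(rng.random((n, n)) < rho0, 1, -1), 1)
    s = s + s.T
    n_bad = count_imbalanced(s)
    while n_bad > 0:
        bad = [t for t in combinations(range(n), 3)
               if s[t[0], t[1]] * s[t[1], t[2]] * s[t[2], t[0]] == -1]
        i, j, k = bad[rng.integers(len(bad))]
        a, b = [(i, j), (j, k), (k, i)][rng.integers(3)]
        s[a, b] = s[b, a] = -s[a, b]                    # tentative flip
        n_new = count_imbalanced(s)
        if n_new < n_bad or (n_new == n_bad and rng.random() < 0.5):
            n_bad = n_new
        else:
            s[a, b] = s[b, a] = -s[a, b]                # reject: undo the flip
    return s

n, rho0 = 14, 0.55
s = ctd_run(n, rho0)
# in a balanced complete graph, node 0 together with its friends forms one clique
size_a = 1 + int(np.sum(s[0] == 1))
delta = abs(2 * size_a - n) / n
print("clique sizes:", size_a, n - size_a, " scaled difference:", delta)
```

repeating this over many seeds and initial densities rho0 reproduces the qualitative behaviour described above: near rho0 = 1/2 the two cliques are close in size, while sufficiently friendly initial conditions end in utopia (a single clique containing all nodes).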
]we now give a simple - minded argument that suggests that a large network undergoes a sudden change from ( two equal size cliques ) to ( utopia ) as a function of the initial friendly link density .this qualitative approach predicts that this transition occurs at . on the other hand ,our numerical simulations show that the transition is located near ( fig .[ bavdi ] ) .let us assume that a network remains uncorrelated during initial stages of evolution and under this assumption we determine the probabilities for a specific friendly link to flip .if the network is uncorrelated , the densities of triads that are attached to a friendly link are : for a link to change from friendly to unfriendly , it is necessary that .that is , this link is a member of more imbalanced triads than balanced triads . from eq .( [ positive ] ) , this condition is equivalent to , which never holds. consequently , friendly links never flip .similarly , the densities of triads attached to an unfriendly link are : to flip this unfriendly bond , we must have , _i.e. _ , the bond is part of more imbalanced than balanced triads .this condition gives , which is valid when .thus for a large uncorrelated network , only unfriendly links flip in ctd , except for .thus a network with should quickly evolve to utopia , while a network with should quickly approach a state where .simulations indicate , however , that correlations in relationships occur when and these ultimately lead to a bipolar society .we find that the precursor to this bipolar society is a state in which the network partitions itself by the dynamics into two subnetworks and of nearly equal sizes and . within each subnetwork ,the density of friendly links and slightly exceeds , while the density of friendly links between subnetworks is slightly less than .this small fluctuation is amplified by ctd so that the final state is two nearly equal - size cliques . to see how such evolution occurs ,let us assume that relationships within each subnetwork and between subnetworks are homogeneous .consider first the evolution within each clique .for an unfriendly link in , the densities of triads attached to this link are given by ( [ negative ] ) , with replaced by when the third vertex in the triad belongs to , and by ( [ negative ] ) , with replaced by when the third vertex belongs to .the requirement that a link can change from unfriendly to friendly by ctd now becomes +c_2[1 - 4\beta(1-\beta)]>0,\ ] ] which is always satisfied .conversely , friendly links within each subnetwork can never change . as a result ,negative intraclique links disappear and there is increased cohesiveness within cliques . and ( blobs at the extremities ) , with friendly link densities .the density of friendly links between cliques is .top : imbalanced triads that lead to an unfriendly link ( think dashed line ) changing to a friendly link within one clique .bottom : imbalanced triads that lead to a friendly link ( thick solid line ) changing to a unfriendly link between cliques . ] 0.2 in and ( blobs at the extremities ) , with friendly link densities .the density of friendly links between cliques is .top : imbalanced triads that lead to an unfriendly link ( think dashed line ) changing to a friendly link within one clique .bottom : imbalanced triads that lead to a friendly link ( thick solid line ) changing to a unfriendly link between cliques . 
] consider now relations between cliques .for a friendly link between the subnetworks , the triad densities attached to this link are when the third vertex belongs to .since the change friendly unfriendly is possible if (1 - 2\beta)>0\,.\ ] ] thus if the situation arises where , , and , the network subsequently evolves to increase the density of intra - subnetwork friendly links and decrease the density of inter - subnetwork friendly links .this bias drives the network to a final bipolar state . finally , note that when , the number of ways , , to partition the original network into the two nascent subnetworks and , is maximal .consequently , the partition in which has the highest likelihood of providing the initial link density fluctuation that ultimately leads to two nearly equal - size cliques , as observed in our simulations ( fig . [ bavdi ] ) .although our argument fails to account for the precise location of the transition , the behavior of in the two limiting cases of and is described correctly .we presented a simple setting for social dynamics in which both friendly and unfriendly links exist in a network .these links evolve according to natural rules that reflect a social desire to avoid imbalanced triads . for local triad dynamics, a finite network falls into a socially - balanced state in a time that depends sensitively on the propensity for forming a friendly link in an update event . for an infinite network ,a balanced state is never reached when and the system remains stationary .the density of unfriendly links gradually decreases and the network undergoes a dynamical phase transition to an absorbing , utopia state for . for constrained triad dynamics ,an arbitrary network is quickly driven to a balanced state .this rapid evolution results from the condition that the number of imbalanced triads can not increase . there is also a phase transition from bipolarity to utopia as a function of the initial density of friendly links that arises because of small structural fluctuations that are then amplified by the dynamics .it is interesting to consider the possible role of balance theory in international relations , with the evolution of the relations among the protagonists of world war i being a particularly compelling example ( fig .[ ww1 ] ) .a history starts with the three emperors league ( 1872 , and revived in 1881 ) that aligned germany , austria - hungary , and russia .the triple alliance was formed in 1882 that joined germany , austria - hungary , and italy into a bloc that continued until world war i. in 1890 , a bipartite agreement between germany and russia lapsed and this ultimately led to the creation of a french - russian alliance over the period 1891 - 94 . subsequently an entente cordiale between france and great britain was consummated in 1904 , and then a british - russian agreement in 1907 , that then bound france , great britain , and russia into the triple entente . while our account of these byzantine maneuvers is incomplete ( see refs . for more information ) , and fig .[ ww1 ] does not show all relations and thus the extent of network imbalance during the intermediate stages , the basic point is that these relationship changes gradually led to a reorganization of the relations between european nations into a socially balanced state . 
thus while social balance is a natural outcome , it is not necessarily a good one !another more immediate , and perhaps more alarming , application of social balance is to current international relations .as popularized in huntington s book , there appear to be developing civilizational divisions across which increasing conflict is occurring ( fig .[ clash ] ) . according to huntington, the division among humankind , and the source of future conflict , will be predominantly cultural rather than ideological and economic .this thesis has generated a great deal of criticism , yet the core idea namely , that division and conflict is a more likely outcome rather than the westernized world s hope for a utopia because of global democratization may prove correct at least in the foreseeable future .we close with some potentially interesting open theoretical questions .first , it is natural consider more general interactions .one can easily imagine ternary relationships of friendly , unfriendly , or indifferent .another possibility is continuous - valued interaction strengths .what is the number of cliques and number of communities as a function of network size and the density of indifferent relationships ?another direction , already considered by davis , is a more machiavellian society in which triads with three unfriendly relations are acceptable that is `` an enemy of my enemy may still be my enemy . ''this more relaxed definition for imbalanced triads may lead to interesting dynamical behavior that will be worthwhile to explore . finally , what happens if relations are not symmetric , that is , ?how does one define balance or some other notion of social stability with asymmetric interactions ?d. cartwright and f. harary , psychol .* 63 * , 277293 ( 1956 ) ; f. harary , r. z. norman and d. cartwright , _ structural models : an introduction to the theory of directed graphs _( john wiley & sons , new york , 1965 ) . a study of a similar spirit to ours is given in k. kulakowski , p. gawronski , and p. gronek , int .c * 16 * , 707 ( 2005 ) ; p. gawronski , p. gronek , and k. kulakowski , acta physica polonica b * 36 * , 2549 ( 2005 ) .we use the fact that the first - passage time to an absorbing point in a finite one - dimensional interval of length l with a bias away from the absorbing point is of the order of . see s. redner , _ a guide to first - passage processes _ , ( cambridge university press , new york , 2001 ) .s. p. huntington , _ the clash of civilizations and the remaking of world order _ , ( simon & schuster , new york , 1996 ) ; see also l. harris,_civilization and its enemies : the next stage of history _ , ( the free press , new york , 2004 ) .
|
how do social networks evolve when both friendly and unfriendly relations exist ? here we propose a simple dynamics for social networks in which the sense of a relationship can change so as to eliminate imbalanced triads , i.e. relationship triangles that contain 1 or 3 unfriendly links . in this dynamics , a friendly link changes to unfriendly or _ vice versa _ in an imbalanced triad to make the triad balanced . such networks undergo a dynamic phase transition from a steady state to `` utopia '' ( all friendly links ) as the amount of network friendliness is changed . basic features of the long - time dynamics and the phase transition are discussed . social balance , networks 02.50.ey , 05.40.-a , 89.75.fb
|
the sequence of evolution of the region around a star with a strong ionising radiation field which turns on rapidly is well known ( kahn 1954 ; goldsworthy 1961 ; see also spitzer 1978 , osterbrock 1989 or dyson & williams 1997 ) .initially , the ionization front ( if ) between ionized and neutral gas moves outwards at a speed limited by the supply of ionizing photons , but it begins to decelerate as the ionizing flux at the front surface is cut by geometrical divergence and absorption by recombined atoms within the front . eventually , when the speed of the front decreases to roughly twice the sound speed in the ( now highly overpressured ) ionized gas , a shock is driven forwards into the neutral gas ahead of the front .before this stage , the front is referred to as r - type , while subsequently it is referred to as d - type .the shock propagates outwards , gradually weakening , until , in principle , the region eventually reaches pressure equilibrium with its surroundings . where the external medium has an ordered magnetic field , the obvious critical flow speeds are the fast , alfvn and slow speeds rather than the isothermal sound speed .redman ( * ? ? ?* , hereafter ) studied ifs with the magnetic field vector in the plane of the front , and found that the fast - mode speed plays the same role as the sound speed does in the hydrodynamic case . in this paper , we extend their work to treat the case of an if moving into a medium in which the magnetic field is oblique to the direction of propagation of the front ( note also we here follow the more conventional usage in which the magnetic fields are termed parallel or perpendicular with respect to the front - normal ) .jump conditions for ifs with oblique magnetization have previously been studied by lasker .here we consider a wider range of upstream magnetic fields , since observations have shown that higher magnetic fields are found around regions than once thought likely .we determine the properties of the jumps as functions of upstream conditions , using the velocity of the front as a parameter rather than as the variable for which we solve .we use evolutionary conditions to isolate the stable if solutions , and verify these conclusions for a simple model of the internal structure of the fronts and using numerical simulations . we find that rather weak parallel magnetic fields can lead to a substantial decrease in the d - critical ( photoevaporation ) velocity from dense clumps except where the magnetic field is exactly parallel to the if , andalso find additional solutions to the jump conditions in the range of front velocities forbidden by the hydrodynamical jump conditions , which were not considered by lasker . in the following sections , we present the jump conditions for mhd shocks ( section [ s : jump ] ) , and discuss the regions for which evolutionary conditions suggest that these solutions are stable ( section [ s : solns ] ) .we verify that the evolutionary solutions are those with resolved internal structures for one simple model for the internal structure of the fronts ( section [ s : resolv ] ) . in the context of these results ,we discuss the development of an if over time ( section [ s : devel ] ) and illustrate this development using numerical models ( section [ s : numeric ] ) . 
finally ( in section [ s : concl ] ) , we summarize our results , and provide an example of their application to observations of the region s106 .our physical interpretation of the development of mhd ifs will , we hope , facilitate the future application of these results .we orient axes so that is normal to the front , and ( without loss of generality ) that the upstream velocity and magnetic field are in the plane .we use the usual mhd jump conditions : &= & 0 \\ { } [ \rho v_z^2 + p + b_x^2/8\pi ] & = & 0 \\ { } [ \rho v_z v_x - b_z b_x/4\pi ] & = & 0 \\ { } [ b_z ] & = & 0 \\ { } [ v_x b_z - v_z b_x ] & = & 0,\end{aligned}\ ] ] except that instead of using the energy flux condition , we adopt the isothermal equation of state where the sound speed , , increases across the front but is constant on either side of it .we use subscripts 1 and 2 to denote upstream and downstream parameters , respectively , and write , , and . hence equations , and give where and we define and ( the and contributions to the reciprocal of the plasma beta ) .the dependence on the upstream transverse velocity has disappeared , as expected as a result of frame - invariance . in equation , we substitute with for and use equation to eliminate and to find that , so long as and , the dilution factor is given by the quartic equation ( * ? ? ?* see also ) where ( is a typical value ) .it is easily verified that this equation has the correct form in the obvious limiting cases ( of isothermal mhd shocks where , and perpendicular - magnetized ifs where , see paper i ) .= 8 cm [ cols= " < , < " , ] equation relates the dilution factor , , to the upstream properties of the flow .its roots can be found by standard techniques .if the flow is to have a unique solution based only on initial and boundary conditions , then some of these roots must be excluded .it is possible to exclude roots on the basis that they correspond to flows that do not satisfy an evolutionary condition , analogous to that long used in the study of mhd shocks .the evolutionary condition is based on the requirement that the number of unknowns in the jump conditions matches the number of constraints applied to the flow . for mhd shocks ,the number of characteristics entering the shock must be two greater than the number leaving it ( since the number of independent shock equations is equal to the number of conserved variables and if there are no internal constraints on the front structure ) .the applicability of the evolutionary conditions to mhd shocks has been the subject of controversy in the recent past , with various authors suggesting that intermediate shocks ( shocks which take the flow from super- to sub - alfvn speeds ) may be stable .however , falle & komissarov have shown that the non - evolutionary solutions are only stable when the symmetry is artificially constrained , so that the magnetic fields ahead of and behind the shocks are precisely coplanar . 
in any cases in which the boundary conditions differed from this special symmetry ,the solutions including non - evolutionary shocks were found to be unstable .the equations governing the dynamics across an if are no longer a system of hyperbolic conservation laws , since the ionization source term can not be neglected on the scale of the front .it seems reasonable to apply analogous evolutionary and uniqueness conditions , but the mathematical proofs for hyperbolic conservation laws with dissipation no longer apply .the ` strong ' evolutionary conditions suggest that for if the number of incoming characteristics is equal to the number of outgoing characteristics , since applying the ionization equation means that there is an additional constraint on the velocity of the front .solutions which are under - specified by the external characteristic constraints , termed ` weakly evolutionary ' , may occur if there are internal constraints , as is the case for strong d - type ifs in hydrodynamics .stable weakly evolutionary solutions only occur for limited classes of upstream parameters which depend on the internal structure of the fronts . where the number of characteristics entering the front is greater than suggested by the evolutionary conditions , the solution can be realized as an mhd shock leading or trailing an evolutionary if . in the limit in which tends to unity from above , the phase change through the if has no dynamical consequences , and the if jump conditions approach those for isothermal mhd shocks .this can be seen if we rewrite equation as \nonumber\\ \hfill= -(\alpha-1)[m^2\delta-2\eta]^2.\label{e : opoly}\end{aligned}\ ] ] the if solution which satisfies the strong evolutionary conditions becomes the trivial ( ) solution of the mhd jump condition .the other solutions to the if jump conditions become non - evolutionary or evolutionary isothermal mhd shocks .the analysis of falle & komissarov rules out the former as physical solutions .the latter are treatable as separate discontinuities , which will propagate away from the if when the flow is perturbed , since the coincidence between the speeds of the shock and of the if will be broken .this argument by continuity supports the use of the evolutionary conditions for ifs . as a result of this discussion ,we will proceed for the present to isolate solutions in which the number of characteristics entering an if is equal to the number leaving it ( and discuss in section [ s : resolv ] the weakly evolutionary solutions for a simplified model of the internal structure of ifs ) .the velocity of fronts obeying the strong evolutionary conditions must be between the same critical speeds in the upstream and downstream gas ( somewhat confusingly , the fronts which obey the strong evolutionary conditions are termed ` weak ' in the standard nomenclature of detonations and ifs ) .we follow the usual classification of flow speeds relative to the fast , alfvn and slow mode speeds , so the allowed fronts are , , and . by analogy with the nomenclature of non - magnetized if , we call these fast - r , fast - d , slow - r and slow - d type if , respectively . the panels of fig .[ f : crit ] show regions of and space corresponding to evolutionary mhd ifs for several values of . in the figures , we see regions corresponding to the two distinct classes of r- and d - type solutions .the flows into the r - type fronts are super - fast or super - slow , while those into the d - type fronts are sub - fast or sub - slow . 
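the classification just introduced only requires the three characteristic speeds for propagation along the front normal. the sketch below (python, in arbitrary code units; the expressions are the standard oblique ideal-mhd ones with the isothermal sound speed in place of the adiabatic one, written out here as an assumption since the extracted text does not reproduce them) computes the slow, alfven and fast speeds from the density, sound speed and field components, and reports which of the four intervals a given normal flow speed falls in.

```python
import numpy as np

FOUR_PI = 4.0 * np.pi

def normal_wave_speeds(rho, c, bx, bz):
    """slow, alfven and fast speeds for propagation along the front normal (z),
    using the isothermal sound speed c (standard oblique ideal-mhd expressions)."""
    va2 = (bx ** 2 + bz ** 2) / (FOUR_PI * rho)    # total alfven speed squared
    vaz2 = bz ** 2 / (FOUR_PI * rho)               # normal-field alfven speed squared
    s = c ** 2 + va2
    d = np.sqrt(max(s ** 2 - 4.0 * c ** 2 * vaz2, 0.0))
    return np.sqrt(0.5 * (s - d)), np.sqrt(vaz2), np.sqrt(0.5 * (s + d))

def flow_regime(vz, rho, c, bx, bz):
    """1: super-fast, 2: between alfven and fast, 3: between slow and alfven, 4: sub-slow."""
    v_slow, v_alf, v_fast = normal_wave_speeds(rho, c, bx, bz)
    v = abs(vz)
    if v > v_fast:
        return 1        # fast-r regime
    if v > v_alf:
        return 2        # fast-d regime
    if v > v_slow:
        return 3        # slow-r regime
    return 4            # slow-d regime

state = dict(rho=1.0, c=1.0, bx=2.0, bz=1.0)
print(normal_wave_speeds(**state))      # ~ (0.24, 0.28, 1.16)
print(flow_regime(30.0, **state))       # 1: super-fast inflow
print(flow_regime(0.2, **state))        # 4: sub-slow inflow
```

since an evolutionary front keeps the flow inside the same interval on both sides, applying flow_regime to the upstream and downstream states of a candidate jump gives a quick check of which branch it belongs to.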
at the edges of the regions of solutionseither the velocity into the front is the alfvn speed or the exit velocity from the front is equal to a characteristic speed ( the fast mode speed at the edge of the fast - r region , etc . ) . for comparison ,the solid lines on these plots show the edges of the forbidden region for perpendicular magnetization ( , see paper i ) : these lines reach at the edges of the forbidden region for unmagnetized if , .since equation is a cubic in , quadratic in and linear in and , there is no simple analytic form for the boundaries of the regions .however , certain critical values can be determined analytically . for ,the slow - r - critical line terminates where it hits the alfvn speed at , while the position at which the fast d - critical line terminates is given by these points are linked , respectively , by steady switch - off and switch - on shocks to the points on the limiting slow - d and fast - r critical loci at which these loci hit the axis .the intercept between the slow - d critical locus and the axis is at for , beyond which the limiting value is as for d - critical hydrodynamic if . to illustrate the reason for this change in solution, we rewrite equation for as the flow leaving a front with no upstream perpendicular component of magnetic field can be either at the alfvn speed or at the velocity of the corresponding non - magnetized if .where the flow is in the region beyond the edge of the slow - d - critical region shown in figure [ f : crit ] ( a ) or ( b ) , the root of equation for flow out at the alfvn speed is not a real solution , since satisfying equation would require that .even for as small a ratio between magnetic and thermal energy upstream of the front as implied by , the effect of parallel magnetization on the slow - d - critical velocity is dramatic .only once ( so the alfvn speed in the upstream gas is below the unmagnetized d - critical speed ) does the fast - critical locus reach , so that the parallel magnetization may be ignored . as increases , the vertical line at the alfvn speed moves across the plot ( see figure [ f : crit ] ) , decreasing the region of fast - mode ifs and increasing that of slow - mode ifs . when ( a very strong parallel magnetic field ) , the ( slow - mode ) forbidden region is identical to that in the unmagnetized case ( independent of ) .if the upstream flow is at the alfvn velocity , , then the physical solution ( when due care is taken with the singularity of equation ) is often at , the ionization of the gas does not change the flow density and it remains at the alfvn speed .both classes of d - type front have ( rarefy the gas ) , while both classes of r - type have ( compress it ) . the perpendicular component of the magnetic field , , increases in a fast - r- or slow - d - type if , while it decreases in a fast - d- or slow - r - type .we will now study the internal structure of the fronts for one simple model .up to now we have assumed , by investigating the jump conditions , that the processes within the if take place on scales far smaller than those of interest for the global flow problem .in fact , the flow structure within an ionization front will vary smoothly on scales comparable to the ionization distance in the neutral gas .the flow may take several recombination lengths behind the front to relax to equilibrium . 
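before turning to the internal structure, the jump conditions themselves can be explored numerically. the sketch below (python/scipy, in arbitrary gaussian-style code units; the mass-flux condition [rho v_z] = 0, which is garbled in the extracted equations above, is the standard one, and the example numbers are purely illustrative) solves the mass, momentum and induction conditions, closed with p = rho c^2 on each side, for the downstream ionized state. different initial guesses converge to different roots of the same conditions, which is precisely why the evolutionary arguments above are needed to single out the physical solution.

```python
import numpy as np
from scipy.optimize import fsolve

FOUR_PI = 4.0 * np.pi

def downstream_state(up, c2, guess):
    """solve the isothermal mhd ionization-front jump conditions for the ionized side.

    up    : upstream dict with rho, vz, vx, bx, bz and isothermal sound speed c
    c2    : isothermal sound speed of the ionized gas
    guess : starting point (rho2, vz2, vx2, bx2); bz is continuous across the front
    """
    rho1, vz1, vx1, bx1, bz, c1 = (up[k] for k in ("rho", "vz", "vx", "bx", "bz", "c"))

    def residuals(x):
        rho2, vz2, vx2, bx2 = x
        return [
            rho2 * vz2 - rho1 * vz1,                                   # mass flux
            rho2 * (vz2 ** 2 + c2 ** 2) + bx2 ** 2 / (2 * FOUR_PI)
            - rho1 * (vz1 ** 2 + c1 ** 2) - bx1 ** 2 / (2 * FOUR_PI),  # normal momentum
            rho2 * vz2 * vx2 - bz * bx2 / FOUR_PI
            - rho1 * vz1 * vx1 + bz * bx1 / FOUR_PI,                   # transverse momentum
            vx2 * bz - vz2 * bx2 - vx1 * bz + vz1 * bx1,               # induction
        ]

    sol = fsolve(residuals, guess)
    return dict(zip(("rho", "vz", "vx", "bx"), sol))

# weakly magnetized neutral gas overrun at well above the fast speed (fast-r regime)
up = dict(rho=1.0, vz=30.0, vx=0.0, bx=2.0, bz=1.0, c=1.0)
print(downstream_state(up, c2=10.0, guess=(1.1, 26.0, 0.0, 2.3)))
print(downstream_state(up, c2=10.0, guess=(7.5, 4.0, 0.0, 15.0)))   # a different root
```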
herewe describe the internal structure of mhd if in one simple approximation , that the temperature of the gas varies smoothly through the front but the flow obeys the inviscid mhd equations throughout ( * ? ? ?* as used in the study of hydrodynamic if structure by ) .we shall see that , in this approximation , only fronts obeying the evolutionary conditions can have smooth structures . for the structures to be generic , the jumps across them must satisfy the strong evolutionary conditions , although singular classes of weakly evolutionary fronts with internal constraints on their flow structures are also possible .= 8 cm in this model the form of an if is given by the variation of the roots of equation with the temperature of the gas , specified by .the left hand side of this equation is a quartic independent of , which is positive at and zero at , and has two or four positive roots ( * ? ? ?* since the condition that its value is zero is the normal mhd shock condition for isothermal gas , ) .the right hand side is a quadratic , which is zero at and depends on through its ( negative ) scale .if the value of increases steadily through the front , the manner in which the solutions vary can be followed , as illustrated by the schematic plot , fig .[ f : schema ] . for the upstream conditions ( ), there will be either two or four solutions where the quartic curve crosses the axis .of these , one is the trivial solution , , and at most one corresponds to an evolutionary shock . when internal heating occurs in an if , the solutions at a given correspond to where the solid curve in fig .[ f : schema ] crosses the corresponding dashed curve , . if gas enters the if at the point marked , in region 2 between the alfvn speed and the fast mode speed , then it can move following the arrow as increases . for sufficiently large ,the dashed curve becomes tangent to the solid curve : when this occurs , the gas is moving at the slow- or fast - mode speed , and the if is called a critical front . in between the initial and final solutions , smooth , steady if solutions must remain between the same characteristic speeds as they were when they started .the strong evolutionary conditions give just those cases in which a front structure calculated for a smoothly varying , monotonically increasing ( and non - zero perpendicular magnetic field ) has a continuous solution from the upstream to the downstream case . when the upstream flow is at the fast or the slow critical speed , the l.h.s. of equation has a second root .thus for , this pair of roots disappears , and a forbidden region is generated .for there is no forbidden region . for any , the points where the solid curve crosses the dashed curve in fig .[ f : schema ] are related to each other by the isothermal mhd shock jump conditions , so a steady shock can form anywhere within the if structure for identical upstream and downstream conditions . however , since such a flow is over - specified , the equality of the speed between the shock and if is a coincidence which will be broken by any perturbation of the flow ( in which case the shock will escape from one or other side of the if ) .this situation is in direct analogy with strong r - type ifs in unmagnetized flows .strong d - type fronts , for which the exhaust leaves the front rather above the critical speed , can occur where the heating is not monotonic . 
for these, the highest temperature is attained when the flow is at the critical speed ( the curves in figure [ f : schema ] become tangent exactly at the highest value of ) , and as it subsequently cools the solution can move back up the other branch .these fronts will form a more restrictive limiting envelope on the allowed weak solutions than that given by the critical solutions .the evolutionary conditions are necessary but not sufficient , so this behaviour would be expected when more detailed physics was included .the actual envelope will correspond to the case for critical fronts with exhausts at the highest temperature attained within the front . since unmagnetizedif models suggest that any overshoot in the temperature of the gas is likely to be small , the envelope will probably not differ greatly from that found for critical solutions , although the transonic nature of these fronts can be important in determining the structure of global flows .equation suggests that no mhd flow can pass through the alfvn velocity ( where ) in a front unless it has zero perpendicular magnetic field .heating the gas will generally move the flow in regions 2 and 3 away from the alfvn speed in any case ( see figures [ f : schema ] and [ f : cusp ] ) .however , where the perpendicular magnetic field _ is _ zero , a smooth , weakly evolutionary , front structure can be found ( the internal constraint being zero perpendicular magnetic field ) .the internal structure will be identical to an unmagnetized if .indeed , the strong - d hydrodynamic front can become , by analogy , an ` extra - strong ' front which passes through both the alfvn and sound speeds ( for zero perpendicular field , the slow and fast velocities are each equal to one of these ) .by analogy with equation , the -components of velocity and magnetic field are zero everywhere if the mhd equations apply throughout the front ( except if it were to pass through the alfvn velocity ) .components in these directions _ can _ be generated if the velocity coupling between different components of the fluid electrons , ions , neutrals or dust is not perfect ( * ? ? ?* ; * ? ? ?* as in shock structures , ) .these components must , however , damp at large enough scales : far beyond the front the magnetic field must be in the same plane and of the same sign as the upstream field , from the evolutionary conditions .exactly this behaviour has been found to occur in time - dependent multifluid models of mhd shocks ( falle , private communication ) .a full treatment of ionization fronts in multicomponent material is , however , beyond the scope of the present paper .in this section , we discuss the development of the if in a magnetized region , by combining the well - understood development of if in unmagnetized environments with the classes of physical roots of equation found above .an if driven into finite density gas from a source which turns on instantaneously will start at a velocity greater than the fast - r - critical velocity . unless the density decreases rapidly away from the source , the speed of the front decreases so that eventually the ionized gas exhaust is at the fast - mode speed ( at the fast - r - critical velocity ) .when this occurs , two roots of equation merge , and become complex for smaller . 
as a result , the front will then have to undergo a transition of some sort .as in the unmagnetized case , if the size of the ionized region is large compared to the lengthscales which characterise the internal structures of shocks and ifs , the if will evolve through emitting ( one or more ) shocks . the evolution of an initial if discontinuity can be treated as a modified riemann problem because the speed of the if is determined by the flow properties on either side of it , together with the incident ionizing flux which we assume varies slowly .the development of this modified riemann problem will be self - similar , just as for conventional riemann problems .one complication is that the if may be located within a rarefaction wave , but this does not occur for the circumstances we discuss in the present section .the simplest possibility for a slowing fast - r - critical if is that it will become fast - d - type by emitting a single fast - mode shock .this has obvious limits to the cases where the magnetization is zero ( where the shock is a normal hydrodynamic shock ) , and where it is purely perpendicular .if the speed of the if is specified by the mass flux through it , then the leading shock driven into the surrounding neutral gas must be a fast - mode shock , since the upstream neutral gas must still be advected into the combined structure at more than the alfvn speed after the transition , and so only a fast - mode shock can escape . while there may be no fast - d - type solutions at the value of which applied for the fast - r - type front, the fast - mode shock moving ahead will act to increase the value of upstream of the if , since the ( squared ) increase in the perpendicular component of the magnetic field dominates over the increase in the gas density after the shock . at fast - r - criticality , there is in general a second solution with the same upstream and downstream states in which a fast - mode shock leads a fast - d - critical if , because the l.h.s . of equation (for which a zero value implies an isothermal shock solution ) is positive for large and for post shock flow at the alfvn speed ( ) , unless the flow into the front is at the alfvn velocity , or or is zero . for example , for a downstream state , there are two evolutionary solutions : a front with , , and , and a front with , , and .figure [ f : crit ] parts ( c ) and( b ) , respectively , contain points which correspond to these solutions .these two upstream states are linked by a fast - mode shock with the same velocity as the if .thus the emission of a single fast - mode shock is a valid solution of the modified riemann problem which occurs as the flow passes through criticality _ whatever _ the internal structure of the shock and if , so long as the evolutionary shocks and ifs exist .( since the l.h.s . of equation is greater than zero so long as for , an equivalent argument holds for slow - critical transitions . )it is possible that further waves may be emitted at the transition , for instance a slow - mode shock into the neutral gas together with a slow - mode rarefaction into the ionized gas . 
for the model resolvedif , this seems unlikely to occur unless the flow velocity reaches the slow - mode speed somewhere within the fast - d - critical if .these other solutions are not seen in our numerical simulations below .additional solutions would also make the development of the if non - unique , if the simpler possibility is allowed .the fast - critical transition may be followed using the jump conditions for a front with its exhaust at the fast - mode speed ( a fast - critical front ) . in the smoothly - varying model of section [s : resolv ] , the upstream and downstream states can be joined by a front in which an isothermal mhd shock is at rest in the if frame _ anywhere _ within the if structure , since the quantities conserved through the front are also conserved by the shock .the evolution of an if through criticality will occur by an infinitesimally - weak fast - mode wave at the exhaust of the if moving forward through its structure and strengthening until it eventually escapes into the neutral gas as an independent shock ( * ? ? ?* as illustrated for recombination fronts by ) .the escaping fast - mode shock leads to a near - perpendicular field configuration upstream of the if .this boost in the perpendicular component is required if the transition is to proceed though the fast - d type solutions , which , as figure [ f : crit ] illustrates , are near - perpendicular ( large ) except where is very large or very small .this will result in a rapid change in downstream parameters across a front where the upstream field is nearly parallel to the if .as an example , the flow downstream of the if will either converge onto or diverge from lines where the upstream magnetic field is parallel to the ionization front , and as a result may produce inhomogeneities in the structure of regions on various scales ( from bipolarity to clumps ) .= 8 cm as the velocity of the front decreases further , eventually it will approach the alfvn speed .we find that the evolution depends qualitatively on whether is smaller than . in fig .[ f : cusp ] , we plot as a function of for a range of values of ( the internal structure of the fronts in the simple model of section [ s : resolv ] ) .we take and , corresponding to our numerical example above , so the ratio of the upstream alfvn speed to the upstream isothermal sound speed is . for slightly larger than ( just super - alfvn ) , the curves remain flat until is close to and then turn upwards when they reach this value . if the value of in the fully ionized gas were less than , the solutions would move through a case where the flow is of uniform density and moves at the alfvn speed throughout the front before becoming slow - r - critical for some , as can be seen in the leftmost part of fig .[ f : cusp ] , for fronts in which the maximum is smaller than 12 . for larger values of ,however , the solutions develop a gradient discontinuity in their structure when the inflow is at the alfvn speed , . 
at this discontinuity ,the transverse component of the magnetic field becomes zero ( switch - off occurs ) .once the propagation speed of the if drops below the alfvn speed , there is no form of steady evolutionary structure with a single wave in addition to the if which is continuous with that which applied before .non - evolutionary type solutions do exist for fronts just below this limit , but in numerical simulations ( see section [ s : numeric ] ) these break up .a slow - mode switch - off shock moves into the neutral gas and a slow - mode switch - on rarefaction is advected away into the ionized gas . between them , these waves remove the parallel component of magnetic field at the d - type if between them .a precursor for the rarefaction is apparent in internal structure of the critical front , figure [ f : cusp ] .we find in numerical simulations that the if which remains is trans - alfvnic .note that a steady resolved structure is possible for such ( weakly evolutionary ) fronts only because it has exactly zero parallel field throughout .analogous processes must occur in an accelerating _fast_-mode if with weak parallel magnetization when is very large : in fig .[ f : crit ] ( d ) , it is the fast - d - critical line rather than the slow - r - critical line which meets the alfvn locus at finite .= 8 cm l ( a ) + + ( b ) + = 8 cm = 8 cm in this section , we present some numerical examples to illustrate the processes discussed in the preceding sections . we have implemented linear scheme a as described by falle , komissarov & joarder in one dimension , and added an extra conserved variable corresponding to the flow ionization . to study the local development of the ifs we have neglected recombination terms and just chosen to set the mass flux through the ionization flux as a function of time .figure [ f : evol](a ) shows the propagation of an if into gas with density , , .we reset the flow temperature at the end of each time step so that ( the temperature ratio between ionized and neutral gas is rather small so that any shells of shocked neutral gas are more easily resolved ) . for these values ,the characteristic speeds in the neutral gas are , and . in the first figure, the incident flux varies as , and the transitions through fast - r , fast - d and slow - d are clearly visible , while the slow - r stage ( which is often narrow in the parameter space of fig .[ f : crit ] ) is less clear . in figure[ f : evol](b ) the upstream conditions are the same , but the flux was set to a constant value of in order to isolate the slow - r transition .this is between the upstream slow - mode and alfvn speeds , and equation predicts that a slow - r transition exists which will increase the flow density from to ( for comparison , the jump relations require the density to increase to 1.75 ) . in the simulation , a weak fast - mode wave propagates off from the if initially , but does not greatly change the upstream conditions .the slow - r if which follows it increases the density , and is backed by a rarefaction because of the reflective boundary condition applied at the left of the grid .the small overshoot within the front is presumably due to numerical viscosity , and can be removed by broadening the if ( * ? ? 
?* by the method described in ) .the plateau between the rarefaction and the if has density , which is in adequate agreement with the jump conditions ( particularly when account is taken of the slight perturbation of the conditions upstream of the front ) .these numerical solutions illustrate the orderly progression of a magnetized if through the various transitions described in the previous section . in a realistic model , however , the density perturbations generated by the transitions will have important effects on the evolution , as the consequent changes in recombination rates alter the flux incident on the if .these processes should ideally be studied in the context of a two- or three - dimensional global model for the evolution of magnetized regions , which is beyond the scope of the present paper . in figure[ f : nevol ] , we illustrate the development of the flow from initial conditions in which a non - evolutionary if is stationary in the grid .the upstream ( neutral ) gas has density , , and and moves into the front at ( with no transverse velocity ) , while the downstream ( fully ionized ) gas has , , , and so the if is of type .we set the pressure as above , and the value of the incident flux as so the initial if would remain steady in the grid . the if breaks up immediately , driving a slow - mode shock away to the right , into the neutral gas , while a slow - mode rarefaction moves away to the left , into the ionized gas .the d - type if which remains is marginally trans - alfvnic , but has zero transverse magnetic field . to study this further , in fig .[ f : dcrit ] , we illustrate a simulation of an if close to the ( hydrodynamic ) d - critical condition , with a parallel magnetic field which makes it trans - alfvnic . when perturbed with small but significant perpendicular field components , this if again switches off these fields by emitting slow - mode waves .it remains stable , satisfying the hydrodynamic jump conditions as a weakly evolutionary solution of the mhd jump conditions .we have presented the jump conditions for obliquely - magnetized ionization fronts . we have determined the regions of parameter space in which physical if solutions occur , and have discussed the nature of the interconversions between the types of front .fast - d and slow - r solutions with high transverse fields are found in the region of front velocities forbidden by the hydrodynamic jump conditions : in the evolution of an region , the fast - mode shock sent into the neutral gas by the fast - critical transition will act to generate these high transverse fields . 
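As a hedged illustration of the wave speeds that organize the regimes discussed above, the short sketch below evaluates the isothermal slow-, Alfven- and fast-mode speeds of a uniform state from its density, isothermal sound speed and magnetic field components. The numerical values are placeholders only (the specific upstream states quoted in this section are not reproduced here), and the variable names are ours rather than the paper's notation.

```python
import numpy as np

def characteristic_speeds(rho, Bx, By, c_iso, mu0=1.0):
    """Isothermal MHD wave speeds for propagation along x in a uniform state.

    Returns (slow, alfven_x, fast).  Units and mu0 are illustrative.
    """
    vA2 = (Bx**2 + By**2) / (mu0 * rho)   # square of the total Alfven speed
    vAx2 = Bx**2 / (mu0 * rho)            # square of the Alfven speed along x
    a2 = c_iso**2                         # square of the isothermal sound speed
    s = a2 + vA2
    disc = np.sqrt(np.maximum(s**2 - 4.0 * a2 * vAx2, 0.0))
    fast = np.sqrt(0.5 * (s + disc))
    slow = np.sqrt(0.5 * (s - disc))
    return slow, np.sqrt(vAx2), fast

# e.g. classify a front by comparing its speed with these values:
print(characteristic_speeds(rho=1.0, Bx=0.5, By=0.3, c_iso=1.0))
```

Comparing the front speed with the three values returned above is what distinguishes, for a given upstream state, the fast-R, fast-D, slow-R and slow-D ranges referred to in the text.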
in the obliquely - magnetized case ,the fronts are significantly perturbed as long as the alfvn speed in the neutral gas is greater than .however , the stability of parallel - magnetized weakly evolutionary if means that the flow may still leave at the isothermal sound speed in the ionized gas over much of the surface of magnetized globules exposed to ionizing radiation fields .large ( ) , highly ordered magnetic fields have been observed in the molecular gas surrounding some regions .roberts suggested that the highest observed magnetizations in s106 are associated with unshocked rather than shocked gas , as a result of the relatively low density and velocity shift observed for the strongly magnetized gas .they suggested that the magnetic field becomes tangled close to the if , leading to the decrease in detectable magnetization .it is interesting to compare these results with the example fast - critical front we discuss in section [ s : devel ] . scaling the parameters of this front to an exhaust hydrogen density of , typical of an ultracompact region , a mean mass per hydrogen nucleus of , and a sound speed in the ionized gas of ,the limiting front takes the flow from an upstream state with , and to an exhaust with , and ( while the fast - r - critical speed is almost exactly twice the exit speed of the front , this ratio decreases in more strongly magnetized fronts ) .the limiting front takes the flow from , and to the same final state . a fast - mode shockahead of the front and at rest with respect to it would change the upstream state from that of the limiting front to that of the limiting front .this fast - mode shock , which precedes a limiting fast - weak - d type front , boosts the -component of the magnetic field from to , with a change in the -velocity and only a change to the -velocity component .a succeeding slow - mode shock would further increase the -velocity component and gas density while weakening the -component of magnetic field , without recourse to field - tangling .if we tentatively identify oh component b of roberts as fast - shocked material and component a as doubly - shocked material , component b is more edge - brightened and has a smaller blueshift than component a as would be expected .component a is kinematically warmer and most blue shifted towards the centre of the region .the strong line - of - sight magnetic fields in s106 are seen at the edges of the region , in a ` toroidal ' distribution .the ( poorly resolved ) line - of - sight velocity of component b has little gradient in the equatorial plane of the region , but this might result in part from a combination of flow divergence and the value of ( which is measured close to the centre of the region ) being rather larger than that of in our example . while the qualitative properties of this assignment are attractive , it remains to calculate a proper model tuned to the properties of the region , in particular its geometry .nevertheless , the present discussion at least illustrates how both fast and slow shocks should be considered in the analysis of regions with well - ordered magnetization .we thank sam falle and serguei komissarov for helpful discussions on evolutionary conditions , and the referee for constructive comments which brought several issues into sharper focus .rjrw acknowledges support from pparc for this work .
we present the jump conditions for ionization fronts with oblique magnetic fields . the standard nomenclature of r- and d - type fronts can still be applied , but in the case of oblique magnetization there are fronts of each type about each of the fast- and slow - mode speeds . as an ionization front slows , it will drive first a fast- and then a slow - mode shock into the surrounding medium . even for rather weak upstream magnetic fields , the effect of magnetization on ionization front evolution can be important . keywords : mhd ; hii regions ; ism : kinematics and dynamics ; ism : magnetic fields .
this paper deals with phase transition models ( pt models for short ) of hyperbolic conservation laws for traffic .more precisely , we focus on models that describe vehicular traffic along a unidirectional one - lane road , which has neither entrances nor exits and where overtaking is not allowed . in the specialized literature ,vehicular traffic is shown to behave differently depending on whether it is free or congested .this leads to consider two different regimes corresponding to a _ free - flow phase _ denoted by and a _ congested phase _ denoted by .the pt models analyzed here are given by a scalar conservation law in the free - flow phase , coupled with a system of conservation laws in the congested phase .the coupling is achieved via _ phase transitions _, namely discontinuities that separate two states belonging to different phases and that satisfy the rankine - hugoniot conditions .this two - phase approach was introduced by colombo in and is motivated by experimental observations , according to which for low densities the flow of vehicles is free and approximable by a one - dimensional flux function , while at high densities the flow is congested and covers a -dimensional domain in the fundamental diagram , see ( * ? ? ?* figure 1.1 ) .hence , it is reasonable to describe the dynamics in the free regime by a first order model and those in the congested regime by a second order model .colombo proposed to let the free - flow phase be governed by the classical lwr model by lighthill , whitham and richards , which expresses the conservation of the number of vehicles and assumes that the velocity is a function of the density alone ; on the other hand , the congested phase includes one more equation for the conservation of a linearized momentum .furthermore , his model uses a greenshields ( strictly parabolic ) flux function in the free - flow regime and one consequence is that can not intersect , see ( * ? ? ?* remark 2 ) .the two - phase approach was then exploited by other authors in subsequent papers , see .for instance , in goatin couples the lwr equation for the free - flow phase with the arz model formulated by aw , rascle and zhang for the congested phase . in the authors intentions such a model has the advantage of correcting the drawbacks of the lwr and arz models taken separately . recall that this pt model has been recently generalized in .another variant of the pt model of colombo is obtained in , where the authors take an arbitrary flux function in and consider this phase as an extension of lwr that accounts for heterogeneous driving behaviours . in this paperwe further generalize the two pt models treated in and . for more clarity, we refer to the first model as the pt model and to the latter as the pt model .we omit these superscripts only when they are not necessary .we point out that in the authors assume that , while in the authors assume that . herewe do not impose any assumption on the intersection of the two phases for both the pt and the pt models .moreover , in order to avoid the loss of well - posedness of the riemann problems in the case as noted in ( * ? ? ?* remark 2 ) , we assume that the free phase is characterized by a unique value of the velocity , that coincides with the maximal one . in the next sections , we study riemann problems coupled with a local point constraint on the flow . 
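Before turning to the two-phase solvers, it may help to see how a local point constraint acts at the discrete level in the simplest setting, the scalar LWR equation that governs the free phase. The following minimal sketch (our own illustration, with made-up parameter values, not the constrained Riemann solvers constructed in the later sections) performs one Godunov step in which the numerical flux through one marked interface is capped by a constant F, mimicking a toll gate of capacity F.

```python
import numpy as np

def lwr_constrained_step(rho, dx, dt, v_max=1.0, R=1.0, F=0.1, j_c=None):
    """One Godunov step for rho_t + f(rho)_x = 0 with f(rho) = rho*v_max*(1 - rho/R).

    The flux through the cell interface with index j_c is capped by F,
    which models a pointwise constraint (e.g. a toll gate) at that location.
    """
    f = lambda r: r * v_max * (1.0 - r / R)
    sigma = R / 2.0                                        # argmax of the concave flux
    demand = lambda r: np.where(r <= sigma, f(r), f(sigma))
    supply = lambda r: np.where(r <= sigma, f(sigma), f(r))
    flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))   # Godunov (demand/supply) flux
    if j_c is not None:
        flux[j_c] = min(flux[j_c], F)                      # enforce the point constraint
    rho[1:-1] -= dt / dx * (flux[1:] - flux[:-1])          # boundary cells left untouched
    return rho

# usage sketch: rho = lwr_constrained_step(rho, dx=0.01, dt=0.004, j_c=len(rho)//2)
```

In the congested phase the same capping idea applies to the system of two equations, but the construction of the admissible states on either side of the constraint is exactly what the solvers of the next sections make precise.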
more precisely , we analyze in detail two constrained riemann solvers corresponding to the cases and .we recall that a local point constraint on the flow is a condition requiring that the flow of the solution at the interface does not exceed a given constant quantity .this can model , for example , the presence at of a toll gate with capacity .we briefly summarize the literature on conservation laws with point constraint recalling that : * the lwr model with a local point constraint is studied analytically in and numerically in ; * the lwr model with a non - local point constraint is studied analytically in and numerically in ; * the arz model with a local point constraint is studied analytically in and numerically in ; * the pt model with a local point constraint is analytically studied in . to the best of our knowledge ,the model presented in is so far the only pt model with point constraint .the paper is organized as follows . in section [ sec : ptmodels ] we introduce the pt and pt models by giving a unified description valid in both cases . in particular, we list the basic notations and the main assumptions needed throughout the paper and we give a general definition of admissible solutions to a riemann problem for a pt model . in sections [ sec :inter ] and [ sec : noninter ] we outline the costrained riemann solver in the case of intersecting phases and non - intersecting phases , respectively . section [ sec : tv ] contains some total variation estimates that may be useful to compare the difficulty of applying the two solvers in a wave - front tracking scheme ; see and the references therein . in section [ sec : simu01 ] we apply the pt model to compute an explicit example reproducing the effects of a toll gate on the traffic along a one - lane road .finally , in section [ sec : tech ] we collect all the proofs of the properties previously stated .in this section , we introduce the pt models , collect some useful notations and recall the main assumptions on the parameters already discussed in .see [ fig : notations ] for a picture with the main notations used throughout the article .the fundamental parameters that are common to pt and pt are the following : * is the unique velocity in the free phase , namely it is the maximal velocity ; * ] represents the density and the ( linearized ) momentum of the vehicles , while and denote the domains of the free - flow phase and of the congested phase , respectively .observe that in the density is the unique independent variable , while in the independent variables are both and .moreover , the ( average ) speed and the flow of the vehicles are defined as v^p(u ) \doteq\frac{q}{\rho } - p(\rho ) & \text{for the pt model } , \end{cases } & f(u ) \doteq \rho \ , v(u).\end{aligned}\ ] ] in the pt model , is the equilibrium velocity defined by \ni \rho \mapsto v_{eq}^a(\rho ) \doteq \left(\dfrac{r}{\rho } - 1\right)\left(\dfrac{v_f \,\sigma}{r-\sigma } + a \ , ( \sigma - \rho)\right),\ ] ] where and are fixed parameters , while the term is a perturbation which provides a thick fundamental diagram in the congested phase ( in accordance with the experimental observations depicted in ( * ? ? ? * figure 3.1 ) ) . 
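To fix ideas, the sketch below evaluates the congested-phase velocity, flux and Lagrangian marker of the pt^p model for a hypothetical pressure law that satisfies the smoothness and monotonicity assumptions recalled just below; all numerical values are placeholders and do not correspond to the parameters used in the paper.

```python
import numpy as np

R = 1.0                                      # maximal density (placeholder value)
p = lambda rho: 0.5 * (rho / R) ** 2         # hypothetical pressure law: p' > 0, 2p' + rho*p'' > 0

def velocity(rho, q):
    """v = q/rho - p(rho) in the congested phase of the pt^p model."""
    return q / rho - p(rho)

def flux(rho, q):
    return rho * velocity(rho, q)

def lagrangian_marker(rho, q):
    """w = q/rho, transported with the flow in the congested phase."""
    return q / rho

# states of the congested region lie between two level curves of w:
rho_grid = np.linspace(0.3, R, 5)
for w in (1.2, 1.5):                         # stand-ins for w_- and w_+
    print(w, flux(rho_grid, w * rho_grid))
```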
observe that for any and we have that coincides with the newell - daganzo velocity on ] satisfies ;{{\mathbb{r } } } ) , & & p'(\rho)>0 , & & 2p'(\rho ) + \rho \, p''(\rho)>0 & \text{for every } \rho \in ( 0,r].\end{aligned}\ ] ] a typical choice is , , see .let , and be fixed parameters such that by definition , with the equality holding if and only if .then , we can introduce the free and congested domains \times { { \mathbb{r}}}\,:\ , q = q(\rho)\bigr\ } , & & { \omega_c}\doteq \bigl\ { u \in [ \sigma_-^c , r ] \times { { \mathbb{r}}}\,:\ , 0\le v(u)\le v_c,\ , w_- \le \dfrac{q}{\rho } \le w_+ \bigr\},\end{aligned}\ ] ] where and }{(r-\rho)[v_f\sigma + a \ , ( \sigma-\rho)(r-\sigma ) ] } & \text{for the pt model},\\[10pt ] \rho \left[v_f+p(\rho)\right ] & \text{for the pt model}. \end{cases}\ ] ] observe that for any and .moreover , is the point in with minimal -coordinate .furthermore , we denote and \bigr\ } , \\ & { \omega_c}^- \doteq { \omega_c}\setminus{\omega_f}^+ , & & { \omega_c}^{ex } \doteq \bigl\ { u \in ( 0,r ] \times { { \mathbb{r}}}\,:\ , v(u ) \in [ 0 , v_f ] ,\ w(u ) \in [ w_-,w_+]\bigr\},\end{aligned}\ ] ] where we point out that observe that in the congested phase is a lagrangian marker , since it satisfies as long as the solution to attains values in .we introduce functions that are practical in the definition of the riemann solvers given in the next sections : \to ( 0,r ] , & & \rho_1 ^ 0(w ) \doteq \begin{cases } r & \text{for the pt model } , \\p^{-1}(w ) & \text{for the pt model } , \end{cases } \\ & \psi_2^\pm \colon \omega \to ( { \omega_c}\cup{\omega_f}^+ ) , & & u=\psi_2^\pm(u_o ) : \begin{cases } w(u)=w_\pm,\\ v(u)=v(u_o ) , \end{cases } \\ & u_*\colon({\omega_c}\cup{\omega_f}^+)^2 \to ( { \omega_c}\cup{\omega_f}^+ ) , & & u = u_*(u_-,u_+ ) : \begin{cases } w(u ) = w(u_-),\\ v(u ) = v(u_+ ) , \end{cases } \\ & \lambda \colon \left\{(u_-,u_+ ) \in \omega^2 \,:\ , \rho_- \neq \rho_+\right\ } \to { { \mathbb{r } } } , & & \lambda(u_- , u_+ ) \doteq \frac{f(u_+)-f(u_-)}{\rho_+-\rho_-}.\end{aligned}\ ] ] we underline that both for the pt and the pt models .observe that , according to the definition of the lax curves given in the next section , the above functions have the following geometrical meaning , see figure [ fig : notations ] : * the point is the intersection of the lax curve of the first family passing through and ; * for any ] as done in , then the above condition reduces to .furthermore , by we have for every ] , for the pt model we have that the first lax curves are strictly concave .let us consider the riemann problem for the pt model , namely the cauchy problem for with initial datum we recall the following general definition of solution to , given in .[ def : colombo ] for any , an _ admissible _ solution to the riemann problem , is a self - similar function that satisfies the following conditions .* if or , then is the usual lax solution to , ( and it does not perform any phase transition ) . 
* if and , then there exists such that : * * and for all ; * * the rankine - hugoniot jump conditions = f(u(t,\lambda \ , t^+ ) ) - f(u(t,\lambda \ , t^-))\ ] ] are satisfied for all ; * * the functions are respectively the usual lax solutions to the riemann problems =0,\\ u(0,x)=\begin{cases } u(t,\lambda \ , t^+ ) & \hbox{if } x<0,\\ u_r & \hbox{if } x>0 .\end{cases } \end{cases}\end{aligned}\ ] ] * if and , conditions entirely analogous to the previous case are required .we denote by and the riemann solvers associated to the riemann problem , , respectively in the cases of intersecting and non - intersecting phases .we point out that these riemann solvers are defined below according to definition [ def : colombo ] , in the sense that (x / t) ] are admissible solutions to the riemann problem , . besides the initial condition, we enforce a local point constraint on the flow at , i.e. we add the further condition that the flow of the solution at the interface is lower than a given constant quantity and impose in general , is not satisfied by an admissible solution to , .for this reason we introduce the following concept of admissible constrained solution to , , .[ def:42 ] for any , an _ admissible constrained _ solution to the riemann problem , , is a self - similar function such that and satisfy : * ; * the functions are admissible solutions to the riemann problems for with riemann data respectively given by we denote by and the constrained riemann solvers associated to the riemann problems , , , respectively in the cases of intersecting and non - intersecting phases .we point out that these riemann solvers are defined below according to definition [ def:42 ] , in the sense that (x / t) ] are admissible constrained solutions .we let ( with a slight abuse of notation ) (t,0^\pm ) ) \le f \ } , \\ &\mathcal{s}_f \doteq \mathcal{s } & \text { in } & & \mathcal{d}_1 \doteq \ { ( u_\ell , u_r ) \in \omega^2 \,:\ , f(\mathcal{s}[u_\ell , u_r](t,0^\pm ) ) \le f \},\end{aligned}\ ] ] and we denote . in the next sections ,we introduce the riemann solvers and discuss their main properties , such as their consistency , -continuity and their invariant domains . in this regard , we recall the following definitions .[ def : cons ] a riemann solver is said to be consistent if it satisfies both the following statements for any and : (\bar x ) = u_m & & \rightarrow & & & \begin{cases } \mathcal{t}[u_\ell , u_m](x)= \begin{cases } \mathcal{t}[u_\ell , u_r](x ) & \hbox{if } x < \bar x , \\u_m & \hbox{if } x \ge \bar x , \end{cases } \\[10pt ] \mathcal{t}[u_m , u_r](x ) = \begin{cases } u_m & \hbox{if } x < \bar x , \\\mathcal{t}[u_\ell , u_r](x ) & \hbox{if } x \geq \bar x , \end{cases } \end{cases } \\ \begin{rcases } \mathcal{t}[u_\ell , u_m](\bar x)=u_m \\\label{p2}\tag{ii } \mathcal{t}[u_m , u_r](\bar x)=u_m \end{rcases } & & \rightarrow & & & \mathcal{t}[u_\ell , u_r](x)= \begin{cases } \mathcal{t}[u_\ell , u_m](x ) & \hbox{if } x < \bar x , \\ \mathcal{t } [ u_m , u_r](x ) & \hbox{if } x \geq \bar x .\end{cases}\end{aligned}\ ] ] we point out that the consistency of a riemann solver is a necessary condition for the well - posedness of the cauchy problem in .an invariant domain for is a set such that ({{\mathbb{r } } } ) \subseteq \mathcal{i} ] whenever &&&\cup \ { ( u_\ell , u_r ) \in { \omega_f}^-\times{\omega_c}\,:\ , \lambda(u_\ell , u^c_-)\ge\lambda_1(u^c_- ) \ } \\[2pt]&&&\cup \ { ( u_\ell , u_r ) \in { \omega_f}^+\times{\omega_c}\,:\ , l_{w(u_\ell)}''(\rho_\ell ) \le 0 \}. 
\end{array}\ ] ] 2 .[ s2 ] if , and , then we let (x)\doteq \begin{cases } \mathcal{r}[u_\ell,\psi_1^c(u_\ell)](x ) & \text{for } x<\lambda(\psi_1^c(u_\ell),\psi_1^f(u_\ell)),\\ \mathcal{r}[\psi_1^f(u_\ell),u_r](x ) & \text{for } x>\lambda(\psi_1^c(u_\ell),\psi_1^f(u_\ell ) ) .\end{cases}\ ] ] 3 .[ s3 ] if , and , then we let (x)\doteq \begin{cases } u_\ell&\text{for } x<\lambda(u_\ell , u^c_-),\\ \mathcal{r}[u^c_-,u_r](x ) & \text{for } x>\lambda(u_\ell , u^c_- ). \end{cases}\ ] ] 4 .[ s4 ] if , and , then we let (x)\doteq \begin{cases } u_\ell&\text{for } x<\lambda(u_\ell,\psi_1^c(u_\ell)),\\ \mathcal{r}[\psi_1^c(u_\ell),u_r](x ) & \text{for } x>\lambda(u_\ell,\psi_1^c(u_\ell ) ) .\end{cases}\ ] ] [ rem : specialrs ] notice that differs from ( corresponding to ) only in the cases described in [ s2 ] , [ s3 ] and [ s4 ] , namely ] if and only if satisfies one of the following conditions : in particular , for the pt model we have that ] ( corresponding to ) if and only if ; this is also the case for the pt model if . in the next propositionwe list the main properties of ; the proof is a case by case study and is deferred to section [ sec : tec2 ] . [ prop : cc2 ] the riemann solver is -continuous and consistent . before introducing the riemann solver , we observe that in the present case &\cup & \ { ( u_\ell , u_r ) \in { \omega_c}\times { \omega_f}\,:\ , f(\psi_1^f(u_\ell ) ) \le f \ }\cup \ { ( u_\ell , u_r ) \in { \omega_f}^- \times { \omega_c}\,:\ , \min\{f(u_\ell),f(\psi_2 ^ -(u_r))\ } \le f \ } \\[2pt]&\cup & \ { ( u_\ell , u_r ) \in { \omega_f}^+ \times { \omega_c}\,:\ , f(u_*(u_\ell , u_r ) ) \le f \}. \end{array}\ ] ] 0.27 [ c , c] [ c , b] [ l , b] [ c , b] [ c , c] [ l , b] [ c , b] [ l , c] and given in definition [ def:04].,title="fig:",scaledwidth=88.0% ] 0.27 [ c , c] [ c , b] [ l , b] [ c , b] [ c , c] [ c , c] [ l , b] [ l , c] and given in definition [ def:04].,title="fig:",scaledwidth=88.0% ] 0.27 [ c , c] [ c , b] [ l , b] [ c , b] [ c , c] [ c , c] [ c , b] [ l , b] [ l , c] and given in definition [ def:04].,title="fig:",scaledwidth=88.0% ] [ def:04 ] the constrained riemann solver associated to , , is defined as (x)\doteq \begin{cases } \mathcal{s}[u_\ell , u_r](x ) & \hbox{if } ( u_\ell , u_r ) \in \mathcal{d}_1,\\[5pt ] \begin{cases } \mathcal{s}[u_\ell,\hat{u}](x ) & \hbox{if } x<0,\\ \mathcal{s}[\check{u},u_r](x ) & \hbox{if } x>0 , \end{cases } & \hbox{if } ( u_\ell , u_r ) \in \mathcal{d}_2 , \end{cases}\ ] ] where and are uniquely selected by the conditions [ rem : special ] notice that , if , then the selection criterion for and given above does not coincide with the one given in definition [ def:01 ] with if and only if satisfies one of the following conditions and in this case and . for this reason , in [ fig : uhatucheck11 ] we specify the selection criterion for and given above only in these cases . as a consequence, we have that \ne \mathcal{s}_f[u_\ell , u_r] ] and ] and ] in . to do so , it is sufficient to consider the case where ] is the juxtaposition of ] , we are left to consider the following three cases . * if , then it is sufficient to exploit the fact that to obtain that both and converge to and to be able to conclude . * if and , then and we can assume that ] is described by either [ r4 ] or [ r4b ] . in the first case we can argue as in the previous case . in the latter case ,we exploit the fact that and to obtain that both and converge to and to be able to conclude .[ lem:02 ] the riemann solver is consistent . 
since = u ] can not contain any contact discontinuity , since no wave can follow a contact discontinuity .hence , to prove or we are left to consider the cases for which satisfies [ r4 ] or [ r4b ] , , and the result easily follows . in the next lemmaswe prove proposition [ prop : cc ] .the riemann solver is -continuous . by the -continuity of , it suffices to consider with and to prove that \to \mathcal{r}[u_\ell , u_r] ] in , where and . for notational simplicity , belowwe denote , and .+ we first consider the cases with . *assume and .then , either or . in the first case , and \to u_\ell ] . moreover , in both cases and \to \mathcal{r}[u_\ell , u_r] ] .moreover , it is sufficient to consider the cases and . in the first case , and \to \mathcal{r}[u_*,u_r] ] . *assume , and .then , and . as a consequence , \to \mathcal{r}[u_\ell,\psi_1(u_\ell)] ] .* assume , and .then , and therefore \to \mathcal{r}[u_\ell,\psi_2 ^ -(u_r)] ] in . *assume , and .in this case , and \to u_\ell ] .we finally consider the cases with . *assume and .then , and . as a consequence , \to \mathcal{r}[u_\ell,\hat{u}] ] .* assume and .then , and therefore \to \mathcal{r}[u_\ell,\hat{u}] ] , while in the latter ( whether or ) we have and \to \mathcal{r}[\check{u},u_r] ] and \to \mathcal{r}[\check{u},u_r] ] and \to \mathcal{r}[\check{u},u_r] ] and ] and take .observe that , and does not hold true .finally , we accomplish the proof of proposition [ prop : idr ] on the minimal invariant domains for .we remark that is the point of intersection between the lines and : if , this point belongs to the region , otherwise it is in .let us first prove [ i11 ] .the invariance of is an easy consequence of definition [ def:01 ] , hence we are left to prove the minimality of .let be an invariant domain for containing .then , has to contain ({{\mathbb{r } } } ) = { \omega_f}\cup \{u\in{\omega_c}\,:\ , f(u)= f\ } \cup \mathcal{i}_2.\ ] ] as a consequence , has to contain also ({{\mathbb{r } } } ) = \begin{cases } \mathcal{i}_1 & \text{if } f \le v\,\sigma_- , \\ \{u \in \mathcal{i}_1 \,:\ , f(\psi_1(u ) ) \ge f\ } & \text{if } f > v\,\sigma_- .\end{cases}\end{aligned}\ ] ] finally , if , then has to contain also ({{\mathbb{r } } } ) = \{u \in \mathcal{i}_1 \,:\ , f(\psi_1(u ) ) \le f\}.\ ] ] in conclusion we proved that . now , to prove [ i12 ] it is sufficient to observe that by definition [ def:01 ] we have that there exist such that the values attained by ] is a single phase transition with satisfying one of the conditions , ,. hence , let in and consider the following cases .* assume that satisfies with and .in this case , it is sufficient to exploit the fact that and to obtain that both and converge to , and to be able to conclude . *assume that satisfies with and .in this case , it is sufficient to exploit the fact that and to obtain that both and converge to , , and to be able to conclude . 
*assume that satisfies with .in this case , it is sufficient to exploit the fact that to obtain that , , and to be able to conclude .[ lem:08 ] the rieman solver is consistent .since = u ] can not contain any contact discontinuity , since no wave can follow a contact discontinuity .by lemma [ lem:02 ] and by remark [ rem : specialrs ] , to prove or we are left to consider the cases for which satisfies with such that and , or with such that and , or with such that and .hence , the result easily follows .in this final section , we accomplish the proof of proposition [ prop : cc3 ] .recall that the same constrained riemann solver for the pt model has already been studied in .the riemann solver is not -continuous .indeed , take and consider with and . in this case ] in , since = u_\ell ] to converges to where if and if .the riemann solver satisfies but not .assume that (\bar x)=u_m=\mathcal{s}_f[u_m , u_r](\bar x) ] can not present any contact discontinuity , otherwise it would not be possible to juxtapose ] . hence , we are left to consider satisfying with , or with , or with ({{\mathbb{r}}}_-)$ ] . in all these cases , it is easy to see that holds true .finally , by the example of lemma [ lem:05 ] we have that does not satisfy .mdr thanks rinaldo m. colombo and paola goatin for useful discussions .b. andreianov , c. donadello , u. razafison , and m. d. rosini .qualitative behaviour and numerical approximation of solutions to conservation laws with non - local point constraints on the flux and modeling of crowd dynamics at the bottlenecks . , 50(5):12691287 , 2016 .
we generalize the phase transition model studied in , which describes the evolution of vehicular traffic along a one - lane road . two different phases are taken into account , according to whether the traffic is low or heavy . the model is given by a scalar conservation law in the _ free - flow _ phase and by a system of two conservation laws in the _ congested _ phase . in particular , we study the resulting riemann problems in the case where a local point constraint on the flux of the solutions is enforced .
coupling a 3d fluid flow model and a system of hyperbolic equations posed on a 1d graph is a well established approach for numerical simulations of blood flows in a system of vessels .such a geometric multiscale strategy is particularly efficient , when the attention to local flow details and the qualitative assessment of global flow statistics are both important .the relevance to cardiovascular simulations and challenging mathematical problems of coupling parabolic 3d and hyperbolic 1d equations put 3d1d flow problems in the focus of intensive research .thus , the coupling of a 3d fluid / structure interaction problem with a reduced 1d model merged to outflow boundary , which acts as an absorbing device , was studied in .the coupling of a 3d fluid problem with multiple downstream 1d models in the context of a finite element method was considered in . in , a system of a 3d fluid / structure interaction problem and a 1d finite element method model of the whole arterial treewas implemented to model the carotid artery blood flow ; and in a unified variational formulation for multidimensional models was introduced .a splitting method , extending the pressure - correction scheme to 3d1d coupled systems , was studied in . in most of these studies ,the 3d model was a generic fluid - elasticity or rigid fluid model , while numerical validations were commonly done for cylindric type 3d domains ( with rigid or elastic walls ) ; several authors considered geometries with bifurcation or constrained geometries ( modeling a stenosed artery ) .more complicated geometries occur in simulations of blood flows , if one is interested in modeling the effect of endovascular implants , such as inferior vena cava ( ivc ) filters . in numerical simulations ,a part of a vessel with an intravenous filter leads to the computational 3d domain with strongly anisotropic inclusions .a downstream flow behind the implant may exhibit a complex structure with traveling vortices , swirls , and recirculation regions ( the latter may occur if plaque is captured by the filter ) .moreover , the ivc flow is strongly influenced by the contraction of the heart , and both forward ( towards the heart ) and reverse ( from the heart ) flows occur within one cardiac cycle .downstream coupling conditions for such flows may be a delicate issue .thus , the flow over an ivc filter is an interesting and challenging problem for a 3d-1d flow numerical solver .the coupling conditions of 3d and 1d fluid models and their properties were studied by several authors .coupling conditions and algorithms based on subdomain iterations were introduced in , and the stability properties of each subproblem were analyzed separately .the first analysis of two models together was done in . in that paper , it was noted that if the navier - stokes equations are taken in the rotation form and suitably coupled with a 1d downstream flow model , then one can show a bound for the joint energy of the system .it is , however , well known that using a finite element method for the rotation form of the navier - stokes equations needs special care , and setting appropriate outflow boundary conditions can be an issue . 
in the present paper, we introduce an _ energy consistent _ coupling with a 1d model for the convection form of the navier - stokes equations .the joint energy of a coupled 3d-1d model is appropriately balanced and dissipates for viscous flows .handling highly anisotropic structures is a well - known challenge in numerical flow simulations and analysis .there are only a few computational studies addressing the dynamics of blood flows in vessels with implanted filters .recently , vassilevski et al . numerically approached the problem of intravenous filter optimization using a finite - difference method on octree cartesian meshes to resolve the geometry of implants . in that paper , it was also discussed how the effect of an implant can be accounted in a 1d model through a modification of a vessel wall state equation ( see also for the development of this method for atherosclerotic blood vessels ) . in the present paper , we take another approach and locally resolve the full 3d model , while keeping the state equation unchanged .we report on a finite element method for modeling a 3d-1d coupled fluid problem , when the 3d domain has anisotropic inclusions .naturally , this leads to meshes containing possibly anisotropic tetrahedra .we study the performance of the finite element method both by considering the accuracy of solutions and by monitoring the convergence of one state - of - the - art linear algebra solver for the systems of linear algebraic equations to be solved on every time step of the method .we are interested in the ability of the solver to predict such important statistics as the drag force experienced by an intravenous implant .the remainder of the paper is organized as follows . in section [ s_model ] ,we review 3d and 1d fluid models and discuss coupling conditions .the stability properties of the coupled model are also addressed in section [ s_model ] .section [ s_solver ] presents a time - stepping numerical scheme and an algebraic solver . in section [ s_validate ], we validate the 3d finite element solver and the coupled method by considering the benchmark problem of a flow past a 3d cylinder and a problem with an analytical solution .the application of the method to simulate a blood flow over a model ivc filter is given in section [ s_filter ] .numerical experiments were performed using the ani3d finite element package , which was used to generate tetrahedra subdivisions of 3d domains , to build stiffness matrices , and to implement the linear algebra solvers described in section [ s_solver ] .this section reviews 3d and 1d fluid models and describes the coupling of the models . in this study ,the 3d model is assumed ` rigid ' .consider a flow of a viscous incompressible newtonian fluid in a bounded domain .we shall distinguish between the inflow part of the boundary , , the no - slip and no - penetration part ( rigid walls ) , , and the outflow part of the boundary , . on the inflow partwe assume a given velocity profile .the outflow boundary conditions are defined by setting the normal stress tensor equal to a given vector function .thus , the 3d model is the classical navier - stokes equations in pressure - velocity variables : { \operatorname{div}}{\bf u } & = 0 \end{split}&\quad { \rm in}~\omega\times(0,t ] , \\ { \mathbf{u}}|_{\gamma_{\rm in } } = { \mathbf{u}}_{\rm in},\quad{\mathbf{u}}|_{\gamma_{0 } } = { \bf 0}&,\\ \left.\left(\nu\frac{\partial{\mathbf{u}}}{\partial{\mathbf{n}}}-p{\mathbf{n}}\right)\right|_{\gamma_{\rm out}}=\mbox{\boldmath\unboldmath}&. 
\end{split } \right.\ ] ] here is the outward normal vector to .the system is also supplemented with initial condition ( ) for in .we remark that the notion of ` inflow ' and ` outflow ' boundary is used here and further in the text conventionally , since the inequalities or are _ not _ necessarily pointwise satisfied on or , respectively . in applications we consider , the mean flux , averaged in space _ and in time _ , is expected to be negative at and positive at . however , for certain ] . if is defined in , then .since has the meaning of the difference between fluid and external pressures , it can be negative . in this case, more than one value of may satisfy . to ensure that the coupled model is not defective, one has to prescribe a particular rule for choosing the root of the cubic equation . in our numerical experiments ,we take which is the closest to .the boundary condition is the combination of fluid and energy fluxes and so it does not guarantee to conserve the ` mass ' of the entire coupled system . although , we do not observe any perceptible generation or loss of mass in our numerical experiments , it does not necessarily mean that for all problems this effect should be negligible .actually , one may consider any other linear combination of fluid and energy fluxes coupling on 3d-1d boundary to compromise between energy stability and mass conservation .( 300,130)(0,0 ) ( 0,0 ) ( 60,50) ( 155,50) ( 240,50) ( 198,78) ( 115,78) in practice , one may also be interested in coupling the 1d fluid model to the _ upstream _ boundary of the 3d domain .hence , we now consider three domains , , and , as shown in figure [ fig2 ] . in and simplified 1d model is posed and in the full three - dimensional navier - stokes equations are solved .the domain is coupled to the inflow ( upstream ) boundary of and is coupled to the outflow ( downstream ) boundary of . the downstream coupling is described above . in the literature , it is common not to distinguish between upstream and downstream coupling boundary conditions .for example , in conditions , are assumed both on upstream and downstream boundaries . following this paradigm, one may consider , or energy consistent conditions , as the coupling conditions on between 1d model posed in and 3d model posed in .note that in entirely 3d fluid flows simulations , inflow and outflow boundary conditions usually differ .if a numerical approach to 1d-3d problem is based on subdomains splitting ( see the next section for an example ) , then it is appropriate to distinguish between upstream and downstream coupling conditions .thus , we impose the upstream coupling conditions in such a way that the 3d problem is supplied with the dirichlet inflow boundary conditions .this is a standard choice for incompressible viscous fluid flows solvers and is especially convenient if third parties or legacy codes are separately used to compute 3d and 1d solutions , and they communicate only through coupling conditions . for the upstream boundary , , we introduce a reference velocity profile , , such that .then the boundary condition on is dirichlet , given by setting ensures the continuity of the flux on . if is found to satisfy the equation then the coupling condition is valid on .two more scalar boundary conditions are required for the 1d model in .we assume that or are given in and in an absorbing condition is prescribed : in computations we set in ; another reasonable absorbing condition would be setting the incoming characteristic equals zero . 
on the downstream end of , we also assume an absorbing boundary condition .we summarize the properties for the 3d-1d coupling introduced in this section : * it ensures the energy balance , as stated in theorem [ th1 ] ; * the inequality is not assumed ; * it can be easy decoupled with splitting methods into the separate 1d problems and the 3d problem with usual inflow - outflow boundary conditions on every time step .[ s_solver ] in this section , we introduce a splitting numerical time - integration algorithm based on subdomain splitting .further , we consider a fully discrete problem and review one state - of - the - art algebraic solver .denote by , , , and approximations to the corresponding unknown variables at time . given these approximations , we compute , , , and for ( ) in three steps : step 1. integrate ( [ 111 ] ) for ] to find , in .for the numerical integration of the 1d model equations , we use a first order monotone finite difference scheme applied to the characteristic form , see . to handle the 3d model , one has to solve on every time step the linearized navier - stokes equations , also known as the oseen problem : { \operatorname{div}}{\bf u } & = 0 \end{split}&\qquad { \rm in}~\omega_{\rm 3d } , \\ { \bf u}|_{\gamma_{\rm in}\cup\gamma_{0 } } = { \bf g},\quad ( \nu\frac{\partial{\mathbf{u}}}{\partial{\mathbf{n}}}-p{\mathbf{n}})|_{\gamma_{\rm out}}=0 & \end{split } \right.\ ] ] where , , the body forces term and the advection velocity field depend on previous time velocity approximations , , , and . here and in the remainder of this section, we dropped out the time - step dependence ( ) index for unknown velocity and pressure . to discretize the oseen problem , we consider a conforming finite element method .denote the finite element velocity and pressure spaces by and , respectively .let be the subspace of of all fe velocity functions vanishing at .the finite element problem reads : find , , and satisfying with let for .we assume the ellipticity , the continuity , and the stability conditions : with positive mesh - independent constants , , , and .condition is well - known as the lbb or inf - sup stability condition .let and be bases of and , respectively . define the following matrices : the linear algebraic system corresponding to ( _ the discrete oseen system _ ) takes the form the right hand side accounts for body forces and inhomogeneous velocity boundary conditions . to solve, we consider a krylov subspace iterative method , with the block triangular preconditioner : the matrix is a preconditioner for the matrix , such that may be considered as an inexact solver for linear systems involving .the matrix is a preconditioner for the pressure schur complement of ( [ sp ] ) , . in the algorithm, one needs the actions of and on subvectors , rather than the matrices , explicitly .once good preconditioners for and are available , a preconditioned krylov subspace method , such as gmres or bicgstab , is the efficient solver . in the literature, one can find geometric or algebraic multigrid ( see , e.g. , and references therein ) or domain decomposition algorithms which provide effective preconditioners for a range of and various meshes .we use one v - cycle of the algebraic multigrid method to define . defining an appropriate pressure schur complement preconditioner is more challenging . 
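To make the structure of the preconditioned iteration concrete, the following sketch wraps the action of a block upper-triangular preconditioner of the form above as a linear operator that can be passed to a Krylov solver. The inner routines solve_A and solve_S are placeholders standing in for the AMG V-cycle and the Schur complement approximation discussed next; they are assumptions of this sketch, not the implementation used in the paper.

```python
import numpy as np
import scipy.sparse.linalg as spla

def block_triangular_preconditioner(A, B, solve_A, solve_S):
    """Action r -> P^{-1} r of P = [[A_hat, B^T], [0, -S_hat]] for the
    discrete Oseen system [[A, B^T], [B, 0]] [u; p] = [f; g].

    solve_A(b) and solve_S(b) are user-supplied (possibly inexact) solvers
    approximating A^{-1} b and S_hat^{-1} b, respectively.
    """
    n, m = A.shape[0], B.shape[0]

    def apply(r):
        ru, rp = r[:n], r[n:]
        p = -solve_S(rp)                    # pressure block first (triangular solve)
        u = solve_A(ru - B.T @ p)           # then the velocity block
        return np.concatenate([u, p])

    return spla.LinearOperator((n + m, n + m), matvec=apply)

# usage sketch (K is the assembled saddle-point matrix, rhs the right-hand side;
# my_amg_vcycle and my_schur_approx are assumed user routines):
# M = block_triangular_preconditioner(A, B, my_amg_vcycle, my_schur_approx)
# x, info = spla.gmres(K, rhs, M=M, restart=50)
```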
in this paper, we follow the approach of kay et al .first , we define the pressure mass and velocity mass matrices : the original pressure convection - diffusion ( pcd ) preconditioner , proposed in , is defined through its inverse : here denotes an approximate solve with the pressure mass matrix .matrices and are approximations to convection - diffusion and laplacian operators in , respectively .both and ( explicitly or implicitly ) assume some pressure boundary conditions to be prescribed .if defines continuous pressure approximations , one can use the conforming discretization of the pressure poisson problem with neumann boundary conditions : likewise , neumann boundary conditions are conventionally used to define the pressure convection - diffusion problem on .however , the optimal boundary conditions setup both for and depends on the type of the boundary and flow regime , see .we use a modified pcd preconditioner defined below .this modification partially obviates the issue of setting pressure boundary conditions and is consistent with the cahouet chabard preconditioner , if the inertia terms are neglected .the cahouet chabard preconditioner is the standard choice for the time - dependent stokes problem and enjoys the solid mathematical analysis in this case . to define the preconditioner, we introduce the discrete advection matrix for continuous pressure approximations as then the modified pressure convection - diffusion preconditioner ( mpcd ) is ( compare to ): where is a diagonal approximation to the velocity mass matrix . regarding the numerical analysis of the algebraic solver used here , we note the following. the eigenvalues bounds of the preconditioned schur complement : were proved for and the lbb stable finite elements in and for a more general case in .the constants are independent of the meshsize , but may depend on the ellipticity , continuity and stability constants in , and thus may depend on the problem parameters . in particular , the pressure stability constant , and so from , depends on the geometry of the domain ( tending to zero for long or narrow domains ) and for certain fe pairs depends on the anisotropy ratio of a triangulation .both of this dependencies require certain care in using the approach for computing flows in 3d elongated domains with thin and anisotropic inclusions ( prototypical for simulating a flow over ivc filter ) .characterizing the rate of convergence of nonsymmetric preconditioned iterations is a difficult task . 
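As an aside to the preceding description, one common form of the classical (unmodified) pressure convection-diffusion approximation to the inverse Schur complement can be written down very compactly; the modified variant used here differs in how the pressure operators are assembled and is not reproduced in this sketch.

```python
def pcd_schur_action(Mp_solve, Fp, Ap_solve):
    """Action r -> S_hat^{-1} r with S_hat^{-1} = Mp^{-1} Fp Ap^{-1}.

    Mp_solve and Ap_solve are (possibly inexact) solvers for the pressure
    mass matrix and the pressure Laplacian, and Fp is the discrete pressure
    convection-diffusion matrix.  This is the classical PCD variant, given
    for orientation only.
    """
    def apply(r):
        return Mp_solve(Fp @ Ap_solve(r))
    return apply
```

Such an operator can be used as the solve_S routine in the block preconditioner sketched earlier.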
in particular , eigenvalue information alone may not be sufficient to give meaningful estimates of the convergence rate of a method like preconditioned gmres .nevertheless , experience shows that for many linear systems arising in practice , a well - clustered spectrum ( away from zero ) usually results in rapid convergence of the preconditioned iteration .this said , we should mention that a rigorous proof of the gmres convergence applied to , with block - triangular preconditioner , is not available in the literature ( except the special case , when is symmetric ) .thus , the numerical assessment of the approach is of practical interest .in this section , we validate the accuracy and stability of the solver for the 3d-1d coupled fluid model by ( i ) comparing the computed discrete solutions against an analytical solution for a problem with simple geometry ; ( ii ) computing the drag coefficient and the pressure drop value for the flow around the 3d circular cylinder .the taylor - hood p2-p1 elements were used for the velocity - pressure approximation .the resulting linear algebraic systems were solved by the preconditioned bicgstab method .the initial guess in the bicgstab method was zero on the first time step and equal to the for the subsequent time steps .the stopping criteria was the decrease of the euclidean norm of the residual .first we consider an example with analytical solution .the 3d domain is .the circular cross sections are the inflow and outflow boundaries . domains and are two intervals of length 5 .the analytical solution is given by with , , , .this solution satisfies the continuity of flux condition on the coupling boundaries .the right - hand sides , and were set accordingly . in this test ,the 3d domain was triangulated using the global refinement of an initial mesh , resulting in the sequence of meshes ( further denoted by mesh 1 , mesh 2 , mesh 3 ) , with the number of tetrahedra , respectively .since we use the first order scheme for the 1d problem , the mesh size in and was divided by 4 on each level of refinement : .the corresponding time step was halved for every spacial refinement , so we use for mesh 1 , mesh 2 , and mesh 3 , respectively .c|llll + & } \|u - u_h\|_{l^2} ] .the benchmarks setups do not specify outflow boundary conditions .hence , on the outflow boundary we apply the 3d-1d coupling using the new conditions , so that numerical performance of the coupling can be verified .the statistics of interest are the following : * the difference between the pressure values in points and . * the drag coefficient given by an integral over the surface of the cylinder : here is the normal vector to the cylinder surface pointing to and is a tangent vector . for problemp2 , the reference velocity in is ..[turek_tou_sou_q1 ] problems p1 : computed and reference values of drag and pressure drop . [ cols="^,^,^,^,^,^",options="header " , ] for these benchmark problems , the paper collects several dns results based on various finite element , finite volume discretizations of the navier - stokes equations and the lattice boltzmann method . in ,the authors provided reference intervals , where the statistics are expected to converge . 
using a higher order finite element method and locally refined adaptive meshes , more accurate reference values of and found in for the steady state solution ( problem p1 ) and in for unsteady problem p2 .for the computations we use two meshes : a ` coarse ' and a ` fine ' ones , both adaptively refined towards cylinder .the coarser mesh is build of 35803 tetrahedra , which results in 53061 velocity d.o.f . and 8767 pressure d.o.f .for the taylor - hood p2-p1 element .the finer mesh consists of 51634 tetrahedra , which results in 73635 velocity d.o.f . and 12321 pressure d.o.f .both coarse and fine mesh consist of regular tetrahedra .the refinement ratio is about 20 and 60 for the coarse and the fine meshes , respectively .we remark that the fine mesh has four times as many tetrahedra touching the cylinder as the coarse mesh .the time steps are and for the coarse and the fine meshes , respectively .evolution of the drag coefficient for unsteady flow around cylinder : coarse and fine grid results and reference results .the right figure zooms the plot for time in [ 3.8,4.2].,title="fig : " ] evolution of the drag coefficient for unsteady flow around cylinder : coarse and fine grid results and reference results .the right figure zooms the plot for time in [ 3.8,4.2].,title="fig : " ] we first show in tables [ turek_tou_sou_q1 ] and [ turek_tou_sou_q3 ] results for problems p1 and p2 obtained with the coarse and the fine meshes . for all settings ,the computed values are within `` reference intervals '' from ( except for problem p2 , but in this case the upper reference bound appears to be tough ) .the computed drag coefficients were well within 1% of reference values and pressure drop within 2% .this is a good result for the number of the degrees of freedom involved . indeed, the results shown in for meshes with about the same number of degrees of freedom show comparable or worse accuracy .in figure [ fig_drag ] , we show the computed evolution of the drag coefficient for problem p2 and compare it to the reference results .the computed drag coefficients match the reference curve very well .we conclude that the conforming finite element method with the coupling outflow conditions is a reliable and stable approach for the simulation of such flow problems .left : an example of intravenous filter ( comed co. ) ; right : 1d inflow ivc waveform used in computations .it was designed by interpolating the ivc doppler blood flow waveforms from .[ waveform],title="fig:"]10ex left : an example of intravenous filter ( comed co. ) ; right : 1d inflow ivc waveform used in computations .it was designed by interpolating the ivc doppler blood flow waveforms from .[ waveform],title="fig : " ] the development of endovascular devices is the challenging problem of cardiovascular medicine .one example is the design of vascular filters implanted in inferior vena cava ( ivc ) to prevent a blockage of the main artery of the lung or one of its branches by a substance that has traveled from elsewhere in the body through the bloodstream .the filter is typically made of thin rigid metal wires as illustrated in figure [ figfilt ] ( left ) .numerical simulation is an important tool that helps in finding an optimal filter design .thin and anisotropic construction of a ivc filter requires adaptive grid refinement and makes computations of flows in such domains not an easy task . 
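The pulsatile 1d inflow waveform mentioned in the figure caption above (a smooth periodic approximation of measured Doppler IVC velocities, including a reverse-flow interval) can be produced, for instance, by periodic spline interpolation of a few sampled values. The samples below are invented placeholders, not the data used in the computations.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical (t, velocity) samples over one cardiac cycle; endpoints must match
t_samples = np.array([0.0, 0.15, 0.30, 0.45, 0.60, 0.75, 1.0])
v_samples = np.array([0.05, 0.20, 0.02, -0.06, 0.15, 0.03, 0.05])  # reverse flow near t = 0.45

waveform = CubicSpline(t_samples, v_samples, bc_type='periodic')

def inflow_velocity(t, period=1.0):
    """Smooth, periodic mean velocity prescribed at the upstream end of the 1d model."""
    return float(waveform(t % period))
```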
in this section, we demonstrate the ability of the numerical method to treat such problems in a stable way .one statistic of interest here is the drag force experienced by a filter .we recall that in this paper we do not account for the elastic properties of the vessel walls , which are otherwise important in practice .we consider a segment ( long ) of ivc with elliptic cross section .the filter is placed on the distance from inflow , it is long and the diameter of its 12 wire legs is .blood is assumed to be incompressible fluid with dynamic viscosity equal to and density equal to . a blood flow in ivc is strongly influenced by the contraction of the heart .the ivc have pulsatile waveforms with two peaks and reverse flow occurring on every cardiac cycle .we consider the doppler blood flow waveforms of ivc reported in and approximate them by a smooth periodic function plotted in figure [ waveform ] ( right ) .note that the presence of significant reverse flows in ivc differs this problem from computing arteria flows , where such phenomenon does not typically occur . on the inflow and outflow ,the 3d vessel is coupled to 1d models as described in section [ s_coupling ] .each 1d model consists of equations posed on intervals of length .periodic velocity with waveform as shown in figure [ waveform ] is prescribed on the upstream part of the 1d model coupled to .the maximum 1d model velocity of yields the maximum inlet velocity in of about .this agrees with the measurements in .the coupling conditions are the same regardless of the mean flow direction .the visualization of the adaptive mesh for the flow over a model ivc filter problem : the top - left picture shows the boundary surface triangulation ; the top - right picture shows the cutaway views of the tetrahedral grid .the bottom picture shows the zoom of the mesh in the neighborhood of the filter s ` head ' ., title="fig : " ] the visualization of the adaptive mesh for the flow over a model ivc filter problem : the top - left picture shows the boundary surface triangulation ; the top - right picture shows the cutaway views of the tetrahedral grid .the bottom picture shows the zoom of the mesh in the neighborhood of the filter s ` head ' ., title="fig : " ] + the visualization of the adaptive mesh for the flow over a model ivc filter problem : the top - left picture shows the boundary surface triangulation ; the top - right picture shows the cutaway views of the tetrahedral grid .the bottom picture shows the zoom of the mesh in the neighborhood of the filter s ` head ' ., title="fig : " ] the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] + the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] + the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may 
note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] the visualization of the velocity -component in several cutplanes orthogonal to -axis for times .one may note the occurrence of ` returning ' flows behind the filter even for ` forward ' mean flow.,title="fig : " ] the mesh was adapted towards the filter , so the ratio of largest and smallest element diameters was about , the maximum elements anisotropy ratio was about .the resulting mesh is illustrated in figure [ fig3 ] .the time step in 3d model was set equal to .the bicgstab iterative method , with preconditioner was used to solve discrete oseen subproblems .the stopping criterion was the reduction of the residual by the factor of .the average number of linear iterations on every time step was about 35 .we found that choosing time step larger for this problem , leads to the significant increase of the linear iteration counts and makes ` long time ' computations non - feasible .we visualize the computed solutions in figure [ fig4 ] by showing the values of the -component of the velocity in several cutplanes orthogonal to -axis . behind the filterthe velocity -component eventually has negative values , indicating the occurrence of circulation zones and ` returning ' flows .note that the solution behind the filter is no longer axial - symmetric : a perturbation to solution induced by non - symmetric tetrahedral grid is sufficient for the von karman type flow instability to develop behind the filter . left : the evolution of the drag force ( ) for the ivc filter .right : the evolution of the mean axial velocity ( ) in the middle point of the 1d model _ before _ and _ after _ the 3d domain with ivc filter.,title="fig : " ] left : the evolution of the drag force ( ) for the ivc filter . right : the evolution of the mean axial velocity ( ) in the middle point of the 1d model _ before _ and _ after _ the 3d domain with ivc filter.,title="fig : " ] figure [ fig5 ] ( left ) shows the time evolution of the drag force experienced by the filter . after the instantaneous start, the flow needs few cycles to obtain the periodic regime . in general, the drag force follows the pattern of the inflow waveform . in particular , the filter experiences forces both in downstream and upstream directions at different periods of the cardiac cycle .the right plot in figure [ fig5 ] shows the mean axial velocity in the middle point of the 1d model before and after the 3d domain with cava filter .it is remarkable that after few cycles , when the flow is periodic , the waveforms in the 1d domains coupled to upstream and downstream boundaries are very close .this suggests that the coupling conditions are efficient in conserving averaged flow quantities such as mean flux .we reviewed the 3d and 1d models of fluid flows and some existing coupling conditions for these models .new coupling conditions were introduced and shown to ensure a suitable bound for the cumulative energy of the model .the conditions were found to perform stable in several numerical tests with analytical and benchmark solutions .for the example of the flow around ivc filter , the coupled numerical model was found to capture the periodic flow regime and correct 1d waveforms before and after 3d domain .the model was able to handle ` opposite direction ' flow , i.e. 
the flow where the ` upstream ' boundary ( boundary with dirichlet boundary conditions ) becomes the outflow boundary for a period of time .the preconditioned bicgstab method with one state - of - the - art preconditioner applied to the linearized finite element navier - stokes problem performs well .however , often the time step should be taken small enough to make the linear solver converge sufficiently fast .overall , the coupled 3d-1d model together with the conforming finite element method and preconditioned iterative strategy was demonstrated as a reliable tool for the simulation of such biological flows as the flow over an inferior vena cava filter .the authors are grateful to a. danilov ( inm ras , moscow ) for his help in building tetrahedra meshes in ani3d , s. simakov ( mipt , moscow ) for providing us with 1d fluid solver code , and yu.vassilevski ( inm ras , moscow ) for the implementation of the algebraic solver for problem .we would like to thank a. quaini and s. canic ( uh , houston , tx ) for fruitful discussions and pointing to papers .abakumov , k.v .gavrilyuk , n.b .esikova , a.v .lukshin , s.i .mukhin , n.v .sosnin , v.f .tishkin , a.p .favorskij , mathematical model of the hemodynamics of the cardio - vascular system , differ .33 ( 7 ) ( 1997 ) 895900 .e. bayraktar , o. mierka , s. turek , benchmark computations of 3d laminar flow around a cylinder with cfx , openfoam and featflow , international journal of computational science and engineering . 7( 2012 ) , 253266 .l. formaggia , j.f .gerbeau , f. nobile , a. quarteroni , on the coupling of 3d and 1d navier - stokes equations for flow problems in compliant vessels , computer methods in applied mechanics and engineering . 191( 2001 ) 561582 .l. formaggia , a. moura , f. nobile , on the stability of the coupling of 3d and 1d fluid - structure interaction models for blood flow simulations , esaim : mathematical modelling and numerical analysis .41 ( 4 ) ( 2007 ) 743769 .heywood , r. rannacher , s. turek , artificial boundaries and flux and pressure conditions for the incompressible navier - stokes equations , international journal for numerical methods in fluids .22 ( 1996 ) 325352 .olshanskii , a low order galerkin finite element method for the navier - stokes equations of steady incompressible flow : a stabilization issue and iterative methods , comp . meth .( 2002 ) 55155536 .g. papadakis , coupling 3d and 1d fluidstructure - interaction models for wave propagation in flexible vessels using a finite volume pressure - correction scheme , commun .numer . meth .engng . 25 ( 2009 ) 533551 .m. schfer , s. turek , the benchmark problem `` flow around a cylinder '' . in hirschel eh ( ed . ) , flow simulation with high - performance computers ii , vol . 52 .notes on numerical fluid mechanics , vieweg . 1996 ;547566 yu .vassilevskii , s. simakov , v. salamatova , yu . ivanov , t. dobroserdova , numerical issues of modelling blood flow in networks of vessels with pathologies , russian journal of numerical analysis and mathematical modelling .26 ( 6 ) ( 2011 ) 605622 .y. vassilevski , s. simakov , v. salamatova , y. ivanov , t. dobroserdova , vessel wall models for simulation of atherosclerotic vascular networks , mathematical modelling of natural phenomena . 6 ( 7 ) ( 2011 ) 8299 .y. vassilevski , s. simakov , v. salamatova , y. ivanov , t. dobroserdova , blood flow simulation in atherosclerotic vascular network using fiber - spring representation of diseased wall , mathematical modelling of natural phenomena . 
6 ( 5 ) ( 2011 ) 333349. i.e. vignon - clementel , c.a .figueroa , k.e .jansen , c.a .taylor , outflow boundary conditions for three - dimensional finite element modeling of blood flow and pressure in arteries , comput .methods appl . mech . engrg . 195 ( 2006 ) 37763796 .
|
the paper develops a solver based on a conforming finite element method for a 3d-1d coupled incompressible flow problem . new coupling conditions are introduced to ensure a suitable bound for the cumulative energy of the model . we study the stability and accuracy of the discretization method , and the performance of some state - of - the - art linear algebraic solvers for such flow configurations . motivated by the simulation of the flow over an inferior vena cava ( ivc ) filter , we consider the coupling of a 1d fluid model and a 3d fluid model posed in a domain with anisotropic inclusions . the relevance of our approach to realistic cardiovascular simulations is demonstrated by computing the blood flow over a model ivc filter . keywords : geometrical multiscale modeling , 3d-1d coupling , fluid flows , cardiovascular simulations , finite element method , iterative methods
|
two of the most important ideas to emerge from recent studies of quantum information are the concepts of quantum error correction and quantum key distribution . quantum error correction allows us to protect unknown quantum states from the ravages of the environment .quantum key distribution allows us to conceal our private discourse from potential eavesdroppers .in fact these two concepts are more closely related than is commonly appreciated .a quantum error correction protocol must be able to reverse the effects of both bit flip errors , which reflect the polarization state of a qubit about the -axis , and phase errors , which reflect the polarization about the -axis . by reversing both types of errors ,the protocol removes any entanglement between the protected state and the environment , thus restoring the purity of the state . in a quantum key distribution protocol, two communicating parties verify that qubits polarized along both the -axis and the -axis can be transmitted with an acceptably small probability of error .an eavesdropper who monitors the -polarized qubits would necessarily disturb the -polarized qubits , while an eavesdropper who monitors the -polarized qubits would necessarily disturb the -polarized qubits .therefore , a successful verification test can show that the communication is reasonably private , and the privacy can then be amplified via classical protocols . in quantum key distribution , the eavesdropper collects information by entangling her probe with the transmitted qubits .thus both error correction and key distribution share the goal of protecting quantum states against entanglement with the outside world .recently , this analogy between quantum error correction and quantum key distribution has been sharpened into a precise connection , and used as the basis of a new proof of security against all possible eavesdropping strategies .earlier proofs of security ( first by mayers , and later by biham _ ) made no explicit reference to quantum error correction ; nevertheless , the connection between quantum error correction and quantum key distribution is a powerful tool , enabling us to invoke the sophisticated formalism of quantum error - correcting codes in an analysis of the security of quantum key distribution protocols .also recently , new quantum error - correcting codes have been proposed that encode a finite - dimensional quantum system in the infinite - dimensional hilbert space of a quantum system described by continuous variables . in this paper, we will apply these new codes to the analysis of the security of quantum key distribution protocols . by this method, we prove the security of a protocol that is based on the transmission of squeezed quantum states of an oscillator .the protocol is secure against all eavesdropping strategies allowed by the principles of quantum mechanics . in our protocol, the sending party , alice , chooses at random to send either a state with a well defined position or momentum .then alice chooses a value of or by sampling a probability distribution , prepares a narrow wave packet centered at that value , and sends the wave packet to the receiving party , bob .bob decides at random to measure either or . through public discussion , alice andbob discard their data for the cases in which bob measured in a different basis than alice used for her preparation , and retain the rest . 
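A single round of the squeezed-state protocol sketched above reduces, on the classical side, to a simple sifting step. The following schematic keeps only the rounds in which Alice's preparation quadrature and Bob's measurement quadrature agree; all distributions, widths, and the random seed are placeholders, not the parameters analysed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def alice_prepare(sigma_value=10.0):
    """Alice picks a quadrature ('q' or 'p') and the centre of a narrow wave packet."""
    basis = rng.choice(["q", "p"])
    centre = rng.normal(0.0, sigma_value)   # broad but normalizable distribution
    return basis, centre

def bob_measure(basis_sent, centre, noise=0.05):
    """Bob picks a quadrature at random; his outcome only tracks Alice's value
    when the two bases happen to agree."""
    basis = rng.choice(["q", "p"])
    outcome = centre + rng.normal(0.0, noise) if basis == basis_sent else None
    return basis, outcome

sifted = []
for _ in range(1000):
    basis_a, value_a = alice_prepare()
    basis_b, value_b = bob_measure(basis_a, value_a)
    if basis_a == basis_b:              # bases announced publicly; mismatches discarded
        sifted.append((value_a, value_b))

print("rounds kept after sifting:", len(sifted), "of 1000")
```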
to correct for possible errors , which could be due to eavesdropping , to noise in the channel , or to intrinsic imperfections in alice s preparation and bob s measurement ,alice and bob apply a classical error correction and privacy amplification scheme , extracting from the raw data for oscillators a number of key bits .alice and bob also sacrifice some of their data to perform a verification test to detect potential eavesdroppers .when verification succeeds , the probability is exponentially small in that any eavesdropper has more than an exponentially small amount of information about the key .intuitively , this protocol is secure because an eavesdropper who monitors the observable necessarily causes a detectable disturbance of the complementary observable ( and vice versa ) . since preparing squeezed states is technically challenging , it is important to know how much squeezing is needed to ensure the security of the protocol .the answer depends on how heavily the wave packets are damaged during transmission .when the noise in the channel is weak , we show that it suffices in principle for the squeezed state to have a width smaller by the factor than the natural width of a coherent state ( corresponding to an improvement by 2.51 db in the noise power for the squeezed observable , relative to vacuum noise ) .it is also important to know that security can be maintained under realistic assumptions about the noise and loss in the channel .our proof of security applies if the protocol is imperfectly implemented , and shows that secure key distribution can be achieved over distances comparable to the attenuation length of the channel .squeezed - state key distribution protocols may have some practical advantages over single - qubit protocols , in that neither single - photon sources nor very efficient photodetectors are needed .key distribution protocols using continuous variable quantum systems have been described previously by others , but ours is the first complete discussion of error correction and privacy amplification , and the first proof of security against arbitrary attacks . in [ sec :codes ] we review continuous variable quantum error - correcting codes and in [ sec : qkd_qecc ] we review the argument exploiting quantum error - correcting codes to demonstrate the security of the bb84 quantum key distribution scheme .this argument is extended to apply to continuous variable key distribution schemes in [ sec : qkd_cont ] and [ sec : secure ] .estimates of how much squeezing is required to ensure security of the protocol are presented in [ sec : gaussian ] .the effects on security of losses due to photon absorption are analyzed in [ sec : losses ] , and [ sec : conclude ] contains conclusions .we begin by describing codes for continuous quantum variables .the two - dimensional hilbert space of an encoded qubit embedded in the infinite - dimensional hilbert space of a system described by canonical variables and can be characterized as the simultaneous eigenspace of the two commuting operators the code s `` stabilizer generators . 
'' if the eigenvalues are , then the allowed values of and in the code space are integer multiples of , and the codewords are invariant under shifts in or by integer multiples of .thus an orthogonal basis for the encoded qubit can be chosen as the operators commute with the stabilizer generators and so preserve the code subspace ; they act on the basis eq .( [ codewords ] ) according to this code is designed to protect against errors that induce shifts in the values of and . to correct such errors, we measure the values of the stabilizer generators to determine the values of and modulo , and then apply a shift transformation to adjust and to the nearest integer multiples of .if the errors induce shifts , that satisfy then the encoded state can be perfectly restored .a code that protects against shifts is obtained for any choice of the eigenvalues of the stabilizer generators .the code with can be obtained from the code by applying the phase space translation operator the angular variables and ] stabilizer quantum code .that is , first we encode ( say ) a qubit in each of oscillators ; then better protected qubits are embedded in the block of .if the typical shifts are small , then the qubit error rate will be small in each of the oscillators , and the error rate in the protected qubits will be much smaller .the quantum key distribution protocols that we propose are based on such concatenated codes .we note quantum codes for continuous quantum variables with an _ infinite - dimensional _ code space were described earlier by braunstein , and by lloyd and slotine .entanglement distillation protocols for continuous variable systems have also been proposed let s recall the connection between stabilizer quantum codes and quantum key distribution schemes .we say that a protocol for quantum key distribution is secure if ( 1 ) the eavesdropper eve is unable to collect a significant amount of information about the key without being detected , ( 2 ) the communicating parties alice and bob receive the same key bits with high probability , and ( 3 ) the key generated is essentially random .then if the key is intercepted , alice and bob will know it is unsafe to use the key and can make further attempts to establish a secure key . if eavesdropping is not detected , the key can be safely used as a one - time pad for encoding and decoding . establishing that a protocol is secure is tricky , because there inevitably will be some noise in the quantum channel used to distribute the key , and the effects of eavesdroppingcould be confused with the effects of the noise .hence the protocol must incorporate error correction to establish a shared key despite the noise , and privacy amplification to control the amount of information about the key that can be collected by the eavesdropper . in the case of the bb84 key distribution invented by bennett and brassard ,the necessary error correction and privacy amplification are entirely classical .nevertheless , the formalism of quantum error correction can be usefully invoked to show that the error correction and privacy amplification work effectively .the key point is that if alice and bob carry out the bb84 protocol , we can show that the eavesdropper is no better off than if they had executed a protocol that applies quantum error correction to the transmitted quantum states . 
appealing to the observation that alice and bob _ could have _ applied quantum error correction ( even though they did nt really apply it ) , we place limits on what eve can know about the key .first we will describe a key distribution protocol that uses a quantum error - correcting code to purify entanglement , and will explain why the protocol is secure .the connection between quantum error correction and entanglement purification was first emphasized by bennett __ ; our proof of security follows a proof by lo and chau for a similar key distribution protocol .later , following , we will see how the entanglement - purification protocol is related to the bb84 protocol .a stabilizer code can be used as the basis of an entanglement - purification protocol with one - way classical communication .two parties , both equipped with quantum computers , can use this protocol to extract from their initial shared supply of noisy bell pairs a smaller number of bell pairs with very high fidelity .these purified bell pairs can then be employed for epr quantum key distribution .because the distilled pairs are very nearly pure , the quantum state of the pairs has negligible entanglement with the quantum state of the probe of any potential eavesdropper ; therefore no measurement of the probe can reveal any useful information about the secret key .let s examine the distillation protocol in greater detail .suppose that alice and bob start out with shared epr pairs .ideally , these pairs should be in the state where is the bell state ; however , the pairs are noisy , approximating with imperfect fidelity .they wish to extract pairs that are less noisy . for this purpose ,they have agreed in advance to use a particular ] quantum code , she prepares one of mutually orthogonal codewords .alice also decides at random which of her qubits will be used for key distribution and which will be used for verification . for each of the check bits, she decides at random whether to send an eigenstate ( with random eigenvalue ) or a eigenstate ( with random eigenvalue ) .bob receives the qubits sent by alice , carefully deposits them in his quantum memory , and publicly announces that the qubits have been received .alice then publicly reveals which qubits were used for the key , and which qubits are the check qubits .she announces the stabilizer eigenvalues that she chose to encode her state , and for each check qubit , she announces whether it was prepared as an or eigenstate , and with what eigenvalue .once bob learns which qubits carry the encoded key information , he measures the stabilizer operators and compares his results with alice s to obtain a relative error syndrome .he then performs error recovery and measures the encoded state to decipher the key .bob also measures the check qubits and compares the outcomes to the values announced by alice , to obtain an estimate of the error rate .if the error rate is low enough , error recovery applied to the encoded key bits will succeed with high probability , and alice and bob can be confident in the security of the key . 
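The verification step in the protocol above is entirely classical once Bob has measured his check qubits. A minimal sketch of the acceptance test follows; the 5% simulated flip rate, the sample sizes, and the use of the 11% figure quoted later in the text as a hard threshold are illustrative choices (a real test would also include statistical confidence margins).

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.11          # acceptance threshold quoted later in the text

def estimate_error_rate(alice_bits, bob_bits):
    """Fraction of check positions where Bob's outcome disagrees with Alice's."""
    return float(np.mean(np.asarray(alice_bits) != np.asarray(bob_bits)))

# Toy data: check bits transmitted through a channel that flips 5% of them.
n_check = 2000
alice_z = rng.integers(0, 2, n_check)
bob_z = alice_z ^ (rng.random(n_check) < 0.05)
alice_x = rng.integers(0, 2, n_check)
bob_x = alice_x ^ (rng.random(n_check) < 0.05)

p_z = estimate_error_rate(alice_z, bob_z)
p_x = estimate_error_rate(alice_x, bob_x)
accept = (p_z < THRESHOLD) and (p_x < THRESHOLD)
print(f"estimated error rates: z-basis {p_z:.3f}, x-basis {p_x:.3f}; accept = {accept}")
```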
if the error rate is too high , bob informs alice and they abort the protocol .as described so far , the protocol requires that alice and bob have quantum memories and quantum computers that are used to store the qubits , measure stabilizer generators , and correct errors .but if they use a stabilizer code of the css ( calderbank - shor - steane ) type , then the protocol can be simplified further .the crucial property of the css codes is that there is a clean separation between the syndrome information needed to correct bit flip errors and the syndrome information needed to correct phase errors .a css quantum stabilizer code is associated with a classical binary linear code on bits , and a subcode .let denote the parity check matrix of and the generator matrix for the code ( and hence the parity check matrix of the dual code ) .the stabilizer generators of the code are of two types . associated with the row of the matrix a `` -generator , '' the tensor product of s and s and associated with the row of is an `` -generator , '' the tensor product of s and s since has rows , where , and has rows , where there are all together stabilizer generators , and the dimension of the code space ( the number of encoded qubits ) is . from measurements of the generators , bit flip errors can be diagnosed , and from measurement of the generators , phase errors can be diagnosed .the elements of a basis for the code space with eigenvalues of stabilizer generators are in one - to - one correspondence with the cosets of in ; they can be chosen as here is a representative of a coset , and , are -bit strings satisfying thus , to distribute the key , alice chooses and at random , encodes one of the s , and sends the state to bob .after bob confirms receipt , alice broadcasts the values of and .bob compares alice s values to his own measurements of the stabilizer generators to infer a relative syndrome , and he performs error correction. then bob measures of each of his qubits , obtaining a bit string .finally , he subtracts and applies to compute , from which he can infer the coset represented by and hence the key .now notice that bob extracts the encoded key information by measuring of each of the qubits that alice sends .thus bob can correctly decipher the key information by correcting any bit flip errors that occur during transmission .bob does not need to correct phase errors , and therefore he has no use for the phase syndrome information ; hence there is no need for alice to send it . 
without in any way weakening the effectiveness of the protocol, alice can prepare the encoded state , but discard her value of , rather then transmitting it ; thus we can consider the state sent by alice to be averaged over the value of .averaging over the phase destroys the coherence of the sum over in ; in effect , then , alice is preparing qubits as eigenstates , in the state , sending the state to bob , and later broadcasting the value of .we can just as well say that alice sends a random string , and later broadcasts the value of .bob receives ( where has support on the bits that flip due to errors ) extracts , corrects it to the nearest codeword , and infers the key , the coset .alice and bob can carry out this protocol even if bob has no quantum memory .alice decides at random to prepare her qubits as or eigenstates , with random eigenvalues , and bob decides at random to measure in the or basis .after public discussion , alice and bob discard the results in the cases where they used different bases and retain the results where they used the same basis .thus the protocol we have described is just the bb84 protocol invented by bennett and brassard , accompanied by classical error correction ( adjusting to a codeword ) and privacy amplification ( extracting the coset ) . what error rate is acceptable ? in a random css code , about half of the generators correct bit flips , and about half correct phase flips .suppose that the verification test finds that bit flip errors ( ) occur with probability and phase errors ( occur with probability .classical coding theory shows that a random css code can correct the bit flips with high probability if the number of typical errors on bits is much smaller than the number of possible bit flip error syndromes , which holds provided that where is the binary entropy function .similarly , the phase errors can be corrected with high probability provided the same relation holds with replaced by .therefore , asymptotically as , secure key bits can be extracted from transmitted key bits at any rate satisfying this upper bound on crosses zero at ( or .we conclude that secure key distribution is possible if .the random coding argument applies if the errors in the key qubits are randomly distributed . to assure that this is so , we can direct alice to perform a random permutation of the qubits before sending them to bob . afterbob confirms receipt , alice can broadcast the permutation she performed , and bob can invert it .again , the essence of this argument is that the amount of information that an eavesdropper could acquire is limited by how successfully we could have carried out quantum error correction if we had chosen to and that this relation holds irrespective of whether we really implemented the quantum error correction or not. other proofs of the security of the bb84 protocol have been presented , which do nt make direct use of this connection with quantum error - correcting codes .however , these proofs do use classical error correction and privacy amplification , and they implicitly exploit the structure of the css codes . our objective in this paper is to analyze the security of key distribution schemes that use systems described by continuous quantum variables .the analysis will follow the strategy we have just outlined , in which an entanglement - purification protocol is reduced to a protocol that does not require the distribution of entanglement .but first we need to discuss a more general version of the argument . 
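Before moving on, the numerical threshold quoted above follows from a one-line calculation. The sketch below assumes the familiar CSS key rate R = 1 - 2*H2(p) (the rate formula itself is garbled in the extracted text) and locates the error rate at which it vanishes.

```python
from math import log2

def h2(p):
    """Binary entropy function H2(p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def css_rate(p):
    """Asymptotic key rate of the CSS-code argument, assuming R = 1 - 2*H2(p)."""
    return 1.0 - 2.0 * h2(p)

# Bisection for the error rate at which the key rate crosses zero.
lo, hi = 0.05, 0.25          # rate is positive at lo and negative at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if css_rate(mid) > 0.0:
        lo = mid
    else:
        hi = mid
print(f"key rate vanishes at p = {0.5 * (lo + hi):.4f}")   # about 0.1100
```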
in the entanglement - purification protocol ,whose reduction to the bb84 protocol we have just described , there is an implicit limitation on the eavesdropper s activity .we have assumed that alice prepares perfect entangled pairs in the state , and then sends half of each pair to bob .eve has been permitted to tamper with the qubits that are sent to bob in any way she chooses , but she has not been allowed any contact with alice s qubits . therefore , if we imagine that alice measures her qubits before sending to bob , we obtain a bb84 protocol in which alice is equipped with a perfect source of polarized qubits .when she sends a eigenstate , the decision to emit a or a is perfectly random , and the state emerges from her source with perfect fidelity . similarly , when she sends an eigenstate , the decision to send is perfectly random , and the state is prepared with perfect fidelity .furthermore , eve has no knowledge of what alice s source does , other than what she is able to infer by probing the qubits as they travel to bob .security can be maintained in a more general scenario . in the entanglement - purification protocol, we can allow eve access to alice s qubits .as long as eve has no way of knowing which pairs alice and bob will select for their verification test , and no way of knowing whether the check pairs will be measured in the or basis , then the protocol still works : eavesdropping can be detected irrespective of whether eve probes alice s qubits , bob s qubits , or both . now if we imagine that alice measures her qubits before sending to bob , we obtain a bb84-like protocol in which alice s source is imperfect and/or eve is able to collect some information about how alice s source behaves . our proof that the bb84-like protocol is secure still works as before .however the proof applies only to a restricted type of source it must be possible to simulate alice s source exactly by measuring half of a two - qubit state . to be concrete ,consider the following special case , which will suffice for our purposes : alice has many identical copies of the two - qubit state . to prepare a `` -state '' she measures qubit in the basis .thus she sends to bob one of the two states chosen with respective probabilities similarly , to prepare an -state she measures in the basis , sending one of chosen with respective probabilities unless the state is precisely the pure state , alice s source is nt doing exactly what it is supposed to do . depending on how is chosen ,the source might be biased ; for example it might send with higher probability than .and the states and need not be the perfectly prepared and that the protocol calls for .now suppose that alice s source always emits one of the states , and that after the qubits emerge from the source , eve is free to probe them any way she pleases .even though alice s source is flawed , alice and bob can perform verification , error correction , and privacy amplification just as in the bb84 protocol .to verify , bob measures or , as before ; if he measures , say , they check to see whether bob s outcome or agrees with whether alice sent or ( even though the state that alice sent may not have been a eigenstate ). thereby , alice and bob estimate error rates and .if both error rates are below , then the protocol is secure .we emphasize again that the security criterion applies not to all sources , but only to the restricted class of imperfect sources that can be simulated by measuring half of a ( possible noisy ) entangled state . 
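The restricted class of sources described above is easy to explore numerically. The sketch below assumes, purely for illustration, the two-qubit state cos(theta)|00> + sin(theta)|11> (the specific state written in the original text is lost in extraction); it shows how a strong bias in the z-basis preparation forces a correspondingly large error rate in the x basis.

```python
import numpy as np

def source_statistics(theta):
    """Bias and error rates for a source obtained by measuring half of
    cos(theta)|00> + sin(theta)|11>  (an illustrative state, not the paper's)."""
    c, s = np.cos(theta), np.sin(theta)
    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
    psi = c * np.kron(ket0, ket0) + s * np.kron(ket1, ket1)

    def joint_prob(a, b):
        """Probability that Alice projects onto |a> and Bob onto |b>."""
        return float(abs(np.kron(a, b) @ psi) ** 2)

    p_alice_0 = joint_prob(ket0, ket0) + joint_prob(ket0, ket1)   # bias of the raw bit
    p_err_z = joint_prob(ket0, ket1) + joint_prob(ket1, ket0)     # z-basis disagreement
    p_err_x = joint_prob(plus, minus) + joint_prob(minus, plus)   # x-basis disagreement
    return p_alice_0, p_err_z, p_err_x

for theta in (np.pi / 4, np.pi / 8, np.pi / 16):
    bias, err_z, err_x = source_statistics(theta)
    print(f"theta={theta:.3f}  P(bit=0)={bias:.3f}  z-error={err_z:.3f}  x-error={err_x:.3f}")
```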
to give an extreme example of a type of source to which the security proof does not apply ,suppose that alice _always _ sends the -state or the -state .clearly the key distribution protocol will fail , even if bob s bits always agree with alice s !indeed , a source with these properties can not be obtained by measuring half of any two - qubit state .rather , if the source is obtained by such a measurement , then a heavy bias when we send a -state would require that the error probability be large when we send an -state .now let s consider how the above ideas can be applied to continuous variable systems . we will first describe how in principle alice and bob can extract good encoded pairs of qubits from noisy epr pairs .however , the distillation protocol requires them to make measurements that are difficult in practice. then we will see how key distribution that invokes ( difficult ) entanglement distillation can be reduced to key distribution based on ( easier ) preparation , transmission , and detection of squeezed states .suppose that alice and bob share pairs of oscillators .ideally each pair has been prepared in an epr state , a simultaneous eigenstate ( let s say with eigenvalue 0 ) of and .now suppose that alice measures the two commuting stabilizer generators defined in eq .( [ stabilizer ] ) , obtaining the outcomes or now , the initial state was an eigenstate with eigenvalue one of the operators and .the observables measured by alice commute with these , and so preserve their eigenvalues .thus if the initial epr state of the oscillators were perfect , alice s measurement would also prepare for bob a simultaneous eigenstate of the stabilizer generators with or similarly , the initial state was an eigenstate with eigenvalue one of the observables which also commute with the stabilizer generators that alice measured .thus alice s measurement has prepared an encoded bell pair in the code space labeled by , the state of course the initial epr pair shared by alice and bob might be imperfect , and then the encoded state produced by alice s measurement will also have errors .but if the epr pair is not too noisy , they can correct the errors with high probability .alice broadcasts her measured values of the stabilizer generators to bob ; bob also measures the stabilizer generators and compares his values to those reported by alice , obtaining a relative syndrome that is , the relative syndrome determines the value of ( mod ) , and ( mod ) . using this information , bob can shift his oscillator s and ( by an amount between and ) to adjust ( mod ) , and ( mod ) both to zero .the result is that alice and bob now share a bipartite state in the code subspace labeled by .if the initial noisy epr state differs from the ideal epr state only by relative shifts of bob s oscillator relative to alice s that satisfy , then the shifts will be corrected perfectly . 
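Since several of the formulas above were lost in extraction, it may help to restate the single-oscillator code that underlies this construction. The reconstruction below assumes the standard codeword spacing of the square root of pi and units with hbar = 1; the paper's own normalization and sign conventions may differ.

```latex
% Stabilizer generators (they commute, since the commutator phase is e^{4\pi i} = 1):
S_q = e^{\,2i\sqrt{\pi}\,\hat q}, \qquad S_p = e^{-2i\sqrt{\pi}\,\hat p}.
% Codewords: position supported on even / odd multiples of \sqrt{\pi}:
|\bar 0\rangle \propto \sum_{s\in\mathbb{Z}} |q = 2s\sqrt{\pi}\rangle, \qquad
|\bar 1\rangle \propto \sum_{s\in\mathbb{Z}} |q = (2s+1)\sqrt{\pi}\rangle.
% Logical operators (shifts by \sqrt{\pi}):
\bar Z = e^{\,i\sqrt{\pi}\,\hat q}, \qquad \bar X = e^{-i\sqrt{\pi}\,\hat p}.
% Shift errors are correctable provided
|\Delta q| < \sqrt{\pi}/2, \qquad |\Delta p| < \sqrt{\pi}/2.
```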
andif larger shifts are highly unlikely , then alice and bob will obtain a state that approximates the desired encoded bell pair with good fidelity .this procedure is a `` distillation '' protocol in that alice and bob start out with a noisy entangled state in a tensor product of infinite dimensional hilbert spaces , and `` distill '' from it a far cleaner entangled state in a tensor product of two - dimensional subspaces .once alice and bob have distilled an encoded bell pair , they can use it to generate a key bit , via the usual epr key distribution protocol : alice decides at random to measure either or , and then publicly reveals what she chose to measure but not the measurement outcome . bob then measures the same observable and obtains the same outcome that outcome is the shared key bit . how do they measure or ?if alice ( say ) wishes to measure , she can measure , and then subtract from the outcome .the value of is determined by whether the result is an even ( ) or an odd ( ) multiple of .similarly , if alice wants to measure , she measures and subtracts the value of is determined by whether the result is an even ( ) or an odd ( ) multiple of .imperfections in the initial epr pairs are inescapable not just because of experimental realities , but also because the ideal epr pairs are unphysical nonnormalizable states .likewise , the stabilizer operators can not even in principle be measured with arbitrary precision ( the result would be an infinite bit string ) , but only to some finite -bit accuracy . still ,if the epr pairs have reasonably good fidelity , and the measurements have reasonably good resolution , entanglement purification will be successful . to summarize ,alice and bob can generate a shared bit by using the continuous variable code for entanglement purification , carrying out this protocol : * key distribution with entanglement purification * * alice prepares ( a good approximation to ) an epr state of two oscillators , a simultaneous eigenstate of , and sends one of the oscillators to bob . *after bob confirms receipt , alice and bob each measure ( to bits of accuracy ) the two commuting stabilizer generators of the code , and .( equivalently , they each measure the value of and modulo . )alice broadcasts her result to bob , and bob applies shifts in and to his oscillator , so that his values of and modulo now agree with alice s ( to -bit accuracy ) .thus , alice and bob have prepared ( a very good approximation to ) a bell state of two qubits encoded in one of the simultaneous eigenspaces of the two stabilizer operators . 
*alice decides at random to measure one of the encoded operators or ; then she announces what she chose to measure , but not the outcome .bob measures the same observable ; the result is the shared bit that they have generated .now notice that , except for bob s confirmation that he received the states , this protocol requires only one - way classical communication from alice to bob .alice does not need to receive any information from bob before she measures her stabilizer operators or before she measures the encoded operation or .therefore , the protocol works just as well if alice measures her oscillator before sending the other one to bob .equivalently , she prepares an encoded state , adopting randomly selected values of the stabilizer generators .she also decides at random whether the encoded state will be an eigenstate or a eigenstate , and whether the eigenvalue will be or .again , since the codewords are unphysical nonnormalizable states , alice ca nt really prepare a perfectly encoded state ; she must settle for a `` good enough '' approximate codeword . in summary, we can replace the entanglement - purification protocol with this equivalent protocol : * key distribution with encoded qubits * * alice chooses random values ( to bits of accuracy ) for the stabilizer generators and , chooses a random bit to decide whether to encode a eigenstate or an eigenstate , and chooses another random bit to decide whether the eigenvalue will be .she then prepares ( a good approximation to ) the encoded eigenstate of the chosen operator with the chosen eigenvalue in the chosen code , and sends it to bob .* after bob confirms receipt , alice broadcasts the stabilizer eigenvalues and whether she encoded a or an .* bob measures or .he subtracts from his outcome the value modulo determined by alice s announced value of the stabilizer generator , and corrects the result to the nearest integer multiple of .he extracts a bit determined by whether the multiple of is even or odd ; this is the shared bit that they have generated . to carry out this protocol ,alice requires sophisticated tools that enable her to prepare the approximate codewords , and bob needs a quantum memory to store the state that he receives until he hears alice s classical broadcast .however , we can reduce the protocol to one that is much less technically demanding .when bob extracts the key bit by measuring ( say ) , he needs alice s value of modulo , but he does not need her value of the other stabilizer generator . therefore , there is no need for alice to send it ; surely , the eavesdropper will be no better off if alice sends less classical information .if she does nt send the value of , then we can consider the protocol averaged over the unknown value of this generator .formally , for perfect ( nonnormalizable ) codewords the density matrix describing the state that is accessible to a potential eavesdropper then has a definite value of but is averaged over all possible values of it is a ( nonnormalizable ) equally weighted superposition of all position eigenstates with a specified value of mod ; _ e.g. _ in the case where alice prepares a eigenstate , we have averaged over as well , alice is sending a random position eigenstate .likewise , in the case where alice prepares an eigenstate , she sends a random momentum eigenstate .therefore , the protocol in which alice prepares encoded qubits can be replaced by a protocol that is simpler to execute but is no less effective and no less secure . 
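Bob's classical post-processing in the last step of the protocol above is plain modular arithmetic. A minimal sketch, assuming the square-root-of-pi codeword spacing used earlier (the spacing symbol is garbled in the extracted text):

```python
import numpy as np

SPACING = np.sqrt(np.pi)   # assumed codeword spacing

def extract_bit(bob_outcome, alice_value_mod):
    """Recover the shared bit from Bob's homodyne outcome.

    alice_value_mod is Alice's broadcast value modulo SPACING.  Bob subtracts
    it, rounds the difference to the nearest multiple of SPACING, and the key
    bit is the parity of that multiple.
    """
    diff = bob_outcome - alice_value_mod
    multiple = int(np.rint(diff / SPACING))
    return multiple % 2

# Toy example: Alice's value sits at an even multiple of SPACING plus the
# broadcast remainder, so the shared bit should come out 0 despite a small shift.
alice_value = 4 * SPACING + 0.3
bob_outcome = alice_value + 0.2          # small channel-induced shift
print(extract_bit(bob_outcome, alice_value % SPACING))   # -> 0
```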
instead of bothering to prepare the encoded qubit , she just decides at random to send either a or eigenstate , with a random eigenvalue .if bob had a quantum memory , he could store the state , and wait to hear from alice whether the state she sent was a or eigenstate ; then he could measure that observable .subtracting ( or ) from his measurement outcome , he would obtain an even or odd multiple of .but bob does not really need the quantum memory .as in the bb84 protocol , it suffices for bob to decide at random to measure either or , and then publicly compare his basis with alice s .they discard the results where they used different bases and retain the others .a problem with this procedure is that the position and momentum eigenstates are unphysical nonnormalizable states , and the probability distribution that alice samples to decide on what value of or to send is also nonnormalizable . for it to implementable, we need to modify the procedure so that alice sends narrow or wave packets , and chooses the position of the center of the wave packet by sampling a broad but normalizable distribution .therefore , alice and bob can adopt the following protocol : * key distribution with squeezed states * * alice chooses a random bit to decide whether to send a state squeezed in or in .she samples a ( discrete approximation to ) a probability distribution or to choose a value of or , and then sends to bob a narrow wave packet centered at that value .* bob receives the state and decides at random to measure either or . *after bob confirms receipt , alice and bob broadcast whether they sent / measured in the or basis .if they used different bases , they discard their results . if they used the same basis , they retain the result and proceed to step 4 . *alice broadcasts the value that she sent , modulo ( to -bit accuracy ) .bob subtracts alice s value from what he measured , and corrects to the nearest integer multiple of .he and alice extract their shared bit according to whether this integer is even or odd .now we are ready to combine the protocol of [ sec : qkd_qecc ] with the protocol of [ sec : qkd_cont ] .the result is a protocol based on concatenating the continuous variable code with an ] , so that the estimate of the error probability can be sharpened to after error correction and measurement in the encoded bell basis , the initial bipartite pure state of two oscillators , with entanglement given by eq .( [ ebits ] ) and ( [ squeeze_param ] ) , is reduced to a bipartite mixed state , diagonal in the encoded bell basis , with fidelity ; this encoded state has entanglement of formation ( where is the binary entropy function ) .if alice and bob have a large number of oscillators in the state , they can carry out an entanglement distillation protocol based on the concatenation of the single - oscillator code with a binary css code , and they will be able to distill qubits of arbitrarily good fidelity at a finite asymptotic rate provided that and are both below ; from eq .( [ better_error ] ) we find that this condition is satisfied for ( which should be compared with the value corresponding to a product of two oscillators each in its vacuum state ) .thus secure epr key distribution is possible in principle with two - mode squeezed states provided that the squeeze parameter satisfies ; from eq .( [ ebits ] ) and ( [ ent_form ] ) , corresponds to ebits carried by each oscillator pair , which is reduced by error correction and encoded bell measurement to ebits carried by each of the encoded bell 
pairs .now consider the reduction of this entanglement distillation protocol to a protocol in which alice prepares a squeezed state and sends it to bob . in the squeezed - state scheme, alice sends the state with probability . the width of the state that alice sendsis related to the parameter appearing in the estimated error probability according to the state alice sends is centered not at but at .nevertheless , in the squeezed state protocol that we obtain as a reduction of the entanglement distillation protocol , it is rather than that alice uses to extract a key bit , and whose value modulo she reports to bob .the error probability that is required to be below to ensure security is the probability that error correction adjusts bob s measurement outcome to a value that differs from ( not ) by an odd multiple of .as we have noted , this error probability is below 11% for , which ( from eq .( [ tilde_delta ] ) ) corresponds to ; this value should be compared to the value for an oscillator in its vacuum state .thus , secure squeezed - state key distribution is possible in principle using single - mode squeezed states , provided that the squeeze parameter defined by satisfies .when interpreted as suppression , relative to vacuum noise , of the quantum noise afflicting the squeezed observable , this amount of squeezing can be expressed as db . the error rate is below for ( ) , and drops precipitously for more highly squeezed states , _e.g. _ , to below for .for example , if the noise in the channel is weak , alice and bob can use the gaussian squeezed state protocol with ( see fig .[ fig : plot ] ) to generate a shared bit via the or channel with an error rate ( ) comfortably below ; thus the protocol is secure if augmented with classical binary error correction and privacy amplification . 
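The error probability discussed above can be estimated by a short Monte Carlo calculation. The sketch assumes that the net difference between Bob's outcome and Alice's broadcast value is Gaussian with standard deviation sigma; the exact relation between sigma and the squeezing parameters is given by the (garbled) formulas above and is not reproduced here. An "error" is a shift whose nearest multiple of the square root of pi is odd.

```python
import numpy as np

SPACING = np.sqrt(np.pi)
rng = np.random.default_rng(7)

def error_probability(sigma, n_samples=200_000):
    """Monte Carlo estimate of the probability that a Gaussian shift of
    standard deviation sigma is corrected to an odd multiple of sqrt(pi)."""
    shifts = rng.normal(0.0, sigma, n_samples)
    nearest_multiple = np.rint(shifts / SPACING).astype(int)
    return float(np.mean(nearest_multiple % 2 == 1))

for sigma in (0.3, 0.45, 0.55, 0.7):
    print(f"sigma = {sigma:.2f}  ->  error probability = {error_probability(sigma):.4f}")
```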
of course, if the channel noise is significant , there will be a more stringent limit on the required squeezing .many kinds of noise ( for instance , absorption of photons in an optical fiber ) will cause a degradation of the squeezing factor .if this is the only consequence of the noise , the squeezing exiting the channel should still satisfy for the protocol to be secure , as we discuss in more detail in [ sec : losses ] .otherwise , the errors due to imperfect squeezing must be added to errors from other causes to determine the overall error rate .so far we have described the case where the states and the states are squeezed by equal amounts .the protocol works just as well in the case of unequal squeezing , if we adjust the error correction procedure accordingly .consider carrying out the entanglement distillation using the code with general parameter rather than .the error rates are unaffected if the squeezing in and is suitably rescaled , so that the width of the and states becomes in this modified protocol , alice broadcasts the value of modulo or the value of modulo .bob subtracts the value broadcast by alice from his own measurement outcome , and then adjusts the difference he obtains to the nearest multiple of or .the key bit is determined by whether the multiple of , or , is even or odd .thus , for example , the error rate sustained due to imperfect squeezing will have the same ( acceptably small ) value irrespective of whether alice sends states with , or and ; alice can afford to send coherent states about half the time if she increases the squeezing of her other transmissions by a compensating amount .can we devise a secure quantum key distribution scheme in which alice always sends coherent states ? to obtain , as a reduction of an entanglement distillation protocol , a protocol in which coherent states ( ) are always transmitted , we must consider the case .but in that case , the initial state of alice s and bob s oscillators is a product state .bob s value of or is completely uncorrelated with alice s , and the protocol obviously wo nt work. this observation does not exclude secure quantum key distribution schemes using coherent states , but if they exist another method would be needed to prove the security of such schemes . in general , the source that we obtain by measuring half of the entangled pair is biased . if is not small compared to , then alice is significantly more likely to generate a 0 than a 1 as her raw key bit .but as we have already discussed in [ subsec : bias ] , after error correction and privacy amplification , the protocol is secure if and are both less than .this result follows because the squeezed state protocol is obtained as a reduction of an entanglement distillation protocol .the ideal bb84 quantum key distribution protocol is provably secure .but in practical settings , the protocol can not be implemented perfectly , and the imperfections can compromise its security .( see for a recent discussion . ) for example , if the transmitted qubit is a photon polarization state carried by an optical fiber , losses in the fiber , detector inefficiencies , and dark counts in the detector all can impose serious limitations . 
in particular , if the photons travel a distance large compared to the attenuation length of the fiber , then detection events will be dominated by dark counts , leading to an unacceptably large error rate .furthermore , most present - day implementations of quantum cryptography use , not single photon pulses , but weak coherent pulses ; usually the source `` emits '' the vacuum state , occasionally it emits a single photon , and with nonnegligible probability it emits two or more photons . quantum key distribution with weak coherent pulses is vulnerable to a `` photon number splitting '' attack , in which the eavesdropper diverts extra photons , and acquires complete information about their polarization without producing any detectable disturbance .a weaker pulse is less susceptible to photon number splitting , but increases the risk that the detector will be swamped by dark counts . from a practical standpoint , quantum key distribution with squeezed statesmay not necessarily be better than bb84 , but it is certainly different .alice requires a source that produces a specified squeezed state on demand ; fortunately , the amount of squeezing needed to ensure the security of the protocol is relatively modest .bob uses homodyne detection to measure a specified quadrature amplitude ; this measurement may be less sensitive to detector defects than the single - photon measurement required in bb84 .but , as in the bb84 protocol , losses due to the absorption of photons in the channel will enhance the error rate in squeezed - state quantum key distribution , and so will limit the distance over which secure key exchange is possible .we study this effect by modeling the loss as a damping channel described by the master equation here is the density operator of the oscillator , is the annihilation operator , and is the decay rate .( [ eq : master ] ) implies that where denotes the expectation value of the operator at time .integrating , we find and so , by expanding in power series , where is an analytic function , and denotes normal ordering ( that is , in , all s are placed to the left of all s ) . in particular , by normal ordering and applying eq .( [ normalevolve ] ) , we find where is the position operator .a similar formula applies to the momentum operator or any other quadrature amplitude .( [ gen_evolve ] ) shows that if the initial state at is gaussian ( is governed by a gaussian probability distribution ) , then so is the final state at .the mean and variance of the initial and final distributions are related by now let s revisit the analysis of [ sec : gaussian ] , taking into account the effects of losses .we imagine that alice prepares entangled pairs of oscillators in the state eq .( [ psi_delta ] ) , and sends one oscillator to bob through the lossy channel ; then they perform entanglement purification .this protocol reduces to one in which alice prepares a squeezed state that is transmitted to bob . in the squeezed - state protocol, alice decides what squeezed state to send by sampling the probability distribution given in eq .( [ pqa ] ) ; if she chooses the value , then she prepares and sends the state in eq .( [ psi_qa ] ) . 
when it enters the channel , this state is governed by the probability distribution and when bob receives the state this distribution has , according to eq .( [ change_prob ] ) , evolved to where by integrating over in , we can obtain the final marginal distribution for the difference : which generalizes eq .( [ delta_tilde_delta ] ) .we can express the damping factor as where is the length of the channel and is its attenuation length ( typically of the order of 10 km in an optical fiber ) .the protocol is secure if the error rate in both bases is below ; as in [ sec : gaussian ] , this condition is satisfied for .thus we can calculate , as a function of the initial squeezing parameter , the maximum distance that the signal states can be transmitted without compromising the security of the protocol .for , we find thus , the more highly squeezed the input signal , the _ less _ we can tolerate the losses in the channel .this feature , which sounds surprising on first hearing , arises because the amount of squeezing is linked with the size of the range in that alice samples .errors are not unlikely if losses cause the value of to decay by an amount comparable to . in our protocol , if the squeezed states have a small width , then the typical states prepared by alice are centered at a large value ; therefore , a small _ fractional _ decay can cause an error .on the other hand , even without losses , alice needs to send states with to attain a low enough error rate , and as approaches from below , again only a small loss is required to push the error probability over 11% .thus there is an intermediate value of that optimizes the value of , as shown in fig .[ fig : losses ] .this optimal distance , is attained for .our analysis so far applies if alice and bob have no prior knowledge about the properties of the channel .but if the loss is known accurately , they might achieve a lower error rate if bob compensates for the loss by multiplying his measurement outcome by before proceeding with error correction and privacy amplification .this amplification of the signal by bob is entirely classical , but to analyze the security in this case , we may consider an entanglement purification scenario in which bob applies a quantum amplifier to the signal before measuring .since the quantum amplifier ( which amplifies all quadrature amplitudes , not just the one that bob measures ) is noisier , the protocol will be no less secure if bob uses a classical amplifier rather than a quantum one .so now we consider whether entanglement purification will succeed , where the channel acting on bob s oscillator in each epr pair consists of transmission through the lossy fiber followed by processing in bob s amplifier .if the error rate is low enough , the key will be secure even if the amplifier , as well as the optical fiber , are under eve s control .bob s linear amplifier can be modeled by a master equation like eq .( [ eq : master ] ) , but with and interchanged , and where is now interpreted as a rate of gain .the solution is similar to eq .( [ normalevolve ] ) , except the normal ordering is replaced by _anti_-normal ordering ( all s are placed to the _ left _ of all s ) , and with replaced by the gain . 
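The combined effect of loss and a compensating amplifier on the Gaussian signal statistics, as discussed above, can be written in a few lines. The sketch assumes the standard damping-channel moment relations with vacuum quadrature variance 1/2 (units with hbar = 1); the paper's normalization and the exact garbled expressions may differ.

```python
import numpy as np

V_VAC = 0.5   # vacuum quadrature variance, assuming hbar = 1 units

def after_loss(mean, var, xi):
    """Damping channel with transmission xi = exp(-Gamma * t): the mean is
    attenuated and the variance relaxes toward the vacuum value."""
    return np.sqrt(xi) * mean, xi * var + (1.0 - xi) * V_VAC

def after_gain(mean, var, g):
    """Phase-insensitive amplifier with gain g: the mean is amplified and the
    added (anti-normally ordered) noise is (g - 1) times the vacuum variance."""
    return np.sqrt(g) * mean, g * var + (g - 1.0) * V_VAC

# Squeezed input (variance below vacuum) sent through loss, then amplified
# with g = 1/xi so that the mean is restored at the price of extra noise.
mean0, var0 = 2.0, 0.1
xi = 0.8
m1, v1 = after_loss(mean0, var0, xi)
m2, v2 = after_gain(m1, v1, 1.0 / xi)
print(f"after loss:              mean = {m1:.3f}, variance = {v1:.3f}")
print(f"after loss + gain 1/xi:  mean = {m2:.3f}, variance = {v2:.3f}")
```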
we conclude that the amplifier transforms a gaussian input state to a gaussian output state , and that the mean and variance of the gaussian position distribution are modified according to other quadrature amplitudes are transformed similarly .now suppose that a damping channel with loss is followed by an amplifier with gain .then the mean of the position distribution is left unchanged , but the variance evolves as for this channel , the probability distribution governing is again a gaussian as in eq .( [ loss_difference ] ) , but now its width is determined by error rates in the and bases are below 11% , and the protocol is provably secure , for . by solving .we can find the maximum distance ( where ) for which our proof of security holds ; the result is plotted in fig .[ fig : losses ] . when the squeezed input is narrow , , the solution becomes or comparing the two curves in fig .[ fig : losses ] , we see that the protocol with amplification remains secure out to longer distances than the protocol without amplification , _ if _ the input is highly squeezed . in that case , the error rate in the protocol without amplification is dominated by the decay of the signal , which can be corrected by the amplifier .but if the input is less highly squeezed , then the protocol without amplification remains secure to longer distances . in that case, the nonzero width of the signal state contributes significantly to the error rate ; the amplifier noise broadens the state further . with more sophisticated protocols that incorporate some form of quantum error correction, continuous - variable quantum key distribution can be extended to longer distances .for example , if alice and bob share some noisy pairs of oscillators , they can purify the entanglement using protocols that require two - way classical communication .after pairs with improved fidelity are distilled , alice , by measuring a quadrature amplitude in her laboratory , prepares a squeezed state in bob s ; the key bits can be extracted using the same error correction and privacy amplification schemes that we have already described .our proof of security applies to the case where squeezed states are carried by a lossy channel ( assuming a low enough error rate ) , because this scenario can be obtained as a reduction of a protocol in which alice and bob apply entanglement distillation to noisy entangled pairs of oscillators that they share .more generally , the proof applies to any imperfections that can be accurately modeled as a quantum operation that acts on the shared pairs before alice and bob measure them . as one example , suppose that when alice prepares the squeezed state , it is not really the or squeezed state that the protocol calls for , but is instead slightly rotated in the quadrature plane . 
andsuppose that when bob performs his homodyne measurement , he does not really measure or , but actually measures a slightly rotated quadrature amplitude .in the entanglement - distillation scenario , the imperfection of alice s preparation can be modeled as a superoperator that acts on her oscillator before she makes a perfect quadrature measurement , and the misalignment of bob s measurement can likewise be modeled by a superoperator acting on his oscillator before he makes a perfect quadrature measurement .therefore , the squeezed state protocol with this type of imperfect preparation and measurement is secure , as long as the error rate is below 11% in both bases .of course , this error rate includes both errors caused by the channel and errors due to the imperfection of the preparation and measurement .we also recall that in the protocols of [ sec : secure ] , alice s preparation and bob s measurement were performed to bits of accuracy . in the entanglement distillation scenario , this finite resolution can likewise be well modeled by a quantum operation that shifts the oscillators by an amount of order before alice and bob perform their measurements .thus the proof applies , with the finite resolution included among the effects contributing to the permissible 11% error rate .the finite accuracy causes trouble only when alice s and bob s results lie a distance apart that is within about of ; thus , just a few bits of accuracy should be enough to make this additional source of error quite small .we have described a secure protocol for quantum key distribution based on the transmission of squeezed states of a harmonic oscillator .conceptually , our protocol resembles the bb84 protocol , in which single qubit states are transmitted .the bb84 protocol is secure because monitoring the observable causes a detectable disturbance in the observable , and vice versa .the squeezed state protocol is secure because monitoring the observable causes a detectable disturbance in the observable , and vice versa .security is ensured even if the adversary uses the most general eavesdropping strategies allowed by the principles of quantum mechanics . in secure versions of the bb84 scheme, alice s source should emit single - photons that bob detects . since the preparation of single - photon states is difficult , and photon detectors are inefficient , at least in some settings the squeezed - state protocol may have practical advantages , perhaps including a higher rate of key production .squeezing is also technically challenging , but the amount of squeezing required to ensure security is relatively modest .the protocol we have described in detail uses each transmitted oscillator to carry one raw key bit . an obvious generalization is a protocol based on the code with stabilizer generators given in eq .( [ n_and_alpha ] ) , which encodes a -dimensional protected hilbert space in each oscillator .then a secure key can be generated more efficiently , but more squeezing is required to achieve an acceptable error rate . 
our protocols , including their classical error correction and privacy amplification , are based on css codes : each of the stabilizer generators is either of the `` ''-type ( the exponential of a linear combination of s ) or of the `` -type '' ( the exponential of a linear combination of s ) .the particular css codes that we have described in detail belong to a restricted class : they are _ concatenated _ codes such that each oscillator encodes a single qubit , and then a block of those single - oscillator qubits are assembled to encode better protected qubits using a binary $ ] stabilizer code .there are more general css codes that embed protected qubits in the hilbert space of oscillators but do not have this concatenated structure ; secure key distribution protocols can be based on these too . the quantum part of the protocol is still the same , but the error correction and privacy amplification make use of more sophisticated close packings of spheres in dimensions .we analyzed a version of the protocol in which alice prepares gaussian squeezed states governed by a gaussian probability distribution .the states , and the probability distribution that alice samples , need not be gaussian for the protocol to be secure .however , for other types of states and probability distributions , the error rates might have to be smaller to ensure the security of the protocol .our proof of security applies to a protocol in which the squeezed states propagate through a lossy channel , over a distance comparable to the attentuation length of the channel . to extend continuous - variable quantum key distribution to much larger distances ,quantum error correction or entanglement distillation should be invoked .strictly speaking , the security proof we have presented applies if alice s state preparation ( including the probability distribution that she samples ) can be exactly realized by measuring half of an imperfectly entangled state of two oscillators . the protocol remains secure if alice s source can be well approximated in this way .our proof does not work if alice occasionally sends two identically prepared oscillators when she means to send just one ; the eavesdropper can steal the extra copy , and then the privacy amplification is not guaranteed to reduce the eavesdropper s information to an exponentially small amount . we thank andrew doherty , steven van enk , jim harrington , jeff kimble , and especially hoi - kwong lo for useful discussions and comments .this work has been supported in part by the department of energy under grant no .de - fg03 - 92-er40701 , and by darpa through the quantum information and computation ( quic ) project administered by the army research office under grant no .daah04 - 96 - 1 - 0386 .some of this work was done at the aspen center for physics .d. mayers , `` quantum key distribution and string oblivious transfer in noisy channels , '' _ advances in cryptology proceedings of crypto 96 _ ( springer - verlag , new york , 1996 ) , pp .343357 ; d. mayers , `` unconditional security in quantum cryptography , '' j. assoc .mach ( to be published ) , quant - ph/9802025 ( 1998 ) e. biham , m. boyer , p. o. boykin , t. mor and v. roychowdhury , `` a proof of the security of quantum key distribution , '' in _ proceedings of the thirty - second annual acm symposium on theory of computing _ ( acm press , new york , 2000 ) , pp 715 - 724 , quant - ph/9912053 .d. gottesman , a. kitaev , and j. preskill , `` encoding a qudit in an oscillator , '' quant - ph/0008040 .t. c. 
ralph , `` continuous variable quantum cryptography , '' quant - ph/9907073 ; `` security of continuous variable quantum cryptography , '' quant - ph/0007024 .m. hillery , `` quantum cryptography with squeezed states , '' quant - ph/9909006 . m. d. reid , `` quantum cryptography using continuous variable einstein - podolsky - rosen correlations and quadrature phase amplitude measurements . , '' quant - ph/9909030 . c. h. bennett and g. brassard , `` quantum cryptography : public - key distribution and coin tossing , '' in _ proceedings of ieee international conference on computers , systems and signal processing _ ( bangalore , india , 1984 ) , pp .175179 ; c. h. bennett and g. brassard , `` quantum public key distribution , '' ibm technical disclosure bulletin * 28 * , 31533163 ( 1985 ) .l. m. duan , g. giedke , j. i. cirac , and p. zoller , `` entanglement purification of gaussian continuous variable quantum states , '' quant - ph/9912017 ; l. m. duan , g. giedke , j. i. cirac , and p. zoller , `` physical implementation for entanglement purification of gaussian continuous variable quantum systems , '' quant - ph/0003116 . c. h. bennett , d. p. divincenzo , j. a. smolin and w. k. wootters , `` mixed state entanglement and quantum error correction , '' phys .a * 54 * , 38243851 ( 1996 ) , quant - ph/9604024 .lo and h. f. chau , `` unconditional security of quantum key distribution over arbitrarily long distances , '' science * 283 * , 20502056 ( 1999 ) , quant - ph/9803006 .a. r. calderbank and p. w. shor , `` good quantum error correcting codes exist , '' phys .a * 54 * , 10981105 ( 1996 ) , quant - ph/9512032 .a. m. steane , `` multiple particle interference and error correction , '' proc .a * 452 * , 25512577 ( 1996 ) , quant - ph/9601029 .
|
we prove the security of a quantum key distribution scheme based on the transmission of squeezed quantum states of a harmonic oscillator. our proof employs quantum error-correcting codes that encode a finite-dimensional quantum system in the infinite-dimensional hilbert space of an oscillator and protect against errors that shift the oscillator's canonical variables. if the noise in the quantum channel is weak, squeezing the signal states by 2.51 db (a modest squeeze factor) is sufficient in principle to ensure the security of a protocol that is suitably enhanced by classical error correction and privacy amplification. secure key distribution can be achieved over distances comparable to the attenuation length of the quantum channel.
|
in recent years it has become evident that determining the precise physics of inflation requires the observation of higher order correlation functions beyond the power spectrum .these correlation functions can be obtained from the cosmic microwave background ( cmb ) and large scale structure ( lss ) , but recently , it has been shown that in principle 21-cm observations of the early universe can also be used to measure n - point statistics .because higher order correlation functions introduce more free parameters they can be used to constrain more complex models of inflation , since an increased set of parameters will allow for a unique fitting of the model to the observed data . however ,both due to computational and observational limitations , only the bispectrum has been reasonably investigated . for the detection of higher order correlations we will have to wait for more advanced data sets , such as planck and improved analysis methods , although preliminary attempts have been made .even the detection of the bispectrum is not optimal , as a bispectrum would at least be a continuous three parameter observable but thus far only constraints have been set on limiting cases , in which 2 of the parameters are fixed and the third one is measured for a predetermined triangular configuration .the limiting cases ( shapes ) are known as the local , equilateral and orthogonal ( and in the context of limiting triangular configurations ; enfolded ) non - gaussian features .precisely these features have been chosen , as it has been shown theoretically that most models of inflation produce non - gaussianities that fall in one of these three classes ( for recent reviews see ) .when constraining non - gaussianities using the bispectrum , it has been a prerequisite that the comoving momentum dependence should be factorizable ; the bispectrum should be separable into a product of functions of one variable , each variable being one of the three comoving momenta making up the connected correlation triangle .foremost , this requirement is set because of computational limitations that would render the analysis intractable if a given primordial bispectrum is not of the factorized form .the number integrals and sums one has to perform when computing an unfactorized bispectrum scale with the number of pixels as , while for factorizable shapes this reduces by one factor of .although one integral can be computed fairly quickly the number of pixels ( for wmap and for planck ) is large and one factor of can make all the difference .the constrained bispectra , local , equilateral and orthogonal , have thus far been factorized templates . in case of equilateral and orthogonal have been constructed via approximation of a predicted signal , in the local case , the template is a direct representation of the theory . for a particular type of bispectrum to be constrained , it is necessary to construct a factorized template that ` matches ' the bispectrum . 
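The computational payoff of separability mentioned above can be seen already in a toy sum: when the summand factorizes into one-dimensional pieces, a nested sum over three indices collapses into a product of three independent one-dimensional sums. This is only a schematic analogue of the pixel-number scaling argument, not the actual CMB estimator; the arrays are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 80
f, g, h = rng.random(N), rng.random(N), rng.random(N)

# Non-separable evaluation: the triple sum costs of order N**3 operations.
brute = sum(f[i] * g[j] * h[k]
            for i in range(N) for j in range(N) for k in range(N))

# Separable evaluation: because the summand factorizes into 1-d pieces,
# the triple sum collapses into a product of three 1-d sums (of order N operations).
fast = f.sum() * g.sum() * h.sum()

print(np.isclose(brute, fast))   # True: same number, vastly different cost scaling
```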
until recent, there was no given prescription how to factorize a given theoretical bispectrum .in it was shown that factorizability can be achieved in both comoving momentum and multipole space by expanding the bispectrum in mode functions that are orthogonal on the domain of the bispectrum dictated by triangle constraints .the purpose of this factorization is to be able to quickly compute the full cmb bispectrum ( ) and generate cmb maps with a arbitrary primordial statistics ( up to the trispectrum ) which are used to determine the variance of the ( statistical ) estimator . in the same paper, it was also shown that one can efficiently extract information about non - gaussianity in the observed cmb by measuring the weight of each mode in the data and comparing this to theoretical predictions . in this paperwe investigate how well this mode expansion works for a class of bispectra that contain ( a large number of ) oscillations .the reason to be interested in such features is that a number of theoretical models predict oscillations in the bispectrum and in order to be able to constrain such models , a plausible first step is to factorize these bispectra .as it is , oscillations can be considered as an extra , distinguishable , degree of freedom within the bispectrum which could result in narrowing down the number of potential scenarios of inflation .we introduce three different cosmological scenarios in which oscillations in the bispectrum can appear .we will briefly discuss the theory behind these models and show to what extend these would be distinguishable from one another in the data in section [ oscillations ] .two out of three bispectra can have significant correlation and it could be difficult to discriminate between such models in future surveys .we will discuss the method of polynomial expansion in order to rewrite the primordial bispectra in factorized / separable form in section [ mode_expansion ] .as expected , the number of modes required in the expansion grows along with the frequency of the theoretical spectra . in section [ powermodes ]we show how fast polynomial expansion would yield a reasonable reconstruction of the given bispectra predicted by the three cosmological scenarios .subsequently we will investigate another set of modes that can lead to a separable expansion of the theoretical bispectrum in section [ fouriermodes ] .these modes are based on the sine and cosine and the resulting set of orthonormal functions can be considered a fourier - type basis on the tetrahedral domain .after detailing the construction of this set of orthonormal mode functions , we will compare the number of modes required to achieve comparable correlation with the polynomial mode expansion .it turns out that this number is reduced significantly and as such fourier expansion can be considered a reasonable alternative to expand oscillatory spectra . 
for larger frequencies both fourier and polynomialmode expansion become inefficient .fortunately , it turns out that for various oscillatory signals only a limited number of modes contribute significantly in the reconstruction of the original spectrum .this has various consequences for the viability of fourier mode expansion as well as possible observational advantages compared to polynomial modes , which will be discussed in section [ discussion ] .we conclude this paper in section [ conclusion ] .in this section we will briefly discuss 3 distinct possibilities that can produce non - gaussianities that have an oscillatory component .two of these examples have an exact solution , while a third has only been solved numerically and we will use an approximate form . in the following paragraphswe will describe the physics behind these models and quote their theoretically predicted primordial bispectra .in addition we investigate how well these bispectra can be distinguished from one another by computing their correlation , which will be defined shortly .since all these bispectra have poor overlap with existing spectra , there exists substantial room for improvement , which we could achieve by approximating these shapes via mode expansion. this will be the topic of the next section . for completeness ,let us introduce ( standard ) notation .the primordial bispectrum is given by where is the gauge invariant curvature perturbation ( ) which is constant after horizon exit , is the amplitude of the primordial power spectrum ( i.e. for single field slow - roll , where is the hubble rate at the end of inflation and the slow - roll parameter ) and is the shape of the bispectrum .we will also make use of . in the followingwe will discuss the shapes of the bispectra and quote theoretically predicted ranges of their associated .we would like to refer to the literature for a detailed examination of the theoretically predicted values of in various theoretical contexts .sharp features in the potential can temporarily break slow - roll and produce large non - gaussianities . as long as the system relaxes within several hubble times, inflation can still lead to a significant amount of e - folds to solve the standard cosmological problems .the motivation for these type of features is two - fold .first , there are hints of glitches in the primordial power spectrum that could be cross - checked using the bispectrum .a second motivation is theoretical in nature . in certain brane inflation modelsthe effective 4-dimensional potential displays sharp features ( see and references therein ) .one of the possible sharp features is a step in the potential , which can be parameterized as ,\end{aligned}\ ] ] where , and respectively determines the height , width and location of the feature .the resulting bispectrum can only be computed numerically .the authors of have proposed an approximate analytic form the approximation can in principle be improved by multiplying by an ` envelope ' function , but such improvement would not gain us any more useful insight required for the analysis in this paper and we will therefore omit it . 
here is related the location of the feature in the potential .evidence for features in the power spectrum around have been put forward in .it was shown that the inclusion of features in the primordial potential could improve the best - fit .such a feature would approximately correspond to .this relation also indicates that the smaller the scale at which the feature appears the larger the associated wavelength .roughly the wavelength corresponds to the location of the feature , e.g. for a feature at the wavelength .here we do not necessarily relate to an observed feature at a specific value in multipole space since features that lead to non - vanishing bispectra can still be present with minimal consequences for the observable power spectrum .the quantities we will compute in the remainder of this paper are mostly integrals that run over the domain of comoving momentum space between .it is therefore convenient to choose our reference scale , the smallest observable scale in the data , in order to be able to compare the frequencies in the various models .we then define , , , and rewrite the shape of this bispectrum as with . for a feature at we therefore find .note that can be considered an upper limit in allowable frequencies due to features in the potential . for features at smaller scalesthe frequency will be smaller .this bispectrum with a frequency of is shown in the bottom of figure [ fig:3dbispectra ] .the amplitude of this type of non - gaussianity is governed by the width and the depth of the feature in the potential which for a feature at would imply .this type of non - gaussianity is a result of a periodic feature in the inflaton potential as apposed to a sharp feature explored in the previous example .these features will cause oscillations in the coupling(s ) of the interaction terms of the inflaton field .resonance occurs when an oscillatory mode well within the horizon grows during inflation until its frequency hits the same frequency as those of the couplings .so as long as resonance will occur at some point within the inflationary history of the mode .this resonance can result in a large contribution to the three point correlation function . in a general scenario , with an oscillatory potentialwe obtain an expression for the bispectrum of the form here is related to the frequency as with the hubble rate during inflation ( which is approximately constant ) and introduces a phase .one can also compute the general expected amplitude of non - gaussianity which is related to the frequency as here represents the amplitude of the oscillatory component of the couplings .physically such features might be realized in terms of brane inflation where the periodic feature comes from a duality cascade in the warped throat , as well as axion - monodromy inflation where the periodic feature is a result of instanton effects . 
as an examplelet us consider the latter .axion inflation is well embedded in string theory and represents a favorable candidate for inflation if the observed tensor modes are relatively large ( ) .such a scenario implies inflation occurred at energies close to the gut scale and would indicate that we require the knowledge of the uv completion .the axion potential is given by the parameter represents the axion decay parameter .the range of which would generate observable non - gaussianities and is still consistent with observations of the power spectrum is given by .the lower bound is set by the requirement that the period of the oscillation should be larger than for .for a linear zero order potential the resulting bispectrum is then given by with and . here is pivot scale ( ) , and is the value of the inflaton field when the pivot scale exits the horizon and is of order 10 ( ) . given the range for the frequency of the oscillations in the bispectrum lie within .a plot of this shape is shown in the top right figure [ fig:3dbispectra ] .the amplitude of the axion bispectrum ( for a linear potential ) is given by the amplitude is therefore proportional to a power of the frequency . for a linear potential , where is fixed by cobe normalization . from observations of the power spectrumone can constrain and therefore allowing . since inflation is an effective field theory in a curved background , choosing an appropriate vacuum state is by no means evident . in general the initial or vacuum state is chosen to be equivalent to the free field vacuum state in flat minkowski space , know as the bunch davies ( bd ) vacuum .although it seems that possible corrections to this assumption are constrained to be small ( from general observation of the power spectrum and backreaction constraints ) , it has been shown that small corrections in the bd state can result in rather large non - gaussian effects . using the currently available bounds on non - gaussianity from cmb data , deviations from a pure bunch davies staehave been constraint even further , although these constraints strongly depend on the inflationary model .however , there exist significant room for improvement as non - gaussianities from these modifications are highly oscillatory and therefore the derived constraints are relatively poor since they depend on the correlation with measured smooth bispectra .a number of different scenarios have been considered in which initial state modifications were investigated . 
herewe will not discuss all of these , although the results can differ significantly .such differences make it difficult to make robust predictions , it seems inevitable however that once you introduce a effective field theory cutoff , oscillations appear in both the power and bispectrum .we will consider one example that represents a large class of models with a non - canonical effective field theory action , which already drives large non - gaussianities to start with .this particular class has a speed of sound , such that perturbations in the medium propagate slow compared to the growth of the causal horizon .the leading order shape of the resulting bispectrum is given by here .in it was assumed that there exists a fixed physical cutoff hyper - surface that is scale dependent such that the overall momentum dependence of the bispectrum becomes scale invariant .such a choice is known as the new physics hypersurface ( nph ) , as apposed to boundary effective field theory ( beft ) approach in which the cutoff is time dependent .the subtlety is that the cutoff appears due to the presence of a non - bd state in each direction in comoving momentum space .consequently , will depend on the direction the bd vacuum has been perturbed in .this direction is set by the direction in which picks up a minus sign due to the bunch davies vacuum perturbation as explained in .one could allow for scale invariance breaking and consider beft , however there are some suggestions that such large scale invariance should have been observed already .we can rewrite the bispectrum as where and or the ratio between the largest physical momentum scale and the hubble radius at time which can be as large as .note that from this expression it seem that represents a singular line ( the enfolded limit ) .however , one can show that all infinities are cancelled against each other and the the expression is finite and vansihing within the sum .outside the sum , this expression is non - zero but finite .for example gives : ].when computing quantities numerically , such as the correlator in section , these apparent singularities can be hard to handle and we need to be aware of these .we have plotted this shape in the top left figure [ fig:3dbispectra ] .the amplitude of the non - bd bispectrum is a function of the frequency and the bogoliubov parameter quantifying the deformation away from the bd state.the way this bispectrum was computed , considered a bogolyubov correction of linear order and small speed of sound . in this particular scenario , is roughly given by from backreaction and power spectrum constraints , which could still allow observable levels of non - gaussianity .although the presented theoretical bispectra have different characteristics , we would like to get an indication how well these could be discriminated .for instance , it seems obvious that the similarity between the feature bispectrum and the resonant bispectrum could lead to significant confusion when actually traced in the data . 
in order to do so, we want to measure the distinguishability of these shapes , which is usually quantified using the amount of overlap or correlation between two shapes .one can define a inner product between two shapes the correlation between two shapes and is then defined as here is a weight function , which was chosen as in to increase resemblance with the fisher matrix ( correlation ) found in multipole space .the integral runs over the ` tetrahedral ' domain , which is bounded by the following triangle constraints where , .before we compute the correlation between the shapes , let us perform a quick qualitative analysis in order to get an indication of what to expect .first of all , note that the shape coming from initial state modifications ( eq . ) is clearly different from the other two . while for features ( eq .the argument in the oscillating functions explicitly depends on the sum all three comoving momenta , the argument in eq .depends on the ratio of momenta .consequently we can expect a rather small overlap .this becomes even more apparent once we adapt a new set of variables proposed in . as a consequence the argument in eq .will depend on the two variables and , while the arguments in eq . and will only depend on . in that sense , we can say that oscillations in these shapes are in _ orthogonal _ directions .in addition , for both the feature and resonant bispectrum the frequency is fixed along one direction. that is , the frequency does not change ( feature ) or only slightly changes ( resonant ) when you run through a fixed direction in comoving momentum space . for the non - bd bispectrumhowever the argument in the oscillating function has a component that scales as .consequently for the effective frequency . naturally , is cutoff from below ( as ) , however even with a cutoff the range in effective frequencies is large along a direction .this effect is present at all frequencies , and it turns out it will determine the efficiency of mode expansion for this bispectrum discussed in the next section .we have numerically calculated the correlator as defined in eq . between both feature bispectra and non - bd spectrum .we found the correlation to be maximal for low values of both frequencies ( of order 1 percent around ) , indicating that there is no evidence for a particular resonant frequency ; the largest correlation occurs due to the fact that there are less oscillations , thereby decreasing the chance for ( almost perfect ) cancelations in the integral .as expected , we can safely conclude that these shapes are distinguishable / orthogonal .for the two bispectra of eq .and we can expect a larger correlation .the appearance of a in eq .is the only major difference between the two bispectra . in the new coordinate set ,the bispectrum of does not depend on the or .let us try to make a simple analytical approximation of the relevant correlator before we compute the correlation numerically .the first term in dominates the second for large values of .therefore for simplicity we neglect the second term . as a consequenceboth terms now depend only on . 
in the computation of the correlatorthe integration over and drops out and to get an indication of the resonance we only need to investigate the following integral : where we assumed that at most .this integral can be done analytically and results in a sum of functions ( we have set ) .the interpretation of the result is rather complicated as all terms are divergent and there are no terms that can be easily neglected .however , one can plot the result and find that there is a clear resonance ` area ' around .we have confirmed this resonance as a function of frequency when considering the full expression and allowing both phases to be non - zero .we have plotted ( fig .[ fig : cosine_feat_res ] ) the correlation for a range of frequencies ( ) and a phase .the largest values obtained from this numerical computation are of order 0.6 , or 60 percent correlation ( we have used discreet steps of ) , and we expect there to exist correlation of for some specific values of ) . as such it will be hard to discriminate between these two models solely using observations of the bispectrum ( as one could simply confuse frequencies ) . however , as mentioned before , axion inflation for example predict a large scalar to tensor ratio .measurement of could break the degeneracy between a sharp feature in the potential versus axion inflation .in addition , one does not expect since it would not produce observational , while for the feature bispectrum the natural frequency is no larger than .if one would be able to extract a frequency from the data , a large frequency would favor a resonant model while a low frequency could indicate a sharp feature ., while the dark shaded areas correspond to correlations close to 0 .the correlation was computed with . ]the discussed primordial bispectra have very little in common with the constrained local , equilateral and orthogonal bispectra . typically , to constrain any type of non - gaussianity onecomputes the correlator ( eq . ) and derive the so - called ` fudge ' factor which indication how much ` signal ' leaks into an existing template with the use of the fudge factor one is able to deduce a bound on the amplitude of the unconstrained bispectrum .the reason why certain templates have been constrained and some others have not , is two - fold .first and foremost , until now most models produced non - gaussianities that can roughly be placed in one of the constrained types . for this reason , it was not immediate to search for any other type , simply because there were no models that indicated bispectra with completely orthogonal characteristics . 
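The weighted inner product, the tetrahedral domain, and the feature-resonant overlap discussed above can be made concrete with a small Monte-Carlo sketch. The weight w = 1/(k1+k2+k3) is one common choice, and the two shapes used here (a pure sine of k1+k2+k3 standing in for the feature case, and a sine of the logarithm of k1+k2+k3 standing in for the resonant case, with phases set to zero and no envelope or prefactors) are simplified stand-ins for the model bispectra, so the numbers should be read only qualitatively.

```python
import numpy as np

rng = np.random.default_rng(0)

def tetra_samples(n, kmin=0.01, kmax=1.0):
    """Draw points satisfying the triangle inequalities on the tetrahedral domain."""
    k = rng.uniform(kmin, kmax, size=(4 * n, 3))
    ok = ((k[:, 0] <= k[:, 1] + k[:, 2]) &
          (k[:, 1] <= k[:, 0] + k[:, 2]) &
          (k[:, 2] <= k[:, 0] + k[:, 1]))
    return k[ok][:n]

k = tetra_samples(200_000)
K = k.sum(axis=1)        # k1 + k2 + k3
w = 1.0 / K              # one common choice of weight

def inner(s1, s2):
    """Monte-Carlo estimate of the weighted inner product (the volume factor cancels)."""
    return np.mean(w * s1 * s2)

def corr(s1, s2):
    return inner(s1, s2) / np.sqrt(inner(s1, s1) * inner(s2, s2))

feature = np.sin(100.0 * K)                 # feature-like: oscillation linear in k1+k2+k3
freqs = np.linspace(5.0, 200.0, 100)        # scan of resonant-like log-frequencies
overlaps = np.array([corr(feature, np.sin(f * np.log(K))) for f in freqs])

best = freqs[np.argmax(np.abs(overlaps))]
print(f"largest |correlation| = {np.abs(overlaps).max():.2f} at log-frequency ~ {best:.0f}")
print(f"correlation with a constant shape  = {corr(feature, np.ones_like(K)):.3f}")
```

The frequency at which the overlap peaks depends on the assumed sampling range of the momenta, so only the existence of a resonance region, not its precise location, should be taken from such a sketch.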
of course , optimally , one would simply look for the full bispectrum as a function of the multipole numbers instead of constraining the amplitude in particular bispectral configuration , but the low s / n and computational limitations have so - far restrained us to the former .the second reason not to look for more ` exotic ' bispectra is that for a fast estimator , the bispectrum one would like to constrain needs to be factorizable and scale invariant .that is , it is useful if the bispectrum can be we written as sum of products of functions , where each function only depends on one direction in multipole or comoving momentum space .it has been shown that such factorizability reduced the number of computations one has to make in order to constrain the amplitude of the bispectrum by a factor , where is the number of observable multipoles of the experiment ( leaving only computations ) .the constrained non - gaussian amplitudes ( in the form of , where labels the comoving momentum type , local , equilateral or orthogonal ) are all based on templates that are factorized in the manner explained above .for instance , although dbi inflation does not produce a factorized bispectrum , it is well approximated by the equilateral template , that is factorized by construction .the same is true for both the local and orthogonal template , as well as the enfolded template .however , the method for constructing such factorized approximations of existing theoretical bispectra is rather ad - hoc . until recently there was no procedure no construct a factorized bispectrum using a consistent prescription . in ,a method for constructing factorized approximations to theoretical bispectra has been proposed using polynomial expansion .the approach is fairly straightforward ; one defines a set of orthonormal 3 dimensional functions ( where orthonormal is defined using a correlator of the form .once computing the correlator between the original and the reconstructed spectrum one can take in order to see how much of an effect projection onto multipole space can have .we find that it reduces the correlation by to in both polynomial expansion and fourier expansion .as such , it should not effect the conclusions we draw in this paper where all correlation shown are based on . in order to build modes that are optimized for multipole expansionyou should start by considering a weight function .this is beyond the scope of this paper . ]eq . , and the weight function can be adjusted ) which are a - priori factorized and from there one computes the corresponding weight factors ( ) via the inner product between a number of polynomial modes ( ) up until a sufficient overlap between the polynomial expansion and the original bispectrum is established , i.e. until such that without discussing the details of constructing such polynomial modes ( see for a detailed description ) , here we want to try and investigate how well this would work in case of oscillatory bispectra of , and .before we do so , let us make a few notes .first of all , recall that the objective of the expansion is to factorize a given theoretical bispectrum .however , as you can see from eq . 
, this particular bispectrum , albeit a best - fit approximation , , where and are fitted to the numerical results .the envelope function is therefore also factorizable .again , we did not consider this envelop since it is smooth compared to the oscillatory part of the bispectrum .however , such an envelope could be of significant influence in predicting the correlation in multipole space .] , is already of the factorized form .one can still try to expand this in terms of power law polynomials , as described here , since polynomial modes will in general behave better numerically .the other two examples of primordial bispectra are not factorizable in terms of oscillating functions using simple identities .consequently , the polynomial expansion seems to be a good first effort in order to set up an approximately factorized form .secondly , were we able to expand these into a factorized form , and subsequently projected to multipole space and applied to the data , we might still miss the entire signal , simply because one of the free parameters is the frequency of the oscillations .for a non - bd bispectrum and the axion inflation model , the range of possible frequencies spans ( at least ) 2 orders of magnitude . therefore , if we would fix the frequency , searching for a signal with a constructed factorized template would probably not be the best approach .fortunately , we will later see that if you would measure mode functions in the data , instead of a fixed template , one could in principle extract information about a variety of oscillating signals .let us emphasize that even if we would not be able to reconstruct a factorized form of a given spectrum with a small number of modes , it is still very well possible we could observe the same spectra by measuring a small number of mode functions in the data ( effectively the frequency ( and the phase ) remain a free parameter during mode extraction ) .first we consider the bispectrum coming from a feature in the potential ( eq . ) . out of the given examples it has the simplest form ( excluding the envelope ) .we choose for simplicity , and since the phase can always be scaled out it will not affect the results but we found no difference when expanding between the cosine and sine in terms of the required number of modes . ] . in table [ tab : number_of_modes ] we have computed the number of modes necessary to get a correlation of at least with the original spectrum for several values of .as expected , as the frequency is increased , one has to expand the bispectrum with a ( rapidly ) growing number of modes .for we get a correlation with 82 modes . on itself , it actually quite remarkable that one is able to reproduce the spectrum with a limited number of modes . recall that the possible feature at would result in a ( decaying ) oscillation with , which would be hard to fit this way . on the other hand , as we argued earlier , a frequency of can be considered an upper limit , as features at higher multipole number would result in longer wavelengths . .as the frequency is increased it requires a rapidly growing number of modes to get over correlation with the original spectrum .[ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] the polynomial expansion of is based on power modes , i.e. 
the expansion is in increasing order of .this is not necessarily optimal for describing oscillatory functions .there are two possible alternatives ; the first one would be to expand the argument into a sum of functions , that each depend on one direction only .a such , one can again use trigonometric identities to expand the cosine and sine into factorized forms ( be that oscillatory functions ) .the second option could be to use a fourier expansion instead of a polynomial expansion .this would only be useful if for large frequencies you would need a small(er ) number of modes . before we get into fourier mode expansionlet us briefly discuss the alternative of expanding the argument in the oscillatory function .this option would only suffice if the approximation requires 2 modes maximally .if it requires more modes , you will get product of two or three different directions in momentum space , and as a result you will not be able to expand the cosine and the sine .let us consider the axion model .the argument is given by .using the mode expansion , one finds that one can achieve correlation after just two polynomial modes ; zero order and first order . not surprisingly this is almost equivalent to a taylor expansion to first order of around the point . consequently , there are no cross - terms , and one can expand the cosine and sine into factorizable function of the three comoving momenta , just like you could expand the feature spectrum into oscillating functions . as it turns out however , although there is a correlation between the arguments after expansion , the full bispectrum is very sensitive to small deviations in the argument , especially for large frequency .consequently , the correlation between the full bispectrum and the approximated bispectrum decreases as a function of the frequency ; from for to for .although this is equivalent to what can be achieved with the polynomial expansion using modes , the problem is that we can not improve it in any way . since this will only work for a first order expansion , we can never reach beyond correlation , unlike the polynomial expansion , where we can simply include more modes .note that for non - bd model this method will not work as the argument is already a product of two directions in comoving momentum space , i.e. . the second option is to consider a fourier expansion , where we try and fit terms such as to a sum of fourier modes that all depend on one direction only .such factorization would still lead to the reduction in computation , since the integrals in space can now be performed individually .we consider $ ] as our basis function ( as apposed to ) and constructed a orthogonal set of three dimensional mode function similar to .the first few one dimensional functions are given by the functions are shown up to in figure [ fig : qmodes ] . from theseone can construct the three dimensional basis functions via a product of each mode and symmetrization of three comoving momentum arguments ; , and .\nonumber\end{aligned}\ ] ] one has to introduce a counting scheme to re - numerate the three labels to .we have chosen equal slicing counting , of which the first modes ( ) and their association ( ) are shown in table [ tab : mode_numbers ] .after the construction of these modes , one has to apply additional gramm schmidt orthogonalization to to increase orthonormality of different mode functions .we refer to the three dimensional orthonormalized modes as and the corresponding weights as . 
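A sketch of the construction just described: take one-dimensional Fourier-type modes, form symmetrized three-dimensional products, and Gram-Schmidt orthonormalize them on the tetrahedral domain with respect to a weighted inner product. The particular one-dimensional modes, the weight, the counting scheme, and the Monte-Carlo evaluation of the inner product are illustrative choices rather than the exact prescription used in the paper.

```python
import numpy as np
from itertools import combinations_with_replacement, permutations

rng = np.random.default_rng(1)

# Monte-Carlo points on the tetrahedral domain (triangle inequalities enforced).
pts = rng.uniform(0.0, 1.0, size=(400_000, 3))
ok = ((pts[:, 0] <= pts[:, 1] + pts[:, 2]) &
      (pts[:, 1] <= pts[:, 0] + pts[:, 2]) &
      (pts[:, 2] <= pts[:, 0] + pts[:, 1]))
pts = pts[ok]
w = 1.0 / pts.sum(axis=1)    # illustrative weight

def dot(a, b):
    return np.mean(w * a * b)

def q(n, x):
    """Illustrative 1-d modes: 1, cos(pi x), sin(pi x), cos(2 pi x), sin(2 pi x), ..."""
    if n == 0:
        return np.ones_like(x)
    m = (n + 1) // 2
    return np.cos(m * np.pi * x) if n % 2 else np.sin(m * np.pi * x)

# Symmetrized 3-d products q_i q_j q_k, ordered by a simple counting scheme.
raw = []
for i, j, k in combinations_with_replacement(range(4), 3):
    raw.append(sum(q(a, pts[:, 0]) * q(b, pts[:, 1]) * q(c, pts[:, 2])
                   for a, b, c in set(permutations((i, j, k)))))

# Gram-Schmidt orthonormalization with respect to the weighted inner product.
basis = []
for v in raw:
    for u in basis:
        v = v - dot(v, u) * u
    basis.append(v / np.sqrt(dot(v, v)))

off_diag = max(abs(dot(basis[a], basis[b]))
               for a in range(len(basis)) for b in range(a))
print(f"built {len(basis)} orthonormal 3-d modes; max residual overlap = {off_diag:.1e}")
```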
if would have been complex , one should add in order to take this into account .the coefficients can be computed by taking the inner product ( eq . ) between the original shape function ( bispectrum ) and the various mode functions , i.e. within the tetrahedral domain for the first 11 modes . ] for frequencies and . compared with the polynomial mode expansion we reach similar correlation using about 5 times less modes .also note that the increase of correlation is somewhat discreet , indicating that we might need only a fraction of these modes to reconstruct the original spectrum .we will discuss this observation in the next section . ] after just 5 modes , the fourier remains stuck at . ] for the feature bispectrum we do not necessarily have to consider the fourier expansion since that spectrum can be rewritten into a product of fourier modes simply by using trigonometric identities , e.g. the other two bispectra are not of the same form , since their arguments are non - linear functions , i.e. for resonant non - gaussianities and for for non - bd modifications and these can be made of the form above by expanding , using the constructed fourier modes . given the form of the first argumentyou expect only a limited number of modes to significantly contribute , for example those modes that have equal mode number in the directions , and ( you should think about this expansion as a series around the point , see table [ tab : mode_numbers ] ) .for the second argument you expect more modes to matter , since the arguments depend on all three directions independently .consequently the weights are expected to be close to zero for many when expanding resonant non - gaussianities , while for the non - bd scenario they should all matter to some extend ( and obviously more modes will be important for large ) .we have computed the correlation for the axion bispectrum with the fourier expansion for frequency ranges of up to 82 modes ( figure [ fig : axion_fourier ] ) .as expected , we see that there are only a few modes that give significant contribution to the correlation , while most modes give only very little contribution and are not important for the expansion .we will discuss this fact in the context of cmb data mode extraction in the section 4 .given that the allowed range of frequencies this expansion is actually reasonable for the lower frequencies and the number of modes necessary to establish similar correlation as the polynomial expansion is reduced by a factor 5 .as for the polynomial basis expansion , the presence of a large number of features in the non - bd bispectrum does not allow for a fast reconstruction of the spectrum .in fact , expansion in the fourier basis requires even more modes compared to the polynomial basis , reaching only correlation after 82 modes with .we also find that actually reaches a slightly larger correlation , although this seems mostly due to a relatively large correlation with the zero order ( ) mode .most likely this is caused by the fastest oscillating part of the spectrum which , in combination with numerics , could add constant power .we did observe something similar in figure [ fig : non_bd_pol_modes ] for polynomial modes where the zero mode causes the correlation of the non - bd bispectrum reconstruction with to be better initially compared to bispectrum expansion with . 
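The bookkeeping behind statements like "similar correlation with about five times fewer modes" can be illustrated with a one-dimensional toy problem: expand an oscillatory target in an ordered polynomial basis and in an ordered Fourier basis, orthonormalizing as one goes, and count how many modes are needed to pass a chosen correlation threshold. The target function, its frequency, the threshold, and the mode orderings are all illustrative, so the printed counts are not meant to reproduce the paper's factor of five, only to show the mechanics.

```python
import numpy as np
from numpy.polynomial import legendre

x = np.linspace(0.0, 1.0, 20_001)
target = np.sin(20.0 * np.pi * x + 0.2)   # oscillatory 1-d stand-in for a bispectrum slice

def dot(a, b):
    return np.mean(a * b)

def modes_needed(raw_modes, threshold=0.95):
    """Orthonormalize the modes in the given order, project the target mode by mode,
    and report how many modes are needed to exceed the correlation threshold."""
    basis, recon = [], np.zeros_like(x)
    for v in raw_modes:
        for u in basis:
            v = v - dot(v, u) * u
        v = v / np.sqrt(dot(v, v))
        basis.append(v)
        recon = recon + dot(target, v) * v
        den = np.sqrt(dot(recon, recon) * dot(target, target))
        if den > 0.0 and dot(recon, target) / den > threshold:
            return len(basis)
    return None

n_max = 80
poly = [np.ones_like(x)] + [legendre.legval(2.0 * x - 1.0, [0.0] * n + [1.0])
                            for n in range(1, n_max)]
fourier = [np.ones_like(x)]
for m in range(1, n_max // 2 + 1):
    fourier += [np.sin(2.0 * np.pi * m * x), np.cos(2.0 * np.pi * m * x)]

print("polynomial modes needed:", modes_needed(poly))
print("Fourier modes needed:   ", modes_needed(fourier))
```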
in most realistic scenarios ( otherwise your effective field theory approach breaks down ) andtherefore both polynomial expansion and fourier expansion fail to reconstruct this bispectrum effectively .the possible explanation why fourier expansion is even worse than polynomial expansion for this type of bispectrum , seems to be related to the rapid change in frequency in a fixed direction .fourier expansion is optimized for scale invariant frequencies .the polynomial expansion is simply optimized in reproducing as many different shapes as possible , explaining the observation that it is able to slowly increase correlation with the addition of modes while fourier expansion seems to converge .given the large enhancement of the amplitude ( which scales as ) , one might still be able to extract some information from that data even with such small correlations .another possibility is that once non - bd bispectrum is projected onto multipole space one might establish a larger correlation with fewer ( multipole ) modes .the projection has the tendency to wash out small features ( hence the weight of in the correlator . ) .we hope to report on this in the future . to investigate the power of the fourier expansion for oscillatory bispectra we have also tried to fit three toy - model shapes moving in different direction through comoving momentum space we find again that for such a shapes the correlation increases about 5 times faster compared to polynomial mode expansion with the same frequency .the correlation as a function of mode numbers for and are shown figure [ fig : correlation_toy_spectra ] .we will discuss the weights of these models in the next section .note that is already of the factorized form , however here we simply aim at showing the effectiveness of fourier expansion .we want to emphasize that these spectra are not based on any physical model , but simply show that in general oscillatory spectra are better fitted using a fourier basis .although the fourier expansion seems to work well for resonant non - gaussianities and the toy - spectra , compared to polynomial expansion we confirm that fourier expansion is not as effective : it is easier to gain fast convergence with a limited number of modes for most oscillating bispectra , but it is difficult to get correlation beyond for smooth bispectra .this is probably due to overshooting at the boundaries as discussed in .we explicitly show this in figure [ fig : fourier_vs_power ] where we compare expansion of the ` smooth ' dbi inflation bispectrum ( which is very similar to equilateral ) , using fourier modes and polynomial modes .we conclude that fourier expansion is a viable alternative for polynomial expansion in the case of oscillatory bispectra with relatively large frequencies . using the fourier expansionwe can achieve factorizabilty of various oscillating bispectra with significantly less modes compared to polynomial expansion . for frequencies polynomial and fourier expansion are both unable to reconstruct the original spectrum with a small number of modes . in order to reconstruct models with such large frequencies, one should look for alternative methods .however , constraining these models with only limited number of modes seems to be a practical possibility. 
this will be topic of the next section .( dashed ) with and ( solid ) with showing that these both peak for similar mode numbers .although distinguishing between these would be quite hard , it seems that for the feature bispectrum the values of the weights are peaked sharper . ]even though the expansion of the oscillatory primordial bispectra becomes unavailing for really large frequencies , there are a number of interesting observations which could make constraining and expanding oscillating bispectra much more viable than presently argued .first of all , as predicted , the expansion in mode functions of the resonant bispectrum has a very discrete character ; basically if you consider fig .[ fig : axion_fourier ] only few modes actually contribute significantly to the convergence of the correlation . in fig .[ fig : alphas ] we show the various weights ( ) as a function of mode number ( as well as for and ( not shown ) ) .we can trace back the corresponding mode numbers in table [ tab : mode_numbers ] .for instance there is a clear peak at , which correspond to all directions being maximally of quadratic order , and with all directions being maximally of cubic order . other peaks ( e.g. , and ) correspond to the modes in which two out of three directions have one and two maximal orders less than the third , i.e. in mode number two directions are maximally quadratic and the third is maximally cubic .as we already argued the location of these peaks makes sense , since the resonant model is a function of ( or ) , which is the sum of the three comoving momenta .effectively this shape is orientated in the direction .one could only try to expand the spectrum only in those modes , which could significantly reduce the number of modes necessary .since the important modes seem to be related to the direction of propagation of the oscillation , we find that this conclusion is independent of the phase .in other words , only the value of the weights will differ , not the mode numbers that are relevant for the expansion .secondly , from an observational point of view , given the discreteness of the correlation it is ( obviously ) not necessary to constrain all mode functions in the cmb data to get an indication of there is an oscillatory three point signal and what the possible frequency of this signal might be .for resonant non - gaussianities we only need to consider those modes that have a significant weight , and the measured value of the weights would be a direct measure of the frequency . if one could extract the multipole projected fourier modes that are responsible for most of the weight , this could in principle provide signatures of primordial bispectra with frequencies much larger than .measuring modes up to e.g. would not only provide information about the frequency of the signal , but could also hint on the type of primordial bispectrum .the distinction between the feature bispectrum and the resonant bispectrum would be more difficult , since the values of the weights peak at similar mode numbers although we have found that expanding the feature bispectrum in the constructed fourier basis ( instead of the simple trigonometric expansion discussed in section [ feature_bispectrum_2 ] ) could still be used to discriminate between the two signals ( see figure [ fig : axion_vs_feature ] ) . to emphasize the ability to extract information on the primordial shape solely from the modes that are important, we have investigated three toy - model shapes of eq . 
.we have computed the fourier weights for two different frequencies in figure [ fig : alphas2 ] . as expected , has weights that peak when only one comoving momentum in in space is non - zero is , i.e. it peaks at the modes where one momentum oscillates and the other two momenta are constant ( see [ tab : mode_numbers ] ) .the obvious reason is that each term in depends on one comoving momentum variable only , implying that there should be no cross terms in the expansion .for we find that many more modes are relevant , which makes perfect sense given that the argument in the sine depends on all three vectors in comoving momentum space .for however the argument effectively only depends on two comoving momenta , therefore the relevant mode functions ( the ones with the largest ) are the ones that have similar frequency in two momentum vectors and are constant in the third . in this paper we have only discussed mode functions in momentum space , and one either has to construct similar fourier modes in multipole space or project these modes forward using the transfer function , and use these to expand a late - time oscillatory bispectrum , and see if we get similar results in terms of mode number sensitivity .one expects that after projection the transfer function has caused some smoothing of the signal , which could render a fourier basis less effective . on the other hand , intuitively it seems perfectly reasonable that a fourier basis should be much more efficient in reconstructing oscillatory bispectra from the data .in addition , the effects of the transfer function on the correlation in space can be examined by choosing the weight in the primordial correlation function .we have found that our results were only marginally affected when including this weight factor and therefore we expect that fourier mode expansion should be equally efficient in multipole space . to make sure that this is actually true, we should compute the projection of several oscillatory bispectra and construct a orthonormal fourier basis in multipole space .we hope to report on this in the near future .we have investigated the viability of mode expansion for bispectra that contain oscillations .the motivation for investigating such features and their mode expansion , is that recently it has been shown that several scenarios or mechanisms can produce such features not only in the power spectrum , but also in the bispectrum .the appearance of oscillations in the bispectrum makes comparison with existing bispectral constraints , based on smooth bispectra , very inefficient and there exists substantial room for improvement . in order to constrain oscillatory bispectra from the data ,a logical first step is to factorize the bispectrum in order to efficiently compute its multipole counterpart .polynomial expansion has been proposed to achieve factorization of a given theoretical bispectrum and we have investigated this for three different models .as expected , the larger the frequency of the primordial bispectrum , the more modes it requires to establish a reasonable approximation of the original spectrum . 
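As discussed above, only a handful of weights carry most of the signal for the resonant-type expansion. For an orthonormal basis that fully represents the signal, the fraction of signal power captured by a truncated reconstruction is simply the fraction of the summed squared weights that is retained, which the toy example below illustrates; the weight spectrum, with a few dominant entries on top of a small broadband background, is invented for illustration and does not correspond to a specific model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy expansion weights: a few dominant modes on top of a small broadband background,
# mimicking the sparse weight spectrum described for the resonant bispectrum.
n_modes = 80
alpha = 0.02 * rng.standard_normal(n_modes)
alpha[[12, 19, 33, 34]] += np.array([0.9, -0.7, 0.5, 0.45])   # invented dominant modes

def captured_fraction(weights, keep):
    """Fraction of the signal power captured by the `keep` largest-|weight| modes
    (valid for an orthonormal basis that fully represents the signal)."""
    idx = np.argsort(np.abs(weights))[::-1][:keep]
    return np.sum(weights[idx] ** 2) / np.sum(weights ** 2)

for keep in (2, 4, 8, 16):
    print(f"top {keep:2d} modes capture {captured_fraction(alpha, keep):.3f} of the power")
```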
in the case of a feature in the primordial potential polynomial mode expansion might still be useful , at least for features at high multipoles ( resulting in rather small frequencies in comoving momentum space ) .in fact , during the finalization of this paper the authors of have considered a feature bispectrum and extracted 31 polynomial modes in the data , which allowed them to investigate late time bispectra with a maximal frequency of ( in comoving momentum space ) .they did not find evidence for non - zero non - gaussianity .the other two example bispectra typically have a lot more oscillations within the tetrahedral domain , resulting in many modes necessary to realize an acceptable correlation .fortunately , both the resonant and non - bd bispectrum have an amplitude that scales with the frequency .therefore , a small improvement in correlation could lead to a significant improvement in the ability to constrain the model by measuring these modes in the data and reconstructing the primordial signal .complementarily , we have proposed a different basis expansion , based on fourier functions instead of polynomials .this still leads to the necessary computational reduction one is after and therefore is a perfectly valid alternative .such expansion is more relevant for resonant and non - bd scenario , since the feature bispectrum can already be transformed into fourier modes analytically , using identities .we have shown that fourier modes are much more efficient for the resonant bispectrum , reducing the number of modes necessary to establish the same correlation as polynomial modes by at least a factor of 5 .for the non - bd bispectrum both fourier expansion and polynomial expansion are difficult .correlation increases fast with the addition of modes , but quickly converges to a fixed value , where the fixed value decreases a function of frequency .we believe that this is due to the exact form of the bispectrum , which has many small features near the edges of the tetrahedral domain .one might hope that some of these very small features are washed out when you compute the multipole equivalent , although that would be very time consuming since the non - bd shape is not of the factorized form .we hope to investigate this in a future attempt .in addition we have investigated three toy - spectra , not based on any particular model , which have a different oscillating orientation compared to the three theoretical models . expanding these in fourier modesshow similar improvement compared to polynomial expansion as the resonant bispectrum . in general, we therefore belief that fourier expansion is much more effective in the expansion of oscillatory spectra compared to polynomial basis expansion . from an observational stand point, it seems that for resonant inflation only a limited number of modes contribute significantly in reproducing the original bispectrum .this allows us to consider only those modes that contribute substantially .this holds independent of the phase and frequency of the signal and is due to the specific form of this bispectrum , which oscillates ( primarily ) in the direction .because the modes that are important for the reconstruction of the original bispectrum are independent of the frequency , this also implies that when one would observe these modes in the data one could in fact find evidence for much larger frequencies than discussed here , simply because for larger frequencies these modes will also matter but their respective weight will be smaller . 
despite the fact that we could not optimally expand the non - bd bispectrum using fourier modes , we did look into the three toy - sepctra .we found that other modes are important .moreover , the modes that are important directly represent the orientation of the oscillating spectrum and could therefore discriminate between different bispectra quite effectively .if this conclusion holds after forward projection into multipole space , measuring a number of fourier mode functions in the cmb data would present an efficient way of deducing whether oscillations are present in the data and could give both an indication of the frequency and the shape of the primordial bispectrum .the author would like to thank jan pieter van der schaar , pier stefano corasaniti , ralph wijers , licia verde , ben wandelt , james fergusson , xingang chen and michele luguori for very helpful discussions and comments on the manuscript .he would also like to thank the hospitality of damtp , cambridge , where this paper was finalized .the author was supported by the netherlands organization for scientific research ( nwo ) , nwo - toptalent grant 021.001.040 .a. cooray , phys .lett . * 97 * , 261301 ( 2006 ) .a. pillepich , c. porciani and s. matarrese , astrophys .j. * 662 * , 1 ( 2007 ) .a. cooray , c. li and a. melchiorri , phys .d * 77 * , 103506 ( 2008 ) .j. smidt , a. amblard , a. cooray , a. heavens , d. munshi and p. serra , `` a measurement of cubic - order primordial non - gaussianity ( and ) with wmap 5-year data , '' arxiv:1001.5026 [ astro-ph.co ] .creminelli , a. nicolis , l. senatore , m. tegmark and m. zaldarriaga , jcap * 0605 * , 004 ( 2006 ) .a. gangui , f. lucchin , s. matarrese and s. mollerach , astrophys .j. * 430 * , 447 ( 1994 ) .e. komatsu and d. n. spergel , phys .d * 63 * , 063002 ( 2001 ) .j. m. maldacena , jhep * 0305 * , 013 ( 2003 ) . j. r. fergusson and e. p. s. shellard , `` the shape of primordial non - gaussianity and the cmb bispectrum , '' phys . rev .d * 80 * , 043510 ( 2009 ) .l. senatore , k. m. smith and m. zaldarriaga , jcap * 1001 * , 028 ( 2010 ) . c. pahud , m. kamionkowski and a. r. liddle , phys .d * 79 * , 083503 ( 2009 ) .r. holman and a. j. tolley , jcap * 0805 * , 001 ( 2008 ) .l. covi , j. hamann , a. melchiorri , a. slosar and i. sorbera , phys .d * 74 * , 083509 ( 2006 ) .e. sefusatti , m. liguori , a. p. s. yadav , m. g. jackson and e. pajer , jcap * 0912 * , 022 ( 2009 ) .x. chen , m. x. huang , s. kachru and g. shiu , jcap * 0701 * , 002 ( 2007 ) .r. bean , x. chen , g. hailu , s. h. tye and j. xu , jcap * 0803 * , 026 ( 2008 ) .r. easther , w. h. kinney and h. peiris , jcap * 0508 * , 001 ( 2005 ). b. greene , k. schalm , j. p. van der schaar and g. shiu , _ in the proceedings of 22nd texas symposium on relativistic astrophysics at stanford university , stanford , california , 13 - 17 dec 2004 , pp 0001 _ , [ arxiv : astro - ph/0503458 ] .
|
we consider the presence of oscillations in the primordial bispectrum , inspired by three different cosmological models ; features in the primordial potential , resonant type non - gaussianities and deviation from the standard bunch davies vacuum . in order to put constraints on their bispectra , a logical first step is to put these into factorized form which can be achieved via the recently proposed method of polynomial basis expansion on the tetrahedral domain . we investigate the viability of such an expansion for the oscillatory bispectra and find that one needs an increasing number of orthonormal mode functions to achieve significant correlation between the expansion and the original spectrum as a function of their frequency . to reduce the number of modes required , we propose a basis consisting of fourier functions orthonormalized on the tetrahedral domain . we show that the use of fourier mode functions instead of polynomial mode functions can lead to the necessary factorizability with the use of only of the total number of modes required to reconstruct the bispectra with polynomial mode functions . moreover , from an observational perspective , the expansion has unique signatures depending on the orientation of the oscillation due to a resonance effect between the mode functions and the original spectrum . this effect opens the possibility to extract information about both the frequency of the bispectrum as well as its shape while considering only a limited number of modes . the resonance effect is independent of the phase of the reconstructed bispectrum suggesting fourier mode extraction could be an efficient way to detect oscillatory bispectra in the data .
|
marine ecosystems of the eastern boundary upwelling zones are well known for their major contribution to the world ocean productivity .they are characterized by wind - driven upwelling of cold nutrient - rich waters along the coast that supports elevated plankton and pelagic fish production .variability is introduced by strong advection along the shore , physical forcings by local and large scales winds , and high submeso- and meso - scale activities over the continental shelf and beyond , linking the coastal domain with the open ocean .the benguela upwelling system ( bus ) is one of the four major eastern boundary upwelling systems ( ebus ) of the world .the coastal area of the benguela ecosystem extends from southern angola ( around 17 ) along the west coast of namibia and south africa ( 36 ) .it is surrounded by two boundary currents , the warm angola current in the north , and the temperate agulhas current in the south .the bus can itself be subdivided into two subdomains by the powerful luderitz upwelling cell .most of the biogeochemical activity occurs within the upwelling front and the coast , although it can be extended further offshore toward the open ocean by the numerous filamental structures developing offshore . in the bus , as in the other major upwelling areas , high mesoscale activity due to eddies and filaments impacts strongly marine planktonic ecosystem over the continental shelf and beyond .the purpose of this study is to analyze the impact of horizontal stirring on phytoplankton dynamics in the bus within an idealized two dimensional modelling framework .based on satellite data of the ocean surface , recently suggested that mesoscale activity has a negative effect on chlorophyll standing stocks in the four ebus .this was obtained by correlating remote sensed chlorophyll data with a lagrangian measurement of lateral stirring in the surface ocean ( see methods section ) .this result was unexpected since mesoscale physical structures , particularly mesoscale eddies , have been related to higher planktonic production and stocks in the open ocean as well as off a major ebus . a more recent and thorough study performed by in the california and the canary current systems extended the initial results from . based on satellite derived estimates of net primary production , of upwelling strength and of eddy kinetic energy ( eke ) as a measure the intensity of mesoscale activity ,they confirmed the suppressive effect of mesoscale structures on biological production in upwelling areas .investigating the mechanism behind this observation by means of on 3d eddy - resolving coupled models , showed that mesoscale eddies tend to export offshore and downward a certain pool of nutrients not being effectively used by the biology in the coastal areas .this process they called `` nutrients leakage '' is also having a negative feedback by diminishing the pool of deep nutrients available in the surface waters being re - upwelled continuously .in our work , we focused on the benguela area , being the most contrasting area of all ebus in terms of stirring intensity .although the mechanisms studied by seem to involve 3d dynamics , the initial observation of this suppressive effect was essentially based on two - dimensional ( 2d ) datasets . 
in this workwe use 2d numerical analysis in a semi - realistic framework to better understand the effects of a 2d turbulent flow on biological dynamics , apart from the complex 3d bio - physical processes .the choice of this simple horizontal numerical approach is indeed supported by other theoretical 2d studies that also displayed a negative correlation between stirring and biomass . meanwhile , since biological productivity in upwelling areas rely on the ( wind - driven ) vertical uplift of nutrients , we introduced in our model a nutrient source term with an intensity and spatial distribution corresponding to the upwelling characteristics . instead of the commonly used eke , which is an eulerian diagnostic tool , we used here a lagrangian measurement of mesoscale stirring that has been demonstrated as a powerful tool to study patchy chlorophyll distributions influenced by dynamical structures at mesoscale , such as upwelling filaments .the lagrangian perspective provides a complementary insight to transport phenomena in the ocean with respect to the eulerian one .in particular , the concept of lagrangian coherent structure may give a global idea of transport in a given area , separating regions with different dynamical behavior , and signaling avenues and barriers to transport , which are of great relevance for the marine biological dynamics .while the eulerian approach describes the characteristics of the velocity field , the lagrangian one addresses the effects of this field on transported substances , which is clearly more directly related to the biological dynamics .for example the work by describes currents in the world ocean having the same level of eddy kinetic energy but having two different stirring characteristics , as quantified by lagrangian tools .further discussions comparing lagrangian and eulerian diagnostics can be found , for example , in and the above cited . to consider velocity fields with different characteristics and to test the effect of the spatial resolution ,different flow fields are used , one derived from satellite and two produced by numerical simulations at two different spatial resolutions .our modelled chlorophyll - a concentrations are compared with observed distributions of chlorophyll - a ( a metric for phytoplankton ) obtained from the seawifs satellite sensor .this paper is organized as follows . sec .[ sec : data ] is a brief description of the different datasets used in this study .[ sec : metodo ] depicts the methodology , which includes the computation of the finite - size lyapunov exponents , and the numerical plankton - flow 2d coupled model .then , our results are analyzed and discussed in sec .[ sec : results ] . finally in sec .[ sec : summary ] , we summed - up our main findings .we used three different 2d surface velocity fields of the benguela area .two are obtained from the numerical model regional ocean model system ( roms ) , and the other one from a combined satellite product .roms is a free surface , hydrostatic , primitive equation model , and we used here an eddy - resolving climatologically forced run provided by . at each grid point , linear horizontal resolution is the same in both the longitudinal , , and latitudinal , , directions , which leads to angular resolutions and . the numerical model was run onto 2 different grids : a coarse one at spatial resolution of , and a finer one at of spatial resolution . in the following we label the dataset from the coarser resolution run as _roms1/4 _ , and the finer one as _roms1/12_. 
for both runs , vertical resolution is variable with layers in total , while only data from the surface upper layer are used in the following .since the flows are obtained from climatological forcings , they would represent a mean annual cycle of the typical surface currents of the benguela region . a velocity field derived from satellite observationsis compared to the simulated fields described previously .it consists of surface currents computed from a combination of wind - driven ekman currents , at 15 m depth , derived from quickscat wind estimates , and geostrophic currents calculated using time - variable sea surface heights ( ssh ) obtained from satellite .these ssh were calculated from mapped altimetric sea level anomalies combined with a mean dynamic topography .this velocity field , labeled as _satellite1/4 _ , covers a period from june 2002 to june 2005 with a spatial resolution of in both longitudinal and latitudinal directions . to validate simulated plankton concentrations, we use a three - year - long time series , from january 2002 to january 2005 , of ocean color data . phytoplankton pigment concentration ( chlorophyll - a ) is obtained from monthly sea viewing wide field - of - view sensor ( seawifs ) products , generated by the nasa goddard earth science ( ges)/distributed active archive center ( daac ) .gridded global data were used with a resolution of approximately 9 by 9 km .fsles provides a measure of dispersion , and thus of stirring and mixing , as a function of the spatial resolution .this lagrangian tool allows isolating the different regimes corresponding to different length scales of the oceanic flows , as well as identifying lagrangian coherent structures ( lcss ) present in the data .fsle are computed from , the time required for two particles of fluid ( one of them placed at ) to separate from an initial distance of ( at time ) to a final distance of , as it is natural to choose the initial points on the nodes of a grid with lattice spacing coinciding with the initial separation of fluid particles .then , values of are obtained in a grid with lattice separation . in most of thiswork the resolution of the fsle field , , is chosen equal to the resolution of the velocity field , .other choices of parameter are possible and can take any value , even much smaller than the resolution of the velocity field .this opens many possibilities that will not be fully explored in this work ( see also fig . [fig : lat_mixing ] and [ ape : smooth ] ) . using similar parameters for the fsles computation, we also investigate the response of the coupled biophysical system to variable resolution of the velocity field , ( see for further details about the sensitivity and robustness of the fsles ) .the field of fsles thus depends on the choice of two length scales : the initial , and the final separations . as in previous works we focus on transport processes at mesoscale , so that is taken as about 110 , or 1 , which is the order of the size of mesoscale eddies at mid latitudes . 
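as a concrete illustration of this definition, the sketch below computes backward-in-time fsles on a regular grid for a synthetic, time-periodic double-gyre flow (a standard test flow standing in for the benguela velocity data); the initial separation equals the grid spacing, the final separation is a fixed multiple of it, and the integration uses a fixed-step runge-kutta scheme. all parameter values are illustrative.

```python
# minimal fsle sketch: lambda(x0) = (1/tau) * ln(delta_f / delta_0), where tau is
# the time for the separation between a grid node and its neighbours to grow from
# delta_0 (the fsle-grid spacing) to delta_f. a synthetic double-gyre stands in
# for the velocity data; backward integration (dt < 0) targets attracting structures.
import numpy as np

A, EPS, OM = 0.1, 0.25, 2 * np.pi / 10.0          # double-gyre parameters (illustrative)

def velocity(x, y, t):
    f = EPS * np.sin(OM * t) * x**2 + (1 - 2 * EPS * np.sin(OM * t)) * x
    dfdx = 2 * EPS * np.sin(OM * t) * x + (1 - 2 * EPS * np.sin(OM * t))
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def rk4(x, y, t, dt):
    k1 = velocity(x, y, t)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def fsle_grid(nx=101, ny=51, ratio=10.0, dt=-0.02, tmax=40.0):
    xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
    d0 = xs[1] - xs[0]                            # initial separation = grid spacing
    df = ratio * d0
    X, Y = np.meshgrid(xs, ys)
    offs = [(d0, 0.0), (-d0, 0.0), (0.0, d0), (0.0, -d0)]
    px = np.stack([X] + [X + ox for ox, _ in offs])   # node + its 4 neighbours
    py = np.stack([Y] + [Y + oy for _, oy in offs])
    lam = np.zeros_like(X)                        # stays 0 where delta_f is never reached
    done = np.zeros_like(X, dtype=bool)
    t = 0.0
    while abs(t) < tmax and not done.all():
        px, py = rk4(px, py, t, dt)
        t += dt
        sep = np.sqrt((px[1:] - px[0])**2 + (py[1:] - py[0])**2).max(axis=0)
        hit = (~done) & (sep >= df)
        lam[hit] = np.log(ratio) / abs(t)         # (1/tau) ln(delta_f / delta_0)
        done |= hit
    return xs, ys, lam

xs, ys, lam = fsle_grid()
print("fsle range [1/time unit]:", float(lam.min()), float(lam.max()))
```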
to compute we need to know the trajectories of the particles , which gives the lagrangian character to this quantity .the equations of motion that describe the horizontal evolution of particle trajectories in longitudinal and latitudinal spherical coordinates , , are : where and represent the eastwards and northwards components of the surface velocity field , and is the radius of the earth ( 6371 km ) .the ridges of the fsle field can be used to define the lagrangian coherent structures ( lcss ) , which are useful to characterize the flow from the lagrangian point of view .since we are only interested in the ridges of large fsle values , the ones which significantly affect stirring , lcss can be computed by the high values of fsle which have a line - like shape .we compute fsles by integrating backwards - in - time the particle trajectories since attracting lcss ( and its associated unstable manifolds ) have a direct physical interpretation .tracers , such as temperature and chlorophyll - a , spread along the attracting lcss , thus creating their typical filamental structure .the plankton model is similar to the one used in previous studies by and .it describes the interaction of a three - level trophic chain in the mixed layer of the ocean , including phytoplankton , zoo - plankton and dissolved inorganic nutrient , whose concentrations evolve in time according to the following equations : where the dynamics of the nutrients , eq .( [ eq.biolo1 ] ) , is determined by nutrient supply due to the vertical transport , its uptake by phytoplankton ( 2 term ) and its recycling by bacteria from sinking particles ( remineralization ) ( 3 term ) .vertical mixing which brings subsurface nutrients into the mixed surface layer of the ocean is parameterized in our coupled model ( see below ) , since the hydrodynamical part considers only horizontal 2d transport .the terms in eq .( [ eq.biolo2 ] ) represent the phytoplankton growth by consumption of ( i.e. primary production ) , the grazing by zooplankton ( ) , and natural mortality of phytoplankton . the last equation , eq .( [ eq.biolo3 ] ) , represents zooplankton growth by consuming phytoplankton minus zooplankton quadratic mortality .an important term of our model is the parameterization of the vertical transport of nutrients by coastal upwelling .assuming constant nutrient concentration below the mixed layer , this term can be expressed as : where the function , which depends on time and space ( on the two dimensional location ) , determines the amplitude and the spatial distribution of vertical mixing in the model , thus specifying the strength of the coastal upwelling .thus , the function represents the vertical transport due to coastal upwelling in our 2d model . upwelling intensity along the coastis characterized by a number of coastal cells of enhanced vertical ekman driven transport that are associated with similar fluctuations of the alongshore wind . following these results , we defined our function as being null over the whole domain except in a 0.5 wide coastal strip , varying in intensity depending on the latitude concerned ( see fig .[ fig : cells ] ) .six separate upwelling cells , peaking at approximately 33 , 31 , 27.5 , 24.5 , 21.5 , 17.5 , can be discerned .they are named cape peninsula , columbine / namaqua , luderitz , walvis bay , namibia and cunene , respectively , luderitz being the strongest . for the temporal dependence , switches between a summer and a winter parameterization displayed in fig .[ fig : cells ] . 
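the biological part of the model can be sketched as a small ode system. the functional forms and every parameter value below are generic placeholders of a common npz formulation (the paper's parameter table did not survive extraction), and the upwelling source is represented by a single local strength s multiplying a relaxation towards a sub-mixed-layer nutrient value.

```python
# schematic npz right-hand side (eqs. [eq.biolo1]-[eq.biolo3]): nutrient supply by
# upwelling, saturating uptake, grazing, remineralization, linear phytoplankton
# mortality and quadratic zooplankton mortality.
# NOTE: functional forms and all numbers are illustrative placeholders, not the
# values of the paper's parameter table.
import numpy as np
from scipy.integrate import solve_ivp

par = dict(beta=0.66,   # max phytoplankton growth rate [1/day]    (placeholder)
           kN=0.5,      # half-saturation for n uptake              (placeholder)
           g=1.0,       # max grazing rate [1/day]                  (placeholder)
           kP=1.0,      # grazing half-saturation                   (placeholder)
           gamma=0.75,  # zooplankton assimilation efficiency       (placeholder)
           mP=0.03,     # phytoplankton mortality [1/day]           (placeholder)
           mZ=0.2,      # zooplankton quadratic mortality           (placeholder)
           mu=0.2,      # remineralized fraction of the losses      (placeholder)
           Nb=8.0)      # sub-mixed-layer nutrient concentration    (placeholder)

def npz_rhs(t, y, S, p=par):
    """S = local upwelling strength phi(x, t); vertical supply = S * (Nb - N)."""
    N, P, Z = y
    uptake = p["beta"] * N / (p["kN"] + N) * P
    grazing = p["g"] * P / (p["kP"] + P) * Z
    losses = p["mP"] * P + (1 - p["gamma"]) * grazing + p["mZ"] * Z**2
    dN = S * (p["Nb"] - N) - uptake + p["mu"] * losses
    dP = uptake - grazing - p["mP"] * P
    dZ = p["gamma"] * grazing - p["mZ"] * Z**2
    return [dN, dP, dZ]

# relax to equilibrium for a "coastal" (strong upwelling) and an "offshore" cell
for label, S in [("coastal strip", 0.1), ("offshore", 0.0)]:
    sol = solve_ivp(npz_rhs, (0.0, 200.0), [1.0, 0.1, 0.06], args=(S,), rtol=1e-8)
    print(label, "-> n, p, z at day 200:", np.round(sol.y[:, -1], 3))
```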
when is fixed to either its summer or its winter shape described in fig .[ fig : cells ] , the dynamical system given by eqs .( [ eq.biolo1],[eq.biolo2],[eq.biolo3 ] ) evolves towards an equilibrium distribution for , and .the transient time to reach equilibrium is typically days with the initial concentrations used ( see sec .[ coupling ] ) .the parameters are set following a study by and are listed in table [ tab.bio ] . ) of the upwelling cells used in the simulations for winter and summer seasons ( following ) ., scaledwidth=70.0% ] .list of parameters used in the biological model . [ cols="<,<,<",options="header " , ] we used the velocity fields provided by and to do offline coupling with the npz model .the evolution of simulated concentrations advected within a flow is determined by the coupling between the hydrodynamical and biological models , as described by an advection - reaction - diffusion system .the complete model is given by the following system of partial differential equations : the biological model is the one described previously by the functions , and .horizontal advection is the 2d velocity field , which is obtained from satellite data or from the roms model .we add also an eddy diffusion term , via the operator , acting on , , and to incorporate the unresolved small - scales which are not explicitly taken into account by the velocity fields used . the eddy diffusion coefficient , , is given by okubo s formula , , where is the value of the resolution , in meters , corresponding to the angular resolution .the formula gives the values =26.73 for _ satellite1/4 _ and _ roms1/4 _ , and =7.4 for _ roms1/12_. the coupled system eqs . ( [ coupledsystem1]),([coupledsystem2 ] ) and ( [ coupledsystem3 ] ) is solved numerically by the semi - lagrangian algorithm described in , combining eulerian and lagrangian schemes .the initial concentrations of the tracers were taken from and they are , , and .the inflow conditions at the boundaries are specified in the following way : at the eastern corner , and at the western and southern edges of the computational domain fluid parcels enter with very low concentrations ( , , and ) . across the northern boundary ,fluid parcels enter with higher concentrations ( , , and ) .nitrate concentrations are derived from cars climatology , while p and z concentrations are taken from .the integration time step is hours .to convert the modeled values , originally in , into of chlorophyll , we used a standard ratio of as prescribed by and . in the following we refer to as `` simulated chlorophyll '' for the concentrations derived from the simulated phytoplankton p , after applying the conversion ratio ( see above ) ; and we use `` observed chlorophyll '' for the chlorophyll - a measured by seawifs .we compute the fsle with an initial separation of particles equal to the spatial resolution of each velocity field ( = 1/4 for _ satellite1/4 _ and _ roms1/4 _ , and = 1/12 for _ roms1/12 _ ) , an a final distance of = 1 to focus on transport processes by mesoscale structures at mid latitudes .the areas of more intense horizontal stirring due to mesoscale activity can be identified by large values of temporal averages of backward fsles ( see figure [ fig : mixing ] ) . 
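two ingredients of this coupled setup can be made explicit: the scale-dependent eddy diffusivity and one semi-lagrangian step. the coefficient 2.055e-4 with the grid spacing expressed in metres gives values close to the 26.73 and 7.4 quoted above (the exact numbers depend on the metres-per-degree conversion), and the advection step below is only a schematic reading of a semi-lagrangian scheme, not the paper's implementation.

```python
# okubo's empirical scale-dependent eddy diffusivity plus a bare-bones
# semi-lagrangian advection step (backtrack the departure point, interpolate the
# tracer there, apply the local reaction). diffusion would be a separate operator.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def okubo_kappa(dx_deg, km_per_deg=111.0):
    dx_m = dx_deg * km_per_deg * 1e3
    return 2.055e-4 * dx_m**1.15          # m^2/s (assumed form of okubo's formula)

for res in (1.0 / 4.0, 1.0 / 12.0):
    # expected: close to the 26.73 and 7.4 m^2/s quoted in the text
    print(f"grid {res:.4f} deg -> kappa ~ {okubo_kappa(res):.1f} m^2/s")

def semi_lagrangian_step(C, u, v, lon, lat, dt, reaction):
    """C: tracer on a (lon, lat) grid; u, v: velocity in degrees per time unit;
    reaction: callable giving the local biological tendency of C."""
    LON, LAT = np.meshgrid(lon, lat, indexing="ij")
    dep_lon = LON - u * dt                # first-order backtracking (schematic)
    dep_lat = LAT - v * dt
    interp = RegularGridInterpolator((lon, lat), C, bounds_error=False, fill_value=None)
    pts = np.stack([dep_lon.ravel(), dep_lat.ravel()], axis=-1)
    C_dep = interp(pts).reshape(C.shape)
    return C_dep + dt * reaction(C_dep)
```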
while there are visible differences between the results from the different velocity fields , especially in the small - scale patterns , the spatial pattern are quantitatively well reproduced .for instance , spatial correlation coefficient between fsles map from _ satellite1/4 _ and from _ roms1/4 _is .correlation coefficients between _ satellite1/4 _ and _ roms1/12 _ on one hand , and between _ roms1/4 _ and _ roms1/12 _ on the other hand , are lower ( and respectively ) since the fsle were computed on a different resolution .more details on the effect on the grid resolution when computing fsles can be found in . for all datasets , high stirring valuesare observed in the southern region , while the northern area displays significantly lower values , in line with .note that the separation is well marked for _satellite1/4 _ where high and low values of fsle occur below and above a line at approximately . in the case of roms flow fields ,the stirring activity is more homogeneously distributed , although the north - south gradient is still present .we associate this latitudinal gradient with the injection of energetic agulhas rings , the intense jet / bathymetry interactions and with other source of flow instabilities in the southern benguela .following we compute the eke , another proxy of the intensity of mesoscale activity .there are regions with distinct dynamical characteristics as the southern subsystem is characterized by larger eke values than the northern area , in good agreement with the analysis arising from fsles ( fig .[ fig : mixing ] ) . spatial correlations ( not shown ) indicate that eke and fsle patterns are well correlated using a non - linear fitting ( power law ) .for instance , eke and fsle computed on the velocity field from _ satellite1/4 _ exhibit a of for the non - linear fitting : .this is in agreement with the initial results from , for a related dispersion measurement , and confirmed for the benguela region by the thorough investigations of eke / fsle relationship by . .the black lines are contours of annual eke .the separation between contour levels is 100 ., scaledwidth=99.0% ] to analyze the variability of horizontal mixing with latitude , we compute longitudinal averages of the plots in fig . [fig : mixing ] for two different coastally - oriented strips extended : a ) from the coast to offshore , and b ) from to offshore ( see fig .[ fig : lat_mixing ] ) .it allows analyzing separately subareas characterized by distinct bio - physical characteristics ( see also ) , the coastal upwelling ( strip ) with high plankton biomasses and moderated mesoscale activity , and the open ocean ( from to offshore ) with moderated plankton biomasses and high mesocale activity .it is clear that horizontal stirring decreases with decreasing latitude . in fig .[ fig : lat_mixing ] ( a ) we see that , for _satellite1/4 _ , the values of fsles decay from in the southern to in the northern area , with similar significant decays for _ roms1/4 _ and _ roms1/12_. specifically the north - south difference for _ satellite1/4 _ , _roms1/4 _ and _ roms1/12 _ are of the order of , and , respectively , confirming a lower latitudinal gradient for the case of _note that there are differences in the stirring values ( fsles ) depending on the type of data , their resolution , the averaging strip , and the grid size of fsle computation .in general , considering velocities with the same resolution , the lower values correspond to _ satellite1/4 _ as compared to _roms1/4_. 
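the quoted eke-fsle relation is a power law fitted in log-log space; a minimal version of that fit is sketched below, with synthetic placeholder arrays standing in for the co-located annual-mean eke and fsle maps (the paper's exponent and prefactor are not reproduced).

```python
# power-law fit fsle ~ a * eke**b between the two stirring proxies, done in
# log-log space; the arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
eke = 10 ** rng.uniform(1, 3, 2000)                                  # placeholder
fsle = 0.02 * eke ** 0.5 * np.exp(0.2 * rng.normal(size=eke.size))   # placeholder

mask = (eke > 0) & (fsle > 0)
b, log_a = np.polyfit(np.log(eke[mask]), np.log(fsle[mask]), 1)
pred = np.exp(log_a) * eke[mask] ** b
r2 = np.corrcoef(np.log(pred), np.log(fsle[mask]))[0, 1] ** 2
print(f"fit: fsle ~ {np.exp(log_a):.3g} * eke^{b:.2f}   (r^2 = {r2:.2f})")
```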
on average , values of stirring from _roms1/4 _ are larger than those from _roms1/12 _ , whereas we would expect the opposite considering the higher resolution of the latter simulation favoring small scales processes .however , this comparison is hampered by the fact that spatial means of fsle values are reduced when computing them on grids of higher resolution , because the largest values become increasingly concentrated in thinner lines , a consequence of their multifractal character .indeed , one can not compare consistently two fsles field computed on a different resolution , whatever the intrinsic resolution of the velocity field is .the fsles computed on a 1/4 grid ( black and red lines on fig .[ fig : lat_mixing ] ) can not be directly compared to fsle fields computed on a 1/12 grid ( green line fig .[ fig : lat_mixing ] ) ( see ) .note however that when fsles are computed using the _ roms1/12 _ and _ roms1/4 _ flows but on the same fsle grid with a fixed resolution of 1/12 , one finds smaller values of fsles for the coarser velocity field ( _ roms1/4 _ ) ( see green and blue lines in fig .[ fig : lat_mixing ] ) .the effect of reducing the velocity spatial resolution on the fsle calculations is considered more systematically in [ ape : smooth ] .fsle values obtained from the same fsle - grid increase as the resolution of the velocity - grid becomes finer ( fig .[ fig.comparison_fslesmooth ] ) a general observation consistent between all datasets is that horizontal mixing is slightly less intense and more variable in the region of coastal upwelling ( from the coast to 3 offshore ) than within the transitional area with the open ocean ( 3 - 6 offshore ) .note also that a low - stirring region is observed within the 3 width coastal strip from to on all calculations .these observations confirm that the roms model is representing well the latitudinal variability of the stirring as measured from fsle based on satellite data .these preliminary results indicate that lyapunov exponents and methods could be used as a diagnostic to validate the representation of mesoscale activity in eddy - resolving oceanic models , as suggested recently by .overall , the variability of stirring activity in the benguela derived from the simulated flow fields is in good agreement with the satellite observations . as a function of latitude .a ) from the coast to 3 degrees offshore ; b ) between 3 and 6 degrees offshore ., title="fig:",scaledwidth=50.0% ] + as a function of latitude .a ) from the coast to 3 degrees offshore ; b ) between 3 and 6 degrees offshore . , title="fig:",scaledwidth=43.0% ] evolution of , and over space and time is obtained by integrating the systems described by eqs .( [ coupledsystem1 ] ) , ( [ coupledsystem2 ] ) and ( [ coupledsystem3 ] ) . the biological model is coupled to the velocity field after the spin - up time needed to reach stability ( days ) .analysing the temporal average of simulated chlorophyll ( fig .[ fig : phyto_average ] ) , we found that coastal regions with high extend approximately , depending on latitude , between half a degree and two degrees offshore .it is comparable with the pattern obtained from the satellite - derived chlorophyll data ( fig.[fig : phyto_average ] d ) ) .the spatial correlation between averaged simulated and satellite chlorophyll is as follows : for _ satellite1/4 _ versus _ seawifs _ ; for _ roms1/4 _ versus _ seawifs _ and for _ roms1/12 _ versus _seawifs_. 
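the coastally-oriented strip averages used for these latitudinal profiles (and again for the chlorophyll and flux profiles below) can be computed with a small helper like the following; the grid, the test field and the schematic coastline are placeholders, and in practice the coastline would be taken from the land mask of each dataset.

```python
# zonal mean of a (lat x lon) field over a band of fixed width measured offshore
# from a latitude-dependent coastline, e.g. the 0-3 and 3-6 degree strips.
import numpy as np

def strip_zonal_mean(field, lon, lat, coast_lon, west_offset, east_offset):
    """mean over [coast_lon(lat) - west_offset, coast_lon(lat) - east_offset]."""
    out = np.full(lat.size, np.nan)
    for j, la in enumerate(lat):
        lo, hi = coast_lon(la) - west_offset, coast_lon(la) - east_offset
        sel = (lon >= lo) & (lon <= hi)
        if sel.any():
            out[j] = np.nanmean(field[j, sel])
    return out

lon = np.arange(5.0, 20.01, 0.25)
lat = np.arange(-35.0, -16.99, 0.25)
field = np.random.default_rng(4).gamma(2.0, 0.5, (lat.size, lon.size))   # placeholder

def coast(la):
    return 18.0 - 0.1 * (la + 26.0)        # schematic coastline (placeholder)

band_0_3 = strip_zonal_mean(field, lon, lat, coast, 3.0, 0.0)
band_3_6 = strip_zonal_mean(field, lon, lat, coast, 6.0, 3.0)
print("0-3 deg band, first values:", np.round(band_0_3[:4], 3))
print("3-6 deg band, first values:", np.round(band_3_6[:4], 3))
```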
despite the very simple setting of our model , including the parameterization of the coastal upwelling , the distribution of phytoplankton biomass is relatively well simulated in the benguela area .note however that our simulated chlorophyll values are about 3 - 4 times lower than satellite data .many biological and physical factors not taken into account in this simple setting could be invoked to explain this offset .another possible explanation is the low reliability of ocean color data in the optically complex coastal waters . .logarithmic scale is used to improve the visualization of gradients in nearshore area ., scaledwidth=99.0% ] we now examine the latitudinal distribution of comparing the outputs of the numerical simulations versus the satellite chlorophyll - a over different coastally oriented strips ( fig.[fig : phyto_lat ] ) .simulated concentrations are higher in the northern than in the southern area of benguela , in good agreement with the chlorophyll - a data derived from satellite .a common feature is the minimum located just below the luderitz upwelling cell ( 28 ) , which may be related to the presence of a physical boundary , already studied and named the lucorc barrier by and .the decrease of concentration is clearly visible in the open ocean region of the _satellite1/4 _ case ( fig .[ fig : phyto_lat ] b ) ) .correlations of zonal averages between simulated and satellite chlorophyll - a are poor when considering the whole area ( ranging from 0.1 to 0.5 ) .however , when considering each subsystem ( northern and southern ) independently , high correlation coefficients are found for the south benguela ( around 0.75 ) , but not for the north .this indicates that our simple modelling approach is able to simulate the spatial patterns of chlorophyll in the south benguela , but not properly in the northern part . in the north , other factors not considered here ( such as the 3d flow , the varying shelf width , the external inputs of nutrients , realistic non - climatologic forcings , complex biogeochemical processes , etc ... )seem to play an important role in determining the surface chlorophyll - a observed from space . in fig .[ fig : evo_eddie ] we show six selected snapshots of chlorophyll concentrations every days during a days period for _roms1/12_. since both roms simulations were climatologically forced runs , the dates do not correspond to a specific year .the most relevant feature is the larger value of concentrations near the coast due to the injection of nutrients .obviously the spatial distribution of is strongly influenced by the submeso- and meso - scale structures such as filaments and eddies , especially in the southern subsystem .differences are however observed between the three data sets . in particular, it seems that for _ satellite1/4 _ and _ roms1/12 _ the concentrations extend further offshore than for _ roms1/4 _ ( not shown ) . 
in [ ape : smooth ] we provide additional analysis of the effect of the velocity spatial resolution on phytoplankton evolution .we found that velocity data with different resolution produces similar phytoplankton patterns but larger absolute values of concentrations as the spatial resolution of the velocity field is refined ( see ) , supporting the need to compare different spatial resolutions .several studies have shown that transport of chlorophyll distributions in the marine surface is linked to the motion of local maxima or ridges of the fsles .this is also observed in our numerical setting when superimposing contours of high values of fsle ( locating the lcss ) on top of phytoplankton concentrations for _ roms1/12 _ ( see fig .[ fig : evo_eddie ] ) . in some regions concentrations are constrained and stirred by lines of fsle .for instance , the elliptic eddy - like structure at , is characterized by high phytoplankton concentrations at its edge , but relatively low in its core .this reflects the fact that tracers , even active such as chlorophyll , still disperse along the lcss .days of large ( top ) values of fsle superimposed on simulated chlorophyll concentrations calculated from _roms1/12 _ in .logarithmic scale for chlorophyll concentrations is used to improve the visualization of the structures , scaledwidth=99.0% ] from fig .[ fig : phyto_lat ] it is clear that phytoplankton biomass has a general tendency to decrease with latitude , an opposite tendency to the one exhibited by stirring ( as inferred from the fsles and eke distributions in figs .[ fig : mixing ] and [ fig : lat_mixing ] ) for the three data sets .moreover , note that the minimum of phytoplankton located just below the lucorc barrier at ( fig .[ fig : phyto_lat ] ) coincides with a local maximum of stirring that might be responsible for this barrier ( fig .[ fig : lat_mixing ] a ) . spatial mean and latitudinal variations of fsle and chlorophyll - a analyzed together suggest an inverse relationship between those two variables .the 2d vigorous stirring in the south and its associated offshore export seem sufficient to simulate reasonably well the latitudinal patterns of .the numerous eddies released from the agulhas system and generally travelling north - westward , associated with the elevated mesoscale activity in the south benguela , might inhibit the development of and export unused nutrients toward the open ocean . although invoked the offshore subduction of unused nutrients ( 3d effect ), our results suggest that 2d offshore advection and intense horizontal mixing could by themselves affect negatively the phytoplankton growth in the southern benguela . to study quantatively the negative effect of horizontal stirring on phytoplankton concentration , we examine the correlation between the spatial averages over each subregion ( north and south ) and the whole area of study of every weekly map of fsle and the spatial average of the corresponding weekly map of simulated , considering each of the three velocity fields ( fig.[fig : phyto_fsle ] ) . 
for all cases, a negative correlation between fsles and chlorophyll emerges: the higher the surface stirring/mixing, the lower the biomass concentration. the correlation coefficient taking into account the whole area is quite high for all the plots, =0.77 for _satellite1/4_, 0.70 for _roms1/4_ and 0.84 for _roms1/12_, and the slopes (blue lines in fig. [fig:phyto_fsle]) have the following values: -1.8 for _satellite1/4_, -0.8 for _roms1/4_ and -2.3 for _roms1/12_. the strongest negative correlation is found for the setting with _roms1/12_. note that, similarly to the results of and , the negative slope is larger but less robust when considering the whole area rather than within every subregion. moreover, if we average over the coastal strip (from the coast to 3 offshore) and only in the south region (fig. [fig:phyto_fsle] d), e), f)) we find high values of the correlation coefficient for the _satellite1/4_ and _roms1/12_ cases. the suppressive effect of stirring might be dominant only when stirring is intense, as in the south benguela. stated that the reduction of biomass due to eddies may extend beyond the regions of the most intense mesoscale activity, including the offshore areas that we do not simulate in this work. [ fig. [fig:phyto_fsle] (caption fragment): averages (from the coast) in north and south subareas of benguela: a) _satellite1/4_, b) _roms1/4_ and c) _roms1/12_; right column plots the average over 3 offshore in the south region: d) _satellite1/4_, e) _roms1/4_ and f) _roms1/12_ ] in the following we analyse the bio-physical mechanisms behind this negative relationship, focusing on the setting using _roms1/12_, for which the previous results revealed the most robust negative correlation. similar results and conclusions can be obtained from the simulations using the two other velocity fields (not shown), attesting to the reliability of our approach (see correlation coefficients and slopes in fig. [fig:phyto_fsle]).
to understand why simulated chlorophyll-a concentrations differ in both subsystems, as is the case in satellite observations, we compute annual budgets of and biological rates (primary production, grazing and remineralization) in the case of the biological module alone (table [tab.budgets_onlybio]) and when coupled with a realistic flow (table [tab.budgets_coupled]). considering the biological module alone, we found that in the north subsystem is slightly higher than in the southern one (4 , see also table [tab.budgets_onlybio]), essentially due to the differential nutrient inputs. however, when considering the full coupled system (hydrodynamics and biology), the latitudinal difference in increases significantly (32 , see also table [tab.budgets_coupled]). this latitudinal difference is in agreement with the patterns of derived from remote-sensed data by . these results indicate that the flow is mainly responsible for the difference in pp. additional computations (see [sensitivity_homo]) also confirm the minor effect of the biological module ( ), as compared with the flow, on the observed latitudinal differences in .

table [tab.budgets_onlybio] (biological module alone):
                            south    north    north-south difference ( )
    nutrients ( )           821      1305     37
    phytoplankton ( )       57.0     57.7     1
    zooplankton ( )         113      115      2
    primary production ( )  35       36       4
    grazing ( )             33       35       4
    ( )                     28       29       3
    remineralization ( )    7.0      7.4      4

table [tab.budgets_coupled] (coupled with the flow):
                            south    north    north-south difference ( )
    nutrients ( )           849      1937     56
    phytoplankton ( )       147      198      26
    zooplankton ( )         231      347      33
    primary production ( )  63       98       32
    grazing ( )             56       87       35
    ( )                     81       91       10
    remineralization ( )    11       18       4

suggested that the offshore advection of plankton biomass enhanced by mesoscale structures might be responsible for the suppressive effect of stirring in upwelling areas. to test this mechanism, we next analyze the net horizontal transport of biological tracers by the flow. in particular, we have computed the zonal, f_x = u c, and meridional, f_y = v c, advective fluxes of each concentration c (the diffusive fluxes being much smaller), where u and v are the zonal and meridional components of the velocity field, respectively, and c denotes any of the n, p and z concentrations, all of them given at a specific point in the 2d space and time. f_x^n is then the zonal flux of nutrients (eastward positive), f_y^p the meridional flux (northward positive) of phytoplankton, and so on. annual averages of daily fluxes were computed, and then a zonal average as a function of latitude was calculated for the different coastal bands considered all along this paper. fig. [fig:flux_roms1_12] shows these calculations for the velocity field from _roms1/12_, while similar results were found for the other data sets (not shown). similar behavior is observed for the fluxes of n, p and z: zonal fluxes are almost always negative, so that westward transport dominates, and meridional fluxes are predominantly positive, so that they are directed to the north. comparing north and south in the 3 coastal band, it is observed that at high latitudes the zonal flux has larger negative values than at low latitudes, and the meridional flux presents larger positive values at higher latitudes. in other words, the northwestward transport of biological material is more intense in the southern than in the northern regions, suggesting a higher flushing rate.
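a schematic version of this flux diagnostic is given below. the arrays are random placeholders with the layout (time, lat, lon), the coastal band is taken as a fixed number of grid columns next to an eastern boundary, and the mesoscale share follows the smoothed-versus-original comparison discussed next; in practice u, v and the concentrations come from the coupled run.

```python
# advective flux diagnostic: f_x = u*c (eastward positive), f_y = v*c (northward
# positive), annual means, zonal average over a coastal band, and the share of
# the flux lost when the velocity field is smoothed. placeholder arrays only.
import numpy as np

rng = np.random.default_rng(2)
nt, nlat, nlon = 365, 80, 60
u = rng.normal(0, 0.1, (nt, nlat, nlon))          # m/s (placeholder)
v = rng.normal(0, 0.1, (nt, nlat, nlon))
c = rng.gamma(2.0, 0.5, (nt, nlat, nlon))         # tracer concentration (placeholder)

fx_mean, fy_mean = (u * c).mean(axis=0), (v * c).mean(axis=0)   # annual means

width = 12                                        # coastal band = last columns
fx_coastal = fx_mean[:, -width:].mean(axis=1)     # zonal average vs latitude
fy_coastal = fy_mean[:, -width:].mean(axis=1)

def mesoscale_share(f_full, f_smooth):
    """percentage of the annual-mean flux attributable to the scales removed by smoothing."""
    return 100.0 * np.abs(f_full - f_smooth).sum() / np.abs(f_full).sum()

# with u_s, v_s (and c_s) from a coarsened velocity field one would compare e.g.:
#   mesoscale_share((u * c).mean(0), (u_s * c_s).mean(0))
print("coastal-band zonal flux by latitude (first 5):", np.round(fx_coastal[:5], 4))
```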
it also suggests that unused nutrients from the southern benguela might be advected toward the northern areas , possibly promoting even further the local ecosystem . to estimate the transport of recently upwelled nutrients by lcss and other mesoscale structures ,apart from the mean flow , we compute the zonal and meridional fluxes of biological tracers using the smoothed _velocity field at the spatial resolution equivalent to 1/2 ( see [ ape : smooth ] for more details ) .the results , plotted in fig . [ fig : flux_roms1_12 ] ( red lines ) , show that in general the fluxes are less intense in the coarser than in the finer velocity , indicating that there is a contribution to net transport due to the submeso- and meso - scale activity . to estimate the quantitative contribution of mesoscale processes , we compute the difference of the fluxes of the different biological tracers = , , in the coarser velocity field with respect to the original velocity field .the values of range from 30 to 50 , indicating that the contribution of the mesocale to the net transport of the biological concentrations is important . moreover , the values of are larger in the south than in the north confirming that the mesoscale - induced transport is more intense in the south . showed that mesoscale processes reduce the efficiency of nutrients utilization by phytoplankton due to their influence on residence times .the longer residence times ( i.e. the less mesoscale activity ) seem to favor the accumulation of biomass . to test this effect in our simulations, we compute the residence times ( rt ) , defined as the the time interval that a particle remains in the coastal trip of 5 wide .the spatial distribution ( not shown ) of the annual average of rt indicates that the longest rt are found in the north region .in fact , zonal analysis reveals that rt has a tendency to increase as the latitude decreases , with a mean value in the north equals to 249 , and 146 in the south .this suggests that regions with weak fluxes are associated with long residence times and high growth rate of phytoplankton . on the other hand, high mesoscale activity is favoring the northwestward advection which decreases the residence times , associated to lower growth rate of plankton .concentrations for the _ roms1/12 _ case , averaged from the coast to 3 offshore ., title="fig:",scaledwidth=85.0% ] + this effect and the role of horizontal advection is confirmed by performing numerical simulations where no biological dynamics is considered .this amounts to solving eq .( [ eq.biolo1 ] ) with considering solely lateral transport , so that is a passive scalar with sources . in fig .[ fig.advecc_sensit ] we see the results ( for the case , similar for the other datasets ) .there is a very small tracer concentration in the southern domain , and the differences north - south are more pronounced than the case including the plankton dynamics ( see fig .[ fig : phyto_lat ] ) .this supports further the fact that the main actor on the spatial distribution of biomasses is the horizontal transport . 
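the residence-time diagnostic used above can be sketched as follows: particles are seeded in a coastal strip of fixed width and advected until they first cross its offshore edge, the residence time being that first-exit time (capped at the integration length). the velocity function, seeding box and all numbers are placeholders for the roms surface currents.

```python
# residence-time sketch: first-exit time from a coastal strip of fixed width.
import numpy as np

STRIP_WIDTH = 5.0            # degrees of longitude from the coast
COAST_LON = 15.0             # schematic eastern-boundary coastline (placeholder)

def vel(lon, lat, t):
    """placeholder flow: mean north-westward drift plus a stationary eddy-like field."""
    u = -0.05 + 0.04 * np.sin(2 * np.pi * lat / 4.0)     # deg/day
    v = 0.05 + 0.04 * np.cos(2 * np.pi * lon / 4.0)
    return u, v

def residence_times(n=2000, dt=0.5, tmax=400.0, seed=3):
    rng = np.random.default_rng(seed)
    lon = rng.uniform(COAST_LON - STRIP_WIDTH, COAST_LON, n)
    lat = rng.uniform(-34.0, -18.0, n)
    lat0 = lat.copy()                              # seeding latitude, for the north/south split
    rt = np.full(n, tmax)                          # capped at the integration length
    inside = np.ones(n, dtype=bool)
    t = 0.0
    while t < tmax and inside.any():
        u, v = vel(lon, lat, t)
        lon += u * dt                              # simple euler step
        lat += v * dt
        t += dt
        left = inside & (lon < COAST_LON - STRIP_WIDTH)    # exit through the offshore edge
        rt[left] = t
        inside &= ~left
    return lat0, rt

lat0, rt = residence_times()
north, south = rt[lat0 > -26.0], rt[lat0 <= -26.0]
print("mean residence time: north %.0f days, south %.0f days" % (north.mean(), south.mean()))
```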
) .b ) comparison of latitudinal profile of time averages of the passive scalar , as a function of latitude , for zonal average over different coastal bands.,scaledwidth=80.0% ]we have studied the biological dynamics in the benguela area by considering a simple biological npz model coupled with different velocity fields ( satellite and model ) .although in a simple framework , a reduction of phytoplankton concentrations in the coastal upwelling for increasing mesoscale activity has been successfully simulated .horizontal stirring was estimated by computing the fsles and was correlated negatively with chlorophyll stocks .similar correlations are found , though not presented in this manuscript , for the primary production .some recent observational and modelling studies proposed the nutrient leakage " as a mechanism to explain this negative correlation . herewe argue that lagrangian coherent structures , mainly mesoscale eddies and filaments , transport a significant fraction ( 30 - 50% ) of the recently upwelled nutrients nearshore toward the open ocean before being efficiently used by the pelagic food web .the fluxes of nutrients and organic matter , due to the mean flow and its mesoscale structures , reflect that transport is predominantly westward and northward .biomass is transported towards open ocean or to the northern area .in addition to the direct effect of transport , primary production is also negatively affected by high levels of turbulence , especially in the south benguela .although some studies dealt with 3d effects , we have shown that 2d advection processes seems to play an important role in this suppressive effect .our analysis suggests that the inhibiting effect of the mesoscale activity on the plankton occurs when the stirring reaches high levels , as in the south benguela .however , this effect is not dominant under certain levels of turbulence .it might indicate that planktonic ecosystems in oceanic regions with vigorous mesoscale dynamics can be , as a first approximation , easily modeled just by including a realistic flow field .the small residence times of waters in the productive area will smooth out all the other neglected biological factors in interaction .our findings confirm the unexpected role that mesoscale activity has on biogeochemical dynamics in the productive coastal upwelling .strong vertical velocities are known to be associated with these physical structures and they might have another direct effect by transporting downward rich nutrient waters below the euphotic zone .further studies are needed such as 3d realistic modelling that take into account the strong vertical dynamics in upwelling regions to test the complete mechanisms involved .i.h - c was supported by a fpi grant from mineco to visit legos .we acknowledge support from mineco and feder through projects fisicos ( fis2007 - 60327 ) and escola ( ctm2012 - 39025-c02 - 01 ) . v. g. thanks cnes funding through hiresubcolor project .we are also grateful to j. sudre for providing us velocity data sets both from roms and from the combined satellite product .ocean color data were produced by the seawifs project at ges and were obtained from daac .a number of numerical experiments were done to investigate the sensitivity of the coupled bio - physical model with respect to different variables . 
in this experimentwe used a velocity field from roms1/12 smoothed out towards a resolution 1/4 , and to be compared with and at their original spatial resolution .we coarse - grained the velocity field with a convolution kernel weighted with a local normalization factor , and keeping the original resolution for the data so that land points are equally well described as in the original data . the coarsening kernel with scale factor , ,is defined as : to avoid spurious energy dump at land points we have introduced a local normalization weight given by the convolution : , where is the sea mask . for points far from the land the weight is just the normalization of , and for points surrounded by land the weight takes the contribution from sea points only .thus , the velocity field coarsened by a scale factor , is obtained from the original velocity field as : in fig .[ fig.smooth_vel ] we compare two _ roms1/12 _ smoothed velocity fields at scales =3 and =6 ( with an equivalent spatial resolution 1/4 and 1/2 , respectively ) with the original velocity field from _roms1/12_. it is clear that the circulation pattern is smoothed as is increased .the fsle computations using these smoothed velocity fields are shown in fig [ fig.fslesmooth_vel ] . when the spatial resolution is reduced to the fsles and small - scale contributions decrease , but the main global features remain , as indicated in the study by .further coarsening to smoothes most of the structures except the most intense ones . :a ) at original resolution .b ) smoothed by a scale factor of =3 , obtaining and equivalent spatial resolution of 1/4 , c ) smoothed by a scale factor of s=6 , obtaining and equivalent spatial resolution of 1/2 .the snapshots correspond to day 437 of the simulation ., scaledwidth=80.0% ] at the same fsle grid resolution of 1/12 , and using the velocity fields at different resolutions : a ) at original resolution 1/12 .b ) smoothed velocity field at equivalent 1/4 and c ) smoothed velocity field at equivalent 1/2.,scaledwidth=99.0% ] the latitudinal variations of the zonal averages performed on the time averages of the fsle maps plotted in fig .[ fig.fslesmooth_vel ] are compared in fig.[fig.comparison_fslesmooth ] .the mean fsles values strongly diminish when the velocity resolution is sufficiently smoothed out .this is due to the progressive elimination of mesoscale structures that are the main contributors to stirring processes .also the latitudinal variability of stirring diminishes for the very smoothed velocity field ( blue line in fig .[ fig.comparison_fslesmooth ] ) .thus , latitudinal differences of stirring in the benguela system are likely to be related to mesoscale structures ( eddies , filaments , fronts , etc . ) contained in the velocity fields .we have also computed the phytoplankton using these smoothed velocity fields .some instantaneous spatial distributions can be seen in fig [ fig.phyto_smooth_vel ] .the filaments of phytoplankton disappear in the very smoothed velocity field ( 1/2 ) . the spatial distribution of the annual average of phytoplankton concentrations for the different velocity field shows , however , quite similar patterns ( not shown ) . at original resolution 1/12 , b ) smoothed velocity field at equivalent 1/4 , c ) smoothed velocity field at equivalent 1/2 , and d ) at original resolution 1/4 .the units of the colorbar are . , scaledwidth=99.0% ] [ [ sensitivity_homo ] ] sensitivity with respect to different parameterization of the coastal upwelling of nutrients . 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in section [ bio ] we mimicked coastal upwelling of nutrient via a source term in the nutrients equation which is determined by the function , and was considered spatiotemporally variable . herewe explore the plankton dynamics using spatially and temporally homogeneous upwelling along the coast . is fixed to an average value along the coast at any time . in fig .[ fig : npz_homocells ] we show the annual average of for the ( top panel ) , and the comparisons with the inhomogeneous case for the zonal mean ( bottom panel ) .therefore , this test suggests that the way we simulate vertical mixing along the coast has not a large effect on the 2d biological dynamics , which will be mainly determined by the interplay with horizontal advection .condie , s. , dunn , j. r. , 2006 .seasonal characteristics of the surface mixed layer in the australasian region : implications for primary production regimes and biogeography marine and freshwater research .marine and freshwater research 57 , 122 .demarcq , h. , barlow , r. , shillington , f. , 2003 .climatology and variability of sea surface temperature and surface chlorophyll in the benguela and agulhas ecosystems as observed by satellite .african journal of marine science 25 , 363372 .doney , s.c . , d. g. r. n. , 1996 .a new coupled , one - dimensional biological physical model for the upper oceanapplication to the jgofs bermuda atlantic time - series study ( bats ) site .deep - sea res .ii 43 , 591624 .dovidio , f. , isern - fontanet , j. , lpez , c. , hernndez - garca , e. , garca - ladona , e. , 2009 .comparison between eulerian diagnostics and finite - size lyapunov exponents computed from altimetry in the algerian basin .deep - sea res .i 56 , 1531 .gruber , n. , lachkar , z. , frenzel , h. , marchesiello , p. , mnnich , m. , mcwilliams , j. , nagai , t. , plattner , g. , 2011 .eddy - induced reduction of biological production in eastern boundary upwelling systems .nature geoscience 9 , 787792 .gutknecht , e. , dadou , i. , cambon , b. l. v. g. , sudre , j. , garon , v. , machu , e. , rixen , t. , kock , a. , flohr , a. , paulmier , a. , lavik , g. , 2013 .coupled physical / biogeochemical modeling including 02-dependent processes in the eastern boundary upwelling systems : application in the benguela .biogeosciences 10 , 35593591 .hutchings , l. , van der lingen , c. , shannon l.j .crawford , r. , verheye , h. , bartholomae , c. , van der plas , a. , louw , d. , kreiner , a. , ostrowski , m. , fidel , q. , barlow , r. , lamont , t. , coetzee , j. , shillington , f. , veitch , j. , currie , j. , monteiro , p. , 2009 .the benguela current : an ecosystem of four components .progress in oceanography 83 , 1532 .kon , v. , machu , e. , penven , p. , andersen , v. , garon , v. , fron , p. , demarcq , h. , 2005 . modeling the primary and secundary productions of the southern benguela upwelling system : a comparative study through two biogeochemical models .global biogeochem .cycles 19 , gb4021 .lett , c. , veitch , j. , van der lingen , c. , hutchings , l. , 2007 .assessment of an environmental barrier to transport of ichthyoplankton from the southern to the northern benguela ecosystems .marine ecology progress series 347 , 247259 .mackas , d. , strub , p. , thomas , c. , montecino . , v. , 2006 .eastern ocean boundaries pan - regional view . in : robinson ,a. , brink , k. ( eds . 
) , the sea , vol 14a , the global coastal ocean : interdisciplinary regional studies and syntheses : pan - regional syntheses and the coast of north and south america and asia .harvard univ . press ,chap . 2 , cambridge , mass .monteiro , p. , 2009 .carbon fluxes in the benguela upwelling system .in : liu , k. , atkinson , l. , quiones , r. , talaue - mcmanus , l. ( eds . ) , carbon and nutrient fluxes in continental margins : a global synthesis , chap .2 . springer , berlin .oschlies , a. , garon , v. , 1999 .an eddy - permitting coupled physical - biological model of the north atlantic , sensitivity to advection numerics and mixed layer physics .global biocheochem .cycles 13 , 135160 .rossi , v. , lpez , c. , hernndez - garca , e. , sudre , j. , garon , v. , morel , y. , 2009 . surface mixing and biological activity in the four eastern boundary upwellings systems. nonlinear process .16 , 557568 .rossi , v. , lpez , c. , sudre , j. , hernndez - garca , e. , garon , v. , 2008 . comparative study of mixing and biological activity of the benguela and canary upwelling systems .35 , l11602 ., e. , rossi , v. , sudre , j. , weimerskirch , h. , lpez , c. , hernndez - garca , e. , marsac , f. , garon , v. , 2009 .top marine predators track lagrangian coherent structures .proceedings of the national academy of sciencies of the usa 106 , 82458250 .
|
recent studies , both based on remote sensed data and coupled models , showed a reduction of biological productivity due to vigorous horizontal stirring in upwelling areas . in order to better understand this phenomenon , we consider a system of oceanic flow from the benguela area coupled with a simple biogeochemical model of nutrient - phyto - zooplankton ( npz ) type . for the flow three different surface velocity fields are considered : one derived from satellite altimetry data , and the other two from a regional numerical model at two different spatial resolutions . we compute horizontal particle dispersion in terms of lyapunov exponents , and analyzed their correlations with phytoplankton concentrations . our modelling approach confirms that in the south benguela there is a reduction of biological activity when stirring is increased . two - dimensional offshore advection and latitudinal difference in primary production , also mediated by the flow , seem to be the dominant processes involved . we estimate that mesoscale processes are responsible for 30 to 50% of the offshore fluxes of biological tracers . in the northern area , other factors not taken into account in our simulation are influencing the ecosystem . we suggest explanations for these results in the context of studies performed in other eastern boundary upwelling areas .
|
in multiple - access communication , the evolution of user activity may play an important role . from one time instant to the next , some new users may become active and some existing users inactive , while parameters of the persisting users , such as power or location , may vary .now , most of the available multiuser detection ( mud ) theory is based on the assumption that the number of active users is constant , known at the receiver , and equal to the maximum number of users entitled to access the system .if this assumption does not hold , the receiver may exhibit a serious performance loss . in ,the more realistic scenario in which the number of active users is unknown a priori , and varies with time with known statistics , is the basis of a new approach to detector design .this work presents a large - system analysis of this new type of detectors for code division multiple access ( cdma ) .our main goal is to determine the performance loss caused by the need for estimating the identities of active users , which are not known a priori . in this paperwe restrict our analysis to a worst - case scenario , where detection can not improve the performance from past experience due to a degeneration of the activity model ( for instance , assuming a markovian evolution of the number of active users ) into an independent process .the same analysis applies to systems where the input symbols accounting for data and activity are interleaved before detection . to prevent a loss of optimality, we assume that identities and data are estimated jointly , rather than in two separate steps .our interest is in randomly spread cdma system in terms of multiuser efficiency , whose natural dimensions ( number of users , and spreading gain ) tend to infinity , while their ratio ( the `` system load '' ) is kept fixed . in particular , we consider the optimal maximum a posteriori ( map ) multiuser detector , and use tools recently adopted from statistical physics .of special relevance in our analysis is the decoupling principle introduced in for randomly spread cdma .the general results derived from asymptotic analysis are validated by simulations run for a limited number of users .the results of this paper focus on the degradation of multiuser efficiency when the uncertainty on the activity of the users grows and the snr is sufficiently large .we go one step beyond the application of the large - system decoupling principle and provide a new high - snr analysis on the space of fixed - point solutions showing explicitly its interplay with the system load for a non - uniform ternary and parameter - dependent input distribution . by expanding the minimum mean square error for large snr , we obtain tight closed - form bounds that describe the large cdma system as a function of the snr , the activity factor and the system load . in addition , some trade - off results between these quantities are derived . of special novelty hereis the study of the impact of the activity factor in the cdma performance measures ( minimum mean - square error , and multiuser efficiency ) . in particular , we provide necessary and sufficient conditions on the existence of single or multiple fixed - point solutions as a function of the system load and snr .finally , we analytically identify the region of meaningful " multiuser efficiency solutions with their associated maximum system loads , and derive consequences for engineering problems of practical interest .this paper is organized as follows . 
section [ section : model ] introduces the system model and the main notations used throughout .section [ section : main ] derives the large - system central fixed - point equation , and analytical bounds to the mmse . based on these results , section[ section : main2 ] discusses the interplay of maximum system load and multiuser efficiency .finally , section [ section : conclusions ] draws some concluding remarks .we consider a cdma system with an unknown number of users , and examine the optimum user - and - data detector .in particular , we study randomly spread direct - sequence ( ds ) cdma with a maximum of active users : where is the received signal at time , is the length of the spreading sequences , is the matrix of the sequences , is the diagonal matrix of the users signal amplitudes , is the users data vector , and is an additive white gaussian noise vector with i.i.d . entries .we define the system s activity rate as , .active users employ binary phase - shift keying ( bpsk ) with equal probabilities .this scheme is equivalent to one where each user transmits a ternary constellation with probabilities and .we define the maximum system load as . in a static channel model ,the detector operation remains invariant along a data frame , indexed by , but we often omit this time index for the sake of simplicity . assuming that the receiver knows and , the a posteriori probability ( app ) of the transmitted data has the form hence , the maximum a posteriori ( map ) joint activity - and - data multiuser detector solves similarly , optimum detection of single - user data and activity is obtained by marginalizing over the undesired users as follows : in a communication scheme such as the one modeled by , the goal of the multiuser detector is to infer the information - bearing symbols given the received signal and the knowledge about the channel state .this leads naturally to the choice of the partition function .the corresponding free energy , normalized by the number of users becomes to calculate this expression we make the self - averaging assumption , which states that the randomness of vanishes as .this is tantamount to saying that the free energy per user converges in probability to its expected value over the distribution of the random variables and , denoted by evaluation of is made possible by the _ replica method _ , which consists of introducing independent replicas of the input variables , with corresponding density , and computing as follows : to compute , one of the cornerstones in large deviation theorem , the varadhan s theorem , is invoked to transform the calculation of the limiting free energy into a simplified optimization problem , whose solution is assumed to exhibit symmetry among its replicas .more specifically , in the case of a map individually optimum detector , the optimization yields a fixed - point equation , whose unknown is a single operational macroscopic parameter , which is claimed to be the multiuser efficiency of an equivalent gaussian channel . 
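as a concrete illustration of the model and of the joint activity - and - data detectors defined above ( before continuing with the replica analysis ) , the following matlab sketch builds a small randomly spread system with ternary inputs and computes the jointly optimum and the individually optimum decisions by brute - force enumeration ; all sizes , amplitudes and noise levels are illustrative choices , not values taken from the paper .
....
K = 4; N = 8; alpha = 0.5; sigma = 0.3;        % toy system: K users, spreading gain N
S = sign(randn(N,K))/sqrt(N);                  % random binary spreading sequences
b = (rand(K,1) < alpha) .* sign(randn(K,1));   % ternary symbols: 0 = inactive, +-1 = data
y = S*b + sigma*randn(N,1);                    % received vector (unit amplitudes assumed)

% enumerate all 3^K candidates x in {-1,0,+1}^K and compute the posterior probabilities
vals   = [-1 0 1];
X      = vals(fliplr(dec2base(0:3^K-1,3) - '0') + 1)';          % K x 3^K candidate matrix
prior  = prod((X==0)*(1-alpha) + (X~=0)*(alpha/2), 1);
loglik = -sum((repmat(y,1,3^K) - S*X).^2, 1)/(2*sigma^2);
post   = exp(loglik - max(loglik)) .* prior;   post = post/sum(post);

[pmax, imax] = max(post);   b_jo = X(:,imax);  % jointly optimum (MAP) activity-and-data decision
p1 = zeros(1,3);                               % individually optimum decision for user 1:
for v = 1:3, p1(v) = sum(post(X(1,:) == vals(v))); end          % marginalize the other users
[p1max, v1] = max(p1);      b1_io = vals(v1);
....
with more than a few tens of users the enumeration becomes infeasible , which is precisely why the large - system analysis works with the decoupled single - user channel described next .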
due to the structure of the optimization problem, the multiuser efficiency must minimize the free energy .the above is tantamount to formulating the _ decoupling principle _ : [ thm0 ] given a multiuser channel , the distribution of the output of the individually optimum ( io ) detector , conditioned on being transmitted with amplitude , converges to the distribution of the posterior mean estimate of the single - user gaussian channel where , and , the multiuser efficiency , is the solution of the following fixed - point equation : .\ ] ] if admits more than one solution , we must choose the one minimizing the free energy function -\frac{1}{2}\ln\frac{2\pi e}{\eta}+\frac{1}{2\beta}\left({\eta}\ln\frac{2\pi}{\eta}\right).\ ] ] in , , is the transition probability of the large - system equivalent single - user gaussian channel described by , and \ ] ] denotes the minimum mean - square error in estimating in gaussian noise with amplitude equal to , where ] , we can derive lower and upper bounds illustrating analytically the transition between the classical assumption ( ) and the cases where the activity is also detected ( ) for large snr .our calculations bring about a new analytical framework to deal with large - system analysis , as we will see in the next section .our bounds are consistent with lemma [ lem1 ] and the lower bound includes the case .the general result is stated as follows .[ thm2 ] the mmse of joint user identification and data detection in a large system with an unknown number of users has the following behavior , valid for sufficiently large values of the product : see appendix [ app : proof_mmse_limits ] .bounds in describe explicitly , in the high - snr region , the relationship between the mmse , the users activity rate , and the effective snr ( ) . in fig .[ fig2 ] these bounds are compared to the true mmse values as a function of for fixed .it can be seen that the uncertainty about the users activity modifies substantially the exponential decay of the mmse for high snr .in fact , a value of different from causes the mmse to decay by , rather than by , which would be the case when all users are active .furthermore , we can observe that , for sufficiently large effective snr , the behavior vs. of the optimal detector is symmetric with respect to , which corresponds to the maximum uncertainty of the activity rate .figure [ fig3 ] shows that for large values of the product , the mmse essentially depends on the minimum distance between the inactivity symbol and the data symbols , and thus users identification prevails over data detection .summarizing , the dependence of the mmse must be symmetrical with respect to , since it reflects the impact of prior knowledge about the user s activity into the estimation .[ ! htbp ] [ ! htbp ]recall the definition of maximum system load , where is the maximum number of users accessing the multiuser channel . when the number of active users is unknown , and there is a priori knowledge of the activity rate , the actual system load is . 
in this section ,we focus on and study some of its properties .notice that , given an activity rate , results for the actual system load follow trivially .we characterize the behavior of the maximum system load subject to quality - of - service constraints .this helps shedding light into the nature of the solutions of the fixed - point equation .in particular , there might be cases where has multiple solutions .these solutions correspond to the solutions appearing in any simple mathematical model of magnetism based on the evaluation of the free energy with the fixed - point method .they represent what in the statistical physics parlance is called _ phase coexistence _( for example , this occurs in ice or liquid phase of water at ) . in particular , at low temperatures , the magnetic system might have three solutions .solutions and are stable : one of them is globally stable ( it actually minimizes the free energy ) , whereas the other is metastable , and a local minimum .solution is always unstable , since it is is a local maximum .the `` true '' solution is therefore given by and , for which the free energy is a minimum .the same consideration applies also to our multiuser detection problem where multiuser efficiencies for the io detector might vary significantly depending on the value of the system load and snr .more specifically , for sufficiently large snr , stable solutions may switch between a region that approaches the single - user performance ( ) and a region approaching the worst performance ( ) , for .following previous literature , we shall call the former solutions _ good _ and the latter _bad_. when the solution is unique , due to low or high system load , the multiuser efficiency is a globally stable solution that lies in either the good or the bad solution region .then , for given system parameters , the set of _ operational _ ( or globally stable ) solutions is formed by solutions that are part of these sets and minimize the free energy . the existence of good and bad solutions are critical in our problem . from a computational perspective , we are particularly interested in single solutions , either bad or good , that surely avoid metastability and instability .these solutions belong to a specific subregion within the bad and good regions , and appear for low and high snr , respectively . from an information - theoretic perspective , it might seem that the true solutions should capture all our attention .however , it has been shown that metastable solutions appear in suboptimal belief - propagation - based multiuser detectors , where the system is easily attracted into the bad solutions region ( corresponding to low multiuser efficiency ) , due to initial configurations that are far from the true solution .moreover , the region of good solutions is of interest in the high - snr analysis , because , for a given system load , it can be observed that the multiuser efficiency tends to , consistently with previous theoretical results . in what follows, we provide an analysis of the boundaries of the stable solution regions , as well as their computationally feasible subregions with practical interest in the low and high snr regimes . [ !htbp ] and fixed , and db . 
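the interplay between the fixed - point equation , the mmse of the ternary prior and the system load can be explored numerically . the sketch below assumes a fixed - point equation of the standard guo - verdu form 1/eta = 1 + beta*snr*mmse(eta*snr) , which may differ from the normalization used in this paper , and rearranges it as a load - versus - efficiency curve ; all numerical values are illustrative .
....
function [eta, etag, betag] = mue_fixed_point(alpha, snr, beta)
% multiuser efficiency from the (assumed) scalar fixed point 1/eta = 1 + beta*snr*mmse(eta*snr)
% for the ternary prior P(0) = 1-alpha, P(+1) = P(-1) = alpha/2; equal-power users assumed.
% plain fixed-point iteration; in the coexistence region it finds only one of the solutions,
% not necessarily the one minimizing the free energy.
eta = 1;
for it = 1:200
    eta = 1/(1 + beta*snr*ternary_mmse(eta*snr, alpha));
end
% system load as a function of the efficiency: beta(eta) = (1/eta - 1)/(snr*mmse(eta*snr));
% the local minimum and maximum of betag over etag approximate the transition and critical loads
etag  = linspace(1e-3, 0.999, 400);
betag = arrayfun(@(e) (1/e - 1)/(snr*ternary_mmse(e*snr, alpha)), etag);

function m = ternary_mmse(rho, alpha)
% mmse of x in {-1,0,+1}, P(0) = 1-alpha, observed through y = sqrt(rho)*x + n, n ~ N(0,1)
xv  = [-1 0 1];   px = [alpha/2, 1-alpha, alpha/2];
y   = linspace(-8 - sqrt(rho), 8 + sqrt(rho), 4001);
lik = exp(-(ones(3,1)*y - sqrt(rho)*xv'*ones(size(y))).^2/2)/sqrt(2*pi);
py  = px*lik;                                    % output density p(y)
Ex  = ((px.*xv)*lik) ./ max(py, realmin);        % conditional mean E[x | y]
m   = px*(xv.^2)' - trapz(y, (Ex.^2).*py);       % E[x^2] - E[ (E[x|y])^2 ]
....
plotting betag against etag for increasing snr reproduces qualitatively the picture described next , with a single solution at low and high load and three coexisting solutions in between .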
] a quantitative illustration of the above considerations is provided by plotting the left- and right - hand sides of to obtain fixed points for constant values of amplitude and activity rate , and as a function of the system load .the solutions of are found at the intersection of the curve corresponding to the right - hand side with the line .[ fig4 ] plots different solutions of the right - hand side of for increasing system load , and db : notice first that the structure of the fixed - point equation in general does not allow the solution , and for finite and , is not a solution .in fact , the latter is an asymptotic solution for large snr and certain system loads , as the mmse decays exponentially to . from fig .[ fig4 ] , one can observe the presence of phase transitions and the coexistence of multiple solutions .in particular , we observe that for the good solution is computationally feasible . on the other hand , for and the system has three solutions , where the true solution belongs to either the bad or the good solution region .when the system load achieves , the curve only intersects the identity curve near , and the operational solution is unique and lies in a subregion of bad solutions . even in the case of good solutions ,the multiuser efficiency can be greatly degraded by the joint effect of the activity rate and the maximum system load . in order to analyze the fixed - point equation from a different perspective and shed light into the interplay between these parameters, we express the maximum system load as the following function , derived from : since mmse is a continuous function of , then is also a continuous function in any compact set over the domain ] , whereas the good - solution region is ] , satisfy where , and are obtained from the bounds .see appendix [ app : beta_proof ] .the above result provides the general boundaries of the space of solutions of our problem .it is important to note that and are very good approximations for high snr of the positions of the minimum and maximum observed in fig .[ fig5b ] , which determine the transition and the critical system loads . as a consequence ,remark that theorem [ thm4 ] analytically tells us the range of s for which there are either single or multiple solutions based on the up - to - a - constant approximation of by and .similarly , and are tight bounds of the boundaries of the single - solution regions as and are of .note also that the activity rate affects the boundaries in the same symmetrical manner as it does the mmse ( i.e. , the worst case here also corresponds to ) but has no impact on the operational region , that is only reduced in size by increasing the snr .in particular , these regions are characterized , in the limit of high snr , as follows : in the limit of high snr , , , and consequently , and .the above corollary results from note that , given a system load with , for sufficiently large snr the unique true ( large - system ) solution is , which corroborates the main result in .moreover , the description of the feasible good solutions by analytical means allows the computation of a sufficient condition on the system load to guarantee a given multiuser efficiency in practical implementations .more specifically , we use the aforementioned lower bound on to state that any system load below guarantees that the given multiuser efficicency is achieved . 
the result is stated as follows : [ cor1 ] the maximum system load , , for a given activity rate and multiuser efficiency requirement , , where , that lies in , is lower - bounded in the high - snr region by in fig .[ fig6 ] we show the numerical values of the transition and the critical system load as a function of the snr in the space .we also use the asymptotic expansion to derive upper and lower bounds , respectively .the plotted curves are the spinodal lines , which mark the boundary between the regions with and without solution coexistence .the ( lower branch ) separates the region where the bad solution disappears , whereas ( upper branch ) contains the bifurcation points at which the operational solution disappears .the intersection point between both branches corresponds to the snr threshold , which provides the necessary condition for solution coexistence .[ ! htbp ] upper and lower bounds on the numerical spinodal lines ( thick line ) for . ]we now apply the same reasoning for the `` classical '' approach to multiuser detection , corresponding to activity rate . in this case , using the approximation in , the system load function can be lower - bounded by hence , we can derive the following spinodal lines given a necessary condition for the phase coexistence is that moreover , for high snr , the condition is met and the transition system load is upper - bounded by and the critical system load is upper - bounded by where and are given by and . hence , the bad solution region is given by ] .the proof is analogous to that of theorem [ thm4 ] .the same consequence for the asymptotic operational region holds here . in the limit of high snr , , and .this corollary results from [ !htbp ] comparison of upper bounds on the spinodal lines for ( left ) and ( right ) . ] in fig .[ fig7 ] , one can observe a db - difference between the spinodal lines corresponding to and to .this is due to the minimum distance of the underlying constellations , which causes the mmse to have different exponential decays .this can be interpreted by saying that the addition of activity detection to data detection is reflected by a db increase of the snr needed to achieve the same system load performance . moreover , with , the transition system load is lower than the case where all users are active , and , therefore , computationally good solutions correspond to lower values of the maximum system load .a natural application of the above results to practical designs appears when the quality - of - service requirements of the system are specified in terms of uncoded error probability .such an application can provide some extra insight into the plausible values of with joint activity and data detection for efficient design of large cdma systems .once a multiuser - efficiency requirement is assigned , the corresponding probability of error follows naturally .note first that , in order to detect the activity as well as the transmitted data , our model deals with a ternary constellation .when any of these symbols is transmitted by each user with constant snr through a bank of large - system equivalent white gaussian noise channels with variance , the probability of error over depends on the prior probabilities as well as the euclidean distance between the symbols .the error probability implied by the replica analysis is where is the gaussian tail function , and .the relationship between and for our particular case can be used to reformulate the bounds on the function in terms of error probability . 
the maximum system load , , for a given error probability , , and activity rate is bounded for high snr by : where and is the pre - image of .the result is obtained by noticing that the multiuser efficiency requirement extracted from must lie on the subregion ] if and only if .these points are and lie in the domain ] that takes positive values . since tends to 0 as approaches 1 , and tends to infinity as approaches 0 , it can be concluded that the range for which has only one pre - image is .hence , there are single pre - images in the ranges and ] whereas the largest lies on ] . by bounding the mmse using and replacing , we obtain the desired results .e. biglieri , e. grossi , m. lops , and a. tauste campo , `` large - system analysis of a dynamic cdma system under a markovian input process , '' in _ proc .ieee int .inform . theory _ , toronto ,canada , july 2008 . m. l. honig and h. v. poor, `` adaptive interference suppression in wireless communication systems , '' in _ wireless communications : signal processing perspectives_. 1998 , h.v .poor and g.w .wornell , eds englewood cliffs , nj : prentice hall .a. tauste campo and e. biglieri , `` large - system analysis of static multiuser detection with an unknown number of users , '' in _ proc . of ieee int .workshop on comp .advances in multi - sensor adapt . process .( camsap07 ) _ , saint thomas , us , dec .2007 .a. montanari and d. tse , `` analysis of belief propagation for non - linear problems : the example of cdma ( or : how to prove tanaka s formula ) , '' in _ proc .ieee inform .theory workshop _, punta del este , uruguay , mar .2006 .a. tauste campo and a. guilln i fbregas , `` large system analysis of iterative multiuser joint decoding with an uncertain number of users , '' in _ proc .symp . on inform ., austin , texas , june 2010 .
|
we analyze multiuser detection under the assumption that the number of users accessing the channel is unknown by the receiver . in this environment , users activity must be estimated along with any other parameters such as data , power , and location . our main goal is to determine the performance loss caused by the need for estimating the identities of active users , which are not known a priori . to prevent a loss of optimality , we assume that identities and data are estimated jointly , rather than in two separate steps . we examine the performance of multiuser detectors when the number of potential users is large . statistical - physics methodologies are used to determine the macroscopic performance of the detector in terms of its multiuser efficiency . special attention is paid to the fixed - point equation whose solution yields the multiuser efficiency of the optimal ( maximum a posteriori ) detector in the large signal - to - noise ratio regime . our analysis yields closed - form approximate bounds to the minimum mean - squared error in this regime . these illustrate the set of solutions of the fixed - point equation , and their relationship with the maximum system load . next , we study the maximum load that the detector can support for a given quality of service ( specified by error probability ) .
|
algebraic structures are present in many mathematical problems , so they arise naturally in a large number of applications , like medical imaging , remote sensing , geophysical prospection , image deblurring , etc .moreover , in many real - world computations , the full exploitation of the structure of the problem is essential to be able to manage large dimensions and real time processing . in the last 25 years , a great effort has been made to study the properties of algebraic structures and to develop algorithms capable of taking advantage of these structures in the solution of various matrix problems ( solution of linear systems , eigenvalues computation , etc . ) , as well as in matrix arithmetics , for what concerns memory storage , speed of computation and stability . in spite of the many important advances in this field , there is not much software publicly available for structured matrices computation . on the contrary , most of the _ fast _ algorithms which have been proposed , andwhose properties have been studied theoretically , exist only under the form of published papers or , in some occasion , unreleased ( and often unoptimized ) research code .this fact often force researchers to re - implement from scratch algorithms and _ blas - like _ routines , even for the most classical classes of structured matrices .anyway , this is possible only for those with enough knowledge in mathematics and computer science , and totally rules out a large amount of potential users of structured algorithms .matlab is a computational environment which is extremely diffused among both applied mathematicians and engineers , in academic as well as in industrial research .it makes matrix computation sufficiently easy and immediate , and provides the user with powerful scientific visualization tools . at the moment , besides the standard unstructured ( or _ full _ ) matrices , the only matrix structure natively supported in matlab is sparsity . in _ sparse matrix storage _ only nonzero elements are kept in memory , together with their position inside the matrix .moreover , all operations between sparse matrices are redefined to reduce execution time and memory consumption , while mixed computations return full or sparse arrays depending on the type of operations involved .our idea is to extend matlab with a computational framework devoted to structured matrices , with the aim of making it easy to use and _ matlab - like _ , transparent for the user , highly optimized for what concerns storage and complexity , and easily extensible .we tried to follow closely the way matlab treats sparse matrices , and for doing this , we used the matlab object - oriented classes .starting from version 5 , in fact , it is possible to add new data types in matlab ( _ classes _ ) , to define _ methods _ for _ classes _ , i.e. , functions to create and manipulate variables belonging to the new data type , and to _ overload _ ( redefine ) the arithmetic operators for each new class . at the moment , our toolbox supports two very common classes of structured matrices , namely circulant and toeplitz matrices . in writing the software our aim was not only to furnish storage support , full arithmetics and some additional methods for these structured matrices , but also to create a framework easily extendible , in terms of functions and new data types , and to specify a pattern for future developments of the package .so , a great effort was spent in the software engineering of the toolbox . 
among the available matlab software for structured matrices computation , we mention the following internet resources .various fast and superfast algorithms for structured matrices have been developed by the mase - team ( matrices having structure ) , coordinated by marc van barel at the katholieke universiteit of leuven . a toolbox for structured matrix decompositions has been included in the slicot package , developed under the niconet ( numerics in control network ) european project .the restoretools is an object oriented matlab package for image restoration which has been developed by james nagy and his group at emory university .the moore tools , an object oriented toolbox for the solution of discrete ill - posed problems derived from , provides some support for certain classes of structured matrices , mainly kronecker products , circulant and block - circulant matrices .matlab implementations of various algorithms are also available in the personal home pages of many researchers working in this field .many subroutines written in general purpose languages , like c or fortran , are also available .it is worth mentioning that there are plans to add support for structured matrices in lapack and scalapack ; see ( * ? ? ?* section 4 ) .the plan of this paper is the following . in section[ sec : toolbox ] , we describe in detail our toolbox , called ` smt ` ( structured matrix toolbox ) , its capabilities , the new data types added to matlab and the functions for their treatment .section [ sec : implementation ] is devoted to some technical implementation issues , while in section [ sec : conclusion ] we describe possible future lines of development of this software package .once installed ( see section [ sec : implementation ] ) , the toolbox resides in the directory tree sketched in fig .[ fig : dirtree ] .the main directory contains a set of general purpose functions , described in detail in section [ sec : general ] , and the following four subdirectories : * ` ` and ` ` , which contain the functions to create and manipulate the objects of class ` smcirc ` and ` smtoep ` , i.e. , circulant and toeplitz matrices ; * ` private ` , whose functions , discussed in section [ sec : general ] , are accessible by the user only through the commands placed at the upper directory level ; * ` demo ` , which hosts an interactive tutorial on the basic use of the toolbox .let us briefly explain how matlab deals with new data types .when the user creates an object of class , say , ` obj ` , then the interpreter looks for the function with the same name in a directory called ` ` , located in the search path .similarly , when an expression involves a variable of class ` obj ` or a function is applied to it , the same directory is searched for an appropriate operator or function defined for objects of this class . writing this software , we took great care in checking the validity of the input parameters , in particular for what concerns dimensions and data types , and in using an appropriate style for warnings and errors , in order to guarantee the _ matlab - like _ behaviour of the toolbox .as this requires a long chain of conditional tests , the resulting functions are often more complicated than expected ( see for example the file ` mtimes.m ` in the directory ` ` ) , but this does not seem to have a significant impact on execution time . 
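as an aside for readers unfamiliar with this object model , the construction of such a class reduces to a constructor function that wraps a struct ; a minimal sketch in the pre - ` classdef ` style is given below ( it is not the actual toolbox code , which performs many more checks and supports several calling syntaxes ) .
....
function c = smcirc_sketch(col)
% minimal constructor sketch in the struct + class style of matlab's old object model;
% in a real class the file would be named @smcirc_sketch/smcirc_sketch.m
if nargin < 1, col = []; end
s.type = 'circulant';
s.c    = col(:);              % first column defines the whole matrix
s.dim  = length(s.c);
s.ev   = fft(s.c);            % eigenvalues, kept up to date by the overloaded methods
c = class(s, 'smcirc_sketch');
....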
full documentation for every function of the toolbox is accessible via the matlab ` help ` command and the code itself is extensively commented .manual pages can be obtained by the usual matlab means , i.e. , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xxxxxxxxxxxxxxxxxxxx = ` help ` for the functions in the main directory , + ` help ` / where is either ` ` or ` ` , + ` help private`/ for the functions in the ` private ` subdirectory ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ notice that may be ` contents ` ( except in conjunction with ` private ` ) , in which case a description of the entire directory content is displayed .for example , the command _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... help / contents .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ displays the list of all the functions , operators and methods for ` smtoep ` objects ( i.e. , toeplitz matrices ) , while _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... help / mtimes .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ gives information about the matrix product operator for toeplitz matrices .we remark that the toolbox supports both real and complex structured matrices , since complex numbers are natively implemented in matlab .it is also possible to manage sparse ` smcirc ` and ` smtoep ` objects .a circulant matrix of order is a matrix whose elements satisfy the relations e.g. , for , the main property of a circulant matrix is that it is diagonalized by the normalized fourier matrix , defined by where is any primitive complex -th root of unity ( i.e. , for , and ) .we let and .this allows us to factorize any circulant matrix in the form where and is the discrete fourier transform of the first column of .given the definition of discrete fourier transform adopted in the command of matlab , we have being the first column of . in `smt ` , a variable ` c ` of class ` smcirc ` is a record composed by 4 fields .the field ` c.type ` is set to the string ` _ _ circulant _ _ ' , and is a reminder , present in all ` smt ` data types , denoting the kind of the structured matrix .the first column of the circulant matrix gives complete information about it , and is stored in ` c.c ` , while ` c.dim ` is the dimension .the field ` c.ev ` contains the vector of the eigenvalues of ; it is computed when the matrix is created and updated every time it is modified .this means that the initial allocation of a circulant matrix , as well as some operations involving it , takes floating point operations ( _ flops _ ) . for example , an object of class ` smcirc ` can be created specifying its first column , with the command _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... c = smcirc([1;2;3;4 ] ) .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and it is visualized either as a matrix _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... c = 1 4 3 2 2 1 4 3 3 2 1 4 4 3 2 1 .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ or showing its record structure _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... 
c = smcirc object with fields : type : ' circulant ' c : [ 4x1 double ] dim : 4 ev : [ 4x1 double ] .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ depending on how the configuration parameter ` display ` is set ; see the function ` smtconfig ` in section [ sec : general ] .the structure of the object can also be inspected with the command ` get(c ) ` , independently on the configuration of the package . if the column vector passed to `smcirc ` is of class _ sparse _, this memory storage class will be preserved in ` c.c ` , but not in ` c.ev ` .all operations between circulant matrices have been implemented , when possible , by _ fast _ algorithms , meaning that they require a complexity smaller than the corresponding unstructured matrix operations .for example , when the user computes the sum of two circulant matrices with the command ` a = c+d ` , the function ` plus.m ` is automatically called , in order to sum the ` .c ` fields , and to update the ` .ev ` field of the resulting object , as follows to multiply a circulant matrix times a vector , we can exploit the factorization to obtain where denotes the schur product of two vectors .this requires only 2 ` fft ` s , since the vector is stored in the ` .ev ` field of the corresponding object ` c ` . in a similar way, the first column of the product of two circulant matrices is evaluated by the inverse discrete fourier transform of the product of the eigenvalues of the two factors . in all cases ,the computation is optimized in terms of complexity . after performing many operations which update the eigenvalues of an ` smcirc`object, it may be advisable to recompute ` c.ev ` , to improve its accuracy ; if required , this can be done by _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... c = smcirc(c.c ) , .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ as the user is not allowed to directly modify the fields of an object belonging to an ` smt ` class .l@ c|l@ c + ` plus ` & ` a+b ` & ` power ` & ` a.2 ` + ` uplus ` & ` + a ` & ` mldivide ` & ` ab ` + ` minus ` & ` a - b ` & ` mrdivide ` & ` a / b ` + ` uminus ` & ` -a ` & ` ldivide ` & ` a.b ` + ` mtimes ` & ` a*b ` & ` rdivide ` & ` a./b ` + ` times ` & ` a.*b ` & ` transpose ` & ` a. ` + ` mpower ` & ` a2 ` & ` ctranspose ` & ` a ` + all the _ overloaded _ operators , or _ methods _ , for ` smcirc ` objects are coded in a set of functions , whose names ( fixed by the matlab syntax ) are reported in table [ tab : operators ] , together with the equivalent matlab notations . each of these functions is called when at least one of the operands in an expression is of class ` smcirc ` ; if the two operands are different ` smt`objects , the method corresponding to the first one is called .the result is structured whenever this is possible . when an operation is performed between two circulant matrices ,the complexity is not larger than ( for example in the matrix product ) , while it may be larger when one of the arguments is unstructured ; e.g. , the product between a circulant and a full matrix , which is computed by multiplying the first operand times each of the columns of the second one , takes _ flops_. 
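the fft identities used by these fast methods are easy to check directly with built - in matlab functions ; in the sketch below the explicit matrix is formed only for verification , while the fast operations use two ffts .
....
n = 1024;
c = randn(n,1);  d = randn(n,1);  x = randn(n,1);   % first columns and a test vector
C = toeplitz(c, [c(1); c(end:-1:2)]);               % explicit circulant, for checking only

ev = fft(c);                                        % eigenvalues (the .ev field)
y  = real(ifft(ev .* fft(x)));                      % C*x computed with two ffts
e  = real(ifft(fft(c) .* fft(d)));                  % first column of the product C*D
norm(C*x - y)/norm(y)                               % should be at machine-precision level
....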
so , if ` c ` and ` d ` are both ` smcirc ` objects and ` x ` is a vector , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xxxxxxxxxxxxxxx = ` e = c*d ` produces a ` smcirc ` object , + ` y=(c+d)*x ` returns a vector , + ` f = c*rand(n ) ` returns an unstructured matrix , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and , in all cases , the fast algorithms implemented for the ` smcirc ` class are automatically used in the computation .the operations on ` smcirc ` objects which rely on the factorization , and so exhibit a complexity , are the product , the power and the left / right division for matrices . obviously , some operators do not involve floating point computations at all , like the transposition or the unary minus .[ algo : plus ] xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx = check validity and dimensions of input arguments and + deal with scalar or empty arguments + is ` smcirc ` + is scalar or ` smcirc ` + the result is ` smcirc ` + is ` smtoep ` + the result is ` smtoep ` + + the result is full + ` _\{if is not _ ` smcirc ` _ , then is } _ + is scalar + the result is ` smcirc ` + + the result is full a trivial implementation of the algorithms is not sufficient to obtain a package which is both robust and transparent for the user .in fact , each function should be able to handle most of possible user s errors , and should replicate the typical behaviour of matlab when any of the operands are scalars or empty arrays . as an example , we report in algorithm [ algo : plus ] the structure of the ` plus.m ` function , which is called when the first structured argument in a sum is an ` smcirc ` object .as it can be seen , when the operands are of different classes , the result belongs to the less structured class ; e.g. 
, circulant plus full is full , circulant plus toeplitz is toeplitz , etc .many matlab standard functions have been redefined for circulant matrices .they are listed and briefly described in table [ tab : functions ] ; among them , there are simple manipulation and conversion functions , like ` abs ` , ` double ` or ` full ` , some which return logical values ( the ` isxxx ` functions ) , and a few which optimize some computations for ` smcirc ` objects , like ` det ` , ` eig ` or ` inv ` .the implementation of the last three functions is straightforward , as each ` smcirc ` object contains the eigenvalue of the circulant matrix in its ` .ev ` field .we remark that some functions require a larger complexity for a circulant than for a full matrix , like ` imag ` , because extracting the imaginary part of the entries of a circulant matrix requires to recompute its eigenvalues .ll|ll + ` abs ` & absolute value & ` fix ` & round towards zero + ` angle ` & phase angle & ` floor ` & round towards + ` conj ` & complex conjugate & ` ceil ` & round towards + ` imag ` & imaginary part & ` round ` & round argument + ` real ` & real part & ` sign ` & signum function + + ` size ` & size of array & ` get ` & get object fields + ` length ` & length of array & ` isempty ` & true for empty array + ` display ` & display array & ` isequal ` & true for equal arrays + + ` diag ` & diagonals of a matrix & ` reshape ` & change size + ` full ` & convert to full matrix & ` tril ` & lower triangular part + ` prod ` & product of elements & ` triu ` & upper triangular part + ` sum ` & sum of elements + + ` double ` & convert to double & ` subsasgn ` & subscripted assignment + ` single ` & convert to single & ` subsindex ` & subscript index + ` isa ` & true if object is in a class & ` subsref ` & subscripted reference + ` isfloat ` & true for floating point & ` end ` & last index + ` isreal ` & true for real array + + ` det ` & determinant & ` inv ` & matrix inverse + ` eig ` & eigenvalues and eigenvectors + the list in table [ tab : functions ] is surely incomplete , since in principle all matlab matrix functions could be overloaded for circulant matrices .we implemented those functions which we consider useful , leaving an extension of this list , if motivated by real need , to future versions of the package .it is sufficiently easy to add new methods to the class , since the user can start from an existing function , as a template , and then place the new file in the ` smt/ ` directory .let us add some comments on some of the functions listed in table [ tab : functions ] .when adding a new class to matlab , there are a number of functions which must be defined so that the class conforms to matlab syntax rules .the ` get ` method allows to extract a field from an object , while ` display ` defines how an object should be visualized on the screen ; this can be customized in ` smt ` , as it will be shown in section [ sec : general ] .some other functions define the effect of subindexing on the new class .we let two of them , ` subsasgn ` and ` subsindex ` , just return an error code for an ` smcirc ` object , since we consider them useless for circulant matrices .the third one , ` subsref ` , is a function which allows to access a field ( ` c.c ` ) or an element ( ` c(2,3 ) ` ) of a circulant matrix , and to use typical matlab subindexing expressions like ` c ( : ) ` or ` c(3:4 , : ) ` .notice that ` c(1:3,4:7 ) ` returns a toeplitz matrix ( i.e. 
, an ` smtoep`object ; see section [ sec : smtoep ] ) , while ` c([1,3,5],6:8 ) ` returns a full matrix .the class ` smcirc ` includes two additional methods : ` smtvalid ` is a function , called by other functions of the toolbox , which determines if an object is a valid operand in an expression , while ` smtoep ` converts an ` smcirc`object into an ` smtoep ` one , as a circulant matrix is also a toeplitz matrix .a toeplitz matrix of order is a matrix whose elements are constant along diagonals , that is e.g. , for , we introduced a class ` smtoep ` , for toeplitz matrices , similar to the ` smcirc`class .an ` smtoep ` object can be created by specifying its first column and row , for example with the command _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... t = smtoep([4:7],[4:-1:1 ] ) , .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ or giving only the first column , in which case the resulting matrix is hermitian .similarly to what happens to ` smcirc ` objects , an ` smtoep ` object can be displayed either as _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... t = 4 3 2 1 5 4 3 2 6 5 4 3 7 6 5 4 .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ or _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... t = smtoep object with fields : type : ' toeplitz ' t : [ 7x1 double ] dim1 : 4 dim2 : 4 cev : [ 8x1 double ] .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ depending on the ` display ` configuration parameter ; see ` smtconfig ` in section [ sec : general ] .an ` smtoep ` object has two fields for the number of rows ( ` t.dim1 ` ) and columns ( ` t.dim2 ` ) of the matrix , while ` t.t ` contains the data to reconstruct the matrix , namely the first row and column , in a form which is convenient for computation ; in the above example , the meaning of the ` t.cev ` field will be explained later in this section .the componentwise operators , like sum , subtraction , and the so - called dot - operators of matlab , can be easily implemented for toeplitz matrices , similarly to what has been done for the ` smcirc ` class .regarding the matrix product , it is well known that a toeplitz matrix can be embedded in a circulant matrix ; e.g. , given the matrix , we can write .\label{eq : ctmat}\ ] ] so , to compute the product by a fast algorithm , one can construct a vector by padding with zeros to reach the dimension of , then compute by , and finally extract from the first components of the result .the zero diagonals in can be deleted , in which case the dimension of is minimal , or `` tight '' : if is , then the `` tight '' dimension of is . on the contrary, we can insert as many zero diagonals as we want .this may be useful , because the implementations of the ` fft ` perform better when the length of the input vector is a power of 2 . in `smt ` both choices are available , and can be selected by editing the command ` smtconst ` ; see section [ sec : general ] .although matlab implementation of the ` fft ` , namely fftw , exhibits a very good performance also when the size of the input vector is a prime number , we observed that matrix product is generally faster if we extend the matrix to the next power of 2 exceeding .a particular function has been created to speed - up toeplitz matrix multiplication .thus , the command _ _ _ _ _ _ _ _ _ _ _ _ .... t = toeprem(t ) .... 
_ _ _ _ _ _ _ _ _ _ _ _ pre - computes the eigenvalues of the matrix , and stores them in the ` .cev ` field .this is done automatically when an ` smtoep ` object is allocated , and allows to perform only two ` fft ` s for each matrix product , instead of three .the price to pay is that , like in the case of circulant matrices , some elementary functions involving ` smtoep ` objects have a complexity larger than expected , as they need to compute the ` .cev ` field of the result .if this behaviour is not convenient , the automatic call to ` toeprem ` can be disabled by the ` smtconfig ` command ( see section [ sec : general ] ) , and the user can either call ` toeprem ` when needed , or renounce to multiplication speedup . all the operators and functions of tables [ tab : operators ] and [ tab : functions ] have been implemented for the class ` smtoep ` , with some differences . unlike the circulant matrices ,there is not a standard method to invert a toeplitz matrix , or to compute its determinant or eigenvalues . on the contrary ,various different algorithms are available and , probably , more will be developed in the future .for this reason the functions ` inv ` , ` det ` and ` eig ` , supplied with the toolbox , return an error for an ` smtoep ` object , and they are intended to be overwritten by user supplied programs . solving circulant linear system is immediate , by employing the factorization , and the computation requires just two ` fft ` s . the algorithm is implemented in the functions ` mldivide ` and ` mrdivide ` , placed into the ` ` directory , and is accessible via the usual matrix left / right division operators . to solve toeplitz linear systems , one possibility is to use an iterative solver , either user supplied , or among those ( ` pcg ` , ` gmres ` , etc . )available in matlab .this can be done transparently , taking advantage of the compact storage and fast matrix - vector product provided by the toolbox .it is usual to employ preconditioners to speed up the convergence of iterative methods . in the case of toeplitz linear systems, it has been proved that various classes of circular preconditioners guarantee superlinear convergence for the conjugate gradient method ; see .the function ` smtcprec ` , included in the toolbox , provides the three best known circulant preconditioners , and can be easily extended to include more . for a given toeplitz matrix , the function can construct the strang preconditioner , which suitably modifies the matrix to make it circulant , the so - called optimal preconditioner , which is the solution of the optimization problem where is the algebra of circulant matrices and denotes the frobenius norm , and the superoptimal preconditioner , which minimizes for .while the strang preconditioner is defined only when is toeplitz , the optimal and superoptimal preconditioners can be computed for any matrix ; the function ` smtcprec ` allows this , though the computation of the preconditioner is fast only for a toeplitz matrix .the code in the functions ` strang ` , ` optimal ` , and ` superopt ` ( see table [ tab : compgen ] ) was developed by one of the authors during the research which led to , and the details of the algorithms are described in that paper . 
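both the circulant embedding used for fast toeplitz products and the strang preconditioner can be sketched with a few lines of built - in matlab ; in the sketch below the dense matrix is formed only to check the result , and the embedding is the `` tight '' one of size 2n - 1 .
....
n   = 500;
col = randn(n,1);   row = [col(1); randn(n-1,1)];   % first column and first row of T
x   = randn(n,1);
T   = toeplitz(col, row);                           % dense copy, for checking only

cemb = [col; row(end:-1:2)];                        % first column of the embedding circulant
p    = length(cemb);                                % p = 2n-1 ("tight" embedding)
y    = ifft(fft(cemb) .* fft([x; zeros(p-n,1)]));   % circulant product of the padded vector
y    = real(y(1:n));                                % T*x sits in the first n entries
norm(T*x - y)/norm(y)                               % machine-precision agreement expected

m  = floor(n/2);                                    % Strang preconditioner: copy the central
cs = [col(1:m+1); row(n-m:-1:2)];                   % diagonals of T into a circulant column
applyP = @(r) real(ifft(fft(r)./fft(cs)));          % P\r costs two ffts (P assumed nonsingular)
....
inside ` pcg ` the preconditioner solve is performed exactly as in ` applyp ` above , at the cost of two ffts per iteration .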
to compute the superoptimal preconditioner , it is possible to use either the method introduced in , or the one from .it is remarkable that , since the second algorithm is based on the use of certain toeplitz matrices , our implementation is greatly simplified , as it performs the computation using the arithmetics provided by the toolbox itself .using circulant preconditioners with the iterative methods available in matlab is straightforward , as these functions use the matrix left division to apply a preconditioner , and so take advantage of the storage and fast algorithms furnished by our toolbox .for example , the instructions _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... t = smtgallery('gaussian',5000 ) ; b = t*ones(5000,1 ) ; c = smtcprec('strang',t ) ; [ x , flag , relres , iter]=pcg(t , b,[],[],c ) ; .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ create a gaussian linear system of dimension 5000 ( see section [ sec : general ] ) with prescribed solution , and solve it by the conjugate gradient method , preconditioned by the strang circulant preconditioner .ll|ll & + ` smtcprec ` & circulant preconditioners & ` toms729 ` & toeplitz solver + ` strang ` & strang preconditioner & ` tlls ` & toeplitz ls solver + ` optimal ` & optimal preconditioner & ` tsolve ` & user supplied function + ` superopt ` & superoptimal precond .& ` tsolvels ` & user supplied function + + ` issmcirc ` & true for ` smcirc ` object & ` smtconfig ` & toolbox configuration + ` issmtoep ` & true for ` smtoep ` object & ` smtconst ` & toolbox constants setting + ` smtcheck ` & check toolbox installation & ` smtgallery ` & test matrices + besides the iterative methods , there are also many fast and superfast direct solvers for a toeplitz linear system , and some of them have been implemented in publicly available subroutines . with our toolbox , we distribute two of them , having computational complexity ; they are called when one of the matrix division operators ( either `` or ` / ` ) are used to invert a toeplitz matrix .the related files , listed in table [ tab : compgen ] , are placed in the ` private ` subdirectory of ` smt/ ` . the first one , `toms729 ` , is an implementation of the extended levinson algorithm for nonsingular toeplitz linear systems , written in fortran , for which a matlab mex gateway is available .this solver , which has been implemented only for real matrices , calls the ` dsytep ` subroutine from if the system matrix is symmetric , and ` dgetep ` in the general case , with the ` pmax ` parameter set to 10 .when the linear system is overdetermined ( and full - rank ) the toolbox calls the c - mex program ` tlls ` , developed in , which converts it into a cauchy - like system , and computes its least - squares solution as the schur complement of the augmented matrix ,\ ] ] using the generalized schur algorithm with partial pivoting .complex linear systems are supported . if the matrix is either underdetermined or rank - deficient , an error is returned .it is possible for the user to use different algorithms , by supplying the functions ` tsolve ` , for a nonsingular linear system , or ` tsolvels ` , for least - squares , overwriting those placed in the directory ` smt/ / private ` , and changing the default behaviour of the toolbox with the ` smtconfig ` command .for example , entering from the command line the instructions _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .... smtconfig intsolve off x = t\b .... 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ disables the solver ` toms729 ` , and solves the linear system by the user supplied function ` tsolve ` . besides the overloaded operators and methods located in the ` ` and ` ` directories , some general functions , listed in table [ tab : compgen ] , are placed in the main toolbox directory , and are directly accessible to the user . among these functions , we find the two ` isxxx ` functions , which return logical values and check if the supplied parameter belongs to the ` xxx ` class , the ` smtcheck ` function , which verifies if the toolbox is correctly installed , and the function ` smtconst ` , intended to define global constants ( for the moment only the dimension of the circulant embedding ). of particular relevance is the ` smtconfig ` function , which modifies the behaviour of the toolbox for what concerns the display method for objects ( ` display ` parameter ) , the use of the toeplitz premultiplication routine , discussed in section [ sec : smtoep ] ( ` toeprem ` parameter ) , the warnings setting ( ` warnings ` parameter ) , and the active toeplitz solvers ( ` intsolve ` and ` intsolvels ` parameters , see section [ sec : toesys ] ) . for example , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xxxxxxxxxxxxxxxxxxxxxxxxx = ` smtconfig display compact ` ( or ` off ` ) selects compact display of objects , + ` smtconfig display full ` ( or ` on ` ) restores standard display method ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ all parameter are set by default to the ` on ` state ; calling the command ` smtconfig ` with no parameters shows the state of all settings .ll + ` crrand ` & uniformly distributed random matrix + ` crrandn ` & normally distributed random matrix + + ` algdec ` & matrix with algebraic decay + ` expdec ` & matrix with exponential decay + ` gaussian ` & gaussian matrix + ` tchow ` & chow matrix + ` tdramadah ` & matrix of 0/1 with large determinant or inverse + ` tgrcar ` & grcar matrix + ` tkms ` & kac - murdock - szego matrix + ` tparter ` & parter matrix + ` tphans ` & rank deficient matrix from + ` tprand ` & uniformly distributed random matrix + ` tprandn ` & normally distributed random matrix + ` tprolate ` & prolate matrix + ` ttoeppd ` & symmetric positive definite toeplitz matrix + ` ttoeppen ` & pentadiagonal toeplitz matrix + ` ttridiag ` & tridiagonal toeplitz matrix + ` ttriw ` & upper triangular matrix discussed by wilkinson + we end this section reporting another important feature of the toolbox .a collection of test matrices is available in ` smtgallery ` , which is modelled on matlab ` gallery ` function , but returns structured objects .the collection , listed in table [ tab : smtgallery ] , includes random matrices , three matrices studied in ( ` algdec ` , ` expdec ` and ` gaussian ` ) , one from , and all the toeplitz matrices provided by ` gallery ` , most of which come from .the syntax of ` smtgallery ` is in the same style as the matlab ` gallery ` function , and documentation is provided for each test matrix .for instance , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xxxxxxxxxxxxxxxxxxxx = ` t = smtgallery(gaussian,7 ) ` constructs a toeplitz gaussian matrix , + ` c = smtgallery(crrand,7,c ) ` returns a random complex circulant matrix , + ` help private / tchow ` displays the help page for the chow matrix . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _the toolbox is entirely written in the matlab programming language .the version officially supported is 7.7 ( i.e. , release 2008b ) , anyway we tested it on previous versions , way back to 6.5 , without problems . to install it , it is sufficient to uncompress the archive file containing the software , creating in this way the directory ` smt ` and its subtree .this directory must be added to the matlab search path by the command ` addpath ` , in order to be able to use the toolbox from any other directory .as noted in the previous sections , a great effort has been devoted to catch all possible user s errors , and to reproduce the standard behaviour of matlab , for example for what concerns the output of each function in the presence of empty or scalar arrays in input .since these features are scarcely documented in matlab manuals , our choices are mostly due to experimental tests .we remark that the ` smtconfig ` function , described in section [ sec : general ] , relies on the use of _ warnings _ , so issuing the matlab command ` warning on ` restores the initial configuration of the toolbox , while ` warning off ` may cause unpredictable results .some of the toolbox functions use the ` isfloat ` command , which was introduced in version 7 of matlab . for those who are using version 6.5 , a patch for this functionis included in the software ; see the ` readme.txt ` file in the main toolbox directory .the two programs used to solve toeplitz linear system are the only ones which need to be compiled : ` toms729 ` was written in fortran , and uses the fortran - mex gateway from , while ` tlls ` was originally developed as a c - mex program .the mex interface is a library , distributed with matlab , which allows to dynamically link a fortran or c subroutine to matlab , and to exchange input and output parameters between the compiled program and the environment , using the usual matlab syntax .both toeplitz solvers can be easily compiled under linux , using the ` makefile ` placed in the directory ` smt/ / private ` , and we provide precompiled executables for 32 and 64 bits architectures .compiling the same programs under windows is a bit more involved : we used a porting of the gnu - c compiler and the `` mex configurator '' gnumex , but precompiled executables are available for various matlab versions ; see the ` readme.txt ` file and the content of the ` smt/ / private ` directory .the structured matrix toolbox is a matlab package which implements optimized storage and fast arithmetics for circulant and toeplitz matrices , offering a robust and easily extensible framework .the toolbox is available at the web page http://bugs.unica.it / smt/. we are currently performing numerical tests to assess its performance , and there are plans to extend its functionality by adding the support for other classes of structured matrices .
|
we introduce the ` smt ` toolbox for matlab . it implements optimized storage and fast arithmetics for circulant and toeplitz matrices , and is intended to be transparent to the user and easily extensible . it also provides a set of test matrices , computation of circulant preconditioners , and two fast algorithms for toeplitz linear systems .
|
in the present work two problems from the theory of filtration through a horizontal porous stratum are considered .first we study a short , but intense , flooding followed by natural outflow through the vertical face of an aquifer .further , we consider the possibility to control the spreading of the water mound by use of forced drainage at the boundary .an important practical example of such a problem is groundwater mound formation and extension following a flood , after a breakthrough of a dam , when water ( possibly contaminated ) enters and then slowly extends into a river bank .consider an aquifer that consists of a long porous stratum with an impermeable bed at the bottom and a permeable vertical face on one side ( fig .[ fig : fig1 ] ) .the space coordinate is directed along the horizontal axis with at the vertical face .a water reservoir is located in the region .we assume that the flow is homogeneous in the y - direction . the height of the resulting mound is denoted by . the initial level of water in the stratum is assumed to be negligible .the problem is formulated as follows . at some time ,the water level at the wall begins to rise rapidly , and water enters the porous medium . by time , the water level at the vertical face returns to the initial one.we assume that the distribution at time is given by and is concentrated over a finite region ] .these are certainly highly idealized problems , but their solutions allow one to extract the qualitative properties and to check the numerical methods in solving more realistic problems .in the case of seepage and gently sloping profile and in the absence of capillary retention , the model of flow in a porous stratum is described by the boussinesq equation ( [ 4 ] see also [ 2],[1 ] ) : here , is the permeability of the medium , its porosity ( the fraction of the volume in the stratum which is occupied by the pores ) , the fluid density , its dynamic viscosity , and the acceleration of gravity .according to the hydrostatic law , water pressure .then , the total head is constant throughout the height of the mound . under the assumption of seepage and gently sloping profiles ,darcy law is used to obtain the relation for the total flux .mathematical properties of the boussinesq equation are well known [ 5 ] .an essential feature of this equation is the finite speed of disturbance propagation given a finite ( compactly supported ) initial distribution .another important feature of this equation is the existence of special self - similar solutions .the graphs of such a solution for any two times and are related via a similarity transformation [ 1 ] .the special solutions , themselves corresponding to certain , sometimes artificial , initial and boundary conditions , are important because they provide intermediate asymptotics for a wide class of initial value problems .for these problems , the details of the initial distribution affect the solution only in the beginning ; after some time , the solution approaches a self - similar asymptotics .the boussinesq equation has been studied extensively and a number of self - similar solutions , for different boundary conditions , have been constructed ( [ 2 ] , [ 3 ] ) .following [ 1 ] , [ 6 ] , the boussinesq equation can be modified to incorporate the effects of capillary retention into the model . 
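before describing the modification , it is useful to recall a standard form of the boussinesq equation ( cf . ( [ eq0 ] ) ) , written in terms of the parameters listed above :
\[
m\,\frac{\partial h}{\partial t}
 \;=\; \frac{k\rho g}{\mu}\,\frac{\partial}{\partial x}\!\left(h\,\frac{\partial h}{\partial x}\right)
 \;=\; \frac{k\rho g}{2\mu}\,\frac{\partial^{2}h^{2}}{\partial x^{2}}\,,
\qquad
q \;=\; -\,\frac{k\rho g}{\mu}\,h\,\frac{\partial h}{\partial x}\,,
\]
where h ( x , t ) is the height of the groundwater mound and q the horizontal flux per unit width obtained from darcy 's law under the hydrostatic pressure distribution .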
if we exclude the possibility of water reentering the region that was filled with water at some earlier time and assume that initially the stratum is empty , we have the following situation : when water enters a pore , it occupies the entire volume , allowed by active porosity ; when water leaves the pore , a fraction of the pore volume remains occupied by the trapped water .we assume that is constant .let us denote the initial active porosity by . then, when water is entering previously unfilled pores , the effective porosity is ; when water is leaving previously water - filled pores , the effective porosity becomes .hence , in the presence of capillary retention , porosity depends on the sign of .notice , that permeability can be assumed unaffected , as the effect of capillary forces on permeability is significant only for small and/or dead - end pores , whose contribution to the total flux , in the first approximation , can be neglected . the rate of change in the amount of water inside a volume element ( fig .[ fig : fig1 ] ) is equal to : on the other hand , the rate of change in the volume of water due to the flux through the faces of a volume element is equal to we denote and .then , using the continuity of flux ( no sources inside the water mound ) and the balance of mass we obtain : this is a nonlinear parabolic partial differential equation with discontinuous coefficients , also known as the generalized porous medium equation [ 2 ] , [ 6 ] .continuity of the flux implies that at the mound tip , where mound height is zero , the flux is also zero . for problem 1, these considerations lead to the following initial and boundary conditions to supplement equation ( [ eq1 ] ) : the second line in ( [ eq2 ] ) corresponds to the free boundary conditions on the right boundary , , which is unknown a priori .it should be noted that for the solution of equation ( [ eq0 ] ) ( but not for ( [ eq1 ] ) ) with boundary conditions ( [ eq2 ] ) the dipole moment is constant : we call equation ( [ eq1 ] ) with boundary conditions ( [ eq2 ] ) a dipole - type problem .a similar problem , for source type initial and boundary conditions was considered in [ 6 ] , see also [ 1 ] . for problem 2 ,the boundary conditions are changed to include the forced drainage condition .the discharge rate , which is a quantity that should be specified , determines the boundary condition at the left free boundary . second and third lines in ( [ eq2_1 ] ) define , respectively , the free boundary condition on the right boundary and the forced drainage condition on the left boundary . equation ( [ eq1 ] ) together with boundary conditions ( [ eq2_1 ] ) define problem 2 .the parameters in the problem are , - the initial width of the water mound , and - the initial dipole moment .we can take the dimensions as follows : =h ] , =t ] . 
for the remaining parameters =l ] .the dimensions for and are set to be independent .this can be done because the differential equation ( [ eq1 ] ) is invariant with respect to the following group of transformations : the invariance insures that we can scale the units of measurement for , while keeping the units for unchanged .the following dimensionless quantities can be obtained from these parameters : it follows that .since for large times , , the parameter , it would seem natural to set , as in the case of , and look for a solution of the form : however , this leads to a contradiction when we consider an ordinary differential equation obtained from ( [ eq1 ] ) : multiplying both sides by we obtain an equation in total differentials , which is readily solved : observe that near , where the height of the mound vanishes , the first equation holds . at , vanishes along with the flux , which is proportional to . from the first equation at , we obtain that .similarly , evaluating the second expression at , where , we find that .next , evaluating the two expressions at , we obtain : and using we obtain . for , the solution can be found [ 2 ] and thus the assumption of complete similarity in is correct .however , in the case of , we have .this is a contradiction , because the change in sign of should occur inside the mound , where the height is positive .hence , the assumption of complete similarity for does not hold .we next solve the problem numerically and study the asymptotic behavior of the solution .in order to simplify the numerical solution for equation ( [ eq1 ] ) with free boundary conditions ( [ eq2 ] ) , we use a change of variables : .we set , and equation ( [ eq1 ] ) is transformed : with boundary conditions .this effectively fixes the right boundary at .the location of the free boundary can be obtained in the course of the numerical solution in the following way .we assume that the solution is nearly stationary near the tip and . here, denotes the instantaneous speed of mound extension , which changes slowly as a function of . then near .considering equation ( [ eq1 ] ) near the boundary of the mound , where is small , we have : so that we solve the new boundary value problem numerically by using a forward - in - time , centered - in - space finite - difference approximation , where is an approximation to the solution of ( [ eq10.1 ] ) at the grid point : > 0 [ ( u_{i-1}^{n-1})^2 -2(u_{i}^{n-1})^2 + ( u_{i+1}^{n-1})^2 ] < 0 $ } ; \\ \end{cases}\nonumber \\ u_i^{n+1}&= & u_i^{n}+\frac{\delta t}{\delta x^2 } \ { \kappa_i^n [ ( u_{i-1}^n)^2 - 2(u_{i}^n)^2 + ( u_{i+1}^n)^2]\nonumber \\ & - & \kappa_1 \xi_i ( u_i^n - u_{i-1}^n)(u_n^n - u_{n-1}^n)\}/{(x_r^n)^2 } , \nonumber\\ x_r^{n+1}&= & x_r^{n}-2 \kappa_1 \frac{\delta t}{\delta x } \ > \frac{u_n - u_{n-1}}{x_r^{n } }. \nonumber\end{aligned}\ ] ] in the numerical computation we start with an initial distribution of the source type , localized near ( fig.[fig : fig2 ] ) .before the left free boundary reaches the point , the solution is of the source type and we can compare our numerical results to those in [ 6 ] . 
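The finite-difference formulas quoted above were garbled in extraction. What survives shows a forward-in-time, centered-in-space update in which the coefficient switches between kappa_1 and kappa_2 according to the sign of the discrete second difference of u^2 (i.e., whether the local level is rising or receding), plus correction terms coming from the moving-boundary rescaling. The sketch below reproduces only the switching idea on a fixed, sufficiently large grid, with hypothetical parameter values; it is a simplified illustration, not the authors' scheme.

```python
import numpy as np

kappa1, kappa2 = 1.0, 2.0            # hypothetical effective diffusivities for the two regimes
L, N = 10.0, 200                      # fixed computational domain (no front-tracking rescaling here)
dx = L / N
dt = 0.1 * dx**2 / max(kappa1, kappa2)   # conservative explicit time step

x = np.linspace(0.0, L, N + 1)
u = np.exp(-((x - 1.0) / 0.3) ** 2)      # initial mound concentrated near the left face
u[u < 1e-3] = 0.0

def step(u):
    u2 = u ** 2
    d2 = u2[:-2] - 2.0 * u2[1:-1] + u2[2:]           # centered second difference of u^2
    kappa = np.where(d2 > 0.0, kappa1, kappa2)       # switch on the sign of the level change
    unew = u.copy()
    unew[1:-1] = u[1:-1] + dt / dx**2 * kappa * d2   # FTCS update of u_t = kappa * (u^2)_xx
    unew[0] = 0.0                                    # zero head at the permeable vertical face
    unew[-1] = 0.0                                   # mound never reaches the far end of the grid
    return np.maximum(unew, 0.0)

for _ in range(20000):
    u = step(u)
```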
after some time the left free boundary reaches , where it is thereafter fixed ( fig .[ fig : fig2 ] ) .now we consider the scaled solution : we can see in figs .[ fig : fig6 ] and [ fig : fig7 ] that as time increases the numerical solution approaches a self - similar regime , so that the graphs of the scaled solution for different times `` collapse '' into a single curve .moreover , figs .[ fig : fig3 ] and [ fig : fig4 ] show a power - law dependence on time for both and in the self - similar regime . in part 3, we have shown that a self - similar solution of the first kind does not exist for this problem . to explain what happened we return to the dimensional analysis and look now for a generalized self - similar solution .we have determined that the variables in the problem are related as follows : , where our numerical investigation shows that for large , as : where and are constants .in fact , this is the next simplest situation after complete self - similarity and it is referred to as self - similarity of the second kind in ( see [ 2 ] ) .indeed , from the analysis above : where the parameters and depend on the ratio . they can not be determined on the basis of dimensional analysis alone and have to be computed as a part of the solution .we will see that there is , actually , only one unknown parameter involved , since the differential equation provides an additional relation between and .the numerical solution of partial differential equation showed that there is indeed an intermediate asymptotic solution of the form ( [ eq10 ] ) .now , we can obtain such a self - similar solution by transforming the problem of solving partial differential equation ( [ eq1 ] ) with boundary conditions ( [ eq2 ] ) into a nonlinear eigenvalue problem .we substitute ( [ eq10 ] ) into ( [ eq1 ] ) and normalizing so that we get : where since equation ( [ eq15 ] ) can not depend on time explicitly , .finally , we get an ordinary differential equation : the boundaries and , in the new space variable , correspond to and .the boundary condition at becomes : for the right boundary , , we have : from [ eq12 ] and [ eq13 ] it follows that and the tip conditions become : the second order ode ( [ eq12 ] ) with three boundary conditions ( [ eq14 ] and [ eq14.1 ] ) constitutes a non - linear eigenvalue problem , which we now have to solve numerically .for each value of , we find a value of such that the boundary conditions are satisfied .we use a high order , taylor - expansion - based method to start the integration at followed by a 4th order runge - kutta method and an iterative procedue to arrive at the value for such that the third condition is satisfied . for computational convenience , we transform the differential equation by changing variables : , so that does not have a singularity at . in this manner , we obtain the dependence of on ( fig . [fig : fig5 ] ) .in logarithmic coordinates , we obtain from ( [ eq10 ] ) : i.e. , straight lines with slopes and . 
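A remark on the shooting procedure described above: one starts the integration at one end (with a Taylor expansion, to handle the point where the height vanishes), integrates with a Runge-Kutta method, and iterates on the unknown exponent until the remaining boundary condition is satisfied. Since the coefficients of the transformed ODE were lost in extraction, the skeleton below demonstrates the same shooting logic on a classical toy eigenvalue problem (y'' + lambda*y = 0 on [0,1] with y vanishing at both ends, whose smallest eigenvalue is pi^2); only the right-hand side would change for the actual problem.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(lam):
    # Integrate y'' = -lam*y from 0 to 1 with y(0)=0, y'(0)=1 and return y(1);
    # the eigenvalue is the value of lam for which this residual vanishes.
    rhs = lambda t, y: [y[1], -lam * y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lam_star = brentq(shoot, 5.0, 15.0)      # bracket the eigenvalue, then root-find on it
print(lam_star, np.pi**2)                # both are ~9.8696
```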
from plots in figs .[ fig : fig3 ] and [ fig : fig4 ] , we can observe that after some initial time both graphs approach straight lines .we repeat the calculations for a range of values of .comparison of the results of the numerical solution of the nonlinear eigenvalue problem with the results obtained from the numerical solution to the partial differential equation ( fig .[ fig : fig5 ] ) , shows that the two agree with high precision .also , the exact solution for the case gives the value , which coincides with the results of the numerical computations with good accuracy .although problems 1 and 2 are similar , the numerical treatment of problem 2 is more complicated. time evolution of the left boundary in problem 2 makes rescaling , which was used in the numerical solution of problem 1 , infeasible .instead , we solve equation ( [ eq1 ] ) on a grid , taking into account that the left and right boundaries may not fall onto gridpoints .we determine new positions of the boundaries from the numerical solution at each timestep .equation ( [ eq1 ] ) is discretized using a forward - in - time , centered - in - space finite - difference scheme : > 0 , \\\kappa_2 & \text{if } [ ( u_{i-1}^{n-1})^2 -2(u_{i}^{n-1})^2 + ( u_{i+1}^{n-1})^2 ] < 0;\\ \end{cases}\nonumber \\u_i^{n+1}&= u_i^{n}+\frac{\delta t}{\delta x^2 } \ { \kappa_i^n [ ( u_{i-1}^n)^2 - 2(u_{i}^n)^2 + ( u_{i+1}^n)^2]\ } , \nonumber \\ u_l^{n+1}&= u_l^{n}+\frac{2 \delta t}{\delta x + \delta x_l } \kappa_2^n \ { \frac{(u_{l+1}^n)^2 - ( u_{l}^n)^2 } { \delta x } - q^n \ } , \\u_r^{n+1}&= u_r^{n}+\frac{2\delta t}{\delta x + \delta x_r } \kappa_1^n \ { \frac{(u_{r-1}^n)^2 -(u_{r}^n)^2}{\delta x } - \frac{(u_{r}^n)^2}{\delta x_r } \}.\nonumber\end{aligned}\ ] ] here and are the nonzero values of on the grid , adjacent to the left and right boundaries respectively , is the drainage flux , and are distances from the left and right boundaries to the grid points .we treat the values and separately in order to incorporate the boundary conditions and improve precision .the location of the left boundary is obtained from the values of : the right boundary location is obtained by extrapolation from the values of .we check the numerical method for by comparing the numerical solution with a known analytic solution .the exact self - similar solutions for the problems with forced drainage are given in [ 3 ] .we choose a value of and then solve an ordinary differential equation ( [ eq15 ] ) with the initial condition ( [ eq14 ] ) . for solution of the ordinary differential equation intersects the -axis at some point and at . from the solution of the ordinary differential equationwe obtain a self - similar solution : of the partial differential equation .the locations of the free boundaries are given by , , and the drainage flux is given by ( see [ 3 ] ) .we use the self - similar solution at some time as an initial value for the numerical solver , set drainage flux on the left boundary to be , and compute the solutions until time . 
as in the analysis in section [ sec_compar ] , the graphs of , should be straight lines in logarithmic coordinates , and the graphs of the scaled solution for different times should collapse into one curve .that s what we observe in figs .[ fig : fig9 ] and [ fig : fig10 ] .now , we try to model the conditions of a flood followed by forced drainage , as described in the introduction .we begin by computing the solution to problem 1 until some time , which corresponds to the flood followed by natural drainage through the boundary of the aquifer .after , we set a constant drainage flux at the left boundary .in particular , we set to equal twice the natural drainage flux at time . as we see in fig . [fig : fig11 ] , the water mound , that has appeared after the flood , is completely extinguished in finite time .1 . the numerical simulations of two problems involving drainage and capillary retention of the fluid a in porous medium were presented .it was shown that the problem with dipole type initial and boundary conditions has a self - similar intermediate asymptotics in the case of a porous medium with capillary retention .2 . a problem of control of the water mound extension by forced drainage was considered .the possibility of extinguishing the propagating water mound by creating a forced drainage flux at the left boundary was confirmed numerically .using our results , it should be possible to derive a cost efficient drilling regime and to localize the mound and contain the contamination inside a prescribed region. it would be interesting to extend the numerical investigation above to the case of a fissurized porous medium .the authors are grateful to professor g.i .barenblatt , without whose direction and advice this work would not have been possible .the authors use this occasion to thank professor a. chorin for many helpful discussions of this work and for his constant attention and encouragement .this work was supported in part by the computational science graduate fellowship program of the office of scientific computing in the department of energy , nsf grant contract dms-9732710 , and the office of advanced scientific computing research , mathematical , information , and computational sciences division , applied mathematical sciences subprogram , of the u.s .department of energy , under contract no .de - ac03 - 76sf00098 .barenblatt , _ scaling , self - similarity , and intermediate asymptotics _ , first ed . , cambridge university press , new york , 1996 . *barenblatt , v.m .entov , and v.m .ryzhik , _ theory of fluid flows through natural rocks _ , first ed ., kluwer academic publishers , dordrecht , 1990 . * g.i .barenblatt and j.l .vasquez , _ a new free boundary problem for unsteady flows in porous media _ , euro .jnl of applied mathematics * 9 * ( 1998 ) , 3754 . * c.w .fetter , _ applied hydrogeology _ , third ed . , macmillan college publishing company , new york , 1988 . * a.s .kalashnikov , _ some problems of qualitative theory of the non - linear second - order parabolic equations _, russian math . surveys ( 1987 ) , no .42 , 169222 .kochina , n.n .mikhailov , and m.v .filinov , _ groundwater mound damping .j. engng sci * 21 * ( 1983 ) , no . 4 , 413421 . * b.a .wagner , _ perturbation techniques and similarity analysis for the evolution of interfaces in diffusion and surface tension driven problems _ , zentrum mathematik , tu munchen , 1999 .
|
A model of unsteady filtration (seepage) in a porous medium with capillary retention is considered. It leads to a free boundary problem for a generalized porous medium equation, in which the location of the boundary of the water mound is determined as part of the solution. The numerical solution of the free boundary problem is shown to possess self-similar intermediate asymptotics. Alternatively, the asymptotic solution can be obtained from a nonlinear boundary value problem; the numerical solution of the resulting eigenvalue problem agrees with the solution of the partial differential equation at intermediate times. In the second part of the work, we consider the problem of controlling the extension of the water mound by forced drainage.
|
the demand for mobile broadband is increasing rapidly with emerging applications and services .these services require an optimized quality - of - service ( qos ) for efficient network operation .current state - of - the - art research focuses on traditional techniques such as interference management , cooperative communication and cognitive radio to improve spectrum utilization .however , these techniques can not meet the predicted data traffic growth alone without additional spectrum and more pronounced network densification .therefore , heterogeneous networks , comprising of small cell base stations ( sbss ) underlying the macrocellular network , constitute a promising solution to improve coverage and boost capacity . however , simply adding small cells incurs high capital and operational expenditures , which limits their deployments . to remedy to this , content caching at the network edgehas recently been identified as one of the five most disruptive paradigms in 5 g networks .dynamic caching can significantly offload different parts of the network including the radio access network , core network , and backhaul , by smartly prefetching and storing contents closer to the end - users . as a result ,network congestion is eased , backhaul is offloaded and users quality - of - experience ( qoe ) is maximized . + a significant work has been done on content caching in wireless networks . in ,the author presented and optimized opportunistic cooperative mimo ( comp ) scheme for wireless video streaming by caching part of video files at the relays . in , data is cached on different wireless caches with limited storage capabilities and the performance of uncoded and coded data transmission is evaluated by minimizing the distance to retrieve the complete data file .collaborative caching is presented in where the social welfare is maximized using vickrey - clarke - groves ( vcg ) auctions .femtocaching was proposed in where caching helpers store popular contents optimally , thus improving frequency reuse . proactive caching by prefetching contents on different network elements , thus providing substantial gains in terms of backhaul savings and satisfied users . evaluates the performance of caching on helper stations / devices and optimize video quality by devising optimal storage schemes . estimates the loss rate in content delivery networks and proposes several replication methods .most of the works above consider fixed location of caches . to the best of our knowledge, was the first work to study stochastic geometric approaches to caching . however ,this work did nt address the optimization of sbs density as a function of cache size and distance . also considers content caching from a stochastic geometry perspective .however , instead of link reliability , it considers the distance to retrieve all parts of the requested file , without taking interference into account .+ unlike and , this paper investigates the problem of content caching in dense small cell networks ( scns ) , from a stochastic geometry perspective .sbss are randomly deployed in the network according to a poisson point process ( ppp ) , and store a limited number of contents , from a library . 
by assuming a single user located at the origin of the network, we derive a closed form expression of the outage probability , as a function of the sbs density , storage size and threshold distance .the outage probability is defined as the probability of accessing a specific content , cached in a given coverage area , subject to a given signal - to - interference ( sir)-dependent coverage probability . moreover , for a given target cache hit probability , we derive a closed form expression of the outage probability assuming the closest sbs is within the area defined by threshold distance . by assuming a given threshold distance and cache size , we characterize the optimum number of sbss required to achieve a target outage probability , and derive a number of insights .+ the remainder of this paper is organized as follows : section [ sec : sys_mod ] describes the system model where the network and channel models are presented . in section[ sec : out_pro_form ] , the content outage probability is formalized and the optimum sbs density is characterized , for a fixed replication ratio and threshold distance to achieve a target cache hit probability . finally , numerical results are presented in section [ sec : num_res_dis ] along with simulation results .consider the downlink transmission of a wireless small cell network comprising of sbss in a two dimensional euclidean plane , as shown in fig . [ fig : net_diagram1 ] .the distribution of sbss is modeled as a homogeneous ppp , with intensity where each sbs is equipped with a cache of size , to cache contents from a given library with size .all contents are assumed to be of the same size .we assume the content popularity to follow a uniform distribution where each content is equally popular . as a result ,each sbs randomly selects and caches contents from the library . without loss of generality, we consider a single user equipment ( ue ) located at the origin of the network , termed as a _ reference user _( also known as _ tagged user _ ) .such reference user has a threshold distance that specifies the maximum distance over which its content request can be served .the reference user is associated to the nearest sbs with the cached content within . if no sbs caches the requested content , an outage event occurs . to simplify the mathematical analysis, we further consider that the original process contributes only to the interference field since the reference link is not part of the original ppps .for this reason , the reference link emulates the user association process by assuming that the distance between the reference sbs and the reference user follows the distribution of the closest sbs having the cached content , conditioned on its existence .+ the standard power loss propagation channel model is used with path loss exponent . 
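The system model above is easy to probe with a short Monte Carlo sketch. Under uniform random caching each SBS holds a requested content independently with probability equal to the replication ratio, so the SBSs that cache it form a thinned PPP, and the probability of finding the content within the threshold distance is one minus the void probability of that thinned process. The numbers below are hypothetical and only serve to check this reasoning.

```python
import numpy as np

rng = np.random.default_rng(1)

lam   = 1e-3          # SBS density per m^2 (hypothetical)
M, N  = 20, 100       # cache size and library size; replication ratio = M/N
r_th  = 30.0          # threshold distance in m (hypothetical)
R_sim, trials = 200.0, 20000   # radius of the simulation window and number of runs

hits = 0
for _ in range(trials):
    n_sbs = rng.poisson(lam * np.pi * R_sim**2)       # number of SBSs in the window
    r = R_sim * np.sqrt(rng.random(n_sbs))            # distances of uniformly scattered SBSs
    has_content = rng.random(n_sbs) < M / N           # independent uniform caching
    hits += np.any((r <= r_th) & has_content)

print("simulated cache hit probability :", hits / trials)
print("void-probability prediction     :", 1.0 - np.exp(-(M / N) * lam * np.pi * r_th**2))
```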
in order to account for the random channel effects such as fading and shadowing, we consider a rayleigh fading channel in which the channel gain from sbs to a reference user is , which is an exponential random variable with mean 1 .we consider a constant transmit power per sbs .moreover , we assume an interference - limited regime and neglect the effect of additive white gaussian noise ( awgn ) .the sir from sbs to the reference user is given as : where is the random distance between the reference ue and sbs and is the path loss exponent .+ the performance of the system depends on several factors such as , and .a high sbs density increases the cache hit probability at the cost of an increased interference .meanwhile , increasing the threshold distance increases the coverage area of a given user , leading to a high cache hit .however , the sir decreases with the increased distance ( assuming free space path loss ) .additionally , a large cache size increases the cache hit without affecting the interference for the same number of sbss .the goal of the work is to maximize the cache hit probability while achieving a pre - defined sir threshold based on the system parameters such as , and . in the following section ,the analytical results that capture such tradeoffs are presented .in this section , after defining the coverage probability and cache hit probability , we optimize the sbs density to achieve a target cache hit probability based on the threshold distance and replication ratio . in addition, we derive the outage probability of getting a specific content .the replication ratio , denoted by , is the fraction of the library contents stored by the sbs . in this paper, we assume a uniform content replication so that the replication ratio is the same for every content and it is given by : note that , as far as the uniform replication is assumed , the ratio is equal to the probability of having a given content in the sbs cache , with size .the outage probability in an interference - limited system is defined as the probability that a reference ue does not achieve an sir threshold ( ) . mathematically , the outage probability , with respect to an sbs , is given by : for a rayleigh fading channel , the outage probability is given by the following relation : where with being the gamma function , is the sbs density , is the target sir and is the random distance of the reference user .the cache hit probability is defined as the probability of existence of a given content within .the cache hit probability within a threshold distance , is represented by and is given by : this equation ensures the probability of existence of at least one sbs in the coverage area of the user having the requested content . to achieve a given target cache hit probability, the values of , and must satisfy the following inequality , given in lemma [ le_1 ] : [ le_1 ] for a sbs density , replication ratio and target cache hit probability , the following inequality must be satisfied : _ proof ._ let be the target cache hit probability .in order to achieve the target cache hit probability , the right side of eq . should be greater than or equal to , i.e. , by simplifying the above equation , we get . + as there are three variables in lemma [ le_1 ] , in order to find the optimal sbs density , we fix some variables to find the bound in the following cases : + _ * case 1 * _ : for a fixed replication ratio , and satisfying eq . 
are given by the following inequality : _ * case 2 * _ : as the maximum value of is 1 , the lower and upper bound on is given by : by assuming that a given user associates with the closest sbs containing content , the probability density function of the random distance , given that the content exists within is : the content outage probability is defined as the probability of not achieving sir threshold ( ) for a given content distribution . the content outage probability for a content , is denoted by . the content outage probability of content requested by a reference ue assuming the threshold distance , such that lemma [ le_1 ] is satified , is given by : _ proof ._ in order to find the content outage probability of content , requested by a reference ue , we condition the outage probability of sir specified in eq .( [ eq : sir_eq ] ) , over the random distance given by : , \\ & = \int_{0}^{r_{\mathrm{th } } } \bigg(1 - e^{-{\mathrm{s } } \kappa \pi r^2 \gamma^{\frac { 2 } { \alpha}}}\bigg ) f_c(r ) \mathrm{d}r , \end{split}\ ] ] plugging into the above equation and integrating over the threshold distance yields. [ coro_1 ] for a given , and , the optimum number of sbss is given by : _ proof ._ from and .this section discusses the analytical results obtained in the previous section .we take 5000 realization of to validate the analytical results via simulations . inwhat follows , we provide insights on the outage probability in terms of sbs density , replication ratio , sir threshold and threshold distance .in addition , the simulation results shown in the figures suggest that the analytical model is precise and models the network behavior accurately . fig .[ fig : outage_repratio_sbsdensity ] shows the effect of sbs density on the outage probability for various replication ratios .the figure shows that an increased number of sbss , at a constant threshold distance , results in increased outage probability due to increased interference .another implication from the result suggests that the outage probability becomes constant with an increased sbs density . when sbss cache few contents with a small replication ratio , interference is the dominating factor . assuming a fixed replication ratio while increasing the sbs density , the effect of interference is increased. meanwhile , an increased replication ratio , for a fixed sbs density , increases the hit probability because each sbs caches a higher proportion of library contents , which decreases the outage probability .[ fig : outage_repratio_sbsdensity ] shows the variation of outage probability as a function of the replication ratio . with an increased replication ratio , the outage probability decreases as each sbs caches a large proportion of the library contents .it can also be seen that for a small sbs density and a small replication ratio , the outage probability is very low .the low value of outage probability is due to the aggregate effect of interference and cache hit probability . theoretically , a small sbs density with a small replication ration decreases the hit probability .as the interference effect is very small at a small sbs density , the cache hit probability is the dominating factor .[ fig : outage_distance ] shows the effect of coverage radius on the outage probability for various replication ratios .this demonstrates that the outage probability increases with the threshold distance .if the threshold distance increases , the number of sbss increases and the hit probability for the content increases . 
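Returning to lemma 1 and corollary 1 above: the displayed formulas did not survive extraction, but the void-probability argument gives the hit probability 1 - exp(-p*lam*pi*r_th^2), so a target hit probability p* is met whenever p*lam*pi*r_th^2 >= ln(1/(1-p*)). Read as an equality in the density, this yields the minimum (and in that sense optimum) SBS density for a fixed replication ratio and threshold distance. The snippet below evaluates this reconstructed bound for hypothetical numbers; it is a plausible reconstruction of the corollary, not a quotation of it.

```python
import numpy as np

def min_sbs_density(p_hit_target, replication_ratio, r_th):
    # Smallest PPP density for which 1 - exp(-p*lam*pi*r_th^2) reaches the target hit probability.
    return -np.log(1.0 - p_hit_target) / (replication_ratio * np.pi * r_th**2)

lam_min = min_sbs_density(p_hit_target=0.95, replication_ratio=0.1, r_th=50.0)
print("%.2e SBS per m^2  (about %.0f per km^2)" % (lam_min, lam_min * 1e6))
```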
however , with an increased distance , the link reliability decreases due to increased interference .another insight drawn from the figure is that with increased replication ratio , the outage probability decreases as every sbs caches most of the library contents .in addition , the outage probability levels off , as the threshold distance is increased beyond 10 m .this is due to the fact that as the distance is increased , the effect of interference becomes negligible .[ fig : outage_sbsdensity_distance ] shows the variation of outage probability with distance for different sbs density . as the threshold distanceis increased , the outage probability increases due to the higher interference level . further increasing the distanceimproves the hit probability at the cost of more interference , thereby impacting the outage probability . besides , with the increased sbs density , the outage probability increases due to increased interference from the sbss .[ fig : outage_repratio_sirthreshold ] presents the variation of outage probability with respect to sir threshold for different replication ratios .it can be seen that with the increased sir threshold , the outage probability increases .the increased sir threshold relaxes the interference requirement , resulting in an increased outage .meanwhile , the outage converges to 1 as the sir threshold increases .however , the convergence is faster for small replication ratio .the fast convergence for small replication is due to the fact that the effect of interference is dominating than the caches of sbss .[ fig : outage_sirthreshold_sbsdensity ] shows the variation of outage probability with sir threshold for different sbs density .it can be seen that the outage probability increases with sir threshold as the sbs density is increased .the increased sir threshold lessens the interference requirement , resulting in an increased outage .in addition , the outage probability coverges to 1 irrespective of the sbs density .however , the convergence rate for small sbs density is less than the convergence rate for large sbs density . for small sbs density ,the aggregate interference of sbss is much smaller instead of large sbss density .[ fig : opt_sbs_distance ] shows the optimum number of sbss as a function of target hit probability for a given and , given by corollary [ coro_1 ] .the first implication , from the figure , suggests that an increased threshold distance requires a small sbs density for a given replication ratio . with the increased threshold distance ,a large number of sbss becomes available in the coverage area of ue , thus , requiring small sbs density .however , the exponential behavior of the curve for large distance suggests that it is quite challenging to achieve a high target cache hit probability for such large threshold distance .in addition , the figure reveals that with an increased target cache hit probability , the sbs density for different distances varies slowly and the optimum sbs density converges .[ fig : opt_sbs_repratio ] shows the optimum number of sbss required to achieve a target hit probability at a fixed distance for different replication ratios .it can be seen that a high replication ratio decreases the number of sbss as many sbss caches most of the library contents. meanwhile , it is difficult to obtain a highest target cache hit probability for a small replication ratio .therefore , small distances and replication ratios require a high sbs density for a given target cache hit probability . 
moreover , the figure reveals that as the replication ratio is increased , the variation in the sbs density becomes smaller to achieve a target cache hit probability .in this paper , we studied a cache enabled small cell network comprising of sbss that store contents from a library . by considering the distribution of sbss to be a ppp , we derived the outage probability of getting the requested content over a threshold distance .in addition , we characterized the optimum number of sbss to achieve a target hit probability .finally , we performed numerical analysis to show the interplay between outage probability , sbs density , threshold distance and replication ratio .future works involve incorporating the modeling of wired and wireless backhauling , and investigating other network deployments such as clustered ppps .cisco whitepaper , `` cisco visual networking index : global mobile data traffic forecast update , 2013 - 2018 , '' 6 may 2014 .j. andrews , `` interference cancellation for cellular systems : a contemporary overview , '' _ ieee wireless communications magazine .19 29 , apr . 2005 .s. singh , h. s. dhillon and j. g. andrews , `` offloading in heterogeneous networks : modeling , analysis , and design insights , '' _ ieee trans . on wireless commun .5 , pp . 2484 - 2497 , may .2013 .j. dai , f. liu , b. li , b. li and j. liu , `` collaborative caching in wireless video streaming through resource auctions , '' _ ieee journal on sel .areas in commun ._ , vol . 30 , no .458 - 466 , feb . 2012 .k. shanmugam , n. golrezaei , a.g .dimakis , a.f .molisch and g. caire , `` femtocaching : wireless video content delivery through distributed caching helpers , '' _ ieee trans .inf . theory _ ,12 , 8402 - 8413 , dec . 2013 .a. f. molisch , g. caire , d. ott , j. r. foerster , d. bethanabhotla and m. ji , `` caching eliminates the wireless bottleneck in video - aware wireless networks , '' _ pre - print _ , 2014 , http://arxiv.org/abs/1405.5864 .e. bastug , m. bennis and m. debbah , `` cache - enabled small cell networks : modeling and tradeoffs , '' _11th international symposium on wireless communication systems ( iswcs ) _ , barcelona , spain , august 2014 .
|
Network densification with small cell base stations is a promising solution to satisfy future data traffic demands. However, increasing the small cell base station density alone does not ensure a better user quality-of-experience and incurs high operational expenditures. Content caching on different network elements has therefore been proposed as a means of offloading the backhaul by placing strategic contents at the network edge, thereby reducing latency. In this paper, we investigate cache-enabled small cells, for which we model and characterize the outage probability, defined as the probability of not satisfying users' requests over a given coverage area. We analytically derive a closed-form expression for the outage probability as a function of the signal-to-interference ratio, cache size, small cell base station density and threshold distance. Modeling the distribution of base stations as a Poisson point process, we derive the probability of finding a specific content within a threshold distance and the optimal small cell base station density that achieves a given target cache hit probability. Simulation results validate the analytical model. Keywords: caching, small cell networks, stochastic geometry.
|
pioneered by the riga and karlsruhe liquid sodium experiments , the last fifteen years have seen significant progress in the experimental study of the dynamo effect and of related magnetic instabilities , such as the magnetorotational instability ( mri ) and the kink - type tayler instability ( ti ) . a milestone was the observation of magnetic field reversals in the vks experiment which has spurred renewed interest in simple models to explain the corresponding geomagnetic phenomenon .this is but one example for the fact that liquid metal experiments , though never representing perfect models of specific cosmic bodies , can indeed stimulate geophysical research .one of the pressing questions of geo- and astrophysical magnetohydrodynamics concerns the energy source of different cosmic dynamos .while thermal and/or compositional buoyancy is considered the favourite candidate , precession has long been discussed as a complementary energy source of the geodynamo , in particular at an early stage of earth s evolution , prior to the formation of the solid core .some influence of orbital parameter variations can also be guessed from paleomagnetic measurements that show an impact of the 100 kyr milankovic cycle of the earth s orbit eccentricity on the reversal statistics of the geomagnetic field .recently , precessional driving has also been discussed in connection with the generation of the lunar magnetic field , and with dynamos in asteroids .whilst , therefore , an experimental validation of precession driven dynamo action appears very attractive , the constructional effort and safety requirements for its realization are tremendous . in this paper , we outline the present state of the preparations of such an experiment , along with giving an overview of further liquid sodium experiments that are planned within dresden sodium facility for dynamo and thermohydraulic studies ( dresdyn ) at helmholtz - zentrum dresden - rossendorf ( hzdr ) .= angle between rotation and precession axes , in dependence on the precession ratio , when scaled to the dimensions of the large sodium device .( a ) maximum pressure ( numerically determined at ) and maximum pressure difference ( numerical and experimental ) .( b ) pressure distribution at the rim of the cylinder ( numerical ) for 4 specific precession ratios.,title="fig:",scaledwidth=99.0% ]compared to the flow structures underlying the riga , karlsruhe and the cadarache vks experiment , the dynamo action of precession driven flows is not well understood .recent dynamo simulations in spheres , cubes , and cylinders were typically carried out at reynolds numbers of a few thousand , and with magnetic prandtl numbers not far from 1 . under these conditions ,dynamo action in cubes and cylinders was obtained at magnetic reynolds numbers of around 700 ( is the magnetic permeability constant , the conductivity , the radius or half sidelength in case of a cube , is the angular velocity of rotation ) , which is indeed the value our experiment is aiming at .yet , there are uncertainties about this value , mainly because numerical simulations fail at realistic reynolds numbers . 
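Before turning to the water pre-experiments that address these uncertainties, a quick order-of-magnitude check is useful. The magnetic Reynolds number quoted above is built from the permeability constant, the conductivity, the radius and the angular velocity, i.e. Rm = mu0*sigma*Omega*R^2, and can be evaluated for liquid sodium to see why a 2 m diameter cylinder rotating at 10 Hz is expected to reach the Rm ~ 700 level at which cylinder dynamos were found numerically. The conductivity below is a typical value for sodium at operating temperature, assumed here rather than taken from the paper.

```python
import numpy as np

mu0   = 4e-7 * np.pi         # magnetic permeability of vacuum [H/m]
sigma = 9.0e6                # electrical conductivity of liquid sodium, assumed typical value [S/m]
R     = 1.0                  # cylinder radius [m] (2 m diameter vessel)
Omega = 2.0 * np.pi * 10.0   # 10 Hz rotation rate [rad/s]

Rm = mu0 * sigma * Omega * R**2
print("Rm ~ %.0f" % Rm)      # close to the Rm ~ 700 targeted by the experiment
```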
for this purpose , a 1:6 scaled water experiment ( described in ) has been set - up and used for various flow measurements , complementary to those done at the ater experiment in paris - meudon .so far , we have achieved some qualitative , though not quantitative , agreement of the dominant flow structures between experiment and numerics for precessing cylindrical flows , at least for angles between precession and rotation axis not very far from 90 . basically , at low precession ratios ( is the angular velocity of precession ) the flow is dominated by the first ( ) kelvin mode .approximately at , higher azimuthal modes appear which start to draw more and more energy from the forced kelvin mode .still laminar , this regime breaks down suddenly at ( details depend on the aspect ratio of the cylinder , the angle between rotation and precession axis , and the reynolds number ) .we identified two global features by which this laminar - turbulent transition can be easily characterized .the first one is the energy dissipation , measurable by the motor power of the rotating cylinder ( see ) .the second one is the maximum pressure difference between opposite points on the side wall of the cylinder .figure 1a shows the maximum pressure ( numerically determined at ) and the maximum pressure difference ( numerically determined at and experimentally at ) , with all values up - scaled to the dimensions of the large machine . the right end - point , at , of the parabola - like part of the experimental curve , marks the sudden transition from the laminar to the turbulent regime .the corresponding numerical curve is qualitatively similar , but shows significant quantitative deviations , in particular a shift of the transition point towards higher values of .note also that the maximum value of appears approximately at from where on the higher modes are increasingly fed by the forced mode .the effect of the reynolds number on the various transitions found in our experiment is shown in fig .2 . it shows also a first transition ( diamonds ) from a stable , -kelvin - mode dominated flow to a more unstable flow comprising also higher -modes .the two upper lines indicate the laminar - turbulent transition , either coming from the laminar side ( circles ) , or from the turbulent side ( squares ) .the difference between the two lines indicates a hysteresis .the reynolds number only weakly affects the transitions ( an empirical curve fit for laminar - turbulent transition corresponds to ) , raising hopes that the large machine might not behave dramatically different .= . diamonds mark the first bifurcation from the pure forced kelvin mode .circles mark the boundary between the unstable , non - linear ( nl ) regime and the turbulent regime .squares mark the same boundary but when the system returns from the turbulent regime ( hysteresis ) .the curve fits of the upper two data sets correspond to in either case , the curve fit of the lower data set corresponds to .,title="fig:",scaledwidth=80.0% ] interestingly , an intermediate regime characterized by the occurrence of a few medium - sized cyclones has been observed at the ater experiment in paris - meudon .so far , these vortex - like structures could not be identified at our water experiment .a 3-d , volumetric particle image velocimetry system currently being commissioned could ultimately provide a helicity distribution , which in turn could be fed to dynamo simulations . 
in general, we expect more conclusive dynamo predictions , in particular for the cyclonic and the turbulent regime , from a close interplay of water test measurements and advanced numerical simulations .=) in dependence on specific electrical boundary conditions .the values are for and precession ratio .the thickness of either layer type is taken as , with the same conductivity as the fluid.,title="fig:",scaledwidth=80.0% ] up to present , dynamo action for precessing cylindrical flows has been confirmed in nonlinear simulations of the mhd equations for the case ( is the kinematic viscosity ) and .yet , the critical depends on the specific electrical boundary conditions ( see fig .3 ) , with a surprisingly low optimum value of 550 for the case of electrically conducting side layers and insulating lid layers ( actually , this finding has led us to consider an inner copper layer attached to the outer stainless steel shell of the dynamo vessel ) .albeit this low critical is encouraging , the question of self - excitation in a real precession experiment is far from being settled .further simulations at lower have led to an increase of the critical , and for lower values of dynamo action has not been confirmed yet ( see also ) .in contrast to previous dynamo experiments , the precession experiment has a higher degree of homogeneity since it lacks impellers or guiding blades and any magnetic material ( the latter seems to play a key role for the functioning of the vks dynamo ) .the central precession module encases a sodium filled cylindrical volume of 2 m diameter and the same height ( figure 4 ) . for this volume, we aim at reaching a rotation rate of 10 hz ( to obtain ) , and a precession rate of 1 hz ( to cover both the laminar and the turbulent flow regimes ) . with total gyroscopic torques of up to nm , we are in many respects at the edge of technical feasibility , so that much optimization work is needed to enable safe operation of the machine .the complicated simultaneous rotation around two axes poses several challenges : filling and emptying procedures , heating and cooling methods , and handling of thermal expansion .a decision was made in favor of a slightly enlarged vessel , comprising two conical end - pieces that serve , first , for a well - defined filling and emptying procedure at 43 vessel tilting , and , second , for hosting two bellows which will compensate the thermal expansion of the liquid sodium .= having defined this basic structure of the central vessel , much effort was , and still is , devoted to the optimization of the shell. a shell thickness of around 3 cm is needed anyway to withstand the centrifugal pressure of 20 bar in case of pure rotation ( see fig .1 ) . 
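The 20 bar figure for pure rotation can be checked, at order-of-magnitude level, against the rim pressure of a liquid in solid-body rotation, p = rho*Omega^2*R^2/2. This is only a plausibility estimate with an assumed sodium density; it is not a substitute for the stress analysis performed for the actual vessel.

```python
import numpy as np

rho_na = 930.0                # density of liquid sodium, assumed typical value [kg/m^3]
R      = 1.0                  # vessel radius [m]
Omega  = 2.0 * np.pi * 10.0   # 10 Hz rotation rate [rad/s]

p_rim = 0.5 * rho_na * Omega**2 * R**2
print("rim pressure ~ %.0f bar" % (p_rim / 1e5))   # ~18 bar, consistent with the quoted 20 bar
```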
for increasing precession ratio , this total pressure decreases , but is complemented by a pressure vacillation due to the gyroscopic forces .in addition to those mechanical stresses , we also have to consider thermal stresses caused by the temperature difference over the shell when the dynamo is cooled by a strong flow of air .a particular problem is the high mechanical stress in the holes for the measurement flanges .the next step is the design of the bearings and of a frame that allows to choose different angles between rotation and precession axis .finding appropriate roller bearings for the vessel turned out to be extremely challenging , mainly because of the huge gyroscopic torque .it is the same gyroscopic torque that also requires a very stable basement ( fig .5a , b ) , supported by seven pillars , each extending 22 m into the bedrock .the precession experiment itself is embedded in a containment ( fig .4 , fig . 5b , c ) , preventing the rest of the building from the consequences of possible sodium leaks .since the double rotation can not be stopped quickly in case of an accident , this containment is the only chance of preventing jets that would spill out of a potential leak from perfectly covering all surrounding areas with burning sodium . for such accidents, the containment can be flooded with argon , which is stored in liquid form .given the significant investment needed for the very precession experiment and the infrastructure to support it , we have combined this specific installation with creating a general platform for further liquid metal experiments .a second experiment relevant to geo- and astrophysics will investigate different combinations of the mri and the current - driven ti ( see fig .6 ) . basically , the set - up is a taylor - couette experiment with 2 m height , an inner radius cm and an outer radius cm .rotating the inner cylinder at up to 20 hz we plan to reach an of around 40 , while the axial magnetic field will correspond to a lundquist number of 8 .both values are about twice their respective critical values for the standard version of mri ( with only an axial magnetic field applied ) . below those critical values, we plan to investigate how the helical version of mri approaches the limit of standard mri . to this end, we will use a strong central current , as it was already done in the promise experiment .this insulated central current can be supplemented by another axial current , guided through the ( rotating ) liquid sodium , which will then allow to investigate arbitrary combinations of mri and ti .recent theoretical studies have shown that even a slight addition of current through the liquid would extend the range of application of the helical and azimuthal mri to keplerian flow profiles .= the ti will also play a central role in a third experiment in which different flow instabilities in liquid metal batteries ( lmb ) will be studied .lmb s consist of three self - assembling liquid layers , an alkali or earth - alkali metal ( na , mg ) , an electrolyte , and a metal or half - metal ( bi , sb ) . in order to be competitive, lmb s have to be quite large , so that charging and discharging currents in the order of some ka are to be expected . under those conditions , ti and interface instabilitiesmust be carefully avoided .another installation is an in - service - inspection ( isi ) experiment for various studies related to safety aspects of sodium fast reactors ( sfr ) . 
in this contextwe also intend to investigate experimentally the impact of magnetic materials ( e.g. ods steels ) on the mean - field coefficients in spiral flow configurations ( see ) , and its consequences for the possibility of magnetic - field self - excitation in the cores of sfr s . the construction of the dresdyn building is well advanced .figure 5c , d illustrate the status of the construction as of october 2014 .the interior construction is expected to be finalized in 2015 .thereafter , installation of the various experiments can start .it goes without saying that both the precession and the mri / ti experiment will be tested with water first , before we can dare to run them with liquid sodium .we have discussed the motivation behind , and the concrete plans for a number of experiments to be set - up in the framework of dresdyn .the new building and the essential parts of the experiments are expected to be ready in 2015 .apart from hosting the discussed experiments , dresdyn is also meant as a general platform for further large - scale experiments , basically but not exclusively with liquid sodium .proposals for such experiments are , therefore , highly welcome .this work was supported by helmholtz - gemeinschaft deutscher forschungszentren ( hgf ) in frame of the helmholtz alliance liquid metal technologies ( limtech ) , and by deutsche forschungsgemeinschaft ( dfg ) under grant ste 991/1 - 2 .we thank jacques lorat for his proposal and encouragement to set - up a precession driven dynamo .fruitful discussions with andreas tilgner are gratefully acknowledged .
|
The most ambitious project within the Dresden sodium facility for dynamo and thermohydraulic studies (DRESDYN) at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) is the set-up of a precession-driven dynamo experiment. After discussing the scientific background and some results of water pre-experiments and numerical predictions, we focus on the numerous structural and design problems of the machine. We also outline the progress of the building's construction, and the status of some other experiments that are planned in the framework of DRESDYN.
|
globally , influenza is an important cause of morbidity , mortality and hospitalization .two major types circulate among human population , type a and type b , and they shared many common morphological and epidemiological characteristics .they also have similar clinical presentations .however , they differ in the animal reservoirs that they reside , influenza a virus infects mainly the mammalian species and birds , while influenza b virus infects only humans and seals .influenza b virus displayed less ( at slower rate ) antigenic drift and can cause significant epidemics but not pandemic , unlike influenza a .previous studies have investigated into the trends and burden of influenza b .glezen et al . suggested that there was an increasing trend of the influenza b proportion in both the united states ( us ) and europe between 1994 and 2011 .other studies have also explored the potential roles of air travel on the global and regional spread of influenza .these studies showed that air travel volumes and flight distance are important factors driving the spread of influenza .however , to our knowledge , there is a lack of recent literature that examined the global patterns of influenza b proportion after the 2009 influenza pandemic .the aim of our study is to examine the spatio - temporal patterns of influenza b proportion in the post - pandemic era at both global and regional level .we focus on weekly laboratory confirmations from hong kong special administrative region ( sar ) china and other 73 countries that have the most laboratory - confirmations of influenza a and influenza b from 2006 to 2015 .we exclude the year of 2009 to avoid the impact of widely different testing efforts ( i.e. testing policies or testing practices ) among countries during the 2009 a(h1n1 ) influenza pandemic , which was originated from mexico .we define the influenza b proportion as the number of laboratory - confirmations of influenza b out of the total of influenza types a and b confirmations over the whole study period .this measure reflects the relative health care burden of influenza b out of both influenza a and b cases .we focus on two research hypotheses : \(1 ) there is a linear association between influenza b proportion and effective distance from mexico , given that the pandemic h1n1 strain spread from mexico and northern america to the rest of the world .simonsen et al . found `` far greater pandemic severity in the americas than in australia , new zealand , and europe '' in 2009 . found that the h1n1pdm ( pandemic strain ) skipped a large part of europe and east asia , but not northern america .\(2 ) in the us , there is a linear association between the pre - pandemic and post - pandemic era at the regional level , given that influenza b and influenza a ( especially the h3n2 strain ) showed anti - phase patterns .namely , when influenza a / h3n2 are severe , the other strains are mild .influenza data are downloaded from the flunet , a publicly available and real - time global database for influenza virological surveillance compiled by the world health organization .we obtained data from january 2006 to october 2015 , covering 138 countries .we excluded data from the year 2009 , to avoid the impact due to excessive testing in many countries .we then summed up the total confirmations of influenza a and influenza b separately , and computed the proportions of influenza b , i.e. 
, where and are weekly confirmations over the period .we restricted to those countries where and to reduce the impact of statistical noise created by small numbers from small populations on this ratio .we considered and .we also downloaded laboratory confirmations for influenza a and influenza b from the center of health protection in hong kong from january 1998 to october 2015 ( www.chp.gov.hk ) , fluview from the centers for disease control and prevention in the us from january 1997 to october 2015 ( www.cdc.gov/flu/weekly/ ) .again , we computed the influenza b proportion using the same method for the period 2006 - 2015 .we also conduct regional level study on the correlation between influenza b proportion before and after the 2009 influenza pandemic in the us .in addition , we collected information about individual countries including their population size as of 2005 , longitudinal and latitudinal information and flight data from the official airline guide ( oag , http://www.oag.com ) . for hypothesis 1 , we computed the pearson s correlation between influenza b proportion and the effective distance from mexico . as a comparison , we repeat the analysis using china as the reference country , instead of mexico . besides , we also computed the pearson s correlation between influenza b proportion and the longitude and latitude ( absolute value ) .we varied the threshold of influenza specimen at 500 and 2000 .we also considered different time periods , i.e. pre - pandemic period , post - pandemic period or the entire study period covering both . for hypothesis 2 , we computed the pearson s correlation between influenza b proportion in the pre - pandemic and post - pandemic era among the ten census region in the united states .we defined pre - pandemic period with varying start year between 1997 and 2007 , and the end year as 2008 .the post - pandemic period is defined as january 2010 to october 2015 .we computed the pearson s correlation for each of these start years .also , we constructed four different statistical models to determine how the influenza b proportion is associated with the other factors .model 1 is a linear mixed effect model .the response factor is influenza b proportion .the independent factors include population size , longitude , absolute latitude , effective distance as fixed factors , and geographic region as a random factor . 
model 2 is also a linear mixed effect model , the factors are the same as model 1 , except that we use the number of laboratory specimens tested in place of the population size .model 3 is a linear model .the response factor is influenza b proportion .the independent factors include population size , longitude , absolute latitude and effective distance .model 4 is also a linear model .we have the same factors as in model 3 , except that we use the number of laboratory specimens tested in place of the population size .in addition , we create a linear model ( model 5 ) , with influenza b proportion as response factor , and population size , longitude , absolute latitude , effective distance and region as independent factors .we remove one independent factor at a time , and then re - assess the model fit by re - running the linear model and assessing its akaike information criterion ( aic ) .all statistical analyses are conducted using statistical package r version 3.2.2 .statistical significance is assessed at 0.05 level .the global air traffic data from the official airline guide ( oag , http://www.oag.com ) were used to calculate the effective distance .these data provide the number of seats on each flight route between pairs of worldwide airports in 2009 .we built the global air network at the country level by aggregating each group of airports that are located in the same country into a single node .there are in total 209 countries ( nodes ) and more than 4,700 connections ( edges ) among different countries . as in , the effective distance from node to one of its directly connected node measured by , where is the relative mobility rate from to . represents the number of passengers from node to , and represents the total number of passengers from node .the effective distance from seed node to another node in the network is defined by the distance of the shortest path . for any path between the two nodes ,its distance is the sum of effective distance for every edge on that path .the definition of effective distance seems quite arbitrary .some alterative choices can be used for comparison .for example , define , where and is the population size of country .these two effective distances are highly correlated ( ) .we found that the influenza b proportion from 74 localities are evidently correlated with their effective distance of from mexico , using either definition of effective distance .figure [ fig : effd ] shows the influenza b proportion versus the effective distances from 73 localities to mexico .influenza b proportion is significantly and positively correlated with the effective distance , i.e. , the further from mexico the higher the proportion .pearson s correlation is 0.433 ( 95%ci : 0.225 , 0.603 ) and .the correlation is 0.669 , ( 95% ci : 0.422,0.823 , ) among the top 33 countries where both types a and b confirmations are more than 2000 .it is also evident that countries in the same geographic region ( i.e. as represented by the same colour ) tend to have similar influenza b proportions . as a comparison , if we choose china as the reference country , the correlation of influenza b proportion versus the effective distance from 73 localities to china is not statistically significant with = 0.379 ..summary of all pearson s correlation tests . 
stands for correlation coefficient , and stands for the threshold of influenza specimens .[ cols= " > , > , > , > , > , > , > , > , > " , ] [ table2 ]to our knowledge , this is the first study on the spatio - temporal patterns of influenza b proportion after the 2009 influenza pandemic , at both global and regional levels .we found an evidently positive correlation between the influenza b proportion and effective flight distance .we also found a significantly negative correlation in the influenza b proportion between pre- and post - pandemic periods at the regional level in the us .for the global patterns , we found that the influenza b proportion significantly correlated with the effective distance from mexico , with a .this correlation is robust to the threshold of 500 cases . with higher threshold of 2000 cases ,the correlation is still statistically significant and correlation coefficient increased .also , excluding data from the year 2015 does not change the conclusion ( pearson s correlation=0.669 , 95% ci:0.422 , 0.823 , and p - value ) .again , it is hard to believe that these results are a coincidence .previous studies focused on the impacts of latitude on influenza b patterns .baumgartner et al . studied the influenza activity worldwide and found its seasonality , timing of influenza epidemic period displayed unique patterns according to the latitudinal gradient .yu et al . studied the relationship between latitudinal gradient and influenza b proportion among the provinces in china .they have found an increasing prevalence of influenza b towards the south . since influenza virus transmission varies with climatic factors such as temperature and humidity , these influenza b patterns are likely to be explained by climatic factors that vary with the latitudinal gradients .however , at the global level , we did not find any evidence supporting the impact of latitude .but we observed an interesting relationship between influenza b proportion and longitude , which could be associated with the initial patterns of spread of h1n1pdm .we found that the influenza b proportion in the period we considered is the lowest among the countries in the south america , followed by north america , western europe , eastern europe , middle east and are the highest in asia .we can see that european countries had a higher proportion of influenza b than united states .it is worthwhile to note that influenza b viruses are more likely to be reported from children .higher influenza vaccination coverage rate among the general population , and in particular , among healthy school children in the us and canada , is likely to decrease the influenza b proportion in these two countries . it will be very worthwhile to conduct an in - depth study on these relationships in the future .our study found that there is a negative correlation between influenza b proportion and pre- and post - pandemic periods in the us .this could be due to cross - reactive immunity as suggested by ahmed et al . where the a(h1n1 ) pandemic influenza resulted in a large number of persons exposed to this strain in certain countriescould have resulted in anti - ha immunity . 
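The effective distance behind the correlations just summarized is, per the methods, the shortest-path length in the country-level flight network with edge length 1 - log(P), where P is the share of traffic leaving a country that goes to a given neighbour (following Brockmann and Helbing). A minimal sketch of that computation is given below; the passenger numbers and the prop_b values are purely illustrative placeholders, not data from the study.

```python
import math
import networkx as nx
from scipy import stats

# Illustrative country-level passenger flows (origin, destination) -> passengers;
# in the study these are aggregated from OAG airport-level seat data.
flows = {("MEX", "USA"): 2.5e7, ("MEX", "ESP"): 3.0e6,
         ("USA", "CHN"): 8.0e6, ("ESP", "DEU"): 5.0e6}

# Total outflow per origin country.
out_total = {}
for (src, _), n in flows.items():
    out_total[src] = out_total.get(src, 0.0) + n

# Directed graph with edge length 1 - log(P), where P is the share of traffic
# leaving the origin that goes to the destination.
G = nx.DiGraph()
for (src, dst), n in flows.items():
    share = n / out_total[src]
    G.add_edge(src, dst, eff=1.0 - math.log(share))

# Effective distance from the seed country (Mexico) to every reachable country
# is the shortest-path length with respect to these edge weights.
eff_dist = nx.single_source_dijkstra_path_length(G, "MEX", weight="eff")

# Correlate with per-country influenza B proportions (illustrative values).
prop_b = {"USA": 0.18, "ESP": 0.25, "CHN": 0.30, "DEU": 0.27}
common = sorted(set(eff_dist) & set(prop_b))
r, p = stats.pearsonr([eff_dist[c] for c in common], [prop_b[c] for c in common])
print(f"Pearson r = {r:.3f}, p = {p:.3g} over {len(common)} countries")
```

Replacing the flow-based edge length with the population-weighted alternative mentioned in the methods only requires changing the expression assigned to the edge attribute.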
whether this phenomenon is also applicable to influenza b virus is also worth further investigation .the major strength of our study is the use of flunet database , which allowed us to extract influenza data globally over a long period of time for a comprehensive analysis , and to establish the relationship between influenza b proportion and longitude and effective flight distance from mexico respectively .second , our use of longitude and effective flight distance in the study of influenza b proportion are both novel and epidemiologically relevant . third , influenza b proportion is a more robust indicator of the severity of influenza b in a country than the total number of influenza b confirmations or influenza b positive rates ( i.e. cases of influenza b out of all influenza specimens ) .our study had some limitations .the methods of surveillance data collection and their testing policies differ widely across different countries .the intensity of surveillance might change over time .therefore our results should be interpreted with caution .in this study , we have examined the spatio - temporal patterns of influenza b proportion globally and in the us .the impacts of different surveillance policies would have been reduced to some extent due to the use of proportion rather than other absolute numbers .our results showed that there displayed wide variations in the proportions of influenza b over the study period .differences in influenza b proportion between europe and northern america ( i.e. the us and canada ) could be associated with the different influenza vaccination policies and coverage among healthy school children and the general population .the patterns of spread of h1n1pdm could be one of the major reasons .future studies could examine whether other additional factors , e.g. health access rates , population age structure and climatic factors , which could contribute to the patterns of influenza b proportions . to identify the major factors for prioritizing public health control measureswill be both challenging and worthwhile .the study is of both scientific and public health significance .we are grateful for helpful discussions and suggestions from ben cowling , lin wang , joe wu and lewi stone .we are grateful to lin wang and joe wu for providing effective distance data .d.h . was supported by a rgc / ecs grant from hong kong research grant council ( 25100114 ) , and a health and medical research grant from hong kong food and health bureau research council ( 13121382 ) .10 osterhaus , a.d.m.e . ,rimmelzwaan , g.f . ,martina , b.e.e . ,bestebroer , t.m . & fouchier , r.a.m .influenza b virus in seals . _science_. , 1051 - 3 ( 2000 ) .ferguson , n.m ., galvanim , a.p . & bush , r.m .ecological and immunological determinants of influenza evolution ._ nature_. , 428 - 33 ( 2003 ) .heikkinen , t. , ikonen , n. & ziegler , t. impact of influenza b lineage - level mismatch between trivalent seasonal influenza vaccines and circulating viruses , 1999 - 2012 ._ , 1519 - 24 ( 2014 ) .thompson , w.w . ,shay , d.k . ,weintraub , e. , brammer , l. , cox , n.j . & fukuda , k. influenza vaccination among the elderly in the united states . _ arch . intern .med._. , 2038 - 9 ( 2005 ) .harvala , h. _ et al . _ burden of influenza b virus infections in scotland in 2012/13 and epidemiological investigations between 2000 and 2012 .. _ ( 2014 ) .glezen , p.w . ,schmier , j.k . ,kuehn , c.m ., ryan , k.j . & oxford , j. the burden of influenza b : a structured literature review . _j. 
public health _ ; , e43 - 51 ( 2013 ) .li , x. , tian , h. , lai , d. & zhang , z. validation of the gravity model in predicting the global spread of influenza .j. environ .public health ._ ,3134 - 43 ( 2011 ) .grais , r.f . ,ellis , j.h . & glass , g.e . assessing the impact of airline travel on the geographic spread of pandemic influenza .j. epidemiol ._ , 1065 - 72 ( 2003 ) .grais , r.f . ,ellis , j.h . , kress , a. & glass , g.e .modeling the spread of annual influenza epidemics in the u.s .: the potential role of air travel ._ health care manag ._ , 127 - 34 ( 2004 ) .ruan , z. , wang , c. , hui , p.m. & liu , z. integrated travel network model for studying epidemics : interplay between journeys and epidemic . _ sci ._ ,11401 ( 2015 ) .chan , m. _ world now at the start of 2009 influenza pandemic 2009 ._ available at : http://www.who.int/mediacentre/news/statements/2009/h1n1_pandemic_phase6_20090611/en/ ( accesssed : 16th october 2015 ) chan , p.k ._ influenza b lineage circulation and hospitalization rates in a subtropical city , hong kong , 2000 - 2010 ._ , 677 - 84 ( 2013 ) .wong , j.y ._ et al . _analysis of potential changes in seriousness of influenza a and b viruses in hong kong from 2001 to 2011 ._ epidemiol ._ ,766 - 71 ( 2015 ) .he , d. , lui , r. , wang , l. , tse , c.k ., yang , l. & stone , l. global spatio - temporal patterns of influenza in the post - pandemic era . __ , 11013 ( 2015 ) .united nations .world population prospects : the 2006 revision . new york : united nations ; c2006 . thematic mapping . _world borders dataset ._ available at : http://thematicmapping.org/downloads/world_borders.php.(accessed : 16 october 2015 ) hinds , a.m. , bozat - emre , s. , van , c.p .& mahmud , s.m .comparison of the epidemiology of laboratory - confirmed influenza a and influenza b cases in manitoba , canada .public health ._ , ( 2015 ) .european centre for disease prevention and control , world health organization ._ flu news europe .joint ecdc - who / europe weekly influenza update . _ available at : http://www.flunewseurope.org/. ( accessed : 16th october 2015 ) azziz , b.e . _ et al . _ seasonality , timing , and climate drivers of influenza activity worldwide . _ j. infect ._ ,838 - 46 ( 2012 ) .ahmed , m.s._et al . _ cross - reactive immunity against influenza viruses in children and adults following 2009 pandemic h1n1 infection . _ antiviral res ._ ,106 - 12 ( 2015 ) .centers for disease control and prevention ._ _ weekly u.s .influenza surveillance report.__available at : http://www.cdc.gov/flu/weekly/.(accessed : 16th october 2015 ) simonsen , l. _ et al_. global mortality estimates for the 2009 influenza pandemic from the glamor project : a modeling study._plos med ._ e1001558 ( 2013 ) .yu , h. _ et al ._ characterization of regional influenza seasonality patterns in china and implications for vaccination strategies : spatio - temporal modeling of surveillance data ._ plos med ._ , e1001552(2013 ) .lowen , a.c . ,mubareka , s. , steel , j. & palese , p. influenza virus transmission is dependent on relative humidity and temperature ._ plos pathog ._ , 1470 - 6 ( 2007 ) .brockmann , d. & helbing , d. the hidden geometry of complex , network - driven contagion phenomena ._ science _ , 1337 - 42 ( 2013 ) .he , d. , dushoff , j. , eftimie , r. & earn , d.j .patterns of spread of influenza a in canada ._ , 20131174 ( 2013 ) .a.c . , q.l . , d.y . 
and d.h. conceived the work, analysed the data and wrote the manuscript. all authors reviewed the manuscript. the authors declare no competing financial interests.
|
We study the spatio-temporal patterns of the proportion of influenza B out of laboratory confirmations of both influenza A and B, using data from 139 countries and regions downloaded from FluNet, compiled by the World Health Organization, from January 2006 to October 2015 and excluding 2009. We restricted the analysis to 34 countries that reported more than 2000 confirmations for each of types A and B over the study period. We find a Pearson correlation of 0.669 between the effective distance from Mexico and the influenza B proportion among these countries from January 2006 to October 2015. In the United States, the influenza B proportion in the pre-pandemic period (2003-2008) is negatively correlated with that in the post-pandemic era (2010-2015) at the regional level. The main limitations of our study are the country-level variations in both surveillance methods and testing policies. The influenza B proportion displayed wide variations over the study period. Our findings suggest that, even after excluding the 2009 data, the influenza pandemic still has an evident impact on the relative burden of the two influenza types. Future studies could examine whether additional factors contribute to these patterns. This study has potential implications for prioritizing public health control measures.
|
the drift chamber which is described in this report was built for the cosy-11 experimental facility operating at the cooler synchrotron cosy - jlich .the cosy-11 , described in details in ref . and shown schematically in fig . [ cosy-11 ] , is a magnetic spectrometer for measurements at small angles and is dedicated to studies of near - threshold meson production in proton proton and proton deuteron collisions ( see e.g. refs .it uses one of the regular cosy dipole magnets for momentum analysis of charged reaction products originating from interaction of the internal cosy beam particles with the nuclei of a cluster beam target .trajectories of positively - charged particles which are deflected in the dipole magnet towards the center of the cosy - ring are registered with a set of two planar drift chambers indicated as d1 and d2 in fig .[ cosy-11 ] .these chambers cover only the upper range of momenta of the outgoing particles what suffices to measure e.g. two outgoing protons in the process or tracks in the reaction close to threshold . for tracking positively charged pions appearing in near - threshold reactions such as with momenta by a factor of smaller then the proton momenta it was necessary to extend the cosy-11 momentum acceptance towards smaller values .another important purpose was the detection of positively charged kaons prior to their decay what is especially important for the measurement of the close to threshold due to its small cross section on the level of a few nanobarns .for this , an additional drift chamber was built and installed in the free space along the cosy-11 dipole magnet ( see chamber d3 in fig . [ cosy-11 ] ) .the main requirement for the chamber was a shape of the supporting frame which would not interfere with high momentum particles that are registered in the detectors d1 and d2 .this demand was fulfilled by choosing the frame of a rectangular form but with one vertical side missing , called c - shaped frame since it resembles the character c. the main design characteristics of the chamber are given in section 2 .it was also essential , that the chamber allows for a three - dimensional track reconstruction in order to determine the momentum vectors of registered particles at the target by tracing back their trajectories in the magnetic field of the cosy-11 dipole magnet to the nominal target position .therefore three different wire orientations were chosen and a track reconstruction program was developed .this program is described in section 3 .the chamber calibration and results of the track reconstruction are presented in section 4 .the sensitive chamber volume is built up by hexagonal drift cells ( see fig . [ cell ] ) identical with the structure used in the central drift chamber of the saphir detector . in this type of cellsthe drift field has approximately cylindrical symmetry , and thus the distance - drift time relation depends only weekly on the particles angle of incidence . 
in order to minimize the multiple scattering on wires , gold - plated aluminium is used for the 110 - thick field wires , whereas the sense wires are made of 20 - thick gold - plated tungsten .= 5.5 cm the cells are arranged in seven detection planes as indicated in fig .[ plate ] showing one of two parallel aluminium endplates between which the wires are stretched .three detection planes ( 1 , 4 and 7 ) contain vertical wires , two planes ( 2 , 3 ) have wires inclined at and the remaining two planes ( 5 , 6 ) contain wires inclined at .this arrangement makes it possible to reconstruct particle trajectories in three dimensions , also in cases of multi - track events .the relatively small inclination of the wires was chosen since the measurement of particle trajectories in the horizontal plane needs to be much more accurate compared to the vertical plane .this is due to the fact that the particle momenta are determined on the basis of the deflection of their trajectories in an almost uniform vertical magnetic field in the 6 cm gap of the cosy-11 dipole magnet .each endplate contains bore holes of 4 mm with inserted feedthroughs for mounting the wires .the drilling was done with a cnc jig boring machine assuring a positioning accuracy better than .01 mm .the holes for the field wires form equilateral hexagons with a width of 18 mm . in the middle of these hexagonsthere are the openings for the sense wires .the resulting spacing of the sense wires in the planes with vertical wires is again equal to 18 mm and in the detection planes with wires inclined by is equal to 18 mm = 17.73 mm .the chamber contains cells with vertical wires , cells with wires inclined by and cells with wires inclined by .= 7.5 cm the disturbance of the electric field in the drift cells of a given plane by the cells of the neighboring planes was investigated using the garfield code . with the chosen spacing of the detection planes equal to 28.8 mm( see fig .[ plate ] ) the resulting corrections to the distance - drift time relation determined as a function of position along the sense wires are smaller than 0.05 mm for distances from the sense wire smaller than half of the cell width ( 9 mm ) .these corrections were neglected in view of the expected precision of track reconstruction in the order of 0.1 mm .thus one can assume that the distance - drift time relation is the same for all the cells in a given detection plane , what simplifies the chamber calibration .the two aluminium endplates for mounting the wires are 15 mm thick and are supported by two c - shaped frames made out of 20 mm thick aluminium ( see a three - dimensional view in fig . [ view ] ) .the frames hold the total load of about 2.4 kn originating from the mechanical tension of .2 kn ( n ) of the sense wires and .2 kn ( n ) of the field wires .prior to mounting the wires , the frames were pre - stressed with a force corresponding to the tension of wires using led bricks uniformly distributed on the upper endplate .this caused a deflection of the endplates at the free ends of 0.7 mm each .the ends of the pre - stressed endplates were fixed together using a steel plate and the bricks were taken away .after mounting the wires the steel plate was removed and we checked that the distance between the ends of the endplates did not change . for mounting the field wires brass feedthroughs with a 150 m inner opening are used . 
since all field wires should have the same ground potential electrical contact of the feedthroughs with the aluminium endplatesis assured by usage of a conductive glue . for the sense wires we used isolating feedthroughs made of delrin , a polyacetal with a high dielectric strength of about 40 kv / mm , with inserted brass feedthroughs containing 50 m openings for the positioning of the wires . a schematic drawing of the feedthroughs is shown in fig . [ feed ] .= 7.5 cm the wires were strung between pairs of feedthroughs and were fixed by means of copper crimp pins . for sealing the feedthroughs and the crimp pins epoxy resin was used .the high voltage was connected to the sense wires through 1 m resistors soldered directly to the crimp pins on the lower endplate .the signals were read out through 1 nf coupling capacitors connected to the sense wires on the upper endplate ( see fig . [ cross ] ) .the capacitors and resistors were enclosed in hermetical volumes which were dried out with silica aerogel allowing to reduce leakage currents . for the window for particles penetrating the chamber 20 kapton foil is used glued to an aluminium frame which is screwed on the support frame and sealed with 3 mm o - ring gasket .= 7.5 cm each of the detection planes is equipped with five 16-channel preamplifier - discriminator cards based on the fujitsu mb13468 amplifier chip and the lecroy mvl407s comparator chip .the cards are mounted directly on the chamber .the output pulses in the ecl standard are led by means of 30 m long twisted - pair cables to fastbus - tdcs of lecroy 1879 type working in the common stop mode with the pulses from the discriminators as start signals and the trigger pulse as the common stop . as chamber gas ,p10 mixture ( 90% argon and 10% methane ) at atmospheric pressure is used .the gas flow through the chamber is about 12 l / h .the sense wire potential is + 1800 v , whereas the field wires are all grounded .the gas amplification in the chambers is about and the discrimination threshold set in the preamplifier - discriminator cards corresponds to electrons .for the reconstruction of particle tracks a simple algorithm was developed and implemented as a computer code written in the c - language .the reconstruction proceeds in three stages : * finding track candidates in two dimensions , independent for each orientation of the wires , * matching the two - dimensional solutions in three dimensions , * three - dimensional fitting in order to obtain optimal track parameters . in the first stage , track candidates are found in two - dimensions using hits in the detection planes with the same inclination of the wires .for this , all possible combinations of pairs of hits from two different detection planes are taken into account . due to the left - right ambiguity of the track position with respect to the sense wire , for each pair of hits there are four straight - line solutions possible .these solutions are determined using an iterative procedure .first , the track distance to the sense wire is calculated from the drift time to drift distance relation assuming the track entrance angle as equal to .note that the time - to - distance function may depend on the angle ( see fig .[ pair ] ) and this is taken into account when making a calibration .thus , the track is defined by a pair of points lying in the corresponding sense wire planes and indicated as p1 and p2 in fig .[ pair ] . 
in the next step ,the inclination of this track is taken for determining new values of the distances from the sense wires and constructing a new pair of points defining the track .these points are indicated as t1 and t2 in fig .this procedure can be repeated , however , we terminate it already after the second step since further steps give negligible corrections . for each track determined by this method , one subsequently finds hits which are consistent with it within a certain corridor along the track . in this way , hits from neighboring cells with respect to the selected two cells and from other detection planes with the same orientation of wires are taken into account in the reconstruction .for the width of the corridor we take mm which is a few times larger than the position resolution of a single detection plane . with this choicemore than 90% of all hits are included in the reconstruction as it is discussed in the next chapter .= 7.5 cm in three dimensions a two - dimensional solution can be represented by a plane containing the track and parallel to the corresponding sense wires .planes representing the two - dimensional solutions for three different directions of sense wires used in the chamber should intersect along a common line corresponding to the particle track . however , due to the limited spatial resolution of the chamber , the planes intersect along three different lines ( see fig .[ match ] ) .two lines and correspond to the intersections of the inclined planes with the vertical plane and the third line is an intersection of the inclined planes . in order to quantify the consistency between the two - dimensional solutions , we determine the distances and between the crossing points of the lines , with the first and the seventh detection plane , respectively ( see fig . [ match ] ) .for the three - dimensional reconstruction , we accept only the combinations of two - dimensional solutions for which the distances and are both smaller then a certain limit which is adjusted as a reconstruction parameter .too large values of lead to an unnecessary increase of cpu time since too many non - matching combinations are taken into account .on the contrary , too small values of cause losses of correct combinations and consequently decrease the reconstruction efficiency . in our case ,a reasonable value for is 2 cm .the determination of track parameters in three dimensions proceeds via minimization of the function calculated on the basis of the differences between the measured distances of tracks from sense wires and the distances of the fitted tracks from the sense wires : where the summation proceeds over all hits assigned to one track and is an uncertainty of the measured distance . 
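Stage 1 of the reconstruction described above, building two-dimensional candidates from pairs of hits and then attaching further hits inside a corridor, can be sketched as follows. This is a simplified illustration rather than the chamber code: each hit is reduced to a wire coordinate and a drift distance in the projection perpendicular to the wires, the entrance-angle iteration is omitted, and the corridor width is a placeholder value.

```python
import math

def candidate_lines(hit1, hit2):
    """Four straight-line candidates u = a + b*z for a pair of hits, one per
    choice of side in the left/right ambiguity. Each hit is (z, u_wire, d)."""
    (z1, u1, d1), (z2, u2, d2) = hit1, hit2
    lines = []
    for s1 in (+1, -1):
        for s2 in (+1, -1):
            p1, p2 = u1 + s1 * d1, u2 + s2 * d2   # points in the sense-wire planes
            b = (p2 - p1) / (z2 - z1)             # slope du/dz
            a = p1 - b * z1                       # intercept at z = 0
            lines.append((a, b))
    return lines

def corridor_hits(a, b, hits, corridor=0.5):
    """Hits consistent with the candidate u = a + b*z: the perpendicular
    distance of the line from the wire must match the measured drift distance
    within +/- corridor (same units as u and d, e.g. mm; placeholder width)."""
    kept = []
    for z, u_wire, d in hits:
        line_to_wire = abs(a + b * z - u_wire) / math.sqrt(1.0 + b * b)
        if abs(line_to_wire - d) < corridor:
            kept.append((z, u_wire, d))
    return kept
```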
in the case of the two - dimensional track reconstruction for wires oriented in one direction ,the numerator in the above expression can be easily expressed in cartesian coordinates using the formula : ^ 2}{(1+a^2)},\ ] ] where and are parameters in the straight line equation : representing the track , and are coordinates perpendicular to the sense wires and is a point lying in the distance from the sense wire in the location corresponding to the closest approach to the track ( see fig .[ fit ] ) .for the three - dimensional reconstruction , we use a coordinate system with the -axis parallel to the vertical wires and the z - axis perpendicular to the detection planes .the origin of the coordinate system is located in the first plane .particle tracks are parametrized as straight lines in a conventional way : where are the searched track parameters .they can be linked to the parameters corresponding to the two - dimensional solutions using the linear transformation : where is the angle between the sense wires and the vertical direction .inserting eqs .( 5 ) and ( 6 ) into ( 2 ) and the result into ( 1 ) , the expression for takes the form : ^ 2 } { [ 1+(a ' \cos\alpha_i + c ' \sin\alpha_i)^2](\delta d_i)^2}.\ ] ] the derivative of the dimensionless term in the square bracket in the denominator is of the order of unity and is negligible compared with corresponding derivative of the numerator divided by which is of the order of .therefore , during the fitting procedure we regard the numerator just as a constant given by the initial values of the parameters and thus the minimization of is reduced to the linear least squares problem .for the fitting we use the _ svdfit _ function from `` numerical recipes in c '' .the three - dimensional track fitting is performed for all combinations of two - dimensional track candidates and the solution with the minimal value of is selected .the hits corresponding to this solution are then removed from the set of hits recorded in a given event and the reconstruction is repeated from the beginning until there are no further candidates of tracks . in this way multi - track events can be reconstructed .the chamber is calibrated using the experimental data . in a first step an approximate drift time to drift distance relationis determined by integration of the drift time spectra as provided by the uniform irradiation method . a typical spectrum obtained by summing up drift time spectra from all sense wires in one detection planeis shown in fig .[ dtime]a .the width of the drift time distribution is about 300 ns which with the corresponding half of the cell width equal to 9 mm results in a mean drift velocity of 0.03 mm / ns .however , the space - time relationship obtained by integration of the drift time spectrum is not strictly linear ( see fig . [dtime]b ) ; its slope increases slightly in the neighborhood of the sense wire . in the next step ,corrections to this calibration are determined using an iterative procedure . 
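The reduction of the three-dimensional fit to a linear least-squares problem can be made concrete with a simplified sketch. It is not the svdfit-based routine of the paper: here each hit, with its left/right ambiguity already resolved, is treated as a coordinate measurement taken along the in-plane normal to its wire (a convention chosen only for this sketch), and the weighted system is solved with numpy.

```python
import numpy as np

def fit_track_3d(hits):
    """Weighted linear least-squares fit of x(z) = a + b*z, y(z) = c + d*z.

    hits: iterable of (z, alpha, u, sigma) with
      z     : position of the detection plane along the chamber axis,
      alpha : wire inclination with respect to the vertical,
      u     : measured track coordinate along the in-plane normal to the wire
              (wire position +/- drift distance, left/right already resolved),
      sigma : assumed measurement uncertainty.
    Returns the parameters (a, b, c, d) and their covariance matrix.
    """
    rows, vals, weights = [], [], []
    for z, alpha, u, sigma in hits:
        ca, sa = np.cos(alpha), np.sin(alpha)
        # sketch convention: u = (a + b*z)*cos(alpha) - (c + d*z)*sin(alpha)
        rows.append([ca, z * ca, -sa, -z * sa])
        vals.append(u)
        weights.append(1.0 / sigma)
    A = np.asarray(rows) * np.asarray(weights)[:, None]   # weighted design matrix
    y = np.asarray(vals) * np.asarray(weights)
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    cov = np.linalg.inv(A.T @ A)   # needs hits from at least three wire orientations
    return params, cov
```

The diagonal elements of the returned covariance matrix correspond to the parameter uncertainties quoted later, and hits from at least three wire orientations are needed for the four-parameter system to be well conditioned.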
for this , the average deviations between the measured and fitted distances of the tracks from the sense wires are calculated .these deviations are determined as a function of the drift time corresponding to the tdc channel and the track entrance angle from the range ( , ) divided into 9 intervals numerated by the index : where the average is taken over all hits which were recorded in the tdc channel and correspond to the track entrance angle interval .the space - time relation used in the first iteration is then corrected by the above deviation : after performing the reconstruction of tracks with the new space - time relation new corrections are calculated and so on .this procedure is repeated until the corrections become smaller than the position resolution of the chamber .this occurs in our case after 2 - 3 iterations .[ correct ] shows differences of the measured and fitted distances calculated as a function of the drift time for three subsequent iterations .the mean value of deviates from zero only after the first iteration ( upper panel in fig .[ correct ] ) and the corresponding correction to the space - time relation is of the order of 0.3 mm . for higher iterations the corrections are negligible .the standard deviation of is about 0.2 mm and is a measure of the single wire resolution . for testing the track reconstruction we used single tracks of protons scattered elastically on the cosy-11 hydrogen target of the cluster - jet type at a beam momentum of 3.3 gev / c .events of the elastic proton proton scattering were registered as coincidences between the scintillation hodoscope s1 detecting the forward scattered protons and the monitor scintillator placed in the target region on the opposite side of the beam , measuring the recoil protons . additionally , we required firing of the seventh detection plane in order to assure that the proton tracks pass through the chamber .the detection efficiency per detection plane was estimated on the basis of multiplicities of coincidences between the detection planes . in 97% of cases all seven planes fire ( see fig .[ plmulti ] ) which means that the detection efficiency in a single detection plane is close to 100% . the multiplicity of hits shown in fig .[ hitmulti]a is peaked around the value of 14 which is due to the fact that most of tracks are inclined with respect to the chamber and mostly two neighbouring cells from one detection plane fire for each track . for the reconstruction we used an `` effective '' position resolution of 220 mfor which the mean value of per degree of freedom is in the vicinity of 1 .the reconstruction was treated as successful when the per degree of freedom was smaller than 5 .this was the case for 95% of events triggered as elastic scattering .for selected events in which all seven detection planes fired , the reconstruction was successful in 97% . in the reconstruction most of the registered hits were used for the fitting as one can see by comparison of fig .[ hitmulti]a and fig .[ hitmulti]b .the precision of the track parameters was extracted from the diagonal elements of the corresponding covariance matrix . 
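The iterative correction of the space-time relation described above can be sketched as follows. The sign of the correction and the entrance-angle interval are assumptions (the text divides the angle range into 9 bins, but its endpoints are not reproduced here), and r is taken to be a floating-point array indexed by TDC channel and angle bin.

```python
import numpy as np

def update_space_time_relation(r, hits, n_angle_bins=9, angle_range=(-0.5, 0.5)):
    """One calibration iteration: subtract, per TDC channel and entrance-angle
    bin, the average deviation between measured and fitted drift distances.

    r    : float array r[t, k], drift distance for TDC channel t and angle bin k
    hits : iterable of (tdc_channel, entrance_angle, d_measured, d_fitted)
    """
    lo, hi = angle_range
    dev_sum = np.zeros_like(r)
    dev_cnt = np.zeros_like(r)
    for t, phi, d_meas, d_fit in hits:
        k = int((phi - lo) / (hi - lo) * n_angle_bins)
        k = min(max(k, 0), n_angle_bins - 1)
        dev_sum[t, k] += d_meas - d_fit
        dev_cnt[t, k] += 1
    mean_dev = np.divide(dev_sum, dev_cnt,
                         out=np.zeros_like(dev_sum), where=dev_cnt > 0)
    return r - mean_dev
```

Repeating the update until the corrections drop below the single-wire resolution mirrors the stopping rule used for the chamber, where two to three iterations sufficed.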
for typical track entrance angles of about the resulting uncertainty of track position along the - and-axis is equal to 0.3 mm and 3 mm , respectively .the uncertainty of the track inclination in the horizontal and vertical plane is about 1 mrad and 10 mrad , respectively .a drift chamber with a c - shaped frame was constructed for the cosy-11 experimental facility .the special shape of the frame allows for the detection of low momentum particles without disturbing the high momentum ejectiles which are registered in other detector components .this kind of chamber can be applied if no frame elements are allowed in the sensitive area of the detection system .an example of a possible further application besides the one discussed here could be the detection of particles scattered at small angles with respect to the beam. in this case , two such chambers placed symmetrically with respect to the beamline could be used for covering the forward angles .the chambers can be installed and removed without dismounting the beam - pipe contrary to an alternative solution of one chamber with a central hole for the beam - pipe .the track reconstruction algorithm which was developed for the chamber can also be used for other planar drift chambers containing cells with an electric field which has approximately cylindrical symmetry .it is required , however , that the chamber contains wires oriented at least in three different directions and for each direction there are at least two detection planes necessary .the chamber allows to determine the track position and inclination in the horizontal plane with an accuracy of about 0.3 mm and 1 mrad , respectively . in the vertical directionthese accuracies are worse by about an order of magnitude , which is in accordance with the design values . for the tracking in the vertical magnetic field of the cosy-11 dipole magnetno higher precision was envisaged .for other applications the precision can be improved by choosing a larger inclination of the wires .this work has been supported by the european community - access to research infrastructure action of the improving human potential programme , by the daad exchange programme ( ppp - polen ) , by the polish state committee for scientific research ( grants no .2p03b07123 and pb1060/p03/2004/26 ) and by the research centre jlich .r. maier et al .instr . and meth . * a 390 * ( 1997 ) 1 .s. brauksiepe et al . , nucl .instr . and meth . * a 376 * ( 1996 ) 397 .p. moskal , m. wolke , a. khoukaz , w. oelert , prog .* 49 * ( 2002 ) 1 . c. quentmeier et al ., phys . lett .* b 515 * ( 2001 ) 276 .p. moskal et al . , phys . lett . * b 474 * ( 2000 ) 416 .h. dombrowski et al .instr . and meth . * a 386 * ( 1997 ) 228 .w. j. schwille et al . , nucl .instr . and meth .* a 344 * ( 1994 ) 470 .california fine wire company , grover beach , california , usa .r. veenhof , garfield , simulation of gaseous detectors , version 7.04 , 2001 , cern program library writeup w5050 , cern , geneva , switzerland .kolf , diploma thesis , university of bonn , germany , january 2002 .l. jarczyk et al . , annual report 1990 ikp - kfa ( 1991 ) p.219 .a. breskin et al .instr . and meth .* 119 * ( 1974 ) 9 .w. h. press et al . , numerical recipes in c , cambridge university press , cambridge , 1992
|
We present the construction of a planar drift chamber with wires stretched between the two arms of a C-shaped aluminium frame. The special shape of the frame allows the momentum acceptance of the COSY-11 detection system to be extended towards lower momenta without suppressing the high-momentum particles. The proposed design allows for the construction of tracking detectors covering small angles with respect to the beam, which can be installed and removed without dismounting the beam-pipe. For a three-dimensional track reconstruction a computer code was developed using a simple algorithm of hit preselection.

Keywords: drift chamber, hexagonal cell, track reconstruction
PACS: 29.40.Cs, 29.40.Gx
|
supercooled liquids of glass forming systems are typical examples of high - dimensional systems with highly disordered energy landscapes and our main concern behind this work .simulations have shown that many important characteristics of such a system are better described by a process on the set of so - called _ metabasins _ ( mb ) than by the more common process on the set of visited minima of the energy landscape ( see ) .those mb are formed in the following way by aggregation of suitable states of the describing process along a simulated trajectory : fixing a reasonable observation time , define and then recursively for then the mb up to are chosen as simulation studies have shown that local sampling within a mb does not affect typical parameters of the process like the diffusion coefficient or the time to reach equilibrium .dynamical aspects are therefore fully characterized by the mb - valued process .furthermore , this model reduction by aggregation , as proposed in and , offers several advantages ( referred to as properties 15 hereafter ) : 1 . the probability of a transition from one mb to any other one does not depend on the state at which this mb is entered .2 . there are basically no reciprocating jumps between two mb .this is in strong contrast to the unaggregated process where such jumps occur very often : the system falls back to a minimum many times before eventually cresting a high energy barrier and then falling into a new valley , where it will again take many unsuccessful trials to escape .these reciprocating jumps are not only irrelevant for the actual motion on the state space but also complicating the estimation of parameters like the diffusion coefficient or the relaxation time . 3 .the expected time spent in a mb is proportional to its depth .thus there is a strong and explicit relation between dynamics and thermodynamics , not in terms of the absolute but the relative energy .the energy barriers between any two mb are approximately of the same height , that is , there is an energy threshold such that , for a small , it requires a crossing of at least and at most to make a transition from one mb to another . such systems with are called trap models ( see ) .5 . the sojourn times and the jump distances between successively visited mb ( measured in euclidean distance ) form sequences of weakly or even uncorrelated random variables , and are also mutually independent , at least approximately . therefore, the aggregated process can be well approximated by a continuous time random walk , which in turn simplifies its analysis and thus the analysis of the whole process . despite these advantages ,the suggested definition of mb has the obvious blemish that it depends on the realization of the considered process and may thus vary from simulation to simulation . to provide a mathematically stringent definition of a _ path - independent aggregation _ of the state space , which maintains the above properties andis based on the well - established notion of metastable states , is therefore our principal concern here with the main results being theorem [ thm : mb ] and theorem [ thm : pd versus pid mb ] . 
in this endeavor, we will draw on some of the ideas developed by bovier in and by scoppola in , most notably her definition of metastable states .metastability , a phenomenon of ongoing interest for complex physical systems described by finite markov processes on very large state spaces , can be defined and dealt with in several ways .it has been derived from a renormalization procedure in , by a pathwise approach in , and via energy landscapes in , the latter being also our approach hereafter . to characterize a supercooled liquid , i.e. a glass forming system at low temperature , via its energy landscapewas first done by goldstein in 1969 and has by now become a common method .the general task when studying metastability , as well originally raised in physics ( , or ) , is to provide mathematical tools for an analysis of the property of thermodynamical systems to evolve in state space along a trajectory of unstable or metastable states with very long sojourn times .inspired by simulations of glass forming systems at very low temperatures with the metropolis algorithm , we will study ( as in ) finite markov chains with exponentially small transition probabilities which are determined by an energy function and a parameter .this parameter can be understood as the inverse temperature and we are thus interested in the behavior of the process as .we envisage an energy function of highly complex order and without the hierarchical ordering that is typical in spin glass models .a good picture is provided by randomly chosen energies with correlations between neighbors or by an energy landscape that looks like a real mountain landscape .we will show that , towards an aggregation outlined above , the metastable states as defined in are quite appropriate because they have an ordering from a kind of `` weak '' to a kind of `` strong '' metastability . around those stateswe will define and then study connected valleys [ definition [ def : tal ] ] characterized by minimal energy barriers . in the limit of low temperatures , any such barrier will determine the speed , respectively probability of a transition between the two valleys it separates .more precisely , in the limit , the process , when starting in a state , will almost surely reach a state with lower barrier earlier than a state with higher barrier [ theorem [ thm : driftzumminimum ] ] . in the limit of low temperatures , the bottom ( minimum ) of an entered valleywill therefore almost surely be reached before that valley is left again [ proposition [ prop : bergnge ] ] . as a consequence ,the probability for a transition from one valley to another is asymptotically independent of the state where the valley is entered .this is property 1 above .furthermore , since valleys as well as metastable states have a hierarchical ordering , we can build valleys of higher order by a successive merger of valleys of lower order [ proposition [ prop : taelerrekursion ] ] .given an appropriate energy landscape , this procedure can annihilate ( on the macroscopic scale ) the accumulation of reciprocating jumps by merging valleys exhibiting such jumps into a single valley [ subsection [ subsec : wkeiten ] ] .hence , valleys of sufficiently high order will have property 2 . beside the macroscopic process [ section [ sec : makroskopischerprozess ] ] , which describes the transitions between valleys, one can also analyze the microscopic process [ section [ sec : mikroskopischerprozess ] ] , that is , the system behavior when moving within a fixed valley . 
herewe will give a formula for the exit time and connect it with its parameters [ theorem [ thm : verlassenszeit ] ] .this will confirm property 3 .having thus established properties 14 [ theorem [ thm : mb ] ] , we will finally proceed to a comparison of our path - independent definition of mb with the path - dependent one given above .it will be shown [ theorem [ thm : pd versus pid mb ] ] that both coincide with high probability under some reasonable conditions on the connectivity of valleys which , in essence , ensure the existence of reasonable path - dependent mb .we will also briefly touch on the phenomenon of quasi - stationarity [ proposition [ prop : eigenwerte ] ] which is a large area but to our best knowledge less studied in connection with the aggregation of states of large physical systems driven by energy landscapes .let us mention two further publications which , despite having a different thrust , provide definitions of valleys , called basins of attraction or metastates there , to deal with related questions .olivieri & scoppola fully describe the tube of exit from a domain in terms of which basins of attraction of increasing order are visited during a stay in that domain and for how long these basins are visited . in a very recent publication , beltrn & landim , by working with transition rates instead of energies , aim at finding a universal depth ( and time scale ) for all metastates .however , we rather aim at the _ finest aggregation such that transitions back to an already visited metastate are very unlikely within a time frame used in simulations_. this finest aggregation will lead to valleys of very variable depth just as simulations do not exhibit a universal depth or timescale .let be a markov chain on a finite set with transition matrix and stationary distribution , and let be an energy function such that the following conditions hold : irreducibility : : : is irreducible with and iff for all .transition probabilities : : : there exist parameters and with as such that for all distinct with .furthermore , exists for all , is positive if and otherwise .reversibility : : : the pair satisfies the detailed balance equations , i.e. for all .non - degeneracy : : : for all .we are thus dealing with a reversible markov chain with exponentially small transition probabilities driven by an energy landscape .as an example , which is also the main motivation behind this work , one can think of a metropolis chain with transition probabilities of the form here is the inverse temperature and , is a parameter , independent from , giving the number of neighbors of . for above conditions are fulfilled .let us start with the following basic result for the ratios of the stationary distribution .[ lem : statvert ] for any two states with , we have to start with , assume . reversibility andthe assumptions on the transition probabilities imply and now let and be arbitrary . by the irreducibility , there is a path from to of neighboring states with ,\ , 0\leq i\leq n-1 ] for , we obtain an aac which behaves roughly like a rw on a graph . such a rw is diffusive if we assume periodic boundary conditions ( or sufficiently large state space compared to the observation time ) and an energy landscape which is homogeneous enough to ensure zero - mean increments . 
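The Metropolis dynamics invoked here, and used again for the simulation discussed at the end of the paper, can be sketched on a toy landscape. The snippet uses the textbook acceptance rule on a neighbourhood graph; the exact transition probabilities of the text and their normalisation are not reproduced, and the energies below are purely illustrative.

```python
import math
import random

def metropolis_chain(energy, neighbors, x0, beta, n_steps, seed=0):
    """Metropolis chain at inverse temperature beta on a finite state space.

    energy:    dict state -> energy
    neighbors: dict state -> list of neighboring states
    Returns the trajectory as a list of states.
    """
    rng = random.Random(seed)
    x, traj = x0, [x0]
    for _ in range(n_steps):
        y = rng.choice(neighbors[x])
        dE = energy[y] - energy[x]
        # accept with probability exp(-beta * max(dE, 0))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = y
        traj.append(x)
    return traj

# toy landscape: ten states on a ring with alternating wells and barriers
E = {i: v for i, v in enumerate([0.0, 2.0, 0.5, 2.5, 0.2, 2.2, 0.8, 3.0, 0.4, 2.8])}
nbrs = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
trajectory = metropolis_chain(E, nbrs, x0=0, beta=4.0, n_steps=50_000)
```

For large beta the trajectory shows the behaviour described above: long sojourns near local minima, frequent reciprocating jumps inside a well, and rare barrier crossings, which is exactly what the aggregation into valleys is meant to absorb.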
in particular has to comprise at least two states .a path - independent definition of metabasins can now be given on the basis of the previous considerations .[ defn : mb ] a finite markov chain driven by an energy function satisfying the assumptions stated at the beginning of section [ sec : valleys ] has _ metabasins of order _ if there exists an aggregation level such that the following conditions are fulfilled for each : ( mb4 ) . there are at least two distinct with a minimal uphill - downhill - path from to not hitting any other valley but for . in this case , the valleys are called _ metabasins ( mb ) of order . the reader should notice that each singleton set consisting of a non - assigned state forms a mb .the conditions ( mb1 ) and ( mb2 ) ensure the good nature of ( a ) the energy barriers and ( b ) the spatial arrangement of minima .as already pointed out , this determines which valleys are visited consecutively .properties of mb which can be concluded from the results of the previous sections are summarized in the next theorem .the reader is reminded of properties 14 stated in the introduction .[ thm : mb ] for mb as defined in definition [ defn : mb ] we have ( 4 ) the transition probabilities for jumps between mb do not depend on the point of entrance as ( property 1 ) .there are no reciprocating jumps of order ( property 2 ) .the expected residence time in a mb depends on only via the depth of the mb as ( property 3 ) .regarding only mb pertaining to local minima , the system is a trap model ( property 4 ) .\(1 ) follows from proposition [ prop : bergnge ] , ( 2 ) from proposition [ prop : forthandback ] , ( 3 ) from theorem [ thm : verlassenszeit ] , and ( 4 ) directly from the definition . it should not be surprising that the path - dependent definition of mb by heuer and stated in the introduction differs from our path - independent one . for example, the energy landscape in figure [ fig : pel - baum ] has no reasonable path - dependent mb because every transition between two branches of the shown tree must pass through the state . for a typical trajectory, there will be at most three mb : the states visited before the first occurrence of , the states visited between the first and the last occurrence of , and the states visited after the last occurrence of .the reason for this poor performance is the tree - like structure of the energy landscape or , more generally , the fact that the connectivity between the branches is too small to allow a selfavoiding walk through more than two branches .this results in a small recurrence time for ( compared to the number of states visited in between ) .however , every branch constitutes a mb when using the path - independent definition for sufficiently small , in which case the aac forms a markov chain and , given the metropolis algorithm , even a rw on the graph .having thus exemplified that the two definitions of mb do not necessarily coincide , where the path - independent approach applies to a wider class of energy landscapes , we turn to the question about conditions for them to coincide with high probability .as already pointed out , we have to assume a sufficient connectivity between the metastates to ensure the existence of reasonable path - dependent mb . 
in terms of this connectivity ( for a precise definition see definition [ def : connectivityparameter ] ) and the parameter and , our last result , theorem [ thm : pd versus pid mb ] below , provides lower bounds for the probability that both definitions yield the same partition of the state space .the first step towards this end is to identify and count , for each and a given , the states for which a transition of from to is likely .this leads to the announced connectivity parameter .[ def : connectivityparameter ] let and suppose that has mb of order at level . define the _ connectivity parameters _ is the minimal number of neighboring sites of a non - assigned state which do not belong to a particular neighboring valley and whose energy is at most plus the energy of . is the minimal number of neighboring sites / valleys of a valley whose energy is at most plus the energy of and which can be reached via an uphill - path from .finally , is the minimal number of neighboring valleys of a non - assigned state which comprise a state with energy of at most plus the energy of . and are always at least 2 and in fact quite large in the very complex energy landscapes of structural glasses . for very small , may be 1 , but if has mb of order in a high dimensional energy landscape , then can be assumed to be quite large as well . that transitions to states counted above have reasonable large probabilities is content of the following lemma .thus , the defined parameters do in fact measure the connectivity of the mb .[ lem : transitionprobability3 ] let and suppose that has mb of order at level with connectivity parameters defined in .writing for and for , , the following assertions hold true for all sufficiently large : * if and , or and satisfies , then * for any distinct and , * for any distinct and , we see that , for small enough compared to , transitions with an energy barrier of at most are still quite likely and thus a jump to a particular valley quite unlikely in the case of high connectivity .\(a ) choose so large that , for , and for any and , the latter being possible by lemma [ lem : transitionprobability2 ] .then for any such and , we infer provided that additionally holds true . if , then for any such that for some , for ( b ) pick again so large that for all . then , by definition of .\(c ) fix so large that for any and . in the very same way as in part ( b ), we then get for all by definition of .the above result motivates that in the case of high connectivity the probability to revisit a particular valley within a fixed time is quite small , or in other words , the probability for the aac to jump along a selfavoiding path is quite high .this is the main step towards the announced theorem and stated below .the observation time of course has to be small compared to the cover time of the process .[ lem : return prob ] let and suppose that has mb of order at level with connectivity parameters defined in .writing for and for , , define then the following assertions hold true for all sufficiently large : * for any and , where and in particular , if for some , then . * for each and , {k}\,e^{-2k\varepsilon\beta}\ ] ] where summation ranges over all pairwise distinct and for we write {k}:=n(n-1)\cdot ... \cdot ( n - k+1) ] forms a lower bound for the number of self - avoiding paths such that for each . we proceed with the announced result about the relation between path - dependent and path - independent mb . 
to this end , we fix for some .let for denote the random number of mb obtained from as defined in the introduction . for , we further let denote the mb containing and put if no such mb exists which is the case iff . [thm : pd versus pid mb ] let and suppose that has mb of order at level with connectivity parameters defined in .fix , and .then , for each and , there exists such that for all * , where if . * . * if , then {k}\left(1-\max_{m\in \mathcal{s}^{(i)}\backslash n^{(i)}}\delta(m,\beta)\right)^{k-1}e^{-2k\varepsilon\beta}.\ ] ] for the occurring bounds to be significant , two requirements must be met .first , must be small compared to the cover time of the aac and must be small compared to to ensure .second , the connectivity must be high to ensure and {k}e^{-2k\varepsilon\beta}\gg 0 $ ] .typically , the inclusions in parts ( b ) and ( c ) are strict because of high energy states within a valley that will probably be missed during one simulation run and therefore not belong to any path - dependent mb . on the other hand , since our approach strives to cover the state space as completely as possible by valleys the latter comprise such high energy states whenever they are assignable in the sense described in section [ sec : valleys ] . with being fixed , let us write as earlier for , and also for .\(a ) for a given , define with as defined in lemma [ lem : return prob ] and using for , we obtain thus , in order to show that with high probability a path - dependent mb comprises the inner part of a valley , we show that with high probability , when starting in its minimum , the whole inner part will be visited and the process will return to the minimum once more before the valley is left .this is trivial if and thus , for then more needs to be done if , where the second probability in the preceding line can further be bounded with the help of , viz . while for the first probability , we obtain with the help of theorem [ thm : driftzumminimum ] because for and .the latter can be seen as follows : it has been shown in the proof of theorem [ thm : driftzumminimumtler ] that .hence , as asserted .next , we infer thus .recalling the definition of , the last equality implies which will now be used to further bound the expression in , namely together with this yields as asserted \(b ) according to lemma [ lem : return prob ] , choose such that for each . by using and, we now infer and finally \(c ) in the following calculation , let , range over all -vectors with pairwise distinct components in and , for each , let range over all -vectors such that for each . as in part ( b ) , use repeatedly to infer {k}\,e^{-2k\varepsilon\beta},\end{aligned}\ ] ] the last line following from lemma [ lem : return prob ] .we return to example [ bsp : energielandschaft ] given in section [ sec : valleys ] , but modify the energy landscape by allowing direct transitions between some saddles ( see figure [ fig : energiedifferenz ] ( a ) ) because ( mb2 ) can clearly not be fulfilled in a one - dimensional model . while having no effect on the metastable states , valleys change in the way that , for levels , the states do no longer belong to the valley around state 4 and forms its own valley .the energy - differences of the various metastable states at each level are shown in figure [ fig : energiedifferenz ] ( b ) .the supremum of these energy differences decreases in , and we obtain mb of order 1 for , and of order for . to illustrate the behavior , we have run a metropolis algorithm on this energy landscape . 
for initial state and , the energies of the trajectories of the original chain as well as of the aggregated chain at levels are shown in figure [ fig : simulation ] . the following observations are worth pointing out : * the number of reciprocating jumps decreases with increasing level of aggregation . * the deeper the valley , the longer the residence time . * the motion in state space is well described by the aggregated process . * due to the very small size of the state space and the long simulation time , valleys are revisited . we are very indebted to andreas heuer for sharing his insight about glass - forming structures with us and also for his advice and many stimulating discussions that helped to improve the presentation of this article .
|
glass - forming systems , which are characterized by a highly disordered energy landscape , have been studied in physics by a simulation - based state space aggregation . the purpose of this article is to develop a path - independent approach within the framework of aperiodic , reversible markov chains with exponentially small transition probabilities which depend on some energy function . this will lead to the definition of certain metastates , also called metabasins in physics . more precisely , our aggregation procedure will provide a sequence of state space partitions such that on an appropriate aggregation level certain properties ( see properties 1 - 4 of the introduction ) are fulfilled . roughly speaking , this will be the case for the finest aggregation such that transitions back to an already visited ( meta-)state are very unlikely within a moderate time frame . keywords : metastability , metabasins , markov chain aggregation , disordered systems , exit time , metropolis algorithm ams subject classification : 60j10 , 82c44
|
levy flights are a model of diffusion in which the probability of a -length jump is `` broad '' , in that , asymptotically , , . in this casethe sum is distributed according to a levy distribution , whereas for normal diffusion takes place , .interesting problems arise in the theory of levy flights when considering the statistics of the visits to the sites , such for instance the number of different sites visited during a flight , ; in this paper we consider a different , but related , problem , namely the number of times a site visited by a random flyer .suppose that a random walk takes place on a -dimensional lattice , let be a site of and let be the probability that after steps the walker is at .the mean value of visits to the site after steps is since derivation of eq .( [ eq : sola ] ) does not depend on the specific form of the walk , it holds also for levy flights . in the followingit will be assumed ; the asymptotic value of , denoted by , is defined as .it is known that a random walk is transient if and only if ; in other words the existence of finite implies that the walk is transient .levy flights have a wide range of applications ( see for instance and references therein ) and , in particular , analysis of the number of times a site is visited can be relevant in those processes , such as random searches , in which it is important not just to determine what sites have been visited but how often they have been visited ; examples of possible applications range from animal foraging to exploration of visual space . moreover can be given the following interpretation , useful for possible applications : assume that particles undergoing a levy flight are continuously generated at the origin , then , at time , , where is the mean number of particles at site .this property of has been used , in a model based on electrons brownian motion , to simulate distributions of emissivity of supernova remnants .consider first one - dimensional , infinite lattices ; the probability of occupancy of site after steps is where is the probability of having a displacement of sites . in case of symmetric levy flights the canonical representation of and are , where and is a real number , which in the following will be set equal to 1 for simplicity ; a scaling relation holds between and , namely . if eqs .( [ eq : trans ] ) , ( [ eq : pro1 ] ) yield the gaussian distribution , , whereas , if , fails to be a proper distribution not concentrated at a point ; therefore representations ( [ eq : trans ] ) , ( [ eq : pro1 ] ) are valid only in the interval .more recently it has been shown that the analytic forms of and can be given through a fox function .application of ( [ eq : sola ] ) and of the scaling relation leads to , and in particular , recalling that , with converging to a finite value for if and only if ; in this case where is the well known riemann zeta function .( [ eq : zeta ] ) and ( [ eq : bizeta ] ) , show that the visit to site is a transient state if and only if .the trend of as increases can be computed by making use of the formulas related to the zeta function ; for the result is }{z^{\frac{1}{\alpha}+1}}dz \right ) , \label{eq : intor}\end{aligned}\ ] ] where ] eq . ( [ eq : conv ] ) becomes here , for reason of simplicity , instead of ( [ eq : trans ] ) , we have used the transition probability , defined on integers , and , being a normalising constant .a similar form of has been used in a work on the average time spent by flights in a closed interval . 
in case of numerical calculations , obviously , the absolute length of a step must be truncated to some finite value : here , to allow flights to encompass the whole interval , and consequently , .equation ( [ eq : genlev ] ) provides a valid transition probability for any and hence it can be used to model also classical brownian motion ; for the process becomes the simple symmetric walk .note that by combining ( [ eq : conv ] ) and ( [ eq : sola ] ) a recursive formula for can be derived , namely ; however the separate use of ( [ eq : conv ] ) and ( [ eq : sola ] ) is to be preferred , in that it also yields values of the probability distribution and this is useful to check the correctness of the results . in the classical theory of randomwalk the diffusion approximation allows to replace with the pdf , solution of the diffusion equation ; analogously for levy flights a superdiffusion equation can be derived ( see , among others , , ) , whose solution is a series of eigenfunctions of the operator . setting ,the pdf is .define , in analogy with the discrete case , then where are the eigenvalues of ; obviously , , for all , and the asymptotic formula is . in solution of the superdiffusion equation has been presented that , for symmetric flights , is \nonumber \\ & \times & \sin\left ( \frac{m\pi(x+a)}{l } \right ) \sin\left ( \frac{m\pi a}{l}\right),\end{aligned}\ ] ] here is the length of the interval and the diffusion coefficient ; application of eq .( [ eq : contin ] ) to ( [ eq : pgitt ] ) , with , provides an explicit form for , calculations of from eq.([eq : final ] ) need the numerical value of the diffusion coefficient , and it can be derived from the average time a flyer spends in the interval , related to by the formula since is defined as the approximation can be used to obtain the numerical value of .figures ( [ theo_simu_0_3 ] ) and ( [ theo_simu_2 ] ) show for and respectively . it can be seen that the graph of tends to a triangular shape as increases; indeed for simple symmetric random walk , ( , ) . figure ( [ max_3 ] ) presents the graph of as a function of ; note that the inflection point of the curve occurs at , that is at the boundary between levy flights and classical random walks .in other words , shows a `` phase transition '' from levy flights , characterised by small number of visits , to the gaussian regime where visits are more frequent .the results of this note clarify how the mean number of times a site is visited by a random flyer depends on the dimensionality of the lattice , the value of and the boundary conditions .in particular , it has been shown that unrestricted levy flights are always transient , but for the unidimensional case with ; restricted flights are transient if the boundaries are absorbing . in the last case computations show that the direct numerical method agrees very closely with `` experimental data '' generated by the monte carlo simulation , whereas the agreement is worse for eq .( [ eq : final ] ) , especially when is small ( see figs . [ theo_simu_0_3 ] and [ max_3 ] ) ; this is not surprising , since eq .( [ eq : lconv ] ) deals directly with discrete variables , whereas eq .( [ eq : final ] ) results from the diffusion approximation . on the other hand , obviously , eq .( [ eq : final ] ) provides a more general , analytical formula for and not just a set numerical values .+ we thank the two anonymous referees for useful advice and criticism .
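As a complement to the numerical procedure described above — iterating the occupation probabilities with the truncated power-law step distribution and summing them over time to obtain the mean number of visits on an interval with absorbing boundaries — the following Python sketch spells out the computation. The truncation length, interval size and number of steps are illustrative choices, not the ones used for the figures, and the initial position is counted as one visit.

```python
import numpy as np

def levy_step_probs(alpha, L):
    """Symmetric truncated power-law step distribution on the integers,
    p(k) proportional to |k|^{-(1+alpha)} for 1 <= |k| <= L, with p(0) = 0."""
    k = np.arange(1, L + 1)
    w = k ** (-(1.0 + alpha))
    p = np.concatenate([w[::-1], [0.0], w])
    return p / p.sum()                       # offsets run from -L to L

def mean_visits(alpha, n_sites, n_steps, start=None):
    """Mean number of visits to each site of an interval with absorbing
    boundaries, obtained by iterating the occupation probabilities and
    summing them over time."""
    L = n_sites                              # jumps may span the whole interval
    p = levy_step_probs(alpha, L)
    offsets = np.arange(-L, L + 1)
    start = n_sites // 2 if start is None else start
    occ = np.zeros(n_sites)
    occ[start] = 1.0
    visits = occ.copy()
    for _ in range(n_steps):
        new = np.zeros(n_sites)
        for j in np.nonzero(occ)[0]:
            targets = j + offsets
            inside = (targets >= 0) & (targets < n_sites)
            # mass jumping outside the interval is absorbed and discarded
            np.add.at(new, targets[inside], occ[j] * p[inside])
        occ = new
        visits += occ
    return visits

if __name__ == "__main__":
    for alpha in (0.3, 1.0, 2.0):
        v = mean_visits(alpha, n_sites=101, n_steps=2000)
        print(alpha, v.max())
```

The same quantities can be estimated by simulating individual flights and counting visits directly, which provides the Monte Carlo comparison mentioned above.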
|
formulas are derived to compute the mean number of times a site has been visited during symmetric levy flights . unrestricted levy flights are considered first , for lattices of any dimension : conditions for the existence of finite asymptotic maps of the visits over the lattice are analysed and a connection is made with the transience of the flight . in particular it is shown that flights on lattices of dimension greater than one are always transient . for an interval with absorbing boundaries the mean number of visits reaches stationary values , which are computed by means of numerical and analytical methods ; comparisons with monte carlo simulations are also presented .
|
evolutionary game theory ( egt ) has been proven to be a suitable mathematical framework to model biological and social evolution whenever the success of an individual depends on the presence or absence of other strategies .egt was introduced in 1973 by smith and price as an application of classical game theory to biological contexts , and has since then been widely and successfully applied to various fields , not only biology itself , but also ecology , population genetics , and computational and social sciences . in these contexts , the payoff obtained from game interactions is translated into reproductive fitness or social success .those strategies that achieve higher fitness or are more successful , on average , are favored by natural selection , thereby increase in their frequency .equilibrium points of such a dynamical system are the compositions of strategy frequencies where all the strategies have the same average fitness .biologically , they predict the co - existence of different types in a population and the maintenance of polymorphism . as in classical game theory with the dominant concept of nash equilibrium ,the analysis of equilibrium points in random evolutionary games is of great importance because it allows one to describe various generic properties , such as the overall complexity of interactions and the average behaviours , in a dynamical system .understanding properties of equilibrium points in a concrete system is important , but what if the system itself is not fixed or undefined ?analysis of random games is insightful for such scenarios . to this end, it is ambitious and desirable to answer the following general questions : * how are the equilibrium points distributed ?how do they behave when the number of players and strategies change ?* mathematical analysis of equilibrium points and their stability in a general ( multi - player multi - strategy ) evolutionary game is challenging because one would need to cope with systems of multivariate polynomial equations of high degrees ( see section [ sec : pre ] for more details ) .nevertheless , some recent attempts , both through numerical and analytical approaches , have been made .one approach is to study the probabilities of having a concrete number of equilibria , whether all equilibrium points or only the stable ones are counted , if the payoff entries follow a certain probability distribution .this approach has the advantage that these probabilities provide elaborate information on the distribution of the equilibria .however , it consists of sampling and solving of a system of multivariate polynomial equations ; hence is restricted , even when using numerical simulations , to games of a small number of players and/or small number of strategies : it is known that it is impossible to ( analytically ) solve an algebraic equation of a degree greater than .another possibility is to analyze the attainability of the patterns and the maximal number of evolutionarily stable strategies ( ess ) , revealing to some extent the complexity of the interactions .this line of research has been paid much attention in evolutionary game theory and other biological fields such as population genetics .more recently , in , the authors investigate the expected number of internal equilibria in a multi - player multi - strategy random evolutionary game where the game payoff matrix is generated from normal distributions . 
by connecting egt and random polynomial theory ,they describe a computationally implementable formula of the mean number of internal equilibrium points for the general case , lower and upper bounds the multi - player two - strategy random games , and a close - form formula for the two - player multi - strategy games . in this paper , we address the aforementioned questions , i.e. , of analysing distributions and behaviours of the internal equilibria of a random evolutionary game , in an _ average _ manner .more specifically , we first analyse the expected density of internal equilibrium points , , i.e. the expected number of such equilibrium points per unit length at point , in a -player -strategy random evolutionary game where the game payoff matrix is generated from a normal distribution ( for short , normal evolutionary games ) . herethe parameter , with , denotes the ratio of frequency of strategy to that of strategy , respectively ( more details in section [ sec : pre ] ) .in such a random game , we then analyse the expected number of internal equilibria , , and , as a result , characterize the expected number of internal _stable _ equilibria , .we obtain both quantitative ( asymptotic formula ) and qualitative ( monotone properties ) results of and , as functions of the ratios , , the number of players , , and that of strategies , . to obtain these results ,we develop further the connection between egt and random polynomial theory explored in , and more importantly , establish appealing ( previously unexplored ) connections to the well - known classes of polynomials , the bernstein polynomials and legendre polynomials .in contrast to the direct approach used in , our approach avoids sampling and solving a system of multivariate polynomial equations , thereby enabling us to study games with large numbers of players and/or strategies .we now summarise the main results of the present paper .the main analytical results of the present paper can be summarized into three categories : asymptotic behaviour of the density function and the expected number of ( stable ) equilibria , a connection between the density function with the legendre polynomials , and monotonic behaviour of the density function .in addition , we provide numerical results and illustration for the general games when both the numbers of players and strategies are large . to precisely describe our main results ,we introduce the following notation regarding asymptotic behaviour of two given functions and note that throughout the paper we sometimes put arguments of a function as subscripts . for instance, the expected density of internal equilibrium points , , besides , is also analyzed as a function of and .we will explicitly state which parameter(s ) is being varied whenever necessary to avoid the confusion . the main results of the present paper are the following . as described above, denotes the expected number of internal equilibrium points per unit length at point , in a -player -strategy random evolutionary game where the game payoff matrix is generated from a normal distribution ; the expected number of internal equilibria ; and the expected number of internal stable equilibria .the formal definitions of these three functions are given in section [ sec : pre ]. in theorem [ theo : concentration ] , we prove the following asymptotic behaviour of for all : .we also prove that is always bounded from above and .2 . 
in theorem[ theo : behavior of e2 ] , we prove a novel upper bound for the expected number of multi - player two - strategy random games , and obtain its limiting behaviour : .this upper bound is sharper than the one obtained in ( * ? ? ?* theorem 2 ) , which is , .these results lead to two important corollaries .first , we obtain a sharper bound for the expected number of stable equilibria , , and the corresponding limit , , see corollary [ cor : expected stable equi ] . the second corollary , corollary [ cor : expected zeros bernstein ] , is mathematically significant , in which we obtain lower and upper bounds and a limiting behaviour of the expected number of real zeros of a random bernstein polynomial .3 . in theorem[ theo : fd interm of pd ] , we establish an expression of in terms of the legendre polynomial and its derivative .4 . in theorem[ theo : fd interms of pd and pd-1 ] , we express in terms of the legendre polynomials of two consecutive order .5 . in theorem[ theo : fd / d decreases ] , we prove that is a decreasing function of for any given . consequently , and are decreasing functions of .6 . in proposition [ prop :condition for f increase ] , we provide a condition for being an increasing function of for any given .we conjecture that this condition holds true and support it by numerical simulation .. in theorem [ theo : en2 ] , we provide an upper bound for .we also make a conjecture for and in the general case ( ) .we offer numerical illustration for our main results in section [ sec : simulation ] .the density function provides insightful information on the distribution of the internal equilibria : integrating over any interval produces the expected number of real equilibria on that interval . in particular , the expected number of internal equilibria is obtained by integrating over the positive half of the space .theorem [ theo : fd / d decreases ] and proposition [ prop : condition for f increase ] , which are deduced from theorems [ theo : fd interm of pd ] and [ theo : fd interms of pd and pd-1 ] , are qualitative statements , which tell us _ how _ the expected number of internal equilibria per unit length in a -player two - strategy game changes when the number of players increases . theorem [ theo : concentration ] quantifies its behaviour showing that is approximately ( up to a constant factor ) equal to .the function , as seen in theorem [ theo : concentration ] , certainly satisfies the properties that increases but decreases .thus , it strengthens theorem [ theo : fd / d decreases ] and further supports conjecture [ cojecture : fd increases ] .theorem [ theo : behavior of e2 ] is also a quantitative statement which provides an asymptotic estimate for the expected number of internal ( stable ) equilibria .furthermore , it is important to note that the expected number of real zeros of a random polynomial has been extensively studied , dating back to 1932 with block and plya s seminal paper ( see , for instance , for a nice exposition and for the most recent progress ) .therefore , our results , in theorems [ theo : behavior of e2 ] , [ theo : fd interm of pd ] and [ theo : fd interms of pd and pd-1 ] , provide important , novel insights within the theory of random polynomials , but also reveal its intriguing connections and applications to egt .the rest of the paper is structured as follows . 
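For the multi-player two-strategy case, the direct sampling approach mentioned earlier is easy to set up: internal equilibria correspond to positive roots of a polynomial in y with binomially weighted Gaussian coefficients, as derived in the next sections. The Python sketch below samples such polynomials and averages the root counts; it assumes the coefficients are i.i.d. standard normal, which is only an illustrative normalisation since the exact variance used in the paper was lost in extraction.

```python
import numpy as np
from math import comb

def n_internal_equilibria(d, rng):
    """Number of internal equilibria of one random d-player two-strategy game,
    counted as the positive real roots of sum_k beta_k * C(d-1, k) * y^k."""
    beta = rng.standard_normal(d)                       # assumed i.i.d. N(0, 1)
    coeffs = [beta[k] * comb(d - 1, k) for k in range(d)]
    roots = np.roots(coeffs[::-1])                      # highest degree first
    return sum(1 for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

def expected_equilibria(d, n_samples=20_000, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([n_internal_equilibria(d, rng) for _ in range(n_samples)])

if __name__ == "__main__":
    for d in (2, 3, 5, 10, 20):
        print(d, expected_equilibria(d))
```

For d = 2 the estimate is close to 1/2, and the growth of the estimates with d can be compared against the asymptotic and monotonicity statements listed above.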
in section [ sec : pre ] , we recall relevant details on egt and random polynomial theory .section [ sec : 2d games ] presents full analysis of the expected density function and the expect number of internal equilibria of a multi - player two - strategy game .the results on asymptotic behaviour and on the connection to legendre polynomials and its applications are given in sections [ subsec : asymptotic ] and [ subsec : connection ] , respectively . in section [ sec : general games ], we provide analytical results for the two - player multi - strategy game and numerical simulations for the general case .therein we also make a conjecture about an asymptotic formula for the density and the expected number of internal equilibria in the general case . in section [ sec : conclusion ] , we sum up and provide future perspectives .finally , some detailed proofs are presented in the appendix .this section describes some relevant details of the egt and random polynomial theory , to the extent we need here .both are classical but the idea of using the latter to study the former has only been pointed out in our recent paper .[ sec : pre ] the classical approach to evolutionary games is replicator dynamics , describing that whenever a strategy has a fitness larger than the average fitness of the population , it is expected to spread .formally , let us consider an infinitely large population with strategies , numerated from 1 to .they have frequencies , , respectively , satisfying that and . the interaction of the individuals in the population is in randomly selected groups of participants , that is , they play and obtain their fitness from -player games .we consider here symmetrical games ( e.g. the public goods games and their generalizations ) in which the order of the participants is irrelevant .let be the payoff of the focal player , where ( ) is the strategy of the focal player , and ( with and ) be the strategy of the player in position .these payoffs form a -dimensional payoff matrix , which satisfies ( because of the game symmetry ) whenever is a permutation of .this means that only the fraction of each strategy in the game matters .the equilibrium points of the system are given by the points satisfying the condition that the fitnesses of all strategies are the same , which can be simplified to the following system of polynomials of degree where , and are the multinomial coefficients . assuming that all the payoff entries have the same probability distribution , then all , , have symmetric distributions , i.e. with mean 0 .we focus on internal equilibrium points , i.e. for all .hence , by using the transformation , with and , dividing the left hand side of the above equation by we obtain the following equation in terms of that is equivalent to hence , finding an internal equilibria in a general -strategy -player random evolutionary game is equivalent to find a solution of the system of polynomials of degree in ( [ eq : eqn for fitnessy ] ) .this observation links the study of generic properties of equilibrium points in egt to the theory of random polynomials .suppose that all are gaussian distributions with mean and variance , then for each ( ) , is a multivariate normal random vector with mean zero and covariance matrix given by the density function and the expected number of equilibria can be computed explicitly .the lemma below is a direct consequence of ( * ? ? ?* theorem 7.1 ) ( see also ( * ? ? ?* lemma 1 ) ) . 
for a clarity of notation ,we use bold font to denote an element in high - dimensional euclidean space such as .[ lemma : e(n , d ) ] assume that are independent normal random vectors with mean zero and covariance matrix as in .the expected density of real zeros of eq . at a point given by where denotes the gamma function and the matrix with entries with as a consequence , the expected number of internal equilibria in a _d_-player _ _ n-__strategy random game is determined by note that the assumption in lemma [ lemma : e(n , d ) ] is quite limited when applying to games with more than two strategies as in that case the independence of the terms does not carry over into the independence of terms , see remark [ rem : assumption ] for a detailed discussion .we provide mathematical analysis of the expected density function and the expected number of equilibria for a multi - player two - strategy game .section [ subsec : asymptotic ] presents asymptotic behaviour .a connection to legendre polynomials and its applications are given in [ subsec : connection ] . in section [ subsec :connection ] , further applications of this connection to study monotonicity of the density function are explored . in the case of multi - player two - strategy games( ) , the system of polynomial equations in becomes a univariate polynomial equation where is the fraction of strategy 1 ( i.e. , is that of strategy 2 ) and is the payoff to strategy 1 minus that to strategy 2 obtained in a -player interaction with other participants using strategy 1 .it is worth noticing that is the bernstein basis polynomials of degree .therefore , the left - hand side of is a random bernstein polynomial of degree . as a by - product of our analysis , see theorem [ theo : behavior of e2 ] , we will later obtain an asymptotic formula of the expected real zeros of a random bernstein polynomial .letting ( ) , eq .is simplified to the expected density of real zeros of this equation at a point is . for simplicity of notation , from now on we write instead of .we recall some properties of the density function from ( * ? ? ?* proposition 1 ) that will be used in the sequel .[ pro : properties of f ]the following properties hold 1 . .2 . ^\frac{1}{2} ] . in this paper , however , the arguments are not in this interval since .legendre polynomials with arguments greater than unity have been used in the literature , for instance in ( * ? ? ?* chapter 2 ) .we now establish a connection between the density and the legendre polynomials . according to the second property in proposition [ pro : properties of f ], we have ^\frac{1}{2 } , % = \frac{1}{2\pi}\left[\frac{1}{t}\frac{m'_{d+1}(t)}{m_{d+1}(t)}+\frac{m''_{d+1}(t)m_{d+1}(t)-(m'_{d+1}(t))^2}{m^2_{d+1}(t)}\right]^\frac{1}{2},\ ] ] where denotes the derivative with respect to . using this formula and lemma [ lem : connection btw md and pd ] , we obtain the following expression of in terms of and its derivative .[ theo : fd interm of pd ] the following formula holds see appendix [ app : proof of theorem fd interm of pd ] . as a direct consequence of, we obtain the following bound for . in comparison with the estimate obtained in theorem [ theo : concentration ] , this inequality is weaker for since it is of order .however , it does not blow up as approaches .we provide another expression of in terms of two consecutive legendre polynomials and . 
in comparison with, this formula avoids the computations of the derivative of the legendre polynomial .[ theo : fd interms of pd and pd-1 ] it holds that ^ 2.\ ] ] see appendix [ app : proof of theoremfd interms of pd and pd-1 ] theorem [ theo : fd interms of pd and pd-1 ] is crucial for the subsequent qualitative study of the density for varying .[ lem : turan inequality ] the following inequality holds for all see appendix [ app : proof of lemma turan inequality ] .note that this inequality is the reverse of the turn inequality where the author considered the case x \geq 1 h_d < 0$}\\ % & = \frac{p^2_d(x)}{p^2_{d-1}(x)}\left[h_d \cdot \left ( p^2_{d}(x)- p^2_{d-1}(x)\right ) + ( 2d-1)(x^2 - 1)p^2_d(x)p^2_{d-1}(x)\right ] , \\% & \cdots \\ % & \geq\frac{p^2_d(x)}{p^2_1(x)}\left[h_1 \cdot \left(p^2_{1}(x)- p^2_{0}(x)\right ) + ( x^2 - 1)p^2_1(x)p^2_0(x)\right ] \\% & = p^2_d(x ) ( x^2 - 1 ) \\% & \geq 0,\end{aligned}\ ] ] where .suppose that is true , i.e. , .\ ] ] this implies that for all and .then it follows that + ( 2d+1)(x^2 - 1)p^2_d(x)p^2_{d+1}(x)\nonumber \\&\qquad\geq \frac{p^2_{d+1}(x)-p^2_{d}(x)}{p^2_{d}(x)-p^2_{d-1}(x)}\left[h_d \left(p^2_{d}(x)-p^2_{d-1}(x)\right ) + ( 2d-1)(x^2 - 1)p^2_d(x)p^2_{d-1}(x)\right]\nonumber \\&\qquad\geq\frac{p^2_{d+1}(x)-p^2_{d}(x)}{p^2_{d}(x)-p^2_{d-1}(x)}\times \frac{p^2_{d}(x)-p^2_{d-1}(x)}{p^2_{d-1}(x)-p^2_{d-2}(x)}\nonumber \\&\qquad\qquad\times \left[h_{d-1 } \left(p^2_{d-1}(x)-p^2_{d-2}(x)\right ) + ( 2d-3)(x^2 - 1)p^2_{d-1}(x)p^2_{d-2}(x)\right]\nonumber \\&\qquad\geq\cdots\nonumber \\&\qquad\geq \prod\limits_{i=1}^{d}\frac{p^2_{i+1}(x)-p^2_{i}(x)}{p^2_{i}(x)-p^2_{i-1}(x)}\times\left[h_1 \left(p^2_{1}(x)-p^2_{0}(x)\right ) + ( x^2 - 1)p^2_{1}(x)p^2_{0}(x)\right].\end{aligned}\ ] ] by definition of , we have substituting this into , we obtain + ( 2d+1)(x^2 - 1)p^2_d(x)p^2_{d+1}(x ) \\&\geq ( x^2 - 1)\prod\limits_{i=1}^{d}\frac{p^2_{i+1}(x)-p^2_{i}(x)}{p^2_{i}(x)-p^2_{i-1}(x)}\\&=p^2_{d+1}(x)-p_d^2(x)\geq 0,\end{aligned}\ ] ] i.e. , the condition is satisfied . e. kostlan . on the expected number of real roots of a system of random polynomial equations . in _foundations of computational mathematics ( hong kong , 2000 ) _ , pages 149188 .world sci .publ . , river edge , nj , 2002 .
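Since the symbols in the lemma above were stripped during extraction, the following small check is offered only under the assumption that the inequality in question is the reversed Turán inequality for Legendre polynomials evaluated at arguments greater than one, which is consistent with the remark following the lemma. SciPy's eval_legendre accepts such arguments directly.

```python
import numpy as np
from scipy.special import eval_legendre

def turan_difference(d, x):
    """P_d(x)^2 - P_{d-1}(x) * P_{d+1}(x); the classical Turan inequality makes
    this nonnegative on [-1, 1], and the sign reverses for arguments above one."""
    return eval_legendre(d, x) ** 2 - eval_legendre(d - 1, x) * eval_legendre(d + 1, x)

if __name__ == "__main__":
    xs = np.linspace(1.0, 5.0, 9)
    for d in (2, 5, 10, 20):
        print(d, bool(np.all(turan_difference(d, xs) <= 1e-12)))
```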
|
in this paper , we study the distribution and behaviour of internal equilibria in a -player -strategy random evolutionary game where the game payoff matrix is generated from normal distributions . the study reveals and exploits interesting connections between evolutionary game theory and random polynomial theory . the main contributions of the paper are some qualitative and quantitative results on the expected density , , and the expected number , , of ( stable ) internal equilibria . firstly , we show that in multi - player two - strategy games , they behave asymptotically as as is sufficiently large . secondly , we prove that they are monotone functions of . we also make a conjecture for games with more than two strategies . thirdly , we provide numerical simulations to illustrate our analytical results and to support the conjecture . as consequences of our analysis , some qualitative and quantitative results on the distribution of zeros of a random bernstein polynomial are also obtained .
|
in this paper we present a method to evaluate important constants which describe the behaviour of physical fields near crack tips in a perturbed problem set in a domain containing an imperfect interface .imperfect interfaces account for the fact that the interface between two materials is almost never sharp . accounted for this observation by by placing a very thin strip of a homogeneous material in the model between two larger bodies with different elastic moduli to that of the strip .if the thin layer is considered to be either much softer or stiffer than the main bodies , its presence can be replaced in models by transmission conditions , whose derivation can be found for example in for a soft imperfect interface , or for a stiff imperfect interface .we shall consider only soft imperfect interfaces in the present paper . presented an asymptotic model of adhesive joints in a layered structure . found the asymptotic behaviour of displacements and stresses in a vicinity of the crack tip situated on a soft imperfect interface between two different elastic materials , where the non - ideal interface is replaced by non - ideal transmission conditions .for such a case , the asymptotics are of a markedly different form to the perfect interface case , in which components of stress exhibit a square root singularity at the crack tip ; such behaviour is not present for imperfect interface cracks. a key element of our approach will be the derivation of a new weight function .the concept of weight functions was introduced by . in the perfect interface setting these provide weights for the loads applied to the crack surfaces such that their weighted integrals over the crack surfaces provide the stress intensity factors at a certain point . modified the weight function technique to yield similarly useful asymptotic constants that characterise stress fields near crack tips along an imperfect interface .a survey of macro - microcrack interaction problems can be found in .of particular relevance is the recent manuscript of which examines an analogous problem to that presently considered with a perfect interface in place of the imperfect interface .the approach in that paper utilises the dipole matrix approach of to construct an asymptotic solution that takes into account the presence of a micro - defect such as a small inclusion .the present paper seeks to adapt this approach to the imperfect interface setting .we adopt the following structure for the paper .we first formulate the physical problem before giving the weight function problem formulation .fourier transform techniques allow us to obtain a wiener - hopf type problem for the weight function , whose kernel we factorise in a computationally convenient fashion .the wiener - hopf equation is solved to yield expressions for the weight function and comparisons are drawn between the perfect and imperfect interface weight function problems .we then use the reciprocal theorem ( betti formula ) in the spirit of to relate the sought physical solution to the weight function .the presence of imperfect interface transmission conditions alters properties of the functions in the betti identity and so different analysis is required .the application of betti s identity enables us to find an expression for the leading order of tractions near the crack tip in terms of the new weight function and the imposed arbitrary tractions prescribed on the faces of the crack : here , bars denote fourier transform , is a constant depending on the material parameters and extent 
of interface imperfection , and are respectively the jump and average of the weight function across the crack / interface line , and and are the jump and average of the tractions prescribed on the crack faces .in section [ section : pert ] , we perform perturbation analysis to determine the impact on the tractions near the crack tip of the presence of a small inclusion .the asymptotic solution is sought in the form where is the unperturbed physical displacement solution ( the solution with no inclusion present ) , is a boundary layer concentrated near the inclusion and is introduced to fulfil the original boundary conditions on the crack faces and along the imperfect interface .this enables us to find the first order variation in the crack tip tractions ; we expand the constant as and use betti identity arguments to derive an expression for ( see ( [ deltaa0expression ] ) ) .this is interpreted physically as the change in traction near the crack tip induced by the inclusion s presence ; as such we say that the sign of for any given positioning and configuration of the inclusion either shields or amplifies the propagation of the main crack .note that for the unpeturbed setup ( with no inclusion present ) and so we will naturally drop the superscript when referring to the quantity corresponding to the unperturbed problem .we conclude the paper by presenting numerical results in section [ section : numerical ] .in particular we show how varies depending on the extent of interface imperfection and choice of material contrast parameter for different loadings .these computations are performed for point loadings that are chosen to be illustrative of the suitability of our method to asymmetric self - balanced loadings .we further propose a method of comparing with stress intensity factors from the analogous perfect interface problem and find agreement as the extent of interface imperfection tends towards zero .we also present computations that show the sign of for varying location and orientation of the micro - defect .we consider an infinite two - phase plane with an imperfect interface positioned along the positive -axis .a semi - infinite crack is placed occupying the line .we refer to the half - planes above and below the crack and interface respectively as and .the material occupying has shear modulus and mass density for .the anti - plane shear displacement function satisfies the laplace equation occupying half - planes above and below the crack and imperfect interface for the central point of a micro - defect is situated at a distance from the tip of the main crack . ]the plane also contains a micro - defect whose centre is at the point ; we will consider in particular elliptic inclusions although other types of defect may be incorporated into the model provided a suitable dipole matrix can be obtained ( see for example in which micro - cracks and rigid line inclusions are considered ) .the defect has shear modulus , is placed at a distance from the crack tip , makes an angle with the imperfect interface and is oriented at an angle to the horizontal as shown in figure [ plane : figure : physicaldefectsetup ] . the value of may be greater than or less than the value of ( which may be or depending where the defect is placed ) , and so both stiff and soft defects can be considered . we assume continuity of tractions across the crack and interface , and introduce imperfect interface conditions ahead of the crack : where the notation defines the jump in displacement across , i.e. 
the parameter describes the extent of imperfection of the interface , with larger corresponding to more imperfect interfaces .we further impose prescribed tractions on the crack faces : these tractions are assumed to be self - balanced ; that is and it is further assumed that vanish in a neighbourhood of the crack tip . although the techniques we will establish can be applied to any permissible loading , we will particularly focus our attention on the case where these loadings are point loadings , with a loading on the upper crack face positioned at ( where ) balanced by two equal point loadings on the lower crack face positioned at and , where . this loading makes computations more difficult to perform than for the smooth loadings considered by , but is more illustrative of the asymmetry of the load . near the crack tip, the physical displacement behaves as as demonstrated by .it follows that the displacement jump is approximated by as the crack tip is approached along the -axis . in the neighbourhood of the crack tip, the out of plane component of stress behaves as as , in the usual polar coordinate system and so along the interface , these estimates demonstrate that fourier transforms of the displacement jump and out - of - plane stress components can be taken ; we denote the fourier transformation of a function by thus as , the fourier transform of the displacement jump behaves as moreover , along the axis , the out of plane stress component decays as ) and weight function ( figure ) setups . ]the sought weight function also satisfies the laplace equation , but with the crack occupying .we define the functions in their respective half - planes by boundary conditions analogous to the physical set - up apply .that is , we expect that along the interface , the displacement jump behaves as while along the crack , and where are constants .we further expect that the asymptotic behaviour of allows us to apply fourier transforms .moreover , the behaviour near demonstrates that the fourier transform exists as a cauchy principal value integral . 
applying the fouriertransform with respect to and taking into account the behaviour of at infinity , we obtain that the transformed solutions of ( [ plane : laplace ] ) are of the form with the corresponding expressions for tractions at given by we define the functions by these functions are analytic in the complex half planes denoted by their superscripts .we expect that as in their respective domains , asymptotic estimates for are and near zero , we verify this later ( see equations ( [ plane : accuratephibehaviour1])-([plane : phiplusinfinity ] ) ) .the condition of continuity of tractions across the crack and interface ( [ plane : contoftractions ] ) gives that and the fourier transform of the jump function can be seen from ( [ plane : formofubar ] ) to be combining these conditions ( [ plane : whderivation1])-([plane : whderivation3 ] ) , we conclude that the functions satisfy the functional equation of the wiener - hopf type where with the constant given by this wiener - hopf kernel is the same as that found by .the behaviour of the functions is however different .in this section we factorise the function as defined in ( [ plane : xidefined ] ) .as we just remarked , despite this function having been previously factorised in , we provide here an alternative factorisation which is more convenient for computations .we define an auxiliary function by with the functions given by here is the standard square root function with its branch cut positioned along the negative real axis .thus are analytic functions in half - planes corresponding to their respective subscripts .now , is an even function and behaves at zero and infinity as follows : the kernel function can be factorised as this function can itself be factorised as the functions satisfy , with being regular and non - zero in the half plane .moreover , stirling s formula gives that the behaviour as in an upper half plane is where .analogous asymptotics for are easily obtained by noting that . near , the asymptotics for are given by the function can be written in the form where in particular , we stress that the functions are easy to compute .near zero , we find that where which follows from a similar derivation to that of .moreover , behaviour near infinity in a suitable domain is described by these expressions again emphasise the well behaved nature of the functions .the ` bad ' behaviour of the kernel near is all contained in the function which has subsequently been factorised into the product of readily computable analytic functions . in this sectionwe solve the wiener - hopf problem given in equation ( [ plane : whunfactorised ] ) . 
substituting our factorised expressions for and into ( [ plane : whunfactorised ] ) , we arrive at the wiener - hopf type equation both sides of ( [ plane : whfactorised ] ) represent analytic functions in their respective half - planes and do not have any poles along the real axis .the asymptotic estimates as given in ( [ plane : phipmatinfinity ] ) , ( [ plane : xizeroinfinity ] ) and ( [ plane : xistarinfinity ] ) demonstrate that both sides of equation ( [ plane : whfactorised ] ) behave as as in their respective domains .we therefore deduce that both sides must be equal to a constant , which we denote .we deduce that the functions are given by these expressions validate our earlier expectations ( see equations ( [ plane : phipmatinfinity ] ) and ( [ plane : phipmatminusinfinity ] ) on page ) regarding the asymptotic estimates for .in particular , accurate estimates near zero are given by while as in the appropriate domains , it also follows from ( [ plane : formofphis ] ) that the fourier transform of is given by expressions for the transforms of the displacement jump and the mean displacement across the interface are therefore respectively given by where is the dimensionless mechanical contrast parameter these expressions will be useful in section [ plane : section : betti ] where we consider the betti identity in an imperfect interface setting .in particular we note that has asymptotic expansions near zero and infinity as follows the function behaves similarly , as another key difference between the imperfect and perfect interface ( as considered in ) cases is also readily seen here . due to the condition of continuity of displacement across perfect interfaces ,the function is a plus function in the perfect case , since is zero for lying along the negative real axis .however , across an imperfect interface , the displacement is no longer continuous and so is neither a plus function nor a minus function .in this section we refer to the physical fields for displacement and out - of - plane stress component as and respectively , and to the weight function fields for displacement and stress as and respectively .we will use the reciprocal theorem ( betti formula ) as in to relate the physical solution to the weight function . applying the betti formula to the physical fields and to the upper and lower half plane we obtain and these identities were proved under the assumption that the integrand decays faster at infinity than along any ray .it is clear from the asymptotic estimates for the physical solution and the weight function given in subsections [ plane : section : physicalformulation ] and [ plane : section : weightformulation ] that this condition is satisfied . subtracting ( [ plane : bettibottom ] ) from ( [ plane : bettitop ] ) we obtain \}\mathrm{d}x=0.\label{plane : bettishort}\end{aligned}\ ] ] we split the terms for physical stress into two parts , writing where and are defined as follows here denotes the heaviside step function .the functions represent the prescribed loading on the crack faces . after this splitting , equation ( [ plane : bettishort ] )becomes we introduce notation for symmetric and skew - symmetric parts of the loading : this allows us to rewrite the right hand side of ( [ plane : bettilong ] ) , giving we now split into the sum of in the spirit of ( [ plane : splitting ] ) , and similarly split into the sum of .we will use the usual notation of to denote the convolution of and . 
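Although the paper works with an explicit factorisation of the kernel, the additive plus/minus splittings that recur throughout the Wiener-Hopf analysis can also be produced numerically by Cauchy integrals evaluated just above and below the real axis. The Python sketch below is a generic illustration of that technique for a function that decays at infinity and is smooth on the axis; it is not the explicit gamma-function factorisation used in the paper.

```python
import numpy as np

def cauchy_split(F, xi, delta=1e-3, t_max=200.0, n=400001):
    """Additive splitting F = F_plus + F_minus of a function decaying at
    infinity, via Cauchy integrals evaluated just above/below the real axis."""
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    Ft = F(t)
    def cauchy(z):
        return np.sum(Ft / (t - z)) * dt / (2j * np.pi)
    F_plus = cauchy(xi + 1j * delta)    # boundary value of the part analytic above
    F_minus = -cauchy(xi - 1j * delta)  # boundary value of the part analytic below
    return F_plus, F_minus

if __name__ == "__main__":
    F = lambda t: 1.0 / (t ** 2 + 1.0)  # simple decaying test function
    xi = 0.7
    fp, fm = cauchy_split(F, xi)
    print(fp + fm, F(xi))               # should agree up to discretisation error
```

For the test function above, the plus and minus parts recover the poles in the lower and upper half-planes respectively, which is the behaviour the splitting is meant to isolate.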
rewriting ( [ plane : bettilong2 ] ) using these expressions gives taking fourier transforms in yields we now make use of the transmission conditions which state that this causes the second and third terms in the left hand side of ( [ plane : bettilong3 ] ) to cancel , leaving we note that and can therefore combine the asymptotic estimates in ( [ plane : uinfinity ] ) , ( [ plane : sigmainfinity ] ) , ( [ plane : phiminusinfinity ] ) and ( [ plane : phiplusinfinity ] ) to yield that where .we now multiply both sides of ( [ plane : betti ] ) by , giving then , and similarly to the expression obtained for the perfect interface betti formula approach of , the left hand side now has asymptotics at infinity ( in appropriate domains ) of the form \ ] ] as , where the term in square brackets is the regularization of the dirac delta function , namely . integrating both sides of ( [ plane : bettibyxi ] ) , we can arrive at an expression for the constant in terms of known , readily computable functions : we note that since and behave as as , and the functions and behave as bounded oscillations as for point loadings , the integrand is well behaved at infinity .moreover , near the integrand is also sufficiently well behaved , acting as . equation ( [ plane : expressionfora0 ] ) is a particularly important result ; it gives an expression for the leading order of the out - of - plane component of stress near the crack tip ( see ( [ plane : whatisa0 ] ) ) in terms of known functions and acts as an imperfect interface analogue to the stress intensity factor from the perfect interface setting .as we stated earlier in this paper , although the methods described are applicable to any permissible loading , we will later perform computations using the specific point loading configuration shown in figure [ plane : figure : physicaldefectsetup ] on page . 
for this configuration ,the loadings are defined as a point load on the upper crack face at balanced by two equal loads at and , that is the corresponding explicit expressions for and are which have fourier transforms given by will later require a method to evaluate the _ unperturbed physical solution _ and its first order partial derivatives with respect to and .this problem has been solved by by approximating the loading by a linear combination of exponentials ; this approximation is however not ideal for point loadings .tractions on the upper and lower crack faces can be written as it follows immediately from continuity of tractions across the imperfect interface that we further define minus functions , and as we expect that the unknown functions and behave at infinity as from these expressions follow the relationships and also moreover , since transformed solutions are of the form we further have the relationships and these seven equations in eight unknowns reduce to the following wiener - hopf type equation relating and : noting that the term in braces on the left hand side of ( [ plane : antiwh ] ) is the function we earlier defined as and have already suitably factorised , we can write where recall that can be factorised in the form where we have defined the functions for the sake of notational brevity by which are analytic in the half planes indicated by their superscripts .these functions have behaviour near zero and infinity given by thus we can decompose the final term on the right hand side as usual into where are given by for .we expect that behave as as .the wiener - hopf equation becomes both terms on each side of ( [ plane : antisymmwh ] ) decay as , .moreover , each side is analytic in the half - plane denoted by the superscripts .liouville s theorem yields that both sides are equal to zero , and so these expressions verify that our expectations of the behaviour of and as were correct .moreover , ( [ plane:3rdcond ] ) enables us to express as condition ( [ plane:4thcond ] ) then yields an expression for the transform of the displacement jump from which we can obtain expressions for and as follows these expressions now enable us ( see ( [ plane : whatareaj ] ) ) to compute the fourier transform of the unperturbed solution ( i.e. 
the setup with no small defect present ) for any .we shall construct an asymptotic solution of the problem using the method of , that is the asymptotics of the solution will be taken in the form in ( [ exp ] ) , the leading term corresponds to the unperturbed solution , which is described in the previous section .the small dimensionless parameter is defined as the ratio of the semi - major axis of the elliptical inclusion to the distance of the defect s center from the crack tip , that is .the term corresponds to the boundary layer concentrated near the defect and needed to satisfy the transmission conditions for the elastic inclusion the term is introduced to fulfil the original boundary conditions ( 4 ) on the crack faces and the interface conditions ( 2 ) , ( 3 ) disturbed by the boundary layer ; this term , in turn , will produce perturbations of the crack tip fields and correspondingly of the constant .we shall consider an elastic inclusion , situated in the upper ( or lower ) half - plane .the leading term clearly does not satisfy the transmission conditions ( [ inclusion ] ) on the boundary .thus , we shall correct the solution by constructing the boundary layer , where the new scaled variable is defined by with being the `` centre '' of the inclusion ( see figure [ plane : figure : physicaldefectsetup ] ) . for consider the following problem where the function remains continuous across the interface , that is , and satisfies on the following transmission condition as , where is an outward unit normal on .the formulation is completed by setting the following condition at infinity the problem above has been solved by various techniques and the solution can be found , for example , in .since we assume that the inclusion is at a finite distance from the interface between the half - planes , we shall only need the leading term of the asymptotics of the solution at infinity .this term reads as follows \cdot \left[{\mbox{\boldmath}}\frac{{\mbox{\boldmath }}}{|{\mbox{\boldmath }}|^2}\right ] + o(|{\mbox{\boldmath }}|^{-2 } ) \quad \text{as } \quad { \mbox{\boldmath }}\to \infty,\ ] ] where is a 2 2 matrix which depends on the characteristic size of the domain and the ratio ; it is called the dipole matrix .for example , in the case of an elliptic inclusion with the semi - axes and making an angle with the positive direction of the -axis and -axis , respectively , the matrix takes the form where \displaystyle -\frac{(1 - e)({\nu _ * } - 1 ) \sin 2\alpha}{(e + { \nu_*})(1 + e{\nu _ * } ) } & \displaystyle \frac{1 - \cos 2\alpha}{e + { \nu _ * } } + \frac{1 + \cos2\alpha}{1 + e{\nu _ * } } \end{array } \right],\ ] ] and .we note that for a soft inclusion , , the dipole matrix is negative definite , whereas for a stiff inclusion , , the dipole matrix is positive definite . in the limit , we obtain the dipole matrix for a rigid movable inclusion . in the case of an elliptic rigid inclusion, we have where ( 1 - e)\sin 2\alpha & h_-(\alpha)+e h_+(\alpha ) \end{array } \right].\ ] ] here we have defined the functions for brevity of notation . 
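To illustrate how the dipole matrix enters the perturbation, the sketch below evaluates a far field of the dipole form w(x) ≈ c * ∇u(Y) · M · (x − Y) / |x − Y|^2 and the traction μ ∂w/∂x₂ it induces on the crack line. The prefactor c, the dipole matrix entries and the gradient of the unperturbed solution are placeholders (the explicit expressions above were partly lost in extraction), so this is only a structural illustration and not the paper's formula.

```python
import numpy as np

def dipole_far_field(x, Y, grad_u, M, prefactor=1.0 / (2.0 * np.pi)):
    """Leading-order dipole field created by a small defect centred at Y.
    The prefactor 1/(2*pi) is an assumption, not taken from the paper."""
    r = np.asarray(x, float) - np.asarray(Y, float)
    v = M @ np.asarray(grad_u, float)
    return prefactor * (v @ r) / (r @ r)

def induced_traction_on_crack_line(x1, Y, grad_u, M, mu, prefactor=1.0 / (2.0 * np.pi)):
    """mu * dw/dx2 evaluated at (x1, 0): the 'effective' traction the defect
    induces on the crack line, from the analytic derivative of the dipole field."""
    r = np.array([x1 - Y[0], -Y[1]])
    v = M @ np.asarray(grad_u, float)
    r2 = r @ r
    dw_dx2 = prefactor * (v[1] / r2 - 2.0 * (v @ r) * r[1] / r2 ** 2)
    return mu * dw_dx2

if __name__ == "__main__":
    M = np.array([[1.2, 0.3], [0.3, 0.8]])   # placeholder dipole matrix (not the paper's)
    Y = np.array([2.0, 1.0])                 # defect centre
    grad_u = np.array([0.5, -0.2])           # placeholder gradient of the unperturbed field at Y
    for x1 in np.linspace(-5.0, -0.1, 5):
        print(x1, induced_traction_on_crack_line(x1, Y, grad_u, M, mu=1.0))
```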
the term in a neighbourhood of the -axis written in the takes the form where \cdot \left[{\mbox{\boldmath }}\frac{{\mbox{\boldmath }}- { \mbox{\boldmath }}}{|{\mbox{\boldmath }}- { \mbox{\boldmath }}|^2}\right].\ ] ] as a result , one can compute the average and the jump of the `` effective '' tractions on the crack faces induced by the elastic inclusion .since must hold on the crack line ( to satisfy the original boundary conditions ( 4 ) ) , this gives for where \cdot { \mbox{\boldmath }}\frac{{\mbox{\boldmath }}_2}{|{\mbox{\boldmath }}- { \mbox{\boldmath }}|^2}+ \frac{1}{\pi } \left[\left.\nabla_{{\mbox{\boldmath } } } u^{(0)}\right|_{{\mbox{\boldmath }}}\right ]\cdot { \mbox{\boldmath }}\frac{({\mbox{\boldmath }}- { \mbox{\boldmath }})(y - y)}{|{\mbox{\boldmath }}- { \mbox{\boldmath }}|^4}.\end{aligned}\ ] ] additionally , we can compute the transmission conditions for the functions across the interface . in order for the perturbed solution in ( [ exp ] ) to satisfy the original transmission conditions ( 2 ) and ( 3 ), the following relations must hold for constant which describes the traction near the crack tip ( see ( [ plane : whatisa0 ] ) ) is expanded in the form our objective is to find the first order variation .let us consider the model problem for the first order perturbation and write the corresponding betti identity in the form this follows immediately from ( [ plane : bettishort ] ) by noting that .we split the terms for stress into two parts , observing that in contrast to the zero order problem where the load is described by ( [ plane : unpertloading ] ) , the terms with superscript are non - zero since the presence of inclusions induces stresses along the imperfect interface and should be taken into account .equation ( [ betti ] ) becomes \displaystyle \quad=-\int_{-\infty}^{\infty } \biggl\ { { { \llbracket}u { \rrbracket}}(x ' - x ) p^{(-)}(x ) + { \langle u \rangle}(x ' - x)q^{(-)}(x)+ { \langle u \rangle}(x ' - x)q^{(+)}(x ) \biggr\ } \mathrm{d}x .\end{array}\ ] ] we now split into the sum of and similarly split into the sum of .this gives taking the fourier transform in yields we now make use of the transmission conditions thus obtaining the same reasoning used in section [ plane : section : betti ] , allows us to derive the integral representation for in the form \mathrm{d}\xi + \int_{-\infty}^\infty \left [ \kappa \xi { \langle \overline{\sigma } \rangle}^-(\xi ) \overline{p}^+(\xi ) + \xi { \langle \overline{u } \rangle}(\xi ) \overline{q}^+(\xi ) \right ] \mathrm{d}\xi \biggr\}.\label{deltaa0expression}\end{aligned}\ ] ] this important constant has an immediate physical meaning .if then the defect configuration is neutral ; its presence causes zero perturbation to the leading order of tractions at the crack tip .otherwise , if , the presence of the defect causes a reduction in the crack tip traction and so shields the crack from propagating further . finally , if then the defect causes an amplification effect and so can be considered to be encouraging the propagation of the main crack .against . both cases plotted here use the parameters and , but with different values for , which controls the separation between the point loadings .the red plot has while the blue plot uses . 
] in this section we present results of computations obtained by following the methods previously described in this paper .all results have been computed using matlab .figure [ plane : a0vsmustar ] plots against , showing how the constant from the asymptotic expansion at the crack tip varies with differently contrasting stiffnesses of materials . recalling that we note that when is near to , this corresponds to . that is , the material occupying the region below the crack is far stiffer than the material above the crack . as this limitis approached , the precise locations of the point loadings on the lower face of the crack decrease in importance , since the material becomes sufficiently stiff for the material to act as an almost rigid body ; this explains the meeting of the two lines at . in figure [ plane : tramlines ]we present a log - log plot of against , the dimensionsless parameter of interface imperfection defined as .this has been computed for different values of ( describing the contrast in material stiffnesses ) and also for different values of ( describing the separation distance between the point loadings ) while keeping fixed ( ) .the solid lines correspond to while dotted lines represent and different colours correspond to different values of : green corresponds to , blue to and red to .bearing in mind our remarks regarding figure [ plane : a0vsmustar ] , we would expect that changing the value of would have the greatest impact for values of near + 1 .this is indeed the case in figure [ plane : tramlines ] .also plotted in figure [ plane : tramlines ] is a grey dotted line that is tangent to the curves ( which run parallel ) as ; this tangent has slope , indicating that as . as ,the interface becomes almost perfect , and so the square - root behaviour associated with fields near crack tips in the perfect interface setting is not unexpected . moreover , as , the curves on the log - log plot have slope , implying that as . against for differently contrasting materials . ]computations analogous to those presented in figure [ plane : tramlines ] have been performed for smooth asymmetric loadings given by we do not present them here since changing the loading to the form ( [ plane : nicerloadings ] ) introduces no new features . in the following subsection however , we will detail an approach for comparing against stress intensity factors and will present computations there for both point and smooth loadings . in this subsectionwe discuss an approach which enables a comparison to be made between imperfect and perfect interface situations . comparing the fieldsdirectly is not a simple task since in the perfect interface case the stresses become unbounded at the crack tip , exhibiting asymptotic behaviour of , . in the imperfectsetting , we have derived the leading order of stresses at the crack tip , , which is independent of . moreover , different normalisations may make comparisons difficult .however , given two particular pairs of materials with contrast parameters and say , we might expect the dimensionless ratios of stress intensity factors ( from the perfect interface case ) and ( imperfect case ) to be similar for small . as defined in ( [ plane : ratior ] ) for four different values of with smooth loadings of the form ( [ plane : nicerloadings ] ) acting on the crack faces . ] as defined in ( [ plane : ratior ] ) for four different values of with smooth loadings of the form ( [ plane : nicerloadings ] ) acting on the crack faces . 
in the perfect interface case, the stress intensity factor (derived in ) is given by as derived earlier in section [ plane : section : betti ], the leading order of tractions near the crack tip in the imperfect interface case is given by we emphasise that this quantity depends heavily upon the extent of interface imperfection, characterised by the dimensionless parameter . figure [ plane : ratio ] plots the ratio for with fixed and for four different values of . the loadings used are balanced; a point loading on the upper crack face at is balanced by two equal loadings at and . we see from the plot that as , . this provides some verification of the accuracy of our computations for asymmetric point loadings and demonstrates that the approach of comparing ratios against the perfect interface case is useful for small values of the imperfection parameter. figure [ plane : ratiosmooth ] plots the ratio for the smooth asymmetric loadings described by ( [ plane : nicerloadings ] ). we see that as , thus demonstrating that is comparable with stress intensity factors for smooth loadings as well as point loadings. we now present numerical results for the perturbed problem computed using matlab. figure [ plane : regions ] shows the sign of for a specific configuration. to reduce the computational task here, we have used smooth loadings with the tractions on the upper and lower crack faces of the form ( [ plane : nicerloadings ] ); the imperfect interface has . the results presented in figures [ plane : ratio ] and [ plane : ratiosmooth ] demonstrate that results for point loadings and smooth loadings are qualitatively similar. we emphasise however that the perturbation methods described in section [ section : pert ] are applicable to both smooth and point loadings. [ caption of figure plane : regions : the inclusion is stiff, with a given contrast between the internal and external materials of the inclusion, for varying position of the inclusion; the darker shaded areas are those where the perturbation has one sign, while the paler regions have the other. ] the figure clearly shows the regions for which crack growth is encouraged or discouraged for this configuration. however, we make the observation that a different analysis should be sought when is particularly close to zero, since this corresponds to the crack being placed near the imperfect interface, which contradicts the assumption made before equation ( [ boundary_layer_1 ] ). the imperfect interface weight function techniques presented here allow the leading order out-of-plane component of stress and the displacement discontinuity near the crack tip to be quantified. the displacement discontinuity can serve as an important parameter in fracture criteria for imperfect interface problems; we demonstrated that, in the limiting case as the extent of imperfection tends towards zero, the criterion is consistent with classical criteria based on the notion of the stress intensity factor. perturbation analysis further enables us to correct the solution to account for the presence of a small inclusion. the techniques presented enable us to determine whether the defect's presence shields or amplifies the propagation of the main crack. although we have presented computations in this paper for the situation where only one such inclusion is present and the inclusion is elliptical, we stress that the technique is readily applicable to geometries containing any number of small independent defects, provided a corresponding dipole matrix for each inclusion is used.
indeed ,even homogenisation - type problems for composite materials with the main crack lying along a soft imperfect interface of the composite could be tackled using the described techniques . moreover, similar analysis could be conducted for more general problems , for instance mode i / mode ii analysis and for various different types of imperfect interface ( see for example ) .av and gm respectively acknowledge support from the fp7 iapp projects piap - ga-2009 - 251475-hydrofrac and piap - ga-2011 - 286110-intercer2 .ap would like to acknowledge the italian ministry of education , university and research ( miur ) for the grant firb 2010 future in research `` structural mechanics models for renewable energy applications '' ( rbfr107akg ) .antipov , y.a . , avila - pozos , o. , kolaczkowski , s.t . andmovchan , a.b . , 2001 , mathematical model of delamination cracks on imperfect interfaces ._ international journal of solids and structures _ , * 38 * , 66656697 .mishuris g.s . ,kuhn , g. , 2001 , asymptotic behaviour of the elastic solution near the tip of a crack situated at a nonideal interface ._ zeitschrift fr angewandte mathematik und mechanik _ , * 81 * , 811826 .mishuris , g. , movchan , a. , movchan , n. and piccolroaz , a. , 2011 , interaction of an interfacial crack with linear small defects under out - of - plane shear loading ._ computational materials science _ , * 52 * , 226230 . piccolroaz , a. , mishuris , g. , movchan , a.b . , 2009 ,symmetric and skew - symmetric weight functions in 2d perturbation models for semi - infinite interfacial cracks ._ j. mech .solids _ , * 57(9 ) * , 16571682 .piccolroaz , a. , mishuris , g. , movchan , a. , movchan , n. , 2012 , perturbation analysis of mode iii interfacial cracks advancing in a dilute heterogeneous material . _ international journal of solids and structures _ , * 49 * , 244255 .
|
we analyse a problem of anti-plane shear in a bi-material plane containing a semi-infinite crack situated on a soft imperfect interface. the plane also contains a small thin inclusion (for instance an ellipse with high eccentricity) whose influence on the propagation of the main crack we investigate. an important element of our approach is the derivation of a new weight function (a special solution to a homogeneous boundary value problem) in the imperfect interface setting. the weight function is derived using fourier transform and wiener-hopf techniques and allows us to obtain an expression for an important constant (which may be used in a fracture criterion) that describes the leading order of tractions near the crack tip for the unperturbed problem. we present computations that demonstrate how this constant varies depending on the extent of interface imperfection and the contrast in material stiffness. we then perform perturbation analysis to derive an expression for the change in the leading order of tractions near the tip of the main crack induced by the presence of the small defect, whose sign can be interpreted as the inclusion's presence having an amplifying or shielding effect on the propagation of the main crack. keywords: imperfect interface, crack, weight function, perturbation, inclusion, fracture criterion
|
identification has always been required in critical tasks and applications ; to ask for an object or a signature that only the right person possesses . throughout history , there were always attempts to make this process flawless and secure , mostly to prevent forgeries . for centuries , identity was confirmed through an item or a mark . todaythere are many ways for a person to identify himself or herself , including passwords and keys .a very reliable way is to utilize something that is very difficult to duplicate quickly ; features of the person himself , also known as biometric data .the latter began in the late 19th century with the collection of fingerprints for forensic purposes due to them being unique to every person from whom they are sampled .afterwards many other characteristics were deemed efficient and unique to be used in the areas of security and identification .various algorithms have been used on an individual s biometric data such as fingerprints , iris patterns , , face and palmprints . sometimes even several methods are used together and then cross - referenced to dramatically increase the verity of the judgment .we chose palmprints to be our focus in this work , because we believe that despite their more simplicity than fingerprints which casts the illusion that their use is less secure , they can be utilized just as reliably .palmprints are more economical in the sense of acquisition . they can be easily obtained using inexpensive ccd cameras .they also work in different conditions of weather and are typically time - independent .however , due to sampling limitations , lighting and other factors , they may pose problems like insufficient data due to unclear wrinkles or confusion due to poor image quality .this is the reason there are usually many different samples from every person in the database .like all biometric data , the key is to use image processing and , in many cases , machine learning approaches to extract distinct traits of every person , called features , by their samples and use the captured data for the next blocks of data to come .being a popular area of research , there are many set of features and different approaches used for palmprint recognition ; however , two general approaches for palmprint recognition are the following : 1 . transforming palmprints into another domain and extracting the features in the transform domain , which could be wavelet , fourier , gabor , etc .2 . trying to extract principal lines and wrinkles and other geometrical characteristics as discriminants .there are many transform - based approaches .li proposed fourier - based features for palmprint recognition .wu presented a wavelet - based approach for palmprint recognition .they used wavelet energy distribution as a discriminant for the recognition process .ekinci proposed a gabor wavelet representation approach followed by kernel pca for palmprint recognition .there are also several line - based approaches , since palm lines are among the most useful features of palmprints .chen proposed a recognition algorithm that primarily uses creases .they extract all creases from a palm and use them for palmprint matching .the main advantage of this algorithm is that it is rotation- and translation - invariant .jia used robust line orientation code for palmprint verification .a few groups used image coding methods for palmprint recognition , such as palm code , fusion code , competitive code , ordinal code . 
a survey about palmprint recognition algorithms before 2009 is provided by kong . in the more recent works , in ,jia proposed a new descriptor for palmprint recognition called histogram of oriented lines ( hol ) which is inspired by the histogram of oriented gradients descriptors .the proposed descriptor has some robustness against small deformation and changes of illumination . in , minaeeproposed to use a set of textural features for palmprint recognition . in their work ,a set of local texture features are derived for each palmprint and then weighted majority voting algorithm is used to perform recognition task . in , mistani proposed an energy - based feature which results in a high accuracy for palmprint recognition . in , xu proposed a quaternion principal component analysis approach for multispectral palmprint recognition which achieved a high accuracy rate . in this work ,we have used the palmprint database created by the polytechnic university of hong kong ( polyu ) which includes a set of 12 palmprint samples from 500 people under four distinct light spectra .the job of the identifier is to take the picture of a new palmprint sample called a test subject and determine the person in possession of the most similar palmprint .our dataset allows us to use multiple spectra of the same palmprint .multispectral methods require different samples of the same object in order to make a better decision .the images in this dataset are preprocessed and the regions of interest ( roi ) for each of them are extracted . as a result ,no more preprocessing is required before feature extraction .four different palmprint images are shown in figure 1 .here we decided to use a set of features which capture the palmprint information both in spatial and frequency domains .we first divide each image into non - overlapping blocks and then extract 5 statistical features to capture essential spatial information and 9 wavelet - based features to determine the frequency content of the image .since the statistical features alone are not able to capture high - frequency patterns in palm images , we also use wavelet features to capture fine details of palm images so we are able to detect the partial differences between two different palmprints .after feature extraction , we have to use a classification algorithm to identify palmprints . in this work ,two different classifiers are used , the first one being minimum distance classifier and the other one is the weighted majority voting algorithm , which is very fast and can be also implemented in electronic devices in conjunction with energy - efficient algorithms , .the rest of the paper is organized as follows .section [ sectionii ] provides a detailed explanation of the proposed features .the minimum distance classifier and weighted majority voting algorithms are explained in sections [ subsectioniiia ] and [ subsectioniiib ] respectively .experimental results are given in section [ sectioniv ] .we have provided a comprehensive comparison with other state - of - the - art algorithms there . in the end , conclusion is given in section [ sectionv ] .in general , features play a crucial part in the area of machine learning and computer vision .the more informative features are , the higher accuracy one can get . therefore it is of utmost importance to extract a set of features which have the required information for prediction of the target value . once the images are dealt with , it is usually needed to extract a set of features from them to use for prediction . 
for a comprehensive study of feature extraction ,the reader is referred to .there are different kinds of features that can be used for palmprint recognition .one type consists of spatial and statistical features .another type is transform - domain features such as fourier , wavelet and gabor - based features .another category is the geometrical features based on principal lines and wrinkles .this category requires to extract these lines from the palmprint first , which may not be very simple for low - resolution images .foreground segmentation techniques can be used to extract principal lines from palmprint .geometrical features are also used in other applications .sparsity - based features have also drawn a lot of attention in image classification during the past few years - . herea set of features is used to capture the behavior of the palmprint in both spatial and frequency domains .based on the simulations , this results in a very highly accurate identification method for palmprints .two images may have similar global characteristics but look different in local regions .thus the local features are extracted from different parts of each palmprint and combined to create a feature matrix for every image .each palmprint is divided into non - overlapping blocks , and from each block , 5 statistical and 9 wavelet - based features are derived which are expected to determine the frequency information of the palms . to obtain the statistical features of each block , it is necessary to find the histogram of pixel intensities first .let us assume that represents the pixel value at the location of a block of size ( here ) and that denotes the probability mass function for the -th pixel value , , in that block .now the 5 following attributes can be defined as the statistical features of the current block : = \sum_{k=1}^{k } p(k)v(k ) \\ \hspace{-1.9 cm } f_2=e[(v - e[v])^2 ] \\\hspace{-1.9 cm } f_3=e[(v - e[v])^3 ] \\\hspace{-1.9 cm } f_4=e[(v - e[v])^4 ] \\\hspace{0.97 cm } f_5=entropy(p)= - \sum_{k=1}^{k } p(k)\log_2 p(k)\end{gathered}\ ] ] where denotes the number of different pixel values in the current block .the other 9 features are wavelet - based . in this work ,the wavelet transform used is the second - order daubechies filter .the 2d - wavelet decomposition is performed up to three stages , and in the end , 10 subbands are produced .since the mean pixel intensity is used as a statistical feature already , it is not required to use the ll subband of the last stage , but all other 9 subbands may be utilized .we extract the wavelet features in our implementation using the following algorithm : 1 .divide each palm image into non - overlapping blocks ; 2 .decompose each block up to 3 levels using daubechies-2 wavelet transform ; and 3 .compute the energy of each subband and put the similar subband energy of all blocks in a vector .if each subband is denoted by where , the wavelet features can be derived as follows : , \ \ \ \ \ \ \ \ i=1,2, ... 
,9\end{gathered}\ ] ] note that are blocks of size , are blocks of size and are blocks of size .an example of 3-level wavelet decomposition of a palmprint is presented in figure 2 .after the computations , there will be 14 different features for each block which can be combined in a vector together : .it is necessary to find the mentioned features for each block of a palmprint .if each palm image has a size of , the total number of non - overlapping blocks will be : therefore there are such feature vectors , .similarly they can be put in the columns of a 2-dimensional matrix to produce the feature matrix of that palmprint , : \end{gathered}\ ] ] therefore there will be a total number of features for each palmprint image .the goal of palmprint recognition is to identify a person using their palmprint samples .it is possible to use the derived features of each person for identification . after finding the features of all people in the dataset , a classifieris required so that the features of each test palmprint can be compared with all of the available samples in the dataset and find the most similar one .there are different classifiers that can be used for this job ; for example , minimum distance classifiers , support vector machines and probabilistic neural networks . in our work ,two different classifiers are used .one is the minimum distance classifier which finds the most similar palmprint by minimizing a distance between the features of the test samples and those of the training samples .the other one is the weighted majority voting algorithm which finds the most similar palmprint by acquiring the predictions based on each feature and its weight , each time awarding the training data with points , and choosing the entry with the highest point .these two algorithms are described in the following sections .since there are enough data in our dataset , our only goal is to minimize the recognition error on test samples , but if one is dealing with a small dataset , the over - fitting problem should also be considered , as it is discussed in .the minimum distance classifier is a popular algorithm in the template matching area .basically , it finds the distance between the features of an unknown sample and those of the training samples and picks the training sample which has the minimum distance to the unknown as the predicted label . therefore if denotes the features of a test sample and denotes the features of the -th sample in our dataset , minimum distance assigns the test sample to one of the samples in the dataset such that : \end{gathered}\ ] ] here euclidean distanceis used , which results in the nearest neighbor classifier . in this algorithm ,the feature matrix of all palmprints are extracted first .considering size of the image and the block , each feature matrix has a size of . as previously mentioned , there are 500 different persons in the database , and for each , there are 12 sample images . 
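before the train / test protocol is described, the per-block feature extraction defined above can be summarised in a short sketch. this is not the authors' code; it is a minimal python illustration that assumes the pywt package for the daubechies-2 transform, and the fixed 256-bin histogram and the default block size are our own choices rather than something fixed by the excerpt above.

```python
import numpy as np
import pywt  # assumed dependency; 'db2' is the daubechies-2 filter

def block_features(block, levels=256):
    """5 statistical + 9 wavelet features of one block (8-bit grey values assumed)."""
    b = block.astype(float)
    p, _ = np.histogram(b, bins=levels, range=(0, levels))
    p = p / p.sum()                                    # pixel-value probability mass function
    k = np.arange(levels)
    mean = np.sum(p * k)                               # f1: mean
    moments = [np.sum(p * (k - mean) ** m) for m in (2, 3, 4)]   # f2-f4: central moments
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))                # f5: entropy
    # 3-level 2d decomposition with daubechies-2; energies of the 9 detail subbands
    coeffs = pywt.wavedec2(b, 'db2', level=3)
    energies = [np.sum(sb ** 2) for detail in coeffs[1:] for sb in detail]
    return np.array([mean, *moments, entropy, *energies])        # 14 entries

def feature_matrix(img, bs=32):
    """stack the feature vectors of all non-overlapping bs-by-bs blocks column-wise."""
    blocks = [img[i:i + bs, j:j + bs]
              for i in range(0, img.shape[0] - bs + 1, bs)
              for j in range(0, img.shape[1] - bs + 1, bs)]
    return np.stack([block_features(b) for b in blocks], axis=1)
```

note that the paper forms the probability mass function over the pixel values actually present in each block, whereas the sketch uses a fixed histogram; this simplification does not change the structure of the resulting feature matrix, whose columns correspond to the blocks of the palm image.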
every time , of these 12 samplesare assigned as training and the remaining ones ( ) as test samples , leading to a total of test samples .for each person , the feature matrix is defined as the average of the feature matrices of the different training images of that person .then , for an unknown sample with the feature matrix , the following distance should be found : which is very similar to the frobenius norm of the difference of the two matrices , and each row has a weight of , where is a feature - normalizing factor trying to map all features into the same range .the term can be defined as the reciprocal of the mean value of the corresponding feature of all training samples . is the feature importance factor which gives higher weight to the features with more information about image labels .this factor can be any increasing function of single feature accuracy . here is defined as the recognition accuracy when the -th row of the feature matrix is used on its own for the recognition process . for each palmprint, there are four different spectra ; red , green , blue and infrared .their features are signified by , , and respectively .the key is to calculate the above distance for all the spectra by comparing the images in the same spectrum .next , the distance between a test image and the -th training sample will be defined as the average of the distances of their corresponding spectra .then , the predicted entry for a test image with the feature matrix will be : \end{gathered}\ ] ] voting theory has many applications in ai , search engines and recommendation systems . in algorithms based on majority voting, every voter decides the outcome of the test on its own , and in the end , all the decisions are counted and the final verdict is given . here the voters are the features and the votes are given to every person in the training samples . in the unweighted case ,all features have the same impact on the votes and none of them is superior .in the weighted case , which is used here , each feature has a weight of its own , based on which points will be awarded to each person . when added , the score will decide to which profile the test image is the most analogous .this scheme has a very simple algorithm and can be performed in a very short time compared to other works in this field .first , the images of every single person are uniformly shuffled in the database so that the training part can use different pieces of data from a random set of the 12 images .then , the features of the all the training data are gathered and the feature average for every person is computed .next , the other images are used as test subjects and , for every existing spectrum , the distance between the feature vector of every sample and the average matrix from the training period is calculated . the minimum distance with any subject based on every featureis awarded points based on the coefficient of the feature in that stage .this reward is also applied to a matrix shared by all four spectra and holds the total score . in the end, the person gaining the maximum of the global score matrix is identified as the answer to the recognition query . 
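the two decision rules just described can be sketched compactly; the per-feature weights (single-feature recognition accuracies times the normalising factors) are taken as given, and the snippet is an illustration of the logic rather than the authors' implementation. the formal definitions are given next.

```python
import numpy as np

def min_distance_predict(F, means, w):
    """weighted minimum-distance rule for one spectrum: means[j] is the averaged
    feature matrix of person j, F the feature matrix of the test image, and w holds
    one weight per feature row (normalising factor times importance factor)."""
    d = [np.sum(w[:, None] * (F - M) ** 2) for M in means]
    return int(np.argmin(d))

def vote_scores(F, means, w):
    """weighted majority voting for one spectrum: every feature row votes for the
    closest person, and the vote is worth that feature's weight."""
    scores = np.zeros(len(means))
    for i, wi in enumerate(w):
        d = [np.linalg.norm(F[i] - M[i]) for M in means]
        scores[np.argmin(d)] += wi
    return scores

# the four spectra are fused by averaging the distances (minimum distance rule) or by
# summing the score vectors (voting rule); the identity is the best person overall, e.g.
# total = sum(vote_scores(F[s], means[s], w) for s in ('red', 'green', 'blue', 'nir'))
# identity = int(np.argmax(total))
```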
for every feature vector, the voting result will be: when finds the person with minimum distance to the test subject, that person receives a point equal to the weight of the feature. the score of person based on is denoted by, where is the weight of the corresponding feature and where is the indicator function. then the total score of the -th training sample based on all the features in the scope of all the colors will be: in the end, the identification factor will be calculated as $\arg\max_j \big[ \sum_{\text{all colors}} \sum_{i} w_i s_j(i) \big]$. we have tested our algorithm on the polyu multispectral palmprint database containing 6000 palmprints captured from 500 different palms. every palm is sampled 12 times in two sessions. each palmprint contains 4 palm images collected at the same scene under 4 different illuminations, including red, green, blue and nir (near-infrared). therefore the total number of images is 24000. the resolution of each image is 128. as mentioned before, we are working on preprocessed palmprint images. therefore, no further action is required to align or resize the palm images. we have performed palmprint recognition for different fractions of training and test images. correct identification takes place when the test palmprint is classified as a person whose label is the same as the label of this palmprint, and misidentification occurs when the test palmprint is classified as an entry whose tag is different from that of the correct palmprint. table [ tblres1 ] reports the identification accuracy for the two different classifiers. every result is produced by repeating the experiment 10 times and taking the average of the results in order to make it more precise.

table [ tblres1 ] : recognition accuracy (%) of the two classifiers

training / total samples   using minimum distance classifier   using weighted majority voting
3/12                       97.42                               99.95
4/12                       99.72                               99.99
5/12                       99.51                               99.99
6/12                       100                                 99.99

it can be seen that when using a lower number of training samples, weighted majority voting fares much better than the minimum distance classifier. the reason is the fact that there are many features deciding the output of our system, and if a group of them fail to successfully pinpoint the match to the test subject, there are still others to help the system find the correct entry. table [ tblcomp ] shows a comparison of the results of our work and those of five other highly accurate schemes. we have compared our work with methods which were introduced in recent years. k-pca+gwr denotes ekinci's approach which applies kernel pca to the gabor features. mda+gwr denotes multilinear discriminant analysis applied to a gabor representation, which is presented in . the reported accuracy of the proposed scheme in table 2 corresponds to the case where half of the images of each person (3000 multispectral images in total) are used for training and the other half for testing.
for more details about the experimental conditions of the other works, the reader is referred to the referenced papers in the first column of table 2.

table [ tblcomp ] : comparison with other highly accurate schemes

method                                               accuracy
k-pca+gwr                                            95.17%
quaternion principal component analysis              98.13%
mda+gwr                                              98.81%
histogram of oriented lines                          99.97%
textural features                                    100%
proposed scheme using majority voting                99.99%
proposed scheme using minimum distance classifier    100%

as can be seen, the algorithm utilized in this paper outperforms the other methods. this can be due to the fact that statistical features are also used in parallel with wavelet-based ones. it is known that the wavelet transform is quite sensitive to small changes in the image due to deformation, distortion and other transformations. as a result, methods solely based on such features are more susceptible to noise and other distortions. however, the proposed statistical features in this work do not share this drawback. therefore they help to make the recognition algorithm more accurate. the system is implemented using matlab on a laptop with windows 7 and a core i7 cpu running at 2 ghz. the execution time for the proposed method is about 0.05 s per test using the majority voting algorithm. this paper proposed a set of statistical and wavelet-based features for palmprint recognition. one attempts to find the spatial information of palm images and the other aims to mostly capture their frequency content. one is sensitive to the major differences between different palms, while the other is more perceptive of the partial differences between similar palmprints. two different classifiers are used to perform the recognition process. by using this method, our algorithm is able to identify palmprints with similar line patterns as well as unclear palmprints. the proposed algorithm has significant advantages over the previous popular methods. the used features are very simple to extract. the algorithm is very fast and it does not need classifier training. most importantly, it has a very high accuracy rate which is robust to the number of training samples and can be high even for the case where the ratio of training to test is 1 to 3. in the future, we will apply this set of features to other biometrics as well. the authors would like to thank the hong kong polytechnic university (polyu) for sharing their multispectral palmprint database with us. k. delac and m. grgic, `` a survey of biometric recognition methods, '' electronics in marine, proceedings elmar, 46th international symposium, ieee, 2004. wildes, `` iris recognition: an emerging biometric technology, '' proc. ieee, 1348-1363, sept. s. minaee, aa. abdolrashidi and yao wang, `` iris recognition using scattering transform and textural features, '' ieee signal processing workshop, 2015. turk and ap. pentland, `` face recognition using eigenfaces, '' ieee conference on computer vision and pattern recognition, 1991. a. kong, d. zhang and m. kamel, `` a survey of palmprint recognition, '' pattern recognition 42.7, 1408-1418, 2009. w. li, d. zhang and z. xu, `` palmprint identification by fourier transform, '' international journal of pattern recognition and artificial intelligence 16.04: 417-432, 2002. x. wu, k. wang and d. zhang, `` wavelet based palmprint recognition, '' international conference on machine learning and cybernetics, vol. 3, ieee, 2002. m. ekinci and m. aykut, `` gabor-based kernel pca for palmprint recognition, '' electronics letters, vol. 20, pp.
1077 - 1079 , 2007 .j. chen , c. zhang and g. rong , `` palmprint recognition using crease , '' ieee international conference on image processing .vol . 3 , 2001 . w. jia , d. huang and d. zhang , `` palmprint verification based on robust line orientation code , '' pattern recognition 41.5 , : 1504 - 1513 , 2008 .f. yue , w. zuo , d. zhang , and k. wang , `` orientation selection using modified fcm for competitive code - based palmprint recognition , '' pattern recognition 42 , no .11 : 2841 - 2849 , 2009 . w. jia , r. hu , x. lei , yk .zhao and j. gui , `` histogram of oriented lines for palmprint recognition , '' ieee transactions on systems , man , and cybernetics : systems , 44(3 ) , 385 - 395 , 2014 .s. minaee and aa .abdolrashidi , `` multispectral palmprint recognition using textural features , '' ieee signal processing in medicine and biology symposium ( spmb ) , 2014 .mistani , s. minaee and e. fatemizadeh , `` multispectral palmprint recognition using a hybrid feature , '' arxiv preprint arxiv:1112.5997 , 2011 .x. xu and z. guo , `` multispectral palmprint recognition using quaternion principal component analysis , '' ieee workshop on emerging techniques and challenges for hand - based biometrics , pp .15 , 2010 .d. zhang , z. guo , g. lu and w. zuo , `` an online system of multispectral palmprint verification,''ieee transactions on instrumentation and measurement , 59.2 : 480 - 490 , 2010 .m. hosseini , a. fedorova , j. peters and s. shirmohammadi , `` energy - aware adaptations in mobile 3d graphics '' , acm multimedia : 1017 - 1020 , 2012 .m. hosseini , j. peters , s. shirmohammadi , `` energy - budget - compliant adaptive 3d texture streaming in mobile games '' , proceedings of the 4th acm multimedia systems conference , 2013 .i. guyon , `` feature extraction : foundations and applications , '' springer science and business media , vol .207 , 2006 .s. minaee and y. wang , `` screen content image segmentation using least absolute deviation fitting '' , ieee international conference on image processing , 2015 .s. minaee , m. fotouhi and b.h .khalaj , `` a geometric approach for fully automatic chromosome segmentation '' , signal processing in medicine and biology symposium , ieee , 2014 .u. srinivas , hs .mousavi , c. jeon , v. monga , a. hattel and b. jayarao , `` shirc : a simultaneous sparsity model for histopathological image representation and classification , '' isbi , pp . 1118 - 1121 .ieee , 2013 .mousavi , u. srinivas , v. monga , y. suo , m. dao and td .tran , `` multi - task image classification via collaborative , hierarchical spike - and - slab priors '' , icip , pp .4236 - 4240 .ieee , 2014 .u. srinivas , hs .mousavi , v. monga , a. hattel and b. jayarao , `` simultaneous sparsity model for histopathological image representation and classification '' , ieee transactions on medical imaging , 33.5 : 1163 - 1179 , 2014 . i. daubechies , `` ten lectures on wavelets , '' vol . 61 .philadelphia : society for industrial and applied mathematics , 1992 .s. minaee , y. wang and y. w. lui , `` prediction of longterm outcome of neuropsychological tests of mtbi patients using imaging features , '' signal processing in medicine and biology symposium , ieee , 2013 .
|
palmprint is one of the most useful physiological biometrics that can be used as a powerful means in personal recognition systems. the major features of palmprints are palm lines, wrinkles and ridges, and many approaches use them in different ways towards solving the palmprint recognition problem. here we have proposed to use a set of statistical and wavelet-based features; statistical to capture the general characteristics of palmprints, and wavelet-based to find the information not evident in the spatial domain. we also use two different classification approaches, a minimum distance classifier scheme and a weighted majority voting algorithm, to perform palmprint matching. the proposed method is tested on a well-known palmprint dataset of 6000 samples and has shown an impressive accuracy rate of 99.65%-100% for most scenarios. keywords: palmprint, statistical features, wavelet, minimum distance classifier, majority voting.
|
since the early work of hahn and lindquist , smarr and eppley , numerical relativity has become an important subfield of the theory of gravitation . to outsidersthe progress often seems marginal and unsatisfactory .the classic goal of providing waveform catalogs for the newly built gravitational wave detectors has still not been reached ( although considerable progress has been made recently ) . by the nature of general relativity, the simulation of isolated systems poses particularly hard problems .mathematically such systems can be formalized by the concept of asymptotically flat spacetimes ( see e.g. the standard textbook of wald ) , but it turns out that important quantities such as the total mass , ( angular ) momentum or emitted gravitational radiation can only consistently be defined at infinity .the traditional approach of introducing an arbitrary spatial cutoff introduces ambiguities and is not satisfactory at least from a mathematical point of view .a remedy is suggested by conformal compactification methods , such as the characteristic approach presented by luis lehner in this volume , or friedrich s conformal field equations , which he describes in this volume .the latter approach avoids the problems associated with the appearance of caustics in the characteristic formulation by allowing to foliate the compactified metric by spacelike hypersurfaces .these hypersurfaces are analogous to the standard hyperboloid in minkowski spacetime and are _ asymptotically null _ in the physical spacetime .the price to pay is the loss of the simplicity inherent in the use of null coordinates , and one has to deal with the full complexity of 3 + 1 numerical relativity .the fundamental ideas of the numerical solution of the conformal field equations have been laid out by frauendiener in this volume , and in a _ living review _ , and he has also discussed his code to treat spacetimes with a hypersurface - orthogonal killing vector and toroidal s . the purpose of the present article is to show the status of numerical simulations based on the conformal field equations in 3d i.e. three space dimensions without assuming any continuous symmetries and to discuss what is needed in order to render this approach a practical tool to investigate physically interesting spacetimes . by making future null infinity accessible to ( completely regular and well defined ) local computations , the approach excels at the extraction of radiation e.g. 
the quantities to hopefully be measured within the next years by new large - scale detectors .one of the main pedagogical goals will be to explain the challenges of numerical relativity and to highlight some open problems related to constructing hyperboloidal initial data and actually carrying out long - time stable and accurate simulations .for an even more condensed account of the conformal approach to numerical relativity see .the organization is as follows : sec .[ sec : algorithms ] introduces the algorithms developed in the last years by peter hbner ( the radiation extraction procedure , which i will only mention briefly , is based on work of hbner and marsha weaver ) , and implemented in a set of codes by peter hbner ( who has recently left the field ) .all results presented here have been obtained with these codes , which hbner has described in a series of articles .[ sec : weakdata ] will start with a brief description of the evolution of weak initial data which possess a regular point representing future timelike infinity , based on the work of hbner .then i will discuss the evolution of slightly stronger initial data which exhibit various problems that will have to be solved , e.g. the choice of gauge , and use this as a starting point for discussing the main current problems . in sec .[ sec : computational ] purely computational aspects of this project will be discussed , and in sec .[ sec : discussion ] i will sum up the current status and sketch a possible roadmap for further work .the conformal field equations are formulated in terms of an unphysical lorentzian metric defined on an unphysical manifold which gives rise to a physical metric , where the conformal factor is determined by the equations .the physical manifold is then given by .contrary to the formalism used by frauendiener in his contribution , we use a metric based formulation of the conformal field equations : [ konfgl ] here the ricci scalar of is considered a given function of the coordinates . for any solution , is the traceless part of the ricci tensor , and the weyl tensor of . note that the equations are regular even for .these `` conformal field equations '' render possible studies of the global structure of spacetimes , e.g. reading off radiation at null infinity , by solving regular equations .the 3 + 1 decomposition of the conformal geometry can be carried out as usual in general relativity , e.g. where and are the riemannian 3-metrics induced by respectively on a spacelike hypersurface with unit normals , and equivalently ( our signature is ) .the relation of the extrinsic curvatures ( , ) is then easily derived as , where .note that for regular components of and , the corresponding components of and with respect to the same coordinate system will in general diverge due to the compactification effect .however for the coordinate independent traces , we get which can be assumed regular everywhere .note that at , . since is an ingoing null surface ( with but ), we have that at .it follows that at . 
we will thus call regular spacelike hypersurfaces in hyperboloidal hypersurfaces , since in they are analogous to the standard hyperboloids in minkowski space , which provide the standard example .since such hypersurfaces cross but are everywhere spacelike in , they allow to access and radiation quantities defined there by solving a cauchy problem ( in contrast to a characteristic initial value problem which utilizes a null surface slicing ) .note that hyperboloidal hypersurfaces which cross are only cauchy surfaces for the _ future _ domain of dependence of the initial slice of , we therefore call our studies _we will not discuss the full equations here for brevity , but rather refer to .what is important , is that the equations split into symmetric hyperbolic evolution equations plus constraints which are propagated by the evolution equations .the evolution variables are , , the connection coefficients , projections and of the traceless 4-dimensional ricci tensor , the electric and magnetic components of the rescaled weyl tensor , , as well as , , , in total this makes quantities .in addition the gauge source functions , and have to be specified , in order to guarantee symmetric hyperbolicity they are given as functions of the coordinates . here determines the lapse as and is the shift vector .the ricci scalar can be thought of as implicitly steering the conformal factor .the numerical treatment of the constraints and evolution equations will be described below .but before , let us spend some time on general considerations about the treatment of null infinity .since is an evolution variable and not specified a priori , will in general not be aligned with grid points , and interpolation has to be used to evaluate computed quantities at locations of vanishing . for the physically interesting case of modeling an isolated system , `` physical '' i.e. the component of that idealizes us outside observers and our gravitational wave detectors ( neglecting cosmological effects such as redshift etc . ) has spherical topology. there may be more than one component of , i.e. additional spherical components associated with `` topological black holes '' ( see sec .[ subsec : bh ] ) . in principleit is possible of course to control the movement of through the grid by the gauge choice see for how to achieve such _ fixing _ within frauendiener s formulation .an example would be the so called _ freezing _ , where does not change its coordinate location . what is the significance of how moves through the grid ?this question is directly related to the question for the global structure of spacetime .although many questions are left open , the present understanding of the global structure of generic vacuum spacetimes , which can be constructed from regular initial data , does provide some hints .first , note that spacetimes which are asymptotically flat in spacelike and null directions i.e. isolated systems do not necessarily have to be asymptotically flat in timelike directions .an example would be a spacetime that contains a star or a black hole .in such cases where the end state of the system is not flat space , we can not expect the conformal spacetime to contain a regular point . in the case of sufficiently weak data however , friedrich has shown in that a regular point will exist[fig : slicings ] consistent with our intuition .the global structure is then similar to minkowski space .the standard conformal compactification of minkowski space is discussed in textbooks ( see e.g. 
) as a mapping to the einstein static universe .there moves inward and contracts to a point within finite coordinate time . in order to resolve such situationsit seems most appropriate to choose a gauge which mimics this behavior , i.e. where contracts to a point after finite coordinate time .the boundary of the computational domain is set in the unphysical region and the physical region contracts in coordinate space .accordingly , the initial data are also extended beyond the physical region of spacetime .it is this scenario which is best understood so far , and which is presented in some more detail in sec .[ sec : weakdata ] . for sufficiently strong regular data it is known that singularities develop according to the cosmic censorship conjecture such singularitiesshould generically be hidden inside of black holes .for such data we can not expect a regular point to exist . in the case when is singular ( and not much else is currently known even about the of kruskal spacetime see however bernd schmidt s contribution in this volume ) we have to expect structure like sharp gradients near , which makes it unlikely that we can afford to significantly reduce the size of the physical region in coordinate space ( at least not without adaptive mesh refinement a technology not yet available for 3d evolutions ) .a -freezing gauge may be appropriate for such a situation .furthermore , phenomena like quasi - normal ringdown , or the orbital motion of a two - black hole system suggest that the numerical time - coordinate better be adapted to the intrinsic time scale of the system . associated with quasi - normal ringdown for example is a fixed period in bondi - time , which suggests bondi time as a time coordinate near in a situation dominated by ringdown .thus , for black hole spacetimes it might turn out that the best choice of gauge fixes to a particular coordinate position , and shifts into the infinite future .it could be possible that in such a case the boundary can be chosen to either coincide with , or to be put just a small number of gridpoints outside , which would raise the question for an evolution algorithm that does not require a topologically rectangular grid .we see that the optimal choice of numerical algorithms and gauges may be tightly related to the global structure of the investigated spacetimes which is actually one of the main questions our simulations should be able to answer !evolution of a solution to the einstein equations starts with a solution to the constraints .the constraints of the conformal field equations ( see eq . ( 14 ) of ref . ) are regular equations on the whole conformal spacetime .however , they have not yet been cast into a standard type of pde system , such as a system of elliptic pdes .one therefore resorts to a 3-step method : 1 . obtain data for the einstein equations : the first and second fundamental forms and induced on by , corresponding in the compactified picture to , and and .this yields so - called `` minimal data '' .2 . complete the minimal data on to data for _ all _ variables using the conformal constraints _ in principle _ this is mere algebra and differentiation .3 . extend the data from to in some ad hoc but sufficiently smooth and `` well - behaved '' way . in order to simplify the first step ,the implementation of the code is restricted to a subclass of hyperboloidal slices where initially is pure trace , .the momentum constraint then implies .we always set . 
in order to reduce the hamiltonian constraint to _ one _ elliptic equation of second order, we use a modified lichnerowicz ansatz with _ two _ conformal factors and . the principal idea is to choose and , and solve for , as we will describe now .first , the `` boundary defining '' function is chosen to vanish on a 2-surface the boundary of and initial cut of with non - vanishing gradient on .the topology of is chosen as spherical for asymptotically minkowski spacetimes .then we choose to be a riemannian metric on , with the only restriction that the extrinsic 2-curvature induced by on is pure trace , which is required as a smoothness condition . with this ansatz is singular at , indicating that represents an infinity .the hamiltonian constraint then reduces to the yamabe equation for the conformal factor : this is a semilinear elliptic equation except at , where the principal part vanishes for a regular solution .this however determines the boundary values as existence and uniqueness of a positive solution to the yamabe equation and the corresponding existence and uniqueness of regular data for the conformal field equations using the approach outlined above have been proven by andersson , chruciel and friedrich .solutions to the yamabe equation and thus minimal initial data can either be taken from exact solutions or from numerical solutions of the yamabe equation . exact solutions which possess a of spherical topology have been implemented for minkowski space and for kruskal spacetime see the contribution of bernd schmidt in this volume .these solutions are defined even outside of , and thus can directly be completed to initial data for all variables by using the conformal constraints .if the yamabe equation is solved numerically , the boundary has to be chosen at , the initial cut of , with boundary values satisfying eq .( [ boundaryvals ] ) .if the equation would be solved on a larger ( more convenient cartesian ) grid , generic boundary conditions would cause the solution to lack sufficient differentiability at , see hbner s discussion in .this problem is due to the degeneracy of the yamabe equation at .unfortunately , this means that we have to solve an elliptic problem with _spherical boundary_. this problem is solved by combining the use of spherical coordinates with pseudo - spectral collocation methods . in pseudo - spectral methodsthe solution is expanded in ( analytically known ) basis functions here a fourier series for the angles and a chebychev series for the radial coordinate . for an introduction to pseudo - spectral methodssee e.g. .this allows to take care of coordinate singularities in a clean way , provided that all tensor components are computed with respect to a regular ( e.g. cartesian ) basis and that no collocation points align with the coordinate singularities .another significant advantage of spectral methods is their fast convergence : for smooth solutions they typically converge exponentially with resolution .the necessary conversions between the collocation and spectral representations are carried out as fast fourier transformations with the fftw library .the nonlinearities are dealt with by a newton iteration , the resulting linear equations are solved by an algebraic multigrid linear solver ( the amg library ) . 
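the two numerical ingredients named above, chebyshev collocation and a newton iteration for a semilinear problem, can be illustrated on a one-dimensional toy boundary value problem. the sketch below (python / numpy, using the standard differentiation matrix of trefethen) is emphatically not the yamabe solver itself: the actual equation is three-dimensional, combines fourier and chebyshev directions and degenerates at the boundary. the toy problem and its manufactured solution are our own choices.

```python
import numpy as np

def cheb(N):
    """chebyshev differentiation matrix and gauss-lobatto points (trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# toy semilinear two-point problem u'' - u**3 = f on [-1, 1] with u(+-1) = 0,
# where f is manufactured so that the exact solution is u = sin(pi x)
N = 32
D, x = cheb(N)
D2 = D @ D
u_exact = np.sin(np.pi * x)
f = -np.pi ** 2 * u_exact - u_exact ** 3

u = np.zeros(N + 1)                      # initial guess, already satisfies the boundary values
for it in range(20):
    F = D2 @ u - u ** 3 - f              # residual of the collocation equations
    J = D2 - np.diag(3 * u ** 2)         # jacobian for the newton step
    F[[0, -1]] = u[[0, -1]]              # impose the dirichlet conditions
    J[[0, -1], :] = 0.0
    J[0, 0] = J[-1, -1] = 1.0
    du = np.linalg.solve(J, -F)
    u += du
    if np.max(np.abs(du)) < 1e-12:
        break
print("newton iterations:", it + 1, " max error:", np.max(np.abs(u - u_exact)))
```

the error decays spectrally with the number of collocation points, which is the behaviour exploited by the initial data solver; in the toy problem the dense linear solve replaces the algebraic multigrid solver used in 3d.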
the constraints needed to complete minimal initial data to data for all evolution variables split into two groups : those that require divisions by the conformal factor to solve for the unknown variable , and those which do not .the latter do not cause any problems and can be solved without taking special care at .the first group , needed to compute , and , however does require special numerical techniques to carry out the division , and furthermore it is not known if their solution outside of actually allows solutions which are sufficiently smooth beyond .thus , at least for these we have to find some ad - hoc extension .note that in the case of analytical minimal data , the additional constraints are solved on the whole time evolution grid .the simplest approach to the division by would be an implementation of lhospital s rule , however this leads to nonsmooth errors and consequently to a loss of convergence . instead hbner has developed a technique to replace a division by solving an elliptic equation of the type ( actually some additional terms added for technical reasons are omitted here for simplicity ) for .for the boundary values , the unique solution is . for technical detailssee .the resulting linear elliptic equations for are solved by the same numerical techniques as the yamabe equation . for technical detailssee hbner .finally , we have to extend the initial data to the full cartesian spatial grid in some way .since solving all constraints also outside of will in general not be possible in a sufficiently smooth way , we have to find an ad hoc extension , which violates the constraints outside of but is sufficiently well behaved to serve as initial data .the resulting constraint violation is not necessarily harmful for the evolution , since causally disconnects the physical region from the region of constraint violation . on the numerical level , errors from the constraint violating region _will _ in general propagate into the physical region , but if our scheme is consistent , these errors have to converge to zero with the convergence order of the numerical scheme ( fourth order in our case ). there may still be practical problems , that prevent us from reaching this aim , of course : making the ad - hoc extension well behaved is actually quite difficult , the initial constraint violation may trigger constraint violating modes in the equations , which take us away from the true solution , singularities may form in the unphysical region , etc . since the time evolution grid is cartesian, its grid points will in general not coincide with the collocation points of the pseudo - spectral grid .thus fast fourier transformations can not be used for transformation to the time evolution grid .the current implementation instead uses standard discrete ( `` slow '' ) fourier transformations , which typically take up the major part of the computational effort of producing initial data .it turns out , that the combined procedure works reasonably well for certain data sets . forother data sets the division by is not yet solved in a satisfactory way , and constraint violations are of order unity for the highest available resolutions . 
in particular this concerns the constraint ( eq .( 14d ) in ) , since is computed last in the hierarchy of variables and requires two divisions by .further research is required to analyze the problems and either improve the current implementation or apply alternative algorithms .ultimately , it seems desirable to change the algorithm of obtaining initial data to a method that solves the conformal constraints directly and therefore does not suffer from the current problems .this approach may of course introduce new problems like an elliptic system too large to be handled in practice . since the standard definition of a black hole as the interior of an event horizon is a global concept , it is a priori not clear what one should consider as `` black hole initial data '' . in practice ,the singularity theorems and the assumption of cosmic censorship usually lead to the identification of `` black hole initial data '' with data that contain apparent horizons , and to associate the number of apparent horizons with the number of black holes in the initial data .a common strategy to produce apparent horizons is to use topologically nontrivial data , that is data which possess more than one asymptotically flat region . in the time - symmetric case such data obviously possess a minimal surface !asymptotic ends that extend to spatial infinity are relatively easy to produce by compactification methods , see e.g. or the contribution of dain in this volume . from the numerical point of viewit is important that the topology of the computational grid is independent of the number of asymptotic regions or apparent horizons considered : suitable regularization procedures allow to treat spatial infinities as grid points . in the current approach to the hyperboloidal initial value problem , where first the yamabe equation needs to be solved , the grid topology _ does depend _ on the number of topological black holes in this case the number of initial cuts of s , which have spherical topology .one option would be of course to combine both ingredients and consider `` mixed asymptotics '' initial data , which extend to the physical and to unphysical interior spacelike infinities which only serve the purpose of acting as `` topological sources '' for apparent horizons .another option , suggested by hbner in , is to generalize the current code for the initial data , which only allows for one cut of , which has spherical topology , to multiple s of spherical topology . for the case of one black holethis would correspond to the relatively simple modification to topology . for the case of two black holes one could implement the schwarz alternating procedure ( as described in sec .6.4.1 of ref . ) to treat three s with three coordinate patches , where each patch is adapted to a spherical coordinate system with its . a more practical approach ( at least to get started ) could be to produce topologically trivial black hole initial data .since we expect physical black holes to result from the collapse of topologically trivial regular initial data , such data would in some sense be more physical .theorems on the existence of apparent horizons in cauchy data have been presented by beig and murchadha in . numerical studies in this spirithave been performed by the author .such data could in principle be produced with the current code once it gets coupled to an apparent horizon finder . 
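as a side remark, the simplest setting in which an apparent horizon finder can be illustrated is time-symmetric, spherically symmetric data, where apparent horizons coincide with minimal surfaces. the toy sketch below locates the minimal surface of the schwarzschild conformal factor in isotropic coordinates as the critical point of the areal radius; it is meant only to fix ideas and is not part of the code discussed in this paper.

```python
import numpy as np

M = 1.0                                    # mass parameter of the schwarzschild slice

def areal_radius(r):
    """areal radius of the coordinate sphere r for the time-symmetric schwarzschild
    slice in isotropic coordinates, conformal factor psi = 1 + M/(2 r)."""
    psi = 1.0 + M / (2.0 * r)
    return psi ** 2 * r

def d_areal(r, h=1e-6):
    return (areal_radius(r + h) - areal_radius(r - h)) / (2.0 * h)

# the minimal surface (the apparent horizon of time-symmetric data) is the sphere on
# which the areal radius is stationary; bracket it and bisect on the derivative
a, b = 0.1 * M, 2.0 * M
for _ in range(40):
    m = 0.5 * (a + b)
    if d_areal(a) * d_areal(m) <= 0.0:
        b = m
    else:
        a = m
print("minimal surface at r =", 0.5 * (a + b), "; analytic value is M/2 =", 0.5 * M)
```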
for the hyperboloidal initial value problem it is actually not known whether such topologically trivial black hole data exist, but it seems physically reasonable. finding such data numerically by parameter studies would be an interesting result in itself. a natural question in this context is whether there is any qualitative difference between `` topological '' and `` non-topological '' black holes outside of the event horizon, e.g. regarding their waveforms. the time evolution algorithm is an implementation of a standard fourth order method of lines (see e.g. ), with centered spatial differences and runge-kutta time integration. additionally, a dissipation term of the type discussed in theorems 6.7.1 and 6.7.2 of gustafsson, kreiss and oliger is added to the right-hand sides to damp out high frequency oscillations and keep the code numerically stable. numerical experiments show that usually small amounts of dissipation are sufficient (the dissipation term used contains a free parameter), and that they do not change the results in any significant manner. a particularly subtle part of the evolution usually is the boundary treatment. in the conformal approach we are in the situation that the boundary is actually situated outside of the physical region of the grid; this is one of its essential advantages. in typical explicit time evolution algorithms, such as our runge-kutta method of lines, the numerical propagation speed is actually larger than the speed of all the characteristics (in our case the speed of light). thus does _ not _ shield the physical region from the influence of the boundary, but this influence has to converge to zero with the convergence order of the algorithm, fourth order in our case. one therefore does not have to choose a `` physical '' boundary condition; the only requirements are stability and `` practicality '', e.g. the boundary condition should avoid, if possible, the development of large gradients in the unphysical region, to reduce the numerical `` spill over '' into the physical region, or even code crashes. the current implementation relies on a `` transition layer '' in the unphysical region, which is used to transform the rescaled einstein equations to trivial evolution equations, which are stable with a trivial copy operation at the outermost grid point as a boundary condition (see ref. for details and references). we thus modify the evolution equations according to replace where is chosen as for and for . this procedure works reasonably well for weak data, however there are some open problems. one is that the region of large constraint violations outside of may trigger constraint violating modes of the equations that can grow exponentially. another problem is that a `` thin '' transition zone causes large gradients in the coefficients of the equations, thus eventually leading to large gradients in the solution, while a `` thick '' transition zone means losing many grid points. if no transition zone is used at all, and the cartesian grid boundary touches , the ratio of the number of grid points in the unphysical region versus the number of grid points in the physical region is already . extracting physics from a numerical solution to the einstein equations is a nontrivial task. results typically show a combination of physics and coordinate effects which are hard to disentangle, in particular in the absence of a background geometry or preferred coordinate system. in order to understand what is going on in a simulation, e.g.
to find `` hot spots '' of inaccuracy or instability or bugs in an algorithm , it is often very important to visualize the `` raw '' data of a calculation . here the visualization of scalar and in particular tensor fields in 3d is a subtle task in itself . but beyond that , one also wants ways to factor out coordinate effects in some way , and ideally access physical information directly . one way commonly used to partially factor out coordinate effects is to look at curvature invariants ; another possibility is to trace geodesics through spacetime . in the current code this is done by concurrent integration of geodesics by means of the same 4th order runge - kutta scheme used already in the method of lines . both null and timelike geodesics , as well as geodesics of the physical and rescaled metrics , can be computed , and various quantities such as curvature invariants are computed by interpolation along the geodesics . particularly important are null geodesics propagating along , since they can be used to define a bondi system and thus compute radiation quantities such as the bondi mass or news . note that the foliation of spacetime chosen for evolution will in general _ not _ reproduce cuts of of constant bondi time . hbner has therefore implemented postprocessing algorithms ( using the idl programming language / software system ) which construct slices of constant bondi time in the data corresponding to the null geodesics propagating on by interpolation ( the algorithms are based on unpublished work of hbner and weaver ) . this evolution of geodesics is illustrated by fig . [ meetingpoint ] , which shows three timelike geodesics originating with different initial velocities at the same point meeting a generator of at . in this section i will discuss results of 3d calculations for initial data which evolve into a regular point , and which thus could be called `` weak data '' . bernd schmidt presents results for the kruskal spacetime in this volume ( see also ) . the initial conformal metric is chosen in cartesian coordinates as , and the boundary defining function is chosen as ; it is used to satisfy the smoothness condition for the conformal metric at . these data have been evolved previously by hbner for as reported in . for the gauge source functions he has made the `` trivial '' choice : , , , i.e. the conformal spacetime has vanishing scalar curvature , the shift vanishes and the lapse is given by . this simplest choice of gauge is completely sufficient for data , and has led to a milestone result of the conformal approach the evolution of weak data which evolve into a regular point of , which is resolved as a single grid cell . with this result hbner has illustrated a theorem by friedrich , who has shown that for sufficiently weak initial data there exists a regular point of . the complete future of ( the physical part of ) the initial slice can thus be reconstructed in a finite number of computational time steps . this calculation is an example of a situation for which the usage of the conformal field equations is ideally suited : main difficulties of the problem are directly addressed and solved by using the conformal field equations . the natural next question to ask is : what happens if one increases the amplitude ? to answer this question , i have performed and analyzed runs for integer values of up to . the results presented here have been produced with low resolutions of ( but for higher or slightly lower resolutions we essentially get the same results ) .
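before turning to convergence tests and to stronger data , it may be useful to recall how a convergence order is estimated in practice . the short sketch below is a generic , purely illustrative python routine and is not part of the code discussed here : it compares solutions obtained at three resolutions restricted to common grid points , and for a fourth order scheme the estimated order should approach 4 in the convergent regime . the toy data mimic a fourth order error term on an invented test function .

import numpy as np

def convergence_order(f_h, f_h2, f_h4):
    # richardson - style order estimate from solutions on grids with spacings h , h/2 , h/4 ,
    # all restricted to common grid points ( arrays of equal shape )
    num = np.linalg.norm(f_h - f_h2)
    den = np.linalg.norm(f_h2 - f_h4)
    return np.log2(num / den)

# toy example : samples of sin(x) polluted by an error term scaling like h**4
x = np.linspace(0.0, 1.0, 17)
exact = np.sin(x)
runs = {h: exact + h**4 * np.cos(x) for h in (0.1, 0.05, 0.025)}
print(convergence_order(runs[0.1], runs[0.05], runs[0.025]))   # prints a value close to 4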
for convergence tests of the code see . while for the code continues beyond without problems , for all higher amplitudes the `` trivial '' gauge leads to code crashes before reaching . here by `` code crash '' we mean that computational values get undefined , e.g. the code produces `` not a number '' ( nan ) values . while the physical data still decay quickly in time , a sharp peak of the lapse develops outside of and crashes the code after bondi time for and for ( here is the initial bondi mass ) . in figs . [ lapse - n1 ] and [ lapse - n5 ] the lapse is plotted for runs with and . while for the lapse only shows significant growth after ( is located at , ) , for a very sharp peak grows outside of and crashes the code at . where does this rapid growth come from ? note that the initial conformal metric eq . ( [ eq : standard - h ] ) shows significant growth outside of . combined with the lapse this leads to a growth of the lapse toward the grid boundary . in the present case a positive feedback effect with a growth of metric components in time seems to be responsible for the eventual crash of the code . note that this feedback only takes place in a small region outside of ; further outward it is prevented by the transition to trivial evolution equations . fig . [ lapse - n5q ] shows a cure of the problem : a modified gauge source function ( ) with leads to a very smooth lapse ( and correspondingly also to smooth metric components ) . note that in fig . [ lapse - n5q ] , due to the different lapse , the point is _ not _ located at . the value of here is found by moderate tuning of to a best value ( significantly decreasing or increasing crashes the code before is reached ) . unfortunately , this modification of the lapse is not sufficient to achieve much higher amplitudes . as is increased , the parameter requires more fine tuning , which was only achieved for . for higher amplitudes the code crashes with significant differences in the maximal and minimal bondi time achieved , while the radiation still decays very rapidly and the news scales almost linearly . furthermore , the curvature quantities do not show excessive growth ; it is thus natural to assume that we are still in the weak - field regime , and the crash is not connected to the formation of an apparent horizon or singularity . these results suggest that in order to model a gauge source function that would allow one to evolve up to , one would need more than one parameter , e.g. at least 3 parameters for a non - isotropic ansatz such as or something similar . to tune 3 or more parameters for each evolution seems however computationally prohibitive . while some improvement is obviously possible through simple non - trivial models for the lapse ( or other gauge source functions ) , this approach seems very limited and more understanding will be necessary to find practicable gauges . an interesting line of research would be to follow the lines of ref . in order to find evolution equations for the gauge source functions which avoid the development of pathologies . a particular aim would be to find equations such that the resulting system of evolution equations is symmetric hyperbolic . fig . [ fig : news ] shows the news function from three different runs : for and , and for , .
the news from the runs for has been multiplied by a factor of 25 , which would exactly compensate the scaling with the amplitude in the linear regime . we see that the three curves line up very well initially . the line for and deviates significantly to larger values of the news when the run starts to get inaccurate , but at this time most of the physical radiation has already left the system . the curves from / and / line up perfectly until the value of the news drops below , where the curves level off at different values , due to numerical inaccuracy . fig . [ fig : bondimass ] shows the bondi mass for this situation , again with the curve scaled by a factor of 25 : again we see the quick decay of a sharp pulse of radiation . there is no particular structure except falloff at late times ; the deviation of the curves at late times seems to be caused by numerical inaccuracy , in particular in the computation of the bondi mass . in this section i will give a brief description of some computational aspects , such as the computational resources needed to carry out simulations in 3 spatial dimensions . computations of this scale rely on parallel processing , which means that the execution of our algorithms is spread over different cpus . from a simplistic point of view there are two ways to program for parallel execution : we only take care of parallelizing the algorithm but require that all cpus can access the same memory , or we both parallelize the algorithm and the data structures , and separate the total data into smaller chunks that fit into the local memory of each processor . the first alternative requires so - called shared memory machines , where the operating system and hardware take care of making data accessible to the cpus consistently , taking care of several layers of main and cache memory ( which gets increasingly difficult and expensive as the size of the machine is increased ) . the present code has been implemented using a shared memory programming model . the advantage is that this can generally be somewhat easier to program , and avoids overheads in memory . the disadvantage is the high cost of such systems , which makes them difficult to afford and thus nonstandard for most large academic parallel applications . the second alternative , usually referred to as distributed memory , requires more work to be done by the programmer , but more flexibly adapts to different kinds of machines such as clusters of cheap workstations commonly available in academic environments . while this approach usually implies a larger overhead in total memory requirements , speed and programming complexity , it is currently the only approach capable of scaling from small to very large simulations . for a general introduction to the issues of high performance computing , ref . provides a good starting point . so , how much memory do we need ? let us assume a run with grid points ( the size of the largest simulations carried out with the present code so far ) . the current implementation of the fourth order runge - kutta algorithm uses 4 time levels and a minimum of 62 grid functions ( 57 variables and 5 gauge source functions ) . in double precision this amounts to gbyte . temporary variables , information on geodesics and various overheads result in a typical increase of memory requirements by roughly . for 150 time steps ( approximately what it takes to reach for weak data ) the total amount of processed data then corresponds to roughly 1 terabyte !
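the counting behind these numbers is easily reproduced . the following back - of - the - envelope sketch ( python , illustrative only ) takes from the text the 4 time levels , the 62 grid functions and the 8 bytes per double precision number ; the number of grid points per direction and the overhead factor are placeholders , not the values used for the actual runs .

def memory_estimate(n, time_levels=4, grid_functions=62, bytes_per_value=8, overhead=1.5):
    # bytes held in memory for an n**3 grid ; the overhead factor for temporaries ,
    # geodesic data etc . is an assumed placeholder
    return time_levels * grid_functions * n**3 * bytes_per_value * overhead

def processed_data(n, steps=150, **kwargs):
    # crude estimate : every time step rewrites the full set of stored grid functions
    return memory_estimate(n, **kwargs) * steps

n = 128   # hypothetical number of grid points per direction
print(memory_estimate(n) / 2**30, "gbyte held in memory")
print(processed_data(n) / 2**40, "tbyte processed in 150 steps")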
if we halve the grid spacing , the allocated memory increases by a factor of ( neglecting overheads ) , the total amount of processed data by a factor of , and the total required cpu time also by a factor of , while the error _ reduces _ by a factor of * if * we are already in the convergent regime ! given that the biggest academic shared memory machines in germany have 16 gbyte of memory available ( the aei s origin 2000 and the hitachi sr8000 at lrz in munich ) , this shows that the margin for increasing resolution is currently quite small . such an increase in resolution will however be necessary to resolve physically interesting situations with more structure , such as a black hole , or two merging black holes . a move toward distributed memory processing will therefore be likely in the long run . the current software standard for distributed ( scalable ) computing is mpi ( message passing interface ) . unfortunately , writing large scale sophisticated codes in mpi is very time consuming . however , several software packages are available which introduce a software layer between the application programmer and mpi , and thus significantly reduce the effort to write parallel applications . two prime examples are the cactus computational toolkit and petsc . while petsc is a general purpose tool developed at argonne national laboratory as a parallel framework for numerical computations , cactus has been developed at the albert einstein institute with numerical relativity in mind . while petsc offers more support for numerical algorithms , in particular for parallel elliptic solvers , cactus already contains some general numerical relativity functionality like apparent horizon finders but no support for generic numerical algorithms . apart from its numerical relativity flavor , the cactus computational toolkit also has the advantage of broad support for parallel i / o and large scale 3d visualization . the ability to successfully mine tens or hundreds of gigabytes of data for relevant features is paramount to successful simulations in 3d . among the essential problems in writing and maintaining large scientific codes are the software engineering aspects and the control of complexity . in other words , codes should be reasonably documented and maintainable . for large scientific codes written and maintained by part - time - programmer scientists this poses a significant challenge . writing a clear , modular code that can be understood , maintained and extended to suit new scientific needs requires a good deal of design and planning ahead . for an introduction to software engineering issues see , e.g. , . another important issue for scientific codes is flexibility . being able to do good science often depends on the ability to easily change algorithms , equations , discretization schemes etc . without having to restructure the code and without a high risk of introducing new bugs . in the present case , examples of the need to try different things would be experiments with different evolution equations ( e.g. metric versus frame formalism ) , different boundary treatments or different elliptic solvers . the 3d numerical simulations performed so far show that the evolutions are _ numerically _ stable and quite robust . however , one of the main problems in numerical relativity is the stability of the constraint propagation : while the constraints _ do _ propagate when they are satisfied identically initially , this assumption does not hold for numerical simulations .
on the contrary , it seems to be a quite typical observation that the constraints diverge exponentially if the evolution does not start at the constraint surface . preliminary results exhibit this behavior of resolution - independent exponential growth associated with a violation of the constraints also for the conformal approach . one of the major goals for the future thus has to be the improvement of the understanding of the constraint propagation equations , and a corresponding modification of the evolution equations ( see ref . for previous work in this direction ) . this is essentially an analytical problem , but will certainly require the numerical testing of ideas . another area where new developments are necessary on the analytical side along with numerical testing is the problem of finding gauges that prevent pathologies like unnecessarily strong gradients . ideally one would want to keep the symmetric hyperbolic character of the evolution system while allowing for a maximum of flexibility in writing down evolution equations for the gauge source functions . well - posedness of the evolution equations is important but by far not sufficient for numerical purposes . while well - posedness unfortunately has not yet been shown rigorously for many formulations used in numerical relativity , another important task seems to be to improve the understanding of the non - principal part of the equations , including their nonlinearities , in order to be able to construct numerically well - conditioned algorithms . the third area where significant progress seems necessary on the analytical side is the construction of initial data . problems with the current algorithm , which necessitates divisions by zero and an ad - hoc extension beyond , have not yet been resolved . a possible road toward resolving these problems has been outlined by butscher in this volume . an important role in improving the analytical understanding and in setting up numerical experiments will be played by the utilization of simplifications . particularly important are spacetime symmetries and perturbative studies . a particularly interesting case to be studied is actually minkowski space . besides being an important case for code testing , it is used in current investigations to learn more about gauges and the stability of constraint propagation . more complicated are general spherically symmetric spacetimes . in the vacuum case , this only leaves the kruskal spacetime aside from minkowski space but understanding the gauge problem for kruskal spacetime is an important milestone toward long - time black hole simulations . moreover , spherical symmetry provides a natural testing ground for all kinds of new ideas , e.g. of how to treat the appearance of singularities , of how to treat the unphysical region , numerical methods , etc . an alternative route to simplification , which has been very successful in numerical relativity , is perturbative analysis , e.g. with minkowski or schwarzschild backgrounds . in the context of compactification this has been carried out numerically with characteristic codes in ; some of the problems that showed up there are likely to be relevant also for the conformal approach . what can we expect from the conformal approach in terms of physics results ? where can we expect contributions to our understanding of general relativity ?
one of the most important features of the conformal approach is that it excels at radiation extraction without ambiguities , and at least in principle enables numerical codes to study the global structure of spacetimes describing isolated systems . as has been demonstrated in this paper , in some weak field regime the code works well with relatively simple choices of gauge , and could be used to investigate some of the above problems . it could also provide a very clean way to study nonlinear deviations from linear predictions . for strong fields , in particular one or two black holes , the problem is much more difficult . an even more difficult problem is the investigation of the structure of singularities . in the spherically symmetric case this has been achieved by hbner , but it is not clear whether these methods can be carried over to the generic case without symmetries , where the structure of the singularity has to be expected to be much more complicated . what is the roadmap for the future ? as far as 3d simulations are concerned , i believe that one should try to go from relatively well controlled weak data to stronger data and try to identify and solve problems as they come up . in parallel , it will be important to study simplified situations , like spacetimes with symmetries or linear perturbations , with a mixture of analytical and numerical techniques . both lines of research will hopefully improve our understanding of issues associated with choosing the gauge source functions and controlling the growth of constraints . future 3d codes , aimed at producing novel physical results , will also require a significant effort devoted to `` computational engineering '' , since flexible and solidly written codes are an absolute necessity for good computational science ! these well known problems plaguing 3d numerical relativity will have to be addressed and solved in the conformal approach in order to harvest its benefits . developing the conformal approach to numerical relativity into a mature tool poses an important challenge for mathematical relativity : not only is the problem hard and in need of long - term investment , it also requires merging sophisticated mathematical analysis with computational engineering . the aim is to produce a solid handle on exciting new physics , and some of the physics will even be accessible to experiments . the author thanks h. friedrich , b. schmidt , m. weaver and j. frauendiener for helpful discussions and explanations of their work , c. lechner and j. thornburg for a careful reading of the manuscript , and p. hbner for giving me access to his codes and results , and for support in the early stages of my work on this subject . o. brodbeck , s. frittelli , p. hbner and o. reula , j. math . phys . 40 ( 1999 ) 909 - 923 ; m. alcubierre , g. allen , b. bruegmann , e. seidel and wai - mo suen , phys . rev . d 62 ( 2000 ) 124011 ; g. yoneda and h. shinkai , class . quantum grav . 18 ( 2001 ) 441 - 462 ; f. siebel and p. hbner , phys . rev . d 64 ( 2001 ) 024021 . s. balay , w. d. gropp , l. curfman mcinnes and b. f. smith , `` petsc users manual '' , technical report anl-95/11 , 2001 ; s. balay , w. d. gropp , l. curfman mcinnes and b. f. smith , `` efficient management of parallelism in object oriented numerical software libraries '' , in modern software tools in scientific computing , ed . by e. arge , a. m. bruaset and h. p. langtangen , birkhauser press , 1997 ; http://www.mcs.anl.gov/petsc .
|
this talk reports on the status of an approach to the numerical study of isolated systems with the conformal field equations . we first describe the algorithms used in a code which has been developed at the aei in the last years , and discuss a milestone result obtained by hbner . then we present more recent results as examples to sketch the problems we face in the conformal approach to numerical relativity and outline a possible roadmap toward making this approach a practical tool .
|
the basic principle upon which all experimental searches for a neutron electric dipole moment ( edm ) employing stored ultracold neutrons ( ucn ) are based concerns measurements of the neutrons larmor spin precession frequencies in parallel ( ) and anti - parallel ( ) magnetic ( ) and electric ( ) fields , here , and denote the neutron s magnetic and electric dipole moments , respectively .a value for , or a limit on , is then deduced from a comparison of the measured values of and .the frequencies and are typically determined either from sequential measurements in a single volume , or from simultaneous measurements in separate volumes .therefore , a central problem to all neutron edm experiments concerns the determination of the value of the magnetic field averaged over the single or separate volumes , especially in the presence of temporal fluctuations and/or spatial variations in the field .an elegant solution providing for real - time monitoring of the magnetic field is to deploy a so - called `` co - magnetometer '' , whereby an atomic species with no edm ( or , at least , one known to be significantly smaller than the neutron edm ) co - habitates together with the stored ucn the fiducial volume . the general idea is then to carry out a measurement of the co - magnetometer atoms larmor spin precession frequency in the magnetic field , from which the temporal dependence of the _ scalar magnitude _ of the magnetic field averaged over the fiducial volume is then deduced .thus , a co - magnetometer provides for a real - time , _ in situ _ measurement of the _ scalar magnitude _ , which is especially important for detecting any shifts in correlated with the reversal of the direction of relative to . however , there are many optimization parameters and systematic effects in neutron edm experiments associated with the _ vector components _ of the magnetic field , , or , equivalently , the field gradients . for example , the longitudinal and transverse spin relaxation times , and , the values of which contribute to a determination of an experiment s statistical figure - of - merit , depend , among other parameters , on the field gradients . as another example, the dominant systematic uncertainty in the most recent published limit on resulted from the so - called `` geometric phase '' false edms of the neutron and the co - magnetometer atoms , both of which are functions of the field gradients . despite the importance of knowledge of the field gradients in neutron edm experiments , the key point here is that a co - magnetometer does not , in general , provide for a real - time , _ in situ _ measurement of the field gradients . nor is it practical or feasible to carry out direct _ in situ _ measurements of the field components or field gradients in an experiment s fiducial volume with some probe after the experimental apparatus has been assembled . 
however , the situation is not that grim , as it has been shown that it may be possible to extract some particular field gradients from measurements of the spin relaxation times coupled with measurements of the neutrons and co - magnetometer atoms trajectory correlation functions , and also ( under various assumptions on the symmetry properties of the magnetic field profile ) from a comparison of the neutron s and co - magnetometer atoms precession frequencies and their center - of - mass positions in the magnetic field . the concept we propose to employ for a real - time determination of the interior vector field components , and thus the field gradients , is a completely general method based on boundary - value techniques which does not require any assumptions on the symmetry properties ( or lack thereof ) of the field . the basic idea is to perform measurements of the field components on the surface of a boundary surrounding the experiment s fiducial volume , and then solve ( uniquely ) for the values of the field components in the region interior to this boundary via standard numerical methods . although the physics basis of the concepts we discuss in this paper is certainly not original ( and likely known since the origins of electromagnetic theory ) , to our knowledge this concept has not been suggested for use in a neutron edm experiment , although it certainly has been suggested in other contexts ( e.g. , ) ; nevertheless , we believe the discussion in this paper will be of value to those engaged in neutron edm experiments . the remainder of this paper is organized as follows . in secs . [ sec : boundary_value_problem ] and [ sec : discretization ] we discuss the boundary - value problem under consideration and its applicability to neutron edm experiments . we then show examples from numerical studies of this problem in sec . [ sec : examples ] for the geometry of the neutron edm experiment to be conducted at the spallation neutron source , the concept of which is based on the pioneering ideas of golub and lamoreaux . we then study the specifications ( e.g. , precision ) on a vector field probe in sec . [ sec : specs ] . finally , we conclude with a brief summary in sec . [ sec : summary ] . we begin by considering , as shown schematically in fig . [ fig : boundary_value_problem_schematic ] , a closed three - dimensional boundary surface surrounding the fiducial volume of an experiment , which is situated within an arbitrary magnetic field ( i.e. , no assumptions on the symmetry properties of the field are necessary ) . our starting point is the fundamental equations of magnetostatics , which in si units are and , where . if we assume that the volume enclosed by the boundary surface contains : ( 1 ) no sources of currents , such that the current density everywhere inside of the boundary ; and ( 2 ) no sources of magnetization , such that the magnetization everywhere inside of the boundary , it then follows that . from this , we immediately see , via application of the general vector identity , that the magnetic field ( and , thus , each of its components ) satisfies a laplace equation , everywhere inside of the boundary . is valid only if is expressed in terms of cartesian components . this equality does not hold in curvilinear coordinates . therefore , we will use cartesian coordinates exclusively hereafter . ] alternatively , under the above assumptions that and everywhere inside of the boundary , in a manner analogous to charge - free electrostatics ( i.e.
, and ) we can define a magnetic scalar potential which satisfies . from this , it then immediately follows that imposing the requirement leads to a laplace equation for the scalar potential , everywhere inside of the boundary . therefore , in summary , we see that each of the vector field components and the scalar potential satisfy a laplace equation everywhere inside of the boundary , provided the boundary encloses no current or magnetization . solutions to the laplace equation , subject to boundary values , are well known ( e.g. , ) ; thus , determination of the interior field components or the scalar potential from exterior boundary - value measurements is a solvable problem . we now consider the laplace equation for one of the vector components , . if boundary values for are known everywhere on the surface of the boundary , the interior values of everywhere inside the surface of the boundary can , in principle , be obtained from an integral equation over the boundary values and the appropriate dirichlet green s function for the geometry in question . thus , for the continuous version of the dirichlet boundary - value problem posed here , it is theoretically possible to solve for the interior vector components everywhere inside the boundary , provided their boundary values are known everywhere on the surface . such a solution will be unique . note that a limitation of the dirichlet problem we have formulated is that it requires boundary values for the same component everywhere on the surface , with the solution to the problem only yielding interior values for ( i.e. , no information on where can be deduced ) . next we consider the laplace equation for the magnetic scalar potential , . the scalar potential is , of course , not a physical observable ; however , the vector components of the gradient , , are , of course , physical observables . let denote a unit vector normal to the surface of the boundary . if we then assume that boundary values for the normal derivative of the scalar potential , , or , equivalently , the negative of the normal component of the magnetic field , , are known everywhere on the surface of the boundary , the interior values of can , in principle , be obtained from an integral equation over the boundary values and the appropriate neumann green s function for the geometry in question . thus , for the continuous version of the neumann boundary - value problem posed here , it is theoretically possible to solve for the interior scalar potential everywhere inside the boundary , provided the normal components of the magnetic field are known everywhere on the surface . unlike the dirichlet problem , the solution to the neumann problem for the interior scalar potential will not be unique , as the value of the scalar potential is arbitrary up to a constant ; however , the resulting interior magnetic field components , , will be unique . note that in contrast to the dirichlet problem , the solution to the neumann problem determines all of the interior vector components of . exterior measurements ( i.e. , outside the fiducial volume ) of the scalar magnitude of the magnetic field , , are certainly useful as they provide for important monitoring of the magnetic field in the vicinity of the fiducial volume . however , we note that such measurements do not provide for a rigorous determination of either the interior scalar magnitude or the interior vector components of , as the scalar magnitude does not satisfy a laplace equation .
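this last point can be made explicit with a one - line computation ( summation over repeated cartesian indices implied ) . since each cartesian component is harmonic inside the boundary , \nabla^2 B_i = 0 , one has

\nabla^2 |\mathbf{B}|^2 = \nabla^2 ( B_i B_i ) = 2 B_i \nabla^2 B_i + 2 ( \partial_j B_i ) ( \partial_j B_i ) = 2 ( \partial_j B_i ) ( \partial_j B_i ) \geq 0 ,

so the squared magnitude is subharmonic rather than harmonic ; it satisfies a laplace equation only where all of the field gradients vanish .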
therefore , any attempt to extract information on the interior field gradients from exterior measurements of will necessarily require various assumptions to be made on the symmetry properties of the magnetic field .in particular , fitting exterior measurements of to a multipole expansion in spherical harmonics in order to determine interior values of is not completely rigorous , as such a multipole expansion is the solution for a quantity which necessarily obeys the laplace equation .in the ( hypothetical ) continuous versions of the dirichlet and neumann boundary - value problems formulated above , it was assumed that the boundary values were known everywhere on the surface ; this leads to the well - known analytic solutions for the interior values in terms of integral equations of green s functions .of course , such a problem can not be realized in practice , as the boundary values can only be determined at discrete measurement points .fortunately , numerical solutions to discretized versions of the dirichlet and neumann boundary - value problems are well known ( e.g. , ) . in the discretized versions of the boundary - value problems we will consider hereafter , we will assume , as indicated schematically in fig .[ fig : boundary_value_problem_discretized ] , that the boundary values ( i.e. , for the dirichlet problem or for the neumann problem ) are known over a regularly - spaced grid on the surface of the boundary , with the ( constant ) spacing between adjacent points along the , , and directions denoted , , and .note that it is not necessary to employ uniform grid spacings .also , it is not necessary to employ `` flat '' boundary surfaces , such as the sides of a rectangular box , although , for simplicity , the illustrative examples we will consider in the next section do utilize a rectangular box geometry .for example , one could discretize the surface of a torus , which would be a natural candidate for a boundary surface surrounding the interior of an experiment located within a circular accelerator storage ring . finally , it is also worthwhile to note that the boundary - value problem must be cast in three dimensions .for example , the solution to the laplace equation need not satisfy in two dimensions .therefore , an attempt to simplify the dirichlet and neumann boundary - value problems for and , respectively , from three to two dimensions will not , in general , yield a valid solution ., , and along their respective directions .the boundary values are assumed to be known over a grid of points on the surface of the boundary ( filled circles ) .the solution is then desired over the grid of interior points ( open circles ) . ] in general , there exists a multitude of techniques for the numerical solution of the laplace equation subject to boundary values ( see , e.g. , ) , and we do not endeavor to discuss these techniques here . we employed the finite differencing method of relaxation ( examples of techniques include jacobi iteration , gauss - seidel iteration , successive overrelaxation , etc . ) , with the results in the next section obtained using approximations to the second - order partial derivatives valid to , i.e. , here the notation denotes the solution to the laplace equation at some grid point indexed by the integers . 
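as a concrete illustration of such a relaxation scheme , the sketch below solves the dirichlet problem for a single field component on a uniform cubic grid with a plain jacobi iteration . it is a generic textbook implementation in python , not the c++ code used for the results presented in the next section ; the grid size , the tolerance and the harmonic test field are arbitrary choices made only for this example .

import numpy as np

def jacobi_dirichlet(b, n_iter=20000, tol=1e-10):
    # relax the interior of a 3d array toward a solution of the laplace equation ,
    # keeping the outermost layer ( the measured boundary values ) fixed ;
    # assumes equal grid spacings in the three directions
    u = b.copy()
    for _ in range(n_iter):
        new = u.copy()
        new[1:-1, 1:-1, 1:-1] = (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                                 u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                                 u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]) / 6.0
        if np.max(np.abs(new - u)) < tol:
            return new
        u = new
    return u

# toy test : boundary values sampled from the exactly harmonic component b_x = x * y
x, y, z = np.meshgrid(*(np.linspace(-1.0, 1.0, 21),) * 3, indexing="ij")
bx = x * y
guess = bx.copy()
guess[1:-1, 1:-1, 1:-1] = 0.0                          # keep only the "measured" boundary layer
print(np.max(np.abs(jacobi_dirichlet(guess) - bx)))    # small residual error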
note that to this order , if one takes , one obtains the well - known result for in terms of the values of the solution at its six nearest neighbor grid points , .\label{eq : laplace_grid_solution}\end{aligned}\ ] ]as a validation of our concept , we now show results from numerical studies of the dirichlet boundary - value problem for and the neumann boundary - value problem for . the example geometry we will consider is that of the neutron edm experiment to be conducted at the spallation neutron source . in particular , this geometry consists of two rectangular measurement volumes , which together span our definition of a rectangular fiducial volume of dimensions 25 cm ( cm cm ) 10 cm ( cm cm ) 40 cm ( cm cm ) .we then employ a rectangular boundary surface of dimensions 80 cm ( cm cm ) 80 cm ( cm cm ) 100 cm ( cm cm ) .thus , the volume enclosed by the boundary surfaces is significantly larger ( factor of 64 ) than the fiducial volume , with the boundary surfaces all located cm from the fiducial volume .the magnetic field we will consider is a calculated field map of a modified coil coil described in , but without its surrounding cylindrically - concentric ferromagnetic shield , as such a calculation would have required significantly more computing time .we chose to use this field for our example because the field shape is not trivial ; as can be seen later , the field shape is quartic near the origin .] under development for this particular experiment .the orientation of the coil is such that the fiducial volume is centered on the coil s center , with the magnetic field oriented along the -direction at the center of the fiducial volume . as our first numerical example, we considered a dirichlet boundary - value problem for each of the field components in a geometry where the spacing between the grid points is cm , thus resulting in 44,802 densely - spaced grid points on the surface of the boundary .as per the discussion in sec .[ sec : discretization ] , we assumed the values of were known at all of the 44,802 boundary grid points .we then proceeded to solve for the values of at all of the 617,859 interior grid points .the computing time required for iterations of our c++ code on a linux machine was 93 minutes .obviously , implementing such a densely - spaced configuration would not be possible or practical in an actual experiment ; instead , the point of this hypothetical example was to first demonstrate the validity of the boundary - value technique for the determination of the interior field components .for the densely - spaced grid of cm ( see text for details ) . calculated interior values for along the - , - , and -axes are shown in panels ( a ) , ( b ) , and ( c ) as the filled circles , andare compared with the exact values shown as the solid curves .panel ( d ) shows a histogram of the fractional error in the calculated interior values of for all of the interior grid points . ] in panel ( a ) and the - , - , and -components of in panel ( b ) for the calculated interior values of at all of the interior grid points from the dirichlet boundary - value problem for the densely - spaced grid of cm ( see text for details ) . ] the results of this exercise are shown in fig .[ fig : case1_figures ] . 
panels ( a ) , ( b ) , and ( c ) compare the calculated interior values of along the - , - , and -axes with the exact values from the field map , and panel ( d ) then shows histograms of the fractional errors [ defined to be ( calculated exact)/exact ] in the calculated interior values of at all of the interior points .the agreement between the calculated and exact values is seen to be excellent , thus clearly demonstrating the validity of our proposed concept . as a further check , fig .[ fig : case1_del ] shows histograms of values for and determined from the calculated interior values ( using the centered difference approximation ) .as expected , the distributions are centered on zero , consistent with the initial assumptions of the problem .and resulting values for for the densely - spaced grid of cm ( see text for details ) .values for along the - , - , and -axes as calculated from the interior values for are shown in panels ( a ) , ( b ) , and ( c ) as the filled circles , and are compared with the exact values shown as the solid curves .panel ( d ) shows a histogram of the fractional error in the calculated interior values of for all of the interior grid points . ]we now consider the neumann boundary - value problem for for the same densely - spaced grid configuration employed in the discussion of the dirichlet boundary - value problem in section [ sec : examples_dirichlet_dense ] .again , as per the discussion in sec . [ sec : discretization ], we assumed the values of were known at all of the boundary grid points .we then proceeded to solve for the values of at all of the interior grid points .the computing time required for iterations of our c++ code on a linux machine for the solution of the neumann problem for was 24 minutes , a little better than 1/3 of that required for solution of the dirichlet problem for all three components of .the results of this exercise are shown in fig .[ fig : case5_figures ] . as before ,panels ( a ) , ( b ) , and ( c ) compare the calculated interior values of with the exact values from the field map , and panel ( d ) then shows histograms of the fractional errors in the calculated interior values of for all of the interior grid points .again , the agreement between the calculated and exact values for ( i.e. 
, the dominant field component ) is excellent , again clearly demonstrating the validity of the neumann concept .however , the fractional errors in the calculated values of and are larger than those for ; this is the result of a loss of precision in calculating these significantly smaller components via derivatives of .we now consider more realistic examples of the dirichlet boundary - value problem in which the grids are ( significantly ) more coarsely spaced than those of the previous examples .first , calculated interior values of along the -axis are shown in panel ( a ) of fig .[ fig : coarse_figures ] for a grid with spacings = ( 10 cm , 10 cm , 50 cm ) , which would require measurements of 194 boundary values .the agreement between the calculated and exact values is still quite good .second , panel ( b ) shows results from the same calculation for an even coarser grid with spacings = ( 10 cm , 40 cm , 50 cm ) , requiring measurements of 74 boundary values .the agreement is now somewhat degraded , although the calculated and exact values still agree to the level of % .a drawback of this latter coarse grid is that the number of interior points are limited to those shown in panel ( b ) because and are simply half of the extent of the fiducial volume in their respective directions . for twocoarsely spaced boundary value grids .panel ( a ) shows calculated interior values of along the -axis ( filled circles ) compared with the exact values ( solid curves ) for a grid with = ( 10 cm , 10 cm , 50 cm ) .panel ( b ) is for a grid with = ( 10 cm , 40 cm , 50 cm ) .note that we do not show values for along the - or -axes , as there are very few interior grid points along these dimensions given the relatively large and grid spacings . ]the computing time required for iterations of our codes was seconds for both of these coarse grids .measurements of boundary values in an experiment with a vector field probe will , of course , be subject to noise and/or systematic errors such as uncertainties in the probe s positioning or its calibration . to study the specifications that a probe must satisfy in order to determine the interior field components to a certain precision , we employ a simple model in which we subject each boundary value to a gaussian fluctuation parameter , where is randomly sampled from a gaussian with a mean of zero and a particular width . this simple model accounts for noise fluctuations in the measurement of and also errors in the probe s positioning , the latter of which can be interpreted as equivalent to an error in the measurement at the nominal position .we considered two examples of and which we illustrate within the context of the two coarse grids discussed previously in section [ sec : examples_dirichlet_coarse ] ( i.e. , those with = ( 10 cm , 10 cm , 50 cm ) and ( 10 cm , 40 cm , 50 cm ) , yielding 194 and 74 boundary values , respectively ) . to provide context for an experiment, a of would correspond to a gaussian width of gauss on a gauss field value , where gauss is the typical scale of field magnitudes in recent and future neutron edm experiments .for each of these values , we generated ten random configurations of boundary values in which each of the boundary values was subjected to a gaussian fluctuation according to eq .( [ eq : gaussian_fluctuation ] ) .the impact of these fluctuations on the calculated interior values is shown in fig . 
[fig : sigma_noise ] , where we show the calculated interior values of along the -axis for each of the ten random configurations . as can be seen there ,if the spread in the calculated interior values is rather large ( and the sign of the gradient that would be deduced would be incorrect in some cases ) , whereas if the spread is small and any differences in the values of deduced from the calculated interior values would be small . along the -axis for ten random configurations of boundary values ( indicated by the different data symbols ) generated according to the gaussian fluctuation model discussed in the text .panel ( a ) : grid spacing of = ( 10 cm , 10 cm , 50 cm ) and gaussian fluctuation parameter .panel ( b ) : ( 10 cm , 10 cm , 50 cm ) and .panel ( c ) : ( 10 cm , 40 cm , 50 cm ) and . panel ( d ) : ( 10 cm , 40 cm , 50 cm ) and . in panels ( b ) and ( d ) the different data symbols all overlap each other . ]thus , within the context of this simple model , we conclude that a reasonable specification on a vector field probe is that the relative uncertainties in the probes measurements of the boundary values must be of order and any errors in the positioning of the probes must not result in measured field values that differ by more than from what their values would be at their nominal positions .in summary , we have proposed a new concept for determining the interior magnetic field vector components in neutron edm experiments via dirichlet and neumann boundary - value techniques , whereby exterior measurements of the field components over a closed boundary surface surrounding the experiment s fiducial volume uniquely determine the interior field components via solution of the laplace equation .we suggest that this technique will be of particular use to neutron edm experiments after they have been assembled and are in operation , when it is no longer possible to perform an in - situ field map .we also emphasize that this technique is certainly not limited in its applicability to neutron edm experiments . indeed , this technique could be of interest of any experiment requiring monitoring of vector field components within some well defined boundary surface .some examples of this could be experimental searches for neutron - antineutron ( ) oscillations along a flight path or experiments utilizing storage rings for measurements of the muon or the proton edm .the concept for an experiment would be to mount field probes along the neutron flight path in the region interior to the magnetic shielding , and for the storage ring experiments on the beam vacuum pipe in the region interior to the storage ring magnets and electrodes .however , as relevant for neutron edm experiments , we do note that one limitation of our boundary - value concept was discussed in sec .[ sec : examples_dirichlet_coarse ] : that is , the number of interior points at which the interior fields can be calculated ( and , thus , the resolution at which the field gradients can be determined ) is limited by the number of grid points ( or , equivalently , the grid spacing ) at which the boundary values are measured . 
in a forthcoming work , we will explore an alternative technique of fitting measurements of exterior field components to a multipole expansion of the field components or the magnetic scalar potential .such a technique is valid because the field components and the scalar potential satisfy the laplace equation , and an expansion in multipoles is a valid solution to the laplace equation .this technique , via the nature of a `` fit '' ( as compared to the direct solution of the laplace equation in the boundary - value technique discussed in the present work ) , holds the potential for a determination of the interior field components everywhere within the fiducial volume .+ * acknowledgments * + we thank m. p. mendenhall for providing the field map of the coil we used in our example calculations .we thank c. crawford , b. filippone , r. golub , and j. miller for several valuable suggestions regarding the development of the concept , r. w. pattie , jr . for suggesting the possible applicability of the concept to experiments , and r. golub , m. e. hayden , s. k. lamoreaux , and n. nouri for comments on the manuscript .this work was supported in part by the u. s. department of energy office of nuclear physics under award no .de - fg02 - 08er41557 .for example , see : w. h. press , s. a. teukolsky , w. t. vetterling , and b. p. flannery , _ numerical recipes , the art of scientific computing _ , third edition ( cambridge university press , 2007 ) , chapter 20 .we also consulted : j. d. hoffman , _ numerical methods for engineers and scientists _, second edition ( marcel dekker , inc . , 2001 ) , chapter 10 .
|
we propose a new concept for determining the interior magnetic field vector components in neutron electric dipole moment experiments . if a closed three - dimensional boundary surface surrounding the fiducial volume of an experiment can be defined such that its interior encloses no currents or sources of magnetization , each of the interior vector field components and the magnetic scalar potential will satisfy a laplace equation . therefore , if either the vector field components or the normal derivative of the scalar potential can be measured on the surface of this boundary , thus defining a dirichlet or neumann boundary - value problem , respectively , the interior vector field components or the scalar potential ( and , thus , the field components via the gradient of the potential ) can be uniquely determined via solution of the laplace equation . we discuss the applicability of this technique to the determination of the interior magnetic field components during the operating phase of neutron electric dipole moment experiments when it is not , in general , feasible to perform direct _ in situ _ measurements of the interior field components . we also study the specifications that a vector field probe must satisfy in order to determine the interior vector field components to a certain precision . the technique we propose here may also be applicable to experiments requiring monitoring of the vector magnetic field components within some closed boundary surface , such as searches for neutron - antineutron oscillations along a flight path or measurements in storage rings of the muon anomalous magnetic moment and the proton electric dipole moment . interior magnetic field vector components , interior magnetic field gradients , boundary - value methods , electric dipole moment experiments
|
the existence of correlation among price returns of different stocks traded in a financial market is a well - known fact . correlation based clustering procedures have been pioneered in the economic literature . recently , a new correlation based clustering procedure has been introduced in the econophysics literature . it has been shown that this correlation - based clustering procedure and some variants of it are able to filter out information which has a direct economic interpretation from the correlation coefficient matrix . in particular , the clustering procedure is able to detect clusters of stocks belonging to the same or closely related economic sectors starting from the time series of returns only . in this paper we will consider the problem of the stability associated with the minimum spanning tree ( mst ) obtained both from price return and volatility data . by investigating the stability of the value of the degree ( number of links of the stock in the mst ) of each stock we will show that the volatility mst has less stable values of stock degree than the price return mst . moreover , by analysing the degree of elements of msts we will be able to show that the degree has a slow dynamics with a correlation time of several years . the paper is organized as follows . in sect . 2 we illustrate our results about the mst of volatility time series of a set of stocks . in sect . 3 we comment on the stability of stock degree in the msts of price return and volatility time series and we discuss the time - scale associated with the slow dynamics of the degree of msts . in sect . 4 we briefly draw our conclusions . we investigate the statistical properties of cross - correlation among volatility and among price return time series for the most capitalized stocks traded in us equity markets during a year time period . our data cover the whole period ranging from january to april ( trading days ) . in the present study we investigate daily data . in particular , we use for our analysis the open , close , high and low price recorded for each trading day for each considered stock . the stocks were selected by considering the capitalization recorded at august 31 , . starting from the daily price data , we compute both the daily price return and the daily volatility for each stock . price returns are computed from the daily closing prices , whereas the daily volatility is estimated from the intraday price range normalized by the daily price , i.e. from the highest and lowest price recorded during the trading day . the correlation based clustering procedure introduced in ref . is based on the computation of the subdominant ultrametric distance associated with a metric distance that one may obtain from the correlation coefficient . the subdominant ultrametric distance can be used to obtain a hierarchical tree and an mst . the selection of the subdominant ultrametric distance for a set of elements whose similarity measure is a metric distance is equivalent to considering the single linkage clustering procedure . further details about this clustering procedure can be found in .
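for readers unfamiliar with the construction , a minimal sketch of how an mst and the degree of each stock can be obtained from a matrix of return time series is given below . it is a generic python illustration and not the original implementation : it uses the metric distance d_ij = sqrt ( 2 ( 1 - rho_ij ) ) commonly associated with the correlation coefficient rho_ij in this literature , and a plain prim algorithm on the full distance matrix ; the toy data at the end are random numbers .

import numpy as np

def mst_edges(dist):
    # prim 's algorithm on a full distance matrix ; returns the n - 1 edges of the
    # minimum spanning tree as ( i , j ) pairs
    n = dist.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best, parent = dist[0].copy(), np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j))
        in_tree[j] = True
        closer = dist[j] < best          # in - tree nodes may be updated too , but they stay masked above
        best[closer], parent[closer] = dist[j][closer], j
    return edges

def correlation_mst(returns):
    # returns : array of shape ( n_days , n_stocks )
    rho = np.clip(np.corrcoef(returns, rowvar=False), -1.0, 1.0)
    dist = np.sqrt(2.0 * (1.0 - rho))
    edges = mst_edges(dist)
    degree = np.zeros(rho.shape[0], dtype=int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return edges, degree

rng = np.random.default_rng(0)
toy = rng.standard_normal((250, 10))     # 250 fake "days" of returns for 10 "stocks"
edges, degree = correlation_mst(toy)
print(degree, degree.sum())              # degrees sum to 2 * ( n_stocks - 1 ) = 18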
in the present investigation , we first aim to consider the mst associated with the correlation coefficient matrix of volatility time series . it should be noted that there is an essential difference between price return and volatility probability density functions . in fact the probability density function of price return is an approximately symmetrical function whereas the volatility probability density function is significantly skewed . bivariate variables whose marginals are very different from gaussian functions can have linear correlation coefficients which are bounded in a subinterval of [ -1 , 1 ] . since the empirical probability density function of volatility is very different from a gaussian , the use of a robust nonparametric correlation coefficient is more appropriate for quantifying volatility cross - correlation . in fact the volatility msts obtained starting from a spearman rank - order correlation coefficient are more stable with respect to the dynamics of the degree of stocks than the ones obtained starting from the linear ( or pearson s ) correlation coefficient . the clustering procedure based on the spearman rank - order correlation coefficient uses the volatility rank time series to evaluate the subdominant ultrametric distance . the time series of the rank value of volatility are obtained by substituting the volatility values with their ranks . then one evaluates the linear correlation coefficient between each pair of the rank time series and , starting from this correlation coefficient matrix , one obtains the associated mst . an example of the mst obtained starting from the volatility time series and by using the spearman rank - order correlation coefficient is shown in fig . ( [ fig1 ] ) . this mst is shown for illustrative purposes and it has been computed by using the widest window available in our database ( trading days ) . a direct inspection of the mst shows the existence of well characterized clusters . examples are the cluster of technology stocks ( hon , hwp , ibm , intc , msft , nsm , orcl , sunw , txn and uis ) and the cluster of energy stocks ( arc , chv , cpb , hal , mob , slb , xon ) . as already observed in the mst obtained from the price return time series , the volatility mst of fig . ( [ fig1 ] ) shows the existence of stocks that behave as reference stocks for a group of other stocks . examples are ge ( general electric co ) , jpm ( jp morgan chase & co ) and dd ( du pont de nemours co. ) . a natural question arises whether or not the structure of the mst depends on the particular time period considered . this point has been considered briefly in and it has also been recently addressed in . in the present investigation we compute an mst for both volatility and price return for each trading day . this is done by considering the records of the time series delimited by a sliding time window of length days ranging from day to day . for example , by using a time window with we approximately compute msts in our sets of data . in each mst , each stock has a number of other stocks that are linked to it . this number is usually referred to as the degree of the stock . by using the above procedure , we obtain a daily historical time series of degree for each of the considered stocks both for price return and for volatility .
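the sliding - window procedure just described can be summarized in a short sketch ( python , illustrative only ; it relies on the spearman correlation and minimum spanning tree routines available in scipy , and the window length , the stock set and any data cleaning used in the actual analysis are not reproduced here ) .

import numpy as np
from scipy.stats import spearmanr
from scipy.sparse.csgraph import minimum_spanning_tree

def degree_series(volatility, window=250):
    # volatility : array of shape ( n_days , n_stocks ) ; for every sliding window the
    # mst of the spearman rank - order correlation matrix is built and the degree of
    # each stock in that tree is recorded
    n_days, n_stocks = volatility.shape
    degrees = []
    for start in range(n_days - window + 1):
        rho, _ = spearmanr(volatility[start:start + window])
        dist = np.sqrt(2.0 * (1.0 - np.clip(rho, -1.0, 1.0)))
        tree = minimum_spanning_tree(dist)            # keeps the n_stocks - 1 tree edges
        adj = tree.toarray() > 0.0
        degrees.append(adj.sum(axis=0) + adj.sum(axis=1))
    return np.array(degrees)                          # shape ( n_windows , n_stocks )

rng = np.random.default_rng(1)
toy = np.abs(rng.standard_normal((1200, 8)))          # fake daily volatilities
print(degree_series(toy, window=250).shape)           # ( 951 , 8 )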
in the following , we focus our attention on the analysis of such degree time series to assess the time stability of msts of price return and volatility and to infer conclusions about the time dynamics of the stock degree of msts . each time series of the degree of each stock has about 3000 records . this number of records is not enough to detect reliably the autocorrelation function of the degree time series for each stock . hence , we decide to investigate the properties of the degree time series obtained by joining the 93 degree time series of the individual stocks . this is done separately for each set of data ( price return or volatility ) and for each value of the time window . from the time series obtained as described above we compute the autocorrelation function . the comparison of the results obtained for price return with the ones obtained from volatility time series allows us to estimate the stability of msts of these two important financial indicators . for the sake of clarity we will first consider the two sets of data separately and then we will comment on similarities and differences between them . in fig . ( [ fig2 ] ) we show autocorrelation functions of different time series of the degree . the analyzed msts are computed by investigating the linear correlation coefficient which is present among price returns and by using three different time windows . specifically , we use time windows of size , and trading days . in all cases the autocorrelation function shows two distinct regimes for low and high time values . the crossover between the two regimes is detected at ( see arrows in fig . ( [ fig2 ] ) ) . for low time values the autocorrelation function of the degree approximately decays exponentially ( a straight line in the semilogarithmic plot of fig . ( [ fig2 ] ) ) . this behavior reflects the fact that the sliding windows used to compute msts contain overlapping time periods of records . for this reason , when a day of high correlation among several pairs of stocks occurs , a memory of this event remains within a time interval of length . this behavior is therefore simply related to the methodology used by us to compute the degree time series . more relevant information is obtained from the degree autocorrelation at times equal to or longer than the time window size . for the autocorrelation function assumes a non negligible value approximately equal to , and for a time window of , and trading days , respectively . these results indicate that the information carried by the degree of the stocks in the msts is robust in spite of the presence of some noise dressing . the increase of the value of the autocorrelation function at , which is detected by increasing , indicates that the noise dressing decreases when increases . for times longer than the degree autocorrelation function approximately decays exponentially with a very long time - scale . for instance , in the case of an exponential best fit of the autocorrelation function is obtained with a time - scale of trading days . this time - scale approximately corresponds to 2.8 calendar years . it should be noticed that the autocorrelation functions obtained for different values of are approximately parallel to each other and follow an exponential function with the same time - scale .
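for concreteness , the following sketch shows one way to carry out this kind of estimate : compute the sample autocorrelation of a long series , discard lags shorter than the window length , and fit an exponential to the tail . it is a python illustration on a synthetic ar(1) series with a built - in 700 day time scale , not on the actual degree data .

import numpy as np

def autocorrelation(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var) for k in range(max_lag)])

def decay_time(acf, lag_min, lag_max):
    # fit log( acf ) linearly between lag_min and lag_max and return the exponential
    # time scale in the same units as the lag ( trading days here )
    lags = np.arange(lag_min, lag_max)
    slope, _ = np.polyfit(lags, np.log(acf[lag_min:lag_max]), 1)
    return -1.0 / slope

rng = np.random.default_rng(2)
tau, n = 700.0, 200000
phi = np.exp(-1.0 / tau)
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = noise[0]
for i in range(1, n):                      # ar(1) process with acf( k ) = exp( -k / tau )
    x[i] = phi * x[i - 1] + noise[i]
acf = autocorrelation(x, 1200)
print(decay_time(acf, 100, 1000))          # roughly 700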
before we move to consider the analogous results obtained for volatility time series we wish to point out that the results presented in fig .( [ fig2 ] ) are essentially independent on the methodology used to compute the correlation coefficient matrix .in fact we obtain the same results when we use the spearman rank - order correlation coefficient .= 5.0 in in fig .( [ fig3 ] ) we show the results of the same analysis performed on volatility time series . in the case of volatility the msts are obtained starting from the spearman rank - order correlation coefficient .in fact if we compute msts and degree time series by using a linear correlation coefficient the results are much less reliable and the degree autocorrelation function for seems to be more affected by noise .msts obtained starting from the spearman rank - order correlation coefficient are more statistically robust and the degree autocorrelation function shows the same general behavior as in the case of price return time series .however some important differences are detected .the first one concerns the amount of correlation observed at .the values of the autocorrelation function are approximately equals to , and for a time window of , and trading days respectively .these results indicate that the information carried by the degree of the stocks in the msts of volatility is less stable over time than the one detected in the msts of price returns .moreover , the increase of the value of the degree autocorrelation time series with the time window is much less pronounced for volatility than for price return .another difference concerns the slow decay of the correlation observed for . for large values of decay of the degree autocorrelation function is again approximately exponential but the time - scale obtained by best fitting the autocorrelation function with an exponential function is trading days in this time interval .this value is approximately double than the time - scale detected in the analysis of price return .the parallel investigation of msts obtained from ( i ) price return and ( ii ) volatility time series of a set of stocks allows us to conclude that the stability of the degree of msts is lower for volatility time series than for price return time series . for price return timeseries , the stability of stock degree dynamics increases when the time window used to compute msts is increased .a similar but much weaker trend is also observed in msts obtained starting from volatility time series .the dynamics of the degree of stocks in the msts is of statistical nature with a time memory which is approximately close to 700 trading days for price return and 1500 trading days for volatility time series .the time - scale of the degree of msts of price return is much less than the maximal time - scale of our investigation and therefore it should not be significantly affected by it . on the other handthe time - scale found for the degree of msts obtained starting from volatility time series is just , which implies that the detection of this specific time - scale could be an artifact of the procedure we used to compute the degree autocorrelation function .however , it should be noted that , the detected value of trading days is certainly a lower bound of the true time - scale of the degree autocorrelation function of volatility mst . 
in summary, relevant economic information is stored in the degree of msts obtained from price return and volatility time series. the dynamics of the stock degree is statistically more stable for price return than for volatility msts, and it is slow, characterized by a time-scale of the order of 3 calendar years for price return msts and longer than 6 calendar years for volatility msts. the authors thank infm and miur for financial support. this article is part of the miur-firb project "cellular self-organization nets and chaotic nonlinear dynamics to model and control complex systems". g.b. acknowledges financial support from the fet open project cosin ist-2001-33555.
|
we investigate the time series of the degree of minimum spanning trees obtained by using a correlation based clustering procedure which starts from (i) asset return and (ii) volatility time series. the minimum spanning tree is obtained at different times by computing correlations among time series over a time window of fixed length. we find that the minimum spanning tree of asset return is characterized by stock degree values which are more stable in time than the ones obtained by analyzing a minimum spanning tree computed starting from volatility time series. our analysis also shows that the degree of stocks has very slow dynamics, with a time-scale of several years in both cases. pacs: 89.75fb; 89.75hc; 89.65gh. keywords: econophysics, correlation based clustering, volatility.
|
data obfuscation is a mechanism for hiding private data by using misleading , false , or ambiguous information with the intention of confusing an adversary .a data obfuscation mechanism acts as a noisy information channel between a user s private data ( secret ) and an untrusted observer .the noisier this channel is , the higher the privacy of the user will be .we focus on _ user - centric _ mechanisms , in which each user independently perturbs her secret before releasing it .note that we are not concerned with database privacy , but with the privacy issues of releasing a single sensitive data sample ( which however could be continuously shared over time ) .for example , consider a mobile user who is concerned about the information leakage through her location - based queries .in this case , obfuscation is the process of randomizing true locations so that the location - based server only receives the user s perturbed locations . by using obfuscation mechanisms ,the _ privacy _ of a user and her _ utility _ experience are at odds with each other , as the service that the user receives is a function of what she shares with the service provider .there are problems to be addressed here .one is how to design an obfuscation mechanism that protects privacy of the user and imposes a _ minimum _ utility cost .another problem is how to _ guarantee _ the user s privacy , despite the lack of a single best metric for privacy . regarding utility optimization , we define utility loss of obfuscation as the degradation of the user s service - quality expectation due to sharing the noisy data instead of its true value .regarding privacy protection , there are two major metrics proposed in the literature ._ differential _ privacy limits the information leakage through observation .but , it does not reflect the absolute privacy level of the user , i.e. , what actually is learned about the user s secret .so , user would not know how close the adversary s estimate will get to her secret if she releases the noisy data , despite being sure that the relative gain of observation for adversary is bounded ._ distortion _ privacy ( inference error ) metric overcomes this issue and measures the error of inferring user s secret from the observation .this requires assumption of a prior knowledge which enables us to quantify absolute privacy , but is not robust to adversaries with arbitrary knowledge .thus , either of these metrics alone is incapable of capturing privacy as a whole .the problem of optimizing the tradeoff between privacy and utility has already been discussed in the literature , but notably for differential privacy in the context of statistical databases . regarding user - centric obfuscation mechanisms , solves the problem of maximizing distortion privacy under a constraint on utility loss .the authors construct the optimal adaptive obfuscation mechanism as the user s best response to the adversary s optimal inference in a bayesian zero - sum game . in the same context, solves the opposite problem , i.e. , optimizing utility but for differential privacy . in both papers ,the authors construct the optimal solutions using linear programming .differential and distortion metrics for privacy complement each other .the former is sensitive to the likelihood of observation given data .the latter is sensitive to the joint probability of observation and data .thus , by guaranteeing both , we encompass all the defense that is theoretically possible . 
in this paper, we model and solve the optimal obfuscation mechanism that : ( i ) minimizes utility loss , ( ii ) satisfies differential privacy , and ( iii ) guarantees distortion privacy , given a public knowledge on prior leakage about the secrets .we measure the involved metrics based on separate distance functions defined on the set of secrets .we model prior leakage as a probability distribution over secrets , that can be estimated from the user s previously released data .ignoring such information leads to overestimating the user s privacy and thus designing a weak obfuscation mechanism ( against adversaries who include such exposed information in their inference attack ) .a protection mechanism for distortion privacy metric can be designed such that it is optimal against a particular inference algorithm ( e.g. , bayesian inference as privacy attacks ) .but , by doing so , it is not guaranteed that the promised privacy level can be achieved in practice : an adversarial observer can run inference attacks that are optimally tailored against the very obfuscation mechanism used by the user ( regardless of the algorithm that the user assumes a priori ) .in fact , the adversary has the upper hand as he infers the user s secret ( private information ) _after _ observing the output of the obfuscation mechanism .thus , the obfuscation mechanisms must _ anticipate _ the adaptive inference attack that will follow the observation .this enables us to design an obfuscation mechanism that is independent of the adversary s inference algorithm . to address this concern, we adapt a game - theoretic notion of privacy for designing optimal obfuscation mechanisms against adaptive inference .we formulate this game as a stackelberg game and solve it using linear programming .we then add the differential privacy guarantee as a constraint in the linear program and solve it to construct the optimal mechanism .the result of using such obfuscation mechanism is that , not only the perturbed data samples are indistinguishable from the true secret ( due to differential privacy bound ) , but also they can not be used to accurately infer the secret using the prior leakage ( due to distortion privacy measure ) . to the best of our knowledge, this work is the first to construct utility maximizing obfuscation mechanisms with such formal privacy guarantees .we illustrate the application of optimal protection mechanisms on a real data set of users locations , where users want to protect their location privacy against location - based services .we evaluate the effects of privacy guarantees on utility cost .we also analyze the robustness of our optimal obfuscation mechanism against inference attacks with different algorithms and background knowledge .we show that our joint differential - distortion mechanisms are robust against adversaries with optimal attack and background knowledge .moreover , the utility loss is at most equal to the utility loss of differential or distortion privacy , separately .the novelty of this paper in the context of user - centric obfuscation is twofold : * we construct optimal obfuscation mechanisms that provably limit the user s privacy risk ( i.e. 
, by guaranteeing the user s distortion privacy ) against _ any _ inference attack , with minimum utility cost .* we design obfuscation mechanisms that optimally balance the tradeoff between utility and joint distortion - differential privacy .the solution is robust against adversary with arbitrary knowledge , yet it guarantees a required privacy given the user s estimation of the prior information leakage .this paper contributes to the broad area of research that concerns designing obfuscation mechanisms , e.g. , in the context of quantitative information flow , quantitative privacy in data sharing systems , as well as differential privacy .the conflict between privacy and utility has been discussed in the literature .we build upon prevalent notions of privacy and protect it with respect to information leakage through both observation ( differential privacy ) and posterior inference ( distortion privacy ) while optimizing the tradeoff between utility and privacy .we also formalize this problem and solve it for user - centric obfuscation mechanisms , where it s each individual user who perturbs her secret data before sharing it with external observers ( e.g. , service providers ) .the problem of perturbing data for differential and distortion privacy , separately , and optimizing their effect on utility has already been discussed in the literature .original metric for differential privacy measures privacy of output perturbation methods in statistical databases . assuming two statistical databases to be neighborif they differ only in one entry , and design utility maximizing perturbation mechanisms for the case of counting queries . in , authors propose different approaches to designing perturbation mechanisms for counting queries under differential privacy .however , presents some impossibility results of extending these approaches to other types of database queries . under some assumptions about the utility metric, shows that the optimal perturbation probability distribution has a symmetric staircase - shaped probability density function . differential privacy metric using generic distance functions on the set of secrets .some extensions of differential privacy also consider the problem of incorporating the prior knowledge into its privacy definition .the most related paper to our framework , in this domain , is where the authors construct utility - maximizing differentially private obfuscation mechanisms using linear programming .the authors prove an interesting relation between utility - maximizing differential privacy and distortion - privacy - maximizing mechanisms that bound utility , when distance functions used in utility and privacy metrics are the same .this , however , can not guarantee distortion privacy for general metrics . the optimal differentially private mechanisms , in general , do not incorporate the available knowledge about the secret while achieving differential privacydistortion privacy , which evaluates privacy as the inference error , is a follow - up of information - theoretic metrics for anonymity and information leakage .this class of metrics is concerned with what can be inferred about the true secret of the user by combining the observation ( of obfuscated information ) and prior knowledge .the problem of maximizing privacy under utility constraint , assuming a prior , is proven to be equivalent to the user s best strategy in a zero - sum game against adaptive adversaries . 
with this approach, one can find the optimal strategies using linear programming. in fact, linear programming is the most efficient solution for this problem. however, if we want to guarantee a certain level of privacy for the user and maximize her utility, the problem cannot be modeled as a zero-sum game anymore, and there has been no solution for it so far. we formalize this game, and construct a linear programming solution for these privacy games too. regarding the utility metric, we consider the expected distance between the observation and the secret as the utility metric. the distance function can depend on the user and also on the application. in the case of applying obfuscation over time, we need to update the user's estimation of the prior leakage according to what has been shared by the user. we might also need to update the differential privacy budget over time. in this paper, we model one-time sharing of a secret, assuming that the prior leakage and the differential privacy budget are properly computed and adjusted based on the previous observations. our problem is also related to the problem of adversarial machine learning and the design of security mechanisms, such as intelligent spam detection algorithms, against adaptive attackers. it is also similar to the problem of placing security patrols in an area to minimize the threat of attackers, and of faking location-based queries to protect against localization attacks. the survey explores more examples of the relation between security and game theory. in this section, we define the different parts of our model. we assume a user shares her data through an information sharing system in order to obtain some service (utility). we also assume that users want to protect their sensitive information while they share their data with untrusted entities. for example, in the case of sharing location-tagged data with a service provider, a user might want to hide the exact visited locations, their semantics, or her activities that can be inferred from the visited locations. we refer to the user's sensitive information as her _secret_. to protect her privacy, we assume that the user obfuscates her data before sharing or publishing it. figure [fig:framework] illustrates the information flow that we assume in this paper. the input to the protection mechanism is a secret, where is the set of all possible values that it can take (for example, the locations that the user can visit, or the individuals that she is acquainted with). let the prior leakage be the probability distribution over the values of the secret that reflects the data model and the a priori exposed information about the secret. this probability distribution is estimated by the user as the predictability of the user's secret given her exposed information in the past. thus, any time the user shares some (obfuscated) information, she needs to update this probability distribution. this is how we incorporate the correlation between the user's data shared over time. we assume that a user wants to preserve her privacy with respect to this secret. to protect her privacy, a user obfuscates her secret and shares an inaccurate version of it through the system. we assume that this obfuscated data is observable through the system. we consider a generic class of obfuscation mechanisms, in which the observable is sampled according to the following probability distribution.
thus, we model the privacy-preserving mechanism as a noisy channel between the user and the untrusted observer. this is similar to the model used in quantitative information flow and quantitative side-channel analysis. the output, i.e., the set of observables, can in general be a member of the powerset of the set of secrets. as an example, in the most basic case the set of observables coincides with the set of secrets, i.e., the protection mechanism can only perturb the secret by replacing it with another possible secret's value. this can happen through adding noise to the secret. in a more generic case, the members of the set of observables can contain a subset of secrets. for example, the protection mechanism can generalize a location coordinate by reducing its granularity.

(figure [fig:framework]: information flow from the user's secret, through the obfuscation mechanism, to the observable and the adversary's estimate; the utility distance is measured between the secret and the observable, and the privacy distance between the secret and the estimate.)

users incur a utility loss due to obfuscation. let the distance function determine the utility cost (information usefulness degradation) due to replacing a secret with an observable. the cost function depends on the application of the shared information, on the specific service that is provided to the user, and also on the user's expectations. we compute the expected utility cost of a protection mechanism as the expectation of this distance over the secrets and the observables; we can also compute the worst (maximum) utility cost over all possible secrets. in this work, we do not aim to determine which metrics best represent utility loss for different types of services or users. we only assume that the designer of the optimal obfuscation mechanism is provided with such a utility function, for example by constructing it according to the application, or by learning it automatically from the user's preferences and application profile. we stated that the user wants to protect her privacy with respect to her secret against untrusted observers. to be consistent with this, we define the adversary as an entity who aims at finding the user's secret by observing the outcome of the protection mechanism and minimizing the user's privacy with respect to her privacy sensitivities. for any observation, the inference algorithm then determines a probability distribution over the possible secrets, i.e., the probability of each candidate estimate being the true secret of the user. the goal of the inference algorithm is to invert a given protection mechanism and estimate the secret. the error of the adversary in this estimation process determines the effectiveness of the inference algorithm, which is captured by the distortion privacy metric. as stated above, the user's privacy and the adversary's inference error are two sides of the same coin. we define the privacy gain of the user, for a given secret, as the distance between two data points: the secret itself and its a posteriori estimation.
the distance function is determined by the sensitivity of the user towards each secret when estimated as .a user would be less worried about revealing , if the portrait of her secret in the eyes of adversary is an estimate with a large distance .this distance function is defined by the user .it could be a semantic distance between different values of secrets to reflect the privacy risk of on user when her secret is .usually , the highest risk is associated with the case where the estimate is equal to the secret .however , sometimes even wrong estimates can impose a high risk on the user , for example when they leak information about the semantic of the secret .we compute the user privacy obtained through a protection mechanism , with respect to a given inference algorithm , for a specific secret as by averaging this value over all possible secrets , we compute the expected distortion privacy of the user as this metric shows the average estimation error , or how distorted the reconstructed user s secret is .thus , we refer to it as the _ distortion _ privacy metric . what associates a semantic meaning to this metric is the distance function .many distance functions can be defined to reflect distortion privacy .this depends on the type of the secret and to the sensitivity of the user .for example , if the user s secret is her age , function could be the absolute distance between two numbers .if the secret is the user s location , function could be a euclidean distance between locations , or their semantic dissimilarity .if the secret is the movies that she has watched , function could be the jaccard distance between two sets of movies .the privacy that is achieved by an obfuscation mechanism can be computed with respect to the information leakage through the mechanism , regardless of the secret s inference .for example , the differential privacy metric , originally proposed for protecting privacy in statistical databases , is sensitive only to the difference between the probabilities of obfuscating multiple secrets to the same observation ( which is input to the attack ) .according to the original definition of differential privacy , a randomized function ( that acts as the privacy protection mechanism ) provides -differential privacy if for all data sets and , that differ on at most one element , and all , the following inequality holds .differential privacy is not limited to statistical databases .it has been used in many different contexts where various types of adjacency relations capture the context dependent privacy .a typical example is edge privacy in graphs .it has also been proposed for arbitrary distance function between secrets .this notion can simply be used for measuring information leakage .it has been shown that differential privacy imposes a bound on information leakage . and , this is exactly why we are interested in this metric .let be a distinguishability metric between .a protection mechanism is defined to be differentially private if for all secrets , where , and all observables , the following inequality holds . in this paper , we use a generic definition of differential privacy , assuming arbitrary distance function on the secrets . in this form ,a protection mechanism is differentially private if for all secrets , with distinguishability , and for all observables , the following holds . 
in fact , the differential privacy metric guarantees that , given the observation , there is not enough convincing evidence to prefer one secret to other similar ones ( given ) .in other words , it makes multiple secret values indistinguishable from each other .the problem that we address in this paper is to find an optimal balance between privacy and utility , and to construct the protection mechanisms that achieve such optimal points .more precisely , we want to construct utility - maximizing obfuscation mechanisms with joint differential - distortion privacy guarantees . the problem is to find a probability distribution function such that it minimizes utility cost of the user , on average , or , alternatively , over all the secrets under the user s privacy constraints .let be the minimum desired distortion privacy level .the user s average distortion privacy is guaranteed if the obfuscation mechanism satisfies the following inequality . where is the optimal inference attack against .let be the differential privacy budget associated with the minimum desired privacy of the user , and be the distinguishability threshold .the user s privacy is guaranteed if satisfies the following inequality . or , alternatively ( following s definition of differential privacy ) : in this paper , we mainly use the latter definition , but make use of the former one as the basis to reduce the computation cost of optimizing differential privacy ( see appendix [ sec : approx ] ) .the flow of information starts from the user where the secret is generated .the user then selects a protection mechanism , and obfuscates her secret according to its probabilistic function .after the adversary observes the output , he can design an optimal inference attack against the obfuscation mechanism to invert it and estimate the secret .we assume the obfuscation mechanism is not oblivious and is known to the adversary .this gives the adversary the upper hand against the user in their conflict .so , designing an obfuscation mechanism against a fixed attack is always suboptimal .the best obfuscation mechanism is the one that _ anticipates _ the adversary s attack .thus , the obfuscation mechanism should be primarily designed against an _ adaptive _ attack which is tailored to each specific obfuscation mechanism .so , by assuming that the adversary designs the best inference attack against each protection mechanism , the user s goal ( as the defender ) must be to design the obfuscation mechanism that maximizes her ( privacy or utility ) objective against an adversary that optimizes the conflicting objective of guessing the user s secret .the adversary is an entity assumed by the user as the entity whose objective s exactly the opposite of the user s .so , we do not model any particular attacker but the one that minimizes user s privacy according to distance functions and . 
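to fix notation, the three quantities used throughout — expected utility cost, distortion privacy against a given inference, and the metric-based differential privacy condition — can be encoded as follows. this is a sketch under our own encoding, not code from the paper: the mechanism is a row-stochastic matrix p[x, o], the attack a matrix q[o, xhat], psi is the prior over secrets, and d_q, d_p, d_chi are distance matrices on the finite sets of secrets and observables.

```python
import numpy as np

def expected_utility_cost(psi, p, d_q):
    """expected d_q(secret, observable) under prior psi[x] and mechanism p[x, o]."""
    return float(np.sum(psi[:, None] * p * d_q))

def distortion_privacy(psi, p, q, d_p):
    """expected d_p(secret, estimate) when the attack q[o, xhat] is applied to p[x, o]."""
    # joint probability of (x, o, xhat) is psi[x] * p[x, o] * q[o, xhat]
    return float(np.einsum('x,xo,oy,xy->', psi, p, q, d_p))

def satisfies_differential_privacy(p, d_chi, eps, tol=1e-9):
    """check p[x, o] <= exp(eps * d_chi[x, x']) * p[x', o] for all pairs of secrets and all observables."""
    n = p.shape[0]
    for x in range(n):
        for xp in range(n):
            if np.any(p[x] > np.exp(eps * d_chi[x, xp]) * p[xp] + tol):
                return False
    return True
```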
for each obfuscation mechanismthere is an inference attack that optimizes the adversary s objective and leads to a certain privacy and utility payoff for the user .the optimal obfuscation mechanism for the user is the one that brings the maximum payoff for her , against the mechanism s corresponding optimal inference attack .enumerating all pairs of user - attacker mechanisms to find the optimal obfuscation function is infeasible .we model the joint user - adversary optimization problem as a leader - follower ( stackelberg ) game between the user and the adversary .the user leads the game by choosing the protection mechanism , and the adversary follows by designing the inference attack .the solution to this game is the pair of user - adversary best response strategies and which are mutually optimal against each other .if the user implements , we have already considered the strongest attack against it .thus , is robust against _ any _ algorithm used as inference attack . for any secret ,the strategy space of the user is the set of observables . for any observable ,the strategy space of the adversary is the set of secrets ( all possible adversary s estimates ) .for a given secret , we represent a mixed strategy for the user by a vector , where .similarly , a mixed strategy for the adversary , for a given observable is a vector , where .note that the vectors and are respectively the conditional distribution functions associated with an obfuscated function for a secret and an inference algorithm for an observable .let and be the sets of all mixed strategies of the user and the adversary , respectively . a member vector of sets or with a for the component and zeros elsewhere is the pure strategy of choosing action .for example , an obfuscation function for which and is the pure strategy of exclusively and deterministically outputting observable for secret .thus , the set of pure strategies of a player is a subset of mixed strategies of the player . in the case of the distortion privacy metric, the game needs to be formulated as a _bayesian stackelberg game_. in this game , we assume the probability distribution on the secrets and we find and that create the equilibrium point .if user deviates from this strategy and chooses , there would be an inference attack against it such that leads to a lower privacy for the user , i.e. , is optimal . in the case of a differential privacymetric , as the metric is not dependent to the adversary s inference attack , the dependency loop between finding optimal and is broken .nevertheless , it is still the user who plays first by choosing the optimal protection mechanism . in the following sections , we solve these games and provide solutions on how to design the optimal user - adversary strategies .assume that the nature draws secret according to the probability distribution .given , the user draws according to her obfuscation mechanism , and makes it observable to the adversary .given observation , the adversary draws according to his inference attack .we assume that is known to both players .we want to find the mutually optimal : the solution of the bayesian stackelberg privacy game .to this end , we first design the optimal inference attack against any given protection mechanism . 
this will be the _best response _ of the adversary to the user s strategy .then , we design the optimal protection mechanism for the user according to her objective and constraints , as stated in section [ sec : problem ] .this will be the user s best utility - maximizing strategy that anticipates the adversary s best response .the adversary s objective is to minimize ( the user s privacy and thus ) the inference error in estimating the user s secret . given a secret , the distance function determines the error of an adversary in estimating the secret as .in fact , this distance is exactly what a user wants to maximize ( or put a lower bound on ) according to the distortion privacy metric .we compute the expected error of the adversary as therefore , we design the following linear program , through which we can compute the adversary s inference strategy that , given the probability distribution and obfuscation , minimizes his expected error with respect to a distance function .[ eq : lp : adversary : bayesian ] under the constraint that the solution is a proper conditional probability distribution function . in the next subsection, we will show that the optimal deterministic inference ( that associates one single estimate with probability one to each observation ) results in the same privacy for the user .alternative ways to formulate this problem is given in appendix [ sec : optimalattack ] .in this case , we assume the user would like to minimize her utility cost under a ( lower bound ) constraint on her privacy .therefore , we can formulate the problem as [ eq : lp : user : utility - privacy : nested ] however , solving this optimization problem requires us to know the optimal against , for which we need to know as formulated in .so , we have two linear programs ( one for the user and one for the adversary ) to solve .but , the solution of each one is required in solving the other .this optimization dependency loop reflects the game - theoretic concept of _ mutual best response _ of the two players .this game is a _ nonzero - sum stackelberg game _ as the user ( leader player ) and adversary ( follower player ) have different optimization objectives ( one maximizes utility , and the other minimizes privacy ) .we break the dependency loop between the optimization problems using the game - theoretic modeling , and we prove that the user s best strategy can be constructed using linear programming . given a probability distribution , the distance functions and , and the threshold , the solution to the following linear program is the optimal protection strategy for the user , which is the solution to with respect to adversary s best response .[ eq : lp : user : utility - privacy : game ] see appendix [ sec : proof ] .in this section , we design optimal differentially private protection mechanisms .we solve the optimization problems for maximizing utility under privacy constraint .we design the following linear program to find the user strategy that guarantees user differential privacy , for a maximum privacy budget , and minimizes the utility cost of the obfuscation mechanism .[ eq : lp : user : utility - privacy : diff : mult ] or , alternatively , for a distinguishability bound , we can solve the following .[ eq : lp : user : utility - privacy : diff : mult2 ] mechanisms designed based on distortion and differential privacy protect the user s privacy from two different angles . 
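as a rough sketch of how these linear programs could be assembled in practice, the snippet below builds the best-response attack and the utility-minimizing mechanism with a distortion bound omega and, optionally, a differential bound eps, using scipy's linprog. the variable layout, the auxiliary variables y[o] that linearize the best-response (minimum over estimates) constraint, and all function names are our own; the paper specifies the programs only abstractly, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def best_response_attack(psi, p, d_p):
    """optimal (deterministic) inference: for each observable, the estimate minimizing expected error."""
    err = np.einsum('x,xo,xy->oy', psi, p, d_p)      # err[o, xhat] = sum_x psi[x] p[x,o] d_p[x,xhat]
    return err.argmin(axis=1)

def optimal_obfuscation(psi, d_q, d_p, d_chi, omega, eps=None):
    """
    variables: p[x, o] (row-major) followed by y[o], one auxiliary variable per observable.
      minimize   sum_{x,o} psi[x] d_q[x,o] p[x,o]                       (expected utility cost)
      subject to sum_x psi[x] d_p[x,xhat] p[x,o] >= y[o]    for all o, xhat
                 sum_o y[o] >= omega                        (distortion privacy vs. best response)
                 p[x,o] <= exp(eps d_chi[x,x']) p[x',o]     for all x, x', o   (if eps is given)
                 sum_o p[x,o] = 1,  p >= 0,  y >= 0
    """
    X, O = d_q.shape
    n = X * O + O
    pidx = lambda x, o: x * O + o

    c = np.zeros(n)
    for x in range(X):
        for o in range(O):
            c[pidx(x, o)] = psi[x] * d_q[x, o]

    A_ub, b_ub = [], []
    # distortion constraints: y[o] - sum_x psi[x] d_p[x,xhat] p[x,o] <= 0
    for o in range(O):
        for xhat in range(X):
            row = np.zeros(n)
            row[X * O + o] = 1.0
            for x in range(X):
                row[pidx(x, o)] = -psi[x] * d_p[x, xhat]
            A_ub.append(row); b_ub.append(0.0)
    # -sum_o y[o] <= -omega
    row = np.zeros(n); row[X * O:] = -1.0
    A_ub.append(row); b_ub.append(-omega)
    # differential-privacy constraints (optional): p[x,o] - exp(eps d_chi[x,x']) p[x',o] <= 0
    if eps is not None:
        for x in range(X):
            for xp in range(X):
                if x == xp:
                    continue
                g = np.exp(eps * d_chi[x, xp])
                for o in range(O):
                    row = np.zeros(n)
                    row[pidx(x, o)] = 1.0
                    row[pidx(xp, o)] = -g
                    A_ub.append(row); b_ub.append(0.0)
    # each row of p must be a probability distribution
    A_eq = np.zeros((X, n)); b_eq = np.ones(X)
    for x in range(X):
        A_eq[x, x * O:(x + 1) * O] = 1.0

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:X * O].reshape(X, O), res.fun      # the mechanism and its utility cost
```

by construction, setting omega to zero leaves only the differential constraint (the differential-only program), passing eps=None drops it and leaves the distortion-only program, and keeping both gives the joint mechanism discussed next.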
in general , for arbitrary and , there is no guarantee that a mechanism with a bound on one metric holds a bound on the other .distortion privacy metric reflects the _ absolute _ privacy of the user , based on the posterior estimation on the obfuscated information .differential privacy metric reflects the _ relative _ information leakage of each observation about the secret .however , it is not a measure on the extent to which the observer , who already has some knowledge about the secret from the previously shared data , can guess the secret correctly .so , the inference might be very accurate ( because of the background knowledge ) despite the fact that the obfuscation in place is a differentially - private mechanism . as distortion and differentialmetrics guarantee different dimensions of the user s privacy requirements , we respect both in a protection mechanism .this assures that not only the information leakage is limited , but also the absolute privacy level is at the minimum required level .thanks to our unified formulation of privacy optimization problems as linear programs , the problem of jointly optimizing and guaranteeing privacy with both metrics can also be formulated as a linear program .the solution to the following linear program is a protection mechanism that maximizes the user s utility and guarantees a minimum distortion privacy and a minimum differential privacy , given probability distribution and distance functions and and distinguishability metric .the value of the optimal solution is the utility cost of the optimal mechanism .[ eq : lp : user : utility - jointprivacy ] have implemented all our linear program solutions in a software tool that can be used to process data for different applications , in different settings . in this section, we use our tool to design privacy protection mechanisms , and also to make a comparison between different optimal mechanisms , i.e. , distortion , differential , and joint distortion - differential privacy preserving mechanisms .we study the properties of these mechanisms and we show how robust they are with respect to inference attack algorithms as well as to the adversary s knowledge on secrets .we also investigate their utility cost for protecting privacy .furthermore , we show that the optimal joint distortion - differential mechanisms are more robust than the two mechanisms separately . in appendix[ sec : approx ] , we discuss and evaluate approximations of the optimal solution for large number of constraints .we run experiments on location data , as today they are included in most of data sharing applications .we use a real data - set of location traces collected through the nokia lausanne data collection campaign .the location information belong to a km area .we split the area into cells .we consider location of a mobile user in a cell as her secret .hence , the set of secrets is equivalent to the set of location cells .we assume the set of observables to be the set of cells , so the users obfuscate their location by perturbation ( i.e. 
, replacing their true location with any location in the map ) .we run our experiments on randomly selected users , to see the difference in the results due to difference in user s location distribution based on users different location access profiles .we build for each user separately given their individual location traces , using maximum likelihood estimation ( normalizing the user s number of visits to each cell in the tarce ) .we assume a euclidean distance function for and .this reflects the sensitivity of user towards her location . by using this distance function for distortion privacy, we guarantee that the adversary can not guess the user s true location with error lower than the required privacy threshold ( ) . choosing euclidean distance function as the metric for distinguishability ensures that the indistinguishability between locations is larger for locations that are located closer to each other .we assume a hamming distortion function for ( i.e. , the utility cost is only if the user s location and the observed location are the same , otherwise the cost is ) .the utility metric can vary depending on the location - based sharing application and also the purpose for which the user shares her location . choosing the hamming function reflects the utility requirement of users who want to inform others about their current location in location check - in applications .we evaluate utility - maximizing optimal protection mechanisms with three different privacy constraints : * _ distortion privacy protection _ , . * _ differential privacy protection _ , . * _ joint distortion - differential privacy protection _ , .we compare the effectiveness of these protection mechanisms against inference attacks by using the distortion privacy metric .we consider two inference attacks : * _ optimal attack _ , . *_ bayesian inference attack _ , using the bayes rule : [ [ scenario-1 . ] ] _ scenario 1 . _+ + + + + + + + + + + + + our first goal is to have a fair comparison between optimal distortion privacy mechanism and optimal differential mechanism . to this end, we set the privacy parameter to . for each user and each value of , 1 .we compute the optimal differential privacy mechanism using .let be the optimal mechanism .we run optimal attack on , and compute the user s absolute distortion privacy as .we compute the optimal distortion privacy mechanism using . for this, we set the privacy lower - bound to .this enforces the distortion privacy mechanism to guarantee what the differential privacy mechanism provides .4 . we compute the optimal joint distortion - differential privacy mechanism using .we set the privacy lower - bounds to and for the differential and distortion constraints , respectively .we run optimal attack on both and , and compute the user s absolute distortion privacy as and , respectively .6 . 
as a baseline for comparison, we run bayesian inference attack on the three optimal mechanisms , , and .figure [ fig : privacy_loop_optimalattack ] shows the results of our analysis , explained above .distortion privacy is measured in km and is equivalent to the expected error of adversary in correctly estimating location of users .figure [ fig : privacy_loop_epsilon_vs_privacy_optimal ] shows how expected privacy of users decreases as we increase the value of the lower - bound on differential privacy .users have different secret probability distribution , with different randomness .however , as increases , expected error of adversary ( the location privacy of users ) converges down to below km .figure [ fig : privacy_loop_privacy_vs_cost_optimalattack ] plots the utility cost versus distortion privacy of each optimal protection mechanism .as we have set the privacy bound of the optimal distortion mechanism ( and of course the optimal joint mechanism ) to the privacy achieved by the optimal differential mechanism , we can make a fair comparison between their utility costs .we observe that the utility cost for achieving some level of distortion privacy is much higher for optimal differential and joint mechanisms compared with the optimal distortion mechanism .note that the utility cost of differential and joint mechanisms are the same .so , distortion privacy bound does not impose more cost than what is already imposed by the differential privacy mechanism .as we set to , the user s distortion privacy in using optimal distortion and optimal differential mechanism is the same , when we confront them with the optimal attack . in figure[ fig : privacy_loop_diff_vs_bayes_inferenceattack ] , however , we compare the effectiveness of these two mechanisms against bayesian inference attack . it is interesting to observe that the optimal differential mechanism is more robust to such attacks compared to the optimal distortion mechanisms .this explains the extra utility cost due to optimal differential mechanisms . .] . ] in figure [ fig : privacy_loop_inference_vs_optimal ] , we compare the effectiveness of bayesian inference attack and optimal attack . we show the results for all three optimal protection mechanisms .it is clear that optimal attack outperforms the bayesian attack , as users have a relatively higher privacy level under the bayesian inference .however , the difference is more obvious for the case of differential protection and joint protection mechanisms . the bayesian attack overestimates users privacy , as it ignores the distance function , whereas the optimal attack minimizes the expected value of over all secrets and estimates .[ [ scenario-2 . ] ] _ scenario 2 . _+ + + + + + + + + + + + + in this paper , we introduce the optimal joint distortion - differential protection mechanisms to provide us with the benefits of both mechanisms .figure [ fig : privacy_loop_privacy_vs_cost_optimalattack ] shows that the optimal joint mechanism is not more costly than the two optimal distortion and differential mechanisms .it also shows that it guarantees the highest privacy for a certain utility cost . to further study the effectiveness of optimal joint mechanisms , we run the following evaluation scenario .we design optimal differential mechanisms for some values of . 
and , we design optimal distortion mechanisms for some values of that are higher than the distortion privacy resulted from those differential privacy mechanisms .we also construct their joint mechanisms given the and parameters .figure [ fig : privacy_alljoint_joint_vs_diffbayes_optimalattack ] shows how the optimal joint mechanism adapts itself to guarantee the maximum of the privacy levels guaranteed by optimal bayesian and optimal differential mechanisms individually .this is clear from the fact that users privacy for the optimal joint mechanism is equal to their privacy for distortion mechanism ( that as we set in our scenario , they are higher than that of differential mechanisms ) . thus , by adding the distortion privacy constraints in the design of optimal mechanisms , we can further increase the privacy of users ( with the same utility cost ) that can not be otherwise achieved by only using differential mechanisms .[ [ scenario-3 . ] ] _ scenario 3 . _ + + + + + + + + + + + + + in order to further investigate the relation between the privacy ( and utility ) outcome of the optimal joint mechanism and that of individual differential or distortion privacy mechanisms , we run the following set of experiments on all the available user profiles . 1 . for any value of in , we compute the utility of optimal differential privacy mechanism as well as its privacy against optimal attack .2 . for any value of in , we compute the utility of optimal distortion privacy mechanism as well as its privacy against optimal attack . is dependent on and is the maximum value that the threshold can take ( beyond which there is no solution to the optimization problem ) .3 . for any value of in , and for any value of in ,we compute the utility and privacy of the optimal joint mechanism .figure [ fig : joint_vs_distdiff ] shows the results . by an experiment we refer to the comparison of privacy ( or utility ) of a joint mechanism ( with bounds , ) with the corresponding differential privacy mechanism ( with bound ) and the corresponding distortion privacy mechanism ( with bound ) .note that here the thresholds and are chosen independently as opposed to scenarios 1 ( and also 2 ) .we put the results of all the experiments next to each other in the x - axis .therefore , any vertical cut on the figure [ fig : joint_vs_distdiff ] s plots contain three points for privacy / utility of , , and .to better visualize the results , we have sorted all the experiments based on the privacy / utility of the joint mechanism .as the results show , the privacy achieved by the optimal joint mechanism is equal to the maximum privacy that each of the individual differential / distortion mechanisms provides separately .this means that the user would indeed benefit from including a distortion privacy constraint based on her prior leakage into the design criteria of the optimal obfuscation mechanism .this comes at no extra utility cost for the user , as the utility graph shows .in fact , the utility cost of an optimal joint mechanism is not additive and instead is the maximum of the two components , which is the differential privacy mechanism in all tested experiments .the reason behind this is that the differential privacy component makes the joint obfuscation mechanism robust to the case where the background knowledge of the adversary includes not only the prior leakage but also other auxiliary information available to him . 
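scenario 3 is essentially a grid sweep over the two privacy parameters; a sketch of that bookkeeping, reusing the hypothetical helpers from the earlier snippets, is below. the grids and the recorded tuple are illustrative — the exact parameter ranges used in the experiments are not reproduced here.

```python
import numpy as np

# reuses the hypothetical helpers sketched earlier:
#   optimal_obfuscation(psi, d_q, d_p, d_chi, omega, eps) -> (mechanism p, utility cost)
#   best_response_attack(psi, p, d_p), distortion_privacy(psi, p, q, d_p)

def privacy_vs_optimal_attack(psi, p, d_p):
    xhat = best_response_attack(psi, p, d_p)                 # deterministic best response
    q = np.zeros((p.shape[1], p.shape[0]))
    q[np.arange(q.shape[0]), xhat] = 1.0
    return distortion_privacy(psi, p, q, d_p)

def sweep(psi, d_q, d_p, d_chi, eps_grid, omega_grid):
    rows = []
    for eps in eps_grid:
        for omega in omega_grid:
            p, cost = optimal_obfuscation(psi, d_q, d_p, d_chi, omega, eps)
            rows.append((eps, omega, cost, privacy_vs_optimal_attack(psi, p, d_p)))
    return rows   # to be compared against the eps-only (omega=0) and omega-only (eps=None) variants
```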
(figure [fig:prior]: users' privacy under optimal attacks designed with a different prior than the one assumed by the protection mechanism; the red dots correspond to the cases where the probability distribution assumed in designing the protection mechanism is the same as the attacker's knowledge.)

when using the distortion metric in protecting privacy, we achieve optimal privacy given the user's estimated prior leakage, modeled by a probability distribution over the secrets. in the optimal attack against the various protection mechanisms, a real adversary makes use of a prior distribution over the secrets. in this subsection, we evaluate to what extent a more informed adversary can harm users' privacy beyond what is promised by the optimal protection mechanisms. note that, no matter what protection mechanism is used by the user, a more knowledgeable adversary will learn more about the secret. in this section, our goal is not to show this obvious fact, but to evaluate how robust our mechanisms are with respect to adversaries with different levels of knowledge accuracy. to perform this analysis, we consider a scenario in which the adversary's assumed prior, for each user, has a lower level of uncertainty than the prior used by the user. this can happen in the real world when an adversary obtains new evidence about a user's secret that is not used by the user for computing her prior. let this be the other version of the prior assumed by the adversary for a given user. for the sake of our analysis, we generate it by providing the adversary with more evidence about the most frequently visited locations, e.g., home and work. this is equivalent to the scenario in which the adversary knows the user's significant locations, e.g., where the user lives and works. the entropy of this prior is less than that of the user's prior, hence it contains more information about the user's mobility. we construct the protection mechanisms assuming the user's prior, and we attack them with optimal inference attacks that assume the lower-entropy priors. figure [fig:prior] illustrates the privacy of users for different assumptions on the prior, using optimal differential protection versus optimal distortion protection (both designed with the user's prior).
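one simple way to build the lower-entropy prior used in this robustness test is sketched below; this is our own construction, and the experiments may generate it differently — the boost factor and the number of boosted cells are purely illustrative.

```python
import numpy as np

def sharpened_prior(psi, k=2, boost=4.0):
    """give the adversary extra evidence on the k most visited cells (e.g. home and work)."""
    psi2 = np.array(psi, dtype=float)
    top = np.argsort(psi2)[-k:]
    psi2[top] *= boost                 # illustrative boost of the most frequent cells
    return psi2 / psi2.sum()

def entropy_bits(psi):
    psi = psi[psi > 0]
    return float(-(psi * np.log2(psi)).sum())   # the sharpened prior has lower entropy than psi
```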
we observe that a more informed adversary has a lower expected errorhowever , it further shows that an optimal differential protection mechanism compared to an optimal distortion mechanism is more robust to knowledgable adversaries .note that we set to , according to scenario 1 in section [ sec : analysis : firsteval ] .so , when , both optimal protection mechanisms guarantee the same level of privacy .however , as there is more information in than in , more information can be inferred from the optimal distortion mechanism compared to the differential mechanism .we have solved the problem of designing _ optimal _ user - centric obfuscation mechanisms for data sharing systems .we have proposed a novel methodology for designing such mechanisms against any _ adaptive _ inference attack , while maximizing users utility .we have proposed a generic framework for quantitative privacy and utility , using which we formalize the problems of maximizing users utility under a lower - bound constraint on their privacy .the major novelty of the paper is to solve these optimization problems for both state - of - the - art distortion and differential privacy metrics , for the generic case of any distance function between the secrets .being generic with respect to the distance functions , enables us to formalize any sensitivity function on any type of secrets .we have also proposed a new privacy notion , joint distortion - differential privacy , and constructed its optimal mechanism that has the strengths of both metrics .we have provided linear program solutions for our optimization problems that provably achieve minimum utility loss under those privacy bounds .we would like to thank the pc reviewers for their constructive feedback , and kostas chatzikokolakis for very useful discussions on this work .10 m. s. alvim , m. e. andrs , k. chatzikokolakis , p. degano , and c. palamidessi .differential privacy : on the trade - off between utility and information leakage . in _ formal aspects of security and trust _ , pages 3954 .springer , 2012 .m. s. alvim , m. e. andrs , k. chatzikokolakis , and c. palamidessi . on the relation between differential privacy and quantitative information flow . in _automata , languages and programming _ , pages 6076 .springer , 2011 .m. s. alvim , m. e. andrs , k. chatzikokolakis , and c. palamidessi . quantitative information flow and applications to differential privacy . in _ foundations of security analysis and designvi_. 2011 .m. e. andrs , n. e. bordenabe , k. chatzikokolakis , and c. palamidessi .geo - indistinguishability : differential privacy for location - based systems . in _ proceedings of the 2013 acm sigsac conference on computer & communications security _ ,pages 901914 .acm , 2013 .m. barreno , b. nelson , r. sears , a. d. joseph , and j. tygar. can machine learning be secure ?in _ proceedings of the acm symposium on information , computer and communications security _ , 2006 .g. barthe , b. kpf , f. olmedo , and s. zanella bguelin .probabilistic relational reasoning for differential privacy . , 2012 .j. o. berger . .springer , 1985 .i. bilogrevic , k. huguenin , s. mihaila , r. shokri , and j .-hubaux . predicting users motivations behind location check - ins and utility implications of privacy protection mechanisms .in _ in network and distributed system security ( ndss ) symposium _ , 2015 .n. e. bordenabe , k. chatzikokolakis , and c. palamidessi .optimal geo - indistinguishable mechanisms for location privacy . 
in_ proceedings of the 16th acm conference on computer and communications security _ , 2014 .s. p. boyd and l. vandenberghe . .cambridge university press , 2004 .h. brenner and k. nissim .impossibility of differentially private universally optimal mechanisms . in _foundations of computer science ( focs ) , 2010 51st annual ieee symposium on _ , pages 7180 .ieee , 2010 .j. brickell and v. shmatikov .the cost of privacy : destruction of data - mining utility in anonymized data publishing . in _ proceedings of the 14th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 08 , pages 7078 , new york , ny , usa , 2008 .m. brckner and t. scheffer .stackelberg games for adversarial prediction problems . in _17th acm sigkdd international conference on knowledge discovery and data mining ( kdd 2011 ) _ , 2011 .f. brunton and h. nissenbaum .vernacular resistance to data collection and analysis : a political theory of obfuscation ., 16(5 ) , 2011 .k. chatzikokolakis , m. e. andrs , n. e. bordenabe , and c. palamidessi . broadening the scope of differential privacy using metrics . in _ privacy enhancing technologies _ ,pages 82102 .springer , 2013 .k. chatzikokolakis , c. palamidessi , and p. panangaden .anonymity protocols as noisy channels . , 206(2 - 4):378401 , 2008 .k. chatzikokolakis , c. palamidessi , and m. stronati . a predictive differentially - private mechanism for mobility traces . in _ privacy enhancing technologies_ , pages 2141 .springer international publishing , 2014 .v. conitzer and t. sandholm . computing the optimal strategy to commit to . in _ proceedings of the 7th acm conference on electronic commerce _ , 2006 .g. danezis and c. troncoso .you can not hide for long : de - anonymization of real - world dynamic behaviour . in _ proceedings of the 12th acm workshop on workshop on privacy in the electronic society _ , pages 4960 .acm , 2013 .c. diaz , s. seys , j. claessens , and b. preneel . towards measuring anonymity . in _ privacy enhancing technologies _, pages 5468 .springer berlin heidelberg , 2003 .c. dwork .differential privacy . in _automata , languages and programming _ , pages 112 .springer , 2006 . c. dwork , f. mcsherry , k. nissim , and a. smith . calibrating noise to sensitivity in private data analysis . in _ theory of cryptography _, pages 265284 .springer , 2006 .v. f. farias and b. van roy .tetris : a study of randomized constraint sampling . in _probabilistic and randomized methods for design under uncertainty_. 2006 .q. geng and p. viswanath .the optimal mechanism in differential privacy . , 2012 .a. ghosh , t. roughgarden , and m. sundararajan .universally utility - maximizing privacy mechanisms . in _ proceedings of the 41st annual acm symposium on theory of computing _ , pages 351360 .acm , 2009 .a. ghosh , t. roughgarden , and m. sundararajan .universally utility - maximizing privacy mechanisms ., 41(6):16731693 , 2012 .m. grtschel , l. lovsz , and a. schrijver .the ellipsoid method and its consequences in combinatorial optimization . , 1981 .m. gupte and m. sundararajan .universally optimal privacy mechanisms for minimax agents . in _ proceedings of the twenty - ninth acm sigmod - sigact - sigart symposium on principles of database systems _ , 2010 .x. he , a. machanavajjhala , and b. ding .blowfish privacy : tuning privacy - utility trade - offs using policies . in _ proceedings of the 2014 acmsigmod international conference on management of data _ , pages 14471458 .acm , 2014 .l. huang , a. d. joseph , b. nelson , b. i. 
rubinstein , and j. tygar .adversarial machine learning . in _ proceedings of the 4th acm workshop on security and artificial intelligence _ , 2011 .s. ioannidis , a. montanari , u. weinsberg , s. bhagat , n. fawaz , and n. taft .privacy tradeoffs in predictive analytics ., 2014 .d. kifer and a. machanavajjhala . no free lunch in data privacy . in _ proceedings of the 2011 acm sigmod international conference on management of data _ , pages 193204 .acm , 2011 .n. kiukkonen , j. blom , o. dousse , d. gatica - perez , and j. laurila . towards rich mobile phone datasets : lausanne data collection campaign . , 2010 .b. kpf and d. basin . an information - theoretic model for adaptive side - channel attacks . in _ proceedings of the 14th acm conference on computer and communications security _ , 2007 .d. korzhyk , z. yin , c. kiekintveld , v. conitzer , and m. tambe .stackelberg vs. nash in security games : an extended investigation of interchangeability , equivalence , and uniqueness . , 41:297327 , may august 2011 . c. li , m. hay , v. rastogi , g. miklau , and a. mcgregor . optimizing linear counting queries under differential privacy . in _ proceedings of the twenty - ninth acm sigmod - sigact - sigart symposium on principles of database systems _ , pages 123134 .acm , 2010 .w. liu and s. chawla .a game theoretical model for adversarial learning . in _ieee international conference on data mining workshops ( icdm 2009 ) _, 2009 .d. j. mackay . .cambridge university press , 2003 .m. manshaei , q. zhu , t. alpcan , t. basar , and j .-game theory meets network security and privacy . , 45(3 ) , 2012 .p. mardziel , m. s. alvim , m. hicks , and m. r. clarkson .quantifying information flow for dynamic secrets . in _ieee symposium on security and privacy _ , 2014 .s. a. mario , k. chatzikokolakis , c. palamidessi , and g. smith .measuring information leakage using generalized gain functions . ,r. t. marler and j. s. arora .survey of multi - objective optimization methods for engineering ., 26(6):369395 , 2004 .k. micinski , p. phelps , and j. s. foster .an empirical study of location truncation on android ., 2:21 , 2013 .k. miettinen ., volume 12 .springer , 1999 .y. e. nesterov and a. nemirovskii . .siam publications .siam , philadelphia , usa , 1993 .k. nissim , s. raskhodnikova , and a. smith .smooth sensitivity and sampling in private data analysis . in _ proceedings of the thirty - ninth annual acm symposium on theory of computing _ , pages 7584 .acm , 2007 .v. pareto . , volume 13 .societa editrice , 1906 .p. paruchuri , j. p. pearce , j. marecki , m. tambe , f. ordez , and s. kraus .efficient algorithms to solve bayesian stackelberg games for security applications . in _ conference on artificial intelligence _, 2008 .j. reed and b. c. pierce .distance makes the types grow stronger : a calculus for differential privacy . , 2010 .a. serjantov and g. danezis . towards an information theoreticmetric for anonymity . in_ privacy enhancing technologies _ ,pages 4153 .springer berlin heidelberg , 2003 .r. shokri , g. theodorakopoulos , j .- y .le boudec , and j .-hubaux . quantifying location privacy . in _ proceedings of the ieee symposium on security and privacy _ , 2011 .r. shokri , g. theodorakopoulos , c. troncoso , j .-hubaux , and j .- y .le boudec . protecting location privacy :optimal strategy against localization attacks . in _ proceedings of the acm conference on computer and communications security _ , 2012 .g. theodorakopoulos , r. shokri , c. troncoso , j .-hubaux , and j .- y .l. 
given the user's protection mechanism, the inference attack is a valid strategy for the adversary, as there is no dependency between the defender and attacker strategies in the case of the differential privacy metric. however, since the differential privacy metric (used in the protection mechanism) does not include any probability distribution on the secrets, we can design an inference attack whose objective is to minimize the conditional expected error for all secrets. this is a multi-objective optimization problem that does not prefer any of the objectives (for any secret) to another. under no such preferences, the objective is to minimize their weighted sum, using the weighted sum method with equal weight for each secret. thus, the following linear program constitutes the optimal inference attack, under the mentioned assumptions. as all the weights are positive, the minimum of the weighted sum is pareto optimal; thus, minimizing it is sufficient for pareto optimality. the optimal point in a multi-objective optimization (as in our case) is pareto optimal ``if there is no other point that improves at least one objective function without detriment to another function''. an alternative approach is to use the min-max formulation and minimize the maximum conditional expected error over all secrets. for this, we introduce a new unknown parameter (that will be the maximum). the following linear program solves the optimal inference attack using the min-max formulation. this also provides a necessary condition for pareto optimality. [eq:lp:adversary:diff:minmax] we can also consider the expected error conditioned on both the secret and the estimate as the adversary's objective to minimize. so, we can use that objective instead and apply the same approach as above. the following linear program finds the optimal inference attack that minimizes the conditional expected estimation error over all secrets and estimates, using the min-max formulation. [eq:lp:adversary:diff:minmax2] overall, we prefer the first linear program, as it has the smallest number of constraints among the above three. we can also use it for comparing optimal protection mechanisms based on the distortion and differential metrics. we construct it as follows. here, we condition the optimal obfuscation on its corresponding optimal inference (best-response) attack. so, for any observable, the inference strategy is the one that, by definition of the best response, minimizes the expected error; thus, the privacy value to be guaranteed is this minimized expected error. note that it is an average over the observables, and thus it must be larger than or equal to the smallest value it attains for a particular observable. let a conditional probability distribution function be defined such that, for any given observable, it concentrates on a single estimate; note that this is a pure strategy that represents one particular inference attack. moreover, the optimal attack is constructed by optimizing over the set of all mixed strategies, which includes all the pure strategies. the minimum value of the optimization over the set of all mixed strategies is clearly less than or equal to the minimum value of the optimization over its subset (the pure strategies). thus, the following inequality holds.
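to make the structure of these attack programs concrete, the following sketch sets up the equal-weight (weighted-sum) formulation for a finite set of secrets and observables and solves it as a linear program. this is a minimal illustration, not the paper's exact program (whose displays are not reproduced above); the names `optimal_inference_attack`, `p_obs_given_secret`, and `dist`, as well as the toy mechanism and error matrix, are assumptions introduced here.

```python
# a minimal sketch (not the paper's exact formulation): the adversary picks a
# randomized inference strategy q(est | obs) that minimizes the sum over secrets
# of the conditional expected estimation error, given a known obfuscation
# mechanism p(obs | secret) and an error function d(secret, est).
import numpy as np
from scipy.optimize import linprog

def optimal_inference_attack(p_obs_given_secret, dist):
    """p_obs_given_secret: (S, O) row-stochastic matrix; dist: (S, S) error matrix.
    returns q: (O, S) row-stochastic matrix, the attack q(est | obs)."""
    S, O = p_obs_given_secret.shape
    # decision variables: q[o, e], flattened to a vector of length O*S
    # objective: sum_{s,o,e} p(o|s) * q(e|o) * d(s, e)  (equal weight per secret)
    c = np.zeros(O * S)
    for o in range(O):
        for e in range(S):
            c[o * S + e] = np.sum(p_obs_given_secret[:, o] * dist[:, e])
    # equality constraints: sum_e q(e|o) = 1 for every observable o
    A_eq = np.zeros((O, O * S))
    for o in range(O):
        A_eq[o, o * S:(o + 1) * S] = 1.0
    b_eq = np.ones(O)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (O * S),
                  method="highs")
    return res.x.reshape(O, S)

# toy usage: 3 secrets, 3 observables, absolute-difference error
p = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
print(np.round(optimal_inference_attack(p, d), 3))
```

for a toy mechanism like this, the solver typically returns a pure (deterministic) guess per observable, which is consistent with the pure-strategy argument used in the inequality above.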
therefore, from the inequalities above we have the stated bound, or equivalently the stated relation. thus, the constraint in the first linear program is equivalent to (and can be replaced by) the corresponding pair of constraints in the second linear program. here, we briefly discuss the computational aspects of the design of optimal protection mechanisms. although solving the linear programs provides us with the optimal protection mechanism, their computation cost is quadratic (for distortion mechanisms) and cubic (for differential mechanisms) in the cardinality of the set of secrets and observables. providing privacy for a large set of secrets therefore requires a large computation budget. to strike a balance between the computation budget and the privacy requirements, we can make use of approximation techniques when designing optimal protection mechanisms. we explore some possible approaches. linear programming is one of the fundamental areas of mathematics and computer science, and there is a variety of algorithms for solving a linear program. surveying those algorithms and evaluating their efficiency is outside the scope of this paper. these algorithms search the set of feasible solutions for the optimal solution that meets the constraints. many of them are iterative and converge to the optimal solution as the number of iterations increases. thus, a simple approximation method is to stop the iterative algorithm when the computation budget is exhausted. other approximation methods exist. for example, a sampling algorithm can select a subset of constraints of an optimization problem to speed up the computation. moreover, we can rely on the particular structure of the secrets to reduce the set of constraints. such approximation techniques allow approximately optimal protection mechanisms to be computed in an affordable time. furthermore, we can rely on the definition of privacy to identify the constraints that contribute little to the design of the protection mechanism. in this section, we study one approximation method, following the intuition behind the differential privacy bound: we remove the constraints for which the distance between secrets is larger than a threshold. we can justify this by observing that, in the definition of the differential privacy metric, privacy is protected most strongly for pairs of secrets at small distance. to put this in perspective, note that if we used the original definition of differential privacy, there would not be any constraint for pairs that are not adjacent. we also apply this approximation to the distance between observables and secrets. in figure [fig:privacy_appx], we show the privacy loss of users as well as the speed-up of the computation due to the approximation. we performed the computation on a machine with a 4-core intel(r) xeon(r) 2.40ghz cpu. as we increase the approximation threshold (the distance beyond which we ignore constraints), the approximation error goes to zero. this suggests that, for a large set of secrets, choosing a relatively small threshold yields an approximated protection mechanism that provides almost the same privacy level as the optimal solution. the computation time, however, increases as the approximation error decreases (due to the increasing approximation threshold). figure [fig:privacy_appx] captures this tradeoff of our approximation method.
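the following sketch illustrates the threshold-based pruning idea for a metric-based constraint set. it is a hypothetical illustration of how many pairwise constraints survive for a given threshold, not the paper's implementation; the function name `pruned_constraint_pairs` and the toy distance matrix are assumptions.

```python
# a minimal sketch of the threshold-based pruning described above: for a
# metric-based differential-privacy constraint set indexed by pairs of secrets,
# keep only the pairs whose distance is at most the threshold and report the
# reduction in constraint count.
import numpy as np

def pruned_constraint_pairs(dist, threshold):
    """dist: (S, S) symmetric distance matrix between secrets.
    returns the list of pairs (i, j), i < j, with dist[i, j] <= threshold."""
    S = dist.shape[0]
    return [(i, j) for i in range(S) for j in range(i + 1, S)
            if dist[i, j] <= threshold]

# toy usage: secrets on a line, euclidean distance
S = 50
locs = np.arange(S, dtype=float)
dist = np.abs(np.subtract.outer(locs, locs))
full = S * (S - 1) // 2
for thr in (2.0, 5.0, 10.0, np.inf):
    kept = len(pruned_constraint_pairs(dist, thr))
    print(f"threshold {thr}: keep {kept}/{full} pairwise constraints "
          f"({kept / full:.1%})")
```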
|
consider users who share their data (e.g., location) with an untrusted service provider to obtain a personalized (e.g., location-based) service. data obfuscation is a prevalent user-centric approach to protecting users' privacy in such systems: the untrusted entity only receives a noisy version of the user's data. perturbing data before sharing it, however, comes at the price of the user's utility (service quality), which is an inseparable design factor of obfuscation mechanisms. the entanglement of the utility loss and the privacy guarantee, in addition to the lack of a comprehensive notion of privacy, has led to the design of obfuscation mechanisms that are either suboptimal in terms of their utility loss, or ignore the user's information leakage in the past, or are limited to very specific notions of privacy that, e.g., do not protect against adaptive inference attacks or an adversary with arbitrary background knowledge.

in this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing the user's privacy. we optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). this double shield of protection limits the information leakage through the obfuscation mechanism as well as through posterior inference. we show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. their utility cost is also not larger than what either the differential or the distortion mechanism imposes. we model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. thus, the obfuscation mechanism is optimal against any inference algorithm.
|
we study the distribution of thresholding estimators such as hard - thresholding , soft - thresholding , and adaptive soft - thresholding in a linear regression model when the number of regressors can be large .these estimators can be viewed as penalized least - squares estimators in the case of an orthogonal design matrix , with soft - thresholding then coinciding with the lasso ( introduced by frank and friedman ( 1993 ) , alliney and ruzinsky ( 1994 ) , and tibshirani ( 1996 ) ) and with adaptive soft - thresholding coinciding with the adaptive lasso ( introduced by zou ( 2006 ) ) .thresholding estimators have of course been discussed earlier in the context of model selection ( see bauer , ptscher and hackl ( 1988 ) ) and in the context of wavelets ( see , e.g. , donoho , johnstone , kerkyacharian , picard ( 1995 ) ) .contributions concerning distributional properties of thresholding and penalized least - squares estimators are as follows : knight and fu ( 2000 ) study the asymptotic distribution of the lasso estimator when it is tuned to act as a conservative variable selection procedure , whereas zou ( 2006 ) studies the asymptotic distribution of the lasso and the adaptive lasso estimators when they are tuned to act as consistent variable selection procedures . fan and li ( 2001 ) and fan and peng ( 2004 ) study the asymptotic distribution of the so - called smoothly clipped absolute deviation ( scad ) estimator when it is tuned to act as a consistent variable selection procedure . in the wake of fan and li ( 2001 ) and fan and peng ( 2004 ) a large number of papers have been published that derive the asymptotic distribution of various penalized maximum likelihood estimators under consistent tuning ; see the introduction in ptscher and schneider ( 2009 ) for a partial list .except for knight and fu ( 2000 ) , all these papers derive the asymptotic distribution in a fixed - parameter framework .as pointed out in leeb and ptscher ( 2005 ) , such a fixed - parameter framework is often highly misleading in the context of variable selection procedures and penalized maximum likelihood estimators .for that reason , ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) have conducted a detailed study of the finite - sample as well as large - sample distribution of various penalized least - squares estimators , adopting a moving - parameter framework for the asymptotic results .[ related results for so - called post - model - selection estimators can be found in leeb and ptscher ( 2003 , 2005 ) and for model averaging estimators in ptscher ( 2006 ) ; see also sen ( 1979 ) and ptscher ( 1991 ) . 
]the papers by ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) are set in the framework of an orthogonal linear regression model with a fixed number of parameters and with the error - variance being known .in the present paper we build on the just mentioned papers ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) .in contrast to these papers , we do not assume the number of regressors to be fixed , but let it depend on sample size thus allowing for high - dimensional models .we also consider the case where the error - variance is unknown , which in case of a high - dimensional model creates non - trivial complications as then estimators for the error - variance will typically not be consistent .considering thresholding estimators from the outset in the present paper allows us also to cover non - orthogonal design .while the asymptotic distributional results in the known - variance case do not differ in substance from the results in ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) , not unexpectedly we observe different asymptotic behavior in the unknown - variance case if the number of degrees of freedom is constant , the difference resulting from the non - vanishing variability of the error - variance estimator in the limit .less expected is the result that under consistent tuning for the variable selection probabilities ( implied by all the estimators considered ) as well as for the distribution of the hard - thresholding estimator , estimation of the error - variance still has an effect asymptotically even if diverges , but does so only slowly .to give some idea of the theoretical results obtained in the paper we next present a rough summary of some of these results . for simplicity of expositionassume for the moment that the design matrix is such that the diagonal elements of are equal to , and that the error - variance is equal to .let denote the hard - thresholding estimator for the -th component of the regression parameter , the threshold being given by , with denoting the usual error - variance estimator and with denoting a tuning parameter .an infeasible version of the estimator , denoted by , which uses instead of , is also considered ( known - variance case ) .we then show that the uniform rate of convergence of the hard - thresholding estimator is if the threshold satisfies and ( `` conservative tuning '' ) , but that the uniform rate is only if the threshold satisfies and ( `` consistent tuning '' ) . the same result also holds for the soft - thresholding estimator and the adaptive soft - thresholding estimator , as well as for infeasible variants of the estimators that use knowledge of ( known - variance case ) .furthermore , all possible limits of the centered and scaled distribution of the hard - thresholding estimator ( as well as of the soft- and the adaptive soft - thresholding estimators and ) under a moving parameter framework are obtained .consider first the case of conservative tuning : then all possible limiting forms of the distribution of as well as of for arbitrary parameter sequences are determined .it turns out that in the known - variance case these limits are of the same functional form as the finite - sample distribution , i.e. , they are a convex combination of a pointmass and an absolutely continuous distribution that is an excised version of a normal distribution . in the unknown - variance case ,when the number of degrees of freedom goes to infinity , exactly the same limits arise . 
however , if is constant , the limits are `` averaged '' versions of the limits in the known - variance case , the averaging being with respect to the distribution of the variance estimator . again these limitshave the same functional form as the corresponding finite - sample distributions .consider next the case of consistent tuning : here the possible limits of as well as of have to be considered , as is the uniform convergence rate . in the known - variance casethe limits are convex combinations of ( at most ) two pointmasses , the location of the pointmasses as well as the weights depending on and . in the unknown - variance caseexactly the same limits arise if diverges to infinity sufficiently fast ; however , if is constant or diverges to infinity sufficiently slowly , the limits are again convex combinations of the same pointmasses , but with weights that are typically different .the picture for soft - thresholding and adaptive soft - thresholding is somewhat different : in the known - variance case , as well as in the unknown - variance case when diverges to infinity , the limits are ( single ) pointmasses .however , in the unknown - variance case and if is constant , the limit distribution can have an absolutely continuous component .it is furthermore useful to point out that in case of consistent tuning the sequence of distributions of is not stochastically bounded in general ( since is the uniform convergence rate ) , and the same is true for soft - thresholding and adaptive soft - thresholding .this throws a light on the fragility of the oracle - property , see section [ oracle ] for more discussion .while our theoretical results for the thresholding estimators immediately apply to lasso and adaptive lasso in case of orthogonal design , this is not so in the non - orthogonal case . in order to get some insight into the finite - sample distribution of the latter estimators also in the non - orthogonal case, we numerically compare the distribution of lasso and adaptive lasso with their thresholding counterparts in a simulation study .the main take - away messages of the paper can be summarized as follows : * the finite - sample distributions of the various thresholding estimators considered are highly non - normal , the distributions being in each case a convex combination of pointmass and an absolutely continuous ( non - normal)component .* the non - normality persists asymptotically in a moving parameter framework . *results in the unknown - variance case are obtained from the corresponding results in the known - variance case by smoothing with respect to the distribution of . in line with this, one would expect the limiting behavior in the unknown - variance case to coincide with the limiting behavior in the known - variance whenever the degrees of freedom diverge to infinity .this indeed turns out to be so for some of the results , but not for others where we see that the speed of divergence of matters . * in case of conservative tuning the estimators have the expected uniform convergence rate , which is under the simplified assumptions of the above discussion , whereas under consistent tuning the uniform rate is slower , namely under the simplified assumptions of the above discussion .this is intimately connected with the fact that the so - called ` oracle property ' paints a misleading picture of the performance of the estimators . 
*the numerical study suggests that the results for the thresholding estimators and qualitatively apply also to the ( components of ) the lasso and the adaptive lasso as long as the design matrix is not too ill - conditioned .the paper is organized as follows .we introduce the model and define the estimators in section [ model ] .section [ variable ] treats the variable selection probabilities implied by the estimators .consistency , uniform consistency , and uniform convergence rates are discussed in section [ minimax ] .we derive the finite - sample distribution of each estimator in section [ fs ] and study the large - sample behavior of these in section ls .a numerical study of the finite - sample distribution of lasso and adaptive lasso can be found in section [ numstudy ] .all proofs are relegated to section [ prfs ] .consider the linear regression model with an vector , a nonstochastic matrix of rank , and , .we allow , the number of columns of , as well as the entries of , , and to depend on sample size ( in fact , also the probability spaces supporting and may depend on ) , although we shall almost always suppress this dependence on in the notation .note that this framework allows for high - dimensional regression models , where the number of regressors is large compared to sample size , as well as for the more classical situation where is much smaller than .furthermore , let denote the nonnegative square root of , the -th diagonal element of .now let the least - squares estimator for and the associated estimator for , the latter being defined only if .the hard - thresholding estimator is defined via its components as follows the tuning parameters are positive real numbers and denotes the -th component of the least - squares estimator .we shall also need to consider its infeasible counterpart given by soft - thresholding estimator and its infeasible counterpart are given by .finally , the adaptive soft - thresholding estimator and its infeasible counterpart are defined via note that , , and as well as their infeasible counterparts are equivariant under scaling of the columns of by non - zero column - specific scale factors .we have chosen to let the thresholds ( , respectively ) depend explicitly on ( , respectively ) and in order to give an interpretation independent of the values of and .furthermore , often will be chosen independently of , i.e. , where is a positive real number . 
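because the displayed definitions are not legible in this text, the following sketch records the standard componentwise forms of the three rules that the surrounding discussion relies on; the exact normalization of the threshold (e.g., whether it equals the error standard deviation times $\xi_{i,n}\eta_{i,n}$) is an assumption here, and the function names are introduced for illustration only.

```python
# a minimal sketch of the three componentwise thresholding rules discussed above,
# in their standard forms: each rule maps a least-squares component z to an
# estimate, with threshold t > 0.
import numpy as np

def hard_threshold(z, t):
    # keep z unchanged if |z| > t, otherwise set it to exactly zero
    return z * (np.abs(z) > t)

def soft_threshold(z, t):
    # shrink |z| by t and set to zero when |z| <= t (lasso in orthogonal designs)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_soft_threshold(z, t):
    # z * (1 - t^2/z^2)_+ : zero when |z| <= t, little shrinkage for large |z|
    # (adaptive lasso in orthogonal designs)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = z * np.maximum(1.0 - (t / z) ** 2, 0.0)
    return np.where(z == 0.0, 0.0, out)

z = np.linspace(-3, 3, 7)
t = 1.0
print(hard_threshold(z, t))
print(soft_threshold(z, t))
print(adaptive_soft_threshold(z, t))
```

all three rules set a component to exactly zero on the same event (the least-squares component falling below the threshold in absolute value), which is why they share the variable selection probabilities studied below.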
clearly , for the feasible versions we always need to assume , whereas for the infeasible versions suffices .we note the simple fact that on the event that , and that on the event that .analogous inequalities hold for the infeasible versions of the estimators .[ lasso]_(lasso ) _ ( i ) consider the objective function are positive real numbers .it is well - known that a unique minimizer of this objective function exists , the lasso - estimator .it is easy to see that in case is diagonal we have , in the case of diagonal , the components of the lasso reduce to soft - thresholding estimators with appropriate thresholds ; in particular , coincides with for the choice .therefore all results derived below for soft - thresholding immediately give corresponding results for the lasso as well as for the dantzig - selector in the diagonal case .we shall abstain from spelling out further details .\(ii ) sometimes in the definition of the lasso is chosen independently of ; more reasonable choices seem to be ( a ) ( where denotes the nonnegative square root of the -th diagonal element of ) , and ( b ) where are positive real numbers ( not depending on the design matrix and often not on ) as then again has an interpretation independent of the values of and .note that in case ( a ) or ( b ) the solution of the optimization problem is equivariant under scaling of the columns of by non - zero column - specific scale factors .\(iii ) similar results obviously hold for the infeasible versions of the estimators .[ alasso]_(adaptive lasso ) _ consider the objective function are positive real numbers .this is the objective function of the adaptive lasso ( where often is chosen independent of ) . againthe minimizer exists and is unique ( at least on the event where for all ) .clearly , is equivariant under scaling of the columns of by non - zero column - specific scale factors provided does not depend on the design matrix .it is easy to see that in case is diagonal we have , in the case of diagonal , the components of the adaptive lasso reduce to the adaptive soft - thresholding estimators ( for ) . therefore all results derived below for adaptive soft - thresholding immediately give corresponding results for the adaptive lasso in the diagonal case .we shall again abstain from spelling out further details .similar results obviously hold for the infeasible versions of the estimators . _( other estimators ) _( i ) the adaptive lasso as defined in zou ( 2006 ) has an additional tuning parameter .we consider adaptive soft - thresholding only for the case , since otherwise the estimator is not equivariant in the sense described above .nonetheless an analysis for the case , similar to the analysis in this paper , is possible in principle .\(ii ) an analysis of a scad - based thresholding estimator is given in ptscher and leeb ( 2009 ) in the known - variance case .[ these results are given in the orthogonal design case , but easily generalize to the non - orthogonal case .] the results obtained there for scad - based thresholding are similar in spirit to the results for the other thresholding estimators considered here .the unknown - variance case could also be analyzed in principle , but we refrain from doing so for the sake of brevity .\(iii ) zhang ( 2010 ) introduced the so - called minimax concave penalty ( mcp)to be used for penalized least - squares estimation . 
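as a numerical companion to remark [lasso] above, the following hypothetical example checks that, for a design with $x'x$ proportional to the identity, componentwise soft thresholding of the least-squares estimator minimizes the lasso objective; the penalty normalization (objective $\tfrac12\|y-x\theta\|^2+\lambda\|\theta\|_1$ with resulting threshold $\lambda/n$) is an assumption of this sketch, not taken from the text.

```python
# a small numerical check of remark [lasso]: with an orthogonal design
# (x'x = n * identity), the lasso solution coincides componentwise with soft
# thresholding of the least-squares estimator.
import numpy as np

rng = np.random.default_rng(0)
n, k, lam = 100, 5, 4.0
q, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal columns
x = q * np.sqrt(n)                                  # x'x = n * identity
theta = np.array([2.0, -1.0, 0.3, 0.0, -0.05])
y = x @ theta + rng.standard_normal(n)

theta_ls = (x.T @ y) / n                            # least squares (diagonal x'x)
# closed form: componentwise soft thresholding with threshold lam / n
theta_soft = np.sign(theta_ls) * np.maximum(np.abs(theta_ls) - lam / n, 0.0)

def lasso_obj(b):
    return 0.5 * np.sum((y - x @ b) ** 2) + lam * np.sum(np.abs(b))

best = lasso_obj(theta_soft)
trials = theta_soft + 0.1 * rng.standard_normal((2000, k))
print(all(lasso_obj(b) >= best - 1e-9 for b in trials))   # expected: True
print(np.round(theta_ls, 3), np.round(theta_soft, 3))
```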
apart from the usual tuning parameter, mcp also depends on a shape parameter .it turns out that the thresholding estimator based on mcp coincides with hard - thresholding in case , and thus is covered by the analysis of the present paper . in case , the mcp - based thresholding estimator could similarly be analyzed , especially since the functional form of the mcp - based thresholding estimator is relatively simple ( namely , a piecewise linear function of the least - squares estimator ) .we do not provide such an analysis for brevity ._ for all asymptotic considerations in this paper we shall always assume without further mentioning that _ _ _satisfies__ every fixed __ _ _ satisfying_ _ _ _ for large enough _ _ . _ _ the case excluded by assumption ( [ xi ] ) seems to be rather uninteresting as unboundedness of means that the information contained in the regressors gets weaker with increasing sample size ( at least along a subsequence ) ; in particular , this implies ( coordinate - wise ) inconsistency of the least - squares estimator .[ in fact , if as well as the elements of do not depend on , this case is actually impossible as is then necessarily monotonically nonincreasing . ]the following notation will be used in the paper : let denote the extended real line endowed with the usual topology .on we shall consider the topology it inherits from .furthermore , and denote the cumulative distribution function ( cdf ) and the probability density function ( pdf ) of a standard normal distribution , respectively . by denote the cdf of a non - central -distribution with degrees of freedom and non - centrality parameter . in the central case , i.e. , , we simply write .we use the convention , with a similar convention for .the estimators , , and can be viewed as performing variable selection in the sense that these estimators set components of exactly equal to zero with positive probability . in this sectionwe study the variable selection probability , where stands for any of the estimators , , and . since these probabilities are the same for any of the three estimators considered we shall drop the subscripts , , and in this section .we use the same convention also for the variable selection probabilities of the infeasible versions .since it suffices to study the variable deletion probability as can be seen from the above formula , depends on only via .we first study the variable selection / deletion probabilities under a `` fixed - parameter '' asymptotic framework .[ select_prob_pointwise]let be given .for every satisfying for large enough we have : \(a ) a necessary and sufficient condition for as for all satisfying ( not depending on ) is .\(b ) a necessary and sufficient condition for as for all satisfying is .\(c ) a necessary and sufficient condition for as for all satisfying is , .the constant is then given by .part ( a ) of the above proposition gives a necessary and sufficient condition for the procedure to correctly detect nonzero coefficients with probability converging to .part ( b ) gives a necessary and sufficient condition for correctly detecting zero coefficients with probability converging to .[ uninteresting]if does not converge to zero , the conditions on in parts ( a ) and ( b ) are incompatible ; also the conditions in parts ( a ) and ( c ) are then incompatible ( except when ) .however , the case where does not converge to zero is of little interest as the least - squares estimator is then not consistent . 
_( speed of convergence in proposition select_prob_pointwise ) _( i ) the speed of convergence in ( a ) is in case is bounded ( an uninteresting case as noted above ) ; if , the speed of convergence in ( a ) is not slower than for some suitable depending on .\(ii ) the speed of convergence in ( b ) is . in ( c )the speed of convergence is given by the rate at which approaches .[ for the above results we have made use of lemma vii.1.2 in feller ( 1957 ) .] for let . then ( i ) for every now that the entries of do not change with (although the dimension of may depend on ) .is made up of the initial elements of a fixed element of .] then , given that is bounded ( this being in particular the case if is bounded ) , the probability of incorrect non - detection of at least one nonzero coefficient converges to if and only if as for every .[ if is unbounded then this probability converges to , e.g. , if and as for every and and as for a suitable that is determined by . ]\(ii ) for every we have .\end{aligned}\]]suppose again that the entries of do not change with . then , given that is bounded ( this being in particular the case if is bounded ) , the probability of incorrectly classifying at least one zero parameter as a non - zero one converges to as if and only if for every .[ if is unbounded then this probability converges to , e.g. , if as . ]\(iii ) in case is diagonal , the relevant probabilities as well as can be directly expressed in terms of products of or , and proposition [ select_prob_pointwise ] can then be applied . since the fixed - parameter asymptotic framework often gives a misleading impression of the actual behavior of a variable selection procedure ( cf .leeb and ptscher ( 2005 ) , ptscher and leeb ( 2009 ) ) we turn to a `` moving - parameter '' framework next , i.e. , we allow the elements of as well as to depend on sample size . in the proposition to follow ( and all subsequent large - sample results )we shall concentrate only on the case where as , since otherwise the estimators are not even consistent for as a consequence of proposition _ _ _ _ select_prob_pointwise , cf .also theorem [ thresh_consistency ] below . given the condition , we shall then distinguish between the case , , and the case , which in light of proposition [ select_prob_pointwise ] we shall call the case of `` conservative tuning '' and the case of `` consistent tuning '' , respectively . to a ( finite or infinite ) limit , in the sense that this convergence can , for any given sequence , be achieved along suitable subsequences in light of compactness of the extended real line . ][ select_prob_moving_par]suppose that for given satisfying for large enough we have and where .\(a ) assume .suppose that the true parameters and satisfy .then \(b ) assume .suppose that the true parameters and satisfy .then \1 . implies . implies .\3 . and , for some ,imply in a fixed - parameter asymptotic analysis , which in proposition select_prob_moving_par corresponds to the case and , the limit of the probabilities is always in case , and is in case and consistent tuning ( it is in case and conservative tuning ) ; this does clearly not properly capture the finite - sample behavior of these probabilities .the moving - parameter asymptotic analysis underlying proposition select_prob_moving_par better captures the finite - sample behavior and , e.g. , allows for limits other than and even in the case of consistent tuning . 
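the following sketch evaluates the deletion probability numerically, assuming the threshold has the form $\sigma\xi_{i,n}\eta_{i,n}$ in the known-variance case and $\hat{\sigma}\xi_{i,n}\eta_{i,n}$ in the unknown-variance case (the unknown-variance expression via the noncentral t-distribution is the one displayed in the next subsection); the tuning choices below are illustrative assumptions, chosen only to contrast conservative and consistent tuning at the true value zero.

```python
# a sketch of the deletion probability p(estimator component = 0) discussed
# above, under the assumed threshold normalizations noted in the lead-in.
import numpy as np
from scipy.stats import norm, nct

def del_prob_known(theta, sigma, xi, eta, n):
    a = np.sqrt(n) * theta / (sigma * xi)
    b = np.sqrt(n) * eta
    return norm.cdf(b - a) - norm.cdf(-b - a)

def del_prob_unknown(theta, sigma, xi, eta, n, k):
    # noncentral t with n - k degrees of freedom
    a = np.sqrt(n) * theta / (sigma * xi)
    b = np.sqrt(n) * eta
    return nct.cdf(b, df=n - k, nc=a) - nct.cdf(-b, df=n - k, nc=a)

sigma, xi, k = 1.0, 1.0, 5
for n in (50, 500, 5000):
    eta_cons = 2.0 / np.sqrt(n)   # conservative tuning: sqrt(n)*eta stays constant
    eta_con2 = n ** -0.25         # consistent tuning: eta -> 0, sqrt(n)*eta -> inf
    print(n,
          round(del_prob_known(0.0, sigma, xi, eta_cons, n), 3),
          round(del_prob_known(0.0, sigma, xi, eta_con2, n), 3),
          round(del_prob_unknown(0.0, sigma, xi, eta_cons, n, k), 3))
```

at the true value zero, the conservative choice leaves the deletion probability bounded away from one, while the consistent choice drives it to one, matching the dichotomy described above.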
in particular, proposition [select_prob_moving_par] shows that the convergence of the variable selection/deletion probabilities to their limits in a fixed-parameter asymptotic framework is not uniform in , and this non-uniformity is local in the sense that it occurs in an arbitrarily small neighborhood of (holding the value of fixed). [the non-uniformity occurs in a neighborhood of zero.] furthermore, the above proposition entails that under consistent tuning deviations from of larger order than under conservative tuning go unnoticed asymptotically with probability 1 by the variable selection procedure corresponding to . for more discussion in a special case (which in its essence also applies here) see pötscher and leeb (2009). _(speed of convergence in proposition select_prob_moving_par)_ (i) the speed of convergence in (a) is given by the slower of the rate at which approaches and approaches , provided that ; if , the speed of convergence is not slower than any .
(ii) the speed of convergence in (b1) is not slower than , where depends on . the same is true in case (b2) provided ; if , the speed of convergence is not slower than for every . in case (b3) the speed of convergence is not slower than the speed of convergence of any in case ; in case it is not slower than any . the preceding remark corrects and clarifies the remarks at the end of section 3 in pötscher and leeb (2009) and section 3.1 in pötscher and schneider (2009). in the unknown-variance case the finite-sample variable selection/deletion probabilities can be obtained as follows:
$$\begin{aligned}
P_{n,\theta,\sigma}\left(\hat{\theta}_{i}=0\right)
&=\int_{0}^{\infty}\left[\Phi\!\left(n^{1/2}\eta_{i,n}s-\frac{n^{1/2}\theta_{i}}{\sigma\xi_{i,n}}\right)
-\Phi\!\left(-n^{1/2}\eta_{i,n}s-\frac{n^{1/2}\theta_{i}}{\sigma\xi_{i,n}}\right)\right]\rho_{n-k}(s)\,ds\\
&=t_{n-k,\,n^{1/2}\theta_{i}/(\sigma\xi_{i,n})}\left(n^{1/2}\eta_{i,n}\right)
-t_{n-k,\,n^{1/2}\theta_{i}/(\sigma\xi_{i,n})}\left(-n^{1/2}\eta_{i,n}\right).
\end{aligned}$$
here we have used ([select_prob]), and independence of and allowed us to replace by in the relevant formulae, cf. leeb and pötscher (2003, p. 110). in the above, $\rho_{n-k}$ denotes the density of $(n-k)^{-1/2}$ times the square root of a chi-square distributed random variable with $n-k$ degrees of freedom. it will turn out to be convenient to set for , making a bounded continuous function on . we now have the following fixed-parameter asymptotic result for the variable selection/deletion probabilities in the unknown-variance case that perfectly parallels the corresponding result in the known-variance case, i.e., proposition [select_prob_pointwise]: [select_prob_pointwise_unknown] let be given.
for every for large enough we have : \(a ) a necessary and sufficient condition for as for all satisfying ( not depending on ) is .\(b ) a necessary and sufficient condition for as for all satisfying is .\(c ) a necessary and sufficient condition for as for all satisfying and with satisfying is , .proposition [ select_prob_pointwise_unknown ] shows that the dichotomy regarding conservative tuning and consistent tuning is expressed by the same conditions in the unknown - variance case as in the known - variance case .furthermore , note that appearing in part ( c ) of the above proposition converges to in the case where , the limit thus being the same as in the known - variance case .this is different in case is constant equal to , say , eventually , the sequence then being constant equal to eventually .we finally note that remark [ uninteresting ] also applies to proposition select_prob_pointwise_unknown above .for the same reasons as in the known - variance case we next investigate the asymptotic behavior of the variable selection / deletion probabilities under a moving - parameter asymptotic framework .we consider the case where is ( eventually ) constant and the case where .there is no essential loss in generality in considering these two cases only , since by compactness of we can always assume ( possibly after passing to subsequences ) that converges in .[ select_prob_moving_par_unknown]suppose that for given satisfying for large enough we have and where .\(a ) assume .suppose that the true parameters and satisfy .( a1 ) if is eventually constant equal to , say , then( a2 ) if holds , then \(b ) assume .suppose that the true parameters and satisfy .( b1 ) if is eventually constant equal to , say , then ( b2 ) if holds , then \1 . implies . implies . and imply for some .\4 . and with imply for some .[ note that the integral in the above display reduces to if , and to if . ] \5 . and imply for some .theorem [ select_prob_moving_par_unknown ] shows , in particular , that also in the unknown - variance case the convergence of the variable selection / deletion probabilities to their limits in a fixed - parameter asymptotic framework is not locally uniform in . in the case of conservative tuningthe theorem furthermore shows that the limit of the variable selection / deletion probabilities in the unknown - variance case is the same as in the known - variance case if the degrees of freedom go to infinity ( entailing that the distribution of concentrates more and more around ) ; if is eventually constant , the limit turns out to be a mixture of the known - variance case limits ( with replaced by ) , the mixture being with respect to the distribution of .[ we note that in the somewhat uninteresting case this mixture also reduces to the same limit as in the known - variance case . ]while this result is as one would expect , the situation is different and more subtle in the case of consistent tuning : if the limits are the same as in the known - variance case if or holds , namely and , respectively .however , in the `` boundary '' case the rate at which diverges to infinity becomes relevant .if the divergence is fast enough in the sense that , again the same limit as in the known - variance case , namely , is obtained ; but if diverges to infinity more slowly , a different limit arises ( which , e.g. 
, in case 4 of part ( b2 ) is obtained by averaging with respect to a suitable distribution ) .the case where the degrees of freedom is eventually constant looks very much different from the known - variance case and again some averaging with respect to the distribution of takes place .note that in this case the limiting variable deletion probabilities are and , respectively , only if and , respectively , which is in contrast to the known - variance case ( and the unknown - variance case with ) .[ costfree](i ) for later use we note that proposition select_prob_moving_par and theorem [ select_prob_moving_par_unknown ] also hold when applied to subsequences , as is easily seen .\(ii ) the convergence conditions in proposition [ select_prob_moving_par ] on the various quantities involving and are essentially cost - free in the sense that given any sequence we can , due to compactness of , select from any subsequence a further subsubsequence such that along this subsubsequence all relevant quantities such as ( or and ) converge in . since proposition [ select_prob_moving_par ] also holds when applied to subsequences as just noted , an application of this proposition to the subsubsequence then results in a characterization of all possible accumulation points of the variable selection / deletion probabilities in the known - variance case .\(iii ) in a similar manner , the convergence conditions in theorem select_prob_moving_par_unknown ( including the ones on ) are essentially cost - free , and thus this theorem provides a full characterization of all possible accumulation points of the variable selection / deletion probabilities in the unknown - variance case .as just discussed , in the case of conservative tuning we get the same limiting behavior under moving - parameter asymptotics in the known - variance and in the unknown - variance case along any sequence of parameters if or ( which in the conservatively tuned case can equivalently be stated as ) . in the case ofconsistent tuning the same coincidence of limits occurs if fast enough such that .this is not accidental but a consequence of the following fact : [ closeness_prob]suppose that for given satisfying for large enough we have as .then [ weekend]suppose that holds as , the other case being of little interest as noted earlier . if does not converge to zero as , it can be shown from proposition select_prob_moving_par and theorem [ select_prob_moving_par_unknown ] that the limits of the variable deletion probabilities ( along appropriate ( sub)sequences ) for the known - variance and the unknown - variance case do not coincide .this shows that the condition in the above proposition can not be weakened ( at least in case holds ) .for purposes of comparison we start with the following obvious proposition , which immediately follows from the observation that is -distributed .[ ls_consistency]for every satisfying for large enough we have the following : \(a ) is a necessary and sufficient condition for to be consistent for , the convergence rate being .\(b ) suppose .then is uniformly consistent for in the sense that for every in fact , is uniformly -consistent for in the sense that for every there exists a real number such that [note that the probabilities in the displays above in fact neither depend on nor . in particular ,the l.h.s . of the above displays equal and ,respectively . 
]the corresponding result for the estimators , , or and their infeasible counterparts , , or is now as follows .[ thresh_consistency]let stand for any of the estimators , , or .then for every satisfying for large enough we have the following : \(a ) is consistent for if and only if and .\(b ) suppose and .then is uniformly consistent in the sense that for every , is uniformly -consistent with in the sense that for every there exists a real number such that \(c ) suppose and and . if for every there exists a real number such that , then necessarily holds .\(d ) let stand for any of the estimators , , or .then the results in ( a)-(c ) also hold for .the preceding theorem shows that the thresholding estimators , , and ( as well as their infeasible versions ) are uniformly -consistent and that this rate is sharp and can not be improved. in particular , if the tuning is conservative these estimators are uniformly -consistent , which is the usual rate one expects to find in a linear regression model as considered here . however ,if consistent tuning is employed , the preceding theorem shows that these thresholding estimators are then only uniformly -consistent , i.e. , have a slower uniform convergence rate than the least - squares ( maximum likelihood ) estimator ( or the conservatively tuned thresholding estimators for that matter ) . for a discussion of the pointwise convergence rate see section [ oracle ] .[ asy - equiv]if , then is asymptotically equivalent to in the sense that for every similar statement holds for . for follows immediately from ( [ closeness_h_s_as_ls_unknown ] ) in section [ prfs ] and the fact that the family of distributions corresponding to is tight ; for this follows from the relation .\(i ) a variation of the proof of theorem [ thresh_consistency ] shows that in case of consistent tuning for the infeasible estimators additionally also for every , and that for the feasible estimators for every provided that .\(ii ) inspection of the proof shows that the conclusion of theorem thresh_consistency(c ) continues to hold if the supremum over is replaced by the supremum over an arbitrarily small neighborhood of and is held fixed at an arbitrary positive value .\(iii ) if and are replaced by and , respectively , in the displays in proposition [ ls_consistency ] and theorem [ thresh_consistency ] as well as in remark [ asy - equiv ] , the resulting statements remain true provided the suprema over are replaced by suprema over , where is an arbitrary real number .we next present the finite - sample distributions of the infeasible thresholding estimators .it will turn out to be convenient to give the results for scaled versions , where the scaling factor is a positive real number , but is otherwise arbitrary . _note that below we suppress the dependence of the distribution functions of the thresholding estimators on the scaling sequence _ _ _ in the notation ._ _ furthermore , observe that the finite - sample distributions depend on only through .[ 1]the cdf of is given by , equivalently , denotes pointmass at . [ 2]the cdf of is given by , equivalently , [ 3]the cdf of is given by are defined by , equivalently , the finite - sample distributions of , , and are seen to be non - normal .they are made up of two components , one being a multiple of pointmass at and the other one being absolutely continuous with a density that is generally bimodal . 
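the following monte carlo sketch (with hypothetical parameter values, known variance, and $\xi_{i,n}=1$) visualizes this structure for hard thresholding: an atom coming from the event that the component is set to exactly zero, plus an "excised-normal" continuous part with two modes.

```python
# a monte carlo sketch of the finite-sample distribution of a scaled and
# centered hard-thresholding estimator: pointmass plus a bimodal continuous
# part; known-variance case, xi = 1, threshold sigma * eta.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 100, 1.0, 200_000
eta = 1.5 / np.sqrt(n)          # conservative tuning: sqrt(n)*eta = 1.5
theta = 0.5 * eta               # a parameter value close to the threshold

z = theta + sigma / np.sqrt(n) * rng.standard_normal(reps)   # least squares
theta_h = z * (np.abs(z) > sigma * eta)                      # hard thresholding
centered = np.sqrt(n) * (theta_h - theta)                    # scaled by sqrt(n)

atom = np.mean(theta_h == 0.0)                               # weight of the atom
print(f"atom at -sqrt(n)*theta = {-np.sqrt(n) * theta:.2f}, weight ~ {atom:.3f}")
# coarse text histogram of the absolutely continuous part (nonzero outcomes)
vals = centered[theta_h != 0.0]
hist, edges = np.histogram(vals, bins=12, range=(-4, 4), density=True)
for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
    print(f"[{lo:5.2f}, {hi:5.2f}) {'#' * int(60 * h)}")
```

the histogram shows a gap around the excised interval and two separated lobes, in line with the "generally bimodal" description above.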
for more discussion and some graphical illustrations in a special case see ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) .[ diag]in the case where is diagonal , the estimators of the components and for are independent and hence the above results immediately allow one to determine the finite - sample distributions of the entire vectors , , and .in particular , this provides the finite - sample distribution of the lasso and the adaptive lasso in the diagonal case ( cf .remarks lasso and [ alasso ] ) .the finite - sample distributions of , , are obtained next . the same remark on the scaling as in the previous section applies here .[ 4]the cdf of is given by , equivalently , [ 5]the cdf of is given by , equivalently, [ 6]the cdf of is given by , equivalently , as in the known - variance case the distributions are a convex combination of pointmass and an absolutely continuous part . in case of hard - thresholding , the averaging with respect to the density smoothes the indicator functions leading to a continuous density function for the absolutely continuous part ( while in the known - variance case the density function is only piece - wise continuous , cf .figure 1 in ptscher and leeb ( 2009 ) ) .this is not so for soft - thresholding and adaptive soft - thresholding , where the averaging with respect to the density does not affect the indicator functions involved ; here the shape of the distribution is qualitatively the same as in the known - variance case ( figure 2 in ptscher and leeb ( 2009 ) and figure 1 in ptscher and schneider ( 2009 ) ) . in the case where is diagonal , the finite - sample distributions of the entire vectors , , and can be found from the distributions of , , and ( see remark [ diag ] ) by conditioning on and integrating with respect to . in particular , this provides the finite - sample distributions of the lasso and the adaptive lasso in the diagonal case ( cf. remarks [ lasso ] and alasso ) .we next derive the asymptotic distributions of the thresholding estimators under a moving - parameter ( and not only under a fixed - parameter ) framework since it is well - known that asymptotics based only on a fixed - parameter framework often lead to misleading conclusions regarding the performance of the estimators ( cf .also the discussion in section [ oracle ] ) .we first consider the infeasible versions of the thresholding estimators .[ lsdk_h]suppose that for given satisfying for large enough we have and where .\(a ) assume . set the scaling factor .suppose that the true parameters and satisfy .then converges weakly to the distribution with cdf corresponding measure being[this distribution reduces to a standard normal distribution in case or . ]\(b ) assume . set the scaling factor .suppose that the true parameters and satisfy .\1 . if , then converges weakly to .\2 . if , then converges weakly to .if and , for some , then converges weakly to [ lsdk_s]suppose that for given satisfying for large enough we have and where .\(a ) assume . set the scaling factor .suppose that the true parameters and satisfy .then converges weakly to the distribution with cdf corresponding measure being[this distribution reduces to a -distribution in case or . ]\(b ) assume . set the scaling factor .suppose that the true parameters and satisfy .then converges weakly to .[ lsdk_as]suppose that for given satisfying for large enough we have and where .\(a ) assume . 
set the scaling factor .suppose that the true parameters and satisfy .then converges weakly to the distribution with cdf in case , the corresponding measure being . in case ,the cdf converges weakly to , i.e. , to a standard normal distribution .[ in case the limit always reduces to a standard normal distribution . ]\(b ) assume .set the scaling factor .suppose that the true parameters and satisfy .\1 . if , then converges weakly to .\2 . if , then converges weakly to .\3 . if , then converges weakly to .observe that the scaling factors used in the above propositions are exactly of the same order as in the case of conservative as well as in the case of consistent tuning and thus correspond to the uniform rate of convergence in both cases . in the case of conservative tuningthe limiting distributions have essentially the same form as the finite - sample distributions , demonstrating that the moving - parameter asymptotic framework captures the finite - sample behavior of the estimators in a satisfactory way .in contrast , a fixed - parameter asymptotic framework , which corresponds to setting and in the above propositions , misrepresents the finite - sample properties of the thresholding estimators whenever but small , as the fixed - parameter limiting distribution is in case of hard - thresholding and adaptive soft - thresholding then always , regardless of the size of . for soft - thresholdingwe also observe a strong discrepancy between the finite - sample distribution and the fixed - parameter limit for which is given by .in particular , the above propositions demonstrate non - uniformity in the convergence of finite - sample distributions to their limit in a fixed - parameter framework . in the case ofconsistent tuning we observe an interesting phenomenon , namely that the limiting distributions now correspond to pointmasses ( but not always located at zero ! ) , or are convex combinations of two pointmasses in some cases when considering the hard - thresholding estimator .this essentially means that consistently tuned thresholding estimators are plagued by a bias - problem in that the `` bias - component '' is the dominant component and is of larger order than the `` stochastic variability '' of the estimator ., where we can achieve a limiting probability for that is strictly between and .that this randomness does not survive for the other two estimators in the limit seems to be connected to the fact that these estimators are continuous functions of the data , whereas is not . ] in a fixed - parameter framework we get the trivial limits for every value of in case of hard - thresholding and adaptive soft - thresholding . 
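to illustrate the consistent-tuning phenomenon just described, the following simulation sketch (with the hypothetical tuning $\eta_n=n^{-1/4}$, known variance, and $\xi_{i,n}=1$; all choices are assumptions of the sketch) contrasts a parameter drifting at the order of the threshold, for which the hard-thresholding estimator scaled by the uniform rate $1/\eta_n$ piles up on a pointmass away from zero, with a fixed nonzero parameter, for which the familiar $\sqrt{n}$ asymptotics apply.

```python
# a small simulation of the consistent-tuning dichotomy: drifting parameter
# (bias component dominates after scaling by 1/eta_n) versus fixed nonzero
# parameter (sqrt(n)-scaled error looks standard normal).
import numpy as np

rng = np.random.default_rng(2)
reps, sigma, zeta = 100_000, 1.0, 0.5

for n in (100, 10_000, 1_000_000):
    eta = n ** -0.25                      # consistent tuning
    theta_n = zeta * eta                  # parameter of the order of the threshold
    z = theta_n + sigma / np.sqrt(n) * rng.standard_normal(reps)
    th = z * (np.abs(z) > sigma * eta)
    scaled = (th - theta_n) / eta         # scaled by the uniform rate 1/eta_n
    print(n, "drifting theta:", "deletion prob", round(np.mean(th == 0.0), 3),
          "mean scaled error", round(scaled.mean(), 3))
    z2 = 1.0 + sigma / np.sqrt(n) * rng.standard_normal(reps)
    th2 = z2 * (np.abs(z2) > sigma * eta)
    print(n, "fixed theta=1:", "deletion prob", round(np.mean(th2 == 0.0), 3),
          "std of sqrt(n)*error", round(np.std(np.sqrt(n) * (th2 - 1.0)), 3))
```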
at first glancethis seems to suggest that we have used a scaling sequence that does not increase fast enough with , but recall that the scaling used here corresponds to the uniform convergence rate .we shall take this issue further up in section [ oracle ] .the situation is different for the soft - thresholding estimator where the fixed - parameter limit is , which reduces to only for ; this is a reflection of the well - known fact that soft - thresholding is plagued by bias problems to a higher degree than are hard - thresholding and adaptive soft - thresholding .we next show that the finite - sample cdfs of , , and and of their infeasible counterparts , , and , respectively , are uniformly ( with respect to the parameters ) close in the total variation distance ( or the supremum norm ) provided the number of degrees of freedom diverges to infinity fast enough .apart from being of interest in their own right , these results will be instrumental in the subsequent section . we note that the results in theorem [ closeness ] below hold for any choice of the scaling factors . [ closeness ] suppose that for given satisfying for large enough we have as .then .[ n - k]in case of conservative tuning , the condition is always satisfied if .[ in fact it is then equivalent to or . ] in case of consistent tuning is clearly a weaker condition than .however , in general , a sufficient condition for is that and .[ scaleinv]suppose that holds as .if does not converge to zero as , remark [ weekend ] shows that none of the convergence results in theorem [ closeness ] holds .[ to see this note that the variable deletion probabilities constitute the weight of the pointmass in the respective distribution functions .] this shows that the condition in the above theorem can not be weakened ( at least in case holds ) .we next obtain the limiting distributions of , , and in a moving - parameter framework under conservative tuning .[ htconservative](hard - thresholding with conservative tuning ) suppose that for given satisfying for large enough we have and where . set the scaling factor .suppose that the true parameters and satisfy .\(a ) if is eventually constant equal to , say , then converges weakly to the distribution with cdf corresponding measure being[the distribution reduces to a standard normal distribution in case or . ]\(b ) if holds , then converges weakly to the distribution given in proposition [ lsdk_h](a ) .[ stconservative](soft - thresholding with conservative tuning ) suppose that for given satisfying for large enough we have and where . set the scaling factor .suppose that the true parameters and satisfy .\(a ) if is eventually constant equal to , say , then converges weakly to the distribution with cdf corresponding measure being[the atomic part in the above expression is absent in case .furthermore , the distribution reduces to a standard normal distribution if . ]\(b ) if holds , then converges weakly to the distribution given in proposition [ lsdk_s](a ) .[ astconservative](adaptive soft - thresholding with conservative tuning ) suppose that for given satisfying for large enough we have and where . set the scaling factor .suppose that the true parameters and satisfy .\(a ) suppose is eventually constant equal to , say .then converges weakly to the distribution with cdf in case , the corresponding measure being given by . in case , the cdf converges weakly to , i.e. , a standard normal distribution .[ if , the limit always reduces to a standard normal distribution . 
]\(b ) if , then converges weakly to the distribution given in proposition [ lsdk_as](a ) .it transpires that in case of conservative tuning and we obtain exactly the same limiting distributions as in the known - variance case and hence the relevant discussion given at the end of section lsdkvc applies also here .[ that one obtains the same limits does not come as a surprise given the results in section [ uniform_close ] and the observation made in remark [ n - k ] . ] in the case , where is eventually constant , the limits are obtained from the limits in the known - variance case ( with replaced by ) by averaging with respect to the distribution of . againthe limiting distributions essentially have the same structure as the corresponding finite - sample distributions .the fixed - parameter limiting distributions ( corresponding to setting and in the above theorems ) again misrepresent the finite - sample properties of the thresholding estimators whenever but small , as the fixed - parameter limiting distribution is in case of hard - thresholding and adaptive soft - thresholding then always , regardless of the size of .for soft - thresholding we also observe a strong discrepancy between the finite - sample distribution and the fixed - parameter limit especially for but small , which is given by the distribution with pdf regardless of the size of . as a consequence, we again observe non - uniformity in the convergence of finite - sample distributions to their limit in a fixed - parameter framework also in the case where the number of degrees of freedom is ( eventually ) constant .we next derive the limiting distributions of , , and in a moving - parameter framework under consistent tuning .[ htconsistent](hard - thresholding with consistent tuning ) suppose that for given satisfying for large enough we have and . set the scaling factor .suppose that the true parameters and satisfy .\(a ) if is eventually constant equal to , say , then converges weakly to[the above display reduces to for . ]\(b ) if holds , then \1 . implies that converges weakly to . implies that converges weakly to .\3 . and that converges weakly to for some .\4 . and with imply that converges weakly to for some .[ note that the above display reduces to if , and to if . ] \5 . and imply that converges weakly to for some .[ stconsistent](soft - thresholding with consistent tuning ) suppose that for given satisfying for large enough we have and . set the scaling factor .suppose that the true parameters and satisfy .\(a ) if is eventually constant equal to , say , then converges weakly to the distribution given by we recall the convention that for .[ in case , the atomic part in ( soft_large_sample_unknown_density_c ) is absent and ( soft_large_sample_unknown_density_c ) reduces to . ]\(b ) if holds , then converges weakly to .[ astconsistent](adaptive soft - thresholding with consistent tuning ) suppose that for given satisfying for large enough we have and . set the scaling factor .suppose that the true parameters and satisfy .\(a ) suppose is eventually constant equal to , say .then converges weakly to the distribution with cdf in case , and to the distribution with cdf in case .furthermore , converges weakly to if .[ in case , the distribution has a jump of height at and is otherwise absolutely continuous .in particular , it reduces to in case . ]\(b ) if holds , then \1 . implies that converges weakly to , \2 . implies that converges weakly to , \3 . implies that converges weakly to . 
we know from theorem [ closeness ] that we obtain the same limiting distributions for , , and as for , , and , respectively , provided diverges to infinity sufficiently fast in the sense that .the theorems in this section now show that for the soft - thresholding as well as for the adaptive soft - thresholding estimator we actually get the same limiting distribution as in the unknown - variance case whenever diverges even if is violated. however , for the hard - thresholding estimator the picture is different , and in case diverges but is violated , limit distributions different from the known - variance case arise ( these limiting distributions still being convex combinations of two pointmasses , but with weights different from the known - variance case ) .it seems that this is a reflection of the fact that the hard - thresholding estimator is a discontinuous function of the data , whereas the other two estimators considered depend continuously on the data .the fixed - parameter limiting distributions for all three estimators are again the same as in the known - variance case . in the case where the degrees of freedom are eventually constant, the limiting distribution of the hard - thresholding estimator is again a convex combination of two pointmasses , with weights that are in general different from the known - variance case .however , for the soft - thresholding as well as for the adaptive soft - thresholding estimator the limiting distributions can also contain an absolutely continuous component .this component seems to stem from an interaction of the more pronounced `` bias - component '' ( as compared to hard - thresholding ) with the nonvanishing randomness in the estimated variance .the fixed - parameter limiting distributions for hard - thresholding and adaptive soft - thresholding are again given by for all values of as in the known - variance case , whereas for soft - thresholding the fixed - parameter limiting distribution is only for and otherwise has a pdf given by ( as compared to a limit of in the known - variance case ) .as already mentioned at the end of sections [ lsdkvc ] and [ consistent ] , under consistent tuning the _ fixed - parameter _ limiting distributions of the hard - thresholding and of the adaptive soft - thresholding estimator in the known - variance as well as in the unknown - variance case always degenerate to pointmass at zero .recall that in these results the estimators ( after centering at ) are scaled by , which corresponds to the uniform convergence rate .we next show that if the estimators are scaled by instead , a limit distribution under _ fixed - parameter _ asymptotics arises that is not degenerate in general ( under an additional condition on the tuning parameter in case of adaptive soft - thresholding ) .in fact , we show that the hard - thresholding as well as the adaptive soft - thresholding estimators then satisfy what has been called the `` oracle - property '' .however , it should be kept in mind that with this faster scaling sequence the centered estimators are no longer stochastically bounded in a moving - parameter framework ( for certain sequences of parameters ) , cf .theorem thresh_consistency .this shows the fragility of the `` oracle - property '' , which is a fixed - parameter concept , and calls into question the statistical significance of this notion .for a more extensive discussion of the `` oracle - property '' and its consequences see leeb and ptscher ( 2008 ) , ptscher and leeb ( 2009 ) , and ptscher and schneider ( 2009 
) .[ oracle_1]let be given .suppose that for given satisfying for large enough we have and .\(a ) as well as converge in distribution to when , and to when .\(b ) as well as converge in distribution to when , and to when , provided the tuning parameter additionally satisfies for .inspection of the proof of part ( b ) given in section prfs_ls shows that the condition is used for the result only in case .if now with , inspection of the proof shows that then in case we have that , where is standard normal and is independent of .hence , we see that the distribution of asymptotically behaves like the convolution of an -distribution and the distribution of times a chi - square distributed random variable with degrees of freedom ( if this reduces to an -distribution ) . if , then is stochastically unbounded .note that this shows that the consistently tuned adaptive soft - thresholding estimator even in a fixed - parameter setting has a convergence rate slower than if and if the tuning parameter is `` too large '' in the sense that .the same conclusion applies to the infeasible estimator ( with the simplification that one always obtains an -distribution in case with ) .we further illustrate the fragility of the fixed - parameter asymptotic results under a -scaling obtained above by providing the moving - parameter limits under this scaling .let denote the cdf of , and define and analogously .the proofs of the subsequent propositions are completely analogous to the proofs of theorem 9 in ptscher and leeb ( 2009 ) and theorem 5 in ptscher and schneider ( 2009 ) , respectively .[ oracle_h](hard - thresholding ) suppose that for given satisfying for large enough we have and .suppose that the true parameters and satisfy and .[ note that in case the convergence of already follows from that of , and is then given by . ]. then weakly to if ; if the total mass of escapes to , in the sense that for every if , and that for every if .suppose . then weakly to .suppose and for some .then converges to every .[ in case the limit reduces to a standard normal distribution . ][ oracle_as](adaptive soft - thresholding ) suppose that for given satisfying for large enough we have and .suppose that the true parameters and satisfy .\1 . if and , then weakly to . \2 .the total mass of escapes to or in the following cases : if , or if and , or if and , then for every .if , or if and , or if and , then for every .\3 . if and , then weakly to .it is easy to see that setting and in proposition [ oracle_h ] immediately recovers the `` oracle - property '' for .similarly , we recover the `` oracle property '' for from proposition [ oracle_as ] provided .the propositions also characterize the sequences of parameters along which the mass of the distributions of the hard - thresholding and the adaptive soft - thresholding estimator escapes to infinity ; loosely speaking these are sequences along which the bias of the estimators exceeds all bounds .the theorems in section [ uniform_close ] also show that the last two propositions above carry over immediately to the unknown - variance case whenever sufficiently fast such that holds . 
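to make the fragility of the `` oracle - property '' concrete , the following hedged python sketch ( not from the paper ; known variance , and a particular consistent tuning sequence chosen only for illustration ) simulates hard - thresholding of a single gaussian coordinate . pointwise in the parameter the estimator behaves like the oracle , but along a local sequence proportional to the threshold the centered and scaled estimator drifts off to infinity , matching the escape of mass described in the propositions above .

```python
import numpy as np

rng = np.random.default_rng(1)

def hard(y, t):
    return np.where(np.abs(y) > t, y, 0.0)

reps = 100_000
for n in [100, 10_000, 1_000_000]:
    t_n = n ** (-0.25)      # consistent tuning: t_n -> 0 while sqrt(n) * t_n -> infinity
    cases = [("theta = 0           ", 0.0),
             ("theta = 0.5 (fixed) ", 0.5),
             ("theta_n = t_n / 2   ", t_n / 2)]   # local ("moving-parameter") sequence
    for label, theta in cases:
        y = theta + rng.standard_normal(reps) / np.sqrt(n)
        z = np.sqrt(n) * (hard(y, t_n) - theta)   # centered and scaled at the uniform rate
        print(f"n={n:8d}  {label}  P(est=0)={np.mean(hard(y, t_n) == 0):.3f}"
              f"  median|z|={np.median(np.abs(z)):7.2f}")
```

in the first two cases the output reproduces the pointwise `` oracle '' behavior , while in the third case the reported median grows like n to the power 1/4 , i.e. the scaled estimator is not stochastically bounded along this sequence .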
to save space, we do not extend these two propositions to the case where the latter condition fails to hold .the situation is somewhat different for the soft - thresholding estimator .it follows from theorem [ stconsistent ] that the distribution of does not degenerate to pointmass at zero ( in fact , has no mass at zero ) if and is held fixed .consequently , is also the fixed - parameter convergence rate of , in the sense that scaling with a faster rate ( e.g. , ) leads to the escape of the total mass of the finite - sample distribution of the so - scaled ( and centered ) estimator to .for we get with the same argument as for hard - thresholding that converges to . for the infeasible version situation is identical .we conclude by a result analogous to propositions oracle_h and [ oracle_as ] .the proof of this result is completely analogous to the proof of theorem 10 in ptscher and leeb ( 2009 ) .( soft - thresholding ) suppose that for given satisfying for large enough we have and .suppose that the true parameters and satisfy .then weakly to if ; and if , the total mass of escapes to , in the sense that for every if , and that for every if .again , this proposition immediately extends to the unknown - variance case whenever sufficiently fast such that holds .we abstain from extending the result to the case where the latter condition fails to hold .[ costfree2](i ) the convergence conditions on the various quantities involving and ( and on ) in the propositions in sections [ lsdkvc ] and [ oracle ] as well as in the theorems in section [ lsdukvc ] are essentially cost - free for the same reason as explained in remark [ costfree ] .\(ii ) we note that all possible forms of the moving - parameter limiting distributions in the results in this section already arise for sequences belonging to an arbitrarily small neighborhood of zero ( and with fixed ) .consequently , the non - uniformity in the convergence to the fixed - parameter limits is of a local nature .ptscher and leeb ( 2009 ) and ptscher and schneider ( 2009 ) present impossibility results for estimating the finite - sample distribution of the thresholding estimators considered in these papers .in the present context , corresponding impossibility results could be derived under appropriate assumptions .we abstain from presenting such results .as has been discussed in remarks [ lasso ] and [ alasso ] in section model , the soft - thresholding estimator coincides with the lasso , and the adaptive soft - thresholding estimator coincides with the adaptive lasso in case of orthogonal design .a natural question now is if the distributional results for the ( adaptive ) soft - thresholding estimator derived in this paper are in any way indicative for the distribution of the ( adaptive ) lasso in case of non - orthogonal design . in order to gain some insight into this we provide a simulation study to compare the finite - sample distributions of the respective estimators .we simulate the lasso estimator as defined in remark [ lasso ] ( with and not depending on ) and the adaptive lasso estimator as defined in remark alasso ( with not depending on ) and show histograms of where stands for the -th component of lasso or adaptive lasso .[ the scaling used here is chosen on the basis that with this scaling the -th component of the least - squares estimator is standard normally distributed . 
] we set and , resulting in degrees of freedom . two different types of designs are considered : for design i we use with . more concretely , is partitioned into blocks of size and each of these blocks is set equal to with , the cholesky factorization of . the value of is set equal to , , and , implying condition numbers for of , , and , respectively . design ii is an `` equicorrelated '' design . here we set the matrix comprised of the first rows of equal to , where is the matrix with all components equal to and is a real number greater than . the remaining entries of are all set equal to . we choose three values for : first , which implies a correlation of between any two regressors and a condition number of for ; second , which implies a correlation of and a condition number of ; and which implies a correlation of and a condition number of . for either type of design we proceed as follows : for the given parameters and , we simulate data vectors and compute the corresponding estimator , i.e. , the lasso and adaptive lasso as specified above . we set , implying that the thresholding estimators delete a given irrelevant variable with probability . for the non - zero outcomes of the estimators , we plot the histogram of which is normalized such that its mass corresponds to the proportion of the non - zero values . the zero values are accounted for by plotting `` pointmass '' with height representing the proportion of zero values , i.e. , the simulated variable selection probability . for the purpose of comparison the graph of the distribution of the corresponding ( centered and scaled ) thresholding estimator ( using the same ) as derived analytically in section [ fs ] is then superimposed in red color . the results of the simulation study are presented in figures 1 - 12 . in comparing the adaptive lasso with the adaptive soft - thresholding estimator , we find remarkable agreement between the respective marginal distributions in all cases where the design matrix is not too multicollinear , see figures 1 , 2 , and 4 . for the cases where the design matrix is no longer well - conditioned a difference between the respective marginal distributions emerges but seems to be surprisingly moderate , see figures 3 , 5 , and 6 . turning to the lasso and its thresholding counterpart , we find a similar situation with a somewhat stronger disagreement between the respective marginal distributions . again in the cases where the design matrix is well - conditioned ( figures 7 , 8 , and 10 ) the difference is less pronounced than in the case of an ill - conditioned design matrix ( figures 9 , 11 , and 12 ) . we have also experimented with other values of , , , , , and and have found the results to be qualitatively the same for these choices . * proof of proposition [ select_prob_pointwise ] : * we first prove part ( a ) . rewrite as first that and fix . by a standard subsequence argument we may assume without loss of generality that converges to a constant which by our maintained assumption ( [ xi ] ) must satisfy . now both converge to , which is non - zero , and consequently both arguments in ( [ fi ] ) converge to . since is continuous on , the expression ( [ fi ] ) converges to zero .
to prove the converse , now assume that ( [ fi ] ) converges to zero for all . by a standard subsequence argument, we may assume without loss of generality that converges to a constant satisfying .suppose holds .choose such that holds .it follows that and eventually have opposite signs and are bounded away from zero . by our maintained assumption ( [ xi ] ) , the same is then true for the arguments in ( [ fi ] ) leading to a contradiction .hence must hold , completing the proof of part ( a ) .parts ( b ) and ( c ) are obvious since whenever . * proof of proposition [ select_prob_moving_par ] : * part ( a ) follows immediately from ( [ select_prob ] ) and the assumptions . to prove part ( b ) we use ( [ select_prob ] ) to write first and the second claim then follow immediately . for the third claim , assume first that .then case is handled analogously . * proof of proposition [ select_prob_pointwise_unknown ] : * we prove part ( b ) first . observe that \rho _ { n - k}(s)ds \\ & = t_{n - k}\left ( n^{1/2}\eta _ { i , n}\right ) -t_{n - k}\left ( -n^{1/2}\eta _ { i , n}\right ) .\end{aligned}\]]by a subsequence argument it suffices to prove the result under the assumption that converges in . if the limit is finite , then is eventually constant and the result follows since every -distribution has unbounded support . if then denotes the supremum norm . since if by polya s theorem , the result follows .part ( c ) is proved analogously .we next prove part ( a ) .observe that the collection of distributions corresponding to is tight on , meaning that for every there exist such that and .note that the map is monotonically nondecreasing .hence, ( , respectively ) converges to zero if and only if does so , part ( a ) follows from proposition select_prob_pointwise applied to the estimators . * proof of theorem [ select_prob_moving_par_unknown ] : * ( a ) set for . by proposition [ select_prob_moving_par ]we have that converges to for all , where for .since as well as are continuous functions of , are monotonically nondecreasing in , and have the property that their limits for are while the limits for are , it follows from polya s theorem that the convergence is uniform in .but then using ( [ select_prob_unknown ] ) gives .this completes the proof in case eventually ; in case observe that then converges to as the distribution corresponding to converges weakly to pointmass at and the integrand is bounded and continuous .\(b ) observe that converges to for and to for by proposition [ select_prob_moving_par ] applied to the estimator .now ( [ select_prob_unknown ] ) and dominated convergence deliver the result in ( b1 ) .next consider ( b2 ) : suppose first that .choose small enough such that .then , recalling that is monotonically nondecreasing in , eq .( [ select_prob_unknown ] ) gives the integral on the r.h.s. converges to since , and the probability on the r.h.s. converges to by proposition [ select_prob_moving_par ] applied to the estimator .this completes the proof for the case .next assume that .choose small enough such that holds .then from ( select_prob_unknown ) we have is monotonically nondecreasing in and is not larger than . since and the second term on the r.h.s .goes to zero , while the first term goes to zero by proposition [ select_prob_moving_par ] applied to the estimator .next we prove 3.&4 . and assume first . 
then using eq .( select_prob_unknown ) and performing the substitution we obtain ( recalling that is zero for negative arguments and using the abbreviations and ) \\ & & \times \left ( 2\left ( n - k\right ) \right ) ^{-1/2}\rho _ { n - k}(\left ( 2\left ( n - k\right ) \right ) ^{-1/2}t+1)dt \\ & = & \int_{-\infty } ^{\infty } \left [ \phi \left ( r_{i , n}+n^{1/2}\eta _ { i , n}\left ( 2\left ( n - k\right ) \right ) ^{-1/2}t\right ) -\phi \left ( r_{i , n}^{\ast } -n^{1/2}\eta _ { i , n}\left ( 2\left ( n - k\right ) \right ) ^{-1/2}t\right ) \right ] \\ & & \times \phi ( t)dt+o(1).\end{aligned}\]]the indicated term in the above display is by the lemma in the appendix and because the expression in brackets inside the integral is bounded by .since and , the integrand converges to under 3. to under 4 .the dominated convergence theorem then completes the proof .the case is treated similarly .it remains to prove 5 .again assume first .define and and rewrite the above display as \\ & & \times \phi ( t)dt+o(1).\end{aligned}\]]observe that and . the expression in brackets inside the integral hence converges to for and to for . by dominated convergencethe integral converges to .the case is treated similarly . * proof of proposition [ closeness_prob ] : * observe that a trivial modification of lemma 13 in ptscher and schneider ( 2010 ) we conclude that for every there exists a real number such that every .using the fact , that is globally lipschitz with constant , this gives proves the result since can be made arbitrarily small . * proof of theorem [ thresh_consistency ] : * ( a ) observe that for any of the estimators .hence , consistency of under and follows immediately from proposition ls_consistency(a ) since the distributions of are tight .conversely , suppose is consistent .then clearly whenever must hold , which implies by proposition [ select_prob_pointwise_unknown](a ) .this then entails consistency of by ( closeness_h_s_as_ls_unknown ) and tightness of the distributions of ; this in turn implies by proposition [ ls_consistency](a ) .\(b ) since , it suffices to prove the second claim in ( b ) .now for every real we have gives the first term on the r.h.s .can be made arbitrarily small in view of proposition [ ls_consistency](b ) by choosing large enough .the second term on the r.h.s . can be written as ( cf .( [ select_prob_unknown ] ) ) choose as in the proof of proposition [ select_prob_pointwise_unknown ] . using continuity of and the fact that the probability appearing on the r.h.s .above is monotonically increasing as approaches from above , this can be further bounded by last inequality holding for and since and .choosing sufficiently large ( depending on ) completes the proof for .next observe that similarly hold .since the set of distributions of ( i.e. , the set of distributions corresponding to ) is tight as already noted , this proves ( b ) then also for and .\(c ) by a subsequence argument we can reduce the argument to the case where and converges in .suppose first that : observe that then eventually .choose and such that , where does not depend on and holds , and set the other coordinates of to arbitrary values ( e.g. , equal to zero ) . observe that there exists a constant such that : if converges to a finite limit , i.e. , is eventually constant , the claim follows from theorem [ select_prob_moving_par_unknown](b1 ) ; if , then use theorem select_prob_moving_par_unknown(b2 ) . 
by ( [ nec ] )we have for and a suitable that all sufficiently large .but this is only possible if holds eventually , implying that .next consider the case where : observe that then is of the same order as . then define and such that , where does not depend on and holds , and set the other coordinates of to arbitrary values ( e.g. , equal to zero ) . observe that then ( [ delta ] ) also holds , in view of theorem [ select_prob_moving_par_unknown](a1 ) in case is eventually constant , and in view of theorem select_prob_moving_par_unknown(a2 ) in case .the rest of the proof is then similar as before .it remains to consider the case : it follows from ( [ closeness_h_s_as_ls_unknown ] ) , the assumptions on and , from , and from the observation that is -distributed , that converges in distribution to a standard normal distribution for each fixed and .hence , stochastic boundedness of for each ( and a fortiori ( nec ) ) necessarily implies that .\(d ) the proof for is similar and in fact simpler : note that now holds and that in the proof of ( b ) the integration over can simply be replaced by evaluation at . for ( c )one uses proposition [ select_prob_moving_par ] instead of theorem select_prob_moving_par_unknown . * proofs of propositions [ 1 ] , [ 2 ] , and [ 3 ] : * observe that that is .furthermore , we have and with and in ptscher and leeb ( 2009 ) and making use of eq .( 4 ) in that reference immediately gives the result for .the result for then follows from elementary calculations .the result for follows similarly by making use of eq .( 5 ) instead of eq .( 4 ) in ptscher and leeb ( 2009 ) .the result for then follows from elementary calculations .the results for and follow similarly by making use of eqs .( 9)-(11 ) in ptscher and schneider ( 2009 ) . * proofs of propositions [ 4 ] , [ 5 ] , and [ 6 ] : * we have we have used independence of and allowing us to replace by in the relevant formulae , cf .leeb and ptscher ( 2003 , p. 110 ) .substituting ( hard_finite_sample ) , with replaced by , into the above equation gives ( [ hard_finite_sample_unknown ] ) .representing as an integral of given in ( [ hard_finite_sample_density ] ) and applying fubini s theorem then gives ( hard_finite_sample_unknown_density ) .similarly , we have ( [ soft_finite_sample ] ) , with replaced by , into the above equation and noting that gives ( soft_finite_sample_unknown ) .elementary calculations then yield ( soft_finite_sample_unknown_density ) . finally ,we have ( [ adaptive_finite_sample ] ) , with replaced by , into the above equation gives ( adaptive_finite_sample_unknown ) .elementary calculations then yield ( adaptive_finite_sample_unknown_density ) . * proof of proposition [ lsdk_h ] :* the proof of ( a ) is completely analogous to the proof of theorem 4 in ptscher and leeb ( 2009 ) , whereas the proof of ( b ) is analogous to the proof of theorem 17 in the same reference . * proof of proposition [ lsdk_s ] :* the proof of ( a ) is completely analogous to the proof of theorem 5 in ptscher and leeb ( 2009 ) , whereas the proof of ( b ) is analogous to the proof of theorem 18 in the same reference . * proof of proposition [ lsdk_as ] :* the proof of ( a ) is completely analogous to the proof of theorem 4 in ptscher and schneider ( 2009 ) , whereas the proof of ( b ) is analogous to the proof of theorem 6 in the same reference . 
* proof of theorem [ closeness ] : * observe that the total variation distance between two cdfs is bounded by the sum of the total variation distances between the corresponding discrete and continuous parts .furthermore , recall that the total variation distance between the absolutely continuous parts is bounded from above by the -distance of the corresponding densities .hence , from ( [ hard_finite_sample_density ] ) and ( [ hard_finite_sample_unknown_density ] ) we obtain where \right .\\ & & + \left .\left [ \phi \left ( n^{1/2}\left ( -\theta _ { i}/(\sigma \xi _ { i , n})-\eta _ { i , n}(s\wedge 1)\right ) \right ) -\phi \left ( n^{1/2}\left ( -\theta _ { i}/(\sigma \xi _ { i , n})-\eta _ { i , n}(s\vee 1)\right ) \right ) \right ] \right\ } \rho _ { n - k}(s)ds,\end{aligned}\]]where we have made use of fubini s theorem and performed an obvious substitution . by a trivial modification of lemma 13 in ptscher and schneider ( 2010 ) we conclude that for every there exists a real number such that every .using the fact , that is globally lipschitz with constant , this gives r.h.s .now converges to because .since was arbitrary , this shows that converges to zero .note also that has already been shown to converge to zero in proposition closeness_prob .this completes the proof for the hard - thresholding estimator . with the same argument as above we obtain we have used ( [ soft_finite_sample_density ] ) and ( soft_finite_sample_unknown_density ) .now, where we have used fubini s theorem and an obvious substitution .it is elementary to verify that that holds .consequently , using ( [ c ] ) we obtain we have again used the fact that is globally lipschitz with constant .since and was arbitrary , the proof for soft - thresholding is complete , because goes to zero by proposition [ closeness_prob ] .finally , from ( [ adaptive_finite_sample ] ) and ( adaptive_finite_sample_unknown ) we obtain that on the one hand and are bounded by , and that on the other hand , using the lipschitz - property of and the mean - value theorem, is a mean - value between and which may depend on .the supremum over on the r.h.s .is now clearly assumed for , resulting in the bound same bound is obtained for in exactly the same way .consequently , using ( [ c ] ) we obtain .\end{aligned}\]]since and was arbitrary , the proof is complete .* proof of theorem [ htconservative ] : * ( a ) the atomic part of as given in ( hard_finite_sample_unknown_density ) clearly converges weakly to the atomic part of ( [ hard_large_sample_unknown_density_a ] ) in view of theorem select_prob_moving_par_unknown(a1 ) and the fact that by assumption ; also note that the atomic part converges to the zero measure in case or as then the total mass of the atomic part converges to zero .we turn to the absolutely continuous part next . for later usewe note that what has been established so far also implies that the total mass of the absolutely continuous part converges to the total mass of the absolutely continuous part of the limit , since it is easy to see that the limiting distribution given in the theorem has total mass .the density of the absolutely continuous part of ( [ hard_finite_sample_unknown_density ] ) takes the form that for given , the indicator function in the above display converges to for lebesgue almost all . [ if , this is necessarily true only for with . 
] since eventually , we get from the dominated convergence theorem that the above display converges to for every ( for every with in case ) , which is the density of the absolutely continuous part in ( [ hard_large_sample_unknown_density_a ] ) . since the total mass of the absolutely continuous part is preserved in the limit as shown above , the proof is completed by scheff s lemma .\(b ) follows immediately from proposition [ lsdk_h ] and theorem closeness . * proof of theorem [ stconservative ] : * ( a ) the atomic part of as given in ( soft_finite_sample_unknown_density ) converges weakly to the atomic part of ( [ soft_large_sample_unknown_density_a ] ) in view of theorem select_prob_moving_par_unknown(a1 ) and the fact that by assumption ; also note that the atomic part converges to the zero measure in case or as then the total mass of the atomic part converges to zero .we turn to the absolutely continuous part next . for later usewe note that what has been established so far also implies that the total mass of the absolutely continuous part converges to the total mass of the absolutely continuous part of the limit , since it is easy to see that the limiting distribution given in the theorem has total mass .the density of the absolutely continuous part of ( [ soft_finite_sample_unknown_density ] ) takes the form that for given , the functions converge to , respectively , for all .since eventually , we then get from the dominated convergence theorem that the above display converges to every ; the last display is precisely the density of the absolutely continuous part in ( soft_large_sample_unknown_density_a ) . since the total mass of the absolutely continuous part is preserved in the limit as shown above , the proof is completed by scheff s lemma .\(b ) follows immediately from proposition [ lsdk_s ] and theorem closeness . * proof of theorem [ astconservative ] : * ( a ) observe that and reduce to , as well as converge for every to , if , and the dominated convergence theorem shows that the weights of the indicator functions in ( [ above ] ) converge to the corresponding weights in ( adaptive_soft_large_sample_unknown_cdf_a ) .since converges to by assumption , it follows that for every we have convergence of to the cdf given in ( adaptive_soft_large_sample_unknown_cdf_a ) .this proves part ( a ) in case . in case , we have that converges to by an application of proposition 15 in ptscher and schneider ( 2009 ) .consequently , the limit of is now .again applying the dominated convergence theorem and observing that for each we have that is eventually zero , shows that converges to .the case is proved analogously .\(b ) follows immediately from proposition [ lsdk_as ] and theorem closeness . * proof of theorem [ htconsistent ] : * observe that is standard normally distributed .the expressions in front of the indicator functions now converge to and , respectively , in probability as .inspection of the cdf of then shows that this cdf converges weakly to .part ( b ) of theorem select_prob_moving_par_unknown completes the proof of both parts of the theorem in case .if the same theorem shows that the weak limit is now . 
* proof of theorem [ stconsistent ] : * ( a ) the atomic part of as given in ( soft_finite_sample_unknown_density ) converges weakly to the atomic part given in ( [ soft_large_sample_unknown_density_c ] ) by theorem select_prob_moving_par_unknown(b1 ) .the density of the absolutely continuous part of can be written as the convention that for .note that with this convention is then a bounded continuous function on the real line .since and clearly converge weakly to and , respectively , the density of the absolutely continuous part of is seen to converge to for every .an application of scheff s lemma then completes the proof , noting that the total mass of the absolutely continuous part of converges to the total mass of the absolutely continuous part of ( soft_large_sample_unknown_density_c ) as the same is true for the atomic part in view of theorem [ select_prob_moving_par_unknown](b1 ) ( and since the distributions involved all have total mass ) .\(b ) rewrite as is a sequence of -distributed random variables .observe that converges to and that converges to zero in -probability . now ,if , then by theorem select_prob_moving_par_unknown(b2 ) , and hence converges to in -probability .this proves the result in case . in case we have that , also converges to in -probability since .consequently , converges to in -probability , which proves the case . finally , if , then ( [ sign ] ) continues to hold and we can write refers to a term that converges to zero in -probabilitythis then completes the proof of part ( b ) . * proof of theorem [ astconsistent ] : * ( a ) assume first that holds .note that and now reduce to .\]]first , for we see that eventually reduces to , for we see that for all whereas for we have that for and for .as a consequence , we obtain from the dominated convergence theorem that converges to for and to for .second , for note that eventually reduces to that for all in this case .this shows that for we have that converges to .but this proves the result for the case . in case the same reasoning shows that now eventually reduces to all , and that now for we have for all whereas for we have that for all .this shows that converges weakly to in case .the proof for the case is completely analogous .\(b ) rewrite as is a sequence of -distributed random variables .note that converges to by assumption .now , if , then by theorem select_prob_moving_par_unknown(b2 ) , hence converges to in -probability , establishing the result in this case .furthermore , for rewrite the above display as the convention that in case .if ( including the case ) then by theorem [ select_prob_moving_par_unknown](b2 ) , and hence the last display shows that converges to in -probability , establishing the result in this case . finally , if holds , then the last line in the above display reduces to , completing the proof of part ( b ) . 
* proof of proposition [ oracle_1 ] : * ( a ) by a subsequence argument we may assume that converges in .applying theorem [ select_prob_moving_par_unknown](b ) we obtain that converges to in case , and to in case .observe that on the event , while on the event .the result then follows in view of the fact that is standard normally distributed .the proof for is similar using proposition select_prob_moving_par(b ) instead of theorem select_prob_moving_par_unknown(b ) ( it is in fact simpler as the subsequence argument is not needed ) .\(b ) again we may assume that converges in .by the same reference as in the proof of ( a ) we obtain that converges to in case , and to in case .now on the event and the claim for follows immediately . on the event we have from the definition of the estimator , if , then the event has probability approaching as shown above .hence , we have on events that have probability tending to and by the assumption and since ; also note that is stochastically bounded since the collection of distributions corresponding to with is tight on as was noted earlier .the proof for is again similar ( and simpler ) by using proposition select_prob_moving_par(b ) instead of theorem select_prob_moving_par_unknown(b ) . , s. & s. a. ruzinsky ( 1994 ) : an algorithm for the minimization of mixed and norms with applications to bayesian estimation . _ _ ieee transactions on signal processing _ _ 42 , 618 - 627 .bauer , p. , ptscher , b. m. & p. hackl ( 1988 ) : model selection by multiple test procedures . _ _ statistics _ _ 19 , 3944 .donoho , d. l. , johnstone , i. m. , kerkyacharian , g. , d. picard ( 1995 ) : wavelet shrinkage : asymptopia ?with discussion and a reply by the authors . _ journal of the royal statistical society series b _ 57 , 301369 .fan , j. & r. li ( 2001 ) : variable selection via nonconcave penalized likelihood and its oracle properties ._ journal of the american statistical association _ 96 , 1348 - 1360 .fan , j. & h. peng ( 2004 ) : nonconcave penalized likelihood with a diverging number of parameters . _annals of statistics _ 32 , 928961 .feller , w. ( 1957 ) : _ an introduction to probability theory and its applications , volume 1 ._ 2nd ed . ,wiley , new york .frank , i. e. & j. h. friedman ( 1993 ) : a statistical view of some chemometrics regression tools ( with discussion ) . _ _ technometrics _ _ 35 , 109 - 148 .ibragimov , i. a. ( 1956 ) : on the composition of unimodal distributions ._ _ theory of probability and its applications _ _ 1 , 255 - 260 .knight , k. & w. fu ( 2000 ) : asymptotics for lasso - type estimators ._ _ annals of statistics _ _ 28 , 1356 - 1378 .leeb , h. & b. m. ptscher ( 2003 ) : the finite - sample distribution of post - model - selection estimators and uniform versus nonuniform approximations ._ econometric theory _ 19 , 100142 .leeb , h. & b. m. ptscher ( 2005 ) : model selection and inference : facts and fiction ._ econometric theory _ 21 , 2159 .leeb , h. & b. m. ptscher ( 2008 ) : sparse estimators and the oracle property , or the return of hodges estimator ._ _ journal of econometrics _ _ 142 , 201 - 211 .ptscher , b. m. ( 1991 ) : effects of model selection on inference ._ econometric theory _ 7 , 163185 .ptscher , b. m. ( 2006 ) : the distribution of model averaging estimators and an impossibility result regarding its estimation . _ ims lecture notes - monograph series _ 52 , 113129 .ptscher , b. m. & h. leeb ( 2009 ) : on the distribution of penalized maximum likelihood estimators : the lasso , scad , and thresholding . 
_ _ journal of multivariate analysis _ _ 100 , 2065 - 2082 .ptscher , b. m. & u. schneider ( 2009 ) : on the distribution of the adaptive lasso estimator ._ _ journal of statistical planning and inference _ _ 139 , 2775 - 2790 .ptscher , b. m. & u. schneider ( 2010 ) : confidence sets based on penalized maximum likelihood estimators in gaussian regression ._ electronic journal of statistics _ 10 , 334 - 360 .sen , p. k. ( 1979 ) : asymptotic properties of maximum likelihood estimators based on conditional specification ._ _ annals of statistics _ _ 7 , 1019 - 1033 .tibshirani , r. ( 1996 ) : regression shrinkage and selection via the lasso ._ journal of the royal statistical society series b _ 58 , 267 - 288 .zhang , c .- h .( 2010 ) : nearly unbiased variable selection under minimax concave penalty ._ _ annals of statistics _ _ 38 , 894 - 942 .zou , h. ( 2006 ) : the adaptive lasso and its oracle properties ._ _ journal of the american statistical association _ _ 101 , 1418 - 1429 .recall that for . observe that is the density of where denotes a chi - square distributed random variable with degrees of freedom . by the central limit theorem and the delta - method converges in distribution to a standard normal random variable . with the density of we have for we have for .since the cdf associated with is unimodal , this shows that the same is true for the cdf associated with .but then convergence in distribution of implies convergence of to in the -sense by a result of ibragimov ( 1956 ) , scheff s lemma , and a standard subsequence argument .
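the delta - method step in the above lemma can be checked numerically . the sketch below assumes that the quantity in question is the centered and scaled ratio of the estimated to the true standard deviation , i.e. sqrt(2(n - k)) ( sigma_hat / sigma - 1 ) with n - k chi - square degrees of freedom ; this identification , and the use of the kolmogorov distance as a stand - in for the convergence statement , are assumptions made only for illustration .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps = 200_000
for m in [5, 50, 500, 5000]:                  # m plays the role of n - k
    s = np.sqrt(rng.chisquare(m, reps) / m)   # sigma_hat / sigma
    z = np.sqrt(2 * m) * (s - 1.0)            # centered and scaled as in the lemma (assumed)
    d = stats.kstest(z, "norm").statistic     # kolmogorov distance to the standard normal
    print(f"n - k = {m:5d}   sup-distance to N(0,1): {d:.4f}")
```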
|
we study the distribution of hard- , soft- , and adaptive soft - thresholding estimators within a linear regression model where the number of parameters can depend on sample size and may diverge with . in addition to the case of known error - variance , we define and study versions of the estimators when the error - variance is unknown . we derive the finite - sample distribution of each estimator and study its behavior in the large - sample limit , also investigating the effects of having to estimate the variance when the degrees of freedom does not tend to infinity or tends to infinity very slowly . our analysis encompasses both the case where the estimators are tuned to perform consistent variable selection and the case where the estimators are tuned to perform conservative variable selection . furthermore , we discuss consistency , uniform consistency and derive the uniform convergence rate under either type of tuning . msc subject classification : 62f11 , 62f12 , 62j05 , 62j07 , 62e15 , 62e20 keywords and phrases : thresholding , lasso , adaptive lasso , penalized maximum likelihood , variable selection , finite - sample distribution , asymptotic distribution , variance estimation , uniform convergence rate , high - dimensional model , oracle property
|
i am grateful to the organizers of the dice2008 conference for the opportunity to present these ideas . at the conference itself i gave additional background with emphasis on the need to avoid `` initial conditions prejudice , '' in which the natural structure of language can lure one into circular arguments . this part of my lecture has been omitted from the present article and is mostly covered in . more recent work of mine on this is in which the compatibility of interacting opposite - arrow regions is shown , as well as in which macroscopic causality loses its status as an independent property of nature and is shown to be a consequence of general entropy increase , i.e. , the thermodynamic arrow . the article itself was significantly enhanced by extended discussions with amos ori and marco roncadelli . i am also grateful to bernard gaveau and huw price for helpful discussions . much of this interaction took place in the `` advanced study group , '' _ time : quantum and statistical mechanics aspects _ , of the max planck institute for the physics of complex systems , in dresden . this work was partly supported by the united states national science foundation grant phy 05 55313 . i am also grateful to the lewiner institute for theoretical physics for partial support during a visit to the technion .

schulman l s 2003 _ hyperbolic differential operators and related problems _ ed ancona v and vaillant j ( new york : marcel dekker ) pp 355 - 370 , proceedings of a june 2000 conference , in honor of jean vaillant . arxiv : cond - mat/0009139 10 sep 2000 . schulman l s 2005 - 6 _ time - related issues in statistical mechanics , _ lectures given at the institut de physique théorique , direction des sciences de la matière , cea - saclay and at the physics dept . , technion ( haifa ) , sec . 7 , current http://people.clarkson.edu/~schulman/ . schulman l s 2001 _ causality is an effect _ , in _ time s arrows , quantum measurement and superluminal behavior _ ed mugnai d , ranfagni a and schulman l s ( rome : consiglio nazionale delle ricerche ) pp 99 - 112
|
the puzzle of the thermodynamic arrow of time reduces to the question of how the universe could have had lower entropy in the past . i show that no special entropy lowering mechanism ( or fluctuation ) is necessary . as a consequence of expansion , at a particular epoch in the history of the universe a state that was near maximum entropy under the dominant short range forces becomes extremely unlikely , due to a switchover to newly dominant long range forces . this happened at about the time of decoupling , prior to which i make no statement about arrows . the role of cosmology in thermodynamics was first suggested by t. gold .

the thermodynamic arrow of time , basically the second law of thermodynamics , is often said to be one of the mysteries of nature , standing in stark contrast to the near time - symmetry of other dynamical laws accepted in physics . this article is less ambitious than other recent discussions of the problem , for example , ref . . i only want to explain the observed phenomenon in terms of other known and accepted pieces of physics . i will not go to the very early universe ( not even the first millennium ) , nor will i invoke unobserved phenomena . what i do assert is that there is less mystery than is alleged . the essential features of this argument have already appeared both in my own writings and in the writings of others . in the present article i will review and extend these arguments . as foreshadowed , the explanation will involve cosmology . notwithstanding the asserted limited scope , the claim is far from trivial : it relates local thermodynamics to the universe at large , a connection that is neither obvious nor intuitive . this leads me to a secondary objective of this presentation , namely to trace the origin of these ideas to their proper source , namely thomas gold , who in at least some of the current literature is either ignored or misquoted . around 1960 , 30 years after the discovery of the expansion of the universe , gold suggested that the thermodynamic arrow was a consequence of that expansion . that this was a significant step can be seen in an earlier article by wheeler and feynman in which they start from a time - symmetric electrodynamics and recover the usual time - asymmetric theory by means of their `` absorber theory '' . their work deals in an essential way with the issue of irreversibility ( for example in presenting a framework for dirac s calculation of radiation reaction ) and in some of their attempts to establish the absorber theory wheeler and feynman make reference to the universe at large . nevertheless , the essential feature , that the universe is _ expanding _ , comes nowhere in their article . ref . even mentions discussion with einstein . this was in the 1940 s and expansion did not seem the route to solving this great puzzle . for context it is interesting to read the record of a conference convened by gold and bondi in 1963 . some participants were sympathetic to the idea that expansion could play a role ( e.g. , ) ; many were not . in any case , the arguments of gold indicated ways in which large scale expansion could be felt locally . he reasoned that any local arrow is imposed by an outside influence , with the ultimate outside influence being the expansion of the universe , manifested through outgoing wave electromagnetic boundary conditions . although i found his thesis attractive , i felt that a proper presentation of his argument required time - symmetric boundary conditions .
in the present article a slightly different approach is taken , based on actual observation of earlier states of the universe , combined with known dynamical processes . it was realized long ago by boltzmann that the problem of justifying the arrow was not so much showing that the entropy of isolated systems increased ( accomplished with controversy by his h - theorem ) , but explaining why there was low entropy in the past . his justification was that there was an enormous `` fluctuation . '' for most of the universe and for most of its history it is in equilibrium , but at some point in our past there was a fluctuation leading to our present situation . it is hard to argue with this assertion ; it is also hard to believe . see , notes to sec . 4.0 , and many other discussions . i now argue that what we already know about the history of the universe provides that low entropy , and it occurs by virtue of the expansion . consider the era of recombination or decoupling . for that epoch , we have an excellent image of the universe , namely that provided by the cosmic background radiation . it shows , for the luminous matter , a tremendous level ( 1 part in 10 ) of uniformity . this is reasonable . under short range forces , for example the usual intermolecular interactions , the most likely state , that of maximum entropy , is uniformity . forces were short range prior to recombination because the system was basically a plasma . although electromagnetic forces are long range , because of the presence of uncancelled positive and negative charges there was screening of the electromagnetic forces , making them effectively short range . of course gravitational forces were also present , but for luminous matter they were negligible . for dark matter , it is generally felt that at that time there must already have been some level of non - uniformity , but that does not affect our argument . similarly , there is no evidence for the presence of black holes . to summarize the significant observation : the luminous matter was in equilibrium under its dominant forces and that equilibrium was a uniform state . with the advent of recombination , charged particles combined to form neutral objects , h atoms , and the universe became transparent . atom - atom forces were also negligible since particle separations were on the order of mm . gravity now became the dominant force . under gravity , uniformity is _ not _ the most likely state . on the contrary , matter tends to clump , in fact it seems to do so hierarchically . the uniform distribution that in an earlier era was the most likely configuration , now becomes unstable . this then is the punch line : with no mechanism other than expansion , entropy has been lowered . interestingly , it s not the state that changes , but the dominant forces . what was maximum entropy in one regime becomes low entropy in another . and this happens because of expansion , since it s expansion that induces the cooling , causing in turn the formation of h atoms , and finally leading to a world in which the newly dominant force gravity controls a state that is unlikely from a gravitational perspective . this then is gold s thesis , although the detailed arguments are different . getting from this state of disequilibrium to our experienced arrow of time is qualitatively straightforward , although quantitatively challenging . under gravitational forces there is a continuing emergence of ever greater levels of non - uniformity . 
eventually this gives rise to star formation and the release of nuclear energy . this in turn leads to the negentropy flow that permits the preparation of local low entropy states . in our case , this negentropy flow takes the form of 6000 photons that allow us to remain out of equilibrium on earth . establishing this picture quantitatively is the objective of the fields of structure formation and stellar evolution . puzzles in the former area suggest that , as remarked above , a degree of structure was already present before recombination . presumably , this would ultimately have led to the high degree of non - uniformity that we see today , but it would not have done so on the observed time scale ( about 13 billion years ) . this presumption , however , applies to a fictional world sans expansion : in our world , expansion implies decoupling , the trigger for the relatively rapid development of non - uniformity . the foregoing argument says nothing about a thermodynamic arrow _ before _ recombination . perhaps there was none or perhaps it was there because of even earlier physical processes . we offer no opinion on this issue , since this paper is devoted to explaining the thermodynamic arrow we now experience.having or not having an arrow need not be a yes - or - no issue . for example , one would say that a gas _ at _ equilibrium has no arrow , while a gas released from a small volume surely does . this suggests that rate of change of entropy be a quantitative measure of an arrow . however , this too has pitfalls . for a metastable system described by stochastic dynamics the derivative may be small ( fixed by the second largest eigenvalue of the matrix of transition probabilities ) while for any particular exemplar the apparent changes may be great ( after a nucleation event ) or seemingly nonexistent ( in the absence of such an event ) . for the systems discussed here there are two differences from the scenario just mentioned . first , with long range forces entropy is problematic . second , the nucleation events are _ not _ sudden and local , but may occur at large distance scales . as such , on smaller scales ( where entropy may be defined in a local sense ) there will be no perceivable arrow . ] finally a comment about the term `` adiabatic '' when used in connection with the post - decoupling expansion of the universe . i mention this because in the discussion in the expansion is characterized as rapid . certainly , adiabatic , suggesting a slow process , is appropriate when considering the ability of the photon gas to keep up with the expansion . however , the clumping process , that which ultimately leads to stars , etc . , is far slower than the expansion . in other words , as the universe expands , it does not manage to reach the state of greatest likelihood associated with gravity - dominated dynamics ( which may not exist ) . in this article i have given qualitative arguments . to what extent can greater precision be obtained ? this could be in the form of better estimates of inhomogeneity , or perhaps model systems exhibiting the features i postulate . frankly , for my purposes i do nt think this is necessary , and for two reasons . first , behind my assertions are decades of experience and calculations by many , many people . gravity makes things clump ; of course much more than that is known , time scales under various conditions and the like , but this is greater precision than i need . second , i am not predicting inhomogeneity ; i am observing it . 
the evident property of gravity to head toward such a configuration is essential to my argument , but for that argument it is enough simply to observe that uniformity and gravity are at odds . similar remarks apply to models , but discussing the phenomenon in phase transition terms is instructive . consider a hamiltonian with both short and long range interparticle forces . if there is short - range repulsion and it is bounded in magnitude , there may very well may be no stable state of the system . however , if the long - range force is much weaker than the other , there can be a long - lived ( uniform density ) metastable phase . the passage out of this metastable state can be considered a phase transition , from something that looks like equilibrium under short - range forces to something for which the dynamics never comes to equilibrium . note that i am not saying that the actual universe suddenly passed through such a transition . what i mean by `` phase transition '' is the dramatic dependence of system behavior on a parameter . in our case we could take the parameter to be the strength of the short range force . the lifetime of the metastable phase would then drop rapidly as the short range force weakened . thus , if we considered the plasma ( sans dark matter ) that preceded recombination but in a _ _ non__expanding universe , the lifetime for gravity to take over and cause clumping would be extremely long . however , if the short range coupling constant is reduced ( as it is , because of cooling and expansion ) , then that lifetime has the dramatic reduction that one associates with a phase transition . i review what has or has not been explained here . i have _ not _ explained why the universe was in a metastable state prior to recombination , a state in which not only the inhomogeneity of the gravitational dominance was absent , but the ultimate inhomogeneity of black holes was also absent . what i _ have _ explained is how the state that is in fact observed leads to the second law of thermodynamics , as we now see it . my point is thus the connection or correlation of two observed phenomena . in one sentence i restate the explanation : expansion - induced cooling shifted the strength - balance of long and short range forces , allowing nucleation of a matter distribution that is clumpy and appears to have no ultimate equilibrium . or more loosely speaking , what _ was _ high entropy with respect to the dominant forces becomes low entropy by virtue of an expansion - induced change in which forces dominate . as declared at the outset , the scope of the explanation offered here is more modest than that of other recent attempts . there is good reason to be conservative . the field of cosmology undergoes major revisions of its paradigms with impressive frequency . a great deal of modern cosmology is concerned with properties of black holes , and for the topic at hand focuses particularly on their entropic properties . as far as i know there is no experimental or observational evidence on this matter , nor is there a shred of evidence , beyond theoretical conviction , for the existence of radiation from black holes . to summarize , the existence of the observed thermodynamic arrow of time has been traced to observed and non - controversial features of the not - too - early universe . given the state observed at that time , coupled with the expansion of the universe , one obtains the contemporary arrow of time .
|
adaptive beamforming technology is of paramount importance in numerous signal processing applications such as radar , wireless communications , and sonar - . among various beamforming techniques , the beamformers according to the constrained minimum variance ( cmv ) criterion are prevalent , which minimize the total output power while maintaining the gain along the direction of the signal of interest ( soi ) .another alternative beamformer design is performed according to the constrained constant modulus ( ccm ) criterion , which is a positive measure of the average amount that the beamformer output deviates from a constant modulus condition .compared with the cmv , the ccm beamformers exhibit superior performance in many severe scenarios ( e.g. , steering vector mismatch ) since the positive measure provides more information for the parameter estimation .many adaptive algorithms have been developed according to the cmv and ccm criteria for implementation .a simple and popular one is stochastic gradient ( sg ) method , .a major drawback of the sg - based methods is that , when the number of elements in the filter is large , they always require a large amount of samples to reach the steady - state .furthermore , in dynamic scenarios , filters with many elements usually show poor performance in tracking signals embedded in interference and noise .reduced - rank signal processing was motivated to provide a way out of this dilemma , .for the application of beamforming , the reduced - rank technique project the received vector onto a low - dimension subspace and perform the filter optimization within this subspace .one popular reduced - rank scheme is the multistage wiener filter ( mswf ) , which employs the minimum mean squared error ( mmse ) , and its extended versions that utilize the cmv and ccm criteria were reported in , .another technique that resembles the mswf is the auxiliary - vector filtering ( avf ) , . 
despite improved convergence and tracking performance achieved by these methods ,their implementations require high computational cost and suffer from numerical problems .a joint iterative optimization ( jio ) scheme , which was presented recently in , employs the cmv criterion with a low - complexity adaptive implementation to achieve better performance than the existing methods .considering the fact that the ccm - based beamformers outperform the cmv ones for constant modulus constellations , we propose a robust reduced - rank scheme according to the ccm criterion for the beamformer design .the proposed reduced - rank scheme consists of a bank of full - rank adaptive filters , which constitutes the transformation matrix , and an adaptive reduced - rank filter that operates at the output of the bank of filters .the transformation matrix projects the full - rank received vector onto a low - dimension , which is then processed by the reduced - rank filter to estimate the desired signal .the transformation matrix and the reduced - rank filter are computed based on the jio .the proposed scheme provides an iterative exchange of information between the transformation matrix and the reduced - rank filter , which leads to improved convergence and tracking ability and low - complexity cost .we devise two adaptive algorithms for the implementation of the proposed reduced - rank scheme .the first one employs the sg approach to jointly estimate the transformation matrix and the reduced - rank weight vector subject to a constraint on the array response .the second proposed algorithm is extended from the first one and reformulates the transformation matrix subject to an orthogonal constraint .the gram schmidt ( gs ) technique is employed to realize the reformulation .the performance of the second method outperforms the first one .simulation results are given to demonstrate the preferable performance and stability achieved by the proposed algorithms versus the existing methods in typical scenarios .let us suppose that narrowband signals impinge on an uniform linear array ( ula ) of ( ) sensor elements .the sources are assumed to be in the far field with directions of arrival ( doas ) , , .the received vector can be modeled as where ^{t}\in\mathbb{c}^{q \times 1} ] comprises the signal direction vectors , ^{t}\in\mathbb{c}^{m \times 1} ] is the complex weight vector , and stands for hermitian transpose .let us consider the full - rank ccm filter for beamforming , which can be computed by minimizing the following cost function ^{2}\big\},~~ \textrm{subject~to}~~{\boldsymbol w}^{h}(i){\boldsymbol a}(\theta_{0})=1 % \end{split}\ ] ] where is the direction of the soi and denotes the corresponding normalized steering vector .the cost function is the expected deviation of the squared modulus of the array output to a constant subject to the constraint on the array response , which is set to capture the power of the desired signal and ensure the convexity of the cost function .the weight expression obtained from ( [ 3 ] ) is \boldsymbol a(\theta_0)}{\boldsymbol a^h(\theta_0)\boldsymbol r^{-1}(i)\boldsymbol a(\theta_0)}\big\}\ ] ] where ] , and denotes complex conjugate .note that ( [ 4 ] ) is a function of previous values of ( since ) and thus must be initialized to start the iteration .we keep the time index in and for the same reason .it is obvious that the calculation of weight vector requires high complexity due to the matrix inversion .the sg type algorithms can be employed to reduce the computational load 
but still suffer from slow convergence and tracking performance when the dimension is large .the reduced - rank schemes like mswf and avf can be used to improve the performance but still need high computational cost and suffer from numerical problems .in this section , by proposing a reduced - rank scheme based on the jio of adaptive filters , we introduce a minimization problem according to the cm criterion subject to different constraints .the reduced - rank ccm filters design is described in details .define a transformation matrix \in\mathbb c^{m\times r} ] , makes up the transformation matrix , is the projected received vector , and in what follows , all -dimensional quantities are denoted by an over bar . here, is the rank and , as we will see , impacts the output performance .an adaptive reduced - rank filter represented by ^{t}\in\mathbb c^{r\times 1} ] , \in\mathbb c^{m\times m} ] .note that the reduced - rank weight vector depends on the received vectors that are random in practice , thus is full - rank and invertible . and are functions of previous values of and due to the presence of .therefore , it is necessary to initialize and to estimate and , and start the iteration .on the other hand , assuming is known , minimizing ( [ 9 ] ) with respect to equal to a null vector and solving for , we obtain \bar{\boldsymbol a}(\theta_0)}{\bar{\boldsymbol a}^h(\theta_0)\bar{\boldsymbol r}^{-1}(i)\bar{\boldsymbol a}(\theta_0)}\big\}\ ] ] where \in\mathbb c^{r\times r} ] , and .the expressions in ( [ 10 ] ) for the transformation matrix and ( [ 11 ] ) for the reduced - rank weight vector depend on each other and so are not closed - form solutions .it is necessary to iterate and with initial values for implementation .therefore , the initialization is not only for estimating but starting the iteration .the proposed scheme provides an iterative exchange of information between the transformation matrix and the reduced - rank filter , which leads to improved convergence and tracking performance .they are jointly estimated to solve the ccm minimization problem .we describe a simple adaptive algorithm for implementation of the proposed reduced - rank scheme according to the minimization problem in ( [ 7 ] ) . assuming and are known , respectively , taking the instantaneous gradient of ( [ 9 ] ) with respect to and , and setting them equal to null matrices , we obtain where , and are the corresponding lagrange multipliers . 
following the gradient rules and , substituting ( [ 12 ] ) and ( [ 13 ] ) into them , respectively , and solving and by employing the constraint in ( [ 7 ] ) , we obtain the iterative solutions in the form \end{split}\ ] ] \bar{\boldsymbol x}(i)\ ] ] where and are the corresponding step sizes , which are small positive values .the transformation matrix and the reduced - rank weight vector are jointly updated .the filter output is estimated after each iterative procedure with respect to the ccm criterion .we denominate this algorithm as jio - ccm .now , we consider the minimization problem in ( [ 8 ] ) .as explained before , the constraint is added to orthogonalize a set of vectors for the performance improvement .we employ the gram - schmidt ( gs ) technique to realize this constraint .specifically , the adaptive sg algorithm in ( [ 14 ] ) is implemented to obtain .then , the gs process is performed to reformulate the transformation matrix , which is where is the normalized orthogonal vector after gs process and is a reformulation operator .the reformulated transformation matrix is constructed after we obtain a set of orthogonal . by employing to get , , and jointly update with in ( [ 15 ] ) , the performance can be further improved .simulation results will be given to show this result .we denominate this gs version algorithm as jio - ccm - gs , which is performed by computing ( [ 14 ] ) , ( [ 16 ] ) , and ( [ 15 ] ) .the computational complexity with respect to the existing and proposed algorithms is evaluated according to additions and multiplications .the complexity comparison is listed in table [ tab : computational complexity ] .the complexity of the proposed jio - ccm and jio - ccm - gs algorithms increases with the multiplication of .the parameter is more influential since is selected around a small range that is much less than for large arrays , which will be shown in simulations .this complexity is about times higher than the full - rank algorithms , slightly higher than the recent jio - cmv algorithm , but much lower than the mswf - based , , and avf methods . [tab : computational complexity ] l c c + algorithm & additions & multiplications + + full - rank - cmv & & + full - rank - ccm & & + mswf - cmv & & + & & + mswf - ccm & & + & & + avf & & + & & + jio - cmv & & + jio - cmv - gs & & + jio - ccm & & + jio - ccm - gs & & +simulations are performed by an ula containing sensor elements with half - wavelength interelement spacing .we compare the proposed jio - ccm and jio - ccm - gs algorithms with the full - rank , mswf , , and avf methods and in each method , the cmv and ccm criteria are considered with the sg algorithm for implementation .a total of runs are used to get the curves . in all experiments ,the bpsk source power ( including the desired user and interferers ) is and the input snr db with spatially and temporally white gaussian noise . 
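before turning to the simulation figures , a compact sketch may help make the update loop of this section concrete . the snippet below is only an illustration : the constant - modulus stochastic gradients and the way the constraint is re - imposed are generic stand - ins rather than the exact expressions of eqs . ( [ 14 ] )-( [ 16 ] ) , and the function and parameter names ( steering_vector , jio_ccm , mu_t , mu_w ) are assumptions , not the authors' notation .

import numpy as np

# illustrative sketch of a jio - ccm / jio - ccm - gs style update loop .
# the gradient expressions below are generic constant - modulus forms ,
# not the exact updates of eqs . ( [ 14 ] )-( [ 16 ] ) .

def steering_vector(theta, m):
    # ula with half - wavelength interelement spacing
    k = np.arange(m)
    return np.exp(-1j * np.pi * k * np.sin(theta)) / np.sqrt(m)

def jio_ccm(snapshots, theta0, r=4, mu_t=1e-3, mu_w=1e-3, use_gs=True):
    m, n = snapshots.shape                 # m sensors , n snapshots
    a0 = steering_vector(theta0, m)        # presumed soi steering vector
    rng = np.random.default_rng(0)
    T = (rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))) / m
    w = np.zeros(r, dtype=complex)
    w[0] = 1.0
    for i in range(n):
        x = snapshots[:, i]
        xb = T.conj().T @ x                # projected reduced - rank vector
        y = np.vdot(w, xb)                 # beamformer output w^h xb
        e = np.abs(y) ** 2 - 1.0           # constant - modulus error
        # stochastic gradient steps on the transformation matrix and weights
        T = T - mu_t * e * np.conj(y) * np.outer(x, w.conj())
        w = w - mu_w * e * np.conj(y) * xb
        if use_gs:                         # gram - schmidt step of the gs variant
            T, _ = np.linalg.qr(T)
        # re - impose the constraint w^h ( T^h a0 ) = 1 by rescaling w
        c = np.vdot(w, T.conj().T @ a0)
        w = w / np.conj(c)
    return T, w

a quick way to exercise the sketch is to generate snapshots from a few plane waves plus white noise ; the sinr curves discussed next of course depend on the exact update rules and parameter choices of the paper .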
in fig .[ fig : cmv_ccm_sg_gram_final ] , we consider the presence of users ( one desired ) in the system .the transformation matrix and the reduced - rank weight vector are initialized with $ ] and to ensure the constraint in ( [ 7 ] ) .the rank is for the proposed jio - ccm and jio - ccm - gs algorithms .[ fig : cmv_ccm_sg_gram_final ] shows that all output sinr values increase to the steady - state as the increase of the snapshots .the jio - based algorithms have superior steady - state performance as compared with the full - rank , mswf , and avf methods .the gs version algorithms enjoy further developed performance comparing with corresponding jio - cmv and jio - ccm methods .checking the convergence , the proposed algorithms are slightly slower than the avf , which is least squares ( ls)-based , and much faster than the other methods . in fig .[ fig : cmv_ccm_sg_gram_rank_final ] , we keep the same scenario as that in fig .[ fig : cmv_ccm_sg_gram_final ] and check the rank selection for the existing and proposed algorithms .the number of snapshots is fixed to .the most adequate rank values for the proposed algorithms are , which are comparatively lower than most existing algorithms , but reach the preferable performance .we also checked that these values are rather insensitive to the number of users in the system , to the number of sensor elements , and work efficiently for the studied scenarios .finally , the mismatch ( steering vector error ) condition is analyzed in fig .[ fig : cmv_ccm_sg_gram_sve_final_both ] .the number of users is , including one desired user . in fig .[ fig : cmv_ccm_sg_gram_sve_final_both](a ) , the exact doa of the soi is known at the receiver . the output performance of the proposed algorithms is better than those of the existing methods , and the convergence is a little slower than that of the avf algorithm , but faster than the others . in fig .[ fig : cmv_ccm_sg_gram_sve_final_both](b ) , we set the doa of the soi estimated by the receiver to be away from the actual direction .it indicates that the mismatch problem induces performance degradation to all the analyzed algorithms .the ccm - based methods are more robust to this scenario than the cmv - based ones . the proposed algorithms still retain outstanding performance compared with other techniques .we proposed a ccm reduced - rank scheme based on the joint iterative optimization of adaptive filters for beamforming and devised two efficient algorithms , namely , jio - ccm and jio - ccm - gs , for implementation . the transformation matrix and reduced - rank weight vectorare jointly estimated to get the filter output . by using the gs technique to reformulate the transformation matrix, the jio - ccm - gs algorithm achieves faster convergence and better performance than the jio - ccm .the devised algorithms , compared with the existing methods , show preferable performance in the studied scenarios .r. c. de lamare and r. sampaio - neto , low - complexity variable step - size mechanisms for stochastic gradient algorithms in minimum variance cdma receivers , " _ ieee trans . signal processing _2302 - 2317 , june 2006 .r. c. de lamare , m. haardt , and r. sampaio - neto , blind adaptive constrained reduced - rank parameter estimation based on constant modulus design for cdma interference suppression , " _ ieee trans .signal proc .56 , pp . 2470 - 2482 ,jun . 2008 .r. c. de lamare and r. 
sampaio - neto , adaptive interference suppression for ds - cdma systems based on interpolated fir filters with adaptive interpolators in multipath channels " , _ ieee transactions on vehicular technology _ , vol .56 , no . 6 ,september 2007 .r. c. de lamare and r. sampaio - neto , reduced - rank adaptive filtering based on joint iterative optimization of adaptive filters , " _ ieee signal processing letters _ , vol .14 no . 12 , december 2007 , pp .980 - 983 .r. c. de lamare and r. sampaio - neto , adaptive reduced - rank processing based on joint and iterative interpolation , decimation and filtering " , _ ieee transactions on signal processing _57 , no . 7 , july 2009 , pp .2503 - 2514 .
|
this paper proposes a robust reduced - rank scheme for adaptive beamforming based on joint iterative optimization ( jio ) of adaptive filters . the scheme provides an efficient way to deal with filters with a large number of elements . it consists of a bank of full - rank adaptive filters that forms a transformation matrix and an adaptive reduced - rank filter that operates at the output of the bank of filters . the transformation matrix projects the received vector onto a low - dimension vector , which is processed by the reduced - rank filter to estimate the desired signal . the expressions of the transformation matrix and the reduced - rank weight vector are derived according to the constrained constant modulus ( ccm ) criterion . two novel low - complexity adaptive algorithms are devised for the implementation of the proposed scheme with respect to different constraints . simulations are performed to show the superior performance of the proposed algorithms in comparison with the existing methods . beamforming techniques , antenna array , constrained constant modulus , reduced - rank methods , joint iterative optimization .
|
in this paper , we tackle the problem of employing infinite - dimensional covariance descriptors ( covds ) for classification .covds are becoming increasingly popular in many computer vision tasks due to their robustness to measurement variations .such descriptors take the form of , , region covariance matrices for pedestrian detection and texture categorization , human joint covariances for activity recognition , and covariance matrices of the local brownian motion of water molecules in diffusion tensor imaging ( dti ) .as the name implies , covds are obtained by computing the second order statistics of feature vectors extracted at a finite number of observation points , such as the pixels of an image .the resulting descriptors are symmetric positive definite ( spd ) matrices and naturally lie on non - linear manifolds known as tensor , or spd manifolds . as a consequence , euclidean geometry is often not appropriate to analyze covds . to overcome the drawbacks of euclidean geometry and betteraccount for the riemannian structure of covds , state - of - the - art methods make use of non - euclidean metrics ( , ) . in particular , bregman divergences have recently been successfully employed in a number of covd - based applications .nevertheless , all previous studies work with relatively small covds ( , at most , to the best of our knowledge ) built from feature vectors whose dimension is typically much smaller than the number of observations .while this could be thought of as a filtering operation , it also implies that the information encoded in such a covd is inherently poorer than the information jointly contained in all the observations .recently , it was shown that covds could be mapped to reproducing kernel hilbert space ( rkhs ) via the use of spd - specific kernels . while this may , to some degree , enhance the discriminative power of the low - dimensional covds , it is unlikely to be sufficient to entirely recover the information lost when constructing them . in this paper, we overcome this issue by introducing an approach to building and analyzing infinite - dimensional covds from a finite number of observations . to this end , we map the original features to rkhs and compute covds in the resulting space . since the dimensionality of the rkhs is much larger than the dimensionality of the observations , the resulting descriptor will encode more information than a covd constructed in the original lower - dimensional space , and is therefore better suited for classification . in practice , of course , the mapping to rkhs is unknown and the covds can not be explicitly computed . however , here , we show that several bregman divergences can be derived in hilbert space via the use of kernels , thus alleviating the need for the explicit mapping . 
in particular , we consider the burg , jeffreys and stein divergences , that have proven powerful to analyze spd matrices .these divergences allow us to perform classification in hilbert space via a simple nearest - neighbor ( nn ) classifier , or by making use of more sophisticated distance - based classifiers , such as support vector machines ( svm ) with a gaussian kernel .we evaluated the resulting descriptors on the tasks of image - based material , texture and virus recognition , person re - identification , and action recognition from motion capture data .our experimental evaluation clearly evidences the importance of keeping all the data information by mapping to hilbert space before computing the covds .furthermore , our empirical results show that , with this new representation , a simple nn classifier can achieve accuracies comparable to those of much more sophisticated methods , and that these accuracies can even be boosted beyond the state - of - the - art when using more powerful classifiers .in this section , we review several bregman divergences and discuss the properties that motivated our decision to use them to compare covds in rkhs . throughout the paper ,we use bold upper - case letters to denote matrices ( , ) and bold lower - case letters for column vectors ( , ) .the identity matrix is written as . denotes the general linear group , , the group of real invertible matrices . is the space of symmetric positive definite matrices , , .[ def : bregman_divergence ] let be a strictly convex and differentiable function defined on the symmetric positive cone .the bregman matrix divergence is defined as where , and is the gradient of evaluated at .the bregman divergence is non - negative and definite ( , ) .[ def : frob_norm ] the euclidean ( frobenius ) distance is obtained by using as seed function in the bregman divergence of eq .[ eqn : bregman_div ] .[ def : burg_divergence ] the burg , or - , divergence is obtained by using as seed function in the bregman divergence of eq .[ eqn : bregman_div ] , where denotes the determinant of a matrix .the b - divergence can be expressed as while bregman divergences exhibit a number of useful properties , their general asymmetric behavior is often counter - intuitive and undesirable in practical applications .therefore , here , we also consider two symmetrized bregman divergences , namely the _ jeffreys _ and the _ stein _ divergences .[ def : kl_divergence ] the jeffreys , or - , divergence is obtained from the burg divergence , and can be expressed as [ def : stein_divergence ] the stein , or - , divergence ( also known as the jensen - bregman logdet divergence ) is also obtained from the burg divergence , but through _ jensen - shannon _ symmetrization .it can be written as invariance & p.d .gaussian kernel * frobenius * & & rotation & yes * burg * & & affine & no * jeffreys * & & affine & yes * stein * & & affine & partial here , we present the properties of bregman divergences that make them a natural choice as a measure of dissimilarity between two covds . in particular , we discuss these properties in comparison to the popular affine invariant riemannian metric ( airm ) on , which was introduced as a geometrically - motivated way to analyze covds .as indicated by the name , the airm was designed to be invariant to affine transformations , which often is an attractive property in computer vision algorithms . 
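before continuing with the invariance and kernel properties of these divergences , it is worth noting that they are cheap to evaluate for finite - dimensional spd matrices . the sketch below follows one common convention for the burg , jeffreys and stein divergences ( constant terms and 1/2 factors vary slightly across papers ) , so it should be read as an illustration rather than a verbatim transcription of the equations above .

import numpy as np

# bregman divergences between spd matrices x and y of dimension d .

def _logdet(a):
    # log - determinant via cholesky for numerical stability
    c = np.linalg.cholesky(a)
    return 2.0 * np.sum(np.log(np.diag(c)))

def burg_divergence(x, y):
    d = x.shape[0]
    return np.trace(np.linalg.solve(y, x)) - _logdet(x) + _logdet(y) - d

def jeffreys_divergence(x, y):
    # symmetrized burg divergence ; the log - det terms cancel
    d = x.shape[0]
    return 0.5 * np.trace(np.linalg.solve(y, x)) \
         + 0.5 * np.trace(np.linalg.solve(x, y)) - d

def stein_divergence(x, y):
    return _logdet(0.5 * (x + y)) - 0.5 * (_logdet(x) + _logdet(y))

def stein_gaussian_kernel(x, y, sigma=1.0):
    # gaussian kernel built on the stein divergence ( positive definite
    # only for the restricted set of sigma values discussed below )
    return np.exp(-sigma * stein_divergence(x, y))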
in our case, the -divergence exhibits the same invariance property .more specifically , given , .this can easily be shown from the definition of the -divergence .since the - and -divergences are obtained from the -divergence , it can easily be verified that they inherit this affine invariance property .furthermore , these two divergences are also invariant to inversion , , finally , we also note that .recently , kernel methods have been successfully employed on riemannian manifolds .in particular , an attractive solution is to form a kernel by replacing the euclidean distance in the popular gaussian kernel with a more accurate metric on the manifold .however , the resulting kernel is not necessarily positive definite for any metric . in particular, the airm does not yield a positive definite gaussian kernel in general .in contrast , both the - and the -divergences admit a hilbert space embedding via a gaussian kernel . more specifically , for the -divergence , it was shown in that the kernel is conditionally positive definite ( cpd ) .cpd kernels correspond to hilbertian metrics and can be exploited in a wide range of machine learning algorithms .an example of this is kernel svm , whose optimal solution was shown to only depend on the hilbertian property of the metric .note that while the kernel was claimed to be positive definite , we are not aware of any formal proof of this claim . for the -divergence ,the kernel is not positive definite for all .however , as was shown in , is positive definite iff note that , here , we are not directly interested in positive definite gaussian kernels on to derive our infinite - dimensional covds , but only to learn a kernel - based classifier with the divergences between our infinite - dimensional covds as input .the properties of the bregman divergences that we use in the remainder of this paper are summarized in table [ tbl : divergences ] .in this section , we show how covds can be computed in infinite - dimensional spaces . to this end , we first review some basics on hilbert spaces .[ def : hilbert_space ] a hilbert space is a ( possibly infinite - dimensional ) inner product space which is complete with respect to the norm induced by the inner product . an rkhs is a special type of hilbert space with the additional property that the inner product can be defined by a bivariate function known as the _ reproducing kernel_. for an rkhs on a non - empty set with there exists a kernel function such that .the concept of reproducing kernel is typically employed to recast algorithms that only exploit inner products to high - dimensional spaces ( , svm ) . given these definitions , we now turn to the problem of computing a covariance matrix in an rkhs .let be an matrix , obtained by stacking independent observations from an image or a video .the covariance descriptor is defined as where is the mean of the observations , is a centering matrix , and is a square matrix with all elements equal to 1 .let be a mapping to an rkhs whose corresponding hilbert space has dimensionality ( could go to ) .following eq .[ eqn : covd ] , a covd in this rkhs can be written as where $ ] . if , then is rank - deficient , which would make any divergence derived from the burg divergence indefinite .more precisely , the resulting matrix would be on the boundary of the positive cone , which would make it at an infinite distance from any positive definite matrix , not only for burg - based divergences , but also according to the airm . 
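a small numerical sketch of the rank argument made above : with n observations mapped through a kernel , the rkhs covariance operator has at most n non - zero eigenvalues , and they can be read off the centered kernel matrix without ever forming the feature map . the rbf kernel , the variable names and the toy sizes below are illustrative assumptions .

import numpy as np

# the non - zero eigenvalues of the rkhs covd coincide with those of
# (1/n) j k j , where k is the kernel ( gram ) matrix and j the centering
# matrix ; hence at most n of them are positive .

def rbf_kernel(x, sigma=1.0):
    # x : d x n matrix of observations stored as columns
    sq = np.sum(x ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x.T @ x)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkhs_cov_eigenvalues(x, sigma=1.0):
    n = x.shape[1]
    k = rbf_kernel(x, sigma)
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    vals = np.linalg.eigvalsh(j @ k @ j / n)
    return np.clip(vals, 0.0, None)

# five observations in r^3 : no matter how large the implicit feature space
# is , at most five ( here four , after centering ) eigenvalues are positive
x = np.random.default_rng(1).standard_normal((3, 5))
print(rkhs_cov_eigenvalues(x))

this is exactly the rank deficiency that makes burg - type divergences ( and the airm ) blow up , and that the regularized estimate introduced next is designed to repair .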
here, we address this issue by exploiting ideas developed in the context of covariance matrix estimation from a limited number of observations .more specifically , we seek to keep the positive eigenvalues of intact and replace the zero ones with a very small positive number , thus making the covd positive definite .first , using a standard result , we note that the positive eigenvalues of , denoted by , can be computed from , where is the kernel matrix whose elements are defined by the kernel function . by eigenvalue decomposition , we can write this lets us write a ( regularized ) estimate of as where with the identity matrix whose dimension is the number of positive eigenvalues of .note that this derivation can also be employed to model points in with lower - dimensional latent variables by retaining only the top eigenvalues and eigenvectors of to form .in this section , we derive different bregman divergences for the infinite - dimensional covds introduced in section [ sec : covd_rkhs ] . in these derivations, we will make use of the equivalence whose derivation is provided in supplementary material .the frobenius norm can easily be computed as note that , although not a desirable property , the euclidean metric is definite for positive semi - definite matrices , which makes it possible to set to zero . using the sylvester determinant theorem , we first note that then , from the woodbury matrix identity , we have this lets us write , by combining eqs . [ eqn : det_cov_rkhs ] and[ eqn : tr_xinv_y ] , we then obtain note that the burg divergence is independent of .this property is inherited by the jeffreys and stein divergences derived below . from the definition in section [ sec : preliminaries ], the jeffreys divergence can be obtained directly from the burg divergence .this yields to compute the stein divergence in , let us first define .\label{eqn : q}\end{aligned}\ ] ] this lets us write \mat{q } \mat{q}^t \left [ \begin{array}{c } \phi_{\mat{x}}^t\vspace{1ex } \\\phi_{\mat{y}}^t \end{array } \right ] .\label{eqn : cx_cy}\ ] ] similarly as in eq .[ eqn : det_cov_rkhs ] , becomes \mat{q}\mat{q}^{t } \left [ \begin{array}{c } \phi_{\mat{x}}^t \vspace{1ex}\\ \phi_{\mat{y}}^t \end{array } \right ] \bigg ) \\= & \rho^{\mathcal{|h|}}\det \bigg(\mathbf{i}_{\mat{x } + \mat{y } } + \dfrac{1}{2\rho}\mat{q}^{t } \left [ \begin{array}{c } \phi_{\mat{x}}^t \vspace{1ex } \\ \phi_{\mat{y}}^t \end{array } \right ] \big [ \phi_{\mat{x } } \ : \phi_{\mat{y } } \big ] \mat{q}\bigg ) \\ = & \rho^{\mathcal{|h| } } \det \bigg(\mathbf{i}_{\mat{x } + \mat{y } } + \dfrac{1}{2\rho}\mat{q}^{t } \mathbb{k}_{\mat{x},\mat{y}}\mat{q}\bigg)\ ; , \label{eqn : proof_cx_cy}\end{aligned}\ ] ] where \;.\label{eqn : big_k_stein}\ ] ] therefore , we have when computing divergences in rkhs , it is desirable to minimize the effect of the parameter , and thus have divergences that do not depend on its inverse . to this end , let us assume that the same number of eigenvectors were kept to build and . in this case, the stein divergence can be written as where the term can be thought of as a regularizer for .for the jeffreys divergence , we can define in our experiments , we used the definitions of eqs .[ eqn : simple_jefferys_rkhs ] and [ eqn : stein_rkhs_simple ] .here we compare the complexity of computing and against that of and .let and be two given sets of observation , with . 
computing the covds based on eq .[ eqn : covd ] requires .the inverse of an spd matrix can be computed by cholesky decomposition in flops .therefore , computing the -divergence requires flops , which is dominated by .the complexity of computing the determinant of an matrix by cholesky decomposition is .therefore , computing the -divergence requires flops , which is again dominated by . in rkhs, computing , and requires flops for each matrix .therefore , evaluating eq .[ eqn : svd_kxx ] requires for flops . assuming that , , eigenvectors are used to create in eq .[ eqn : w_x ] , computing according to eq .[ eqn : jefferys_rkhs ] requires . for , evaluating eq .[ eqn : stein_rkhs ] takes flops . generally speaking , the complexity of computing the jeffreys and stein divergences in the observation space is linear in while it is cubic when working in rkhs .our experimental evaluation shows , however , that working in rkhs remains practical . to illustrate this, we compare the runtimes required to compute the stein divergence between pairs of covds on using eq .[ eqn : stein_div ] and eq .[ eqn : stein_rkhs_simple ] .each covd on was obtained from observations . in the observation space ,computing the stein divergence on an i7 machine using matlab took 53s . for , it took 452s , 566s and 868s when keeping 10 , 20 and 50 eigenvectors to estimate the covariances , respectively . while slower , these runtimes remain perfectly acceptable , especially when considering the large accuracy gain that working in rkhs entails , as evidenced by our experiments .we now present our empirical results obtained with the infinite - dimensional covds and their bregman divergences defined in sections [ sec : covd_rkhs ] and [ sec : bregman_divergence_rkhs ] .in particular , due to their symmetry and the fact that they yield valid gaussian kernels , we utilized the jeffreys and stein divergences , and relied on two different classifiers for each divergence : a simple nearest neighbor classifier , which clearly evidences the benefits of using infinite - dimensional covds , and an svm classifier with a gaussian kernel , which further boosts the performance of our infinite - dimensional covds . the different algorithms evaluated in our experiments are referred to as : * * -nn : * jeffreys / stein based nearest neighbor classifier on covds in the observation space . * * -svm : * jeffreys / stein based kernel svm on covds in the observation space . * * -nn : * jeffreys / stein based nearest neighbor classifier on infinite - dimensional covds . * * -svm : * jeffreys / stein based kernel svm on infinite - dimensional covds .we also provide the results of the pls - based covariance discriminant learning ( cdl ) technique of , which can be considered as the state - of - the - art for covd - based classification . in all our experiments, we used the rbf kernel to create infinite - dimensional covds .the parameters of our algorithm , , the rbf bandwidth and the number of eigenvectors , were determined by cross - validation . _scale1.0 .recognition accuracies for the virus dataset . [ cols="<,^",options="header " , ] [ tab : table_mocap_performance ]we have introduced an approach to computing infinite - dimensional covds , as well as several bregman divergences to compare them . our experimental evaluation has demonstrated that the resulting infinite - dimensional covds lead to state - of - the art recognition accuracies on several challenging datasets . 
in the future ,we intend to explore how other types of similarity measures , such as the airm , can be computed over infinite - dimensional covds .furthermore , we are interested in studying how the frchet mean of a set of infinite - dimensional covds can be evaluated .this would allow us to perform clustering , and would therefore pave the way to extending well - known methods , such as bag of words , to infinite dimensional covds .in the following , we provide the detailed derivation of the bregman divergences in rkhs considered in section of the main paper . we also provide the cmc curves for the person re - identification experiment of section , which were left out of the main paper due to space limitation .recall that in section of the main paper , we have exploited the equivalence ( eq . ) to derive bregman divergences in rkhs .we prove this equivalence below : we now provide additional details for the specific bregman divergences considered in the paper .the euclidean metric in rkhs can be derived as for the burg and related divergences ( , jeffreys and stein divergences ) , we first show that . to this end, we use the sylvester determinant theorem , which states that , for two matrices and of size and , . therefore ,
|
we introduce an approach to computing and comparing covariance descriptors ( covds ) in infinite - dimensional spaces . covds have become increasingly popular to address classification problems in computer vision . while covds offer some robustness to measurement variations , they also throw away part of the information contained in the original data by only retaining the second - order statistics over the measurements . here , we propose to overcome this limitation by first mapping the original data to a high - dimensional hilbert space , and only then computing the covds . we show that several bregman divergences can be computed between the resulting covds in hilbert space via the use of kernels . we then exploit these divergences for classification purposes . our experiments demonstrate the benefits of our approach on several tasks , such as material and texture recognition , person re - identification , and action recognition from motion capture data .
|
the collection and analysis of data is widespread nowadays across many industries .as the size of modern data sets exceeds the disk and memory capacities of a single computer , it is imperative to store them and analyze them distributively .designing efficient and scalable distributed optimization algorithms is a challenging , yet increasingly important task .there exists a large body of literature studying algorithms where either the features or the observations associated with a machine learning task are stored in distributed fashion .nevertheless , little attention has been given to settings where the data is doubly distributed , i.e. , when both features and observations are distributed across the nodes of a computer cluster .this scenario arises in practice as a result of a data collection process . in this work , we propose two algorithms that are amenable to the doubly distributed setting , namely d3ca ( doubly distributed dual coordinate ascent ) and radisa ( random distributed stochastic algorithm ) .these methods can solve a broad class of problems that can be posed as minimization of the sum of convex functions plus a convex regularization term ( e.g. least squares , logistic regression , support vector machines ) .d3ca builds on previous distributed dual coordinate ascent methods , allowing features to be distributed in addition to observations .the main idea behind distributed dual methods is to approximately solve many smaller sub - problems ( also referred to herein as partitions ) instead of solving a large one . upon the completion of the local optimization procedure, the primal and dual variables are aggregated , and the process is repeated until convergence . since each sub - problem contains only a subset of the original features , the same dual variables are present in multiple partitions of the data .this creates the need to aggregate the dual variables corresponding to the same observations . 
to ensure dual feasibility , we average them and retrieve the primal variables by leveraging the primal - dual relationship , which we discuss in section [ algorithms ] .in contrast with d3ca , radisa is a primal method and is related to a recent line of work on combining coordinate descent ( cd ) methods with stochastic gradient descent ( sgd ) .its name has the following interpretation : the randomness is due to the fact that at every iteration , each sub - problem is assigned a random sub - block of local features ; the stochastic component owes its name to the parameter update scheme , which follows closely that of the sgd algorithm .the work most pertinent to radisa is rapsa .the main distinction between the two methods is that rapsa follows a distributed gradient ( mini - batch sgd ) framework , in that in each global iteration there is a single ( full or partial ) parameter update .such methods suffer from high communication cost in distributed environments .radisa , which follows a local update scheme similar to d3ca , is a communication - efficient generalization of rapsa , coupled with the stochastic variance reduction gradient ( svrg ) technique .the contributions of our work are summarized as follows : * we address the problem of training a model when the data is distributed across observations and features .we propose two doubly distributed optimization methods .* we perform a computational study to empirically evaluate the two methods .both methods outperform on all instances the block splitting variant of admm , which , to the best of our knowledge , is the only other existing doubly distributed optimization algorithm .the remainder of the paper is organized as follows : section [ related_work ] discusses related works in distributed optimization ; section [ algorithms ] provides an overview of the problem under consideration , and presents the proposed algorithms ; in section [ numexperiments ] we present the results for our numerical experiments , where we compare d3ca and two versions of radisa against admm .[ [ stochastic - gradient - descent - methods ] ] stochastic gradient descent methods + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + sgd is one of the most widely - used optimization methods in machine learning .its low per - iteration cost and small memory footprint make it a natural candidate for training models with a large number of observations . due to its popularity , it has been extensively studied in parallel and distributed settings .one standard approach to parallelizing it is the so - called mini - batch sgd framework , where worker nodes compute stochastic gradients on local examples in parallel , and a master node performs the parameter updates .different variants of this approach have been proposed , both in the synchronous setting , and the asynchronous setting with delayed updates .another notable work on asynchronous sgd is hogwild ! 
, where multiple processors carry out sgd independently and one can overwrite the progress of the other .a caveat of hogwild !is that it places strong sparsity assumptions on the data .an alternative strategy that is more communication efficient compared to the mini - batch framework is the parallelized sgd ( p - sgd ) method , which follows the research direction set by .the main idea is to allow each processor to independently perform sgd on the subset of the data that corresponds to it , and then to average all solutions to obtain the final result .note that in all aforementioned methods , the observations are stored distributively , but not the features .[ [ coordinate - descent - methods ] ] coordinate descent methods + + + + + + + + + + + + + + + + + + + + + + + + + + coordinate descent methods have proven very useful in various machine learning tasks . in its simplestform , cd selects a single coordinate of the variable vector , and minimizes along that direction while keeping the remaining coordinates fixed .more recent cd versions operate on randomly selected blocks , and update multiple coordinates at the same time .primal cd methods have been studied in the parallel and distributed settings . distributed cd as it appears in can be conducted with the coordinates ( features ) being partitioned , but requires access to all observations .recently , dual coordinate ascent methods have received ample attention from the research community , as they have been shown to outperform sgd in a number of settings . in the dual problem , each dual variable is associated with an observation , so in the distributed setting one would partition the data across observations .examples of such algorithms include .cocoa , which serves as the starting point for d3ca , follows the observation partitioning scheme and treats each block of data as an independent sub - problem .due to the separability of the problem over the dual variables , the local objectives that are maximized are identical to the global one .each sub - problem is approximately solved using a dual optimization method ; the stochastic dual coordinate ascent ( sdca ) method is a popular algorithm for this task . following the optimization step ,the locally updated primal and dual variables are averaged , and the process is repeated until convergence . similar to sgd - based algorithms , dual methods have not yet been explored when the feature space is distributed . 
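to make the distributed dual coordinate ascent idea concrete , the following serial sketch simulates one cocoa - style round for ridge regression : each block of observations runs a few local sdca steps against a frozen copy of the global primal vector , and the resulting primal increments are averaged . the squared loss is chosen only because its sdca step has a simple closed form ; the partitioning , step counts and variable names are illustrative assumptions , not the exact procedure of the cited papers .

import numpy as np

# serial simulation of one cocoa - style round of distributed dual
# coordinate ascent for ridge regression ( squared loss ) .

def local_sdca(xk, yk, alpha_k, w_global, lam, n_total, n_steps, rng):
    # xk : local observations ( rows ) , yk : local targets ,
    # alpha_k : local dual variables ( updated in place )
    w = w_global.copy()
    dw = np.zeros_like(w_global)
    for _ in range(n_steps):
        i = rng.integers(len(yk))
        xi, yi = xk[i], yk[i]
        # closed - form sdca step for the squared loss
        delta = (yi - xi @ w - alpha_k[i]) / (1.0 + xi @ xi / (lam * n_total))
        alpha_k[i] += delta
        step = delta * xi / (lam * n_total)
        w += step
        dw += step
    return dw

def cocoa_round(blocks, alphas, w, lam, n_total, n_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    deltas = [local_sdca(xk, yk, ak, w, lam, n_total, n_steps, rng)
              for (xk, yk), ak in zip(blocks, alphas)]
    return w + np.mean(deltas, axis=0)     # average the local primal updates

repeating cocoa_round until the duality gap is small gives the averaging variant described above ; note that every block still needs access to all features of its observations , which is precisely the assumption that the doubly distributed setting of this paper removes .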
[[ sgd - cd - hybrid - methods ] ] sgd - cd hybrid methods + + + + + + + + + + + + + + + + + + + + + there has recently been a surge of methods combining sgd and cd .these methods conduct parameter updates based on stochastic partial gradients , which are computed by randomly sampling observations and blocks of variables .with the exception of rapsa , which is a parallel algorithm , all other methods are serial , and typically assume that the sampling process has access to all observations and features .although this is a valid assumption in a parallel ( shared - memory ) setting , it does not hold in distributed environments .rapsa employs an update scheme similar to that of mini - batch sgd , but does not require all variables to be updated at the same time .more specifically , in every iteration each processor randomly picks a subset of observations and a block of variables , and computes a partial stochastic gradient based on them .subsequently , it performs a single stochastic gradient update on the selected variables , and then re - samples feature blocks and observations .despite the fact that rapsa is not a doubly distributed optimization method , its parameter update is quite different from that of radisa . on one hand, rapsa allows only one parameter update per iteration , whereas radisa permits multiple updates per iteration , thus leading to a great reduction in communication .finally , radisa utilizes the svrg technique , which is known to accelerate the rate of convergence of an algorithm .[ [ admm - based - methods ] ] admm - based methods + + + + + + + + + + + + + + + + + + a popular alternative for distributed optimization is the alternating direction method of multipliers ( admm ) .the original admm algorithm is very flexible in that it can be used to solve a wide variety of problems , and is easily parallelizable .a block splitting variant of admm was recently proposed that allows both features and observations to be stored in distributed fashion .one caveat of admm - based methods is their slow convergence rate . in our numerical experimentswe show empirically the benefits of using radisa or d3ca over block splitting admm .in this section we present the d3ca and radisa algorithms .we first briefly discuss the problem of interest , and then introduce the notation used in the remainder of the paper . in a typical supervised learning task ,there is a collection of input - output pairs , where each represents an observation consisting of features , and is associated with a corresponding label .this collection is usually referred to as the training set .the general objective under consideration can be expressed as a minimization problem of a finite sum of convex functions , plus a smooth , convex regularization term ( where is the regularization parameter , and is parametrized by ) : an alternative approach for finding a solution to is to solve its corresponding dual problem .the dual problem of has the following form : where is the convex conjugate of .note that for certain non - smooth primal objectives used in models such as support vector machines and least absolute deviation , the convex conjugate imposes lower and upper bound constraints on the dual variables .one interesting aspect of the dual objective is that there is one dual variable associated with each observation in the training set . 
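since the display equations above did not survive extraction , it may help to recall the standard form they usually take for the common choice of a quadratic regularizer ; the normalization below follows the usual sdca convention and may differ from the paper's exact notation .

\[
P(\mathbf{w}) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i(\mathbf{w}^{\top}\mathbf{x}_i) + \frac{\lambda}{2}\|\mathbf{w}\|^2 ,
\qquad
D(\boldsymbol{\alpha}) \;=\; -\frac{1}{n}\sum_{i=1}^{n} f_i^{*}(-\alpha_i) - \frac{\lambda}{2}\Big\|\frac{1}{\lambda n}\sum_{i=1}^{n}\alpha_i\mathbf{x}_i\Big\|^2 ,
\]
with the primal vector recovered from a dual solution as
\[
\mathbf{w}(\boldsymbol{\alpha}) \;=\; \frac{1}{\lambda n}\sum_{i=1}^{n}\alpha_i\mathbf{x}_i ,
\qquad
G(\boldsymbol{\alpha}) \;=\; P(\mathbf{w}(\boldsymbol{\alpha})) - D(\boldsymbol{\alpha}) \;\ge\; 0 .
\]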
given a dual solution , it is possible to retrieve the corresponding primal vector by using for any primal - dual pair of solutions and , the duality gap is defined as , and it is known that .duality theory guarantees that at an optimal solution of , and of , .+ _ notation : _ we assume that the data is distributed across observations and features over computing nodes of a cluster .more specifically , we split the features into partitions , and the observations into partitions ( for simplicity we assume that ) .we denote the labels of a partition by } ] .for instance , if we let and , the resulting partitions are },y_{[1]}) ] , },y_{[2]}) ] .furthermore , } ] is defined similarly ) figure [ fig : setupnotation ] illustrates this partitioning scheme .we let denote the number of observations in each partition , such that , and we let correspond to the number of features in a partition , such that . note that partitions corresponding to the same observations all share the common dual variable } ] .in other words , for some pre - specified values and , the partial solutions } ] represent aggregations of the local solutions } ] for . at any iteration of d3ca, the global dual variable vector can be written as },\alpha_{[2,.]}, ... ,\alpha_{[p,.]}] ] , i.e. the global solutions are formed by concatenating the partial solutions .the d3ca framework presented in algorithm [ alg : d3ca ] hinges on cocoa , but it extends it to cater for the features being distributed as well .the main idea behind d3ca is to approximately solve the local sub - problems using a dual optimization method , and then aggregate the dual variables via averaging .the choice of averaging is reasonable from a dual feasibility standpoint when dealing with non - smooth primal losses the localdualmethod guarantees that the dual variables are within the lower and upper bounds imposed by the convex conjugate , so their average will also be feasible .although in cocoa it is possible to recover the primal variables directly from the local solver , in d3ca , due to the averaging of the dual variables , we need to use the primal - dual relationship to obtain them . note that in the case where , d3ca reduces to cocoa .* data : * },y_{[p]})]localdualmethod},w^{(t-1)}_{[.,q]}) ] [ lst : line : dualave ] * in parallel * }^{(t)}=\frac{1}{\lambda n}{\sum_{p=1}^{p}}((\alpha_{[p , q]}^{(t)})^{t}x_{[p , q]}) ] , }\in\mathbb{r}^{m_{q}} ] * initialize : } ] * , }\leftarrow 0 ] })_{i}=(\delta\alpha_{[p , q]})_{i}+\delta\alpha ] * output : } ] for ) are further divided into non - overlapping sub - blocks .the reason for doing this is to ensure that at no time more than one processor is updating the same variables .although the blocks remain fixed throughout the runtime of the algorithm , the random exchange of sub - blocks between iterations is allowed ( step [ lst : line : sub - block_assignment ] ) .the process of randomly exchanging sub - blocks can be seen graphically in figure [ fig : radisa ] .for example , the two left - most partitions that have been assigned the coordinate block } ] and } ] for and * initialize : * partition each ] [ lst : line : outter_for_loop ] [ lst : line : full_gradient ] * in parallel * [ lst : line : opt_for ] randomly pick sub - block in non-[lst : line : sub - block_assignment ] overlapping manner }][lst : line : svrg_update ] }) ] [ lst : line : sol_concat ] , where }=[{w}^{(l)}_{[.,\bar{q}^{q}_{1}]}, ... 
,{w}^{(l)}_{[.,\bar{q}^{q}_{p}]}] ] uniform distribution ; , and the sign of each was randomly flipped with probability 0.1 .the features were standardized to have unit variance .we take the size of each partition to be dense , , but due to the blas issue mentioned earlier , we resorted to smaller problems to obtain comparable run - times across all methods . ] and set and accordingly to produce problems at different scales .for example , for and , the size of the entire instance is .the information about the three data sets is summarized in table [ tab : data1 ] .as far as hyper - parameter tuning is concerned , for admm we set .for radisa we set the step - size to have the form , and select the constant that gives the best performance .to measure the training performance of the methods under consideration , we use the relative optimality difference metric , defined as where is the primal objective function value at iteration , and corresponds to the optimal objective function value obtained by running an algorithm for a very long time . .datasets for numerical experiments ( part 1 ) [ cols="^,>,>,>",options="header " , ] as we can see in figure [ fig : experiments2 ] , radisa exhibits strong scaling properties in a consistent manner . in both datasets the run - time decreases significantly when introducing additional computing resources .it is interesting that early configurations with perform significantly worse compared to the alternate configurations where .let us consider the configurations ( 4,1 ) and ( 1,4 ) .in each case , the number of variable sub - blocks is equal to .this implies that the dimensionality of the sub - problems is identical for both partition arrangements .however , the second partition configuration has to process four times more observations compared to the first one , resulting in an increased run - time .it is noteworthy that the difference in performance tails away as the number of partitions becomes large enough .overall , to achieve consistently good results , it is preferable that .the strong scaling performance of d3ca is mixed . for the smaller data set ( realsim ) , introducing additional computing resources deteriorates the run - time performance . on the larger data set ( news20 ) , increasing the number of partitions pays dividends when . on the other hand , when , providing additional resources has little to no effect .the pattern observed in figure [ fig : experiments2 ] is representative of the behavior of d3ca on small versus large data sets ( we conducted additional experiments to further attest this ) .it is safe to conclude that when using d3ca , it is desirable that . in the weak scaling experiments the workload assigned to each processor stays constant as additional resources are used to solve a larger problem . given that problems can increase either in terms of observations or features , we set up our experiments as follows .we generate artificial data sets in the same manner as outlined earlier , but we take the size of each partition to be .we vary the number of observation partitions from to , and study the performance of our algorithms for and .we also consider two distinct sparsity levels : : and . 
in terms of measuring performance, we consider the weak scaling efficiency metric as follows .let denote the time to complete a run when for fixed and , and let represent the time to solve a problem with given ( for the same values of and ) .the weak scaling efficiency is given as : note that the termination criterion for a run is reaching a relative optimality difference .furthermore , we use regularization values of and for radisa and d3ca , respectively . in figure[ fig : experiments3 ] , we can see that neither of the two methods is able to achieve a linear decrease in scaling efficiency as becomes larger . for , radisa manages to scale well at first , but when , its performance deteriorates .we should note that the scaling efficiency seems to flatten out for large values of and , which is a positive characteristic . as far as d3ca is concerned , it is interesting that the scaling efficiency is very close for different values of . finally , sparsity has a negative impact on the scaling efficiency of both methods .in this work we presented two doubly distributed algorithms for large - scale machine learning .such methods can be particularly flexible , as they do not require each node of a cluster to have access to neither all features nor all observations of the training set .it is noteworthy that when massive datasets are already stored in a doubly distributed manner , our algorithms are the only option for the model training procedure .our numerical experiments show that both methods outperform the block distributed version of admm .there is , nevertheless , room to improve both methods .the most important task would be to derive a step - size parameter for d3ca that will guarantee the convergence of the algorithm for all regularization parameters .furthermore , removing the bottleneck of the primal vector computation would result into a significant speedup .as far as radisa is concerned , one potential extension would be to incorporate a streaming version of svrg , or a variant that does not require computation of the full gradient at early stages .finally , studying the theoretical properties of both methods is certainly a topic of interest for future research .
|
as the size of modern data sets exceeds the disk and memory capacities of a single computer , machine learning practitioners have resorted to parallel and distributed computing . given that optimization is one of the pillars of machine learning and predictive modeling , distributed optimization methods have recently garnered ample attention in the literature . although previous research has mostly focused on settings where either the observations or the features of the problem at hand are stored in distributed fashion , the situation where both are partitioned across the nodes of a computer cluster ( doubly distributed ) has barely been studied . in this work we propose two doubly distributed optimization algorithms . the first one falls under the umbrella of distributed dual coordinate ascent methods , while the second one belongs to the class of stochastic gradient / coordinate descent hybrid methods . we conduct numerical experiments in spark using real - world and simulated data sets and study the scaling properties of our methods . our empirical evaluation demonstrates that the proposed algorithms outperform a block distributed admm method , which , to the best of our knowledge , is the only other existing doubly distributed optimization algorithm . machine learning , distributed optimization , big data , spark .
|
dynamics of self - propelled objects have attracted much attention recently as a fundamental subject of statistical physics far from equilibrium . historically , self - propulsion of flexible body has been formulated in terms of hydrodynamics at low reynolds number . in that circumstance ,viscous force is so large that the particle needs to keep non - reciprocal deformation of its shape for the persistent centroid motion .swimming microorganisms are the typical examples .shape deformation causes the center - of - mass motion in this case . besides the development along this line, another class of self - propelled particles or domains is known in which the persistent motion can be maintained due to a broken symmetry of their interfaces . in this case , interfacial forces are playing important roles .experimental examples are seen in self - propelled oil droplets in water which contains surfactant molecules , and self - propelled motions of vesicles in which chemical reactions take place .synthetic self - propelled systems also make a conversion of chemical energy into directed motion in these systems , one notes that shape deformation or asymmetry of chemical components around the domain is associated with the self - propelled motion .for example , an oily droplet in surfactant solution , which is spherical in a motionless situation becomes a banana shape when it undergoes a straight motion .eukaryotic cells such as amoebas or fibloblast change their shape during migration .therefore , the coupling between the motion and the shape deformation is one of the most important properties to understand the dynamics of self - propulsion from a unified point of view . from the above consideration, one may divide the self - propelled dynamics into two classes .one is the case that deformation is induced by the migration and the other is that the motion of the center of gravity is induced by the shape deformations .the oily droplets are a typical example of the former whereas almost all of the living cells belong to the latter . in this letter, we consider two model systems for self - propelled dynamics in two dimensions .one is called a tensor model in terms of the velocity of the center of gravity and two tensor variables for deformation .the coupled set of equations are given by symmetry consideration and therefore they are quite general independent of any specific details of the self - propelled objects .we shall show that this model is applied , by changing the parameters , to both the deformation - induced motion and the motion - induced deformation .the other model is represented in the form of a partial differential equation for a eucledian invariant variable of a closed loop .the condition of a self - propulsion is added , which is expressed in terms of a local deformation .therefore this model is inherently a model for deformation - induced motion . 
by solving these two different model equations numerically ,we explore possible universal and/or non - universal behaviors of self - propelled dynamics .in this section , we introduce two model systems for a deformable self - propelled domain in two dimensions .one is based on the phenomenon of propagation of an excited domain in certain reaction - diffusion systems .weak deformation around a circular shape with radius can be written as where note that since the translational motion of the domain will be incorporated in the velocity of the center of gravity , the modes should be removed from the expansion ( [ eq : deltar ] ) .the modes represents an elliptical shape of the domain .we introduce a second rank tensor as and .similarly we introduce the third rank tensor from the modes as and and and .the time - evolution equations of , and are derived by considering the possible couplings . up to the third order of these variables ,we obtain \nonumber \\ & + & \frac{d_2}{3}\big[s_{ij } v_k + s_{ik } v_j+s_{jk } v_i \nonumber \\ & -&\frac{v_{\ell}}{2 } ( \delta_{ij}s_{k\ell}+\delta_{jk}s_{i\ell}+\delta_{ki}s_{j\ell})\big ] \nonumber \\ & -&d_4{\bm v}^2 u_{ijk } - d_5(s_{mn}s_{mn } ) u_{ijk } \nonumber \\ & + & \frac{2d_6}{3 } \big [ s_{ij } s_{k\ell } v_{\ell } + s_{jk } s_{i\ell } v_{\ell } + s_{ki } s_{j\ell } v_{\ell } \nonumber \\ & - & \frac{1}{2 } ( \delta _ { ij } s_{nk } s_{n\ell } v_{\ell } + \delta _ { jk } s_{ni } s_{n\ell } v_{\ell } \nonumber \\ & + & \delta _ { ki } s_{nj } s_{n\ell } v_{\ell } ) \big ] \.\end{aligned}\ ] ] if the terms with the coefficients , and are ignored , these have been derived recently starting from the excitable reaction diffusion equations .furthermore , if the tensor variable is omitted , the set of equations ( [ eq : vtdyn ] ) and ( [ eq : stdyn ] ) was studied previously .we shall call the dynamics of eqs .( [ eq : vtdyn ] ) , ( [ eq : stdyn ] ) and ( [ eq : utdyn ] ) the tensor model . propagation of a domain occurs from the first and the second terms in eq .( [ eq : vtdyn ] ) for .the domain is deformed as the velocity is increased because of the couplings between and and even when and are positive .this case is a motion - induced deformation .it should be noted that when and are negative , the domain is deformed and causes a drift motion due to the couplings and in eq .( [ eq : vtdyn ] ) .this implies a deformation - induced motion .another model for a deformation - induced motion is an active cell model which is expressed in term of the partial differential equation for a closed domain boundary for in two dimensions where is the boundary length and .here we employ the intrinsic representation of a closed loop as with the tangential unit vector .the frenet - serret formula gives us where is the curvature and is the unit normal which is written as where represents deformation around a circular shape . throughout this paper , we assume that is sufficiently small and is a single valued function of . as a phenomenological description of the active dynamics of , we make a symmetry argument .because of the isotropy of space and the parity symmetry , the dynamics of should be invariant against the following transformations : ( i ) , ( ii) with and constants and ( iii) and . keeping these in mind ,we write down the equation for up to bilinear order of as here we assume that there is no instability in the long wave length limit so that is non - negative . 
to make the circular shape is unstable , then , we impose that is positive and is also positive to recover the stability at the short wave length region . under this condition the term with is expected to be irrelevant and can be ignored .the nonlinear term with the coefficient can be written in a variational form .( to make the potential functional bounded below , we need to add . )therefore , this nonlinearity does not cause any asymptotic complex dynamics such as domain oscillation and should be ignored .the last term is not variational . under these considerations , the minimal non - trivial equation for given by where the coefficients are eliminated by redefining , and .the sign in front of the last term is chosen to be negative without loss of generality .note that the parameter which we can control is only the domain boundary length . in a previous paper , eq .( [ phase_equation ] ) was derived approximately starting from the free energy functional for the interfacial energy and the curvature energy of a domain .equation ( [ phase_equation ] ) describes the dynamics of deformations . by using eqs .( [ closed_loop ] ) , ( [ fs ] ) and ( [ phi ] ) , the shape of the domain is determined .it should be noted , however , that the translational motion can not be obtained by the solution .we have to impose the condition for the time - dependence of the center of mass .since we are considering a deformation - induced motion , it should depend on the curvature . by taking account of the fact that the normal unit is the basic vector variable, the velocity of the center of gravity should take the following form where is an unknown function of .since the deformation is weak as we have assumed , we may expand in powers of as .the lowest order term vanishes because this is the condition that the boundary is closed .the first order term also vanishes identically because of eq .( [ fs ] ) ; as a result , the velocity is given by { \bf n } \nonumber \\ & = & \frac{(2\pi)^2}{l^3 } \int_0^l ds[\alpha_2 ( \partial_s \phi)^2 + \alpha_3 ( \partial_s^2 \phi)^2 ] { \bf n}\ ; , \label{phase_velocity}\end{aligned}\ ] ] where we have used the relations and eqs .( [ closed ] ) and ( [ closed2 ] ) .note that the terms with the coefficients and in eq .( [ phase_velocity ] ) correspond to the terms with and of eq .( [ phase_equation0 ] ) respectively .since we have ignored the term , we retain only the term for consistency equations ( [ phase_equation ] ) and ( [ phase_velocity2 ] ) complete the motion of a domain .first we show the results of numerical simulations of the reduced tensor model for the coupled set of equations for and ignoring .this is justified when the relaxation of is sufficiently rapid , i.e. , is large enough and the velocity is sufficiently small .that is , we consider eqs .( [ eq : vtdyn ] ) and ( [ eq : stdyn ] ) with . the terms with the coefficients and also omitted .furthermore , we allow the case that is negative in eq .( [ eq : stdyn ] ) . in this section, we put without loss of generality . ) and ( [ eq : stdyn ] ) ignoring the terms .the parameters are set to be and .the meanings of the synbols and the lines are given in the text ., width=226 ] the set of equations has been solved numerically for and and changing the parameters and .the simple euler scheme has been employed with the time increment .we have checked the numerical accuracy by using . the phase diagram obtained is displayed in fig .[ fig : phasediagram2 ] . 
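as a concrete illustration of the numerical procedure just described, the reduced tensor model can be integrated with a simple euler scheme. the sketch below assumes the reduced equations take the standard form dv/dt = \gamma v - |v|^2 v - a S v and dS/dt = -\kappa S + b ( v v^T - \tfrac{1}{2}|v|^2 I ), consistent with the structure described above; the parameter values, time step and initial condition are illustrative placeholders rather than those used for the phase diagram.

```python
import numpy as np

# Euler integration of the reduced tensor model (v and S only, U omitted).
# Assumed form:  dv/dt = gamma*v - |v|^2 v - a*S.v
#                dS/dt = -kappa*S + b*(v v^T - 0.5*|v|^2 I)
# Parameter values are illustrative, not those of the paper's figures.
gamma, a, b, kappa = 0.1, 1.0, 0.5, 0.5
dt, n_steps = 1.0e-3, 200_000

v = np.array([0.01, 0.0])      # velocity of the centre of gravity
S = np.zeros((2, 2))           # elliptical (mode-2) deformation tensor
r = np.zeros(2)                # position of the centre of gravity
trajectory = np.empty((n_steps, 2))

for t in range(n_steps):
    dv = gamma * v - v.dot(v) * v - a * S.dot(v)
    dS = -kappa * S + b * (np.outer(v, v) - 0.5 * v.dot(v) * np.eye(2))
    v, S = v + dt * dv, S + dt * dS
    r = r + dt * v
    trajectory[t] = r
# Depending on (gamma, a, b, kappa), the stored trajectory shows a motionless,
# straight or circular state, mirroring the regions of the phase diagram.
```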
in the region indicated by the cross symbol , the domain is motionless whereas it undergoes a straight motion in the region of the open squares and a circular motion in the region of the open circles .these are essentially the same as the previous findings .the solid line is the boundary between the motionless state and the straight motion whereas the broken line is the phase boundary between the straight motion and the circular motion .the dotted line indicates the subcritical hopf - bifurcation line from the straight motion to the rectangular motion which is described below .this line has been obtained as the stability limit of the straight motion .two interesting dynamics appear for .one is the so called rectangular motion in which the domain repeats a straight motion and stopping alternatively as shown in fig .[ fig : rectangular ] for and .this occurs in the region indicated by the solid squares in fig .[ fig : phasediagram2 ] . during the stopping intervalthe domain changes the shape and the propagation direction almost by .either clock - wise rotation or counter clock - wise rotation seem to occur at random and may depend on noises caused unavoidably in the numerical computations . in the most of the region shown by the solid circles , where and , a kind of circular motion is observed .however , this circular motion does not have a single frequency but has a multi - frequency with irrational ratio and therefore the motion is quasi - periodic as shown in fig . [fig : quasi](a ) for and . in order to confirm that the motion is quasi - periodic ,we have analyzed the return map as shown in fig .[ fig : quasi](b ) where the values of the -component of the location of the domain are plotted every time that the domain crosses the line and . , , and .the arrows and the digits indicate the direction of motion and the time - sequence of the motion respectively.,width=151 ]the full set of equations ( [ eq : vtdyn ] ) , ( [ eq : stdyn ] ) and ( [ eq : utdyn ] ) ( but with the simplification have also been solved numerically . the modified euler method with the time increment either or has been employed .the parameters are chosen as , , , , and .the relaxation rates and are varied . since these relaxation rates are chosen to be positive in this section , the cubic term in eqs .( [ eq : stdyn ] ) and ( [ eq : utdyn ] ) are not considered , i.e. , .the phase diagram is obtained as shown in fig .[ fig : phasediagram3](a ) .the straight motion and the circular motion appear in the region indicated by the squares and by the circles respectively . in the region indicated by the stars for the smaller values of and for ,the domain motion becomes chaotic . in order to confirm the chaotic behavior ,we have evaluated the maximum lyapunov exponent , , associate with the domain trajectory as depicted in fig .[ fig : phasediagram3](b ) .it is evident that becomes positive for . for smaller values of encounter a numerical instability and can not obtain any accurate value of the exponent .when is large and is small , a zig - zag motion is observed as indicated by the triangles . the time - sequence of the snapshots for and is displayed in fig . [ fig : zigzag](a ) .the domain is traveling from the left to the right .the angle of the zig - zag motion is about .a chaotic trajectory is displayed in fig .[ fig : zigzag](b ) which is obtained for and .the tensor model takes account only of two long wavelength deformation modes . 
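the return map used to diagnose quasi-periodicity can be reproduced from a stored trajectory with a short routine such as the one below; the particular section line and crossing direction used for fig. [fig:quasi](b) are not specified here, so they enter as placeholder arguments.

```python
import numpy as np

def return_map(trajectory, x_section=0.0, direction=+1):
    """Record the y-coordinate each time the trajectory crosses the line
    x = x_section in the chosen direction; plotting y[n+1] against y[n]
    distinguishes periodic orbits (isolated points) from quasi-periodic
    motion (points filling out a closed curve)."""
    x, y = trajectory[:, 0], trajectory[:, 1]
    hits = []
    for i in range(len(x) - 1):
        if direction * (x[i] - x_section) < 0 <= direction * (x[i + 1] - x_section):
            s = (x_section - x[i]) / (x[i + 1] - x[i])   # linear interpolation
            hits.append(y[i] + s * (y[i + 1] - y[i]))
    return np.array(hits[:-1]), np.array(hits[1:])

# usage with the trajectory array from the Euler sketch above:
# y_n, y_next = return_map(trajectory, x_section=0.0, direction=+1)
```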
in contrast , there is no restriction in the active cell model because the dynamics is represented by the partial differential equation ( [ phase_equation ] ) .we use the spectral method with the 4th order runge - kutta algorithm with the total number of wave modes to solve eq .( [ phase_equation ] ) together with ( [ phase_velocity2 ] ) . the geometrical condition ( [ closed ] )must be satisfied at each time step , which is represented in terms of the complex variable ] where is the unknown error function .we introduce the likelihood function as well as the _ a priori _ distribution for the modes of . by optimizing the logarithmic likelihood, we determine to enclose the domain boundary . , , and rotating in the counterclockwise direction .( b ) return map of the coordinate.,width=302 ] for an internal consistency , we have verified numerically that the length is actually unchanged appreciably by these numerical methods .it is noted that the domain area is time - dependent in this model system . as is seen in figs .[ fig : phasediagram2 ] and [ fig : phasediagram3 ] , the tensor model has exhibited several types of deformation dynamics by changing the three parameters , and .in contrast , the active cell model produces these similar dynamics changing only the value of . numerical simulations of eqs .( [ phase_equation ] ) and ( [ phase_velocity2 ] ) show that the intricate dynamics appear in the three characteristic windows ; \ } , w_2 \simeq \ { l | l \in [ 21.0 , 21.9]\} ] .circular , quasi - periodic , and rectangular motions emerge in , quasi - periodic and chaotic motions in , and zig - zag motions in . these motions occur successively by changing the value of .the straight motion is obtained in the region .no shape instability occurs for and therefore no propagation of domain .there is an obvious reason why the complex dynamics appear in the windows , and .the linear stability analysis of eqs .( [ phase_equation ] ) about the trivial solution shows the fourier mode- deformation becomes unstable at .actually the mode-2 becomes unstable in , the mode-3 in and the mode-4 in .since the codimension two bifurcation points exist near these critical points , the various types of motion would appear .figure [ zigzag4](a ) displays the trajectory obtained in , where an apparently chaotic motion appears .the broad power spectrum of the fourier amplitudes of shown in the inset implies that the time - evolution of is a kind of spatio - temporal chaos .figure [ zigzag4](b ) shows a quasi - periodic solution obtained in .figures [ zigzag4](c ) and ( d ) show the two types of trajectory obtained in .although the trajectory in fig . [ zigzag4](c ) seems complicated , the orderly turns occurring in the trajectory ( , in fig . [ zigzag4](c ) ) have almost .thus this is a kind of zig - zag motion , however unlike the zig - zag motion in fig .[ fig : zigzag](a ) , the effective modes governing the deformation are here .the trajectory in fig .[ zigzag4](d ) is also considered to be a kind of zig - zag motion , because the main part of the trajectory indicated by the dotted gray line shows turns .the zig - zag motion with as in fig , [ fig : zigzag](a ) has not been found in the active cell model . 
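for the active cell model, the centre-of-gravity velocity of eq. ([phase_velocity]) can be evaluated from the deformation field by spectral differentiation. the sketch below keeps only the \alpha_2 term, as in the text; the sign convention for the normal vector and the value of \alpha_2 are assumptions of the sketch.

```python
import numpy as np

def centroid_velocity(phi, L, alpha2=1.0):
    """Centre-of-gravity velocity of the active cell model,
        v = (2*pi)^2 / L^3 * integral_0^L ds (d phi / d s)^2 n(s),
    keeping only the alpha_2 term.  phi is sampled on a uniform grid of
    s in [0, L); derivatives are computed spectrally (periodic boundary)."""
    n_pts = phi.size
    ds = L / n_pts
    s = np.arange(n_pts) * ds
    k = 2.0 * np.pi * np.fft.fftfreq(n_pts, d=ds)
    dphi = np.real(np.fft.ifft(1j * k * np.fft.fft(phi)))
    theta = 2.0 * np.pi * s / L + phi           # tangent angle of the boundary
    normal = np.stack([np.sin(theta), -np.cos(theta)], axis=1)  # one sign convention
    integrand = alpha2 * dphi[:, None] ** 2 * normal
    return (2.0 * np.pi) ** 2 / L ** 3 * integrand.sum(axis=0) * ds
```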
, , , , and .the meaning of the symbols is given in the text .in the region indicated by the symbol , an numerical instability occurs so that we have no definite conclusion about the motion .( b ) maximum lyapunov exponent obtained numerically for .the other parameters are the same as those of the phase diagram ( a ) ., width=302 ]the set of equations ( [ eq : vtdyn ] ) , ( [ eq : stdyn ] ) and ( [ eq : utdyn ] ) can be written in terms of the fourier components in eq .( [ eq : deltar ] ) . herewe define the complex variables as , and .the time - evolution equations for , and are given from eqs .( [ eq : vtdyn ] ) , ( [ eq : stdyn ] ) and ( [ eq : utdyn ] ) by where the dot means the time derivative and the bar indicates the complex conjugate .all the coefficients are real and are given by , , , , , , , , , , , , , , , , , and .it is important to note that this set of equations is invariant under the transformation for an arbitrary phase angle .this invariance arises from the isotropy of space . andthe domain moves from the left to the right .( b ) chaotic motion for and .,width=302 ] when the variable is omitted , the set of equations ( [ eq : dz1 ] ) and ( [ eq : dz2 ] ) is the same as those considered by armbruster et al .they motivated to study some partial differential equation like the kuramoto - sivashinsky equation in one dimension under the periodic boundary condition and had no consideration of the self - propelled domain dynamics .in fact , there are the following correspondences ; the motionless circular shape domain the trivial solution , the deformed motionless domain pure mode , the straight motion the standing wave , the rectangular motion the heteroclinic cycle , the rotating motion the traveling wave and the quasi - periodic motion the modulated wave .the former is the motions obtained in our theory whereas the latter is the terminology of armbruster et al .the variable in the active cell model can also be represented in terms of the fourier modes .if the modes higher than relax rapidly to the stationary values , we may retain only the modes .the time - evolution equations are essentially the same structures as those in eqs .( [ eq : dz1 ] ) , ( [ eq : dz2 ] ) and ( [ eq : dz3 ] ) .consequences of this will be discussed below . .( a ) chaotic motion for .the inset shows the power spectrum calculated from the deformation modes .( b ) quasi - periodic motion for .( c ) zig - zag motion for with the turn angle .the circled numbers mean the qualifying order of trajectory .( d ) zig - zag motions for .the `` coarse - grained '' trajectory ( the dotted gray line ) shows turns . ]we have shown that various self - propelled motions appear both in the tensor model and the active cell model .the dynamics common to these two systems are straight motion , circular periodic and quasi - periodic motions , rectangular motion , zig - zag motions and chaotic motion . herewe note the main difference between our model and the self - propelled swimmers at low reynolds number .our model equations are autonomous , thus the shape deformation and the centroid migration are spontaneously created . on the other hand in the latter frameworks , the deformation of flexible body is operationally given and resulting motion is considered within stokes dynamics . owing to the autonomous properties , our models exhibit successive bifurcations leading to richer dynamics . 
since the tensor model ( [ eq : vtdyn ] ) , ( [ eq : stdyn ] ) , and ( [ eq : utdyn ] ) have been derived from the excitable reaction diffusion model , the bifurcation from a simple to a complex motion should be explored experimentally in physico - chemical systems such as oily droplet systems . along this line , propagating actin waves and recovery of actin polymerization from complete depolymerization observed in dictyostelium cells might give a clue to a connection between our model and possible bifurcations in the dynamics of living cells .moreover , we note the difference of the zigzag motions in the two models .the zig - zag motion with the angle about has been obtained in the tensor model .this is attributed to the fact that only the second and the third modes are considered in the tensor model so that the deformation with three fold symmetry is possible which triggers the zigzag motion . in the active cell model ,on the other hand , more complicated zigzag motion with the angle about and appears as in figs . [ zigzag4](c ) and ( d ) where is close to 4 .therefore , it is expected that higher modes such as the fourth mode are dominant for the zig - zag motion in the active cell model .we emphasize that the rectangular motion , the zigzag motion and an apparently chaotic motion have been observed in real experiments of amoebas .therefore the present approach based on the symmetry argument to construct the time - evolution equations captures the essential feature of the coupling between the shape and the motion of a self - propelled domain .this work was supported by the grant - in - aid for priority area `` soft matter physics '' from the ministry of education , culture , sports , science and technology ( mext ) of japan .99 taylor g. , _ proc .r. soc . lond .a _ , * 211 * ( 1952 ) 225 .percell e. m. , _ am ._ , * 45 * ( 1977 ) 3 .ishikawa t. and pedley , t. j. , _j. fluid mech ._ , * 588 * ( 2007 ) 399 .hatwalne y. _ et al ._ , _ phys ._ , * 92 * ( 2004 ) 118101 .wada h. and netz r.r ._ , * 99 * ( 2007 ) 108102 .alexander g. p. and yeomans j. m. , _ europhys ._ , * 83 * ( 2008 ) 34006 . sumino y. _ et al ._ , _ phys . rev_ , * 94 * ( 2005 ) 068301 .nagai k. _ et al ._ , _ phys .e _ , * 71 * ( 2005 ) 065301(r ) .suzuki k. _ et al ._ , _ chemistry letters _ , * 38 * ( 2009 ) 1010 .golestanian r. _ et al ._ , _ new j. phys ._ , * 9 * ( 2007 ) 126 .nishimura s. i. _ et al ._ , _ plos comput .biol _ , * 5 * ( 2009 ) e1000310 .tao y .- g . andkapral r. , _ j. chem ._ , * 131 * ( 2009 ) 024113/ li l. , norrelykke s. f. and cox e. c. , _ plos one _ , * 3 * ( 2008 ) e2093 .maeda y. t. _ et al ._ , _ plos one _ , * 3 * ( 2008 ) e3734 . ohta t. and ohkuma t. , _ phys . rev ._ , * 102 * ( 2009 ) 154101 .krischer k. and mikhailov a. , _ phys ._ , * 73 * ( 1994 ) 3165 .ohta t. , ohkuma t. and shitara k. , _ phys .e _ , * 80 * ( 2009 ) 056203 .matsuo m. y. , maeda y. t. and sano m. , unpublished .goldstein r. and langer s. a. , _ phys ._ , * 75 * ( 1995 ) 1094 .armbruster d. , guckenheimer j. and holmes p. , _ physica d _ , * 29 * ( 1988 ) 257 .armbruster d. , guckenheimer j. and holmes p. , _ siam j. appl ._ , * 49 * ( 1989 ) 676 .schervish m. j. , _ theory of statistics _( springer - verlag , new york ) 1995 .gerisch g. _ et al ._ , _ bio .* 87 * ( 2004 ) 3493 .
|
we investigate the dynamical coupling between the motion and the deformation of a single self - propelled domain based on two different model systems in two dimensions . one is represented by the set of ordinary differential equations for the center of gravity and two tensor variables characterizing deformations . the other is an active cell model which has an internal mechanism of motility and is represented by the partial differential equation for deformations . numerical simulations show a rich variety of dynamics , some of which are common to the two model systems . the origin of the similarity and the difference is discussed .
|
the information loss paradox remains unresolved 40 years after hawking first pointed out the problem .consider a cloud of matter in a pure state that collapses to form a black hole .the common viewpoint is that , if physics is indeed unitary , exterior observers should be able to recover a pure state after the black hole has completely evaporated . indeed , according to black hole complementarity , an exterior observer and an in - falling observer can observe different events , but each see a completely self - consistent , unitary - preserving , physics. it may therefore be sufficient to discuss the information loss paradox solely from the point of view of the exterior observers .nevertheless , the fate of the information , as seen from a comoving observer that falls into the black hole , remains an interesting problem . to answer this question, one has to first ask the obvious question : `` _ _ what is inside a black hole ? _ _ '' we shall discuss some possibilities . for simplicity , let us focus our attention on the schwarzschild black hole , described by the following metric , in the units , =-\left(1-\frac{2m}{r}\right)\text{d}t^2 + \left(1-\frac{2m}{r}\right)^{-1}\text{d}r^2 + r^2(\text{d}\theta^2 + \sin^2\theta~ \text{d}\phi).\ ] ]a textbook on general relativity typically mentions that one can analytically continue the schwarzschild manifold to the kruskal - szekeres manifold , which contains another asymptotically flat region inside the black hole , on the other side of the einstein - rosen bridge .there are at least two issues with this picture .the first is well known : in a realistic gravitational collapse , one does not seriously expect that the resulting black hole would contain another universe inside it .the second issue is that , in general , analytic continuations are not unique .if we drop some conditions such as vacuum , then other analytic continuations exist .( analyticity is a rather strong condition , if one drops this and considers smooth continuations only , then even more extensions are possible . )this means that the interior geometries of two black holes can be vastly different even if both of them are exactly schwarzschild as seen from the outside .let us first consider the simplest scenario in which the black hole is formed from gravitational collapse and is thus `` one - sided '' , i.e. , does not harbor another asymptotically flat region .can we say something definitive about its interior geometry ?in particular , what is its volume ?this is a rather tricky question to answer .although the area of the black hole event horizon , which by virtue of being a null surface , is a well - defined geometric invariant independent of the choice of spacelike hypersurfaces , the same is not true for the spatial 3-volume inside the black hole .in fact , various definitions of black hole volumes have been proposed ( see e.g. ref . ) , including definition that is more thermodynamical than geometrical .it is also noted that an invariant way of defining volume does exist , albeit this is not a proper volume .what we are interested in is the question : if we look at , say , the black hole at the center of the milky way , how much volume does it contain ?since the proper volume is hypersurface - dependent , there is no unique answer to this .what one can ask , however , is the _largest _ volume possible . 
assuming the schwarzschild geometry , christodoulou and rovelli showed that a cross section of the event horizon taken at late times bounds a spatial volume which grows with time as where is the advanced time , and the mass of the black hole .this volume corresponds to the maximal slice at , which we shall refer to as the `` cr - volume '' .more explicitly , this is obtained via the volume integral \sin\theta ~\text{d}\theta~ \text{d}\phi ~\text{d}v.\ ] ] the time dependence is not so surprising since the interior geometry of a schwarzschild black hole is , unlike the exterior geometry , _ not _ static .christodoulou and rovelli estimated that sagittarius , the supermassive black hole at the center of our galaxy , contains sufficient space to fit a _ million _ solar systems , despite its areal radius being only a factor of 10 or so larger than the distance from earth to the moon .taking into account the rotation of sagittarius and repeating the calculation using the kerr metric , bengtsson and jakobsson showed that this estimate does not change by much , and so there is a lot of room inside a black hole !( such volume for asymptotically locally anti - de sitter black hole has also been investigated . )it is therefore tempting to ask whether this large volume can shed some light on the information loss paradox .to answer this question , one must first ask : how much do we trust the naive spherically symmetric schwarzschild geometry to continue to hold in the interior spacetime ? near the singularityit is conjectured that spacetime should become highly chaotic ( bkl singularity ) , and one would not be able to trust the aforementioned naive volume integral to continue to hold . however , this complication should not arise until the very late part of the evolution .therefore , let us for the moment assume that the schwarzschild geometry is a sufficiently good approximation and there is indeed a large volume inside a black hole , at least for some time during its evolution under hawking evaporation .the question is then : how does hawking evaporation affect this volume ?does it shrink together with the horizon area ? for the most part of the evolution ,the mass loss is well - approximated by the thermal loss equation , which is given by where is the radiation constant ( which is times the stefan - boltzmann constant ) , and is the greybody factor , which depends on the number of particle species emitted by the hawking radiation .it is found in ref . that somewhat counter - intuitively , the cr - volume continues to increase even though the black hole is losing mass , and consequently , its area is decreasing .this is easy to see , for the volume integral is ( we remark that the volume is no longer asymptotically linear in , as it would be when the mass of the black hole is a constant . ) so by the fundamental theorem of calculus , one immediately sees that as discussed in ref ., this could imply that at sufficiently late time , one either have a `` bag - of - gold '' type of geometry , which is still connected to the exterior asymptotically flat spacetime via a throat , or that the throat could pinch off entirely and the interior spacetime becomes an isolated universe on its own .this is consistent with the results of ref .. 
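the order of magnitude quoted above for sagittarius a* can be checked directly from the large-v asymptotic form of the cr-volume, which grows as 3*sqrt(3)*pi*M^2*v in units G = c = 1. in the sketch below the black hole mass (about 4 x 10^6 solar masses), the advanced time (taken to be of order the age of the universe) and the radius assigned to a "solar system" (100 au) are all rough assumptions, so the output should be read only as an order-of-magnitude estimate.

```python
import math

# Order-of-magnitude check of the Christodoulou-Rovelli volume for Sgr A*,
# using the large-v asymptotic V ~ 3*sqrt(3)*pi*M^2*v with G = c = 1.
G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
au, year = 1.496e11, 365.25 * 24 * 3600

M = 4.0e6 * M_sun * G / c**2     # mass of Sgr A* converted to a length (m)
v = 13.8e9 * year * c            # advanced time ~ age of the universe, as a length (m)

V_cr = 3.0 * math.sqrt(3.0) * math.pi * M**2 * v          # interior volume (m^3)
V_solar_system = 4.0 / 3.0 * math.pi * (100.0 * au)**3    # "solar system" of radius 100 au

print(f"CR volume of Sgr A*     ~ {V_cr:.2e} m^3")
print(f"number of solar systems ~ {V_cr / V_solar_system:.1e}")
# With these assumptions the ratio comes out at a few million, consistent
# with the 'million solar systems' estimate quoted above.
```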
in view of this , it is perhaps tempting to conjecture that information can be stored in the cr - volume despite the shrinking of the black hole area .however , this appears not to be the case since the actual entropy content associated with the cr - volume , , turns out to be proportional to the _ area _( though the coefficient is not the same as the bekenstein - hawking area formula ) , instead of the volume .explicitly , zhang showed that zhang also argued that the thermodynamics associated with the entropy in the cr volume is caused by the vacuum polarization near the horizon .one crucial issue that we have not discussed thus far is the singularity inside a black hole . for a schwarzschild black holethis is particularly important , since it is spacelike and lies in the future of anything that falls into the hole .it is well known that the maximum proper time before an in - falling object eventually terminates at the singularity is .this means that if the singularity is not resolved , information would eventually `` fall off the edge of spacetime '' at the singularity .more precisely , there is an entanglement between particles behind the horizon , and particles that remain in the exterior of it .however , the one behind the horizon ultimately gets destroyed at the singularity hence the information loss paradox .this means that having a large spacelike volume _ by itself _does not resolve the information loss paradox .instead , one has to understand whether the singularity is indeed resolved in quantum gravity , and if so how .although the common viewpoint is that the singularity is a sign that general relativity breaks down and will be cured by a full working theory of quantum gravity , _ it may not be_. only if there is no singularity , then a black hole remnant with a huge interior volume or a baby universe may be a viable candidate to resolve the information loss paradox .this proposal is of course not without problems .the readers should refer to ref . for detailed discussion , but let us mention one of the obvious problem : if information is indeed contained in the large volume in some way or another ( i.e. not necessarily in the cr - volume ) , this seems to violate the common wisdom that the bekenstein - hawking entropy which is proportional to the area should count the number of states of the black hole degrees of freedom .nevertheless , there remains a possibility that the common wisdom is incorrect . for more discussion ,to conclude , general relativity is a _ geometric _ theory of gravity .we should therefore pay more attention to the spacetime geometry , whether it is the exterior geometry or the interior one .let us not be biased by the fact that we are exterior observers .perhaps the interior spacetime has nontrivial geometry that allows information to be stored despite the shrinkage of the black hole horizon during hawking evaporation .in addition , the volume of ( two - sided ) black holes has recently gained some attention in the context of holography . to be more specific, there might exist a volume / complexity relation , such that the computational complexity of a certain quantum state , as a function of some proper time , goes like where is a codimension - one space - like section of the anti - de sitter bulk with extremal volume .see ref . 
and the references therein for detail .despite the potentially important roles of black hole volumes , we should not ignore the singularities , since if they remain unresolved by quantum gravity information can be lost by simply getting destroyed there . in the context of the firewall controversy , it has been proposed by susskind that perhaps the firewall , if it exists , is just the singularity of the black hole that has `` migrated '' to the horizon , due to the volume inside a black hole gradually disappearing as the entanglement between the interior and exterior spacetimegets broken at sufficiently late times .this further exemplifies the importance of understanding both the volume , as well as the singularity , of a black hole .it is of course possible that the information loss paradox is a question that can be resolved without knowing the details whether or how quantum gravity cures the black hole singularity , but again , _it may not be_. s. w. hawking , breakdown of predictability in gravitational collapse , _ phys .d _ * 14 * , 2460 ( 1976 ) .l. susskind , l. thorlacius and j. uglum , the stretched horizon and black hole complementarity , _ phys .d _ * 48 * , 3743 ( 1993 ) [ arxiv : hep - th/9306069 ] .j. m. m. senovilla , singularity theorems and their consequences ( review ) , gen .. grav . * 30 * , 701 ( 1998 ) .b. p. dolan , d. kastor , d. kubiznak , r. b. mann and j. traschen , thermodynamic volumes and isoperimetric inequalities for de sitter black holes , phys .d * 87 * , 104017 ( 2013 ) [ arxiv:1012.2888 [ hep - th ] ] .y. c. ong , never judge a black hole by its area , jcap * 1504 * , no .04 , 003 ( 2015 ) [ arxiv:1503.01092 [ gr - qc ] ] .v. a. belinskii , i. m. khalatnikov and e. m. lifshitz , oscillatory approach to a singular point in the relativistic cosmology , adv .* 19 * , 525 ( 1970 ) .
|
the information loss paradox is often discussed from the perspective of the observers who stay outside of a black hole . however , the interior spacetime of a black hole can be rather nontrivial . we discuss the open problems regarding the volume of a black hole , and whether it plays any role in information storage . we also emphasize the importance of resolving the black hole singularity , if one were to resolve the information loss paradox .
|
graphical models are widely used to model multivariate systems .estimation of conditional independence structure ( often called network inference " or structure learning " ) is increasingly a mainstream approach , for example in computational biology .given data a network estimator gives an estimate of the conditional independence graph .the type of graph and its scientific interpretation depend on the model and scientific context . in many applications ,data is collected on multiple individuals that may differ with respect to interplay between variables , such that corresponding conditional independence graphs may be individual - specific .for example , in biology , individuals may correspond to different patients or cell lines and the networks themselves to gene regulatory or protein signaling networks .interplay in such networks can depend on the genetic and epigenetic state of the individuals , such that even for a well - defined system , such as signaling downstream of a certain receptor class , or a sub - part of the transcriptional program , details may differ between even closely related samples .for example , in yeast signaling , edges in the well - understood mitogen - activated protein kinase ( mapk ) pathway can change depending on context , whilst in cancer , it is thought that individual cell lines may differ with respect to signaling network connections . continuing reduction in the unit cost of biochemical assays has led to an increase in experimental designs that include panels of potentially heterogeneous individuals . in such settings , given individual specific data , there is scientific interest in the individual specific networks and their similarities and differences .the case of multiple related individuals poses a number of statistical challenges for network inference : * * efficiency . *if the networks share features , then individual - level estimation ( i.e. ) , may be inefficient , since there is no sharing of information at the population level . although individual network estimators may be well - behaved as the individual - specific sample size grows large ( e.g. ) , in practice small - to - moderate s and the inherently high - dimensional nature of network inference render inference challenging . * * data aggregation . * aggregating data from multiple individuals and then performing network inference offers a way to obtain larger sample sizes. however , in settings where data from individuals are inhomogeneous ( in the sense that the graphs may differ between individuals ) , inferences regarding conditional independence structure can not in general be obtained from aggregated data ( simpson s paradox ) and testing whether data aggregation is appropriate may be challenging . estimating sufficiently homogeneous groups using mixture models and related clustering approachesoffers an alternative , but is challenging in the network setting , as we discuss further below . * * ancillary information .* ancillary information may be available both at the global " ( population ) and local " ( individual ) levels .for example , in gene regulation , the biological literature provides general information concerning gene - gene interplay , whilst patient - specific characteristics might also be available . 
when such ancillary information is available it may be desirable to include it in inference ( the `` conditionality principle '' ) , but doing so requires care in prior elicitation .in this paper we present a bayesian approach to joint estimation of networks .the high - level formulation we propose is general and could be applied to a wide range of graphical model formulations .we present a detailed development for the time - course setting , focusing on directed graphical models called dynamic bayesian networks ( dbns ) .these are directed acyclic graphs ( dags ) with explicit time - indices .the main features of our approach are : * * bayesian framework . *we use a hierarchical bayesian model , summarized in fig .[ model ] .regularization is achieved using priors over both parameters and networks .we focus in particular on regularization of individual networks , introducing a latent network to couple inference across the population .we report posterior marginal inclusion probabilities for every possible edge , thus providing a confidence measure for the inferred network topologies and offering robustness in settings where posterior mass is not highly concentrated on a single model . * * computationally efficient estimation from time - course data .* for the time - course setting , we put forward an efficient and deterministic algorithm .this is done by exploiting modularity of the dbn likelihood coupled with a sparsity restriction and a sum - product - type algorithm . in moderate - dimensional settings this allows exact joint estimation to be carried out in seconds to minutes ( we discuss computational complexity below ) making our approach suitable for interactive use . ** incorporation of ancillary information .* we allow for the inclusion of individual - specific ancillary information .following we also allow for interventional data , in which time courses are obtained under external intervention on network nodes .joint estimation of graphical models has recently been discussed in the penalized likelihood literature , with contributions including . in these studies , penalties , such as the fused graphical lasso , are used to couple together inference of gaussian graphical models ( ggms ) .our work complements these efforts by offering a bayesian formulation of joint estimation .this facilitates regularization using prior and ancillary network information .moreover , our approach provides a natural way to estimate confidence in the inferred structure , providing robustness in multi - modal problems .further , we focus on the time - course setting and dbns rather than static data and ggms .however , we note that unlike the above penalized approaches the bayesian approach we propose is not well - suited to extremely high - dimensional settings with thousands of variables .a recent paper by considers bayesian joint estimation for time - course data .our work is in the same vein but differs in two main respects .first , we allow for prior information regarding the network structure and ancillary information including individual - specific characteristics .network priors and ancillary information can usefully constrain inference , not least in biological settings .for example in the cancer signaling example we consider below , much is known concerning relevant biochemistry ( fig . [ literature ] ) and individual - specific information pertaining to e.g. 
mutation status and receptor expression is often available ( nowadays also in the clinical setting ) .second , for the time - course setting , the exact algorithm we propose offers massive computational gains in comparison to the approach proposed by . as we discuss in detail below the methodology of is prohibitively computationally expensive for the applications we consider here .third , the computational efficiency of our approach allows us to present a much more extensive study of joint estimation , using both simulated and real data , than has hitherto been possible .this adds to our understanding of the performance of hierarchical bayesian formulations for joint estimation .mixtures of graphical models have been used to explore heterogeneous populations .however , mixture modeling requires the strong assumption that there exist groups which are ( sufficiently ) homogeneous with respect to model parameters .otherwise , mixture components are forced to model heterogeneous populations , resulting in potentially poor fit and networks that may not be scientifically meaningful . moreover , while graphical model estimation remains non - trivial , mixtures of graphical models are still more challenging , due to a number of factors relating to the hidden nature ( and number ) of the mixture components .further related work includes , who propose a bayesian approach to network inference based on multiple , steady - state datasets where in each dataset only a subset of the ( shared ) underlying network is identifiable . extend the information sharing scheme from in the context of inference for time - varying networks . considers covariance estimation from a heterogeneous population , treating individual covariance matrices as samples from a matrix - valued probability distribution .network priors have been discussed in the literature , including .our work differs from these efforts by focusing on joint estimation ; as we describe below , this leads to a different model structure and prior specification .the remainder of the paper is organized as follows . in section[ sec : methods ] we lay out a hierarchical bayesian formulation and in section [ sec : computers ] we discuss computationally efficient exact inference .empirical results are presented in section [ sec : results ] using simulated ( section [ silico ] ) datasets .finally we close with a discussion of our findings in section [ sec : discussion ] .we carry out joint network inference using the hierarchical model shown in fig . [ model ] that includes a prior network ( ) as well as a latent network ( ) ; each individual network ( ; we use superscript notation when referring to a particular individual ) is conceptually viewed as a variation upon the latter .individual data are then conditional upon individual networks .estimates of the individual networks are regularized by shrinkage towards the common latent network which in turn may be constrained by an informative network prior .since the latent network is itself estimated , this allows for adaptive regularization .consider the space of ( directed ) networks ( not necessarily acyclic ) on the vertex set .a network decomposes over parent sets as where are the network parents of .write for the set of possible parent sets for variable , such that formally .write for the set of individuals in the population . as shown in fig. [ model ] , each individual network is conditional on a latent network which in turn depends on a prior network ( section [ priors ] ) . 
as in any graphical model , data is conditional on graph and parameters ; denotes any ancillary information available on individual . in this sectionwe describe our general model and network priors , while in section [ sec : computers ] we discuss the special case of inference for time - course data , giving full details of the likelihood for that case .the model is specified by where the functionals and hyperparameters must be specified ( section [ priors ] ) .this formulation is borrowed from statistical mechanics , where may be interpreted as energy terms , as inverse temperature parameters and eqns .[ gibbs2],[gibbs1 ] as boltzmann ( or gibbs ) distributions . taken together with a suitable graphical model likelihood , we obtain the data - generating model .jni performs inference jointly over , with information sharing occurring via the latent network .the use of a latent network follows . in some biological settings, it may be natural to think of the latent network as describing a `` wild type network '' , however such an interpretation is not required .we refer to this general formulation as joint network inference ( jni ) . specifying a network prior ( eqn .[ gibbs2 ] ) requires a penalty functional and a prior network , with the former capturing how close a candidate network is to the latter .we discuss choice of below . given , a simple choice of penalty function is the structural hamming distance . here denotes the symmetric difference of sets and and denotes cardinality of the set .the hyperparameter controls the strength of the prior network ( eqn .[ gibbs2 ] ) . for brevitywe follow by restricting attention to shd priors , however our formulation is general ( see below ) and compatible with other penalty functionals . for their work on joint estimation of inverse covariance matrices , employed the fused graphical lasso ( fgl ) penalty , which may be interpreted as a real - valued extension of shd ( strictly speaking , there is no analogue of the latent network here ; fgl directly penalizes the difference between individual networks ) . another interesting extension due to distinguishes ( `` false prior positives '' ) and ( `` false prior negatives '' ) by allocating a separate inverse temperature hyperparameter for each case .alternatively , one could employ a binomial prior as described in , which provides the same distinction , but allows for the hyperparameters of the binomial to be integrated out .conditional on a latent network , individual networks are regularized in a similar way , as . in their work on combining multiple data sources , the to vary over individuals ( data sources ) , with reflecting the quality of dataset .likewise learn the on an individual by individual basis .however , in both studies , hyperparameter elicitation is non - trivial ( see section [ elicit ] ) . to further limit scope ,we consider only the special case where . when ancillary information is available regarding a specific individual network , it is desirable to augment the prior specification in such a way as to condition upon . in general such modificationwill be application specific .although we focus on shd priors , the inference procedures presented in this paper apply to the more general class of modular priors , which may be written in the form for some functionals .modularity here refers to a factorization over variables , implying that only local information is available _ a priori_. the shd priors are clearly modular . 
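with networks stored as maps from each variable to its parent set, the shd penalty and the corresponding unnormalised gibbs log-prior take only a few lines; the representation below (python dictionaries of frozensets) is an implementation choice of this sketch, not something prescribed by the model.

```python
def shd(parents_a, parents_b):
    """Structural Hamming distance between two networks, each given as a dict
    mapping a variable to the (frozen)set of its parents: the size of the
    symmetric difference of parent sets, summed over variables."""
    nodes = set(parents_a) | set(parents_b)
    return sum(len(parents_a.get(v, frozenset()) ^ parents_b.get(v, frozenset()))
               for v in nodes)

def log_prior_unnormalised(G, G_latent, eta):
    """Unnormalised log of the individual-network prior
    p(G | latent network) proportional to exp(-eta * SHD(G, latent))."""
    return -eta * shd(G, G_latent)

# example: a 3-variable network differing from the latent network in one edge
# G    = {1: frozenset({2}), 2: frozenset(), 3: frozenset({1, 2})}
# G_wt = {1: frozenset({2}), 2: frozenset(), 3: frozenset({1})}
# shd(G, G_wt) == 1
```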
up to inclusion of ancillary information ,prior strength is fully determined , in this simplified setting , by the parameter pair .taking requires that the latent network is ( almost surely ) identical to the prior network ; in the limit this corresponds to treating network inference for each individual separately , i.e. the estimator .we call this approach `` independent network inference '' ( ini ) .conversely , taking requires that ( almost surely ) individual networks do not deviate from the latent network ; this corresponds to assuming individuals have identical ( unknown ) network structure , but allowing parameter values to vary between individials , possibly becoming equal to zero .we call this approach `` aggregated network inference '' ( ani ) .taking together corresponds to using only the prior .a further , cruder , approach would be to simply combine all data in order to estimate a single network and parameter set , an approach which call `` monolithic '' .we compare these approaches empirically in section [ sec : results ] .elicitation of hyperparameters for network priors is an important and non - trivial issue .hyperparameters can be set using the data , but this poses a number of challenges , as reported in . in the context of sequential hierarchical network priors , observed that when there is limited data available , hyperparameters inferred from the data may be biased towards imposing too much agreement with the prior . used an improper hyperprior over the individual inverse temperature parameters , reporting that for most individuals posterior marginals did not differ greatly from the prior ( possibly due to uninformative data ) .similarly assigned improper flat prior distributions over the hyperparameters , reporting that estimation was rather difficult . due to such weak identifiability of hyperparameters , we chose instead to specify the hyperparameters in a subjective manner . for subjective elicitation of network hyperparameters ,interpretable criteria are important .we present three criteria below which , for the special case of shd which we consider , are simple to implement and can be used for expert elicitation .these heuristics seek to relate the hyperparameters to more directly interpretable measures of the similarity and difference which they induce between prior , latent and individual networks .firstly , we note the following formula for the probability of maintaining edge status ( present / absent ) between the latent network and an individual network : this probability provides an interpretable way to consider the influence of . for examplea prior confidence of that a given edge status in is preserved in a particular individual translates into a hyperparameter ( see sfig .1 ) . an analogous equation relates and , allowing prior strength to be set in terms of the probability that an edge status in the prior network is maintained in the latent network .a second , related approach is to consider the expected total shd between an individual network and the wild type network : this can be interpreted as the average number of edge changes needed to obtain from .an analogous equation holds for and .thirdly , in certain applications , the latent network may not have a direct scientific interpretation , in which case the criteria presented above may be unintuitive .then , hyperparameters could be elicited by consideration of ( a ) similarity between individual networks , and ( b ) concordance of individual networks with the prior network . 
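the first two heuristics translate directly into code, assuming the per-edge probability of eq. ([prob interpret]) takes the logistic form implied by an shd prior with independent edge terms (an assumption of this sketch); the third heuristic is taken up in the two-step procedure described next.

```python
import math

def eta_from_edge_confidence(p):
    """Inverse temperature giving prior probability p that the status
    (present / absent) of a single edge in the latent network is preserved in
    an individual network, assuming the logistic per-edge form
    p = 1 / (1 + exp(-eta)) of an SHD prior with independent edge terms."""
    return math.log(p / (1.0 - p))

def expected_total_shd(p, n_possible_edges):
    """Expected SHD between latent and individual network under the same
    assumption: each possible edge flips status with probability 1 - p."""
    return n_possible_edges * (1.0 - p)

# e.g. 90% confidence per edge gives eta ~ 2.2; for a 10-node DBN with all
# 100 forward-in-time edges allowed this corresponds to roughly 10 expected
# edge changes per individual network.
```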
specifically , we suggest the following two - step procedure : ( a ) exploit the fact that ( for an uniform prior on ) we have , which facilitates selection of via the formula .( b ) elicit using the observation that , so that .this two - step procedure uniquely determines a pair and hence unique hyperparameters .one drawback of this approach is that is selected under an assumption of a uniform prior on ; that is , .the quality of this procedure will therefore depend on the actual informativeness of the prior network on selected in step ( b ) .this approach to hyperparameter selection has an analogous interpretation using expected total shd. the above heuristics may be useful in setting hyperparameters in practice .however , these heuristics are certainly no panacea and should be accompanied by checks of sensitivity to hyperparameters , as we report below .the jni model and network priors , as described above , are general . to apply the jni framework in a particular contextrequires an appropriate likelihood at the individual level , that is , to specify the distribution of data conditional on graph and parameters . in this sectionwe focus on time - course data , using dbns to provide the likelihood .a dbn is a graphical model based on a dag whose vertices have explicit time indices ; see for details . here ,following and others , we use stationary dbns and permit only edges forwards in time .background and assumptions for dbns are described in appendix a. further assuming a modular network prior , structural inference for dbns can be carried out efficiently , as described in detail in .a novel contribution of this paper is to extend these results to allow for efficient and exact _ joint _ estimation . in order to simplify notation, we define a data - dependent functional which implicitly conditions upon observed history .let denote the observed value of variable in individual at time .the above notation allows us to conveniently summarize the product as .thus , we have that , for dbns , the full likelihood also satisfies modularity : in other words , the parent sets ( , ) are mutually orthogonal in the fisher sense , so that inference for each may be performed separately . for this paper, the local bayesian score corresponds to the marginal likelihood for a linear autoregressive formulation described in appendix b. we consider an extension to facilitate the analysis of datasets which contain interventions ; this is described in appendix c. for this choice of model it is possible to construct a fully conjugate set of priors , delivering a closed form expression for the local score , contained in appendix d. previous studies have used mcmc to generate samples from the posterior distribution over networks . 
however , ensuring mixing has proven to be extremely challenging for joint estimation , with both studies reporting extremely slow convergence .advances in mcmc and parallel computing may in the future ameliorate these issues , but at present it remains the case that fast , interactive joint estimation is currently challenging or prohibitively demanding using mcmc .we therefore propose an exact approach , using an in - degree restriction coupled with prior modularity and a sum - product - type algorithm , to facilitate efficient estimation .for example , the dream4 problem ( variables , individuals ) considered by was reported to require several hours per node " for mcmc convergence ; our approach solves the entire problem in seconds .our approach therefore complements mcmc - based inference , allowing fast , interactive investigation in moderate - dimensional settings . specifically , we use exact model averaging to marginalize over graphs and report posterior marginal inclusion probabilities .we begin by computing and caching the marginal likelihoods for all parent sets , all variables and all individuals ; these could be obtained using essentially any suitable likelihood . the posterior marginal probability for an edge belonging to the latent network computed as where eqn .[ latent sp ] uses the sum - product lemma to interchange operators ( see appendix e ) .this final step has important consequences for algorithmic complexity ( see section [ computers ] ) .note that , whilst this derivation can made without the explicit marginalization of eqn .[ marginalisation ] , the approach is quite general and may be used analogously to facilitate estimation of individual networks : where again the sum - product lemma justifies the exchange of operators .following we reduced the space of parent sets using an in - degree restriction of the form for all , , .thus the cardinality of the space of parent sets is polynomial in , where it was previously exponential .this reduces summation over an exponential number of terms to a more manageable sum over polynomially many terms .moreover , in the protein signaling example to follow , bounded in - degree is a reasonable biological assumption .sensitivity to choice of is discussed in section [ metrics ] .caching of selected probabilities is used to avoid redundant recalculation .pseudocode is provided in appendix f , which consists of three phases of computation .storage costs are dominated by phases i and ii , which each requiring the caching of real numbers .phase ii dominates computational effort , with total ( serial ) algorithmic complexity . however , within - phase computation is `` embarrassingly parallel '' in the sense that all calculations are independent ( indicated by square parentheses notation in the pseudocode ) .thus an ideal implementation requires computational time .we provide a matlab implementation in supplement b.[ silico ] we tested our joint estimation procedure on simulated time - course data .we compare our approach to the special cases of ( i ) inferring each network separately ( ini ) ; ( ii ) allowing parameters ( but not networks ) to change between individuals ( ani ) ; ( iii ) the naive approach of aggregating all data ( monolithic ) and ( iv ) simple temporal correlations ( absolute pearson coefficient ) . 
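the per-child computation underlying the exact averaging can be sketched as follows for a single individual (the ini special case; the joint estimator inserts the coupling to the latent network between the caching of local scores and the final summation, as in eqn. [latent sp]). the conjugate local score used here is a simplified stand-in rather than the appendix-d prior, and the flat prior over parent sets is likewise a placeholder.

```python
from itertools import combinations
import numpy as np
from scipy.stats import multivariate_normal

def local_log_score(y_child, X_parents, tau2=1.0, sigma2=1.0):
    """Simplified closed-form local score: a linear model y = X beta + eps with
    beta ~ N(0, tau2 I) and eps ~ N(0, sigma2 I), so that marginally
    y ~ N(0, sigma2 I + tau2 X X^T).  (Stand-in for the paper's conjugate score.)"""
    n = y_child.size
    cov = sigma2 * np.eye(n) + tau2 * X_parents @ X_parents.T
    return multivariate_normal.logpdf(y_child, mean=np.zeros(n), cov=cov)

def edge_posteriors_one_child(y_child, lagged, d_max, log_prior=lambda pa: 0.0):
    """Exact posterior inclusion probabilities for the parents of one child
    variable of one individual, enumerating all parent sets of size <= d_max.
    `lagged` holds one column per candidate parent, shifted by one time step."""
    P = lagged.shape[1]
    parent_sets = [c for k in range(d_max + 1) for c in combinations(range(P), k)]
    log_w = np.array([local_log_score(y_child, lagged[:, list(pa)]) + log_prior(pa)
                      for pa in parent_sets])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.array([sum(wi for pa, wi in zip(parent_sets, w) if j in pa)
                     for j in range(P)])
```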
for a fair comparison , all methods , with the exception of ( iv ) ,were implemented so as to take account of the interventional nature of the data .we note that it is not possible to directly compare our results with since these methods do not apply to time - course data .the method of applies to time - course data , but the computational demands of the approach precluded application in this setting .specifically , in the simulated data example we report below , over 3000 rounds of inference were performed in total , on problems larger than dream4 ( , ) . using the approach of , these experiments would have required more than 10 years computational time ; in contrast our approach required less than 24 hours serial computation on a standard laptop .the proposed methodology addresses three questions , some or all of which may be of scientific interest depending on application ; ( i ) estimation of the latent network , ( ii ) estimation of individual networks , and ( iii ) estimation of differences between individual networks .we quantify performance for tasks ( i ) and ( ii ) using the area under the receiver operating characteristic ( roc ) curve ( aur ) .this metric , equivalent to the probability that a randomly chosen true edge is preferred by the inference scheme to a randomly chosen false edge , summarizes , across a range of thresholds , the ability to select edges in the data - generating network .aur may be computed relative to the true latent network , or relative to the true individual networks , quantifying performance on tasks ( i ) and ( ii ) respectively .both sets of results are presented below , in the latter case averaging aur over all individual networks . for ( iii ) , in order to assess ability to estimate individual heterogeneity , we computed aur scores based on the statistics which should be close to one if , otherwise should be close to zero .it is easy to show that inference for the latent network , under only the prior , attains mean aur equal to .similarly , prior inference for the individual networks attains mean aur equal to .this provides a baseline for the proposed methodology at tasks ( i ) and ( ii ) and allows performance to be decomposed into aur due to prior knowledge and aur contributed through inference . using a systematic variation of data - generating parameters , we defined 15 distinct data generating regimes .for all 15 regimes we considered 50 independent datasets ; standard errors accompany average aur scores .results presented below use a computationally favorable in - degree restriction .note that when the maximum in - degree of any of the true networks exceeds the computational restriction , estimator consistency will not be guaranteed . in order to check robustness to , a subset of experiments were repeated using , with close agreement observed ( sfig .a latent network on vertices was drawn from the erds distribution with edge density . 
in order to simulate heterogeneity , the individual networks were obtained from by maintaining the status ( present / absent ) of each edge independently with probability .a parameter for each parent was independently drawn from the mixture normal distribution ( the mixture distribution ensures that parameters are not vanishingly small , so that the structural inference problem is well - defined ) .collecting together parameters produces matrices , corresponding to networks via if and only if .we also generate , for each individual , intercept parameters representing baseline expression levels .initial conditions were sampled as .data were then generated from the autoregressive model , where are independent for . in this way time courses were obtained ; that is , from distinct initial conditions , so the total number of data for individual is . in order to avoid issues of blow - up and to generate plausible datasets , the matrices were normalized by their spectral radii prior to data generation . in order to investigate the effect of using a prior network , we do not simply want to set equal to the latent network , since in practice this network is unknown .we therefore generated a prior network by correctly specifying each potential edge as either present or absent with probability . in this waywe mimic partial prior knowledge of the networks under study .we augmented the above data - generating scheme to mimic interventional experiments . in this case , for each time course , a randomly chosen variable is marked as the target of an interventional treatment .data are then generated according to the augmented likelihood described in appendix c ( fixed effects were taken to be zero ) .furthermore , in order to investigate the impact of model misspecification , we also considered time series data generated from a computational model of protein signaling , based on nonlinear odes . in order to extend this model , which is for a single cell type , to simulate a heterogeneous population , we randomly selected three protein species per individual and deleted their outgoing edges in the data - generating network ( see supplement a ) .firstly we investigated ability to recover the latent network .initially all estimators are assigned approximately optimal hyperparameter values ( for , ) based on the heuristic of eqn .[ prob interpret ] ; prior misspecification is investigated later in section [ misspecify ] .we found little difference in the ability of jni and ani to recover the latent network structure across a wide range of regimes ( stable 1 ) .since ani enjoys favorable computational complexity , this estimator may be preferred for this task in practice .however , both approaches clearly outperformed monolithic inference , which was no better than inference based on the prior alone , demonstrating the importance of accounting for variation in parameter values .correlations barely outperformed random sampling .in practice , one could also estimate using independent network inference ( ini ) , via the _ ad hoc _ estimator which performs an unweighted average of independent network inferences .however we found that ini offered no advantage over jni and ani , performing worse than both in 14 out of 15 regimes .we obtained qualitatively similar results for both alternative data - generating schemes ( stables 3,6 ) .secondly we investigated the ability to recover individual networks . 
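referring back to the data - generating scheme just described ( latent network , per - individual edge retention , mixture - type coefficients bounded away from zero , spectral - radius normalization , intercepts and autoregressive dynamics ) , the following sketch generates one such heterogeneous population . every constant is an illustrative placeholder rather than one of the 15 regimes used in the experiments , and interventions and the prior - network construction are omitted for brevity .

```python
import numpy as np

def simulate_population(p=10, C=4, R=3, T=20, density=0.2, retain=0.8,
                        noise=0.5, seed=0):
    """Sketch of the simulation scheme described above; all constants are placeholders."""
    rng = np.random.default_rng(seed)
    latent = rng.random((p, p)) < density                    # latent random network
    np.fill_diagonal(latent, False)
    networks, datasets = [], []
    for _ in range(C):
        G = latent & (rng.random((p, p)) < retain)           # keep each latent edge w.p. `retain`
        # stand-in for the mixture-normal coefficients, bounded away from zero
        A = np.where(G, rng.choice([-1.0, 1.0], (p, p)) * rng.normal(1.0, 0.25, (p, p)), 0.0)
        radius = np.abs(np.linalg.eigvals(A)).max()
        if radius > 0:
            A = A / radius                                   # normalize by the spectral radius
        b = rng.normal(0.0, 1.0, p)                          # individual-specific intercepts
        courses = []
        for _ in range(R):                                   # R time courses per individual
            x = rng.standard_normal(p)                       # placeholder initial condition
            traj = [x]
            for _ in range(T):
                x = b + A.T @ x + noise * rng.standard_normal(p)
                traj.append(x)
            courses.append(np.array(traj))
        networks.append(G)
        datasets.append(courses)
    return latent, networks, datasets
```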
at this task ,jni outperformed ini in all 15 regimes ( table [ table ar cl 1 ] ) .this demonstrates a substantial increase in statistical power resulting from the hierarchical bayesian approach .jni also outperformed monolithic estimation and inference using temporal correlations in all 15 regimes , with the latter demonstrating substantial bias .one may try to improve upon ini by firstly estimating the wild type network , and then taking this estimate as a prior network within a second round of ini .informed by section [ latent ] , we consider the approach whereby is first estimated using ani , referring to this two - step procedure as `` empirical network inference '' ( eni ) .we found that the performance of eni consistently matched that of jni over a wide range of regimes .since eni avoids all joint computation , this may provide a practical estimator of individual networks in higher dimensional settings .similar results were observed using the alternative data - generating schemes , although jni slightly outperformed eni on the datasets ( stables 4,7 ) .thirdly , we assessed ability to pinpoint sources of variation within the population .interest is often directed toward individual - specific heterogeneity , or _features_. informally ,writing , features correspond to .jni regularizes between individuals ; it therefore ought to eliminate spurious differences , leaving only features which are strongly supported by data .equivalently , since jni offers improved estimation of the latent network , the features ought also to be better estimated . feature detection may also be performed using ini or eni , comparing an latent network estimator ( see _ ad hoc _ estimator in section [ latent ] ) with individual networks .the performance of jni was compared to the performance of ini and eni ( stable 2 ) .we found that , whilst feature detection is much more challenging that previous tasks , jni mostly outperformed both ini and eni , with exceptions occurring whenever the underlying dataset was highly informative ( in which case ini was often superior ) .this suggests that coherence of the jni analysis aids in suppressing spurious features in the small sample setting .alternative data - generating schemes produced qualitatively similar results , although jni outperformed eni on the datasets ( stables 5,8 ) .for the above investigation we used eqn .[ prob interpret ] to elicit hyperparameters .this was possible because the data - generating parameters were known by design ; however in general this will not be the case .it is therefore important that estimator performance does not deteriorate heavily when alternative hyperparameter values are employed . by fixing in the data generating process , we are able to investigate the robustness of jni estimator to hyperparameter misspecification .in particular , when finite values are ascribed to data - generating parameters , ani and ini may be interpreted as inference using misspecified prior distributions ( see section [ limits ] ) . 
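two of the estimators discussed above are simple enough to sketch directly : the ad hoc latent - network estimator that averages independent per - individual inferences ( ini ) , and the two - step empirical network inference ( eni ) that feeds an ani - style latent estimate back in as a prior network . the callables below are a hypothetical interface , not the paper's code .

```python
import numpy as np

def ini_latent(individual_edge_probs):
    """Ad hoc latent-network estimate: an unweighted average of independently
    inferred per-individual inclusion-probability matrices."""
    return np.mean(np.stack(individual_edge_probs), axis=0)

def eni(datasets, infer_individual, estimate_latent, to_prior_network):
    """Two-step 'empirical network inference': (1) estimate the shared network
    (e.g. by an ANI-style analysis), (2) feed it back as the prior network for a
    second, per-individual round of inference."""
    latent_hat = estimate_latent(datasets)
    prior_net = to_prior_network(latent_hat)      # e.g. threshold inclusion probabilities
    return [infer_individual(d, prior_network=prior_net) for d in datasets]
```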
sfig .3 displays how performance of the jni estimator for latent networks depends on the choice of hyperparameters .we notice that aur remains close to that obtained for optimal over a fairly large interval , so that performance is not exquisitely dependent on prior elicitation ..assessment of estimators for inference of individual networks ; autoregressive dataset with interventions .[ values shown are average aur standard error , over 50 realizations .green / red is used to indicate the highest / lowest scoring estimators . number of individuals , number of time points per time course , number of time courses , number of variables , noise magnitude , data generating hyperparameters .`` jni '' joint network inference , `` ani '' aggregate data but control for parameter confounding , `` ini '' average independent network inferences , `` monolithic '' aggregate data without controlling for parameter confounding , `` correl . '' estimation using the absolute pearson temporal correlation coefficient , `` prior '' estimation using only the prior network . ][ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] the biological datasets which motivate this study often contain outliers . at the same time, experimental design may lead to platform - specific batch effects . in order to probe estimator robustness , we generated data as previously described , with the addition of outliers and certain batch effects . specifically , gaussian noise from the contamination model was added to all data prior to inference .at the same time , one individual s data were replaced entirely by gaussian white noise to simulate a batch effect that could arise if preparation of a specific biological sample was incorrect . the relative decrease in performance at feature detectionis reported in sfig .we found that jni remained the optimal estimator for all three estimation problems , in spite of these heavy violations to the modeling assumptions .however , the actual decrease in performance was more pronounced for jni than for ini , suggesting that decoupled estimation ( ini ) may confer robustness to batch effects which affect single individuals .there are three distinct , though related , structure learning problems which may be addressed in the context of an heterogeneous population of individuals : 1 . recovering a shared or `` wild type '' network from the heterogeneous data .2 . recovering networks for specific individuals . 3 . pinpointing network variation within the population .each problem may be of independent scientific interest , and the joint approaches investigated here address all three problems simultaneously within a coherent framework .we considered simulated data , with and without model misspecification .for all three problems we demonstrated that a joint analysis performs at least as well as independent or aggregate analyses . our analysis , based on exact bayesian model averaging , was massively faster then the sampling - based schemes of .moreover , our estimators are deterministic , so that difficulties pertaining to mcmc convergence were avoided .indeed , attaining convergence on joint models of this kind appears to be challenging .the proposed methodology is scalable , with an embarrassingly parallel algorithm provided in section [ computers ] .furthermore , we described approximations to a joint analysis which enjoy further reduced computational complexity whilst providing almost equal estimator performance across a wide range of data - generating regimes . 
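the feature - detection statistics and the robustness experiment described earlier in this section can be sketched as follows ; the absolute - difference feature score is one plausible reading of the statistic in the text ( whose exact form is not reproduced here ) , and the contamination constants are placeholders .

```python
import numpy as np

def feature_scores(p_individual, p_latent):
    """Flag edge (u, v) as an individual-specific feature when its inclusion
    probability differs strongly between the individual and latent networks."""
    return np.abs(np.asarray(p_individual) - np.asarray(p_latent))

def contaminate(datasets, eps=0.1, scale=5.0, batch_individual=0, seed=1):
    """Robustness check: eps-contamination noise added to every observation, plus one
    individual's data replaced by white noise to mimic a failed batch.  Each element
    of `datasets` is assumed to be a single (N, p) array of observations."""
    rng = np.random.default_rng(seed)
    out = []
    for c, X in enumerate(datasets):
        noisy = X + np.where(rng.random(X.shape) < eps,
                             rng.normal(0.0, scale, X.shape), 0.0)
        if c == batch_individual:
            noisy = rng.standard_normal(X.shape)      # simulated batch effect
        out.append(noisy)
    return out
```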
whilst we considered the simplest form of regularization , based on prior modularity, there is potential to integrate richer knowledge into inference .one possibility would be hierarchical regularization that allows entire pathways to be either active or inactive .however , in this setting it would be important to revisit hyperparameter elicitation ; the procedures which we have described are specific to shd priors .in particular we restricted the joint model to have equal inverse temperatures . relaxingthis assumption may improve robustness to batch effects which target single individuals , since then weak informativeness ( ) may be learned from data .it would also be interesting to distinguish between ( `` loss of function '' ) and ( `` gain of function '' ) features . in this workwe did not explore information sharing through parameter values , yet this may yield more powerful estimators of network structure in settings where individuals parameters are not independent .the jni model could be formulated as a penalized ( log-)likelihood the frequentist approaches described by enjoy favorable computational complexity ( esp . who provide an example with variables and individuals ) .however , in small to moderate dimensional settings , the bayesian methods proposed here are complementary in several respects : ( i ) bayesian approaches provide a confidence measure for inferred topology , dealing with non - identifiable and multi - modal problems ; ( ii ) no convexity assumptions are required on the form of the penalty functions , in the bayesian setting , which may assist with integration of ancillary information ; ( iii )the above penalized likelihood methods do not apply directly to time course data ( but could be extended to do so ) .these experiments employed a promising formulation of likelihood under intervention due to .there are a number of interesting extensions which may be considered in future work : ( i ) in high dimensions , bayesian variable selection requires multiplicity correction in order to avoid degeneracy .such correction is required to control the false discovery rate and is independent to the penalty on model complexity provided by the marginal likelihood . in this moderate - dimensional work , in order to simplify the presentation , we did not employ a multiplicity correction ; this should be an avenue for future development .( ii ) inference was based upon a local score borrowed from bayesian linear regression .we chose to employ the -prior due to , where following we used ( conditional ) empirical bayes to select the hyperparameter .others have suggested setting ( unit information prior ; * ? ? ?* ) , whilst and propose prior distributions over with attractive theoretical properties .our empirical investigation suggested that the choice of hyperparameter elicitation is influential , but a thorough comparison of linear model specifications is beyond the scope of this paper .( iii ) as discussed in , linear autoregressive formulations may be inadequate in realistic settings ; in particular , samples which are obtained unevenly in time can be problematic .recent advances which incorporate mechanistic detail into the likelihood may prove advantageous .since the jni approach decouples the marginal likelihood and model averaging computations , it may be applied directly to the output of more sophisticated models .( iv ) in the case of linear models , showed that the median probability model ( i.e. 
model averaging ) provides superior predictive performance over the maximum _ a posteori _ ( map ) model .however we are unaware of an analogous result for causal inference in the bayesian setting .techniques for modeling heterogeneous data are clearly widely applicable .the methodology presented here may be applicable in other disciplines .for example , our approach is suited to meta - analyses of network analyses , integration of multiple data sources or data arising from context dependent networks .the ideas discussed here share many connections with time - heterogeneous dbns which , for brevity , we did not discuss in this paper .we would like to thank j.d .aston , f. dondelinger , c.a .penfold , s.e.f .spencer and s.m .hill for helpful discussion and comments .financial support was provided by nci u54 ca112970 , uk epsrc ep / e501311/1 and the cancer systems center grant from the netherlands organisation for scientific research .dbns have emerged as popular tools for the analysis of multivariate time course data due to ( i ) the fact that no acyclicity assumption is required on the ( static ) network , and ( ii ) computational tractability resulting from a factorization of the likelihood function over variables . for the dbnsused here , an edge from to in means that , the observed value of variable in individual at time , depends directly upon , the observed value of in individual at time ( fig .[ time slice ] ; note that indexes sample index , rather than actual sampling time ) .let denote a vector containing all observations for individual .then is conditionally independent of given and ( first - order markov assumption ) .these conditional independence relations are conveniently summarized as a ( static ) network with exactly vertices ( fig .[ static ] ) ; note that this latter network need not be acyclic .we follow in formulating inference in dbns as a regression problem .we entertain models for the response as predicted by covariates . in many casesmultiple time series will be available . in this casethe vector contains the concatenated time series .the dbn formulation gives rise to the following linear regression likelihood where .the matrix {n \times 2}$ ] contains a term for the initial time point in each experiment .the elements of corresponding to initial observations are simply set to zero .parameters are specific to model , variable and cell line . in the simplest casethe model - specific component of the design matrix consists of the raw predictors where denotes the elements of the vector belonging to the set , though more complex basis functions may be used .following we model interventional data by modification to the dag in line with a causal calculus .we mention briefly some of the key ideas and refer the interested reader to the references for full details .a `` perfect intervention '' corresponds to 100% removal of the target s activity with 100% specificity . 
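the regression formulation of the dbn described above reduces , for each child variable and candidate parent set , to stacking lagged observations across time courses . the sketch below builds the response and design matrix for that first - order markov regression ; it omits the intervention columns and the initial - time - point indicator terms mentioned in the text .

```python
import numpy as np

def dbn_design(time_courses, child, parents):
    """First-order-Markov regression setup for DBN scoring: the response stacks the
    child variable at times 2..T over all time courses, and the design matrix stacks
    an intercept plus the parents' values at the preceding time point."""
    ys, Xs = [], []
    for ts in time_courses:                     # each ts has shape (T, p)
        ys.append(ts[1:, child])                # response: child at t = 2..T
        lagged = ts[:-1, :]                     # predictors: previous time point
        Xs.append(np.column_stack([np.ones(len(lagged))] + [lagged[:, u] for u in parents]))
    return np.concatenate(ys), np.vstack(Xs)
```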
in the context of protein phosphorylation, kinases may be intervened upon using agents such as monoclonal antibodies , small molecule inhibitors or even si - rna .we make the simplifying assumptions that these interventions are perfect and use the `` perfect out fixed effects '' ( pofe ) approach recommended by .we refer the reader to for an extended discussion of pofe .this changes the dag structure to model the intervention and also estimates a fixed effect parameter to model the change under intervention in the log - transformed data .we employed a jeffreys prior for over the common parameters .prior to inference , the non - interventional components of the design matrix were orthogonalized using the transformation , where .we then assumed a -prior for regression coefficients , given by where . using these priors for the dbns with interventionas outlined above , the marginal likelihood can be obtained in closed - form : where , and .empirical investigations have previously demonstrated good results for network inference based on the above marginal likelihood . following we used the ( conditional ) empirical bayes approach to determine the hyperparameter ( details in supplement a ) .the `` sum - product '' lemma , which forms the basis for several exact inference procedures in graphical models , can be expressed in its most basic form as follows : for a finite set of functionals on finite domains indexed by we have the proof is straight forward ( induction on ) and can be found in e.g. .the sum - product lemma is typically used to reduce algorithmic complexity , replacing the expression on the left hand side by the expression on the right hand side .this appendix contains pseudocode for exact joint model averaging . [ computational complexity of calculating marginal likelihoods will scale with sample size ; scaling exponents shown here assume . ]below we provide pseudocode for computation of posterior marginal inclusion probabilities for edges in individual networks : aliferis , c.f ._ et al . _( 2010 ) local causal and markov blanket induction for causal discovery and feature selection for classification , part i : algorithms and empirical evaluation ._ j. mach .res . _ * 11*:171 - 234 .dondelinger , f. , lebre , s. , husmeier , d. ( 2010 ) heterogeneous continuous dynamic bayesian networks with flexible structure and inter - time segment information sharing ._ proceedings of the 27th international conference on machine learning _ , 303 - 310 .dondelinger , f. , lebre , s. , husmeier , d. ( 2012 ) non - homogeneous dynamic bayesian networks with bayesian regularization for inferring gene regulatory networks with gradually time - varying structure ._ * 90*(2):191 - 230 .grzegorczyk , m. , husmeier , d. ( 2011 ) improvements in the reconstruction of time - varying gene regulatory networks : dynamic programming and regularization by information sharing among genes ._ bioinformatics _ * 27*(5):693 - 699. hennessey , b.t ._ et al . _( 2010 ) a technical assessment of the utility of reverse phase protein arrays for the study of the functional proteome in nonmicrodissected human breast cancer .proteom . _ * 6*:129 - 151 .werhli , a.v . , husmeier , d. ( 2008 ) gene regulatory network reconstruction by bayesian integration of prior knowledge and/or different experimental conditions . _journal of bioinformatics and computational biology _* 6*(3):543 - 572 .zellner , a. 
( 1986 ) on assessing prior distributions and bayesian regression analysis with g - prior distributions , _ bayesian inference and decision techniques - essays in honor of bruno de finetti , eds . p. k. goel and a. zellner _, 233 - 243 .
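for readers who want a runnable stand - in for the local score discussed in the appendix above , the following is the textbook zellner g - prior log bayes factor for a linear model ( in the form popularized by the mixtures - of - g - priors literature , up to a model - independent constant ) . it is only an approximation of the score actually used here : the paper's version additionally handles interventions through fixed effects , uses an orthogonalized design , and selects the hyperparameter g by conditional empirical bayes .

```python
import numpy as np

def log_g_prior_score(y, X, g):
    """log Bayes factor of the model with design X (p columns, no intercept) against
    the null model:  (n-1-p)/2 * log(1+g) - (n-1)/2 * log(1 + g*(1 - R^2)),
    computed on centred data; requires n > p + 1 and a non-constant response."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    y = y - y.mean()
    X = X - X.mean(axis=0)
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1.0 - np.sum((y - X @ beta) ** 2) / np.sum(y ** 2)
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))
```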
|
graphical models are widely used to make inferences concerning interplay in multivariate systems , as modeled by a conditional independence graph or network . in many applications , data are collected from multiple individuals whose networks may differ but are likely to share many features . here we present a hierarchical bayesian formulation for joint estimation of such networks . the formulation is general and can be applied to a number of specific graphical models . motivated by applications in biology , we focus on time - course data with interventions and introduce a computationally efficient , deterministic algorithm for exact inference in this setting . application of the proposed method to simulated data demonstrates that joint estimation can improve the ability to infer individual networks as well as differences between them . finally , we describe approximations which are still more computationally efficient than the exact algorithm and which demonstrate good empirical performance .
|
in the last few years the developments in modern communication systems produced many results in a short amount of time .especially quantum communication systems allow us to exploit new possibilities while at the same time imposing fundamental limitations .quantum mechanics differs significantly from classical mechanics , it has its own laws .quantum information theory unifies information theory with quantum mechanic , generalizing classical information theory to the quantum world .the unit of quantum information is called the `` qubit '' , the quantum analogue of the classical `` bit '' . unlike a bit , which is either `` 0 '' or `` 1 '', a qubit can be in `` superposition '' , i.e. both states at the same time , this is a fundamental tool in quantum information and computing .a quantum channel is a communication channel which can transmit quantum information . in general , there are two ways to represent a quantum channel with linear algebraic tools , either as a sum of several transformations , or as a single unitary transformation which explicitly includes the unobserved environment. quantum channels can transmit both classical and quantum information .we consider the capacity of quantum channels carrying classical information .this is equivalent to considering the capacity of classical - quantum channels , where the classical - quantum channels are quantum channels whose sender s inputs are classical variables .the classical capacity of quantum channels has been determined in , , , and .our goal is to investigate in communication that takes place over a quantum channel which is , in addition to the noise from the environment , subjected to the action of a jammer which actively manipulates the states .the messages ought also to be kept secret from an eavesdropper . a classical - quantum channel with a jammer is called an arbitrarily varying classical - quantum channel , where the jammer may change his input in every channel use and is not restricted to use a repetitive probabilistic strategy . in the model of an arbitrarily varying channel, we consider a channel which is not stationary and can change with every use .we interpret this as an attack of a jammer .it works as follows : the sender and the receiver have to select their coding scheme first .after that the jammer makes his choice of the channel state to sabotage the message transmission .however , due to the physical properties , we assume that the jammer s changes only take place in a known set . the arbitrarily varying channel was first introduced in . showed a surprising result which is known as the ahlswede dichotomy : either the capacity of an arbitrarily varying channel is zero or it equals its shared randomness assisted capacity .after the discovery in it remained as an open question when the deterministic capacity is positive . in a sufficient condition for that has been given , and in it is proved that this condition is also necessary . the ahlswede dichotomy demonstrates the importance of shared randomness for communication in a very clear form . in the capacity of arbitrarily varying classical - quantum channelsis analyzed .a lower bound of the capacity has been given .an alternative proof of s result and a proof of the strong converse are given in . in the ahlswede dichotomy for the arbitrarily varying classical - quantum channels is established , and a sufficient and necessary condition for the zero deterministic capacity is given . 
in a simplification of this condition for the arbitrarily varying classical - quantum channels is given . in the model of a wiretap channelwe consider secure communication .this was first introduced in .we interpret the wiretap channel as a channel with an eavesdropper .for a discussion of the relation of the different security criteria we refer to and . a classical - quantum channel with an eavesdropper is called a classical - quantum wiretap channel , its secrecy capacity has been determined in and . this work is a progress of our previous papers and , where we considered channel robustness against jamming and at the same time security against eavesdropping . a classical - quantum channel with a jammer and at the same time an eavesdropperis called an arbitrarily varying classical - quantum wiretap channel .it is defined as a family of pairs of indexed channels with a common input alphabet and possible different output systems , connecting a sender with two receivers , a legal one and a wiretapper , where is called a channel state of the channel pair .the legitimate receiver accesses the output of the first part of the pair , i.e. , the first channel in the pair , and the wiretapper observes the output of the second part , i.e. , the second channel .a channel state , which varies from symbol to symbol in an arbitrary manner , governs both the legal receiver s channel and the wiretap channel .a code for the channel conveys information to the legal receiver such that the wiretapper knows nothing about the transmitted information .in we established the ahlswede dichotomy for arbitrarily varying classical - quantum wiretap channels , i.e. , either the deterministic capacity of an arbitrarily varying channel is zero or equals its randomness assisted capacity .our proof is similar to the proof of the ahlswede dichotomy for arbitrarily varying classical - quantum channels in : we build a two - part code word , the first part is used to create the common randomness for the sender and the legal receiver , the second is used to transmit the message to the legal receiver .we also analyzed the secrecy capacity when the sender and the receiver used various resources .in we determined the secrecy capacities under common randomness assisted coding of arbitrarily varying classical - quantum wiretap channels .we also examined when the secrecy capacity was a continuous function of the system parameters .furthermore , we proved the phenomenon `` super - activation '' for arbitrarily varying classical - quantum wiretap channels , i.e. , there were two channels , both with zero deterministic secrecy capacity , such that if they were used together they allowed perfect secure transmission with positive deterministic secrecy capacity . combining the results of these two paper we get the formula for deterministic secrecy capacity of the arbitrarily varying classical - quantum wiretap channel .as aforementioned the lower bound in and is shown by building a two - part deterministic code .however that code concept still leaves something to be desired because we had to reduce the generality of the code concept when we explicitly allowed a small part of the code word to be non - secure .the code word we built was a composition of a public code word to synchronize the second part and a common randomness assisted code word to transmit the message .we only required security for the last part . 
as we will show in corollary [ enrtitvwsc ] , when the jammer has access to the first part , it will be rendered completely useless .thus the code concept only works when the jammer is limited in his action , e.g. we have to assume that eavesdropper can not send messages towards the jammer .nevertheless this code concept with weak criterion can be useful when small amount of public messages is desired , e.g. when the receiver uses it to estimate the channels . in this work we consider the general code concept when we construct a code in such a way that every part of it is secure .we show that when the legal channel is not symmetrizabel the sender can send a small amount of secure transmissions which push the secure capacity to the maximally attainable value .thus the entire security is granted .we call it the strong code concept .this completes our analysis of arbitrarily varying classical - quantum wiretap channel . in we analyzed the secrecy capacities of various coding schemes with resource assistance .we showed that when the jammer was not allowed to has access to the resource , it was very helpful for the secure message transmission through an arbitrarily varying classical - quantum wiretap channel . in this workwe analyze the case when the shared randomness is not secure against the jammer .in we showed that the secrecy capacity was in general not a continuous function of the system parameters .in we proved super - activation for arbitrarily varying classical - quantum wiretap channels . in this workwe establish complete characterizations for continuity and positivity of the capacity function of arbitrarily varying classical - quantum wiretap channels , and a complete characterization for super - activation .this paper is organized as follows : the main definitions are given in section [ bdacs2 ] .in section [ cavqw ] we determine a secrecy capacity formula for a mixed channel model which is called the classic arbitrarily varying quantum wiretap channel . this formula is used for our result in section [ tsmwscr ] . in section [ tsmwscr ]our main result is presented . in this sectionwe determine the secrecy capacity for the arbitrarily varying classical - quantum channels under strong code concept .in section [ cwrioppc ] we analyze when the sender and the legal receiver had the possibility to use shared randomness which is not secure against the jammer .we also determine the secrecy capacity of arbitrarily varying classical - quantum wiretap channels shared randomness which is secure against eavesdropping . as an application of the results of our earlier works , in section [ someapp ] we establish when the secrecy capacity of an arbitrarily varying classical - quantum wiretap channel is positive and when it is a continuous quantity of the system parameters .furthermore we show when `` super - activation '' occurs for arbitrarily varying classical - quantum wiretap channels .for a finite set we denote the set of probability distributions on by .let and be hermitian operators on a finite - dimensional complex hilbert space .we say and if is positive - semidefinite . for a finite - dimensional complex hilbert space , we denote the ( convex ) space of density operators on by where is the set of linear operators on , and is the null matrix on . note that any operator in is bounded . for finite sets and , we define a ( discrete ) classical channel : , to be a system characterized by a probability transition matrix . 
for and , expresses the probability of the output symbol when we send the symbol through the channel .the channel is said to be memoryless if the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs and outputs .further we can extend this definition when we define a classical channel to a map : by denoting .let .we define the -th memoryless extension of the stochastic matrix by , i.e. , for and , . for finite - dimensional complex hilbert spaces and a quantum channel : , is represented by a completely positive trace - preserving map which accepts input quantum states in and produces output quantum states in .if the sender wants to transmit a classical message of a finite set to the receiver using a quantum channel , his encoding procedure will include a classical - to - quantum encoder to prepare a quantum message state suitable as an input for the channel .if the sender s encoding is restricted to transmit an indexed finite set of quantum states , then we can consider the choice of the signal quantum states as a component of the channel .thus , we obtain a channel with classical inputs and quantum outputs , which we call a classical - quantum channel .this is a map : , which is represented by the set of possible output quantum states , meaning that each classical input of leads to a distinct quantum output . in view of this, we have the following definition .let be a finite - dimensional complex hilbert space .a classical - quantum channel is a linear map , . let . for a , defined by write instead of .frequently a classical - quantum channel is defined as a map , .this is a special case when the input is limited on the set . for any finite set , any finite - dimensional complex hilbert space , and , we define , and . we also write for the elements of .let .we define the -th memoryless extension of the stochastic matrix by , i.e. , for and , .we define the -th extension of classical - quantum channel as follows .associated with is the channel map on the n - block : , such that if can be written as .let be a finite set .let be a set of classical - quantum channels . for , define the n - block such that for if can be written as . for a discrete random variable on a finite set and a discrete random variable on a finite set denote the shannon entropy of by and the mutual information between and by . here is the joint probability distribution function of and , and and are the marginal probability distribution functions of and respectively , and `` '' means logarithm to base . for a quantum state denote the von neumann entropy of by where `` '' means logarithm to base .let and be quantum systems .we denote the hilbert space of and by and , respectively .let be a bipartite quantum state in .we denote the partial trace over by where is an orthonormal basis of .we denote the conditional entropy by here .let : be a classical - quantum channel . for the conditional entropy of the channel for with input distribution is denoted by let be a set of quantum states labeled by elements of .for a probability distribution on , the holevo quantity is defined as note that we can always associate a state to such that holds for the quantum mutual information .for a set and a hilbert space let : be a classical - quantum channel . for a probability distribution on the holevo quantity of the channel for with input distribution defined as we denote the identity operator on a space by and the symmetric group on by . 
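to make the entropic quantities just defined concrete , here is a small numerical sketch of the von neumann entropy and of the holevo quantity of a classical - quantum channel for a given input distribution ; the two qubit signal states in the example are arbitrary .

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues of a Hermitian rho."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def holevo(p, states):
    """Holevo quantity: S(sum_x p(x) rho_x) - sum_x p(x) S(rho_x)."""
    avg = sum(px * rho for px, rho in zip(p, states))
    return von_neumann_entropy(avg) - sum(px * von_neumann_entropy(rho)
                                          for px, rho in zip(p, states))

# example: two non-orthogonal qubit signal states under the uniform input distribution
zero = np.array([[1.0, 0.0], [0.0, 0.0]])      # |0><0|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])      # |+><+|
print(holevo([0.5, 0.5], [zero, plus]))        # approximately 0.6 bits
```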
for a probability distribution on a finite set and a positive constant , we denote the set of typical sequences by where is the number of occurrences of the symbol in the sequence .let be a finite - dimensional complex hilbert space .let and .we suppose has the spectral decomposition , its -typical subspace is the subspace spanned by , where .the orthogonal subspace projector onto the typical subspace is similarly let be a finite set , and be a finite - dimensional complex hilbert space .let : be a classical - quantum channel . for suppose has the spectral decomposition for a stochastic matrix .the -conditional typical subspace of for a typical sequence is the subspace spanned by . here is an indicator set that selects the indices in the sequence for which the -th symbol is equal to .the subspace is often referred as the -conditional typical subspace of the state .the orthogonal subspace projector onto it is defined as the typical subspace has following properties : for and there are positive constants , , and , depending on such that for there are positive constants , , and , depending on such that for the classical - quantum channel and a probability distribution on we define a quantum state on . for define an orthogonal subspace projector fulfilling ( [ te1 ] ) , ( [ te2 ] ) , and ( [ te3 ] ) .let .for there is a positive constant such that following inequality holds : we give here a sketch of the proof . for a detailed proof please see .( [ te1 ] ) holds because .( [ te2 ] ) holds because .( [ te3 ] ) holds because for and a positive .( [ te4 ] ) , ( [ te5 ] ) , and ( [ te6 ] ) can be obtained in a similar way .( [ te7 ] ) follows from the permutation - invariance of .let and be finite sets , and be a finite - dimensional complex hilbert spaces .let : = be a finite set . forevery let be a classical channel , and be a classical - quantum channel .we call the set of the classical channels an * arbitrarily varying channel and the set of the classical - quantum channels an * arbitrarily varying classical - quantum channel when the channel state varies from symbol to symbol in an arbitrary manner . * * strictly speaking , the set generates the arbitrarily varying classical - quantum channel . when the sender inputs a into the channel ,the receiver receives the output , where is the channel state of .[ symmet ] we say that the arbitrarily varying channel is symmetrizable if there exists a parametrized set of distributions on such that for all , , and we say that the arbitrarily varying classical - quantum channel is symmetrizable if there exists a parametrized set of distributions on such that for all , , let and be finite sets , and be a finite - dimensional complex hilbert spaces .let be an index set .for every let be a classical channel and be a classical - quantum channel .we call the set of the classical / classical - quantum channel pairs an * classic arbitrarily varying quantum wiretap channel _ _ , when the state varies from symbol to symbol in an arbitrary manner , while the legitimate receiver accesses the output of the first channel , i.e. , in the pair , and the wiretapper observes the output of the second channel , i.e. , in the pair , respectively . _ _ * let be a finite set .let and be finite - dimensional complex hilbert spaces .let be an index set . 
for every let be a classical - quantum channel and be a classical - quantum channel .we call the set of the classical - quantum channel pairs an * arbitrarily varying classical - quantum wiretap channel _ _ , when the state varies from symbol to symbol in an arbitrary manner , while the legitimate receiver accesses the output of the first channel , i.e. , in the pair , and the wiretapper observes the output of the second channel , i.e. , in the pair , respectively . _ _ * an ( deterministic ) code for the arbitrarily varying classical - quantum wiretap channel consists of a stochastic encoder : , , specified by a matrix of conditional probabilities , and a collection of positive - semidefinite operators on , which is a partition of the identity , i.e. .we call these operators the decoder operators .a code is created by the sender and the legal receiver before the message transmission starts .the sender uses the encoder to encode the message that he wants to send , while the legal receiver uses the decoder operators on the channel output to decode the message .an deterministic code with deterministic encoder consists of a family of -length strings of symbols and a collection of positive - semidefinite operators on which is a partition of the identity .the deterministic encoder is a special case of the stochastic encoder when we require that for every , there is a sequence chosen with probability .the standard technique for message transmission over a channel and robust message transmission over an arbitrarily varying channel is to use the deterministic encoder ( cf . and ) . however , we use the stochastic encoder , since it is a tool for secure message transmission over wiretap channels ( cf . and ) .[ detvsran ] a non - negative number is an achievable secrecy rate for the arbitrarily varying classical - quantum wiretap channel if for every , , and sufficiently large there exist an code such that , and where is the uniform distribution on . here ( the average probability of the decoding error of a deterministic code , when the channel state of the arbitrarily varying classical - quantum wiretap channel is ) , is defined as is the set of the resulting quantum state at the output of the wiretap channel when the channel state of is . 
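as a concrete reading of the error criterion in the code definition above , the sketch below evaluates the average decoding error of a one - shot deterministic code for a fixed channel state ; for an arbitrarily varying channel one would then take the maximum of this quantity over channel - state sequences . the channel , codewords and decoding operators in the toy example are arbitrary placeholders .

```python
import numpy as np

def avg_decoding_error(channel, codewords, povm):
    """Average decoding error 1 - (1/J) * sum_j tr( W(x_j) D_j ) for a one-shot code
    with a deterministic encoder: `channel` maps an input symbol to a density matrix,
    `codewords[j]` is the symbol sent for message j, and `povm[j]` decodes message j."""
    J = len(codewords)
    correct = sum(np.trace(channel[x] @ D).real for x, D in zip(codewords, povm))
    return 1.0 - correct / J

# toy example: two messages over a noiseless qubit channel, computational-basis decoding
zero = np.array([[1.0, 0.0], [0.0, 0.0]])
one = np.array([[0.0, 0.0], [0.0, 1.0]])
channel = {0: zero, 1: one}
print(avg_decoding_error(channel, codewords=[0, 1], povm=[zero, one]))   # 0.0
```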
the supremum on achievable secrecy rate for the is called the secrecy capacity of denoted by .[ defofrate ] a weaker and widely used security criterion is obtained if we replace ( [ b40 ] ) with .[ remsc ] when we defined as then is defined as .when deterministic encoder is used then is defined as .we denote the set of deterministic codes by .a non - negative number is an achievable * secrecy rate _ for the arbitrarily varying classical - quantum wiretap channel * under randomness assisted coding _ if for every , , and , if is sufficiently large , there is an s a distribution on such that , and here is a sigma - algebra so chosen such that the functions and are both -measurable with respect to for every _ * _ * a non - negative number is an achievable * secrecy rate _ for the arbitrarily varying classical - quantum wiretap channel * under non - secure randomness assisted coding _ if for every , , and , if is sufficiently large , there is an s a distribution on such that , and here is a sigma - algebra so chosen such that the functions and are both -measurable with respect to ._ * _ * the supremum on achievable secrecy rate for the under randomness assisted coding is called the randomness assisted secrecy capacity of denoted by .the supremum on achievable secrecy rate for the under non - secure randomness assisted coding is called the non - secure randomness assisted secrecy capacity of denoted by .[ wdtsonjdc ] an code for the classic arbitrarily varying quantum wiretap channel consists of a stochastic encoder : , , specified by a matrix of conditional probabilities , and a collection of mutually disjoint sets ( decoding sets ) .a non - negative number is an achievable secrecy rate for the classic arbitrarily varying quantum wiretap channel if for every , , and sufficiently large there exist an code such that , and and the supremum on achievable secrecy rate for the is called the secrecy capacity of denoted by .[ defofrate2 t ] a non - negative number is an achievable secrecy rate for the arbitrarily varying classical - quantum wiretap channel under common randomness assisted quantum coding using an amount of secret common randomness , where is a non - negative number depending on , if for every , , , and sufficiently large , there is a set of codes such that , , and where is the uniform distribution on . unlike in and require that the randomness to be secure against eavesdropping here .the supremum on achievable secrecy rate under random assisted quantum coding using an amount of common randomness of is called the secret random assisted secrecy capacity of using an amount of common randomness , denoted by .we say super - activation occurs to two arbitrarily varying classical - quantum wiretap channels and when the following hold : and for super - activation we do not require the strong code concept .at first we determine a capacity formula for a mixed channel model , i.e. the secrecy capacity of classic arbitrarily varying quantum wiretap channel .this formula will be used for our result for secrecy capacity of arbitrarily varying classical - quantum wiretap channels using secretly sent common randomness .let be a classic arbitrarily varying quantum wiretap channel .when is not symmetrizable , then here are the resulting classical random variables at the output of the legitimate receiver s channels and are the resulting quantum states at the output of wiretap channels .the maximum is taken over all random variables that satisfy the markov chain relationships : for every and . 
is here a random variable taking values on , a random variable taking values on some finite set with probability distribution .[ lgwtvttitbaca ] we fix a probability distribution and choose an arbitrarily positive .let and let + and be a family of random matrices whose components are i.i.d . according to .we fix a and define a map by for we define . is trivially a typical sequence of . for , defines a map .let [ gentle operator , cf . and ] [ eq_4a ] let be a quantum state and be a positive operator with and .then the gentle operator was first introduced in , where it has been shown that . in ,the result of has been improved , and ( [ tenderoper ] ) has been proved . in view of the fact that and are both projection matrices , by ( [ te1 ] ) , ( [ te7 ] ) , and lemma [ eq_4a ] for any and , it holds that the following lemma has been showed in : [ cov3 ] let be a finite - dimensional hilbert space .let and be finite sets .suppose we have an ensemble of quantum states .let be a probability distribution on .suppose a total subspace projector and codeword subspace projectors exist which project onto subspaces of the hilbert space in which the states exist , and for all there are positive constants ,1[ ] .the fannes inequality was first introduced in , where it has been shown that . in result of has been improved , and ( [ faaudin ] ) has been proved . by lemma [ eq_9 ] andthe inequality ( [ eq_82w ] ) , for a uniformly distributed random variable with values in a and , we have by ( [ wehaveb2w ] ) , for any positive if is sufficiently large , we have we define and . by ( [ elrttljx ] ) and ( [ clruztr2w ] ) ,when is not symmetrizable the deterministic secrecy capacity of is larger or equal to the achievability of and the converse are shown by the standard arguments ( cf . and ) .now we are going to prove our main result : the secrecy capacity formula for arbitrarily varying classical - quantum wiretap channels using secretly sent common randomness . in our previous papers and we determined the secrecy capacity formula for arbitrarily varying classical - quantum wiretap channels .our strategy is to build a two - part code word , which consists of a non - secure code word and a common randomness - assisted secure code word .the non - secure one is used to create the common randomness for the sender and the legal receiver .the common randomness - assisted secure code word is used to transmit the message to the legal receiver .now we build a code in such a way that the transmission of both the message and the randomization is secure .since the technique introduced in for classical channels can not be easily transferred into quantum channels , our idea is to construct a classical arbitrarily varying quantum wiretap channel and apply theorem [ lgwtvttitbaca ] . in technique has been introduced to construct a classical arbitrarily varying channel by means of an arbitrarily varying classical - quantum channel .however this technique does not work for classical arbitrarily varying quantum wiretap channel since it can not provide security .we have to find a more sophisticated way .if the arbitrarily varying classical - quantum channel is not symmetrizable , then when we use a two - part code word that both parts are secure . here are the resulting quantum states at the output of the legitimate receiver s channels . are the resulting quantum states at the output of wiretap channels .the maximum is taken over all random variables that satisfy the markov chain relationships : for every and . 
is here a random variable taking values on , a random variable taking values on some finite set with probability distribution .[ pdwumsfsov ] since the security of both the message and the randomization implies the security of only the message , the secrecy capacity of for the message and the randomization transmission can not exceed , which is the secrecy capacity of for only the message transmission ( cf .thus the converse is trivial . for the achievabilitywe at first assume that is symmetrizable . in this casethe secrecy capacity of for only the message transmission is zero and there is nothing to prove .at next we assume that for all we have . in this casethe secrecy capacity of for only the message transmission is also zero and again there is nothing to prove .now we assume that is not symmetrizable and for all sufficiently large and a positive holds . _i ) construction of a non - symmetrizable channel with random pre - coding _we consider the markov chain , where we define the classical channel by .it may happen that is symmetrizable although is not symmetrizable , as following example shows : we assume that : is not symmetrizable but there is a subset such that limited on is symmetrizable .we choose a such that for every there is such that , and for all and .it is clear that is symmetrizable ( cf .also for an example for classical channels ) .we now use a technique introduced in to overcome this : without loss of generality we may assume that by optimization . furthermore without loss of generalitywe may assume that by relabeling the symbols .for every we define a new classical channel by setting , i.e. , we have where for we denote . since is not symmetrizable , is not symmetrizable . furthermore , for any positive sufficiently large we have for every and we define by this shows that , if for a non - symmetrizable channel we have ( [ finmpibqic1 ] ) , then holds , where are the resulting quantum states at the output of ._ ii ) definition of a classical arbitrarily varying channel which is not symmetrizable _we denote and define for all .now we consider the arbitrarily varying wiretap classical - quantum channel .we choose an arbitrary , by ( [ finmpibqic ] ) if is sufficiently large we may assume that for at least one by theorem 1 of if is sufficiently large we can find a code and positive , such that for some and for all and for a which is independent of . here for define its permutation matrix on by .( notice that is a deterministic code for a mixed channel model called compound - arbitrarily varying wiretap classical - quantum channel which we introduced in . )we now combine a technique introduced in with the concept of superposition code to define a set of classical channels .we choose hermitian operators , which span the space of hermitian operators on and fulfill by the technique introduced in : we choose arbitrarily hermitian operators , which span the space of hermitian operators on and denote the trace of by .now we define for and .now we defined the classical arbitrarily varying channel by since we have for all .thus this definition is valid .when is symmetrizable then there is a on such that for all .this implies that for all .since span the space of hermitian operators on we have this is a contradiction to our assumption that is not symmetrizable , therefore is not symmetrizable . _iii ) the deterministic secrecy capacity of is positive _ by ( [ 1f1jmspi ] ) for all and we have and for all and we denote the uniform distribution on by . 
for any positive if is sufficiently large by ( [ 1f1jmspi1 ] ) and ( [ 1f1jmspi2 ] ) for all we have where is the resulting distribution at the output of . applying the lemma [ eq_9 ] and ( [ balf1lnsl1 ] )if is sufficiently large for any , positive , and for all we have &\frac{1}{n ' } \max _ { t^{mn ' } \in \theta^{mn'}}\chi\left({r'}^{\otimes n ' } , \check{z}_{t^{mn'}}\right)\notag\\ & = \frac{1}{n ' } \biggl ( s\left ( \frac{1}{2^{n ' } } \sum_{j\in \{1,2\}^{n ' } } \check{v}_{t^{mn'}}(({e^{m}})^{\otimes n'}(\cdot\mid j))\right ) - \frac{1}{2^{n'}}\sum_{j\in \{1,2\}^{n ' } } s\left ( \check{v}_{t^{mn'}}(({e^{m}})^{\otimes n'}(\cdot\mid j))\right)\biggr)\notag\\ & \leq \frac{1}{n ' } \left\vert s\left ( \frac{1}{2^{n ' } } \sum_{j\in \{1,2\}^{n ' } } \check{v}_{t^{mn'}}(({e^{m}})^{\otimes n'}(\cdot\mid j))\right ) - s\left ( \theta_{t^{mn'}}\right ) \right\vert\notag\\ & + \frac{1}{n ' } \left\vert s\left ( \theta_{t^{mn ' } } \right ) - \frac{1}{2^{n ' } } \sum_{j\in \{1,2\}^{n ' } } s\left ( \check{v}_{t^{mn'}}(({e^{m}})^{\otimes n'}(\cdot\mid j))\right ) \right\vert\notag\\ & = \frac{1}{n ' } \left\vert \sum_{i=1}^{n ' } \biggl ( s\left ( \frac{1}{2 } \sum_{j\in \{1,2\ } } \check{v}_{t^{m}_{i}}(({e^{m}})(\cdot\mid j))\right ) - s\left ( \theta_{t^{m}_{i}}\right ) \biggr ) \right\vert\notag\\ & + \frac{1}{n ' } \left\vert \sum_{i=1}^{n ' } \left ( s\left ( \theta_{t^{m}_{i } } \right ) - \frac{1}{2 } \sum_{j\in \{1,2\ } } s\left ( \check{v}_{t^{m}_{i}}(({e^{m}})(\cdot\mid j))\right ) \right)\right\vert\notag\\ & \leq 2\cdot 2^{-\sqrt{m}\zeta}\log ( d^m-1)+2\cdot h(2^{-\sqrt{m}\zeta})\notag\\ & \leq \zeta'\text { , } \label{mtmnitmnc}\end{aligned}\ ] ] where is the resulting quantum state at the output of .we choose and a sufficiently large such that ( [ mqiptmilem ] ) and ( [ mtmnitmnc ] ) hold .sine is not symmetrizable , by theorem [ lgwtvttitbaca ] the deterministic secrecy capacity of is equal to _ iv ) the secure transmission of the message with a deterministic code _ since we can build a code such that and for a which is independent of .we define for . for define here for we set .since , is a valid set of decoding operators . 
is a code which fulfills _ v ) the secure transmission of both the message and the randomization index _we choose an arbitrary positive .let by the results of if is sufficiently large there is a common randomness assisted quantum code , a quantum state such that for all and for all , and all for a which is independent of .using technique in to reduce the amount of common randomness if is sufficiently large we can find a set such that and furthermore by the permutation - invariance of we also have for all .now we can construct a code by for every and by ( [ f1n3mtwtnr2 ] ) and ( [ mtnitn1n3f1jn ] ) for every we have \left [ \tilde{d}_i^{(\log n)^3 } \otimes ( p_{\pi_i}d_j^n p_{\pi_i}^{t})\right]\biggr ) \allowdisplaybreaks\notag\\ & = \frac{1}{n^3}\sum_{i=1}^{n^3 } \mathrm{tr}\biggl(\left[\check{w}_{t^{(\log n)^3 } } ( \tilde{e}^{(\log n)^3}(\cdot|i ) ) \tilde{d}_i^{(\log n)^3}\right ] \otimes\left [ \frac{1}{j_n}\sum_{j=1}^{j_n}\left ( \check{w}_{t^n}(\pi_i(e^n(\cdot|j)))\right ) p_{\pi_i}d_j^n p_{\pi_i}^{t}\right]\biggr ) \allowdisplaybreaks\notag\\ & = \frac{1}{n^3}\sum_{i=1}^{n^3 } \biggl ( \mathrm{tr}\left(\check{w}_{t^{(\log n)^3 } } ( \tilde{e}^{(\log n)^3}(\cdot|i ) ) \tilde{d}_i^{(\log n)^3}\right ) \cdot \mathrm{tr}\left ( \frac{1}{j_n}\sum_{j=1}^{j_n}\left ( \check{w}_{t^n}(\pi_i(e^n(\cdot|j)))\right ) p_{\pi_i}d_j^n p_{\pi_i}^{t}\right ) \biggr)\notag\\ & \geq 1-\frac{1}{n^{1/16}}2^{\lambda}-2\cdot 2^{-n^{1/16}\lambda}\notag\\ & \geq 1- \varepsilon\label{l21gnbrtippnjd}\end{aligned}\ ] ] for any positive . by ( [ mt6l2it ] ) and ( [ balf1lnsl13 ] ) for every and and we have let be the uniform distribution on .we define a random variable on the set by .applying lemma [ eq_9 ] we obtain for any positive . here is the resulting quantum state at after has been sent with . for any positive ,if is large enough we have .thus the secrecy rate of to transmission of both the message and the randomization index is large than our previous papers and we determined the randomness assisted secrecy capacities of arbitrarily varying classical - quantum wiretap channels .in we gave an example when the deterministic capacity of an arbitrarily varying classical - quantum wiretap channel is not equal to its randomness - assisted capacity .thus having resources is very helpful for achieving a positive secrecy capacity . for the proofs in and we did not allow the jammer to have access to the shared randomness .now we consider the case when the shared randomness is not secure , i.e. when the jammer can have access to the shared randomness .let be an arbitrarily varying classical - quantum wiretap channel .we have [enrtitvwsc ] let be an code such and we define a such that , it holds thus every achievable secrecy rate for is also an achievable secrecy rate for under non - secure randomness assisted coding .now we assume that there is a such that then for any we have thus every achievable secrecy rate for under non - secure randomness assisted coding is also an achievable secrecy rate for under randomness assisted coding . therefore , at first let us assume that is not symmetrizable . 
by when is not symmetrizable it holds .thus when is not symmetrizable we have now let us assume that is symmetrizable .when is symmetrizable and hold then by for any code there is are and a positive such that thus when is symmetrizable for any we have which implies we can only have when is less or equal to .this means by when is symmetrizable it holds and therefore when is symmetrizable we have in we showed that an arbitrarily varying classical - quantum channel with zero deterministic secrecy capacity allowed secure transmission if the sender and the legal receiver had the possibility to use shared randomness as long as the shared randomness was kept secret against the jammer .corollary [ enrtitvwsc ] shows that when the jammer is able have access to the outcomes of the shared random experiment we can only achieve the rate as when we do not use any shared randomness at all .this means the shared randomness will be completely useless when it is known by the jammer .applying theorem [ pdwumsfsov ] we can now determine the random assisted secrecy capacity with the strongest code concept for shared randomness , i.e. , the randomness which is secure against both the jammer and eavesdropping .let be a finite index set .let be an arbitrarily varying classical - quantum wiretap channel .when is not symmetrizable , we have here we use the strong code concept .[ commpermn2 ] when is positive and independent of , ( [ ckwtvttit ] ) always holds and we do not have to assume that is not symmetrizable . we define it holds .notice that when is positive and independent of we always have have for sufficiently large and thus .we fix a probability distribution .let and let be a family of random variables taking value according to .it holds and . similar to the proof of theorem 1 in and theorem 3.1 in , with a positive probability there is a realization of and a set with following properties : there exits a set of decoding operators such that for every , , and sufficiently large , and when holds we use the strategy of theorem [ pdwumsfsov ] by build a two - part secure code word , the first part is used to send , the second is used to transmit the message to the legal receiver .thus the achievability of and is then shown via standard arguments .now we are going to prove the converse . 
holds trivially .let be a sequence of code such that for every and where and .it is known that for sufficiently large we have let .we denote and .let be the uniformly distributed random variable with value in .we have &\frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}\chi\left(r_{uni};b_{q}^{\gamma\otimes n}\right ) - \chi\left(r_{uni};\frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}b_{q}^{\gamma\otimes n}\right)\notag\\ & = \frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}s\left(\frac{1}{j_n}\sum_{j=1}^{j_n}\psi_{q}^{j,\gamma\otimes n}\right ) -\frac{1}{2^{ng_n}}\frac{1}{j_n}\sum_{\gamma=1}^{2^{ng_n}}\sum_{j=1}^{j_n}s\left(\psi_{q}^{j,\gamma\otimes n}\right)\notag\\ & -\biggl[s\left(\frac{1}{2^{ng_n}}\frac{1}{j_n}\sum_{\gamma=1}^{2^{ng_n}}\sum_{j=1}^{j_n}\psi_{q}^{j,\gamma\otimes n}\right ) - \frac{1}{j_n}\sum_{j=1}^{j_n}s\left(\frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}\psi_{q}^{j,\gamma\otimes n}\right)\biggr]\allowdisplaybreaks\notag\\ & = \frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}s\left(\frac{1}{j_n}\sum_{j=1}^{j_n}\psi_{q}^{j,\gamma\otimes n}\right ) -s\left(\frac{1}{2^{ng_n}}\frac{1}{j_n}\sum_{\gamma=1}^{2^{ng_n}}\sum_{j=1}^{j_n}\psi_{q}^{j,\gamma\otimes n}\right)\notag\\ & -\biggl[\frac{1}{2^{ng_n}}\frac{1}{j_n}\sum_{\gamma=1}^{2^{ng_n}}\sum_{j=1}^{j_n}s\left(\psi_{q}^{j,\gamma\otimes n}\right ) - \frac{1}{j_n}\sum_{j=1}^{j_n}s\left(\frac{1}{2^{ng_n}}\sum_{\gamma=1}^{2^{ng_n}}\psi_{q}^{j,\gamma\otimes n}\right)\biggr]\allowdisplaybreaks\notag\\ & = \frac{1}{j_n}\sum_{j=1}^{j_n}s\left(g_{uni},\tilde{b}_{q}^{j\otimes n}\right)-s\left(g_{uni},\tilde{b}_{q}^{\otimes n } \right)\allowdisplaybreaks\notag\\ & \leq \frac{1}{j_n}\sum_{j=1}^{j_n}s\left(g_{uni},\tilde{b}_{q}^{j\otimes n}\right)\allowdisplaybreaks\notag\\ & \leq \frac{1}{j_n}\sum_{j=1}^{j_n } h\left(g_{uni}\right)\allowdisplaybreaks\notag\\ & = h\left(g_{uni}\right)\notag\\ & = ng_n \text { .}\label{slgutbqnrn}\end{aligned}\ ] ] by ( [ ckwtvttitgn ] ) , ( [ lsntiljnl ] ) , and ( [ slgutbqnrn ] ) we have in this section we present some applications of our results in and .in it has been shown that the deterministic secrecy capacity of an arbitrarily varying classical - quantum wiretap channel is in general not continuous .now we deliver the sufficient and necessary conditions for the continuity of the capacity function of arbitrarily varying classical - quantum wiretap channels . for an arbitrarily varying classical - quantum channel define where the set of parametrized distributions sets on .the statement is equivalent to being symmetrizable . for an arbitrarily varying classical - quantum wiretap channel , where and , and a positive let be the set of all arbitrarily varying classical - quantum wiretap channels , where and , such that and for all . , the deterministic secrecy capacity of arbitrarily varying classical - quantum wiretap channel is discontinuous at if and only if the following hold : + 1 ) the secrecy capacity of under common randomness assisted quantum coding is positive ; + 2 ) but for every positive there is a such that .[ftsdmiait ] at first we assume that the secrecy capacity of under common randomness assisted quantum coding is positive and .we choose a positive such that . by corollary 5.1 in the secrecy capacity under common randomnessassisted quantum coding is continuous . thus thereexist a positive such that the for all we have now we assume that there is a such that .this means that is not symmetrizable . by theorem 1 in it holds since , is symmetrizable . 
by theorem 1 in the deterministic secrecy capacity is discontinuous at when 1 ) and 2 ) hold .now let us consider the case when the deterministic secrecy capacity is discontinuous at .we fix a and , .the map is continuous in the following sense : when holds then for every positive and any we have thus if for a we have for all , , we also have this means that when holds we can find a positive such that holds for all . by theorem 1 in it holds by corollary 5.1 in is continuous .therefore , when the deterministic secrecy capacity is discontinuous at , can not be positive .we consider now that holds . by theorem 1 in when for every we have , then by theorem 1 in andthe deterministic secrecy capacity is thus continuous at .therefore , when the deterministic secrecy capacity is discontinuous at , for every positive there is a such that . when for every positive there is a such that and holds , then by theorem 1 in we have and the deterministic secrecy capacity is continuous at . therefore , when the deterministic secrecy capacity is discontinuous at , must be positive .let be an arbitrarily varying classical - quantum wiretap channel .when the secrecy capacity of is positive then there is a such that for all we have suppose we have .then is not symmetrizable , which means that is positive . in the proof of corollary [ ftsdmiait ]we show that is continuous .thus there is a positive such that for all .when is not symmetrizable then we have . by corollary 5.1 in , the secrecy capacity under common randomnessassisted quantum coding is continuous .thus there is a positive such that for all .we define and the corollary is shown .one of the properties of classical channels is that in the majority of cases , if we have a channel system where two sub - channels are used together , the capacity of this channel system is the sum of the two sub - channels capacities .particularly , a system consisting of two orthogonal classical channels , where both are `` useless '' in the sense that they both have zero capacity for message transmission , the capacity for message transmission of the whole system is zero as well ( `` '' ) .in contrast to the classical information theory , it is known that in quantum information theory , there are examples of two quantum channels , and , with zero capacity , which allow perfect transmission if they are used together , i.e. , the capacity of their product is positive .this is due to the fact that there are different reasons why a quantum channel can have zero capacity .we call this phenomenon `` super - activation '' ( `` '' ) . in - activation has been shown for arbitrarily varying classical - quantum wiretap channels .now we deliver a complete characterization of super - activation for arbitrarily varying classical - quantum wiretap channels .let and be two arbitrarily varying classical - quantum wiretap channels .\1 ) if then is positive if and only if is not symmetrizable and is positive .\2 ) if the secrecy capacity under common randomness assisted quantum coding shows no super - activation for and then the secrecy capacity can only then show super - activation for and if one of and has positive secrecy capacity under common randomness assisted quantum coding and a symmetrizable legal channel and while the other one has zero secrecy capacity under common randomness assisted quantum coding and a non - symmetrizable legal channel . 
by theorem 1 in equal to when is not symmetrizable and to zero when is symmetrizable .thus 1 ) holds .when and are both symmetrizable then there exists two parametrized set of distributions , on such that for all , , we have , , we can set and obtain for all , , which means that is symmetrizable and super - activation does not occur because of 1 ) . when and are both not symmetrizable then their secrecy capacities are equal to their secrecy capacities under common randomness assisted quantum coding . .because of our assumption . by 1 ), super - activation can not occur .when one of and , say is not symmetrizable while the other one is symmetrizable , then indicate that .when is also zero then by our assumption super - activation can not occur .thus 2 ) holds .supports by the bundesministerium fr bildung und forschung ( bmbf ) via grant 16kis0118k and 16kis0117k , the german research council ( dfg ) via grant 1129/1 - 1 , the erc via advanced grant irquat , the spanish mineco via project fis2013 - 40627-p , and the generalitat de catalunyacirit via project 2014 sgr 966 are gratefully acknowledged .xxx r. ahlswede , a note on the existence of the weak capacity for channels with arbitrarily varying channel probability functions and its relation to shannon s zero error capacity , the annals of mathematical statistics , vol .41 , no . 3 , 1970 .r. ahlswede , elimination of correlation in random codes for arbitrarily varying channels , z. wahrscheinlichkeitstheorie verw .gebiete , vol .44 , 159 - 175 , 1978 .r. ahlswede , i. bjelakovi , h. boche , and j. ntzel , quantum capacity under adversarial quantum noise : arbitrarily varying quantum channels , comm .317 , no . 1 , 103 - 156 , 2013 .r. ahlswede and v. blinovsky , classical capacity of classical - quantum arbitrarily varying channels , ieee trans .theory , vol .53 , no . 2 , 526 - 533 , 2007 . k. m. r. audenaert , a sharp continuity estimate for the von neumann entropy , j. phys . a : math . theor . ,40 , 8127 - 8136 , 2007 . i. bjelakovi and h. boche , classical capacities of averaged and compound quantum channels .ieee trans .theory , vol .57 , no . 7 , 3360 - 3374 , 2009 . i. bjelakovi , h. boche , g. janen , and j. ntzel , arbitrarily varying and compound classical - quantum channels and a note on quantum zero - error capacities , information theory , combinatorics , and search theory , in memory of rudolf ahlswede , h. aydinian , f. cicalese , and c. deppe eds ., lncs vol.7777 , 247 - 283 , arxiv:1209.6325 , 2012 .d. blackwell , l. breiman , and a. j. thomasian , the capacities of a certain channel classes under random coding , ann .31 , no . 3 , 558 - 567 , 1960 .v. blinovsky and m. cai , arbitrarily classical - quantum varying wiretap channel , information theory , combinatorics , and search theory , in memory of rudolf ahlswede , h. aydinian , f. cicalese , and c. deppe eds ., lncs vol.7777 , 234 - 246 , 2013 . m. bloch and j. n. laneman , on the secrecy capacity of arbitrary wiretap channels , communication , control , and computing , forty - sixth annual allerton conference allerton house , uiuc , usa , 818 - 825 , 2008 h. boche , m. cai , c. deppe , and j. ntzel , classical - quantum arbitrarily varying wiretap channel - ahlswede dichotomy - positivity - resources - super activation , quantum information processing , vol .15 , no . 11 , 4853 - 489 , arxiv:1307.8007 , 2016 .h. boche , m. cai , c. deppe , and j. 
ntzel , classical - quantum arbitrarily varying wiretap channel : common randomness assisted code and continuity , quantum information processing , vol .16 , no . 1 , 1 - 48 , 2016 . h. boche and j. ntzel , arbitrarily small amounts of correlation for arbitrarily varying quantum channel , j. math ., vol . 54 , issue 11 , arxiv 1301.6063 , 2013 .n. cai , a. winter , and r. w. yeung , quantum privacy and quantum wiretap channels , problems of information transmission , vol .40 , no . 4 , 318 - 336 , 2004 .i. csiszr and p. narayan , the capacity of the arbitrarily varying channel revisited : positivity , constraints , ieee trans .theory , vol .34 , no . 2 , 181 - 193 , 1988 . i. devetak , the private classical information capacity and quantum information capacity of a quantum channel , ieee trans .theory , vol .51 , no . 1 , 44 - 55 , 2005 . t. ericson , exponential error bounds for random codes in the arbitrarily varying channel , ieee trans .theory , vol .31 , no . 1 , 42 - 48 , 1985 . m. fannes , a continuity property of the entropy density for spin lattice systems , communications in mathematical physics , vol . 31 .291 - 294 , 1973 .a. s. holevo , the capacity of quantum channel with general signal states , ieee trans .theory , vol .44 , 269 - 273 , 1998 .j. ntzel , m. wiese , and h. boche , the arbitrarily varying wiretap channel secret randomness , stability and super - activation , ieee trans .theory , vol .62 , no . 6 , 3504 - 3531 , arxiv:1501.07439 , 2016 .t. ogawa and h. nagaoka , making good codes for classical - quantum channel coding via quantum hypothesis testing , ieee trans .theory , vol .53 , no . 6 , 2261 - 2266 , 2007 . b. schumacher and m. a. nielsen , quantum data processing and error correction , phys . rev .54 , 2629 , 1996 . b. schumacher and m. d. westmoreland , sending classical information via noisy quantum channels , phys .56 , 131 - 138 , 1997 .m. wiese , j. ntzel , and h. boche , the arbitrarily varying wiretap channel - deterministic and correlated random coding capacities under the strong secrecy criterion , ieee trans .theory , vol .62 , no . 7 , 3844 - 3862 , arxiv:1410.8078 , 2016 . m. wilde , quantum information theory , cambridge university press , 2013 .a. winter , coding theorem and strong converse for quantum channels , ieee trans .inform . theory ,45 , no . 7 ,2481 - 2485 , 1999 .a. d. wyner , the wire - tap channel , bell system technical journal , vol .54 , no . 8 ,1355 - 1387 , 1975 .
|
we analyze arbitrarily varying classical-quantum wiretap channels. these channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). we progress on previous works by introducing a reduced class of allowed codes that fulfills a more stringent secrecy requirement than earlier definitions. in addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common-randomness-assisted secrecy capacities. finally, we focus on analytic properties of both secrecy capacities: we completely characterize their discontinuity points and their super-activation properties.
|
popularity prediction on social networks can help users sift through the vast stream of online contents and enable advertisers to maximize revenue through differential pricing for access to content or advertisement placement .popularity prediction is challenging since numerous factors can affect the popularity of online content .moreover , popularity is very asymmetric and broadly - distributed . several pioneering work devoted to the characteristics and mechanisms of information diffusion .several efforts have been made to study the popularity prediction on social networks .szabo et al . found that the final popularity is reflected by the popularity in early period by investigating digg and youtube .a direct extrapolation method is then employed to predict the long - term popularity .lerman et al . modeled users vote process on digg by considering both the interestingness and the visibility of online content .hong et al. formulated the popularity prediction as a classification problem .however , existing methods pay little attention to the structural characteristic of the propagation path of online content . in this paper, we consider the popularity prediction problem by studying the relationship between the popularity of online content and the structural characteristics of the underlying propagation network .the study is conducted on the sina weibo , the biggest microblogging network in china .experimental results demonstrate that our method significantly outperforms the state - of - the - art method which neglects the structural characteristics of social networks .this indicates that the structural diversity would give us some insights to understand the mechanism of information diffusion and to predict the long - term popularity of a tweet .in this paper , the popularity prediction aims to predict the popularity of a tweet at a _ reference time _ , given the forward information of this tweet before an _ indicating time _ .the indicating time is the time at which we observe the information of a tweet and the reference time is the time at which we predict the popularity of the tweet .the popularity is measured by the number of times that a tweet is re - tweeted at time .we first study the structural characteristics of the forward path of tweets . encouraged by the work in , we investigate whether the final popularity of a tweet is well indicated by the structural characteristics of the network consisting of users that re - tweet the tweet at an earlier time .specifically , we analyze the structural characteristics of a tweet with the following two measurements on its re - tweet path at hour after it is posted .the first measurement is _link density_. 
among all users that have forwarded the tweet at time , link density is the ratio of the number of followship links to the number of all possible links .the other measurement is the _ diffusion depth _ , which is the longest length of the path from the submitter to any user that has retweeted the tweet at time .we report the final popularity of a tweet with respect to the link density and the diffusion depth .as shown in figure [ fig : structural ] , there exists a strong negative linear correlation between the final popularity and the link density , and there exists a strong positive near - linear correlation between the final popularity and the diffusion depth .this finding tells us that a diverse group of earlier users , reflected with low link density and large diffusion depth , leads to a wide spreading of a tweet .therefore , the structural characteristics of diffusion paths of a tweet at an earlier time can help predict its final popularity . based on the above findings , we propose two improved approaches to predict the final popularity using earlier popularity and structural characteristics .we estimate the logarithmic final popularity with a combination of the logarithmic early popularity and the logarithmic link density , where is the link density at or before time , and , and are global coefficients that will be learned from the data .similarly , we define a diffusion depth version to estimate the logarithmic final popularity of a tweet as where is the diffusion depth of the tweet at or before time , and , and are also global coefficients . to demonstrate the effectiveness of our proposed approaches , we compare them with a baseline approach which estimates the final popularity with the early popularity alone .the baseline predicts the final popularity using where and are also global coefficients that will be learned from the data .we use sina weibo dataset published by wise 2012 challenge .we select the tweets posted during july 1 - 31 , 2011 and all the re - tweet paths occurred during july 1-august 31 , 2011 .the data set consists of 16.6 million tweets .this data set also contains a snapshot of the social network of sina weibo .the social network contains 58.6 millions of registered users and 265.5 millions of following relations ..prediction error of three approaches . [ cols="^,^,^",options="header " , ] we take of all the tweets in the dataset as the training set and the rest as the testing set .the predictions are evaluated with _ rmse _( root mean squared error ) and _ mae _ ( mean absolute error ) . as reported in table [ table:1 ] ,the approach incorporating the link density significantly reduces the prediction error compared with the baseline , and the approach incorporating the diffusion depth performs even better . here , the values of and in previous formulas that we learned from the data is 0.04 and 0.07 separately .the results empirically demonstrate that early structural characteristics affect the final popularity .low link density and long diffusion path implies that a tweet is more probably spread to different parts of the network , which helps the tweet become known by a greater population .in this paper , we have studied how to predict the popularity of short message in sina weibo .we find that structural characteristics provide strong evidence for the final popularity . a low link density and a deep diffusion usually lead to wide spreading , capturing the intuition that a diverse group of individuals spread a message to wider audience than a dense group . 
based on such a finding , we propose two approaches by incorporating the early popularity with the link density and the diffusion depth of early adopters .experiments demonstrate that the proposed approaches significantly reduce the error of popularity prediction .our finding provides a new perspective to understand the popularity prediction problem and is helpful to build accurate prediction models in the future .this work is funded by the national natural scientific foundation of china under grant nos .61232010 , 61202215 and national basic research program of china ( the 973 program ) under grant no .this work is partly funded by the beijing natural scientific foundation of china under grant no .this work is also supported by key lab of information network security , ministry of public security .
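as an illustration of the two measurements and the two log-linear predictors above, the following sketch computes the link density and the diffusion depth of a tweet's early adopters and fits the model coefficients by least squares. it is a minimal sketch under stated assumptions: the followship network is a directed networkx graph, the forward path is a list of (forwarder, retweeter) edges, and all names are placeholders rather than part of the original study.

```python
import numpy as np
import networkx as nx

def link_density(followship, early_adopters):
    """Fraction of followship links among early adopters over all possible directed links."""
    sub = followship.subgraph(early_adopters)
    n = len(set(early_adopters))
    return sub.number_of_edges() / (n * (n - 1)) if n > 1 else 0.0

def diffusion_depth(forward_edges, submitter, early_adopters):
    """Longest forwarding-path length from the submitter to any early adopter."""
    tree = nx.DiGraph(forward_edges)  # edges: (forwarder, retweeter)
    depths = [nx.shortest_path_length(tree, submitter, u)
              for u in early_adopters
              if tree.has_node(u) and nx.has_path(tree, submitter, u)]
    return max(depths) if depths else 0

def fit_predictor(early_popularity, structural_feature, final_popularity):
    """Least-squares fit of log N(t_r) = alpha*log N(t_i) + beta*feature + gamma.
    Pass the log link density or the diffusion depth as the structural feature."""
    X = np.column_stack([np.log(early_popularity),
                         structural_feature,
                         np.ones(len(early_popularity))])
    coef, *_ = np.linalg.lstsq(X, np.log(final_popularity), rcond=None)
    return coef  # (alpha, beta, gamma)
```

evaluating the fitted models with rmse and mae, as in the table above, allows the comparison against the early-popularity-only baseline to be repeated on held-out tweets.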
|
predicting the popularity of content is important for both the host and the users of social media sites. the challenge of this problem comes from the highly unequal distribution of popularity across content. existing methods for popularity prediction are mainly based on the quality of content, the interface of the social media site used to highlight content, and the collective behavior of users. however, little attention is paid to the structural characteristics of the networks spanned by early adopters, i.e., the users who view or forward the content in the early stage of its dissemination. in this paper, taking sina weibo as a case, we empirically study whether structural characteristics can provide clues for the popularity of short messages. we find that the popularity of content is well reflected by the structural diversity of the early adopters. experimental results demonstrate that the prediction accuracy is significantly improved by incorporating the factor of structural diversity into existing methods.
|
for many systems , temporal logic formulas are used to describe desirable system properties such as safety , stability , and liveness . given a stochastic system modeled as a , the synthesis problem is to find a policy that achieves optimal performance under a given quantitative criterion regarding given temporal logic formulas .for instance , the objective may be to find a policy that maximizes the probability of satisfying a given temporal logic formula .in such a problem , we need to keep track of the evolution of state variables that capture system dynamics as well as predicate variables that encode properties associated with the temporal logic constraints .as the number of states grows exponentially in the number of variables , we often encounter large s , for which the synthesis problems are impractical to solve with centralized methods .the insight for control synthesis of large - scale systems is to exploit the modular structure in a system so that we can solve the original problem by solving a set of small subproblems . in literature , distributed control synthesis methods are proposed in the pioneering work for s with discounted rewards .the authors formulate a two - stage distributed reinforcement learning method : the first stage constructs and solves an abstract problem derived from the original one , and the second stage iteratively computes parameters for local problems until the collection of local problems solutions converge to one that solves the original problem .recently , is combined with a sub - gradient method into planning for average - reward problems in large mdps in .however , the method in applies only when some special conditions are satisfied on the costs and transition kernels .alternatively , hierarchical reinforcement learning introduces _ action - aggregation _ and _ action - hierarchies _ to address the planning problems with large mdps . in action - aggregation ,a micro - action is a local policy for a subset of states and the global optimal policy maps histories of states into micro - actions .however , it is not always clear how to define the action hierarchies and how the choice of hierarchies affects the optimality in the global policy .additionally , the aforementioned methods are in general difficult to implement and can not handle temporal logic specifications . for synthesis problems in s with quantitative temporal logic constraints , centralized methods and tools are developed and applied to control design of stochastic systems and robotic motion planning . since centralized algorithmsare based on either value iteration or linear programming , they inevitably hit the barrier of scalability and are not viable for large s. in this paper , we develop a distributed optimization method for large s subject to temporal logic contraints . we first introduce a decomposition method for large s and prove a property in such a decomposition that supports the application of the proposed distributed optimization . for a subclass of swhose graph structures are planar graphs , we introduce an efficient decomposition algorithm that exploits the modular structure for the underlying caused by loose coupling between subsets of states and its constituting components . then , given a decomposition of the original system, we employ a distributed optimization method called _ block splitting algorithm _ to solve the planning problem with respect to discounted - reward objectives in large s and average - reward objectives in large ergodic s. 
comparing to two - stage methods in , our method concurrently solves the set of sub - problems and penalizes solutions mismatches in one step during each iteration , and is easy to implement .since the distributed control synthesis is independent from the way how a large is decomposed , any decomposition method can be used .lastly , we extend the method to solve the synthesis problems for s with two classes of quantative temporal logic objectives . through case studieswe investigate the performance and effectiveness of the proposed method .let be a finite set .let be the set of finite and infinite words over . is the cardinality of the set .a probability distribution on a finite set is a function ] ( resp . ] . a discounted - reward ( resp .an average - reward ) problem is , for a given initial state distribution , to obtain a policy that maximizes the discounted - reward value ( resp .average - reward value ) . for discounted - reward ( average - reward )problems , the optimal value can be attained by memoryless policies . a solution to the discounted - reward problem can be found by solving the problem : [ eq : constraintdiscounted ] where is the total number of state - action pairs in the , is the non - negative orthant of , and variable can be interpreted as the expected discounted time of being in state and taking action . once the problem in is solved , the optimal policy is obtained as and the objective function s value is the optimal discounted - reward value under policy given the initial distribution of states . in an ergodic ,the average - reward value is a constant regardless of the initial state distribution ( see for the definition of ergodicity ) .we obtain an optimal policy for an average - reward problem by solving the problem [ eq : averagelp ] where is understood as the long - run fraction of time that the system is at state and the action is taken .once the problem in is solved , the optimal policy is obtained as .the optimal objective value is the optimal average - reward value and is the same for all states .* distributed optimization : * as a prelude to the distributed synthesis method developed in section [ sec : planning ] , now we describe the _ alternating direction method of multipliers _ ( ) for the generic convex constrained minimization problem where function is closed proper convex and set is closed nonempty convex . in iteration of the algorithm the following updates are performed : [ eq : subeqns ] where and are auxiliary variables , is the ( euclidean ) projection onto , and is the _ proximal operator _ of with parameter .the algorithm handles separately the objective function in and the constraint set in . in the dual update step coordinates these two steps and results in convergence to a solution of the original problem .* temporal logic : * formulas are defined by : where is an atomic proposition , and and are temporal modal operators for `` next '' and `` until '' .additional temporal logic operators are derived from basic ones : ( eventually ) and . given an , let be a finite set of atomic propositions , and a function be a labeling function that assigns a set of atomic propositions to each state that are valid at the state . can be extended to paths in the usual way , i.e. , for .a path satisfies a temporal logic formula if and only if satisfies . 
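as a concrete instance of the occupation-measure program above, the following sketch solves the discounted-reward lp for a small mdp with an off-the-shelf solver and recovers a memoryless optimal policy by normalizing the optimal variables state by state, as described above. the encoding of the mdp (one transition matrix per action) and the choice of solver are assumptions made only for illustration and are not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_discounted_mdp(P, r, nu, gamma=0.95):
    """Occupation-measure LP:  max sum_{s,a} r(s,a) x(s,a)
       s.t.  sum_a x(s,a) - gamma * sum_{s',a'} P(s|s',a') x(s',a') = nu(s),  x >= 0.

    P[a][s, t] : probability of moving from s to t under action a
    r[s, a]    : reward,   nu[s] : initial state distribution
    """
    n_states, n_actions = r.shape
    idx = lambda s, a: s * n_actions + a              # flatten state-action pairs

    A_eq = np.zeros((n_states, n_states * n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            A_eq[s, idx(s, a)] += 1.0
            for s_prev in range(n_states):
                A_eq[s, idx(s_prev, a)] -= gamma * P[a][s_prev, s]

    res = linprog(-r.reshape(-1), A_eq=A_eq, b_eq=nu,
                  bounds=(0, None), method="highs")
    assert res.success, res.message
    x = res.x.reshape(n_states, n_actions)
    totals = x.sum(axis=1, keepdims=True)
    policy = np.divide(x, totals, out=np.full_like(x, 1.0 / n_actions),
                       where=totals > 1e-12)          # pi(a|s) proportional to x(s,a)
    return policy, -res.fun                           # policy and optimal discounted value
```

for the average-reward program the same pattern applies with the discount removed, the right-hand side set to zero, and the additional constraint that the variables sum to one, as in the second program above.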
in an , a policy induces a probability distribution over paths in .the probability of satisfying an formula is the sum of probabilities of all paths that satisfy in the induced markov chain .[ prob ] given an and an formula , synthesize a policy that optimizes a quantitative performance measure with respect to the formula in the .we consider the probability of satisfying a temporal logic formula as one quantitative performance measure .we also consider the expected frequency of satisfying certain recurrent properties specified in an formula . by formulating a product that incorporates the underlying dynamics of a given and its temporal logic specification, it can be shown that problem [ prob ] with different quantitative performance measures can be formulated through pre - processing as special cases of discounted - reward and average - reward problems .thus , in the following , we first introduce decomposition - based distributed synthesis methods for large s with discounted - reward and average - reward criteria .then , we show the extension for solving s with quantitative temporal logic constraints .to exploit the modular structure of a given , the initial step is to decompose the state space into small subsets of states , each of which can then be related to a small problem . in this section ,we introduce some terminologies in decomposition of s from . given an , let be any partition of the state set .that is , , , when and . a set in called a _region_. the _ periphery _ of a region is a set of states _ outside _ , each of which can be reached with a non - zero probability by taking some action from a state _ in _ .formally , let . given a region , we call the _ kernel _ of .we denote the number of state - action pairs restricted to , for each .that is , is the cardinality of the set .we call the partition a _ decomposition _ of .the following property of a decomposition is exploited in distributed optimization .[ lm : kernel ] given a decomposition obtained from partition , for a state where , if there is a state and an action such that , then either or .suppose and , then it must be the case that for some and .since from state , after taking action , the probability of reaching is non - zero , we can conclude that , which implies .the implication contradicts the fact that since .hence , either or .* example : * consider the in figure [ fig : exmdp ] , which is taken from .the shaded region shows a partition of the state space .then , and .we obtain a decomposition of as , , and .it is observed that state can only be reached with non - zero probabilities by actions taken from states and . ,actions , and transition probability function as indicated.,scaledwidth=50.0% ] various methods are developed to derive a decomposition of an , for example , decompositions based on partitioning the state space of an according to the communicating classes in the induced graph ( defined in the following ) of that ( see a survey in ) . 
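the periphery of a region can be computed directly from the transition matrices. the sketch below returns the periphery and, for each region, an upper bound on the size of the corresponding block of variables; it assumes that every action is enabled in every state and takes a block to be a region together with its periphery, which is stated here only as a simplifying assumption for illustration.

```python
def periphery(P, region):
    """States outside `region` that can be reached with non-zero probability in one
    step by taking some action from a state inside `region`."""
    region = set(region)
    per = set()
    for a in range(len(P)):                      # P[a][s, t] = transition probability
        for s in region:
            for t, prob in enumerate(P[a][s]):
                if prob > 0 and t not in region:
                    per.add(t)
    return per

def subproblem_sizes(P, partition, n_actions):
    """Upper bound on the number of variables (state-action pairs) per block,
    counting each region together with its periphery."""
    return {i: len(set(S_i) | periphery(P, S_i)) * n_actions
            for i, S_i in enumerate(partition)}
```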
for the distributed synthesis method developed in this paper , it will be shown later in section [ sec : planning ] that the number of state - action pairs and the number of states in are the number of variables and the number of constraints in a sub - problem , respectively .thus , we prefer a decomposition that meets one simple desirable property : for each , the number of state - action pairs in is small in the sense that the classical linear programming algorithm can efficiently solve an with state - action pairs of this size .next , we propose a method that generates decompositions which meet the aforementioned desirable property for a subclass of s. for an in this subclass , its induced graph is _planar_. it can be shown that s derived from classical gridworld examples , which have many practical applications in robotic motion planning , are in this subclass .we start by relating an with a directed graph .the _ labeled digraph _ induced from an is a tuple where is a set of nodes , and is a set of labeled edges such that if and only if .let be the total number of nodes in the graph .a partition of states in the gives rise to a partition of nodes in the graph . given a partition and a region ,a node is said to be _ contained _ in if some edge of the region is incident to the node .a node contained in more than one regions is called a _ boundary _ node .that is , is a boundary node if and only if there exists or with .formally , the boundary nodes of are where and .we define .note that since , .we use the number of boundary nodes as an upper bound on the size of the set of states . an _ -division of an -node graph _ is a partition of nodes into subsets , each of which have nodes and boundary nodes .reference shows an algorithm that divides a planar graph of vertices into an -division in time .[ lma : decomp]given a partition of an with states obtained with a -division the induced graph , the number of states in is upper bounded by and the number of states in is upper bounded by . since each boundary node is contained in at most three regions and at least one region by the property of an -division , the total number of boundary nodes is .the number of states in is upper bounded by the size of , which is . to obtain a decomposition, the user specifies an approximately upper bound on the number of variables for all sub - problems .then , the algorithm decides whether there is an -division for some that gives rise to a decomposition that has the desirable property .although the decomposition method proposed here is applicable for a subclass of s , the distributed synthesis method developed in this paper does not constrain the way by which a decomposition is obtained . a decomposition may be given or obtained straight - forwardly by exploiting the existing modular structure of the system .even if a decomposition does not meet the desirable property for the distributed synthesis method , the proposed method still applies as long as each subproblem derived from that decomposition ( see section [ sec : admm ] ) can be solved given the limitation in memory and computational capacities .in this section , we show that under a decomposition , the original problem for a discounted - reward or average - reward case can be formulated into one with a sparse constraint matrix .then , we employ block - splitting algorithm based on in for solving the problem in a distributed manner . 
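as an illustration of the splitting idea announced above, the three generic updates recalled earlier — proximal step, projection, and dual update — can be written out for a single-block lp in standard form. this is a simplified sketch, not the block-splitting algorithm of the paper: it keeps all variables in one block and runs a fixed number of iterations instead of a stopping criterion.

```python
import numpy as np

def admm_lp(c, A, b, rho=1.0, iterations=5000):
    """Minimize c@x subject to A@x = b and x >= 0 with the generic updates:
    x-step: proximal operator of c@x + indicator(x >= 0),
    z-step: Euclidean projection onto the affine set {z : A z = b},
    u-step: scaled dual update."""
    n = len(c)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    A_pinv = np.linalg.pinv(A)                   # used for the affine projection
    for _ in range(iterations):
        x = np.maximum(z - u - c / rho, 0.0)     # prox step
        v = x + u
        z = v - A_pinv @ (A @ v - b)             # projection onto A z = b
        u = u + x - z                            # dual update
    return x                                     # x >= 0; A x = b up to the residual ||x - z||
```

for the discounted-reward program, c collects the negated rewards and (A, b) encode the balance constraints; exploiting the block-sparsity of the constraint matrix derived below is what allows these updates to be carried out in parallel over the sub-problems.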
given a decomposition of an , let be a vector consisting of variables for all with all actions enabled from .let be an index function .the constraints in can be written as : for each , and for each , , recall that , in lemma [ lm : kernel ] , we have proven that each with can only be reached with non - zero probabilities from states in and . as a result , for each state in with and each action , the constraint on variable is only related with variables in and .let .we denote the number of variables in by and the number of states in the set by .let .the problem in is then where , and where if .the transformation from and to is straightforward by rewriting the constraints and we omit the detail . for an ergodic , the constraints in the problem of maximizing the average reward , described by , can be rewritten in the way just as how is rewritten into and for the discounted - reward problem .the difference is that for the average - reward case , we let and replace with , for all , in and .an additional constraint for the average - reward case is that .hence , for an average - reward problem in an ergodic , the corresponding lp problem in is formulated as where is a row vector of ones , and the block - matrix has the same structure as of that in the discounted case with .we can compactly write the constraint in as where is a sparse constraint matrix similar in structure to the matrix in the discounted - reward case .we solve the problems in and by employing the block splitting algorithm based on in .we only present the algorithm for the discounted - reward case in .the extension to the average - reward case is straight - forward .first , we introduce new variables and let , where for a convex set , is a function defined by for , for .then , adding the term into the objective function enforces .let .the term enforces that is a non - negative vector .we rewrite the lp problem in as follows . with this formulation , we modify the block splitting algorithm in to solve in a parallel and distributed manner ( see the appendix for the details ) .the algorithm takes parameters , and : is a penalty parameter to ensure the constraints are satisfied , is a relative tolerance and is an absolute tolerance .the choice of and depends on the scale of variable values . in synthesis of s , and be chosen in the range of to .the algorithm is ensured to converge with any choice of and the value of may affect the convergence rate .we now extend the distributed control synthesis methods for s with discounted - reward and average - reward criteria to solve problem [ prob ] in which quantitative temporal logic constraints are enforced .* preliminaries * given an formula as the system specification , one can always represent it by a where is a finite state set , is the alphabet , is the initial state , and the transition function .the acceptance condition is a set of tuples .the _ run _ for an infinite word is the infinite sequence of states where and for .a run is accepted in if there exists at least one pair such that and where is the set of states that appear infinitely often in .given an augmented with a set of atomic propositions and a labeling function , one can compute the product with the components defined as follows : is the set of states . is the set of actions .the initial probability distribution of states is $ ] such that given with , it is that . is the transition probability function . given , , and , let .the rabin acceptance condition is . 
by construction , a path satisfies the formula if and only if there exists , and . to maximize the probability of satisfying ,the first step is to compute the set of _ end components _ in , each of which is a pair where is non - empty and is a function such that for any , for any , and the induced directed graph is strongly connected . here, is an edge in the graph if there exists , .an end component is _ accepting _ if and for some .let the set of s in be and the set of _ accepting end states _be .once we enter some state , we can find an such that , and initiate the policy such that for some , states in will be visited a finite number of times and some state in will be visited infinitely often .* formulating the problem * an optimal policy that maximizes the probability of satisfying the specification also maximizes the probability of hitting the set of accepting end states .reference develops gpu - based parallel algorithms which significantly speed up the computation of end components for large s. after computing the set of s , we formulate the following problem to compute the optimal policy using the proposed decomposition and distributed synthesis method for discounted - reward cases . given a product and the set of accepting end states , the modified product is where is the set of states obtained by grouping states in as a single state . for all , and .the initial distribution of states is defined as follows : for , , and . the reward function is defined such that for all that is not , where is the indicator function that outputs if and only if and otherwise . for any action , . by definition of reward function ,the discounted reward with from state in the modified product is the probability of reaching a state in from under policy in the product .hence , with a decomposition of , the proposed distributed synthesis method for discounted - reward problems can be used to compute the policy that maximizes the probability of satisfying a given specification .* preliminaries * consider a temporal logic formula that can be expressed as a where are defined similar to a and is a set of _ accepting states_. a run is accepted in if and only if .given an and a , the _ product with bchi objective _ is where components are obtained similarly as in the product with rabin objective .the difference is that is the set of accepting states .a path satisfies the formula if and only if . * formulating the problem * for a product with bchi objective , we aim to synthesize a policy that maximizes the expected frequency of visiting an accepting state in the product .this type of objectives ensures some recurrent properties in the temporal logic formula are satisfied as frequently as possible .for example , one such objective can be requiring a mobile robot to maximize the frequency of visiting some critical regions .this type of objectives can be formulated as an average - reward problem in the following way : let the reward function be defined by . 
by definition of the reward function , the optimal policy with respect to the average - reward criterion is the one that maximizes the frequency of visiting a state in .if the product is ergodic , we can then solve the resulting average - reward problem by the distributed optimization algorithm with a decomposition of product .we demonstrate the method with three robot motion planning examples .all the experiments were run on a machine with intel xeon 4 ghz , 8-core cpu and 64 gb ram running linux .the distributed optimization algorithm is implemented in matlab .the decomposition and other operations are implemented in python .0.15 gridworld .the dash arrow represents that if the robot takes action ` n ' , there are non - zero probabilities for it to arrive at nw , n , and ne cells .( b ) a gridworld .a natural partition of state space using the walls gives rise to subsets of states .states in are enclosed using the squares ., title="fig : " ] 0.32 [ fig : gridworld2 ] figure [ fig : singlestep ] shows a fraction of a gridworld .a robot moves in this gridworld with uncertainty in different terrains ( ` grass ' , ` sand ' , ` gravel ' and ` pavement ' ) . in each terrain and for robot s different action ( heading north ( ` n ' ) , south ( ` s ' ) , west ( ` w ' ) and east ( ` e ' ) ) , the probability of arriving at the correct cell is for pavement , for grass , for gravel and for sand . with a relatively small probability , the robot will arrive at the cell adjacent to the intended one .figure [ fig : decompgw ] displays a gridworld. the grey area and the boundary are walls . if the robot runs into the wall , it will be bounce back to its original cell .the walls give rise to a natural partition of the state space , as demonstrated in this figure .if no explicit modular structure in the system can be found , one can compute a decomposition using the method in section [ sec : decompmethod ] . in the following example, the wall pattern is the same as in the gridworld .we select a subset of cells as `` restricted area '' and a subset of cells as `` targets '' .the reward function is given : for , , counts for the amount of time the robot takes action .for , for all , . for , for all . intuitively , this reward function will encourage the robot to reach the target with as fewer expected number of steps as possible , while avoiding running into a cell in the restricted area .we select .* case 1 : * to show the convergence and correctness of the distributed optimization algorithm , we first consider a gridworld example that can be solved directly with a centralized algorithm . since at each cell there are four actions for the robot to pick , the total number of variables is for the gridworld ( the wall cells are excluded from the set of states ) . in this gridworld , there is only target cell . 
the restricted area include cells .the resulting problem can be solved using cvx , a package for specifying and solving convex programs .the problem is solved in seconds , and the optimal objective value under the optimal policy given by cvx is .next , we solve the same problem by decomposing the state space of the along the walls into regions , each of which is a gridworld .this partition of state space yields states for each and states for .in which follows , we select to show the convergence of the distributed optimization algorithm .irrespective of the choices for , the average time for each iteration is about sec .the solution accuracy relative to cvx is summarized in table [ tbl : summary ] .the ` rel .error in objval ' is the relative error in objective value attained , treating the cvx solution as the accurate one , and the infeasibility is the relative primal infeasibility of the solution , measured by figure [ fig : reobjval100 ] shows the convergence of the optimization algorithm . [cols="^,^,^,^,^,^,^ " , ] [ tbl : summary ] gridworld with discounted reward , under . for clarity, we did not draw the relative error for the initial steps , which are comparatively large.,scaledwidth=45.0% ] * case 2 : * for a gridworld , the centralized method in cvx fails to produce a solution for this large - scale problem .thus , we consider to solve it using the decomposition and distributed synthesis method . in this example , we partition the gridworld such that each region has cells , which results in regions .there are states in and about states in each , for . in this example, we randomly select cells to be the targets and cells to be the restricted areas . by choosing , , ,the optimal policy is solved within seconds and it takes about seconds for one iteration .the total number of iterations is . under the ( approximately)-optimal policy obtained by distributed optimization, the objective value is .the relative primal infeasibility of the solution is .figure [ fig : objval1000 ] shows the convergence of distributed optimization algorithm. 0.45 gridworld ( the initial steps are omitted ) .( b ) objective value vesus iterations in gridworld with a bchi objective .here we only show the first iterations as the objective value converges to the optimal one after steps ., title="fig:",scaledwidth=100.0% ] 0.45 gridworld ( the initial steps are omitted ) .( b ) objective value vesus iterations in gridworld with a bchi objective . herewe only show the first iterations as the objective value converges to the optimal one after steps ., title="fig:",scaledwidth=100.0% ] we consider a gridworld with no obstacles and critical regions labeled `` '' , `` '' , `` '' and `` '' .the system is given a temporal logic specification .that is , the robot has to always eventually visit region and then , and also always eventually visit region and then .the number of states in the corresponding is after trimming the unreachable states , due to the fact that the robot can not be at two cells simultaneously .the quantitative objective is to maximize the frequency of visiting all four regions ( an accepting state in the ) .the formulated is ergodic and therefore our method for average - reward problems applies . for an average - reward case, we need to satisfy the constraint in .this constraint leads to slow convergence and policies with large infeasibility measures in distributed optimization . 
to handle this issue, we approximate average reward with discounted reward : for ergodic s , the discounted accumulated reward , scaled by , is approximately the average reward .further , if is large compared to the mixing time of the markov chain , then the policy that optimizes the discounted accumulated reward with the discounting factor can achieve an approximately optimal average reward .given , and , , the distributed synthesis algorithm terminates in iteration steps and the optimal discounted reward is .scaling by , we obtain the average reward , which is the approximately optimal value for this average reward under the obtained policy .the convergence result is shown in figure [ fig : reobjval50 ] and the infeasibility measure of the obtained solution is .for solving large markov decision process models of stochastic systems with temporal logic specifications , we developed a decomposition algorithm and a distributed synthesis method .this decomposition exploits the modularity in the system structure and deals with sub - problems of smaller sizes .we employed the block splitting algorithm in distributed optimization based on the alternating direction method of multipliers to cope with the difficulty of combining the solutions of sub - problems into a solution to the original problem .moreover , the formal decomposition - based distributed control synthesis framework established in this paper facilitates the application of other distributed and parallel large - scale optimization algorithms to further improve the rate of convergence and the feasibility of solutions for control synthesis in large s. in the future , we will develop an interface to prism toolbox with an implementation of the proposed decomposition and distributed synthesis algorithms .at the -th iteration , for , where denotes the projection to the nonnegative orthant , denotes projection onto . is the elementwise averaging ; , , in the elementwise averaging , these will not be included . ] and is the exchange operator , defined as below . is given by and .the variables can be initialized to at .note that the computation in each iteration can be parallelized .the iteration terminates when the stopping criterion for the block splitting algorithm is met ( see for more details ) .the solution can be obtained .s. thibaux , c. gretton , j. k. slaney , d. price , f. kabanza , and others , `` decision - theoretic planning with non - markovian rewards . '' _ journal of artificial intelligence research _ , vol . 25 , pp .1774 , 2006 .t. dean and s .- h .lin , `` decomposition techniques for planning in stochastic domains , '' in _ proceedings of the 14th international joint conference on artificial intelligence - volume 2_.1em plus 0.5em minus 0.4em morgan kaufmann publishers inc ., 1995 , pp . 11211127 .m. kwiatkowska , g. norman , and d. parker , `` prism 4.0 : verification of probabilistic real - time systems , '' in _ proceedings of international conference on computer aided verification _ , ser .lncs , g. gopalakrishnan and s. qadeer , eds .6806.1em plus 0.5em minus 0.4emspringer , 2011 , pp. 585591 .x. c. ding , s. l. smith , c. belta , and d. rus , `` mdp optimal control under temporal logic constraints , '' in _ ieee conference on decision and control and european control conference _ , 2011 ,. 532538 .m. lahijanian , s. andersson , and c. belta , `` temporal logic motion planning and control with probabilistic satisfaction guarantees , '' _ ieee transactions on robotics _28 , no . 2 ,pp . 396409 , april 2012 .s. 
boyd , n. parikh , e. chu , b. peleato , and j. eckstein , `` distributed optimization and statistical learning via the alternating direction method of multipliers , '' _ foundations and trends in machine learning _ ,vol . 3 , no . 1 , pp . 1122 , 2011 .t. brzdil , v. brozek , k. chatterjee , v. forejt , and a. kucera , `` two views on multiple mean - payoff objectives in markov decision processes , '' in _ annual ieee symposium on logic in computer science _ , 2011 ,. 3342 .c. baier , m. gr er , m. leucker , b. bollig , and f. ciesinski , `` controller synthesis for probabilistic systems ( extended abstract ) , '' in _ exploring new frontiers of theoretical informatics _ , ser .ifip international federation for information processing , j .- j .levy , e. mayr , and j. mitchell , eds.1em plus 0.5em minus 0.4emspringer us , 2004 , vol . 155 , pp . 493506 .a. wijs , j .-katoen , and d. bonaki , `` , '' in _ _ , ser .lecture notes in computer science , a. biere and r. bloem , eds.1em plus 0.5em minus 0.4emspringer international publishing , 2014 , vol . 8559 , pp .310326 .g. scutari , f. facchinei , l. lampariello , and p. song , `` parallel and distributed methods for nonconvex optimization , '' in _ ieee international conference on acoustics , speech and signal processing _ , may 2014 , pp . 840844 .
|
optimal control synthesis in stochastic systems subject to quantitative temporal logic constraints can be formulated as a linear programming problem. however, centralized synthesis algorithms do not scale to many practical systems. to tackle this issue, we propose a decomposition-based distributed synthesis algorithm. by decomposing a large-scale stochastic system modeled as a markov decision process into a collection of interacting sub-systems, the original control problem is reformulated as a linear program with a sparse constraint matrix, which can be solved through distributed optimization methods. additionally, we propose a decomposition algorithm which automatically exploits the modular structure of a given large-scale system, if such structure exists. we illustrate the proposed methods through robotic motion planning examples.
|
the theory of complex networks has flourished thanks to the availability of new datasets on large complex systems , such as the internet or the interaction networks inside the cell . in the last ten years attentionhas been focusing mainly on static or growing complex networks , with little emphasis on the rewiring of the links .the topology of these networks and their modular structure are able to affect the dynamics taking place on them .only recently temporal networks , dominated by the dynamics of rewirings , are starting to attract the attention of quantitative scientists working on complexity .one of the most beautiful examples of temporal networks are social interaction networks . indeed, social networks are intrinsically dynamical and social interactions are continuously formed and dissolved .recently we are gaining new insights into the structure and dynamics of these temporal social networks , thanks to the availability of a new generation of datasets recording the social interactions of the fast time scale .in fact , on one side we have data on face - to - face interactions coming from mobile user devices technology , or radio - frequency - identification - devices , on the other side , we have extensive datasets on mobile - phone calls and agent mobility .this new generation of data has changed drastically the way we look at social networks .in fact , the adaptability of social networks is well known and several models have been suggested for the dynamical formation of social ties and the emergence of connected societies .nevertheless , the strength and nature of a social tie remained difficult to quantify for several years despite the careful sociological description by granovetter . only recently , with the availability of data on social interactions and their dynamics on the fast time scale , it has become possible to assign to each acquaintance the strength or weight of the social interaction quantified as the total amount of time spent together by two agents in a given time window .the recent data revolution in social sciences is not restricted to data on social interaction but concerns all human activities , from financial transaction to mobility . from these new data on human dynamics evidenceis emerging that human activity is bursty and is not described by poisson processes .indeed , a universal pattern of bursty activities was observed in human dynamics such as broker activity , library loans or email correspondence .social interactions are not an exception , and there is evidence that face - to - face interactions have a distribution of duration well approximated by a power - law while they remain modulated by circadian rhythms .the bursty activity of social networks has a significant impact on dynamical processes defined on networks .here we compare these observations with data coming from a large dataset of mobile - phone communication and show that human social interactions , when mediated by a technology , such as the mobile - phone communication , demonstrate the adaptability of human behavior .indeed , the distribution of duration of calls does not follow any more a power - law distribution but has a characteristic scale determined by the weights of the links , and is described by a weibull distribution . 
at the same time , however , this distribution remains bursty and strongly deviates from a poisson distribution .we will show that both the power - law distribution of durations of social interactions and the weibull distribution of durations and social interactions observed respectively in face - to - face interaction datasets and in mobile - phone communication activity can be explained phenomenologically by a model with a reinforcement dynamics responsible for the deviation from a pure poisson process . in this model ,the longer two agents interact , the smaller is the probability that they split apart , and the longer an agent is non interacting , the less likely it is that he / she will start a new social interaction .we observe here that this framework is also necessary to explain the group formation in simple animals .this suggests that the reinforcement dynamics of social interactions , much like the hebbian dynamics , might have a neurobiological foundation .furthermore , this is supported by the results on the bursty mobility of rodents and on the recurrence patterns of words encountered in online conversations . we have therefore found ways to quantify the adaptability of human behavior to different technologies .we observe here that this change of behavior corresponds to the very fast time dynamics of social interactions and it is not related to macroscopic change of personality consistently with the results of on online social networks .moreover , temporal social networks encode information in their structure and dynamics .this information is necessary for efficiently navigating the network , and to build collaboration networks that are able to enhance the performance of a society .recently , several authors have focused on measures of entropy and information for networks .the entropy of network ensembles is able to quantify the information encoded in a structural feature of networks such as the degree sequence , the community structure , and the physical embedding of the network in a geometric space .the entropy rate of a dynamical process on the networks , such a biased random walk , are also able to characterize the interplay between structure of the networks and the dynamics occurring on them .finally , the mutual information for the data of email correspondence was shown to be fruitful in characterizing the community structure of the networks and the entropy of human mobility was able to set the limit of predictability of human movements . 
herewe will characterize the entropy of temporal social networks as a proxy to characterize the predictability of the dynamical nature of social interaction networks .this entropy will quantify how many typical configurations of social interactions we expect at any given time , given the history of the network dynamical process .we will evaluate this entropy on a typical day of mobile - phone communication directly from data showing modulation of the dynamical entropy during the circadian rhythm .moreover we will show that when the distribution of duration of contacts changes from a power - law distribution to a weibull distribution the level of information and the value of the dynamical entropy significantly change indicating that human adaptability to new technology is a further way to modulate the information content of dynamical social networks .human social dynamics is bursty , and the distribution of inter - event times follows a universal trend showing power - law tails .this is true for e - mail correspondence events , library loans , and broker activity .social interactions are not an exception to this rule , and the distribution of inter - event time between face - to - face social interactions has power - law tails .interestingly enough , social interactions have an additional ingredient with respect to other human activities . while sending an email can be considered an instantaneous event characterized by the instant in which the email is sent , social interactions have an intrinsic duration which is a proxy of the strength of a social tie .in fact , social interactions are the microscopic structure of social ties and a tie can be quantified as the total time two agents interact in a given time - window .new data on the fast time scale of social interactions have been now gathered with different methods which range from bluetooth sensors , to the new generation of radio - frequency - identification - devices . in all these datathere is evidence that face - to - face interactions have a duration that follows a distribution with a power - law tail .moreover , there is also evidence that the inter - contact times have a distribution with fat tails . in this chapterwe report a figure of ref . ( fig .[ barrat ] of this chapter ) in which the duration of contact in radio - frequency - device experiments conducted by sociopatterns experiments is clearly fat tailed and well approximated by a power - law ( straight line on the log - log plot ) . in this figurethe authors of ref . report the distribution of the duration of binary interactions and the distribution of duration of a the triangle of interacting agents .moreover they report data for the distribution of inter - event time .how do these distributions change when human agents are interfaced with a new technology ?this is a major question that arises if we want to characterize the universality of these distributions . in this book chapterwe report an analysis of mobile - phone data and we show evidence of human adaptability to a new technology .we have analysed the call sequence of subscribers of a major european mobile service provider . 
in the datasetthe users were anonymized and impossible to track .we considered calls between users who called each other mutually at least once during the examined period of months in order to examine calls only reflecting trusted social interactions .the resulted event list consists of calls between users .we have performed measurements for the distribution of call durations and non - interaction times of all the users for the entire 6 months time period .the distribution of phone call durations strongly deviates from a fat - tail distribution . in fig .[ interaction ] we report these distributions and show that they depend on the strength of the interactions ( total duration of contacts in the observed period ) but do not depend on the age , gender or type of contract in a significant way .the distribution of duration of contacts within agents with strength is well fitted by a weibull distribution with .the typical times of interactions between users depend on the weight of the social tie .in particular the values used for the data collapse of figure 3 are listed in table [ tauw ] .these values are broadly distributed , and there is evidence that such heterogeneity might depend on the geographical distance between the users .the weibull distribution strongly deviates from a power - law distribution to the extent that it is characterized by a typical time scale , while power - law distribution does not have an associated characteristic scale .the origin of this significant change in the behavior of humans interactions could be due to the consideration of the cost of the interactions although we are not in the position to draw these conclusions ( see fig . [ pay ] in which we compare distribution of duration of calls for people with different type of contract ) or might depend on the different nature of the communication .the duration of a phone call is quite short and is not affected significantly by the circadian rhythms of the population . on the contrary , the duration of no - interaction periods is strongly affected by the periodic daily of weekly rhythms . in fig .[ non - interaction ] we report the distribution of duration of no - interaction periods in the day periods between 7 am and 2 am next day .the typical times used in figure 5 are listed in table [ tauk ] .the distribution of non - interacting times is difficult to fit due to the noise derived by the dependence on circadian rhythms . in any casethe non - interacting time distribution if it is clearly fat tail ..typical times used in the data collapse of fig .[ interaction ] . [ cols= " < , < " , ]it has been recognized that human dynamics is not poissonian .several models have been proposed for explaining a fundamental case study of this dynamics , the data on email correspondence .the two possible explanations of bursty email correspondence are described in the following . * a queueing model of tasks with different prioritieshas been suggested to explain bursty interevent time .this model implies rational decision making and correlated activity patterns .this model gives rise to power - law distribution of inter event times .* a convolution of poisson processes due to different activities during the circadian rhythms and weekly cycles have been suggested to explain bursty inter event time .these different and multiple poisson processes are introducing a set of distinct characteristic time scales on human dynamics giving rise to fat tails of interevent times . 
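Returning briefly to the call-duration analysis above, the following sketch fits both a Weibull and an exponential distribution to a sample of durations. The mobile-phone dataset used in the chapter is not public, so the durations here are synthetic; the shape and scale values, and the use of scipy for the maximum-likelihood fits, are illustrative assumptions rather than the fitted values of the study.

```python
"""Sketch: Weibull vs. exponential fit to (synthetic) call durations."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Synthetic call durations (seconds) for one strength class w: drawn from a
# Weibull with an assumed shape < 1, i.e. bursty but with a characteristic
# scale tau_w, as described in the text for mobile-phone calls.
shape_true, tau_w = 0.5, 120.0
durations = tau_w * rng.weibull(shape_true, size=20_000)

# Maximum-likelihood fits with the location pinned at zero.
c_hat, _, scale_hat = stats.weibull_min.fit(durations, floc=0)
_, expon_scale = stats.expon.fit(durations, floc=0)

ll_weibull = stats.weibull_min.logpdf(durations, c_hat, 0, scale_hat).sum()
ll_expon = stats.expon.logpdf(durations, 0, expon_scale).sum()

print(f"Weibull fit: shape={c_hat:.2f}, scale={scale_hat:.1f} s")
print(f"log-likelihood  Weibull={ll_weibull:.0f}  exponential={ll_expon:.0f}")
```

A data collapse of the kind shown in the text would repeat such a fit for each strength class and rescale the durations by the corresponding typical time before comparing the rescaled distributions.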
in the previous sectionwe have showed evidence that the duration of social interactions is generally non poissonian .indeed , both the power - law distribution observed for duration of face - to - face interactions and the weibull distribution observed for duration of mobile - phone communication strongly deviate from an exponential .the same can be stated for the distribution of duration of non - interaction times , which strongly deviates from an exponential distribution both for face - to - face interactions and for mobile - phone communication . in order to explain the data on duration of contacts we can not use any of the models proposed for bursty interevent time in email correspondence . in fact , on one side it is unlikely that the decision to continue a conversation depends on rational decision making .moreover the queueing model can not explain the observed stretched exponential distribution of duration of calls . on the other side , the duration of contacts it is not effected by circadian rhythms and weekly cycles which are responsible for bursty behavior in the model .this implies that a new theoretical framework is needed to explain social interaction data .therefore , in order to model the temporal social networks we have to abandon the generally considered assumption that social interactions are generated by a poisson process . in this assumptionthe probability for two agents to start an interaction or to end an interaction is constant in time and not affected by the duration of the social interaction . instead , to build a model for human social interactions we have to consider a reinforcement dynamics , in which the probability to start an interaction depends on how long an individual has been non - interacting , and the probability to end an interaction depends on the duration of the interaction itself .generally , to model the human social interactions , we can consider an agent - based system consisting of agents that can dynamically interact with each other and give rise to interacting agent groups . in the following subsections we give more details on the dynamics of the models .we denote by the state of the agent , the number of agents in his / her group ( including itself ) .in particular we notice here that a state for an agent , denotes the fact that the agent is non - interacting . a reinforcement dynamics for such systemis defined in the following frame .+ the longer an agent is interacting in a group the smaller is the probability that he / she will leave the group . + the longer an agent is non - interacting the smaller is the probability that he / she will form or join a new group .+ the probability that an agent change his / her state ( value of ) is given by where , is the total number of agents in the model and is the last time the agent has changed his / her state , and is a parameter of the model .the reinforcement mechanism is satisfied by any function that is decreasing with but social - interaction data currently available are reproduced only for this particular choice .the function only depends on the actual time in which the decision is made .this function is able to modulate the activity during the day and throughout the weekly rhythms . for the modelling of the interaction data we will first assume that the function is a constant in time . 
moreover in the following subsectionswe will show that in order to obtain power - law distribution of duration of contacts and non - interaction times ( as it is observed in face - to - face interaction data ) we have to take while in order to obtain weibull distribution of duration of contacts we have to take .therefore , summarizing here the results of the following two sections , we can conclude with the following statement for the adaptability of human social interactions + the adaptability of human social interactions to technology can be seen as an effective way to modulate the parameter in eq . parametrizing the probability to start or to end the social interactions . herewe recall the model of face - to - face interactions presented in and we delineate the main characteristics and outcomes .a simple stochastic dynamics is imposed to the agent - based system in order to model face - to - face interactions .starting from given initial conditions , the dynamics of face - to - face interactions at each time step is implemented as the following algorithm . * an agent is chosen randomly . *the agent updates his / her state with probability .+ if the state is updated , the subsequent action of the agent proceeds with the following rules .* * if the agent is non - interacting ( ) , he / she starts an interaction with another non - interacting agent chosen with probability proportional to .therefore the coordination number of the agent and of the agent are updated ( and ) . * * if the agent is interacting in a group ( ) , with probability the agent leaves the group and with probability he / she introduces an non - interacting agent to the group .if the agent leaves the group , his / her coordination number is updated ( ) and also the coordination numbers of all the agents in the original group are updated ( , where represent a generic agent in the original group ) . on the contrary , if the agent introduces another isolated agent to the group , the agent is chosen with probability proportional to and the coordination numbers of all the interacting agents are updated ( , and where represents a generic agent in the group ) .* time is updated as ( initially ) .the algorithm is repeated from ( 1 ) until .we have taken in the reinforcement dynamics with parameter such that in eq .,for simplicity , we take for every , indicating the fact the interacting agents change their state independently on the coordination number .we note that in this model we assume that everybody can interact with everybody so that the underline network model is fully connected .this seems to be a very reasonable assumption if we want to model face - to - face interactions in small conferences , which are venues designed to stimulate interactions between the participants .nevertheless the model can be easily modified by embedding the agents in a social network so that interactions occur only between social acquaintances . in the following we review the mean - field solution to this model . for the detailed description of the solution of the outline non - equilibrium dynamicsthe interest reader can see .we denote by the number of agents interacting with agents at time , who have not changed state since time . 
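Before reviewing the mean-field solution, a minimal implementation of the algorithm listed above may help fix ideas. Everything below is a sketch: the kernel f(tau) = b/(1+tau)**beta, the parameter values b, lambda and beta, the rule that a partner is picked with weight f(t - t_j), and the bookkeeping of contact durations are assumptions standing in for the exact choices of the text (which are not restated here), and the modulating daily-rhythm function is taken constant, as the text itself does for this case.

```python
"""Minimal agent-based sketch of the face-to-face reinforcement model."""
import random

N = 200                         # number of agents (fully connected substrate)
T_MAX = 50_000                  # elementary update steps
B, LAM, BETA = 0.6, 0.8, 1.0    # assumed parameters, not the text's exact values

def f(tau):
    """Assumed reinforcement kernel: decreasing in the time spent in the current state."""
    return B / (1.0 + tau) ** BETA

groups = {i: {i} for i in range(N)}   # groups[i]: agents currently interacting with i (incl. i)
t_last = [0] * N                      # last time each agent changed its state
pair_durations = []                   # recorded durations of binary contacts

def reset(agents, t):
    for a in agents:
        t_last[a] = t

random.seed(2)
for t in range(1, T_MAX + 1):
    i = random.randrange(N)
    if random.random() >= f(t - t_last[i]):
        continue                                    # agent i keeps its state
    g = groups[i]
    if len(g) == 1:
        # isolated agent: start a pair with another isolated agent, picked with a
        # weight proportional to f(t - t_j) (assumed reading of the stripped rule)
        iso = [j for j in range(N) if j != i and len(groups[j]) == 1]
        if not iso:
            continue
        j = random.choices(iso, weights=[f(t - t_last[j]) for j in iso])[0]
        new_group = {i, j}
        groups[i] = groups[j] = new_group
        reset(new_group, t)
    elif random.random() < LAM:
        # leave the current group; all affected agents change state
        if len(g) == 2:
            pair_durations.append(t - t_last[i])
        g.discard(i)
        groups[i] = {i}
        reset(g | {i}, t)
    else:
        # bring an isolated agent into the group
        iso = [j for j in range(N) if len(groups[j]) == 1]
        if not iso:
            continue
        j = random.choices(iso, weights=[f(t - t_last[j]) for j in iso])[0]
        if len(g) == 2:
            pair_durations.append(t - t_last[i])    # the binary contact ends by growing
        g.add(j)
        groups[j] = g
        reset(g, t)

print(f"{len(pair_durations)} binary contacts recorded, "
      f"mean duration {sum(pair_durations) / max(len(pair_durations), 1):.1f} steps")
```

Collecting the recorded durations over long runs is one way to check the fat-tailed statistics discussed above; the mean-field treatment of the quantities just introduced resumes below.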
in the mean field approximation ,the evolution equations for are given by & = & -2f_1(t - t)-(1-)(t ) f_1(t - t)+_i > 1_i,2(t)_tt + & = & -2f_2(t - t)+[_1,2(t)+_3,2(t)]_tt + & = & -n f_n(t - t ) + [ _ n-1,n(t)+_n+1,n(t)+_1,n(t)]_tt , n > 2 .[ dnib ] in these equations , the parameter indicates the rate at which isolated nodes are introduced by another agent in already existing groups of interacting agents .moreover , indicates the transition rate at which agents change its state from to ( i.e. ) at time . in the mean - field approximation the value of be expressed in terms of as ( t)=. [ epsilon ] assuming that asymptotically in time converges to a time independent variable , i.e. , the solution to the rate equations ( [ dnib ] ) in the large time limit is given by [ nib ] n_1(t , t)&=&n_1(t,t)(1+)^-b_1[2+(1- ) ] + n_2(t , t)&=&n_2(t,t)(1+)^-2b_2 + n_n(t , t)&=&n_n(t,t)(1+)^-nb_2 n > 2,withn_1(t,t)&=&_n > 1_n,1(t ) + n_2(t,t)&=&_1,2(t)+_3,2(t ) + n_n(t,t)&=&_n-1,n(t)+_n+1,n(t)+_0,n(t ) n > 2.[pig ] .we denote by the distribution of duration of different coordination number which satisfies the relation p_n()=_t=0^t - f_n(t - t)n(t , t)dt. and using eq.([p ] ) and eqs.([nib ] ) we find that simply satisfy p_1 ( ) & & ( 1+)^-b_1[2+(1-)]-1 + p_n ( ) & & ( 1+)^-nb_2 - 1 .[ pn ] as shown in fig.[groups_stationary ] , the analytic prediction eqs.([pn ] ) is in good agreement with the computer simulation .{phase1 } & \includegraphics[width=30 mm , height=30mm]{phase2 } & \includegraphics[width=30 mm , height=30mm]{phase3 } \end{array}$ ] despite the simplicity of this model , the non - equilibrium dynamics of this system is characterized by a non trivial phase diagram .the phase - diagram of the model is summarized in fig.[fig_phase ] .we can distinguish between three phases : * _ region i - the stationary region : , and - _ this region corresponds to the white area in fig.[fig_phase ] .the region is stationary and the transition rates between different states are constant . *_ region ii - the non - stationary region : or , and -_this region corresponds to the blue area in fig.[fig_phase ] .the region is non - stationary and the transition rates between different states are decaying with time as power - law .* _ region iii - formation of a big group : -_in this region there is an instability for the formation of a large group of size . in both regionsi and region ii the distribution of the duration of groups of size follows a power - law distribution with an exponent which grows with the group size .this fact is well reproduced in the face - to - face data and implies the following principle on the stability of groups in face - to - face interactions . in face - to - face interactions , groups of larger sizeare less stable than groups of smaller size .in fact the stability of a group depends on the independent decisions of the agents in the group to remain in contact .to model cell - phone communication , we consider once again a system of agents representing the mobile phone users .moreover , we introduce a static weighted network , of which the nodes are the agents in the system , the edges represent the social ties between the agents , such as friendships , collaborations or acquaintances , and the weights of the edges indicate the strengths of the social ties .therefore the interactions between agents can only take place along the network ( an agent can only interact with his / her neighbors on the network ) . 
herewe propose a model for mobile - phone communication constructed with the use of the reinforcement dynamic mechanism .this model shares significant similarities with the previously discussed model for face - to - face interactions , but has two major differences .firstly , only pairwise interactions are allowed in the case of cell - phone communication .therefore , the state of an agent only takes the values of either ( non - interacting ) or ( interacting ) .secondly , the probability that an agent ends his / her interaction depends on the weight of network .the dynamics of cell - phone communication at each time step is then implemented as the following algorithm . * an agent is chosen randomly at time .* the subsequent action of agent depends on his / her current state ( i.e. ) : * * if , he / she starts an interaction with one of his / her non - interacting neighbors of with probability where denotes the last time at which agent has changed his / her state .if the interaction is started , agent is chosen randomly with probability proportional to and the coordination numbers of agent and are then updated ( and ) . * * if , he / she ends his / her current interaction with probability where is the weight of the edge between and the neighbor that is interacting with . if the interaction is ended , the coordination numbers of agent and are then updated ( and ) .* time is updated as ( initially ) .the algorithm is repeated from ( 1 ) until . herewe take the probabilities according to the following functional dependence f_1(t ,t)&=&f_1()= + f_2(t , t|w)&=&f_2(|w)= [ f2 t ] where the parameters are chosen in the range , , , is a positive decreasing function of its argument , and is given by . in order to solve the model analytically, we assume the quenched network to be annealed and uncorrelated . herewe outline the main result of this approach and we suggest for the interested reader to look at papers for the details of the calculations . therefore we assume that the network is rewired while the degree distribution and the weight distribution remain constant .we denote by the number of non - interacting agents with degree at time who have not changed their state since time .similarly we denote by the number of interacting agent pairs ( with degree respectively and and weight of the edge ) at time who have not changed their states since time . in the annealed approximation the probability that an agent with degree is called by another agent is proportional to its degree .therefore the evolution equations of the model are given by & = & -f_1(t - t)-ckf_1(t - t)+_21^k(t)_tt + & = & -2f_2(t - t|w)+_12^k , k,w(t)_tt [ dn1w ] where the constant is given by c=. [ c_sum ] in eqs . the rates indicate the average number of agents changing from state to state at time .the solution of the dynamics must of course satisfy the conservation equation dt = np(k ) .[ n_conserve ] in the following we will denote by the probability distribution that an agent with degree is non - interacting in the period between time and time and we will denote by the probability that an interaction of weight is lasting from time to time which satisfy p_1^k(t , t)&=&(1+ck)f_1(t , t)n_1^k(t , t ) + p_2^w(t , t)&=&2f_2(t , t|w)_k , kn_2^k , k,w(t , t ) . [ p12 ] as a function of the value of the parameter of the model we found different distribution of duration of contacts and non - interaction times. 
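The functional forms of the probabilities in the display above did not survive extraction. A common reinforcement-style assumption, used here only for illustration, is a termination probability per step of the form b/(1+tau)**beta, where tau is the time already spent in the current state; the sketch below samples state durations under this rule and summarizes their tails for the three regimes of beta discussed in the cases that follow.

```python
"""Sketch: how the exponent beta of an assumed reinforcement rule shapes durations."""
import numpy as np

rng = np.random.default_rng(0)

def sample_durations(beta, b=0.8, n=20_000, cutoff=100_000):
    """Draw n state durations; the state ends at step tau with probability b/(1+tau)**beta."""
    out = np.empty(n, dtype=np.int64)
    for k in range(n):
        tau = 1
        while tau < cutoff and rng.random() >= b / (1.0 + tau) ** beta:
            tau += 1
        out[k] = tau
    return out

for beta in (0.0, 0.5, 1.0):
    d = sample_durations(beta)
    tail = "  ".join(f"P(tau>={m})={(d >= m).mean():.2e}" for m in (10, 100, 1000))
    print(f"beta={beta:3.1f}  mean={d.mean():7.1f}  {tail}")
```

With beta = 0 the sampled tail is exponential, with 0 < beta < 1 it follows a stretched-exponential (Weibull) form, and with beta = 1 it is a power law, consistent with the three cases listed next.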
* _ case _ the system allows always for a stationary solution with and .the distribution of duration of non - interaction times for agents of degree in the network and the distribution of interaction times for links of weight is given by + p_1^k()&&e^-(1+)^1- + p_2^w()&&e^-(1+)^1- .[ p2kt1 ] rescaling eqs.([p2kt1 ] ) , we obtain the weibull distribution which is in good agreement with the results observed in mobile - phone datasets . *_ case _ another interesting limiting case of the mobile - phone communication model is the case such that and . in this casethe model is much similar to the model used to mimic face - to - face interactions described in the previous subsection , but the interactions are binary and they occur on a weighted network . in this casewe get the solution n_1^k ( ) & = & n_21^k(1+)^-b_1(1+ck ) + n_2^k , k,w ( ) & = & n_12^k , k,w(1+)^-2b_2g(w ) . and consequently the distributions of duration of given states eqs . are given by p_1^k ( ) & & _21^k(1+)^-b_1(1+ck)-1 + p_2^w ( ) & & _ 12^k , k,w(1+)^-2b_2g(w)-1 .the probability distributions are power-laws.this result remains valid for every value of the parameters nevertheless the stationary condition is only valid for b_1(1+ck)>1 + 2b_2g(w)>1 .indeed this condition ensures that the self - consistent constraints eqs .( [ c_sum ] ) , and the conservation law eq .( [ n_conserve ] ) have a stationary solution . *_ case _ this is the case in which the process described by the model is a poisson process and their is no reinforcement dynamics in the system .therefore we find that the distribution of durations are exponentially distributed .in fact for the functions and given by eqs. reduce to constants , therefore the process of creation of an interaction is a poisson process . in this casethe social interactions do not follow the reinforcement dynamics .the solution that we get for the number of non interacting agents of degree , and the number of interacting pairs is given by n_1^k()&=&n_21^ke^-b_1(1+ck ) + n_2^k , k,w()&=&n_12^k , k,we^-2b_2g(w ) .consequently the distributions of duration of given states eqs . are given by p_1^k ( ) e^-b_1(1+ck ) + p_2^w ( ) e^-2b_2g(w ) .therefore the probability distributions and are exponentials as expected in a poisson process .in this section we introduce the entropy of temporal social networks as a measure of information encoded in their dynamics .we can assume that the following stochastic dynamics takes place in the network : according to this dynamics at each time step , different interacting groups can be formed and can be dissolved giving rise to the temporal social networks .the agents are embedded in a social network such that interaction can occur only by acquaintances between first neighbors of the network .this is a good approximation if we want to model social interactions on the fast time scale . in the case of a small conference , where each participant is likely to discuss with any other participant we can consider a fully connected network as the underlying network of social interactions . in the network each set of interacting agents can be seen as a connected subgraph of , as shown in fig [ fig1 ] .we use an indicator function to denote , at time , the maximal set , , ... , of interacting agents in a group . if is the maximal set of interacting agents in a group , we let otherwise we put . therefore at any given time the following relation is satisfied , where is an arbitrary connected subgraph of . 
then we denote by the history of the dynamical social networks , and the probability that given the history . therefore the likelihood that at time the dynamical social networks has a group configuration is given by we denote the entropy of the dynamical networks as indicating the logarithm of the typical number of all possible group configurations at time which can be explicitly written as the value of the entropy can be interpreted as following : if the entropy is larger , the dynamical network is less predictable , and several possible dynamic configurations of groups are expected in the system at time . on the other hand ,a smaller entropy indicates a smaller number of possible future configuration and a temporal network state which is more predictable . in this subsectionwe discuss the evaluation of the entropy of phone - call communication . for phone - call communication ,we only allow pairwise interaction in the system such that the product in eq.([likelihood ] ) is only taken over all single nodes and edges of the quenched network which yields with where is the adjacency matrix of .the entropy then takes a simple form in this subsection we use the entropy of temporal social networks to analyze the information encoded in a major european mobile service provider , making use of the same dataset that we have used to measure the distribution of call duration in section 2 .here we evaluate the entropy of the temporal networks formed by the phone - call communication in a typical week - day in order to study how the entropy of temporal social networks is affected by circadian rhythms of human behavior . for the evaluation of the entropy of temporal social networks we consider a subset of the large dataset of mobile - phone communication .we selected users who executed at least one call a day during a weeklong period .we denote by the transition probability that an agent in state ( changes its state at time given that he / she has been in his / her current state for a duration .the probability can be estimated directly from the data .therefore , we evaluate the entropy in a typical weekday of the dataset by using the transition probabilities and the definition of entropy of temporal social networks ( readers should refer to the supplementary material of ref . for the details ) . in fig .[ entropyt ] we show the resulting evaluation of entropy in a typical day of our phone - call communication dataset . the entropy of the temporal social network is plotted as a function of time during one typical day .the mentioned figure shows evidence that the entropy of temporal social networks changes significantly during the day reflecting the circadian rhythms of human behavior . the adaptability of human behavior is evident when comparing the distribution ofthe duration of phone - calls with the duration of face - to - face interactions . in the framework of the model for mobile - phone interactions described in sec .[ sec3.2 ] , this adaptability , can be understood , as a possibility to change the exponent in eqs .( [ f ] ) and ( [ f2 t ] ) regulating the duration of social interactions .changes in the parameter correspond to different values entropy of the dynamical social networks .therefore , by modulating the exponent , the human behavior is able to modulate the information encoded in temporal social networks . 
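As a rough numerical companion to the pairwise entropy just described, the sketch below treats the next-step configuration as a product of independent binary choices over the links of the quenched network: each idle pair of acquaintances either starts a call or not, and each ongoing call either ends or not, with probabilities depending on how long the current state has lasted. This is a simplification of the factorized likelihood above, and the transition probabilities used here are assumed reinforcement-style forms, not the ones estimated from the data in the text.

```python
"""Sketch: dynamical entropy of a pairwise (phone-call-like) temporal network."""
import math
import random

def h2(p):
    """Binary entropy (nats) of a Bernoulli(p) choice."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def start_prob(idle_time, b1=0.05, beta=0.8):
    return b1 / (1.0 + idle_time) ** beta                  # assumed form

def end_prob(call_time, weight, b2=0.4, beta=0.8):
    return b2 / (weight * (1.0 + call_time) ** beta)       # assumed weight dependence

def dynamical_entropy(edges, state):
    """state[e] = (interacting?, time in current state, weight of edge e)."""
    s = 0.0
    for e in edges:
        on, tau, w = state[e]
        p = end_prob(tau, w) if on else start_prob(tau)
        s += h2(p)
    return s

# toy quenched network: ~200 random acquaintance links among 100 users
random.seed(3)
edges = list({tuple(sorted(random.sample(range(100), 2))) for _ in range(200)})
state = {e: (random.random() < 0.1, random.randint(1, 50), random.choice([1, 2, 5]))
         for e in edges}
print(f"entropy of the next configuration: {dynamical_entropy(edges, state):.2f} nats "
      f"over {len(edges)} links")
```

Estimating the transition probabilities from the empirical call sequence, as done in the text, and evaluating this quantity hour by hour is what produces the circadian modulation of the entropy discussed next.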
in order to show the effect on entropy of a variation of the exponent in the dynamics of social interaction networks , we considered the entropy corresponding to the model described in sec .[ sec3.2 ] as a function of the parameters and modulating the probabilities eqs.([f2 t ] ) . in fig .[ entropy_network ] we report the entropy of the proposed model a function of and .the entropy , given by eq.([s_pair2 ] ) , is calculated using the annealed approximation for the solution of the model and assuming the large network limit . in the calculation of the entropy have taken a network of size with exponential degree distribution of average degree , weight distribution and function and .our aim in fig . [ entropy_network ] is to show only the effects on the entropy due to the different distributions of duration of contacts and non - interaction periods .therefore we have normalized the entropy with the entropy of a null model of social interactions in which the duration of groups are poisson distributed but the average time of interaction and non - interaction time are the same as in the model of cell - phone communication ( readers should refer to the supplementary material of ref . for more details ) . from fig .[ entropy_network ] we observe that if we keep constant , the ratio is a decreasing function of the parameter .this indicates that the broader is the distribution of probability of duration of contacts , the higher is the information encoded in the dynamics of the network .therefore the heterogeneity in the distribution of duration of contacts and no - interaction periods implies higher level of information in the social network .the human adaptive behavior by changing the exponent in face - to - face interactions and mobile phone communication effectively changes the entropy of the dynamical network .the goal of network science is to model , characterize , and predict the behavior of complex networks . here , in this chapter , we have delineated a first step in the characterization of the information encoded in temporal social networks . in particularwe have focused on modelling phenomenologically social interactions on the fast time scale , such a face - to - face interactions and mobile phone communication activity .moreover , we have defined the entropy of dynamical social networks , which is able to quantify the information present in social network dynamics .we have found that human social interactions are bursty and adaptive .indeed , the duration of social contacts can be modulated by the adaptive behavior of humans : while in face - to - face interactions dataset a power - law distribution of duration of contacts has been observed , we have found , from the analysis of a large dataset of mobile - phone communication , that mobile - phone calls are distributed according to a weibull distribution . we have modeled this adaptive behavior by assuming that the dynamics underlying the formation of social contacts implements a reinforcement dynamics according to which the longer an agent has been in a state ( interacting or non - interacting ) the less likely it is that he will change his / her state .we have used the entropy of dynamical social networks to evaluate the information present in the temporal network of mobile - phone communication , during a typical weekday of activity , showing that the information content encoded in the dynamics of the network changes during a typical day . 
moreover , we have compared the entropy in a social network with the duration of contacts following a weibull distribution , and with the duration of contacts following a power - law in the framework of the stochastic model proposed for mobile - phone communication .we have found that a modulation of the statistics of duration of contacts strongly modifies the information contents present in the dynamics of temporal networks .finally , we conclude that the duration of social contacts in humans has a distribution that strongly deviates from an exponential .moreover , the data show that human behavior is able to modify the information encoded in social networks dynamics during the day and when facing a new technology such as the mobile - phone communication technology .we thank a. barrat and j. stehl for a collaboration that started our research on face - to - face interactions .moreover we especially thank a .-barabsi for his useful comments and for the mobile call data used in this research .mk acknowledges the financial support from eu s 7th framework program s fet - open to ictecollective project no .238597 tang j , scellato s , musolesi m , mascolo c , latora v ( 2010 ) small - world behavior in time - varying graphs .phys rev e 81:055101 .parshani r , dickison m , cohen r , stanley he , havlin s ( 2010 ) dynamic networks and directed percolation .europhys lett 90:38004 .cattuto c , van den broeck w , barrat a , colizza v , pinton jf , vespignani a ( 2010 ) dynamics of person - to - person interactions from distributed rfid sensor networks .plos one 5:e11596 .isella l , stehl j , barrat a , cattuto c , pinton jf , van den broeck w ( 2011 ) what s in a crowd ?analysis of face - to - face behavioral networks .j theor biol 271:166 - 180 .hui p , chaintreau a , scott j , gass r , crowcroft j , diot c ( 2005 ) pocket switched networks and human mobility in conference environments .proceedings of the 2005 acm sigcomm workshop on delay - tolerant networking ( philadelphia , pa ) pp 244 - 251 .onnela jp , saramki j , hyvnen j , szab g , lazer d , kaski k , kertsz j , barabsi al ( 2007 ) structure and tie strengths in mobile communication networks .proc natl acad sci usa 104:7332 - 7336 .brockmann d , hufnagel l , geisel t ( 2006 ) the scaling laws of human travel .nature 439:462 - 465 .davidsen j , ebel h , bornholdt s ( 2002 ) emergence of a small world from local interactions : modeling acquaintance networks .phys rev lett 88:128701 .marsili m , vega - redondo f , slanina f ( 2004 ) the rise and fall of a networked society : a formal model .proc natl acad sci usa 101:1439 - 1442 .holme p , newman mej ( 2006 ) nonequilibrium phase transition in the coevolution of networks and opinions .phys rev e 74:056108 .vazquez f , eguluz vm , san miguel m ( 2008 ) generic absorbing transition in coevolution dynamics .phys rev lett 100:108702 .scherrer a , borgnat p , fleury e , guillaume jl , robardet c ( 2008 ) description and simulation of dynamic mobility networks .comp net 52:2842 - 2858 .stehl j , barrat a , bianconi g ( 2010 ) dynamical and bursty interactions in social networks .phys rev e 81:035101 .anteneodo c , chialvo dr ( 2009 ) unraveling the fluctuations of animal motor activity .chaos 19:033123 .altmann eg , pierrehumbert jb , motter ae ( 2009 ) beyond word frequency : bursts , lulls , and scaling in the temporal distributions of words .plos one 4:e7678 .quercia d , lambiotte r , stillwell d , kosinski m , crowcroft j ( 2012 ) the personality of popular facebook users , acm cscw 12 : 955 - 
964 . cover t , thomas ja ( 2006 ) elements of information theory . wiley - interscience . bianconi g ( 2008 ) the entropy of randomized network ensembles . europhys lett 81:28005 . anand k , bianconi g ( 2009 ) entropy measures for networks : toward an information theory of complex topologies . phys rev e 80:045102 . gomez - gardenes j , latora v ( 2008 ) entropy rate of diffusion processes on complex networks . phys rev e 78:065102(r ) . eckmann jp , moses e , sergi d ( 2004 ) entropy of dialogues creates coherent structures in e - mail traffic . proc natl acad sci usa 101:14333 . lambiotte r , blondel vd , de kerchove c , huens e , prieur c , smoreda z , van dooren p ( 2008 ) geographical dispersal of mobile communication networks . physica a 387:5317 - 5325 . malmgren rd , stouffer db , motter ae , amaral lan ( 2008 ) a poissonian explanation for heavy tails in e - mail communication . proc natl acad sci usa 105:18153 - 18158 .

|
temporal social networks are characterized by heterogeneous duration of contacts , which can either follow a power - law distribution , such as in face - to - face interactions , or a weibull distribution , such as in mobile - phone communication . here we model the dynamics of face - to - face interaction and mobile phone communication by a reinforcement dynamics , which explains the data observed in these different types of social interactions . we quantify the information encoded in the dynamics of these networks by the entropy of temporal networks . finally , we show evidence that human dynamics is able to modulate the information present in social network dynamics when it follows circadian rhythms and when it is interfacing with a new technology such as the mobile - phone communication technology .
|
social dilemmas are situations in which individuals are torn between what is best for them and what is best for the society .if selfishness prevails , the pursuit of short - term individual benefits may quickly result in loss of mutually rewarding cooperative behavior and ultimately in the tragedy of the commons .evolutionary game theory is the most commonly adopted theoretical framework for the study of social dilemmas , and none has received as much attention as the prisoner s dilemma game .each instance of the game is contested by two players who have to decide simultaneously whether they want to cooperate or defect .the dilemma is given by the fact that although mutual cooperation yields the highest collective payoff , a defector will do better if the opponent decides to cooperate . since widespread cooperation in natureis one of the most important challenges to darwin s theory of evolution and natural selection , ample research has been devoted to the identification of mechanisms that may lead to a cooperative resolution of social dilemmas .classic examples reviewed in include kin selection , direct and indirect reciprocity , network reciprocity , as well as group selection .recently , however , interdisciplinary research linking together knowledge from biology and sociology as well as mathematics and physics has revealed many refinements to these mechanisms and also new ways by means of which the successful evolution of cooperation amongst selfish and unrelated individuals can be understood .one of the more recent and very promising developments in evolutionary game theory is the introduction of so - called multigames or mixed games ( for earlier conceptually related work see ) , where different players in the population adopt different payoff matrices .indeed , it is often the case that a particular dilemma is perceived differently by different players , and this is properly taken into account by considering a multigame environment .a simple example to illustrate the point entails two drivers meeting in a narrow street and needing to avoid collision .however , while the first driver drives a cheap old car , the second driver drives a brand new expensive car .obviously , the second driver will be more keen on averting a collision .several other examples could be given to illustrate that , when we face a conflict , we are likely to perceive differently what we might loose in case the other player chooses to defect .the key question then is , how the presence of different payoff matrices , motivated by the different perception of a dilemma situation , will influence the cooperation level in the whole population ?multigames were thus far studied in well - mixed systems , but since stable solutions in structured populations can differ significantly a prominent example of this fact being the successful evolution of cooperation in the prisoner s dilemma game through network reciprocity it is of interest to study multigames also within this more realistic setup .indeed , interactions among players are frequently not random and best described by a well - mixed model , but rather they are limited to a set of other players in the population and as such are best described by a network . 
with this as motivation , we here study evolutionary multigames on the square lattice and scale - free networks , where the core game is the weak prisoner s dilemma while at the same time some fraction of players adopts either a positive or a negative value of the sucker s payoff .effectively , we thus have some players using the weak prisoner s dilemma payoff matrix , some using the traditional prisoner s dilemma payoff matrix , and also some using the snowdrift game payoff matrix . within this multigame environment, we will show that the higher the heterogeneity of the population in terms of the adopted payoff matrices , the more the evolution of cooperation is promoted .furthermore , we will elaborate on the responsible microscopic mechanisms , and we will also test the robustness of our observations . taken together , we will provide firm evidence in support of heterogeneity - enhanced network reciprocity and show how different perceptions of social dilemmas contribute to their resolution .first , however , we proceed with presenting the details of the mathematical model .we study evolutionary multigames on the square lattice and the barabsi - albert scale - free network , each with an average degree and size . these graphs ,being homogeneous and strongly heterogeneous , represent two extremes of possible interaction topology .each player is initially designated either as cooperator ( ) or defector ( ) with equal probability . moreover ,each instance of the game involves a pairwise interaction where mutual cooperation yields the reward , mutual defection leads to punishment , and the mixed choice gives the cooperator the sucker spayoff and the defector the temptation .the core game is the weak prisoner s dilemma , such that , and .a fraction of the population , however , uses different values to take into account the different perception of the same social dilemma .in particular , one half of the randomly chosen players uses , while the other half uses , where .we adopt the equal division of positive and negative values to ensure that the average over all payoff matrices returns the core weak prisoner s dilemma , which is convenient for comparisons with the baseline case .primarily , we consider multigames where , once assigned , players do not change their payoff matrices , but we also verify the robustness of our results by considering multigames with time - varying matrices .we simulate the evolutionary process in accordance with the standard monte carlo simulation procedure comprising the following elementary steps .first , according to the random sequential update protocol , a randomly selected player acquires its payoff by playing the game with all its neighbors .next , player randomly chooses one neighbor , who then also acquires its payoff in the same way as previously player .importantly , at each instance of the game the applied payoff matrix is that of the randomly chosen player who collects the payoffs , which may result in an asymmetric payoff allocation depending on who is central .this fact , however , is key to the main assumption that different players perceive the same situation differently .once both players acquire their payoffs , then player adopts the strategy from player with a probability determined by the fermi function },\ ] ] where quantifies the uncertainty related to the strategy adoption process . 
in agreement with previous works, the selected value ensures that strategies of better - performing players are readily adopted by their neighbors , although adopting the strategy of a player that performs worse is also possible .this accounts for imperfect information , errors in the evaluation of the opponent , and similar unpredictable factors .each full monte carlo step ( mcs ) consists of elementary steps described above , which are repeated consecutively , thus giving a chance to every player to change its strategy once on average .all simulation results are obtained on networks typically with players , but larger system size is necessary on the proximity to phase transition points , and the fraction of cooperators is determined in the stationary state after a sufficiently long relaxation lasting up to mcs . to further improve accuracy ,the final results are averaged over independent realizations , including the generation of the scale - free networks , at each set of parameter values .before turning to the main results obtained in structured populations , we first briefly summarize the evolutionary outcomes in well - mixed populations .although the subpopulation adopting the , , and parametrization fulfills , and thus in principle plays the snowdrift game where the equilibrium is a mixed phase , cooperators in the studied multigame actually never survive . since there are also players who adopt either the weak ( ) or the traditional ( ) prisoner s dilemma payoff matrix , the asymmetry in the interactions renders cooperation evolutionary unstable .in fact , in well - mixed populations the baseline case given by the average over all payoff matrices is recovered , which in our setup is the weak prisoner s dilemma , where for all cooperators are unable to survive .more precisely , cooperators using die out first , followed by those using and , and this ranking is preserved even if the subpopulation using is initially significantly larger than the other two subpopulations ( at small values ) .although in finite well - mixed populations the rank of this extinction pattern could be very tight , it does not change the final fate of the population to arrive at complete defection . in structured populations , as expected from previous experience , we can observe different solutions , where cooperators can coexist with defectors over a wide range of parameter values .but more importantly , the multigame environment , depending on and , can elevate the stationary cooperation level significantly beyond that warranted by network reciprocity alone .we first demonstrate this in fig .[ delta](a ) , where we plot the fraction of cooperators as a function of the temptation value , as obtained for and by using different values of .it can be observed that the larger the value of the larger the value of at which cooperators are still able to survive .indeed , for cooperation prevails across the whole interval of .since some players use a negative value of , it is nevertheless of interest to test whether the elevated level of cooperation actually translates to a larger average payoff of the population .it is namely known that certain mechanisms aimed at promoting cooperative behavior , like for example punishment , elevate the level of cooperation but at the same time fail to raise the average payoff accordingly due to the entailed negative payoff elements . 
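As a concrete illustration of the simulation procedure just described, here is a minimal Monte Carlo sketch for the square-lattice case. The noise value K = 0.1, the temptation b, Delta, the fraction rho, the lattice size and the (very short) relaxation time are illustrative assumptions; payoffs are accumulated with the payoff matrix of the player who collects them, as specified above.

```python
"""Monte Carlo sketch of the multigame weak prisoner's dilemma on a square lattice."""
import math
import random

L = 50                      # lattice is L x L with periodic boundaries
b, K = 1.15, 0.1            # temptation and imitation noise (K assumed)
rho, delta = 0.5, 0.4       # fraction using S = +/-Delta, and Delta itself
STEPS = 200                 # full Monte Carlo steps (short, for illustration only)

N = L * L
random.seed(7)
strategy = [random.randint(0, 1) for _ in range(N)]        # 1 = cooperate, 0 = defect
suckers = [0.0] * N                                        # individual sucker's payoffs
marked = random.sample(range(N), int(rho * N))
for idx, site in enumerate(marked):
    suckers[site] = delta if idx % 2 == 0 else -delta      # half +Delta, half -Delta

def neighbors(i):
    x, y = i % L, i // L
    return [((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
            x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]

def payoff(i):
    """Accumulated payoff of i against its four neighbors, using i's own matrix."""
    total = 0.0
    for j in neighbors(i):
        if strategy[i] == 1:
            total += 1.0 if strategy[j] == 1 else suckers[i]   # R = 1, S = S_i
        else:
            total += b if strategy[j] == 1 else 0.0            # T = b, P = 0
    return total

for step in range(STEPS):
    for _ in range(N):                      # one full MCS = N elementary updates
        x = random.randrange(N)
        y = random.choice(neighbors(x))
        px, py = payoff(x), payoff(y)
        # Fermi rule: adopt the neighbor's strategy with higher probability if it earns more
        if random.random() < 1.0 / (1.0 + math.exp((px - py) / K)):
            strategy[x] = strategy[y]

print("fraction of cooperators after", STEPS, "MCS:", sum(strategy) / N)
```

Sweeping b and Delta, using much longer relaxation times, and averaging the final cooperator fraction over many independent realizations should reproduce, at least qualitatively, the kind of curves discussed in the results below.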
as illustrated in fig .[ delta](b ) , however , this is not the case at present since larger values of readily translate to larger average payoffs of the population . in the light of these results ,we focus solely on the fraction of cooperators and show in fig .[ rho ] how varies in dependence on and at a given temptation value . presented results indicate that what we have observed in fig .[ delta](a ) , namely the larger the value of the better , actually holds irrespective of the value of .more to the point , larger values support cooperation stronger , which corroborates the argument that the more heterogeneous the multigame environment the better .results presented in fig .[ rho ] also suggest that it is better to have many players using higher values , regardless of the fact that the price is an equal number of players in the population using equally high but negative values .these observations hold irrespective of the temptation , and they fit well to the established notion that heterogeneity , regardless of its origin , promotes cooperation by enhancing network reciprocity . to support these arguments and to pinpoint the microscopic mechanism that is responsible for the promotion of cooperation in the multigame environment , we first monitor the fraction of cooperators within subgroups of players that use different payoff matrices . for clarity ,we use , where only two subpopulations exist ( players use either or , but nobody uses ) , and where the positive effect on the evolution of cooperation is the strongest ( see fig . [ rho ] ) . accordingly, one group is formed by players who use , and the other is formed by players who use .we denote the fraction of cooperators in these two subpopulations by and , respectively . as fig .[ asymmetry](a ) shows , even if only a moderate value is applied , the cooperation level among players who use a positive value is significantly higher than among those who use a negative value .unexpectedly , even among those players who effectively play a traditional prisoner s dilemma ( ) , the level of cooperation is still much higher than the level of cooperation that is supported solely by network reciprocity ( without multigame heterogeneity ) in the weak prisoner s dilemma ( ) .this fact further supports the conclusion that the introduction of heterogeneity through the multigame environment involves the emergence of strong cooperative leaders , which further aid and invigorate traditional network reciprocity . unlike defectors , cooperators benefit from a positive feedback effect , which originates in the subpopulation that uses positive values and then spreads towards the subpopulation that uses negative values , ultimately giving rise to an overall higher social welfare ( see fig .[ delta](b ) ) . 
this explanation can be verified directly by monitoring the information exchange between the two subpopulations .more precisely , we measure the frequency of strategy imitations between players belonging to the two different subpopulations .the difference is positive when players belonging to the `` - '' subpopulation adopt the strategy from players belonging to the `` + '' subpopulation more frequently than vice versa .results presented in fig .[ asymmetry](b ) demonstrate clearly that the level of cooperation is increased only if there is significant asymmetry in the strategy imitation flow in favor of the `` + '' subpopulation .such symmetry breaking , which is due to the multigame environment , supports a level of cooperation in the homogeneous weak prisoner s dilemma that notably exceeds the level of cooperation that is supported solely by traditional network reciprocity .we proceed by testing the robustness of our observations and expanding this study to heterogeneous interaction networks .first , we consider the barabsi - albert scale - free network , where influential players are a priori present due to the heterogeneity of the topology .previous research , however , has shown that the positive impact of degree heterogeneity vanishes if payoffs are normalized with the degree of players , as to account for the elevated costs of participating in many games .we therefore apply degree - normalized payoffs to do away with cooperation promotion that would be due solely to the heterogeneity of the topology .furthermore , by striving to keep the average over all payoff matrices equal to the weak prisoner s dilemma , it is important to note that the heterogeneous interaction topology allows us to introduce only a few strongly connected players into the subpopulation , while the rest can use only a moderately negative value . 
specifically , we assigned to only 2% of the hubs , while the rest used to fulfill ( average over all in the population equal to zero to yield , on average , the weak prisoner s dilemma payoff ranking ) .as results depicted in fig .[ sf ] show , even with this relatively minor modification that introduces the multigame environment , the promotion of cooperation is significant if only is sufficiently large ( see legend ) .evidently , returns the modest cooperation level that has been reported before on scale - free networks with degree - normalized payoffs , but for the coexistence of cooperators and defectors is possible almost across the whole interval of .it is also important to note that the positive effect could be easily amplified further simply by introducing more players into the subpopulation and letting the remainder use an accordingly even less negative values of .these results indicate that the topology of the interaction network has only secondary importance , because the heterogeneity that is introduced by payoff differences already provides the necessary support for the successful evolution of cooperation .consequently , in the realm of the introduced multigame environment , we have observed qualitatively identical cooperation - supporting effects when using the random regular graph or the configurational model of bender and canfield for generating the interaction network .lastly , we present results obtained within a time - varying multigame environment to further corroborate the robustness of our main arguments .several examples could be provided as to why players perception might change over time .the key point is that players may still perceive the same dilemma situation differently , and hence they may use different payoff matrices .our primary goal here is to present the results obtained with a minimal model , although extensions towards more sophisticated and realistic models are of course possible .accordingly , unlike considered thus far , players do not have a permanently assigned value , but rather , they can choose between and with equal probability at each instance of the game .naturally , this again returns the weak prisoner s dilemma on average over time , and as shown in , in well - mixed populations returns the complete defection stationary state . in structured populations , however , for , we can again observe promotion of cooperation beyond the level that is warranted solely by network reciprocity .for simplicity , results presented in fig . [ temporary ] were obtained by using the square lattice as the underlying interaction network , but in agreement with the results presented in fig . [ sf ], qualitatively identical evolutionary outcomes are obtained also on heterogeneous interaction networks . comparing to the results presented in fig .[ delta](a ) , where the time invariable multigame environment was applied , we conclude that in the time - varying multigame environment the promotion of cooperation is less strong .this , however , is understandable , since the cooperation - supporting influential players emerge only for a short period of time , but on average the overall positive effect in the stationary state is still clearly there . 
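For the scale-free case described earlier in this section, the sketch below shows one way to set up the degree-normalized multigame environment with networkx: the top 2% of nodes by degree are placed in the positive-S subpopulation (which is how the preceding passage reads, although the exact magnitudes are not restated there), and the remaining nodes receive the compensating negative value so that the population average of S is zero. The values of Delta and b are illustrative.

```python
"""Sketch: multigame setup on a Barabasi-Albert network with degree-normalized payoffs."""
import networkx as nx

N, M = 10_000, 2                 # BA network with average degree ~ 2*M = 4
DELTA = 1.0                      # assumed positive sucker's payoff given to the hubs

g = nx.barabasi_albert_graph(N, M, seed=11)
by_degree = sorted(g.nodes, key=lambda v: g.degree(v), reverse=True)
hubs = set(by_degree[: int(0.02 * N)])

s_rest = -DELTA * len(hubs) / (N - len(hubs))    # enforces a zero population mean of S
suckers = {v: (DELTA if v in hubs else s_rest) for v in g.nodes}

def normalized_payoff(v, strategy, b=1.5):
    """Accumulated payoff of v divided by its degree (weak PD core: T=b, R=1, P=0)."""
    total = 0.0
    for u in g.neighbors(v):
        if strategy[v] == 1:
            total += 1.0 if strategy[u] == 1 else suckers[v]
        else:
            total += b if strategy[u] == 1 else 0.0
    return total / g.degree(v)

strategy = {v: v % 2 for v in g.nodes}    # placeholder strategies, just for the demo
print("mean S over the population:", sum(suckers.values()) / N)
print("normalized payoff of node 0:", round(normalized_payoff(0, strategy), 3))
```

Plugging these node-specific S values and the degree-normalized payoff into the same imitation loop as in the lattice sketch gives the scale-free version of the model.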
to conclude, it is worth pointing out that time - dependent perceptions of social dilemmas open the path towards coevolutionary models , as studied previously in the realm of evolutionary games , and they also invite the consideration of the importance of time scales in evolutionary multigames .we have studied multigames in structured populations under the assumption that the same social dilemma is often perceived differently by competing players , and that thus they may use different payoff matrices when interacting with their opponents .this essentially introduces heterogeneity to the evolutionary game and aids network reciprocity in sustaining cooperative behavior even under adverse conditions . as the core game and the baseline for comparisons , we have considered the weak prisoner s dilemma , while the multigame environment has been introduced by assigning to a fraction of the population either a positive or a negative value of the sucker s payoffwe have shown that , regardless of the structure of the interaction network , and also irrespective of whether the multigame environment is time invariant or not , the evolution of cooperation is promoted the more the larger the heterogeneity in the population . as the responsible microscopic mechanism behind the enhanced level of cooperation, we have identified an asymmetric strategy imitation flow from the subpopulation adopting positive sucker s payoffs to the population adopting negative sucker s payoffs .since the subpopulation where players use positive sucker s payoffs expectedly features a higher level of cooperation , the asymmetric strategy imitation flow thus acts in favor of cooperative behavior also in the other subpopulations , and ultimately it raises the overall level of social welfare in the population .the obtained results in structured populations are in contrast to the results obtained in well - mixed populations , where simply the baseline weak prisoner s dilemma is recovered regardless of multigame parametrization .although it is expected that structured populations support evolutionary outcomes that are different from the mean - field case , the importance of this fact for multigames is of particular relevance since interactions among players are frequently not best described by a well - mixed model , but rather they are limited to a set of other players in the population and as such are best described by a network .put differently , although sometimes analytically solvable , the well - mixed models can at best support proof - of - principle studies , but otherwise have limited applicability for realistic systems .taken together , the presented results add to the existing evidence in favor of heterogeneity - enhanced network reciprocity , and they further establish heterogeneity among players as a strong fundamental feature that can elevate the cooperation level in structured populations past the boundaries that are imposed by traditional network reciprocity .the rather surprising role of different perceptions of the same conflict thus reveals itself as a powerful mechanism for resolving social dilemmas , although it is rooted in the same fundamental principles as other mechanisms for cooperation promotion that rely on heterogeneity .we hope this paper will motivate further research on multigames in structured populations , which appears to be an underexplored subject with many relevant implications .this research was supported by the hungarian national research fund ( grant k-101490 ) , tamop-4.2.2.a-11/1/konv-2012 - 
0051 , the slovenian research agency ( grants j1 - 4055 and p5 - 0027 ) , and the fundamental research funds for central universities ( grant dut13lk38 ) .
|
motivated by the fact that the same social dilemma can be perceived differently by different players , we here study evolutionary multigames in structured populations . while the core game is the weak prisoner s dilemma , a fraction of the population adopts either a positive or a negative value of the sucker s payoff , thus playing either the traditional prisoner s dilemma or the snowdrift game . we show that the higher the fraction of the population adopting a different payoff matrix , the more the evolution of cooperation is promoted . the microscopic mechanism responsible for this outcome is unique to structured populations , and it is due to the payoff heterogeneity , which spontaneously introduces strong cooperative leaders that give rise to an asymmetric strategy imitation flow in favor of cooperation . we demonstrate that the reported evolutionary outcomes are robust against variations of the interaction network , and they also remain valid if players are allowed to vary which game they play over time . these results corroborate existing evidence in favor of heterogeneity - enhanced network reciprocity , and they reveal how different perceptions of social dilemmas may contribute to their resolution .
|
the cognitive radio channel is the simplest kind of the overlay form of cognitive radio networks , wherein the cognitive radio simultaneously utilizes the same spectrum as the primary user - pair for its own data transmission .the cognitive radio channel with the cognitive source having non - causal knowledge of the primary message have been considered in many recent information theoretic works .various achievable rate regions for the non - causal case have been proposed in , etc . in real deployments , some resources ( in time or frequency ) need to be expended by the system for the cognitive source to acquire the primary message , and this overhead should be explicitly modeled to obtain more realistic coding schemes and rate regions . in , the authors consider half - duplex operation of the cognitive source , and propose four two - phase protocols . on the other hand , in , a full - duplex operation of the cognitive source is assumed , and block markov spc along with sliding - window decoding , and rate - splitting for the two messages are used to obtain an achievable rate region . in , the causal scenario is considered from a z interference channel ( zic ) perspective , wherein the primary destination does not experience any interference from the secondary transmission .recently , we derived a new achievable rate region for the full - duplex causal cognitive radio channel in . in this work , we consider the causal cognitive radio channel , wherein the cognitive source is subjected to the half - duplex constraint .first , we present a discrete memoryless channel model for the half - duplex causal cognitive radio channel ( hd - ccrc ) , and then propose a generalized coding scheme for this channel .it is also proved that the new rate region contains the previously known rate region of for the gaussian hd - ccrc .the hd - ccrc is depicted in fig . [fig : dmc_hdccrc ] , wherein the primary source node intends to transmit information to its destination node .a cognitive ( or secondary ) source - destination pair , and , wishes to communicate as well , with having its own information to transmit to .the primary message is only causally available at . to incorporate the half - duplex constraint for the discrete memoryless channel model, we consider a second input at , , to indicate the state of listening or transmitting . with this, the channel transition probability is determined by the state of the cognitive source as follows : where denotes an erasure at , and if and otherwise . to incorporate the fact that can not transmit when in the listening state , we restrict the joint probability distribution of the inputs as , where is the `` null '' symbol . in channel uses, has message to transmit to , while has message to transmit to .let , and be the input and output alphabets respectively .further , .a rate pair is achievable if there exist an encoding function for , , and a sequence of encoding functions for , with , and corresponding decoding functions and such that the average probability of error , where ] , and . owing to the half - duplex constraint to the channel model , we restrict the distributions for the codewords used in the codebook construction as follows : [ thm : hd - causal_ach_rates ] for the discrete memoryless hd - ccrc , all rate tuples , where , , , with non - negative reals satisfying \label{eqn : hd - causal_m}\\ & & \hspace{-.75cm}r_{c } \leq \bar{\alpha } \left [ i\left(u_{cco } , u_{cpr } ; y_c | x_{p2co } , t_{p1co } , s = t \right ) \right . 
\nonumber\\ & & \hspace{1.5cm}\left .- i \left(u_{cco } , u_{cpr } ; t_{p1pr } | t_{p1co } , s = t\right ) \right ] \label{eqn : hd - causal_n}\\ & & \hspace{-.75cm}r_{p2co } + r_{cpr } \leq \bar{\alpha } \left [ i\left(x_{p2co } , u_{cpr } ; y_c , u_{cco } | t_{p1co } , s = t \right ) \right .\nonumber\\ & & \hspace{1.5cm}\left .- i \left(u_{cpr } ; t_{p1pr } , u_{cco } | t_{p1co } , s = t\right ) \right ] \label{eqn : hd - causal_o}\\ & & \hspace{-.75cm}r_{p2co } + r_{c } \leq \bar{\alpha } \left [ i\left(x_{p2co } , u_{cco } , u_{cpr } ; y_c | t_{p1co } , \right .\nonumber\\ & & \hspace{.5cm}\left.\left .s = t \right ) - i \left(u_{cco } , u_{cpr } ; t_{p1pr } | t_{p1co } , s = t\right ) \right ] \label{eqn : hd - causal_p}\\ & & \hspace{-.75cm}r_e + r_{p2co } + r_{c } \leq \alpha i \left ( x_{p1co } ; y_c | s = l \right ) \nonumber\\ & & \hspace{.75 cm } + \bar{\alpha } \left [ i\left(t_{p1co } , x_{p2co } , u_{cco } , u_{cpr } ; y_c | s = t \right ) \right . \nonumber\\ & & \hspace{1.75cm}\left . -i \left ( u_{cco } , u_{cpr } ;t_{p1pr } | t_{p1co } , s = t\right ) \right ] \label{eqn : hd - causal_q}\\ & & \hspace{-.75cm}r_{p1co } + r_{p2co } + r_{c } \leq i\left(s;y_c\right ) + \alpha i \left ( x_{p1co } ; y_c | s = l \right ) \nonumber\\ & & \hspace{.75 cm } + \bar{\alpha } \left [ i\left(t_{p1co } , x_{p2co } , u_{cco } , u_{cpr } ; y_c | s = t \right ) \right . \nonumber\\ & & \hspace{1.75cm}\left . -i \left ( u_{cco } , u_{cpr } ; t_{p1pr } | t_{p1co } , s = t\right ) \right ] \label{eqn : hd - causal_r } \ ] ] are achievable for some joint distribution that factors as and satisfies - , and for which the right - hand sides of - are non - negative .[ proof : causal_ach_rates ] let denote set of jointly -typical sequences according to the distribution of random variables as induced by the same distribution used to generate the codebooks . for the sake of space, the dependence on the random variables will not be stated explicitly , and should be clear from the context . +* codebook generation : * split the primary and cognitive users rates as , and respectively . fix a distribution as in theorem [ thm : hd - causal_ach_rates ] . *generate i.i.d .codewords , , according to .* for each codeword , generate conditionally i.i.d .codewords , , according to .* for each codeword pair , generate conditionally i.i.d .codewords , , according to . * for each codeword pair , generate conditionally i.i.d .codewords , , according to .* for each codeword tuple , generate conditionally i.i.d .codewords , , according to .* for each codeword pair , generate conditionally i.i.d .codewords , , according to .* for each codeword tuple , generate conditionally i.i.d .codewords , , according to .* for each codeword pair , generate i.i.d .codewords , and , according to .* for each codeword tuple , generate i.i.d .codewords , and , according to .* generate where is a deterministic function of .* generate where is a deterministic function of such that if .* encoding : * at : in block , transmits . in the first block, transmits , while in block , it transmits .note that the actual rate for the primary message is , but it converges to as the number of blocks goes to infinity . 
at : in block , to transmit , searches for bin index such that where and are s estimates of and respectively from the previous block .once is determined , it searches for a bin index in order to transmit such that it sets or if the respective bin index is not found .it can be shown using arguments similar to those in that the probabilities of the events of not able to find a unique or satisfying and can be made arbitrarily small if the following hold true : where may be arbitrarily small . transmits .* decoding : * at : assume that decoding till block has been successful .then , in block , knows and .it declares that the pair was transmitted in block if there exists a unique pair that else , an error is declared .it can be shown that the probability of error for this decoding step can be made arbitrarily low if and are satisfied . at : the primary destination waits until block , and then performs backward decoding .we consider the decoding process using the output in block .the decoding for the first and last blocks can be seen as special cases of the above .thus , for block , assuming that the decoding for the pair has been successful from block , searches for a unique tuple and some tuple such that the error analysis for this decoding step ( omitted due to space constraints ) can be used to prove that , for large enough , with arbitrarily small probability of error if - are satisfied . at : the cognitive destination also waits until block , and then performs backward decoding to jointly decode the messages intended for it and the common part of the primary message . for block , is assumed to have successfully decoded from block . with this knowledge ,it searches for a unique tuple and some such that again , using the properties of joint typicality , it can be established that , for large enough , with an arbitrarily low probability of error if - are satisfied .thus , the constraints on the rates as given in - ensure that the average probability of error at the two destinations can be driven to zero and thus , they describe an achievable rate region for the causal cognitive radio channel .[ rem : w_p2transmission ] according to the above coding scheme , a part of the primary message ( ) is not decoded by .this is different from the non - causal case .as can not receive while it transmits , may improve its rates by transmitting `` fresh '' information directly to the destination during -transmit states , thereby increasing the achievable rate region .note that the maximum increase in the achievable rates in using a random listen - transmit schedule for is .[ rem : convexity ] the achievable rate region described in theorem [ thm : hd - causal_ach_rates ] is convex and hence , no time - sharing is required to enlarge the rate region .this can be easily proved using the markov chain structure of the code as was used in ( * ? ? ?* lemma 5 ) , with the random variable in theorem [ thm : hd - causal_ach_rates ] playing a role similar to that of in .[ rem : gaussian ] for the gaussian channel model with a fixed listen - transmit schedule , the coding scheme of theorem [ thm : hd - causal_ach_rates ] yields the same rate region as with a time - division strategy with the use of gaussian parallel channels , instead of a block markov structure , for the decoding of at and at . according to this strategy , transmits during the first time - slot while is in listening mode . 
in the second time - slot ,both and encode and transmit as a non - causal cognitive radio channel , and also superposes on top of .both destinations decode only at the end of the second time - slot and exploit the parallel gaussian channel structure to decode .in , an achievable rate region for the gaussian hd - ccrc was presented .the authors proposed four protocols and the overall achievable rate region ( ) is given by the convex hull of the four rate regions ( * ? ? ? * theorem 5 ) . in this section , we show that the rate region of theorem [ thm : hd - causal_ach_rates ] , , contains .we show that an outer bound ( not necessarily achievable ) to the rate region presented in is contained in a subspace of the achievable rate region of theorem [ thm : hd - causal_ach_rates ] . for the non - causal cognitive radio channel ( nc - crc ) , the containment of the region of ( * ?* corollary 2 ) , , in the region of ( * ? ? ?* theorem 1 ) is clear .it is shown in that ( * ? ? ?* theorem 1 ) contains .more specifically , shows that , where is obtained from by removing certain rate constraints , and is obtained from by restricting the input distribution to match that for .the coding scheme of theorem [ thm : hd - causal_ach_rates ] may be specialized to yield a rate region for the nc - crc . towards this, we set , and assume that a genie provides with .this gives us an achievable rate region for the nc - crc .moreover , by restricting the input distribution to independent rate - splitting and independent binning of the secondary messages ( as in ) instead of conditional rate - splitting and conditional binning at , it can be shown using an appropriate mapping of the codebook random variables ( omitted due to lack of space ) , that the resulting region is identical to , and hence , .next , we show that the rate regions obtained via each of the protocols proposed in are contained in .note that for all these protocols , , with rates , etc . according to protocol 1 , for any choice of ,the rate pair is achievable if , \label{eqn : protocol1_bd1}\\ & & \hspace{-.75 cm } ( r^0_p , r^0_c ) \in \mathcal{r}_{dmt } , ~r_c = \bar{\alpha } r^0_c,~r_{pco } = \bar{\alpha } r^0_{pco } , \label{eqn : protocol1_bd2}\\ & & \hspace{-.75 cm } r_{ppr } \leq \frac{\alpha}{2 } \log\left(1 + \frac{\eta p_p}{1+\bar{\eta}p_p}\right ) + \bar{\alpha}r^0_{ppr } , \label{eqn : protocol1_bd3}\end{aligned}\ ] ] where and are the respective power constraints for and , is the channel gain for the link , and ] is the power fraction allocated for transmitting in the first time - slot .note that , given an value , may be chosen such that .then , comparing - to - establishes that the region corresponding to protocol 1 is contained in . the inclusion of the rate region corresponding to protocol 2 can be easily proved by considering the same coding structure and input distribution as used to obtain , with one further restriction - the input distribution at for the first time - slot is given by .this yields an achievable rate region ( ) , that has exactly the same bounds as that for protocol 2 , except that the achievable rate region for the nc - crc ( during the second time - slot ) is , thereby proving the above inclusion .the rate region for protocol 3 can be obtained by setting in theorem [ thm : hd - causal_ach_rates ] .finally , the rate pair corresponding to protocol 4 may be obtained by using a fixed listen - transmit schedule , and by setting . 
as the four rate regions of are contained in , the convex hull of these regions ( ) is also contained in ( cf . remark [ rem : convexity ] ) . a numerical example comparing the han - kobayashi ( hk ) region , , and is presented in fig . [ fig : hd_weak_weak_interf ] .
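As a rough illustration of how such a numerical comparison can be set up, the sketch below traces the rate pairs of a generic two-phase listen/transmit schedule over the time-sharing parameter alpha for a Gaussian channel. The SNR and the second-phase rate pair are placeholder values and are not the expressions of the protocols or of the rate region derived above.

```python
# Illustrative sketch of a two-phase (listen/transmit) schedule for a Gaussian
# half-duplex setting: the cognitive source listens for a fraction alpha of the
# block and both sources transmit in the remaining fraction.  The SNR value and
# the second-phase rate pair (Rp0, Rc0) are placeholders, NOT the expressions
# of the protocols discussed in the text.
import numpy as np

snr_p = 10.0                       # assumed primary-link SNR in the listen phase
c_listen = 0.5 * np.log2(1.0 + snr_p)
Rp0, Rc0 = 1.2, 0.9                # assumed rates of some fixed scheme in phase 2

alphas = np.linspace(0.0, 1.0, 101)
Rp = alphas * c_listen + (1.0 - alphas) * Rp0   # primary rate over the block
Rc = (1.0 - alphas) * Rc0                       # cognitive rate over the block

# boundary of this single protocol; the overall region would be the convex hull
# of such boundaries taken over all protocols and parameter choices
for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    k = int(a * 100)
    print(f"alpha={a:4.2f}  Rp={Rp[k]:.3f}  Rc={Rc[k]:.3f}")
```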
|
coding for the causal cognitive radio channel , with the cognitive source subjected to a half - duplex constraint , is studied . a discrete memoryless channel model incorporating the half - duplex constraint is presented , and a new achievable rate region is derived for this channel . it is proved that this rate region contains the previously known causal achievable rate region of for gaussian channels .
|
the multifunctionality and relative simplicity of plant cells is a marvel . unlike humans and animals, plants do not possess a centralized skeleton and complex control system . yetthey can create their own food through photosynthesis , reproduce , carry considerable external loads and in some cases are even capable of rapid movements .this article is concerned with the nastic movement of plants as known , for example , from mimosa pudica and dionaea muscipula .the focus is particularly on cell geometries and material properties of cell walls .cell walls carry considerable loads due to turgor pressures of up to 5 mpa .the resulting prestress increases their compression strength and thus has a positive impact on a plants overall stiffness .hence it could be argued that plants possess a decentralized skeleton that is formed by a large number of connected pressure vessels . in order to withstand the relatively high tensile stresses ,cell walls are made from a composite material as shown in figure [ pic : figure_1 ] .most of the tensile stresses are carried by the primary cell wall .it is made from a network of microfibrils that are connected to each other by hemicellulose tethers .this network is embedded in a pectin matrix , a hydrated gel that pushes the microfibrils apart , thus easing their sideways motion .this allows plants to continuously optimize their cell wall properties and thus to maximize stiffness and minimize stress peaks for given loads .furthermore , the young s modulus of cell walls increases with tensile stresses and volume changes of cells due to elastic deformations are negligible .the nastic movement of plants is driven by pressure changes that require a water flow between neighbouring cells .skotheim and mahadevan found that the speed of plant movements increases for decreasing cell sizes and pumping distances . ] .consequently it is best if water fluxes occur mainly between neighbouring cell layers .nastic movements are only a small aspect of a plants capabilities .hence a cells construction principle is influenced by numerous alternative objectives .in contrast , an exclusive focus on the nastic movement reduces the number of objectives and thus increases the range of potential cell - sizes , -materials and -pressures . based on these observations , pagitz et al developed a novel concept for pressure actuated cellular structures that can alter their shape between any given set of one- , and two - dimensional functions .an example of a cellular structure for two one- and two - dimensional target shapes is shown in figure [ pic : figure_2 ] .it can be seen that both structures consist of two layers of equally pressurized cells . changing the pressure ratio between cell layersalters the shape of the structure .computing the length and thickness of each cell side allows the creation of structures that can take up any target shape for given cell pressures and material properties . pressure actuated cellular structures possess a large shape changing capability and a high strength to self - weight ratio and energy efficiency .hence their potential application ranges from passenger seats and hospital beds to leading and trailing edges of aircraft .the remainder of this article is organized as follows : analytical expressions for optimal materials of compliant cellular structures with identical properties are derived as a function of cell sizes in _section 2_. 
furthermore , the relationship between cell - sizes , -materials and -pressures is investigated ._ section 3 _ presents extensions to the previously published numerical model and determines their application ranges . _section 4 _ introduces two end cap designs for prismatic cells that can withstand substantial differential pressure while being flexible enough to allow large cross sectional shape changes .furthermore , a manufacturing approach that is based on cytoskeletons is presented ._ section 5 _ concludes the paper .cell sides of pressure actuated cellular structures have to carry large axial forces .furthermore , boundary sides or sides between cell layers require an increased central bending stiffness due to differential pressures . both , axial forces and an increased central thicknesses promote regions around cell corners where bending strains are concentrated .these localization allow the use of numerical models that are based on rigid bars , eccentric cell corner hinges and rotational springs as shown in figure [ pic : figure_3 ] .the geometry of compliant cellular structures is relatively complex .consequently , a detailed finite element model is required to accurately compute equivalent cell corner springs and hinge eccentricities . in order to simplify mattersit is subsequently assumed that regions of concentrated bending strains can be approximated by rectangular continua with linearized stress distributions as shown in figure [ pic : figure_4 ] .an equivalent rotational spring stiffness can be computed for a rectangular continuum by using the euler - bernoulli beam theory so that where is the young s modulus , the thickness and the ratio between length and thickness of a rectangular continuum .the corresponding bending moment for a bending angle is so that the absolute axial and maximum bending stresses of a rectangular continuum are therefore , the required thickness of a rectangular continuum is where is the yield strength of the considered material .the bending moment and energy of an elastic region is the required thickness of a central cell side is subsequently determined .equilibrium requires transverse forces at both ends so that where it is assumed for the sake of simplicity that hinge eccentricities are constant throughout a cellular structure . the bending moment along a central cell side due to end moments and differentialpressure is where ] is a stress safety factor . 
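Before turning to the interpolated expressions below, the corner model introduced at the beginning of this section can be made concrete with a small numerical sketch. It uses the textbook Euler-Bernoulli relations for a rectangular segment of thickness t, length s*t and unit depth; the exact expressions and all numerical values in the paper may differ, so the material data and loads below are illustrative assumptions only.

```python
# Minimal sketch of the equivalent corner hinge model using textbook
# Euler-Bernoulli relations for a rectangular segment of thickness t,
# length s*t and unit depth.  All numbers are illustrative assumptions.
E      = 2000.0e6      # Young's modulus [Pa]
sig_y  = 40.0e6        # yield strength [Pa]
s      = 4.0           # length-to-thickness ratio of the elastic region
t      = 2.0e-3        # cell side thickness at the corner [m]
dtheta = 0.15          # bending angle of the corner [rad]
F      = 3.0e3         # axial force per unit depth [N/m]

I = t**3 / 12.0                    # second moment of area per unit depth
k = E * I / (s * t)                # rotational stiffness, k = E*t^2/(12*s)
M = k * dtheta                     # corner bending moment per unit depth
sigma_b = 6.0 * M / t**2           # maximum bending stress
sigma_a = F / t                    # axial stress
utilization = (sigma_a + sigma_b) / sig_y

print(f"k = {k:.3e} N*m/rad per metre depth")
print(f"bending stress = {sigma_b/1e6:.2f} MPa, axial stress = {sigma_a/1e6:.2f} MPa")
print(f"material utilization = {utilization:.2f} (should be <= 1)")
```

Solving the utilization condition for t, rather than checking it, gives the required cell side thickness at the corner for a prescribed bending angle and axial force.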
based on a least square interpolationthe constants , are previous interpolation can be used to derive expressions that relate geometry and loading of a cell side to optimal material parameters .the bending energy of a rectangular continuum is which is minimal if so that and hence the bending moment that acts on a rectangular continuum is and the required cell side thickness at a cell corner for an axial force and bending angle is previous three equations are illustrated in figure [ pic : figure_6 ] .it can be seen that the optimal young s modulus decreases exponentially for increasing bending angles .furthermore , bending energy and required cell side thickness decrease exponentially for decreasing bending angles .each rectangular continuum of a rectangular structure is usually subjected to a different bending angle and axial force .hence it would be optimal if every elastic zone is made from a different material .if only one material is used throughout a cellular structure it is best to choose the optimal material for the largest bending angle that occurs .this is due to the fact that elastic regions with large bending angles are dominant since .the number of base pentagons of a cellular structure for given target shapes and stiffness requirements is an important variable during the design process .the effects that a varying number of base pentagons has on the geometry , weight and potential energies of a cellular structure are subsequently investigated .it is assumed that all considered structures * _ are designed for the same target shapes _ * _ possess the same overall stiffness _ * _ are made from an optimal material that minimizes bending energy ._ two cellular structures with seven and fourteen base pentagons are shown in figure [ pic : figure_7 ] for the same two target shapes .it can be seen that doubling the number of base pentagons approximately halves the cell side lengths and bending angles . furthermore , the total number of cell sides is doubled since note that , and are the number of cell rows , cell sides and base pentagons .thus it can be concluded that the number of base pentagons does not affect the sum of all cell side lengths .the stiffness of a single cell is constant if its pressure potential is constant . 
as a consequence ,halving cell sizes and thus quartering cross sectional cell areas requires a fourfold increase in cell pressures .however , an eightfold increase in cell pressures is required to preserve the overall structural stiffness since there are now twice as many cells in each row .the scaled cell side lengths , bending angles and pressures can be written for a scaling factor as thus the scaled cell side forces are and the scaled pressure potential of the whole structure is inserting the scaled expressions for bending angles and cell side forces into the previously derived equations for optimal material properties results in and thus furthermore it is assumed that the scaled hinge eccentricities are proportional to the scaled cell side thicknesses at cell corners so that it can be seen that cell side thicknesses at cell corners decrease sublinear for an increasing number of base pentagons .hence , from a geometrical point of view , relative cell side thicknesses become larger for decreasing cell sizes .the scaled bending energy of a cellular structure is since the number of elastic continua is proportional to the number of base pentagons .the scaled moment along a central cell side is so that its bounds for are and the corresponding bounds for the scaled thickness distribution along a cell side are where the lower bound is determined for . hence it can be concluded that in summary it can be said that an increasing number of base pentagons reduces the total volume of a cellular structure and increases the ratio between pressure and elastic energy .this might be an explanation for the small cell sizes and relatively large cell pressures that are found in nastic plants .figure [ pic : figure_8 ] illustrates the previous results .reducing cell sizes while preserving the overall structural stiffness requires increasing cell pressures .gasses are the best choice for large cell volumes and small pressures whereas incompressible liquids are better for small volumes and large pressures .for example , the pressure of a given mass of gas for a varying volume is where and are the reference pressure and volume . 
hence the change of pressure potential due to a volume change from to is in contrast, the pressure potential of an incompressible hydraulic fluid is always zero .furthermore , the weight of air at 30 mpa exceeds 300 kg / m which is about one third of the weight of common hydraulic fluids .therefore , the compressibility of gasses limits the usable upper pressure for most applications to about 1 mpa due to safety and energy efficiency concerns .in contrast , hydraulic systems are often operated at pressures of up to 70 mpa .depending on the geometry , material and pressures of a cellular structure it is possible to use numerical models of varying complexity as illustrated in figure [ pic : figure_9 ] .rigid cell sides and central , frictionless hinges at cell corners can be used if the ratio between pressure and elastic energy of a cellular structure is large .a decreasing ratio between pressure and bending energy requires the consideration of rotational springs .this can be done in a first step by assuming rotational springs around central cell corner hinges and in a second step by additionally taking into account hinge eccentricities .furthermore , a decreasing ratio between pressure and axial strain energy requires the consideration of axial strains .the influences of axial cell side strains , compliant hinges and hinge eccentricities on the equilibrium configurations of a cellular structure are subsequently studied .this is done by using an example structure that consists of two cell rows and 60 base pentagons as shown in figure [ pic : figure_10 ] .it can be seen that all pentagonal and hexagonal cells are identical .furthermore , cell side lengths are chosen such that the structure deforms into a half , full circle for given cell pressures , rigid sides and central frictionless hinges .the reference configuration is chosen in between both equilibrium configurations to minimize changes in cell corner angles .axial cell side strains , compliant hinges or hinge eccentricities affect both equilibrium configurations .hence , deviations from the half , full circle are subsequently used to measure their impact as shown in figure [ pic : figure_11 ] .axial strains alter cell side lengths and thus change the subtended angles of both equilibrium configurations . the numerical model used to simulate their influenceis based on elastic bars and central frictionless cell corner hinges .an average thickness is used for cell sides with a varying thickness .it can be seen that deviations from the full circle are larger than from the half circle .furthermore , all deviations decrease for an increasing young s modulus and a decreasing safety factor .+ compliant cell corners reduce the shape changing capability of a cellular structure . 
the numerical model used to simulate their influence is based on rigid bars , central frictionless cell corner hinges and rotational springs between neighbouring cell sides .the thickness of each side is optimized such that the material is fully utilized at either equilibrium configuration .the assumed aspect ratio of the rectangular continua is .it can be seen that deviations from both target shapes are minimal for certain young s moduli .furthermore , deviations increase for a decreasing safety factor due to increasing cell side thicknesses .+ hinge eccentricities in compliant cellular structures are due to finite cell side thicknesses at cell corners .their influence is modeled with stiff bars and frictionless hinges .to simplify matters it is assumed that hinge eccentricities are equal to the cell side thicknesses that were computed for the axial strain model .it can be seen that deviations from the full , half circle decrease for an increasing young s modulus and increase for a decreasing safety factor .in summary it can be said that deviations from both target shapes are mainly due to rotational springs and hinge eccentricities caused by compliant cell corners .furthermore , a decreasing safety factor increases the influence of compliant cell corners and decreases the influence of axial strains . since compliant structures are exposed to fatigue loads it is likely that safety factors are required .hence it can be concluded that the effect of axial strains is negligible in most applications .pressurized prismatic cells can be sealed at both ends via end caps .they have to sustain significant differential pressures while being flexible enough to allow large cross sectional shape changes .the cross sectional geometry of the pentagonal cell shown in figure [ pic : figure_12 ] is subsequently used to study two different end cap designs .it can be seen that a reflection symmetry is assumed so that its geometry is defined by an angle .furthermore the width of each cell side reduces symmetrically towards the end .the region with a reduced width allows a continuous adaption between the non - uniform thicknesses of a cell side and the constant thickness of an end cap .its geometry is described by a cubic polynomial that possesses a point reflection symmetry so that \end{aligned}\ ] ] where is the gradient of the polynomial at .an optimal end cap for a prismatic cell with a fixed cross sectional geometry possesses , like a soap film , an isotropic stress state .deviations from this reference configuration introduce additional , non - isotropic stresses. 
however , these deviations are usually small so that minimal surfaces are a good basis for end cap designs .the shapes of end caps with an isotropic stress state are computed with the updated reference strategy by bletzinger & ramm .subsequently used parameters for end caps are summarized in table [ tab : materialminimalsurface ] .lll + & =1 - 2.6 mm & membrane thickness + + + & =2,000 mpa & young s modulus of membrane and tendons + & =0 & poisson s ratio of membrane + & =30 mpa & target stress for form finding + + + & =2 mpa & cell pressure + & =.75 & state angle of cell geometry .the initial discretization of the cross sectional reference configuration , the resulting minimal surface and the corresponding stresses after ten iterations with the updated reference strategy are shown in figure [ pic : figure_13 ] .it can be seen that the membrane and tendon stresses are nearly uniform .furthermore , the tendons carry the forces from the differential pressure directly to the cell side centers where the thickness and thus the axial stiffness is maximal .the total material volume , deformation energy and the smallest possible cross sectional tendon area are shown in figure [ pic : figure_14 ] for a varying membrane thickness . without using tendonsthe smallest possible membrane thickness is about mm .thinner membranes require local reinforcements through tendons .the lightest end cap does not possess any tendons and has a membrane thickness of mm .however , although the use of tendons increases the weight of an end cap , it reduces the required deformation energy due to the smaller membrane thickness .in general it can be said that it is advantageous to use materials for end caps with large ratios ratio than titan . ] . the argument for this is as follows .the membrane thickness of an end cap is proportional to hence , the additional energy density of a membrane due to an unidirectional axial strain is note that and thus are independent of the membrane thickness .hence , increasing the young s modulus increases the energy required to deform an end cap .the elastic energy required to deform an end cap by is shown in figure [ pic : figure_14 ] .the corresponding change of pressure energy is subsequently derived to get a reference value .the cross sectional area of a pentagonal cell is therefore , the change of the cross sectional area for is so that the change of pressure energy of a one meter long cell is which is significantly larger than the energy needed to deform the end caps .hence it can be concluded that the energy required to deform well designed end caps can be neglected for large cell lengths .pressure actuated cellular structures can be manufactured by using rapid prototyping .however , such an approach is relatively expensive and time consuming and thus limited to a small scale production .a remedy is to separately manufacture the main components by using manufacturing techniques such as injection molding and extrusion .pagitz et al showed that cytoskeletons can greatly enhance the mechanical properties of pressure actuated cellular structures .furthermore , they can be used to simplify the manufacturing and assembly process as illustrated in figure [ pic : figure_15 ] .it can be seen that end caps are manufactured together with cytoskeletons that can additionally reinforce the membrane .these cytoskeletons stick out so that they can be inserted into the cellular structure .this approach has the advantage that the forces that act on an end cap are carried by the cytoskeleton within a 
cell so that the gap between end cap and cellular structure is stress free and thus easy to seal .this article presented analytical expressions for optimal material properties of compliant pressure actuated cellular structures .it was shown that , for given target shapes and stiffness requirements , a cellular structure can be made from a wide range of cell sizes .furthermore it was shown that the use of small cells increases the ratio between pressure and bending energy .this might be an explanation for the relatively small cell sizes found in nastic plants .the required complexity of a numerical model for the accurate simulation of cellular structures depends on the geometry , material and cell pressures .six different numerical models were presented and their application range was studied by means of an example structure . finally , two different end cap designs for the termination of prismatic cells were investigated .it was found that it is possible to construct end caps such that the required deformation energy is relatively small if compared to the pressure energy .material properties need to be transformed for the simulation and optimization of prismatic cellular structures . particularly the young smoduli need to be recomputed to take into account the stiffening effect of three dimensional stress states . the stress - strain relationship for plane stress is = \frac{e}{1-\nu^2 } \left [ \begin{array}{cc } 1 & \nu\\ \nu & 1 \end{array } \right ] \left [ \begin{array}{c } \varepsilon_x\\ \varepsilon_y \end{array } \right ] \end{aligned}\ ] ] = \frac{e}{\left(1+\nu\right)\left(1 - 2\nu\right ) } \left [ \begin{array}{cc } 1-\nu & \nu\\ \nu & 1-\nu \end{array } \right ] \left [ \begin{array}{c } \varepsilon_x\\ \varepsilon_y \end{array } \right ] \end{aligned}\ ] ] it can be seen that the equivalent tensile stress decreases .however , this stress reduction can not be justified for all possible stress states of a cellular structure .hence , for the sake of safety , it is not taken into account .bletzinger , k .- u . ,ramm , e. ( 1999 ) , a general finite element approach to the form finding of tensile structures by the updated reference strategy ._ international journal of space structures _ , * 14 * , 131 - 145 fleurat - lessard , p. , frangne , n. , maeshima , m. , ratajczak , r. , bonnemain , j .- l . and martinoia , e. ( 1997 ) , increased expression of vacuolar aquaporin and h-atpase related to motor cell function in mimosa pudica l. , _ plant physiology _ , * 114 * , 827 - 834 .
|
a novel concept for pressure actuated cellular structures was published in pagitz et al 2012 bioinspir . biomim . 7 . the corresponding mathematical foundation for the simulation and optimization of compliant cellular structures with eccentric cell corner hinges was published in pagitz 2015 arxiv:1403.2197 . the aim of this article is threefold : _ first _ , analytical expressions for optimal materials of compliant cellular structures with identical properties are derived as a function of cell sizes . it is shown that cellular structures can be made from either a large number of highly pressurized cells that consist of a stiff material or a small number of lowly pressurized cells that consist of a soft material . _ second _ , extensions to the previously published numerical model are presented and their application ranges are determined . _ third _ , end cap designs for prismatic cells are developed that can withstand substantial differential pressures while being flexible enough to allow large cross sectional shape changes . furthermore , a manufacturing approach that is based on cytoskeletons is presented .
|
wireless data traffic is predicted to continue its exponential growth in the coming years , mainly driven by the proliferation of mobile devices with increased processing and display capabilities , and the explosion of available online contents .current wireless architecture is widely acknowledged not to be sufficient to sustain this dramatic growth . a promising approach to alleviate the looming network congestion is to _ proactively _ place popular contents , fully or partially , at the network edge during off - peak traffic periods ( see , for example , , and references therein ) .conventional caching schemes utilize orthogonal unicast transmissions , and benefit mainly from local duplication . on the other hand , by _ coded caching _, a novel caching mechanism introduced in , further gains can be obtained by creating multicasting opportunities even across different requests .this is achieved by jointly optimizing the _ placement _ and _ delivery _ phases .coded caching has recently been investigated under various settings , e.g. , decentralized coded caching , online coded caching , distributed caching , etc .most of the existing literature follow the model in , in the sense that each file is assumed to have a fixed size , and users are interested in the whole file .however , in many practical applications , particularly involving multimedia contents , files can be downloaded at various quality levels depending on the channel and traffic conditions , or device capabilities .this calls for the design of _ lossy _ caching and delivery mechanisms .we model the scenario in which each user has a preset distortion requirement known to the server .for example , a laptop may require high quality descriptions of requested files , whereas a mobile phone is satisfied with much lower resolution .users may request any of the popular files , and the server is expected to satisfy all request combinations at their desired quality levels .we model the files in the server as independent sequences of gaussian distributed random variables . exploiting the successive refinability of gaussian sources ,we derive the optimal caching scheme for the two - user , two - file scenario . for the general case, we propose an efficient coded caching scheme which considers multiple layers for each file , and first allocates the available cache capacity among these layers , and then solves the lossless caching problem with asymmetric cache capacities for each layer .we propose two algorithms for cache capacity allocation , namely _ proportional cache allocation ( pca ) _ and _ ordered cache allocation ( oca ) _ , and numerically compare the performance of the proposed layered caching scheme with the cut - set lower bound .the most related work to this paper is , in which hassanzadeh et al .solve the inverse of the problem studied here , and aim at minimizing the average distortion across users under constraints on the delivery rate as well as the cache capacities . 
in , authors also consider lossy caching taking into account the correlation among the available contents , based on which the tradeoff between the compression rate , reconstruction distortion and cache capacity is characterized for single , and some special two - user scenarios .the rest of the paper is organized as follows .we present the system model in section [ sec1 ] .section [ section:2 ] presents results on the case with two files and two users .general case is investigated in section [ sec2b ] , including a lower bound on the delivery rate .numerical simulations are presented in section [ sec4 ] .finally , we conclude the paper in section vi .we consider a server that is connected to users through a shared , error - free link .the server has a database of independent files , , ... , , where file consists of independent and identically distributed ( i.i.d ) samples , ... , from a gaussian distribution with zero - mean and variance , i.e. , , for .the system operates in two phases . in the _ placement phase _, users caches are filled with the knowledge of the number of users and each user s quality requirement ; but without the particular user demands .each user has a cache of size bits , whose content at the end of the _ placement phase _ is denoted by , .users requests , , , are revealed after the _ placement phase_. in the _ delivery phase _ , the server transmits a single message of size bits over the shared link according to all the users requests and the cache contents . using and , each user aims at reconstructing the file it requests within a certain distortion target .an _ lossy caching code _consists of cache placement functions : where ; one delivery function : where ; and decoding functions : where .note that each user knows the requests of all other users in the delivery phase .we consider quadratic ( squared - error ) distortion , and assume that each user has a fixed distortion requirement , . without loss of generality , let .accordingly , we say that a distortion tuple is _ achievable _ if there exists a sequence of caching codes , such that holds for all possible request combinations .we reemphasize that is not known during the _ placement phase _ , while is known . for a given distortion tuple , we define the _ cache capacity - delivery rate tradeoff _ as follows : note that this problem is closely related to the classical rate - distortion problem .let denote the _ rate - distortion function _ of a gaussian source .we have . in the sequelwe heavily exploit the _ successive refinability _ of a gaussian source under squared - error distortion measure .successive refinement refers to compressing a sequence of source samples in multiple stages , such that the quality of reconstruction improves , i.e. 
, distortion reduces , at every stage .a given source is said to be successively refinable under a given distortion measure if the single resolution distortion - rate function can be achieved at every stage .successive refinement has been extensively studied in the source coding literature ; please see for its use in the caching context .in this section , we characterize the optimal cache capacity - delivery rate tradeoff for the lossy caching problem with two users ( ) and two files ( ) .the target average distortion values for user 1 and user 2 are and , respectively , with .let and be the minimum compression rates that achieve and , respectively ; that is , .this means that , to achieve the target distortion of , the user has to receive a minimum of bits corresponding to its desired file .[ table1 ] [ cols="<,^,^,^,^,^,^,^,^ " , ] we first present lemma 1 specifying the lower bound on the delivery rate for given and in this particular scenario , followed by the coded caching scheme achieving this lower bound .the proof of the lemma is skipped due to space limitations .[ lemma_nk2 ] for the lossy caching problem with , a lower bound on the cache capacity - delivery rate tradeoff is given by the first three terms in ( [ eq44 ] ) are derived from the cut - set lower bound , which will be presented for the general scenario in theorem 1 .based on ( [ eq44 ] ) , we consider five cases depending on the cache capacities of the users , illustrated in fig . 1 : [ fig1 ] and , depending on the distortion requirements of the users , and .,title="fig : " ] _ case i _ : .in this case , . _ caseii _ : , , .we have ._ case iii _ : , , . then ._ case iv _ : , , .it yields ._ case v _ : , . then . next , for each of these cases , we explain the coded caching scheme that achieves the corresponding .we assume that the server employs an optimal successive refinement source code , denoted by the source codeword of length bits that can achieve a distortion of for file .thanks to the successive refinability of gaussian sources , a receiver having received only the first of these bits can achieve a distortion of .we refer to the first bits as the first layer , and the remaining bits as the second layer . in each case, we divide the first layers of codewords and into six disjoint parts denoted by , , and , , , and the second layers into two disjoint parts denoted by , and , , such that for , where denotes the length of the binary sequence ( normalized by ) .table i illustrates the placement of contents in users caches for each case .the second and third rows illustrate how the first and second layers are partitioned for each file .the fourth and fifth rows indicate the cache contents of each user at the end of the _ placement phase_. in all the cases , user 1 caches and user 2 caches .the entries from the 6th row to the 10th specify the size of each portion in each case .for example , the 6th row implies that in _ case i _ , , , , , and the sizes of all other portions are equal to , which is equivalent to dividing into four portions , , and .thus , in the placement phase , user 1 caches , and user 2 caches so that and , which meets the cache capacity constraints .the cache placements of the other 4 cases are presented in a similar manner in table i. next , we focus on the delivery phase. 
we will explain the delivered message in each case to satisfy demands .all other requests can be satisfied similarly , without requiring higher delivery rates ._ case i _ ( ) : the server sends , , , and .thus , the delivery rate is _ case ii _( , , ) : server delivers , and .we have _ case iii _( , , ) : the values of and in table i are given as : and .the server sends , and in the delivery phase , which results in _ case iv _ ( , , ) : the server sends , and we have _ case v _ ( , ) : the cache capacities of both users are sufficient to cache the required descriptions for both files .thus , any request can be satisfied from local caches at desired distortion levels , and we have . for , the proposed caching scheme meets the lower bound in lemma [ lemma_nk2 ] ; and hence , it is optimal , i.e. , we have .in this section , we tackle the lossy content caching problem in the general setting with files and users . recall that the distortion requirements are assumed to be ordered as .let , . exploiting the successive refinability of gaussian sequences , we consider a layered structure of descriptions for each file , where the first layer , called the -description , consists of bits , and achieves distortion when decoded .the layer , called the -refinement , , consists of bits , and having received the first layers , a user achieves a distortion of .the example in section [ section:2 ] illustrates the complexity of the problem ; we had five different cases even for two users and two files .the problem becomes intractable quickly with the increasing number of files and users .however , note that only users , whose distortion requirements are lower than , need to decode the layer for the file they request , for . therefore ,once all the contents are compressed into layers based on the distortion requirements of the users employing an optimal successive refinement source code , we have , for each layer , a lossless caching problem .however , each user also has to decide how much of its cache capacity to allocate for each layer .hence , the lossy caching problem is divided into two subproblems : the lossless caching problem of each source coding layer , and the cache allocation problem among different layers .here we focus on the first subproblem , and investigate centralized lossless caching with heterogeneous cache sizes , which is unsolved in the literature , regarding each layer separately .consider , for example , the refinement layers of all the files .there are only users ( users ) who may request these layers .let user , , allocate ( normalized by ) of its cache capacity for this layer . without loss of generality , we order users according to the cache capacity they allocate , and re - index them , such that .we would like to have symmetry among allocated cache capacities to enable multicasting to a group of users . 
based on this intuition ,we further divide layer into sub - layers , and let each user in allocate of its cache for the first sub - layer , and each user in allocate of its cache for the sublayer , for .overall , we have sub - layers , and users allocate of their caches for sub - layer , whereas no cache is allocated by users .we denote by the size of the sub - layer of the refinement layer , and by the minimum required delivery rate for this sub - layer .the rates , , , should be optimized jointly in order to minimize the total delivery rate for the layer .the optimization problem can be formulated as follows : we explore the achievable based on the existing caching schemes in in and , which are referred to as _ coded delivery _ and _ coded placement _ , respectively .we consider two cases : case 1 ) . in this case ,coded placement scheme of provides no global caching gain .thus , we employ only coded delivery , and illustrate this scheme in our setup by focusing on the sub - layer : users to each allocate of cache capacity , while users to allocate no cache for this sublayer . if , where , we have the first term on the right hand side is due to unicasting to users to , while the second term is the _ coded delivery _ rate to users to given in . based on the memory sharing argument , any point on the line connecting two points , and , is also achievable , i.e. , if $ ] , then we have where ; and if , we have case 2 ) . in this case , _ coded placement _ outperforms _ coded delivery _ if the allocated cache capacity satisfies .note that for the sub - layer , there are users with no cache allocation .if , there will be no gain with either schemes .when and , the delivery rate of _ coded placement _is when , the delivery rate is given by the lower convex envelope of points given by ( [ eq6 ] ) and , and for , given by ( [ eq3 ] ) .we propose two algorithms for cache allocation among layers : _ proportional cache allocation _ ( pca ) and _ ordered cache allocation _ ( oca ) , which are elaborated in algorithms 1 and 2 , respectively , where is as defined earlier , and we let . .[ alg3 ] [ alg4 ] pca allocates each user s cache among the layers it may request proportionally to the sizes of the layers , while oca gives priority to lower layers .the server can choose the one resulting in a lower delivery rate .numerical comparison of these two allocation schemes will be presented in section v. the following lower bound is obtained using cut - set arguments .( cut - set bound ) for the lossy caching problem described in section [ sec1 ] , the optimal achievable delivery rate is lower bounded by this section , we numerically compare the achievable delivery rates for uncoded caching , the proposed caching schemes , and the lower bound . in fig .2 , we consider users and files in the server .cache sizes of the users are identical , i.e. , .the distortion levels are such that .while we observe that the proposed coded caching scheme greatly reduces the delivery rate , oca performs better for small cache sizes , while pca dominates as increases .using memory sharing , we can argue that the dotted curve in fig .2 , which is obtained through the convex combination of the delivery rates achieved by the two proposed schemes , is also achievable .[ fig4 ] in fig .3 , we consider the same setting but with heterogeneous cache sizes , where , for . 
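To make the layered descriptions and the two allocation rules concrete, the following sketch computes the per-layer rates of unit-variance Gaussian files for ordered distortion targets and then splits a user's cache either proportionally to the layer sizes (in the spirit of pca) or lowest layer first (in the spirit of oca). The number of files, the distortion targets and the cache capacity are illustrative, and the exact normalizations of algorithms 1 and 2 are not reproduced.

```python
# Sketch of the layered Gaussian descriptions and the two cache-allocation
# rules (PCA / OCA).  Unit-variance files, distortions D1 >= D2 >= ... >= DK.
# All numbers are illustrative assumptions.
import numpy as np

N = 4                                   # number of files
D = np.array([0.50, 0.20, 0.05])        # distortion targets of users 1..K
prev = np.concatenate(([1.0], D[:-1]))  # sigma^2 = 1 plays the role of D0
r = 0.5 * np.log2(prev / D)             # rate of layer k per file (bits/sample)
layer_size = N * r                      # total size of layer k in the database

def pca(M, k):
    """User k spreads cache M over layers 1..k proportionally to their sizes."""
    w = layer_size[:k]
    return M * w / w.sum()

def oca(M, k):
    """User k fills the lowest layers first, up to each layer's total size."""
    alloc, left = np.zeros(k), M
    for l in range(k):
        alloc[l] = min(left, layer_size[l])
        left -= alloc[l]
    return alloc

M = 1.0   # assumed cache capacity, in the same (normalized) units as layer_size
for k in range(1, len(D) + 1):
    print(f"user {k}: pca={np.round(pca(M, k), 3)}  oca={np.round(oca(M, k), 3)}")
```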
in thissetting , pca allocates the same amount of cache to each layer at different users , which creates symmetry among the caches .the achievable delivery rates in fig . 3 illustrate significant improvements in coded caching with pca over both uncoded and oca schemes in terms of the achievable delivery rates .we observe that the gains become more significant as the cache capacity , , increases . while the lower bound is not tight in general , we see in both figures that the pca performance follows the lower bound with an approximately constant gap over the range of values considered .we investigated the lossy caching problem where users have different distortion requirements for the reconstruction of contents they request .we proposed a coded caching scheme that achieves the information - theoretic lower bound for the special case with two users and two files .then , we tackled the general case with users and files in two steps : delivery rate minimization , which finds the minimum delivery rate for each layer separately , and cache allocation among layers .we proposed two different algorithms for the latter , namely , pca and oca .our simulation results have shown that the proposed pca scheme improves the required delivery rate significantly for a wide range of cache capacities ; and particularly when the users cache capacities are heterogenous .n. golrezaei , k. shanmugam , a. g. dimakis , a. f. molisch and g. caire , `` femtocaching : wireless video content delivery through distributed caching helpers , '' in _ proc .ieee infocom _ ,orlando , fl , mar .2012 , pp.11071115 .
|
_ centralized coded caching _ of popular contents is studied for users with heterogeneous distortion requirements , corresponding to diverse processing and display capabilities of mobile devices . users distortion requirements are assumed to be fixed and known , while their particular demands are revealed only after the _ placement phase_. modeling each file in the database as an independent and identically distributed gaussian vector , the minimum _ delivery rate _ that can satisfy any demand combination within the corresponding distortion target is studied . the optimal delivery rate is characterized for the special case of two users and two files for any pair of distortion requirements . for the general setting with multiple users and files , a layered caching and delivery scheme , which exploits the successive refinability of gaussian sources , is proposed . this scheme caches each content in multiple layers , and it is optimized by solving two subproblems : lossless caching of each layer with heterogeneous cache capacities , and allocation of available caches among layers . the delivery rate minimization problem for each layer is solved numerically , while two schemes , called the _ proportional cache allocation ( pca ) _ and _ ordered cache allocation ( oca ) _ , are proposed for cache allocation . these schemes are compared with each other and the cut - set bound through numerical simulations .
|
in order to motivate the problem dealt in this paper , we have considered the results of an experiment carried out by doll and pygott ( 1952 ) to assess the factors influencing the rate of healing of gastric ulcers .two treatments groups were compared .patients in group 2 were treated in bed in hospital for four weeks .for the first two weeks they were given a moderate strict orthodox diet and for the last two weeks a more liberal one .they were then reexamined radiographically , discharged , recommended to continue on a convalescent diet and advised return to work as soon as they felt fit enough .patients in group 1 were discharged immediately .they were treated from the outset in the way that group 2 patients were treated after their month s stay in hospital . in table[ tttt1 ] , we present the results showed by doll and pygott ( 1952 , table iv ) for three months after starting the treatments .this article proposes new families of test - statistics when we are interested in studying the possibility that the ulcer treatment ( treatment ) is better than the control ( treatment ) .2.8pt {lcccc}\hline & larger & ] <(y=1|x=1)(y=2|x=1)(y=3|x=1)(y=4|x=1)(y=1|x=2)(y=2|x=2)(y=3|x=2)(y=4|x=2) there are several ways of formulating the statement the treatment is better than the control .initially , we shall consider that treatment is at least as good as treatment if the ratio increases as the response category , , increases , i.e. and treatment 2 is better than the treatment 1 if ( [ eq1 ] ) holds with at least one strict inequality .if we assume that treatment 2 is at least as good as treatment 1 , i.e. , ( [ eq1 ] ) holds , is there any evidence to support the claim that treatment is better ?in such a case null and alternative hypotheses may be the null hypothesis means that both treatments are equally effective , while the alternative hypothesis means that treatment 2 is more effective than treatment 1 .note that if we multiply on the left and right hand side of ( [ eq2 ] ) and ( [ eq3 ] ) by we obtain where is the number of ordered categories for response variable , are local odds ratios associated with response category , and in case of considering the opposite inequalities given in ( [ eq3 ] ) or ( [ eq3b ] ) , the easiest way to carry out the test is to exchange the observation of the two rows in the contingency table ( in the example , treatment in the first row and treatment in the second row ) . in this way ,the mathematical background is not changed but the interpretation of the aim is changed . in the examplehowever , there is no sense in considering that the control ( ) is better than the treatment ( ) , if the experiment is carried out with humans and it is assumed that the treatment will not harm these patients .the non - parametric statistical inference associated with the likelihood ratio ordering for two multinomial samples was introduced for the first time in dykstra et al .( 1995 ) using the likelihood ratio test - statistic . in the literature related to different types of orderings , in generalthere is not very clear what is the most appropriate ordering to compare two treatments according to a categorized ordinal variable . 
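the ordering in ( [ eq1 ] ) can be checked mechanically : treatment 2 is at least as good as treatment 1 in the likelihood ratio order exactly when every local odds ratio of the 2 x J table of conditional probabilities is at least one , and treatment 2 is better when at least one of them is strictly greater than one . the following python sketch ( illustrative function names , made - up probability vectors ) encodes that check .

```python
import numpy as np

def local_odds_ratios(p1, p2):
    """Local odds ratios theta_j = (p1[j] * p2[j+1]) / (p1[j+1] * p2[j]) for two
    rows of conditional probabilities p(.|x=1) and p(.|x=2)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return (p1[:-1] * p2[1:]) / (p1[1:] * p2[:-1])

def satisfies_lr_order(p1, p2, strict=False):
    """Treatment 2 is at least as good as treatment 1 in the likelihood-ratio
    order iff p2[j]/p1[j] is non-decreasing in j, i.e. all local odds ratios >= 1;
    'better' additionally requires at least one strict inequality."""
    theta = local_odds_ratios(p1, p2)
    ok = bool(np.all(theta >= 1.0))
    return ok and (bool(np.any(theta > 1.0)) if strict else True)

if __name__ == "__main__":
    p_control = [0.40, 0.30, 0.20, 0.10]   # illustrative conditional probabilities
    p_treat   = [0.20, 0.25, 0.25, 0.30]
    print(local_odds_ratios(p_control, p_treat))
    print(satisfies_lr_order(p_control, p_treat, strict=True))
```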
in the case of having two independent multinomial samples ,the likelihood ratio ordering is the most restricted ordering type ; for example , if the likelihood ratio ordering holds , then the simple stochastic ordering also holds .dardanoni and forcina ( 1998 ) proposed a new method for making statistical inference associated with different types of orderings . for unifying and comparing different types of orderings , they reparametrize the initial model .different ordering types can be considered to be nested models and the likelihood ratio ordering is the most parsimonious one .the advantage of nested models is that the most restricted models tend to be more powerful for the alternatives that belong to the most restricted alternatives . in thissetting , our proposal in this paper is to introduce new test - statistics that provide substantially better power for testing ( [ eq2 ] ) against ( [ eq3 ] ) .the structure of the paper is as follows . in section [ sec : lm ] , we have considered the likelihood ratio order associated with a non - parametric model , as in dardanoni and forcina ( 1998 ) , but the specification of the model through a saturated loglinear model is substantially different .section [ sec : pd ] presents the phi - divergence test - statistics as extension of the likelihood ratio and chi - square test - statistics .the applied methodology in section [ sec : main results ] for proving the asymptotic distribution of the phi - divergence test - statistics , based on loglinear modeling , has been developed by following a completely new and meaningful method even for the likelihood ratio test . a numerical example is given in section [ sec : numerical example ] .the aim of section [ sec : simulation study ] is to study through simulation the behaviour of the phi - divergence test - statistics for small and moderate simple sizes .finally , we present an appendix in which we establish the part of the proofs of the results not shown in section [ sec : main results ] .we display the whole distribution of , given in ( [ eq5 ] ) , in a rectangular table having rows for the categories of and columns for the categories of ( for the initial example , table [ ttt2 ] ) and we denote the matrix , with two rows of probability vectors , , .we consider two independent random samples , , where sizes are prefixed and , that is the probability distribution of r.v . is product - multinomial .let be the joint probability distribution .since , i.e. , , where , we can express ( [ 2 ] ) also in terms of the joint probabilities let , with , , be the probability matrix and a probability vector obtained by stacking the columns of ( i.e. , the rows of matrix ) .note that the components of are ordered in lexicographical order in .the likelihood function of is , where is a constant which does not depend on and the kernel of the loglikelihood function in matrix notation , we are interested in testing where is the -vector of -s , . note that ( [ 4 ] ) involves non - linear constraints on , defined by ( [ 0 ] ) . 
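as a small concrete illustration of the product - multinomial setup , the kernel of the log - likelihood is just the sum of the observed counts times the logarithm of the corresponding joint probabilities , the combinatorial constant being irrelevant for estimation . the sketch below uses the 2 x 4 counts that are hard - coded later in the appendix program ; plugging in the unrestricted estimate of the joint probabilities is only for illustration .

```python
import numpy as np

def loglik_kernel(counts, p):
    """Kernel of the product-multinomial log-likelihood, sum_ij n_ij * log p_ij,
    with the combinatorial constant dropped; counts and p are I x J arrays."""
    counts, p = np.asarray(counts, float), np.asarray(p, float)
    mask = counts > 0
    return float(np.sum(counts[mask] * np.log(p[mask])))

if __name__ == "__main__":
    counts = np.array([[11., 8., 8., 5.],
                       [6., 4., 10., 12.]])   # counts used in the appendix program
    n = counts.sum()
    p_unrestricted = counts / n               # unrestricted estimate of the joint probabilities
    print(loglik_kernel(counts, p_unrestricted))
```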
in this articlethe hypothesis testing problem is formulated making a reparametrization of using the saturated loglinear model , so that some linear restrictions are considered with respect to the new parameters .this fact is important and interesting .focussed on , the saturated loglinear model with canonical parametrization is defined by with the identifiabilty restrictions it is important to clarify that we have used the identifiability constraints ( [ ident ] ) in order to make easier the calculations and this model formulation for making statistical inference with inequality restrictions with local odds - ratios has been given in this paper for the first time .similar conditions have been used for instance in lang ( 1996 , examples of section 7 ) and silvapulle and sen ( 2005 , exercise 6.25 in page 345 ) .let , denote subvectors of the unknown parameters .the components of are redundant parameters since the term can be expressed in function of using the fact that , i.e. and taking into account that , i.e. in matrix notation ( [ 3 ] ) is given by where is such that the components are defined by ( [ 3 ] ) , is a matrix with being the -vector of ones , the -vector of zeros , the kronecker product ; the full rank design matrix of size , such that with being the identity matrix of order , the matrix of size with zeros . the condition ( [ eq1 ] ) can be expressed by the linear constraint since condition ( [ 6 ] ) in matrix notation is given by , with , is the -th unit vector and is a matrix with -s in the main diagonal and -s in the upper superdiagonal .observe that the restrictions can be expressed also as , and are are nuisance parameters because they do not take part actively in the restrictions .the kernel of the likelihood function with the new parametrization is obtained replacing by in ( [ 0b ] ) , i.e. hypotheses ( [ 4 ] ) can be now formulated as under , the parameter space is and the maximum likelihood estimator ( mle ) of in is .the overall parameter space is and the mle of in is .it is worthwhile to mention that the probability vectors for both parametric spaces , and can be obtained by following the invariance property of the mles first estimating and later plugging it into , however has an explicit expression, where ( see christensen ( 1997 ) , section 2.3 , for more details ) .the likelihood ratio statistic for testing ( [ 4 ] ) , equivalent to one given by dykstra et al .( 1995 ) but adapted for loglinear modeling , is where , , .taking into account the identifiability constraints ( [ ident ] ) and , , , ( see formulas ( [ u])-([u1 ] ) ) , ( [ lrt ] ) can also be expressed as the chi - square statistic for testing ( [ 4 ] ) is the kullback - leibler divergence measure between two -dimensional probability vectors and is defined as and the pearson divergence measure it is not difficult to check that and being the vector of relative frequencies .more general than the kullback - leibler divergence and pearson divergence measures are -divergence measures , defined as where is a convex function such that from a statistical point of view , the first asymptotic statistical results based on divergence measures in multinomial populations were obtained in zografos et al .( 1990 ) . 
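the divergence measures just introduced are easy to compute . the sketch below , written in python with illustrative probability vectors , evaluates a generic phi - divergence and the kullback - leibler , pearson and cressie - read power - divergence members of the family under the usual conventions ( phi convex with phi(1) = 0 , the kullback - leibler divergence recovered as lambda tends to 0 , and the pearson divergence , up to the customary factor 1/2 , obtained at lambda = 1 ) .

```python
import numpy as np

def phi_divergence(p, q, phi):
    """Phi-divergence D_phi(p, q) = sum_j q_j * phi(p_j / q_j) between two
    probability vectors, for a convex phi with phi(1) = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * phi(p / q)))

phi_kl      = lambda t: t * np.log(t) - t + 1     # Kullback-Leibler divergence
phi_pearson = lambda t: 0.5 * (t - 1.0) ** 2      # (half of) the Pearson chi-square divergence

def phi_power(lam):
    """Cressie-Read power-divergence family; lam -> 0 recovers KL and lam = 1
    gives the Pearson member (with the factor-1/2 convention above)."""
    if abs(lam) < 1e-12:
        return phi_kl
    if abs(lam + 1.0) < 1e-12:
        return lambda t: -np.log(t) + t - 1
    return lambda t: (t ** (lam + 1) - t - lam * (t - 1)) / (lam * (lam + 1))

if __name__ == "__main__":
    p = np.array([0.1, 0.2, 0.3, 0.4])            # illustrative vectors
    q = np.array([0.25, 0.25, 0.25, 0.25])
    print(phi_divergence(p, q, phi_kl))
    print(phi_divergence(p, q, phi_power(0.0)))   # equals the KL value
    print(phi_divergence(p, q, phi_power(2 / 3))) # Cressie-Read recommended lambda
```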
for more details about -divergence measuressee pardo ( 2006 ) and cressie and pardo ( 2002 ) .apart from the likelihood ratio statistic ( [ lrt ] ) and the chi - square ( [ cs ] ) statistic , we shall consider two new families of test - statistics based on -divergence measures .the first new family is obtained by replacing in ( [ n1 ] ) the kullback divergence measure by a -divergence measure, the second new family is obtained by replacing in ( [ n2 ] ) the pearson divergence measure by a -divergence measure, if we consider in ( [ 5a ] ) , we get , and if we consider in ( [ 5a ] ) , we get .test - statistics based on -divergence measures have been used in the framework of loglinear models for some authors , see cressie and pardo ( 2000 , 2002 , 2003 ) , martn and pardo ( 2006 , 2008b , 2011 ) .as starting point , we shall establish the observed fisher information matrix associated with , , for a loglinear model with product - multinomial sampling as where is the diagonal matrix of vector .to proof ( [ fim1 ] ) , we take into account that the overall observed fisher information matrix for product multinomial sampling is the weighted observed fisher information matrix associated with each multinomial sample , , , i.e. such that , and .when , we shall denote to be the true value of the unknown parameter under , and in such a case it holds , where is defined as the probability vector with the terms given in ( [ eq5 ] ) and related to the loglinear model through , .notice that is fixed as and we shall assume that is fixed but unknown , i.e. , .we shall also denote the -dimensional vector obtained removing from the last element .focussing on the parameter structure , with , and the specific structure of , see ( [ w ] ) , we shall establish asymptotically the specific shape of ( [ fim1 ] ) , a fundamental result for the posterior theorems .the asymptotic fisher information matrix of , when is given by replacing by and the explicit expression of in the general expression of the finite sample size fisher information matrix for two independent multinomial samples , ( [ fim1 ] ) , we obtain through the property of the kronecker product given in ( 1.22 ) of harville ( 2008 , page 341 ) that and then the following theorem establishes that the asymptotic distribution of the families of test statistics ( [ 5a ] ) and ( [ 5b ] ) corresponds to a -dimensional chi - bar squared random variable , a mixture of chi - squared distributions .let be the whole set of all row - indices of matrix , the family of all possible subsets of , and is a submatrix of with row - ndices belonging to .we must not forget that and therefore .we denote by the following tridiagonal matrix and by the submatrix of obtained by deleting from it the row - indices contained in the set and column - indices contained in the set .[ th1]under , the asymptotic distribution of and is where a.s .and is the set of weights such that and where and denotes the cardinal of the set . by following similar arguments of martn and balakrishnanwe obtain ( see appendix [ proofth1contra ] , for the details ) .in particular , with , i.e. where is ( [ if ] ) . by following the properties of the inverse of the kronecker product for calculating the inverse of ( [ if]), and replacing it in the previous expression of , which is equal to ( [ h ] ) . 
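per the theorem above and the `kmatrices` / `hmatrix` subroutines of the appendix listing , the matrix that drives the chi - bar - squared weights has the kronecker form of two tridiagonal factors built from the sampling fractions and the pooled column probabilities . the sketch below reproduces that construction and adds a generic chi - bar - squared tail probability ; the weight - indexing convention ( `weights[k]` attached to the chi - square with k degrees of freedom , `weights[0]` to the point mass at zero ) is ours , and the numerical inputs in the demonstration are purely illustrative .

```python
import numpy as np
from scipy.stats import chi2

def tridiag_K(w):
    """Tridiagonal factor with (w_k + w_{k+1}) / (w_k * w_{k+1}) on the diagonal
    and -1 / w_{k+1} on the off-diagonals, as built by `kmatrices` in the appendix."""
    w = np.asarray(w, float)
    m = len(w) - 1
    K = np.zeros((m, m))
    for k in range(m):
        K[k, k] = (w[k] + w[k + 1]) / (w[k] * w[k + 1])
        if k + 1 < m:
            K[k, k + 1] = K[k + 1, k] = -1.0 / w[k + 1]
    return K

def H_matrix(nu, pi):
    """Kronecker product K1(nu) x K2(pi) of size (I-1)(J-1), as in `hmatrix`."""
    return np.kron(tridiag_K(nu), tridiag_K(pi))

def chibar_pvalue(stat, weights):
    """Tail probability of a chi-bar-squared mixture.  Convention used here:
    weights[0] is the weight of the point mass at zero and weights[k] the weight
    of the chi-square with k degrees of freedom."""
    if stat <= 0.0:
        return 1.0
    dfs = np.arange(1, len(weights))
    return float(np.sum(np.asarray(weights)[1:] * chi2.sf(stat, dfs)))

if __name__ == "__main__":
    nu = [0.5, 0.5]                   # illustrative sampling fractions n_i / n
    pi = [0.25, 0.25, 0.25, 0.25]     # illustrative pooled column probabilities
    print(H_matrix(nu, pi))           # 3 x 3 matrix for a 2 x 4 table
    w = [0.125, 0.375, 0.375, 0.125]  # illustrative weights only
    print(chibar_pvalue(6.2, w))
```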
even though there is an equality in ( [ 4b ] ) , is not a fixed vector under the null hypothesis since such an equality is effective only for , and thus is a vector of nuisance parameters .this means that we have a composite null hypothesis which requires estimation of , through and we can not use directly the results based on theorem [ th1 ] .the tests performed replacing the parameter of the asymptotic distribution by are called local tests ( see dardanoni and forcina ( 1998 ) ) and they are usually considered to be good approximations of the theoretical tests . in relation to the weights , , there are explicit expressions when based on the matrix given in ( [ h ] ) and formulas ( 3.24 ) , ( 3.25 ) and ( 3.26 ) in silvapulle and sen ( 2005 , page 80 ) . when , .when , the estimators of the weights are{l}w_{0}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{2}-w_{2}(\widehat{\boldsymbol{\theta}}),\\ w_{1}(\widehat{\boldsymbol{\theta}})=\frac{1}{2},\\ w_{2}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{2\pi}\arccos\widehat{\rho}_{12 } , \end{array } \right .\label{weightsj=3}\ ] ] where is the correlation associated with the -th and -th variable of a central random variable with variance - covariance matrix where . when ,{l}w_{0}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{4\pi}\left ( 2\pi -\arccos\widehat{\rho}_{12}-\arccos\widehat{\rho}_{13}-\arccos\widehat{\rho } _ { 23}\right ) , \\w_{1}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{4\pi}\left ( 3\pi -\arccos\widehat{\rho}_{12\cdot3}-\arccos\widehat{\rho}_{13\cdot2}-\arccos\widehat{\rho}_{23\cdot1}\right ) , \\ w_{2}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{2}-w_{0}(\widehat{\boldsymbol{\theta}}),\\ w_{3}(\widehat{\boldsymbol{\theta}})=\tfrac{1}{2}-w_{1}(\widehat{\boldsymbol{\theta } } ) , \end{array } \right .\label{weightsj=4}\ ] ] which depend on the estimation of the marginal ( [ cor ] ) and conditional correlations associated with the -th and -th variable , given a value of the -th variable , of a central random variable with variance - covariance matrix it is interesting to point out that the factor related to the sample size in each multinomial sample , , have no effect in the expression of estimator for the weights of the chi - bar squared distribution these formulas will be considered in the forthcoming sections .it is worthwhile to mention that the normal orthant probabilities for the weights given in ( [ eqw ] ) , can also be computed for any value of using the ` mvtnorm ` r package ( see http://cran.r-project.org/package=mvtnorm , for details ) .in this section the data set of the introduction ( table [ tttt1 ] ) , where , is analyzed .the sample , a realization of , is summarized in the following vector the order restricted mle under likelihood ratio order , obtained through the ` e04ucf ` subroutine of ` nag ` fortran library ( http://www.nag.co.uk/numeric/fl/fldescription.asp ) , is the estimation of the probability vectors of interest is and the estimation of the weights , based on ( [ weightsj=4 ] ) , are in order to solve analytically the example we shall consider a particular function in ( [ 5a ] ) and ( [ 5b ] ) .taking we get the the power divergence family in such a way that for each a different divergence measure is obtained , and thus it is also possible to cover the real line for , by defining and by considering , , for , i.e. and it is well known that and , which is very interesting since and are members of the power divergence based test - statistics .it is also worthwhile to mention that . 
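the two families of power - divergence test - statistics used in the example can be transcribed directly from the `estt` and `ests` functions of the appendix fortran listing ; the python version below does so . the demonstration at the bottom only exercises the code : it plugs the empirical proportions in place of the order - restricted mle , which in the paper is obtained by constrained optimization ( the nag routine `e04ucf` ) .

```python
import numpy as np

def t_stat(counts, p_tilde, p_hat, lam):
    """Power-divergence test statistic T(lambda), a transcription of the `estt`
    function in the appendix listing; counts, order-restricted probabilities
    p_tilde and null-hypothesis probabilities p_hat are flat arrays."""
    n = np.asarray(counts, float)
    p1, p0 = np.asarray(p_tilde, float), np.asarray(p_hat, float)
    N = n.sum()
    mask = (n > 0) & (p1 > 0) & (p0 > 0)
    n, p1, p0 = n[mask], p1[mask], p0[mask]
    if abs(lam) < 1e-9:                       # likelihood-ratio-type statistic
        return 2.0 * np.sum(n * np.log(p1 / p0))
    if abs(lam + 1.0) < 1e-9:
        return 2.0 * N * np.sum(p0 * np.log(N * p0 / n) - p1 * np.log(N * p1 / n))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(n * ((n / (N * p0)) ** lam - (n / (N * p1)) ** lam))

def s_stat(counts, p_tilde, p_hat, lam):
    """Power-divergence test statistic S(lambda), a transcription of the `ests`
    function in the appendix listing; lambda = 1 gives the Pearson-type statistic."""
    n = np.asarray(counts, float)
    N = n.sum()
    p1, p0 = np.asarray(p_tilde, float), np.asarray(p_hat, float)
    mask = (p1 > 0) & (p0 > 0)
    p1, p0 = p1[mask], p0[mask]
    if abs(lam) < 1e-9:
        return 2.0 * N * np.sum(p1 * np.log(p1 / p0))
    if abs(lam + 1.0) < 1e-9:
        return 2.0 * N * np.sum(p0 * np.log(p0 / p1))
    return 2.0 * N * (np.sum(p1 ** (lam + 1.0) / p0 ** lam) - 1.0) / (lam * (lam + 1.0))

if __name__ == "__main__":
    counts = np.array([11., 8., 8., 5., 6., 4., 10., 12.])   # data of the appendix program
    table = counts.reshape(2, 4)
    nu = table.sum(axis=1) / counts.sum()
    pi = table.sum(axis=0) / counts.sum()
    p_hat = np.outer(nu, pi).ravel()        # MLE under the null (equal treatment effect)
    p_tilde = counts / counts.sum()         # placeholder: NOT the order-restricted MLE
    print(t_stat(counts, p_tilde, p_hat, 2 / 3), s_stat(counts, p_tilde, p_hat, 1.0))
```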
in table[ t1 ] , the power divergence based test - statistics for some values of in , and their corresponding asymptotic -values are shown . in all of them it is concluded , with a significance level equal to , that an equal effect of both treatments is rejected and hence the treatment is more effective than the control to heal the ulcer . 2.8pt {cccccccccc}\hline\hline test - statistic & ] and .for the corresponding test - statistic, if , then and ; if , then and .hence , both one sided test - statistics , the composite null one , , and the simple null one , , are almost equal and {ll}\frac{1}{2}\pr\left ( \chi_{1}^{2}>2{\displaystyle\sum\limits_{i=1}^{2 } } { \displaystyle\sum\limits_{j=1}^{2 } } n_{ij}\log\frac{n_{ij}/n_{i}}{n_{\bullet j}/n}\right ) , & \text{if } \frac{n_{11}}{n_{1}}\geq\frac{n_{21}}{n_{2}},\\ 1 , & \text{if } \frac{n_{11}}{n_{1}}<\frac{n_{21}}{n_{2}}. \end{array } \right.\ ] ] the mid - rank test - statistic for ( [ tt1 ] ) and ( [ tt2b ] ) is the same , ( [ wilc ] ) , as well as the distribution under the null , but } { 12}}}\right)\ ] ] for ( [ tt1 ] ) and } { 12}}}\right)\ ] ] for ( [ tt2b ] ) .2.8pt [ c]ct1.pdf + t2.pdf + w.pdf the following short simulation study considers realizations , , , , of with and and . in figure [ fighh ] a histogram of , and shown where the shape of the density function of each can be recognized . in table[ tthh ] , the simulated significance levels ( ) and powers ( ) are calculated as the proportion of statistics with -values smaller than the nominal level .the test - statistic based on the hellinger distance , given in ( [ hel ] ) , is also included . from this simulation studyit is concluded that the likelihood ratio test - statistic and the wilcoxon mid - rank test for contingency tables , are specific procedures for the one sided test ( [ tt1 ] ) since the parameter spaces are different , but are strongly related with the two sided test ( [ tt2b ] ) in the way of calculating the value of the test - statistic and the corresponding -value .it is remarkable that the simulated significance level for the one - sided wilcoxon mid - rank test for contingency tables exhibits a slightly better approximation of the nominal level in comparison with the likelihood ratio test for the one sided test ( [ tt1 ] ) , and the likelihood ratio test slightly better than the test - statistic based on the hellinger distance .the powers of the test - statistics are calculated for . the test - statistic based on the hellinger distance has the greatest power and the wilcoxon mid - rank test the smallest power for the one sided test ( [ tt1 ] ) . in section [ sim ] a more extensive simulation study is considered with a criterion to select the best test - statistic within a broader class of power divergence based test - statistics .finally , the two sided test - statistics , and , exhibit a worse power than the one sided test - statistics .this behaviour was obviously expected , since being or equivalently , the one sided tests have always a better power than the two sided tests .2.8pt [ c]cccccccccccc & & ( one sided ) & & ( one sided ) & & ( two sided ) & & one sided & & two sided & + & & & & & & & & & & & + & & & & & & & & & & & + in this section the performance of the power divergence test statistics ( [ pd1])-([pd6 ] ) is studied in terms of the simulated exact size and simulated power of the test , based on small and moderate sample sizes . 
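the wilcoxon mid - rank statistic discussed above , with the tie - corrected normal approximation , can be written in a few lines ; the sketch below mirrors the computation in the appendix program ( mid - ranks from the column margins , rank sum of the first row , lower - tail p - value ) and is not an independent derivation .

```python
import numpy as np
from scipy.stats import norm

def wilcoxon_midrank(table):
    """One-sided Wilcoxon mid-rank test for a 2 x J ordered contingency table,
    mirroring the appendix program: mid-ranks from the column margins, rank sum
    of row 1, tie-corrected normal approximation, lower-tail p-value."""
    table = np.asarray(table, float)
    n1, n2 = table.sum(axis=1)
    N = n1 + n2
    marg = table.sum(axis=0)                                        # column margins
    ranks = np.cumsum(np.concatenate(([0.0], marg[:-1]))) + (marg + 1.0) / 2.0
    W = float(np.sum(ranks * table[0]))                             # mid-rank sum of row 1
    mean = n1 * (N + 1.0) / 2.0
    var = n1 * n2 * (N + 1.0) / 12.0 \
        - n1 * n2 * np.sum(marg ** 3 - marg) / (12.0 * N * (N - 1.0))
    z = (W - mean) / np.sqrt(var)
    return W, z, float(norm.cdf(z))   # small W (low ranks in row 1) favours treatment 2

if __name__ == "__main__":
    table = [[11, 8, 8, 5],
             [6, 4, 10, 12]]          # gastric-ulcer healing data of the appendix program
    print(wilcoxon_midrank(table))
```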
a simulation experiment with seven scenarios is designed in table [ tt ] , taking into account the sample sizes of the two independent samples .the pairs of scenarios ( a , g ) , ( b , f ) and ( c , e ) should have very similar exact significance levels , since the sample sizes of the two samples are symmetrical ( the ratio of one sample is the inverse of the other one ) .with respect to the choice of , the parameters for the power divergence test statistics , the interest is focused on the interval ] n_1.25.8.5.2 the algorithm described in section [ sec : numerical example ] is taken into account to calculate the -value of each test - statistic } ] and on the right for the test - statistic } ] , in all the scenarios .this criterion has been considered for some authors , see for instance cressie et al .( 2003 ) and martn and pardo ( 2012 ) .the cases satisfying the criterion are marked in bold in table [ alfas ] , and comprise those values in the abscissa of the plot between the dashed band ( the dashed line in the middle represents the nominal size ) , and we can conclude that we must not consider in our study . _ step 2 _ : we compare all the test statistics obtained in step 1 with the classical likelihood ratio test ( ) as well as the classical pearson test statistic ( ) .to do so , we have calculated the relative local efficiencies in figures [ fig2]-[fig7 ] the powers and the relative local efficiencies are summarized .the second rows of the figures represent , while in the third row is plotted , on the left it is considered } ] on the right . in figure [ fig1 ]we show only one row since it represents the atypical case in which the exact powers are less that the exact significance level for the values of satisfying the dale s criterion and so , it does not make sense to compare the powers . 2.8pt [ c]l{ccccccccccc}\hline\hline sc & ] _s_-1.5-1-1/2/3.5 2.8pt [ c]cc & + esca_potencia_t.pdf & esca_potencia_s.pdf 2.8pt [ c]cc & + escb_potencia_t.pdf & escb_potencia_s.pdf + escb_eficiencias_t.pdf & escb_eficiencias_s.pdf + escb_eficiencias_asterisco_t.pdf & escb_eficiencias_asterisco_s.pdf 2.8pt [ c]cc & + escc_potencia_t.pdf & escc_potencia_s.pdf + escc_eficiencias_t.pdf & escc_eficiencias_s.pdf + escc_eficiencias_asterisco_t.pdf & escc_eficiencias_asterisco_s.pdf 2.8pt [ c]cc & + escd_potencia_t.pdf & escd_potencia_s.pdf + escd_eficiencias_t.pdf & escd_eficiencias_s.pdf + escd_eficiencias_asterisco_t.pdf & escd_eficiencias_asterisco_s.pdf 2.8pt [ c]cc & + esce_potencia_t.pdf & esce_potencia_s.pdf + esce_eficiencias_t.pdf & esce_eficiencias_s.pdf + esce_eficiencias_asterisco_t.pdf & esce_eficiencias_asterisco_s.pdf 2.8pt [ c]cc & + escf_potencia_t.pdf & escf_potencia_s.pdf + escf_eficiencias_t.pdf & escf_eficiencias_s.pdf + escf_eficiencias_asterisco_t.pdf & escf_eficiencias_asterisco_s.pdf 2.8pt [ c]cc & + escg_potencia_t.pdf & escg_potencia_s.pdf + escg_eficiencias_t.pdf & escg_eficiencias_s.pdf + escg_eficiencias_asterisco_t.pdf & escg_eficiencias_asterisco_s.pdf the plots are interpreted as follows:*a ) * in all the scenarios a similar pattern is observed when plotting the exact power , , for ] and thus it confirms what was said in a ) . 
on the other hand , comparing the left hand ( ) side of with the right side ( ) and doing the same for , a slightly higher values of the local efficiencies of are seen in comparison with .for this reason we consider that have a better performance than the classical test - statistics , and in scenarios b - e and } ] not very common in the literature of phi - divergence test - statistics . as exception ,notice that where is the hellinger distance between the probability vectors and .therefore , one of the test - statistic we are proposing in this paper is a function of the well - known hellinger distance , which has been used in many different statistical problems .we think that the reason why this happens is related to the robust properties of such a test - statistic , since when dealing with the likelihood ratio ordering , under the alternative hypothesis , on the left side of the contingency table empty cells tend to appear .in particular , the theoretical probability in the first cell for the second treatment , , is the smallest one and this circumstance does influence in the results obtained for skew sample sample sizes in both treatments .the authors acknowledge the referee .we modified and improved the manuscript according to comments and questions pointed by the referee .99 barlow , r. e. , bartholomew , d. j. and brunk , h.d ._ statistical inference under order restrictions_. wiley .bazaraa , m. s. , sherali , h. d. and shetty , c. m. ( 2006 ) ._ nonlinear programming : theory and algorithms _( 3rd edition ) .john wiley and sons .christensen , r. ( 1997 ) ._ log - linear models and logistic regression_. springer .cressie , n. and pardo , l. ( 2002 ) .phi - divergence statistics . _ encyclopedia of environmetrics _ ( a. h. elshaarawi and w. w. piegorich , eds . ) .volume 3 , 1551 - 1555 , john wiley and sons , new york .cressie , n. and pardo , l. ( 2003 ) .minimum phi - divergence estimator and hierarchical testing in loglinear models ._ statistica sinica _ , * 10 * , 867 - 884 .cressie , n. , pardo , l. and pardo , m.c .size and power considerations for testing loglinear models using -divergence test statistics ._ statistica sinica _ , * 13 * , 550 - 570 .dale , j.r .asymptotic normality of goodness - of - fit statistics for sparse product multinomials ._ journal of the royal statistical society _ , * b * , 48 - 59 .dykstra , r. l. , kocbar , s. and robertson , t. ( 1995 ) .inference for likelihood ratio ordering in the two - sample problem ._ journal of the american statistical association _ ,* 90 * , 1034 - 1040 . doll , r. and pygott , f. ( 1952 ) .factors influencing the rate of healing of gastric ulcers ._ lancet _ , * 259 * , 171 - 175 .harville , d. a. ( 2008 ) . _ matrix algebra from a statistician s perspective_. springer .ferguson , t. s. ( 1996 ) ._ a course in large sample theory_. chapman & hall .kud , a. ( 1963 ) . a multivariate analogue of the one - sided test . _ biometrika _ , * 50 * , 403 - 418 .lang , j. b. ( 1996 ) . on the comparison of multinomial and poisson log - linear models ._ journal of the royal statistical society series b _ , 58 , 253 - 266 .letierce ; a.,tubert - bitter , p. , kramar , a. and maccario , j. ( 2003 ) .two - treatment comparison based on joint toxicity and efficacy ordered alternatives in cancer trials . _statistics in medicine _ , * 22 * , 859868 . martin , n. and pardo , l.(2006 ) . 
choosing the best phi - divergence goodness - of - fit statistic in multinomial sampling for loglinear models with linear constraints ._ kybernetika _ , * 42 * , 711722 .martin , n. and pardo , l.(2008a ) .new families of estimators and test statistics in log - linear models ._ journal of multivariate analysis , _ * 99*(8 ) , 15901609 .martin , n. and pardo , l. ( 2008b ) .phi - divergence estimators for loglinear models with linear constraints and multinomial sampling ._ statistical papers _ , * 49 * , 1536 martin , n. and pardo , l. ( 2011 ) .fitting dna sequences through log - linear modelling with linear constraints . _statistics : a journal of theoretical and applied statistics _ , * 45 * , 605 - 621 .martin , n. and pardo , l. ( 2012 ) .poisson - loglinear modeling with linear constraints on the expected cell frequencies ._ sankhya b _ , * 74*(2 ) , 238 - 267 .mehta , c.r . ,patel , n.r . and tsiatis , a.a .exact significance testing to establish treatment equivalence with ordered categorical data . _ biometrics _ , * 40*(3 ) , 819 - 825 .pardo , l. ( 2006 ) ._ statistical inference based on divergence measures_. statistics : series of textbooks and monograhps .chapman & hall / crc .sen , p. k. , singer , j. m. and pedroso de lima , a. c. ( 2010 ) ._ from finite sample to asymptotic methods in statistics_. cambridge university press .shan , g. and ma , c. ( 2004 ) .unconditional tests for comparing two ordered multinomials .statistical methods in medical research ( in press ) .doi : http://dx.doi.org/10.1177/0962280212450957 shapiro , a. ( 1985 ) .asymptotic distribution of test statistics in the analysis of moment structures under inequality constraints ._ biometrika _ , * 72 * , 133144 .shapiro , a. ( 1988 ) . toward a unified theory of inequality constrained testing in multivariate analysis ._ international statistical review _ , * 56 * , 4962 .silvapulle , m. j. and sen . , p. k. ( 2005 ) ._ constrained statistical inference .inequality , order , and shape restrictions_. wiley series in probability and statistics .wiley - interscience ( john wiley & sons ) .zografos , k. , ferentinos , k. and papaioannou , t. ( 1990 ) .-divergence statistics : sampling properties and multinomial goodness of fit and divergence tests ._ communications in statistics - theory and methods _, * 19 * , 1785 - 1802 .suppose we are interested in testing : vs and . with the complete notation ,our interest is, under , the parameter space is and the mle of in is given by .under the alternative hypothesis the parameter space is , where , that is , under both hypotheses , and , the parameter space is and the mle of in is . 
by following the same idea we used for building test - statistics ( [ 5a])-([5b ] ) we shall consider two family of test - statistics based on -divergence measures, and under , the asymptotic distribution of ( [ 5ab ] ) and ( [ 5bb ] ) is with .the second order taylor expansion of function about is where and was defined at the beginning of section [ sec : main results ] .let be the parameter vector such that , where , with , is the saturated log - linear model .in particular , for we have in a similar way it is obtained multiplying both sides of the equality by and taking the difference in both sides of the equality now we are going to generalize the three types of estimators by , understanding that for , , , for , , , and , and as originally defined .it is well - known that where is the true and unknown value of the parameter, is the variance covariance matrix of , and by the central limit theorem .we shall denote taking the differences of both sides of the equality in ( [ eq14 ] ) with cases and , we obtain with cases and , and taking into account , where with and is the cholesky s factorization matrix for a non singular matrix such a fisher information matrix , that is . in other words where the variance covariance matrix is idempotent and symmetric .following lemma 3 in ferguson ( 1996 , page 57 ) , is idempotent and symmetric , if only if is a chi - square random variable with degrees of freedom since the condition is reached .the effective degrees of freedom are given by regarding the other test - statistic , observe that if we take ( [ eq16 ] ) , in particular for it is obtained in addition , ( [ a])([b ] ) is and taking into account and ( [ c ] ) , it follows ( [ d ] ) , which means from slutsky s theorem that both test - statistics have the same asymptotic distribution .let be a -dimensional random variable with normal distribution with being a projection matrix , that is idempotent and symmetric , and let be the fixed -dimensional vectors such that for them either or , , is true .then , where .this result can be found in several sources , for instance in kud ( 1963 , page 414 ) , barlow et al .( 1972 , page 128 ) and shapiro ( 1985 , page 139 ) .we shall perform the proof for .it suppose that it is true and we want to test ( ) .it is clear that if is not true is because there exists some index such that .let us consider the family of all possible subsets in , denoted by , then we shall specify more thoroughly by when there exists such that it is clear that for a sample can be true only for a unique set of indices , and thus by applying the theorem of total probability from the karush - khun - tucker necessary conditions ( see for instance theorem 4.2.13 in bazaraa et al .( 2006 ) ) to solve the optimization problem s.t . , associated with , the only conditions which characterize the mle with a specific , are the complementary slackness conditions , for and , for , since , , , for and , for are redundant conditions once we know that the karush - khun - tucker necessary conditions are true for all the possible sets which define . for this reason we can consider where is the vector of the vector of karush - khun - tucker multipliers associated with estimator . furthermore , under , , because , hence where . on the other hand , ( [ kkt1 ] ) and( [ kkt2 ] ) are also true for according to the lagrange multipliers method .hence , and .it follows that: under , and taking into account proposition [ th1contr ] where . 
under and from sen et al .( 2010 , page 267 formula ( 8.6.28)) where under and from ( [ eq14]) that is, where taking into account that and , by applying the lemma given in section [ lemcontra] where finally, and since , it holds which means that and are independent , that is where the expression of is ( [ eqw ] ) .we have also , the proof of is almost immediate from the proof for and taking into account that for some .... ! -------------------------------------------------------------------------------- !this program is only valid for 2 by 4 contingency tables ! ( for other sizes some changes must be done : ! change the value of j and follow the formulas of the weights ) ! to run it , the nag library is required to have installed ! to change the sample go to line 18 ! the fortran program generates the outputs in 8 text files ! -------------------------------------------------------------------------------- module parglob integer fail integer , parameter : : i=2 , j=4 , nlam=9 double precision pr(i*j ) , w(i*j , i*j-1 ) , rr((i-1)*(j-1),i*(j-1 ) ) , betatil(i*(j-1 ) ) , & phat(i*j ) , zz((i-1)*(j-1 ) ) , tbt((i-1)*(j-1),(i-1)*(j-1 ) ) , bb((i-1)*(j-1),(i-1)*(j-1 ) ) , & we(0:(i-1)*(j-1 ) ) , k1((i-1),(i-1 ) ) , k2((j-1),(j-1 ) ) , hh((i-1)*(j-1),(i-1)*(j-1 ) ) , & hinv((i-1)*(j-1),(i-1)*(j-1 ) ) , ntt , nu(i ) , ppi(j ) , nn(i*j ) , ppit(i , j ) , un , sample(i*j ) , & odds(i-1,j-1 ) , nt(i ) double precision , parameter : : lamb(nlam)=(/-1.5d0,-1.d0,-0.5d0,0.d0,2.d0/3.d0,1.d0,1.5d0 , & 2.d0,3.d0/),del=0.0d0 , pi=3.14159265358979323846264338327950d0 , sample=(/11.d0,8.d0 , & 8.d0,5.d0,6.d0,4.d0,10.d0,12.d0/ ) end module parglob ! --------------------------------------------------------------------------------program example use parglob implicit none integer n , m , ifail double precision estt , ests , pval , table(i , j ) , contt(nlam ) , conts(nlam ) , initheta(i*j-1 ) , & ro(3,2 ) , marg(j ) , rank(j ) , wilc0 , wilc , meanwilc , sdwilc , pvalwilc , g01eaf do n=1,i do m=1,j ppit(n , m)=(1.d0/3.d0)*((1.d0+n*(m-1.d0)*del)/(1.d0+n*del ) ) enddo enddo do n=1,i-1 do m=1,j-1 odds(n , m)=ppit(n , m)*ppit(n+1,m+1)/(ppit(n+1,m)*ppit(n , m+1 ) ) enddo enddo marg = sample(1:j)+sample(j+1:2*j ) rank=0.d0 do n=2,j rank(n)=rank(n-1)+marg(n-1 ) enddo rank = rank+(marg+1.d0)/2.d0 wilc0=sum(rank*sample(1:j ) ) nt(1)=sum(sample(1:j ) ) nt(2)=sum(sample(j+1:2*j ) ) ntt = sum(nt ) nu = nt / ntt meanwilc = nt(1)*(nt(1)+nt(2)+1.d0)/2.d0 sdwilc = nt(1)*nt(2)*(nt(1)+nt(2)+1.d0)/12.d0 sdwilc = sdwilc - nt(1)*nt(2)*sum(marg**3-marg)/(12.d0*(nt(1)+nt(2))*(nt(1)+nt(2)-1.d0 ) ) sdwilc = sqrt(sdwilc ) wilc=(wilc0-meanwilc)/sdwilc call designm ( ) call restricm ( ) nn = sample table = transpose(reshape(nn,(/j , i/ ) ) ) do m=1,j ppi(m)=sum(table(:,m))/ntt enddo initheta=0.d0 call emvh01(initheta ) if ( fail.ne.0 ) then initheta=0.1d0 call emvh01(initheta ) if ( fail.ne.0 ) then initheta=-0.1d0 call emvh01(initheta ) endif endif 21 format ( 20f10.4 ) 22 format ( 20f15.10 ) open ( 10 , file = " theta-tilde.dat " , action="write",status="replace " ) write(10 , * ) " * * theta tilde * * " write(10 , * ) " --------------------------------- " write(10,21 ) ( betatil(m ) , m=1,i*(j-1 ) ) close(10 ) open ( 10 , file = " p-bar.dat " , action="write",status="replace " ) write(10 , * ) " * * probability vector : p - bar * * " write(10 , * ) " ------------------------------------- " write(10,21 ) ( nn(n)/(sum(nn ) ) , n=1,i*j ) close(10 ) open ( 10 , file = " p-theta-tilde.dat " , action="write",status="replace " ) write(10 , * ) 
" * * probability vector : p - theta - tilde * * " write(10 , * ) " --------------------------------------------- " write(10,21 ) ( pr(n ) , n=1,i*j ) close(10 ) call probvector2(nu , ppi ) open ( 10 , file = " p-theta-hat.dat " , action="write",status="replace " ) write(10 , * ) " * * probability vector : p - theta - hat * * " write(10 , * ) " ------------------------------------------- " write(10,21 ) ( phat(n ) , n=1,i*j ) close(10 ) call kmatrices ( ) call hmatrix ( ) ro(1,1)=hh(1,2)/sqrt(hh(1,1)*hh(2,2 ) ) ro(2,1)=hh(1,3)/sqrt(hh(1,1)*hh(3,3 ) ) ro(3,1)=hh(2,3)/sqrt(hh(2,2)*hh(3,3 ) ) ro(1,2)=(ro(1,1)-ro(2,1))/sqrt((1.d0-ro(2,1)*ro(2,1))*(1.d0-ro(3,1)*ro(3,1 ) ) ) ro(2,2)=(ro(2,1)-ro(1,1)*ro(3,1))/sqrt((1.d0-ro(1,1)*ro(1,1))*(1.d0-ro(3,1)*ro(3,1 ) ) ) ro(3,2)=(ro(3,1)-ro(2,1)*ro(1,1))/sqrt((1.d0-ro(2,1)*ro(2,1))*(1.d0-ro(1,1)*ro(1,1 ) ) ) we(0)=(2.d0*pi - acos(ro(1,1))-acos(ro(2,1))-acos(ro(3,1)))/(4.d0*pi ) we(1)=(3.d0*pi - acos(ro(1,2))-acos(ro(2,2))-acos(ro(3,2)))/(4.d0*pi ) we(2)=0.5d0-we(0 ) we(3)=0.5d0-we(1 ) ifail=-1 pvalwilc = g01eaf('l',wilc , ifail ) open ( 10 , file = " t-tests.dat " , action="write",status="replace " ) write(10 , * ) " * * t - test statistics * * " write(10 , * ) " -------------------------------- " write(10,21 ) ( lamb(n ) , n=1,nlam ) write(10 , * ) ' test - statistics ' write(10,21 ) ( estt(lamb(n ) ) , n=1,nlam ) write(10 , * ) ' p - values ' write(10,22 ) ( pval(estt(lamb(n ) ) ) , n=1,nlam ) write(10 , * ) " * * wilcoxon statistics * * " write(10 , * ) " --------------------------------- " write(10 , * ) ' test - statistic ' write(10,21 ) wilc0 write(10 , * ) ' p - value ' write(10,21 ) pvalwilc close(10 ) open ( 10 , file = " s-tests.dat " , action="write",status="replace " ) write(10 , * ) " * * s - test statistics * * " write(10 , * ) " -------------------------------- " write(10,21 ) ( lamb(n ) , n=1,nlam ) write(10 , * ) ' test - statistics ' write(10,21 ) ( ests(lamb(n ) ) , n=1,nlam ) write(10 , * ) ' p - values ' write(10,22 ) ( pval(ests(lamb(n ) ) ) , n=1,nlam ) write(10 , * ) " * * wilcoxon statistics * * " write(10 , * ) " --------------------------------- " write(10 , * ) ' test - statistic ' write(10,21 ) wilc0 write(10 , * ) ' p - value ' write(10,21 ) pvalwilc close(10 ) open ( 10 , file = " weights.dat " , action="write",status="replace " ) write(10 , * ) " * * weights chi - bar * * " write(10 , * ) " ----------------------------- " write(10 , * ) " " write(10,22 ) ( real(we(n ) ) , n=0,(i-1)*(j-1 ) ) write(10 , * ) " ---------------------------------------------------------- " close(10 ) end program example !this soubrutine calculates the design matrix of a saturated log - linear model ! with canonical parametrization ! -------------------------------------------------------------------------------- subroutine designm ( ) use parglob implicit none integer h double precision one_i(i ) , one_j(j ) , a(i , i-1 ) , b(j , j-1 ) , w12(i*j,(i-1)*(j-1 ) ) , & w1(i*j , i-1 ) , w2(i*j , j-1 ) one_i=1.d0 one_j=1.d0 a=0.d0 do h=1,i-1 a(h , h)=1.d0 enddo b=0.d0 do h=1,j-1 b(h , h)=1.d0 enddo call kronecker(i , i-1,a , j,1,one_j , w1 ) call kronecker(i,1,one_i , j , j-1,b , w2 ) call kronecker(i , i-1,a , j , j-1,b , w12 ) w(:,1:i-1)=w1 w(:,i : i+j-2)=w2 w(:,i+j-1:i*j-1)=w12 end subroutine designm ! -------------------------------------------------------------------------------- ! 
--------------------------------------------------------------------------------this soubrutines calculates the restriction matrix !-------------------------------------------------------------------------------- subroutine restricm ( ) use parglob implicit none integer h double precision r2((i-1)*(j-1),j-1 ) , r12((i-1)*(j-1),(i-1)*(j-1 ) ) , gi(i-1,i-1 ) , & gj(j-1,j-1 ) gi=0.d0 do h=1,i-1 gi(h , h)=1.d0 if ( h.lt.i-1 ) then gi(h , h+1)=-1.d0 endif enddo gj=0.d0 do h=1,j-1 gj(h , h)=1.d0 if ( h.lt.j-1 ) then gj(h , h+1)=-1.d0 endif enddo r2 = 0.d0 call kronecker(i-1,i-1,gi , j-1,j-1,gj , r12 ) rr(1:(i-1)*(j-1),1:j-1 ) = r2 rr(1:(i-1)*(j-1),j : i*(j-1 ) ) = r12 end subroutine restricm ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! given matrices a and b , this subroutines calculates c as the kronecker product ! a 's dimension n by m !b 's dimension p by q ! -------------------------------------------------------------------------------- subroutine kronecker(n , m , a , p , q , b , c ) implicit none integer n , m , p , q double precision a(n , m ) , b(p , q ) , c(n*p , m*q ) integer i , j , k , d do i=1,n do j=1,m do k=1,p do d=1,q c((i-1)*p+k,(j-1)*q+d ) = a(i , j)*b(k , d ) enddo enddo enddo enddo end subroutine kronecker ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! given !a ) vector theta !b ) the design matrix x=(1,w ) ! this subroutine calculates the probabilities of a log - linear model . !-------------------------------------------------------------------------------- subroutine probvector(beta ) use parglob implicit none integer n double precision beta(i*(j-1 ) ) , theta(i*j-1 ) , u theta(i : i*j-1)=beta u = log(nt(i))-log(ntt)-log(1.d0+sum(exp(beta(1:j-1 ) ) ) ) do n=1,i-1 theta(n)=log(nt(n))-log(ntt)-u - log(1.d0+sum(exp(beta(1:j-1)+ & beta(n*(j-1)+1:(n+1)*(j-1 ) ) ) ) ) enddo pr = exp(matmul(w , theta))*exp(u ) end subroutine probvector ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! subroutine to calculate p(theta - hat ) !-------------------------------------------------------------------------------- subroutine probvector2(nnu , pppi ) use parglob implicit none integer h , s double precision nnu(i ) , pppi(j ) , aux(i , j ) do h=1,i do s=1,j if ( pppi(s).gt.0.d0 ) then aux(h , s)=nnu(h)*pppi(s ) else aux(h , s)=1.d-5 endif enddo enddo phat = reshape(transpose(aux),(/i*j/ ) ) end subroutine probvector2 ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! subroutine to calculate theta_tilde . 
!-------------------------------------------------------------------------------- subroutine emvh01(x ) use parglob implicit none integer , parameter : : n = i*j-1 , nclin = ( i-1)*(j-1 ) , ncnln = 0 , lda = nclin integer , parameter : : ldcj = 1 , ldr = n , liw= 3*n+nclin+2*ncnln , lw=530 integer iter , ifail , istate(n+nclin+ncnln ) ,iwork(liw ) , iuser(1 ) , nstate double precision objf , a(nclin , n ) , user(1 ) , work(lw ) , r(ldr , n ) , c(ncnln ) , cjac(ldcj , n ) double precision clamda(n+nclin+ncnln ) , bl(n+nclin+ncnln ) , bu(n+nclin+ncnln ) , x(n ) , objgrd(n ) external confun , e04ucf , e04uef , objfun a=0.d0 a(:,i : i*j-1)=rr bl(1:n)=-1.d6 bl(n+1:n+nclin)=0.d0 bu=1.d6 ifail = -1 call e04uef ( ' infinite bound size = 1.e5 ' ) call e04uef ( ' iteration limit = 250 ' ) call e04uef ( ' print level = 0 ' ) call e04ucf(n , nclin , ncnln , lda , ldcj , ldr , a , bl , bu , confun , objfun , iter , istate , c , & cjac , clamda , objf , objgrd , r , x , iwork , liw , work , lw , iuser , user , ifail ) betatil = x(i : i*j-1 ) fail = ifail end subroutine emvh01 subroutine objfun(mode , n , x , objf , objgrd , nstate , iuser , user ) use parglob implicit none integer mode , n , iuser(1 ) , nstate double precision objf , objgrd(n ) , x(n ) , user(1 ) call probvector(x(i : i*(j-1 ) ) ) if ( mode .eq.0 .or .mode .eq.2 ) then objf = -sum(nn*log(pr ) ) endif if ( mode .eq.1 .or . mode .eq.2 ) then objgrd = matmul(transpose(w),sum(nn)*pr - nn ) endif end subroutine confun ( mode , ncnln , g , ldcj , needc , x , c , cjac , nstate , iuser , user ) integer mode , ncnln , g , ldcj , needc ( * ) , nstate , iuser ( * ) double precision x ( * ) , c ( * ) , cjac(ldcj , * ) , user ( * ) end ! -------------------------------------------------------------------------------- !subroutine to calculate t - statistic . !-------------------------------------------------------------------------------- function estt(lan ) use parglob implicit none double precision estt , lan , aux , ninteger h n = sum(nn ) aux=0.d0 if ( ( lan .ge .-1.d-9 ) .and .( lan .le .1.d-9 ) ) then ! lan=0 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+nn(h)*log(pr(h)/phat(h ) ) endif enddo estt=2.d0*aux else if ( ( lan .ge .-1.d0 - 1.d-9 ) .and .( lan .le .-1.d0 + 1.d-9 ) ) then !lan=-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0).and.(nn(h).gt.0.5d0 ) ) then aux = aux+phat(h)*log((n*phat(h))/nn(h ) ) aux = aux - pr(h)*log((n*pr(h))/nn(h ) ) endif enddo estt=2.d0*n*aux else ! lan<>0 , lan<>-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0).and.(nn(h).gt.0.5d0 ) ) then aux = aux+nn(h)*((nn(h)/(n*phat(h)))**lan-(nn(h)/(n*pr(h)))**lan ) endif enddo estt=2.d0*aux/(lan*(1.d0+lan ) ) endif endif end function estt ! -------------------------------------------------------------------------------- !subroutine to calculate s - statistic . !-------------------------------------------------------------------------------- function ests(lan ) use parglob implicit none double precision ests , lan , aux , n integerh n = sum(nn ) aux=0.d0 if ( ( lan .ge .-1.d-9 ) .and .( lan .le .1.d-9 ) ) then ! lan=0 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+pr(h)*log(pr(h)/phat(h ) ) endif enddo ests=2.d0*n*aux else if ( ( lan .ge .-1.d0 - 1.d-9 ) .and .( lan .le .-1.d0 + 1.d-9 ) ) then !lan=-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+phat(h)*log(phat(h)/pr(h ) ) endif enddo ests=2.d0*n*aux else ! 
lan<>0 , lan<>-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+(pr(h)**(lan+1.d0))/(phat(h)**lan ) endif enddo ests=2.d0*n*(aux-1.d0)/(lan*(1.d0+lan ) ) endif endif end function ests ! -------------------------------------------------------------------------------- !subroutine to calculate matrix k. ! -------------------------------------------------------------------------------- subroutine kmatrices ( ) use parglob implicit none integer n k1=0.d0 do n=1,i-1 k1(n , n)=(nu(n)+nu(n+1))/(nu(n)*nu(n+1 ) ) if ( n.ge.2 ) then k1(n , n-1)=-1.d0/nu(n ) endif if ( n.le.i-2 ) then k1(n , n+1)=-1.d0/nu(n+1 ) endif enddo k2=0.d0 do n=1,j-1 k2(n , n)=(ppi(n)+ppi(n+1))/(ppi(n)*ppi(n+1 ) ) if ( n.ge.2 ) then k2(n , n-1)=-1.d0/ppi(n ) endif if ( n.le.j-2 ) then k2(n , n+1)=-1.d0/ppi(n+1 ) endif enddo end subroutine kmatrices ! -------------------------------------------------------------------------------- ! subroutine to calculate matrix h. !-------------------------------------------------------------------------------- subroutine hmatrix ( ) use parglob implicit none call kronecker(i-1,i-1,k1,j-1,j-1,k2,hh ) end subroutine hmatrix ! -------------------------------------------------------------------------------- ! soubrotine to calculate p - values in terms of a specific lambda : t(lam ) o s(lam ) ! -------------------------------------------------------------------------------- function pval(est )use parglob implicit none integer n , ifail double precision pval , est , aux , g01ecf if ( est.le.0.d0 ) then aux=1.d0 else aux=0.d0 do n=1,(i-1)*(j-1 ) ifail=-1 aux = aux+g01ecf('u',est , n*1.d0,ifail)*we((i-1)*(j-1)-n ) enddo if ( est.lt.0 ) then aux = aux+we((i-1)*(j-1 ) ) endif endif pval = aux end function pval ........ ! -------------------------------------------------------------------------------- !this program is only valid for 2 by 3 contingency tables ! ( for other sizes some changes must be done : ! change the value of j and follow the formulas of the weights ) ! to run it , the nag library is required to have installed !the fortran program generates the outputs in several text files ! -------------------------------------------------------------------------------- module parglob integer fail integer , parameter : : i=2 , j=3 , nrr=25000 , nlam=301 double precision pr(i*j ) , w(i*j , i*j-1 ) , rr((i-1)*(j-1),i*(j-1 ) ) , betatil(i*(j-1 ) ) , & phat(i*j ) , zz((i-1)*(j-1 ) ) , tbt((i-1)*(j-1),(i-1)*(j-1 ) ) , bb((i-1)*(j-1),(i-1)*(j-1 ) ) , & we(0:(i-1)*(j-1 ) ) , k1((i-1),(i-1 ) ) , k2((j-1),(j-1 ) ) , hh((i-1)*(j-1),(i-1)*(j-1 ) ) , & hinv((i-1)*(j-1),(i-1)*(j-1 ) ) , ntt , nu(i ) , ppi(j ) , nn(i*j ) , ppit(i , j ) , un , & sample(nrr , i*j ) , odds(i-1,j-1 ) ,lamb(nlam ) double precision , parameter : : nt(i ) = ( /16.d0,20.d0/ ) , starting=-1.5d0 , ending=3.d0 , & del=0.d0 , pi=3.14159265358979323846264338327950d0 !if nlam=1 , the program only consideres the ending end module parglob ! 
-------------------------------------------------------------------------------- do n=1,nlam-1 lamb(n)=starting+(ending - starting)*(n*1.d0 - 1.d0)/(nlam*1.d0 ) enddo lamb(nlam)=ending contt=0.d0 conts=0.d0 contw=0.d0 do n=1,i do m=1,j ppit(n , m)=(1.d0/3.d0)*((1.d0+n*(m-1.d0)*del)/(1.d0+n*del ) ) enddo enddo do n=1,i-1 do m=1,j-1 odds(n , m)=ppit(n , m)*ppit(n+1,m+1)/(ppit(n+1,m)*ppit(n , m+1 ) ) enddo enddo ntt = sum(nt ) nu = nt / ntt call designm ( ) call g05cbf(150 ) call generamult ( ) do rep=1,nrr nn = sample(rep , : ) do n=1,i*j if ( nn(n).le.0.d0 ) then nn(n)=1.d-5 endif enddo marg = nn(1:j)+nn(j+1:2*j ) rank=0.d0 do kk=2,j rank(kk)=rank(kk-1)+marg(kk-1 ) enddo rank = rank+(marg+1.d0)/2.d0 wilc = sum(rank*nn(1:j ) ) meanwilc = nt(1)*(nt(1)+nt(2)+1.d0)/2.d0 sdwilc = nt(1)*nt(2)*(nt(1)+nt(2)+1.d0)/12.d0 sdwilc = sdwilc - nt(1)*nt(2)*sum(marg**3-marg)/(12.d0*(nt(1)+nt(2))*(nt(1)+nt(2)-1.d0 ) ) sdwilc = sqrt(sdwilc ) wilc=(wilc - meanwilc)/sdwilc ifail=-1 pvalwilc = g01eaf('l',wilc , ifail ) table = transpose(reshape(nn,(/j , i/ ) ) ) do m=1,j ppi(m)=sum(table(:,m))/ntt enddo initheta=0.d0 call emvh01(initheta ) if ( fail.ne.0 ) then initheta=0.1d0 call emvh01(initheta ) if ( fail.ne.0 ) then initheta=-0.1d0 call emvh01(initheta ) endif endif if ( pvalwilc.le.0.05d0 ) then contw = contw+1.d0 endif do n=1,nlam if ( pval(estt(lamb(n))).le.0.05d0 ) then contt(n)=contt(n)+1.d0 endif if ( pval(ests(lamb(n))).le.0.05d0 ) then conts(n)=conts(n)+1.d0 endif enddo enddo open ( 10 , file = " signlevt-2s.dat " , action="write",status="replace " ) write(10 , * ) " * * significance levels for t - test statistics * * " write(10 , * ) " ------------------------------------------------------- " do n=1,nlam write(10,21 ) real(lamb(n)),real(contt(n)/(nrr*1.d0 ) ) enddo close(10 ) open ( 10 , file = " signlevs-2s.dat " , action="write",status="replace " ) write(10 , * ) " * * significance levels for s - test statistics * * " write(10 , * ) " ------------------------------------------------------- " do n=1,nlam write(10,21 ) real(lamb(n)),real(conts(n)/(nrr*1.d0 ) ) enddo close(10 ) open ( 10 , file = " wilcoxon-2s.dat " , action="write",status="replace " ) write(10 , * ) " * * significance level for wilcoxon statistics * * " write(10 , * ) " ------------------------------------------------------- " write(10 , * ) real(contw/(nrr*1.d0 ) ) close(10 ) end program simulation ! -------------------------------------------------------------------------------- ! this soubrutine calculates the design matrix of a saturated log - linear model ! with canonical parametrization ! -------------------------------------------------------------------------------- subroutine designm ( ) use parglob implicit none integer h double precision one_i(i ) , one_j(j ) , a(i , i-1 ) , b(j , j-1 ) , w12(i*j,(i-1)*(j-1 ) ) , & w1(i*j , i-1 ) , w2(i*j , j-1 ) ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- !this soubrutines calculates the restriction matrix ! 
-------------------------------------------------------------------------------- subroutine restricm ( ) use parglob implicit none integer h double precision r2((i-1)*(j-1),j-1 ) , r12((i-1)*(j-1),(i-1)*(j-1 ) ) , gi(i-1,i-1 ) , & gj(j-1,j-1 ) gi=0.d0 do h=1,i-1 gi(h , h)=1.d0 if ( h.lt.i-1 ) then gi(h , h+1)=-1.d0 endif enddo gj=0.d0 do h=1,j-1 gj(h , h)=1.d0 if ( h.lt.j-1 ) then gj(h , h+1)=-1.d0 endif enddo r2 = 0.d0 call kronecker(i-1,i-1,gi , j-1,j-1,gj , r12 ) rr(1:(i-1)*(j-1),1:j-1 ) = r2 rr(1:(i-1)*(j-1),j : i*(j-1 ) ) = r12 end subroutine restricm ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! given matrices a and b , this subroutines calculates c as the kronecker product !a 's dimension n by m !b 's dimension p by q !-------------------------------------------------------------------------------- subroutine kronecker(n , m , a , p , q , b , c ) implicit none end subroutine kronecker ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! given !a ) vector theta !b ) the design matrix x=(1,w ) ! this subroutine calculates the probabilities of a log - linear model . !-------------------------------------------------------------------------------- subroutine probvector(beta ) use parglob implicit none end subroutine probvector ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! subroutine to calculate p(theta - hat ) ! -------------------------------------------------------------------------------- subroutine probvector2(nnu , pppi ) use parglob implicit none do h=1,i do s=1,j if ( pppi(s).gt.0.d0 ) then aux(h , s)=nnu(h)*pppi(s ) else aux(h , s)=1.d-5 endif enddo enddo ! nuestros vectores est\'{a}n en orden lexicogr\'{a}fico , por eso trasponemos phat = reshape(transpose(aux),(/i*j/ ) )end subroutine probvector2 ! -------------------------------------------------------------------------------- ! -------------------------------------------------------------------------------- ! subroutine to calculate theta_tilde . !-------------------------------------------------------------------------------- integer , parameter : : n = i*j-1 , nclin = ( i-1)*(j-1 ) , ncnln = 0 , lda = nclin integer , parameter : : ldcj = 1 , ldr = n , liw= 3*n+nclin+2*ncnln , lw=530 integer iter , ifail , istate(n+nclin+ncnln ) ,iwork(liw ) , iuser(1 ) , nstate double precision objf , a(nclin , n ) , user(1 ) , work(lw ) , r(ldr , n ) , c(ncnln ) , cjac(ldcj , n ) double precision clamda(n+nclin+ncnln ) , bl(n+nclin+ncnln ) , bu(n+nclin+ncnln ) , x(n ) , & objgrd(n ) external confun , e04ucf , e04uef , objfun a=0.d0 a(:,i : i*j-1)=rr bl(1:n)=-1.d6 bl(n+1:n+nclin)=0.d0 bu=1.d6 ifail = -1 call e04uef ( ' infinite bound size = 1.e5 ' ) call e04uef ( ' iteration limit = 250 ' ) call e04uef ( ' print level = 0 ' ) call e04ucf(n , nclin , ncnln , lda , ldcj , ldr , a , bl , bu , confun , objfun , iter , istate , c , & cjac , clamda , objf , objgrd , r , x , iwork , liw , work , lw , iuser , user , ifail ) betatil = x(i : i*j-1 ) fail = ifail end subroutine emvh01 ! -------------------------------------------------------------------------------- !subroutine to calculate t - statistic . !n = sum(nn ) aux=0.d0 if ( ( lan .ge .-1.d-9 ) .and .( lan .le .1.d-9 ) ) then ! 
lan=0 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0).and.(nn(h).gt.0.d0 ) ) then aux = aux+nn(h)*log(pr(h)/phat(h ) ) endif enddo estt=2.d0*aux else if ( ( lan .ge .-1.d0 - 1.d-9 ) .and .( lan .le .-1.d0 + 1.d-9 ) ) then !lan=-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0).and.(nn(h).gt.0.5d0 ) ) then aux = aux+phat(h)*log((n*phat(h))/nn(h ) ) aux = aux - pr(h)*log((n*pr(h))/nn(h ) ) endif enddo estt=2.d0*n*aux else ! lan<>0 ,lan<>-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0).and.(nn(h).gt.0.5d0 ) ) then aux = aux+nn(h)*((nn(h)/(n*phat(h)))**lan-(nn(h)/(n*pr(h)))**lan ) endif enddo estt=2.d0*aux/(lan*(1.d0+lan ) ) endif endif ! -------------------------------------------------------------------------------- !subroutine to calculate s - statistic . !n = sum(nn ) aux=0.d0 if ( ( lan .ge .-1.d-9 ) .and .( lan .le .1.d-9 ) ) then ! lan=0 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+pr(h)*log(pr(h)/phat(h ) ) endif enddo ests=2.d0*n*aux else if ( ( lan .ge .-1.d0 - 1.d-9 ) .and .( lan .le .-1.d0 + 1.d-9 ) ) then !lan=-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+phat(h)*log(phat(h)/pr(h ) ) endif enddo ests=2.d0*n*aux else !lan<>0 , lan<>-1 do h=1,i*j if ( ( pr(h).gt.0.d0).and.(phat(h).gt.0.d0 ) ) then aux = aux+(pr(h)**(lan+1.d0))/(phat(h)**lan ) endif enddo ests=2.d0*n*(aux-1.d0)/(lan*(1.d0+lan ) ) endif endif ! -------------------------------------------------------------------------------- ! subroutine to calculate matrix h. ! -------------------------------------------------------------------------------- subroutine hmatrix ( ) use parglob implicit none call kronecker(i-1,i-1,k1,j-1,j-1,k2,hh ) end subroutine hmatrix ! -------------------------------------------------------------------------------- !soubrotine to calculate p - values in terms of a specific lambda : t(lam ) o s(lam ) ! --------------------------------------------------------------------------------if ( est.le.0.d0 ) then aux=1.d0 else aux=0.d0 do n=1,(i-1)*(j-1 ) ifail=-1 aux = aux+g01ecf('u',est , n*1.d0,ifail)*we((i-1)*(j-1)-n ) enddo if ( est.lt.0 ) then aux = aux+we((i-1)*(j-1 ) ) endif endif pval = aux !soubrotine to generate multinomial samples with the parameters specified as ! global parameters ( first lines of this program ) ! -------------------------------------------------------------------------------- c=0.d0 sample=0.d0 do n=1,i do h=1,j c(n , h)=c(n , h-1)+ppit(n , h ) enddo enddo do s=1,nrr do n=1,i do m=1,int(nt(n ) ) un = g05caf(un ) h=1 dowhile ( .not.((un.ge.c(n , h-1)).and.(un.lt.c(n , h ) ) ) ) h = h+1 enddo sample(s,(n-1)*j+h)=sample(s,(n-1)*j+h)+1.d0 enddo enddo enddo
|
in this paper new families of test statistics are introduced and studied for the problem of comparing two treatments in terms of the likelihood ratio order . the considered families are based on phi - divergence measures and arise as natural extensions of the classical likelihood ratio test and pearson test statistics . it is proven that their asymptotic distribution is a common chi - bar random variable . an illustrative example is presented , and the performance of the statistics is analysed through a simulation study . the simulation study shows that , for most of the proposed scenarios with small or moderate sample sizes , some members of this new family of test statistics display clearly better power than the classical likelihood ratio and pearson chi - square tests , while the exact size remains close to the nominal size . in view of the exact powers and significance levels , the study also shows that the wilcoxon test statistic is not as good as the two classical test statistics . _ _ keywords and phrases _ _ * * : * * divergence measure , kullback divergence measure , inequality constraints , likelihood ratio order , loglinear models .
|
a new imaging technique has emerged in recent years that can overcome many limitations of light , electron , and x - ray microscopy .coherent x - ray diffraction microscopy ( cxdm ) promises to enable the study of thick objects at high resolution . in this technique onerecords the 3d diffraction pattern generated by a sample illuminated with coherent x - rays , and as in x - ray crystallography a computer recovers the unmeasured phases .this is done by alternately applying constraints such as the measured intensity in reciprocal space and the object support the region where the object is assumed to be different from 0in real space .this corresponds to defining the envelope of a molecule in crystallography . in our implementationthe support is periodically updated based on the current object estimate . by avoiding the use of a lens , the experimental requirements are greatly reduced , and the resolution becomes limited only by the radiation damage .however the imaging task is shifted from the experiment to the computer , and the technique may be limited by our understanding of the phase recovery process as well as the algorithm s ability to recover meaningful images in the presence of noise and limited prior knowledge .recently we have presented experimental results of high - resolution 3d x - ray diffraction imaging of a well - characterized test object to demonstrate the practical application of these advances . herewe extend the analyis of image reconstruction and determine low - order phase errors ( essentially image aberrations ) that can occur when reconstructing general complex - valued images .we present two methods to improve the accuracy and stability of reconstructions .three - dimensional coherent x - ray diffraction data were collected at the advanced light source from a test object that consisted of 50-nm diameter gold spheres located on a 2.5- - wide silicon nitride pyramid ( fig . [ fig1]a ) . a bare ccd lo , cated in the far field recorded the diffraction patterns with a pixel sampling that was more than 4 times the shannon sampling rate for the ( phased ) complex amplitudes .diffraction patterns were collected for many sample orientations over an angular range of 129 .these were interpolated onto a 3d grid .we reconstructed a full 3d image by performing phase retrieval on the entire 3d diffraction dataset ( i.e. the iterations involved three - dimensional ffts ) .the resulting volume image reveals the structure of the object in all three dimensions and can be visualized in many ways including projections through the data , slices ( tomographs ) , or isosurface rendering of the data .in addition to 3d images , we perform much analysis and algorithm development on 2d datasets . for the work in this paper we choose central plane sections extracted from the 3d diffraction pattern . by the fourier projection theorem ,the image formed from a central section is an infinite depth - of - focus projection image ( fig . [we carry out _ ab initio _ image reconstructions using the relaxed averaged alternating reflections ( raar ) algorithm with the `` shrinkwrap '' dynamic support constraint .details of the algorithm parameters used are given in chapman .the phase retrieval process recovers the diffraction phases with limited accuracy , due to factors including snr of the diffraction amplitudes , missing data , the inconsistency of constraints , and systematic errors in the data ( such as errors in interpolation ) .these errors in phase reduce the resolution of the synthesized image . 
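to make the two alternating constraints mentioned above concrete, here is a minimal error-reduction sketch in python/numpy. it is deliberately simpler than the raar + shrinkwrap scheme actually used for the reconstructions (no relaxation parameter, no support update, no treatment of missing data or noise), and all function and variable names are illustrative.

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
    """toy alternating-projection phase retrieval: enforce the measured
    fourier amplitude in reciprocal space and a fixed support in real
    space.  measured_amplitude is assumed to be in unshifted fft order
    and support is a 0/1 (or boolean) array of the same shape."""
    rng = np.random.default_rng(seed)
    # start from random phases attached to the measured amplitudes
    phases = np.exp(2j * np.pi * rng.random(measured_amplitude.shape))
    obj = np.fft.ifftn(measured_amplitude * phases)
    for _ in range(n_iter):
        f = np.fft.fftn(obj)
        # reciprocal-space (modulus) constraint: keep phases, replace amplitudes
        f = measured_amplitude * np.exp(1j * np.angle(f))
        obj = np.fft.ifftn(f)
        # real-space constraint: the object vanishes outside the support
        obj = obj * support
    return obj
```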
with a complex imagea loose support constraint will lead to unconstrained low - order aberrations .as is well known an object could be shifted by a few pixels each time we reconstruct , which is equivalent to a varying linear phase ramp in reciprocal space .in addition to this shift low order phase variations , such as defocus and astigmatism can also be unconstrained if the aberrated object fits inside the support .one way to quantify the effect of these phase variations is to determine the variation in retrieved phases as a function of resolution .given a reconstructed image obtained by phase retrieval starting from random phases , and its fourier transform , we define the phase retrieval transfer function by with the average over the complex diffraction amplitudes of many reconstructed images starting from random phases . where the phases are random and completely uncorrelated , the average will approach zero .thus , the ratio is effectively a transfer function for the phase retrieval process , and the average image ( the fourier tranform of is the best estimate of the .,scaledwidth=25.0% ] in our case when reconstructing complex 2d images , with low frequencies missing due to the beamstop , we have observed that phase retrieval from independent random starts may differ by a phase vortex ( right or left handed ) , centered at the zero spatial frequency ( fig .[ fig2 ] ) .we find that we can improve the estimate of the image by separating out the vortex modes .these phase vortices are due to stagnation of the phase retrieval process .other phase vortices can appear near local minima of the measured intensities , and our method of separating solutions will fail to detect vortices not centered near the beamstop . in order to remove these vortex aberrations we modified the reconstruction algorithm as follows :( i ) average independent reconstructions which will likely average out the phase vortex modes but will also smooth the resulting image , reducing the resolution .( ii ) refine this averaged image by inputting it to the raar algorithm and carrying out 200 iterations . using this `` averaged raar '' algorithm we reduced the probability of recovering an image with phase vortex mode from 40% to 15% , resulting in an improvement of the prtf by almost a factor of two .we compute the final image , and the prtf , by averaging 1000 such reconstructions ( fig .[ fig3 ] ) . before averagingmany images we make sure that they are not shifted with respect to one another by finding the linear phase ramp that minimizes the difference between their fourier transforms .fluctuations of the linear phase term indicate fluctuations in positions .fluctuations in higher order polynomial phase terms indicate that phase aberrations are present in the reconstructions . 
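the phase retrieval transfer function defined above can be estimated numerically along the following lines; the sketch assumes the independent reconstructions have already been registered (and, where needed, separated by vortex handedness) as described in the text, takes the measured amplitude on the same centered grid, and uses illustrative names throughout.

```python
import numpy as np

def prtf(reconstructions, measured_amplitude, nbins=64):
    """ratio of |average over reconstructions of the complex fourier
    amplitudes| to the measured amplitude, binned radially.  where the
    retrieved phases are random and uncorrelated the average tends to
    zero, so the ratio acts as a transfer function for phase retrieval.
    measured_amplitude is assumed centered (fftshifted), 2d."""
    fts = np.array([np.fft.fftshift(np.fft.fftn(r)) for r in reconstructions])
    ratio = np.abs(fts.mean(axis=0)) / np.maximum(measured_amplitude, 1e-12)
    ny, nx = ratio.shape
    y, x = np.indices(ratio.shape)
    q = np.hypot(y - ny // 2, x - nx // 2)        # radial frequency in pixels
    edges = np.linspace(0.0, q.max(), nbins + 1)
    which = np.digitize(q.ravel(), edges)
    radial = np.array([ratio.ravel()[which == i].mean() if np.any(which == i)
                       else np.nan for i in range(1, nbins + 1)])
    return edges[1:], radial
```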
to quantify the instabilities of these low order phase modes , we find the low order phase modes ( focus , astigmatism , coma , up to a polynomial of order that minimize the difference between each new reconstruction and the first recovered image .this is done by minimizing with the 2d polynomial defined by coefficients as with .the linear terms representing shifts in real space are found using the method described by fienup , while higher order terms are obtained by fitting the phase difference , , to the higher order 2d polynomial terms and iterating until the correction is less than 1 .the fluctuations of the second order polynomial coefficients are obtained by calculating their standard deviation among 1000 reconstructions , and we find that the linear terms ( represent a shift of } \mathord{\left/ { \vphantom { { \left [ { { 0.31 , 0.53 } } \right ] } { 2\pi = \left [ { { 0.049,0.085 } } \right ] } } } \right .\kern-\nulldelimiterspace } { 2\pi = \left [ { { 0.049,0.085 } } \right]} ] , which is plotted in fig .[ fig5 ] for a reconstructed image .we have performed a characterization of high - resolution imaging of an isolated 3d object by _ ab initio _ phase retrieval of the coherent x - ray diffraction , and examined metrics to allow the quality of image reconstructions to be assessed .the phase retrieval process does not produce unique images , in that varying low - order phase modes arise , akin to aberrations in an imaging system .other than the tilt terms , the low - order phase aberrations discussed here will be reduced in case of a real object ( for which only antisymmetric terms are allowed ) and will not be present when a real - space positivity constraint can be imposed , since defocusing or otherwise aberrating an image causes it to be complex . however , in the case of samples consisting of more than one material ( such as biological samples ) the object can not be considered positive and we must reduce the effects of aberrations .we have proposed two methods of overcoming limitations of computer reconstruction : in order to improve the stability of the reconstructions we average several reconstructed images and use the result to feed a new round of phase retrieval . from an experimental point of view, the use of a reference point , or other well - defined object , should enable us to greatly reduce low order phase aberrations .coccolith samples were provided by j. young from the natural history museum , london .this work was performed under the auspices of the u.s .department of energy by university of california , lawrence livermore national laboratory under contract w-7405-eng-48 and the director , office of energy research , office of basics energy sciences , materials sciences division of the u. s. department of energy , under contract no .de - ac03 - 76sf00098 .this work has been supported by funding from the national science foundation .the center for biophotonics , an nsf science and technology center , is managed by the university of california , davis , under cooperative agreement no .phy 0120999 .
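for completeness, the least-squares fit of low-order polynomial phase modes used above to compare reconstructions can be sketched as follows; phase wrapping and the iterative refinement until the correction falls below 1 degree are omitted, and all names are illustrative.

```python
import numpy as np

def fit_phase_polynomial(phase_diff, weights, order=2):
    """weighted least-squares fit of a phase difference (in radians) to
    a 2d polynomial basis x**m * y**n with m + n <= order; returns the
    fitted coefficients and the low-order phase surface they define."""
    ny, nx = phase_diff.shape
    yy, xx = np.indices((ny, nx)).astype(float)
    yy, xx = yy / max(ny - 1, 1), xx / max(nx - 1, 1)   # normalized coordinates
    terms = [(m, n) for m in range(order + 1) for n in range(order + 1 - m)]
    basis = np.stack([xx**m * yy**n for (m, n) in terms], axis=-1)
    w = np.sqrt(np.asarray(weights, dtype=float)).ravel()
    a = basis.reshape(-1, len(terms)) * w[:, None]
    b = phase_diff.ravel() * w
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return dict(zip(terms, coeffs)), basis @ coeffs
```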
|
in coherent x - ray diffraction microscopy the diffraction pattern generated by a sample illuminated with coherent x - rays is recorded , and a computer algorithm recovers the unmeasured phases to synthesize an image . by avoiding the use of a lens the resolution is limited , in principle , only by the largest scattering angles recorded . however , the imaging task is shifted from the experiment to the computer , and the algorithm s ability to recover meaningful images in the presence of noise and limited prior knowledge may produce aberrations in the reconstructed image . we analyze the low order aberrations produced by our phase retrieval algorithms . we present two methods to improve the accuracy and stability of reconstructions .
|
quantum entanglement plays a pivotal role in understanding the deepest nature of reality . in classical worldthere is no counter part of quantum entanglement .entanglement is a very useful resource in the sense that using entanglement a lot of things can be done that can not be done otherwise .entanglement is also essential for the communication tasks like quantum teleportation , quantum cryptography and quantum secret sharing .0.1 cm in a secret sharing protocol , one distributes a secret message among a group of people .this is done by allocating a share of the secret to each of these participants .the beauty of the entire secret sharing process lies in the fact that , if there is a dishonest member in the group of participants , he will not be able to find the secret without the collaboration of other members . in other words ,the secret can be reconstructed only when a sufficient number of shares are combined together ; individual shares are of no use.0.1 cm the secret sharing protocol in a quantum scenario was first introduced in ref .after its introduction , karlsson et.al. studied the similar quantum secret sharing protocol using bipartite pure entangled state .many authors studied the concept of quantum secret sharing using tripartite pure entangled states and also for multi partite states like graph states . recently q. li et.al .proposed semi - quantum secret sharing protocols using maximally entangled ghz state which was shown to be secure against eavesdropping . recently in , it was shown that quantum secret sharing is possible with bipartite two qubit mixed states ( formed due to noisy environment ) .quantum secret sharing can also be realized in experiment .0.1 cm the purpose of this paper is to introduce a protocol which can be used to secretly share classical information in the presence of noisy quantum communication channels .we first show that this secret sharing scheme is deterministically possible for a shared pure three qubit ghz state .then , we consider a realistic situation where a source creates a pure ghz state and then the qubits are distributed to different parties through noisy channels .these noisy channels convert the initial pure state into a mixed state .we carry out the analysis and find the number of classical bits that can be secretly shared for a specific noisy channel , the phase - damping channel .one of the important feature of this channel is that it describes the loss of quantum information without loss of energy .we also talk about several other noisy channels and comment on the possibility of secret sharing using those channels .the organization of the paper is as follows . in sectionii , we describe our protocol for pure three qubit ghz state . in section iii ,we deviate from the ideal scenario and consider the realistic situation where qubits are transferred through phase - damping channels and reinvestigate our secret sharing scheme . in the last section ,we discuss other noisy channels and present our conclusions .in this section , we introduce a protocol for quantum secret sharing with shared pure ghz state . 
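as a numerical illustration of how the shared pure ghz state degrades into a mixed state, the short sketch below applies a single-qubit phase-damping channel to each qubit of the ghz density matrix. the kraus parameterization used here is one common convention and is not claimed to coincide with the one adopted later in the paper, and all names are illustrative; note that the populations are untouched while the ghz coherence is suppressed, i.e. quantum information is lost without loss of energy.

```python
import numpy as np

def apply_single_qubit_channel(rho, kraus_ops, target, n_qubits=3):
    """apply a single-qubit channel (given by kraus operators) to one
    qubit of an n-qubit density matrix."""
    out = np.zeros_like(rho)
    for k in kraus_ops:
        ops = [np.eye(2, dtype=complex)] * n_qubits
        ops[target] = k
        big = ops[0]
        for o in ops[1:]:
            big = np.kron(big, o)
        out += big @ rho @ big.conj().T
    return out

# ghz state (|000> + |111>)/sqrt(2) as a density matrix
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

# one common parameterization of phase damping with strength lam
lam = 0.3
k0 = np.diag([1.0, np.sqrt(1 - lam)]).astype(complex)
k1 = np.diag([0.0, np.sqrt(lam)]).astype(complex)

for q in range(3):                      # each qubit travels through its own channel
    rho = apply_single_qubit_channel(rho, [k0, k1], q)

print(np.round(rho[0, 0], 4), np.round(rho[7, 7], 4))  # populations unchanged
print(np.round(rho[0, 7], 4))                          # ghz coherence suppressed
```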
in this protocol ,three parties start with a shared pure ghz state .then one of the members encodes secret by doing some local unitary operation on her qubit .thereafter , she sends her qubit to one of the other two members .interestingly neither of these two members would be able to know about the local unitaries performed by the encoder individually .however , we show that if they agree to collaborate , then one of the parties can decode the two bit secrets .our protocol goes like this . + * * step i : pure ghz state shared by three parties**0.1 cm let us consider three parties say , alice ( a ) , bob ( b ) and charlie ( c ) share a pure ghz state unitary operations on her qubit . after performing one of the unitary operations the state ( [ ghz ] ) transforms correspondingly to one of the following states ,\nonumber\\ \label{unitary } ( \sigma_x \otimes i \otimes i)|\psi\rangle_{abc } = \frac{1}{\sqrt{2}}[|100\rangle+|011\rangle],\nonumber\\ ( i \sigma_y \otimes i \otimes i)|\psi\rangle_{abc } = \frac{1}{\sqrt{2}}[|100\rangle-|011\rangle ] , \nonumber\\ ( \sigma_z \otimes i \otimes i)|\psi\rangle_{abc } = \frac{1}{\sqrt{2}}[|000\rangle-|111\rangle].\end{aligned}\ ] ] alice then sends her qubit to bob.0.1 cm * * step iii : charlie performs single - qubit measurement**0.1 cm the above set of equations ( [ unitary ] ) can be rewritten as , |\psi^{\pm}\rangle = \frac{1}{\sqrt{2}}[|01\rangle\pm|10\rangle]$].0.1 cm at this stage , it is not possible either for bob or for charlie to decipher the secret encoded by alice .however , bob can unmask the secret if charlie agrees to cooperate with him .since charlie now has a single particle at his disposal , he performs a single - qubit measurement in the hadamard basis .then he can help bob to decode the message by conveying to him the outcomes of his measurement .0.1 cm * * step iv : bob performs bell - state measurement**0.1 cm according to the measurement outcomes announced by charlie , bob performs a bell - state measurement on his two qubits . according to his bell - state measurement outcome ,he can find the secret encoded by alice .the two bits secret decoded by bob as a result of the declaration of the measurement outcome by charlie is given in the following table .+ 0.1 cm [ cols="^,^,^ " , ] + + where , charlie sends his results to bob through a classical channel by spending one classical bit .this is done by encoding for and for respectively . 0.2 cm * * step iv : bob performs two qubit projective measurement and povm**0.1 cm as we see in the above table when charlie s measurement result is then bob can have one of the four possible states . similarlywhen charlie s qubit collapses into the state , bob can have any one of the state four possible states at his disposal .if charlie sends , then bob guesses that the two qubit states in his possession would be either or or or .he then performs projective measurements and to get close to identify the secret .the projectors and classify the above four states into two classes as and respectively .the states within the two classes are now lying in a two dimensional subspace spanned by and respectively . 
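the encoding and decoding steps above can be checked with a few lines of linear algebra: for each of alice's four local operations and each of charlie's hadamard-basis outcomes, the two-qubit state left with bob is computed and matched against the bell basis. the particular assignment of two-bit messages to pauli operations below is an illustrative choice and need not match the table in the text; all names are illustrative.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)          # qubit order: a, b, c

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
bell = {"phi+": np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),
        "phi-": np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),
        "psi+": np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),
        "psi-": np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)}
encodings = {"00": I2, "01": X, "10": 1j * Y, "11": Z}   # illustrative bit labels

for bits, u in encodings.items():
    state = kron(u, I2, I2) @ ghz                 # alice encodes on her qubit
    for label, c_state in (("+", plus), ("-", minus)):
        # project charlie's qubit (last tensor factor) onto |+> or |->
        bob = state.reshape(4, 2) @ c_state.conj()
        bob /= np.linalg.norm(bob)
        outcome = max(bell, key=lambda k: abs(np.vdot(bell[k], bob)))
        print(bits, label, "->", outcome)         # bell result reveals the bits
```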
+ after classifying the states , bob performs optimal povm to identify the state in which secret is encoded .first of all he considers the class and constructs the optimal povm operators , for discriminating the density matrices present in the class .the optimal povm measurement is the one that minimizes the error rate +tr[\pi_2\rho_1^{bb+}].\end{aligned}\ ] ] subject to the constraints that they forms a complete set of projectors ( i.e ) .the optimal povm elements are and the error rate in discriminating the states and is similarly , bob can distinguish the mixed states belonging to the other class by using the same set of povm operators and .the error rate in this case will also be the same .therefore the total probability of success in distinguishing these states is {secret.eps } \end{array}\ ] ] {secret1.eps } \end{array}\ ] ] in such a situation , the total number of classical bits that bob can extract are clearly , the amount of classical information that can be extracted by bob will depend upon the channel noise ( ) and also on the basis that charlie uses for the measurement .we note that when , the channel is totally noisy , and bob can extract at most one classical bit .this can also be seen from figure 1 . is independent of and is always equal to when .the limit corresponds to the case when there is no noise . in this case, is maximum when measurement has been done in the hadamard basis ( i.e ) .this is also clear from the figure 1 .in general , the obtains the largest value when charlie makes his measurement in hadamard basis .in such a scenario in figure 2 , we have plotted as a function of the channel parameter ( ) .it takes maximum value 2 when and the minimum value when .+ thus we see that deterministic secret sharing is not possible with a phase - damping channel .we also find that the amount of classical information decoded by bob is dependent on the noise parameter ( ) and also on the choice of basis . in a practical situationwhen we carry out quantum information processing task we face the decoherence problem and we always have mixed state at our disposal . as a consequence of which the tasks which can be done deterministically in case of pure states , can not be done so for the mixed states .in this paper , we have introduced a protocol for secret sharing which is different from the existing secret sharing schemes .we have considered a realistic scenario where there are noisy quantum channels .in such a scenario , the deterministic secret sharing is not possible .we consider povm measurements to implement our protocol and find out the number of classical bits that alice can share with bob with the help of charlie .the answer is classical bits with characterizing the noisy channel .the earlier scheme of secret sharing with pure ghz state was more like cooperative teleportation while our scheme of secret sharing is like cooperative dense coding .the three - qubit mixed state considered here is generated by passing the qubits through noisy channels .in particular , we have shown how the phase - damping noisy channel generated three - qubit mixed state can be used in our secret sharing protocol.0.1cm now it would be important to ask that whether our secret sharing scheme succeeds only when the noisy channel is a phase - damping channel . indeedthe answer is no. 
we find that if phase - flip channel is the noisy channel , then our secret sharing scheme would succeed .however , one needs to explore further if the secret sharing scheme can succeed with noisy channels like amplitude - damping channel , depolarizing channel , bit - flip channel , bit - phase flip channel or two pauli channels . in the case of phase - damping and phase - flip channels ,the kraus operators are diagonal and it is not difficult to construct appropriate povm operators .we also note that these channels are related by unitary transformations .therefore , it appears that our proposed protocol would succeed if the noisy channel is related to phase - damping channel by a unitary transformation .the reason behind the success of our protocol may be the diagonal form of the kraus operators that represent the noisy channels .the cases of other noisy channels that are described by the kraus operators with off - diagonal elements may involve loss of energy and need further exploration . in these cases ,probabilistic secret sharing may be possible with more complicated povm measurements . c. h. bennett , g. brassard , c. crepeau , r. jozsa , a. peres , w. k. wootters , phys .lett . * 70 * , 1895 ( 1993 ) ; d. bouwmeester , j - w pan , k. mattle , m. eibl , h. weinfurter and a. zeilinger , nature * 390 * , 575 ( 1997 ) .
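the minimum-error discrimination step used by bob above (a two-outcome povm minimizing the error rate between two density matrices) follows the standard helstrom construction, sketched below for general priors. this is the generic recipe and is not claimed to reproduce the specific povm elements written in the text; all names are illustrative.

```python
import numpy as np

def helstrom_povm(rho1, rho2, p1=0.5):
    """two-outcome minimum-error (helstrom) measurement for density
    matrices rho1, rho2 with priors p1 and 1 - p1: project onto the
    positive eigenspace of p1*rho1 - (1-p1)*rho2 and report the
    resulting error probability."""
    rho1 = np.asarray(rho1, dtype=complex)
    rho2 = np.asarray(rho2, dtype=complex)
    gamma = p1 * rho1 - (1 - p1) * rho2
    vals, vecs = np.linalg.eigh(gamma)
    pi1 = np.zeros(gamma.shape, dtype=complex)
    for w, v in zip(vals, vecs.T):
        if w > 0:
            pi1 += np.outer(v, v.conj())
    pi2 = np.eye(gamma.shape[0], dtype=complex) - pi1
    p_err = (p1 * np.trace(pi2 @ rho1) + (1 - p1) * np.trace(pi1 @ rho2)).real
    return pi1, pi2, p_err
```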
|
in a realistic situation , the secret sharing of classical or quantum information will involve the transmission of this information through noisy channels . we consider a three qubit pure state . this state becomes a mixed state when the qubits are distributed over noisy channels . we focus on a specific noisy channel , the phase - damping channel . we propose a protocol for secret sharing of classical information with this and related noisy channels . this protocol can also be thought of as cooperative superdense coding . we also discuss other noisy channels to examine the possibility of secret sharing of classical information .
|
fluorescence correlation spectroscopy ( fcs ) is a technique based on the analysis of the fluctuations of fluorescence that permits to quantify a wide range of phenomena such as photophysical , photochemical , interaction , diffusion and transport properties of fluorescently labeled molecules. benefitting from the dramatic progress of bright fluorescent molecules and high sensitivity detectors , this technique can now be performed within the confocal volume of high numerical aperture microscope objectives .the spatial resolution , the intrinsic steady - state regime , as well as the capability to record a huge number of events make it now a routine tool for cell biology. however , the original fcs technique is limited by the fact the dynamic system under study is exclusively analyzed in terms of temporal fluctuations at a single location , at a spatial scale related to the measurement volume , while a much more complete description would require investigations in the spatio - temporal domain . as a consequence , various phenomena of similar timescale taking place within the same specimencan not be efficiently discriminated without a strong a priori knowledge of the system .for instance , fcs measurements of translational diffusion performed under photobleaching conditions may actually monitor the survival time of the fluorophores instead of their diffusion time. in order to overcome this lack of spatial information , several fcs - based techniques have been proposed .the analysis can be performed by cross - correlating signals recorded at two distant locations ; this approach suits the best to transport phenomena but it can also successfully be extended to the measurement of absolute diffusion , although the time gating that is required in order to overcome the crosstalk between the two overlapping volumes makes it more difficult to implement. the two observation volumes can also be simply realized by splitting the detected fluorescence onto two shifted detectors, but this causes the two resulting measurement volumes to be slightly deformed .a more versatile scheme has been proposed using an array detector , but it is still limited by a finite readout speed. another approach allowing to achieve a better spatial description of a system is to perform fcs measurements at various spatial scales , by changing the observation volumes. it has been shown that this approach allows to discriminate between possible diffusion regimes of molecular species in cell membranes and even to quantify submicron structures. another class of methods derived from fcs consists in scanning the observation spot in a repetitive fashion in the sample , and therefore collecting sequentially fluorescence information from many locations , which improves the statistical accuracy of the measurement and reduces photobleaching. line or circle scan are used in so - called scanning fcs ( sfcs ) experiments , while image scans produce image frames that are analyzed using image correlation spectroscopy ( ics ) techniques . although sfcs and ics are based on the same principle , they address different time scales .sfcs can analyze dynamics in a wide range down to a fraction of a millisecond. it was shown recently that this method allows to measure absolute diffusion coefficient in biological systems, and even to discriminate different dynamic processes of comparable timescale. 
except raster ics ( rics ) , which allows to analyze fast dynamics by exploiting the time taken by the raster scan, ics and its variants suit better to slowly diffusing systems .all these methods can be in principle easily implemented on any laser scanning confocal microscopy system . in this article, we describe a fcs system that is based on two custom laser scanning confocal microscopes sharing the same objective , and creating within the specimen two fully independent diffraction - limited measurement spots .each spot can operate either in a static mode at an arbitrary location or in a scanning mode , along an arbitrary periodic trajectory that can be one or several image frames , circles , lines , or any other closed loop , while the signals of fluorescence are recorded by two dedicated confocal detection channels .the instrument has been designed in order to offer a high level of flexibility .the operating modes include dual spot twin measurements ( i.e. , simultaneous measurements performed in the same mode at two locations ) , dual spot cross analysis , as well as hybrid measurements that assign a different measurement mode to each spot .the article is organized as follows .section [ sec : setup ] details the optical setup and hardware .section [ sec : theory ] describes briefly the theoretical background of confocal microscopy and fcs measurements .section [ sec : calibrations ] addresses the protocol of calibration of the system . finally , section [ sec : examples ] reports examples of measurements exploiting the versatility and the dual spot nature of the systemthe two observation volumes in our system are produced by using two identical custom laser scanning confocal microscopy systems sharing the same objective .although this is likely the most complex technical approach for generating two measurement spots , we believe that it provides the highest level of versatility without sacrificing the optical quality of the spots .schematics of the dual spot fcs system .abbreviations : smf , single mode fiber ; l1-l6 , achromatic doublets ( l1 = 25 mm , l2 = 30 mm , l3 = 100 mm , l4 = 300 mm , l5 = 150 mm , l6 = 50 mm ) ; hwp , half - wave plate ; gp , glan polarizer ; pbs , polarizing beamsplitter ; m1-m3 , mirrors ; dm , dichroic mirror ; t , telescope ; gm a ( resp . , b ) , galvanomtric mirror set for channel a ( resp . ,b ) ; obj , water immersion microscope objective ; ph , pinhole ; ef , emission filter ; spcm : single photon counting module . ] the optical setup is presented in fig .[ fig : setup ] .emission from a continuous wave 491-nm diode - pumped solid state laser ( calypso , cobolt ) is coupled into a single mode fiber .the purpose of the single mode fiber is to allow a convenient coupling for other laser sources for future development of the system .light exiting the fiber is collimated by an achromatic doublet and attenuated using a half - wave plate placed in front of a glan polarizer .the beam is divided into two excitation arms by a polarizing beam splitter . 
a second half - wave plate located before the beam splitter controls the power ratio between the two arms .the two excitation beams follow paths of equal distance .they are first reflected by a dichroic mirror ( xf2037 - 500drlp , omega optical ) and then sent on a couple of galvanometric scanning mirrors ( 6200h , cambridge technology ) .the two excitation beams are combined by a non - polarizing beam splitter and introduced into the side port of the microscope stand ( axiovert 200 m , carl zeiss ) .a scanning telescope images the scanners with a magnification of 3 onto the rear aperture of the infinity corrected water immersion microscope objective ( c - apochromat , focal length 4.1 mm , na = 1.2 , uv - vis - nir , carl zeiss ) .two independent excitation spots , that we denote a and b , are created within the sample , each one at a location controlled by its dedicated scanning system .fluorescence emitted at each spot location is collected by the same objective , follows the same path back through its corresponding scanner , in a so - called descanned scheme , and is sent through the dichroic mirror of the corresponding channel .the two detection benches are identically constituted of a 75- m diameter pinhole ( i.e. , 1.2 airy units with our total magnification of 120 ) placed at the focus of a tubelens . another lens images ( with a magnification of unity ) the pinhole onto the 175- m diameter active area of a single photon counting module ( spcm - aqr-14 , perkinelmer optoelectronics ) .this setup creates two identical excitation spots with their dedicated confocal detection , while the descanned detection scheme keeps the conjugation unaffected when the spots are moved . as illustrated in fig .[ fig : setup ] , an additional telescope ( denoted t ) of magnification has been inserted on channel a. by displacing one lens with respect to the other along the optical axis , it is possible to modify slightly the divergence of excitation beam a , and therefore to change the plane in which spot a is focused . note that the conjugation between the spot and the pinhole is not affected because the divergence induced on the excitation is compensated on the detection beam . a procedure for calibrating the axial displacement of the focus and assessing the spot quality in this configurationwill be presented in section [ sec : calibrations ] . a diagram of the data acquisition hardware is presented in fig .[ fig : data_acquisition ] . a high - speed voltage analog output pci board( ni 6731 , national instruments ) generates static or waveform voltages on two pairs of channels , each one commanding one dual - axis analog driver ( micromax 67320 , cambridge technology ) that controls one set of scanning mirrors .the ttl pulses generated by the two single photon counting modules are recorded by a pci counting board ( ni 6602 , national instruments ) .the synchronization between scanning and data acquisition is obtained by triggering the counters by a digital `` start '' signal generated by the analog output board when voltage generation starts . since both pci cards use direct memory access , the data are transferred through a buffer , ensuring high - speed operation . 
for fcs measurements ,both apd signals are also sent to a multiple tau digital correlator ( flex02 - 12d , correlator.com ) , that builds up in real time the temporal auto and cross correlations , and can optionally deliver photon counting histories for off - line software correlation analysis .the focus of the microscope stand is motorized .the whole system is connected to a personal computer and controlled by a program developed in house in a labview ( national instruments ) environment .it provides with a unique graphic user interface a control over the scanning parameters , the photon counts acquisition and processing , the correlator , and the microscope stand parameters ( focus , ports , objective turret , filter turret , shutters , etc . ) .diagram of the acquisition hardware . ]fcs measurements in solution were carried out using a rhodamine 6 g ( diffusion coefficient at room temperature ) solution , with concentrations ranging from 100 nm up to 1 m .the excitation power , measured by a powermeter inserted before the sideport of the microscope stand ( between l4 and m3 , see fig .[ fig : setup ] ) was 300 , a value that was checked to be low enough to prevent any effect of saturation or photobleaching .point spread function measurement were carried out using 100-nm diameter yellow - green fluorescent microspheres ( fluospheres 505/515 , moleculer probes ) , that have been dispersed on a cleaned microscope coverslip .typical excitation power , measured as previously , was 5 .measurement on cells have been performed at room temperature on cos-7 cells , the gfp - tagged thy1 protein of which has been transiently expressed using the protocol detailed in ref . .the response of a confocal fluorescence microscope is described by a 3d point spread function ( psf ) , that we denote , which takes into account two contributions : i ) the spatial distribution of the excitation intensity within the specimen , described by the excitation psf , that we denote , ii ) the collection efficiency , described by the intensity collection psf , denoted , with with these definitions , the intensity recorded on an specimen , is given by ( \mathbf{r})\nonumber\end{aligned}\ ] ] since our system is made of two independent channels , each of them will therefore be described by its own set of psfs. a schematic view of the psfs involved in our system is presented in fig .[ fig : volumes ] , where only the spatial extent of the excitation and collection psfs are represented in the plane for a better clarity .this kind of representation constitutes a powerful way to figure out the respective roles of the two spots , their possible interaction , and more generally all degrees of freedom of the system .note that the correct pinhole alignment performed on each channel before each campaign of measurement ensures that and ( and the same for psfs of channel b ) are centered with respect to each other , as illustrated in fig .[ fig : volumes ] , keeping in mind that these properties are conserved during the scanning due to the descanned detection scheme . schematic view of the two measurement spots , with the corresponding lateral extents of the excitation ( solid line ) and collection ( dashed line ) psfs . in this example, the two spots are addressing different locations . ] the technique of fcs is based on the analysis of the fluctuation of the fluorescence intensity , by means of its temporal autocorrelation function , defined as where the brackets indicate a temporal average . 
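the autocorrelation analysis described above can be emulated offline from a binned photon-count trace; the following is a plain linear-lag estimator using an fft, whereas the hardware correlator in the setup implements a multiple-tau scheme, and all names are illustrative.

```python
import numpy as np

def autocorrelation(intensity):
    """normalized fluorescence autocorrelation
    g(tau) = <dI(t) dI(t+tau)> / <I>**2 from a binned intensity trace."""
    i = np.asarray(intensity, dtype=float)
    di = i - i.mean()
    n = len(di)
    f = np.fft.rfft(di, 2 * n)                  # zero-pad to avoid circular wrap
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= np.arange(n, 0, -1)                  # unbiased: divide by pair counts
    return acf / i.mean() ** 2
```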
since the psf of the system can be reasonably described by a gaussian distribution ,\ ] ] it has been shown that the autocorrelation function for free diffusing fluorescent molecules can be written as ,\end{aligned}\ ] ] where is the translational diffusion coefficient , the average number of molecules in the sample volume , the triplet fraction and the triplet lifetime , and . by fitting experimental data with the expression of eq .[ eq : fcs ] , it is possible either to measure the diffusion coefficient if and are known , or to measure and from a solution of known diffusion coefficient .the conversion factor between the command voltage and the transverse displacement of the spot in the sample has been calibrated for each of the two scanners by recording the image of calibrated micrometers ( graticules , ltd ) .the total area that can be covered by scanning is limited in our microscope stand by the side port accessible diameter . with our objective , it is typically .( color online ) comparison of several images of a fixed sample acquired with different pixel dwell times ( indicated in the corner of each image ) . in order to compare images of similar signal to noise ratio ,an appropriate number of accumulations was set , keeping constant the total dwell time per pixel at a value of 100 ( 1 accumulation at 100 , 2 accumulations for 50 , etc . ) .for the shortest dwell times , the image shift is clearly visible .in addition the narrow left part of the image shows the signal that has been recorded when the spot on its way back , between successive line scans . ] since the main purpose of the system is fcs measurements , highly sensitive detectors have been chosen , which operate in photon counting mode .the count rate is therefore limited to a few counts per second , i.e. , a few counts per microsecond .thus , as far as imaging is concerned , the main limitation in terms of acquisition rate is not the velocity of the scanning system itself ( that can reach dwell time down to the micro second ) , but the dynamics of photon counts within the image , that need a sufficient dwell time , and should provide a contrast with an acceptable signal to noise ratio . in practice , the minimum pixel dwell time is typically , a value that can be obtained either by one single scan , or by the accumulation of several frames .images are obtained by raster scanning , i. e. , all lines from an image are recorded sequentially , the spot being scanned for each line in the same direction . for extremely fast scanning conditions ( dwell time smaller than 50 ) , the mechanical inertia of the scanners causes the beam displacement to be delayed with respect to the signal acquisition . 
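for the fitting step, the sketch below writes the free-diffusion autocorrelation with a triplet contribution in one common form and fits it with scipy; the exact convention may differ in detail from eq. ( [ eq : fcs ] ) of the text, and the initial guesses and names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_model(tau, n_mol, tau_d, s_ratio, t_frac, tau_t):
    """one common form of the fcs autocorrelation for free 3d diffusion
    in a gaussian volume (aspect ratio s_ratio = w_z / w_xy) with a
    triplet term of fraction t_frac and lifetime tau_t."""
    triplet = 1.0 + t_frac / (1.0 - t_frac) * np.exp(-tau / tau_t)
    diff = 1.0 / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (s_ratio**2 * tau_d)))
    return triplet * diff / n_mol

# illustrative fit of measured (tau, g) data; p0 values are rough guesses
# tau_data, g_data = ...   (lag times in seconds, measured correlation)
# popt, pcov = curve_fit(g_model, tau_data, g_data,
#                        p0=[10.0, 5e-5, 5.0, 0.1, 2e-6])
# w_xy then follows from tau_d = w_xy**2 / (4 * D) if D is known
```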
in these conditions , photon counts that are attributed to one pixel actuallymay actually come from an earlier location , and the final image appears therefore slightly shifted along one direction , as illustrated in fig .[ fig : shift ] .fortunately , no deformation has been noticed , so that distance measurements are unaffected by this effect .since the two spots are independently controlled by their own scanners , their location is given in their own set of coordinates .for all measurements that will involve two spots , it is crucial that these two coordinate systems match perfectly .a first coarse alignment is performed by placing the two scanning systems symmetrically with respect to the beam splitter ( denoted bs in fig .[ fig : setup ] ) .then , a finer calibration is realized by cross imaging , using a homogeneous solution of rhodamine 6 g ( 1 m , typically ) . spot a is on , static , located for instance at while channel b , with laser excitation off , is performing a raster scan .the maximum of fluorescence is collected by channel b when the centers of and match , which allows to measure the location of spot a in the system of coordinates of b. by tilting scanner b , it is possible to move spot a to , and therefore to ensure that , as illustrated in fig .[ fig : crossimaging]a , with a typical accuracy of 20 nm . without any further alinement ,a quick check can be realized by operating vice versa , as illustrated in fig .[ fig : crossimaging]b .finally , it was check that the channel matching occurs for all spot locations within the scanning field .( color online ) calibration of channel matching by cross imaging in a solution of rhodamine 6 g .( a ) left , schematic of the mode of operation , with psf extents as defined in fig .[ fig : volumes ] ; center , image recorded on channel b ; right , cross section of the image along axis plotted in white .left axis is in arbitrary unit , minimum is zero .( b ) same procedure , vice versa . ] as discussed in section [ sec : setup ] , the scanning plane of spot a can be moved backward or forward with respect to spot b by simply acting on a telescope ( denoted t in fig .[ fig : setup ] ) . figure [ fig : zscan]a illustrates the telescope system , where the shift to the nominal position is denoted .the resulting axial distance can be monitored by performing a scan with the two spots through the interface between a coverslip and a fluorescent solution , as illustrated in fig .[ fig : zscan]b .note that the two spots have been laterally split apart by 5 m in order to prevent from unwanted crosstalk .the signal recorded on both channels shows the same typical smooth step shape represented in fig .[ fig : zscan]c , where the half value of maximum intensity is obtained when center of the spot is exactly at the interface .the axial distance can be therefore measured as the distance between the two half maxima , with an accuracy of 100 nm .the relationship between and is plotted in fig .[ fig : zscan]d .it shows that an axial shift up to m can be reached with this system .the issue of the spot quality in off - plane configuration will be addressed in section [ sec : offplane ] .a ) schematic of the telescope t inserted in channel a and allowing to modify the plane of focusing of spot a. 
the distance is defined as the shift to the nominal distance between the two lenses .b ) schematic view of the spots focused in different planes in the case and laterally shifted .c ) example of intensity profiles recorded by z - scan .d ) values of measured versus . ] the spatial resolution of a confocal microscope system is usually assessed by recording the image of a subwavelength isolated fluorescent microsphere , that can therefore be considered as a point source . by replacing the specimen function in eq .[ eq : psf ] by a dirac distribution , the intensity recorded by channel a is indeed given by the shape of the intensity distribution is a good indicator of the overall alinement of channel a , while its spatial extends and are directly related to the spatial resolution .a sketch of the measurement procedure , an example of recorded intensity distribution , and the corresponding cross - section are represented in fig .[ fig : psfs]a .in addition , during this measurement , if the collection volume of channel b is overlapping the microsphere , with laser off ( see sketch of fig . [fig : psfs]b ) , the intensity that is recorded by channel b while a is scanning ( laser on ) can be written as the intensity recorded by channel b is therefore proportional . finally ,if the excitation is delivered to the sphere by spot b ( static , laser on ) , scanning with channel a ( laser off , see sketch of fig .[ fig : psfs]c ) will provide an intensity distribution proportional to .( color online ) ( a - c ) left : measurement protocole , the green circle is the fluorescent microsphere . center : recorded intensity distribution .right : cross section ( circles ) and best fit ( solid line ) according to eq .[ eq : gauss3d ] .( d ) measurement protocole for and resulting intensity distribution , with best fit . ] since the measurement schemes of figs .[ fig : psfs]a and [ fig : psfs]b can be performed simultaneously , only two measurements ( b and c ) are needed to provide independently all three psfs involved in one channel .the characterization of the second channel will be performed in an identical way by reversing the role played by the two channels .figure [ fig : psfs ] summarized the protocols and results obtained for channel a , intensity cross sections along and axis , as well as the best fit ( solid line ) using eq .[ eq : gauss3d ] .although this later assumption of a 3d gaussian distribution is only an approximation in the case of confocal psfs, it describes reasonably the present data , especially in the transverse direction .the corresponding widths for all psfs for channel a and b ( raw data not shown ) are summarized in table [ tab : widths ] .these values show that the two channels possess very similar features , close to the diffraction limit of the microscope objective ..[tab : widths]summary of all psf widths measured for both channels , as defined by eq.[eq : psf ] .values are given in nm .[ cols="<,^,^,^,^,^,^ " , ] these values have been compared to those obtained by fcs performed in a solution of rhodamine 6 g , as described in the experimental section .the obtained autocorrelation function for channel a is plotted in fig .[ fig : fcs_sol ] , as well as the best fit using eq .[ eq : fcs ] .the corresponding values of for both channels have been reported in table [ tab : widths ] .the obtained values of the spatial extend are in the same range as the ones obtained by imaging . 
in spite of the slight difference that could be explained by the uncertainty on the diffusion coefficient of the solution ,fcs measurement remains a valid method to assess the overall size of the confocal volume .moreover , unlike single micro - sphere imaging , this method can be performed in a few seconds , and can be easily automated , as it will be illustrated below .( color online ) example of fcs measurement by channel a performed in a solution of rhodamine 6 g .autocorrelation data have been averaged over 10 measurements of duration 10 s. data are plotted with circles , while the best fit using eq .[ eq : fcs ] is plotted with a solid line .the residues are plotted in the top graph .the corresponding fitting parameters are , nm , nm , , and . ] in order that the system performs fcs measurement at arbitrary locations , it is important that the shape and size of the confocal volume are not affected when the spot is focused out of the optical axis of the objective .the measurement of the confocal volume using fcs analysis of a solution of rhodamine 6 g reported above has been extended over a large number of discrete points within a scanning range of 60 around the center .autocorrelation functions recorded at each point were fitted according to eq .[ eq : fcs ] .the corresponding values of lateral extend are plotted as a color map in fig .[ fig : mapfcs]a .they show a slight variation over the scanning range , the extreme values being found only far from the center .a more pronounced behavior was observed for the average number of molecule ( data not shown ) .the resulting map of molecular brightness , given by the ratio of the intensity to , is reported in fig .[ fig : mapfcs]b .the drop by a factor of 3 at the corners of the scanning field can be explained by the increased value of and to a lesser extent by the degradation of the microscope objective transmission in off - axis conditions .measurements performed on channel b ( data not shown ) show a similar behavior .this confirms the ability of the system to carry out dual spot measurements at the spatial scale of a cell .( color online ) colormaps showing how the performances of the confocal system depend on the spot location within the scanning range .( a ) map of the values of the lateral width of .( b ) map of the molecular brightness for channel a. ] the same strategy was used to characterize the confocal volume of channel a when its plane of focus is changed by acting on the variable telescope .the values of the lateral width and of the molecular brightness are plotted for different axial distances between spots in fig .[ fig : zeffect ] .note that the measurements performed for , i. e. , spot b being the closest to the microscope objective , gave rise to data that can not be fitted properly .this was also the case to a lesser extent for m .this is probably due to a strong deformation of the spot , with makes the assumption of a three - dimensional gaussian shape no longer valid .plot of dependence of lateral width ( left axis ) and molecular brightness ( right axis ) when spot a is moved axially with respect to the nominal focal plane . ]the main original feature of this system is its fully independent dual spot nature .although the presented configuration includes photon counting and temporal correlation analysis , it can of course be extended by implementing dedicated modules , such as lifetime analysis , polarization - resolved excitation and/or collection , etc. 
however , care has to be taken in order to prevent from unwanted cross - talk between channels . indeed , as it was extensively exploited in section [ sec : calibrations ] , the two channels can interact in case of spatial overlap , because no discrimination was possible using two channel of identical spectral features under a continuous wave excitation . in case spatial overlapcan not be avoided , an additional discrimination scheme should be implemented .although biophysical investigations are much beyond the scope of this instrumental article , a couple of measurements are described in this section .they have been performed on living cells and illustrate well the versatility of the setup . because laser scanning confocal imaging relies on a sequential acquisition scheme , high definition imaging with a high repetition ratecan only be performed on a restricted observation area , eliminating therefore any possibility of control of the sample in its entire scale .because the two scanning channels of our system are independent , two confocal acquisition can be performed simultaneously at two different locations in a sample , with identical or different magnification . as illustrated in fig .[ fig : simultaneous_imaging ] , one small area on a sample can be imaged with relatively high definition and frame rate , while the other channel allow to image simultaneously the entire cell ( at the same frame rate , but a lower definition ) , enabling therefore to control the entire cell , and to monitor any drift or perturbing event .( color online ) example of simultaneous imaging .these two images have been acquired simultaneously on the cos7 ( thy1-gfp ) living cell with the same frame rate , channel a ( a ) offering a high definition imaging of a limited area ( a ) , while channel b provides a simultaneous low - definition overview of the sample ( b ) .the red square on image b indicates the area scanned by channel a. scale bars are 10 m . ]the issue of the overall control of the sample is even more relevant in the case of a fcs experiment . indeed , since this later technique relies on the hypothesis of stationarity , any perturbation such as mechanical drift , or passage of unwanted aggregates , may disturb the acquisition and produce erroneous measurements .figure [ fig : fcs_imaging ] is an example of measurement performed on a living cell .first , an image of the cell is recorded ( fig .[ fig : fcs_imaging]a ) . then , channel a is dedicated to a static fcs measurement ( fig . [ fig : fcs_imaging]b ) , while channel b performs simultaneously a continuous image acquisition on an area slightly apart ( fig .[ fig : fcs_imaging]c ) . therefore , fcs data are supported by the additional information of the image sequence that can help to select measurements that are in good agreement with the fcs hypotheses .( color online ) a ) overview of the cos7 ( thy1-gfp ) cell recorded by confocal imaging .scale bar is 10 m .the fcs measurement was then performed by channel a at the location indicated by a cross , while a sequence of control images was recorded by channel b at the location indicated by a square .b ) autocorrelation curve recorded by channel a. c ) extract of the sequence of images recorded by channel b. ] finally , a measurement protocol involving on a genuine dual spot analysis is presented in fig . [fig : sfcs ] . 
as illustrated in the inset of fig .[ fig : sfcs]a , the two measurement volumes are scanned in a periodic fashion with the same angular velocity on the same circular orbit located on the cell membrane , spot a following b with a delay of a quarter of an orbit .an orbit radius of 0.5 m was chosen , as well as a rotation frequency of 1 khz .the four temporal correlations ( , , , and ) are plotted in figs .[ fig : sfcs]b and [ fig : sfcs]c . as it has been detailed in the literature, the temporal correlations measured in scanning fcs present , in addition to the usual decay due to translational diffusion , a modulation with a period given by the scanning frequency , i. e. , 1 ms in the present case .the peaks of correlation are obtained for delays that bring to correspondence photon counts acquired at the same point of the orbit .for the two autocorrelations and , this happens for ms , 2 ms , 3 ms , and so on , as it is clearly visible in fig .[ fig : sfcs]c .because spot is following spot , the first peak of corresponds to a quarter of an orbit , i. e. , is obtained at ms , the following peaks occurring at ms , 2.25 ms , etc .finally , peaks for correlation are measured at ms , 1.75 ms , etc .obtaining scanning fcs data for short delays is a challenging issue in a single spot geometry because the shortest delay is directly given by the fastest scanning period , which is usually limited by mechanical response of the scanning system. without modifying the scanning features , the dual spot scheme that we propose allows to address correlation delays that are significantly shorter .therefore we believe that this approach can provide more accurate measurements . in the present example , although the scanning period is limited to 1 ms , scanning fcs cross correlation data have been obtained for delays down to 0.25 ms .given a scanning period , the limitation for the short delays is now constituted by the crosstalk between the two channels .the full potential of dual spot scanning fcs will be addressed in a dedicated article .( color online ) a ) overview of the cos7 ( thy1-gfp ) cell under study .scale bar is 10 m .the scanning orbit is indicated by a circle .b ) correlation data .correlation is denoted , and so on .note that correlation values for ms are given by the hardware correlator , while the ones for larger delays have been software computed off - line .c ) same as ( b ) , but plotted within a restricted range in a linear timescale . ]a versatile dual spot fcs system has been presented .a complete calibration protocol has been detailed , which allow to control accurately the location , size and shape of the two measurement volumes . a method for separating the contribution of excitation andcollection volume has been proposed .the spot quality appeared to be compatible with fcs measurements in a wide transverse area , and in a few microns apart the nominal focus plane .the versatility of the setup was illustrated by measurements carried out on living cells using two spots in scanning and/or static measurement modes .this project was funded by the french agence nationale de la recherche under contract anr-05-blan-0337 - 02 and by rgion provence alpes cte dazur .the authors are grateful to emmanuel schaub for his advice for setting up the scanning system .
|
a fluorescence correlation spectroscopy ( fcs ) system based on two independent measurement volumes is presented . the optical setup and data acquisition hardware are detailed , as well as a complete protocol to control the location , size and shape of the measurement volumes . a method that allows the excitation and collection efficiency distributions to be monitored independently is proposed . finally , a few examples of measurements that exploit the two spots in static and/or scanning schemes are reported .
|
computer simulation methods have been widely employed in recent years to study the behavior of granular materials . among the numerical techniques , discrete element methods , including _ soft particle molecular dynamics _( md ) , _ event - driven _( ed ) and _ contact dynamics _ ( cd ) , constitute an important class where the material is simulated on the level of particles . in such algorithms the trajectory of each particle is calculated as a result of interaction with other particles , confining boundaries and external fields .the differences between the discrete element methods stem from the way how interactions between the particles are treated , which leads also to different ranges of applicability . in low density granular systems , where interactions are mainly binary collisions ,the event - driven method is an efficient technique .the particles are modeled as perfectly rigid and the contact duration is supposed to be zero .the handling of dense granular systems , where the frequency of collisions is large or long - lasting contacts appear , becomes problematic in ed simulations . in case of dense granular mediathe approach of soft particle molecular dynamics is more favorable and widely used . in md , the time step is usually fixed and the original undeformed shapes of the particles may overlap during the dynamics .these overlaps are interpreted as elastic deformations which generate repulsive restoring forces between the particles .based on this interaction , which is defined in the form of a visco - elastic force law , the stiffness of the particles can be controlled . when the stiffness is increased md simulations become slower since the time step has to be chosen small enough so that the velocities and positions vary as smooth functions of time .the contact dynamics method considers the grains as perfectly rigid .therefore no overlaps between the particles are expected and they interact with each other only at contact points . the contact forces in cd do not stem from visco - elastic force laws but are calculated in terms of constraint conditions ( for more details see sec . [ cd - method ] ) .this method has shown its efficiency in the simulation of dense frictional systems of hard particles .packings of hard particles interacting with repulsive contact forces are extensively used as models of various complex many - body systems , e.g. dense granular materials , glasses , liquids and other random media .jamming in hard - particle packings of granular materials has been the subject of considerable interest recently .furthermore hard - particle packings , and especially hard - sphere packings , have inspired mathematicians and been the source of numerous challenging theoretical problems , from which many are still open .real systems in the laboratory and in nature contain far too large number of particles to model the whole system in computer simulations .due to the limited computer capacity the simulations are often restricted to test a small mesoscopic part of a large system .typically , the studies are focused to a local homogeneous small piece of the material inside the bulk far from the border of the system .therefore simulation methods are required that are able to generate and handle packings of hard particles without side effects of confining walls .the usual simulation methods of dense systems involve confining boxes where the material is compactified by moving pistons or gravity . 
however , the properties of the material differ in the vicinity of walls and corners of the confining cell from those in the bulk far from the walls .the application of walls in computer simulations leads to inhomogeneous systems due to undesired side effects ( e.g. layering effect ) .moreover , the structure of the packings becomes strongly anisotropic in these cases due to the orientation of walls and special direction of the compaction . for studies where such anisotropy is unwanted other type of compaction methodsare needed . in this paper, we present a compaction method where boundary effects are avoided due to exclusion of side walls .this simulation method is based on the contact dynamics algorithm where we applied the concept of the andersen dynamics , which enables us to produce homogeneous granular packings of hard particles with desired internal pressure .the compaction method involves variable volume of the simulation cell with periodic boundary conditions in all directions .this paper is organized as follows .first we present some basic features of cd method in section [ cd - method ] .then , section [ andersen - method ] describes the equations of motion for a system of particles with variable volume . in section [ cd+andersen ]we present a modified version of cd with coupling to an external pressure bath . in section [ results ]we report the results of some test simulations . section [ conclusions ] concludes the paper ._ contact dynamics _ ( cd ) , developed by m. jean and j. j. moreau , is a discrete element method in the sense that the time evolution of the system is treated on the level of individual particles .once the total force and torque acting on the particle is known , the problem is reduced to the integration of newton s equations of motion which can be solved by numerical methods .here we use the implicit first order euler scheme : which gives the change in the position and velocity of the center of mass of the particle with mass after the time step . is chosen so that the relative displacement of adjacent particles during one time step is small compared to the particle size and to the radius of curvature of the contacting surfaces .corresponding equations are used also for the rotational degrees of freedom , describing the time evolution of the orientation and the angular velocity caused by the new total torque acting on the particle .the interesting part of the cd method is how the interaction between the particles are handled . for simplicitywe assume that the particles are noncohesive and dry , we exclude electrostatic and magnetic forces between them and consider only interactions via contact forces .the particles are regarded as _ perfectly rigid _ and the contact forces are calculated in terms of constraint conditions .such constaints are the impenetrability and the no - slip condition , i.e. the contact force has to prevent the overlapping of the particles and the sliding of the contact surfaces .this latter condition is valid only below the coulomb limit of static friction , which states that the tangential component of a contact force can not exceed the normal component times the friction coefficient : if the friction is not strong enough to ensure the no - slip condition the contact will be sliding and the tangential component of the contact force is given by the expression where stands for the tangential component of the relative velocity between the contacting surfaces . 
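a minimal python sketch of the two ingredients just introduced, the implicit euler update and the tangential force of a sliding contact, may help fix the notation; the function names are ours, and the total force passed to the update is the one obtained from the constraints at the end of the step, which is what makes the scheme implicit.

```python
import numpy as np

def euler_step(r, v, total_force, mass, dt):
    """implicit first-order euler update used in cd: the velocity is advanced
    with the force evaluated at t + dt, and the position is advanced with the
    new velocity."""
    v_new = v + total_force / mass * dt
    r_new = r + v_new * dt
    return r_new, v_new

def sliding_friction(f_n, u_t, mu):
    """tangential force of a sliding contact: magnitude mu * f_n, directed
    against the tangential relative velocity u_t of the contacting surfaces."""
    speed = np.linalg.norm(u_t)
    if speed == 0.0:
        return np.zeros_like(u_t)
    return -mu * max(f_n, 0.0) * u_t / speed
```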
in the cd methodthe constraint conditions are imposed on the new configuration at time , i.e. , the unknown contact forces are calculated in a way that the constraints conditions are fulfilled in the new configuration .this is the reason why an implicit time stepping is used . in order to let the system evolve one stepfrom time to one has to determine the total force and torque acting on each particle which may consist of external forces ( like gravity ) and contact forces from neighboring particles .let us suppose that all the unknown contact forces are already determined except for one force between a pair of particles already in contact or with a small gap between them .here we explain briefly how the constraint conditions help to determine the interection between these two particles .a detailed description of the method can be found in .the algorithm starts with the assumption that the contact force we are searching for is zero and checks whether this leads to an overlap of the undeformed shapes of the two particles after one time step .this is done based on the time stepping [ eq .( [ veloc - update ] ) ] : the external forces and other contact forces provide and for both particles thus the new relative velocity of the contacting surfaces can be calculated .here we use the term _ contacting surfaces _ for simplicity thought the two particles are not necessarily in contact .there might be a positive gap between them , which is the length of the shortest line connecting the surfaces of the two particles ( fig .[ fig - schematiccontact ] ) .we will refer to the relative velocity of the endpoints of the line as the relative velocity of the contact and denote the direction of the line by the unit vector . in the limit ofa real contact is zero and becomes the contact normal .negative gap has the meaning of an overlap .the superscript _ free _ in denotes that the relative velocity has been calculated assuming no interaction between the two particles .we use the sign convention that negative normal velocity ( ) means approaching particles .the new value of the gap ( after one time step ) is estimated by the algorithm based on the current gap and the new relative velocity according to the implicit time stepping .if the new gap is positive : then the zero contact force ( no interaction ) is accepted because no contact is formed between the two particles . however, if the estimated new gap is negative then a contact force has to be applied in order to avoid the violation of the constraint conditions .generally , one expects the following relation between the unknown new contact force and the unknown new relative velocity : where is the mass matrix that describes the inertia of the contact , i.e. is the relative acceleration of the contacting surfaces due to the contact force .the mass matrix depends on the shape , mass and moment of inertia of the two particles . on one hand, the interpenetration of the two rigid particles has to be avoided , which gives the following constraint for the normal component of : on the other hand , the tangential component of has to be zero in order to ensure the no - slip condition the required contact force that fulfills eqs .( [ gap - close ] ) and ( [ no - slip ] ) then reads this contact force is acceptable only if it fulfills the coulomb condition [ eq .( [ coulomb - cone ] ) ] .otherwise we can not exploit the non - slip contact assumption . 
in this case , is not zero , the contact slides and the contact force has to be recalculated .( [ general - contact - force ] ) and ( [ gap - close ] ) then provide where the number of unknowns ( components of and ) exceeds the number of equations . in order to determine the contact force one has to solve eq .( [ contact - force-2 ] ) together with eq .( [ tangential - force ] ) .it is recommended to use instead of in eqs .( [ contact - force ] ) and ( [ contact - force-2 ] ) .the gap size should always be non - negative and using apparently makes no difference .however , due to the inaccuracy of the calculations small overlaps can be created between neighboring particles .if instead of is used then these overlaps are eliminated in the next time step by imposing larger repulsive contact forces to satisfy eq .( [ gap - close ] ) , which pumps kinetic energy into the system . using instead of this artifact on the cost that an already existing overlap is not removed ( which then serves to check the inaccuracies of the simulation ) , only its further growth is prevented . regarding the above mentioned points , we rewrite the equations ( [ contact - force ] ) and ( [ contact - force-2 ] ) as a flowchart of the single contact force calculation is given in fig .[ fig - flowchart ] .so far we have explained only how the cd algorithm determines a single existing or incipient contact , based on the assumption that all the surrounding contact forces are known .however , in a dense granular media , many particles contact simultaneously and form a contact network . in this case, a contact force can not be evaluated locally , since it depends on the adjacent contact forces which are also unknown . to find a globally consistent force network at each time step , an _ iterative scheme _is applied in cd . at each iteration step , all contacts are chosen one by one and the force at the contact is updated according to the scheme shown in fig .[ fig - flowchart ] .the update is sequential , i.e. , the freshly updated contact force is stored immediately as the current force and then a new contact is chosen for the next update .after one iteration step , constraint conditions are not necessarily fulfilled for each contact . in order to find a global solution the iteration process has to be repeated several times until the resulting force network converges .the convergence of the iteration process is smooth , i.e. , the precision of the solution increases with the number of iterations .higher provides more precise solution but also requires more computational effort .the cd method can be used with a constant number of iterations in subsequent time steps or with a convergence criterion that prescribes a given precision to the force calculation . in this latter case ,the number of iterations varies from time step to time step .after the new force network is determined with a prescribed precision , the system evolves at the end of the time step according to the time - stepping scheme described at the beginning of this section .it is important to note that choosing small and/or large time step causes systematic errors of the force calculation which lead to a spurious soft particle behavior in spite of the original assumption of perfect rigidity . to conclude this section , we present briefly the scheme of the solver .in the simulation of granular materials , it is often desirable to investigate systems which are not surrounded by walls and to apply periodic boundary conditions in all directions. 
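before turning to periodic cells, the per - contact force update and the iterative sweep just described can be condensed into a short sketch; the 2d normal / tangential decomposition, the choice of closing the gap exactly, and the callback used to obtain the gap, free velocity and mass matrix of a contact are simplifying assumptions of ours rather than a transcription of the published algorithm.

```python
import numpy as np

def single_contact_force(gap, u_free, M, mu, dt):
    """one force update in the spirit of the scheme described above, in 2d.
    gap    : current surface-to-surface distance (an overlap would be negative)
    u_free : relative contact velocity without this force,
             u_free[0] normal (negative = approaching), u_free[1] tangential
    M      : 2x2 effective mass matrix of the contact
    """
    g = max(gap, 0.0)                          # do not try to undo old overlaps
    if g + u_free[0] * dt > 0.0:
        return np.zeros(2)                     # no contact forms: zero force
    # sticking guess: u_new = u_free + M^-1 f dt, with u_new = (-g/dt, 0)
    u_target = np.array([-g / dt, 0.0])
    f = M @ (u_target - u_free) / dt
    if abs(f[1]) <= mu * f[0]:
        return f                               # coulomb condition holds: sticking
    # sliding: put f_t on the friction cone, opposing the slip direction,
    # and keep only the normal (gap-closing) constraint
    s = -np.sign(u_free[1]) if u_free[1] != 0.0 else -np.sign(f[1])
    Minv = np.linalg.inv(M)
    a = Minv[0, 0] + s * mu * Minv[0, 1]
    f_n = max((u_target[0] - u_free[0]) / (dt * a), 0.0)
    return np.array([f_n, s * mu * f_n])

def sweep(contacts, forces, contact_state, n_iterations, mu, dt):
    """gauss-seidel-like iteration: contacts are visited one by one and the
    freshly updated force is used immediately by the following contacts.
    forces        : dict mapping contact id -> current force
    contact_state : callback returning (gap, u_free, M) for a contact id,
                    evaluated with the current force table."""
    for _ in range(n_iterations):
        for c in contacts:
            forces[c] = single_contact_force(*contact_state(c, forces), mu, dt)
    return forces
```

in practice the number of sweeps n_iterations, or a convergence criterion on the force residuals, controls the trade - off between accuracy and cost discussed later in the text.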
it is a nice feature of periodic boundary conditions that they make points of the space equivalent , the boundary effects are eliminated . that way the bulk properties of the materialcan be studied more easily .however , the application of an external pressure becomes problematic since the total volume is fixed and the system can not be compressed by pistons or moving walls . in order to overcome this problem , but at the same time keep the advantageous periodic boundaries , andersen proposed a method for molecular dynamics simulations .here , we recall his method briefly as its basic ideas will be used later on in this paper . according to the andersen methodthe boundaries are still periodically connected in all directions and no walls are present , but the volume of the system is a dynamical variable which evolves in time due to interaction with an external pressure bath .when a system of atoms is compressed or expanded it is done in an isotropic and homogeneous way : the distances between the atoms are rescaled by the same factor regardless of the relative or absolute positions .let us give the equations of motion of a system with particle positions in a cubic volume ( each component of is between and ) : eq .( [ equation - motion-1 ] ) describes the change in the position .the first term on the right is the usual one , the momentum divided by the mass of the particle .the last term is the extension by the andersen method that rescales the position according to the relative volume change .( [ equation - motion-2 ] ) provides the time evolution of the momentum due to two terms .the first one is the usual total force acting on the particle which originates from the interaction with other particles and/or from external fields .the additional term leads to further acceleration of the particle if the volume is changing .e.g. , if the system is compressed the kinetic energy of the particles is increased due to the work done by the compression .the energy input is achieved by rescaling all particle momenta regardless of their positions .this is in contrast to usual pistons where the energy enters at the boundary .( [ equation - motion-3 ] ) can be interpreted as newton s second law that governs the change of the volume .it describes the time evolution of an imaginary piston which has the inertia parameter and is driven by the generalized force .this latter is the pressure difference between the external pressure imposed by the pressure bath and the internal pressure of the system .the pressure difference drives the system towards the external pressure , the sensitivity of the system to this driving force is controlled by the inertia parameter . in the limit of infinite inertia together with the initial condition the volume of the system remains constant and eqs .( [ equation - motion-1 ] ) and ( [ equation - motion-2 ] ) correspond to the usual newtonian dynamics of the particles . in order to get more insight into the andersen dynamicslet us consider a simple example of a system of non - interacting particles with all .initially , the velocities and the volume velocity are set to zero . 
because the internal pressure is zerothe system with finite inertia and under external pressure will start contracting according to eq .( [ equation - motion-3 ] ) .the acceleration of the particles [ eq .( [ equation - motion-2 ] ) ] remains zero during the time evolution ; one might say that the particles are standing there all the time .however , the distances between them are decreasing because of the contraction of the `` world '' around them .this is caused by the second term on the right hand side of eq .( [ equation - motion-1 ] ) while the first term remains zero .this suggests the picture of an imaginary background membrane that contracts or dilates homogeneously together with the volume and carries the particles along .the velocity of this background at position is given by where is the dilation rate defined by the rate of the relative change in the system size : and is the dimension of the system .then the right hand side of eq .( [ equation - motion-1 ] ) can be interpreted as the sum of two velocities : the second one is the velocity of the background at the position of the particle and the first one is the intrinsic velocity of the particle measured compared to the background .the sum of these two forces gives the changing rate of the absolute position . in the rest of the paperthe velocity will refer always to the intrinsic velocity .we rewrite eq .( [ equation - motion-1 ] ) in the following form next we turn to the modelling of granular systems .our goal is to achieve static granular packings that are compressed from a loose gas - like state . hereagain it is advantageous to exclude confining walls and in order to apply pressure and achieve contraction of the volume we will use the concept of the andersen method .however , the equations of motion will be slightly changed in order to make them suit better to our goals . in granular materials the interaction between the particlesare dissipative . when the material is poured into a container or is compressed by a piston the particles gain kinetic energy due to the work done by gravity or the pistonall this energy has to be dissipated ( turned into heat ) by the interactions between the particles before the material can settle into a static dense packing of the particles .this relaxation process requires a massive computational effort when large packings are modeled in computer simulations .one encounters the same problem if the andersen dynamics is applied straight to granular systems .the relaxation time can be reduced if the second term on the right hand side of eq .( [ equation - motion-2 ] ) is omitted , because then the total amount of energy pumped into the system is reduced . in this casethe particles are accelerated only by the forces but they receive no additional energy due to the decreasing volume . thus the following equation will be applied for granular compaction here which results in a more effective relaxation rather than eq .( [ equation - motion-2 ] ) .this change is advantageous also from the point of view of momentum conservation .if the system is compactified by using eq .( [ equation - motion-2 ] ) from an initial condition where the total momentum of the particles is non - zero , then this momentum will be increased inverse proportionally to the size of the system .thus the total momentum e.g. 
due to initial random fluctuations is magnified which can lead to non - negligible overall rigid body motion of the final static packing .this is in contrast to eq .( [ equation - motion - new-2 ] ) which provides momentum conservation in the absence of external fields . concerning the equation that describes the time evolution of the system size we find it more convenient to control instead of .this is actually not an important change and leads to very similar dynamics .our third equation reads the equations of motion ( [ equation - motion - new-1 ] ) , ( [ equation - motion - new-2 ] ) and ( [ equation - motion - new-3 ] ) describe an effective compaction dynamics for granular systems , they are able to provide static packings under the desired pressure and if they are restricted to the limit of we receive back the classical newtonian dynamics . of course , in order to close the equations we need to define interactions between the particles .the interparticle forces provide in eq .( [ equation - motion - new-2 ] ) and they are also needed to evaluate the inner pressure .the stress tensor is not apriori spherical in granular materials .the average of the system is determined by the interparticle forces and the particle velocities as where and denote the number of particles and the number of contacts , respectively .if two contacting particles at contact are labelled by and , then is the force exerted on particle by particle , and the vector is pointing from the center of mass of particle to that of particle where periodic boundary conditions and nearest image neighbors are taken into account . thus is the minimum distance between particles and : where is an integer - component translation vector . the inner pressure is then given by the trace of the stress tensor divided by the dimension of the system , \label{inner - pressure}\ ] ] which has the meaning of an average normal stress .the implementation of the above method in computer simulations is straightforward if the interparticle forces are functions of the positions and velocities of the particles , e.g. , in soft particle md simulations .the implementation is less trivial for the case of the contact dynamics method where interparticle forces are constraint forces .we devote the next section to this problem .in this section we present a modified version of the contact dynamics algorithm which enables us to perform cd simulations when the system is in contact with an external pressure bath . according to sec .[ andersen - method ] let us suppose that the system is subjected to a constant external pressure and its time evolution is given by eqs .( [ equation - motion - new-1 ] ) , ( [ equation - motion - new-2 ] ) and ( [ equation - motion - new-3 ] ) . herewe will follow the description of the cd method given in sec .[ cd - method ] and discuss the required modifications .once the force calculation process is completed , the implicit euler integration can proceed one time step further . now the time stepping has to involve also the equations of motion of the system size . by discretizing the eqs .( [ dilation - rate ] ) and ( [ equation - motion - new-3 ] ) in the same implicit manner as for the particles [ eqs .( [ veloc - update ] ) and ( [ pos - update ] ) ] we obtain the new values of the system size and the dilation rate : \label{system - update}\ ] ] where the `` velocity '' and the `` position '' are updated by the new `` force '' and by the new `` velocity '' , respectively . 
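a sketch of the pieces needed for the pressure - bath coupling, the inner pressure and the update of the cell, is given below; since the equations of motion are only partly reproduced above, the piston - like law used here ( q dphi/dt = p_inner - p_ext and dl/dt = phi l ) is our reading of the scheme and should be treated as an assumption.

```python
import numpy as np

def inner_pressure(forces, branch_vectors, masses, velocities, volume, dim=2):
    """average inner pressure p = tr(sigma) / dim from the force/velocity
    expression quoted above:
    sigma = (1/V) [ sum_c f_c (x) l_c  +  sum_i m_i v_i (x) v_i ]."""
    sigma = np.zeros((dim, dim))
    for f, l in zip(forces, branch_vectors):
        sigma += np.outer(f, l)
    for m, v in zip(masses, velocities):
        sigma += m * np.outer(v, v)
    return np.trace(sigma) / (volume * dim)

def update_cell(L, phi, p_inner, p_ext, Q, dt):
    """implicit update of the dilation rate phi and the linear cell size L,
    assuming the piston-like law Q dphi/dt = p_inner - p_ext and dL/dt = phi L
    (our guess at the form of the equations, not the original ones)."""
    phi_new = phi + (p_inner - p_ext) / Q * dt
    L_new = L * (1.0 + phi_new * dt)
    return L_new, phi_new

def rescale_positions(pos, L_old, L_new):
    """carry the particles along with the homogeneously dilating background."""
    return pos * (L_new / L_old)
```

with p_inner below the external pressure the dilation rate turns negative and the cell contracts, which matches the qualitative behaviour described for the compaction runs.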
the discretized equations governing the translational degrees of freedom of the particles [ eqs . ( [ veloc - update ] ) and ( [ pos - update ] ) ]are rewritten according to eqs .( [ equation - motion - new-1 ] ) and ( [ equation - motion - new-2 ] ) in the following form : + \vec v_i({t+\delta t } ) \delta t. \label{particles - pos - update}\ ] ] the time stepping for the rotational degrees of freedom remains unchanged because the dilation ( contraction ) of the system has no direct effect on the rotation of the particles . in the cd method , as we explained in sec .[ cd - method ] the particles are perfectly rigid and are interacting with constraint forces , i.e. those forces are chosen between contacting particles that are needed to fulfill the constraint conditions .e.g. the contact force has to prevent the interpenetration of the contacting surfaces . if an external pressure bath is used then the calculation of the constraint forces has to be reconsidered because the relative velocity of the contacting surfaces is influenced by the variable volume .when the system is dilating or contracting , particles gain additional relative velocities compared to each other . for a pair of particles ,this velocity is where is the vector connecting the two centers of mass . the same change appears in the relative velocity of the contacting surfaces as the size of the particles is kept fixed .if this change led to interpenetration then it has to be compensated by a larger contact force .it may also happen that existing contacts open up due to expansion of the system resulting in zero interaction force for those pair of particles . in the calculation of a single contact force, the relative velocity ( i.e. the contribution of the changing system size ) has to be added to .the new relative velocity of the contact assuming no interaction between the two particles is calculated here in the same way as in sec .[ cd - method ] , i.e. , based on the intrinsic velocities of the particles .thus the effect of the dilation / contraction of the system is not taken into account in .therefore one has to replace with ( ) in all equations of sec .[ cd - method ] in order to impose the constraint conditions properly .let us first suppose that the system has infinite inertia ( ) thus the dilation rate is constant . in this casethe modified equations of the force update ( containing already the term ) provide the right constraint forces at the end of the iteration process .these forces will alter the relative velocity ( ) in such a way that the prescribed constrain conditions will be fulfilled in the new configuration at .more consideration is needed if finite inertia is used and the dilation rate is time - dependent .the problem is that in order to calculate the proper contact force one has to know the new dilation rate . the new dilation rate , however , depends on the new value of the inner pressure [ eq . ( [ dilation - rate - update ] ) ]which , in turn , depends on the new value of the contact forces .this problem can be solved by incorporating and into the iteration process .instead of using the old values and during the iteration , we always use the expected values and . these represent our best guess for the new dilation rate and for the new inner pressure . is defined based on the current values of the contact forces during the force iteration . whenever a contact force is updatedwe recalculate the expected inner pressure . 
with the help of the current contact forces we can determine the total forces acting on the particles and then , following eq .( [ particles - vel - update ] ) we obtain also the expected new velocities of the particles . the expected inner pressure , according to eq .( [ inner - pressure ] ) , then reads : .\label{expected - inner - pressure}\ ] ] of course , there is no need to recalculate all the terms in eq .( [ expected - inner - pressure ] ) in order to update .when the force at a single contact is changed it affects only three terms : one due to the force itself and two due to the velocities of the contacting particles . in order to save computational time , only the differences in these three terms have to be taken into account when is updated .following eq .( [ dilation - rate - update ] ) , we obtain also the corresponding value of the expected dilation rate : this way , and are updated many times between two consecutive time steps ( in fact they are updated times ) but in turn and are always consistent with the current system of the contact forces . at the end of the iteration process and not just an approximation of the new inner pressure and new dilation rate but they are equal to and , respectively . to complete the algorithm , we list here also the equations that are used for the force calculation of a single contact . the inequality ( [ positive - gap ] )is replaced by i.e. there is no interaction between the two particles if the inequality is satisfied .otherwise we need a contact force .the force , previously given by eq .( [ contact - force - gpos ] ) , that is required by a sticking contact is this force again has to be recalculated according to a sliding contact if in eq .( [ contact - force - gpos - new ] ) violates the coulomb condition : which replaces the original equation ( [ contact - force-2-gpos ] ) . except the above changes, the cd algorithm remains the same . in each timestep the same iteration process is applied in order to reach convergence of the contact forces .after the iteration process we apply eqs .( [ dilation - rate - update])-([particles - pos - update ] ) to complete the time step .the scheme of the solver for the modified version of cd can be presented as in the next section we will present some simulations with the above method . we will test the algorithm and analyse the properties of the resulting packings . as an alternative to this fully implicit method we considered another possibility to discretize the eqs .( [ dilation - rate])-([equation - motion - new-3 ] ) in the spirit of the contact dynamics and , at the same time , impose the constraint conditions on the new configuration .the main difference is that the new value of the inner pressure is determined based on the old velocities and not on the new ones , while the contribution of the forces are taken into account in the same way , i.e. the new contact forces are used in eq .( [ inner - pressure ] ) .therefore this version of the method is only partially implicit , however , the constraint conditions and the force calculation [ eqs .( [ gap - check - new])-([contact - force-2-gpos - new ] ) ] can be applied in the same way .only , the expected values and has to be changed consistently with the new pressure : \label{expected - inner - pressure - notfullyimplicit}\ ] ] and then this modified is used to determine the expected dilation rate with the help of eq .( [ expected - dilation - rate ] ) . 
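the incremental bookkeeping just described, where a single changed contact force touches only three terms of the pressure sum, might look as follows; the virial - like form of the stress used earlier is assumed, and the argument names are ours.

```python
import numpy as np

def update_expected_pressure(p_exp, volume, dim,
                             f_old, f_new, branch,
                             m_i, v_i_old, v_i_new,
                             m_j, v_j_old, v_j_new):
    """incremental update of the expected inner pressure when the force at one
    contact changes: only the contact term and the kinetic terms of the two
    touching particles are recomputed, instead of the whole sum."""
    d_trace = np.dot(f_new - f_old, branch)                          # contact term
    d_trace += m_i * (np.dot(v_i_new, v_i_new) - np.dot(v_i_old, v_i_old))
    d_trace += m_j * (np.dot(v_j_new, v_j_new) - np.dot(v_j_old, v_j_old))
    return p_exp + d_trace / (dim * volume)
```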
again here , and are calculated anew after each force update during the iteration process and their last values equal the new pressure and the new dilation rate .we implemented and tested this second version of the method and found that the constraint conditions are handled here also with the same level of accuracy .although the second method is perhaps less transparent than the fully implicit version , for practical applications it seems to be more useful .first , the second version of the method is easier to implement into a program code , second , it turned out to be faster by in our test simulations .the improvement of the computational speed originates from the smaller number of the operations .one does not have to handle the expected particle velocities and the recalculation of is more simple as the change of a contact force affects only one term in the eq .( [ expected - inner - pressure - notfullyimplicit ] ) .we perform numerical simulations using the cd algorithm with the fully implicit pressure bath scheme of section [ cd+andersen ] .this algorithm has been used to study mechanical properties of granular packings in response to local perturbations . here , the main goal is to show that the algorithm works indeed in practical applications and to test the method from several aspects ; we investigate how the simulation parameters influence the required cpu time and the accuracy of the simulation .such parameters are the external pressure , the inertia parameter and the computational parameters , like the number of iterations per time step and the length of the time step .we also analyse the properties of the resulting packings . here , we report only simulations of two - dimensional systems of disks , where the behavior is very similar to that we found for spherical particles in three - dimensional systems .length parameters , the time and the two - dimensional mass density of the particles are measured in arbitrary units of , and , respectively .the samples are polydisperse and the disk radii are distributed uniformly between 0.8 and 1.2 , thus the average grain radius is 1 .the material of the grains has unit density and the masses of the disks are proportional to their areas . in this sectionwe have one reference system that contains disks .the interparticle friction coefficient is set to .the value of other parameters are : , , ( this latter is expressed in units of ) and the inertia ( in units of ) . throughout this section, we either use these reference parameters or the modified values will be given explicitly .usually , we will vary only one parameter to check its effect while other parameters are kept fixed at their reference values . in each simulation , we start with a dilute sample of nonoverlapping rigid disks randomly distributed in a two dimensional square - shaped cell .no confining walls are used according to the boundary conditions specified in sec .[ cd+andersen ] .gravity and the initial dilation rate are set to zero .due to coupling to the external pressure bath the dilute system starts shrinking . 
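for concreteness, one way of generating such an initial dilute, non - overlapping configuration is sketched below; the target packing fraction and the simple rejection - sampling strategy are our own choices and are not specified in the text.

```python
import numpy as np

def random_dilute_sample(n=1000, r_min=0.8, r_max=1.2, density=1.0,
                         packing_fraction=0.2, seed=0, max_tries=10**6):
    """dilute, non-overlapping 2d sample of polydisperse disks in a periodic
    square cell, in the spirit of the initial configurations described above."""
    rng = np.random.default_rng(seed)
    radii = rng.uniform(r_min, r_max, n)
    masses = density * np.pi * radii**2            # 2d masses ~ disk areas
    L = np.sqrt(np.sum(np.pi * radii**2) / packing_fraction)
    pos = np.empty((n, 2))
    placed, tries = 0, 0
    while placed < n and tries < max_tries:
        tries += 1
        p = rng.uniform(0.0, L, 2)
        d = pos[:placed] - p
        d -= L * np.round(d / L)                   # minimum-image convention
        if placed == 0 or np.all(np.hypot(d[:, 0], d[:, 1])
                                 > radii[:placed] + radii[placed]):
            pos[placed] = p
            placed += 1
    return pos, radii, masses, L
```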
as the size of the cell decreases ,particles collide , dissipate energy and after a while a contact force network is formed between touching particles in order to avoid interpenetrations .the contact forces build up the inner pressure which inhibits further contraction of the system .finally , a static configuration is reached in which equals and mechanical equilibrium is provided for each particle .technically , we finish the simulation when the system is close enough to the equilibrium state : we apply a convergence threshold for the mean velocity and mean acceleration of the particles ( which are measured in units and , respectively ) . only if both and become smaller than the threshold we regard the system as relaxed and stop the simulation .the typical time evolution can be seen in fig .[ fig - timeevolutions ] where we show the compaction process in the case of the reference system . fig .[ fig - timeevolutions](left ) implies that the magnitude of grows linearly in the beginning when the inner pressure is close to zero .the negative value of the dilation rate indicates contraction which becomes slower after the particles build up the inner pressure [ fig .[ fig - timeevolutions](middle ) ] .the fluctuations in are due to collisions of the particles . in the final stage ofthe compression goes to zero , converges to the external pressure and the size of the system reaches its final value [ fig .[ fig - timeevolutions](right ) ] .( left ) the inner pressure ( middle ) and the system size ( right ) during the compression of a 2d polydisperse sample . the data shown here were recorded in the reference system specified in the text . ]next we investigate how the required cpu time of the simulation is affected by the various parameters .all simulations are performed with a processor intel(r ) core(tm)2 cpu t7200 @ 2.00ghz and the cpu time is measured in seconds .figure [ fig - cputime ] reveals that the variation of , , and have direct influence on the required cpu time .the final packing is achieved with less computational expenses if larger , larger or smaller is used .the role of system inertia is more complicated . reflects the sensitivity of the system to the pressure difference .if the level of the sensitivity is too small or too large , the simulation becomes inefficient .it is advantageous to choose the inertia near to its optimal value which depends on the specific system ( e.g. on the number and mass of the particles ) . regarding the efficiency of the computer simulation , not only the computational expenses play an important role but the accuracy of the simulation is also essential . herewe use the overlaps of the particles as a measure of the inaccuracy of the simulation ( see sec . [ cd - method ] ) . in an ideal casethere would be no overlaps between perfectly rigid particles .[ fig - overlaps ] shows the mean overlaps measured in the final packings .it can be seen for the parameters , and that the reduction of the computational expenses at the same time leads also to the reduction of the accuracy of the simulation . 
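the termination test and the overlap diagnostic used above are simple to state in code; a sketch follows, with threshold values left as arguments since the text does not report them.

```python
import numpy as np

def is_relaxed(velocities, accelerations, v_tol, a_tol):
    """termination test: the packing is considered static once both the mean
    particle speed and the mean acceleration drop below given thresholds."""
    return (np.mean(np.linalg.norm(velocities, axis=1)) < v_tol and
            np.mean(np.linalg.norm(accelerations, axis=1)) < a_tol)

def mean_overlap(pos, radii, L):
    """mean interpenetration depth of contacting disk pairs in a periodic
    square cell of linear size L, used as a measure of the numerical error
    of the rigid-particle simulation (o(n^2) pair loop for clarity)."""
    overlaps = []
    for i in range(len(radii)):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                   # minimum-image convention
        dist = np.hypot(d[:, 0], d[:, 1])
        pen = radii[i] + radii[i + 1:] - dist
        overlaps.extend(pen[pen > 0.0])
    return np.mean(overlaps) if overlaps else 0.0
```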
in fig .[ fig - cputime - overlap](left ) we plot the cpu time versus the mean overlap for these three parameters .the data points collapse approximately on the same master curve which tells us that the computational expenses are determined basically by the desired accuracy ; smaller errors require more computations .the efficiency of the simulation is approximately independent of the parameters , and .the situation is , however , different for the inertia of the system .first , it has relatively small effect on the accuracy of the simulation ( fig . [ fig - overlaps ] ) .variation of by orders of magnitude could hardly change the mean overlap of the particles .second , affects strongly the efficiency of the simulation which is shown clearly by fig .[ fig - cputime - overlap](right ) .if is varied then larger computational expense is not necessarily accompanied with smaller errors .in fact , in the whole range of studied here , the fastest simulation turned out to be the most accurate one [ open circle in fig .[ fig - cputime - overlap](right ) ] . and [ fig - overlaps ] .these parameters are the external pressure , the number of iterations per time step , the length of the time step ( left ) and the inertia of the system ( right ) .the open circle on the right indicates the most efficient simulation we could achieve by controlling the inertia .,title="fig : " ] and [ fig - overlaps ] .these parameters are the external pressure , the number of iterations per time step , the length of the time step ( left ) and the inertia of the system ( right ) .the open circle on the right indicates the most efficient simulation we could achieve by controlling the inertia .,title="fig : " ] next we turn to the question whether the parameters of the simulation used in the compaction process influence the physical properties of the final packing .there are many ways to characterize static packings of disks . herewe test only one quantity , the frequently used volume fraction .the volume fraction gives the ratio between the total volume ( total area in 2d ) of grains and the volume ( area ) of the system .[ fig - volumefraction ] shows the volume fraction of the same packings that were studied already in fig .[ fig - overlaps ] .it can be seen that the volume fraction remains approximately unchanged under the variation of the four parameters , , and .this is except for one data point for large time step , where the simulation is very inaccurate .the corresponding mean overlap ( fig .[ fig - overlaps ] ) is comparable to the typical size of the particles which is the reason why the volume fraction appears to be much larger .finally , we investigate the inner structure of the resulting random packings . for that, we study larger samples with particles , otherwise , the default parameters are used during the compaction .to suppress random fluctuations , we produce different systems and all quantities reported hereafter represent average values over these systems . in fig .[ fig - contactdensity](left ) we study the angular distribution of the contact normals and find that it is very close to uniform .however , there is a small but definite deviation ( around ) : the density of the contact normals are slightly larger parallel to the periodic boundaries . 
in this sensethe packing is not completely isotropic .although the effect is very small , the orientation of the boundaries can be observed also in such local quantities like the direction of the contacts .disks in both figures.,title="fig : " ] disks in both figures.,title="fig : " ] in connection to the question of the isotropy we checked also the global stress tensor . in the original frame reads this stress is isotropic with good approximation .the diagonal entries are close to which equals while the off - diagonal elements are approximately zero .compared to the unit matrix , the elements deviate around of .the final packings are expected to be homogeneous as all points of the space are handled equivalently by the compaction method .apart from random fluctuations we do not observe any inhomogeneity in our test systems .as an example , we show the spatial distribution of the contacts in fig .[ fig - contactdensity](right ) , where the density is approximately constant .in this work we have proposed and tested a simulation method to produce homogeneous random packings in the absence of confining walls .we combined the contact dynamics algorithm with a modified version of the andersen method to handle granular systems which are in contact with an external pressure bath .our main concern was to discuss how constraint conditions can be applied to determine the interaction between the particles in an andersen - type of dynamics .we have presented the results of some numerical tests and discussed the effect of the main parameters on the efficiency of the simulations and on the physical properties of the final packings .we restricted our study to the simple case where we allow only spherical strain of the system in order to achieve the desired pressure .however , the method can be generalized to apply other type of constraints to the stress tensor and , consequently , to allow more general strain deformations where shape as well as size of the simulation cell can be varied .we acknowledge support by grants no .otka t049403 , no .otka pd073172 and bolyai fellowship of has .p. a. cundall , o. d. l. strack , a discrete numerical model for granular assemblies , gotechnique 29 , 47 ( 1979 ) .l. e. silbert , d. ertas , g. s. grest , t. c. halsey , d. levine , geometry of frictionless and frictional sphere packings , phys .e 65 , 031304 ( 2002 ) .d. c. rapaport , the event scheduling problem in molecular dynamic simulation , j. comp34 , 184 ( 1980 ) .o. r. walton , r. l. braun , viscosity , granular - temperature , and stress calculations for shearing assemblies of inelastic , frictional disks , journal of rheology 30 , 949 ( 1986 ) .m. jean , j. j. moreau , unilaterality and dry friction in the dynamics of rigid body collections , in proc . of contact mechanics intern .symposium , lausanne , switzerland , 1992 , edited by a. curnier ( presses polytechniques et universitaires romandes , 1992 ) , pp .j. j. moreau , some numerical - methods in multibody dynamics - application to granular - materials , eur .j. mech . a - solids 13 , 93 ( 1994 ) .m. jean , the non - smooth contact dynamics method , comput .methods appl .177 , 235 ( 1999 ) .l. brendel , t. unger , d. e. wolf , contact dynamics for beginners , in the physics of granular media ( wiley - vch , weinheim , 2004 ) , pp .325 - 343 .p. k. haff , grain flow as a fluid - mechanical phenomenon , j. fluid mech .134 , 401 ( 1983 ) .s. mcnamara , w. r. young , inelastic collapse in two dimensions , phys .e 50 , r28-r31 ( 1994 ) .a. 
mehta , granular matter ( springer - verlag , new york , 1994 ) .r. zallen , the physics of amorphous solids ( wiley , new york , 1983 ) .j. p. hansen , i. r. mcdonald , theory of simple liquids ( academic press , new york , 1986 ) .s. torquato , random heterogeneous materials ( springer - verlag , new york , 2002 ) .a. j. liu , s. r. nagel , jamming is not just cool any more , nature 396 , 21 ( 1998 ) .g. combe , j. n. roux , strain versus stress in a model granular material : a devil s staircase , phys .85 , 3628 ( 2000 ) .t. aste , d. weaire , the pursuit of perfect packing ( iop publishing , new york , 2000 ) .h. c. andersen , molecular dynamics simulations at constant pressure and/or temperature , j. chem .72 , 2384 ( 1980 ) .t. unger , l. brendel , d. e. wolf , j. kertsz , elastic behavior in contact dynamics of rigid particles , phys .e 65 , 061305 ( 2002 ) .j. christoffersen , m. m. mehrabadi , s. nemat - nasser , a micromechanical description of granular material behavior , j. appl .48 , 339 ( 1981 ) .m. r. shaebani , t. unger , j. kertsz , unjamming of granular packings due to local perturbations : stability and decay of displacements , phys .e 76 , 030301(r ) ( 2007 ) .m. r. shaebani , t. unger , j. kertsz , unjamming due to local perturbations in granular packings with and without gravity , to be submitted to phys . rev .e. ( 2008 ) .a. kolb , b. dunweg , optimized constant pressure stochastic dynamics , j. chem .111 , 4453 ( 1999 ) .m. parrinello , a. rahman , crystal structure and pair potentials : a molecular dynamics study , phys .45 , 1196 ( 1980 ) .m. p. allen , d. j. tildesley , computer simulation of liquids ( oxford university press , new york , 1996 ) .
|
the contact dynamics method ( cd ) is an efficient simulation technique for dense granular media , in which unilateral and frictional contact problems for a large number of rigid bodies have to be solved . in this paper we present a modified version of contact dynamics to generate homogeneous random packings of rigid grains . cd is coupled to an external pressure bath , which allows the size of a periodically repeated cell to vary . we follow the concept of andersen dynamics and show how it can be applied within the framework of the contact dynamics method . the main challenge is to handle the interparticle interactions properly , which are based on constraint forces in cd . we implement the proposed algorithm , perform test simulations and investigate the properties of the final packings . keywords : granular material , nonsmooth contact dynamics , pressure bath , homogeneous compaction , jamming , random granular packing . pacs : 45.70.-n , 45.70.cc , 02.70.-c , 45.10.-b
|
making detailed measurements of the temperature structure of the solar upper atmosphere is fundamental to constraining the coronal heating problem .recent work has suggested that many structures in the corona have relatively narrow distributions of temperatures ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?this result is difficult to reconcile with the parker nanoflare model of coronal heating .all theoretical calculations and numerical simulations indicate that magnetic reconnection occurs on spatial scales that are much smaller than can be observed with current instrumentation .this implies that observed temperature distributions should be broad ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . unfortunately , determining the temperature structure of the corona from remote sensing observations is difficult . in principal , measurements of individual emission line intensitiesshould yield this information .the intensity , however , is a convolution of the temperature , density , and geometry along the line of sight and such observations must be inverted to yield the differential emission measure distribution ( dem ) .since the inversion is ill posed it is nt clear what to make of the solution . in their classic paper `` consequently , in the derivation of thermal structure from spectral data , observational errors are always magnified and often to such an extent that the solution becomes meaningless . '' the inverse problem can be regularized so that the solution is less sensitive to error but this introduces additional assumptions that may have no physical basis . in this paperwe introduce the application of sparse bayesian inference to the differential emission measure inversion problem . herewe are attempting to infer the temperature structure of the atmosphere through observations of optically thin emission lines , where is the observed intensity , is the relevant plasma emissivity , which is assumed to be known , and is the differential emission measure ( dem ) , which is a function of the electron density and path length along the line of sight . as has been done in many previous studies , we assume that the dem can be represented by a sum of simple basis functions . motivated by the `` relevance vector machine , '' a bayesian regression algorithm developed by , we adopt a prior that encodes a preference for solutions that utilize a minimum number of basis function .the important implication of this assumption is that the complexity of the inferred temperature distribution is determined primarily by the observations and their statistical significance and not by ad hoc assumptions about the solution . to demonstrate the efficacy of this approach we have constructed a test library of 40 dems that cover the range of what we expect to observe in the solar corona . for each distributionwe estimate the intensity and statistical uncertainty for a number of emission lines using the effective areas of the extreme ultraviolet imaging spectrometer ( eis , ) on the _ hinode _ mission .these intensities are used to attempt to recover the input dem .we show that our method outperforms another bayesian dem solver ( mcmc by ) , which assumes an uninformative prior for the weights .this paper is structured in the following way .we first consider the application of our bayesian framework to a linear `` toy problem '' that closely follows example regression problems that are often found in the literature ( e.g. , * ? ? 
?this allows us to make a gentle introduction of the notation and to compare our approach with other established methods .we then consider the application of our approach to the dem inversion problem .in standard linear regression problems we wish to fit a set of observed data points ( ) to some smooth function with free parameters .we assume that the modeled data points ( ) can be written as the linear superposition of a set of specified basis functions and our task is to find the weights ( ) that `` best fit '' the data . in this paperwe will assume that the basis functions are simple gaussians \ ] ] with a fixed width .anticipating the dem problem we will not assume that the positions of the basis functions ( ) lie on the available data points but are evenly spaced on a fixed domain .for a given set of data the optimal weights can be found by minimizing the familiar expression for which the optimal weights can be found by gradient descent or levenberg - marquardt ( e.g. , * ? ? ? * ) . also recall that we can cast this in matrix form where , which can be solved directly with or iteratively with gradient descent .the challenge is to allow for many degrees of freedom so that we can fit a wide variety of functions while not overfitting the data . to illustrate this , we generate 50 noisy data points from the function .the noise is drawn from a normal distribution with zero mean and a standard deviation of . as is shown in figure [fig : toyprob ] , a model with with basis functions fits the data points perfectly , but is highly oscillatory .our intuition is that this model will not do a good job of predicting the values of new data points . indeed ,if we generate another 50 noisy data points we see that is about an order of magnitude larger for this new set . the traditional approach to constraining such regression problems is penalized least squares , where we minimize a function of the form which can be solved with gradient descent or directly with the parameter balances the goodness of fit against the smoothness of the solution .it can be found by using some of the data ( the training data ) to infer the weights and the remainder of the data ( the test data ) to evaluate the goodness of fit .since the computational complexity of the problem is small , we can determine the value of that best fits the test data through a simple one - dimensional parameter search .figure [ fig : toyprob ] illustrates this approach to the regression problem .this method works well when we have many data points and can easily divide them into training and test sets .unfortunately , for the dem problem we often have a limited number of observed intensities and this `` leave some out '' cross - validation technique is not useful . 
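a compact python version of the toy regression just described is given below; the underlying function, the noise level, the basis width and the regularisation parameter are placeholders chosen for illustration, not the exact values used for the figure.

```python
import numpy as np

def gaussian_basis(x, centers, width):
    """design matrix a[n, m] = exp(-(x_n - x_m)^2 / (2 width^2))."""
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2.0 * width**2))

def ridge_fit(x, y, centers, width, lam):
    """penalized least squares: w = (a^t a + lam i)^-1 a^t y."""
    A = gaussian_basis(x, centers, width)
    return np.linalg.solve(A.T @ A + lam * np.eye(len(centers)), A.T @ y)

# toy data in the spirit of the example above (values are placeholders)
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 50))
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)
centers = np.linspace(0.0, 2.0 * np.pi, 50)
w = ridge_fit(x, y, centers, width=0.3, lam=1.0e-2)
y_model = gaussian_basis(x, centers, 0.3) @ w
print("chi2 =", np.sum((y - y_model)**2 / 0.2**2))
```

setting lam to zero recovers the unregularized solution and the oscillatory, overfitted behaviour described above.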
by reformulating the problem using a bayesian frameworkwe can achieve a similar result without resorting to cross - validation .we write bayes theorem as the product of a likelihood and a hierarchical prior the likelihood is the usual expression assuming normally distributed errors on the observations .\ ] ] for the prior on the weights we simply chose a cauchy distribution , we choose an uninformative prior for the hyperparameter , which implicitly assumes that the values for the weights will be only weakly dependent on the value of that we use .we recognize that there are techniques for optimizing the hyperparameters and we will return to this issue in the summary and discussion section .if we are interested in the distributions of the weights implied by the data and our choice of the posterior , we must generate samples from it .the standard approach to this is the metropolis - hastings algorithm . in the next sectionwe will describe the application of a powerful new parallel sampling technique that we can apply to this problem .if we are only interested in the set of weights that maximize the posterior , we take the negative log of the posterior , discard all constant terms , and minimize which is similar to equation [ eq : pls ] . this can be minimized using gradient descent .figure [ fig : toyprob ] shows the solution assuming and using all 100 data points simultaneously in the optimization .we see that this approach also avoids overfitting the data , but by limiting the number of non - zero basis functions rather than by limiting the sum of the weights .the cauchy distribution encourages such sparse solutions because it has `` fat tails . '' to see how this comes about consider a simple case where three basis functions could be used to describe the data equally well . in the absence of a prior, the solution is likely to have weights of comparable magnitude , e.g. , , while we would like the solution to be as simple as possible , say . with the assumption of the cauchy prior, however , we have for and we see that we can encode this preference for models with a limited number of non - zero weights into the prior . since the temperature domain of the dem is not unambiguously determined by the data , this preference for sparse solutions is a useful property .we want emission measure to be inferred only when the observations imply that it is statistically significant .we note that this approach does nt eliminate any degeneracy among the solutions .several sets of weights ( e.g , 1,0,0 or 0,1,0 or 0,0,1 in our simple example ) could be nearly equally likely. it could also be that the numerical scheme used to explore the posterior has difficulty with such a multi - modal landscape .we will discuss the problem of degeneracy in more detail in the next section .a more sophisticated bayesian approach to sparsity has been formulated by , who called his method the relevance vector machine ( rvm ) . herethe prior on the weights is assumed to be ^{1/2 } \exp\left[-\frac{\alpha_m w_m^2}{2}\right],\ ] ] where there is a hyperparameter for each weight and the prior on each hyperparameter is assumed to be a gamma distribution . iteratively solves for the optimal set of weights and hyperparameters by alternating between maximizing the likelihood with fixed and maximizing the evidence ( the denominator of equation [ eq : bayes ] ) with fixed .as the iteration proceeds , some of the s become large and these basis functions are pruned from the model . 
upon convergencewe are typically left with a model that has only a few remaining basis functions .figure [ fig : toyprob ] illustrates the application of this approach to our problem .the rvm solution is similar to that obtained using the cauchy prior , suggesting that this distribution is consistent with sparsity .the gamma function prior for makes the equivalent regularizing penalty proportional to ( , equation 33 ) , similar to what we obtain with the cauchy prior , so perhaps this is not surprising . for the scale of problem that we are interested in ,the rvm is computationally efficient .unfortunately , the rvm allows for negative weights and can not be used for the emission measure problem since a negative emission measure has no physical meaning .the simple bayesian approach described by equations [ eq : bayes ] and [ eq : log_pos ] , in contrast , is much more computationally expensive to implement , but it can easily accommodate the essential positive definite constraint on the weights .we close this section with a brief comment on mathematical rigor .it is clear that the further we stray from simple least - squares solutions the less certain we are that our mathematical assumptions will have their intended effect .we have certainly not proven that the cauchy prior will always lead to sparse solutions or that our assumptions about the hyper - parameter will not have some perverse consequence in some circumstances .similarly , has not proven that the iterative approach to the rvm always converges to the optimal solution ( for example , see section of 2.2 of for cases where the rvm fails to perform optimally ) .our feeling is that considering a wide variety of test problems is the easiest way of dealing with this lack of mathematical certainty .we now turn to the actual problem of interest , inferring the temperature structure of the solar atmosphere from observations of optically thin line emission .specifically , we wish to invert the equation for the line - of - sight differential emission measure distribution given a set of observed intensities ( ) , their associated statistical uncertainties ( ) , and the relevant plasma emissivites ( . this problem differs from the linear problem in that the function we wish to fit is not observed directly , but indirectly through the intensity .as discussed in the previous section , the solution should have several properties : since the density and path length are positive quantities , it must be positive definite ; since in many circumstances we will have only a limited number of observed intensities , it can not rely on cross validation methods for optimizing parameters ; and it should be sparse , inferring emission measure only at temperatures where there is a statistically significant reason to do so .as we will see , the bayesian approach that we applied to the linear problem satisfies all three of these properties .we assume that the product of the temperature and the dem can be represented as the weighted sum of known basis functions , the exponential weight is chosen so that the dem is positive definite. note that since the range of temperatures is large it is more efficient to integrate over intervals of constant log temperature using .putting the extra factor of on the right side of equation [ eq : dem ] allows both the weights and the basis functions to have uniform magnitudes . 
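the forward model and the resulting negative log - posterior can be written down compactly once the emissivities have been integrated over the basis functions; in the sketch below the scale factor, the hyperparameter gamma and the exact argument handed to the cauchy prior are assumptions of ours, since those details are not fully reproduced above.

```python
import numpy as np

def gaussian_logT_basis(logT, centers, width):
    """gaussian basis functions in log10 t, evaluated on a grid."""
    return np.exp(-(logT[:, None] - centers[None, :])**2 / (2.0 * width**2))

def response_matrix(emissivity, logT, centers, width):
    """k[n, m] = emissivity of line n integrated over basis function m on a
    uniform log t grid, so that each modelled intensity is a simple sum."""
    dlogT = logT[1] - logT[0]
    return emissivity @ gaussian_logT_basis(logT, centers, width) * dlogT

def model_intensities(w, K, scale=1.0e27):
    """i_n = sum_m scale * exp(w_m) * k[n, m]; the exponential keeps the
    emission measure positive, and the scale factor stands in for the
    normalisation mentioned in the text (its value here is a placeholder)."""
    return K @ (scale * np.exp(w))

def neg_log_posterior(w, K, I_obs, sigma, gamma=1.0, scale=1.0e27):
    """gaussian likelihood plus a cauchy prior on the weights; gamma and the
    argument fed to the prior are assumptions of this sketch."""
    resid = (I_obs - model_intensities(w, K, scale)) / sigma
    return 0.5 * np.sum(resid**2) + np.sum(np.log1p((w / gamma)**2))
```

with the response matrix k precomputed once, both the sampler and the optimiser only ever evaluate neg_log_posterior, so the atomic data never need to be re - integrated.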
for the basis functions we chose gaussians in , .\ ] ] for a given temperature domainwe assume that the basis functions are equally spaced and that the width of each component is so that the width is automatically adjusted depending on the number of components .rcrrrrr mg v 276.579 & 5.45 & 2.06e-10 & 0.00 & 0.41 & 0.00 & 0.29 + mg vi 270.394 & 5.65 & 2.61e-10 & 0.00 & 0.32 & 0.00 & 1.79 + mg vii 280.737 & 5.80 & 1.38e-10 & 0.30 & 0.72 & 0.29 & 1.00 + si vii 275.368 & 5.80 & 2.24e-10 & 0.17 & 0.44 & 0.12 & 1.40 + fe ix 188.497 & 5.90 & 3.65e-10 & 1.72 & 0.74 & 1.37 & 1.26 + fe ix 197.862 & 5.90 & 6.44e-10 & 1.05 & 0.43 & 0.90 & 1.17 + fe x 184.536 & 6.05 & 1.54e-10 & 18.55 & 3.64 & 16.88 & 1.10 + si x 258.375 & 6.15 & 1.52e-10 & 62.36 & 5.65 & 62.28 & 1.00 + fe xi 180.401 & 6.15 & 3.98e-11 & 152.36 & 20.65 & 150.47 & 1.01 + fe xi 188.216 & 6.15 & 3.48e-10 & 76.93 & 4.84 & 75.87 & 1.01 + s x 264.233 & 6.15 & 2.16e-10 & 11.25 & 2.02 & 11.37 & 0.99 + fe xii 195.119 & 6.20 & 7.11e-10 & 209.69 & 5.48 &217.88 & 0.96 + fe xii 192.394 & 6.20 & 6.01e-10 & 67.57 & 3.41 & 70.20 & 0.96 + fe xiii 202.044 & 6.25 & 1.94e-10 & 115.64 & 7.68 & 120.15 & 0.96 + fe xiii 203.826 & 6.25 & 1.03e-10 & 174.43 & 12.90 & 182.51 & 0.96 + fe xiv 264.787 & 6.30 & 2.22e-10 & 243.24 & 9.07 & 245.37 & 0.99 + fe xiv 270.519 & 6.30 & 2.61e-10 & 124.92 & 5.94 & 125.95 & 0.99 + fe xiv 211.316 & 6.30 & 2.47e-11 & 456.62 & 41.82 & 459.89 & 0.99 + fe xv 284.160 & 6.35 & 9.54e-11 & 6114.98 & 66.95 & 6237.17 & 0.98 + s xiii 256.686 & 6.40 & 1.35e-10 & 517.05 & 17.22 & 526.64 & 0.98 + fe xvi 262.984 & 6.45 & 2.02e-10 & 1028.86 & 19.62 & 1057.81 & 0.97 + ca xiv 193.874 & 6.55 & 6.73e-10 & 246.54 & 6.13 & 259.09 & 0.95 + ar xiv 194.396 & 6.55 & 6.93e-10 & 70.62 & 3.23 & 75.22 & 0.94 + ca xv 200.972 & 6.65 & 2.95e-10 & 210.21 & 8.40 & 213.79 & 0.98 + ca xvi 208.604 & 6.70 & 3.98e-11 & 120.19 & 17.08 & 115.32 & 1.04 + ca xvii 192.858 & 6.75 & 6.25e-10 & 157.95 & 5.10 & 138.12 & 1.14 + aia 94 & 6.85 & & 20.39 & 0.64 & 19.41 & 1.05 [ table : line_list ] with these assumptions it is possible to integrate the emissivity over each basis function , and the calculation of each modeled intensity is reduced to a simple sum .the only differences between this problem and the problem discussed in the previous section are that the intensities are non - linear functions of the weights and , since spectroscopic instruments rarely have uniform sensitivity , each intensity has a uncertainty ( ) .the line - of - sight emission measures observed on the sun are typically large , in an active region , for example .to bias the weights to the domain of interest we introduce a scaling factor into the basis functions so that a weight of zero corresponds to instead of 1 .further we modify the prior on the weights ( equation [ eq : log_prior ] ) by mapping to .given a set of observed intensities and the corresponding atomic data , we can find the optimal values for the weights either by sampling the posterior ( equation [ eq : bayes ] ) or minimizing the negative log - posterior ( equation [ eq : log_pos ] ) .unfortunately , finding global extrema in a high dimensional space is a formidable problem .our approach is to begin by sampling the posterior using the method introduced by and implemented by and , among others .this method differs from the traditional single monte carlo markov chain random walk method for sampling the posterior by utilizing an ensemble of walkers , potentially thousands of them , that can be updated in parallel .further , instead of using 
a proposal distribution with parameters that must be adjusted to achieve the desired acceptance ratio , updates to the chain are made using the current positions of pairs of walkers where is a random number from the distribution on the domain ] and begin iterating . after a burn - in of accepted samples , the next proposed steps that are accepted are recorded to form the final sampling of the posterior .after each iteration we also recorded the highest value of the posterior and saved the corresponding set of weights .this procedure takes about 45min per dem on a standard 4-core 4ghz intel i7 processor with all four cores utilized .we will discuss how this algorithm could be made more efficient in the next section . in figure[ fig : library_dist ] we show the distributions of the weights obtained for one of the dem calculations ( # 18 ) .the corresponding intensities are shown in table [ table : line_list ] .unfortunately , these posterior distributions are not simple , single peaked functions , but have multiple peaks . as discussed in the previous section , multiple peaks are an inevitable consequence of having many basis functions .recall our simple example of having three degenerate basis functions where each could represent the data equally well .the posterior in this simple case would consist of approximately equal parts ( 1,0,0 ) , ( 0,1,0 ) , and ( 0,0,1 ) and the distribution for each weight would have a strong peak at 0 and a secondary peak at 1 .for the dem problem we have not specified so many basis functions that we expect to see such extreme degeneracy .still , multiple peaks are clearly present in the posterior distributions for many weights . .] ) and form an envelope on the emission measure distribution . ]so which weights do we use to compute the final dem ?the median and mean of the weight distributions are potentially problematic .the mode of the distribution , however , is generally well behaved . using these weights to compute the dem both reproduces the observed intensities and the input dem very closely . for each runwe also used the walker with the highest value of the posterior as initial conditions for a gradient descent calculation . as is illustrated in figure [ fig : library_evo ] ,the dems recovered using gradient descent generally matched those recovered using the mode of the posterior weight distribution . in all cases the weights obtained after gradient descent actually produced a higher value for the posterior than the mode , suggesting that using the mode is not optimal .for this reason we use the weights obtained from gradient descent to represent the recovered dem .we have performed this calculation for all of the dems in the library .some example dems are shown in figure [ fig : library_dems ] .the summary of all of the calculations is presented in figure [ fig : library_image ] . in almost all casesthe input intensities are reproduced to within a few percent and the general properties of the input dem are also recovered and no emission measure is inferred far away from the peaks in the input distributions .we note that the excellent agreement between the recovered and actual dems is only achieved after a very long burn - in period ( recall that we accepted samples before collecting the samples for the posterior ) . for a burn - in of samples the mode of the posterior distribution resulted in a poor approximation of the input dem with finite weights for most of basis functions . 
for a burn - in of samplesthe recovered dems were generally close to the input dems but there were a number of spurious peaks .many of these peaks remained even after performing gradient descent .the sampling of the posterior is computationally expensive and one might wonder if it would be easier to simply select some random initial conditions and then do gradient descent .our experiments with this were not encouraging . in almost all cases the calculation converged to some local maximum far from the solution that we obtained through sampling .we note that we did not study this extensively .our intuition , however , is that the sampling is an invaluable aid in exploring a complex posterior in a high dimensional space and is worth the computational expense . to provide a point of comparison for our calculationswe have used another bayesian solver , the `` mcmc '' routine of , on each set of computed intensities in the dem library .here we use 20 temperature bins in and run 100 simulations on each dem .the resulting dems , which are summarized in figure [ fig : library_image ] , also recovers the main features of each input distribution .we also see , however , that mcmc infers emission measure at temperatures far away from the main peaks in the input distributions .this is not surprising given the assumption of an uninformative prior over the weights .finally , we have also applied this technique to two sets of observed eis intensities .these results are summarized in figure [ fig : observed ] .the sparse technique that we introduced here recovers the observed intensities as well as mcmc . since the dem library was constructed with these types of dems in mind narrow dems for off - limb quiet sun and somewhat broader dems for active regions it is not surprising that the algorithm can recover them .we have explored the application of sparse bayesian inference to the problem of determining the temperature structure of the solar corona . we have found that by adopting a cauchy distribution as a prior for the weights we can encode a preference for a sparse representation of the temperature distribution .thus we can use a rich set of basis functions and recover a wide variety of dems , while limiting the amount of spurious emission measure at other temperatures .we have also shown that a complex , high - dimensional posterior can be explored in detail using the parallel sampling technique of .the results from this sparse bayesian approach compare favorably to the results obtained from mcmc .as one would expect , the absence of any constraints on the weights in mcmc leads to considerable emission measure being inferred at temperatures away from the peaks in the input dems .there are two aspects of our algorithm that could be improved .first , we have not considered the optimization of the hyperparameters .the magnitude of the weight penalty ( in equation [ eq : log_prior ] ) , the number of basis functions , and the widths of the basis functions , are left fixed . 
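to make the preceding description concrete, the following python sketch shows the kind of objective that is held fixed here: gaussian basis functions in log temperature whose width is tied to their spacing, modeled intensities reduced to a weighted sum of precomputed emissivity integrals, and a negative log-posterior combining a chi-squared likelihood with a cauchy prior on the weights. the temperature range, number of components, emission-measure scale `em_scale` and prior scale `gamma` are illustrative assumptions, not values taken from this paper.

import numpy as np

rng = np.random.default_rng(0)

# assumed temperature grid and equally spaced gaussian basis functions in log T,
# with the width tied to the spacing so that it adjusts with the number of components.
logT = np.linspace(5.5, 7.0, 301)
n_basis = 15
centers = np.linspace(logT[0], logT[-1], n_basis)
width = centers[1] - centers[0]
B = np.exp(-0.5 * ((logT[:, None] - centers[None, :]) / width) ** 2)

# K[i, j] = integral of (contribution function of line i) x (basis function j) over logT;
# in practice the contribution functions come from the atomic data, here a random
# stand-in is used only so that the example runs.
G = rng.random((27, logT.size))
K = G @ B * (logT[1] - logT[0])

def neg_log_posterior(weights, I_obs, sigma_obs, em_scale=1e27, gamma=1.0):
    # em_scale biases weights toward typical line-of-sight emission measures and
    # gamma sets the scale of the cauchy prior; both are assumed values.
    if np.any(weights < 0):                        # emission measure is non-negative
        return np.inf
    I_model = K @ (em_scale * weights)             # each intensity is a simple sum
    chi2 = np.sum(((I_obs - I_model) / sigma_obs) ** 2)
    log_prior = -np.sum(np.log1p((weights / gamma) ** 2))   # sparsity-favouring prior
    return 0.5 * chi2 - log_prior

# the negative of this function can be handed to an affine-invariant ensemble
# sampler (e.g. the emcee package) and the best walker then refined by gradient
# descent, mirroring the two-stage procedure described above.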
as mentioned in the discussion of the rvm in the previous section , one method for optimizing the hyperparametersis maximizing the evidence ( see chapter 18 of for more details ) .unfortunately , our non - linear representation of the dem makes the integration of the evidence analytically intractable .a linear representation of the dem would make many aspects of the problem simpler , but this would require an algorithm for optimizing the weights that incorporates the positive definite constraint .we conjecture that such an approach would be orders of magnitude faster than our current approach .we have limited our comparisons to mcmc .there have been many papers written on dem inversions , but a complete summary of techniques is well beyond the scope of this work .two recent efforts , however , directly address some of the issues that we have discussed in this paper . implement a penalized least squares algorithm ( equation [ eq : pls ] ) where the hyperparmeter is adjusted using the statistical uncertainty in observations and the non - negativity constraint . as illustrated with our linear problem , this produces solutions that are smooth , but not necessarily sparse , and thus sensitive to the domain specified for the inversion .one typically chooses a temperature range that extends slightly beyond the temperature of formation of the coolest and hottest line that is observed .this is not a particularly controversial assumption and so this is not a major shortcoming . have implemented an algorithm with a weight penalty based on the l1 norm ( e.g. , * ? ? ?* ; * ? ? ? do not address the optimization of the hyperparmeter on the weight penalty .they also use a l1 norm for the likelihood and it is not clear if this is consistent with the uncertainties of the observations . both and have tested their algorithms on a wide variety of input dems and they perform well .these algorithms are significantly faster than our sparse bayesian method .finally , we stress that our goals for the dem are relatively modest .in general , we simply wish to understand if structures in the solar atmosphere have narrow or broad temperature distributions .we also stress that no amount of mathematical machinery can replace the need for well - calibrated observations that contain emission lines that cover a wide range of temperatures .highly accurate atomic data is also a critical component of any effort to understand the temperature structure of the solar atmosphere ( e.g. , * ? ? ?this work was sponsored by the chief of naval research and nasa s hinode project .hinode is a japanese mission developed and launched by isas / jaxa , with naoj as domestic partner and nasa and stfc ( uk ) as international partners .candela , j. q. 2004 , phd thesis , informatics and mathematical modelling , technical university of denmark , dtu , richard petersens plads , building 321 , dk-2800 kgs .lyngby , supervised by professor lars kai hansen
|
measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures . unfortunately , the temperature of the upper atmosphere can not be observed directly , but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures . such observations are `` inverted '' to determine the distribution of plasma temperatures along the line of sight . this inversion is ill - posed and , in the absence of regularization , tends to produce wildly oscillatory solutions . we introduce the application of sparse bayesian inference to the problem of inferring the temperature structure of the solar corona . within a bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided . we demonstrate the efficacy of the bayesian approach by considering a test library of 40 assumed temperature distributions .
|
in the current structure formation paradigm the properties of galaxies are coupled to the evolution of their dark matter ( dm ) hosting halo . in this paradigm the sizes and dynamics of galaxiesare driven by the halo internal dm distribution .the internal dm distribution in a halo is usually parameterized through the density profile . in a first approximationthis profile is spherically symmetric ; the density only depends on the radial coordinate .one of the most popular radial parameterizations is the navarro - frenk - white ( nfw ) profile .this profile can be considered as universal , assuming that one is not interested in the very central region where galaxy formation takes place , and where the effects of baryon physics on the dm distribution are still unknown .this profile is a double power law in radius , where the transition break happens at the so - called scale radius , .the ratio between the scale radius and the halo virial radius is known as the concentration .the concentration of the nfw profile provides a conceptual framework to study simulated dm halos as a function of redshift and cosmological parameters .numerical studies summarized their results through the mass - concentration relationship ; that is , the distribution of concentration values at a fixed halo mass and redshift .the success of such numerical experiments rests on a reliable algorithm to estimate the concentration .such an algorithm should provide unbiased results and must be robust when applied at varying numerical resolution .there are two established algorithms to estimate the concentration parameter .the first method takes the halo particles and bins them into logarithmic radii to estimate the density in each bin , then it proceeds to fit the density as a function of the radius .a second method uses an analytic property of the nfw profile that relates the maximum of the ratio of the circular velocity to the virial velocity , / .the concentration can be then found as the root of an algebraic equation dependent on this maximum value .the first method is straightforward to apply but presents two disadvantages .first , it requires a large number of particles in order to have a proper density estimate in each bin .this makes the method robust only for halos with at least particles .the second problem is that there is not a way to estimate the optimal radial bin size , different choices may produce different results for the concentration .the second method solves the two problems mentioned above .it works with low particle numbers and does not involve data binning .however , it effectively takes into account only a single data point and discards the rest of the data .small fluctuations on the maximum can yield large perturbations on the estimated concentration parameter . 
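as a concrete illustration of the second method just described, the sketch below inverts the nfw relation between the peak circular velocity and the concentration by a one-dimensional root find. for an nfw halo the peak of the circular velocity occurs at roughly 2.163 scale radii, so the maximum of v_c / v_vir depends only on the concentration; the bracketing interval and the toy example are assumptions of this illustration.

import numpy as np
from scipy.optimize import brentq

def f_nfw(x):
    # f(x) = ln(1 + x) - x / (1 + x), the nfw mass factor
    return np.log(1.0 + x) - x / (1.0 + x)

X_MAX = 2.1626            # radius of the nfw circular-velocity peak, in units of r_s

def vmax_over_vvir(c):
    return np.sqrt(c * f_nfw(X_MAX) / (X_MAX * f_nfw(c)))

def concentration_from_vmax(ratio, c_lo=2.2, c_hi=1000.0):
    """invert a measured v_max / v_vir for c; valid when the ratio exceeds unity (c > x_max)."""
    return brentq(lambda c: vmax_over_vvir(c) - ratio, c_lo, c_hi)

print(concentration_from_vmax(vmax_over_vvir(10.0)))   # recovers c ~ 10

because this estimate effectively uses a single point of the velocity curve, noise at the peak propagates directly into the recovered concentration, which is the weakness noted above.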
in this letterwe use bootstrapping to estimate the bias and standard deviation on the concentration estimates as a function of particle number .we show that the two standard methods to estimate concentrations have increasing biases for decreasing particle numbers .this motivates us to present a third alternative based on fitting the integrated mass profile .this approach has two advantages with respect to the above mentioned methods .it does not involve any data binning and does not throw away data points .this translates into a robust estimate even at low resolution / particle numbers .furthermore , since the method does not require any binning , there is no need to tune numerical parameters .this is a new independent method to estimate the concentration parameter .let us review first the basic properties of the nfw density profile . this shall help us to define our notation .the nfw density profile can be written as where is the universe critical density , is the hubble constant , the universal gravitational constant , is the halo dimensionless characteristic density and is the scale radius .this radius marks the point where the logarithmic slope of the density profile is equal to -2 , the transition between the power law scaling for and for .we define the virial radius of a halo , , as the boundary of the spherical volume that encloses a density of times the mean density of the universe .the corresponding mass , the virial mass , can be written as . from these virial quantitieswe define new dimensionless variables for the radius and mass and . in this letterwe use , a number roughly corresponding to 200 times the critical density at redshift z=0 . from these definitionswe can compute the total mass enclosed inside a radius : ,\ ] ] or in terms of the dimensionless mass and radius variables , \label{eq : profile}\ ] ] where and the parameter corresponds to the concentration . from this normalization and for later conveniencewe define the following function the most interesting feature of eq .( [ eq : profile ] ) is that the concentration is the only free parameter to describe the integrated mass profile .it is also customary to express the mass of the halo in terms of the circular velocity . from thiswe can define a new dimensionless circular velocity , using the result in eq .[ eq : profile ] we have : },\ ] ] this normalized profile always shows a maximum provided that the concentration is larger than .it is possible to show that for the nfw profile the maximum is provided by where and the function corresponds to the definition in eq .( [ eq : f_nfw ] ) .to date , there are two standard methods to estimate concentrations in dm halos extracted from n - body simulations .the first method takes all the particles in the halo and bins them in the logarithm of the radial coordinate from the halo center .then , it estimates the density in each logarithmic bin . at this point is possible to make a direct fit to the density as a function of the radial coordinate .this method has been broadly used for more than two decades to study the mass - concentration - redshift relation of dm halos .a second method uses the circular velocity profile .it finds the value of for which the normalized circular velocity shows a maximum . using this value it solves numerically for the corresponding value of the concentration using eq .( [ eq : max_v ] ) . herewe propose a new method to estimate the concentration .it uses the integrated mass profile defined in eq .( [ eq : profile ] ) . 
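the display formulas in the passage above have lost their symbols in this copy; based on the standard nfw conventions the letter follows (scale radius r_s, concentration c = R_vir / r_s, mass factor f), a plausible reconstruction of the key relations is

\rho(r) = \frac{\delta_c\,\rho_{\rm crit}}{(r/r_s)\,(1 + r/r_s)^2}, \qquad
f(x) \equiv \ln(1 + x) - \frac{x}{1 + x},

\widetilde{m}(\widetilde{r}) \equiv \frac{M(<\widetilde{r}\,R_{\rm vir})}{M_{\rm vir}}
  = \frac{f(c\,\widetilde{r})}{f(c)}, \qquad
\widetilde{v}^{\,2}(\widetilde{r}) \equiv \frac{V_c^2(\widetilde{r}\,R_{\rm vir})}{V_{\rm vir}^2}
  = \frac{f(c\,\widetilde{r})}{\widetilde{r}\,f(c)},

\max_{\widetilde{r}}\,\widetilde{v}^{\,2} = \frac{c}{x_{\max}}\,\frac{f(x_{\max})}{f(c)},
\qquad x_{\max} \simeq 2.163,

so the concentration is indeed the only free parameter of the dimensionless integrated mass profile, as stated in the text.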
we build it from n - body data as follows .first , we define the center of the halo to be at the position of the particle with the lowest gravitational potential. then we rank the particles by their increasing radial distance from the center . from thisranked list of particles , the total mass at a radius is , where is the position of the -th particle and is the mass of a single computational particle .we then divide the enclosed mass and the radii by their virial values to finally obtain the dimensionless variables and . using bootstrapping data ( [ sec : bootstrapping ] ) we find that at a given normalized radius , , the logarithm of the normalized integrated mass , , approximately follows a gaussian distribution with variance if the integrated mass values at different radii were independent from each other we could write a likelihood distribution as with ^ 2}{\sigma_i^2},\ ] ] where , corresponds to the values in eq.([eq : profile ] ) at for a given value of the concentration parameter , and the index sums over all the particles in the numerical profile . in this computation the particles and discarded to avoid divergent terms in the sum .however , tests on the bootstrapping data show that using , instead of the full inverse covariance matrix , grossly overestimates , providing small uncertainties around the best concentration value .to avoid the expensive computation and inversion of a full covariance matrix we use the bootstrapping data to calibrate an effective .we impose two conditions on the approximate .it must keep the dependence on that we have discovered for the diagonal elements and must give similar curves of vs. around the minimum as the full covariance matrix .we found that the effective can be approximated as we then use an affine invariant markov chain monte carlo ( mcmc ) implemented in the python module ` emcee ` to sample the likelihood function distribution . from the distribution we find the optimal concentration value and its associated uncertainty .we stress that different choices for do not affect the optimal concentration value , only its uncertainty .run - time is roughly proportional to . using a single 2.3ghz cpu core with two walkers over 500 steps takes milliseconds per halo per particle in the halo ,i.e. a halo with can be fit in one second .we use two different simulations to test our methods .the first is the bolshoi run , a cosmological simulation that follows the non - linear evolution of a dm density field sampled with particles over a cubic box of on a side .the cosmological parameters use a hubble parameter , a matter density and a normalization of the power spectrum .the data is publicly available at http://www.cosmosim.org/. details about the structure of the database and the simulation can be found in .we use the halos located in a cubic sub - volume of on a side containing a total of objects . from this samplewe select all the halos at detected with a friends - of - friends ( fof ) algorithm with more than 300 particles , meaning that the masses are in the interval .the fof algorithm used a linking length of times the mean inter - particle distance .this choice translates into an overdensity dependent on the halo concentration . from this set of particleswe follow the procedure spelled out in section [ sec : method ] with to select an spherical region that we redefine to be our halo .this choice makes that the overdensities are fully included inside the original fof particle group . 
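a minimal sketch of the fit described at the start of this section is given below: particles are ranked by radius, the cumulative mass is normalised by the virial values, and the concentration is the single free parameter of the model profile. the effective per-particle variance quoted in the paper is not legible in this copy, so a constant sigma is used as a placeholder, which changes the width of the posterior but not the location of its maximum; in the letter the posterior is explored with the emcee ensemble sampler, for which the negative of this objective can be used directly.

import numpy as np
from scipy.optimize import minimize_scalar

def f_nfw(x):
    return np.log(1.0 + x) - x / (1.0 + x)

def neg_log_like(c, x, m, sigma=0.1):
    # x: sorted radii / R_vir, m: cumulative particle count / N_vir, both dimensionless
    model = f_nfw(c * x) / f_nfw(c)
    return 0.5 * np.sum(((np.log(m) - np.log(model)) / sigma) ** 2)

def fit_concentration(radii, r_vir):
    r = np.sort(np.asarray(radii))
    x = r[r < r_vir] / r_vir
    m = np.arange(1, x.size + 1) / x.size
    # the innermost and outermost particles are dropped, as in the text,
    # to avoid degenerate terms in the sum
    res = minimize_scalar(neg_log_like, bounds=(1.0, 100.0),
                          args=(x[1:-1], m[1:-1]), method="bounded")
    return res.x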
on the interest of providing a fair comparison against the density methodwe only report results from overdensities with at least particles ( ) .we also use public data from the via lactea simulation project .this simulation contains a single isolated halo with a virial mass of the order of using the tree code pkdgrav .the simulation had particles to resolve this region .the cosmological parameters are different from those in the bolshoi simulation , with a hubble parameter , a matter density and a normalization of the power spectrum .the data available to the public corresponds to a downsampled set of particles , which corresponds to a particle mass of .we take halos with at least particles and subsample them by factors of up to .we measure the concentration at every resampling .we use a two - sample kolmogorov - smirnov ( ks ) test to compare the list of radial distances from each subsample against that of its parent halo .we find that the resulting p - value distribution is flat .this confirms that the radial particle distribution in the bootraped halo is consistent with coming from the distribution given by the parent halo . why not using different simulations with the same initial conditions andlower resolutions ( i.e. * ? ? ?because we want to be sure that we are only measuring the bias of a given method as a function of particle number for statistically identical halos , and not a possible simulation artifact that changes the halo structure . for every subsample we keep fixed the virial radius and the center found for the high resolution halo . leaving the virial radius and center free in each bootstrapping iteration has an effect smaller than in the concentration . in the bolshoi simulationwe select 14 massive halos and create 700 subsamples for each one . for the via lactea simulationthe same halo is subsampled 10000 times .the average concentration value for the largest number of particles , , provides a baseline to compare all the other results .we use the following statistic to account for the offset between the concentration at a given downsampled particle number and the baseline .figure [ fig : downsampling ] summarizes our results .the plot on the left shows the average value of as a function of particle number .this can be interpreted as the statistical bias on the concentration estimate . for large enough particle numbers , the results of the three algorithms show a bias below the level . for a lower number of particlesthe results start to deviate . at particles the velocity method overestimates the concentration by a factor while the density method overestimates it by . around the same sampling scale ,the new algorithm shows a more stable behaviour underestimating the concentration only by a factor of - .the thin lines on the same panel show a fit to the function with , ; , and , for the density , velocity and mass method , respectively .the right panel in figure [ fig : downsampling ] shows different uncertainty results .the lines show the difference between the 14 and 86 percentiles in the distribution at fixed mass .each line corresponds to the three different methods to estimate the concentration applied to both simulations .this shows that the bootstrapping technique can help us to assign a uncertainty to the concentration values at a fixed the circles in the same figure show the uncertainty on all the relaxed halos in the bolshoi simulation sample using the mcmc results . 
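the subsampling experiment described above can be summarised in a few lines. here `fit_concentration` is any concentration estimator (for example the integrated-mass fit sketched earlier), the offset statistic is written simply as the fractional deviation from the full-resolution value, and the subsampling factors and number of draws are placeholders rather than the values used in the letter.

import numpy as np
from scipy.stats import ks_2samp

def subsample_bias(radii, r_vir, fit_concentration,
                   factors=(2, 4, 8, 16, 32), n_draws=100, seed=0):
    rng = np.random.default_rng(seed)
    radii = np.asarray(radii)
    # the virial radius (and centre) are held fixed at their full-resolution values
    c_ref = fit_concentration(radii, r_vir)
    summary = {}
    for fac in factors:
        c_sub, p_ks = [], []
        for _ in range(n_draws):
            sub = rng.choice(radii, size=radii.size // fac, replace=False)
            c_sub.append(fit_concentration(sub, r_vir))
            # two-sample ks test of the subsampled radii against the parent halo
            p_ks.append(ks_2samp(sub, radii).pvalue)
        c_sub = np.asarray(c_sub)
        summary[fac] = {"mean_offset": np.mean(c_sub / c_ref - 1.0),
                        "p14_p86_width": np.percentile(c_sub, 86) - np.percentile(c_sub, 14),
                        "median_ks_p": np.median(p_ks)}
    return summary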
to allow for a fair comparison with the bootstrapping results ,this uncertainty is normalized to the concentration value .the uncertainty from the bootstrapping experiment provides an upper bound uncertainty on the concentration estimate for individual halos .we now inspect the mass - concentration relationship results with the three different algorithms .this can help us to identify possible consequences of the biases detected through the bootstrapping experiments .figure [ fig : concentration ] shows the mass - concentration relationship for the density , velocity and integrated mass method .the left panel shows the results as they are produced by each of the algorithms .the thin dashed line marks the trend reported by using the velocity method , showing that our velocity method implementation can reproduce their results .the results from the new algorithm follow very closely the velocity algorithm at high masses ( or equivalently for particles ) . for lower massesthere is a difference between the median of the two methods , but they are still consistent within the statistical uncertainties .we hypothesize that the increase in the results for the velocity and density methods below particles comes from the systematic bias described in the previous section . to test the general consistency of this hypothesis, we correct the concentration values in the velocity and integrated mass methods by a factor of , using the definition in equation ( [ eq : f_off ] ) and the parameters obtained from the data presented in figure ( [ fig : downsampling ] ) .the correction brings into good agreement the results between the velocity / density methods and the new algorithm .we also notice that the results from the density method have a systematic offset from the velocity methods .this offset was already presented by for low concentrations ( ) and high ( ) halo masses .recently summarized results for the mass - concentration relationship coming from different methods and datasets to show that similar systematic offsets are present . studied the mass - concentration relationship using the maximum velocity and density methods and did not report any significant difference .however , they implemented a modified version of the velocity algorithm that bins the particle data , which might explain why they the offset was not reported .how do these results impact the most recent mass - concentration estimates ? and estimated the mass - concentration relation over different suites of cosmological n - body simulations using the density and velocity methods , respectively . both used halos with at least particles .this imposes a lower halo mass limit of (figure 8 in , figure 17 in ) to have robust estimates .this means that their results for individual halos should not be affected by the bias we report here .this also leaves open the question about what other methods can robustly say about the flattening we report below the new method .however , there are other results at lower masses and higher redshifts ( i.e. * ? ? 
?* ) that should be reconfirmed using higher resolution simulations as they use halos with only particles .in this letter we used bootstrapping to quantify the biases on concentration estimates .we found that methods commonly used in the literature can overestimate the concentrations by factors of - for halos with particles , or - for halos with particles .this procedure provides a robust technique to quantify the bias in concentration estimates with the advantage that it works without having to run new simulations .these results motivated us to introduce a new method based on the integrated mass profile that show a robust performance at low particle numbers .the new algorithm showed a bias of for halos with particles and less than for halos with particles or more . to keep the bias of the velocity and density methods below only halos with at least particlesshould be considered .the three methods are in broad agreement , within the statistical uncertainties , concerning their estimates of the mass - concentration relationship .some noticeable differences include a systematically higher concentrations in the density method compared to the velocity method .this systematic offset has been reported before with the same dataset and with different simulations without any conclusive explanation for its origin .another difference is that the velocity and integrated mass methods start to differ for masses below ( particles ) .we found that correcting the mean concentration by the mean bias factor found through bootstrapping brings these two techniques into agreement .these results show that using the integrated mass profile to estimate the dm halo concentrations is a tool deserving deeper scrutiny .further tests with larger simulated volumes , varying numerical resolution , higher redshifts , stacked data and different density profiles are the next natural step to explore the full potential of this new method .we acknowledge financial support from uniandes and estrategia de sostenibilidad 2014 - 2015 universidad de antioquia .we thank toms verdugo , stefan gottloeber and nelson padilla for their feedback .we thank the anonymous referees for comments that improved the presentation of these results .
|
we use bootstrapping to estimate the bias of concentration estimates on n - body dark matter halos as a function of particle number . we find that algorithms based on the maximum radial velocity and radial particle binning tend to overestimate the concentration by for halos sampled with particles and by - for halos sampled with particles . to control this bias at low particle numbers we propose a new algorithm that estimates halo concentrations based on the integrated mass profile . the method uses the full particle information without any binning , making it reliable in cases when low numerical resolution becomes a limitation for other methods . this method reduces the bias to for halos sampled with - particles . the velocity and density methods have to use halos with at least particles in order to keep the biases down to the same low level . we also show that the mass - concentration relationship could be shallower than expected once the biases of the different concentration measurements are taken into account . these results show that bootstrapping and the concentration estimates based on the integrated mass profile are valuable tools to probe the internal structure of dark matter halos in numerical simulations .
|
the backward stochastic differential equation ( bsde , for short ) we shall consider in this paper takes the following form : where is a standard brownian motion , is the given terminal value and is the given ( random ) generator . to solvethis equation is to find a pair of adapted processes and satisfying the above equation ( [ bsde ] ) .linear backward stochastic differential equations were first studied by bismut in an attempt to solve some optimal stochastic control problem through the method of maximum principle .the general nonlinear backward stochastic differential equations were first studied by pardoux and peng . sincethen there have been extensive studies of this equation .we refer to the review paper by el karoui , peng and quenez , to the books of el karoui and mazliak and of ma and yong and the references therein for more comprehensive presentation of the theory .a current important topic in the applications of bsdes is the numerical approximation schemes . in most work on numerical simulations , a certain forward stochastic differential equation of the following form : is needed .usually it is assumed that the generator in ( [ bsde ] ) depends on at the time : , where is a deterministic function of , and is global lipschitz in .if in addition the terminal value is of the form , where is a deterministic function , a so - called four - step numerical scheme has been developed by ma , protter and yong in .a basic ingredient in this paper is that the solution to the bsde is of the form , where is determined by a quasi - linear partial differential equation of parabolic type .recently , bouchard and touzi propose a monte - carlo approach which may be more suitable for high - dimensional problems . again in this forward backward setting , if the generator has a quadratic growth in , a numerical approximation is developed by imkeller and dos reis in which a truncation procedure is applied . in the case where the terminal value is a functional of the path of the forward diffusion , namely , , different approaches to construct numerical methods have been proposed .we refer to bally for a scheme with a random time partition . in the work by zhang ,the -regularity of is obtained , which allows one to use deterministic time partitions as well as to obtain the rate estimate ( see bender and denk , gobet , lemor and warin and zhang for different algorithms ) .we should also mention the works by briand , delyon and mmin and ma et al . 
, where the brownian motion is replaced by a scaled random walk .the purpose of the present paper is to construct numerical schemes for the general bsde ( [ bsde ] ) , without assuming any particular form for the terminal value and generator .this means that can be an arbitrary random variable , and can be an arbitrary -measurable random variable ( see assumption [ a.3.2 ] in section [ sec2 ] for precise conditions on and ) .the natural tool that we shall use is the malliavin calculus .we emphasize that the main difficulty in constructing a numerical scheme for bsdes is usually the approximation of the process .it is necessary to obtain some regularity properties for the trajectories of this process .the malliavin calculus turns out to be a suitable tool to handle these problems because the random variable can be expressed in terms of the trace of the malliavin derivative of , namely , .this relationship was proved in the paper by el karoui , peng and quenez and was used by these authors to obtain estimates for the moments of .we shall further exploit this identity to obtain the -hlder continuity of the process , which is the critical ingredient for the rate estimate of our numerical schemes .our first numerical scheme was inspired by the paper of zhang , where the author considers a class of bsdes whose terminal value takes the form , where is a forward diffusion of the form ( [ e.1.2 ] ) , and satisfies a lipschitz condition with respect to the or norms ( similar assumptions for ) .the discretization scheme is based on the regularity of the process in the mean square sense ; that is , for any partition , one obtains , and is a constant independent of the partition .we consider the case of a general terminal value which is twice differentiable in the sense of malliavin calculus , and the first and second derivatives satisfy some integrability conditions ; we also made similar assumptions for the generator ( see assumption [ a.3.2 ] in section [ sec2 ] for details ) . in this senseour framework extends that of and is also natural . in this framework, we are able to obtain an estimate of the form where is a constant independent of and . clearly , ( [ e.1.4 ] ) with implies ( [ e.1.3 ] ) .moreover , ( [ e.1.4 ] ) implies the existence of a -hlder continuous version of the process for any .notice that , up to now the path regularity of has been studied only when the terminal value and the generator are functional of a forward diffusion . after establishing the regularity of , we consider different types of numerical schemes .first we analyze a scheme similar to the one proposed in [ see ( [ e.4.2 ] ) ] . in this casewe obtain a rate of convergence of the following type : notice that this result is stronger than that in which can be stated as ( when ) we also propose and study an `` implicit '' numerical scheme [ see ( [ e.5.1 ] ) in section [ sec4 ] for the details ] . for this scheme we obtain a much better result on the rate of convergence , depends on the assumptions imposed on the terminal value and the coefficients . in both schemes ,the integral of the process is used in each iteration , and for this reason they are not completely discrete schemes . in order to implement the scheme on computers, one must replace an integral of the form by discrete sums , and then the convergence of the obtained scheme is hardly guaranteed . 
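for orientation only, the sketch below shows a generic backward discretization with regression-approximated conditional expectations of the kind that underlies the schemes discussed above, written for the special markovian case where the terminal value is a function of the final brownian value. it is not the scheme ( [ e.4.2 ] ), nor the implicit or fully discrete schemes analysed later in the paper, and the polynomial regression is only one common way to approximate the conditional expectations.

import numpy as np

def solve_bsde(g, f, T=1.0, n_steps=50, n_paths=20000, degree=4, seed=0):
    # illustrative explicit backward scheme:
    #   y_{t_n} = g(W_T),
    #   z_{t_i} = E[ y_{t_{i+1}} (W_{t_{i+1}} - W_{t_i}) | F_{t_i} ] / dt,
    #   y_{t_i} = E[ y_{t_{i+1}} + f(t_i, y_{t_{i+1}}, z_{t_i}) dt | F_{t_i} ],
    # with E[ . | F_{t_i} ] replaced by least-squares regression on W_{t_i}.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

    def cond_exp(target, state):
        A = np.vander(state, degree + 1)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return A @ coef

    Y = g(W[:, -1])
    Z = np.zeros(n_paths)
    for i in range(n_steps - 1, -1, -1):
        Z = cond_exp(Y * dW[:, i], W[:, i]) / dt
        Y = cond_exp(Y + f(i * dt, Y, Z) * dt, W[:, i])
    return Y[0], Z[0]          # at t = 0 the filtration is trivial

y0, z0 = solve_bsde(g=lambda w: np.cos(w), f=lambda t, y, z: -0.5 * y)

the schemes analysed above additionally involve the time integral of the process z over each subinterval, which on a computer must itself be replaced by a finite sum.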
to avoid this discretizationwe propose a truly discrete numerical scheme using our representation of as the trace of the malliavin derivative of ( see section [ sec5 ] for details ) . for this new scheme, we obtain a rate of convergence result of the form for any .in fact , we have a slightly better rate of convergence ( see theorem [ t.6.1 ] ) , however , this type of result on the rate of convergence applies only to some classes of bsdes , and thus this scheme remains to be further investigated . in the computer realization of our schemes or any other schemes ,an extremely important procedure is to compute the conditional expectation of form . in this paperwe shall not discuss this issue but only mention the papers and .the paper is organized as follows . in section [ sec2 ]we obtain a representation of the martingale integrand in terms of the trace of the malliavin derivative of , and then we get the -hlder continuity of by using this representation . the conditions that we assume on the terminal value and the generator also specified in this section .some examples of application are presented to explain the validity of the conditions .section [ sec3 ] is devoted to the analysis of the approximation scheme similar to the one introduced in . under some differentiability and integrability conditions in the sense of malliavin calculus on and the nonlinear coefficient , we establish a better rate of convergence for this scheme . in section [ sec4 ] , we introduce an `` implicit '' scheme and obtain the rate of convergence in the norm .a completely discrete scheme is proposed and analyzed in section [ sec5 ] . throughout the paper for simplicitywe consider only scalar bsdes .the results obtained in this paper can be easily extended to multi - dimensional bsdes .let be a one - dimensional standard brownian motion defined on some complete filtered probability space .we assume that is the filtration generated by the brownian motion and the -null sets , and .we denote by the progressive -field on the product space \times\omega ] denotes the banach space of all progressively measurable processes \times\omega , \mathcal{p})\rightarrow(\mathbb{r } , \mathcal{b}) ] denotes the banach space of all the rcll ( right continuous with left limits ) adapted processes \times \omega , \mathcal{p})\rightarrow ( \mathbb{r } , \mathcal{b}) ] be the separable hilbert space of all square integrable real - valued functions on the interval ] .for any and , the derivative , i=1,\ldots , k\}\ ] ] is a measurable function on the product space ^k\times\omega ] . .notice that we can choose a progressively measurable version of the -valued process .the generator in the bsde ( [ bsde ] ) is a measurable function \times \omega\times\mathbb{r}\times\mathbb{r } , \mathcal{p}\otimes \mathcal{b}\otimes\mathcal{b})\rightarrow ( \mathbb{r } , \mathcal{b}) ] and is uniformly lipschitz in ; namely , there exists a positive number such that a.e . 
for all and .then there exists a unique solution pair )\times h_{\mathcal{f}}^{q}([0,t]) ] .assume that and are two progressively measurable processes satisfying conditions and in assumption [ a.2.1 ] .suppose that the random variables and belong to , where is the solution to ( [ joint ] ) .then the following linear bsde , \,dr-\int_{t}^{t}z_{r}\,dw_{r},\qquad0\le t\le t,\ ] ] has a unique solution pair , and there is a constant such that for all ] .the solution to ( [ joint ] ) can be written as for any real number , we have then , fixing any and using hlder s inequality , we obtain where and .set .then is a martingale due to ( h1 ). we can rewrite ( [ martingale ] ) into by doob s maximal inequality , we have for some constant depending only on . finally , choosing any , such that and applying again the hlder inequality yield combining this inequality with ( [ martingale-1 ] ) and ( [ martingale-2 ] ) we complete the proof .proof of theorem [ t.3.2 ] the existence and uniqueness is well known .we are going to prove ( [ e.2.4 ] ) .let ] where is a constant independent of and .we return to the study of ( [ bsde ] ) .the main assumptions we make on the terminal value and generator are the following : [ a.3.2 ] fix . , and there exists , such that for all ] .assume that and satisfy the above conditions ( i ) and ( ii ) .let be the unique solution to ( [ bsde ] ) with terminal value and generator .for each , , and belong to , and the malliavin derivatives , and satisfy and there exists such that for any ] , and each pair of , and it has continuous partial derivatives with respect to , which are denoted by and , and the malliavin derivative satisfies the following property is easy to check and we omit the proof . conditions ( [ e5 - 1 ] ) and ( [ e6 ] ) imply and respectively .the following is the main result of this section .[ t.3.1 ] let assumption [ a.3.2 ] be satisfied .there exists a unique solution pair to the bsde ( [ bsde ] ) , and are in .a version of the malliavin derivatives of the solution pair satisfies the following linear bsde : & & \hphantom{d_\theta\xi+\int_t^t [ } { } + \partial_z{f(r , y_r , z_r)}d_\theta z_r + d_\theta f(r , y_r , z_r)]\,dr\\ & & { } -\int_t^td_\theta z_r\,dw_r,\qquad 0\le\theta\leq t\leq t ; \nonumber\vadjust{\goodbreak}\\[-2pt ] \label{e.3.12 - 2 } d_\theta y_t&=&0,\qquad d_\theta z_t=0,\qquad 0\le t<\theta\leq t.\end{aligned}\ ] ] moreover , defined by ( [ e.3.12 ] ) gives a version of , namely , a.e . there exists a constant , such that , for all ]. 
then , by assumption [ a.3.2](ii ) , the processes and satisfy conditions ( h1 ) and ( h2 ) in assumption [ a.2.1 ] , and from ( [ e.3.12 ] ) we have for ] for any ] , and are elements in , where for any , let us compute & & \hphantom{\rho_r\biggl\{\int_\theta^r [ } { } + \partial_{zz}f(u , y_u , z_u)d_\theta z_u+ d_\theta\partial_{z}f(u , y_u , z_u)]\,dw_u\\[-2pt ] & & \hphantom{\rho_r\biggl\ { } { } + \partial_zf(\theta , y_\theta , z_\theta)\\[-2pt ] & & \hphantom{\rho_r\biggl\{}+\int_\theta^r\bigl(\partial_{yy}f(u , y_u , z_u)-\partial _{ yz}f(u , y_u , z_u)\beta_u\bigr)d_\theta y_u\,du\\[-2pt ] & & \hphantom{\rho_r\biggl\ { } { } + \int_\theta^r\bigl(\partial_{yz}f(u , y_u , z_u)-\partial _ { zz}f(u , y_u , z_u)\beta_u\bigr)d_\theta z_u\,du\\[-2pt ] & & \hphantom{\rho_r\biggl\ { } \hspace*{17.3pt}{}+\int_\theta^r\bigl(d_\theta\partial_{y}f(u , y_u , z_u)-\beta_ud_\theta \partial_{z}f(u , y_u , z_u)\bigr)\,du\biggr\}.\end{aligned}\ ] ] by the boundedness of the first- and second - order partial derivatives of with respect to and , ( [ e5 - 1 ] ) , ( [ e6 ] ) , ( [ e.3.19 - 1 ] ) , lemma [ l.3.1 ] , the hlder inequality and the burkholder davis gundy inequality , it is easy to show that for any , by the clark ocone haussman formula , we have & = & { \mathbb{e}}(\rho_td_s\xi)+\int_0^t{\mathbb{e}}(d_\theta\rho_td_s\xi+\rho _ td_\theta d_s\xi|\mathcal{f}_\theta ) \,dw_\theta\\[-2pt ] & = & { \mathbb{e}}(\rho_td_s\xi)+\int_0^tu_\theta^s \,dw_\theta\end{aligned}\ ] ] and \,dr\big|\mathcal{f}_\theta\biggr)\,dw_\theta\\ & & \qquad={\mathbb{e}}\int_s^t\rho_rd_sf(r , y_r , z_r)\,dr+\int_0^tv_\theta^s \,dw_\theta.\end{aligned}\ ] ] we claim that and .in fact , by ( [ e2 ] ) , ( [ e2 - 2 ] ) , ( [ e.3.21 ] ) and lemma [ l.3.1 ] , we have .on the other hand , \,dr\big|\mathcal{f}_\theta\biggr)\biggr\vert ^{p^{\prime}}\\ & \le&4^{p^{\prime}-1}[j_1+j_2+j_3+j_4],\end{aligned}\ ] ] where and for , we have for , we have using a similar techniques as before , we obtain that and by ( [ e3 ] ) , ( [ e5 - 1])([e6 - 2 ] ) , ( [ e.3.21 ] ) and lemma [ l.3.1 ] , we obtain that therefore , and belong to . thus by theorem [ t.3.2 ] with , there is a constant , such that for all ] . combining ( [ e.3.23 ] ) with ( [ e.3.18 ] ) and ( [ e.3.19 ] ), we obtain that there is a constant independent of and , such that for all ] be the unique solution pair to ( [ bsde ] ) .if , then there exists a constant , such that , for any ] is a deterministic function that has uniformly bounded first- and second - order partial derivatives with respect to and , and .the terminal value is a multiple stochastic integral of the form ^n}g(t_1,\ldots , t_n)\,dw_{t_1}\cdots dw_{t_n},\ ] ] where is an integer and is a symmetric function in^n) ] ^{n-1}}|g(t_1,\ldots , t_{n-1},u)-g(t_1,\ldots , t_{n-1},v)|^2\,dt_1\cdots dt_{n-1}<l|u - v|.\ ] ] from ( [ mul ] ) , we know that ^{n-1}}g(t_1,\ldots , t_{n-1},u)\,dw_{t_1}\cdots dw_{t_{n-1}}.\ ] ] the above assumption implies assumption [ a.3.2 ] , and therefore , satisfies the hlder continuity property ( [ e - z ] ) .let ) ] .assume that : \times\mathbb{r}\times\mathbb{r}\rightarrow \mathbb{r} ] associated with , there exists a constant such that for all , )\vert^p\le l\vert\theta-\theta^\prime\vert^{{p/2}}\ ] ] for some .it is easy to show that ) ] , where denotes the signed measure on \times[0,1] ] , , . is twice differentiable , and there exist a constant and a positive integer such that where for any ) ] . 
for any fixed , we have .then , under all the assumptions in this example , by theorem 2.2.1 and lemma 2.2.2 in and the results listed above , we can verify assumption [ a.3.2 ] .therefore , has the hlder continuity property ( [ e - z ] ) .note that in the multidimensional case we do not require the matrix to be invertible .in the remaining part of this paper , we let be a partition of the interval ] , comparing with the numerical schemes for forward stochastic differential equations , we could introduce a numerical scheme of the form where is an approximation of the terminal condition .this leads to a backward recursive formula for the sequence .in fact , once and are defined , then we can find by and is determined by the stochastic integral representation of the random variable although can be expressed explicitly by clark ocone haussman formula , its computation is a hard problem in practice . on the other hand , there are difficulties in studying the convergence of the above scheme . an alternative scheme is introduced in , where the approximating pairs are defined recursively by where , by convention , when . in the following rate of convergenceis proved for this approximation scheme , assuming that the terminal value and the generator are functionals of a forward diffusion associated with the bsde , the main result of this section is the following , which on one hand improves the above rate of convergence , and on the other hand extends terminal value and generator to more general situation .consider the approximation scheme ( [ e.4.2 ] ) .let assumption [ a.3.2 ] be satisfied , and let the partition satisfy , where is a constant .assume that a constant exists such that }\ ] ] and then , when , by ( [ e.4.11 ] ) and ( [ e.4.12 ] ) we can write {\delta}_{k-1}\\ & & { } -\int_{t}^t \delta z_r^\pi\,dw_r+r_t^\pi+\delta\xi^\pi,\end{aligned}\ ] ] where .therefore , we obtain {\delta}_{k-1 } + r_t^\pi+ \delta\xi^\pi\big|\mathcal{f}_{t}\biggr ) .\hspace*{-36pt}\ ] ] denote . from equality ( [ g1 ] ) for , where , and taking into account that , we obtain ] ( ] , , \ldots , t-2\delta l\in[0,t_{i_l}] ] , has length less than , that is , . on each interval , j=0,1,\ldots , l ] .we shall use the fixed point theorem for the mapping from ) ] which maps to , where is the solution of the following bsde : .\ ] ] in fact , by the martingale representation theorem , there exist a progressively measurable process such that and by the integrability properties of and , one can show that ) ] .then satisfies equation ( [ e - ab ] ) .notice that is a martingale .then by the lipschitz condition on , the integrability of and , and doob s maximal inequality , we can prove that ) ] , and let , be the associated solutions , that is , , i=1,2.\ ] ] denote then for all ] . thus by doob s maximal inequality , we have \\[-8pt ] & \le & c \mathbb{e } \biggl| \int_a^bz_r^1\,dr- \int_a^b z_r^2\,dr \biggr|^p \nonumber\\ & \le & c ( b - a)^{{p/2 } } \mathbb{e } \biggl(\int_a^b |\bar{z}_r| ^2 \,dr\biggr)^{{p/2 } } , \nonumber\end{aligned}\ ] ] where is a generic constant depending on and , which may vary from line to line . from ( [ e.5.3 ] ) , it is easy to see for all ] to the bsde ( [ e.5.2 ] ) .now we begin to study the convergence of the scheme ( [ e.5.1 ] ) .[ t.5.2 ] let assumption [ a.3.2 ] be satisfied , and let be any partition. assume that and there exists a constant such that , for all ] . 
using doob s maximal inequality , ( [ e.4.13 ] ) , and the lipschitz condition on , we have ^p \\ \hspace*{-2pt}&&\qquad\le c \mathbb{e}\biggl ( \sum_{k = i+1}^{n } r_r^\pi| + |\delta\xi^\pi| \biggr)^p \\ \hspace*{-2pt}&&\qquad\le c \biggl\ { \mathbb{e}\biggl ( \sum_{k = i+1}^{n } \mathbb{e}\sup_{0\le r\le t}| r_r^\pi|^p + { \mathbb{e}}|\delta \xi^\pi|^p\biggr\ } \\\hspace*{-2pt}&&\qquad\le c\biggl\ { \mathbb{e}\biggl ( \sum_{k = i+1}^{n } |\delta y_{t_k}^\pi|\delta_{k-1 } \biggr)^p + \mathbb{e } \biggl ( \sum_{k = i+1}^n |\widehat{z}_{t_k}^\pi| { \delta}_{k-1 } \biggr)^p+ |\pi|^{{p/2 } } + { \mathbb{e}}|\delta\xi^\pi|^p\biggr\ } \\\hspace*{-2pt}&&\qquad\le c\biggl\ { ( t - t_i)^p\mathbb{e } \sup_{i+1\le k\le n } \hspace*{-2pt}&&\qquad\quad\hphantom{c\biggl\ { } { } + \mathbb{e } \biggl ( \sum_{k = i+1}^n |\widehat{z}_{t_k}^\pi| { \delta}_{k-1 } \biggr)^p+ |\pi|^{{p/2 } } + { \mathbb{e}}|\delta\xi^\pi|^p\biggr\},\end{aligned}\ ] ] where , and in the following , denotes a generic constant independent of the partition and may vary from line to line . on the other hand, we have , by the hlder continuity of given by ( [ e - z ] ) , hence , we obtain where is a constant independent of the partition . by the burkholder davis gundy inequality , we have from ( [ e.5.5 ] ) , we obtain thus , from ( [ e.5.8 - 1 ] ) and ( [ e.5.8 - 2 ] ) , we obtain & & \qquad\le c_p\biggl\ { \mathbb{e}\biggl| \sum_{k = i+1}^n \widetilde{f}_{t_k}^\pi { \delta}_{k-1}\biggr|^p+ \mathbb{e } |\delta\xi^\pi \mathbb{e } |r_{t_i}^\pi|^p+\mathbb{e}|\delta y_{t_i}^\pi|^p\biggr\}.\end{aligned}\ ] ] similar to ( [ e.5.8 ] ) , we have where is a constant independent of the partition . if , then we have substituting ( [ e.5.9 ] ) into ( [ e.5.8 ] ) , we have \\[-8pt ] & & \qquad\quad { } + c_1\bigl(1 + 2c_2(t - t_i)^{{p/2}}\bigr)(|\pi|^ { { p}/{2}}+{\mathbb{e}}|\delta\xi^\pi|^p)\nonumber\\ & & \qquad\le 2c_1 ( t - t_i)^p \mathbb{e } \sup_{i+1 \le k\le n } |\delta y_{t_k } we can find a positive constant independent of the partition , such that , and . denote ] , , \ldots , t-2\delta l\in[0,t_{i_l}] ] , has length less than , that is , .on ] , based on the recursive formula ( [ e.5.1 ] ) and ( [ e.5.12 ] ) , inequality ( [ e.5.9 ] ) becomes combining ( [ e.5.12 ] ) and ( [ e.5.13 ] ) ,we know that there exists a constant independent of the partition , such that the advantages of this implicit numerical scheme are : we can obtain the rate of convergence in sense ; the partition can be arbitrary ( should be small enough ) without assuming .for all the numerical schemes considered in sections [ sec3 ] and [ sec4 ] , one needs to evaluate processes with continuous index . in this section ,we use the representation of in terms of the malliavin derivative of to derive a completely discrete scheme . from ( [ e.3.12 ] ) , can be represented as where with and . using that , a.e . 
, from ( [ bsde ] ) , ( [ e.6.3 ] ) and ( [ e.6.4 ] ) , we propose the following numerical scheme .we define recursively \\[-9pt ] z_{t_i } ^\pi&=&\mathbb{e } \biggl ( \rho_{t_{i+1 } , t_n}^\pi d_{t_i } \xi+\sum_{k = i}^{n-1 } \rho_{t_{i+1 } , t_{k+1}}^\pi d_{t_i } f(t_{k+1 } , y_{t_{k+1}}^\pi , z_{t_{k+1}}^\pi ) { \delta } _ k \big|\mathcal{f}_{t_i}\biggr),\nonumber\\ \eqntext{i = n-1 , n-2,\ldots , 0,}\end{aligned}\ ] ] where , and for , \\[-9pt ] & & \hphantom{\exp\biggl\ { } { } + \sum_{k = i}^{j-1 } \int_{t_k}^{t_{k+1 } } \biggl ( \partial_y f(r , y_{t_k}^\pi , z_{t_k}^\pi ) -\frac12 [ \partial _ z f(r , y_{t_k}^\pi , z_{t_k}^\pi)]^2\biggr)\,dr \biggr\ } .\nonumber\end{aligned}\ ] ] an alternative expression for is given by the following formula : \\[-9pt ] & & \hphantom{\exp\biggl\ { } { } + \sum_{k = i}^{j-1 } \biggl ( \partial_y f({t_k } , y_{t_k}^\pi , z_{t_k}^\pi ) - \frac12 [ \partial _ z f({t_k } , y_{t_k}^\pi , z_{t_k}^\pi)]^2 \biggr){\delta}_k \biggr\}.\nonumber\end{aligned}\ ] ] however , we will only consider the scheme ( [ e.6.5 ] ) with given by ( [ e.6.7 ] ) .we make the following assumptions : is deterministic , which implies . is linear with respect to and ; namely , there are three functions , and such that assume that , are bounded and ) ] , for all .notice that ( g1 ) and ( g2 ) imply ( ii ) and ( iii ) in assumption [ a.3.2 ] .we propose condition ( g1 ) in order to simplify in formula ( [ e.6.5 ] ) .in fact , there are some difficulties in generalizing the condition ( g)s , especially ( g1 ) , to a forward backward stochastic differential equation ( , for short ) case .if we consider a fbsde where , and the functions are deterministic , then under some appropriate conditions [ e.g. , ( h1)(h4 ) in example [ eg-2 - 11 ] ] for in ( [ e.6.5 ] ) is of the form & & \hphantom{\mathbb{e } \biggl ( } { } + \sum_{k = i}^{n-1 } \rho_{t_{i+1 } , t_{k+1}}^\pi \partial_x f(t_{k+1},x_{t_{k+1}}^\pi , y_{t_{k+1}}^\pi , z_{t_{k+1}}^\pi)d_{t_i}x_{t_{k+1}}^\pi{\delta } _ k \big|\mathcal{f}_{t_i}\biggr),\end{aligned}\ ] ] where is a certain numerical scheme for .it is hard to guarantee the existence and the convergence of malliavin derivative of , and therefore , the convergence of is difficult to derive .[ t.6.1 ] let assumption [ a.3.2 ] and assumptions be satisfied. then there are positive constants and independent of the partition , such that , when we have in the proof , will denote a constant independent of the partition , which may vary from line to line . under the assumption ( g1 ), we can see that latexmath:[\[z_{t_i}^\pi=\mathbb{e } ( \rho_{t_{i+1 } , t_n}^\pi d_{t_i } \xi , since , we deduce , for all , \big|\mathcal{f}_{t_i } \biggr ) .\end{aligned}\ ] ] from ( g2 ) , we have where is a constant independent of the partition . 
In the same way, we obtain Thus, for ,
\[
\begin{aligned}
&\quad\cdots\,\big|\,\mathcal{F}_{t_i}\biggr)\\
&\le 2C_1\,\mathbb{E}\biggl(\bigl(\sup_{0\le\theta\le T}|D_\theta\xi|\bigr)\biggl(\sup_{0\le t\le T}\exp\biggl\{\int_t^T h(r)\,dW_r\biggr\}\biggr)\\
&\qquad\times\biggl[\sup_{0\le k\le n-1}\biggl|\int_{t_k}^{t_{k+1}} h(r)\,dW_r\biggr| +\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} |g(r)|\,dr +\frac12\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} h(r)^2\,dr\biggr]\,\Big|\,\mathcal{F}_{t_i}\biggr).
\end{aligned}
\]
The right-hand side of the above inequality is a martingale as a process indexed by . Let . Then  satisfies the following linear stochastic differential equation: By (G1), (G2), the Hölder inequality and Lemma [l.3.1], it is easy to show that, for any ,
\[
\begin{aligned}
&=\mathbb{E}\biggl(\exp\biggl\{\int_0^T h(u)\,dW_u\biggr\}\sup_{0\le t\le T}\exp\biggl\{-\int_0^t h(u)\,dW_u\biggr\}\biggr)^r\\
&\le\biggl(\mathbb{E}\exp\biggl\{2r\int_0^T h(u)\,dW_u\biggr\}\biggr)^{1/2}\times\biggl(\mathbb{E}\sup_{0\le t\le T}\exp\biggl\{-2r\int_0^t h(u)\,dW_u\biggr\}\biggr)^{1/2}\\
&=\exp\biggl\{r^2\int_0^T h(u)^2\,du\biggr\}\bigl(\mathbb{E}\sup_{0\le t\le T}\eta_t^{2r}\bigr)^{1/2}<\infty.
\end{aligned}
\]
For any , by Doob's maximal inequality and the Hölder inequality, (G3) and ([ee.6.1]), we have
\[
\begin{aligned}
&\le C\,\mathbb{E}\biggl(\bigl(\sup_{0\le\theta\le T}|D_\theta\xi|\bigr)^p\biggl(\sup_{0\le t\le T}\exp\biggl\{\int_t^T h(r)\,dW_r\biggr\}\biggr)^p\\
&\qquad\times\biggl[\sup_{0\le k\le n-1}\biggl|\int_{t_k}^{t_{k+1}} h(r)\,dW_r\biggr| +\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} |g(r)|\,dr +\frac12\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} h(r)^2\,dr\biggr]^p\biggr)\\
&\le C\biggl[\mathbb{E}\biggl(\bigl(\sup_{0\le\theta\le T}|D_\theta\xi|\bigr)^{pp'/(p'-p)}\biggl(\sup_{0\le t\le T}\exp\biggl\{\int_t^T h(r)\,dW_r\biggr\}\biggr)^{pp'/(p'-p)}\biggr)\biggr]^{(p'-p)/p'}\\
&\qquad\times\biggl[\mathbb{E}\biggl(\sup_{0\le k\le n-1}\biggl|\int_{t_k}^{t_{k+1}} h(r)\,dW_r\biggr| +\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} |g(r)|\,dr +\frac12\sup_{0\le k\le n-1}\int_{t_k}^{t_{k+1}} h(r)^2\,dr\biggr)^{p'}\biggr]^{p/p'}\\
&\le C\bigl[\mathbb{E}\bigl(\sup_{0\le\theta\le T}|D_\theta\xi|\bigr)^{2pp'/(p'-p)}\bigr]^{p'/(2(p'-p))}\times\biggl[\mathbb{E}\biggl(\sup_{0\le t\le T}\exp\biggl\{\int_t^T h(r)\,dW_r\biggr\}\biggr)^{2pp'/(p'-p)}\biggr]^{p'/(2(p'-p))}\\
&\qquad\times\biggl[\mathbb{E}\sup_{0\le k\le n-1}\biggl|\int_{t_k}^{t_{k+1}} h(r)\,dW_r\biggr|^{p'} +\mathbb{E}\sup_{0\le k\le n-1}\biggl(\int_{t_k}^{t_{k+1}} |g(r)|\,dr\biggr)^{p'} +\mathbb{E}\sup_{0\le k\le n-1}\biggl(\int_{t_k}^{t_{k+1}} h(r)^2\,dr\biggr)^{p'}\biggr]^{p/p'}\\
&= C[I_1+I_2+I_3]^{p/p'}.
\end{aligned}
\]
For any , by the Hölder inequality we can obtain For any centered Gaussian variable  and any , we know that  where  is a constant independent of . Thus, we can see that . Assume  is small enough; then we have It is easy to see that Consequently, we obtain Applying recursively the scheme given by ([e.6.5]), we obtain Therefore, for ,  where  is exactly the same as in Section [sec3] and . In fact, we keep the term  to indicate the role it plays as the terminal value. For , we have ), we obtain
\[
\begin{aligned}
&\le C\,\mathbb{E}\biggl(\sum_{k=i+1}^n |f(t_k,Y_{t_k},Z_{t_k})-f(t_k,Y_{t_k}^\pi,Z_{t_k}^\pi)|\,\Delta_{k-1}\biggr)^p + C\bigl(|\pi|^{p/2}+\mathbb{E}|\Delta\xi^\pi|^p\bigr)\\
&\le C\biggl\{\mathbb{E}\biggl(\sum_{k=i+1}^n |Y_{t_k}-Y_{t_k}^\pi|\,\Delta_{k-1}\biggr)^p+\mathbb{E}\biggl(\sum_{k=i+1}^n |Z_{t_k}-Z_{t_k}^\pi|\,\Delta_{k-1}\biggr)^p\biggr\} + C\bigl(|\pi|^{p/2}+\mathbb{E}|\Delta\xi^\pi|^p\bigr)\\
&\le C_2(T-t_i)^p\,\mathbb{E}\sup_{i+1\le k\le n}|Y_{t_k}-Y_{t_k}^\pi|^p + C_3\biggl(|\pi|^{p/2-p/(2\log(1/|\pi|))}\biggl(\log\frac{1}{|\pi|}\biggr)^{p/2}+\mathbb{E}|\Delta\xi^\pi|^p\biggr),
\end{aligned}
\]
where  and  are constants independent of the partition . We can obtain the estimate for ) in Theorem [t.5.2] to get the estimate for . We appreciate the referee's very constructive and detailed comments, which improved the presentation of this paper.
|
In this paper we study backward stochastic differential equations with general terminal value and general random generator. In particular, we do not require that the terminal value be given by a forward diffusion equation. The randomness of the generator does not need to come from a forward equation, either. Motivated by applications to numerical simulations, we first obtain the -Hölder continuity of the solution. Then we construct several numerical approximation schemes for backward stochastic differential equations and obtain the rate of convergence of the schemes based on the obtained -Hölder continuity results. The main tool is the Malliavin calculus.
|
aerial robotics have demonstrated their ability to provide rapid coverage of complex areas and environments by exploiting miniaturized sensing technology and their advanced locomotion capabilities . nowadays ,aerial robots of very limited cost present robust flight behavior , and can be equipped with a multi modal sensing suite that may contain visible light cameras , thermal imaging or even light detection and ranging ( lidar ) devices and more . at the same time , progress in robotic perception has enabled the online , real time , reconstruction of the environment , tracking of areas and targets of interest or even semantic scene understanding .finally , the sucessful combination of modern path planning strategies with the real time localization and mapping capabilities of the robot has allowed aerial robots to navigate or even explore autonomously in possibly cluttered , challenging and previously unknown environments .aiming to further leverage these outstanding achievements , this work deals with the challenge of using aerial robots to monitor dynamic social phenomena such as parades taking place in our cities .in particular , we aim to address the problem of optimally coordinating and positioning a team of aerial robots each of them equipped with a camera sensor such that they can provide optimal surveillance of a dynamically evolving parade route taking place within an urban environment .the parade route is able to change its spatial distribution and form dynamically , the aerial robots are subject ot the limitations of their sensing modules and the goal is to optimize the totally achieved coverage along the parade route . as the parade route is only as well covered as its least covered point , the optimization objective is to place the aerial robots of the team such that they maximize the minimum coverage over the points in the route at every time instant of it .figure [ fig : motivation ] presents the motivation behind the algorithmic contribution of our work . to approach this problem, we contribute an algorithm that considers a team of aerial robots capable of flying holonomic trajectories and equipped with a camera sensor of limited field of view , assumes a dynamically evolving parade route within an urban environment consisting of buildings or other occlusion structures and aims to find the best possible `` guarding '' positions of the robot team such that optimal coverage is provided at every instance of the parade . as the parade route evolves , the robot team modifies its position to provide the best coverage at any time . as this problem is in general nonconvex and np hard, we contribute an algorithm that provides approximate solutions via convexification very fast . 
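To make the coverage model just described (a limited camera field of view plus occlusions from buildings) concrete, the following Python sketch implements a simple visibility test. It is an illustration under assumed conventions only: buildings are axis-aligned rectangles given as (xmin, ymin, xmax, ymax), the camera is parameterized by a heading, an angular field of view and a maximum range, and the line-of-sight test is a coarse sampled segment check; none of these names or parameter values come from the report itself.

```python
import numpy as np

def segment_intersects_rect(p, q, rect):
    """Check whether the segment p->q passes through an axis-aligned rectangle
    (xmin, ymin, xmax, ymax). A coarse sampling test, adequate for an illustration."""
    xmin, ymin, xmax, ymax = rect
    for s in np.linspace(0.0, 1.0, 50):
        x, y = p + s * (q - p)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False

def covers(guard, target, buildings, fov_deg=90.0, heading_deg=0.0, max_range=60.0):
    """True if a camera at 'guard' with the given heading, field of view and range
    sees 'target' without being blocked by any rectangular building."""
    guard, target = np.asarray(guard, float), np.asarray(target, float)
    d = target - guard
    if np.linalg.norm(d) > max_range:
        return False
    bearing = np.degrees(np.arctan2(d[1], d[0]))
    # signed angular offset between the camera heading and the target bearing
    off = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2.0:
        return False
    return not any(segment_intersects_rect(guard, target, b) for b in buildings)
```

Evaluating covers(...) for every sampled guard position and every discretized route point yields a Boolean coverage matrix of the kind used by the optimization stage discussed next.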
To demonstrate the capabilities of the algorithm we present a set of simulation studies, while the computational properties of the algorithm are also analyzed. The rest of this document is organized as follows: Section [sec:problem] overviews and details the specific problem considered, while Section [sec:algorithm] describes the proposed optimal multi aerial robot dynamic parade route surveillance algorithm. Subsequently, Section [sec:sim] presents detailed simulation results and a computational analysis of the algorithm. Finally, conclusions are drawn in Section [sec:concl]. A dynamic parade following the route trajectory is considered to take place in an urban map, subsets of which are occupied by building obstacles of rectangular shape. Given a set of aerial robots capable of flying holonomic trajectories and a sensor model constrained by a field of view, the problem is to find the set of aerial robot trajectories ]. In general, the solution to this relaxed problem will have fractional variables. As a Boolean allocation is considered in order to specifically assign a guard location to every robot, the _iterated weighted_ heuristic will be used to achieve the recovery of a Boolean solution. In order to recover a Boolean solution, an approach is to solve a sequence of convex problems where the linear term is added to the objective, picking the weight vector at each iteration so as to induce a sparse solution vector. Enhancing sparsity via reweighted optimization is an extensively employed approach in convex optimization. Broadly, given a set and denoting its _cardinality_ as , the iterated weighted heuristic is the process of minimizing over through the following process: [alg:loneheur] Naturally, this process is extended to the case of matrices, with the matrix rank operator then playing the role of the cardinality operator. For the problem of finding a solution to the relaxed, convex problem of Section [sec:relaxation], the iterated heuristic consists of initializing and repeating the two steps: _Step 1:_ _Step 2:_ until a Boolean solution is reached. Within these expressions, and are adjusted to promote a sparse solution. Typical choices would be and . Intuitively, the weight vector pushes elements of that were close to zero in the last iteration towards zero in the next iteration. It is highlighted that the heuristic performs efficiently in practice, as it typically converges within or fewer iterations. The aforementioned steps provide the solution of placing a team of aerial robots at the optimal guard positions to ensure the best coverage of a fixed instance of the parade route. As the parade is in fact dynamic, these steps are executed iteratively. At every step, sampled at a possibly varying sampling period, the current instance of the route is used and the relevant optimal robot positions are computed. The reference commands to the robots are then provided to the team in a nearest-neighbor fashion. To verify and evaluate the functionality of the algorithm, a set of simulation studies is considered. Within those, a city is considered and parades are designed to follow complex trajectories within the city building blocks.
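The relax-then-reweight procedure described above can be sketched as follows: solve a max-min coverage relaxation, then apply an iterated weighted heuristic to recover a Boolean selection of guard positions. This is a simplified stand-in, not the report's exact formulation: V is an assumed Boolean coverage matrix (guards x route points), m is the number of robots, cvxpy is used purely for illustration, and the weighting rule together with lam and eps are illustrative choices.

```python
import numpy as np
import cvxpy as cp

def select_guards(V, m, iters=10, eps=1e-3, lam=1e-2):
    """Choose m guard positions maximizing the minimum number of selected guards
    covering any route point, via an LP relaxation plus iterative reweighting."""
    n_guards, n_points = V.shape
    w = np.zeros(n_guards)                       # zero weights: plain relaxation first
    x_val = None
    for _ in range(iters):
        x, t = cp.Variable(n_guards), cp.Variable()
        constraints = [V.T @ x >= t, cp.sum(x) == m, x >= 0, x <= 1]
        cp.Problem(cp.Maximize(t - lam * (w @ x)), constraints).solve()
        x_val = x.value
        if np.all((x_val < eps) | (x_val > 1 - eps)):
            break                                # solution is (numerically) Boolean
        w = 1.0 / (np.maximum(x_val, 0.0) + eps) # push near-zero entries towards zero
    return np.flatnonzero(x_val > 0.5)           # indices of the selected guard positions
```

With w = 0 the first pass is the plain convex relaxation; the subsequent passes add the linear penalty term, which concentrates the fractional mass on m positions, mirroring the two-step iteration described above.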
at the same time , we varied the number of robots as well as the number of potential guard positions sampled in the environment .below , a subset of these results will be presented and the computational analysis will be summarized .figure [ fig : res1 ] presents the case of a aerial robots commanded to monitor a complicated parade route traveling within the a city environment consisting of building blocks .the dynamic trajectory of the parade is discretized to samples and a total of possible guard positions are sampled within the obstacle free subset of the workspace of the problem .each robot is considered to be equipped with a camera with horizontal field of view . as shown, the algorithm dynamically adapts the positions of the robots to find feasible , full coverage solutions at all times . figure [ fig : res_time ] presents the computation characteristics of the solution per step of iteration .robots monitoring a dynamic parade route .the parade is considered to be taking place within a city like environment consisting of building blocks .camera - occlusions are accounted for , while the field of view of the camera that equips every robot is considered to be . for this study possible guard locations are sampled within the obstacle free subset of the world . ]similarly , figure [ fig : res1b ] presents the results of the identical set up with the exception of sampling possible guard locations .as shown the results of the robots positioning are very similar for almost all iterations which indicates that as long as a _sufficient _ number of guard positions is sampled , then further enlargement of this sampling space will not tend to lead to significantly better solutions . on the other hand , computational time increases a lot as shown in figure [ fig : res_time ] , a fact that further highlights the need for a good prior tuning of the amount of guard positions to be sampled . as the sampling of possible guard positions is uniform however , tuning this value is in general only about having one good reference value for a given environment and then scaling with the surface of free space .robots monitoring a dynamic parade route .the parade is considered to be taking place within a city like environment consisting of building blocks .camera - occlusions are accounted for , while the field of view of the camera that equips every robot is considered to be . for this study possible guard locations are sampled within the obstacle free subset of the world . ]figure [ fig : res2 ] presents the same case but now with aerial robots . for this case , initially a total of possible guard positions are sampled within the obstacle free subset of the workspace of the problem . as shown , the solution is characterized with more close pressence of robots around the parade route .figure [ fig : res_time ] presents the computation characteristics of the solution per step of iteration .robots monitoring a dynamic parade route .the parade is considered to be taking place within a city like environment consisting of building blocks .camera - occlusions are accounted for , while the field of view of the camera that equips every robot is considered to be .for this study possible guard locations are sampled within the obstacle free subset of the world . 
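Since the guard candidates are sampled uniformly in the obstacle-free space and their number is meant to scale with the free-space area, a minimal sampler could look as follows. It is only a sketch: rejection sampling against axis-aligned, non-overlapping rectangles is assumed, and the density parameter is a tuning assumption rather than a value from the report.

```python
import numpy as np

def sample_guard_candidates(bounds, buildings, density=0.02, rng=None):
    """Uniformly sample candidate guard positions in the obstacle-free part of the map.
    'density' is candidates per unit area of free space (assumes non-overlapping buildings)."""
    rng = np.random.default_rng() if rng is None else rng
    xmin, ymin, xmax, ymax = bounds
    blocked = sum((b[2] - b[0]) * (b[3] - b[1]) for b in buildings)
    free_area = (xmax - xmin) * (ymax - ymin) - blocked
    n_target = int(density * free_area)          # scale the set size with the free area
    pts = []
    while len(pts) < n_target:
        p = rng.uniform([xmin, ymin], [xmax, ymax])
        if not any(b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3] for b in buildings):
            pts.append(p)
    return np.array(pts)
```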
]similarly , figure [ fig : res2b ] presents the results of the identical set up with the exception of sampling possible guard locations .again the results of the robots positioning are similar for almost all iterations , which further denotes that very large sets of possible guard locations are not providing significant solution quality benefits .on the other hand , computational time increases a lot as shown in figure [ fig : res_time ] .robots monitoring a dynamic parade route .the parade is considered to be taking place within a city like environment consisting of building blocks .camera - occlusions are accounted for , while the field of view of the camera that equips every robot is considered to be .for this study possible guard locations are sampled within the obstacle free subset of the world . ]figure [ fig : res_time ] summarizes the computational properties of the algorithm for the above mentioned simulation cases .furthermore , figure [ fig : timemultirob ] presents the computational analysis of a set of studies with robots , while keeping the amount of potential sampled guard positions fixed to .as shown , the computational cost is very similar for the different robot teams both in the sense of the average value as well as of the evolution of it .this indicates the good scalability properties of the algorithm for arbitrary large teams of aerial robots . or robots and different sizes of potential guard positions sets .as illustrated , the factor that greatly impacts computational time is the size of the set of possible guard locations . ] and robots given that the set of potential guard positions is set to the fixed value of .as shown the dynamics as well as the cost of the computation per iteration are similar regardless of the size of the team , a fact that highlights the scalability of the proposed approach . ] in summary , it was shown that the algorithm is able to deal with complex parade routes taking place in urban like environments .different sizes of robotic teams can be considered and the algorithm presents good computational scalability .computation time is primarily affected by the size of the set of potential guard locations , which indicates that the size of the problem can influence the computation time . however , even in cases of very large potential guard location sets , the algorithm finds solutions within seconds - a performance considered to be sufficient given the large time scales of dynamic variations in social parades . at the current implementation of the algorithm ,connection of subsequent optimal positions of the aerial robots team members relies on the nearest - neighbor concept as computed over collision free trajectories . 
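A minimal sketch of the nearest-neighbor dispatch mentioned above: each newly computed guard position is greedily given to the closest still-unassigned robot. Plain Euclidean distances stand in here for the collision-free trajectory costs used in the report, so this is only an illustration of the concept.

```python
import numpy as np

def nearest_neighbor_assignment(robot_positions, guard_positions):
    """Greedy nearest-neighbor matching of current robot positions to the newly
    computed guard positions (one robot per guard position)."""
    robots = list(range(len(robot_positions)))
    assignment = {}
    for g_idx, g in enumerate(guard_positions):
        dists = [np.linalg.norm(np.asarray(robot_positions[r]) - np.asarray(g)) for r in robots]
        chosen = robots.pop(int(np.argmin(dists)))
        assignment[chosen] = g_idx
    return assignment   # maps robot index -> guard-position index
```

A globally optimal matching could instead be obtained with the Hungarian method (for example scipy.optimize.linear_sum_assignment), which is one way the nearest-neighbor step could be upgraded.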
Future work will incorporate a fully optimal solution employing multiple vehicle routing problem solvers such as the implementation in . This technical report deals with the problem of positioning a team of aerial robots such that they provide optimal coverage of a dynamically evolving parade taking place in an urban environment. The problem is solved iteratively over sampled representations of the parade route and it relies on convex approximations of the original nonconvex problem. As the parade route is only as well covered as its least covered point, the optimization objective is to place the aerial robots such that they maximize the minimum coverage over the points of the route at every time instant. Simulation studies verify the functionality of the algorithm and demonstrate its capacity to handle large robot teams and complex parade routes, as well as its low computational cost.
|
This technical report addresses the problem of optimal surveillance of the route followed by a dynamic parade using a team of aerial robots. The dynamic parade is considered to take place within an urban environment; its route is discretized and, at every iteration, the algorithm computes the best possible placement of the aerial robotic team members, subject to their camera model and the occlusions arising from the environment. As the parade route is only as well covered as its least covered point, the optimization objective is to place the aerial robots such that they maximize the minimum coverage over the points of the route at every time instant. A set of simulation studies is used to demonstrate the operation and performance characteristics of the approach, while a computational analysis is also provided and verifies the good scalability properties of the contributed algorithm with respect to the size of the aerial robotic team.
|
recently , the complex features of financial time series have been studied using a variety of methods developed in econophysics [ 1 - 2 ] .the analysis of extensive financial data has empirically pointed to the breakdown of the efficient market hypothesis(emh ) , in particular , the weak - form of emh [ 4 - 6 , 12 - 14 ] .for example , the distribution function of the returns of various financial time series is found to follow a universal power law distribution with varying exponents [ 4 - 6 , 13 ] .the returns of financial time series without apparent long - term memory are found to possess the long - term memory in absolute value series , indicating a long - term memory in the volatility of financial time series [ 7,8,9,11,15 ] . in this paper , we use a method developed in statistical physics to test the market efficiency of the financial time series .the approximate entropy(apen ) proposed by pincus __ can be used to quantify the randomness in the time series [ 16 , 17 ] .the apen can not only quantify the randomness in financial time series with a relatively small number of data but also be used as a measure for the stability of time series .previously , the hurst exponent was used to analyze various global financial time series , which suggested that the mature markets have features different from the emerging markets .thus , the hurst exponents for the mature markets exhibit a short - term memory , while those for the emerging markets exhibit a long - term memory [ 9 , 12 ] .it was also shown that the liquidity and market capitalization may play an important role in understanding the market efficiency [ 10 ] . using the apen , we study the market efficiency of the global foreign exchange markets .we use the daily foreign exchange rates for 17 countries from 1984 to 1998 , and ones for 17 countries from 1999 to 2004 around the asian currency crisis .we found that the apen values for european and north american foreign exchange markets are larger than those for african and asian ones except japan .we also found that the market efficiency of asian foreign exchange markets measured by apen increases significantly after the asian currency crisis . in section [ sec :methodology ] , we describe the financial data used in this paper and introduce the apen method . in section [ sec : results ] , we apply the apen method to global foreign exchange rates and investigate the relative efficiency of the diverse foreign exchange markets .finally , we end with a summary .we investigate the market efficiency of the financial time series for various foreign exchange markets . 
for this purpose, we use the return series of daily foreign exchange rates for 17 countries from 1984 to 1998 ( data a ) and from 1999 to 2004 ( data b ) .the data a and data b are obtained before and after the asian crisis , respectively .the data are grouped into european , north american , african , asian and pacific countries ( from http://www.federalreserve.gov/releases/ ) .the returns of the financial time series are calculated by a log - difference and properly normalized , respectively .the normalized return at a given time t is defined by where is the daily foreign exchange rate time series , the return time series after a log - difference , and the standard deviation of the return .pincus _ et al ._ proposed the apen to quantify the randomness inherent in time series data [ 16 , 17 ] .recently , pincus and kalman applied the apen method to a variety of financial time series in order to investigate various features of the market , in particular , the randomness [ 18 ] .the apen is defined as follows : where is the embedding dimension , the tolerance in similarity .the function is given by , \\\ ] ] where is the number of data pairs within a distance , \leq r.\ ] ] the distance $ ] between two vectors and in is defined by = \underset{k=1,2, .. ,m}{max}(|u(i+k-1)-u(j+k-1)|),\ ] ] where is a time series .the apen value compares the relative magnitude between repeated pattern occurrences for the embedding dimensions , and .when the time series data have a high degree of randomness , the apen is large . on the other hand , apen is small for the time series with a low degree of randomness .therefore , the apen can be used as a measure of the market efficiency . in this work, the apen is estimated with the embedding dimension , , and the distance , of the standard deviation of the time series , similar to the preivous works[18 ] .the red , pink , yellow , green and blue color bars correspond to european , north american , african , asian and pacific countries , respectively.,width=566,height=453 ] in this section , we investigate the relative market efficiency for various foreign exchange markets .we measure the randomness in financial time series using the approximate entropy ( apen ) method .we analyze the apen values for the global foreign exchange rates in the data a and data b defined in section ii .figure 1(a ) shows the apen values for the foreign exchange rates of data a before the asian currency crisis .the red , pink , yellow , green , and blue colors denote the european , north american , african , asian , and pacific foreign exchange markets , respectively .we found that the average apen for european foreign exchange rates is 2.0 and the apen for north american one is 1.98 , which are larger than the apen values for asian ones with 1.1 ( except japan ) , and african ones with 1.52 .the apen for the pacific foreign exchange rates is 1.84 , which is intermediate between two groups .this is due to the liquidity or trading volumes in european and north american foreign exchange markets , which are much larger than those for other foreign exchange markets .the market with a larger liquidity such as european and north american foreign exchange markets shows a higher market efficiency than the market with a smaller liquidity such as asian ( except japan ) and african foreign exchange markets . 
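For concreteness, a minimal NumPy implementation of the ApEn estimator defined above is sketched below. It follows the standard Pincus construction with max-norm distances between embedded vectors; the defaults m = 2 and r = 0.2 times the standard deviation are common choices in this literature and are an assumption here, since the exact values are not reproduced in the text.

```python
import numpy as np

def approximate_entropy(u, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series u, with tolerance r given as a
    fraction of the standard deviation of the series (assumed defaults)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    r = r_factor * np.std(u)

    def phi(m):
        # embedded vectors x_i = (u_i, ..., u_{i+m-1})
        x = np.array([u[i:i + m] for i in range(n - m + 1)])
        # C_i^m(r): fraction of vectors within max-norm distance r of x_i (self-match included)
        c = [np.mean(np.max(np.abs(x - xi), axis=1) <= r) for xi in x]
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# an i.i.d. Gaussian series (highly random) should give a comparatively large ApEn
rng = np.random.default_rng(0)
print(approximate_entropy(rng.normal(size=2000)))
```

Applying this function to the normalized return series of each currency gives the kind of per-market ApEn values compared in Fig. 1.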
In order to estimate the change in market efficiency after the market crisis, we investigate the ApEn for the foreign exchange rates of data B after the Asian currency crisis. Figure 1(b) shows the ApEn values for data B. We found that the average ApEn values for European and North American foreign exchange markets do not change much from the case of data A. However, the ApEn values for Asian markets increased sharply from 1.1 to 1.5 after the Asian currency crisis, indicating an improved market efficiency. Note that the ApEn of Pacific foreign exchange rates is 1.92, which is close to those for European and North American markets. Notably, the ApEn of the Korean foreign exchange market increased sharply from 0.55 to 1.71 after the Asian currency crisis. This may be attributed to factors such as the higher volatility and the lower degree of liberalization of the Korean foreign exchange market, the stagnancy of business activities, and the coherent movement patterns among companies during the Asian currency crisis. Our findings suggest that the ApEn can be a good measure of market efficiency. In this paper, we have investigated the degree of randomness in the time series of various foreign exchange markets. We employed the ApEn to quantify market efficiency in the foreign exchange markets. We found that the average ApEn values for European and North American foreign exchange markets are larger than those for African and Asian ones except Japan, indicating a higher market efficiency for European and North American foreign exchange markets than for other foreign exchange markets. We found that the efficiency of markets with a small liquidity, such as Asian foreign exchange markets, improved significantly after the Asian currency crisis. Our analysis can be extended to other global financial markets. This work was supported by a grant from the MOST/KOSEF to the National Core Research Center for Systems Bio-Dynamics (R15-2004-033), and by the Korea Research Foundation (KRF-2005-042-B00075), and by the Ministry of Science & Technology through the National Research Laboratory project, and by the Ministry of Education through the program BK21, and by the Korea Research Foundation (KRF-2004-041-B00219).
|
We investigate the relative market efficiency in financial market data, using the approximate entropy (ApEn) method for a quantification of randomness in time series. We used the global foreign exchange market indices for 17 countries during two periods, from 1984 to 1998 and from 1999 to 2004, in order to study the efficiency of various foreign exchange markets around the market crisis. We found that, on average, the ApEn values for European and North American foreign exchange markets are larger than those for African and Asian ones except Japan. We also found that the ApEn for Asian markets increases significantly after the Asian currency crisis. Our results suggest that markets with a larger liquidity, such as European and North American foreign exchange markets, have a higher market efficiency than those with a smaller liquidity, such as the African and Asian ones except Japan.
|
in the past ten years , modern societies have developed enormous communication and social networks .the world wide web ( www ) alone has about 50 billion indexed web pages , so that their classification and information retrieval processing becomes a formidable task .various search engines have been developed by private companies such as google , yahoo ! and others which are extensively used by internet users .in addition , social networks ( facebook , livejournal , twitter , etc ) have gained huge popularity in the last few years .in addition , use of social networks has spread beyond their initial purpose , making them important for political or social events . to handle such massive databases , fundamental mathematical tools and algorithms related to centrality measures and network matrix propertiesare actively being developed .indeed , the pagerank algorithm , which was initially at the basis of the development of the google search engine , is directly linked to the mathematical properties of markov chains and perron - frobenius operators . due to its mathematical foundation, this algorithm determines a ranking order of nodes that can be applied to various types of directed networks. however , the recent rapid development of www and communication networks requires the creation of new tools and algorithms to characterize the properties of these networks on a more detailed and precise level .for example , such networks contain weakly coupled or secret communities which may correspond to very small values of the pagerank and are hard to detect .it is therefore highly important to have new methods to classify and rank hige amounts of network information in a way adapted to internal network structures and characteristics .this review describes matrix tools and algorithms which facilitate classification and information retrieval from large networks recently created by human activity .the google matrix , formed by links of the network has , is typically huge ( a few tens of billions of webpages ) .thus , the analysis of its spectral properties including complex eigenvalues and eigenvectors represents a challenge for analytical and numerical methods .it is rather surprising , but the class of such matrices , which belong to the class of markov chains and perron - frobenius operators , has been essentially overlooked in physics . indeed , physical problems typically belong to the class of hermitian or unitary matrices .their properties have been actively studied in the frame of random matrix theory ( rmt ) and quantum chaos .the analytical and numerical tools developed in these research fields have paved the way for understanding many universal and peculiar features of such matrices in the limit of large matrix size corresponding to many - body quantum systems , quantum computers and a semiclassical limit of large quantum numbers in the regime of quantum chaos .in contrast to the hermitian problem , the google matrices of directed networks have complex eigenvalues .the only physical systems where similar matrices had been studied analytically and numerically correspond to models of quantum chaotic scattering whose spectrum is known to have such unusual properties as the fractal weyl law .( and ) .matrix corresponds to ( and ) axis with on panel ( a ) , and with on panel ( b ) ; all nodes are ordered by pagerank index of matrix and thus we have two matrix indexes for matrix elements in this basis .panel ( a ) shows the first matrix elements of matrix ( see sec .[ s3 ] ) . 
panel ( b ) shows density of all matrix elements coarse - grained on cells where its elements , , are written in the pagerank basis with indexes ( in -axis ) and ( in a usual matrix representation with on the top - left corner ) .color shows the density of matrix elements changing from black for minimum value ( ) to white for maximum value via green ( gray ) and yellow ( light gray ) ; here the damping factor is after .[ fig1_1],scaledwidth=48.0% ] -0.3 cm in this review we present an extensive analysis of a variety of google matrices emerging from real networks in various sciences including www of uk universities , wikipedia , physical review citation network , linux kernel network , world trade network from the un comtrade database , brain neural networks , networks of dna sequences and many others . as an example , the google matrix of the wikipedia network of english articles ( 2009 ) is shown in fig .[ fig1_1 ] .we demonstrate that the analysis of the spectrum and eigenstates of a google matrix of a given network provides a detailed understanding about the information flow and ranking .we also show that such types of matrices naturally appear for ulam networks of dynamical maps in the framework of the ulam method .currently , wikipedia , a free online encyclopaedia , stores more and more information and has become the largest database of human knowledge . in this respectit is similar to _ the library of babel _ , described by jorge luis borges .the understanding of hidden relations between various areas of knowledge on the basis of wikipedia can be improved with the help of google matrix analysis of directed hyperlink networks of wikipedia articles as described in this review .the specific tools of rmt and quantum chaos , combined with the efficient numerical methods for large matrix diagonalization like the arnoldi method , allow to analyze the spectral properties of such large matrices as the entire twitter network of 41 millions users . in 1998brin and page pointed out that _ `` despite the importance of large - scale search engines on the web , very little academic research has been done on them '' _ .the google matrix of a directed network , like _ the library of babel _ of borges , contains all the information about a network .the pagerank eigenvector of this matrix finds a broad range of applications being at the mathematical foundations of the google search engine .we show below that the spectrum of this matrix and its other eigenvectors also provide interesting information about network communities and the formation of pagerank vector .we hope that this review yields a solid scientific basis of matrix methods for efficient analysis of directed networks emerging in various sciences .the described methods will find broad interdisciplinary applications in mathematics , physics and computer science with the cross - fertilization of different research fields .our aim is to combine the analytic tools and numerical analysis of concrete directed networks to gain a better understanding of the properties of these complex systems .an interested reader can find a general introduction about complex networks ( see also sec . 
[ s2 ] ) in well established papers , reviews and books , , , .descriptions of markov chains and perron - frobenius operators are given in , while the properties of random matrix theory ( rmt ) and quantum chaos are described in .the data sets for the main part of the networks considered here are available at from quantware group .the distributions of the number of ingoing or outgoing links per node for directed networks with nodes and links are well known as indegree and outdegree distributions in the community of computer science .a network is described by an adjacency matrix of size with when there is a link from a node to a node in the network , i. e. `` points to '' , and otherwise .real networks are often characterized by power law distributions for the number of ingoing and outgoing links per node with typical exponents and for the www .for example , for the wikipedia network of fig. [ fig1_1 ] one finds , as shown in fig .[ fig2_1 ] . of number of ingoing ( a ) and outgoing ( b ) links for english articles ( aug 2009 ) of fig . [ fig1_1 ] with total number of links .the straight dashed fit line shows the slope with ( a ) and ( b ) .after .[ fig2_1],scaledwidth=48.0% ] -0.2 cm statistical preferential attachment models were initially developed for undirected networks .their generalization to directed networks generates a power law distribution for ingoing links with but the distribution of outgoing links is closer to an exponential decay .we will see below that these models are not able to reproduce the spectral properties of in real networks .the most recent studies of www , crawled by the common crawl foundation in 2012 for nodes and links , provide the exponents , , even if the authors stress that these distributions describe probabilities at the tails which capture only about one percent of nodes .thus , at present the existing statistical models of networks capture only in an approximate manner the real situation in large networks even if certain models are able to generate a power law decay of pagerank probability .the matrix of markov transitions is constructed from the adjacency matrix by normalizing elements of each column so that their sum is equal to unity ( ) and replacing columns with only zero elements ( _ dangling nodes _ ) by .such matrices with columns sum normalized to unity and belong to the class of perron - frobenius operators with a possibly degenerate unit eigenvalue and other eigenvalues obeying ( see sec .[ s3.2 ] ) .then the google matrix of the network is introduced as : the damping factor in the www context describes the probability to jump to any node for a random surfer . at a given nodea random surfer follows the available direction of links making a random choice between them with probability proportional to the weight of links . for www the google search engine uses . for matrix also belongs to the class of perron - frobenius operators as and with its columns sum normalized .however , for its largest eigenvalue is not degenerate and the other eigenvalues lie inside a smaller circle of radius , i.e. . 
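The construction of the matrix S and of the Google matrix G described above translates directly into code. The dense NumPy sketch below is meant for small illustrative networks only (real computations keep S in sparse storage), and the adjacency convention A[j, i] = 1 for a link i -> j is an assumption chosen so that column normalization gives the transition probabilities.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Build S from the adjacency matrix A by normalizing each column to unit sum and
    replacing dangling (all-zero) columns by 1/N, then form G = alpha*S + (1-alpha)/N."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    safe = np.where(col_sums > 0, col_sums, 1.0)
    S = np.where(col_sums > 0, A / safe, 1.0 / N)   # dangling columns -> uniform 1/N
    return alpha * S + (1.0 - alpha) / N
```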
, where the size of node is proportional to pagerank probability and color of node is proportional to cheirank probability , with maximum at red / gray and minimum at blue / black ; the location of nodes of panel ( a ) on plane is : , , , , for original nodes respectively ; pagerank and cheirank vectors are computed from the google matrices and shown in fig .[ fig3_2 ] at a damping factor .[ fig3_1],scaledwidth=48.0% ] -0.3 cm the right eigenvector at , which is called the pagerank , has real nonnegative elements and gives the stationary probability to find a random surfer at site .the pagerank can be efficiently determined by the power iteration method which consists of repeatedly multiplying to an iteration vector which is initially chosen as a given random or uniform initial vector .developing the initial vector in a basis of eigenvectors of one finds that the other eigenvector coefficients decay as and only the pagerank component , with , survives in the limit .the finite gap between the largest eigenvalue and other eigenvalues ensures , after several tens of iterations , the fast exponential convergence of the method also called the `` pagerank algorithm '' .a multiplication of to a vector requires only multiplications due to the links and the additional contributions due to dangling nodes and damping factor can be efficiently performed with operations . sinceoften the average number of links per node is of the order of a few tens for www and many other networks one has effectively and of the same order of magnitude . at matrix coincides with the matrix and we will see below in sec .[ s8 ] that for this case the largest eigenvalue is usually highly degenerate due to many invariant subspaces which define many independent perron - frobenius operators with at least one eigenvalue for each of them .once the pagerank is found , e.g. at , all nodes can be sorted by decreasing probabilities .the node rank is then given by the index which reflects the relevance of the node .the top pagerank nodes , with largest probabilities , are located at small values of . it is known that on average the pagerank probability is proportional to the number of ingoing links , characterizing how popular or known a given node is . assuming that the pagerank probability decays algebraically as we obtain that the number of nodes with pagerank probability scales as with so that for being in a agreement with the numerical data for www and wikipedia network . more recent mathematical studies on the relation between pagerank probability decay and ingoing links are reported in . at the same timethe proportionality relation between pagerank probability and ingoing links assumes certain statistical properties of networks and works only on average .we note that there are examples of ulam networks generated by dynamical maps where such proportionality is not working ( see and sec .[ s6.5 ] ) .in addition to a given directed network with adjacency matrix it is useful to analyze an inverse network where links are inverted and whose adjacency matrix is the transpose of , i.e. . the matrices and the google matrix of the inverse network are then constructed in the same way from as described above and according to the relation ( [ eq3_1 ] ) using the same value of as for the matrix .the right eigenvector of at eigenvalue is called cheirank giving a complementary rank index of network nodes .the cheirank probability is proportional to the number of outgoing links highlighting node communicativity ( see e.g. ) . 
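A minimal power-iteration sketch for the PageRank follows, reusing the google_matrix helper from the previous snippet; CheiRank is obtained by the same routine applied to the Google matrix built from the transposed adjacency matrix, as described above. The 5-node adjacency matrix is an arbitrary toy example, not the network of the figures.

```python
import numpy as np

def pagerank(G, tol=1e-10, max_iter=200):
    """Power iteration for the right eigenvector of G at eigenvalue 1."""
    N = G.shape[0]
    p = np.ones(N) / N
    for _ in range(max_iter):
        p_new = G @ p
        p_new /= p_new.sum()          # guard against rounding drift
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p_new

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0]], dtype=float)      # toy example network
G, Gs = google_matrix(A, 0.85), google_matrix(A.T, 0.85)
P, Pstar = pagerank(G), pagerank(Gs)
K, Kstar = np.argsort(-P), np.argsort(-Pstar)     # PageRank and CheiRank orderings
```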
in analogy with the pagerankwe obtain that with for typical .the statistical properties of distribution of nodes on the pagerank - cheirank plane are described in for various directed networks .we will discuss them below . of network of fig .[ fig3_1](a ) with indexes used there , ( b ) adjacency matrix for the network with inverted links ; matrices ( c ) and ( d ) corresponding to the matrices , ; the google matrices ( e ) and ( f ) corresponding to matrices and for ( only 3 digits of matrix elements are shown ) .[ fig3_2],scaledwidth=48.0% ] for an illustration we consider an example of a simple network of five nodes shown in fig .[ fig3_1](a ) .the corresponding adjacency matrices , are shown in fig .[ fig3_2 ] for the indexes given in fig .[ fig3_1](a ) .the matrices of markov transitions , and google matrices are computed as described above and from eq .( [ eq3_1 ] ) .the distribution of nodes on plane is shown in fig .[ fig3_1](b ) .after permutations the matrix can be rewritten in the basis of pagerank index as it is done in fig .[ fig1_1 ] .matrices with real non - negative elements and column sums normalized to unity belong to the class of markov chains and perron - frobenius operators , which have been used in a mathematical analysis of dynamical systems and theory of matrices . a numerical analysis of finite size approximants of such operatorsis closely linked with the ulam method which naturally generates such matrices for dynamical maps .the ulam method generates ulam networks whose properties are discussed in sec.[s6 ] .matrices of this type have at least ( one ) unit eigenvalue since the vector is obviously a left eigenvector for this eigenvalue .furthermore one verifies easily that for any vector the inequality holds where the norm is the standard 1-norm .from this inequality one obtains immediately that all eigenvalues of lie in a circle of radius unity : . for the google matrix as given in ( [ eq3_1 ] ) one can furthermore show for that the unity eigenvalue is not degenerate and the other eigenvalues obey even .these and other mathematical results about properties of matrices of such type can be found at . it should be pointed out that due to the asymmetry of links on directed networks such matrices have in general a complex eigenvalue spectrum and sometimes they are not even diagonalizable , i.e. there may also be generalized eigenvectors associated to non - trivial jordan blocks .matrices of this type rarely appear in physical problems which are usually characterized by hermitian or unitary matrices with real eigenvalues or located on the unitary circle .the universal spectral properties of such hermitian or unitary matrices are well described by rmt .in contrast to this non - trivial complex spectra appear in physical systems only in problems of quantum chaotic scattering and systems with absorption . in such casesit may happen that the number of states , with finite values ( ) , can grow algebraically with increasing matrix size , with an exponent corresponding to a fractal weyl law proposed first in mathematics .therefore most of eigenvalues drop to with .we discuss this unusual property in sec.[s5 ] . 
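The Perron-Frobenius properties quoted above are easy to verify numerically on a small random directed network. The snippet below (again relying on the google_matrix helper sketched earlier; the network size, link density and damping factor are arbitrary illustrative values) checks the column-sum normalization, that the whole spectrum lies in the unit disk, and that the modulus of the second eigenvalue does not exceed the damping factor.

```python
import numpy as np

rng = np.random.default_rng(1)
A = (rng.random((50, 50)) < 0.05).astype(float)   # sparse random directed network
np.fill_diagonal(A, 0.0)
alpha = 0.85
G = google_matrix(A, alpha)                       # helper from the sketch above
assert np.allclose(G.sum(axis=0), 1.0)            # columns sum to one
lam = np.sort(np.abs(np.linalg.eigvals(G)))[::-1]
print(lam[0], lam[1])                             # ~1.0 and a value not exceeding alpha
assert lam[0] <= 1.0 + 1e-9 and lam[1] <= alpha + 1e-9
```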
for typical networksthe set of nodes can be decomposed in invariant _ subspace nodes _ and fully connected _core space nodes _ leading to a block structure of the matrix in ( [ eq3_1 ] ) which can be represented as : the core space block contains the links between core space nodes and the coupling block may contain links from certain core space nodes to certain invariant subspace nodes . by construction there are no links from nodes of invariant subspaces to the nodes of core space .thus the subspace - subspace block is actually composed of many diagonal blocks for many invariant subspaces whose number can generally be rather large .each of these blocks corresponds to a column sum normalized matrix with positive elements of the same type as and has therefore at least one unit eigenvalue .this leads to a high degeneracy of the eigenvalue of , for example as for the case of uk universities ( see sec .[ s8 ] ) . in order to obtain the invariant subspaces, we determine iteratively for each node the set of nodes that can be reached by a chain of non - zero matrix elements of . if this set contains all nodes ( or at least a macroscopic fraction ) of the network , the initial node belongs to the _ core space _ . otherwise , the limit set defines a subspace which is invariant with respect to applications of the matrix . at a second step all subspaces with common members are merged resulting in a sequence of disjoint subspaces of dimension and which are invariant by applications of .this scheme , which can be efficiently implemented in a computer program , provides a subdivision over core space nodes ( 70 - 80% of for uk university networks ) and subspace nodes belonging to at least one of the invariant subspaces .this procedure generates the block triangular structure ( [ eq3_2 ] ) .one may note that since a dangling node is connected by construction to all other nodes it belongs obviously to the core space as well as all nodes which are linked ( directly or indirectly ) to a dangling node . as a consequencethe invariant subspaces do not contain dangling nodes nor nodes linked to dangling nodes . the detailed algorithm for an efficient computation of the invariant subspacesis described in . as a result the total number of all subspace nodes , the number of independent subspaces , the maximal subspace dimension etc . can be determined .the statistical properties for the distribution of subspace dimensions are discussed in sec .[ s8 ] for uk universities and wikipedia networks .furthermore it is possible to determine numerically with a very low effort the eigenvalues of associated to each subspace by separate diagonalization of the corresponding diagonal blocks in the matrix . for this , either exact diagonalization or , in rare cases of quite large subspaces , the arnoldi method ( see the next subsection ) can be used .after the subspace eigenvalues are determined one can apply the arnoldi method to the projected core space matrix block to determine the leading core space eigenvalues . 
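The reachability construction of invariant subspaces and core space described above can be prototyped as follows. This is a plain O(N*E) sketch intended for small networks, not the efficient algorithm of the cited reference; the same adjacency convention as in the earlier snippets is assumed, dangling nodes are treated as linking to every node (as in S), and the "macroscopic fraction" threshold frac is an assumed parameter.

```python
import numpy as np
from collections import deque

def core_and_subspaces(A, frac=0.5):
    """Split nodes into core-space nodes and invariant-subspace nodes by following
    the chains of non-zero matrix elements reachable from each node."""
    A = np.asarray(A) != 0
    N = A.shape[0]
    out_lists = [np.nonzero(A[:, i])[0] for i in range(N)]   # successors of node i
    dangling = {i for i in range(N) if len(out_lists[i]) == 0}

    def reachable(start):
        seen, queue = {start}, deque([start])
        while queue:
            i = queue.popleft()
            succ = range(N) if i in dangling else out_lists[i]
            for j in succ:
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
                    if len(seen) > frac * N:     # reaches a macroscopic fraction: core node
                        return None
        return seen

    core, closures = set(), []
    for i in range(N):
        s = reachable(i)
        (core.add(i) if s is None else closures.append(s))
    # merge closures sharing nodes into disjoint invariant subspaces
    subspaces = []
    for s in closures:
        for t in [t for t in subspaces if t & s]:
            subspaces.remove(t)
            s |= t
        subspaces.append(s)
    return core, subspaces
```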
in this way one obtains accurate eigenvalues because the arnoldi method does not need to compute the numerically very problematic highly degenerate unit eigenvalues of since the latter are already obtained from the separate and cheap subspace diagonalization .actually the alternative and naive application of the arnoldi method on the full matrix , without computing the subspaces first , does not provide the correct number of degenerate unit eigenvalues and also the obtained clustered eigenvalues , close to unity , are not very accurate .similar problems hold for the full matrix ( with damping factor ) since here only the first eigenvector , the pagerank , can be determined accurately but there are still many degenerate ( or clustered ) eigenvalues at ( or close to ) .since the columns sums of are less than unity , due to non - zero matrix elements in the block , the leading core space eigenvalue of is also below unity even though in certain cases the gap to unity may be very small ( see sec .[ s8 ] ) .we consider concrete examples of such decompositions in sec .[ s8 ] and show in this review spectra with subspace and core space eigenvalues of matrices for several network examples .the mathematical results for properties of the matrix are discussed in .the most adapted numerical method to determine the largest eigenvalues of large sparse matrices is the arnoldi method . indeed, usually the matrix in eq .( [ eq3_1 ] ) is very sparse with only a few tens of links per node .thus , a multiplication of a vector by or is numerically cheap .the arnoldi method is similar in spirit to the lanzcos method , but is adapted to non - hermitian or non - symmetric matrices .its main idea is to determine recursively an orthonormal set of vectors , which define a _ krylov space _ , by orthogonalizing on the previous vectors by the gram - schmidt procedure to obtain and where is some normalized initial vector .the dimension of the krylov space ( in the following called the _ arnoldi - dimension _ ) should be `` modest '' but not too small . during the gram - schmidt procedure oneobtains furthermore the explicit expression : with matrix elements , of the arnoldi representation matrix of on the krylov space , given by the scalar products or inverse normalization constants calculated during the orthogonalization . in order to obtain a closed representation matrix one needs to replace the last coupling element which introduces a mathematical approximation .the eigenvalues of the matrix are called the _ ritz eigenvalues _ and represent often very accurate approximations of the exact eigenvalues of , at least for a considerable fraction of the ritz eigenvalues with largest modulus . 
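A bare-bones version of the Arnoldi iteration described above is sketched below: Gram-Schmidt construction of the orthonormal Krylov basis, closing of the representation matrix by dropping the last coupling element, and Ritz eigenvalues from the small upper-Hessenberg matrix. Production computations would use sparse matrix-vector products, reorthogonalization and much larger Arnoldi dimensions than this dense-storage sketch; the names S_cc and N_cc in the usage comment are hypothetical.

```python
import numpy as np

def arnoldi(matvec, v0, n_A):
    """Basic (non-restarted) Arnoldi iteration returning the Ritz eigenvalues of the
    n_A x n_A representation matrix; 'matvec' applies the matrix to a vector."""
    n = len(v0)
    Q = np.zeros((n, n_A + 1))
    H = np.zeros((n_A + 1, n_A))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(n_A):
        w = matvec(Q[:, k])
        for j in range(k + 1):                   # Gram-Schmidt against previous vectors
            H[j, k] = np.dot(Q[:, j], w)
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-14:                  # invariant subspace found: eigenvalues are exact
            return np.linalg.eigvals(H[:k + 1, :k + 1])
        Q[:, k + 1] = w / H[k + 1, k]
    # drop the last coupling element to close the representation matrix (the approximation step)
    return np.linalg.eigvals(H[:n_A, :n_A])

# usage sketch: leading core-space eigenvalues of a sparse block S_cc of size N_cc
# ritz = arnoldi(lambda v: S_cc @ v, np.ones(N_cc), 500)
```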
in certain particular cases ,when belongs to an invariant subspace of small dimension , the element vanishes automatically ( if and assuming that numerical rounding errors are not important ) and the arnoldi iteration stops at and provides exact eigenvalues of for the invariant subspace .one can mention that there are more sophisticated variants of the arnoldi method where one applies ( implicit ) modifications on the initial vector in order to force this vector to be in some small dimensional invariant subspace which results in such a vanishing coupling matrix element .these variants known as ( implicitly ) restarted arnoldi methods allow to concentrate on certain regions on the complex plane to determine a few but very accurate eigenvalues in these regions .however , for the cases of google matrices , where one is typically interested in the largest eigenvalues close to the unit circle , only the basic variant described above was used but choosing larger values of as would have been possible with the restarted variants .the initial vector was typically chosen to be random or as the vector with unit entries . concerning the numerical resourcesthe arnoldi method requires double precision registers to store the non - zero matrix elements of , registers to store the vectors and const. registers to store ( and various copies of ) .the computational time scales as for the computation of , with for the gram - schmidt orthogonalization procedure ( which is typically dominant ) and with const. for the diagonalization of .the details of the arnoldi method are described in refs . given above .this method has problems with degenerate or strongly clustered eigenvalues and therefore for typical examples of google matrices it is applied to the core space block where the effects of the invariant subspaces , being responsible for most of the degeneracies , are exactly taken out according to the discussion of the previous subsection . in typical examples it is possible to find about eigenvalues with largest for the entire twitter network with ( see sec . [ s10 ] ) and about eigenvalues for wikipedia networks with ( see sec . [ s9 ] ) .for the two university networks of cambridge and oxford 2006 with it is possible to compute eigenvalues ( see sec .[ s8 ] ) . for the case of the citation network of physical review( see sec .[ s12 ] ) with it is even possible and necessary to use high precision computations ( with up to 768 binary digits ) to determine accurately the arnoldi matrix with . according to the perron - frobenius theoremall eigenvalues of are distributed inside the unitary circle .it can be shown that at there is only one eigenvalue and all other having a simple dependence on : ( see e.g. ) .the right eigenvectors are defined by the equation only the pagerank vector is affected by while other eigenstates are independent of due to their orthogonality to the left unit eigenvector at .left eigenvectors are orthonormal to right eigenvectors .it is useful to characterize the eigenvectors by their inverse participation ratio ( ipr ) which gives an effective number of nodes populated by an eigenvector .this characteristics is broadly used for description of localized or delocalized eigenstates of electrons in a disordered potential with anderson transition ( see e.g. 
) .we discuss the specific properties of eigenvectors in next secs .( red / gray curve ) and cheirank ( blue / black curve ) vectors on the corresponding rank indexes and for networks of wikipedia aug 2009 ( top curves ) and university of cambridge ( bottom curves , moved down by a factor ) .the straight dashed lines show the power law fits for pagerank and cheirank with the slopes respectively , corresponding to for wikipedia ( see fig . [ fig2_1 ] ) , and for cambridge .after and .[ fig4_1],scaledwidth=48.0% ]it is established that ranking of network nodes based on pagerank order works reliably not only for www but also for other directed networks . as an example it is possible to quote the citation network of physical review , wikipedia network and even the network of world commercial trade . herewe describe the main properties of pagerank and cheirank probabilities using a few real networks .more detailed presentation for concrete networks follows in next secs .wikipedia is a useful example of a scale - free network .an article quotes other wikipedia articles that generates a network of directed links . for wikipedia of english articles dated by aug 2009 we have , ( ) .the dependencies of pagerank and cheirank probabilities on indexes and are shown in fig .[ fig4_1 ] . in a large rangethe decay can be satisfactory described by an algebraic law with an exponent .the obtained values are in a reasonable agreement with the expected relation with the exponents of distribution of links given above .however , the decay is algebraic only on a tail , showing certain nonlinear variations well visible for at large values of . similar data for network of university of cambridge ( 2006 ) with , are shown in the same fig .[ fig4_1 ] . here ,the exponents have different values with approximately the same statistical accuracy of .thus we come to the same conclusion as : the probability decay of pagerank and cheirank is only approximately algebraic , the relation between exponents and also works only approximately .each network node has both pagerank and cheirank indexes so that it is interesting to know what is a correlation between the corresponding vectors of pagerank and cheirank .it is convenient to characterized this by a correlator introduced in as a function of the number of nodes for different networks : wikipedia networks , phys rev network , 17 uk universities , 10 versions of kernel linux kernel pcn , escherichia coli and yeast transcription gene networks , brain model network , c.elegans neural network and business process management network .after with additional data from , , , .[ fig4_2],scaledwidth=48.0% ] shown on the plane of pagerank and cheirank indexes in logscale for all , density is computed over equidistant grid in plane with cells ; color shows average value of in each cell , the normalization condition is .density is shown by color with blue ( dark gray ) for minimum in ( a),(b ) and white ( a ) and yellow ( white ) ( b ) for maximum ( black for zero ) .panel ( a ) : data for wikipedia aug ( 2009 ) , , green / red ( light gray / dark gray ) points show top 100 persons from pagerank / cheirank , yellow ( white ) pluses show top 100 persons from ; after . 
panel ( b ) : density distribution for linux kernel v2.4 network with , after .[ fig4_3],scaledwidth=48.0% ] even if all the networks from fig .[ fig4_2 ] have similar algebraic decay of pagerank probability with and similar exponents we see that the correlations between pagerank and cheirank vectors are drastically different in these networks .thus the networks of uk universities and 9 different language editions of wikipedia have the correlator while all other networks have .this means that there are significant differences hidden in the network architecture which are no visible from pagerank analysis .we will discuss the possible origins of such a difference for the above networks in next secs. a more detailed characterization of correlations between pagerank and cheirank vectors can be obtained from a distribution of network nodes on the two - dimensional plane ( 2d ) of indexes .two examples for wikipedia and linux networks are shown in fig .[ fig4_3 ] .a qualitative difference between two networks is obvious .for wikipedia we have a maximum of density along the line that results from a strong correlation between pagerank and cheirank with .in contrast to that for the linux network v2.4 we have a homogeneous density distribution of nodes along lines corresponding to uncorrelated probabilities and and even slightly negative value of .we note that if for wikipedia we generate nodes with independent probabilities distributions and , obtained from this network at the corresponding value of , then we obtain a homogeneous node distribution in plane ( in plane it takes a triangular form , see fig.4 at ) . in fig .[ fig4_3](a ) we also show the distribution of top 100 persons from pagerank and cheirank compared with the top 100 persons from .there is a significant overlap between pagerank and hart ranking of persons while cheirank generates mainly another listing of people .we discuss the wikipedia ranking of historical figures in sec .[ s9 ] .pagerank and cheirank indexes order all network nodes according to a monotonous decrease of corresponding probabilities and . while top nodes are most popular or known in the network , top nodes are most communicative nodes with many outgoing links .it is useful to consider an additional ranking , called 2drank , which combines properties of both ranks and .the ranking list is constructed by increasing and increasing 2drank index by one if a new entry is present in the list of first entries of cheirank , then the one unit step is done in and is increased by one if the new entry is present in the list of first entries of cheirank .more formally , 2drank gives the ordering of the sequence of sites , that appear inside the squares ] , the density is averaged over all nodes inside each cell of the grid , the normalization condition is . 
color varies from black for zero to yellow / gray for maximum density value with a saturation value of so that the same color is fixed for to show in a better way low densities .the panels show networks of university of cambridge 2006 with ( a ) and ens paris 2011 for crawling level 7 with ( b ) .after .[ fig8_4],scaledwidth=48.0% ]the free online encyclopedia wikipedia is a huge repository of human knowledge .its size is growing permanently accumulating huge amount of information and becoming a modern version of _ library of babel _ , described by jorge luis borges .the hyperlink citations between wikipedia articles provides an important example of directed networks evolving in time for many different languages .in particular , the english edition of august 2009 has been studied in detail .the effects of time evolution and entanglement of cultures in multilingual wikipedia editions have been investigated in .the statistical distribution of links in wikipedia networks has been found to follow a power law with the exponents ( see e.g. ) .the probabilities of pagerank and cheirank are shown in fig .[ fig4_1 ] .they are satisfactory described by a power law decay with exponents . the density distribution of articles over pagerank - cheirank plane is shown in fig .[ fig4_3](a ) for english wikipedia aug 2009 .we stress that the density is very different from those generated by the product of independent probabilities of and given in fig .[ fig4_1 ] . in the latter casewe obtain a density homogeneous along lines being rather similar to the distribution for linux network also shown in fig .[ fig4_3 ] .this result is in good agreement with a fact that the correlator between pagerank and cheirank vectors is rather large for wikipedia while it is close to zero for linux network .the difference between pagerank and cheirank is clearly seen from the names of articles with highest ranks ( ranks of all articles are given in ) .at the top of pagerank we have 1 . _ united states _ , 2 ._ united kingdom _ , 3 ._ france _ while for cheirank we find 1 ._ portal : contents / outline of knowledge / geography and places _ , 2 ._ list of state leaders by year _ , 3 ._ portal : contents / index / geography and places_. clearly pagerank selects first articles on a broadly known subject with a large number of ingoing links while cheirank selects first highly communicative articles with many outgoing links .the 2drank combines these two characteristics of information flow on directed network . at the top of 2drank find 1 ._ india _ , 2 ._ singapore _ , 3 . _pakistan_. thus , these articles are most known / popular and most communicative at the same time .the top 100 articles in are determined for several categories including countries , universities , people , physicists .it is shown in that pagerank recovers about 80% of top 100 countries from sjr data base , about 75% of top 100 universities of shanghai university ranking , and , among physicists , about 50% of top 100 nobel winners in physics .this overlap is lower for 2drank and even lower for cheirank .however , as we will see below in more detail , 2drank and cheirank highlight other properties being complementary to pagerank . let us give an example of top three physicists among those of 754 registered in wikipedia in 2010 : 1 ._ aristotle _ , 2 . _ albert einstein _ , 3 ._ isaac newton _ from pagerank ; 1 ._ albert einstein _ , 2 ._ nikola tesla _ , 3 ._ benjamin franklin _ from 2drank ; 1 ._ hubert reeves _, 2 . 
_ shen kuo _ , 3 ._ stephen hawking _ from cheirank .it is clear that pagerank gives most known , 2drank gives most known and active in other areas , cheirank gives those who are known and contribute to popularization of science .indeed , e.g. _ hubert reeves _ and _ stephen hawking _ are very well known for their popularization of physics that increases their communicative power and place them at the top of cheirank . _shen kuo _ obtained recognized results in an enormous variety of fields of science that leads to the second top position in cheirank even if his activity was about thousand years ago . according to wikipedia ranking the top universities are 1 ._ harvard university _ , 2 ._ university of oxford _ , 3 ._ university of cambridge _ in pagerank ; 1 ._ columbia university _ , 2 ._ university of florida _ , 3 . _florida state university _ in 2drank and cheirank .cheirank and 2drank highlight connectivity degree of universities that leads to appearance of significant number of arts , religious and military specialized colleges ( 12% and 13% respectively for cheirank and 2drank ) while pagerank has only 1% of them .cheirank and 2drank introduce also a larger number of relatively small universities who are keeping links to their alumni in a significantly better way that gives an increase of their ranks .it is established that top pagerank universities from english wikipedia in years recover correspondingly from top 10 of .the time evolution of probability distributions of pagerank , cheirank and two - dimensional ranking is analyzed in showing that they become stabilized for the period 2007 - 2011 .on the basis of these results we can conclude that the above algorithms provide correct and important ranking of huge information and knowledge accumulated at wikipedia .it is interesting that even dow - jones companies are ranked via wikipedia networks in a good manner .we discuss ranking of top people of wikipedia a bit later . the complex spectrum of eigenvalues of for english wikipedia network of aug 2009 is shown in fig .[ fig9_1 ] . as for university networks ,the spectrum also has some invariant subspaces resulting in degeneracies of the leading eigenvalue of ( or ) .however , due to the stronger connectivity of the wikipedia network these subspaces are significantly smaller compared to university networks .for example of aug 2009 edition in fig .[ fig9_1 ] there are invariant subspaces ( of the matrix ) covering nodes with unit eigenvalues and eigenvalues on the complex unit circle with . for the matrix of wikipediathere are invariant subspaces with nodes , unit eigenvalues and 8968 eigenvalues on the unit circle .the complex spectra of all subspace eigenvalues and the first core space eigenvalues of and are shown in fig .[ fig9_1 ] . as in the university cases ,in the spectrum we can identify cross and triple - star structures similar to those of orthostochastic matrices shown in fig .[ fig8bis ]. however , for wikipedia ( especially for ) the largest complex eigenvalues outside the real axis are more far away from the unit circle . for of wikipediathe two largest core space eigenvalues are and indicating that the core space gap is much smaller than the secondary gap . as a consequence the pagerank of wikipedia ( at )is strongly influenced by the leading core space eigenvector and actually both vectors select the same 5 top nodes . the time evolution of spectra of and for english wikipedia is studied in . 
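For networks of moderate size the core-space eigenvalues discussed here can be approximated with standard sparse tools. The sketch below uses scipy's ARPACK interface as a stand-in for the projected Arnoldi method of the works cited above, applied to a random sparse matrix that merely plays the role of a real crawl; the dangling columns are completed to 1/N through a rank-one term inside the matrix-vector product, so the completed matrix is never stored.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 2000                                           # placeholder size
rng = np.random.default_rng(0)
A = sp.random(N, N, density=5.0 / N, random_state=rng, format='csc')
A.data[:] = 1.0                                    # unweighted directed links j -> i

out_deg = np.asarray(A.sum(axis=0)).ravel()
S = A @ sp.diags(np.where(out_deg > 0, 1.0 / np.maximum(out_deg, 1), 0.0))
dangling = (out_deg == 0)

def s_matvec(v):                                   # action of S with dangling columns set to 1/N
    return S @ v + v[dangling].sum() / N

Sop  = spla.LinearOperator((N, N), matvec=s_matvec, dtype=float)
vals = spla.eigs(Sop, k=20, which='LM', return_eigenvectors=False)
print(np.sort(np.abs(vals))[::-1])                 # leading |lambda|, with lambda_1 = 1 up to numerical accuracy

The degenerate eigenvalues at |lambda| = 1 coming from invariant subspaces are not separated out in this sketch; that step requires the subspace decomposition of sec. [s3.3].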
it is shown that the spectral structure remains stable for years 2007 - 2011 .of ( a ) and ( b ) for english wikipedia of aug 2009 with articles and links .red / gray dots are core space eigenvalues , blue / black dots are subspace eigenvalues and the full green / gray curve shows the unit circle .the core space eigenvalues are computed by the projected arnoldi method with arnoldi dimension .after .[ fig9_1],scaledwidth=48.0% ] for english wikipedia aug 2009 .highlighted eigenvalues represent different communities of wikipedia and are labeled by the most repeated and important words following word counting of first 1000 nodes .panel ( a ) shows complex plane for positive imaginary part of eigenvalues , while panels ( b ) and ( c ) zoom in the negative and positive real parts . after .[ fig9_2],scaledwidth=48.0% ] the properties of eigenstates of gogle matrix of wikipedia aug 2009 are analyzed in .the global idea is that the eigenstates with large values of select certain specific communities .if is close to unity then a relaxation of probability from such nodes is rather slow and we can expect that such eigenstates highlight some new interesting information even if these nodes are located on a tail of pagerank .the important advantage of the wikipedia network is that its nodes are wikipedia articles with a relatively clear meaning allowing to understand the origins of appearance of certain nodes in one community .the localization properties of eigenvectors of the google matrix can be analyzed with the help of ipr ( see sec .[ s3.5 ] ) .another possibility is to fit a decay of an eigenstate amplitude by a power law where is the index ordering by monotonically decreasing amplitude ( similar to for pagerank ) .the exponents on the tails of are found to be typically in the range . at the same timethe eigenvectors with large complex eigenvalues or real eigenvalues close to are quite well localized on nodes that is much smaller than the whole network size . to understand the meaning of other eigenstates in the core space we order selected eigenstates by their decreasing value and apply word frequency analysis for the first articles with .the mostly frequent word of a given eigenvector is used to label the eigenvector name .these labels with corresponding eigenvalues are shown in fig .[ fig9_2 ] .there are four main categories for the selected eigenvectors belonging to countries ( red / gray ) , biology and medicine ( orange / very light gray ) , mathematics ( blue / black ) and others ( green / light gray ) . the category of others contains rather diverse articles about poetry , bible , football , music , american tv series ( e.g. quantum leap ) , small geographical places ( e.g. gaafru alif atoll ) .clearly these eigenstates select certain specific communities which are relatively weakly coupled with the main bulk part of wikipedia that generates relatively large modulus of .for example , for the article _ gaafu alif atoll _ the eigenvector is mainly localized on names of small atolls forming _ gaafu alif atoll_. clearly this case represents well localized community of articles mainly linked between themselves that gives slow relaxation rate of this eigenmode with being rather close to unity .another eigenvector has a complex eigenvalue with and the top article _ portal : bible_. another two articles are _ portal : bible / featured chapter / archives _ , _ portal : bible / featured article_. 
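The localization measures used above reduce to a few lines once an eigenvector psi is available, for instance from the ARPACK sketch given earlier; the fit window for the tail exponent is an illustrative choice, not the one used in the cited analysis.

import numpy as np

def ipr(psi):
    """Inverse participation ratio: roughly the number of nodes on which psi is concentrated."""
    w = np.abs(psi) ** 2
    return w.sum() ** 2 / np.sum(w ** 2)

def tail_exponent(psi, i_min=10, i_max=1000):
    """Exponent b of |psi|(i) ~ 1/i^b for amplitudes ordered by decreasing modulus."""
    amp = np.sort(np.abs(psi))[::-1]
    idx = np.arange(i_min, min(i_max, amp.size))
    idx = idx[amp[idx] > 0]
    b, _ = np.polyfit(np.log(idx), np.log(amp[idx]), 1)
    return -b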
these top articles have very close values of that seems to be the reason why we have being very close to .examples of other eigenvectors are discussed in in detail .the analysis performed in for wikipedia aug 2009 shows that the eigenvectors of the google matrix of wikipedia clearly identify certain communities which are relatively weakly connected with the wikipedia core when the modulus of corresponding eigenvalue is close to unity . for moderate values of still have well defined communities which are however have stronger links with some popular articles ( e.g. countries ) that leads to a more rapid decay of such eigenmodes .thus the eigenvectors highlight interesting features of communities and network structure .however , a priori , it is not evident what is a correspondence between the numerically obtained eigenvectors and the specific community features in which someone has a specific interest . in fact , practically each eigenvector with a moderate value selects a certain community and there are many of them .so it remains difficult to target and select from eigenvalues a specific community one is interested .the spectra and eigenstates of other networks like www of cambridge 2011 , le monde , bbc and pcn of python are discussed in .it is found that ipr values of eigenstates with large are well localized with .the spectra of each network have significant differences from one another .there is always a significant public interest to know who are most significant historical figures , or persons , of humanity .the hart list of the top 100 people who , according to him , most influenced human history , is available at .hart `` ranked these 100 persons in order of importance : that is , according to the total amount of influence that each of them had on human history and on the everyday lives of other human beings '' .of course , a human ranking can be always objected arguing that an investigator has its own preferences .also investigators from different cultures can have different view points on a same historical figure .thus it is important to perform ranking of historical figures on purely mathematical and statistical grounds which exclude any cultural and personal preferences of investigators .a detailed two - dimensional ranking of persons of english wikipedia aug 2009 has been done in .earlier studies had been done in a non - systematic way without any comparison with established top lists ( see these refs . in ) . also at those times wikipedia did not yet entered in its stabilized phase of development .the top people of wikipedia aug 2009 are found to be 1 ._ napoleon i of france _ , 2 ._ george w. bush _ , 3 ._ elizabeth ii of the united kingdom _ for pagerank ; 1._michael jackson _ , 2 . _ frank lloyd wright _ , 3 . _david bowie _ for 2drank ; 1 . _kasey s. pipes _ , 2 ._ roger calmel _ , 3 ._ yury g. chernavsky _ for cheirank . for the pagerank list of overlap with the hart list is at 35% ( pagerank ) , 10% ( 2drank ) and almost zero for cheirank .this is attributed to a very broad distribution of historical figures on 2d plane , as shown in fig .[ fig4_3 ] , and a large variety of human activities .these activities are classified by main categories : politics , religion , arts , science , sport . 
for the top pagerank persons we have the following distribution over these categories : , , , , respectively .clearly pagerank overestimates the significance of politicians which list is dominated by usa presidents not always much known to a broad public .for 2drank we find respectively , , , , .thus this rank highlights artistic sides of human activity .for cheirank we have , , , , so that the dominant contribution comes from arts , science and sport .the interesting property of this rank is that it selects many composers , singers , writers , actors . as an interesting feature of cheirankwe note that among scientists it selects those who are not so much known to a broad public but who discovered new objects , e.g. george lyell who discovered many australian butterflies or nikolai chernykh who discovered many asteroids .cheirank also selects persons active in several categories of human activity . for english wikipedia aug 2009 the distribution of top 100 pagerank , cheirank and hart s persons on pagerank - cheirank planeis shown in fig .[ fig4_3 ] ( a ) .the distribution of hart s top persons on plane for english wikipedia in years 2003 , 2005 , 2007 , aug 2009 , dec 2009 , 2011 is found to be stable for the period 2007 - 2011 even if certain persons change their ranks .the distribution of top persons of wikipedia aug 2009 remains stable and compact for pagerank and 2drank for the period 2007 - 2011 while for cheirank the fluctuations of positions are large .this is due to the fact that outgoing links are easily modified and fluctuating .the time evolution of distribution of top persons over fields of human activity is established in .pagerank persons are dominated by politicians whose percentage increases with time , while the percent of arts decreases . for 2drank the arts are dominant but their percentage decreases with time .we also see the appearance of sport which is absent in pagerank .the mechanism of the qualitative ranking differences between two ranks is related to the fact that 2drank takes into account via cheirank a contribution of outgoing links . due to that singers , actors ,sportsmen improve their cheirank and 2drrank positions since articles about them contain various music albums , movies and sport competitions with many outgoing links . due to that the component of arts gets higher positions in 2drank in contrast to dominance of politics in pagerank .the interest to ranking of people via wikipedia network is growing , as shows the recent study of english edition .the english edition allows to obtain ranking of historical people but as we saw the pagerank list is dominated by usa presidents that probably does not correspond to the global world view point .hence , it is important to study multilingual wikipedia editions which have now languages and represent broader cultural views of the world .one of the first cross - cultural study was done for largest language editions constructing a network of links between set of articles of people biographies for each edition . 
however , the number of nodes and links in such a biographical network is significantly smaller compared to the whole network of wikipedia articles and thus the fluctuations become rather large .for example , from the biographical network of the russian edition one finds as the top person _napoleon iii _( and even not _napoleon i _ ) , who has a rather low importance for russia .another approach was used in ranking top 30 persons by pagerank , 2drank and cheirank algorithms for all articles of each of 9 editions and attributing each person to her / his native language .the selected editions are english ( en ) , french ( fr ) , german ( de ) , italian ( it ) , spanish ( es ) , dutch ( nl ) , russian ( ru ) , hungarian ( hu ) and korean ( ko ) .the aim here is to understand how different cultures evaluate a person ? is an important person in one culture is also important in the other culture ?it is found that local heroes are dominant but also global heroes exist and create an effective network representing entanglement of cultures .the top article of pagerank is usually _ usa _ or the name of country of a given language ( fr , ru , ko ) . for nl we have at the top _ beetle , species , france_. the top articles of cheirank are various listings .the distributions of articles density and top 30 persons for each rank algorithm are shown in fig .[ fig9_3 ] for four editions en , fr , de , ru .we see that in global the distributions have a similar shape that can be attributed to a fact that all editions describe the same world . however , local features of distributions are different corresponding to different cultural views on the same world ( other 5 editions are shown in fig.2 in ) .the top 30 persons for each edition are selected manually that represents a weak point of this study . from the lists of top persons , the `` fields '' of activityare identified for each top 30 rank persons in which he / she is active on .the six activity fields are : politics , art , science , religion , sport and etc ( here `` etc '' includes all other activities ) . as shown in fig .[ fig9_4 ] , for pagerank , politics is dominant and science is secondarily dominant .the only exception is dutch where science is the almost dominant activity field ( politics has the same number of points ) . in case of 2drank in fig .[ fig9_4 ] , art becomes dominant and politics is secondarily dominant . in case of cheirank ,art and sport are dominant fields ( see fig.3 in ) .thus for example , in cheirank top 30 list we find astronomers who discovered a lot of asteroids , e.g. karl wilhelm reinmuth ( 4th position in ru and 7th in de ) , who was a prolific discoverer of about 400 of them . as a result , his article contains a long listing of asteroids discovered by him and giving him a high cheirank .the distributions of persons over activity fields are shown in fig .[ fig9_4 ] for 9 languages editions ( marked by standard two letters used by wikipedia ) . 
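The averaging over editions used in this kind of study can be illustrated with a toy aggregation. The weight (top_n + 1 - rank), summed over the editions in which a person appears, is an assumed and merely plausible scoring; it is not claimed to be the exact formula of the cited work, and the three edition lists are invented.

from collections import defaultdict

def average_rank_score(top_lists, top_n=100):
    """Aggregate per-edition top lists of persons into one global ordering.
    top_lists: {edition: [person at rank 1, person at rank 2, ...]}.
    The weight (top_n + 1 - rank) summed over editions is an assumed choice,
    used here only to illustrate the averaging step."""
    score = defaultdict(float)
    for edition, ranking in top_lists.items():
        for rank, person in enumerate(ranking[:top_n], start=1):
            score[person] += top_n + 1 - rank
    return sorted(score, key=score.get, reverse=True)

top_lists = {                      # hypothetical three-edition example
    'en': ['Napoleon', 'Barack Obama', 'Carl Linnaeus'],
    'fr': ['Napoleon', 'Carl Linnaeus', 'Louis XIV'],
    'de': ['Carl Linnaeus', 'Adolf Hitler', 'Napoleon'],
}
print(average_rank_score(top_lists, top_n=3))   # -> ['Napoleon', 'Carl Linnaeus', ...]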
for four different language wikipedia editions .the red ( gray ) points are top pagerank articles of persons , the green ( light gray ) squares are top 2drank articles of persons and the cyan ( dark gray ) triangles are top cheirank articles of persons .wikipedia language editions are english en ( a ) , french fr ( b ) , german de ( c ) , and russian ru ( d ) .color bars show natural logarithm of density , changing from minimal nonzero density ( dark ) to maximal one ( white ) , zero density is shown by black .after .[ fig9_3],scaledwidth=48.0% ] the change of activity priority for different ranks is due to the different balance between incoming and outgoing links there .usually the politicians are well known for a broad public , hence , the articles about politicians are pointed by many articles .however , the articles about politicians are not very communicative since they rarely point to other articles .in contrast , articles about persons in other fields like science , art and sport are more communicative because of listings of insects , planets , asteroids they discovered , or listings of song albums or sport competitions they gain . on the basis of this approach oneobtains local ranks of each of 30 persons for each edition and algorithm .then an average ranking score of a person is determined as for each algorithm .this method determines the global historical figures .the top global persons are 1._napoleon _ , 2._jesus _ , 3._carl linnaeus _ for pagerank ; 1._micheal jackson _ , 2._adolf hitler _ , 3._julius caesar _ for 2drank . for cheirank the lists of different editions have rather low overlap andsuch an averaging is not efficient .the first positions reproduce top persons from english edition discussed in sec .[ s9.4 ] , however , the next ones are different . and 2drank for each of 9 wikipedia editions .the color bar shows the values in percent .after .[ fig9_4],scaledwidth=48.0% ] , scaledwidth=48.0% ] since each person is attributed to her / his native language it is also possible for each edition to obtain top local heroes who have native language of the edition .for example , we find for pagerank for en _george w. bush _, _ barack obama _ ,_ elizabeth ii _; for fr _ napoleon _ , _ louis xiv of france _ , _ charles de gaulle _ ; for de _ adolf hitler _ , _ martin luther _ , _ immanuel kant _ ; for ru _ peter the great _ , _ joseph stalin _ , _ alexander pushkin_. for 2drank we have for en _ frank sinatra _ , _ paul mccartney _, _ michael jackson _ ; for fr _ francois mitterrand _ , _ jacques chirac _ , _ honore de balzac _ ; for de _ adolf hitler _ , _ otto von bismarck _ , _ ludwig van beethoven _ ; for ru _ dmitri mendeleev _ , _ peter the great _ , _ yaroslav the wise_. these ranking results are rather reasonable for each language .results for other editions and cheirank are given in .a weak point of above study is a manual selection of persons and a not very large number of editions .a significant improvement has been reached in a recent study where 24 editions have been analyzed .these 24 languages cover 59 percent of world population , and these 24 editions covers 68 percent of the total number of wikipedia articles in all 287 available languages .also the selection of people from the rank list of each edition is now done in an automatic computerized way .for that a list of about 1.1 million biographical articles about people with their english names is generated . 
from this list of persons , with their biographical article title in the english wikipedia , the corresponding titles in other language editions are determined using the inter - language links provided by wikipedia . using the corresponding articles , identified by the inter - languages links in different language editions ,the top 100 persons are obtained from the rankings of all wikipedia articles of each edition . a birth place , birth date , and gender of each top 100 ranked personare identified , based on dbpedia or a manual inspection of the corresponding wikipedia biographical article , when for the considered person no dbpedia data were available . in this way24 lists of top 100 persons for each edition are obtained in pagerank with 1045 unique names and in 2drank with 1616 unique names .each of the 100 historical figures is attributed to a birth place at the country level , to a birth date in year , to a gender , and to a cultural language group .the birth place is assigned according to the current country borders .the cultural group of historical figures is assigned by the most spoken language of their birth place at the current country level .the considered editions are : english en , dutch nl , german de , french fr , spanish , es , italian it , potuguese pt , greek , el , danish da , swedish sv , polish pl , hungarian hu , russian ru , hebrew he , turkish tr , arabic ar , persian fa , hindi hi , malaysian ms , thai th , vietnamese vi , chinese zh , korean ko , japanese ja ( dated by february 2013 ) . the size of network changes from maximal value for en to minimal one for th .all persons are ranked by their average rank score with similar to the study of 9 editions described above . for pagerankthe top global historical figures are _ carl linnaeus _ , _jesus _ , _ aristotle _ and for 2drank we obtain _ adolf hitler _ , _ michael jackson _ , _ madonna ( entertainer)_. thus the averaging over 24 editions modifies the top ranking .the list of top 100 pagerank global persons has overlap of 43 persons with the hart list .thus the averaging over 24 editions gives a significant improvement compared to 35 persons overlap for the case of english edition only .for comparison we note that the top 100 list of historical figures has been also determined recently by having overlap of 42 persons with the hart list .this pantheon mit list is established on the basis of number of editions and number of clicks on an article of a given person without using rank algorithms discussed here .the overlap between top 100 pagerank list and top 100 pantheon list is 44 percent .more data are available in .the fact that _ carl linnaeus _ is the top historical figure of wikipedia pagerank list came out as a surprise for media and broad public ( see ) .this ranking is due to the fact that _ carl linnaeus _ created a classification of world species including , animals , insects , herbs , trees etc .thus all articles of these species point to the article _ carl linnaeus _ in various languages . as a result _ carl linnaeus_ appears on almost top positions in all 24 languages .hence , even if a politician , like _ barak obama _ , takes the second position in his country language en ( _ napoleon _ is at the first position in en ) he is usually placed at low ranking in other language editions . 
as a result _carl linnaeus _ takes the first global pagerank position .the number of appearances of historical persons in 24 lists of top 100 for each edition can be distributed over present world countries according to the birth place of each person .this geographical distribution is shown in fig .[ fig9_5 ] for pagerank and 2drank . in pagerankthe top countries are _ de , usa , it _ and in 2drank _ us , de , uk_. the appearance of many uk and us singers improves the positions of english speaking countries in 2drank .centuries of top historical figures from each wikipedia edition marked by two letters standard notation of wikipedia .panels : ( a ) column normalized birth date distributions of pagerank historical figures ; ( b ) same as ( a ) for 2drank historical figures .after .[ fig9_6],scaledwidth=48.0% ] the distributions of the top pagerank and 2drank historical figures over 24 wikipedia editions for each century are shown in fig .[ fig9_6 ] .each person is attributed to a century according to the birth date covering the range of centuries from bc 15th to ad 20th centuries . for each centurythe number of persons for each century is normalized to unity to see more clearly relative contribution of each language for each century .the greek edition has more historical figures in bc 5th century because of greek philosophers .also most of western - southern european language editions , including english , dutch , german , french , spanish , italian , portuguese , and greek , have more top historical figures because they have augustine the hippo and justinian i in common .the persian ( fa ) and the arabic ( ar ) wikipedia have more historical figures comparing to other language editions ( in particular european language editions ) from the 6th to the 12th century that is due to islamic leaders and scholars .the data of fig .[ fig9_6 ] clearly show well pronounced patterns , corresponding to strong interactions between cultures : from bc 5th century to ad 15th century for ja , ko , zh , vi ; from ad 6th century to ad 12th century for fa , ar ; and a common birth pattern in en , el , pt , it , es , de , nl ( western european languages ) from bc 5th century to ad 6th century .a detailed analysis shows that even in bc 20th century each edition has a significant fraction of persons of its own language so that even with on going globalization there is a significant dominance of local historical figures for certain cultures .more data on the above points and gender distributions are available in .we now know how a person of a given language is ranked by editions of other languages .therefore , if a top person from a language edition appears in another edition , we can consider this as a cultural influence from culture to .this generates entanglement in a network of cultures .here we associate a language edition with its corresponding culture considering that a language is a first element of culture , even if a culture is not reduced only to a language . in person is attributed to a given language , or culture , according to her / his native language fixed via corresponding wikipedia article . in attribution to a culture is done via a birth place of a person , each language is considered as a proxy for a cultural group and a person is assigned to one of these cultural groups based on the most spoken language of her / his birth place at the country level . 
if a person does not belong to any of studied editions then he / she is attributed to an additional cultural group world wr .after such an attributions of all persons the two networks of cultures are constructed based on the top pagerank historical figures and top 2drank historical figures respectively . each culture ( i.e. language )is represented as a node of the network , and the weight of a directed link from culture to culture is given by the number of historical figures belonging to culture ( e.g. french ) appearing in the list of top 100 historical figures for a given culture ( e.g. english ) .for example , according to , there are 5 french historical figures among the top 100 pagerank historical figures of the english wikipedia , so we can assign weight 5 to the link from english to french . thus , fig . [ fig9_7](a ) and fig .[ fig9_7](b ) represent the constructed networks of cultures defined by appearances of the top pagerank historical figures and top 2drank historical figures , respectively .in total we have two networks with 25 nodes which include our 24 editions and an additional node wr for all other world cultures .persons of a given culture are not taken into account in the rank list of language edition of this culture . then following the standard rules ( [ eq3_1 ] ) the google matrix of network of culturesis constructed by normalization of sum of all elements in each column to unity . the matrix , written in the pagerank indexes shown in fig .[ fig9_8 ] for persons from pagerank and 2drank lists .the matrix is constructed in the same way as for the network with inverted directions of links . ,scaledwidth=48.0% ] ( a ) and ( b ) respectively .the matrix elements are shown by color with damping factor .after .[ fig9_8],scaledwidth=48.0% ] from the obtained matrix and we determine pagerank and cheirank vectors and then the pagerank - cheirank plane , shown in fig . [ fig9_9 ] , for networks of cultures from fig . [ fig9_7 ] . here indicates the ranking of a given culture ordered by how many of its own top historical figures appear in other wikipedia editions , and indicates the ranking of a given culture according to how many of the top historical figures in the considered culture are from other cultures .it is important to note that for 24 editions the world node wr appears on positions or , for panels in fig .[ fig9_9 ] , signifying that the 24 editions capture the main part of historical figures born in these cultures .we note that for 9 editions in the node wr was at the top position for pagerank so that a significant fraction of historical figures was attributed to other cultures .and obtained from the network of cultures based on ( a ) top 100 pagerank historical figures , ( b ) top 100 2drank historical figures .after .[ fig9_9],scaledwidth=48.0% ] from the data of fig .[ fig9_9 ] we obtain at the top positions of cultures en , de , it showing that other cultures strongly point to them .however , we can argue that for cultures it is also important to have strong communicative property and hence it is important to have 2drank of cultures at top positions . 
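The reduced Google matrix of cultures described above is small enough to be built explicitly. In the sketch below the 3 x 3 weight matrix is purely illustrative (the real case has 25 nodes with weights counted from the top-100 lists); columns are normalized to unity following the standard rule and PageRank and CheiRank of cultures follow by power iteration.

import numpy as np

def google_from_weights(W, alpha=0.85):
    """Google matrix from weighted links, W[i, j] = weight of the link j -> i;
    empty columns are replaced by 1/N as in the standard construction."""
    N = W.shape[0]
    S = W.astype(float).copy()
    for j in range(N):
        s = S[:, j].sum()
        S[:, j] = S[:, j] / s if s > 0 else 1.0 / N
    return alpha * S + (1.0 - alpha) / N

def stationary(G, n_iter=1000):
    p = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(n_iter):
        p = G @ p
        p /= p.sum()
    return p

cultures = ['en', 'fr', 'de']                 # illustrative subset of editions
W = np.array([[ 0, 12, 10],                   # W[i, j]: number of culture-i figures
              [ 5,  0,  4],                   # appearing in the top list of culture j
              [ 3,  6,  0]])
P     = stationary(google_from_weights(W))    # PageRank of cultures
Pstar = stationary(google_from_weights(W.T))  # CheiRank of cultures
print(dict(zip(cultures, np.round(P, 3))), dict(zip(cultures, np.round(Pstar, 3))))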
on the top 2drank position we have greek ,turkish and arabic ( for pagerank persons ) in fig .[ fig9_9](a ) and french , russian and arabic ( for 2drank persons ) in fig .[ fig9_9](b ) .this demonstrates the important historical influence of these cultures both via importance ( incoming links ) and communicative ( outgoing links ) properties present in a balanced manner .thus the described research across wikipedia language editions suggests a rigorous mathematical way , based on markov chains and google matrix , for recognition of important historical figures and analysis of interactions of cultures at different historical periods and in different world regions .such an approach recovers 43 percent of persons from the well established hart historical study , that demonstrates the reliability of this method .we think that a further extension of this approach to a larger number of wikipedia editions will provide a more detailed and balanced analysis of interactions of world cultures .social networks like facebook , livejournal , twitter , vkontakte start to play a more and more important role in modern society .the twitter network is a directed one and here we consider its spectral properties following mainly the analysis reported in .twitter is a rapidly growing online directed social network . for july 2009a data set of this entire network is available with nodes and links ( for data sets see refs . in ) . for this casethe spectrum and eigenstate properties of the corresponding google matrix have been analyzed in detail using the arnoldi method and standard pagerank and cheirank computations .for the twitter network the average number of links per node and the general inter - connectivity between top pagerank nodes are considerably larger than for other networks such as wikipedia ( sec .[ s9 ] ) or uk universities ( sec .[ s8 ] ) as can be seen in figs .[ fig10_1 ] and [ fig10_2 ] .are shown in the basis of pagerank index of matrix . here , ( and ) axis show ( and ) with the range .panel ( b ) shows the density of nodes of twitter on pagerank - cheirank plane , averaged over logarithmically equidistant grids for with the normalization condition .the -axis corresponds to and the -axis to . in both panelscolor varies from blue / black at minimal value to red / gray at maximal value ; here .after .[ fig10_1],scaledwidth=48.0% ] the decay of pagerank probability can be approximately described by an algebraic decay with the exponent while for cheirank we have a larger value that is opposite to the usual situation .the image of top matrix elements of with is shown in fig . [ fig10_1 ] .the density distribution of nodes on plane is also shown there .it is somewhat similar to those of wikipedia case in fig .[ fig9_3 ] , may be with a larger density concentration along the line . however , the most striking feature of matrix elements is a very strong inteconnectivity between top pagerank nodes .thus for twitter the top elements fill about 70% of the matrix and about 20% for size . for wikipediathe filling factor is smaller by a factor .in particular the number of links between top pagerank nodes behaves for as while for wikipedia .the exponent for , being close to 2 for twitter , indicates that for the top pagerank nodes the google matrix is macroscopically filled with a fraction of non - vanishing matrix elements ( see also figs .[ fig10_1 ] and [ fig10_2 ] ) and the very well connected top pagerank nodes can be considered as the twitter elite . 
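The filling of the top PageRank block quoted above can be measured directly once the adjacency matrix and the PageRank ordering are known. The sketch below counts the links N_l(K) falling inside the top-K block and fits the growth exponent; A and K_order are assumed to come from a computation like the earlier sketches, and the choice of K values is only illustrative.

import numpy as np

def links_among_top(A, K_order, K_values):
    """Number of links of the 0/1 adjacency matrix A (A[i, j] = 1 for j -> i)
    connecting pairs of nodes that both belong to the top-K PageRank list."""
    counts = []
    for K in K_values:
        top = np.asarray(K_order[:K])
        counts.append(int(A[np.ix_(top, top)].sum()))
    return np.array(counts)

def growth_exponent(K_values, counts):
    """Slope mu of N_l(K) ~ K^mu from a log-log fit; mu close to 2 signals a
    macroscopically filled block of 'elite' nodes, mu close to 1 a sparse one."""
    mask = counts > 0
    mu, _ = np.polyfit(np.log(np.asarray(K_values)[mask]), np.log(counts[mask]), 1)
    return mu

# usage, for an adjacency matrix A and PageRank ordering K_order of a real network:
# counts = links_among_top(A, K_order, K_values); mu = growth_exponent(K_values, counts)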
for wikipediathe interconnectivity among top pagerank nodes has an exponent being somewhat reduced but still stronger as compared to certain university networks where typical exponents are close to unity ( for the range ) .the strong interconnectivity of twitter is also visible in its global logarithmic density distribution of nodes in the pagerank - cheirank plane ( fig .[ fig10_1 ] ( b ) ) which shows a maximal density along a certain ridge along a line . with a significant large number of nodes at small values . of nonzero elements of the adjacency matrix among top pagerank nodes on the pagerank index for twitter ( blue / black curve ) and wikipedia ( red / gray curve ) networks ,data are shown in linear scale .( b ) linear density of the same matrix elements shown for the whole range of in log - log scale for twitter ( blue curve ) , wikipedia ( red curve ) , oxford university 2006 ( magenta curve ) and cambridge university 2006 ( green curve ) ( curves from top to bottom at ) . after .[ fig10_2],scaledwidth=48.0% ] the decay exponent of the pagerank is for twitter ( for ) , which indicates a precursor of a delocalization transition as compared to wikipedia ( ) or www ( ) , caused by the strong interconnectivity .the twitter network is also characterized by a large value of pagerank - cheirank correlator that is by a factor larger than this value for wikipedia and university networks .such a larger value of results from certain individual large values .it is argued that this is related to a very strong inter - connectivity between top k pagerank users of the twitter network .( a ) and ( c ) , and ( b ) and ( d ) .panels ( a ) and ( b ) show subspace eigenvalues ( blue / black dots ) and core space eigenvalues ( red / gray dots ) in -plane ( green / gray curve shows unit circle ) ; there are 17504 ( 66316 ) invariant subspaces , with maximal dimension 44 ( 2959 ) and the sum of all subspace dimensions is ( 180414 ) .the core space eigenvalues are obtained from the arnoldi method applied to the core space subblock of with arnoldi dimension .panels ( c ) and ( d ) show the fraction of eigenvalues with for the core space eigenvalues ( red / gray bottom curve ) and all eigenvalues ( blue / black top curve ) from raw data ( ( a ) and ( b ) respectively ) .the number of eigenvalues with is 34135 ( 129185 ) of which 17505 ( 66357 ) are at ; this number is ( slightly ) larger than the number of invariant subspaces which have each at least one unit eigenvalue .note that in panels ( c ) and ( d ) the number of eigenvalues with is artificially reduced to 200 in order to have a better scale on the vertical axis .the correct numbers of those eigenvalues correspond to ( c ) and ( d ) which are strongly outside the vertical panel scale .after .[ fig10_3],scaledwidth=48.0% ] the spectra of matrices and are obtained with the help of the arnoldi method for a relatively modest arnoldi dimension due to a very large matrix size .the largest modulus eigenvalues are shown in fig .[ fig10_3 ] .the invariant subspaces ( see sec .[ s3.3 ] ) for the twitter network cover about ( ) nodes for ( ) leading to ( ) eigenvalues with or even ( ) eigenvalues with .however , for twitter the fraction of subspace nodes is smaller than the fraction for the university networks of cambridge or oxford ( with ) since the size of the whole twitter network is significantly larger .the complex spectra of and also show the cross and triple - star structures , as in the cases of cambridge and oxford 2006 ( see fig .[ fig8_1 ] ) , even 
though for the twitter network they are significantly less pronounced . from a physical viewpointone can conjecture that the pagerank probabilities are described by a steady - state quantum gibbs distribution over certain quantum levels with energies by the identification with .in some sense this conjecture assumes that the operator matrix can be represented as a sum of two operators and where describes a hermitian system while represents a non - hermitian operator which creates a system thermalization at a certain effective temperature with the quantum gibbs distribution over energy levels of the operator . on the damping factor for twitter network .data points on curves with one color corresponds to the same node ; about 150 levels are shown close to the minimal energy .panel ( b ) represents the histogram of unfolded level spacing statistics for twitter at .the poisson distribution and the wigner surmise are also shown for comparison .after .[ fig10_4],scaledwidth=48.0% ] the identification of pagerank with an energy spectrum allows to study the corresponding level statistics which represents a well known concept in the framework of random matrix theory .the most direct characteristic is the probability distribution of unfolded level spacings . here is a spacing between nearest levels measured in the units of average local energy spacing . the unfolding procedure requires the smoothed dependence of on the index which is obtained from a polynomial fit of with as argument .the statistical properties of fluctuations of levels have been extensively studied in the fields of rmt , quantum chaos and disordered solid state systems .it is known that integrable quantum systems have well described by the poisson distribution .in contrast the quantum systems , which are chaotic in the classical limit ( e.g. sinai billiard ) , have given by the rmt being close to the wigner surmise . also the anderson localized phase is characterized by while in the delocalized regime one has .the results for the twitter pagerank level statistics are shown in fig .[ fig10_4 ] .we find that is well described by the poisson distribution .furthermore , the evolution of energy levels with the variation of the damping factor shows many level crossings which are typical for poisson statistics . we may note that here each level has its own index so that it is rather easy to see if there is a real or avoided level crossing .the validity of the poisson statistics for pagerank probabilities is confirmed also for the networks of wikipedia editions in english , french and german from fig .[ fig9_3 ] .we argue that due to absence of level repulsion the pagerank order of nearby nodes can be easily interchanged .the obtained poisson law implies that the nearby pagerank probabilities fluctuate as random independent variables .during the last decades the trade between countries has been developed in an extraordinary way . usually countries are ranked in the world trade network ( wtn ) taking into account their exports and imports measured in _usd _ . however , the use of these quantities , which are local in the sense that countries know their total imports and exports , could hide the information of the centrality role that a country plays in this complex network . 
in this section we present the two - dimensional google matrix analysis of the wtn introduced in .some previous studies of global network characteristics were considered in , degree centrality measures were analyzed in and a time evolution of network global characteristics was studied in .topological and clustering properties of multiplex network of various commodities were discussed in , and an ecological ranking based on the nestedness of countries and products was presented in .the money exchange between countries defines a directed network .therefore google matrix analysis can be introduced in a natural way .pagerank and cheirank algorithms can be easily applied to this network with a straightforward correspondence with imports and exports .two - dimensional ranking , introduced in sec .[ s4 ] , gives an illustrative representation of global importance of countries in the wtn .the important element of google ranking of wtn is its democratic treatment of all world countries , independently of their richness , that follows the main principle of the united nations ( un ) .the wtn is a directed network that can be constructed considering countries as nodes and money exchange as links .we follow the definition of the wtn of where trade information comes from .these data include all trades between countries for different products ( using standard international trade classification of goods , sitc1 ) from 1962 to 2009 .all useful information of the wtn is expressed via the _ money matrix _ , which definition , in terms of its matrix elements , is defined as the money transfer ( in _ usd _ ) from country to country in a given year .this definition can be applied to a given specific product or to _ all commodities _ , which represent the sum over all products .in contrast to the binary adjacency matrix of www ( as the ones analyzed in s[s8 ] and s[s10 ] for example ) has weighted elements .this corresponds to a case when there are in principle multiple number of links from to and this number is proportional to _ usd _ amount transfer .such a situation appears in sec .[ s6 ] for ulam networks and sec .[ s7 ] for linux pcn with a main difference that for the wtn case there is a very large variation of mass matrix elements , related to the fact that there is a very strong variation of richness of various countries .google matrices and are constructed according to the usual rules and relation ( [ eq3_1 ] ) with and its transposed : and where and , if for a given all elements and respectively . here and are the total export and import mass for country .thus the sum in each column of or is equal to unity . in this waygoogle matrices and of wtn allow to treat all countries on equal grounds independently of the fact if a given country is rich or poor .this kind of analysis treats in a democratic way all world countries in consonance with the standards of the un .the probability distributions of ordered pagerank and cheirank depend on their indexes in a rather similar way with a power law decay given by . for the fit of top 100 countries and _ all commodities _the average exponent value is close to corresponding to the zipf law . 
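The democratic construction of the trade Google matrices described above (each column divided by the total export or import of the corresponding country, empty columns replaced by 1/N) is summarized in the following sketch; the three-country money matrix is an invented toy example, not COMTRADE data.

import numpy as np

def wtn_google_matrices(M, alpha=0.85):
    """Google matrices of the world trade network from a money matrix M, where
    M[c2, c1] is the transfer (in USD) from country c1 to country c2 in a given year.
    Each column is divided by its total, so all countries enter on equal grounds;
    empty columns are replaced by 1/N."""
    def normalize(W):
        N = W.shape[0]
        S = W.astype(float).copy()
        for j in range(N):
            s = S[:, j].sum()
            S[:, j] = S[:, j] / s if s > 0 else 1.0 / N
        return alpha * S + (1.0 - alpha) / N
    return normalize(M), normalize(M.T)        # matrices built from M and its transpose

countries = ['A', 'B', 'C']                    # invented three-country example
M = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 0.5],
              [1.0, 4.0, 0.0]])
G, Gstar = wtn_google_matrices(M)
# the PageRank vectors of G and Gstar then give the two trade rankings whose
# correspondence with imports and exports is described in the text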
for world trade in various commodities in 2008 .each country is shown by circle with its own flag ( for a better visibility the circle center is slightly displaced from its integer position along direction angle ) .the panels show the ranking for trade in the following commodities : _ all commodities _ ( a ) and ( b ) ; and _ crude petroleum _ ( c ) and ( d ) .panels ( a ) and ( c ) show a global scale with all 227 countries , while ( b ) and ( d ) give a zoom in the region of top ranks .after .[ fig11_1],scaledwidth=48.0% ] the distribution of countries on pagerank - cheirank plane for trade in _ all commodities _ in year 2008 is shown in panels ( a ) and ( b ) of fig .[ fig11_1 ] at . even if the google matrix approach is based on a democratic ranking of international trade , being independent of total amount of export - import and pib for a given country , the top ranks and belong to the group of industrially developed countries .this means that these countries have efficient trade networks with optimally distributed trade flows .another striking feature of global distribution is that it is concentrated along the main diagonal .this feature is not present in other networks studied before .the origin of this density concentration is related to a simple economy reason : for each country the total import is approximately equal to export since each country should keep in average an economic balance .this balance does not imply a symmetric money matrix , used in gravity model of trade ( see e.g. ) , as can be seen in the significant broadening of distribution of fig .[ fig11_1 ] ( especially at middle values of ) . for a given countryits trade is doing well if its so that the country exports more than it imports .the opposite relation corresponds to a bad trade situation ( e.g. greece being significantly above the diagonal ) .we also can say that local minima in the curve of correspond to a successful trade while maxima mark bad traders . in 2008most successful were china , r of korea , russia , singapore , brazil , south africa , venezuela ( in order of for ) while among bad traders we note uk , spain , nigeria , poland , czech rep , greece , sudan with especially strong export drop for two last cases .a comparison between local and global rankings of countries for both imports and exports gives a new tool to analyze countries economy .for example , in 2008 the most significant differences between cheirank and the rank given by total exports are for _ canada _ and _ mexico _ with corresponding money export ranks and and with and respectively .these variations can be explained in the context that the export of these two countries is too strongly oriented on _usa_. in contrast _ singapore _ moves up from export position to that shows the stability and broadness of its export trade , a similar situation appears for _ india _ moving up from to ( see for more detailed analysis ) .if we focus on the two - dimensional distribution of countries in a specific product we obtain a very different information .the symmetry approximately visible for _ all commodities _ is absolutely absent : the points are scattered practically over the whole square ( see fig .[ fig11_1 ] ) .the reason of such a strong scattering is clear : e.g. 
for _ crude petroleum _ some countries export this product while other countries import it .even if there is some flow from exporters to exporters it remains relatively low .this makes the google matrix to be very asymmetric .indeed , the asymmetry of trade flow is well visible in panels ( c ) and ( d ) of fig .[ fig11_1 ] .( coarse - graining inside each of cells ) ; data from the un comtrade database .after .[ fig11_2],scaledwidth=48.0% ] the same comparison of global and local rankings done before for _ all commodities _ can be applied to specific products obtaining even more strong differences .for example for _ crude petroleum _ russia moves up from export position to showing that its trade network in this product is better and broader than the one of saudi arabia which is at the first export position in money volume .iran moves in opposite direction from money position down to showing that its trade network is restricted to a small number of nearby countries .a significant improvement of ranking takes place for kazakhstan moving up from to .the direct analysis shows that this happens due to an unusual fact that kazakhstan is practically the only country which sells _ crude petroleum _ to the cheirank leader in this product russia .this puts kazakhstan on the second position .it is clear that such direction of trade is more of political or geographical origin and is not based on economic reasons .the same detailed analysis can be applied to all specific products given by sitc1 .for example for trade of _ cars _ france goes up from position in exports to due to its broad export network .the wtn has evolved during the period 1962 - 2009 .the number of countries is increased by 38% , while the number of links per country for _ all commodities _ is increased in total by 140% with a significant increase from 50% to 140% during the period 1993 - 2009 corresponding to economy globalization . at the same time for a specific commoditythe average number of links per country remains on a level of 3 - 5 links being by a factor 30 smaller compared to _ all commodities _ trade . during the whole period the total amount of trade in _usd _ shows an average exponential growth by 2 orders of magnitude .a statistical density distribution of countries in the plane in the period 1962 - 2009 for _ all commodities _ is shown in fig .[ fig11_2 ] .the distribution has a form of _ spindle _ with maximum density at the vertical axis .we remind that good exporters are on the lower side of this axis at , while the good importers ( bad exporters ) are on the upper side at . , for some selected countries for _ all commodities_. the countries shown panels ( a ) and ( b ) are : japan ( jp - black ) , france ( fr - red ) , fed r of germany and germany ( de - both in blue ) , great britain ( gb - green ) , usa ( us - orange ) [ curves from top to bottom in 1962 in ( a ) ] .the countries shown panels ( c ) and ( d ) are : argentina ( ar - violet ) , india ( in - dark green ) , china ( cn - cyan ) , ussr and russian fed ( ru - both in gray ) [ curves from top to bottom in 1975 in ( c ) ] .after .[ fig11_3],scaledwidth=48.0% ] the evolution of the ranking of countries for _ all commodities _ reflects their economical changes .the countries that occupy top positions tend to move very little in their ranks and can be associated to a_ solid phase_. 
on the other hand , the countries in the middle region of have a gas like phase with strong rank fluctuations .examples of ranking evolution and for japan , france , fed r of germany and germany , great britain , usa , and for argentina , india , china , ussr and russian fed are shown in fig .[ fig11_3 ] .it is interesting to note that sharp increases in mark crises in 1991 , 1998 for russia and in 2001 for argentina ( import is reduced in period of crises ) .it is also visible that in recent years the solid phase is perturbed by entrance of new countries like china and india .other regional or global crisis could be highlighted due to the big fluctuations in the evolution of ranks .for example , in the range , during the period of 1992 - 1998 some financial crises as black wednesday , mexico crisis , asian crisis and russian crisis are appreciated with this ranking evolution .interesting parallels between multiproduct world trade and interactions between species in ecological systems has been traced in .this approach is based on analysis of strength of transitions forming the google matrix for the multiproduct world trade network .ecological systems are characterized by high complexity and biodiversity linked to nonlinear dynamics and chaos emerging in the process of their evolution .the interactions between species form a complex network whose properties can be analyzed by the modern methods of scale - free networks .the analysis of their properties uses a concept of mutualistic networks and provides a detailed understanding of their features being linked to a high nestedness of these networks . using the un comtrade databasewe show that a similar ecological analysis gives a valuable description of the world trade : countries and trade products are analogous to plants and pollinators , and the whole trade network is characterized by a high nestedness typical for ecological networks. an important feature of ecological networks is that they are highly structured , being very different from randomly interacting species .recently is has been shown that the mutualistic networks between plants and their pollinators are characterized by high nestedness which minimizes competition and increases biodiversity . and corresponding for years 2008 ( c , d ) and 1968 ( e , f ) and 2008 for import ( c , e ) and export ( d , f ) panels .red / gray and blue / black represent unit and zero elements respectively ; only lines and columns with nonzero elements are shown . the order of plants - animals , countries - products is given by the nestedness algorithm , the perfect nestedness is shown by green / gray curves for the corresponding values of .after .[ fig11_4],scaledwidth=48.0% ] the mutualistic wtn is constructed on the basis of the un comtrade database from the matrix of trade transactions expressed in usd for a given product ( commodity ) from country to country in a given year ( from 1962 to 2009 ) . 
for product classificationwe use 3digits sitc rev.1 discussed above with the number of products .all these products are described in in the commodity code document sitc rev1 .the number of countries varies between in 1962 and in 2009 .the import and export trade matrices are defined as and respectively .we use the dimensionless matrix elements and where for a given year ,max[m^{(e)}_{p , c}]\} ] .this triangular matrix structure can be seen in fig .[ fig12_1](a ) which shows the amplitudes of .the vertical green / gray lines correspond to the extra contribution due to the dangling nodes .these non - vanishing eigenvalues of can be efficiently calculated as the zeros of the reduced polynomial ( [ eq_polyred ] ) up to with . for largest eigenvalues are , , and for .the dependence of the eigenvalues on seems to scale with the parameter for and in particular .therefore the first eigenvalue is clearly separated from the second eigenvalue and one can chose the damping factor without any problems to define a unique pagerank .are shown by color with blue / black for minimal zero elements and red / gray for maximal unity elements , with corresponding to ( with corresponding to the left column ) and for ( with corresponding to the upper row ) .panel ( b ) : the full lines correspond to the dependence of pagerank probability on index for the matrix sizes , , with the pagerank evaluated by the exact expression .the green / gray crosses correspond to the pagerank obtained by the power method for ; the dashed straight line shows the zipf law dependence .after .[ fig12_1],scaledwidth=48.0% ] for and the exact pagerank dependence .panel ( b ) : comparison of the dependence of the rescaled probabilities and on .both panels correspond to the case .after .[ fig12_2],scaledwidth=48.0% ] the large values of are possible because the vector iteration can actually be computed without storing the non - vanishing elements of by using the relation : }\,\frac{m(mn , m)}{q(mn)}\,v^{(j)}_{mn } \ ; , \quad{\rm if}\ n\ge 2\ ] ] and .the initial vector is given by and is the number of divisors of ( taking into account the multiplicity ) .the multiplicity can be recalculated during each iteration and one needs only to store integer numbers .it is also possible to reformulate ( [ eq_efficient_iter ] ) in a different way without using .the vectors allow to compute the coefficients in the reduced polynomial and the pagerank . fig .[ fig12_1](b ) shows the pagerank for obtained in this way and for comparison also the result of the power method for .actually fig .[ fig12_2 ] shows that in the sum already the first three terms give a quite satisfactory approximation to the pagerank allowing a further analytical simplified evaluation with the result for , where is the normalization constant and for prime numbers and for numbers being a product of two prime numbers and . the behavior , which takes approximately constant values on several branches , is also visible in fig .[ fig12_2 ] with decreasing if is a product of many prime numbers .the numerical results up to show that the numbers , corresponding to the leading pagerank values for , are , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , with about 30% of non - primes among these values . 
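A toy version of this integer network is easy to build directly: link each integer n to its proper divisors m, column-normalize and power-iterate. The sketch below ignores the multiplicities of repeated prime factors and uses plain iteration instead of the memory-efficient recursion quoted above, so it corresponds to the simplified divisor model mentioned just below rather than to the full construction.

import numpy as np

def divisor_pagerank(N, alpha=0.85, n_iter=200):
    """PageRank of the toy divisor network: each integer n (up to N) links to its
    proper divisors m < n; multiplicities of prime factors are ignored here."""
    out_links = {n: [m for m in range(1, n) if n % m == 0] for n in range(1, N + 1)}
    p = np.ones(N) / N                              # p[n - 1] is the probability of integer n
    for _ in range(n_iter):
        new = np.full(N, (1.0 - alpha) / N)
        for n, divisors in out_links.items():
            if divisors:
                share = alpha * p[n - 1] / len(divisors)
                for m in divisors:
                    new[m - 1] += share
            else:                                   # the integer 1 is a dangling node
                new += alpha * p[n - 1] / N
        p = new / new.sum()
    return p

p = divisor_pagerank(1000)
print(np.argsort(-p)[:10] + 1)   # in this toy variant the integer 1 and the smallest integers collect most of the weight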
a simplified model for the network for integer numbers with m is divisor of n and has also been studied with similar results .citation networks for physical review and other scientific journals can be defined by taking published articles as nodes and linking an article a to another article b if a cites b. pagerank and similar analysis of such networks are efficient to determine influential articles . in citation network linksgo mostly from newer to older articles and therefore such networks have , apart from the dangling node contributions , typically also a ( nearly ) triangular structure as can be seen in fig .[ fig12_3 ] which shows a coarse - grained density of the corresponding google matrix for the citation network of physical review from the very beginning until 2009 .however , due to the delay of the publication process in certain rare instances a published paper may cite another paper that is actually published a little later and sometimes two papers may even cite mutually each other .therefore the matrix structure is not exactly triangular but in the coarse - grained density in fig .[ fig12_3 ] the rare `` future citations '' are not well visible .the nearly triangular matrix structure implies large dimensional jordan blocks associated to the eigenvalue .this creates the jordan error enhancement ( [ eqjordan ] ) with severe numerical problems for accurate computation of eigenvalues in the range when using the arnoldi method with standard double - precision arithmetic . in the basis of the publication time index ( and ) .( b ) density of matrix elements in the basis of journal ordering according to : phys .series i , phys .. a , b , c , d , e , phys .rev . stab and phys .stper . andwith time index ordering inside each journal .note that the journals phys .series i , phys .rev . stab and phys .stper are not clearly visible due to a small number of published papers . also rev .phys . appears only as a thick line with 2 - 3 pixels ( out of 500 ) due to a limited number of published papers .the different blocks with triangular structure correspond to clearly visible seven journals with considerable numbers of published papers .both panels show the coarse - grained density of matrix elements on square cells for the entire network .color shows the density of matrix elements ( of at ) changing from blue / black for minimum zero value to red / gray at maximum value .after .[ fig12_3],scaledwidth=48.0% ] one can eliminate the small number of future citations ( which is % of the total number of links ) and determine the complex eigenvalue spectrum of a triangular reduced citation network using the semi - analytical theory presented in previous subsection .it turns out that in this case the matrix is nilpotent with which is much smaller than the total network size .the 352 non - vanishing eigenvalues can be determined numerically as the zeros of the polynomial ( [ eq_polyred ] ) but due to an alternate sign problem with a strong loss of significance it is necessary to use the high precision library gmp with 256 binary digits .the semi - analytical theory can also be generalized to the case of _ nearly _ triangular networks , i.e. the full citation network including the future citations . 
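Before turning to the nearly triangular case, the Jordan error enhancement invoked above can be illustrated with a few lines: for a Jordan block of dimension d with eigenvalue zero, a perturbation of size eps of a single corner element moves the eigenvalues onto a circle of radius eps^(1/d), so a perturbation at the level of machine precision, eps of order 1e-16, already produces spurious eigenvalues of modulus 10^(-16/d) (about 0.16 for d = 20). This toy sketch is not the semi-analytic method of the cited works; it only shows why standard double precision fails for large Jordan blocks.

```python
import numpy as np

def perturbed_jordan_block(d, eps):
    """d x d Jordan block with eigenvalue 0 (ones on the superdiagonal),
    plus a perturbation eps in the lower-left corner. The exact eigenvalues
    of this perturbed block are the d-th roots of eps."""
    J = np.diag(np.ones(d - 1), k=1)
    J[-1, 0] = eps
    return J

eps = 1e-16                                  # order of machine precision
for d in (5, 10, 20):
    lam = np.linalg.eigvals(perturbed_jordan_block(d, eps))
    print("d = %2d :  max |lambda| = %.2e   (expected eps^(1/d) = %.2e)"
          % (d, np.abs(lam).max(), eps ** (1.0 / d)))
```

The printed moduli agree with eps^(1/d) because the perturbed block is exactly a scaled cyclic shift, whose eigenvalues are the d-th roots of eps.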
in this casethe matrix is no longer nilpotent but one can still generalize the arguments of previous subsection and discuss the two cases where the quantity either vanishes ( eigenvectors of first group ) or is different from zero ( eigenvectors of second group ) .the eigenvalues for the first group , which may now be different from zero , can be determined by a quite complicated but numerically very efficient procedure using the subspace eigenvalues of and degenerate subspace eigenvalues of ( due to absence of dangling node contributions the matrix produces much larger invariant subspaces than ) .the eigenvalues of the second group are given as the complex zeros of the rational function : with given as in ( [ eq_polyred ] ) and now the series is not finite since is not nilpotent . for the citation network of physical reviewthe coefficients behave as where is the largest eigenvalue of the matrix with an eigenvector non - orthogonal to .therefore the series in ( [ eq_rationalfunction ] ) converges well for but in order to determine the spectrum the rational function needs to be evaluated for smaller values of .this problem can be solved by interpolating with ( another ) rational function using a certain number of support points on the complex unit circle , where ( [ eq_rationalfunction ] ) converges very well , and determining the complex zeros , well inside the unit circle , of the numerator polynomial using again the high precision library gmp . in this way using 16384 binary digits one may obtain 2500 reliable eigenvalues of the second group .binary digits , eigenvalues ; green ( light gray ) dots show the degenerate subspace eigenvalues of the matrix which are also eigenvalues of with a degeneracy reduced by one ( eigenvalues of the first group ) ; blue / black dots show the direct subspace eigenvalues of .( b ) spectrum of numerically accurate 352 non - vanishing eigenvalues of the google matrix for the triangular reduced physical review network determined by the newton - maehly method applied to the reduced polynomial ( [ eq_polyred ] ) with a high - precision calculation of 256 binary digits ; note the absence of subspace eigenvalues for this case . in both panelsthe green / gray curve represents the unit circle .after .[ fig12_4],scaledwidth=48.0% ] the numerical high precision spectra obtained by the semi - analytic methods for both cases , triangular reduced and full citation network , are shown in fig .[ fig12_4 ] .one may mention that it is also possible to implement the arnoldi method using the high precision library gmp for both cases and the resulting eigenvalues coincide very accurately with the semi - analytic spectra for both cases . when the spectrum of is determined with a good accuracy we can test the validity of the fractal weyl law ( [ eq5_1 ] ) changing the matrix size by considering articles published from the beginning to a certain time moment measured in years .the data presented in fig . 
[ fig12_5 ] show that the network size grows approximately exponentially as with the fit parameters , .the time interval considered in fig .[ fig12_5 ] is since the first data point corresponds to with papers published between 1893 and 1913 .the results , for the number of eigenvalues with , show that its growth is well described by the relation for the range when the number of articles becomes sufficiently large .this range is not very large and probably due to that there is a certain dependence of the exponent on the range parameter .at the same time we note that the maximal matrix size studied here is probably the largest one used in numerical studies of the fractal weyl law .we have for all that is definitely smaller than unity and thus the fractal weyl law is well applicable to the phys .rev . network .the value of increases up to for the data points with but this is due to the fact here also includes some numerically incorrect eigenvalues related to the numerical instability of the arnoldi method at standard double - precision ( 52 binary digits ) as discussed above . we conclude that the most appropriate choice for the description of the data is obtained at which from one side excludes small , partly numerically incorrect , values of and on the other side gives sufficiently large values of .here we have corresponding to the fractal dimension .furthermore , for we have a rather constant value with . of course, it would be interesting to extend this analysis to a larger size of citation networks of various type and not only for phys .we expect that the fractal weyl law is a generic feature of citation networks .further studies of the citation network of physical review concern the properties of eigenvectors ( different from the pagerank ) associated to relatively large complex eigenvalues , the fractal weyl law , the correlations between pagerank and cheirank ( see also subsection [ s4.3 ] ) and the notion of `` impactrank '' . to define the impactrank one may ask the question howa paper influences or has been influenced by other papers .for this one considers an initial vector , localized on a one node / paper .then the modified google matrix ( with a damping factor ) produces a `` pagerank '' by the propagator . in the vector leading nodes / papers have strongly influenced the initial paper represented in . doing the same for one obtains a vector where the leading papers have been influenced by the initial paper represented in .this procedure has been applied to certain historically important papers . of eigenvalues with for ( or ) versus the effective network size where the nodes with publication times after a cut time are removed from the network .the green / gray line shows the fractal weyl law with parameters ( ) and ( ) obtained from a fit in the range .the number includes both exactly determined invariant subspace eigenvalues and core space eigenvalues obtained from the arnoldi method with double - precision ( 52 binary digits ) for ( red / gray crosses ) and ( blue / black squares ) .panel ( b ) : exponent with error bars obtained from the fit in the range versus cut value .panel ( d ) : effective network size versus cut time ( in years ) . 
the green / gray line shows the exponential fit with and representing the number of years after which the size of the network ( number of papers published in all physical review journals ) is effectively doubled .after .[ fig12_5],scaledwidth=48.0% ] in summary , the results of this section show that the phenomenon of the jordan error enhancement ( [ eqjordan ] ) , induced by finite accuracy of computations with a finite number of digits , can be resolved by advanced numerical methods described above .thus the accurate eigenvalues can be obtained even for the most difficult case of quasi - triangular matrices .we note that for other networks like www of uk universities , wikipedia and twitter the triangular structure of is much less pronounced ( see e.g. fig . [ fig1_1 ] ) that gives a reduction of jordan blocks so that the arnoldi method with double precision computes accurate values of .there are various preferential attachment models generating complex scale - free networks ( see e.g. ) .such undirected networks are generated by the albert - barabsi ( ab ) procedure which builds networks by an iterative process . such a procedure has been generalized to generate directed networks in with the aim to study properties of the google matrix of such networks .the procedure is working as follows : starting from nodes , at each step links are added to the existing network with probability , or links are rewired with probability , or a new node with links is added with probability . in each casethe end node of new links is chosen with preferential attachment , i.e. with probability where is the total number of ingoing and outgoing links of node .this mechanism generates directed networks having the small - world and scale - free properties , depending on the values of and .the results are averaged over random realizations of the network to improve the statistics . the studies are done mainly for , and two values of corresponding to scale - free ( ) and exponential ( ) regimes of link distributions ( see fig . 1 in for undirected networks ) .for the generated directed networks at , one finds properties close to the behavior for the www with the cumulative distribution of ingoing links showing algebraic decay and average connectivity . for one finds and . for outgoing links ,the numerical data are compatible with an exponential decay in both cases with for and for .it is found that small variations of parameters near the chosen values do not qualitatively affect the properties of matrix .it is found that the eigenvalues of for the ab model have one with all other at ( see fig . 1 in ) .this distribution shows no significant modification with the growth of matrix size .however , the values of ipr are growing with for typical values .this indicates a delocalization of corresponding eigenstates at large . at the same time the pagerank probability is well described by the algebraic dependence with being practically independent of .these results for directed ab model network shows that it captures certain features of real directed networks , as e.g. a typical pagerank decay with the exponent .however , the spectrum of in this model is characterized by a large gap between and other eigenvalues which have at .this feature is in a drastic difference with spectra of such typical networks at www of universities , wikipedia and twitter ( see figs .[ fig8_1],[fig9_1],[fig10_2 ] ) .in fact the ab model has no subspaces and no isolated or weakly coupled communities . 
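A minimal sketch of the directed preferential-attachment procedure described above, under the simplest reading of the rules: with probability p add m links, with probability q rewire m links, otherwise add a new node with m outgoing links, the target of every new or rewired link being chosen with probability proportional to its total (in + out) degree. The choice of the link sources, the initial seed graph, the parameter values and the handling of multi-links are assumptions of this sketch and may differ from the cited implementation.

```python
import random

def directed_ab_network(n_steps, m=5, m0=10, p=0.3, q=0.1, seed=0):
    """Toy generator of a directed Albert-Barabasi-type network
    (multi-links allowed to keep the bookkeeping simple)."""
    rng = random.Random(seed)
    nodes = list(range(m0))
    # start from a small directed ring so every node has nonzero degree
    links = [(i, (i + 1) % m0) for i in nodes]

    def pick_preferential():
        # endpoint of a uniformly chosen link = node chosen with probability
        # proportional to its total (in + out) degree
        return rng.choice(rng.choice(links))

    for _ in range(n_steps):
        x = rng.random()
        if x < p:                                # add m links
            for _ in range(m):
                links.append((rng.choice(nodes), pick_preferential()))
        elif x < p + q:                          # rewire m existing links
            for _ in range(m):
                k = rng.randrange(len(links))
                links[k] = (links[k][0], pick_preferential())
        else:                                    # add a new node with m out-links
            new = len(nodes)
            targets = [pick_preferential() for _ in range(m)]
            nodes.append(new)
            links.extend((new, j) for j in targets)
    return nodes, links

nodes, links = directed_ab_network(20000)
print("generated %d nodes and %d directed links" % (len(nodes), len(links)))
```

Picking a random endpoint of a uniformly chosen link is a standard O(1) way of sampling a node with probability proportional to its total degree, which avoids maintaining an explicit degree table.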
in this network all sites can be reached from a given site in a logarithmic number of steps that generates a large gap in the spectrum of google matrix and a rapid relaxation to pagerank eigenstate . in real networksthere are plenty of isolated or weakly coupled communities and the introduction of damping factor is necessary to have a single pagerank eigenvalue at .thus the results obtained in show that the ab model is not able to capture the important spectral features of real networks .additional studies in analyzed the model of a real www university network with rewiring procedure of links , which consists in randomizing the links of the network keeping fixed the number of links at any given node .starting from a single network , this creates an ensemble of randomized networks of same size , where each node has the same number of ingoing and outgoing links as for the original network .the spectrum of such randomly rewired networks is also characterized by a large gap in the spectrum of showing that rewiring destroys the communities existing in original networks .the spectrum and eigenstate properties are studied in the related work on various real networks of moderate size which have no spectral gap .above we saw that the standard models of scale - free networks are not able to reproduce the typical properties of spectrum of google matrices of real large scale networks . at the same timewe believe that it is important to find realistic matrix models of www and other networks . herewe discuss certain results for certain random matrix models of .analytical and numerical studies of random unistochastic or orthostochastic matrices of size and lead to triplet and cross structures in the complex eigenvalue spectra ( see also fig .[ fig8bis ] ) .however , the size of such matrices is too small .here we consider other examples of random matrix models of perron - frobenius operators characterized by non - negative matrix elements and column sums normalized to unity .we call these models random perron - frobenius matrices ( rpfm ). a number of rpfm , with arbitrary size , can be constructed by drawing independent matrix elements from a given distribution with finite variance and normalizing the column sums to unity .the average matrix is just a projector on the vector ( with unity entries on each node , see also sec . [ s12.0 ] ) and has the two eigenvalues ( of multiplicity ) and ( of multiplicity ) . using an argument of degenerate perturbation theory on and known results on the eigenvalue density of non - symmetric random matrices finds that an arbitrary realization of has the leading eigenvalue and the other eigenvalues are uniformly distributed on the complex unit circle of radius ( see fig .[ fig13_1 ] ) . 
shows the spectrum ( red / gray dots ) of one realization of a full uniform rpfm with dimension and matrix elements uniformly distributed in the interval ] and a triangular matrix with non - vanishing elements ( blue / black squares ) ; here is the index - number of non - empty columns and the first column with corresponds to a dangling node with elements for both triangular cases .panels show the complex eigenvalue spectrum ( red / gray dots ) of a sparse rpfm with dimension and non - vanishing elements per column at random positions .panel ( or ) corresponds to the case of uniformly distributed non - vanishing elements in the interval ] .sparse models with non - vanishing elements per column can be modeled by a distribution where the probability of is and for non - zero ( either uniform in ] and for we have .then the first column is empty , that means it corresponds to a dangling node and it needs to be replaced by entries .for the triangular rpfm the situation changes completely since here the average matrix ( for and ) has already a nontrivial structure and eigenvalue spectrum .therefore the argument of degenerate perturbation theory which allowed to apply the results of standard full non - symmetric random matrices does not apply here . in fig .[ fig13_1 ] one clearly sees that for the spectra for one realization of a triangular rpfm and its average are very similar for the eigenvalues with large modulus but both do not have at all a uniform circular density in contrast to the rprm models without the triangular constraint discussed above . for the triangular rpfm the pagerank behaves as with the ranking index being close to the natural order of nodes that reflects the fact that the node 1 has the maximum of incoming links etc .the above results show that it is not so simple to propose a good random matrix model which captures the generic spectral features of real directed networks .we think that investigations in this direction should be continued .the phenomenon of anderson localization of electron transport in disordered materials is now a well - known effect studied in detail in physics ( see e.g. ) . in one and two dimensionseven a small disorder leads to an exponential localization of electron diffusion that corresponds to an insulating phase .thus , even if a classical electron dynamics is diffusive and delocalized over the whole space , the effects of quantum interference generates a localization of all eigenstates of the schdinger equation . in higher dimensions a localization is preserved at a sufficiently strong disorder , while a delocalized metallic phase appears for a disorder strength being smaller a certain critical value dependent on the fermi energy of electrons .this phenomenon is rather generic and we can expect that a somewhat similar delocalization transition can appear in the small - world networks . 
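Coming back to the random Perron-Frobenius matrices discussed above, the sketch below generates a full uniform RPFM, normalizes its columns to unit sum and computes the spectrum: the leading eigenvalue equals 1 while the remaining eigenvalues fill a small circle around the origin whose radius shrinks with the matrix size. The comparison value 1/sqrt(3N) printed below is a back-of-the-envelope estimate for uniform entries, not a figure quoted from the cited work.

```python
import numpy as np

def rpfm_uniform(N, rng):
    """Full random Perron-Frobenius matrix: iid uniform entries,
    columns normalized to unit sum (column-stochastic)."""
    A = rng.random((N, N))
    return A / A.sum(axis=0, keepdims=True)

rng = np.random.default_rng(2)
for N in (200, 400, 800):
    lam = np.sort(np.abs(np.linalg.eigvals(rpfm_uniform(N, rng))))[::-1]
    print("N=%4d  lambda_1=%.6f  bulk radius=%.4f  ~1/sqrt(3N)=%.4f"
          % (N, lam[0], lam[1], 1 / np.sqrt(3 * N)))
```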
at ; , ; averaging is done over network realizations .( b ) stars give dependence of on a disorder strength at the critical point when , and at fixed ; the straight line corresponds to ; the dashed curve is drown to adapt an eye .after .[ fig13_2],scaledwidth=48.0% ] -0.3 cm indeed , it is useful to consider the anderson model on a ring with a certain number of shortcut links , described by the schdinger equation where are random on site energies homogeneously distributed within the interval , and is the hopping matrix element .the sum over is taken over randomly established shortcuts from a site to any other random site of the network .the number of such shortcuts is , where is the total number of sites on a ring and is the density of shortcut links .this model had been introduced in .the numerical study , reported there , showed that the level - spacing statistics for this model has a transition from the poisson distribution , typical for the anderson localization phase , to the wigner surmise distribution , typical for the anderson metallic phase .the numerical diagonalization was done via the lanczos algorithm for the sizes up to and the typical parameter range and .an example , of the variation of with a decrease of is shown in fig .[ fig13_2](a ) .we see that the wigner surmise provides a good description of the numerical data at , when the maximal localization length in the 1d anderson model ( see e.g. ) is much smaller than the system size . to identify a transition from one limiting case to another it is convenient to introduce the parameter , where is the intersection point of and .in this way varies from ( for ) to ( for ) ( see e.g. ) . from the variation of with system parameters and size ,the critical density can be determined by the condition being independent of .the obtained dependence of on obtained at a fixed critical point is shown in fig .[ fig13_2](b ) .the anderson delocalization transition takes place when the density of shortcuts becomes larger than a critical density where is the length of anderson localization in .a simple physical interpretation of this result is that the delocalization takes place when the localization length becomes larger than a typical distance between shortcuts .the further studies of time evolution of wave function and ipr variation also confirmed the existence of quantum delocalization transition on this quantum small - world network .thus the results obtained for the quantum small - world networks show that the anderson transition can take place in such systems .however , the above model represents an undirected network corresponding to a symmetric matrix with a real spectrum while the typical directed networks are characterized by asymmetric matrix and complex spectrum .the possibility of existence of localized states of for www networks was also discussed by but the fact that in a typical case the spectrum of is complex has not been analyzed in detail .above we saw certain indications on a possibility of anderson type delocalization transition for eigenstates of the matrix .our results clearly show that certain eigenstates in the core space are exponentially localized ( see e.g. 
fig [ fig8_2](b ) ) .such states are localized only on a few nodes touching other nodes of network only by an exponentially small tail .a similar situation would appear in the 1d anderson model if an absorption would be introduced on one end of the chain .then the eigenstates located far away from this place would feel this absorption only by exponentially small tails so that the imaginary part of the eigenenergy would have for such far away states only an exponentially small imaginary part .it is natural to expect that such localization can be destroyed by some parameter variation . indeed , certain eigenstates with for the directed network of the ab model have ipr growing with the matrix size ( see sec .[ s13.1 ] and ) even if for the pagerank the values of remain independent of .the results for the ulam network from figs .[ fig6_6 ] , [ fig6_7 ] provide an example of directed network where the pagerank vector becomes delocalized when the damping factor is decreased from to .this example demonstrates a possibility of pagerank delocalization but a deeper understanding of the conditions required for such a phenomenon to occur are still lacking .the main difficulty is an absence of well established random matrix models which have properties similar to the available examples of real networks .indeed , for hermitian and unitary matrices the theories of random matrices , mesoscopic systems and quantum chaos allow to capture main universal properties of spectra and eigenstates . for asymmetricgoogle matrices the spectrum is complex and at the moment there are no good random matrix models which would allow to perform analytical analysis of various parameter dependencies .it is possible that non - hermitian anderson models in , which naturally generates a complex spectrum and may have delocalized eigenstates , will provide new insights in this direction .we note that the recent random google matrix models studied in give indications on appearance of the anderson transition for google matrix eigenstates and a mobility edge contour in a plane of complex eigenvalues .in this section we discuss additional examples of real directed networks . in 1958john von neumann traced first parallels between architecture of the computer and the brain .since that time computers became an unavoidable element of the modern society forming a computer network connected by the www with about indexed web pages spread all over the world ( see e.g. http://www.worldwidewebsize.com/ ) .this number starts to become comparable with neurons in a human brain where each neuron can be viewed as an independent processing unit connected with about other neurons by synaptic links ( see e.g. ) .about 20% of these links are unidirectional and hence the brain can be viewed as a directed network of neuron links . at present , more and more experimental information about neurons and their links becomes available and the investigations of properties of neuronal networks attract an active interest ( see e.g. 
) .the fact that enormous sizes of www and brain networks are comparable gives an idea that the google matrix analysis should find useful application in brain science as it is the case of www .first applications of methods of google matrix methods to brain neural networks was done in for a large - scale thalamocortical model based on experimental measures in several mammalian species .the model spans three anatomic scales .( i ) it is based on global ( white - matter ) thalamocortical anatomy obtained by means of diffusion tensor imaging of a human brain .( ii ) it includes multiple thalamic nuclei and six - layered cortical microcircuitry based on in vitro labeling and three - dimensional reconstruction of single neurons of cat visual cortex .( iii ) it has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees . according to model exhibits behavioral regimes of normal brain activity that were not explicitly built - in but emerged spontaneously as the result of interactions among anatomical and dynamic processes . for the google matrices and at for the neural network of _c.elegans _ ( black and red / gray symbols ) .( b ) values of ipr of eigenvectors are shown as a function of corresponding ( same colors ) .after .[ fig14_1],scaledwidth=48.0% ] -0.3 cm the model studied in contains neuron with .the obtained results show that pagerank and cheirank vectors have rather large being comparable with the whole network size at .the corresponding probabilities have very flat dependence on their indexes showing that they are close to a delocalized regime .we attribute these features to a rather large number of links per node being even larger than for the twitter network . at the same time the pagerank - cheirank correlator is rather small .thus this network is structured in such a way that functions related to order signals ( outgoing links of cheirank ) and signals bringing orders ( ingoing links of pagerank ) are well separated and independent of each other as it is the case for the linux kernel software architecture .the spectrum of has a gapless structure showing that long living excitations can exist in this neuronal network . of course, model systems of neural networks can provide a number of interesting insights but it is much more important to study examples of real neural networks . in an analysis is performed for the neural network of _c.elegans _ ( worm ) .the full connectivity of this directed network is known and well documented at wormatlas .the number of linked neurons ( nodes ) is with the number of synaptic connections and gap junctions ( links ) between them being . 
showing distribution of neurons according to their ranking .( a ) : soma region coloration - head ( red / gray ) , middle ( green / light gray ) , tail ( blue / dark gray ) .( b ) : neuron type coloration - sensory ( red / gray ) , motor ( green / light gray ) , interneuron ( blue / dark gray ) , polymodal ( purple / light - dark gray ) and unknown ( black ) .the classifications and colors are given according to wormatlas .after .[ fig14_2],scaledwidth=48.0% ] -0.3 cm the google matrix of _c.elegans _ is constructed using the connectivity matrix elements , where is an asymmetric matrix of synaptic links whose elements are if neuron connects to neuron through a chemical synaptic connection and otherwise .the matrix part is a symmetric matrix describing gap junctions between pairs of cells , if neurons and are connected through a gap junction and otherwise .then the matrices and are constructed following the standard rule ( [ eq3_1 ] ) at .the connectivity properties of this network are similar to those of www of cambridge and oxford with approximately the same number of links per node .the spectra of and are shown in fig .[ fig14_1 ] with corresponding ipr values of eigenstates .the imaginary part of is relatively small due to a large fraction of symmetric links .the second by modulus eigenvalues are for and for .thus the network relaxation time is approximately iterations of .certain ipr values of eigenstates of have rather large while others have located only on about ten nodes .we have a large value for pagerank and a more moderate value for cheirank vectors . herewe have the algebraic decay exponents being for and for .of course , the network size is not large and these values are only approximate .however , they indicate an interchange between pagerank and cheirank showing importance of outgoing links . it is possible that such an inversion is related to a significant importance of outgoing links in neural systems : in a sense such links transfer orders , while ingoing links bring instructions to a given neuron from other neuronsthe correlator is small and thus , the network structure allows to perform a control of information flow in a more efficient way without interference of errors between orders and executions .we saw already in sec .[ s7.1 ] that such a separation of concerns emerges in software architecture .it seems that the neural networks also adopt such a structure .we note that a somewhat similar situation appears for networks of business process management where _ principals _ of a company are located at the top cheirank position while the top pagerank positions belong to company _ contacts _ .indeed , a case study of a real company structure analyzed in also stress the importance of company managers who transfer orders to other structural units . 
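A minimal sketch of the construction just described: the chemical-synapse matrix (asymmetric) and the gap-junction matrix (symmetric) are summed into a single link matrix, the Google matrices G and G* are built by the standard rule at alpha = 0.85, and PageRank, CheiRank and their correlator follow by power iteration. The WormAtlas connectivity data are not bundled here, so random placeholder matrices of an arbitrary size are used, and the correlator definition kappa = N sum_i P(i) P*(i) - 1 is the one used earlier in this review; only the real data reproduce the values and rankings quoted in the text.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Standard rule: adj[i, j] > 0 encodes a link j -> i; columns are
    normalized to unit sum, dangling columns replaced by uniform ones."""
    N = adj.shape[0]
    S = (adj > 0).astype(float)
    S[:, S.sum(axis=0) == 0] = 1.0
    S /= S.sum(axis=0, keepdims=True)
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, n_iter=300):
    p = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(n_iter):
        p = G @ p
    return p

# --- placeholder connectivity; replace with the WormAtlas matrices ----------
rng = np.random.default_rng(0)
N = 300                                             # placeholder network size
S_syn = (rng.random((N, N)) < 0.03).astype(float)   # chemical synapses (directed)
np.fill_diagonal(S_syn, 0)
upper = np.triu((rng.random((N, N)) < 0.01).astype(float), 1)
S_gap = upper + upper.T                             # gap junctions (symmetric)
adj = S_syn + S_gap                                 # combined link matrix

P = pagerank(google_matrix(adj))         # PageRank  (follows ingoing links)
Pstar = pagerank(google_matrix(adj.T))   # CheiRank  (links inverted)
kappa = N * np.sum(P * Pstar) - 1.0      # PageRank-CheiRank correlator
print("kappa = %.3f ; top PageRank node %d ; top CheiRank node %d"
      % (kappa, int(np.argmax(P)), int(np.argmax(Pstar))))
```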
for this networkthe correlator is also small being .we expect that brain neural networks may have certain similarities with company organization .each neuron belongs to two ranks and and it is convenient to represent the distribution of neurons on pagerank - cheirank plane shown in fig .[ fig14_2 ] .the plot confirms that there are little correlations between both ranks since the points are scattered over the whole plane .neurons ranked at top positions of pagerank have their soma located mainly in both extremities of the worm ( head and tail ) showing that neurons in those regions have important connections coming from many other neurons which control head and tail movements .this tendency is even more visible for neurons at top positions of cheirank but with a preference for head and middle regions . in general , neurons , that have their soma in the middle region of the worm ,are quite highly ranked in cheirank but not in pagerank .the neurons located at the head region have top positions in cheirank and also pagerank , while the middle region has some top cheirank indexes but rather large indexes of pagerank ( fig .[ fig14_2 ] ( a ) ) .the neuron type coloration ( fig . [ fig14_2 ] ( b ) ) also reveals that sensory neurons are at top pagerank positions but at rather large cheirank indexes , whereas in general motor neurons are in the opposite situation .top nodes of pagerank and cheirank favor important signal relaying neurons such as and that integrate signals from crucial nodes and in turn pilot other crucial nodes .neurons , and are considered to belong to the rich club analyzed in .the top neurons in 2drank are aval , avar , avbl , avbr , pvcr that corresponds to a dominance of interneurons .more details can be found in .the technological progress allows to obtain now more and more detailed information about neural networks ( see e.g. ) even if it is not easy to get information about link directions .in view of that we expect that the methods of directed network analysis described here will find useful future applications for brain neural networks . the approaches of markov chains and google matrix can be also efficiently used for analysis of statistical properties of dna sequences .the data sets are publicly available at .the analysis of poincar recurrences in these dna sequences shows their similarities with the statistical properties of recurrences for dynamical trajectories in the chirikov standard map and other symplectic maps .indeed , a dna sequence can be viewed as a long symbolic trajectory and hence , the google matrix , constructed from it , highlights the statistical features of dna from a new viewpoint . 
an important step in the statistical analysis of dna sequences was done in applying methods of statistical linguistics and determining the frequency of various words composed of up to 7 letters .a first order markovian models have been also proposed and briefly discussed in this work .the google matrix analysis provides a natural extension of this approach .thus the pagerank eigenvector gives most frequent words of given length .the spectrum and eigenstates of characterize the relaxation processes of different modes in the markov process generated by a symbolic dna sequence .thus the comparison of word ranks of different species allows to identify their proximity .are shown in the basis of pagerank index ( and ) .here , and axes show and within the range ( a ) and ( b ) .the element at is placed at top left corner .color marks the amplitude of matrix elements changing from blue / black for minimum zero value to red / gray at maximum value .after .[ fig14_3],scaledwidth=48.0% ] -0.3 cm the statistical analysis is done for dna sequences of the species : homo sapiens ( hs , human ) , canis familiaris ( cf , dog ) , loxodonta africana ( la , elephant ) , bos taurus ( bull , bt ) , danio rerio ( dr , zebrafish ) . for hs dna sequences are represented as a single string of length base pairs ( bp ) corresponding to 5 individuals .similar data are obtained for bt ( bp ) , cf ( bp ) , la ( bp ) , dr ( bp ) .all strings are composed of 4 letters and undetermined letter .the strings can be found from . for a given sequencewe fix the words of letters length corresponding to the number of states .we consider that there is a transition from a state to state inside this basis when we move along the string from left to right going from a word to a next word .this transition adds one unit in the transition matrix element .the words with letter are omitted , the transitions are counted only between nearby words not separated by words with .there are approximately such transitions for the whole length since the fraction of undetermined letters is small .thus we have .the markov matrix of transitions is obtained by normalizing matrix elements in such a way that their sum in each column is equal to unity : .if there are columns with all zero elements ( dangling nodes ) then zeros of such columns are replaced by . then the google matrix is constructed from by the standard rule ( [ eq3_1 ] ) .it is found that the spectrum of has a significant gap and a variation of in a range does not affect significantly the pagerank probability . thus all dna results are shown at . 
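A minimal sketch of the word-transition construction described above: slide along the sequence, map each word of ell letters to a state, count transitions between consecutive words, column-normalize, and apply the standard damping rule. Two points are assumptions of this sketch rather than statements about the cited work: the words are taken with a shift of one letter, and a random toy string replaces the real genomic data; words containing undetermined letters are skipped and break the transition chain, as described in the text.

```python
import numpy as np
from itertools import product

def dna_google_matrix(seq, ell=3, alpha=0.85, shift=1):
    """Word-transition Google matrix of a DNA string: states are the 4**ell
    words over {A,C,G,T}; words containing any other letter are skipped.
    'shift' (assumed = 1 here) is the step between consecutive words."""
    words = ["".join(w) for w in product("ACGT", repeat=ell)]
    index = {w: k for k, w in enumerate(words)}
    N = len(words)
    counts = np.zeros((N, N))
    prev = None
    for start in range(0, len(seq) - ell + 1, shift):
        cur = index.get(seq[start:start + ell])    # None for words with 'N' etc.
        if prev is not None and cur is not None:
            counts[cur, prev] += 1.0               # transition prev -> cur
        prev = cur
    S = counts.copy()
    S[:, S.sum(axis=0) == 0] = 1.0                 # dangling words -> uniform
    S /= S.sum(axis=0, keepdims=True)
    return alpha * S + (1.0 - alpha) / N, words

def pagerank(G, n_iter=300):
    p = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(n_iter):
        p = G @ p
    return p

# toy random string standing in for a real genomic sequence
rng = np.random.default_rng(4)
seq = "".join(rng.choice(list("ACGT"), size=200000))
G, words = dna_google_matrix(seq, ell=3)
P = pagerank(G)
top = np.argsort(-P)[:5]
print("top 3-letter words:", [(words[k], round(float(P[k]), 4)) for k in top])
```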
of google matrix elements with as a function of .( a ) various species with 6-letters word length : elephant la ( green ) , zebrafish dr(black ) , dog cf ( red ) , bull bt ( magenta ) , and homo sapiens hs ( blue ) ( from left to right at ) .( b ) data for hs sequence with words of length ( brown ) , ( blue ) , ( red ) ( from right to left at ) ; for comparison black dashed and dotted curves show the same distribution for the www networks of universities of cambridge and oxford in 2006 respectively .after .[ fig14_4],scaledwidth=48.0% ] -0.3 cm of sum of ingoing matrix elements with .panels ( a ) and ( b ) show the same cases as in fig .[ fig14_4 ] in same colors .the dashed and dotted curves are shifted in -axis by one unit left to fit the figure scale .after .[ fig14_5],scaledwidth=48.0% ] -0.3 cm on pagerank index .( a ) data for different species for word length of 6-letters : zebrafish dr ( black ) , dog cf ( red ) , homo sapiens hs ( blue ) , elephant la ( green ) and bull bt ( magenta ) ( from top to bottom at ) .( b ) data for hs ( full curve ) and la ( dashed curve ) for word length ( brown ) , ( blue / green ) , ( red ) ( from top to bottom at ) .after .[ fig14_6],scaledwidth=48.0% ] -0.3 cm the image of matrix elements is shown in fig .[ fig14_3 ] for hs with .we see that almost all matrix is full that is drastically different from the www and other networks considered above .the analysis of statistical properties of matrix elements shows that their integrated distribution follows a power law as it is seen in fig .[ fig14_4 ] . here is the number of matrix elements of the matrix with values .the data show that the number of nonzero matrix elements is very close to .the main fraction of elements has values ( some elements since for certain there are many transitions to some node with and e.g. only one transition to other with ) .at the same time there are also transition elements with large values whose fraction decays in an algebraic law with some constant and an exponent .the fit of numerical data in the range of algebraic decay gives for : ( bt ) , ( cf ) , ( la ) , ( hs ) , ( dr ) .for hs case we find at and at with the average for .there are visible oscillations in the algebraic decay of with but in global we see that on average all species are well described by a universal decay law with the exponent . for comparisonwe also show the distribution for the www networks of university of cambridge and oxford in year 2006 .we see that in these cases the distribution has a very short range in which the decay is at least approximately algebraic ( ) .in contrast to that for the dna sequences we have a large range of algebraic decay . 
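The integrated distribution of matrix elements used above, i.e. the number of elements of G with value larger than g, and a power-law fit of its tail can be obtained with a few lines. The sketch below is demonstrated on a toy column-stochastic matrix with heavy-tailed entries, since the DNA matrices themselves are not reproduced here, and the fit range is an arbitrary choice of this illustration.

```python
import numpy as np

def integrated_element_distribution(G, n_points=200):
    """N_g(g): number of matrix elements of G with value >= g."""
    vals = np.sort(G.ravel())
    vals = vals[vals > 0]
    g = np.logspace(np.log10(vals[0]), np.log10(vals[-1]), n_points)
    Ng = len(vals) - np.searchsorted(vals, g, side="left")
    return g, Ng

def tail_exponent(g, Ng, fit_range):
    """Least-squares slope of log N_g vs log g inside fit_range = (gmin, gmax)."""
    mask = (g >= fit_range[0]) & (g <= fit_range[1]) & (Ng > 0)
    slope, _ = np.polyfit(np.log(g[mask]), np.log(Ng[mask]), 1)
    return -slope                     # N_g ~ g**(-nu); the returned value is nu

# demo on a toy column-stochastic matrix with broadly distributed elements
rng = np.random.default_rng(5)
A = rng.pareto(1.5, size=(500, 500))           # heavy-tailed raw weights
G = A / A.sum(axis=0, keepdims=True)
g, Ng = integrated_element_distribution(G)
print("fitted tail exponent nu = %.2f" % tail_exponent(g, Ng, (1e-3, 1e-1)))
```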
since in each column we have the sum of all elements equal to unity we can say that the differential fraction gives the distribution of outgoing matrix elements which is similar to the distribution of outgoing links extensively studied for the www networks .indeed , for the www networks all links in a column are considered to have the same weight so that these matrix elements are given by an inverse number of outgoing links with the decay exponent .thus , the obtained data show that the distribution of dna matrix elements is similar to the distribution of outgoing links in the www networks .indeed , for outgoing links of cambridge and oxford networks the fit of numerical data gives the exponents ( cambridge ) and ( oxford ) .as discussed above , on average the probability of pagerank vector is proportional to the number of ingoing links that works satisfactory for sparse matrices . for dnawe have a situation where the google matrix is almost full and zero matrix elements are practically absent .in such a case an analogue of number of ingoing links is the sum of ingoing matrix elements .the integrated distribution of ingoing matrix elements with the dependence of on is shown in fig .[ fig14_5 ] . here is defined as the number of nodes with the sum of ingoing matrix elements being larger than . a significant part of this dependence , corresponding to large values of and determining the pagerank probability decay , is well described by a power law .the fit of data at gives ( bt ) , ( cf ) , ( la ) , ( hs ) , ( dr ) .for hs case at we find respectively and . for and other specieswe have an average . for www one usually have .indeed , for the ingoing matrix elements of cambridge and oxford networks we find respectively the exponents and ( see curves in fig .[ fig14_5 ] ) . 
for ingoing links distribution of cambridge and oxford networkswe obtain respectively and which are close to the usual www value .in contrast the exponent for dna google matrix elements gets significantly larger value .this feature marks a significant difference between dna and www networks .the pagerank vector can be obtained by a direct diagonalization .the dependence of probability on index is shown in fig .[ fig14_6 ] for various species and different word length .the probability describes the steady state of random walks on the markov chain and thus it gives the frequency of appearance of various words of length in the whole sequence .the frequencies or probabilities of words appearance in the sequences have been obtained in by a direct counting of words along the sequence ( the available sequences were shorted at that times ) .both methods are mathematically equivalent and indeed our distributions are in good agreement with those found in even if now we have a significantly better statistics .plane diagrams for different species in comparison with homo sapiens : ( a ) -axis shows pagerank index of a word and -axis shows pagerank index of the same word with of bull , ( b ) of dog , ( c ) of elephant and ( d ) of zebrafish ; here the word length is .the colors of symbols marks the purine content in a word ( fractions of letters or in any order ) ; the color varies from red / gray at maximal content , via brown , yellow , green , light blue , to blue / black at minimal zero content .after .[ fig14_7],scaledwidth=48.0% ] -0.3 cm the decay of with can be approximately described by a power law .thus for example for hs sequence at we find for the fit range that is rather close to the exponent found in .since on average the pagerank probability is proportional to the number of ingoing links , or the sum of ingoing matrix elements of , one has the relation between the exponent of pagerank and exponent of ingoing links ( or matrix elements ) : . indeed ,for the hs dna case at we have that gives being close to the above value of obtained from the direct fit of dependence .the agreement is not so perfect since there is a visible curvature in the log - log plot of vs and also since a small value of gives a moderate variation of that produces a reduction of accuracy of numerical fit procedure . in spite of this only approximate agreementwe conclude that in global the relation between and works correctly .it is interesting to plot a pagerank index of a given species versus the index of hs for the same word . for identical sequences one should have all points on diagonal , while the deviations from diagonal characterize the differences between species .the examples of such pagerank proximity diagrams are shown in fig .[ fig14_7 ] for words at .a visual impression is that cf case has less deviations from hs rank compared to bt and la .the non - mammalian dr case has most strong deviations from hs rank .the fraction of purine letters or in a word of letters is shown by color in fig .[ fig14_7 ] for all words ranked by pagerank index .we see that these letters are approximately homogeneously distributed over the whole range of values . to determine the proximity between different species or different hs individuals we compute the average dispersion between two species ( individuals ) and . comparing the words with length we find that the scaling works with a good accuracy ( about 10% when is increased by a factor 16 ) . 
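The relation between the PageRank decay exponent beta and the exponent mu of the ingoing-link (or ingoing-weight) distribution invoked above follows from a standard counting argument. The sketch below assumes the usual convention in which the integrated in-weight distribution decays with exponent mu - 1 (differential exponent mu), which reproduces the familiar WWW values beta of about 0.9 for an in-link exponent of about 2.1; if the fitted exponents quoted in the text are defined differently, the formula must be adapted accordingly.

```latex
% Standard counting argument relating the PageRank exponent \beta
% to the exponent \mu of the ingoing link/weight distribution.
\begin{align*}
  N_*(g) &\propto g^{-(\mu-1)}
     && \text{number of nodes with in-weight} \ \geq g ,\\
  P &\propto g , \qquad P(K) \propto K^{-\beta}
     && \text{PageRank on average proportional to in-weight} ,\\
  K(P) &\sim N_*\bigl(g(P)\bigr) \propto P^{-(\mu-1)}
     && \text{rank of a node $=$ number of nodes above it} ,\\
  \Rightarrow\quad P(K) &\propto K^{-1/(\mu-1)} ,
     \qquad \beta = \frac{1}{\mu-1} .
\end{align*}
```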
to represent the result in a form independent of we compare the values of with the corresponding random model value .this value is computed assuming a random distribution of points in a square when only one point appears in each column and each line ( e.g. at we have and ) .the dimensionless dispersion is then given by .from the ranking of different species we obtain the following values at : ; , ; , , ; , , , ( other have similar values ) . according to this statistical analysis of pagerank proximity between species we find that value is minimal between cf and hs showing that these are two most similar species among those considered here .the comparison of two hs individuals gives the value being significantly smaller then the proximity correlator between different species .the spectrum of is analyzed in detail in .it is shown that it has a relatively large gap due to which there is a relatively rapid relaxation of probability of a random surfer to the pagerank values . for escherichia coli v1.1 ( a ) , and yeast ( b ) gene transcription networks on ( network data are taken from and ) .the nodes with five top probability values of pagerank , cheirank and 2drank are labeled by their corresponding operon ( node ) names ; they correspond to 5 lowest values of indexes .after .[ fig14_8],scaledwidth=48.0% ] -0.3 cm at present the analysis of gene transcription regulation networks and recovery of their control biological functions becomes an active research field of bioinformatics ( see e.g. ) . here ,following , we provide two simple examples of 2dranking analysis for gene transcriptional regulation networks of escherichia coli ( , ) and yeast ( , ) . in the construction of matrixthe outgoing links to all nodes in each column are taken with the same weight , . the distribution of nodes in pagerank - cheirank plane is shown in fig .[ fig14_8 ] .the top 5 nodes , with their operon names , are given there for indexes of pagerank , cheirank and 2drank .this ranking selects operons with most high functionality in communication ( ) , popularity ( ) and those that combines these both features ( ) . for these networksthe correlator is close to zero ( for escherichia coli and for yeast , see fig .[ fig4_2 ] ) ) that indicates the statistical independence between outgoing and ingoing links being quite similarly to the case of the pcn for the linux kernel .this may indicate that a slightly negative correlator is a generic property for the data flow network of control and regulation systems .a similar situation appears for networks of business process management and brain neural networks .thus it is possible that the networks performing control functions are characterized in general by small correlator values .we expect that 2dranking will find further useful applications for large scale gene regulation networks .the complexity of the well - known game go is such that no computer program has been able to beat a good player , in contrast with chess where world champions have been bested by game simulators .it is partly due to the fact that the total number of possible allowed positions in go is about , compared to e.g. only for chess .it has been argued that the complex network analysis can give useful insights for a better understanding of this game . with this aima network , modeling the game of go , has been defined by a statistical analysis of the data bases of several important historical professional and amateur japanese go tournaments . 
in this approach moves / nodes are defined as all possible patterns in plaquettes on a go board of intersections . taking into account all obvious symmetry operations the number of non - equivalent moves is reduced to . moves which are close in space ( typically within a maximal distance of 4 intersections ) are assumed to belong to the same tactical fight , generating transitions on the network . using the historical data of many games , the transition probabilities between the nodes can be determined , leading to a directed network with a finite size perron - frobenius operator which can be analyzed by the tools of pagerank , cheirank , the complex eigenvalue spectrum , the properties of certain selected eigenvectors and other quantities . the studies are done for plaquettes of different sizes , with the corresponding network size changing from for square plaquettes with intersections up to maximal for diamond - shape plaquettes with intersections plus the four at distance two from the center in the four directions left , right , top , down . it is shown that the pagerank leads to a frequency distribution of moves which obeys a zipf law with exponents close to unity , but this exponent may vary slightly if the network is constructed with shorter or longer sequences of successive moves . the important nodes in certain eigenvectors may correspond to certain strategies , such as protecting a stone , and the eigenvectors also differ between amateur and professional games . it is also found that the different phases of the game of go are characterized by a different spectrum of the matrix . the obtained results show that , with the help of the google matrix analysis , it is possible to extract communities of moves which share some common properties . the authors of these studies argue that the google matrix analysis can find a number of interesting applications in the theory of games and in human decision - making processes . understanding the nature and origins of mass opinion formation is an outstanding challenge of democratic societies . in the last few years the enormous development of such social networks as livejournal , facebook , twitter , and vkontakte , with up to hundreds of millions of users , has demonstrated the growing influence of these networks on social and political life . the small - world scale - free structure of social networks , combined with their rapid communication facilities , leads to very fast information propagation over networks of electors , consumers , and citizens , making them react very quickly to social events . this calls for new theoretical models which would allow one to understand the opinion formation process in the modern society of the 21st century . important steps in the analysis of opinion formation have been made with the development of various voter models , described in great detail in . this research field became known as sociophysics .
here ,following , we analyze the opinion formation process introducing several new aspects which take into account the generic features of social networks .first , we analyze the opinion formation on real directed networks such as www of universities of cambridge and oxford ( 2006 ) , twitter ( 2009 ) and livejournal .this allows us to incorporate the correct scale - free network structure instead of unrealistic regular lattice networks , often considered in voter models .second , we assume that the opinion at a given node is formed by the opinions of its linked neighbors weighted with the pagerank probability of these network nodes .the introduction of such a weight represents the reality of social networks where network nodes are characterized by the pagerank vector which provides a natural ranking of node importance , or elector or society member importance . in a certain sense , the top nodes of pagerank correspond to a political elite of the social network whose opinion influences the opinions of other members of the society .thus the proposed pagerank opinion formation ( prof ) model takes into account the situation in which an opinion of an influential friend from high ranks of the society counts more than an opinion of a friend from a lower society level .we argue that the pagerank probability is the most natural form of ranking of society members .indeed , the efficiency of pagerank rating had been well demonstrated for various types of scale - free networks .the prof model is defined in the following way . in agreement with the standard pagerank algorithmwe determine the probability for each node ordered by pagerank index ( using ) .in addition , a network node is characterized by an ising spin variable which can take values or , coded also by red or blue color , respectively .the sign of a node is determined by its direct neighbors , which have pagerank probabilities . for that we compute the sum over all directly linked neighbors of node : where and denote the pagerank probability of a node pointing to node ( ingoing link ) and a node to which node points to ( outgoing link ) , respectively . here, the two parameters and are used to tune the importance of ingoing and outgoing links with the imposed relation ( ) .the values and correspond to red and blue nodes , and the spin takes the value or , respectively , for or . in a certain sensewe can say that a large value of parameter corresponds to a conformist society in which an elector takes an opinion of other electors to which he / she points .in contrast , a large value of corresponds to a tenacious society in which an elector takes mainly the opinion of those electors who point to him / her .a standard random number generator is used to create an initial random distribution of spins on a given network .the time evolution then is determined by the relation ( [ eqopinion1 ] ) applied to each spin one by one . when all spins are turned following ( [ eqopinion1 ] ) a time unit is changed to . up to random initial generations of spins are used to obtain statistically stable results .we present results for the number of red nodes since other nodes are blue . to find a final red fraction , shown in , in dependence on an initial red fraction , shown in axis ;data are shown inside the unit square .the values of are defined as a relative number of realisations found inside each of cells which cover the whole unit square . 
here realizations of randomly distributed colors are used to obtained values ; for each realization the time evolution is followed up the convergence time with up to iterations ; ( a ) cambridge network ; ( b ) oxford network at . the probability is proportional to color changing from zero ( blue / black ) to unity ( red / gray ) .after .[ fig14_9],scaledwidth=48.0% ] -0.3 cm the main part of studies is done for the www of cambridge and oxford discussed above .we start with a random realization of a given fraction of red nodes which evolution in time converges to a steady state with a final fraction of red nodes approximated after time . however , different initial realisations with the same value evolve to different final fractions clearly showing a bistability phenomenon . to analyze how the final fraction of red nodes depends on its initial fraction ,we study the time evolution for a large number of initial random realizations of colors following it up to the convergence time for each realization .we find that the final red nodes are homogeneously distributed in pagerank index .thus there is no specific preference for top society levels for an initial random distribution .the probability distribution of final fractions is shown in fig .[ fig14_9 ] as a function of initial fraction at .the results show two main features of the model : a small fraction of red opinion is completely suppressed if and its larger fraction dominates completely for ; there is a bistability phase for the initial opinion range .of course , there is a symmetry in respect to exchange of red and blue colors .for the small value we have with .for the larger value we have , . to find a final red fraction , shown in , in dependence on an initial red fraction , shown in axis ; data are shown inside the unit square .the values of are defined as a relative number of realizations found inside each of cells which cover the whole unit square . here realizations of randomly distributed colors are used to obtained values ; for each realization the time evolution is followed up the convergence time with up to steps .( a ) cambridge network ; ( b ) oxford network ; here .the probability is proportional to color changing from zero ( blue / black ) to unity ( red / gray ) .after .[ fig14_10],scaledwidth=48.0% ] -0.3 cm our interpretation of these results is the following . for small values of the opinion of a given society memberis determined mainly by the pagerank of neighbors to whom he / she points ( outgoing links ) .the pagerank probability of nodes to which many nodes point is usually high , since is proportional to the number of ingoing links .thus at the society is composed of members who form their opinion by listening to an elite opinion . 
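A minimal sketch of the PROF update rule described above: every node carries a spin sigma_i = +1 or -1, and a sweep sets sigma_i to the sign of a weighted sum of the spins of its in- and out-neighbours, each neighbour weighted by its PageRank probability, with the two weights a and b obeying a + b = 1. The directed network below is a random placeholder (the quoted studies use the Cambridge and Oxford WWW, Twitter and LiveJournal crawls); which of a and b multiplies the ingoing sum, the mapping of sign to colour and the sequential node-by-node update order are implementation assumptions, so only the qualitative behaviour, not the bistability diagrams of the figures, should be expected from it.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    N = adj.shape[0]
    S = adj.astype(float).copy()
    S[:, S.sum(axis=0) == 0] = 1.0
    S /= S.sum(axis=0, keepdims=True)
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, n_iter=300):
    p = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(n_iter):
        p = G @ p
    return p

def prof_evolution(adj, P, f_red=0.3, a=0.5, sweeps=30, seed=0):
    """PROF model sketch: sigma_i <- sign( a*sum_{j->i} P_j s_j
    + b*sum_{i->j} P_j s_j ), with b = 1 - a; sequential sweeps."""
    rng = np.random.default_rng(seed)
    N, b = adj.shape[0], 1.0 - a
    sigma = np.where(rng.random(N) < f_red, 1, -1)        # red = +1, blue = -1
    in_nbrs = [np.nonzero(adj[i, :])[0] for i in range(N)]   # j with j -> i
    out_nbrs = [np.nonzero(adj[:, i])[0] for i in range(N)]  # j with i -> j
    for _ in range(sweeps):
        for i in range(N):
            s = a * np.sum(P[in_nbrs[i]] * sigma[in_nbrs[i]]) \
                + b * np.sum(P[out_nbrs[i]] * sigma[out_nbrs[i]])
            if s != 0:                    # ties leave the spin unchanged
                sigma[i] = 1 if s > 0 else -1
    return np.mean(sigma == 1)            # final fraction of red nodes

# placeholder random directed network; adj[i, j] = 1 encodes a link j -> i
rng = np.random.default_rng(6)
adj = (rng.random((1000, 1000)) < 0.01).astype(int)
np.fill_diagonal(adj, 0)
P = pagerank(google_matrix(adj))
for f0 in (0.2, 0.4, 0.6, 0.8):
    print("initial red fraction %.1f -> final %.2f"
          % (f0, prof_evolution(adj, P, f_red=f0, a=0.5)))
```

Increasing the weight of the outgoing-link term corresponds to the conformist limit discussed above (electors adopting the opinion of those they point to), while increasing the weight of the ingoing-link term corresponds to the tenacious limit.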
in such a society its elite with one color opinion can impose this opinion on a large fraction of the society .indeed , the direct analysis of the case , where the top nodes of pagerank index have the same red color , shows that this 1% of the society elite can impose its opinion to about 50% of the whole society at small values ( conformist society ) while at large values ( tenacious society ) this fraction drops significantly ( see fig.4 in ) .we attribute this to the fact that in fig .[ fig14_9 ] we start with a randomly distributed opinion , since the opinion of the elite has two fractions of two colors this creates a bistable situation when the two fractions of society follow the opinions of this divided elite , which makes the situation bistable on a larger interval of compared to the case of a tenacious society at . when we replace in ( [ eqopinion1 ] ) by the bistability disappears .however , the detailed understanding of the opinion formation on directed networks still waits it development .indeed , the results of prof model for the livejournal and twitted networks show that the bistability in these networks practically disappears .also e.g. for the twitter network studied in sec .[ s10.1 ] , the elite of ( about 0.1% of the whole society ) can impose its opinion to 80% of the society at small and to about 30% for .it is possible that a large number of links between top pagerank nodes in twitter creates a stronger tendency to a totalitarian opinion formation comparing to the case of university networks . at the same timethe studies of opinion formation with the prof model on the ulam networks , which have not very large number of links , show practically no bistability in opinion formation .it is expected that a small number of loops is at the origin of such a difference in respect to university networks .finally we discuss a more generic version of opinion formation called the prof - sznajd model .indeed , we see that in the prof model on university network opinions of small groups of red nodes with are completely suppressed that seems to be not very realistic .in fact , the sznajd model features the idea of resistant groups of a society and thus incorporates a well - known trade union principle `` united we stand , divided we fall '' .usually the sznajd model is studied on regular lattices .its generalization for directed networks is done on the basis of the notion of group of nodes at each discrete time step .the evolution of group is defined by the following rules : we pick in the network by random a node and consider the polarization of highest pagerank nodes pointing to it ; if node and all other nodes have the same color ( same polarization ) , then these nodes form a group whose effective pagerank value is the sum of all the member values ; consider all the nodes pointing to any member of the group and check all these nodes directly linked to the group : if an individual node pagerank value is less than the defined above , the node joins the group by taking the same color ( polarization ) as the group nodes and increase by the value of ; if it is not the case , a node is left unchanged .the above time step is repeated many times during time , counting the number of steps and choosing a random node on each next step .the time evolution of this prof - sznajd model converges to a steady state approximately after steps .this is compatible with the results obtained for the prof model .however , the statistical fluctuations in the steady - state regime are present keeping the 
color distribution only on average .the dependence of the final fraction of red nodes on its initial value is shown by the density plot of probability in fig .[ fig14_10 ] for the university networks .the probability is obtained from many initial random realizations in a similar way to the case of fig . [ fig14_9 ] .we see that there is a significant difference compared to the prof model : now even at small values of we find small but finite values of , while in the prof model the red color disappears at .this feature is related to the essence of the sznajd model : here , even small groups can resist against the totalitarian opinion .other features of fig .[ fig14_10 ] are similar to those found for the prof model : we again observe bistability of opinion formation .the number of nodes , which form the group , does not significantly affect the distribution ( for studied ) .the above studies of opinion formation models on scale - free networks show that the society elite , corresponding to the top pagerank nodes , can impose its opinion on a significant fraction of the society .however , for a homogeneous distribution of two opinions , there exists a bistability range of opinions which depends on a conformist parameter characterizing the opinion formation .the proposed prof - sznajd model shows that totalitarian opinions can be escaped from by small subcommunities .the enormous development of social networks in the last few years definitely shows that the analysis of opinion formation on such networks requires further investigations .above we considered many examples of real directed networks where the google matrix analysis finds useful applications .the examples belong to various sciences varying from www , social and wikipedia networks , software architecture to world trade , games , dna sequences and ulam networks .it is clear that the concept of markov chains and google matrix represents now the mathematical foundation of directed network analysis . for hermitian and unitary matricesthere are now many universal concepts , developed in theoretical physics , so that the main properties of such matrices are well understood .indeed , such characteristics as level spacing statistics , localization and delocalization properties of eigenstates , anderson transition , quantum chaos features can be now well handled by various theoretical methods ( see e.g. ) .a number of generic models has been developed in this area allowing to understand the main effects via numerical simulations and analytical tools .in contrast to the above case of hermitian or unitary matrices , the studies of matrices of markov chains of directed networks are now only at their initial stage . in this review , on examples of real networks we illustrated certain typical properties of such matrices . among themthere is the fractal weyl law , which has certain traces in the field of quantum chaotic scattering , but the main part of features are new ones .in fact , the spectral properties of markov chains had not been investigated on a large scale .we try here to provide an introduction to the properties of such matrices which contain all information about large scale directed networks .the google matrix is like _ the library of babel _ , which contains everything .unfortunately , we are still not able to find generic markov matrix models which reproduce the main features of the real networks . among themthere is the possible spectral degeneracy at damping , absence of spectral gap , algebraic decay of eigenvectors . 
due to absence of such generic modelsit is still difficult to capture the main properties of real directed networks and to understand or predict their variations with a change of network parameters . at the momentthe main part of real networks have an algebraic decay of pagerank vector with an exponent .however , certain examples of ulam networks ( see figs . [ fig6_6 ] , [ fig6_7 ] ) show that a delocalization of pagerank probability over the whole network can take place .such a phenomenon looks to be similar to the anderson transition for electrons in disordered solids .it is clear that if an anderson delocalization of pagerank would took place , as a result of further developments of the www , the search engines based on the pagerank would loose their efficiency since the ranking would become very sensitive to various fluctuations . in a sensethe whole world would go blind the day such a delocalization takes place . due tothat a better understanding of the fundamental properties of google matrices and their dependencies on various system parameters have a high practical significance .we believe that the theoretical research in this direction should be actively continued . in many respects , as _ the library of babel _ ,the google matrix still keeps its secrets to be discovered by researchers from various fields of science .we hope that a further research will allow `` _ to formulate a general theory of the library and solve satisfactorily the problem which no conjecture had deciphered : the formless and chaotic nature of almost all the books . _ '' are grateful to our colleagues m. abel , a. d. chepeliankii , y .- h .eom , b. georgeot , o. giraud , v. kandiah , o. v. zhirov for fruitful collaborations on the topics included in this review .we also thank our partners of the ec fet open project nadine a. benczr , n. litvak , s. vigna and colleague a.kaltenbrunner for illuminating discussions .our special thanks go to debora donato for her insights at our initial stage of this research .our research presented here is supported in part by the ec fet open project `` new tools and algorithms for directed network analysis '' ( nadine 288956 ) .this work was granted access to the hpc resources of calmip ( toulouse ) under the allocation 2012-p0110 .we also thank the united nations statistics division for provided help and friendly access to the un comtrade database .
|
in the past decade modern societies have developed enormous communication and social networks . their classification and information retrieval processing has become a formidable task for society . due to the rapid growth of the world wide web and of social and communication networks , new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way . various search engines make extensive use of such methods . it is highly important to develop new tools to classify and rank massive amounts of network information in a way that is adapted to internal network structures and characteristics . this review describes the google matrix analysis of directed complex networks , demonstrating its efficiency with various examples including the world wide web , wikipedia , software architectures , world trade , social and citation networks , brain neural networks , dna sequences and ulam networks . the analytical and numerical matrix methods used in this analysis originate from the fields of markov chains , quantum chaos and random matrix theory . `` the library exists _ ab aeterno_. '' + jorge luis borges _ the library of babel _
|
in wireless power transfer , a concept originally conceived by nikola tesla in the 1890s , energy is transmitted from a power source to a destination over the wireless medium . the use of wireless power transfer can avoid the costly process of planning and installing power cables in buildings and infrastructure .one of the challenges for implementing wireless power transfer is its low energy transfer efficiency , as only a small fraction of the emitted energy can be harvested at the receiver due to severe path loss and the low efficiency of radio frequency ( rf ) - direct current ( dc ) conversion .in addition , early electronic devices , such as first generation mobile phones , were bulky and suffered from high power consumption .for the aforementioned reasons , wireless power transfer had not received much attention until recently , although tesla had already provided a successful demonstration to light electric lamps wirelessly in 1891 . in recent years, a significant amount of research effort has been dedicated to reviving the old ambition of wireless power transfer , which is motivated by the following two reasons .the first reason is the tremendous success of wireless sensor networks ( wsns ) which have been widely applied for intelligent transportation , environmental monitoring , etc .however , wsns are energy constrained , as each sensor has to be equipped with a battery which has a limited lifetime in most practical cases .it is often costly to replace these batteries and the application of conventional energy harvesting ( eh ) technologies relying on natural energy sources is problematic due to their intermittent nature .wireless power transfer can be used as a promising alternative to increase the lifetime of wsns .the second reason is the now widespread use of low - power devices that can be charged wirelessly .for example , intel has demonstrated the wireless charging of a temperature and humidity meter as well as a liquid - crystal display using the signals of a tv station km away .this article considers the combination of wireless power transfer and information transmission , a recently developed technique termed _ simultaneous wireless information and power transfer _ ( swipt ) , in which information carrying signals are also used for energy extraction .efficient swipt requires some fundamental changes in the design of wireless communication networks .for example , the conventional criteria for evaluating the performance of a wireless system are the information transfer rates and the reception reliability . however , if some users in the system perform eh by using rf signals , the trade - off between the achievable information rates and the amount of harvested energy becomes an important figure of merit . in this context , an ideal receiver , which has the capability to perform information decoding ( i d ) andeh simultaneously , was considered in . 
in , a more practical receiver architecturewas proposed , in which the receiver has two circuits to perform i d and eh separately .this article focuses on the application of smart antenna technologies , namely multiple - input multiple - output ( mimo ) and relaying , in swipt systems .the use of these smart antenna technologies is motivated by the fact that they have the potential to improve the energy efficiency of wireless power transfer significantly .for example , mimo can be used to increase the lifetime of energy constrained sensor networks , in which a data fusion center is equipped with multiple antennas with which it can focus its rf energy on sensors that need to be charged wirelessly , leading to a more energy efficient solution compared to a single - antenna transmitter .furthermore , a relay can harvest energy from rf signals from a source and then use the harvested energy to forward information to the destination , which not only facilitates the efficient use of rf signals but also provides motivation for information and energy cooperation among wireless nodes .the application of smart antenna technologies to swipt opens up many new exciting possibilities but also brings some challenges for improving spectral and energy efficiency in wireless systems .the organization of this article is as follows. some basic concepts of swipt are introduced first .then , the separate and joint application of mimo and relaying in swipt is discussed in detail . finally some future research challenges for the design of multi - antenna and multi - node swipt systems are provided .in swipt systems , i d and eh can not be performed on the same received signal in general . furthermore , a receiver with a single antenna typically may not be able to collect enough energy to ensure reliable power supply .hence , centralized / distributed antenna array deployments , such as mimo and relaying , are required to generate sufficient power for reliable device operation . in the following ,we provide an overview of mimo swipt receiver structures , namely the power splitting , separated , time - switching , and antenna - switching receivers , as shown in fig .[ fig : cap_sys ] . in a separated receiver architecture , an eh circuit and an i d circuitare implemented into two separate receivers with separated antennas , which are served by a common multiple antenna transmitter .the separated receiver structure can be easily implemented using off - the - shelf components for the two individual receivers . moreover , the trade - off between the achievable information rate and the harvested energy can be optimized based on the channel state information ( csi ) and feedback from the two individual receivers to the transmitter . for instance, the covariance matrix of the transmit signal can be optimized for capacity maximization of the i d receiver subject to a minimum required amount of energy transferred to the eh receiver .this receiver consists of an information decoder , an rf energy harvester , and a switch at each antenna . in particular ,each receive antenna can switch between the eh circuit and the i d circuit periodically based on a time switching sequence for eh and i d , respectively . 
by taking into account the channel statistics and the quality of service requirements regarding the energy transfer , the time switching sequence and the transmit signal can be jointly optimized for different system design objectives .employing a passive power splitting unit , this receiver splits the received power at each antenna into two power streams with a certain power splitting ratio before any active analog / digital signal processing is performed .then , the two streams are sent to an energy harvester and an information decoder , respectively , to facilitate simultaneous eh and i d . the power splitting ratio can be optimized for each receive antenna .in particular , a balance can be struck between the system achievable information rate and the harvested energy by varying the value of the power splitting ratios .further performance improvement can be achieved by jointly optimizing the signal and the power splitting ratios . with multiple antennas , low - complexity antenna switching between decoding / rectifying can be used to enable swipt .for instance , given antennas , a subset of antennas can be selected for i d , while the remaining antennas are used for eh . unlike the time switching protocol which requires stringent time synchronization and the power splitting protocol where performance may degrade in case of hardware imperfections ,the antenna switching protocol is easy to implement , and attractive for practical swipt designs . from a theoretical point of view, antenna switching may be interpreted as a special case of power splitting with binary power splitting ratios at each receive antenna .[ fig : cap_eh ] illustrates the performance trade - offs of the considered swipt receiver structures .in particular , we show the average total harvested energy versus the average system achievable information rate in a point - to - point scenario with one transmitter and one receiver .a transmitter equipped with antennas is serving a receiver equipped with receive antennas .resource allocation is performed to achieve the respective optimal system performance in each case . for a fair comparison , for the separated receiver, the eh receiver and the i d receiver are equipped with a single antenna , respectively , which results in . besides, we also illustrate the trade - off region for a suboptimal power splitting receiver with a fixed power splitting ratio of at each antenna .it can be observed that the optimized power splitting receiver achieves the largest trade - off region among the considered receivers at the expense of incurring the highest hardware complexity and the highest computational burden for resource allocation .mimo can be exploited to bring two distinct benefits to swipt networks . on the one hand ,due to the broadcast nature of wireless transmission , the use of additional antennas at the receiver can yield more harvested energy . on the other hand, the extra transmit antennas can be exploited for beamforming , which could significantly improve the efficiency of information and energy transfer .the impact of mimo on point - to - point swipt scenarios with one source , one eh receiver , and one i d receiver was studied in , where the trade - off between the mimo information rate and power transfer was characterized .the benefits of mimo are even more obvious for the multiuser mimo scenario illustrated in fig . 
[fig1](a ) .specifically , a source equipped with multiple antennas serves multiple information receivers , where the rf signals intended for the i d receivers can also be used to charge eh receivers wirelessly .since there are multiple users in the system , co - channel interference ( cci ) needs to be taken into account , and various interference mitigation strategies can be incorporated into swipt implementations , e.g. block diagonalization precoding as in , where information is sent to receivers that are interference free , and energy is transmitted to the remaining receivers . furthermore , it is beneficial to employ user scheduling , which allows receivers to switch their roles between an eh receiver and an i d receiver based on the channel quality in order to further enlarge the trade - off region between the information rate and the harvested energy . the multi - source multiuser mimo scenario illustrated in fig .[ fig1](b ) is another important swipt application , where multiple source - destination pairs share the same spectrum and the associated interference control is challenging . since in interference channels , interference signals and information bearing signals co - exist , issues such as interference collaboration and coordination bring both new challenges and new opportunities for the realization of swipt , which are very different from those in the single source - destination pair scenario . for example , with antenna selection and interference alignment as illustrated in , the received signal space can be partitioned into two subspaces , where the subspace containing the desired signals is used for information transfer , and the other subspace containing the aligned interference is used for power transfer .this design is a win - win strategy since the information transfer is protected from interference , and the formerly discarded interference can be utilized as an energy source .more importantly , this approach offers a new look at interference control , since the formerly undesired and useless interference can be used to enhance the performance of swipt systems . on the other hand ,the use of rf eh introduces additional constraints to the design of transmit beamforming .hence , the solutions well - known from conventional wireless networks , such as zero forcing and maximum ratio transmission , need to be suitably modified to be applicable in swipt systems , as shown in .centralized mimo as described in section [ section : mimo swipt ] may be difficult to implement due to practical constraints , such as the size and cost of mobile devices .this motivates the use of relaying in swipt networks .in addition , the use of wireless power transfer will encourage mobile nodes to participate in cooperation , since relay transmissions can be powered by the energy harvested by the relay from the received rf signals and hence the battery lifetime of the relays can be increased .the benefits of using eh relays can be illustrated based on the following example .consider a relaying network with one source - destination pair and a single decode - and - forward ( df ) relay .swipt is performed at the relay by using the power splitting receiver structure shown in fig .[ fig : cap_sys ] .the performance of the scheme using this eh relay is compared to that of direct transmission , i.e. 
, when the relay is not used , in fig .[ fig : location ] .as can be observed from the figure , the use of an eh relay can decrease the outage probability from to , a more than ten - fold improvement in reception reliability , compared to direct transmission . the performance of time sharing and power splitting swipt systems employing amplify - and - forward ( af ) and df relays was analyzed in , and the impact of power allocation was investigated in .these existing results demonstrate that the behavior of the outage probability in relay assisted swipt systems is different from that in conventional systems with self - powered relays .for example , in the absence of a direct source - destination link , the outage probability with an eh relay decays with increasing signal - to - noise ratio ( snr ) at a rate of , i.e. , slower than the rate of in conventional systems .the reason for this performance loss is that the relay transmission power fluctuates with the source - relay channel conditions .this performance loss can be mitigated by exploiting user cooperation .for example , in a network with multiple user pairs and an eh relay , advanced power allocation strategies , such as water filling based and auction based approaches , can be used to ensure that the outage probability decays at the faster rate of .this performance gain is obtained because allowing user pairs to share power can avoid the situation in which some users are lacking transmission power whereas the others have more power than needed .relay selection is an important means to exploit multiple relays with low system complexity , and the use of eh also brings fundamental changes to the design of relay selection strategies . in conventional relay networks, it is well known that the source - relay and relay - destination channels are equally important for relay selection , which means that the optimal location of the relay is the middle of the line connecting the source and the destination , i.e. , ( m,0 ) for the scenario considered in fig . [fig : location ] . nevertheless , fig .[ fig : location ] shows that an eh relay exhibits different behavior than a conventional relay , i.e. , moving the relay from the source towards the middle point ( m,0 ) has a detrimental effect on the outage probability .we note that this observation is also valid for swipt systems with af relays .this phenomenon is due to the fact that in eh networks , the quality of the source - relay channels is crucial since it determines not only the transmission reliability from the source to the relays , but also the harvested energy at the relays . 
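The decay-rate argument above can be checked numerically with the following Monte Carlo sketch, which compares a conventional self-powered DF relay with a power-splitting EH relay in the absence of a direct link. The fading model, splitting ratio, conversion efficiency and target rate are illustrative assumptions rather than the settings behind the figure discussed above.

```python
# Monte Carlo sketch: outage of a power-splitting EH DF relay versus a
# conventional self-powered DF relay, no direct source-destination link.
# Channel power gains are unit-mean exponentials (Rayleigh fading); the
# splitting ratio, RF-DC efficiency and target rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
trials    = 500_000
rho, eta  = 0.5, 0.7            # power-splitting ratio and conversion efficiency (assumed)
R         = 1.0                 # target rate; two hops -> SNR threshold 2**(2R) - 1
threshold = 2**(2 * R) - 1

g1 = rng.exponential(1.0, trials)   # source-relay power gain
g2 = rng.exponential(1.0, trials)   # relay-destination power gain

print(" P/N0 [dB]   conventional DF   EH power-splitting DF")
for snr_db in (20, 30, 40, 50):
    gamma = 10 ** (snr_db / 10)                       # transmit SNR P/N0
    # conventional relay: fixed relay power equal to the source power
    conv = np.mean(np.minimum(gamma * g1, gamma * g2) < threshold)
    # EH relay: relay power fluctuates with the source-relay channel
    snr1 = (1 - rho) * gamma * g1
    snr2 = eta * rho * gamma * g1 * g2
    eh = np.mean(np.minimum(snr1, snr2) < threshold)
    print(f"{snr_db:10d}   {conv:15.2e}   {eh:21.2e}")
```

With these (assumed) parameters the printed outage of the EH relay decays more slowly with the transmit SNR than that of the self-powered relay, reflecting the fluctuation of the harvested relay power with the source-relay channel conditions.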
in , it was shown that the max - min selection criterion , a strategy optimal for conventional df relaying networks , can only achieve a small fraction of the full diversity gain in relaying swipt systems .mimo and cooperative relaying represent two distinct ways of exploiting spatial diversity , and both techniques can significantly enhance the system s energy efficiency , which is of paramount importance for swipt systems .hence , the combination of these two smart antenna technologies is a natural choice for swipt systems .the benefits of this combination can be illustrated using the following example .consider a lecture hall packed with students , in which there are many laptops / smart phones equipped with multiple antennas as well as some low - cost single - antenna sensors deployed for infrastructure monitoring .this hall can be viewed as a heterogeneous network consisting of mobile devices with different capabilities .inactive devices with mimo capabilities can be exploited as relays to help the active users in the network , particularly the low - cost sensors . since the relays have multiple antennas , more advanced receiver architectures , such as antenna switching receivers , can be used .in addition , the use of these mimo relays opens the possibility to serve multiple source - destination pairs simultaneously . in this context, it is important to note that the use of swipt will encourage the inactive mimo users to serve as relays since helping other users will not reduce the lifetime of the relay batteries .therefore , the mimo relays can be exploited as an extra dimension for performance improvement , and can achieve an improved trade - off between the information rate and the harvested energy . as discussed in section [ section : mimo swipt ] ,one unique feature of swipt systems is the energy efficient use of cci , which is viewed as a detrimental factor that limits performance in conventional wireless systems . in particular, cci can be exploited as a potential source of energy in mimo relay swipt systems . to illustrate this point ,let us consider the following example .an af relay with antennas is employed to help a single - antenna source which communicates with a single - antenna destination .the relay first harvests energy from the received rf signals with the power splitting architecture , and then uses this energy to forward the source signals .two separate cases are considered , i.e. , without cci and with cci . to exploit the benefits of multiple antennas , linear processing of the information streamis performed to facilitate i d .since the optimal linear processing matrix is difficult to characterize analytically , a heuristic rank-1 processing matrix is adopted . as such , in the case without cci , the processing matrix is designed based on the principle of maximum ratio transmission , i.e. , , where the vectors of size and of size are chosen to match the first and second hop channels , respectively , and is a scaling factor to ensure the relay transmit power constraint . 
on the other hand , in the presence of cci , the relay first applies the minimum mean square error criterion to suppress the cci , and then forwards the transformed signal to the destination using maximum ratio transmission .[ fig : cci ] illustrates the achievable ergodic rate as a function of the average strength of the cci , with the optimized power splitting ratio .we observe that increasing the number of relay antennas significantly improves the achievable rate .for instance , increasing the number of antennas from three to six nearly triples the rate .moreover , we see that when the cci is weak ( ) , the rate difference is negligible compared to the case without cci .however , when the cci is strong , a substantial rate improvement is realized .in fact , the stronger the cci , the higher the rate gain . for example , in some applications , the relays will operate at the cell boundaries and the benefit of exploiting cci will be significant in such situations .in the following , we discuss some research challenges for future mimo and relay assisted swipt . 1 .energy efficient mimo swipt : because of severe path loss attenuation , the energy efficiency of mimo swipt systems may not be satisfactory for long distance power transfer unless advanced green technologies , such as eh technologies relying on natural energy sources , and mimo resource allocation are combined .we now discuss two possible approaches to address this problem .* eh transmitter : in this case , the transmitter can harvest energy from natural renewable energy sources such as solar , wind , and geothermal heat . then, the energy harvested at the transmitter can be transferred to the desired receiver over the wireless channel , thereby reducing substantially the operating costs of the service providers and improving the energy efficiency of the system , since renewable energy sources can be exploited virtually for free .however , the time varying availability of the energy generated from renewable energy sources may introduce energy outages in swipt systems and efficient new techniques have to be developed to overcome them .* mimo energy efficiency optimization : energy efficient mimo resource allocation can be formulated as an optimization problem in which the degrees of freedom in the system such as space , power , frequency , and time are optimized for maximization of the energy efficiency . by taking into account the circuit power consumption of all nodes , the finite energy storage at the receivers , the excess spatial degrees of freedom in mimo systems , and the utilization of the recycled transmit power and the interference power , the energy efficiency optimization reveals the operating regimes for energy efficient swipt systems . yet, the non - convexity of the energy efficiency objective function is an obstacle in designing algorithms for achieving the optimal system performance and low - complexity but efficient algorithms are yet to be developed .energy efficient swipt relaying : the concepts of swipt and relaying are synergistic since the use of swipt can stimulate node cooperation and relaying is helpful to improve the energy efficiency of swipt . in the following ,several research challenges for relay assisted swipt are discussed : * practical relaying systems suffer from spectral efficiency reduction due to half - duplex operation .one possible approach to overcome this limitation is to use the idea of successive relaying , where two relays listen and transmit in succession . 
when implemented in a swipt system , the inter - relay interference , which is usually regarded as detrimental ,can now be exploited as a source of energy .another promising solution is to adopt full - duplex transmission . in the ideal case ,full - duplex relaying can double the spectral efficiency , but the loopback interference corrupts the information signal in practice .advanced mimo solutions can be designed to exploit such loopback interference as an additional source of energy .* relay assisted swipt is not limited to the case of eh relays , and can be extended to scenarios in which rf eh is performed at the source and/or the destination based on the signals sent by the relay .for example , in wsns , two sensors may communicate with each other with the help of a self - powered data fusion center . for this type of swipt relaying ,the relaying protocol needs to be carefully redesigned , since an extra phase for transmitting energy to the source and the destination is needed .* most existing works on swipt relaying have assumed that all the energy harvested at the relays can be used as relay transmission power . in practice , this assumption is difficult to realize due to non - negligible circuit power consumption , power amplifier inefficiency , energy storage losses , and the energy consumed for relay network coordination , which need to be considered when new swipt relaying protocols are designed .in addition , the superior performance of mimo / relay swipt is often due to the key assumption that perfect csi knowledge is available at the transceivers ; however , a large amount of signalling overhead will be consumed to realize such csi assumptions .therefore , for fair performance evaluation , future works should take into account the extra energy cost associated with csi acquisition .communication security management : energy transfer from the transmitter to the receivers can be facilitated by increasing the transmit power of the information carrying signal .however , a higher transmit power leads to a larger susceptibility for information leakage due to the broadcast nature of wireless channels .therefore , communication security is a critical issue in systems with swipt .* energy signal : transmitting an energy signal along with the information signal can be exploited for expediting eh at the receivers . in general, the energy signal can utilize arbitrary waveforms such as a deterministic constant tone signal .if the energy signal is a gaussian pseudo - random sequence , it can also be used to provide secure communication since it serves as interference to potential eavesdroppers .on the other hand , if the sequence is known to all legitimate receivers , the energy signal can be cancellated at the legitimate receivers before i d .however , to make such cancellation possible , a secure mechanism is needed to share the seed information for generating the energy signal sequence , to which mimo precoding / beamforming can be applied . 
*jamming is an important means to prevent eavesdroppers from intercepting confidential messages ; however , performing jamming also drains the battery of mobile devices .the use of swipt can encourage nodes in a network to act as jammers , since they can be wirelessly charged by the rf signals sent by the legitimate users .however , the efficiency of this harvest - and - jam strategy depends on the network topology , where a harvest - and - jam node needs to be located close to legitimate transmitters to harvest a sufficient amount of energy .advanced multiple - antenna technologies are needed to overcome this problem .in this article , the basic concepts of swipt and corresponding receiver architectures have been discussed along with some performance trade - offs in swipt systems . in particular, the application of smart antenna technologies , such as mimo and relaying , in swipt systems has been investigated for different network topologies .in addition , future research challenges for the design of energy efficient mimo and relay assisted swipt systems have been outlined .d. w. k. ng , e. s. lo , and r. schober , `` robust beamforming for secure communication in systems with wireless information and power transfer , '' _ ieee trans .wireless commun ._ , vol . 13 , pp . 4599 - 4615 , augd. w. k. ng , e. s. lo , and r. schober , `` wireless information and power transfer : energy efficiency optimization in ofdma systems , '' _ ieee trans .wireless commun ._ , vol . 12 , pp . 6352 6370 , dec. 2013 .i. krikidis , s. sasaki , s. timotheou and z. ding , a low complexity antenna switching for joint wireless information and energy transfer in mimo relay channels , " _ ieee trans .62 , no.5 , pp . 15771587 , may 2014 .w. wang , l. li , q. sun and j. jin , power allocation in multiuser mimo systems for simultaneous wireless information and power transfer , in _ proc .( vtc ) _ , las vegas , nv , sept .2013 , pp . 15 .b. koo and d. park , interference alignment and wireless energy transfer via antenna selection , _ ieee commun .4 , pp . 548551 , apr . 2014 .s. timotheou , i. krikidis , g. zheng and b. ottersten , beamforming for miso interference channels with qos and rf energy transfer , " _ ieee trans . wireless commun ._ , vol . 13 , no . 5 , pp . 2646 - 2658 , may 2014. a. a. nasir , x. zhou , s. durrani , and r. kennedy , `` relaying protocols for wireless energy harvesting and information processing , '' _ ieee trans .wireless commun .12 , no . 7 , pp. 36223636 , jul .z. ding , s. m. perlaza , i. esnaola and h. v. poor , power allocation strategies in energy harvesting wireless cooperative networks , " _ ieee trans .wireless commun ._ , vol.13 , no.2 , pp . 846860 , feb .z. ding and h. v. poor , user scheduling in wireless information and power transfer networks , " in _ proc .conf . on commun .systems ( iccs ) _ , macau , china , nov . 2014 , pp . 15 ( a journal version available at http://arxiv.org/abs/1403.0354 ) .s. yatawatta , a. p. petropulu and c. j. graff , energy efficient channel estimation in mimo systems , " in _ proc .conf . on acoustics , speech , and signal processing ( icassp ) _ , las vegas , nv , mar .2005 , pp .
|
simultaneous wireless information and power transfer ( swipt ) is a promising solution to increase the lifetime of wireless nodes and hence alleviate the energy bottleneck of energy constrained wireless networks . as an alternative to conventional energy harvesting techniques , swipt relies on the use of radio frequency signals , and is expected to bring some fundamental changes to the design of wireless communication networks . this article focuses on the application of advanced smart antenna technologies , including multiple - input multiple - output and relaying techniques , to swipt . these smart antenna technologies have the potential to significantly improve the energy efficiency and also the spectral efficiency of swipt . different network topologies with single and multiple users are investigated , along with some promising solutions to achieve a favorable trade - off between system performance and complexity . a detailed discussion of future research challenges for the design of swipt systems is also provided .
|
the challenge of finding and defining 2-dimensional complexity measures has been identified as an open problem of foundational character in complexity science . indeed ,for example , humans understand 2-dimensional patterns in a way that seems fundamentally different than 1-dimensional .these measures are important because current 1-dimensional measures may not be suitable to 2-dimensional patterns for tasks such as quantitatively measuring the spatial structure of self - organizing systems . on the one hand ,the application of shannon s entropy and kolmogorov complexity has traditionally been designed for strings and sequences .however , -dimensional objects may have structure only distinguishable in their natural dimension and not in lower dimensions .this is indeed a question related to the lost in dimension reductionality . a few measures of 2-dimensional complexity have been proposed before building upon shannon s entropy and block entropy , mutual information and minimal sufficient statistics and in the context of anatomical brain mri analysis . a more recent application ,also in the medical context related to a measure of consciousness , was proposed using lossless compressibility for egg brain image analysis was proposed in . on the other hand , for kolmogorov complexity, the common approach to evaluating the algorithmic complexity of a string has been by using lossless compression algorithms because the length of lossless compression is an upper bound of kolmogorov complexity .short strings , however , are difficult to compress in practice , and the theory does not provide a satisfactory solution to the problem of the instability of the measure for short strings . herewe use so - called _ turmites _( 2-dimensional turing machines ) to estimate the kolmogorov complexity of images , in particular space - time diagrams of cellular automata , using levin s coding theorem from algorithmic probability theory .we study the problem of the rate of convergence by comparing approximations to a universal distribution using different ( and larger ) sets of small turing machines and comparing the results to that of lossless compression algorithms carefully devising tests at the intersection of the application of compression and algorithmic probability .we found that strings which are more random according to algorithmic probability also turn out to be less compressible , while less random strings are clearly more compressible .compression algorithms have proven to be signally applicable in several domains ( see e.g. ) , yielding surprising results as a method for approximating kolmogorov complexity .hence their success is in part a matter of their usefulness .here we show that an alternative ( and complementary ) method yields compatible results with the results of lossless compression .for this we devised an artful technique by grouping strings that our method indicated had the same program - size complexity , in order to construct files of concatenated strings of the same complexity ( while avoiding repetition , which could easily be exploited by compression ) .then a lossless general compression algorithm was used to compress the files and ascertain whether the files that were more compressed were the ones created with highly complex strings according to our method .similarly , files with low kolmogorov complexity were tested to determine whether they were better compressed .this was indeed the case , and we report these results in section [ comparison ] . 
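The file-construction test just described can be sketched in a few lines. Here zlib stands in for the general lossless compression algorithm, and the two groups of strings are only a toy stand-in (regular versus pseudo-random strings, with repetitions allowed) for the groups of equal coding-theorem complexity used in the actual experiment.

```python
# Sketch of the consistency test: build one file per complexity class by
# concatenating the strings assigned to that class, then compare how well a
# general lossless compressor shrinks each file. zlib is a stand-in for the
# compressor; the two classes below are a toy illustration, not the groups
# produced by the coding-theorem method.
import random
import zlib

def compressed_size(strings):
    data = "".join(strings).encode("ascii")
    return len(zlib.compress(data, 9)), len(data)

random.seed(0)
n, length = 200, 12
low_class  = [("01" * length)[:length] for _ in range(n)]            # regular strings
high_class = ["".join(random.choice("01") for _ in range(length))    # random-looking strings
              for _ in range(n)]

for name, group in (("low-complexity class ", low_class),
                    ("high-complexity class", high_class)):
    c, raw = compressed_size(group)
    print(f"{name}: {raw} bytes -> {c} bytes compressed")
```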
in subsection [ eca ]we also show that the coding theorem method yields a very similar classification of the space - time diagrams of elementary cellular automata , despite the disadvantage of having used a limited sample of a _universal distribution_. in all cases the statistical evidence is strong enough to suggest that the coding theorem method is sound and capable of producing satisfactory results .the coding theorem method also represents the only currently available method for dealing with very short strings and in a sense is an expensive but powerful microscope " for capturing the information content of very small objects .central to algorithmic information theory ( ait ) is the definition of algorithmic ( kolmogorov - chaitin or program - size ) complexity : that is , the length of the shortest program that outputs the string running on a universal turing machine . a classic example is a string composed of an alternation of bits , such as , which can be described as repetitions of 01 " .this repetitive string can grow fast while its description will only grow by about . on the other hand , a random - looking string such as may not have a much shorter description than itself .a technical inconvenience of as a function taking to the length of the shortest program that produces is its uncomputability . in other words , there is no program which takes a string as input and produces the integer as output .this is usually considered a major problem , but one ought to expect a universal measure of complexity to have such a property .on the other hand , is more precisely upper semi - computable , meaning that one can find upper bounds , as we will do by applying a technique based on another semi - computable measure to be presented in the next section .the invariance theorem guarantees that complexity values will only diverge by a constant ( e.g. the length of a compiler , a translation program between and ) and that they will converge at the limit . +* invariance theorem * ( ) : if and are two universal turing machines and and the algorithmic complexity of for and , there exists a constant such that : latexmath:[\[\label{invariance } hence the longer the string , the less important is ( i.e. the choice of programming language or universal turing machine ) . however , in practice can be arbitrarily large because the invariance theorem tells nothing about the rate of convergence between and for a string of increasing length , thus having an important impact on short strings .the algorithmic probability ( also known as levin s semi - measure ) of a string is a measure that describes the expected probability of a random program running on a universal ( prefix - free . ) for details see . ] ) turing machine producing upon halting .formally , levin s semi - measure defines a distribution known as the universal distribution ( a beautiful introduction is given in ) .it is important to notice that the value of is dominated by the length of the smallest program ( when the denominator is larger ) .however , the length of the smallest that produces the string is .the semi - measure is therefore also uncomputable , because for every , requires the calculation of , involving , which is itself uncomputable . an alternative to the traditional use of compression algorithmsis the use of the concept of algorithmic probability to calculate by means of the following theorem . 
+ * coding theorem * ( levin ) : this means that if a string has many descriptions it also has a short one .it beautifully connects frequency to complexity , more specifically the frequency of occurrence of a string with its algorithmic ( kolmogorov ) complexity .the coding theorem implies that one can calculate the kolmogorov complexity of a string from its frequency , simply rewriting the formula as : an important property of as a semi - measure is that it dominates any other effective semi - measure , because there is a constant such that for all , .for this reason is often called a _ universal distribution _ .let be a function defined as follows : where is the turing machine with number ( and empty input ) that produces upon halting , and is , in this case , the cardinality of the set . in we calculated the output distribution of turing machines with 2-symbols and states for which the busy beaver values are known , in order to determine the halting time , and in results were improved in terms of number and turing machine size ( 5 states ) and in the way in which an alternative to the busy beaver information was proposed , hence no longer needing exact information of halting times in order to approximate an informative distribution .here we consider an experiment with 2-dimensional deterministic turing machines ( also called _ turmites _ ) in order to estimate the kolmogorov complexity of 2-dimensional objects , such as images that can represent space - time diagrams of simple systems .turmite _ is a turing machine which has an orientation and operates on a grid for tape " .the machine can move in 4 directions rather than in the traditional left and right movements of a traditional turing machine head .a reference to this kind of investigation and definition of 2d turing machines can be found in , one popular and possibly one of the first examples of this variation of a turing machine is lagton s ant also proven to be capable of turing - universal computation . 
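Turmites of this kind are easy to simulate. The sketch below runs a 2-dimensional Turing machine whose transitions map (state, read symbol) to (written symbol, new state, move), with absolute moves up/down/left/right and a halting state, and returns the minimal array containing the visited cells; the encoding conventions and the example transition table are arbitrary choices made for illustration, not one of the machines enumerated later.

```python
# Minimal turmite (2-dimensional Turing machine) runner. The transition table
# maps (state, symbol) -> (write, new_state, direction); HALT and the example
# machine are conventions of this sketch only.

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
HALT = -1  # halting state marker (a convention chosen for this sketch)

def run_turmite(delta, max_steps=1000):
    tape = {}                      # unbounded grid, blank symbol 0
    x = y = 0
    state = 0
    visited = {(0, 0)}
    for _ in range(max_steps):
        symbol = tape.get((x, y), 0)
        write, new_state, direction = delta[(state, symbol)]
        tape[(x, y)] = write
        visited.add((x, y))
        if new_state == HALT:
            break
        dx, dy = MOVES[direction]
        x, y = x + dx, y + dy
        state = new_state
    # minimal array containing every visited cell
    xs = [p[0] for p in visited]
    ys = [p[1] for p in visited]
    return [[tape.get((i, j), 0) for i in range(min(xs), max(xs) + 1)]
            for j in range(max(ys), min(ys) - 1, -1)]

example = {  # a small 2-state, 2-symbol machine chosen only for illustration
    (0, 0): (1, 1, "right"),
    (0, 1): (0, HALT, None),
    (1, 0): (1, 0, "left"),
    (1, 1): (1, HALT, None),
}
for row in run_turmite(example):
    print(row)
```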
in section [ sec : strings - lenghts-10 ] , we will use the so - called _ turmites _ to provide evidence that kolmogorov complexity evaluated through algorithmic probability is consistent with the other ( and today only ) method for approximating , namely lossless compression algorithms .we will do this in an artful way , given that compression algorithms are unable to compress strings that are too short , which are the strings covered by our method .this will involve concatenating strings for which our method establishes a kolmogorov complexity , which then are given to a lossless compression algorithm in order to determine whether it provides consistent estimations , that is , to determine whether strings are less compressible where our method says that they have greater kolmogorov complexity and whether strings are more compressible where our method says they have lower kolmogorov complexity .we provide evidence that this is actually the case .in section [ eca ] we will apply the results from the coding theorem method to approximate the kolmogorov complexity of 2-dimensional evolutions of 1-dimensional , closest neighbor cellular automata as defined in , and by way of offering a contrast to the approximation provided by a general lossless compression algorithm ( deflate ) .as we will see , in all these experiments we provide evidence that the method is just as successful as compression algorithms , but unlike the latter , it can deal with short strings .turmites or 2-dimensional ( 2d ) turing machines run not on a 1-dimensional tape but in a 2-dimensional unbounded grid or array . at each stepthey can move in four different directions ( _ up _ , _ down _ , _ left _ , _ right _ ) or _stop_. transitions have the format , meaning that when the machine is in state and reads symbols , it writes , changes to state and moves to a contiguous cell following direction .if is the halting state then is .in other cases , can be any of the other four directions .let be the set of turing machines with states and symbols .these machines have entries in the transition table , and for each entry there are possible instructions , that is , different halting instructions ( writing one of the different symbols ) and non - halting instructions ( 4 directions , states and different symbols ) .so the number of machines in is .it is possible to enumerate all these machines in the same way as 1d turing machines ( e.g. as has been done in and ) .we can assign one number to each entry in the transition table .these numbers go from 0 to ( given that there are different instructions ) .the numbers corresponding to all entries in the transition table ( irrespective of the convention followed in sorting them ) form a number with digits in base .then , the translation of a transition table to a natural number and vice versa can be done through elementary arithmetical operations .we take as output for a 2d turing machine the minimal array that includes all cells visited by the machine .note that this probably includes cells that have not been visited , but it is the more natural way of producing output with some regular format and at the same time reducing the set of different outputs . 
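The counting argument above translates directly into code: a machine is its transition table read as a number with n·k digits in base 4nk + k (k halting instructions plus 4nk moving ones per entry). The decoding below is one possible convention for ordering the table entries and the instruction fields; it is a sketch, not necessarily the enumeration actually used in the experiments.

```python
# One possible decoding of a machine index into an (n-state, k-symbol) turmite
# transition table. The ordering of table entries and of the instruction
# fields is an arbitrary convention of this sketch.

DIRECTIONS = ("up", "down", "left", "right")

def decode_turmite(index, n, k):
    base = 4 * n * k + k                   # instructions per table entry
    entries = n * k                        # number of (state, symbol) pairs
    assert 0 <= index < base ** entries
    digits = []
    for _ in range(entries):
        index, d = divmod(index, base)     # least significant digit first
        digits.append(d)
    delta = {}
    pairs = [(s, a) for s in range(n) for a in range(k)]
    for (state, symbol), d in zip(pairs, digits):
        if d < k:                          # halting instruction: write symbol d
            delta[(state, symbol)] = (d, "halt", None)
        else:
            d -= k
            write, d = d % k, d // k
            direction, d = DIRECTIONS[d % 4], d // 4
            new_state = d                  # remaining part: 0 .. n-1
            delta[(state, symbol)] = (write, new_state, direction)
    return delta

if __name__ == "__main__":
    n, k = 2, 2
    print("machines in this space:", (4 * n * k + k) ** (n * k))
    print(decode_turmite(12345, n, k))
```

The same mixed-radix decomposition, run in the opposite direction, maps a transition table back to its index.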
[ cols= " < , < " , ] [ corbylength ] a 1-dimensional ca can be represented by an array of _ cells _ where ( integer set ) and each takes a value from a finite alphabet .thus , a sequence of cells \{ } of finite length describes a string or _ global configuration _ on .this way , the set of finite configurations will be expressed as .an evolution comprises a sequence of configurations produced by the mapping ; thus the global relation is symbolized as : where represents time and every global state of is defined by a sequence of cell states . the global relationis determined over the cell states in configuration updated simultaneously at the next configuration by a local function as follows : wolfram represents 1-dimensional cellular automata ( ca ) with two parameters where is the number of states , and is the neighborhood radius .hence this type of ca is defined by the parameters .there are different neighborhoods ( where ) and distinct evolution rules .the evolutions of these cellular automata usually have periodic boundary conditions .wolfram calls this type of ca elementary cellular automata ( denoted simply by eca ) and there are exactly rules of this type .they are considered the most simple cellular automata ( and among the simplest computing programs ) capable of great behavioral richness .1-dimensional eca can be visualized in 2-dimensional space - time diagrams where every row is an evolution in time of the eca rule . by their simplicity andbecause we have a good understanding about them ( e.g. at least one eca is known to be capable of turing universality ) they are excellent candidates to test our measure , being just as effective as other methods that approach eca using compression algorithms that have yielded the results that wolfram obtained heuristically .we have seen that our coding theorem method with associated measure ( or in this paper for 2d kolmogorov complexity ) is in agreement with bit string complexity as approached by compressibility , as we have reported in section [ sec : strings - lenghts-10 ] . the universal distribution from turing machines that we have calculated ( )will help us to classify elementary cellular automata .classification of eca by compressibility has been done before in with results that are in complete agreement with our intuition and knowledge of the complexity of certain eca rules ( and related to wolfram s classification ) . 
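For reference, the space-time diagrams classified in this section can be generated with a few lines of code; the sketch below evolves rule 30 from a single black cell with periodic boundary conditions, following Wolfram's standard rule-number convention.

```python
# Sketch: space-time diagram of an elementary cellular automaton (here rule 30)
# started from a single black cell with periodic boundaries.
import numpy as np

def eca_step(row, rule):
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighborhood = 4 * left + 2 * row + right       # value 0..7 of each triple
    return (rule >> neighborhood) & 1

def eca_diagram(rule, width=101, steps=50):
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1                             # single black cell
    rows = [row]
    for _ in range(steps):
        row = eca_step(row, rule)
        rows.append(row)
    return np.array(rows)

if __name__ == "__main__":
    diagram = eca_diagram(30)
    for r in diagram[:10]:
        print("".join(".#"[c] for c in r))
```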
in classifications by simplest initial condition and random initial condition were undertaken , leading to a stable compressibility classification of ecas .here we followed the same procedure for both simplest initial condition ( single black cell ) and random initial condition in order to compare the classification to the one that can be approximated by using , as follows .we will say that the space - time diagram ( or evolution ) of an elementary cellular automaton after time has complexity : that is , the complexity of a cellular automaton is the sum of the complexities of the arrays or image patches in the partition matrix from breaking into square arrays of length produced by the eca after steps .an example of a partition matrix of an eca evolution is shown in fig .[ rule30 ] for eca rule 30 and where .notice that the boundary conditions for a partition matrix may require the addition of at most empty rows or empty columns to the boundary as shown in fig .[ rule30 ] ( or alternatively the dismissal of at most rows or columns ) if the dimensions ( height and width ) are not multiples of , in this case .decomposing ( with boundary conditions ) the evolution of rule 30 ( top ) eca after steps into 10 subarrays of length ( bottom ) in order to calculate to approximate its kolmogorov complexity.,width=222 ] decomposing ( with boundary conditions ) the evolution of rule 30 ( top ) eca after steps into 10 subarrays of length ( bottom ) in order to calculate to approximate its kolmogorov complexity.,width=309 ] all the first 128 ecas ( the other 128 are 0 - 1 reverted rules ) starting from the simplest ( black cell ) initial configuration running for steps , sorted from lowest to highest complexity according to .notice that the same procedure can be extended for its use on arbitrary images.,width=396 ] if the classification of all rules in eca by yields the same classification obtained by compressibility , one would be persuaded that is a good alternative to compressibility as a method for approximating the kolmogorov complexity of objects , with the signal advantage that can be applied to very short strings and very short arrays such as images . because all possible arrays of size are present in we can use this arrays set to try to classify all ecas by kolmogorov complexity using the coding theorem method .fig [ fig : arrays3x3 ] shows all relevant ( non - symmetric ) arrays .we denote by this subset from .[ scatterplots ] displays the scatterplot of compression complexity against calculated for every cellular automaton .it shows a positive link between the two measures .the pearson correlation amounts to , so the determination coefficient is .these values correspond to a strong correlation , although smaller than the correlation between 1- and 2-dimensional complexities calculated in section [ sec : strings - lenghts-10 ] .concerning orders arising from these measures of complexity , they too are strongly linked , with a spearman correlation of . 
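The decomposition used in the definition above can be sketched as follows: the space-time diagram is padded so that both dimensions are multiples of the block size 3, cut into square blocks, and a per-block complexity value is summed. The block_complexity function below is only a placeholder proxy (block entropy); in the actual method each 3x3 block would be looked up in the 2-dimensional distribution computed from the turmite runs.

```python
# Sketch of the patch decomposition: pad, cut into d x d blocks, sum per-block
# complexities. block_complexity is a placeholder proxy, NOT the coding-theorem
# value of a 3x3 array.
import math
import numpy as np

def partition(diagram, d=3):
    h, w = diagram.shape
    pad_h, pad_w = (-h) % d, (-w) % d               # boundary condition: pad with empty cells
    padded = np.pad(diagram, ((0, pad_h), (0, pad_w)), constant_values=0)
    return [padded[i:i + d, j:j + d].copy()
            for i in range(0, padded.shape[0], d)
            for j in range(0, padded.shape[1], d)]

def block_complexity(block):
    # placeholder proxy: empirical entropy of the cells, in bits
    p = block.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -block.size * (p * math.log2(p) + (1 - p) * math.log2(1 - p))

def K_sum(diagram, d=3):
    return sum(block_complexity(b) for b in partition(diagram, d))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    simple = np.zeros((16, 16), dtype=np.uint8)
    noisy = rng.integers(0, 2, size=(16, 16), dtype=np.uint8)
    print("uniform diagram :", K_sum(simple))
    print("random diagram  :", K_sum(noisy))
```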
the scatterplots ( fig .[ scatterplots ] ) show a strong agreement between the coding theorem method and the traditional compression method when both are used to classify ecas by their approximation to kolmogorov complexity .scatterplots of compression complexity against complexity as evaluated on the 128 first eca evolutions after steps .the top plot also shows the distribution of points along the axes displaying some clusters .the bottom plot shows a few of the eca rules used in fig.[sample ] ( but here for a black cell initial condition).,title="fig:",width=302 ] + scatterplots of compression complexity against complexity as evaluated on the 128 first eca evolutions after steps .the top plot also shows the distribution of points along the axes displaying some clusters .the bottom plot shows a few of the eca rules used in fig.[sample ] ( but here for a black cell initial condition).,title="fig:",width=302 ] the anomalies found in the classification of elementary cellular automata ( e.g. rule 77 being placed among eca with high complexity according to ) is a limitation of itself and not of the coding theorem method which for is unable to see " beyond 3-bit squares using , which is obviously very limited . andyet the degree of agreement with compressibility is surprising ( as well as with intuition , as a glance at fig .[ allecas ] shows , and as the distribution of ecas starting from random initial conditions in fig .[ sample ] confirms ) .in fact an average eca has a complexity of about 20k bits , which is quite a large program - size when compared to what we intuitively gauge to be the complexity of each eca , which may suggest that they should have smaller programs .however , one can think of as attempting to reconstruct the evolution of each eca for the given number of steps with square arrays only 3 bits in size , the complexity of the three square arrays adding up to approximate of the eca rule .hence it is the deployment of that takes between 500 to 50k bits to reconstruct every eca space - time evolution depending on how random vs. how simple it is .other ways to exploit the data from ( e.g. non - square arrays ) can be utilized to explore better classifications .we think that constructing a universal distribution from a larger set of turing machines , e.g. will deliver more accurate results but here we will also introduce a tweak to the definition of the complexity of the evolution of a cellular automaton ._ block decomposition method_. all the first 128 ecas ( the other 128 are 0 - 1 reverted rules ) starting from the simplest ( black cell ) initial configuration running for steps , sorted from lowest to highest complexity according to as defined in eq .[ newecaeq].,width=411 ] splitting eca rules in array squares of size 3 is like trying to look through little windows 9 pixels wide one at a time in order to recognize a face , or training a microscope on a planet in the sky .one can do better with the coding theorem method by going further than we have in the calculation of a 2-dimensional universal distribution ( e.g. 
calculating in full or a sample of ) , but eventually how far this process can be taken is dictated by the computational resources at hand . nevertheless , one should use a telescope where telescopes are needed and a microscope where microscopes are needed . one can think of an improvement in the resolution of for growing space - time diagrams of cellular automata by adding , for each distinct array , its complexity plus the $\log_2$ of the number of times it is repeated , instead of simply adding the complexity of every image patch or array . that is , one penalizes repetition to improve the resolution of for larger images , as a sort of " optical lens " . this is possible because we know that the kolmogorov complexity of repeated objects grows by $\log_2(n)$ , just as we explained with an example in section [ kolmo ] . adding the complexity approximation of each array in the partition matrix of a space - time diagram of an eca provides an upper bound on the eca kolmogorov complexity , as it shows that there is a program that generates the eca evolution picture with length equal to the sum of the programs generating all the sub - arrays ( plus a small value corresponding to the code length needed to join the sub - arrays ) . so if a sub - array occurs $n$ times we do not need to count its complexity $n$ times but only add $\log_2(n)$ . taking this into account , eq . [ eqeca ] can then be rewritten as : $$k_d(x) = \sum_{(r_i , n_i)} \left( k_m(r_i) + \log_2(n_i) \right)$$ where $r_i$ are the different square arrays in the partition matrix and $n_i$ the multiplicity of $r_i$ , that is , the number of repetitions of the -length patches or square arrays found in . from now on we will use this measure for squares of size greater than 3 , and it may be denoted only by or bdm for _ block decomposition method_. bdm has recently been applied successfully to measure the kolmogorov complexity of complex networks . [ figure : side by side comparison of 8 evolutions of representative ecas , starting from a random initial configuration , sorted from lowest to highest bdm values ( top ) and smallest to largest compression lengths using the deflate algorithm as a method to approximate kolmogorov complexity . ] now complexity values of range between 70 and 3k bits with a mean program - size value of about 1k bits . the classification of eca according to eq . [ newecaeq ] is presented in fig . [ newecaeq ] . there is an almost perfect agreement with a classification by lossless compression length ( see fig . [ allecaslog ] and [ sample ] ) , which even makes one wonder whether the coding theorem method is actually providing more accurate approximations to kolmogorov complexity than lossless compressibility for objects of this size . notice that the same procedure can be extended for use on arbitrary images . we call this technique the _ block decomposition method_. we think it will prove to be useful in various areas , including machine learning as an approximation of kolmogorov complexity ( other contributions to ml inspired by kolmogorov complexity can be found in ) .
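under the same assumptions as the previous sketch ( and reusing its `partition_matrix` helper ) , the repetition penalty of eq . [ newecaeq ] can be written as follows ; this follows the verbal description above and is only a sketch , not the authors ' implementation .

```python
import math
from collections import Counter

def bdm(evolution, ctm2d, d=3):
    """Block decomposition method estimate: sum, over the distinct d x d
    patches of the partition matrix, the patch complexity plus log2 of the
    number of times the patch occurs (log2(1) = 0 for unique patches)."""
    counts = Counter(tuple(p.flatten()) for p in partition_matrix(evolution, d))
    return sum(ctm2d[patch] + math.log2(n) for patch, n in counts.items())
```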
also worth notice that the fact that eca can be successfully classified by with an approximation of the universal distribution calculated from turing machines ( tm ) suggests that output frequency distributions of eca and tm can not be but strongly correlated , something that we had found and reported before in and .another variation of the same measure is to divide the original image into all possible square arrays of a given length rather than taking a partition .this would , however , be exponentially more expensive than the partition process alone , and given the results in fig .[ allecaslog ] further variations do not seem to be needed , at least not for this case .one important question that arises when positing the soundness of the coding theorem method as an alternative to having to pick a universal turing machine to evaluate the kolmogorov complexity of an object , is how many arbitrary choices are made in the process of following one or another method and how important they are .one of the motivations of the coding theorem method is to deal with the constant involved in the invariance theorem ( eq . [ invariance ] ) , which depends on the ( prefix - free ) universal turing machine chosen to measure and which has such an impact on real - world applications involving short strings . while the constant involved remains ,given that after application of the coding theorem ( eq . [ coding ] ) we reintroduce the constant in the calculation of , a legitimate question to ask is what difference it makes to follow the coding theorem method compared to simply picking the universal turing machine .on the one hand , one has to bear in mind that no other method existed for approximating the kolmogorov complexity of short strings .on the other hand , we have tried to minimize any arbitrary choice , from the formalism of the computing model to the informed runtime , when no busy beaver values are known and therefore sampling the space using an educated runtime cut - off is called for . when no busy beaver values are known the chosen runtime is determined according to the number of machines that we are ready to miss ( e.g. less than .01% ) for our sample to be significative enoughas described in section [ sec : setting - runtime ] .we have also shown in that approximations to the universal distribution from spaces for which busy beaver values are known are in agreement with larger spaces for which busy beaver values are not known . among the possible arbitrary choices it is the enumeration that may perhaps be questioned , that is , calculating for increasing ( number of turing machine states ) , hence by increasing size of computer programs ( turing machines ) .on the one hand , one way to avoid having to make a decision on the machines to consider when calculating a universal distribution is to cover all of them for a given number of states and symbols , which is what we have done ( hence the enumeration in a thoroughly space becomes irrelevant ) . while it may be an arbitrary choice to fix and , the formalisms we have followed guarantee that -state -symbol turing machines are in with ( that is , the space of all -state -symbol turing machines ) . 
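the coding - theorem step itself is a one - line conversion from output frequency to complexity , sketched below ; the toy frequency counts are invented for illustration and do not come from the actual turing machine enumeration .

```python
import math

def coding_theorem_complexity(frequencies):
    """Convert an empirical output frequency distribution D into a
    complexity estimate via K(s) ~ -log2 D(s)."""
    total = sum(frequencies.values())
    return {s: -math.log2(count / total) for s, count in frequencies.items()}

# hypothetical counts of outputs over an enumerated machine space
print(coding_theorem_complexity({"0": 500, "1": 500, "01": 120, "0101": 3}))
```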
hence the process is incremental , taking larger spaces and constructing an average universal distribution .in fact , we have demonstrated that ( that is , the universal distribution produced by the turing machines with 2 symbols and 5 states ) is strongly correlated to and represents an improvement in accuracy of the string complexity values in , which in turn is in agreement with and an improvement on and so on . we have also estimated the constant involved in the invariance theorem ( eq . [ invariance ] ) between these for , which turned out to be very small in comparison to all the other calculated universal distributions .with two different experiments we have demonstrated that our measure is compatible with compression , yielding similar results but providing an alternative method to compression for short strings , that is the coding theorem method . we have also shown that ( and ) are ready for applications , and that calculating universal distributions is a stable alternative to compression and a worthwhile tool for approximating the kolmogorov complexity of objects , strings and images ( arrays ) .we think this method will prove to do the same for a wide range of areas where compression is nt an option given the size of strings involved .we also introduced the _ block decomposition method_. as we have seen with anomalies in the classification such as eca rule 77 ( see fig .[ allecas ] ) , when approaching the complexity of the space - time diagrams of eca by splitting them in square arrays of size 3 , the coding theorem method does have its limitations , especially because it is computationally very expensive ( although the most expensive part needs to be done only once that is , producing an approximation of the universal distribution ). like other high precision instruments for examining the tiniest objects in our world , measuring the smallest complexities is very expensive , just as the compression method can also be very expensive for large amounts of data .we have shown that the method is stable in the face of the changes in turing machine formalism that we have undertaken ( in this case turmites ) as compared to , for example , traditional 1-dimensional turing machines or to strict integer value program - size complexity as a way to estimate the error of the numerical estimations of kolmogorov complexity through algorithmic probability . for the turing machine modelwe have now changed the number of states , the number of symbols and now even the movement of the head and its support ( grid versus tape ) .we have shown and reported here and in that all these changes yield distributions that are strongly correlated with each other up to the point to assert that all these parameters have marginal impact in the final distributions suggesting a fast rate of convergence in values that reduce the concern of the constant involved in the invariance theorem . in also proposed a way to compare approximations to the universal distribution by completely different computational models ( e.g. 
post tag systems and cellular automata ) , showing that for the studied cases reasonable estimations with different degrees of correlations were produced .the fact that we classify elementary cellular automata ( eca ) as shown in this paper , with the output distribution of turmites with results that fully agree with lossless compressibility , can be seen as evidence of agreement in the face of a radical change of computational model that preserves the apparent order and randomness of turmites in eca and of eca in turmites , which in turn are in full agreement with 1-dimensional turing machines and with lossless compressibility .we have made available to the community a microscope " in the form of the online algorithmic complexity calculator ( http://www.complexitycalculator.com ) implementing ( in the future it will also implement and many other objects and a wider range of methods ) that provides objective complexity estimations for short binary strings using these methods .raw data and the computer programs to reproduce the results for this paper can also be found under the publications section of the algorithmic nature group ( http://www.algorithmicnature.org ) . yu.a .andrienko , n.v .brilliantov and j. kurths , complexity of two - dimensional patterns , _ eur .phys . j. _b 15 , 539546 , 2000 . c.h .bennett , logical depth and physical complexity in rolf herken ( ed ) _ the universal turing machine a half - century survey , _ oxford university press 227257 , 1988 .bennett , how to define complexity in physics and why . in _ complexity , entropy and the physics of information ._ zurek , w. h. , addison - wesley , eds .sfi studies in the sciences of complexity , p 137 - 148 , 1990 .brady , the determination of the value of rado s noncomputable function for four - state turing machines , _ mathematics of computation 40 _ ( 162 ) : 647665 , 1983 .calude , _ information and randomness _ , springer , 2002 . c.s . calude and m.a .stay , most programs stop quickly or never halt , _ advances in applied mathematics _, 40 , 295 - 308 , 2008 .chaitin , on the length of programs for computing finite binary sequences : statistical considerations , _ journal of the acm _ , 16(1):145159 , 1969 ._ from philosophy to program size ,estonian winter school in computer science , institute of cybernetics , tallinn , 2003 .a. g. casali , o. gosseries , m. rosanova , m. boly , s. sarasso , k. r. casali , s. casarotto , m. bruno , s. laureys , g. tononi and m. massimini , a theoretically based index of consciousness independent of sensory processing and behavior , _ sci transl med , _ vol .5:198 , p. 198ra105r. cilibrasi , p. vitanyi , clustering by compression , _ ieee transactions on information theory , _ 51 , 4 , 15231545 , 2005 .universality in elementary cellular automata ._ complex systems , _ 15 , pp .140 , 2004 .t.m . cover and j.a .thomas , _ information theory , _ j. wiley and sons , 2006 .delahaye , _ complexit alatoire et complexit organise , _ editions quae , 2009 .delahaye , h. zenil , towards a stable definition of kolmogorov - chaitin complexity , arxiv:0804.3459 , 2007 .j - p . delahaye and h. zenil , on the kolmogorov - chaitin complexity for short sequences . in c. calude ( ed . ) , _ randomness and complexity : from leibniz to chaitin _ ,world scientific , 2007 .delahaye & h. zenil , numerical evaluation of the complexity of short strings : a glance into the innermost structure of algorithmic randomness , _ applied math . and comp .r. 
downey & d.r .hirschfeldt , _ algorithmic randomness and complexity _ , springer , 2010 .feldman and j.p .crutchfield , _ phys .e _ 67 , 051104 , 2003 .feldman , some foundations in complex systems : entropy , information , computation , and complexity , _santa fe institute s annual complex systems summer school _ , beijing china , 2008 .m. gardner , mathematical games - the fantastic combinations of john conway s new solitaire game life " , pp . 120123 , _ scientific american _ 223 , 1970 . m. hutter , on the existence and convergence of computable universal priors , in : proc .14th internat . conf . on algorithmic learning theory ( alt-2003 ) , lecture notes on artificial intelligence , vol . 2842 , sapporo , springer , berlin , pp . 298312 , 2003 . j. joosten , turing machine enumeration : nks versus lexicographical " , _ wolfram demonstrations project _ , 2012 .w. kircher , m. li , and p. vitanyi , the miraculous universal distribution , _ the mathematical intelligencer , _ 19:4 , 715 , 1997 .kolmogorov , three approaches to the quantitative definition of information , _ problems of information and transmission _ , 1(1):17 , 1965 .c.g.langton , studying artificial life with cellular automata , _ physica d : nonlinear phenomena _ 22 ( 13 ) : 120149 , 1986 . l. levin , laws of information conservation ( non - growth ) and aspects of the foundation of probability theory ., _ problems in form . transmission _ 10 . 206210 , 1974 . m. li , p. vitnyi , _ an introduction to kolmogorov complexity and its applications , _ springer , 2008 .pegg , jr . math puzzle " . retrieved 10 june 2013 . . rivals , m. dauchet , j .-delahaye , o. delgrange , compression and genetic sequence analysis . , _ biochimie _ , 78 , pp 315 - 322 , 1996 .t. rad , on non - computable functions , _ bell system technical journal , _ vol .3 , pp . 877884 , 1962 .shalizi , k.l .shalizi , and r. haslinger , quantifying self - organization with optimal predictors _ phys ._ , 93 , 118701 , 2004 f. soler - toscano , h. zenil , j .-delahaye and n. gauvrit , calculating kolmogorov complexity from the frequency output distributions of small turing machines , plos one ( in press ) . f. soler - toscano , h. zenil , j .-delahaye and n. gauvrit , correspondence and independence of numerical evaluations of algorithmic information measures , arxiv:1211.4891 [ cs.it ] .solomonoff , a formal theory of inductive inference : parts 1 and 2 . _ information and control _ , 7:122 and 224254 , 1964 .s. wolfram , _ a new kind of science _, wolfram media , champaign , il .usa , 2002 .patterns of structural complexity in alzheimer s disease and frontotemporal dementia k. young , a - t .du , j. kramer , h. rosen , b. miller , m. weiner , and n. schuff , _ hum brain mapp ._ , 30(5 ) : 16671677 , 2009 . k. young and n. schuff , measuring structural complexity in brain images , _ neuroimage _ , 2008 ; 39(4 ) : 17211730 .h. zenil , compression - based investigation of the dynamical properties of cellular automata and other systems , _ complex systems . _ 19(1 ) , pages 1 - 28 , 2010 .h. zenil , j .-delahaye and c. gaucherel , image information content characterization and classification by physical complexity , _ complexity _ , vol .173 , pages 2642 , 2012 . h. zenil and j - p .delahaye , on the algorithmic nature of the world , in g. dodig - crnkovic and m. burgin ( eds ) , _ information and computation _ , world scientific publishing company , 2010 .h. zenil , n. kiani , j. 
tegnr , information conservation in dimensionality reduction and analysis techniques , submitted .zenil , f. soler - toscano , k. dingle and a. louis , correlation of automorphism group size and topological properties with program - size complexity evaluations of graphs and complex networks , _ physica a : statistical mechanics and its applications , _ vol .341358 , 2014 . h. zenil , une approche exprimentale la thorie algorithmique de la complexit , dissertation in fulfilment of the degree of doctor in computer science , universit de lille 1 , 2011 .
|
the question of natural measures of complexity for objects other than strings and sequences , in particular measures suited for 2-dimensional objects , is an important open problem in complexity science . here we provide a measure based upon the concept of algorithmic probability that elegantly connects with kolmogorov complexity and provides a natural approach to -dimensional algorithmic complexity by using an -dimensional deterministic turing machine , popularized under the term _ turmites _ for , of which the so - called _ langton 's ant _ is an example of a turing - universal _ turmite_. a series of experiments to validate estimations of kolmogorov complexity based on these concepts is presented , showing that the measure is stable in the face of some changes in computational formalism and that the results are in agreement with those obtained using lossless compression algorithms when both methods overlap in their range of applicability . we also present an application of the _ block decomposition method _ ( bdm ) to the classification of images and space - time evolutions of discrete systems , providing evidence of the soundness of the method as a complementary alternative to compression algorithms for the evaluation of algorithmic complexity . we provide exact numerical approximations of the kolmogorov complexity of square image patches of size 3 and larger , with the bdm allowing scalability to larger images . + * keywords : * dimensional complexity ; image classification ; algorithmic probability ; compressibility ; pattern detection ; cellular automata .
|
in many practical applications in integrations , the block time step approach is preferred . in this approach, many particles share the same step size , where the only allowed values for the time step length are powers of two .block time steps are advantageous to reduce the prediction overheads , and are needed both for good parallelization and code efficiency .however , the time - symmetricity and symplecticity of previous direct integration schemes are disturbed by using variable block time steps .the algorithm developed by ( tsbts ) is the first algorithm for time symmetrizing block time steps which carry the benefits of time symmetry to block time step algorithms . in this algorithmic approach, the total history of the simulation is divided into a number of smaller periods , with each of these smaller periods called an `` era '' .symmetrization is achieved by applying a time symmetrization procedure with an era - based iteration .the tsbts algorithm was generated for direct integration of systems and as such is suitable to use for a moderate number of bodies no more than .the direct approach to integration is preferred when we are interested in the close - range dynamics of the particles , and aiming at obtaining high accuracy .the algorithm gives us the ability to reach long integration times with high accuracy .however it has some limitations on memory usage which stem from choosing the size of the era .the tsbts algorithm also provides some benefits for parallelization of algorithms .development of parallel versions of variable time step codes becomes increasingly necessary for many areas of research , such as stellar dynamics in astrophysics , plasma simulations in physics , and molecular dynamics in chemistry and biology .the most natural way to do this is through the use of block time steps , where each particle has to choose its own power of two , for the size of its time step .block time steps allow efficient parallelization , given that large numbers of particles sharing the same block time step can then be integrated in parallel . in section 2 ,we summarize the tsbts algorithm time - symmetric block time step algorithm .we provide definitions for the era concept , and for time - symmetrization of block time steps . in section 3 ,we present sample numerical tests for choosing the size of the era .we show how important is the effect of the era size on the energy errors , and the relationship between era size and iteration number . in section 4 ,we offer a dynamic era size scheme for both better energy conservation and better memory usage . in section 5, we present a parallel algorithm for the tsbts scheme with a hybrid force calculation procedure . in section 6 ,we discuss load balance and parallel performance tests of the algorithm .section 7 sums up the study .in the tsbts algorithm , an iterative scheme is combined with an individual block time step scheme to apply the algorithm to the problem effectively .there are two important points in this algorithm : the era concept and the time - symmetrization procedure .the era is a time period in which we collect and store information for all positions and velocities of the particles for every step . 
at the end of each era , we synchronize all particles with time symmetric interpolation .this synchronization is repeated many times during the integration period , depending on the size of the era .let us remember the tsbts algorithm briefly : we used a self - starting form of the leapfrog scheme ; with taylor expansion for predicted velocities and positions ; one of the easiest estimates for the time step criterion is the _ collisional time step_. when two particles approach each other , or move away from each other , the ratio between relative distance and relative velocity gives us an estimation . on the other hand , if particles move at roughly the same velocity , the collision time scale estimate produces infinity when the particles relative velocities are zero . for such cases, we use a _ free fall time scale _ as an additional criterion , or just take the allowed largest time steps for those particles .time - steps are determined using both the free - fall time scale and the collision time scale ( [ eq : nbdt ] ) for particle by taking the minimum over the two criterion and over the all as ; where is a constant accuracy parameter , and are the relative position and velocity between particles and , and is the pairwise acceleration . even if aarseth s time step criterion serves us better in avoiding such unexpected situations and gives us a better estimation , it needs higher order derivatives and it is expensive for a second order integration scheme . our time - symmetry criterion is defined in eq.[blockcondition ] .this criterion gives us the smallest values that suit the condition ; where is the iteration counter . here , and refer to the beginning and end of the time step . in the case of block time step schemes , a group of particles advances at the same time . at each step of the integration ,a group of particles is integrated with the smallest value of . here, we refer to the group of particles as particle blocks .the first group of particles in an era is called the _first block_. in the first pass through an era , we perform standard forward integration with the standard block step scheme , without any intention to make the scheme time symmetric . to compute the forces on the particles with the smallest value of , we use second - order taylor expansions for the predicted positions , while a first - order expansion suffices for the predicted velocity .predicted positions , velocities , and accelerations for each particle for every time step are stored during each era . in the second pass , which is the first iteration , instead of taylor expansions we use time - symmetric interpolations with stored data .this time , each time step is calculated in a different way for symmetrization as in algorithm [ algorithm ] . here , is the block time step of the integrated particle group , and is the level block time step , which is obtained from a time - symmetry criterion ( eq.[blockcondition ] ) . if the current time is an even multiple of the current block time step , that time value is referred to as _ even time _ , otherwise it is referred to as _ odd time_. here is the description of the symmetrization scheme for block time steps ( as in algorithm [ algorithm ] ) : * if the current time is * _ odd _ * , first , we try to continue with the same time step . 
if , upon iteration , that time step qualifies according to the time - symmetry criterion ( as in eq.[blockcondition ] ) , then we continue to use the same step size that was used in the previous step of the iteration .if not , we use a step size half as large as that of the previous time step . *if the current time is * _ even _ * , our choices are : doubling the previous time step size ; keeping it the same ; or halving it .we first try the largest value , given by doubling . if eq.[blockcondition ] shows us that this larger time step is not too large , we accept it : otherwise , we consider keeping the time step size the same . if eq.[blockcondition ] shows us that keeping the time step size the same is okay , we accept that choice : otherwise , we simply halve the time step , in which case no further testing is needed .the same steps are repeated for higher iterations as in the first iteration .the main steps of the integration cycle is given by algorithm [ seq_algorithm ] .initialization : + - read initial position and velocity vectors from the source .+ - arrange size in the memory .+ - initialize particles forces , time steps , and next block times .+ - sort particles according to time blocks .+ start the iteration for the era .start the integration for the first block of the era .predict position and velocity vectors of all particles for the current integration time .if this is the first step of the iteration , or if the time of the particle is smaller than the current time , do direct prediction : otherwise perform interpolation from the currently stored data .calculate forces on the active particles .correct position and velocity vectors of the particles in the block .update their new time steps and next block time .+ - after the first iteration , symmetrize new time steps according to algorithm [ algorithm ] . sortparticles according to time blocks .repeat from step 3 while current time is time at the end of the era .repeat from step 2 until the number of the iteration reaches the iteration limit .repeat from step 2 for the next era , until the final time is reached .write the outputs and finish the program .the size of an era can be chosen as any integer multiple of the maximum allowed time step .there is not any important computational difference between dividing the integration to the small era parts and taking the whole simulation in one big era .however some symmetrization routines such as adjusting the time steps and interpolating the old data increase the computation time .additionally , keeping the whole history of the simulation requires a huge amount of memory .it is important to decide what is the most convenient choice for an era .we need to store sufficient information from the previous steps to adjust the time steps with iterations .to avoid doing additional work and storing a uselessly large history , choosing a large size for the era is not recommended . 
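the odd / even symmetrization rule of algorithm [ algorithm ] can be condensed into a small helper like the one below ; the predicate `satisfies_criterion` stands for the time - symmetry test of eq . [ blockcondition ] and is assumed to be supplied by the integrator , and times are taken to be exact integer multiples of the smallest block step .

```python
def symmetrize_step(block_index, dt_prev, satisfies_criterion):
    """Choose the next block time step.  `block_index` counts how many
    steps of length dt_prev fit into the current time, so an even index
    means the current time is an even multiple of dt_prev.
    At even times try doubling, then keeping, then halving;
    at odd times only keeping or halving is allowed."""
    if block_index % 2 == 0 and satisfies_criterion(2 * dt_prev):
        return 2 * dt_prev
    if satisfies_criterion(dt_prev):
        return dt_prev
    return dt_prev / 2      # halving needs no further test
```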
on the other hand , the era size must be large enough to capture rapid and sharp time step changes . we made several tests with different plummer model initial conditions , using different era sizes . units were chosen as standard units , in which the gravitational constant , the total mass and the total energy are . we limited the maximum time step to . the parameter was kept larger than usual to see the error growth over smaller time periods . the parameter was set as 0.1 for the 100-body problems , and 0.5 for the 500-body problems . the plummer type softening length was taken as 0.01 . each system was integrated for every era size ( ) for 1000 time units . fig . [ fig1 ] shows the energy errors for 5 different 100-body problems with 5 different era sizes . in these test runs , time - symmetrized block time steps were used with 3 iterations . we also performed test runs for other era sizes ( ) ; however , the growth of energy errors for these era sizes reached beyond the scale of this figure . the figure shows that 3 iterations are not enough to avoid linearly growing errors for large ( here , ) era sizes . we conducted the following tests to see this effect clearly . fig . [ fig2 ] shows the energy errors for 5 different 100-body problems with 5 different era sizes as in the previous figure ; however , we used 5 iterations here . in this figure , the largest era size ( time unit ) does not show a linearly growing error , exactly the contrary of the 3-iteration case . the improvement in the energy errors comes directly from the iteration process , as we expected . we then increased the particle number 5 times , and set the parameter as . the parameter could have been kept as , but we forced the algorithm to take larger time steps , which in turn produce larger energy errors over relatively short time periods . fig . [ era_tests_p500_1 ] shows the energy errors for 5 different 500-body problems with 7 different era sizes . the red curves show the errors for era sizes of , and time units ; the black curves show the errors for era sizes of , and time units . it seems that more iterations are needed to obtain smaller energy errors while working with larger era sizes . if time - symmetric block time steps cannot be produced with a small number of iterations , the total energy error grows linearly . as indicated by our tests , the iteration number and era size must be chosen carefully to ensure symmetric block time steps . although the size of the era is not very important as long as the iteration number is large enough , a high number of iterations is not the preferred choice , as it demands a high computational cost . also , the era size has to be kept small to avoid huge memory usage . in practice , our tests show that 5 iterations are not enough to prevent linearly growing errors when we use an era size greater than time unit . on the other hand , the era size must be greater than the largest time step , otherwise we cannot store past information for the iteration process and the algorithm works as a classical block time step scheme .
and increased numbers of iterations consume more cpu time .let us remember and give some additional details and definitions about the relationship between block time steps and era : similar to the _ first block _ definition we provided in section 2 , the last group of particles in an era is referred to as the _last block_. the current time in the integration for the first and last blocks are referred to as _ first block time _ and _ last block time _ , respectively . at the end of each era ,integration of every particle stops at the same time , and new block time steps are calculated and assigned for new blocks .the last block can take the maximum allowed time step at the most .the first block can take any block time step smaller than the maximum allowed time step .then , particles are sorted according to their block time steps .also , every block has its own integration time related to its block time step .if we can find the proper criterion to change it , era size can be controlled dynamically .the simplest choices can vary between 1 time unit and the allowed largest time step .our suggestion is : calculate the new block time steps and the first and last block times at the end of each era , and take the difference between the last and first block times .this difference gives us a dynamically changing size and we can assign this as the size of the new era . naturally , sometimes this difference can be larger than 1 time unit , or smaller than the maximum allowed time step .also , if all of the particles take the same time step in any era , the difference goes to zero .we can use the maximum allowed time step and any power - of - two times of this era size for the top and bottom limits of the era , respectively . here, we used multiples of the largest time step for the lower limit .if all of the particles take the largest time step , or larger time steps than the new era size , there will not be enough past information for symmetrization .for these reasons , era size must not be much smaller than the largest time step .initialization ( same as algorithm [ seq_algorithm ] ) .+ set first and last block times .calculate dynamic era size ( _ dynamic era size _= _ last block time _ - _ first block time _+ ) if _ dynamic era size _ maximum time step _+ _ maximum time step _ + ) if _ dynamic era size _ maximum time step _+ = _ maximum time step _ + start the iteration for the era . start the integration for the first block of the era . predict position and velocity vectors of all particles for the current integration time .if this is the first step of the iteration , or if the time of the particle is smaller than the current time , do direct prediction : otherwise perform interpolation from the currently stored data . 
calculate forces on the active particles .correct position and velocity vectors of the particles in the block .update their new time steps and next block time .+ - after the first iteration , symmetrize new time steps according to algorithm [ algorithm ] .sort particles according to time blocks .repeat from step 5 while current time is time at the end of the era .repeat from step 4 until the number of the iteration reaches the iteration limit .repeat from step 2 for the next era , until the final time is reached .write the outputs and finish the program .if our estimate of the era size is smaller than our largest time step , the particles with largest time steps are excluded from the integration process of the era , and are then left for the next era .errors of energy conservation oscillate in time , when they happen .we can use the allowed largest time step for the era size in these cases .the main steps of the algorithm is given by algorithm [ seq_dynera_algorithm ] .in the tests we did for the dynamic era , we used two choices for era size : equal to the allowed largest time step , and dynamically changing size as defined above .we already know from previous runs for these test problems that we obtained the smallest errors on total energies when we took the allowed largest time steps as the era size .we performed 3 iterations .fig.[fig:500dyn1 ] shows the energy errors for 10 different 500-body problems .the green curves show the results for the dynamically changing era ; the red curves show the results for the fixed era .fig.[fig:100dyn1 ] shows the energy errors for 10 different 100-body problems .( 120mm,120mm)fig4.eps the results for dynamic era size are in the same range with those of fixed era size .even if the chosen fixed era size ( ) seems like the best choice for previous tests with the same initial conditions and parameters ( i.e. , maximum allowed time steps , softening and accuracy parameters ) , in general , dynamic era gives modestly better results than fixed era for .we ran more than 20 tests , and in of them were the errors for dynamic era size larger than errors for fixed era size .the rest of the results are clearly better than those for fixed era sizes , besides the advantage of reduced memory usage for the same number of iterations .running times for dynamic era size are less than for fixed era sizes in general .basically , there are two well known schemes that are used in direct parallelizations : copy and ring .the ring algorithm is generally preferred for reducing memory usage. it can be reasonable for shared time step codes , but it is not easy to use with block step schemes .it is also well known from previous works that this algorithm achieves almost the same speedup as the copy algorithm . the number of the particles in the integrated block changes with every step . in many cases ,the size of the integrated block can be smaller than the number of the processors .it is difficult to obtain balanced load distribution for such cases .we used the copy algorithm .while it is much easier to extend for block step schemes , the copy algorithm also has the load imbalance problem in classical usage . for any case , block size can be smaller than the number of processors again .we divided the partitioning strategy into two cases to avoid bad load balancing . 
in the first case, we divided the particles when the number of particles in the first block is greater than number of nodes .this is a kind of data partitioning , with every node containing a full copy of the system . in the second case, we divide the force calculation of the particles in the first block as a kind of work partitioning .our parallel algorithm works with the following steps , as in algorithm [ par_algorithm ] .broadcast all particles .each node has a full copy of the system .initialize the system for all particles in all nodes .every node computes time steps for all particles .compute and sort time blocks . integrate particles in the first block whose block times are the minimum for the era :+ ) if the number of the first block number of nodes : every processor + calculates forces and integrates + ( number of first time block)/(number of nodes ) particles .+ ) if the number of the first block number of nodes : every processor + calculates ( number of particles)/(number of nodes ) part of the forces + on the particles of the first block . + update integrated particles .repeat from step 3 .we have performed test runs on a linux cluster in itu - hpc lab . with 37 dual core 3.40 ghz intel(r )xeon(tm ) cpu with myrinet interconnect .the compute time was measured using mpi_wtime ( ) .the timing for total compute time was started before the broadcast of the system to the nodes , and ended at the end of integration .the calculation time of the subset of the particles in the current time block that are being handled by a given processor was taken as the work load of the processor . in the iteration process , the largest time was taken as the work load of the processor for the same time block .work load of the processor for every active integrated particle group is defined as ; is the number of processors ; the mean work load is : and load imbalances : fig.[fig : imbalance ] shows the load imbalance for a 1000-body problem .we used 12 processors . in direct simulations ,a 1000 body is not a big number for 12 processors . here ,load imbalance is not seen as more than in general . moreover , load imbalance is smaller than expected .the main reason for this is in the iteration routines of the tsbts algorithm , which increases both communication and calculation times for active particles . also , when the number of particles in the first block is smaller than the number of nodes , work partitioning is applied in the algorithm , which also increases communication time .( 120mm,120mm)fig6.eps is the running time for one processor ; is the running time for processors . 
, and are given respectively , as : fig.[fig : speedup ] and fig.[fig : efficiency ] show and results of symmetrized and non - symmetrized block time steps for an 10000-body problem initial conditions with plummer softening length of and accuracy parameter .only one iteration with the tsbts algorithm corresponds to individual block time step algorithm without symmetrization .the speedup result for 3 iterations is clearly better than the result for 1 iteration .these results show that the communication / calculation ratio decreases with the iteration process , though iteration needs much more computation time .( 120mm,120mm)fig7.eps ( 120mm,120mm)fig8.eps for moderately short integration times , as in one time unit cases , the same error bounds can be obtained with less computation times by classical algorithms .however , the algorithm already shows its advantages in long time integrations .fig.[fig : errcpu ] shows relative energy errors and cpu times for 20 different 500-body problems with 2 different accuracy parameters ( ) for 1 cpu .each system was integrated for 1 and 3 iterations and 1000 time units . even if it is not possible to obtain the same degree of energy errors for different test problems , the results are still highly promising .we obtained significantly better energy errors with the tsbts algorithm ( 3 iterations ) than with the classical individual block time step algorithm ( 1 iteration ) for the same accuracy parameters ( ) in all tests . also , in some tests ( more or less in of the tests ), we obtained better results with 3 iterations for 10 times larger accuracy parameters than with 1 iteration runs for .for example in one of our 500-body problems , we obtained a relative energy error of with for 3 iterations , while it was for 1 iteration . to reach the same error bound with one iteration for 1000 time units, we had to reduce the accuracy parameter to 10 times smaller ( ) .then , we obtained relative energy error of with 1 iteration . in this example , calculation times for 1 and 3 iterations with were sec . , and secrespectively , while the time was sec . for with 1 iteration . here ,3 iterations increase the calculation time by almost a multiple of 2 . however , calculation time increases by a multiple of 10 , while the accuracy parameter is reduced by the same order .fig.[fig : wallclocktime ] shows running time requirements of the algorithm for the same 10000-body problem , both for 1 and 3 iterations , for one time unit .the tsbts algorithm needs up to 5 times more run time than 1 iteration case with 1 cpu for this test ( for 500-body tests , this ratio was 4.75 as an average of their run times ) .this extra time is consumed by iteration and symmetrization procedures .the time - consuming ratio between the 1 and 3 iteration cases reduces to almost times when we increased the number of processors .we have analyzed the era concept in greater detail for time symmetrized block time steps .our test results show that the size of the era must be chosen carefully .this is important , especially for long - term simulations with highly desirable energy conservations .the era size is also important to avoid the need for additional data storage and a uselessly high number of iterations , which require too much running time . in this work ,we suggested a dynamically changing size for the era .this enables us to follow the adaptively changing size for these time periods . 
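for reference , the performance quantities quoted above can be computed as in the sketch below ; speedup and efficiency follow the standard definitions , while the exact load - imbalance expression is elided in the extracted text , so the relative deviation of the busiest processor from the mean workload is used here purely as an assumed stand - in .

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Standard definitions: S_p = T_1 / T_p and E_p = S_p / p."""
    s = t_serial / t_parallel
    return s, s / n_procs

def load_imbalance(workloads):
    """Assumed definition (the paper's formula is not recoverable here):
    how much the busiest processor exceeds the mean workload, relatively."""
    mean = sum(workloads) / len(workloads)
    return (max(workloads) - mean) / mean

print(speedup_and_efficiency(t_serial=100.0, t_parallel=12.5, n_procs=12))
print(load_imbalance([9.8, 10.1, 10.3, 9.9]))   # hypothetical per-node loads
```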
in this scheme, the era size will be well - adjusted to the physics of the problem .in many cases , we obtained better energy errors than previous algorithm with fixed era size .additionally , we produced a copy algorithm - based parallel scheme combining with our time symmetrized block time step scheme .we divided the force calculation into two approaches , according to the number of the integrating particles , to avoid bad load balancing .if the number of particles in the integrated block was greater than the number of processors , we used the classical approach the copy algorithm to calculate forces . if we had a lower number of particles than processors to integrate , we divided the force calculations between the processors using work partitioning .parallelization of direct problem already features some difficulties regarding communication costs .communication times dramatically increase with the number of processors .previous works show that , using more than 10 processors for a few thousands particles does not result in a substantial gain .this problem is replicated in individual time step and block time step cases .even if we need to expend some additional communication efforts in our work partitioning approach , we obtain good load balancing results with this approach .also , the iteration process requires much more effort .speedup and efficiency results are as we expected for current implementations .scaling of the algorithm can be increased by using hyper systolic or other efficient algorithms in future works .spinnato ps , van albada gd and sloot pma ( 2000 ) performance analysis of parallel codes .proceedings of high performance computing and networking , lecture notes in computer science v : 1823 p : 249 - 260
|
the time - symmetric block time step ( tsbts ) algorithm is a newly developed efficient scheme for integrations . it is constructed on an era - based iteration . in this work , we re - designed the tsbts integration scheme with dynamically changing era size . a number of numerical tests were performed to show the importance of choosing the size of the era , especially for long time integrations . our second aim was to show that the tsbts scheme is as suitable as previously known schemes for developing parallel codes . in this work , we relied on a parallel scheme using the copy algorithm for the time - symmetric scheme . we implemented a hybrid of data and task parallelization for force calculation to handle load balancing problems that can appear in practice . using the plummer model initial conditions for different numbers of particles , we obtained the expected efficiency and speedup for a small number of particles . although parallelization of the direct codes is negatively affected by the communication / calculation ratios , we obtained good load balance results . moreover , we were able to conserve the advantages of the algorithm ( e.g. , energy conservation for long term simulations ) .
|
signal correlation is one of the most computationally demanding and communication intensive tasks in the signal processing flow of a radio telescope array . it has been traditionally processed using field programmable gate arrays ( fpgas ) to achieve excellent power efficiency .however , high development challenges and lack of portability make it an expensive task either to design a system from scratch , to scale an existing one or to introduce new functionalities . with the fast development of general purpose hardware platforms ,it is likely that at some point the relatively low development cost and high flexibility of software correlators make them a viable option .there have been a growing number of software correlator projects over the last decade . the most widely used cpu cluster based vlbi correlator difx designed by deller et al . implemented a time division multiplexed system , in which the inter - node synchronization is less critical and hence it achieves excellent performance even given unbalanced computing resources or non - ideal network conditions .recent research by dodson et al . showed that difx can be efficiently implemented on supercomputers with infiniband networks as well as on the intel mic architecture , and it scales linearly up to 50 nodes after which network bottlenecks cut in .another well - known software correlator was designed for the low frequency array ( lofar ) .being one of the first new generation telescopes intensively using interferometry techniques , lofar was also one of the first real world projects to use a dedicated software correlator .a blue gene / l supercomputer is used in the lofar system for correlation and post - correlation processing by romein et al .computationally intensive jobs in the lofar software system are optimized using assembly language and as a result , it has achieved 98% of the peak floating - point capability of the hardware architecture . while the cpu - based software correlators provedthe capability , gpus ( graphic processing unit ) appear to be increasingly applicable for this type of work . in a comparison of correlation on different hardware architectures by nieuwpoort et al . , nvidia gpus showed the best absolute performance and the second best power efficiency , which revealed the feasibility of building a powerful gpu - based correlation system .gpus were first used for correlation a decade ago by schaaf et al . , when graphic programming techniques such as the cg language had to be heavily involved to get a general computing problem solved on a gpu . over the last few yearsthere have been several gpu - based software correlators .these took advantage of nvidia s compute unified device architecture ( cuda ) , in which gpus can be treated as generic computing devices in addition to graphic chips .this significantly reduced the programming challenge , and hence more efforts could be put into the optimization , rather than making algorithms compatible with the hardware .the first cuda - based gpu correlator designed by harris et al . took advantage of the cufft library for its f - engine and implemented a series of x - engines in different parallel fashions , which achieved a considerable performance gain compared with cpu correlators . another project conducted by wayth et al . implemented similar parallel approaches to those presented by harris et al . and constructed a real - time correlator for the prototype of the murchison widefield array ( mwa ) .the most recent work by clark et al . 
presented a highly optimized implementation on nvidia 's gtx480 gpu and achieved 79% of the peak single precision capacity of the hardware architecture . the world 's largest radio telescope , the square kilometer array ( ska ) , is also considering a correlation system based on gpu clusters as described by daddario . however , previous research has focused on single - gpu approaches with very little consideration given to data distribution across multiple gpus . the data distribution patterns used in cpu cluster correlators are yet to be verified with gpu clusters , given the number of distinctive features of gpu correlator engines . this paper presents a software correlator for heterogeneous high performance computing clusters , especially gpu clusters , mainly focusing on data distribution models . two space - division network models are proposed in this paper and are compared with a re - implemented time - division model which was first introduced by deller et al . the correlator engines presented by harris et al . are adopted and re - implemented in the open computing language ( opencl ) for compatibility with different computing devices . the scope of this work is to investigate possible solutions for solving large - scale correlation problems such as those the ska would face . there are two main approaches to radio astronomy signal correlation . the first , a lag or xf correlator , correlates signals in the time domain , before transformation to the frequency domain via the fourier transform . this method is often used in hardware implementations where the initial correlation can be performed at lower bit precision . the second , an fx correlator , instead transforms the signals using the fourier transform , and then performs the correlation via conjugate multiplication . this method is predominantly used in software correlators , as it requires fewer total operations . in both methods the results are usually then accumulated . as this work will utilize the fx correlator , a brief mathematical introduction follows . for a discrete time signal $s[n]$ of length $N$ , the discrete fourier transform is first applied to obtain the spectra $S[k]$ as shown in equation [ eq : dft ] : $$S[k] = \sum_{n=0}^{N-1} s[n] \, e^{-j ( 2 \pi / N ) k n}$$ then , for each pair of signals $i$ and $j$ , where $a$ is the index of the spectra over time and $i$ and $j$ are the indices of the two signals in the pair , the complex visibilities $V_{i,j}[k]$ are obtained using equation [ eq : dft_corr ] : $$V_{i,j}[k] = \sum_{a=0}^{A-1} S_{a,i}^{\ast}[k] \, S_{a,j}[k]$$ in an fx correlator implementation , the two steps are usually named the f - engine and the x - engine . this work takes advantage of the apple opencl fft to implement the f - engine . for the x - engine , the 1xgxg model used by harris et al . is adopted and re - implemented in opencl with modifications to fit cluster models . the time - division pattern for correlation , which was used by deller et al .
, is the first data distribution model we implemented in this work .as shown in figure [ fig : timedivision ] , input data streams on streaming nodes are divided into time slices and distributed to correlation nodes .each correlation node is responsible for some of the time slices across all input streams .an input stream here refers to the sampled digital data from an antenna , which does not include the case where the data is channelized into sub - bands or where multiple polarizations are present per stream .the time - division model is highly efficient in terms of data transfers as all input data chunks are transferred only once .moreover , every correlation node processes independent data , and as a result , synchronization between correlation nodes becomes less important .however , as the time - division model was originally proposed for a cpu cluster correlator , simply replacing the fx engines with gpu implementations could potentially cause problems . based on our preliminary testing ,when the number of input data streams becomes very large , the efficiency of the fx engines drops dramatically . in this casethe time - division model is not necessarily optimal on a gpu cluster even though it is highly efficient in terms of data transfers .furthermore , the time - division model processes all baselines on a single node , thus when it comes to a point where the number of data streams is so large that the gpu memory is not able to hold all baseline data at a minimum length of a single fft , the model would fail .thus it is relevant to consider other data distribution models for gpu cluster correlators .an alternative approach is to implement data distribution models based on division in space rather than time . shown in figure [ fig : group ] are correlation jobs divided into groups based on the space - division pattern . instead of processing all correlation pairs inside a single node and assigning different nodes with different time slices ,a space - division model divides correlation pairs into groups , and each node assigned with a certain group is responsible for all time slices .thus , given the total number of input streams , the number that a single node needs to process is reduced , which would improve the gpu x - engine performance for cases with a large number of streams .the exact number of streams per node would still be dependent on the total number of correlation nodes required to achieve real - time processing .ultimately , to completely control the number of streams per node , a hybrid system would need to be used , but this is left for future investigations . and .groups labeled using numbers in the larger font correspond to correlation nodes.,scaledwidth=55.0% ] space - division models involve necessary modifications to the x - engine , since x - engines designed for single gpu correlators process all input data streams at once in a triangle pattern for all non - redundant pairs , while some of the nodes in the space - division model need to process two parts of the input streams in a rectangle pattern for cross - correlations only .moreover , it involves redundant data transfers as correlation nodes in the same row or column require the same input data .it is then of significant importance to design network topologies intelligent enough to handle the huge data efficiently . 
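to make the f - engine / x - engine arithmetic of equations [ eq : dft ] and [ eq : dft_corr ] concrete , a plain numpy sketch is given below ; the production engines run in opencl on the gpu ( apple opencl fft plus the 1xgxg x - engine ) , so this is only a reference implementation of the mathematics , not of the optimized kernels .

```python
import numpy as np

def fx_correlate(signals, nfft):
    """signals: real or complex array of shape (n_streams, n_samples).
    Returns accumulated visibilities V[i, j, k] of shape
    (n_streams, n_streams, nfft)."""
    n_streams, n_samples = signals.shape
    n_spectra = n_samples // nfft
    # F-engine: one length-nfft FFT per stream per time chunk
    spectra = np.fft.fft(
        signals[:, :n_spectra * nfft].reshape(n_streams, n_spectra, nfft), axis=2)
    # X-engine: conjugate multiplication, accumulated over the time chunks
    return np.einsum("iak,jak->ijk", np.conj(spectra), spectra)
```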
in this paper , we propose two network topologies to investigate the performance of the space - division model .the first space - division based network topology we designed is the broadcasting model shown in figure [ fig : broadcasting ] , which uses streaming nodes to broadcast the data across correlation nodes .equation [ eq : oc_os ] shows how the number of streaming nodes , , varies with the number of correlation nodes , . given the data distributing pattern , there are two methods for the data transfer .one of them is to use a native broadcast routine , which could be either at the mpi level , or a hardware multicast , while the other is to send and receive data in loops using basic point - to - point communications .figure [ fig : bdcdiagram ] shows the diagrams of both methods based on the instance given in figure [ fig : broadcasting ] with 10 correlation nodes and 4 streaming nodes .an important fact revealed by figure [ fig : bdcdiagram ] is that a native broadcast routine , or even a hardware multicast , would not help improve the overall data transfer efficiency for our models if it is implemented with blocking collective calls .this is because blocking broadcasts for each stream can not occur concurrently , due to the overlaps between destinations of streaming nodes doing broadcast . in this case , streaming nodes have to broadcast in sequence as shown in figure [ fig : bdcdiagram1 ] , and this results in the same cost , if not more , as basic point - to - point communications in [ fig : bdcdiagram2 ] .moreover , non - blocking point - to - point communications do not help either , as there is a limitation of bandwidth rather than latency .however , if a non - blocking multicast routine is available , all broadcasts in figure [ fig : bdcdiagram1 ] can occur in two relative time units in principle , since there are at most two listening events overlapped on every correlation node . to examine the timing in more detail , we first assume that all communications are blocking , every correlation node is assigned with constant correlation tasks , and every streaming node deals with data in a constant size .we also consider the time taken by a single data transfer , which could be either a send and receive pair or a broadcast .given is the time taken by an execution of the fx engines , is the number of streaming nodes , then the time taken by an entire processing cycle , , can be obtained using equation [ eq : bdcb ] for both cases show in figure [ fig : bdcdiagram ] . this indicates that by using blocking communications , the larger the number of streaming nodes , the more significant influence data transfers have on the overall performance , which leads to bad scalability . on the other hand ,if non - blocking point - to - point communications and double buffering are both applied , then improvements are seen in this case but when is so large that , the data transfer would still become a bottleneck , and the scalability problem still exists . however , if non - blocking multicasts are used in this model , then hence non - blocking multicasts can largely improve the efficiency , and in this case the time taken by a processing cycle is independent of the cluster size , which results in an excellent scalability as well . in practice , collective broadcasts usually mean more overhead and synchronization cost .hence the actual performance would never reach the ideal situation , especially when using blocking routines . the broadcast routine used by openmpi as presented by fagg et al . 
utilize a variety of software algorithms .however , the performance of these routines would be less than a true hardware multicast . in this work we only used the openmpi broadcast routine due to the limitation of the developing platform .we also completed another implementation based on basic point - to - point communications to verify our analysis above . in order to avoid the scalability problem while making it suitable for generic environments without requiring a specific non - blocking multicast support , we proposed the passing network topology as our second space - division model .as shown in figure [ fig : passing ] , in this model the input data is passed between neighbor nodes . since all correlation nodes take part in data streaming , dedicated streaming nodes are no longer necessary , which improves the node efficiency as a whole .figure [ fig : psdiagram ] illustrates the diagram of the passing model working with 10 correlation nodes , from which we can see that in this model each correlation node deals with four data transfers at most , being two sends and two receives , per processing cycle . with the same definitions as in equation [ eq : bdcb ] ,the time taken by a processing cycle can be obtained by equation [ eq : psb ] . similarly , if non - blocking communications and double buffering are applied , then thus the data transfer to execution ratio is independent of the cluster scale , which means a better theoretical scalability than the broadcasting model with blocking communication calls .additionally , by starting data flows from the auto - correlation nodes , it is ensured that every cross - correlation node has an identical distance from the two data sources it claims . as a result , for a cross - correlation node, the two chunks of input data from two different sources arrive at the same time , which saves any extra synchronization cost for input .it is noticeable from figure [ fig : psdiagram ] that correlation nodes are working asynchronously .more specifically , correlation nodes farther away from data sources have longer delays over time , although a processing cycle on different nodes still costs the same . taking this into account , given is the number of data sources , which is equal to the number of auto - correlation nodes lying on the hypotenuse of the triangle , and is the number of processing cycles in total , equation [ eq : psb ] should be re - written as when , we have hence the average amount of time taken by a processing cycle is not affected by the delay given the number of processing cycles is sufficiently large .however , the delay of a correlation node , , increases with the distance from the node to data sources , , as given by equation [ eq : delay ] , and this could have some negative effects on latency - critical systems . 
since the passing model proves to have an excellent scalability in principle , we implemented it in both blocking and non - blocking styles for comparison and analysis .double buffering is also applied in the non - blocking routine to make all data transfers happen concurrently .testing was carried out on the fornax supercomputer , which was designed for data intensive research , especially radio astronomy related data processing .fornax consists of 96 nodes , each having two intel xeon x5650 cpus , a nvidia tesla c2075 gpu and 72 gigabytes of system memory .the intel 5520 chipset is used in the compute node architecture , which enables the nvidia tesla c2075 gpu to work on an x16 pci - e slot and two 40gbps qlogic infiniband iba 7322 qdr cards on two x8 pci - e slots .the main storage of fornax is a 500 tb lustre - based shared file system .one of the two infiniband networks is dedicated to the communication between compute nodes and the lustre cluster . in terms of the software environment, fornax runs centos 6.2 with 2.6.32-131.21.1.el6.x86_64 linux kernel .the openmpi version adopted in this work is 1.6.0 .default configurations are applied for all communication stacks since our preliminary testing showed that the data transfers almost achieved the theoretical limit of the infiniband network by doing so .cuda 4.1.28 library with opencl 1.1 support was used for gpu computing .the fft implementation used for the f - engine was apple opencl fft 1.6 .all models presented were tested with the apple opencl fft for the f - engine and the modified 1xgxg model proposed by harris et al . for the x - engine .as the main purpose of this paper is to compare different data distribution models , the f - engine does not include station - based functions other than the fft , such as fringe rotation .furthermore , this paper is essentially looking at ska - scale arrays consisting of 300 to 3000 antennas , and in this case the x engine is more critical as its computational demand scales quadratically with the number of data streams while the f engine scales linearly .the metric flops used in all testing results is in single - precision and refers to the actual mathematical operations that are necessary for an fx correlator , which does not include indexing and redundant calculations for optimizing either the gpu memory access or the data transfers .this is a fair method to compare the performance between implementations on different hardware architectures , as the cost of indexing and redundant calculations could vary by several times in order to optimize algorithms for different hardware architectures or different network patterns , while that of the ultimate mathematical operations needed by the correlation algorithm does not change . 
in our testsall input data is packed in 8-bit integers .testing first investigated how the performance in tera - flops scales with the number of correlation nodes across all network models .as shown in figure [ fig : t ] , testing is conducted in four schemes , with the number of input data streams varying from 128 to 3072 .the six configurations used are the broadcasting model with the mpi_bcast routine and point - to - point data transfers , the passing model with single buffering and double buffering , and the time - division model with 4 and 8 streaming nodes .when the number of streams reaches 3072 , the time - division model is no longer available since a single gpu does not have enough memory to process all the streams .this is not an issue for the space - division models , as they subdivide the problem between gpus .the number of correlation nodes , which is also the number of gpus executing the fx engines , excluding streaming nodes , varies from 6 to the maximum configuration obtainable on fornax for each method .an fft length of 256 is used across all tests , as our preliminary tests showed that the throughput of the apple opencl fft does not significantly vary with the fft length in the range from 128 to 2048 , and the x stage performance is invariant with respect to fft length as long as there is sufficient data to feed the massively parallel model of gpu computing . shown in figure[ fig : pn ] is the overall performance averaged over the total number of correlation nodes .this demonstrates the node efficiency across all network models .based on our preliminary testing , the peak performance that the fx engines achieved on a single gpu is approximately 105 gflops .thus results shown in figure [ fig : pn ] also reveal how the overall performance is affected by the network transport involved for the cluster model .testing then investigated the sampling rate of input data achieved using our models with different configurations .as shown in figure [ fig : band ] , the number of data streams scales from 64 to 3072 .the lower limit was chosen as below it the streams are too few to feed the space - division models , while the upper limit was chosen as it is the largest number of data streams likely to be used in the foreseeable future .the time - division model was only tested with up to 2048 data streams due to the limit of the model suitability for the gpu hardware architecture .the output visibilities were not collected for the performance tests presented above . for correctness tests, we used the adaptive io system ( adios ) devised by jin et al . to write visibility files for up to 300 input data streams . by using adios on the lustre file system, data chunks for different subsets can be filled into a global data space asynchronously , and this enables each correlation node to write visibility data independently while keeping the data in a globally correct order .taking advantage of the buffering technique and the non - blocking io mechanism provided by adios , the performance loss caused by io was too little for us to measure for the testing schemes presented above working with up to 300 input data streams .testing results revealed that on current hardware architectures the time - division multiplex model is still the best choice for a gpu cluster correlator when the number of data streams is less than 1024 , as shown in figure [ fig : band ] . 
in the range between 1024 and 2048 ,space - division models start to overtake .most applications in the foreseeable future will be dealing with less than 1024 streams for which the time - division model is optimal .the only exception we are aware of so far is the ska mid phase 2 which is likely to have 3000 antennas forming a single beam .however , this is not to say space - division models will be the only way to deal with such large - scale correlation tasks . rather than the network itself ,the gpu architecture is one of the most significant factors leading to the performance turnover between 1024 and 2048 streams . due to the fact that the output data rate scales quadratically with the number of input streams , the larger the number of input streams is , the bigger proportion of gpu memory the output buffer takes .as the number of input streams increases , at a certain point the output buffer takes so much gpu memory that the input buffer is no longer large enough to hold data that can feed the massively parallel model of gpu computing , which is the major cause of the performance drop . if future gpus integrate more gpu memory , then it is possible to extend the optimum range of the time - division model .moreover , the turnover point might shift with a wide range of factors from both hardware and software aspects .this includes but is not limited to the hardware platform and configuration , the optimization of fx engines and the involvement of other correlator functionalities .another implication of the gpu memory limitation is that varying the fft length would not significantly affect the performance , as long as the gpu memory is enough for both input and output buffers .as the two buffers are both proportional to the fft length , increasing it does not change the proportion of gpu memory that the input buffer takes , and hence it does not negatively affect the parallel scale of the x - engine . the f - engine might be slightly affected depending on fft implementations .the performance of the apple opencl fft we used in this work does not vary significantly for the fft length up to 4k .however , the fft length does affect the maximum number of input streams that a single gpu is able to process , as both the input and output buffers need to be at least large enough to hold all intended data at the length of a single fft .moreover , this work did not investigate extra - long ffts beyond 8k , which potentially have some significant impacts on performance .the overall throughput that the time - division model can provide depends on the number of streaming nodes . as shown in figure [ fig : t ] , the model achieved better scalability when it was given 8 streaming nodes instead of 4 .furthermore , for real - time ska - scale correlation , 8 streaming nodes are still far from sufficient , otherwise each streaming node needs to handle more than hundreds of gigabytes of data per second , which is orders of magnitude beyond what the current technology can provide . 
however , unless the throughput is being limited by the streaming nodes , then adding more streaming nodes does not improve performance .therefore , we did not test the time - division model with more than 8 streaming nodes since this was sufficient in most cases across our testing schemes .the time - division model is likely to have excellent scalability even on larger clusters than our testing platform because firstly , based on our testing results shown in figure [ fig : t ] , the time - division model achieves a more linear scalability than space - division models , and secondly , the scalability is only limited by the number of streaming nodes rather than the network topology and a series of factors for space - division models .it seems from figure [ fig : t ] that the broadcasting model achieves better performance than the passing model on the same number of gpus when the number of data streams is large .however , this is based on the prerequisite that a considerable number of extra nodes are allocated as streaming nodes , as given by equation [ eq : oc_os ] .the passing model is promising in solving large scale correlation problems in the future .an obvious advantage is that it does not need any dedicated streaming nodes to re - organize and distribute data , for the auto - correlation nodes are the only nodes that receive input from external sources , and are able to receive streaming data in its original form .the topology also prevents network bottlenecks to a large extent , as the number of data transfers that each node deals with does not scale . in principle, the performance would scale linearly .our passing model testing results showed a near linear trend in which the performance falls behind the broadcasting model when large datasets are applied because the passing model does not perfectly suit the switch - based topology applied on fornax .it is likely to scale better on supercomputers with multi - dimensional torus topology , in which neighbor nodes have dedicated communication channels , as well as clusters with custom networks built - in to match our model . for the space - division models , it is debatable whether or not to process fft and cmac coherently on the same node .redundant ffts are introduced if they are on the same node , as correlation nodes in the same row or column claim the same input data streams . on the other hand ,if they are processed on separate nodes , the network load would increase by several times , as the data for a sample is usually packed in a small number of bits before fft and is expanded to the complex floating - point format in 64 bits afterward .thus it is ultimately a trade - off between compute and network . from figure[ fig : pn ] we can see that even using our optimum network model under its favorable configuration , the performance per node is still reduced by approximately 30% when the number of total correlation nodes scales up to 90 , compared with the peak single gpu performance which is 105 gflops .this indicates that for large - scale correlation problems , the network is where bottlenecks would mostly appear , rather than compute .additionally , in large - scale correlation systems , the fft only takes a small proportion of the entire fx correlator in terms of execution time , due to the fact that the computational demands of the fft scales linearly with the number of data streams while that of the cmac scales quadratically . 
in this case ,redundant ffts introducing minor performance loss are more desirable than increasing the network load by potentially an order of magnitude .the gpu fx engines used in this work achieved approximately 10% of the capacity of c2075 gpus by using the metric that counts only the mathematical operations . to includethe indexing and redundant calculations , a factor ranging from approximately 2 to 4 , depending on the fft length , the accumulation size and the network model , needs to be multiplied .the optimization techniques in gpu computing change significantly with hardware architectures , while the fx engines used in this work was designed several generations ago in terms of the gpu architecture .it is likely that fx engines optimized for newer gpu architectures can largely improve the performance of the gpu cluster correlator .the network models presented in this paper are applicable to such newer gpu fx engines , and also can be integrated in systems based on other hardware architectures such as cpus and fpgas . in our tests ,the output visibility data was only written to files on the lustre file system for up to 300 input streams for correctness verification .there are two reasons behind this , firstly for next - generation telescopes which generate enormous amount of data , the visibility data is not likely to be stored on hard disks , but rather being streamed directly to post - processing stages . in this case, the visibility data resulted from our models would need to be re - ordered in time - stamped sub - bands or potentially other patterns .this re - formatting process can occur concurrently with the gpu x - engine , and be fully parallelized and completed on each correlation node independently for the time - division model , where every correlation node processes all baselines . for the space - division models ,visibility data containing a subset of the baselines on each correlation node can be first split into sub - bands locally , and then gathered on post - processing nodes for all baselines .each post - processing node deals with a sub - band , so that the corner turning and imaging algorithms can be applied concurrently on each node without gathering all data onto a single node . there needs to be a streaming many - to - many network connecting correlation nodes and post - processing nodes , which is similar to what we implemented in our models to send input data from streaming nodes to correlation nodes .secondly , the output data rate is usually less critical than the input . shown in table [ tab : ratio ]are calculated input and output data rates for the ska phase 1 correlator from ford et al .as seen in these figures , the output data rates are much lower than the input .hence in immediate future the actual problem we are likely to face is still an input - limited correlation system rather than output - limited .correlators for the full scale ska might be output - limited . however , while trying to meet science requirements , the final design will also largely depend on how relevant technologies develop in the next decade and how the cost can be controlled in a reasonable range . 
taking these into account the implementation of the output networkis left to future work .c|cc case & ska 1 low & ska 1 mid + total input data rate & 18.24 tb / s & 750 gb / s + total output data rate & 2.405 tb / s & 136.7 gb / s + next - generation telescopes are likely to have an entirely streaming work flow in order to reduce the expense of storing intermediate data .this requires all correlation data to be processed in real time .however , being limited by current technology , when the problem size approaches the ska scale , the sampling rate of input signals achieved in our testing , as shown in figure [ fig : band ] , falls far behind what is required .this situation can be changed in three aspects in the future .firstly , new generation gpus are likely to double the performance every other year , and by the time ska - scale telescopes come into reality , the newest gpus would be at least an order of magnitude faster than what we used in our testing .secondly , as the gpu computing industry grows and more developer resources become available , optimizing the gpu fx engines would become easier .the hardware architecture and compiler would also evolve towards a direction that provides simpler ways to utilize more of the gpu capacity .while the gpu fx engines used in this work still have considerable space to optimize on current gpu architecture , evolving with new technology becomes even more important .thirdly , for ska - scale real - time correlation , it would eventually be necessary to scale our models on much larger clusters .this could be at the level of 10 to 100 times as large as our testing platform .in this case the network would become increasingly critical , and implementing our passing model on a cluster with multi - dimensional torus network would be a promising solution . the time - division model is another choice if future gpu architectures allow a single gpu to process all baselines of the telescope array .this work has investigated several ways to scale a single gpu based software correlator to clusters .we have investigated two major strategies , which are the time - division and space - division multiplex systems , and compared the performance over a range that is large enough to meet the requirements in the foreseeable future .our testing results have shown that for numbers of data streams smaller than 1024 , the time - division model is more efficient , while the passing topology of the space - division model showed advantages for large numbers of streams due to the more efficient use of the gpu memory .as it is difficult to predict the development of technology in the next decade , it is still too early to make statements as to how achievable it is to build a real - time gpu cluster correlator for a 3000-antenna telescope such as the ska mid phase 2 .meanwhile there is still considerable space for our models to be optimized .future work will therefore firstly focus on replacing the gpu fx engines with newer and more optimized implementations , the xgpu developed by clark et al . for instance , and optimizing models for real world projects . 
in terms of the network patterns, there is a possibility of designing a hybrid model combining advantages of both space - division and time - division models .orthogonal correlation triangles separating frequency channels prior to the cmac stage is another promising direction to investigate , which can generate output visibilities in a more friendly pattern for post - processing but involves the design of a complex communication network between fft and cmac .some non - performance - critical functionalities such as the delay compensation are also to be added to make a fully integrated system , as this work only investigates the compute intensive stages of an fx correlator . with the high flexibility of a software correlator , it is also sensible to integrate other functional techniques into the correlation flow , such as a coherent fast transient detector proposed by law et al . which needs to be placed between conjugate multiplications and accumulations within the x - engine .a. t. deller , s. j. tingay , m. bailes and c. west .: difx : a software correlator for very long baseline interferometry using multiprocessor computing environments .publications of the astronomical society of the pacific , 119 , 318336 ( 2007 ) .a. t. deller , w. f. brisken , c. j. phillips , j. morgan , w. alef , r. cappallo , e. middelberg , j. romney , h. rottmann , s. j. tingay and r. wayth .: difx2 : a more flexible , efficient , robust and powerful software correlator .publications of the astronomical society of the pacific , 123 , 275287 ( 2011 ) .j. w. romein , p. c. broekema , e. meijeren , k. schaaf , and w. h. zwart .: astronomical real - time streaming signal processing on a blue gene / l supercomputer .proceedings of the eighteenth annual acm symposium on parallelism in algorithms and architectures , 5966 ( 2006 ) .j. w. romein , p. c. broekema , j. d. mol and r. v. nieuwpoort . : the lofar correlator : implementation and performance analysis .proceedings of the 15th acm sigplan symposium on principles and practice of parallel programming , 169178 ( 2010 ) .r. b. wayth , l. j. greenhill , and f. h. briggs . : a gpu based real - time software correlation system for the murchison widefield array prototype .publications of the astronomical society of the pacific , 121(882 ) , 857865 ( 2009 ) .
|
next generation radio telescopes will require orders of magnitude more computing power to provide a view of the universe with greater sensitivity . in the initial stages of the signal processing flow of a radio telescope , signal correlation is one of the largest challenges in terms of handling huge data throughput and intensive computations . we implemented a gpu cluster based software correlator with various data distribution models and give a systematic comparison based on testing results obtained using the fornax supercomputer . by analyzing the scalability and throughput of each model , optimal approaches are identified across a wide range of problem sizes , covering the scale of next generation telescopes .
|
non - linear partial differential equations ( pde s ) are distinguished by the fact that , starting from smooth initial data , they can develop a singularity in finite time .very often , such a singularity corresponds to a physical event , such as the solution ( e.g. a physical flow field ) changing topology , and/or the emergence of a new ( singular ) structure , such as a tip , cusp , sheet , or jet . on the other hand, a singularity can also imply that some essential physics is missing from the equation in question , which should thus be supplemented with additional terms .( even in the latter case , the singularity may still be indicative of a real physical event ) .consider for example the physical case shown in fig .[ lava ] , which we will treat in section [ travel ] below .shown is a snapshot of one viscous fluid dripping into another fluid , close to the point where a drop of the inner fluid pinches off .this process is driven by surface tension , which tries to minimise the surface area between the two fluids . at a particular point in space and time , the local radius of the fluid neck goes to zero ; this point is a singularity of the underlying equation of motion . since the drop breaks into two pieces , there is no way the problem can be continued without generalising the formulation to one that includes topological changes .however , in this review we adopt a broader view of what constitutes a singularity . we consider it as such whenever there is a loss of regularity , which implies that there is a length scale which goes to zero .this is the situation under which one expects self - similar behaviour , which is our guiding principle .cm , the viscosity ratio is . ]a fascinating aspect of the study of singularities is that they describe a great variety of phenomena which appear in the natural sciences and beyond .some examples of such singular events occur in free - surface flows , turbulence and euler dynamics ( singularities of vortex tubes and sheets ) , elasticity , bose - einstein condensates , non - linear wave physics , bacterial growth , black - hole cosmology , and financial markets . in this paper we consider evolution equations , \label{ge}\ ] ] where ] with being a periodic function of period in .this is known as `` discrete self - similarity '' , since at times , n integer , the solution looks like a self - similar one ._ strange attractors _ ( section [ sec : strange ] ) + the dynamics on scale are described by a nonlinear ( low - dimensional ) dynamical system , such as the lorenz equation ._ multiple singularities _( section [ multiple ] ) + blow - up may occur at several points ( or indeed in any set of positive measure ) , in which case the description ( [ ds ] ) is not useful .we also describe cases where ( [ ss ] ) still applies , and blow - up occurs at a single point , but the underlying dynamics is really one of two singularities which merge at the singular time .+ & i , ii & stable ? & [ thin ] + & i & & + & & stable & [ thin ] + , ] ( dashed line ) and ( full line ) with . 
]first , the presence of logarithms implies that there is some dependence on initial conditions built into the description .the reason is that the argument inside the logarithm needs to be non - dimensionalised using some `` external '' time scale .more formally , any change in time scale leads to an identical equation if also lengths are rescaled according to .this leaves the prefactor in ( [ hmin_res ] ) invariant , but adds an arbitrary constant to .this is illustrated by comparing to a numerical simulation of the mean curvature equation ( [ mc ] ) close to the point of breakup , see fig .[ hminfig ] .namely , we subtract the analytical result ( [ hmin_res ] ) from the numerical solution and multiply by . as seen in fig.[hminfig ] , the remainder is varying slowly over 12 decades in .if the constant is adjusted , this small variation is seen to be consistent with the logarithmic dependence predicted by ( [ hmin_res ] ) .the second important point is that convergence in space is no longer uniform as implied by ( [ conv1 ] ) for the case of type i self - similarity .namely , to leading order the pinching solution is a cylinder . for this to be a good approximation, one has to require that the correction is small : .thus corrections become important beyond , which , in view of the logarithmic growth of , implies convergence in a constant region _ in similarity variables only_. as shown in , the slow convergence toward the self - similar behaviour has important consequences for a comparison to experimental data .mean curvature flow is also an example of a broader class of problems called generically `` geometric evolution equations '' .these are evolution equations intended to gain topological insight by flowing geometrical objects ( such as metric or curvature ) towards easily recognisable objects such as constant or positive curvature manifolds .the most remarkable example is the so called ricci flow , introduced in , which is the essential tool in the recent proof of the geometrisation conjecture ( including poincar s conjecture as a consequence ) by grigori perelman .namely , poincar s conjecture states that every simply connected closed 3-manifold is homeomorphic to the 3-sphere . being homeomorphic means that both are topologically equivalent and can be transformed one into the other through continuous mappings .such mappings can be obtained from the flow associated to an evolutionary pde involving fundamental geometrical properties of the manifold .thurston s geometrisation conjecture is a generalisation of poincar s conjecture to general 3-manifolds and states that compact 3-manifolds can be decomposed into submanifolds that have basic geometric structures .perelman sketched a proof of the full geometrisation conjecture in 2003 using ricci flow with surgery . 
starting with an initial 3-manifold ,one deforms it in time according to the solutions of the ricci flow pde ( [ ricci_g ] ) we consider below .since the flow is continuous , the different manifolds obtained during the evolution will be homeomorphic to the initial one .the problem is in the fact that ricci flow develops singularities in finite time , one of which we describe below .one would like to get over this difficulty by devising a mechanism of continuation of solutions beyond the singularity , making sure that such a mechanism controls the topological changes leading to a decomposition into submanifolds , whose structure is given by thurston s geometrisation conjecture .perelman obtained essential information on how singularities are like , essentially three dimensional cylinders made out of spheres stretched out along a line , so that he could develop the correct continuation ( also called `` surgery '' ) procedure and continue the flow up to a final stage consisting of the elementary geometrical objects in thurston s conjecture .ricci flow is defined by the equation for a riemannian metric , where is the ricci curvature tensor .the ricci tensor involves second derivatives of the curvature and terms that are quadratic in the curvature .hence , there is the potential for singularity formation and singularities are , in fact , formed .as perelman poses it , the most natural way to form a singularity in finite time is by pinching an almost round cylindrical neck .the structure of this kind of singularity has been studied in . by writing the metric of a -dimensional cylinder as where is the canonical metric of radius one in the , is the radius of the hypersurface at time and is the arclength parameter of the generatrix of the cylinder .the equation for then becomes in it is shown that for the solution close to the singularity admits a representation that resembles the one obtained for mean curvature flow : namely , ( [ ricci ] ) admits a constant solution , and the linearisation around it gives the same linear operator ( [ ling ] ) as for mean curvature flow .thus a pinching solution behaves as where the equation for is , with solution .the semilinear parabolic equation is again closely related to the mean curvature flow problem ( [ mc ] ) .namely , disregarding the higher order term in , ( [ mc ] ) becomes putting one finds which is ( [ semilinear ] ) in one space dimension and , once more neglecting higher - order non - linearities . as before , ( [ semilinear ] ) has the exact blow - up solution if , where is the space dimension , then there are no other self - similar solutions to ( [ semilinear ] ) , and blow - up is of the form ( [ semi_blow ] ) ( see , and for a recent review ) . as in the case of mean curvature flow , corrections to ( [ semi_blow ] )are described by a slowly varying amplitude : , \quad \xi = x'/t'^{1/2 } , \label{semi_corr}\ ] ] where obeys the equation this result holds in 1 space dimension . in higher dimensions ,one has to replace by the distance to the blow - up set .this covers all range of exponents ( larger than one , because otherwise there is no blow - up ) in dimensions and .the situation if is not so clear : if then there are solutions that blow - up and `` small '' solutions that do not blow - up .nevertheless , the construction of solutions as perturbations of constant self - similar solutions holds for any and any . 
a simple generalisation of ( [ semilinear ] ) results from considering a nonlinear diffusion operator , and now the blow - up character depends on the two parameters m and p , see .more complex logarithmic corrections are possible if the linearisation around the fixed point leads to a zero eigenvalue and cubic nonlinearities . as shown in , the equation for a slender cavity or bubble is where and is the radius of the bubble .dots denote derivatives with respect to time .the length measures the total size of the bubble .if for the moment one disregards boundary conditions and looks for solutions to ( [ bernoulli ] ) of cylindrical form , , one can do the integral to find it is easy to show that an an asymptotic solution of ( [ cyl ] ) is given by corresponding to a power law with a small logarithmic correction .indeed , initial theories of bubble pinch - off treated the case of an approximately cylindrical cavity , which leads to the radial exponent , with logarithmic corrections .however both experiment and simulation show that the cylindrical solution is unstable ; rather , the pinch region is rather localised , see fig .[ shape ] .therefore , it is not enough to treat the width of the cavity as a constant ; the width is itself a time - dependent quantity . in we show that to leading order the time evolution of the integral equation ( [ bernoulli ] ) can be reduced to a set of ordinary differential equations for the minimum of , as well as its curvature . between full numerical simulations of bubble pinch - off ( solid line ) and the leading order asymptotic theory ( dashed line ) ., width=264 ] namely , the integral in ( [ bernoulli ] ) is dominated by a local contribution from the pinch region . to estimate this contribution , it is sufficient to expand the profile around the minimum at : . as in previous theories ,the integral depends logarithmically on , but the axial length scale is provided by the inverse curvature . thus evaluating ( [ bernoulli ] ) at the minimum , one obtains to leading order which is a coupled equation for and .thus , a second equation is needed to close the system , which is obtained by evaluating the the second derivative of ( [ bernoulli ] ) at the pinch point : the two coupled equations ( [ a0]),([deltaequ ] ) are most easily recast in terms of the time - dependent exponents where , so are generalisations of the usual exponents in ( [ ss ] ) .the exponent characterises the time dependence of the aspect ratio . returning to the collapse ( [ cyl ] ) predicted for a constant solution, one finds that and . in the spirit of the the previous subsection , this is the fixed point corresponding to the cylindrical solution .now we expand the values of and around their expected asymptotic values and : and put . to leading order, the resulting equations are the linearisation around the fixed point thus has the eigenvalues and , in addition to the eigenvalue coming from time translation .as before , the vanishing eigenvalue is the origin of the slow approach to the fixed point observed for the present problem .the derivatives and are of lower order in the first two equations of ( [ l_sys ] ) , and thus to leading order and . 
using this , the last equation of ( [ l_sys ] )can be simplified to equation ( [ deg_ad ] ) is analogous to , but has a degeneracy of third order , rather than second order .equation yields , in an expansion for small , thus the exponents converge toward their asymptotic values only very slowly , as illustrated in fig .[ compare ] .this explains why typical experimental values are found in the range , and why there is a weak dependence on initial conditions bmsspl06 .this model describes the aggregation of microorganisms driven by chemotactic stimuli .the problem has biological meaning in 2 space dimensions .if we describe the density of individuals by and the concentration of the chemotactic agent by , then the keller - segel system reads where and are positive constants . in was shown that for radially symmetric solutions of ( [ ks1]),([ks2 ] ) singularities are such that to leading order blows up in the form of a delta function .the profile close to the singularity is self - similar and of the form where and the result comes from a careful matched asymptotics analysis that , in our notation , amounts to introducing the time - dependent exponent which has the fixed point .corrections are of the form where is controlled by a third - order non - linearity , as in the bubble problem : the cubic nonlinear schrdinger equation appears in the description of beam focusing in a nonlinear optical medium , for which the space dimension is .equation ( [ nlse ] ) belongs to the more general family of nonlinear schrdinger equations of the form and in any dimension .of particular interest , from the point of view of singularities , is the _critical case _ . in this case , singularities with slowly converging similarity exponents appear due to the presence of zero eigenvalues .we will describe this situation below , based on the formal construction of zakharov , later proved rigorously by galina perelman . at the moment, the explicit construction has only been given for , that is , for the quintic schrdinger equation .the same blow - up estimates have been shown to hold for any space dimension by merle and raphal , , without making use of zakharov s formal construction .merle and raphal also show that the stable solutions to be described below are in fact global attractors. in the critical case ( [ nlse_gen ] ) becomes in d=1 : this equation has explicit self - similar solutions ( in the sense that rescaling , , leaves the solutions unchanged except for the trivial phase factor ) of the form the function solves and is given explicitly by we seek solutions of ( [ nlse_quin ] ) using a generalisation of ( [ nlse_ss ] ) , which allow for a variation of the phase factors , and the amplitude to be different from a power law : where and satisfies when is constant , ( [ nlse_ansatz ] ) is a solution of ( [ nlse_quin ] ) if satisfy notice that the equation for is uncoupled , so we only need to solve the equations for simultaneously and then integrate the equation for .it is interesting for the following that , in addition to the solutions for constant , one can let vary slowly in time .the resulting system for is note the appearance of the factor in the last equation , which comes from a semiclassical limit of a linear schrdinger equation with appropriate potential ( see ) , and is an it follows from the presence of this factor that the non - linearity is beyond all orders , smaller than any given power , in contrast to the examples given above . 
as in section[ cavity ] , we rewrite the equations in terms of similarity exponents , to obtain the system : the advantage of this formulation is that the exponents have fixed points .there are two families of equilibrium points for ( [ nlse_e1])-([nlse_e4 ] ) : 1 . arbitrary positive or zero . arbitrary positive or zero .we first investigate case ( 1 ) by writing the final fixed point corresponding to the singularity is going to be . however , there are also equilibrium points for _ any _ , in which case the linearisation reads : this system has the matrix whose eigenvalues are : , and .the vanishing eigenvalue corresponds to the line of equilibrium points for , the positive eigenvalue to the direction of instability generated by a change in blow - up time .the eigenvector corresponding to the negative eigenvalue gives the direction of the stable manifold . at the point , there is an additional vanishing eigenvalue , and the equations become : where .the first two equations reduce to leading order to and , while the last two equations reduce to the nonlinear system : in the original -variable , the dynamical system is which controls the approach to the fixed point .the system ( [ nlse_orig ] ) is two - dimensional , corresponding to the two vanishing eigenvalues .integrating the first equation of ( [ nlse_los ] ) one gets , and thus using the second equation . from the last equation one obtains to leading order , so that thus we can conclude that in this fashion , one can construct a singular solution such that note the remarkable smallness of this correction to the `` natural '' scaling exponent of , which enters only as the logarithm of logarithmic time .the fixed points ( 2 ) can be analysed in a similar fashion .the linearisation leads to all eigenvalues are positive , so one can not expect these equilibrium points to be stable .one may also consider the blow - up of vortex solutions to both critical and supercritical solutions to nonlinear schrdinger equation in 2d .these are a subset of the general solutions to nlse that present a phase singularity at a given point .the singularities appear in the form of collapse of rings at that point .both the existence of such solutions and their stability have been considered recently in .the nonlinear schrdinger equation belongs to the broader class of nonlinear dispersive equations , for which many questions concerning existence and qualitative properties of singular solutions are still open .nevertheless , there have been recent developments that we describe next .the korteweg - de vries ( kdv ) equation describes the propagation of waves with large wave - length in a dispersive medium . for example , this is the case of water waves in the shallow water approximation , where represents the height of the wave . in the case of an arbitrary exponent of the nonlinearity , ( [ kdv ] ) becomes the generalised korteweg de vries equation : based on numerical simulations , conjectured the existence of singular solutions of ( [ kdv_gen ] ) with type - i self - similarity if . in , it was shown that in the _ critical case _ solutions may blow - up both in finite and in infinite time .lower bounds on the blow - up rate were obtained , but they exclude blow - up in the self - similar manner proposed by . 
the camassa - holm equation also represents unidirectional propagation of surface waves on a shallow layer of water .it s main advantage with respect to kdv is the existence of singularities representing breaking waves .the structure of these singularities in terms of similarity variables has not been addressed to our knowledge .the pinching of a liquid thread in the presence of an external fluid is described by the stokes equation . for simplicity, we consider the case that the viscosity of the fluid in the drop and that of the external fluid are the same .an experimental photograph of this situation is shown in fig .[ lava ] . to further simplify the problem, we make the assumption ( the full problem is completely analogous ) that the fluid thread is slender .then the equations given in simplify to where and the mean curvature is given by ( [ mean ] ) .here we have written the velocity in units of the capillary speed .the limits of integration and are for example the positions of the plates which hold a liquid bridge .dimensionally , one would once more expect a local solution of the form and has to be a linear function at infinity to match to a time - independent outer solution . in similarity variables , has the form we have chosen as a real - space variable close to the pinch - point , such that the similarity description is valid in ] .nevertheless , the existence of different self - similar solutions is known in a few particular cases , like the case , where is an odd integer ( see ) or in space dimension ( see ) .the character of the blow - up is controlled by the blow - up curve , which is the locus where the equation first blows up at a given point in space .it has been shown for that there exists a set of _ characteristic _ points , where the blow - up curve locally coincides with the characteristics of ( [ semi - wave ] ) .the set of non - characteristic points is open , and is on .recently , it has been shown that the blow - up at characteristic points is of type ii .even more intriguingly , it appears that the structure of blow - up at these points is such that the singularity results from the _ collision _ of two peaks at the blow - up point , very similar to the observation shown in fig .[ hs_simulation ] . in the hele - shaw equation of the previous subsection , different parts of the solution , characterised by different scaling laws , interacted with each other . in the generic case , however , finally blow - up only occurred at a single point in space .an example where singularities may even occur on sets of finite measure is given by reaction - diffusion equations of the family where is any bounded , open set in dimension .depending on the values of and singularities of ( [ react - diff ] ) may be regional ( blows up in subsets of of finite measure ) , or even global ( the solution blows - up in the whole domain ) ; see for instance and references therein .singularities may even happen in sets of fractional hausdorff dimension , i.e. , fractals .this is the case of the inviscid one - dimensional system for jet breakup ( cf . ) and might be case of the navier - stokes system in three dimensions , where the dimension of the singular set at the time of first blow - up is at most ( cf .this connects to the second issue we did not address here .it is the nature of the singular sets both in space and time , i.e. including possible continuation of solutions after the singularity . 
in some instances ,existence of global in time ( for all ) solutions to nonlinear problems can be established in a _weak sense_. for example , this has been achieved for systems like the navier stokes equations , reaction - diffusion equations , and hyperbolic systems of conservation laws .weak solutions allow for singularities to develop both in space and time . in the case of the three - dimensional navier - stokes system ,the impossibility of singularities `` moving '' in time , that is of curves within the singular set is well - known .hence , provided certain kinds of singularities do not persist in time , the question is how to continue the solutions after a singularity has developed .a first version of this paper was an outgrowth of discussions between the authors and r. deegan , preparing a workshop on singularities at the isaac newton institute , cambridge .the present version was written during the programme : `` singularities in mechanics : formation , propagation and microscopic description '' , organised with c. josserand and l. saint - raymond , which took place between january and april 2008 at the institut henri poincar in paris .we are grateful to all participants for their input , in particular c. bardos , m. brenner , m. escobedo , f. merle , h. k. moffatt , y. pomeau , a. pumir , j. rauch , s. rica , l. vega , t. witten , and s. wu . we also thank j. m. martin - garcia and j. j. l. velazquez for fruitful discussions and for providing us with valuable references .100 url # 1#1urlprefix[2][]#2 levine h a 1990 _ siam review _ * 32 * 262
|
we survey rigorous , formal , and numerical results on the formation of point - like singularities ( or blow - up ) for a wide range of evolution equations . we use a similarity transformation of the original equation with respect to the blow - up point , such that self - similar behaviour is mapped to the fixed point of a _ dynamical system_. we point out that analysing the dynamics close to the fixed point is a useful way of characterising the singularity , in that the dynamics frequently reduces to very few dimensions . as far as we are aware , examples from the literature either correspond to stable fixed points , low - dimensional centre - manifold dynamics , limit cycles , or travelling waves . for each `` class '' of singularity , we give detailed examples .
|
many natural , social and economic phenomena follow power laws .it has been previously ascertained that the distribution of incomes , size of cities , evolution of human language , internet and genetic networks , and scientific publications and citations , all follow power laws . finding a complete theory for describing these kind of systemsseem an impractical task , given the huge amount of degrees of freedom involved these social systems .notwithstanding , remarkable regularities were reported and studied , such as zipf s law , or the celebrated gibrat s law of proportional growth , which constitutes important milestones on the quest for a unified framework that mathematically describe predictable tendencies .firm size distributions ( fsd ) are the outcome of the complex interaction among several economic forces .entry of new firms , growth rates , business environment , government regulations , etc . , may shape different fsd .the underlying dynamics that drives the distribution of firms sizes is still an issue under intense scrutiny . according to gaffeo et al . , there is an active debate going on among industrial organization s scholars , in which log - normal , pareto , weibull , or a mixture of them , compete for the best - fitting distributions of fsd .one of the controversial issues is the very definition of `` size '' , which can be measured by different proxies such as annual sales , number of employees , total assets , etc . the seminal contribution by gibrat initiated a research line concerning the formal model that governs firms sizes and industry structure .the introduction of a theoretical model that would underlie the industrial demography could be of great help for authorities interested in maintaining fair competence and/or antitrust policies .hart and prais find , using a database of large firms , that average growth rates and sizes are independent variables .quandt states that pareto s distribution is often rejected when analyzing industries sub - sectors .other independent empirical studies , carried out by simon and bonnini , mansfield , and bottazzi and secchi , among others , confirm that firms growth rates are not related to firm size and that fsd follow a log - normal distribution . jacquemin and cardon de lichtbuer study the degree of firms and industry concentration in british firms using fortune s 200 largest industrial companies outside the united states , ranked according to sales .this study detects an increasing degree of concentration .kwasnicki affirms that skewed size distributions could be found even in the absence of economies of scale , and that the shape of the distribution is the outcome of innovation in firms .in particular , according to his simulations , cost improving innovations generate pareto - like skewed distributions .this work also reconciles the finding by ijiri and simon about the concavity toward the origin of the log - log rank size plot .such concavity could be produced by the evolutionary forces and innovation in the market .jovanic finds that rates of growth for smaller firms are larger and more variable than those of bigger firms .similar results are found empirically for dutch companies by marsili . 
on contrary ,vining had argued that the origin of the concavity is the existence of decreasing returns to scale .segal and spivak develop a theoretical model in which , under the presence of bankruptcy costs , the rate of growth of small firms is prone to be higher and more variable than that of larger firms .the same model also predicts that , for the largest firms , the sequence of growth rates is convergent satisfying gibrat s law , namely where is the size of the firm at time , its change in time , and a size - independent growth rate .this model is consistent with some previous empirical evidence , as that of mansfield .sutton has published a review of the literature on markets structure , highlighting the current challenges concerning fsd modelling . during the 1990s ,the interest in fsd experienced a revamp , with the availability of new data - bases .a drawback of early studies was a biased selection of firms .typically , data comprise only publicly traded firms , i.e. , the largest ones . in recent years , new, more comprehensive data sources became available .stanley et al . , use the zipf - plot technique in order to verify fittings of selected data for us manufacturing firms and find a non - lognormal right tail . shortly afterwards , stanley et al . encounter that the distribution of growth rates has an exponential form .kattuman studies intra - enterprise business size distributions , finding also a skewed distribution .axtell , using census data for all us firms , encounters that the fsd is right skewed , giving support for the workings of pareto s law .a similar finding is due to cabral and mata for portuguese manufacturing firms , although a log - normal distribution underestimates the skewness of the distribution and is not suitable for its lower tail . in this line, fu et al . find that , for pharmaceutical firms in 21 countries , and for us publicly traded firms , growth rates exhibit a central - portion distributed according to a laplace distribution , with power law tails .palestrini agrees with a power law distribution for firm sizes , although he models firm growth as a laplace distribution , that could change over business cycles . according to riccaboni , the simultaneous study of firm sizes and growth presents an intrinsic difficulty , arising from two facts : ( i ) the size distribution follows a pareto law and ( ii ) firms growth rate is independent of the firm s size .this latter property is known as the `` law of proportionate effect '' .growiec et al . study firms growth and size distributions using firms business units as units of measurements .this study reveals that the size of products follow a log - normal distribution , whereas firm - sizes decay as a power law .gaffeo et al . , using data from 38 european countries , find that log mean and log variance size are linearly related at sectoral levels , and that the strength of this relationship varies among countries .di giovanni et al . find that the exponent of the power law for french exporting firms is lower than for non - exporting firms , raising the argument of the influence of firm heterogeneity in the industrial demography .additionally , gallegati and palestrini and segarra and teruel , show that sampling sizes influence the power - law distribution .one can fairly assert that the concomitant literature has not yet reached a consensus regarding what model could best fit empirical data .an overview of several alternative models is detailed in ref . 
, and references therein .as shown in the above literature review , previous attempts to model growth and sizes of firms have not been entirely successful . in particular , there is a dispute concerning the underlying stochastic process that steers fsd . a possible solution in terms of agent - based model was proposed ; these models are remarkable as descriptive tools , but they do not furnish an overall panorama because are single - purpose models . besides , they are sensitive to the initial conditions , and , in some cases , their outcome depends on the length of the simulation time .the aim of this paper is twofold .first , to develop a thermodynamic - like theoretical model , able to capture typical features of firms distributions .we try to uncover the putative universal nature of fsd , which could be characterized by general laws , independent of `` microscopic '' details .secondly , to validate our theoretical model using an extensive database of spanish manufacturing firms during a long time - period .this paper contributes to the literature in several aspects .first , it provides an explanation for the stochastic distribution of firms sizes .the understanding of fsd is relevant for economic policy because it deals with market concentration , and thus , with competition and antitrust policy measures for example , naldi exhibits a relationship between zipf s law and some concentration indices .second , we apply our model to a large sample of spanish firms .third , this work expands the literature on industrial economics modelling .the paper is organized as follows .first , we present the theoretical framework and perform numerical experiments to validate our analytic approach .then , we show the empirical application to the spanish firms .finally , we draw some discussions and conclusions of our work .our framework is based on two fundamental hypothesis : 1 . a micro - economic dynamical hypothesis for individual firm growth ; and 2 .using the maximum entropy principle , with dynamical prior information , for describing macroeconomic equilibrium .for the micro - economical hypothesis , we assume gibrat s law of proportional growth ( eq . ) as the main mechanism underlying firms size evolution .a finite - size term , due to the central limit theorem , becomes dominant for medium and small sizes , being proportional to the square root of the size .in addition to these two terms , we also assume that non - proportional forces become eventually effective , being dominant for the smallest sizes .thus , our full dynamical equation is written as where ( , and ) are independent growth rates .it is expected that the growth rates are of a stochastic nature .thus , a _ temperature _ can be defined from their variance ] ( where is some reference value , in our case , the transition size ) which linearizes the dynamical equation as .thus , we write the macroscopic entropy for the system s density distribution for firms as = -\int du \rho(u ) \log[\rho(u)/n],\ ] ] the equilibrium density is obtained by extremization of under the empirical constraints , such as the total number of firms , the minimum size of a firms , among others .lacking them , as sometimes happens in physics , we will use a symmetry criterion : employ constraints that preserve a symmetry of scale of , i.e. translation symmetry in . 
to implement this criterion , we define an _ energy function _ $e ( u )$ , and the maxent solution takes the general form $\rho ( u ) \propto \exp [ -\beta \, e ( u ) ]$ , where $\beta$ is a lagrange multiplier ( the products of $\beta$ with the coefficients of $e ( u )$ become then the multipliers for each term ) . the values of the multipliers are obtained by solving the system of lagrange equations for this distribution . we consider , for simplicity , the linear regime with only the first two moments : a constraint on the average total number of firms and a constraint on the mean value of $u$ . since the equations are formally equivalent to those found in thermodynamics , where the multipliers associated with these constraints are traditionally written in terms of a chemical potential and an inverse temperature , we have a thermodynamic potential analogous to the grand potential of statistical mechanics , and the variational problem becomes the extremization of this potential .
( figure [ fig : temperatures ] : temperatures as a function of the year ; faded lines represent the temperature evolution for every autonomous community , bold blue and red lines represent the spanish mean temperature . )
( figure [ fig : spain_rank ] : rank plots of firm sizes , for positive and for negative ebitda ; the vertical dashed lines display the transition to the proportional - growth regime , where the equation of state holds . )
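in the proportional - growth regime the density above is exponential in $u$ , i.e. a pareto ( power ) law in $s$ , which is what makes the zipf plot of the largest firms a straight line . the following python sketch , with an arbitrary illustrative value of the multiplier $\beta$ , samples such a maxent density and recovers the pareto exponent from a rank - size fit ; it is a toy check of the construction , not an analysis of the spanish data .

```python
import numpy as np

rng = np.random.default_rng(1)

beta = 1.2          # lagrange multiplier / pareto exponent (illustrative)
s_star = 1.0        # transition size, lower cutoff of the proportional regime
n = 50000           # number of sampled firms

# rho(u) ~ exp(-beta u) on u >= 0  is equivalent to  s = s_star * exp(u) being pareto(beta)
u = rng.exponential(scale=1.0 / beta, size=n)
s = s_star * np.exp(u)

# zipf plot data: log rank versus log size; the slope should be close to -beta
s_sorted = np.sort(s)[::-1]
rank = np.arange(1, n + 1)
slope = np.polyfit(np.log(s_sorted), np.log(rank), 1)[0]
print("fitted zipf slope:", slope, " expected: -beta =", -beta)
```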
|
the distribution of firms growth and firms sizes is a topic under intense scrutiny . in this paper we show that a thermodynamic model based on the maximum entropy principle , with dynamical prior information , can be constructed that adequately describes the dynamics and distribution of firms growth . our theoretical framework is tested against a comprehensive database of spanish firms , which covers to a very large extent spain s economic activity , with a total of firms evolving along a full decade . we show that the empirical exponent of pareto s law , a rule often observed in the rank distribution of large - size firms , is explained by the capacity of the economic system for creating / destroying firms , and can be used to measure the health of a capitalist - based economy . indeed , our model predicts that when the exponent is larger than 1 , creation of firms is favored ; when it is smaller than 1 , destruction of firms is favored instead ; and when it equals 1 ( matching zipf s law ) , the system is in full macroeconomic equilibrium , entailing `` free '' creation and/or destruction of firms . for medium and smaller firm sizes , the dynamical regime changes ; the whole distribution can no longer be fitted to a single simple analytic form and numerical prediction is required . our model constitutes the basis of a full predictive framework for the economic evolution of an ensemble of firms that can potentially be used to develop simulations and test hypothetical scenarios , such as an economic crisis or the response to specific policy measures .
|
it has long been the case that pressure determinations for incompressible flows , both navier - stokes and magnetohydrodynamic ( mhd ) , are known to be highly non - local . taking the divergence of the equation of motion $$\partial_t \mathbf{v} + ( \mathbf{v} \cdot \nabla ) \mathbf{v} = -\nabla p + \frac{1}{c} \, \mathbf{j} \times \mathbf{b} + \nu \nabla^{2} \mathbf{v} \label{eq : eqmo}$$ and using $\nabla \cdot \mathbf{v} = 0$ leaves us with a poisson equation for the pressure , which is said to function as an equation of state : $$\nabla^{2} p = \nabla \cdot \left [ \frac{1}{c} \, \mathbf{j} \times \mathbf{b} - ( \mathbf{v} \cdot \nabla ) \mathbf{v} \right ] . \label{eq : pe}$$ here , $\mathbf{v}$ is the fluid velocity field as a function of position and time , $\mathbf{b}$ is the magnetic field , $\mathbf{j}$ is the electric current density , $c$ is the speed of light , $\nu$ is the kinematic viscosity , assumed spatially uniform and constant , and $p$ is the pressure normalized to the mass density , also spatially uniform . eqs . ( [ eq : eqmo ] ) and ( [ eq : pe ] ) are written for mhd . their navier - stokes equivalents can be obtained simply by dropping the terms containing $\mathbf{j}$ and $\mathbf{b}$ . if we are to solve ( [ eq : pe ] ) for $p$ , boundary conditions are required . in the immediate neighborhood of a stationary `` no - slip '' boundary , both the terms on the left of ( [ eq : eqmo ] ) vanish and we are left with the following equation for $\nabla p$ as a boundary condition : $$\nabla p = \frac{1}{c} \, \mathbf{j} \times \mathbf{b} + \nu \nabla^{2} \mathbf{v} . \label{eq : bcs}$$ we now focus on the navier - stokes case , where the magnetic terms disappear from ( [ eq : bcs ] ) , for simplicity . all the complications of mhd are illustrated by this simpler case . it is apparent that ( [ eq : bcs ] ) must apply to all components of $\nabla p$ , and that while the normal component of ( [ eq : bcs ] ) is enough to determine $p$ through neumann boundary conditions , the tangential components of ( [ eq : bcs ] ) at the wall equally well determine $p$ through dirichlet boundary conditions . this is a problem which some inventive procedures have been proposed to resolve , usually by some degree of `` pre - processing '' or various dynamical recipes which seem to lead to approximately no - slip velocity fields after a few time steps ( e.g. , and ) . it is not our purpose to review or critique these recipes , but rather to focus on a set of velocity fields , related to chandrasekhar - reid functions , for which ( [ eq : pe ] ) is explicitly soluble at a level where the neumann or dirichlet conditions can be exactly implemented . in [ sec : pdetermin ] , we explore the difference between the two pressures so arrived at . then in [ sec : discussion ] , we propose to replace the long - standing practice of demanding that all components of a solenoidal $\mathbf{v}$ vanish at material walls by a wall friction term for which the above mathematical difficulty is no longer present . of course , similar statements and options will apply to all comparable incompressible mhd problems . restricting attention at present to the navier - stokes case , we consider two - dimensional , solenoidal velocity fields obtained from a stream function of the form $$\psi ( x , y ) = a \cos ( \alpha x ) \left [ \cos ( \beta y ) + \kappa \cosh ( \alpha y ) \right ] . \label{eq : sf}$$ the hyperbolic cosine term in ( [ eq : sf ] ) contributes a potential flow velocity component to $\mathbf{v}$ which makes it possible to demand that $\mathbf{v}$ obey two boundary conditions : the vanishing of both components at rigid walls . the function in ( [ eq : sf ] ) is even in $x$ and $y$ , but can obviously be converted into an odd or mixed one by the appropriate trigonometric substitutions . the velocity field has only $x$ and $y$ components and is periodic in $x$ , with an arbitrary wavenumber $\alpha$ . $a$ is a normalizing constant , and the constants $\beta$ and $\kappa$ can systematically be found numerically to any desired accuracy so that both components of $\mathbf{v}$ vanish at symmetrically placed no - slip walls at $y = + 1$ and $y = -1$ .
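as a sketch of how such no - slip pairs can be computed , the following python snippet ( using scipy ) solves the two wall conditions that follow from the stream function written above ; note that the explicit form of $\psi$ used here is our reconstruction from the surrounding text , and the wavenumber $\alpha$ and the starting guesses are illustrative .

```python
import numpy as np
from scipy.optimize import fsolve

# stream function (as reconstructed above): psi = A cos(alpha x) [cos(beta y) + kappa cosh(alpha y)]
# with v_x = d(psi)/dy and v_y = -d(psi)/dx, no-slip at the walls y = +/-1 requires
#   v_y = 0 :  cos(beta) + kappa cosh(alpha) = 0
#   v_x = 0 : -beta sin(beta) + kappa alpha sinh(alpha) = 0
alpha = 2.0  # illustrative wavenumber

def noslip(params):
    beta, kappa = params
    return [np.cos(beta) + kappa * np.cosh(alpha),
            -beta * np.sin(beta) + kappa * alpha * np.sinh(alpha)]

# different starting guesses for beta pick out successive members of the
# infinite sequence of admissible (beta, kappa) pairs for this alpha
for beta_guess in (2.0, 5.5, 8.5):
    kappa_guess = -np.cos(beta_guess) / np.cosh(alpha)
    beta, kappa = fsolve(noslip, [beta_guess, kappa_guess])
    print(f"beta = {beta:.6f}, kappa = {kappa:.6f}, residuals = {np.round(noslip([beta, kappa]), 12)}")
```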
in fact , for given $\alpha$ , an infinite sequence of such pairs of $\beta$ and $\kappa$ can be determined straightforwardly . thus any such $\mathbf{v}$ , or superposition thereof , is not only solenoidal , but has both components zero at $y = \pm 1$ , and all spatial derivatives exist . moreover , the `` source '' term on the right - hand side of ( [ eq : pe ] ) is of a relatively simple nature for such a $\mathbf{v}$ , since every term in it can be written as a product of exponentials in $x$ and in $y$ . it is straightforward to find an inhomogeneous solution for $p$ , which then is the same for all boundary conditions for a given $\mathbf{v}$ of the form stated . to this inhomogeneous part of $p$ may be added a solution of laplace s equation . this can be chosen so that the total $p$ may satisfy either the normal component of ( [ eq : bcs ] ) at the walls , or the tangential component of it , but not both . the determination involves only simple but tedious algebra .
( figure [ fig : vfield ] : arrow plot of the velocity field obtained using $\psi$ from ( [ eq : sf ] ) . )
we illustrate , in figure [ fig : vfield ] , an arrow plot of the velocity field given by one such choice of $a$ , $\alpha$ and the pair ( $\beta$ , $\kappa$ ) . the two pressures resulting from the satisfaction of the normal and tangential components of ( [ eq : bcs ] ) can best be compared by comparing their respective values of $\nabla p$ , since $p$ itself is indeterminate up to an additive constant in both cases . in figure [ fig : gradp ] , we display , as an arrow plot , the difference between the pressure gradients associated with the velocity field shown in figure [ fig : vfield ] . we have rewritten ( [ eq : eqmo ] ) - ( [ eq : bcs ] ) in dimensionless units for this purpose , with the kinematic viscosity being replaced by the reciprocal of a reynolds number defined from the mean flow speed ; here , the angle brackets refer to the mean of $| \mathbf{v} |$ taken over the 2-d box , containing one period in the $x$ direction and extending from $y = -1$ to $y = + 1$ . figure [ fig : gradp ] is constructed for a fixed value of this reynolds number , with the dimensionless version of $\psi$ in ( [ eq : sf ] ) . the two pressures are similar but not identical .
( figure [ fig : gradp ] : arrow plot of the difference between the two pressure gradients . )
in figure [ fig : contour ] , a fractional measure of the difference between the `` neumann pressure '' and the `` dirichlet pressure '' is exhibited as a contour plot of a scalar ratio constructed for this purpose . there is no absolute significance to the numerical value of this ratio . it initially increases with $y$ , approaching a maximum near the wall . it is considered interesting , however , that the fractional difference is nearly x - independent where it is largest . that occurs formally because the algebra reveals it to be dominated there by a single term .
( figure [ fig : contour ] : contour plot of the ratio ; note that the fractional difference between the two values of $\nabla p$ is significant only near the wall . )
it is amusing but perhaps not significant to superpose the velocity field from ( [ eq : sf ] ) with a parabolic plane poiseuille flow of a larger amplitude . the resulting flow field is shown in figure [ fig : vstreet ] , and it bears a striking but perhaps not significant similarity to the flow patterns seen in two - dimensional plane poiseuille flow when linear stability thresholds are approached . the pressure gradient difference for this case will be fractionally smaller than in figure [ fig : contour ] , since pure parabolic plane poiseuille flow is a rare case where the two pressures happen to agree , and it quantitatively dominates the pressures determined from equation ( [ eq : pe ] ) in this example .
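the comparison just described can be reproduced in a rough numerical form . the sketch below builds $\mathbf{v}$ from the reconstructed stream function , forms the poisson source of ( [ eq : pe ] ) , and solves for the pressure twice , once with neumann data and once with dirichlet data obtained by integrating the tangential wall condition in $x$ ; the ( $\beta$ , $\kappa$ ) pair , the value of $\nu$ , the resolution and the low - order discretization are all illustrative choices , so the numbers are only indicative .

```python
import numpy as np

alpha, nu = 2.0, 0.05                 # wavenumber and viscosity (1/reynolds); illustrative
beta = 2.4810                         # first no-slip root for alpha = 2 (see previous sketch)
kappa = -np.cos(beta) / np.cosh(alpha)
nx, ny = 64, 201
x = np.linspace(0.0, 2*np.pi/alpha, nx, endpoint=False)
y = np.linspace(-1.0, 1.0, ny); dy = y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing='ij')

# velocity from psi = cos(alpha x)[cos(beta y) + kappa cosh(alpha y)]
vx = np.cos(alpha*X)*(-beta*np.sin(beta*Y) + kappa*alpha*np.sinh(alpha*Y))
vy = alpha*np.sin(alpha*X)*(np.cos(beta*Y) + kappa*np.cosh(alpha*Y))

kmodes = alpha*np.fft.fftfreq(nx, d=1.0/nx)          # physical wavenumbers in x
def ddx(f): return np.real(np.fft.ifft(1j*kmodes[:, None]*np.fft.fft(f, axis=0), axis=0))
def ddy(f): return np.gradient(f, dy, axis=1)

ax_, ay_ = vx*ddx(vx) + vy*ddy(vx), vx*ddx(vy) + vy*ddy(vy)
S = -(ddx(ax_) + ddy(ay_))                           # source of the pressure poisson equation

lap_vx = ddx(ddx(vx)) + ddy(ddy(vx))
lap_vy = ddx(ddx(vy)) + ddy(ddy(vy))
gN = nu*lap_vy[:, [0, -1]]                           # neumann data: dp/dy at the two walls
def int_x(f):                                        # zero-mean antiderivative in x
    fk = np.fft.fft(f); out = np.zeros_like(fk)
    out[1:] = fk[1:]/(1j*kmodes[1:])
    return np.real(np.fft.ifft(out))
pD = np.column_stack([int_x(nu*lap_vx[:, 0]), int_x(nu*lap_vx[:, -1])])   # dirichlet wall data

def solve(bc):
    Sk, pk = np.fft.fft(S, axis=0), np.zeros((nx, ny), dtype=complex)
    for m, k in enumerate(kmodes):                   # per-mode two-point boundary value problem
        A = np.zeros((ny, ny), dtype=complex); b = Sk[m].astype(complex)
        for j in range(1, ny-1):
            A[j, j-1] = A[j, j+1] = 1.0/dy**2
            A[j, j] = -2.0/dy**2 - k**2
        if bc == 'neumann':
            A[0, 0], A[0, 1] = -1.0/dy, 1.0/dy;    b[0] = np.fft.fft(gN[:, 0])[m]
            A[-1, -1], A[-1, -2] = 1.0/dy, -1.0/dy; b[-1] = np.fft.fft(gN[:, 1])[m]
            if abs(k) < 1e-12:                       # pin the arbitrary constant of the k = 0 mode
                A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
        else:
            A[0, 0] = 1.0;   b[0] = np.fft.fft(pD[:, 0])[m]
            A[-1, -1] = 1.0; b[-1] = np.fft.fft(pD[:, 1])[m]
        pk[m] = np.linalg.solve(A, b)
    return np.real(np.fft.ifft(pk, axis=0))

pN, pDir = solve('neumann'), solve('dirichlet')
d = np.hypot(ddx(pN - pDir), ddy(pN - pDir))
print("max |grad(p_N - p_D)| near the walls:", max(d[:, :5].max(), d[:, -5:].max()))
print("max |grad(p_N - p_D)| at mid-channel:", d[:, ny//2].max())
```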
( figure [ fig : vstreet ] : velocity field from ( [ eq : sf ] ) plus a parabolic plane poiseuille flow of larger amplitude . )
an alternative to the no - slip condition is the `` navier '' boundary condition : the slip velocity at the wall surface is taken to be proportional to the rate of shear at the wall . this may be expressed as $v_{s} = l_{s} \, \dot{\gamma}$ , where $v_{s}$ is the slip velocity of the fluid at the wall , $\dot{\gamma}$ is the rate of shear at the wall and $l_{s}$ is a constant with the dimensions of length . molecular dynamics simulations of newtonian liquids under shear have shown this to be the case under some circumstances . in fact , recent work has shown that , in cases where the shear rate is large , there is a nonlinear relationship between $v_{s}$ and $\dot{\gamma}$ . we note that the velocity field shown in figure [ fig : vfield ] does not lead to one which obeys the navier boundary condition after an initial time step in which the fluid has been allowed to slip at the wall . if the velocity field determined by ( [ eq : sf ] ) is advanced in time using ( [ eq : eqmo ] ) with the `` neumann pressure '' , the proportionality between the slip velocity and the rate of shear at the wall , after the initial time step , varies sinusoidally with $x$ . it is difficult to see in what sense the velocity field obtained from ( [ eq : sf ] ) might be an unacceptable one from the point of view of the navier - stokes or mhd descriptions . it seems to have all the properties that are thought to be relevant . the family of functions of the same x - periodicity in ( [ eq : sf ] ) can be shown to be orthogonal , and is a candidate for a complete set , in which any $\mathbf{v}$ might be expanded , when supplemented by flux - bearing functions of $y$ alone . the mathematical question of which , if any , velocity fields that are both solenoidal and vanish at the wall would lead to neumann and dirichlet pressures in agreement with each other must remain open . indeed , the question of whether there are any , without some degree of `` pre - processing , '' must remain open . this is an unsatisfactory situation for fluid mechanics and mhd , in our opinion , even if it is not an unfamiliar one . the search for alternatives seems mandatory . one alternative that may be explored is one that seemed some time ago , in a rather different context [ shan & montgomery 1994a , b ] , to have worked well enough for mhd . namely , we may think of replacing the requirement of the vanishing of the tangential velocity at a rigid wall with a wall friction term , added to the right - hand side of ( [ eq : eqmo ] ) , of the form $-\sigma ( \mathbf{x} ) \, \mathbf{v}$ , where the coefficient $\sigma$ vanishes in the interior of the fluid and rises sharply to a large positive value near the wall . the region over which it is allowed to rise should be smaller than the characteristic thickness of any boundary layer that it might be intended to resolve , but seems otherwise not particularly restrictive . such a term provides a mechanism for momentum loss to the wall and constrains the tangential velocity to small values , but does not force it to zero . the dirichlet boundary condition disappears in favor of a relation that permits the time evolution of the tangential components of $\mathbf{v}$ , while demanding that $p$ be determined solely by the neumann condition ( the normal component of ( [ eq : bcs ] ) only ) . in a previous mhd application [ shan & montgomery 1994a , b ] dealing with rotating mhd fluids , the scheme seemed to perform acceptably well , but was not intensively tested or benchmarked sharply against any of the better understood navier - stokes flows . this comparison seems worthy of future attention . the work of one of us ( d.c.m .
) was supported by hospitality in the fluid dynamics laboratory at the eindhoven university of technology in the netherlands .a preliminary account of this work was presented orally at a meeting of the american physical society .
|
certain unresolved ambiguities surround pressure determinations for incompressible flows , both navier - stokes and magnetohydrodynamic . for uniform - density fluids with standard newtonian viscous terms , taking the divergence of the equation of motion leaves a poisson equation for the pressure to be solved . but poisson equations require boundary conditions . for the case of rectangular periodic boundary conditions , pressures determined in this way are unambiguous . but in the presence of `` no - slip '' rigid walls , the equation of motion can be used to infer both dirichlet and neumann boundary conditions on the pressure , and thus amounts to an over - determination . this has occasionally been recognized as a problem , and numerical treatments of wall - bounded shear flows usually have built in some relatively _ ad hoc _ dynamical recipe for dealing with it , often one which appears to `` work '' satisfactorily . here we consider a class of solenoidal velocity fields which vanish at no - slip walls , have all spatial derivatives , but are simple enough that explicit analytical solutions for the pressure can be given . satisfying the two boundary conditions separately gives two pressures , a `` neumann pressure '' and a `` dirichlet pressure '' , which differ non - trivially at the initial instant , even before any dynamics are implemented . we compare the two pressures , and find that , in particular , they lead to different volume forces near the walls . this suggests a reconsideration of no - slip boundary conditions , in which the vanishing of the tangential velocity at a no - slip wall is replaced by a local wall - friction term in the equation of motion . to appear in journal of plasma physics .
|
biological populations are often exposed to catastrophic events that cause mass extinction : epidemics , natural disasters , etc . when mild versions of these disasters occur , survivors may develop strategies to improve the odds of their species survival . some populations adopt dispersion as a strategy : individuals of these populations disperse , trying to create new colonies that may succeed in settling down , depending on the new environment they encounter . recently , schinazi and machado _ et al . _ proposed stochastic models for this kind of population dynamics . for these models they concluded that dispersion is a good survival strategy . earlier , lanchier considered the basic contact process on the lattice modified so that large sets of individuals are simultaneously removed , which also models catastrophes . in this work there are qualitative results about the effect of the shape of those sets on the survival of the process , with interesting non - monotonic results , and dispersion is proved to be a better strategy in some contexts . moreover , brockwell _ et al . _ and later artalejo _ et al . _ considered a model for the growth of a population ( a single colony ) subject to collapse . in their model , two types of effects when a disaster strikes were analyzed separately , the _ binomial effect _ and the _ geometric effect _ . after the collapse , the survivors remain together in the same colony ( there is no dispersion ) . they carried out an extensive analysis including the first extinction time , the number of individuals removed , the survival time of a tagged individual , and the maximum population size reached between two consecutive extinctions . for a nice literature overview and motivation see kapodistria _ et al . _ . based on the model proposed by artalejo _ et al . _ , and adapting some ideas from schinazi and machado _ et al . _ , we analyze growth models of populations subject to disasters , where after a collapse the species adopts dispersion as a survival strategy . we show that dispersion is not always a good strategy to avoid population extinction : it strongly depends on the effect of a catastrophic event , the spatial constraints of the environment and the probability that each exposed individual survives when a disaster strikes . this paper is divided into four sections . in section 2 we define and characterize three models for the growth of populations subject to collapses . in section 3 we compare the three models introduced in section 2 and determine under what conditions dispersion is a good strategy for survival , depending on space restrictions and on the effects when a disaster strikes . finally , in section 4 we prove the results from sections 2 and 3 . first we describe a model presented in artalejo _ et al . _ this is a model for a population which sticks together in one colony , without dispersion . the colony gives birth to a new individual at rate $\lambda$ , while collapses happen at a certain rate . if at a collapse time the size of the population is $i$ , it is reduced to $j$ with probability $\mu_{i j}$ . the parameters $\mu_{i j}$ are determined by how the collapse affects the population size . next we describe two types of effects .
_ binomial effect : _ disasters reach the individuals simultaneously and independently of everything else .each individual survives with probability ( dies with probability ) , meaning that _ geometric effect : _ disasters reach the individuals sequentially and the effects of a disaster stop as soon as the first individual survives , if there are any survivor .the probability of next individual to survive given that everyone fails up to that point is which means that the binomial effect is appropriate when the catastrophe affects the individuals in a independent and even way . the geometric effect would correspond to cases where the decline in the population is halted as soon as any individual survives the catastrophic event .this may be appropriate for some forms of catastrophic epidemics or when the catastrophe has a sequential propagation effect like in the predator - prey models - the predator kills prey until it becomes satisfied .more examples can be found in artalejo _et al . _ and in cairns and pollett . in artalejo _et al . _ the authors consider the binomial and the geometric effect separately as alternatives to the total catastrophe rule which instantaneously removes the whole population whenever a catastrophic event occurs . herewe consider a mixture of both effects , that is , with probability the group is striken sequentially ( geometric effect ) and with probability the group is striken simultaneously ( binomial effect ) .more precisely , we assume that the collapse rate equals 1 .the size of the population ( number of individuals in the colony ) at time is a continuous time markov process whose infinitesimal generator is given by we also assume and denote by the process described by .when and , we obtain the models considered in artalejo _et al . _ .[ th : semdisp ] let a process , with and .then , extinction ( which means for some ) occurs with probability moreover , if , or and the time it takes until extinction has finite expectation .the result of theorem [ th : semdisp ] has been shown by artalejo _et al . _ for the cases and .they use the word _extinction _ to describe the event that , for some , for a process where state 0 is not an absorbing state .in fact the extinction time here is the first hitting time to the state 0 .we keep using the word extinction for this model trough the paper . from their resultone can see that survival is only possible when the effect is purely geometric ( ) .the reason for that is quite clear : if the binomial effect strikes at rate so even if one considers when the geometric effect strikes , the population will die out as proved in artalejo _et al . _ for the case . 
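a quick way to get a feel for this single - colony model is direct simulation . the python sketch below implements births at rate $\lambda$ ( per colony ) , collapses at rate 1 , and the mixture of the two effects ( geometric with probability $r$ , binomial with probability $1 - r$ ) , and estimates how often the colony hits size 0 before a time horizon ; the parameter values and the horizon are illustrative , and the snippet is only meant to echo the qualitative message above , not to reproduce the exact formulas of the theorem .

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 1.5    # birth rate of the colony (collapse rate is 1); illustrative
p   = 0.5    # survival probability of an exposed individual; illustrative

def collapse(i, r):
    """colony size after a collapse hits a colony of size i."""
    if rng.random() < r:                    # geometric effect: deaths stop at the first survivor
        k = rng.geometric(p)                # trial index of the first survivor
        return i - k + 1 if k <= i else 0
    return rng.binomial(i, p)               # binomial effect: independent survival of each individual

def hits_zero(r, t_max=200.0):
    """simulate until the colony first hits size 0, or give up at t_max."""
    n, t = 1, 0.0
    while n > 0 and t < t_max:
        t += rng.exponential(1.0 / (lam + 1.0))    # next event: birth (rate lam) or collapse (rate 1)
        if rng.random() < lam / (lam + 1.0):
            n += 1
        else:
            n = collapse(n, r)
    return n == 0

for r in (1.0, 0.5):    # purely geometric effect versus an even mixture of the two effects
    frac = np.mean([hits_zero(r) for _ in range(1000)])
    print(f"r = {r}: fraction of runs extinct before t = 200: {frac:.3f}")
```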
consider a population of individuals divided into separate colonies .each colony begins with an individual .the number of individuals in each colony increases independently according to a poisson process of rate .every time an exponential time of mean 1 occurs , the colony collapses through a binomial or a geometric effect and each of the collapse survivors begins a new colony independently of everything else .we denote this process by and consider it starting from a single colony with just one individual .the following theorem establishes necessary and sufficient conditions for survival in [ th : disp1 ] the process survives with positive probability if and only if theorem [ th : disp1 ] shows that , contrary to what happens in , in the population is able to survive even when the binomial effect may occur .see example [ ex : bin ] .in particular , if ( pure binomial effect ) the process survives with positive probability whenever .the next result shows how to compute the probability of extinction , which means , the probability that eventually the system becomes empty .[ th : disp2 ] let be the probability of extinction in . then is the smallest non - negative solution of =s\ ] ] [ ex : bin ] for the smallest non - negative solution for the equation is given by for ( pure binomial effect ) and ( pure geometric effect ) the smallest non - negative solution for ( [ probext ] ) is : observe that where the strict inequality holds provided moreover , * if then * if then and * if then and note that likewise as occurs in , the binomial effect is a worst scenary than the geometric effect for the population survival in .observe that for . ] + + = \displaystyle\sum_{j=0}^{i+1}f(j)p_{i , j } \le ( i+2)^2 < \infty \text { for } i\in a. \end{array} ] .the probability of extinction for is the smallest non - negative solution of where is the probability generating function of .[ l : disp ] the probability generating function of is given by : \ ] ] and =\frac{p(\lambda+1)^2r}{\lambda p+1 } + p(\lambda+1)(1-r).\ ] ] is the number of colonies in the first generation of denote and firstly we show that &=&\left\{\begin{array}{ll } \displaystyle\frac{1+\lambda}{\lambda ( 1+\lambda p)}\left(\frac{\lambda p}{1+\lambda p}\right)^ k , & k\geq 1 \vspace{0,2 cm } \\\displaystyle\frac{q}{1+\lambda p } , & k=0.\end{array}\right.\end{aligned}\ ] ] &=&\left\{\begin{array}{ll } \displaystyle\frac{p}{1+\lambda p}\left(\frac{\lambda } { 1+\lambda } \right)^ { k-1 } , & k\geq 1 \vspace{0,2cm}\\ \displaystyle\frac{q}{1+\lambda p } , & k=0.\\\end{array}\right.\end{aligned}\ ] ] let us consider the following random variables * the lifetime of the collony until the collapse time ; * the density of the random variable t ; * the amount of individuals created in a collony until it collapes .observe that =\int_0^{\infty } f_t(t ) \sum_{n= 0 \vee k-1}^{\infty } \mathbb{p}(x_t = n|t = t)\mathbb{p}(z_b = k|x_t = n ; t = t)dt.\end{aligned}\ ] ] then , for , we have that =\int_0^\infty e^{-t } \sum_{n=0}^\infty \frac{e^{-\lambda t}(\lambda t)^n}{n ! } q^{n+1 } dt = q\int_0^\infty e^{-(\lambda p+1)t } dt=\frac{q}{1+\lambda p}.\ ] ] for &=&\displaystyle\int_0^\infty e^{-t } \sum_{n = k-1}^\infty \frac{e^{-\lambda t } ( \lambda t)^n}{n!}{n+1 \choose k } p^kq^{n+1-k } dt \\ & = & q\left(\displaystyle\frac{p}{q}\right)^k \displaystyle\sum_{n = k-1}^\infty { n+1 \choose k } \frac{(\lambda q)^n}{n ! 
} \int_0^\infty e^{-(\lambda + 1)t } \ t^n dt \\ & = & q\left(\displaystyle\frac{p}{q}\right)^k \displaystyle\sum_{n = k-1}^\infty { n+1 \choose k } \frac{(\lambda q)^n}{n ! } \frac{\gamma(n+1)}{(\lambda + 1)^{n+1 } } \\ & = & \displaystyle\frac{q}{\lambda + 1 } \left(\frac{p}{q}\right)^k \displaystyle\sum_{n = k-1}^\infty { n+1 \choose k } \left(\frac{\lambda q}{\lambda + 1}\right)^{n } \\ &= & \displaystyle\frac{q}{\lambda + 1 } \left(\frac{p}{q}\right)^k \left(\frac{\lambda q}{\lambda + 1}\right)^{k-1}\displaystyle\sum_{j=0}^\infty { j+k \choose k } \left(\frac{\lambda q}{\lambda + 1}\right)^{j } \\ & = & \displaystyle\frac{q}{\lambda + 1 } \left(\frac{p}{q}\right)^k \left(\frac{\lambda q}{\lambda + 1}\right)^{k-1 } \left(1-\frac{\lambda q}{\lambda + 1}\right)^{-(k+1)}\\ & = & \displaystyle\frac{1+\lambda}{\lambda ( 1+\lambda p)}\left(\frac{\lambda p}{1+\lambda p}\right)^ k.\end{aligned}\ ] ] similarly to ( [ eq : zbezg ] ) , we obtain the distribution of .first observe that =\mathbb{p}[z_g=0] ] + [ l : disp2 ] the probability generating function of is given by : where ^k\sum_{j=0}^k { k \choose j}\frac{(-1)^jj^k}{m(1+\lambda p)-\lambda p j},\ ] ] ^{k-1}\sum_{j=0}^k { k \choose j}\frac{(-1)^{j-1}j^k}{m(1+\lambda ) -\lambda j}.\ ] ] furthermore , =\frac{mp(\lambda + 1)^2r}{(m+\lambda)(\lambda p + 1)}+\frac{mp(\lambda + 1)(1-r)}{m+ \lambda p}.\ ] ] consider starting from one colony placed at some vertex . besides the quantity already defined consideralso the number of individuals that survived right after the collapse , before they compete for space . from the definition of follows that =r\mathbb{p}[z_g = j]+(1-r)\mathbb{p}[z_b = j],\ ] ] where and are the random variables defined in ( [ e1:lemaaux1 ] ) and ( [ e2:lemaaux1 ] ) , respectively . by other side , for and , observe that ={m \choose k}\frac{t(j , k)}{m^j}.\ ] ] by the inclusion - exclusion principle , is the number of surjective functions whose domain is a set with elements and whose codomain is a set with elements .see tucker p. 319 . 
then, for &=&r\sum_{j = k}^\infty { m \choose k}\frac{t(j , k)}{m^j}\mathbb{p}[z_g = j ] \nonumber\\ & & + ( 1-r)\sum_{j = k}^\infty { m \choose k}\frac{t(j , k)}{m^j}\mathbb{p}[z_b = j].\end{aligned}\ ] ] by ( [ e1:lemaaux1 ] ) , we have that + ] ^{j-1}t(j , k)\nonumber\\ & = & { m \choose k } \frac{p}{m(\lambda p+1)}\left[\frac{\lambda}{m(\lambda+1)}\right]^{k-1}\sum_{j=0}^\infty \left[\frac{\lambda}{m(\lambda+1)}\right]^{j}t(j+k , k)\nonumber\\ & = & { m \choose k } \frac{p}{m(\lambda p+1)}\left[\frac{\lambda}{m(\lambda+1)}\right]^{k-1}\sum_{j=0}^\infty \left[\frac{\lambda}{m(\lambda+1)}\right]^{j}\sum_{i=0}^k { k \choose i}(-1)^i(k - i)^{j+k}\nonumber\\ & = & { m \choose k } \frac{p}{m(\lambdap+1)}\left[\frac{\lambda}{m(\lambda+1)}\right]^{k-1}\sum_{i=0}^k { k \choose i}(-1)^i(k - i)^{k}\sum_{j=0}^\infty \left[\frac{\lambda(k - i)}{m(\lambda+1)}\right]^{j}\nonumber\\ & = & { m \choose k}\frac{(1+\lambda)p}{\lambda p + 1}\left[\frac{\lambda } { m(1+\lambda ) } \right]^{k-1}\sum_{i=0}^k { k \choose i}\frac{(-1)^i ( k - i)^k}{m(1+\lambda ) -\lambda(k - i)}.\end{aligned}\ ] ] finally , observe that =\mathbb{p}[z=0]=q/(1+\lambda p) ] consider enumerating each neighbour of the initial vertex , from 1 to .next we describe where is the indicator function of the event \{a new colony is created in the first generation at the neighbour vertex of } .therefore , =\sum_{i=1}^m \mathbb{p}[i_{i}=1]=m\mathbb{p}[i_{1}=1].\end{aligned}\ ] ] observe that = 1-\left(\frac{m-1}{m}\right)^k \nonumber\end{aligned}\ ] ] and that by using ( [ e ] ) we have that &=&r\sum_{k=1}^\infty\left[1-\left(\frac{m-1}{m}\right)^k\right]\mathbb{p}[z_g = k ] \nonumber \\ & & + ( 1-r)\sum_{k=1}^\infty\left[1-\left(\frac{m-1}{m}\right)^k\right]\mathbb{p}[z_b = k ] .\end{aligned}\ ] ] substituting ( [ e1:lemaaux1 ] ) and ( [ e2:lemaaux1 ] ) in ( [ e2 : lemaaux2 ] ) one can see that &=&\frac{p(\lambda + 1)^2r}{(m+\lambda)(\lambda p + 1)}+\frac{p(\lambda + 1)(1-r)}{m+ \lambda p}.\end{aligned}\ ] ] finally , plugging ( [ e3 : lemaaux2 ] ) into ( [ e1 : lemaaux2 ] ) we obtain the desired result .from remark [ auxpro ] one can see that survives if and only if >1. ] of for and respectively .the desired results follow from lemmas [ l : disp ] and [ l : disp2 ] .first we define the following functions from theorems [ th : disp1 ] and [ th : dispesp1 ] it follows that observe that and are continuous functions on such that and + moreover , is a strictly increasing sequence of strictly increasing functions on such that .similarly , is a strictly increasing function. then , from the intermediate value theorem and the strict monotonicity of we have that there is a unique such that moreover , from the definition of and the continuity of , we have that thus , similarly , for we obtain that and besides , from the strict monotonicity of , it follows that in order to show that for all let us assume that for some and proceed by contradiction .note that which is cleary a contradiction .analogously one can show that let us restrict the domain of the functions and to . ] then , from theorem 7.13 in rudin we have that converges uniformly to on ] for all and the existence of then , from the uniform convergence of to , it follows that ( see rudin ( * ? ? ?* exercise 9 , chapter 7 ) ) .finaly the result follows from ( [ valorcritico ] ) .the authors are thankful to rinaldo schinazi and elcio lebensztayn for helpful discussions about the model . v. 
junior and a. roldán wish to thank the instituto de matemática e estatística of the universidade de são paulo for the warm hospitality during their scientific visits to that institute . the authors are thankful to the two anonymous referees for a careful reading and many suggestions and corrections that greatly helped to improve the paper . evaluating persistence times in populations that are subject to local catastrophes , in `` modsim 2003 international congress on modelling and simulation '' ( ed . d.a . post ) , modelling and simulation society of australia and new zealand , 747 - 752 ( 2003 ) .
|
we consider stochastic growth models to represent populations subject to catastrophes . we analyze , in settings with and without spatial restrictions , whether dispersion is a good strategy to increase the population s viability . we find that it strongly depends on the effect of a catastrophic event , the spatial constraints of the environment and the probability that each exposed individual survives when a disaster strikes .
|
the motion of a fluid can be described either in a fixed frame of reference in terms of the spatio - temporal _ eulerian _ coordinates or by following individual fluid particles in terms of their initial or _lagrangian _ coordinates .the description of a viscous fluid is simpler in eulerian coordinates , but for an ideal ( inviscid ) incompressible fluid there is a variational formulation in lagrangian coordinates of the equations of motion , due precisely to lagrange . over most of the 19th and 20th century the eulerian formulation ,which is better adapted to cases where the boundaries of the flow are prescribed and fixed , became predominant .nevertheless , the lagrangian formulation is more natural for wave motion in a free - surface flow and for complete disruptions of the flow , such as the breakup of a dam . ; for dam breakup , see the end of [ ss : noether ] . ] in more recent years there has been a strong renewal of interest in lagrangian approaches . in _cosmology _ most of the matter is generally believed to be of the dark type , which is essentially collisionless and thus inviscid .lagrangian perturbations methods have been developed since the nineties that shed light on the mechanisms of formation of cosmological large - scale structures .closely related is the problem of reconstruction of the past lagrangian history of the universe from present - epoch observations of the distribution of galaxies .novel fast photography techniques have been developed for the tracking of many particles seeded into laboratory flow that allow the reconstruction of a substantial fraction of the lagrangian map that associates the positions of fluid particles at some initial time to their positions at later times .when the flow is electrically highly conducting and can support a magnetic field by the magnetohydrodynamic ( mhd ) dynamo effect , it has been shown that the long - time fate of such a magnetic field is connected to the issue of lagrangian chaos , namely how fast neighbouring fluid particle trajectories separate in time . at a more fundamental level ,over 250 years after euler wrote the equations governing incompressible ideal three - dimensional ( 3d ) fluid flow ( generally known as the 3d euler equations ) , we still do not know if the solutions remain well - behaved for all times or become singular and dissipate energy after some finite time ; and this even when the initial data are very smooth , say , analytic .many attempts have been made to tackle this problem by numerical simulations in an eulerian - coordinates framework , but the problem remains moot .lagrange himself frequently preferred eulerian coordinates , although the variational formulation he gave was formulated in lagrangian coordinates : in modern mathematical language , the solutions to the equations of incompressible fluid flow are geodesics on the infinite - dimensional manifold sdiff of volume - preserving lagrangian maps . 
in a lagrangian framework one is `` riding the eddy '' anddoes not feel too much of the possible spatial roughness .actually , fluid particle trajectories can be analytic in time even when the flow has only limited spatial smoothness .recently , borrowing some ideas of the cosmological lagrangian perturbation theory , one of us and vladislav zheligovsky obtained a form of the incompressible 3d euler equations in lagrangian coordinates , from which simple recursion relations among the temporal taylor coefficients of the lagrangian map can be derived and analyticity in time of the lagrangian trajectories can be proved in a rather elementary way .this lagrangian approach can also be used to save significant computer time in high - resolution simulations of 3d ideal euler flow .the lagrangian equations used for this did not seem at first glance to be widely known , but after some time spent searching the past scientific literature , we found that the equations had been derived in 1815 by augustin cauchy in a long memoir that won a prize from the french academy . at first sight , cauchy s equations , to be presented in section [ ss:15 - 17 ] here called _ cauchy s invariants equations _ to distinguish them from an important corollary , _ cauchy s vorticity formula _ , seemed hardly cited at all .we then engaged in a much more systematic search .the surprising result can be summarized as follows : in the 19th century , cauchy s invariants equations are cited only in a small number of papers , the most important one being by hermann hankel ; in the 20th century , cauchy s result seems almost completely uncited , except at the very end of the century .the outline of the paper is as follows .section [ s : cauchy ] is devoted to augustin cauchy : in section [ ss : microbiocauchy ] we recall a few biographical elements , in particular those connected to the present study ; section [ ss : prize ] is devoted to the 1815 prize - winning _ mmoire sur la propagation des ondes _ ; in section [ ss:15 - 17 ] we analyze in detail the very beginning of its second part , which contains cauchy s lagrangian formulation of the 3d ideal incompressible equations in terms of what would now be called invariants , a terminology we adopt here . section [ s:19th ] is about 19th century scientists who realized the importance of cauchy s lagrangian formulation .outstanding , here , is hermann hankel ( section [ ss : hankel ] ) , a german mathematician whose keen interest in the history of mathematics allowed him not to miss cauchy s 1815 work and to understand in 1861 its potential in deriving helmholtz s results on vorticity in a lagrangian framework , and discovering on the way what is known as kelvin s circulation theorem .then , in section [ ss : stokesetal ] , we turn to the other 19th century scientist who discuss cauchy s invariants equations : foremost george stokes , then maurice lvy , horace lamb , jules andrade and paul appell and to a few others who mention the equations but may not be aware that they were obtained by cauchy : gustav kirchhoff and henri poincar .then , in section [ s:20th ] , we turn to the 20th century and beyond .the first part ( section [ ss : noncauchy ] ) has cauchy s invariants equations apparently fallen into oblivion . 
in the second part ( section [ ss : noether ] we shall see that , as a consequence of emmy noether s theorem connecting continuous symmetries and invariants , a number of scientists were able to rederive cauchy s invariants equations for the 3d ideal incompressible flow , but without being aware of cauchy s work . in section [ ss : rebirth ]we shall find that russian scientists , followed by others , reminded us that all this had been started a long time ago by cauchy . in section [ s : conclusion ] we make some concluding remarks .since one of our key goals is to understand how important work such as cauchy s 1815 formulation of the hydrodynamical equations in lagrangian coordinates managed to get nearly lost , we are obliged to pay attention to who cites whom .this is a delicate matter , given that present - day ethical rules of citing definitely did not apply in past centuries .but without fast communications , a peer - review system to point out missing references and a much larger scientific population , the rules had to be different .here , we shall do our best to mention each instance of citation of previous work by the author being discussed , when such work is relevant to our paper .these elements are just intended to give a background on the circumstances which led to cauchy s 1815 work .our main sources have been the biography of augustin - louis cauchy by bruno belhoste , the biography by claude alphonse valson , written just a few years after the death of cauchy and the minutes ( _ procs - verbaux _ ) of the meetings of the french academy of sciences , referred to as pv , followed by the date of the corresponding meeting .cauchy was born in 1789 , turbulent times , but this did not affect his ability to get the best theoretical and practical training available in the early 19th century , attending successively cole polytechnique and cole des ponts et chausses .his first employment was as a junior engineer in a major harbour project in cherbourg in 1810 and then as an engineer at the ourcq canal project in paris in 1813 . during his engineering yearshe already displayed keen interest in deep mathematical questions , several of which he solved in a way that sufficiently impressed the mathematicians at the academy of sciences , ( called `` first class of the institute of france '' until king louis xviii restored the old name of `` academy of sciences '' ; here , we refer to it just as `` academy '' ) . in 1813 ,the geometry section of the academy ranked cauchy second for an election to the academy but the vote of the rest of the members of the academy went heavily against him .anyway , during the years 18121815 cauchy had a strong coupling to the academy and his name appears about one hundred times in the minutes of the academy meetings .his awarding of the 1815 mathematics prize by the academy ( see section [ ss : prize ] ) is one more evidence that he was a rising star . eventually , the king took advantage of the reorganization of the academy to remove lazare carnot and gaspard monge from the academy and appoint cauchy as a member in march 1816 . 
this unusual way of entering the academy produced some friction with regular members .cauchy , who was soon to become a world - dominant figure in mathematics and mathematical physics , was no easy - going personality , he was however willing to suffer considerably for his ideas , particularly those grounded in his christian beliefs .for example , he went into exile in 1830 for eight long years in order not to have to swear an oath of allegiance to king louis philippe , considered by cauchy as not legitimate ._ _ the differential equations given by the author are rigorously applicable only to the case where the depth of the fluid is infinite ; but he succeeded in obtaining their general integrals in a form allowing to discuss the results and comparing them to experiments . _ _+ thus reads the beginning of the statement made by the academy during the public ceremony of january 8 , 1816 at which cauchy received the 1815 _ grand prix _ ( mathematics prize ) .the events leading to this prize started on december 27 , 1813 when the academy committee in charge of proposing a subject for a mathematics prize decided for `` _ _ le problme des ondes la surface dun liquide de profondeur indfinie _ _ '' ( the problem of waves on the surface of a liquid of arbitrary depth ) .the committee put laplace in charge of defining the scientific programme .on october 2 , 1815 the academy received two anonymous manuscripts as usual in those circumstances , distinguished by an epigraph for later identification .cauchy s manuscript had the epigraph _ nosse quot ionii veniant ad littora fluctus _( virgil , geor .ii , 108 ; translation : to know how many waves come rolling shoreward from the ionian sea ) . on december 26 , 1815the committee proposed giving the 1815 prize to cauchy s manuscript .the manuscript of the prize was not published until 1827 , when appeared the first volume of _ mmoires des savans trangers _ ( memoirs of non - member scientists ) printed since the 1816 reorganization of the academy .it comprised a hefty 310 pages , including 189 pages with 20 technical notes ( the last 7 where added at various dates after 1815 , but all the material not contained in the 1815 manuscript is clearly identified by the author ) .it is interesting that cauchy s prized manuscript begins with a statement of the problem to be solved : + _ _ a massive fluid , initially at rest , and of an arbitrary depth , has been put in motion by the effect of a given cause .one asks , after a determined time , the shape of the external surface of the fluid and the speed of each molecule located at the same surface . _ _+ although we have not found this sentence in any of the minutes of the academy or at its archives , it is likely that it constitutes the academy s detailed formulation of the problem , which had been entrusted to laplace .actually , cauchy gave a more general treatment than had been requested by the academy , since he obtained results not only for the _ surface _ of the fluid , but also for its _bulk_. the memoir itself has three parts : the solution is in the third part , whereas the first two actually contain , in the intention of the author , a sort of preparatory material describing the initial state of the whole fluid and its later evolution . here , it is the first section of the second part that interests us ; it is entitled _ _ on the equations that subsist at any instant of the motion for all the points within the mass of the fluid . 
our focus will be entirely on this section and even more so on its very beginning , where cauchy obtains his lagrangian formulation of the 3d incompressible euler equations in terms of three invariants and uses it immediately to derive what is called cauchy s vorticity formula , relating the current and initial vorticity fields through the jacobian matrix of the lagrangian map . in the next section we turn to these matters . there is of course no real substitute for reading section i of the second part of cauchy s paper . it can be mostly understood without prior reading of the first part . the notation is not too different from the modern one , except that , of course , no vectors were used . for illustration , figure [ f : cauchy15 ] gives cauchy s key equation ( from the point of view of the present paper ) as it was published in 1827 . ( our attempts to retrieve the original hand - written manuscript of 1815 have failed . ) in our description of the work we shall use modernized notation . cauchy considers a 3d ideal incompressible fluid subject to an external force . analyzing the various forces acting on a `` molecule '' ( i.e. a fluid particle ) , he derives his eq . ( 4 ) , which in our notation reads $$\partial_{t} \mathbf{v} + ( \mathbf{v} \cdot \nabla ) \mathbf{v} = -\nabla p + \mathbf{f} ,$$ where $\mathbf{v}$ is the flow velocity , $\mathbf{f}$ the external force , $p$ the pressure ( divided by the constant fluid density , taken here unity for convenience ) and $t$ the time . he then points out that these equations coincide with those obtained by lagrange by another method . they coincide also in their form and method of derivation with those obtained by euler and are presently called the euler equations . cauchy , then , changes to lagrangian variables , here denoted $\mathbf{a} = ( a_{1} , a_{2} , a_{3} )$ . the eulerian position $\mathbf{x}$ becomes then a function $\mathbf{x} ( \mathbf{a} , t )$ . nowadays , the representation of the flow in terms of the coordinates $\mathbf{x}$ is called _ eulerian _ and , when the coordinates $\mathbf{a}$ are used , it is called _ lagrangian _ ; the ( time - dependent ) map $\mathbf{a} \mapsto \mathbf{x} ( \mathbf{a} , t )$ is called the _ lagrangian map _ . the velocity and the acceleration of a fluid particle are then $\dot{\mathbf{x}}$ and $\ddot{\mathbf{x}}$ , respectively , where the dot denotes the lagrangian time derivative . the euler equations state that the acceleration minus the external force is balanced by minus the eulerian gradient of the pressure . making use of the set of nine partial derivatives of the $x_{i}$ with respect to the $a_{j}$ ( now called the jacobian matrix ) cauchy transforms the eulerian pressure gradient into a lagrangian pressure gradient , here denoted $\nabla^{\mathrm{l}}$ , and obtains $$\sum_{k=1}^{3} ( \ddot{x}_{k} - f_{k} ) \, \nabla^{\mathrm{l}} x_{k} = -\nabla^{\mathrm{l}} p ,$$ where $\ddot{x}_{k}$ and $f_{k}$ denote the components of $\ddot{\mathbf{x}}$ and $\mathbf{f}$ , respectively . this is precisely lagrange s eq . ( d ) . note that lagrange first wrote the equations in lagrangian coordinates and then switched to eulerian coordinates ; cauchy did it the other way round . then , cauchy considers the condition of incompressibility , which he first writes in lagrangian coordinates . in modern terms , the jacobian of the lagrangian map should be equal to unity for all $\mathbf{a}$ and $t$ ( his eq . ( 9 ) ) : $$\det \left ( \frac{\partial x_{i}}{\partial a_{j}} \right ) = 1 .$$
he also writes it in eulerian coordinates ( his eq . ( 10 ) ) : $$\nabla \cdot \mathbf{v} = 0 ,$$ an equation already found in euler but which had been derived earlier in the axisymmetric case by dalembert . then , cauchy observes that the two equations above are not integrable , but if one restricts oneself to external forces deriving from a potential , then the momentum equation can be integrated once , as he will show . cauchy thus writes ( his eq . ( 11 ) ) : $$\mathbf{f} = \nabla \varphi .$$ he then rewrites the momentum equation as $$\sum_{k=1}^{3} \ddot{x}_{k} \, \nabla^{\mathrm{l}} x_{k} = -\nabla^{\mathrm{l}} ( p - \varphi ) .$$ observe that the r.h.s . is a lagrangian gradient . cauchy then applies what we now call a ( lagrangian ) curl to cancel out the r.h.s . he thus obtains $$\sum_{k=1}^{3} \nabla^{\mathrm{l}} \ddot{x}_{k} \times \nabla^{\mathrm{l}} x_{k} = 0 .$$ he then notices that the three components of the l.h.s . are exact time derivatives of three quantities which thus must be time - independent . he easily identifies their constant values with what we now call the initial vorticity $\boldsymbol{\omega}_{0}$ . this way , cauchy obtains his eq . ( 15 ) : $$\sum_{k=1}^{3} \nabla^{\mathrm{l}} \dot{x}_{k} \times \nabla^{\mathrm{l}} x_{k} = \sum_{k=1}^{3} \nabla^{\mathrm{l}} v_{k} \times \nabla^{\mathrm{l}} x_{k} = \boldsymbol{\omega}_{0} ,$$ and states : _ telles sont les intégrales que nous avions annoncées _ ( such are the integrals that we had announced ) . indeed , in lagrangian coordinates , the r.h.s . is time - independent . the constant quantities in the l.h.s . are now usually called `` the cauchy invariants , '' a terminology we shall adopt . as we shall see in section [ ss : hankel ] , they are closely connected to the circulation invariants of helmholtz and kelvin and are the three - dimensional generalization of the two - dimensional vorticity invariant . as to the cauchy equations ( 15 ) , supplemented by the incompressibility condition ( 9 ) , for the lagrangian map , which play a central role in the present paper , we shall refer to them as `` cauchy s invariants equations '' , to avoid any possible confusion with `` cauchy s vorticity formula , '' discussed below . cauchy was obviously aware that he had succeeded in partially integrating the equations of motion . however , modern concepts such as invariants and their relation to symmetry / invariance groups would emerge only about one century later ( see section [ ss : noether ] ) . nonetheless , for his invariants equations cauchy immediately found an application , which would become quite famous ( much more , so far , than cauchy s invariants ) . starting from ( 15 ) , written in terms of the velocity , cauchy reexpresses its lagrangian space derivatives in terms of the eulerian ones and the jacobian matrix . he obtains for the l.h.s . of ( 15 ) expressions which are linear in the components of the vorticity ( evaluated at the current time ) and quadratic in the jacobian matrix . he then solves these linear equations , using the fact that the jacobian is unity . he thus obtains his eq . ( 17 ) : $$\boldsymbol{\omega} = \nabla^{\mathrm{l}} \mathbf{x} \cdot \boldsymbol{\omega}_{0} , \qquad \text{or , with indices , } \qquad \omega_{i} = \sum_{j=1}^{3} \omega_{0 j} \, \frac{\partial x_{i}}{\partial a_{j}} .$$ in modern terms this `` cauchy vorticity formula '' states that the current vorticity is obtained by multiplying the initial vorticity by the jacobian matrix . cauchy gives a rather low - key application of his formula , which is consistent with the context of the prize : in the first part of his memoir , he had envisaged a mechanism of setting the fluid in motion impulsively that would produce a flow initially potential and thus with no vorticity . his formula then implies that the flow would have no vorticity at any instant of time . in the language of the time this was expressed by stating that $\mathbf{v} \cdot \mathrm{d} \mathbf{x}$ is a `` complete differential . ''
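these two results are easy to check symbolically on any explicit lagrangian map . the sketch below ( our own toy example , not taken from cauchy or from the sources discussed here ) uses sympy and the map of a solid - body rotation about the vertical axis , which is an exact incompressible euler flow , to verify volume preservation , the cauchy invariants and the vorticity formula .

```python
import sympy as sp

a1, a2, a3, t, W = sp.symbols('a1 a2 a3 t Omega', real=True)
a = sp.Matrix([a1, a2, a3])

# lagrangian map of a solid-body rotation about the z-axis (an exact euler flow)
x = sp.Matrix([a1*sp.cos(W*t) - a2*sp.sin(W*t),
               a1*sp.sin(W*t) + a2*sp.cos(W*t),
               a3])
v = x.diff(t)                                    # lagrangian velocity, i.e. \dot{x}
J = x.jacobian(a)                                # jacobian matrix d x_i / d a_j

# cauchy's eq. (9): volume preservation
print("det J =", sp.simplify(J.det()))

# cauchy invariants: sum_k grad^L(v_k) x grad^L(x_k), expected to equal omega_0 = (0, 0, 2 Omega)
def gradL(f):                                    # lagrangian gradient
    return sp.Matrix([sp.diff(f, q) for q in (a1, a2, a3)])

inv = sp.zeros(3, 1)
for k in range(3):
    inv += gradL(v[k]).cross(gradL(x[k]))
print("cauchy invariants:", list(sp.simplify(inv)))

# cauchy's vorticity formula (17): omega_i = sum_j omega0_j * d x_i / d a_j
omega0 = sp.Matrix([0, 0, 2*W])
omega = J * omega0
print("transported vorticity:", list(sp.simplify(omega)))
```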
nonetheless , cauchy was certainly aware of lagrange s theorem , which states that an ideal flow that is initially potential stays potential at later times . lagrange s proof used eulerian coordinates and assumed that the velocity could be taylor - expanded in time to arbitrary orders . lagrange then showed that if the vorticity vanishes initially , so will its time - taylor coefficients of arbitrary orders . cauchy s proof requires only a limited smoothness of the flow ( he does not state how much ) and it must have appeared to the readers at the time that , as long as the jacobian matrix exists , the persistence of potential flow will hold . stokes observed that the vanishing of all the taylor coefficients does not imply the vanishing of a function ( giving well - known examples such as $\mathrm{e}^{-1 / t^{2}}$ near $t = 0$ ) ; he thus considered cauchy s proof more general than that of lagrange . today , we know that a flow with an initial velocity field that is `` moderately smooth in space '' ( just a little more regular than once differentiable in space ) will stay so for at least a finite time , during which its smoothness in time _ in eulerian coordinates _ is not better than its spatial smoothness , rendering lagrange s argument inapplicable , whereas cauchy s proof only requires spatial differentiability of the lagrangian map , which is now known to hold with a moderately smooth initial velocity field . one reason why cauchy s vorticity formula is very well known today is that it applies not only to the vorticity in an ideal fluid , but also to a magnetic field in ideal conducting fluid flow governed by the magnetohydrodynamic ( mhd ) equations . in modern mathematical language , both the vorticity and the magnetic field are transported 2-forms . in an ordinary fluid , one can not prescribe the velocity and the vorticity independently , but in a conducting fluid one can prescribe the velocity and the initial magnetic field independently in the limit of weak magnetic fields , when studying the kinematic dynamo problem . all this explains that there has been a strong interest , particularly in recent years , in cauchy s vorticity formula . our main focus in this paper is on the cauchy invariants equations . one can not describe the history of cauchy s vorticity formula without mentioning his invariants equations , and thus we can not completely disentangle the histories of their citations . however , nowadays , most derivations of cauchy s vorticity formula use a much shorter route , based on the eulerian vorticity equation of helmholtz $$\partial_{t} \boldsymbol{\omega} + ( \mathbf{v} \cdot \nabla ) \boldsymbol{\omega} = ( \boldsymbol{\omega} \cdot \nabla ) \mathbf{v} ,$$ and thus bypass cauchy s invariants equations . the rest of cauchy s section i of part ii ( pp .
44 - 49 ) is devoted to the case of potential flow and does not concern us here .hermann hankel ( 18391873 ) was a german mathematician who studied with august ferdinand moebius , bernhard riemann , karl weierstrass and leopold kronecker .our sources on his life are the obituary by wilhelm von zahn , a 19th century biography by moritz cantor , a 20th century short biography by michael crowe and an assessment of his mathematical contribution from a modern perspective by antonie frans monna .hankel is quite well - known for work on hankel matrices , on hankel transforms and on hankel functions .he was in addition also involved very seriously in the history of mathematics and has left a book _ zur geschichte der mathematik in alterthum und mittelalter _( on the history of mathematics in the antiquity and the middle ages ) which includes , among other things , one of the first studies bringing out the major contributions of indian mathematics .what interests us here is hankel s work on fluid dynamics contained in a manuscript prized by gttingen university _ zur allgemeinen theorie der bewegung der flssigkeiten _( on the general theory of motion of fluids ) . to understand the context of this prize , let us recall that in 1858 hermann helmholtz ( 18211894 ) wrote a major paper about vortex motion in three - dimensional incompressible ideal flow that generated considerable interest .a key result of his work , stated in modern language , is that the flux of the vorticity through an infinitesimal piece of surface is a lagrangian invariant .this result , known as helmholtz s second theorem , immediately implies , by adding up infinitesimal surface elements , that the same holds for a finite surface ; moreover , by using stokes s theorem one obtains kelvin s circulation theorem .helmholtz s derivation of his result is to a large extent resting on an eulerian approach and begins with the establishment of the aforementioned eulerian vorticity equation .furthermore , helmholtz s derivation , mostly written for physicists , was a bit heuristic .on june 4 , 1860 gttingen university ( philosophische facultt der georgia augusta ) set up a prize , intended to stimulate interest in lagrangian approaches and in particular to give such a derivation of helmholtz s invariants : + _ _ the general equations for determining fluids motions may be given in two ways , one of which is eulerian , the other one is lagrangian .the illustrious dirichlet pointed out in the posthumous unpublished paper `` on a problem of hydrodynamics '' the until now almost completely overlooked advantages of the lagrangian way , but he was prevented from unfolding this way further by a deadly illness .so , this institution asks for a theory of fluid motion based on the equations of lagrange , yielding , at least , the laws of vortex motion already derived in another way by the illustrious helmholtz . _ _ + the prominent reference to dirichletcan be understood as follows .johann peter gustav lejeune dirichlet ( 18051859 ) was an important german mathematician who , from 1855 to his death , succeeded carl friedrich gauss in gttingen .he had also a strong interest in hydrodynamics . 
in 18561857he wrote an unfinished paper `` untersuchungen ber ein problem der hydrodynamik '' ( investigation of a problem in hydrodynamics ) .dirichlet asked the german mathematician richard dedekind ( 18311916 ) , at that time professor in gttingen , to help him with the work , but was not able to finish completely before he died .major pieces of the work were found by dedekind who published them in 1859 with the above title , followed by `` from his legacy , edited by r. dedekind . '' in the introduction dirichlet pointed out that lagrange himself was not too keen to advocate the use of what was later called lagrangian coordinates , found by lagrange to be a bit complicated .eulerian coordinates quickly grew in favour .dirichlet , however observed that eulerian coordinates have their own drawbacks , particularly when the volume occupied by the fluid changes in time .we now turn to hankel s prize - winning manuscript .it carried the epigraph _ tanto utiliores sunt notae , quanto magis exprimunt rerum relationes _ (the more signs express relations among things , the more useful they are ) .the manuscript was written in latin , but hankel got the permission to print a slightly edited german translation , which was also published as a book . in 1863 a four - page review was published in _ fortschritte der physik _ ( progress in physics ) ; it was signed , anonymously as `` hl . '' and will be cited below as _fortschritte_. this work is of particular interest , not only because it is the first time that it was shown that the helmholtz invariants are directly connected with the cauchy invariants , but also because it gives the first derivation of what is generally called the kelvin circulation theorem .hankel discusses both compressible and incompressible fluids , but here it suffices to consider the latter . to derive the helmholtz invariants , hankel first establishes cauchy s invariants equations , for which he refers to cauchy s 1815/1827 prized paper .hankel then rewrites this , in modernised notation , _k=1 ^ 3_kx_k=_0 , where are the components of the velocity .this is eq .( 3 ) of hankel s 6 .the equation is not to be found in cauchy s 1815/1827 paper but , given that cauchy obtained his invariants equations ( here , ) by taking the lagrangian curl of and then integrating over the time variable , it is not really surprising that the left - hand - side of is a lagrangian curl .hankel s next step is to consider in lagrangian space a connected ( _ zusammenhngende _ ) surface , here denoted by , whose boundary is a curve , here denoted by , and to apply to what was later to be called the stokes theorem , relating the flux of the curl of a vector field across a surface to the circulation of the vector field along the boundary of this surface .hankel could not easily be aware of what thomson and stokes had done before on the subject and of the much earlier work of ostrogradski and thus he devotes his 7 to proving the stokes theorem . in 8, hankel then infers that the flux through of the r.h.s . of , namely the initial vorticity ,is given by the circulation along of the initial velocity : _c_0_0d=_s_0_0 _ 0d_0 , where denotes the local unit normal to and the surface element .this is the unnumbered equation near the top of his p. 38 .then , in 9 , he similarly handles the l.h.s . of andfirst notices that _ k=1 ^ 3 v_kx_k d= d. 
this is the third equation before eq . ( 2 ) of his 9 . he thus obtains the eulerian circulation , an integral over the curve $C$ on which the fluid particles initially on $C_0$ are located at the present time : $\oint_{C}\boldsymbol{v}\cdot{\rm d}\boldsymbol{r}=\int_{S_0}\boldsymbol{\omega}_0\cdot\boldsymbol{n}_0\,{\rm d}\sigma_0$ , which is the unnumbered equation just before eq . ( 2 ) of his 9 . this equation , together with the previous one , is clearly the standard circulation theorem , generally associated to the name of kelvin . hankel does not seem to have wished to highlight this result , hence the unnumbered equations . also , the result is not mentioned in the _ fortschritte _ review . nevertheless , the fact that hankel proved the circulation theorem eight years before kelvin did not escape the attention of truesdell , who even proposed calling it the `` hankel kelvin circulation theorem . '' however , truesdell did not explain how hankel proceeded and furthermore never cited cauchy s invariants equations , but just cauchy s vorticity formula . this could be the reason why truesdell s rather justified suggestion did not seem to have many followers , one exception being a book on ship propellers by breslin and andersen , who were aware of truesdell s suggestion . one further application by hankel of the stokes theorem gives him the constancy in time of the flux of the vorticity through any finite surface moving with the fluid . finally , he lets this surface shrink to an infinitesimal element and obtains helmholtz s theorem . to conclude this section on hankel , we ask : how much was his work on hydrodynamics remembered ? an interesting case is that of heinrich martin weber ( 18421913 ) , who was quite close to riemann . in 1868 , weber wrote a paper titled _ ber eine transformation der hydrodynamischen gleichungen _ ( on a transformation of the equations of hydrodynamics ) which , from the point of view of its scientific content , is very closely related to cauchy s 1815 invariants equations and even more so to hankel s 1861 reformulation . specifically , by `` decurling '' the latter in lagrangian coordinates , one obtains $\sum_{k=1}^{3}v_k\,\nabla^{\rm L}x_k=\boldsymbol{v}_0-\nabla^{\rm L}W$ , where $\boldsymbol{v}_0$ is the initial velocity and $W$ a scalar function , here called `` the weber function '' . actually , weber showed that $W$ is the time integral from $0$ to $t$ , in lagrangian coordinates , of $p-\frac{1}{2}|\boldsymbol{v}|^2$ , where $p$ is the pressure and $\boldsymbol{v}$ the velocity . weber derived his equation by a clever transformation of lagrange s equation , now called the `` weber transform . '' weber did cite hankel but without an actual reference and just as a person who had pointed out that the so - called eulerian and lagrangian coordinates both were first introduced by euler ( a statement made by hankel but attributed by him to his advisor , riemann ) . felix auerbach ( 18561933 ) , a german scientist with wide - ranging interests in all areas of physics , in hydrodynamics , in architecture and painting , wrote in his early days _ die theoretische hydrodynamik nach dem gange ihrer entwickelung in der neuesten zeit , in krze dargestellt _ ( a brief presentation of theoretical hydrodynamics , following its evolution in the most recent times ) . this manuscript won the querini stampalia foundation prize of the royal venetian institute of sciences , letters and arts ( _ atti del reale istituto veneto di scienze , lettere ed arti _ ) on the assigned theme of `` essential progress of theoretical hydrodynamics . '' several pages are devoted to hankel s hydrodynamics work and the manuscript contains also a brief reference to cauchy on p.
34 .an irish physicist and mathematician , george gabriel stokes ( 18191903 ) spent all his career at the university of cambridge in england and was considered a leading british scientist , particularly so for many contributions to the dynamics of both ideal and viscous fluids .stokes followed rather closely the work of french mathematicians and physicists and made genuine efforts to cite other scientists s work . to the best of our knowledge , he was the first to realize the importance of the discovery of the cauchy invariants ( called by stokes `` integrals '' ) and of the ensuing cauchy formula for the vorticity . in three papers in the late 1840s , stokes discussed various proofs of lagrange s theorem on the persistence in time of potentiality for 3d incompressible flow . in particular , in the 1848 paper `` notes on hydrodynamicsiv '' stokes described in detail cauchy s proof of lagrange s theorem and also gave an alternative proof of his own .concerning cauchy s proof , stokes wrote : + _ the theorem considered follows as a particular consequence from m. cauchy s integrals .as however the equations employed in obtaining these integrals are rather long , and the integrals themselves do not seem to lead to any result of much interest except the theorem enunciated at the beginning of this article ._ + stokes also gave an alternative proof of his own , not using the cauchy integrals .however , as observed by meleshko and aref , in 1883 when stokes edited his `` mathematical and physical papers '' , he introduced a footnote , refering to an added note at the end of the paper .this note begins as follows : + _ it may be noticed that two of helmholtz s fundamental propositions respecting vortex motion follow immediately from cauchy s integrals ; or rather , two propositions the same as those of helmholtz merely generalized so as to include elastic fluids follow from cauchy s equations similarly generalized ._ + the two propositions are ( i ) that `` the same loci of particles which at one moment are vortex lines remain vortex lines throughout the motion '' and ( ii ) in modernised language , that the product of the modulus of the vorticity and of the area of a perpendicular section of an infinitesimal vortex tube does not change in time while following the lagrangian motion .actually stokes s statement should not be misread : he mentions `` cauchy s integrals '' but , in 1883 , stokes understands by this only the cauchy vorticity formula , which of course was derived from cauchy s invariants equations .maurice lvy ( 18381910 ) was a french engineer and specialist of continuum mechanics . in 1890he gave a lecture on `` modern hydrodynamics and the hypothesis of action at a distance '' at the collge de france where he was a professor .on the first page of the published version , lvy writes : + _ the admirable properties of vortices were discovered only in 1858 by helmholtz , although they merely express the intermediate integrals of lagrange s hydrodynamical equations , discovered by cauchy ... 
_ + lvy observed that cauchy wrote _ three _ ( scalar ) conservation laws ; together with the condition of incompressibility this makes _ four _ equations for the three components of the lagrangian map . lvy stated that the equations are actually compatible ( this follows from the fact that the lagrangian divergence of cauchy s three invariants vanishes ) . it is of particular interest that lvy very much highlighted what we would today call the _ nonlocal _ character of the equations of incompressible fluid dynamics . this did not seem to him in violation of any known mechanical principle . of course , such observations were made fifteen years before the birth of relativity theory . today , we know that the nonlocal character stems from taking the limit of vanishing mach number for a slightly compressible fluid , a limit that amounts to letting the speed of sound tend to infinity . horace lamb ( 18491934 ) , a british applied mathematician , wrote one of the most authoritative treatises on hydrodynamics , with editions ranging from 1879 to 1932 ( the title `` hydrodynamics '' was used only from 1895 ) . from 1895 , lamb had cauchy s invariants equations but only as an intermediate step to obtain cauchy s vorticity formula and cauchy s derivation of lagrange s theorem . there is no mention of `` integrals '' or `` invariants '' , although mere inspection of the equations makes it clear that we are here dealing with integrals of motion , as cauchy himself pointed out in 1815 . paul mile appell ( 18551930 ) , a french mathematician with a talent for simple and illuminating writing , published in 1897 a paper where he gave an elementary and immediate interpretation of cauchy s equations , leading to the fundamental theorems of the theory of vortices . he first rederived cauchy s invariants equations . he then considered the first - order differential form $\boldsymbol{v}\cdot{\rm d}\boldsymbol{r}-\boldsymbol{v}_0\cdot{\rm d}\boldsymbol{a}$ , where $\boldsymbol{v}_0$ is the initial velocity , and showed that , as a consequence of the cauchy invariants equations , it is an exact differential of some function . ( actually this function is , up to a sign , the weber function defined near the end of section [ ss : hankel ] . ) since the integral of such an exact form on a closed contour vanishes with suitable connectedness and regularity assumptions , appell immediately obtained the hankel kelvin circulation theorem . appell pointed out that here he was just following poincar s _ thorie des tourbillons _ ( lectures on vortices ) . actually , poincar s derivation is a bit more mathematical and quite close to hankel s derivation of the circulation theorem ( see section [ ss : hankel ] ) . the derivation is again based on the cauchy invariants equations , for which poincar cites kirchhoff s `` lectures on mathematical physics ( mechanics ) '' . the latter writes indeed cauchy s invariants equations [ lecture 15 , 3 , eq . ( 14 ) on p. 165 ] but does not give any reference . jules andrade ( 18571933 ) , a french specialist of mechanics and chronometry , published in 1898 `` _ leons de mcanique physique _ '' ( lectures on physical mechanics ) . its chapter vi was devoted to fluid dynamics . on p. 242 , andrade derived the cauchy invariants equations , which he called `` cauchy s intermediate integrals '' . andrade also derived cauchy s vorticity formula and lagrange s theorem , closely following cauchy . andrade then stated _ les thormes dhelmholtz sont aussi referms dans ces quations , mais cauchy ne les a pas aperus .
_( helmholtz s theorems are also contained in these equations but cauchy did not perceive them ) .he then showed how to derive helmholtz s result along more or less the lines used by stokes in his 1883 added note ( see above ) .the material is here separated into three subsections : section [ ss : noncauchy ] has not only cauchy s work on the invariants equations forgotten , but the invariants themselves never mentioned . in section [ ss : noether ] , we find the independent rediscovery of the cauchy invariants by application of noether s theorem .eventually , in section [ ss : rebirth ] everything will reconnect in the late 20th century .the 20th century was to see a tremendous rise of research in fluid dynamics , driven to a significant part by the needs of the blossoming aeronautical industry .for this , the study of flow constrained by external or internal boundaries with viscous boundary layers of the kind introduced by prandtl in 1905 was essential .inclusion of viscous effects requires the use of the navier stokes equations , which are somewhat easier to study in eulerian coordinates .mathematical issues , relating to ideal and viscous fluid flow , such as the well - posedness of the fluid dynamical equations , started being addressed with the new tools of functional analysis .for example , lichtenstein gave the first proof of the well - posedeness for at least a finite time of the three - dimensional incompressible euler equations with sufficiently smooth initial data .then , hlder and wolibner independently showed that , under suitable conditions , the two - dimensional incompressible euler equations constitute a well - posed problem for all times .leray obtained similar results in the viscous case and introduced the important concept of `` weak solutions '' which need not be differentiable .we do not give here other details .we stress that these results were generally obtained using eulerian coordinates , with an occasional excursion into lagrangian coordinates by lichtenstein .it seems that during the 20th century cauchy s invariants equations were hardly used for mathematical studies . on the one hand, this could be because of the general belief at that time , going back to lagrange , that ( what we now call ) lagrangian coordinates are unnecessarily complicated ; as we already stated , the questionable character of this belief was underlined by dirichlet .on the other hand , it may be that cauchy s lagrangian formulation through - just drifted into oblivion . in order to understand better what had happened, we examined , in addition to the mathematical papers already cited , a considerable number of major fluid mechanics textbooks published in the 20th century and looked for citations of cauchy s 1815/1827 work .a fully relevant citation would not only have cauchy s invariants equations , but also stress , as cauchy did , that they define lagrangian invariants . herea word of caution is required : since stokes in 1883 , several authors have referred to the cauchy vorticity formula as `` cauchy s integrals . 
''cauchy used the word `` integrals '' in connection with and not .we failed to find any truly relevant citations before the very end of the 20th century , although cauchy s invariants equations ( or a 2d instance ) were rediscovered independently .hereafter , we indicate some of the partially relevant findings .lamb s treatment of cauchy remained exactly what it was in 1895 ( see section [ ss : stokesetal ] ) with little emphasis on .lichtenstein , in addition to his pioneering papers on the mathematical theory of ideal flow , published several books . in 1929 he produced volume xxx _ grundlagen der hydrodynamik _( foundations of hydrodynamics ) of an encyclopedia of mathematics with emphasis on applications , edited by richard courant . here ,in chapter 10 , cauchy s invariants equations appear briefly [ his eq .( 54 ) for the case but are not directly attributed to cauchy and no use is made of the invariance other than deriving the cauchy vorticity formula .sommerfeld s 1945 `` mechanics of deformable bodies '' and landau and lifshitz s 1944 first edition of `` fluid mechanics '' seem to contain neither the cauchy invariants nor the cauchy vorticity formula . in his encyclopedia of physics article on the foundations of fluid dynamics , oswatitsch followed rather closely cauchy s original derivation of the vorticity formula but skipped the cauchy invariants( a similar treatment with some allowance for turbulent fluctuations is made by goldstein ) .we have already mentioned in section [ ss : hankel ] that truesdell cited hankel quite extensively , but cited cauchy only for his vorticity formula .ramsey has but , again , only as an intermediate step in proving cauchy s vorticity formula .batchelor in his `` introduction to fluid dynamics '' derived the vorticity formula and attributed it to cauchy .finally , stuart and tabor in their introductory paper to the theme issue of _ philosophical transactions _, devoted to the lagrangian description of fluid motions , have cauchy s invariants equations : these are their equations ( 2.13)(2.15 ) , which were here derived from the cauchy vorticity formula ; this amounts to retracing cauchy s steps in reverse .it is not mentioned that the resulting equations were already in cauchy 1815/1827 .as we shall see , the rebirth of cauchy s invariants at the very end of the 20th century and the beginning of the 21st ( section [ ss : rebirth ] ) was preceded by rediscoveries ( without cauchy being named ) .the most widely known of these rediscoveries used a novel tool developed in the early 20th century .emmy noether ( 18821935 ) is recognized as one of the most important mathematicians of all times for her work in algebra . in other fields of mathematics and in mathematical physics, she is also known for major contributions . here , we are concerned with a theorem ( now called `` noether s theorem '' ) , which she proved in 1915 and published in 1918 , that relates continuous symmetry groups and invariants for mechanical systems possessing a lagrangian variational formulation . with the development of quantum mechanics and field theory in the 20th century ,this theorem was to acquire a central role and is covered in most textbooks on analytical mechanics or on field theory .it has been known since lagrange s `` mchanique analitique '' of 1788 that the motion of an incompressible three - dimensional fluid possesses a variational formulation . 
in modern language , if the fluid occupies the whole space and the lagrangian map is specified at time $0$ ( the identity ) and at some time $T$ , the lagrangian map at intermediate times is an extremum of the action integral $S=\int_0^T{\rm d}t\int_{\mathbb{R}^3}{\rm d}^3a\;L(\dot{\boldsymbol{x}})$ , $L(\dot{\boldsymbol{x}})\equiv\frac{1}{2}|\dot{\boldsymbol{x}}|^2$ , with the constraint of incompressibility that $\det\!\left(\nabla^{\rm L}\boldsymbol{x}(\boldsymbol{a},t)\right)=1$ for $0\le t\le T$ , where we recall that $\nabla^{\rm L}$ denotes the lagrangian gradient . indeed , if , following lagrange , we introduce infinitesimal variations $\delta\boldsymbol{x}(\boldsymbol{a},t)$ , vanishing at $t=0$ and at $t=T$ , we find that the vanishing of the variation of the action requires ( after an integration by parts over time ) that $\delta S=-\int_0^T{\rm d}t\int_{\mathbb{R}^3}{\rm d}^3a\;\ddot{\boldsymbol{x}}(\boldsymbol{a},t)\cdot\delta\boldsymbol{x}(\boldsymbol{a},t)=0$ , for all variations consistent with incompressibility . this constraint is more easily written in eulerian coordinates by defining $\delta\boldsymbol{x}^{\rm E}(\boldsymbol{r},t)\equiv\delta\boldsymbol{x}(\boldsymbol{a}(\boldsymbol{r},t),t)$ , where $\boldsymbol{a}(\boldsymbol{r},t)$ is the inverse of the lagrangian map . the incompressibility constraint for infinitesimal variations is then simply $\nabla\cdot\delta\boldsymbol{x}^{\rm E}=0$ . it thus follows that the variation of the action must vanish for all $\delta\boldsymbol{x}^{\rm E}$ of zero eulerian divergence . hence the acceleration must be the eulerian gradient of a suitable function ( actually the negative of the pressure ) : $\ddot{\boldsymbol{x}}=-\nabla p$ , which is equivalent to the euler equation when there is no external force . it has been known for a long time that the obvious invariances of the lagrangian , such as invariance under time and space translations and under rotations , are connected , by noether s theorem , to standard mechanical invariants , viz . the conservation of kinetic energy , of momentum and of angular momentum . in 1967 william a. newcomb , a theoretical physicist well - known for the `` newcomb paradox '' , noticed a new continuous invariance group he called `` exchange invariance '' ( which is now mostly called `` relabeling symmetry '' ) . from this he inferred , by noether s theorem , new invariants which have been identified with the cauchy invariants many years later ( see section [ ss : rebirth ] ) . newcomb observed that the action is preserved if we change the original lagrangian coordinates $\boldsymbol{a}$ to new lagrangian coordinates $\boldsymbol{b}$ , provided the map from the $\boldsymbol{a}$ to the $\boldsymbol{b}$ conserves volumes . an infinitesimal version , needed to apply noether s theorem , is $\boldsymbol{a}\mapsto\boldsymbol{a}+\boldsymbol{A}(\boldsymbol{a})$ , $\nabla^{\rm L}\cdot\boldsymbol{A}(\boldsymbol{a})=0$ . the resulting change in the lagrangian map at time $t$ is then $\delta x_i=(\partial_j^{\rm L}x_i)A_j$ ( in components with summation over repeated indices ) and the change in the action is $\delta S=\int_0^T{\rm d}t\int_{\mathbb{R}^3}{\rm d}^3a\;\dot{x}_i\,\partial_t^{\rm L}\!\left[(\partial_j^{\rm L}x_i)A_j\right]$ , where $\partial_t^{\rm L}$ denotes the lagrangian time derivative , when a dot would be too cumbersome . setting this variation equal to zero for all perturbations $\boldsymbol{A}$ of vanishing divergence , we find that the vector contracted with $A_j$ should be a lagrangian gradient . thus its lagrangian curl should vanish , that is $\nabla^{\rm L}\times\left[\partial_t^{\rm L}\!\left(\dot{x}_i\nabla^{\rm L}x_i\right)-\ddot{x}_i\nabla^{\rm L}x_i\right]=0$ . by the euler equation above , the second term in the square bracket is the lagrangian gradient of the pressure , whose lagrangian curl vanishes . thus this relation is equivalent to the cauchy invariants equations , when they are written in hankel s form , with a curl in front , as in section [ ss : hankel ] . actually , newcomb was preceded in 1960 by eckart , a specialist of quantum mechanics who applied variational methods to fluid dynamics . eckart rederived the circulation theorem and obtained the cauchy invariants equations ( his equations ( 3.9 ) , ( 4.4 ) and ( 4.5 ) ) without any explicit use of noether s theorem , but in another 1963 paper eckart pointed out that `` the general theorems just mentioned are also consequences of this invariance , a fact that does not seem to have been noted before .
''( by `` this invariance '' he understands the unimodular group of volume - conserving transformations of the lagrangian coordinates . ) in 1963 calkin did apply noether s theorem to hydrodynamics and magnetohydrodynamics and recovered various known invariants , but apparently not the cauchy invariants .the subject of the relabeling symmetry was reviewed in salmon s annual review of fluid mechanics paper , which also contains further references , none of which mentions the cauchy invariants .finally , we should mention that a special case of cauchy s reformulation of the equations in lagrangian coordinates was rediscovered in the fifties without any use of noether s theorem . in a classical book on water waves ,stoker pointed out that certain hydrodynamical problems involving a free surface are better handled using lagrangian than eulerian coordinates .he then described the phd work of his student pohle in the early fifties at the courant institute of mathematical sciences in new york .pohle established cauchy s invariant equation ( without citing cauchy ) in the special case of two dimensions when there is a single invariant , while pointing out that `` similar results hold for the three dimensional case . ''he then assumed analyticity in time of the lagrangian map and obtained recurrence relations among the corresponding taylor coefficients . here, two - dimensionality allowed him to use a complex - variable method to obtain special solutions of relevance to the breakup of a dam .stoker stated that the assumed time - analyticity could probably be established `` at least for a finite time '' and pointed out that `` the convergence of developments of this kind in some simpler problems in hydrodynamics has been proved by lichtenstein . ''this happened indeed , but only recently .cauchy s 1815/1827 paper was cited many times in stoker s book , in connection with waves , and not in the section discussing the lagrangian formulation . in the mid - eighties stokers 1957 work on the two - dimensional lagrangian equations was cited by abrashkin and yakubovich , among the persons who in the late nineties would be involved in correctly identifying cauchy s role in the discovery of the invariants ( see , section [ ss : rebirth ] ) .it was only when the 20th century was nearing its end that cauchy s name was again associated to his invariants , thanks to russian scientists .of course russia has had for a long time a very strong tradition in fluid mechanics . in 1996 abrashkin ,zenkovich and yakubovich , from the institute of applied physics in nizhny novgorod , wrote a paper about a new matrix reformulation of the 3d euler equations in lagrangian coordinates .their eqs .( 4 ) are cauchy s invariants equations written as three scalar equations , just as in cauchy s 1815/1827 paper .the time - independent right hand sides are qualified by them as `` integrals of motion '' and attributed to cauchy ( only by referring to lamb ) . on the next page, they refer to them as `` cauchy invariants . '' in a 1997 review paper on nonlinear waves with significant emphasis on plasma physics , zakharov and kuznetsov from the landau institute in moscow discussed the relabeling symmetry and the corresponding conservation law .their eq .( 7.11 ) gives the cauchy invariants equations in vector notation .they do point out , including in their abstract , that these are `` the cauchy invariants . 
'' again , they cite cauchy s work with an indirect reference via lamb .the name `` cauchy invariant '' is of course fully appropriate , given the modern meaning of `` invariant '' , a local or global quantity conserved in the course of time .this name was already used in russia several years prior to the 19961997 publications .evsey yakubovich ( 2013 private communication , transmitted through evgenii kuznetsov ) confirmed that it was used in internal discussions at the institute of applied physics in the early nineties .two further publications containing the name `` cauchy invariants '' were published by the nizhnii novgorod group .cauchy s role in introducing the invariants having thus finally been recognized , several works discussing the cauchy invariants have appeared since the year 2000 .what have we learned from exploring this more than two - century - long ( 17882014 ) history of the lagrangian formulation for the incompressible euler equations ?actually , we have a situation ( here in hydrodynamics ) , for which a discovery made two centuries back and hardly ever used since , has emerged as particularly relevant for modern research developments .we have in mind pohle s early - fifties work on the breakup of dams , the invariants obtained from noether s theorem in the sixties and the recent proof of time - analyticity of lagrangian fluid particle trajectories . in all these instances ,cauchy s invariants equations do play a key role , but were actually rediscovered in different ways , for example by a transposition to incompressible hydrodynamics of a lagrangian perturbation method that cosmologists have developed since the nineties . probably , cauchy made his 1815 discovery too early to be adequately appreciated , because the study of invariants and conservation laws would not emerge as an important paradigm for another century .only a corollary of , namely cauchy s vorticity formula was attracting attention in the early 19th century , because of lagrange s theorem . the first serious opportunity to understand some of the importance of cauchy s invariants equations came in germany around 1860 after dirichlet stated that lagrangian coordinates are important ( a statement not heard much again until late in the 20th century ) , when gttingen university , possibly prompted by riemann , pushed for studies based on lagrangian coordinates to get more insight into helmholtz s vorticity theorems and , last but not least , when hankel found how to make full use of cauchy s invariants equations and in the process came across the circulation theorem some years before kelvin .later , in the first decades of the 20th century , cauchy s invariants equations faded into oblivion or were viewed a mere intermediate step in proving cauchy s vorticity formula .eventually , developments in theoretical mechanics , closely connected to the rise of quantum mechanics and later of quantum field theory , led to a rediscovery of the invariants in the late 1960 through application of noether s theorem .another 30 years elapsed , during which developments in nonlinear physics were increasingly making use of symmetries and invariants , until the crucial importance of cauchy s work could be appreciated .+ acknowledgments we are grateful to jrmie bec , bruno belhoste , olivier darrigol , michael eckert , evgenii kuznetsov , koji ohkitani , walter pauls and vladislav zheligovsky for useful discussions and to florence greffe for helping us with historical material .abrashkin , a.a . 
, yakubovich , e.i .1985 ` nonstationary vortex flows of an ideal incompressible fluid . '_ j. appl .mech . tech ._ , * 26 * , 2 , 202208 . translated from __ , * 2 * , 5764 , 1985 , in russian .abrashkin , a.a . &yakubovich , e.i .2006 _ vortex dynamics in the lagrangian description _ , fizmatlit , moscow .abrashkin , a.a . , zenkovich , d.a . ,yakubovich , e.i .1996 ` matrix formulation of hydrodynamics and extension of ptolemaic flows to three - dimensional motions . ' _ radiophys .quantum el ._ , * 39 * , 6 , 518 - 526 . translated from _ izv ._ , * 39 * , 6 , 783796 , 1996 , in russian .andrade , jules 1898 _ leons de mcanique physique _ , paris ,societ dditions scientifiques .http://gallica.bnf.fr/ark:/12148/bpt6k8832547.r=andrade.langen anonymous , ( signed as hl . ), 1863 ` aufsatz ber _ zur allgemeinen theorie der bewegung der flssigkeiten .eine von der philosophischen facultt der georgia augusta am 4 .juni 1861 gekrnte preisschrift , gttingen _ ' in _ die fortschritte der physik i m jahre 1861 _ produced by _physikalische gesellschaft zu berlin _ , 5761 .https://play.google.com/books/reader?id=zt0eaaaaqaaj&printsec=frontcover&output=reader&authuser=0&hl=en&pg=gbs.pa57 appell , paul 1897 ` sur les quations de l ' hydrodynamique et la thorie des tourbillons. _ journal de mathmatiques pures et appliques _ , 5e srie , * 3 * , 516 .http://portail.mathdoc.fr/jmpa/pdf/jmpa_1897_5_3_a1_0.pdf arnold , v.i . ,zeldovich , y.b . ,ruzmaikin , a.a . , sokoloff , d.d .1981 ` a magnetic field in a stationary flow with stretching in a riemannian space . 'jetp _ , * 54 * , 10831086 .translated from __ , * 81 * , 20522058 , in russian .auerbach , felix 1881 _ die theoretische hydrodynamik nach dem gange ihrer entwickelung in der neuesten zeit in krze dargestellt _ : von dem k. venetianischen institute der wissenschaften gekrnte preisschrift , f. vieweg und sohn , braunschweig .https://archive.org/stream/dietheoretische01auergoog#page/n7/mode/2up brenier , y. , frisch , u. , hnon , m. , loeper , g. , matarrese , s. , mohayaee , r. , sobolevski , a. 2003 ` reconstruction of the early universe as a convex optimization problem . '_ mon . not .r. astron ._ , * 346 * , 501524 .cantor , moritz 1879 ` hankel hermann h. ' in _ allgemeine deutsche biographie _ , * 10 * , 516519 .http://daten.digitale-sammlungen.de/0000/bsb00008368/images/index.html?fip=193.174.98.30&id=00008368&seite=518 cauchy , augustin - louis 1815/1827 ` thorie de la propagation des ondes la surface dun fluide pesant dune profondeur indfinie - prix danalyse mathmatique remport par m. augustin - louis cauchy , ingnieur des ponts et chausses .( concours de 1815 ) . '_ mmoires prsents par divers savans lacadmie royale des sciences de linstitut de france et imprims par son ordre .sciences mathmatiques et physiques_. tome i , imprim par autorisation du roi limprimerie royale , 5318 .dalmedico , a.d .1989 ` la propagation des ondes en eau profonde et ses dveloppements mathmatiques ( poisson , cauchy 18151825 ) . ' in _ the history of modern mathematics _ , vol.2 , proceedings of the symposium on the history of modern mathematics , new - york , june , 20 - 24 , 1988 , edited by d.e .rowe , j.mc cleary , 129168 .dick , auguste 1970 _ emmy noether : 1882 - 1935 _ , birkhuser , basel . translated into english by h.i .blocher , 1981 , birkhuser , boston inc ..( lejeune-)dirichlet , gustav peter 1859 , produced _ post mortem _ by r. dedekind , ` untersuchungen ber ein problem der hydrodynamik . 
'_ abhandlungen der mathematischen klasse der kniglichen gesellschaft der wissenschaften zu gttingen _ , * 8 * , 342 .http://gdz.sub.uni-goettingen.de/dms/load/img/?ppn=gdzppn002018772 euler , leonhard 1755 [ printed in 1757]`continuation des recherches sur la thorie du mouvement des fluides . '_ masb _ , * 1 * ] , 316361 . also in _ opera omnia _ ,ser . 2 , * 12 * , 92132 , * e227*. http://bibliothek.bbaw.de/bibliothek/digital/struktur/02-hist/1755/jpg-0600/00000324.htm hankel , hermann 1861 _ zur allgemeinen theorie der bewegung der flssigkeiten_. eine von der philosophischen facultt der georgia augusta am 4 .juni 1861 gekrnte preisschrift , gttingen . printed by dieterichschen univ .- buchdruckerei ( w. fr .kaestner ) .http://babel.hathitrust.org/cgi/pt?id=mdp.39015035826760;view=1up;seq=5 von helmholtz , hermann 1858 ` ber integrale der hydrodynamischen gleichungen , welche den wirbelbewegungen entsprechen . '_ journal fr die reine und angewandte mathematik _ , * 55 * , 2555 .translated into english by p.g .tait , 1867 ` on integrals of the hydrodynamical equations , which express vortex motion . ' _ the london , edinburgh , and dublin philosophical magazine _ , supplement to vol .xxxiii , 485512 . http://www.biodiversitylibrary.org/item/121849#page/499/mode/1up hlder , ernst 1933 ` ber die unbeschrnkte fortsetzbarkeit einer stetigen ebenen bewegung in einer unbegrenzten incompressiblen flssigkeit . '_ mathematische zeitschrift _ , * 37 * , 698726 .http://www.digizeitschriften.de/de/dms/img/?ppn=ppn266833020_0037&dmdid=dmdlog70 institut de france 1816 .prix dcrns dans la sance publique du 8 janvier 1816 .thorie des ondes . in_ annales maritimes et coloniales ou recueil de lois ou ordonnances royales , etc _ 1816 iie partie , 6061 , publis par m. bajot .imprimerie royale ( paris ) .http://books.google.de/books?id=8wiyaaaayaaj&pg=pa60&lpg=pa60&dq=institut+royal+de+france+prix+decernes+dans+la+seance+publique+1816&source=bl&ots=fyciwkn_hn&sig=fv4ge9jnwhf0lic42zftnwac2gs&hl=en&sa=x&ei=rit7uuc4icagywpd9ykacq&redir_esc=y#v=onepage&q=institut%20royal%20de%20france%20prix%20decernes%20dans%20la%20seance%20publique%201816&f=false knigsberger , leo 1902 - 1903 _ hermann von helmholtz _ , 3 volumes , f. vieweg und sohn , braunschweig . prepared for internet by g. drflinger 2010 universittsbibliothek heidelberg .translated into english by f.a .welby , ( being the part about the life , slightly abridged with permission of the author and german publisher ) , with a preface by lord kelvin , 1906 , oxford , at the clarendon press .https://archive.org/stream/hermannvonhelmho00koenrich#page/n5/mode/2up lvy , maurice 1890 ` lhydrodynamique moderne et lhypothse des actions a distance . '_ revue gnrale des sciences pures et appliques _ , * 23 * , 721728. http://gallica.bnf.fr/ark:/12148/bpt6k213928z/f723 .image.r = maurice%20levy%20hydrodynamique%20moderne%201890.langen[http://gallica.bnf.fr / ark:/12148/bpt6k213928z / f723 .image.r = maurice%20levy%20hydrodynamique%20moderne%201890.langen ] lichtenstein , leon 1927 ` ber einige existenzprobleme der hydrodynamik .zweite abhandlung .nichthomogene , unzusammendrckbare , reibungslose flssigkeiten . '_ mathematische zeitschrift _ , * 26 * , 1 , 196323 .http://gdz.sub.uni-goettingen.de/dms/load/img/?ppn=gdzppn002369230&iddoc=82659 noether , emmy 1918 ` invariante variationsprobleme . 
'_ nachrichten von der gesellschaft der wissenschaften zu gttingen , mathematisch - physikalische klasse 1918 _ , 235257 .translated into english by m.a .tavel , 1971 ` invariant variation problem . ' _ transport theory and statistical physics _, * 1 * , 3 , 183207 .prandtl , ludwig 1905 ` ber flssigkeitsbewegung bei sehr kleiner reibung . ' in _ verhandlungen des iii .internationalen mathematiker - kongresses in heidelberg vom 8 .august 1904 _ , edited by a. krazer , b.g .teubner , leipzig , 484491 .https://archive.org/stream/verhandlungende00krazgoog#page/n504/mode/2up.translated into english 1928 ` on the motion of fluids of very small viscosity . '_ naca , tech . memo ._ , n. 452 .http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930090813_1993090813.pdf risser , m.r .1925 _ essai sur la thorie des ondes par mersion _ , thse prsente a la facult des sciences de paris , gauthier - villars et cie , diteurs , paris .stokes , george gabriel 1845 , [ printed in 1847 ] , ` on the theories of the internal friction of fluids in motion and of the equilibrium and motion of elastic solids . '_ transactions of the cambridge philosophical society _ ,* 8 * , 287319 .stokes , george gabriel 1846 , [ printed in 1847 ] , ` report on the recent researches in hydrodynamics . ' _ report of the british association for the advancement of science _ , 120 .report of the sixteenth meeting of the british association held at southampton in september 1846 .stokes , george gabriel 1848 ` notes on hydrodynamics .iv demonstration of a fundamental theorem . '_ the cambridge and dublin mathematical journal _ ,iii , 209219 .stokes , george gabriel 1883 ` notes on hydrodynamics .iv demonstration of a fundamental theorem . '_ mathematical and physical papers by george gabriel stokes _, reprinted from the original journals and transactions , with additional notes by the author , cambridge university press , * 2 * , 3650 .stokes , george gabriel & thomson , william ( lord kelvin ) 18461869 & 18701903 . vols . 1 and 2 of _ the correspondence between sir george gabriel stokes and sir william thomson , baron kelvin of largs _ , edited with an introduction by d.b .wilson , cambrige university press .strutt , john william ( lord rayleigh ) 1904 ` sir george gabriel stokes , bart .1819 - 1903 . '_ proceedings of the royal society of london _ , * 75 * , 199216 .weber , h.m ., 1868 ` ber eine transformation der hydrodynamischen gleichungen . '_ journal fr die reine und angewandte mathematik _ ( crelle ) , berlin , * 68 * , 286292 .http://gdz.sub.uni-goettingen.de/dms/load/img/?ppn=gdzppn00215353x wolibner , witold 1933 ` un theorme sur lexistence du mouvement plan dun fluide parfait , homogne , incompressible , pendant un temps infiniment long . '_ mathematische zeitschrift _ , * 37 * , 698726 .wood , a. 2003 george gabriel stokes 18191903 in _ physicists of ireland , passion and precision _ mccartney , m. & whitaker , a. , eds . ,pp . 8594 , institute of physics publishing ( bristol & philadelphia ) .
|
two prized papers , one by augustin cauchy in 1815 , presented to the french academy and the other by hermann hankel in 1861 , presented to gttingen university , contain major discoveries on vorticity dynamics whose impact is now quickly increasing . cauchy found a lagrangian formulation of 3d ideal incompressible flow in terms of three invariants that generalize to three dimensions the now well - known law of conservation of vorticity along fluid particle trajectories for two - dimensional flow . this has very recently been used to prove analyticity in time of fluid particle trajectories for 3d incompressible euler flow and can be extended to compressible flow , in particular to cosmological dark matter . hankel showed that cauchy s formulation gives a very simple lagrangian derivation of the helmholtz vorticity - flux invariants and , in the middle of the proof , derived an intermediate result which is the conservation of the circulation of the velocity around a closed contour moving with the fluid . this circulation theorem was to be rediscovered independently by william thomson ( kelvin ) in 1869 . cauchy s invariants were only occasionally cited in the 19th century besides hankel , foremost by george stokes and maurice lvy and even less so in the 20th until they were rediscovered via emmy noether s theorem in the late 1960 , but reattributed to cauchy only at the end of the 20th century by russian scientists .
|
coherent spectroscopy enables one to study mechanisms involving dynamic light scattering .the spectral distribution of a monochromatic optical field scattered by moving particles is modified as a consequence of momentum transfer ( doppler broadening ) .the measurement of the doppler linewidth of this field ( referred as object field ) with an optimal sensitivity is crucial , since doppler conversion yields are typically low .+ optical mixing ( or _ postdetection filtering _ ) techniques are derived from rf spectroscopy techniques .they can be grouped in two categories : homodyne and heterodyne schemes . in homodyne mixing ( fig . [ fig_coh_detex](a ) ) , self - beating object light impinges on a -bandwidth photodetector ( also referred as optical mixer or photo - mixer , pm ) . to assess a frequency component of the object light , the output of the pmis sent to a spectrum analyser , whose bandwidth defines the detection resolution .the resulting spectra are proportional to the second - order object field spectral distribution . in heterodynemixing , sketched in fig .[ fig_coh_detex](b ) , the object light is mixed onto a pm with a frequency - shifted reference beam , also called local oscillator ( lo ) . the lo field ( ) is detuned to provoke a heterodyne beat of the object - lo field cross contributions to the recorded intensity .this beat is sampled by a pm and sent to a spectrum analyser whose bandwidth defines the apparatus resolution .this scheme enables one to measure the first order object field spectral distribution .+ in heterodyne optical mixing experiments , the pm bandwidth defines the span of the measurable spectrum ( usually ghz ) .the resolution can be lowered down to the sub hertz range , which is suitable for , among other applications , spectroscopy of liquid and solid surfaces , dynamic light scattering and in vivo laser doppler anemometry .but these schemes are inadequate for imaging applications , because measurements are done on one point .the absence of spatial resolution has been sidestepped by scanning techniques , designed at the expense of temporal resolution .+ we present a heterodyne optical mixing detection scheme on an array detector ( configuration of fig .[ fig_coh_detex](b ) ) .typical array detectors sampling rates seldom run over 1 khz , failing to provide a bandwidth large enough for most doppler applications to date .nevertheless , their strong advantage is to perform a parallel detection over a large number of pixels .we present a spatial and temporal modulation scheme ( spatiotemporal heterodyning ) that uses the spatial sampling capabilities of an area detector to counterbalance the noise issue of a measurement in heterodyne configuration performed in the low temporal frequency range ( e.g. with a typical ccd camera ) .the issue of using narrow - bandwidth camera pms is alleviated by detuning the lo field optical frequency accordingly to the desired spectral point of the object field to measure .post - detection filtering results from a numerical fourier transform over a limited number of acquired images . 
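to make the role of the frequency shift explicit , one can write down the standard decomposition of the intensity on the detector ( a textbook sketch in our own notation , not necessarily that of the following paragraphs : $E$ denotes the object field at optical frequency $\omega$ and $E_{\rm LO}$ the reference field at $\omega_{\rm LO}=\omega+\Delta\omega$ ) :
$$ I(t)=\left|E(t)+E_{\rm LO}(t)\right|^{2}=|E(t)|^{2}+|E_{\rm LO}(t)|^{2}+2\,{\rm Re}\!\left[E(t)\,E_{\rm LO}^{*}(t)\right] . $$
the two self - beating terms vary slowly , whereas the cross term beats at the detuning frequency $\Delta\omega$ ; choosing $\Delta\omega$ near the frequency component of interest brings the first - order object field fluctuations into the narrow band sampled by the low frame rate detector . a discrete fourier transform over a few consecutive frames then isolates this cross term ; for instance , with the beat set to one quarter of the frame rate , the four - phase combination $(I_1-I_3)+{\rm i}\,(I_2-I_4)$ of four successive images is a common way to estimate the complex heterodyne signal in this kind of imaging scheme .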
+this heterodyne optical mixing scheme on a low frame rate array detector can be used as a filter to analyze coherent light .it has already been used in several applications yet , including the detection of ultrasound - modulated diffuse photons , low - light spectrum analysis , laser doppler imaging , and dynamic coherent backscattering effect study .the purpose of this paper is to present its mechanism .we consider the spatially and temporally coherent light field of a cw , single axial mode laser ( dimensionless scalar representation ) : where is the angular optical frequency , is the amplitude of the field ( positive constant ) , ; describes the laser amplitude fluctuations and the phase fluctuations .this field shines a collection of scatterers that re - emits the object field ( or scattered field ) , described by the following function : where is a positive constant and is the time - domain phase and amplitude fluctuation induced by dynamic scattering of the laser field , i.e. the cause of the object field fluctuations due to dynamic scattering we intend to study . as in conventional heterodyne detection schemes ,a part of the laser field , taken - out from the main beam constitutes the reference ( lo ) field : where is a positive constant .the lo optical frequency is shifted with respect to the main laser beam by to provoke a tunable temporal modulation of the interference pattern resulting from the mix of the scattered and reference fields .it can be done experimentally by diffracting the reference beam with rf - driven bragg cells ( acousto - optic modulators ) for example .+ the expression of the light instant intensity impinging on the camera pms is : ) ^2\ ] ] where is a positive constant and ] , in db .horizontal axis : detuning frequency , in hz .dotted line : .continuous line : .frequency diagram of heterodyne terms spectral components in the case where the detuning frequency is set to . 6 . images ( pixels ) of the sample for equal to 0 hz ( a ) , 400 hz ( b ) , 4000 hz ( c ) and 8000 hz ( 80 ms image exposure time , 4-image demodulation .arbitrary logarithmic scale display . 7 .traces obtained by summation along columns of fig.[fig_images_80ms_4im](a ) to ( d ) intensities .curves a to d correspond to a detuning frequency equal to 0 hz ( a ) , 400 hz ( b ) , 4000 hz ( c ) and 8000 hz ( d ) . horizontal scale is the image horizontal pixel index .vertical scale is in linear arbitrary units .frequency spectra of the light diffused through a suspension of latex particles in brownian motion .exposure time is ms .demodulation is performed with 4 ( a ) , 8 ( b ) , 16 ( c ) and 32 ( d ) images .horizontal axis is the detuning frequency in khz .vertical scale is in linear arbitrary units .the four curves overlap .spectra measured with exposure time ms ( a ) , 20 ms ( b ) , 5 ms ( c ) , and 4-image demodulation .note that curve ( a ) the same as fig.[fig_spectrum_80ms ] ( 4 images and 80 ms ) .vertical scale is in linear arbitrary units .the three curves overlap .frequency lineshapes of the light diffused through the cell for different concentrations of latex spheres .exposure time is ms .4-image demodulation .horizontal axis is the detuning frequency in khz .vertical scale is in linear arbitrary units .volumic concentration of latex beads : ( a ) , ( b ) , ( c ) , to ( d ) .
|
we describe a scheme into which a camera is turned into an efficient tunable frequency filter of a few hertz bandwidth in an off - axis , heterodyne optical mixing configuration , enabling to perform parallel , high - resolution coherent spectral imaging . this approach is made possible through the combination of a spatial and temporal modulation of the signal to reject noise contributions . experimental data obtained with dynamically scattered light by a suspension of particles in brownian motion is interpreted .
|
automated program analyses are useful for various purposes . for instance, compilers can benefit from their results to improve the translation of source into target programs .analysis information can be helpful to programmers to reason about the behavior and operational properties of their programs .moreover , this information can also be documented by program documentation tools or interactively shown to developers in dedicated programming environments . on the one hand , declarative programming languages provide interesting opportunities for analyzing programs . on the other hand , their complex or abstract execution model demands for good tool support to develop reliable programs .for example , the detection of type errors in languages with higher - order features or the detection of mode problems in the use of prolog predicates .this work is related to functional logic languages that combine the most important features of functional and logic programming in a single language ( see for recent surveys ) . in particular , these languages provide higher - order functions and demand - driven evaluation from functional programming together with logic programming features like non - deterministic search and computing with partial information ( logic variables ) .this combination has led to new design patterns and better abstractions for application programming .moreover , program implementation and analysis aspects for functional as well as logic languages can be considered in a unified framework .for instance , test cases for functional programs can be generated by executing functions with logic variables as arguments .automated program analyses have been already used for functional logic programming in various situations .for instance , currydoc is an automatic documentation tool for the functional logic language curry that analyzes curry programs to document various operational aspects , like the non - determinism behavior or completeness issues .currybrowser is an interactive analysis environment that unifies various program analyses in order to reason about curry applications .kics2 , a recent implementation of curry that compiles into haskell , includes an analyzer to classify higher - order and deterministic operations in order to support their efficient implementation which results in highly efficient target programs .similar ideas are applied in the implementation of mercury which uses mode and determinism information to reorder predicate calls .non - determinism information as well as information about definitely demanded arguments has been used to improve the efficiency of functional logic programs with flexible search strategies . 
a recent eclipse - based development environment for curry also supports the access to analysis information during interactive program development . these different uses of program analyses are the motivation for the current work . we present cass ( curry analysis server system ) which is intended to be a central component of current and future tools for functional logic programs . cass provides a generic interface to support the integration of various program analyses . although the current implementation is strongly related to curry , cass can also be used for similar declarative programming languages , like toy . the analyses are performed on an intermediate format into which source programs can be compiled . cass supports the analysis of larger applications by a modular and incremental analysis . the analysis results for each module are persistently stored and recomputed only if it is necessary . since cass is implemented in curry , it can be directly used in tools implemented in curry , like the documentation generator currydoc , the analysis environment currybrowser , or the curry compiler kics2 . cass can also be invoked as a server system providing a text - based communication protocol in order to interact with tools implemented in other languages , like the eclipse plug - in for curry . cass is implemented as a master / worker architecture , i.e. , it can distribute the analysis work to different processes in order to exploit parallel or distributed execution environments . in the next section we review some features of curry . section [ sec : anaimpl ] shows how various kinds of program analyses can be implemented and integrated into cass . some uses of cass are presented in section [ sec : usage ] before its implementation is sketched in section [ sec : impl ] and evaluated in section [ sec : eval ] . in this section we review some aspects of curry that are necessary to understand the functionality and implementation of our analysis tool . curry is a declarative multi - paradigm language combining in a seamless way features from functional , logic , and concurrent programming . curry has a haskell - like syntax ( the application of a function f to an expression e is denoted by juxtaposition , i.e. , `` f e '' ) , extended by the possible inclusion of free ( logic ) variables in conditions and right - hand sides of defining rules . curry also offers standard features of functional languages , like polymorphic types , modules , or monadic i / o which is identical to haskell s i / o concept . thus , `` io a '' denotes the type of an i / o action that returns values of type a . a _ curry program _ consists of the definition of functions and the data types on which the functions operate .
a _ curry program _ consists of the definition of functions and the data types on which the functions operatefunctions are defined by conditional equations with constraints in the conditions .they are evaluated lazily and can be called with partially instantiated arguments .for instance , the following program defines the types of boolean values and polymorphic lists and functions to concatenate lists ( infix operator `` '' ) and to compute the last element of a list : data bool = true | false data list a = [ ] | a : list a ( + + ) : : [ a ] - > [ a ] - > [ a ] [ ] + + ys = ys ( x : xs ) + + ys = x : ( xs + + ys) last xs | _ + + [ x ] = : = xs = x where x free the data type declarations define and as the boolean constants and ( empty list ) and ( non - empty list ) as the constructors for polymorphic lists ( is a type variable ranging over all types and the type `` '' is usually written as for conformity with haskell ) .the ( optional ) type declaration ( `` '' ) of the function specifies that takes two lists as input and produces an output list , where all list elements are of the same ( unspecified ) type . into elements of type . ]the operational semantics of curry is a conservative extension of lazy functional programming ( if free variables do not occur in the program or the initial goal ) and ( concurrent ) logic programming . to describe this semantics , compile programs , or implement analyzers and similar tools ,an intermediate representation of curry programs has been shown to be useful .programs of this intermediate language , called flatcurry , contain a single rule for each function where the pattern matching strategy is represented by case / or expressions .the basic structure of flatcurry is defined as follows : denotes a sequence of objects . 
P : : = D_1 ... D_m
D : : = f ( x_1 , ... , x_n ) = e
e : : = x | c ( e_1 , ... , e_n ) | f ( e_1 , ... , e_n )
| case e of { p_1 - > e_1 ; ... ; p_k - > e_k }
| fcase e of { p_1 - > e_1 ; ... ; p_k - > e_k }
| e_1 or e_2
| let x_1 , ... , x_n free in e
p : : = c ( x_1 , ... , x_n )
a program consists of a sequence of function definitions with pairwise different variables in the left - hand sides . the right - hand sides are expressions composed by variables , constructor and function calls , case expressions , disjunctions , and introduction of free ( unbound ) variables . a case expression has the form case e of { c_1 ( x_1 , ... , x_n1 ) - > e_1 ; ... ; c_k ( x_1 , ... , x_nk ) - > e_k } , where e is an expression , c_1 , ... , c_k are different constructors of the type of e , and e_1 , ... , e_k are expressions . the _ pattern variables _ x_i are local variables which occur only in the corresponding subexpression e_i . the difference between case and fcase shows up when the argument e is a free variable : case suspends ( which corresponds to residuation ) whereas fcase nondeterministically binds this variable to the pattern in a branch of the case expression ( which corresponds to narrowing ) . note that it is possible to translate other functional logic languages , like toy , or even haskell into this intermediate format . since our analysis tool is solely based on flatcurry , it can also be used for other source languages provided that there is a translator from such languages into flatcurry . mature implementations of curry , like pakcs or kics2 , provide support for meta - programming by a library containing data types for representing flatcurry programs and an i / o action for reading a module and translating its contents into the corresponding data term . for instance , a module of a curry program is represented as an expression of type prog defined by data prog = prog string [ string ] [ typedecl ] [ funcdecl ] [ opdecl ] where the arguments of the data constructor are the module name , the names of all imported modules , and the lists of all type , function , and infix operator declarations . furthermore , a function declaration is represented as data funcdecl = func qname int visibility typeexpr rule where the arguments are the qualified name ( of type qname , i.e. , a pair of module and function name ) , arity , visibility ( public or private ) , type , and rule ( of the form `` rule xs e '' , containing the argument variables and the defining expression ) of the function . finally , the data type for expressions just reflects its formal definition : data expr = var int | lit literal | comb combtype qname [ expr ] | case casetype expr [ ( pattern , expr ) ] | or expr expr | free [ int ] expr data combtype = funccall | conscall | ... thus , variables are numbered , literals ( like numbers or characters ) are distinguished from combinations ( comb ) which have a first argument of type combtype to distinguish constructor applications and applications of defined functions . the remaining data type declarations for representing curry programs are similar but we omit them for brevity . basically , a program analysis can be considered as a mapping that associates a program element with information about some aspect of its semantics . since most interesting semantical aspects are not computable , they are approximated by some abstract domain where each abstract value describes some set of concrete values . for instance , an `` overlapping rules '' analysis determines whether a function is defined by a set of overlapping rules , which means that some ground call to this function can be reduced in more than one way . an example for an operation that is defined by overlapping rules is the `` choice '' operation x ? _ = x _ ? y = y for this analysis one can use boolean values as the abstract domain so that the abstract value false is interpreted as `` defined by non - overlapping rules '' and true is interpreted as `` defined by overlapping rules '' . hence , the `` overlapping rules '' analysis has the type funcdecl - > bool which means that we associate a boolean value to each function definition .
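as a concrete illustration of this representation , the choice operation shown above could be encoded by the following funcdecl term ( a hand - written sketch : the module name test , the chosen variable numbering , and the constructors public , tvar and functype are taken from the standard flatcurry library and are assumptions rather than something fixed by the text ) :

    -- assumes the flatcurry type definitions shown above
    -- ( plus typeexpr and visibility from the standard flatcurry library )
    choiceFuncDecl :: FuncDecl
    choiceFuncDecl =
      Func ("Test","?") 2 Public
           (FuncType (TVar 0) (FuncType (TVar 0) (TVar 0)))  -- a -> a -> a
           (Rule [1,2] (Or (Var 1) (Var 2)))                  -- x ? y  =  x  or  y

note how the two overlapping source rules are merged into a single rule whose right - hand side is an or expression ; this is precisely the pattern that the overlapping rules analysis looks for .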
based on the data type definitions sketched in section [ sec : flatcurry ] and some standard functions , such an analysis can be defined by looking for occurrences of or expressions in the defining expression as follows : isoverlapping : : funcdecl - > bool isoverlapping ( func _ _ _ _ ( rule _ e ) ) = orinexpr e orinexpr : : expr - > bool orinexpr ( var _ ) = false orinexpr ( lit _ ) = false orinexpr ( comb _ _ es ) = any orinexpr es orinexpr ( case _ e bs ) = orinexpr e || any ( orinexpr . snd ) bs orinexpr ( or _ _ ) = true orinexpr ( free _ e ) = orinexpr e many interesting aspects require a more sophisticated analysis where dependencies are taken into account . for instance , consider a `` non - determinism '' analysis with the abstract domain data detdom = det | nondet here , det is interpreted as `` the operation evaluates in a deterministic manner on ground arguments . '' however , nondet is interpreted as `` the operation _ might _ evaluate in different ways for given ground arguments . '' the apparent imprecision is due to the approximation of the analysis . for instance , if the function g is defined by overlapping rules and the function f might call g , then f is judged as non - deterministic . in order to take into account such dependencies , the non - determinism analysis requires examining the current function as well as all directly or indirectly called functions for overlapping rules . due to recursive function definitions , this analysis cannot be done in one shot but requires a fixpoint computation . in order to make things simple for the analysis developer , cass supports such fixpoint computations and requires from the developer only the implementation of an operation of type funcdecl - > [ ( qname , a ) ] - > a where `` a '' denotes the type of abstract values . the second argument of type [ ( qname , a ) ] represents the currently known analysis values for the functions _ directly _ used in this function declaration . hence , the non - determinism analysis can be implemented as follows : nondetfunc : : funcdecl - > [ ( qname , detdom ) ] - > detdom nondetfunc ( func f _ _ _ ( rule _ e ) ) calledfuncs = if orinexpr e || freevarinexpr e || any ( = = nondet ) ( map snd calledfuncs ) then nondet else det thus , it computes the abstract value nondet if the function itself is defined by overlapping rules or contains free variables that might cause non - deterministic guessing ( we omit the definition of freevarinexpr since it is quite similar to orinexpr ) , or if it depends on some non - deterministic function . the actual analysis is performed by defining some start value for all functions ( the `` bottom '' value of the abstract domain , here : det ) and performing a fixpoint computation for the abstract values of these functions . cass uses a working list approach as default but also supports other methods to compute fixpoints . the termination of the fixpoint computation can be ensured by standard assumptions in abstract interpretation , e.g. , by choosing a finite abstract domain and monotonic operations , or by widening operations .
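since freevarinexpr is only described as being similar to orinexpr , here is one possible way to write it ( a sketch , not the authors code ; it merely reports whether the defining expression introduces any free variable ) :

    -- assumes the expr type shown in section [ sec : flatcurry ]
    freeVarInExpr :: Expr -> Bool
    freeVarInExpr (Var _)       = False
    freeVarInExpr (Lit _)       = False
    freeVarInExpr (Comb _ _ es) = any freeVarInExpr es
    freeVarInExpr (Case _ e bs) = freeVarInExpr e || any (freeVarInExpr . snd) bs
    freeVarInExpr (Or e1 e2)    = freeVarInExpr e1 || freeVarInExpr e2
    freeVarInExpr (Free _ _)    = True   -- a "let ... free" introduces logic variables

a more precise version could additionally check whether the introduced variables can actually be narrowed ; the coarse test above simply errs on the safe side of the approximation discussed before .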
to support the inclusion of different analyses in cass , there are an abstract type `` '' denoting a program analysis with abstract domain `` '' and several constructor operations for various kinds of analyses .each analysis has a name provided as a first argument to these constructors .the name is used to store the analysis information persistently and to pass specific analysis tasks to workers ( see below for more details ) .for instance , a simple function analysis which depends only on a given function definition can be created by funcanalysis : : string - > ( funcdecl - > a ) - > analysis a where the analysis name and analysis function are provided as arguments .thus , the overlapping analysis can be specified as overlapanalysis : : analysis bool overlapanalysis = funcanalysis `` overlapping '' isoverlapping a function analysis with dependencies can be constructed by dependencyfuncanalysis : : string - > a - > ( funcdecl - > [ ( qname , a ) ] - > a ) - > analysis a here , the second argument specifies the start value of the fixpoint analysis , i.e. , the bottom element of the abstract domain .thus , the complete non - determinism analysis can be specified as nondetanalysis : : analysis detdom nondetanalysis = dependencyfuncanalysis `` deterministic '' det nondetfunc it should be noted that this definition is sufficient to execute the analysis with cass since the analysis system takes care of computing fixpoints , calling the analysis functions with appropriate values , analyzing imported modules , etc . thus , the programmer can concentrate on implementing the logic of the analysis and is freed from many tedious implementation details .sometimes one is also interested in analyzing information about data types rather than functions .for instance , the curry implementation kics2 has an optimization for higher - order deterministic operations .this optimization requires some information about the higher - order status of data types , i.e. , whether a term of some type might contain functional values .cass supports such analyses by appropriate analysis constructors .a simple type analysis which depends only on a given type declaration can be specified by typeanalysis : : string - > ( typedecl - > a ) - > analysis a a more complex type analysis depending also on information about the types used in the type declaration can be specified by dependencytypeanalysis : : string - > a - > ( typedecl - > [ ( qname , a ) ] - > a ) - > analysis a similarly to a function analysis , the second argument is the start value of the fixpoint analysis and the third argument computes the abstract information about the type names used in the type declaration . the remaining entities in a curry program that can be analyzed are data constructors . since their definition only contains the argument types , it may seem uninteresting to provide a useful analysis for them .however , sometimes it is interesting to analyze their context so that there is a analysis constructor of type constructoranalysis : : string - > ( consdecl - > typedecl - > a ) - > analysis a thus , the analysis operation of type gets for each constructor declaration the corresponding type declaration .this information could be used to compute the sibling constructors , e.g. , the sibling for the constructor is .the information about sibling constructors is useful to check whether a function is completely defined , i.e. , contains a case distinction for all possible patterns . 
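As an aside on the type analyses just introduced, the higher-order status of data types mentioned for KiCS2 could be approximated with a dependency type analysis along the following lines. This is only a sketch and not the analysis actually used by KiCS2; it assumes the usual FlatCurry representation of type declarations (Type/TypeSyn, Cons, and type expressions built from TVar, FuncType, and TCons) and treats type variables simplistically.

  -- Sketch: might a value of this type contain functional values?
  hiOrderTypeAnalysis :: Analysis Bool
  hiOrderTypeAnalysis = dependencyTypeAnalysis "HigherOrderType" False hoType

  hoType :: TypeDecl -> [(QName,Bool)] -> Bool
  hoType tdecl usedtypes = case tdecl of
    Type    _ _ _ cdecls -> any (\ (Cons _ _ _ args) -> any hoTexp args) cdecls
    TypeSyn _ _ _ texp   -> hoTexp texp
   where
    hoTexp (TVar _)        = False  -- simplification: possible instantiations ignored
    hoTexp (FuncType _ _)  = True   -- a functional component
    hoTexp (TCons tc args) = maybe False id (lookup tc usedtypes) || any hoTexp args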
for instance , the operation ( in flatcurry notation ) not x = case x of true - > false false - > true is completely defined whereas cond x y = case x of true - > y is incompletely defined since it fails on as the first argument . to check this property , information about sibling constructorsis obviously useful .but how can we provide this information in an analysis for functions ? for this and similar purposes, cass supports the combination of different analyses .thus , an analysis developer can define an analysis that is based on information computed by another analysis . to make analysis combination possible, there is an abstract type `` '' to represent the analysis information of type `` '' for a given module and its imports . in order to look up analysis information about some entity, there is an operation lookupproginfo : : qname - > proginfo a - > maybe a one can use the analysis constructor combinedfuncanalysis : : string - > analysis b - > ( proginfo b - > funcdecl - > a ) - > analysis a to implement a function analysis depending on some other analysis .the second argument is some base analysis computing abstract values of type `` '' and the analysis function gets , in contrast to a simple function analysis , the analysis information computed by this base analysis .for instance , if the sibling constructor analysis is defined as siblingcons : : analysis [ qname ] siblingcons = constructoranalysis then the pattern completeness analysis might be defined by patcompanalysis : : analysis bool patcompanalysis = combinedfuncanalysis `` patcomplete '' siblingconsispatcomplete ispatcomplete : : proginfo [ qname ] - > funcdecl - > bool ispatcomplete siblinginfo fundecl = similarly , other kinds of analyses can be also combined with some base analysis by using the following analysis constructors : combinedtypeanalysis : : string - > analysis b - > ( proginfo b - > typedecl - > a ) - > analysis a combineddependencyfuncanalysis : : string - > analysis b - > a - > ( proginfo b - > funcdecl - > [ ( qname , a ) ] - > a ) - > analysis a combineddependencytypeanalysis : : string - > analysis b - > a - > ( proginfo b - > typedecl - > [ ( qname , a ) ] - > a ) - > analysis a for instance , an analysis for checking whether a function is totally defined , i.e. , always reducible for all ground arguments , can be based on the pattern completeness analysis .it is a dependency analysis so that it can be defined as follows ( in this case , is the bottom element since the abstract value denotes `` might not be totally defined '' ) : totalanalysis : : analysis bool totalanalysis = combineddependencyfuncanalysis `` total '' patcompanalysis true istotal istotal : : proginfo bool - > funcdecl - > [ ( qname , bool ) ] - > bool istotal pcinfo fdecl calledfuncs = ( maybe false i d ( lookupproginfo ( funcname fdecl ) pcinfo ) ) & & all snd calledfuncs hence , a function is totally defined if it is pattern complete and depends only on totally defined functions .further examples of combined analyses are the higher - order function analysis used in kics2 ( see above ) where the higher - order status of a function depends on the higher - order status of its argument types , and the non - determinism analysis of where non - determinism effects are analyzed based on groundness information . in order to integrate some implemented analysis in cass, one has to register it . in principle , this could be done dynamically , but currently only a static registration is supported . 
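Before turning to the registration of analyses, here is a minimal sketch of how the pattern-completeness check isPatComplete, whose body is not shown above, could look. It is not the actual CASS implementation: it assumes the usual representation of patterns (Pattern c vs), ignores literal patterns, and regards externally defined operations as complete. The idea is that every case expression in the body must cover each branch constructor together with all of its siblings.

  isPatComplete :: ProgInfo [QName] -> FuncDecl -> Bool
  isPatComplete _           (Func _ _ _ _ (External _)) = True
  isPatComplete siblinginfo (Func _ _ _ _ (Rule _ e))   = complete e
   where
    complete ex = case ex of
      Var _        -> True
      Lit _        -> True
      Comb _ _ es  -> all complete es
      Case _ ce bs -> complete ce && all (complete . snd) bs && coversAll bs
      Or e1 e2     -> complete e1 && complete e2
      Free _ e1    -> complete e1

    -- every sibling of each branch constructor must itself occur in a branch
    coversAll bs =
      let cs = [ c | (Pattern c _, _) <- bs ]
      in all (\c -> all (`elem` cs) (maybe [] id (lookupProgInfo c siblinginfo))) cs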
for the registration , the implementation of cass contains a constant registeredanalysis : : [ registeredanalysis ] keeping the information about all available analyses . to register a new analysis, it has to be added to this list of registered analyses and cass has to be recompiled .each registered analysis must provide a `` show '' function to map abstract values into strings to be shown to the user .this allows for some flexibility in the presentation of the analysis information .for instance , showing the results of the totally definedness analysis can be implemented as follows : showtotal : : bool - > string showtotal true = `` totally defined '' showtotal false = `` possibly partially defined '' an analysis can be registered with the auxiliary operation cassanalysis : : analysis a - > ( a - > string ) - > registeredanalysis for instance , we can register our analyses presented in this section by the definition registeredanalysis = [ cassanalysis overlapanalysis showoverlap , cassanalysis nondetanalysis showdet , cassanalysis siblingcons showsibling , cassanalysis patcompanalysis showcomplete , cassanalysis totalanalysis showtotal ] in the cass implementation . after compiling cass ,they are immediately available as shown in the next section .as mentioned above , a program analysis is useful for various purposes , e.g. , the implementation and transformation of programs , tool and documentation support for programmers , etc .therefore , the results computed by some analysis registered in cass can be accessed in various ways .currently , there are three methods for this purpose : batch mode : : : cass is started with a module and analysis name .then this analysis is applied to the module and the results are printed ( using the analysis - specific show function , see above ) . api mode : : : if the analysis information should be used in an application implemented in curry , the application program could use the cass interface operations to start an analysis and use the computed results for further processing .server mode : : : if the analysis results should be used in an application implemented in some language that does not have a direct interface to curry , one can start cass in a server mode . in this case , one can connect to cass via some socket using a communication protocol that is specified in the documentation of cass .figure [ fig : cass - usage ] shows some uses of cass which are discussed in the following .the use of cass in batch mode is obvious .this mode is useful to get a quick access to analysis information so that one can experiment with different abstractions , fixpoint computations , etc .if one wants to access cass inside an application implemented in curry , one can use some interface operation of cass .for instance , cass provides an operation analyzegeneric : : analysis a - > string - > io ( either ( proginfo a ) string ) to apply an analysis ( first argument ) to some module ( whose name is given in the second argument ) .the result is either the analysis information computed for this module or an error message in case of some execution error .this access to cass is used in the documentation generator currydoc to describe some operational aspects of functions ( e.g. 
, pattern completeness , non - determinism , solution completeness ) , the curry compiler kics2 to get information about the determinism and higher - order status of functions , and the non - determinism optimizer described in to obtain information about demanded arguments and non - deterministic functions .furthermore , there is also a similar operation analyzemodule : : string - > string - > io ( either ( proginfo string ) string ) which takes an analysis name and a module name as arguments and yields the textual representation of the computed analysis results .this is used in the currybrowser which allows the user to browse through the modules of a curry application and apply and visualize various analyses for each module or function . beyond some specific analyses like dependency graphs , all function analyses registered in cass are automatically available in the currybrowser .the server mode of cass is used in a recently developed eclipse plug - in for curry which also supports the visualization of analysis results inside eclipse . since this plug - inis implemented in a java - based framework , the access to cass is implemented via a textual protocol over a socket connection .this protocol has a command to query the names of all available analyses .this command is used to initialize the analysis selection menus in the eclipse plug - in . furthermore , there are commands to analyze a complete module or individual entities inside a module .the analysis results are returned as plain strings or in xml format .currently , we are working on more options to visualize analysis information in the eclipse plug - in rather than strings , e.g. , term or graph visualizations .as mentioned above , cass is implemented in curry using the features for meta - programming as sketched in section [ sec : flatcurry ] .since the analysis programmer only provides operations to analyze a function , type , or data constructor , as shown in section [ sec : anaimpl ] , the main task of cass is to supply these operations with the appropriate parameters in order to compute the analysis results .cass is intended to analyze larger applications consisting of many modules .thus , a simple implementation by concatenating all modules into one large program to be analyzed would not be efficient enough .hence , cass performs a separate analysis of each module by the following steps : 1 .the imported modules are analyzed .2 . the analysis information of the interface of the imported modules are loaded .3 . the module is analyzed .if the analysis is a dependency analysis , they are evaluated by a fixpoint computation where the specified start value is used as initial values for the locally defined ( i.e. , non - imported ) entities .obviously , this scheme can be simplified in case of a simple analysis without dependencies , since such an analysis does not require the imported entities . 
for a combined analysis ,the base analysis is performed before the main analysis is executed .it should be noted that the separate analysis of each module allows only a bottom - up but not a top - down analysis starting with the initial goal .a bottom - up analysis is sufficient for interactive systems where the initial goal is not known at analysis time .nevertheless , it is sometimes possible to express `` top - down oriented '' analyses , like a groundness analysis , in a bottom - up manner by choosing appropriate abstract domains , as shown in where a type and effect system is used to analyze groundness and non - determinism information . in order to speed up the complete analysis process, cass implements a couple of improvements to this general analysis process sketched above .first , the analysis information for each module is persistently stored .hence , before a module is analyzed , it is checked whether there already exists a storage with the analysis information of this module and whether the time stamp of this information is newer than the source program with all its direct or indirect imports . if the storage is found and is still valid , the stored information is used .otherwise , the information is computed as described above and then persistently stored .this has the advantage that , if only the main module has changed and needs to be re - analyzed , the analysis time of a large application is still small . to exploit multi - core or distributed execution environments ,the implementation of cass is designed as a master / worker architecture where a master process coordinates all analysis activities and each worker is responsible to analyze a single module .thus , when cass is requested to analyze some module , the master process computes all import dependencies together with a topological order of all dependencies .therefore , the standard prelude module ( without import dependencies ) is the first module to be analyzed and the main module is the last one .then the master process iterates on the following steps until all modules are analyzed : * if there is a free worker and all imports of the first module are already analyzed , pass the first module to the free worker and delete it from the list of modules . *if the first module contains imports that are not yet analyzed , wait for the termination of an analysis task of a worker .* if a worker has finished the analysis of a module , mark all occurrences of this module as `` analyzed . 
'' since contemporary curry implementations do not support thread creation , the workers are implemented as processes that are started at the beginning and terminated at the end of the entire execution .the number of workers can be defined by some system parameter .the current distribution of cass contains fourteen program analyses , including the analyses discussed in section [ sec : anaimpl ] .further analyses include a `` solution completeness '' analysis ( which checks whether a function might suspend due to residuation ) , a `` right - linearity '' analysis ( used to improve the implementation of functional patterns ) , an analysis of demanded arguments ( used to optimize non - deterministic computations ) , or a combined groundness / non - determinism analysis based on a type and effect system .new kinds of analyses can easily be added , since , as shown in section [ sec : anaimpl ] , the infrastructure provided by cass simplifies their definition and integration .we have already discussed some practical applications of cass in section [ sec : usage ] .these applications demonstrate that the current implementation with a module - wise analysis , storing analysis information persistently , and incremental re - analysis is good enough to use cass in practice . in order to get some ideas about the efficiency of the current implementation, we made some benchmarks and report their results in this section .since all analyses contained in cass have been developed and described elsewhere ( see the references above ) , we do not evaluate their precision but only their execution efficiency . cass is intended to analyze larger systems .thus , we omit the data for analyzing single modules but present the analysis times for four different curry applications : the interactive environment ( read / eval / print loop ) of kics2 , the analysis system presented in this paper , the interactive analysis environment currybrowser , and the module database , a web application generated from an entity / relationship model with the web framework spicey . in order to get an impression of the size of each application ,the number of modules ( including imported system modules ) is shown for each application .typically , most modules contain between 100 - 300 lines of code , where the largest one has more than 900 lines of code .table [ table : benchmarks ] contains the elapsed time ( in seconds ) needed to analyze these applications for different numbers of workers .we ran two kinds of fixpoint analysis : an analysis of demanded arguments and a groundness analysis .each analysis has always been started from scratch , i.e. , all persistently stored information were deleted at the beginning , except for the last row which shows the times to re - analyze the application where only the main module has been changed . 
In this case, the actual analysis time is quite small, but most of the total time is spent checking all module dependencies for possible updates. The benchmarks were executed on a Linux machine running Ubuntu 12.04 with an Intel Core i5 (2.53 GHz) processor, where CASS was compiled with KiCS2 (version 0.2.4).

                KiCS2 REPL        CASS              CurryBrowser      Module database
  Modules:
  Analysis:     demand   ground   demand   ground   demand   ground   demand   ground
  1 worker:     8.09     8.25     10.25    10.30    19.53    19.36    27.97    28.15
  2 workers:    5.75     5.82     6.87     7.48     12.33    12.49    18.32    18.56
  4 workers:    5.41     5.47     6.17     6.47     10.20    10.38    16.98    17.15
  re-analyze:   1.40     1.38     1.26     1.26     2.01     1.99     2.34     2.34

The speedup related to the number of workers is not optimal. This might be due to the fact that the dependencies between the modules are complex, so that there are not many opportunities for an independent analysis of modules, i.e., workers might have to wait for the termination of the analysis of modules which are imported by many other modules. Nevertheless, the approach shows that there is a potential to exploit the computing power offered by modern computers. Furthermore, the absolute run times are acceptable. It should also be noted that, during system development, the times are lower due to the persistent storing of analysis results.
to analyze larger applications efficiently, cass performs a modular and incremental analysis where already computed analysis information is persistently stored .thus , cass does not support top - down or goal - oriented analyses but only bottom - up analyses which is acceptable for large applications or interactive systems with unknown initial goals .the implementation of cass supports different modes of use ( batch , api , server ) so that the registered analyses can be accessed by various systems , like compilers , program optimizers , documentation generators , or programming environments .currently , cass produces output in textual form .the support for other kinds of visualizations is a topic for future work .the analysis of programs is an important topic for all kinds of languages so that there is a vast body of literature .most of such works is related to the development and application of various analysis methods ( where some of them related to functional logic programs have already been discussed in this paper ) , but there are less works on the development or implementation of program analyzers .an example of such an approach , that is in some aspects similar to our work , is hoopl .hoopl is a framework for data flow analysis and transformation .as our framework does , hoopl eases the definition of analyses by offering high - level abstractions and releases the user from tasks like writing fixpoint computations .in contrast to our work , hoopl works on a generic representation of data flow graphs , whereas cass performs incremental , module - wise analyses on an already existing representation of functional logic programs .another related system is ciao , a logic programming system with advanced program analysis features to optimize and verify logic programs .cass has similar goals but supports strongly typed analysis constructors to make the analysis construction reliable .there are only a few approaches or tools directly related to the analysis of combined functional logic programs , as already discussed in this paper .the examples in this paper show that this combination is valuable since analysis aspects of pure functional and pure logic languages can be treated in this combined framework , like demand and higher - order aspects from functional programming and groundness and determinism aspects from logic programming . an early system in this direction is cider .cider supports the analysis of single curry modules together with some graphical tracing facilities .a successor of cider is currybrowser , already mentioned above , which supports the analysis and browsing of larger applications .cass can be considered as a more efficient and more general implementation of the analysis component of currybrowser . for future work, we will add further analyses in cass with more advanced abstract domains . since this might lead to analyses with substantial run times, the use of parallel architectures might be more relevant .thus , it would be also interesting to develop advanced methods to analyze module dependencies in order to obtain a better distribution of analysis tasks between the workers .s. antoy and m. hanus .declarative programming with function patterns . in _ proceedings of the international symposium on logic - based program synthesis and transformation ( lopstr05 )_ , pp . 622 .springer lncs 3901 , 2005 .s. antoy and m. hanus . new functional logic design patterns . in _ proc . 
of the 20th international workshop on functional and ( constraint )logic programming ( wflp 2011 ) _ , pp . 1934 .springer lncs 6816 , 2011 .b. brael , m. hanus , b. peemller , and f. reck . : a new compiler from curry to haskell . in _ proc . of the 20th international workshop on functional and ( constraint )logic programming ( wflp 2011 ) _ , pp .springer lncs 6816 , 2011 .p. cousot and r. cousot .abstract interpretation : a unified lattice model for static analysis of programs by construction of approximation of fixpoints . in _ proc . of the 4th acm symposium on principles of programming languages _ , pp . 238252 , 1977 .s. fischer and h. kuchen .systematic generation of glass - box test cases for functional logic programs . in _ proceedings of the 9th acmsigplan international conference on principles and practice of declarative programming ( ppdp07 ) _ , pp .acm press , 2007 .m. hanus .currydoc : a documentation tool for declarative programs . in _ proc .11th international workshop on functional and ( constraint ) logic programming ( wflp 2002 ) _ , pp .research report udmi/18/2002/rr , university of udine , 2002 .m. hanus . improving lazy non - deterministic computations by demand analysis . intechnical communications of the 28th international conference on logic programming _ , volume 17 , pp .130143 . leibniz international proceedings in informatics ( lipics ) , 2012 .m. hanus , s. antoy , b. brael , m. engelke , k. hppner , j. koj , p. niederau , r. sadre , and f. steiner . : the portland aachen kiel curry system .available at http://www.informatik.uni-kiel.de/~pakcs/ , 2013 .m. hanus and j. koj .cider : an integrated development environment for curry . in _ proc . of the international workshop on functional and ( constraint )logic programming ( wflp 2001 ) _ , pp .369373 . report no .2017 , university of kiel , 2001 .m. hanus and s. koschnicke .an er - based framework for declarative web programming . in _ proc . of the 12th international symposium on practical aspects of declarative languages ( padl 2010 ) _ , pp .springer lncs 5937 , 2010 .n. ramsey , j. dias , and s. peyton jones .hoopl : a modular , reusable library for dataflow analysis and transformation . in _ proceedings of the 3rd acm sigplan symposium on haskell ( haskell 2010 )_ , pp . 121134 .acm press , 2010 .
|
we present a system , called cass , for the analysis of functional logic programs . the system is generic so that various kinds of analyses ( e.g. , groundness , non - determinism , demanded arguments ) can be easily integrated . in order to analyze larger applications consisting of dozens or hundreds of modules , cass supports a modular and incremental analysis of programs . moreover , it can be used by different programming tools , like documentation generators , analysis environments , program optimizers , as well as eclipse - based development environments . for this purpose , cass can also be invoked as a server system to get a language - independent access to its functionality . cass is completely implemented in the functional logic language curry as a master / worker architecture to exploit parallel or distributed execution environments .
|
as numerical relativity is empowered by ever larger computers , numerical evolutions of black hole data sets are becoming more and more common .the need for such simulations is great , especially as gravitational wave observatories like the ligo / virgo / tama / geo network are gearing up to collect gravitational wave data over the next decade ( see , e.g. , ref . and references therein ) .a recent , thorough , and very detailed study indicates that black hole collisions are considered a most likely source of signals to be detected by these observatories . among the conclusions of this workis that reliable information about the merger process can be crucial not only to the interpretation of such observations , but also could greatly enhance the detection rate .therefore , it is crucial to have a detailed theoretical understanding of the coalescence process , and particularly the merger phase , that can only be achieved through numerical simulation . however , numerical simulations of black holes have proved very difficult , largely because of the problems associated with dealing with singularities present inside . even in axisymmetry , at present it is difficult to evolve black hole systems beyond about , where is the mass of the system . in 3d calculations ,the huge memory requirements make these problems much more severe .the most advanced 3d calculations based on traditional cauchy evolution methods published to date , utilizing massively parallel computers , have difficulty evolving schwarzschild , misner , or distorted schwarzschild beyond about .characteristic evolution methods have been used to evolve distorted black holes in 3d indefinitely , although to date waveforms have not been extracted and verified , and it is not clear whether the technique will be able to handle highly distorted or colliding black holes due to potential trouble with caustics . in spite of these problems ,much physics has been learned , especially in axisymmetry .there , calculations of distorted black holes with and without angular momentum , have been performed .furthermore , calculations of the misner two black hole initial data have been carried out , and the waveforms generated during the collision process have been extensively compared to calculations performed using perturbation theory .one of the important results to emerge from these studies is that the numerical and perturbative results agree very well , giving great confidence in both approaches .in particular , the perturbative approach turned out to work extremely well in some regimes where it was not , _ a priori _ , expected to be accurate .this has led to considerable interest in comparing perturbative calculations to full scale numerical simulations , as a way of not only confirming the validity of each approach , but also as an aid to interpreting the physics of the systems .we expect this rich interplay between perturbation theory and numerical simulation to play an important role in the future of numerical relativity . 
to this end , in the present paper we study a family of distorted axisymmetric and full 3d black hole initial data sets using perturbative techniques .these data sets consist of single black holes that have been distorted by the addition of gravitational waves .they can be considered to represent the final stages of the coalescence of two black holes , just after the horizons have merged .they therefore provide an excellent system to study the late stages of this process without having to model the long and difficult inspiral period . by using perturbative techniques to compute the waveforms expected in the evolution of these data sets, we provide an important testbed for full nonlinear codes that should evolve the same systems . in regimeswhere the distortions are considered moderately small perturbations of the underlying schwarzschild or kerr geometries , full scale numerical calculations should agree well with the type of perturbative treatment presented here .we expect that these results should have many uses in testing numerical relativity codes , and should also be useful in interpreting the physics contained in such simulations . for cases wheredifferent 3d codes and techniques are being developed , these results should help certify and interpret the numerical results .for example , 3d codes are being developed with standard adm and new hyperbolic formulations of the equations , with different slicing conditions .new approaches to black hole evolution that promise to extend dramatically the accuracy and duration of such calculations are under development , such as apparent horizon boundary conditions ( see , e.g. , and references therein ) or very recently characteristic evolution . in all cases , these testbeds should be applicable and should provide important tests of the numerical simulation results .in this section we provide the basic mathematical formalism for evolving distorted black holes as perturbative systems .we want to investigate the possibility of treating the non - spherical multipole moments of a general ( numerical ) spacetime as a linear perturbation about its `` background '' spherical part . roughly speaking, this approach should be a valid way to describe black hole spacetimes whose nonspherical departure from schwarzschild is small . in practice , such an approach has already shown itself to be spectacularly successful in the `` close - limit '' approximation that regards the misner initial data for two colliding black holes as a nonspherical perturbation to schwarzschild . in this paperwe apply similar ideas to the evolution of distorted single black hole spacetimes , providing detailed comparisons with fully nonlinear numerical evolutions in axisymmetry , and providing the framework for comparing fully 3d simulations with perturbative evolutions .this approach is also motivated by earlier work of ref . , where perturbative evolution was used as a check on numerical evolution , but not as a way to evolve cauchy initial data .we assume the spacetime metric to be in some general coordinate system .this metric comes either from an analytic solution to the einstein equations , or a numerical simulation . at this pointwe do not distinguish between these cases . 
in this coordinate systemwe can always write the metric in the form where is the spherically symmetric term , in a decomposition of into tensor spherical harmonics , and contains the higher multipoles which describe the deviations from the spherically symmetric mode .the terms satisfy , to , the linear field equations , linearized about . thus if the terms are small , in the sense that , they can be well approximated by a solution to the linearized equations . our general approachthen , is to take cauchy initial data which we expect to describe a system which is close to spherical symmetry . from this initial data ,we use a decomposition into tensor spherical harmonics to construct a background metric , and perturbations .the perturbations are then evolved using the linear field equations .the methods for extracting and , and for evolving are now discussed in detail .we write the background metric in the form the perturbations , which describe the deviations from spherical symmetry , are expanded using regge - wheeler harmonics as in the above , there are seven even - parity , , , , , , and three odd - parity variables , which are functions only of and . in eqs .( [ eqn : rw1 ] ) to ( [ eqn : rw10 ] ) a summation over the modes , is understood .given this formal expansion of the full metric , using the orthogonality of spherical harmonics we can extract the components of the background metric by appropriate integrations over the 2-sphere ( we note that a similar expression in ref. contained an error ) .the ten variables describing the non - spherical contributions ( for ) can be extracted by similar integrals , for example a complete list of formulae is given in appendix [ app : extraction_formulae ] . in the usual case where the full metric is given numerically , say after solving the cauchy initial value problem of numerical relativity , these integrals over 2-spheres can be computed numerically . in certain cases ,the initial data may be known analytically , in which case the integrals can also be computed analytically . in the sections belowwe shall encounter examples of both cases . in any case , by performing these integrals over a series of 2-spheres of different radii on a given time slice ( say , the initial time slice ) , we obtain the spherical background metric coefficients , , and for each nonspherical the ten metric perturbation functions , e.g. , .these metric perturbation functions , coming directly from the metric , are gauge dependent ; their values will depend on the particular gauge used , say , by the numerical code used to generate them . however , a gauge - invariant perturbation theory of spherical spacetimes has been developed by various authors . as shown originally by abrahams ,such formalisms can be used in numerical relativity calculations to isolate the even- and odd - parity gauge - invariant gravitational wave functions .the basic idea is that while the metric perturbation quantities , such as , will transform under infinitesimal coordinate transformations in the usual way , one can use this information to construct special quantities that are invariant under such gauge transformations . on a schwarzschild backgroundthese gauge - invariant quantities are found to obey the standard regge - wheeler and zerilli equations describing gravitational waves propagating on the spherical black hole background .we will follow this approach below . 
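Written out schematically, and with symbol names ($\hat g_{\mu\nu}$, $h^{(\ell m)}_{\mu\nu}$, $Y_{\ell m}$, and an even-parity amplitude denoted $h_2$) that are standard in this approach but should be read as assumptions about the exact conventions used here, the decomposition and the extraction integrals just described take the form
\[
g_{\mu\nu}(t,r,\theta,\phi)=\hat g_{\mu\nu}(t,r)+\sum_{\ell\ge 1,\;m} h^{(\ell m)}_{\mu\nu}(t,r,\theta,\phi),
\]
where the multipoles $h^{(\ell m)}_{\mu\nu}$ satisfy, up to terms quadratic in the perturbation, the field equations linearized about the spherical part $\hat g_{\mu\nu}$. The background follows from angular averages and the perturbation amplitudes from projections against the (tensor) spherical harmonics, e.g.
\[
\hat g_{rr}(t,r)=\frac{1}{4\pi}\oint g_{rr}\,d\Omega,
\qquad
h_2^{(\ell m)}(t,r)\;\propto\;\oint\Big(\frac{g_{rr}}{\hat g_{rr}}-1\Big)\,Y^{*}_{\ell m}\,d\Omega,
\]
with the proportionality fixed by the normalization of the Regge-Wheeler harmonics referred to above.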
to compute these gauge - invariant perturbations on the general , _ time - dependent _ background , we could follow the work of refs . and construct gauge - invariant functions from the extracted multipole moments , and evolve these functions using the gauge - invariant linearized equations . in this way we could treat data which is given in a general , and even time dependent , spherical coordinate system (although we note that in the present formalism the shift terms must be treated as perturbations , i.e. , they must be formally ) . here, we take a simpler approach , which has so far been suitable for our needs .we assume that the cauchy initial data for is , to , given on a hypersurface of constant schwarzschild time .we then construct gauge invariant variables using moncrief s prescription , which can then be easily evolved using the zerilli or regge - wheeler wave equations .we first transform to the areal radial coordinate , using and then note that can be easily calculated using then , we calculate all the required multipole moments of this transformed metric , integrating the metric over 2-spheres as described above , using formulae detailed in the appendix [ app : extraction_formulae ] .this provides the ten metric perturbation functions as described above . with this information , following moncriefwe construct an odd - parity function , and an even - parity function , which are invariant under first order coordinate transformations .these are defined by \frac{s}{r } \\ \text{and } \nonumber\\ q^{+}_{lm } & = & \frac{1}{\lambda}\sqrt{\frac{2(l-1)(l+2)}{l(l+1 ) } } \left(l(l+1)s ( r^2\partial_r g^{(\ell m)}-2h_1^{(\ell m ) } ) \nonumber \right . \\&&+\left .2r s(h_2^{(\ell m)}-r\partial_r k^{(\ell m)})+\lambda r k^{(\ell m)}\right).\end{aligned}\ ] ] here we also note the definition where is the schwarzschild mass of the background .we approximate this mass by examining the function , given by in practice , in the spacetimes we are considering here , the function is quite constant in , and so there is no practical ambiguity in computing the mass . alternatively , for very small perturbations , one could consider simply using the adm mass of the spacetime for , as is often done , but this will include contributions from the waves in the spacetime . hence the mass defined by eq .( [ eq : mass ] ) provides a slight improvement in the treatment . while the individual metric quantities will transform in the usual way by a linear gauge transformation , the functions and are invariant , and thus are more directly connected to the physics of the system .we note that we have introduced a slight inconsistency in our construction of the gauge - invariant quantities and .we have defined the regge - wheeler perturbation functions ( e.g. ) in a general way , on a general time - dependent background metric , but in computing the gauge - invariant quantities we have simply assumed the background to be in schwarzschild coordinates .more complex expressions for the gauge - invariants on the more general background could be used , but in the present applications this has not proved to be necessary . 
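For reference, the even-parity gauge-invariant function can be written out explicitly. The auxiliary quantities $S$ and $\Lambda$ are not defined above; the expressions below use the definitions standard in this literature (Moncrief; Abrahams and collaborators), which should be read as assumptions about the conventions adopted here:
\[
S\equiv 1-\frac{2M}{r},\qquad \Lambda\equiv\frac{(\ell-1)(\ell+2)}{2}+\frac{3M}{r},
\]
\[
Q^{+}_{\ell m}=\frac{1}{\Lambda}\sqrt{\frac{2(\ell-1)(\ell+2)}{\ell(\ell+1)}}
\Big[\ell(\ell+1)\,S\big(r^{2}\partial_{r}G^{(\ell m)}-2h_{1}^{(\ell m)}\big)
+2rS\big(h_{2}^{(\ell m)}-r\,\partial_{r}K^{(\ell m)}\big)+\Lambda\,r\,K^{(\ell m)}\Big],
\]
while the corresponding odd-parity function is built analogously from the odd-parity amplitudes and carries an overall factor of $S/r$. Both functions obey wave equations on the Schwarzschild background of the form
\[
\frac{\partial^{2}Q}{\partial t^{2}}-\frac{\partial^{2}Q}{\partial r_{*}^{2}}+V^{\pm}_{\ell}(r)\,Q=0,
\qquad r_{*}=r+2M\ln\!\Big(\frac{r}{2M}-1\Big),
\]
with the standard Regge-Wheeler (odd-parity) and Zerilli (even-parity) potentials
\[
V^{-}_{\ell}=\Big(1-\frac{2M}{r}\Big)\Big[\frac{\ell(\ell+1)}{r^{2}}-\frac{6M}{r^{3}}\Big],
\qquad
V^{+}_{\ell}=\Big(1-\frac{2M}{r}\Big)
\frac{2n^{2}(n+1)r^{3}+6n^{2}Mr^{2}+18nM^{2}r+18M^{3}}{r^{3}(nr+3M)^{2}},
\quad n=\frac{(\ell-1)(\ell+2)}{2}.
\]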
at this stage we hint at a complication in this procedure which will we return to in more detail below .the numerical prescription outlined above , using integrals over 2spheres to pick off the different wave modes in a distorted black hole spacetime , simply lumps `` everything not spherical '' into the wavefunctions .it does not discriminate between contributions at various orders in the perturbation expansion , but rather it lumps them altogether .one must be careful when using the standard perturbation equations , as described below , which are derived from a formal theory keeping only terms at linear order .in particular , we will encounter cases where some perturbation modes appear _ only _ at higher order . having taken a general distorted black hole metric ,we have detailed a technique to compute the gauge - invariant functions describing the waves on a schwarzschild background .we now turn to the linearized evolution of these quantities .these gauge invariant functions obey the wave equations where the regge - wheeler potential is given by \ ] ] and the zerilli potential is \end{aligned}\ ] ] and finally and . in general , the first time derivatives of and also be calculated to provide cauchy data for the evolution . in this paper only timesymmetric initial data has been used , for which but it is no problem to provide time derivatives through analysis of the extrinsic curvature variables given as initial data in the more general non - time symmetric case . to summarize the development of this section , we have detailed the technique we use to isolate the even- and odd - parity gauge - invariant perturbation functions and from a numerically generated metric , under the assumption that the waves are linear perturbations on a spherical background metric .this information can be used in two ways : ( a ) it can be used to extract waveforms from a numerical simulation at a finite radius , and ( b ) it can be used to provide initial data for the linearized evolution equations given originally by regge - wheeler ( odd - parity ) and zerilli ( even - parity ) .these evolution equations can be used to compare results obtained with the full nonlinear evolution of black hole spacetimes .in this section we take the somewhat abstract discussion of the previous section and show how one applies it in practice to an actual family of numerically generated distorted black hole data sets .the method is tested on the so - called black hole plus brill wave spacetimes , which represent a black hole distorted by a gravitational wave .the deviation from a schwarzschild spacetime can be parameterized by a dimensionless parameter , corresponding to the amplitude of the wave . in the sections below we describe these initial data sets ,describe tests we can perform on the procedures used to obtain initial data for the perturbation equations discussed above , and in section [ sec : evol ] we compare linear and nonlinear numerical evolutions . in this sectionwe review the single distorted black hole initial data sets that we evolve in this paper .these black holes are distorted by the presence of an adjustable torus of nonlinear gravitational waves .the amplitude and shape of the initial wave can be specified by hand , as described below , and a range of initial data can be created , from slightly perturbed to very highly distorted black holes .such initial data sets , and their evolutions in axisymmetry , have been studied extensively , as described in refs. 
.three dimensional versions have been developed and studied recently in refs. .following , we write the 3metric in the form originally used by brill : where is a radial coordinate defined by , where and are the mass and standard schwarzschild isotropic radial coordinate of the black hole in the schwarzschild limit , described below . in this paper , we choose our initial slice to be time symmetric , so that the extrinsic curvature vanishes , although more general datasets can also be considered .given a choice for the `` brill wave '' function , the hamiltonian constraint leads to an elliptic equation for the conformal factor .the function represents the gravitational wave surrounding the black hole , and is chosen to be thus , an initial data set is characterized by the parameters , where , roughly speaking , is the amplitude of the brill wave , is its radial location , its width , and and control its angular structure .the parameters , which must be a positive even integer , and dictate the angular ( ) structure of the brill wave . in particular , if , the resulting spacetime is axisymmetric .a study of full 3d initial data and their evolutions are discussed elsewhere . if the amplitude vanishes , the undistorted schwarzschild solution results , leading to note that the initial data defined by eq .( [ eq : metric ] ) and ( [ eq : q2d ] ) have octant symmetry ( i.e. , equatorial plane symmetry and discrete symmetry in the four quadrants around the ) .together with the time - symmetry condition , this implies that and is real , and only non - zero for even and .therefore in the tests performed in this paper , only even - parity modes with even numbers are present , and in all analysis that follows linear evolutions will be carried out with the zerilli evolution equation. however , the techniques presented are quite general , and can be used on a wider class of initial data sets than considered here , containing the full range of modes . all the data sets considered now , and in the following sections have , and , that is , loosely speaking , the brill waves initially have unit width , and are centered on the throat of an schwarzschild black hole . the elliptic equation to be solved for must in general be solved numerically , as will normally be the case .once this is done , it can be evolved with a fully nonlinear numerical relativity code , and it can be also be used to compute initial data for the perturbation equations as described above . however , in this case we can also write down a solution in terms of special functions which analytically solves the hamiltonian constraint to linear order in the expansion parameter .this will provide a useful check on our procedures to compute linear initial data for the perturbation equations from the full nonlinear , numerically generated data .we give these solutions in schematic form below , and in more detail in appendix [ app : linsol ] .the conformal factor is expanded into spherical harmonics , such that and the hamiltonian constraint is solved to linear order in the expansion parameter .depending on the parameters considered in eq .( [ eq : q2d ] ) , the resulting linear conformal factor has a different angular structure .three cases are considered in the following : 1 . , : the only non - zero coefficients are , and . , : in this axisymmetric case , the only non - zero coefficients in the expansion are and .the term that one might expect is _missing_. 
note that this implies that at linear order the zerilli function vanishes , although it will appear at order . , : in this non - axisymmetric case , the coefficients , , , , , and , are all non - zero . in this casethe contribution does not exist , and hence the zerilli function will vanish at linear order . with these fully nonlinear numerical solutions and perturbative expansionswe are in a position to extract and analyze initial data for the zerilli evolution equation ( and for the full 2d nonlinear evolution code ) , which is the topic of the next section . in this sectionwe discuss specific examples of the extraction procedure applied to the axisymmetric black hole initial data sets discussed above . herewe focus on the extraction of initial data itself , not on the evolution .we use the exact analytic solution to the perturbative initial data equations as an aid to evaluating and understanding the numerically extracted initial data from the full nonlinear solution to the hamiltonian constraint .there are two main variations on the extraction of initial data that we compare and contrast in this section : \(a ) we compute the gauge - invariant functions numerically , by computing numerical integrals over 2spheres according to the procedure outlined above and in appendix [ app : extraction_formulae ] .\(b ) we can use the exact perturbative solution to calculate _ analytically _ the gauge - invariant variable denoted by , which we obtain by linearizing about the spherically symmetric background , and then constructing the variables as if the background was schwarzschild .this follows the spirit of the general numerical procedure described above , and is the solution to which we expect to converge with the complete numerical procedure .this exact solution is too lengthy to be reproduced here , but can be obtained straightforwardly with a computer algebra package .we note that we can also use the exact solution to the perturbation expansion to calculate a gauge - invariant variable , which results from linearizing the spacetime about the background mass schwarzschild spacetime .this is not quite the same as the wave obtained by linearizing about the more general spherical background .this is due to two effects .first , the individual metric perturbation functions , such as defined via eq .( [ eq : h2 ] ) above , are computed on a more different spherical background than schwarzschild second , the brill wave introduces the presence of a non - zero coefficients in the expansion ( [ eqn : expansion ] ) , which is reflected in the definition of the mass defined in eq .( [ eq : mass ] ) .the difference between the two waveforms is .the general method , and code , were tested using the exact linear solutions for initial data using the conformal factor described above . using these exact linear solutions , the 3-metric components ( [ eq : metric ] )were constructed on a 2d grid in coordinates ( keeping only those terms linear in the brill wave parameter ) .the waveforms , and were numerically extracted , as described in sec .[ sec : extract ] , and compared to the analytically computed functions . 
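The numerical part of this test requires evaluating the projection integrals on the finite $(r,\theta)$ grid (and additionally in $\phi$ for the 3D data discussed later). One standard possibility, stated here only as an illustration since the specific quadrature rule is not spelled out, is a simple product rule over the angular grid,
\[
\oint f\,Y^{*}_{\ell m}\,d\Omega\;\approx\;\sum_{j,k} f(\theta_{j},\phi_{k})\,Y^{*}_{\ell m}(\theta_{j},\phi_{k})\,\sin\theta_{j}\,\Delta\theta\,\Delta\phi,
\]
whose discretization error is controlled by the angular grid spacings; in axisymmetry the sum over $\phi_{k}$ is trivial.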
as expected from the numerical methods used , the numerically extracted waveforms converged , as , to , for each -mode considered .this provides an excellent test of the accuracy of the integration routines on the 2sphere required to compute the gauge - invariant wavefunctions , and of the rather complex expressions involving the various tensor - spherical harmonics for different -modes that must be coded . as an example , fig .[ extract1 ] shows the numerically extracted and exact waves for the initial data sets ( a ) and ( b ) , plotted as a function of the logarithmic radius .the numerically and analytically computed wave , calculated from the perturbative analytic solution to the hamiltonian constraint , are shown as the dotted and dashed lines in the figure .on this scale they are almost indistinguishable , demonstrating the high accuracy of the extraction procedure . for comparison, we also show the numerically extracted wave , discussed fully in the next section , computed from the full nonlinear solution to the hamiltonian constraint , as a solid line .this indicates the difference between the linear and nonlinearly computed initial data . in fig .[ extract1]a , we show the modes . for this initial data set with ,the modes are present at linear order , while the mode is not .the numerically computed mode from the linear solution should vanish , and does so at second order in .correspondingly , for the linear initial data set , only the mode should be present , and this is seen in fig .[ extract1]b , which shows the numerically extracted mode to be indistinguishable from the exact result , and the mode to be very close to zero .now we use the procedure detailed in sec .[ sec : method ] to extract waveforms from ( 2d numerical ) initial data generated by solving the _ nonlinear _ constraints , for the same brill wave parameters as above in sec .[ subsubsec : test ] . in fig .[ extract1 ] the initial waveform obtained is compared with the previous waveforms from the exact linear initial data .the waveforms match very closely , with the greatest differences near the black hole throat .this area , with the greatest difference between the waveforms from the linear and nonlinear initial data , is also inside the maximum of the background black hole potential , which is located ( for ) at , and most of this part of the waveform will be radiated across the horizon to disappear in the black hole . 
as the brill wave parameter increased , the higher order terms begin to have more influence , and the waveform obtained from the nonlinear initial data deviates more from that from the linear initial data .this is demonstrated in figs .[ extract2 ] and [ extract3 ] , in which as extracted from nonlinear data is shown for a range of brill wave amplitudes , .the figures show that in the region close to the black hole throat ( at ) the amplitude of does not increase linearly with , as expected .however , at the maximum of the black hole potential , ( for and ) , scales nearly linearly with .we will see in the next section that even in cases such as these where the linear study deviates significantly from the nonlinear extraction , the linear evolution can still do a reasonably good job in predicting the results of the full nonlinear evolution far from the black hole .the key reason for this seems to be that the nonlinearities of these particular datasets are largest well inside the peak of the potential barrier , and hence they are kept inside and propagate down the black hole .this point will be discussed further in the next section . in this sectionwe have demonstrated that our extraction method is accurate by comparisons with known linear solutions , and shown that the nonlinear initial data sets , parameterized by the brill wave parameter , yield initial waveforms which agree closely with the known linear solution for small . in the next section we turn to evolutions of these numerically extracted linear initial data sets .the waveforms , extracted from data on the initial hypersurface , can be numerically evolved using the zerilli equation ( [ eqn : rwwave ] ) .we stress that in this case , we compute from the _ nonlinear _ computed initial data , _i.e. _ from the full solution to the hamiltonian constraint . in the absence of an exact solution for the perturbative initial data expansion ,as will normally be the case , this will be the only way to obtain initial data for the linearized evolution equations .for the cases considered in this paper , only time symmetric initial data was considered , with on the initial hypersurface . for non - time symmetric data, can be calculated using the numerical extrinsic curvature , using a procedure similar to that for extracting from . in the following sections , we compare these 1d linear evolutions , with the results from 2d fully nonlinear evolutions of the original initial data sets . the data is compared by constructing , in both simulations , at the same schwarzschild radius . in the nonlinear code this requires the same extraction procedure that was used to find the linear initial data , to be applied at the chosen schwarzschild radius throughout the evolution . in all the examples shown here ,the radiation waveforms were extracted at a schwarzschild radius . in principleone must be careful in comparing waveforms measured with different time coordinates . 
since the nonlinear evolution implements a general slicing condition ( in these examples an algebraic slicing ) , and not the schwarzschild slicing used in the linear evolution , it could happen that corrections would be needed to account for the differences in slicing .however , in our simulations the slicings are similar enough in the regions where the waves are computed that this is only a very small effect , as is borne out by the figures .for all the results shown here , both for the linear and nonlinear evolutions , the accuracy of the results were carefully studied , to be sure that any differences between the nonlinear and linear waveforms is due only to the different initial data or the mode of evolution .this is particularly important to ensure that the observed modes are not due to insufficient resolution or boundary effects .first , we discuss the evolutions corresponding to the axisymmetric initial data sets specified by , , for a range of brill wave amplitudes . for the exact linear initial data described in sec .[ sec : initialdata ] , it was seen that and occurred to linear order in the brill wave amplitude , with all other modes occurring at higher order .this linear scaling of and was also seen in fig .[ extract2 ] , in the initial data used for these linear evolutions .[ n4_compare ] compares the radiation waveforms at from the linear evolution of the zerilli equation and from the nonlinear evolution of the full einstein field equations .it is clearly seen that for the lowest amplitude brill waves , the waveforms from linear and nonlinear evolutions match very closely .as the amplitude increases , the deviations become more apparent . in fig .[ lowhigh_n4 ] we isolate the cases and for closer study , and compare the relative amplitudes of the and waveforms .this clearly shows the high level of agreement for the lowest amplitude , shown in fig .[ lowhigh_n4 ] .note that the scales have been chosen in the two graphs , such that if the waves were scaled linearly with , they would be the same size in both ( a ) and ( b ) .although we have _ evolved _ the same ( non - linear ) initial data in both cases , nonlinearities are clearly coming in , even in the initial data .the second group of initial data sets were created using and , for the same range of brill wave amplitudes as in the previous section . for these parameters , the exact linearinitial data sets of sec .[ sec : initialdata ] contained only the mode to linear order in , with all other modes occurring at second order or higher in .this behavior is also seen in fig .[ extract3 ] , in the initial data used for these linear evolutions .[ n2_compare ] compares the radiation waveforms at from the linear evolution of the zerilli equation and from the nonlinear evolution of the full einstein field equations . 
for these initial data , the waveforms shown in fig .[ n2_compare]a again match well for low amplitudes , with deviations growing for the higher amplitudes .the waveforms , shown in fig .[ n2_compare]b are of much lower amplitude than the waveforms .as is apparent in the figure , these waveforms from the linear and nonlinear evolutions do not match well , in amplitude or phase for any of the brill wave parameters used .however , we should not expect them to .we have already seen in sec .[ sec : initialdata ] , in these , initial data sets , there is _ no _ content at linear order in the expansion parameter .the expansion of the initial data in powers of the amplitude is directly related to the formal expansion used in the derivation of the zerilli evolution equation .the equation is valid only for evolving initial data of order , but not order , since all such terms were dropped in the derivation . in the case in point ,the data are of order and hence this data will not be accurately evolved with the linear evolution equation .basically this means that the piece , being of order , is _ too small _ to be treated by linear theory , and only the nonlinear code can pick this up . as shown by gleiser _ , the zerilli equation can be generalized to higher order , in which case it has the same form as for the linear case , but now with nonlinear source terms that mix in different multipoles . in this casewe could in principle evolve the data with such an equation , taking into account quadratic terms in the linear ( and other ) modes . in practice however , this would not be possible with the extraction method as used here , since errors of order are already introduced .in fact , it is worth stressing that the standard linear extraction process used to obtain the waveforms for the nonlinear code has been applied even for these nonlinear modes .the implications of such higher order effects on waveform extraction should be carefully considered in future investigations .the main point to be made here is that linear theory does not , and should not agree with the nonlinear code in this case , and one must be careful in applying linear theory as a testbed . on the other hand , these techniques open the door to careful and systematic studies of nonlinear physics of black holes , such as mode mixing , that could be pursued in the future . fig .[ lowhigh_n2 ] shows the and waveforms on the same graph , for the low amplitude and higher amplitude cases .[ lowhigh_n2]a shows results from the initial data , showing very good agreement between the linear and nonlinear waveforms .the waveforms are too small to be seen on this scale .[ lowhigh_n2]b shows results from the higher amplitude case . herethe discrepancy between the two methods can be seen in both the and waveforms .one might expect , given the rather large discrepancy between linear and nonlinear initial data for the case , that the linear and nonlinear evolved waveforms would be rather more different than they are .however , as previously noted the largest deviations between the linear and nonlinear initial data occurs near the horizon , well inside the peak of the potential barrier .although linear theory breaks down inside the potential barrier , linear theory is still fairly accurate outside the peak where waveforms are measured .this effect has also been seen in studies of black hole collisions ( see , e.g. ) where perturbative treatment was very successful even when one might naively think it would break down . 
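the generalized equation mentioned above is not written out in this text ; schematically , and in our own notation rather than the paper 's , the second - order field obeys the same wave operator as the linear one , driven by a source quadratic in the first - order modes ,

\[
\left( \partial_t^2 - \partial_{r_*}^2 + V_\ell(r_*) \right) \psi^{(2)}_{\ell m}
\;=\; S_{\ell m}\!\left[ \psi^{(1)} \otimes \psi^{(1)} \right] ,
\]

where the source mixes different first - order multipoles . this makes the point above concrete : a mode that first appears at second order in the amplitude is generated by such quadratic sources , so the homogeneous ( linear ) zerilli equation has no way to reproduce it .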
the asymptotic radiated energy flux , for each angular mode , can be defined by which can easily be numerically integrated to give the radiated energy , in each mode .note that this energy formula comes from _ linear _ theory , and hence will not provide an accurate prediction for the energy in the higher order modes . fig .[ energy ] uses log plots to compare the energies in the waveforms from the nonlinear and linear evolutions of the two initial data sets described above . for the initial data sets , shown in fig .[ energy]a , the energy radiated in both and waveforms is similar , for the two types of evolution and for the range of brill wave amplitudes considered .the energy carried by the waveform is some 10 to 20 times smaller than the energy carried by the waveform .the radiated energy scales as , for both modes . in fig .[ energy]b we show the family of initial data sets , and the picture is different , as was already seen in the evolutions .whereas the radiated energy is similar from both methods of evolution for the waveform , the waveform is very different .the waveform resulting from the nonlinear evolution , contains a factor of around 60 times more energy than the corresponding waveform from the linear evolution .the `` radiated energy '' in the waveform scales as , for both methods of evolution .but we emphasize that we have used a linearized energy measure in this case , where nonlinear effects should be accounted for .so far in this paper we have concentrated on axisymmetric initial data sets , in order to aid the understanding and interpretation of the method . in this section ,the same procedure is applied to construct linearized initial data from a non - axisymmetric 3d initial data set .our aim here is to show that the extraction and linear analysis techniques developed here carry over easily to the full 3d case , even though the extraction process is more complicated .the evolutions of these data sets , and comparisons with nonlinear calculations , are left for the next paper in this series .the 3d initial data set studied here again corresponds to a black hole distorted by a gravitational wave , belonging to the same family as described in sec .[ sec : initialdata ] .the non - axisymmetry follows from choosing a non - zero parameter in eq .[ eq : q2d ] .the nonlinear data was created using the parameters , , , and .again , an exact linear solution for the 3-metric was constructed for these parameters , as detailed in appendix [ app : linsol ] .starting with this exact 3-metric on the initial hypersurface , exact expressions for the waveforms were found , using the extraction procedure of sec .[ sec : method ] . using the exact linear solutions ,the 3-metric was calculated numerically , on a 3d polar grid , in coordinates ( keeping only those terms linear in the brill wave parameter ) .waveforms for different and were then numerically extracted and compared to the exact . as before , the numerically calculated waveforms converge to the exact waveforms as as expected from the numerical methods .[ ext3d_1 ] shows the numerically extracted and exact waveforms , for the modes which are present to in the exact solution . 
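returning to the per - mode energy integration described at the beginning of this section : the flux formula itself is elided in the text above , so the sketch below only assumes that it is quadratic in the time derivative of the waveform , with the multipole - dependent normalization left as an explicit placeholder .

```python
# Numerical integration of a per-mode radiated-energy flux of the schematic form
#   dE/dt = N_l * (dpsi/dt)^2 .
# N_l is a PLACEHOLDER for the multipole-dependent normalization, which is not
# reproduced here; psi(t) is the waveform measured at the extraction radius.
import numpy as np

def radiated_energy(t, psi, prefactor=1.0):
    dpsi_dt = np.gradient(psi, t)      # time derivative of the waveform
    flux = prefactor * dpsi_dt**2      # instantaneous flux in this mode
    return np.trapz(flux, t)           # total energy radiated in this mode

# toy check with a damped, ringdown-like signal
t = np.linspace(0.0, 100.0, 4001)
psi = np.exp(-t / 40.0) * np.sin(0.37 * t)
print(radiated_energy(t, psi))
```

as the text stresses , this linearized measure is only indicative when applied to modes that are intrinsically nonlinear .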
in this figure ,the top graph shows the higher amplitude axisymmetric modes , and the bottom graph the lower amplitude non - axisymmetric modes .the two solutions , shown by the dashed and dotted lines , can not be distinguished for this resolution , ( ,, ) .finally , instead of extracting the waveforms from the numerical linear initial data , we extract from 3d numerical data , found by solving the nonlinear constraints for the given brill wave parameters .this ` nonlinear ' initial data is constructed in the coordinate system .the waveforms obtained from this initial data are shown as solid lines in fig .[ ext3d_1 ] .note that there are significant differences between the waveforms extracted from the linear and initial data .however , the largest differences are again inside the maximum of the potential barrier of the background spacetime at for . on the potential barrier the linear and nonlinear waveforms are very similar , with the largest difference occurring for , indicating that in an evolution of this time - symmetric initial data , there should be only small differences outside of this radius .although not shown here , the difference between the initial waveforms from the linear and nonlinear data decreases as the brill wave parameter is reduced .[ ext3d_1b ] shows the initial waveforms which are present in the extraction from nonlinear data , but are not found in the linear initial data , where they occur at or higher . as in the axisymmetric cases discussed above, these modes should _ not _ agree with full nonlinear evolutions .building on previous work ( e.g. , ) we have further developed the use of perturbation theory as a tool for numerical relativity .we presented techniques for studying single , distorted black holes without net angular momentum , using the perturbative regge - wheeler and zerilli equations .these techniques include extracting initial data , decomposed into different modes , from a numerically computed black hole initial data set , and evolving these modes forward in time with the perturbation equations .these results can then be used both to perform a powerful check on the accuracy of fully nonlinear evolution codes , from which waveforms can be extracted , and to help untangle linear and nonlinear effects , such as mode - mixing , of the full evolution .we applied this technique to a series of distorted black holes , using the `` brill wave plus black hole '' family of initial data sets .these data sets have adjustable parameters for the shape and amplitude of the initial distortions , and have been well studied previously and found to mimic the behavior of two black holes that have just collided head - on .they have a further nice property that an analytic solution to the initial value problem is possible if the brill wave amplitude is small .this information was used both to test the accuracy of the perturbative initial data extraction techniques , and to understand the spectral decomposition of both the linear and _ nonlinear _ contributions to the initial data .we then performed a series of axisymmetric , fully nonlinear supercomputer evolutions of the full initial data sets , and compared gravitational radiation waveforms extracted from these simulations to results obtained via our perturbative techniques using the zerilli equation .we find that for initial data sets containing modes that can be shown to be _ linear _ in the wave amplitude parameter , the extracted waveforms from the full nonlinear evolutions show excellent agreement with the 
perturbatively evolved waveforms , confirming both the very high degree of accuracy that can be achieved with axisymmetric codes and the linear perturbation code and approach to the problem .however , we also find that some modes computed with the nonlinear simulation code do _ not _ agree with the linearized treatment ; in some cases the linear evolution can give very different results from the full nonlinear evolution for specific modes , even when other modes agree well .but in all such cases , one can see from analytic study of the initial data that it is precisely these `` renegade '' modes that show up at a _ nonlinear _ order in the wave amplitude parameter .their perturbative evolution equations should then be extended to second order , with nonlinear source terms ( composed of linear modes ) coming in . in effect , such modes are _ too small _ to be treated by linear theory , since nonlinear contributions from other modes are relatively large enough not to be neglected .hence one can see clearly the effects of mode - mode coupling in these cases .the combination of linearized treatment and perturbative treatment was essential not only to confirm the nonlinear code , but also to interpret the complex nature of the numerical results .furthermore , we pointed out that the use of linear waveform extraction techniques should be studied further for systems that may contain modes only at higher , nonlinear order . another important pointto mention is the role that the potential barrier plays in the evolution of these distorted black holes .the application of perturbation theory to the evolution of these distorted black holes often works well , even in cases where one can see large deviation between linear and nonlinear treatment in the initial data .but in such cases , the error made by using a linear approximation is largest _ inside _ the peak of the potential barrier , and hence most of this nonlinearity is swallowed by the black hole .hence , as has also been stressed in previous work , the potential barrier acts to trap much of the deviation from linear behavior , extending the range of applicability of perturbative treatment over what one might naively expect .finally , we also applied this technique to study a new class of 3d distorted black hole initial data sets , and we showed that the initial data can be extracted accurately for study in 3d , just as in axisymmetry .the comparison of evolutions of such 3d distorted black hole initial data between full nonlinear numerical relativity and perturbation theory will be the subject of the next paper in this series . in this paperwe have only considered examples of time symmetric , even - parity perturbations of non - rotating black holes .the technique is much more general , and also applies to all even- and odd - parity modes , and to non - time symmetric initial data , which will be considered in future papers in this series .the data sets developed in ref. also contain distorted black hole initial data with angular momentum .the rotating case is considerably more complicated , and naturally involves using the teukolsky equation to evolve perturbations of the kerr metric .this important followup step will be considered in a future paper .this work has been supported by aei , ncsa , and the binary black hole grand challenge alliance , nsf phy / asc 9318152 ( arpa supplemented ) . we thank s. brandt for help with the analytic solutions for the perturbative initial data , and h. beyer , c. gundlach and k. 
kokkotas for helpful discussions .e.s . would like to thank j. mass and c. bona for hospitality and discussions at the university of the balearic islands where part of this work was carried out .calculations were performed at aei and ncsa on an sgi / cray origin 2000 supercomputer .here we list the general formulae for extracting the regge - wheeler variables from the metric using the the spherical background , this appendix we derive an exact linear solution for the perturbed black hole initial data sets used in this paper . expanding the conformal factor in terms of the brill wave amplitude , we write the zeroth order hamiltonian constraint is then with the spherical schwarzschild solution keeping terms linear in , the first order hamiltonian constraint is when is expanded in spherical harmonics , equation [ 1st_order_constraint ] reduces to a series of odes , which can be solved for a particular case of the brill wave function , .the solution for two of the brill wave functions considered in this paper are , 1 .+ the only nonzero coefficients in the expansion [ 1st_order_sum ] are and which have the values \\\tilde{\psi}^{(1)}_{20 } & = & \frac{2\sqrt{10m\pi}}{15 } e^{-\eta^2 } -\frac{2\sqrt{5}\pi}{15}e^{9/4 } \cosh(5\eta/2)+\sqrt{5\pi}{15}e^{9/4}\left[\mbox{erf}\left(-\eta+\frac{3}{2 } \right)e^{-5\eta/2}+\mbox{erf}\left(\eta+\frac{3}{2}\right)e^{5\eta/2 } \right]\end{aligned}\ ] ] 2 . + for this case , which includes a non - axisymmetric contribution through the parameter , there are several more modes in the expansion .the non - zero coefficients in [ 1st_order_sum ] are now , , , , . + \\ \tilde{\psi}^{(1)}_{20 } & = & \frac{1}{105 } \sqrt{\frac{2 \pi m}{5 } } \left(2+c \right ) \left [ 40 e^{-\eta^2 } \cosh\left(\frac{\eta}{2}\right ) + 6\sqrt{\pi } e \cosh\left(\frac{5\eta}{2}\right ) - 14 \sqrt{\pi } e^{9/4 } \cosh\left ( \frac{5\eta}{2}\right ) \right .\nonumber \\ & & \left .+ 3 \sqrt{\pi } e^{1 - 5\eta/2 } \erf\left(\eta-1\right ) - 3\sqrt{\pi } e^{1 + 5\eta/2 } \erf\left(\eta+1\right ) - 7 \sqrt{\pi } e^{9/4- 5\eta/2 } \erf\left(\eta-\frac{3}{2}\right ) + 7 \sqrt{\pi } e^{9/4 + 5\eta/2 } \erf\left(\eta+\frac{3}{2}\right ) \right ] \\\tilde{\psi}^{(1)}_{22 } & = & \frac{1}{70 } \sqrt{\frac{\pi m}{15 } } c \left [ -60 e^{-\eta^2 } \cosh\left(\frac{\eta}{2}\right ) - 44 \sqrt{\pi } e \cosh\left ( \frac{5\eta}{2}\right ) - 14 \sqrt{\pi } e^{9/4 } \cosh\left(\frac{5\eta}{2 } \right ) \right .\nonumber \\ & & \left .- 22\sqrt{\pi } e^{1 - 5\eta/2 } \erf\left(\eta-1\right ) + 22\sqrt{\pi } e^{1 + 5\eta/2 } \erf\left(\eta+1\right ) - 7\sqrt{\pi } e^{9/4 - 5\eta/2 } \erf\left(\eta-\frac{3}{2}\right ) + 7\sqrt{\pi } e^{9/4 + 5\eta/2 } \erf\left(\eta+\frac{3}{2}\right ) \right ] \\\tilde{\psi}^{(1)}_{40 } & = & -\frac{1}{105 } \sqrt{2\pi m } \left(2+c\right ) \left [ 4 e^{-\eta^2 } \cosh\left(\frac{\eta}{2}\right ) - 2\sqrt{\pi } e^{25/4 } \cosh\left(\frac{9\eta}{2}\right ) - \sqrt{\pi } e^{25/4 - 9\eta/2 } \erf\left ( \eta-\frac{5}{2}\right ) \right .\nonumber \\ & & \left .+ \sqrt{\pi } e^{25/4 + 9\eta/2 } \erf\left(\eta+ \frac{5}{2}\right ) \right ] \\\tilde{\psi}^{(1)}_{42 } & = & \frac{1}{42 } \sqrt{\frac{\pi m}{5 } } c \left [ 4 e^{-\eta^2 } \cosh\left(\frac{\eta}{2}\right ) - 2\sqrt{\pi } e^{25/4 } \cosh\left(\frac{9\eta}{2}\right ) - \sqrt{\pi } e^{25/4 - 9\eta/2 } \erf\left ( \eta-\frac{5}{2}\right ) \right .\nonumber \\ & & \left .+ \sqrt{\pi } e^{25/4 + 9\eta/2 } \erf\left(\eta+ \frac{5}{2}\right ) \right]\end{aligned}\ ] ] + if the parameter , the coefficients with vanish 
.note that if the physical metric is required to linear order , ( as used in the examples in this paper ) , then the components will be given by , for example numerical implementation of the 2d and 3d nonlinear codes for the generation and evolution of initial data is described fully in .\(i ) find a 2-sphere on which the approximate spherical symmetry is manifest .for the examples used in this paper , appropriate spheres are known from the construction of the initial data , and the 2-spheres are simply spheres of constant isotropic coordinate radius from the center of the octant symmetry .\(ii ) construct the spatial metric components , , , , , on the given 2-sphere . in general this procedure will involve interpolating metric components from the numerical grid used to create the initial data or for the evolutions to a 2d numerical grid on the given 2-sphere , and then transforming the metric to the polar coordinate system .however for the data sets used here , the initial data was created and evolved on the same 2-spheres used for the extraction .\(iii ) integration over the 2-sphere to calculate the regge - wheeler functions , and directly from these the radiative variables and . the integrations were performed numerically using simpson s rule , which calculates the integral with an error of . to avoid complications ,the polar grid points on the sphere are arranged so that the polar axis is straddled . the number of polar grid points needed for an accurate estimate of the integral will depend in general on the angular structure of the metric components and the , mode under consideration .p. anninos , d. bernstein , d. hobill , e. seidel , l. smarr , and j. towns , in _ computational astrophysics : gas dynamics and particle methods _ , edited by w. benz , j. barnes , e. muller , and m. norman ( springer - verlag , new york , 1997 ) , in press .
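referring back to step ( iii ) of the extraction procedure above , a minimal sketch of the angular projection with simpson 's rule might look as follows . the toy field , the choice of an axisymmetric legendre mode and the staggered grid are ours ; which combinations of metric components feed which regge - wheeler functions is not reproduced here .

```python
# Sketch of step (iii): projecting an axisymmetric field sampled on a 2-sphere
# onto a Legendre mode with Simpson's rule.  For non-axisymmetric data the same
# idea applies with spherical harmonics and an additional phi integration.
import numpy as np
from scipy.integrate import simpson

n_th = 129
theta = (np.arange(n_th) + 0.5) * np.pi / n_th   # staggered grid: the poles are straddled
mu = np.cos(theta)
P2 = 0.5 * (3.0 * mu**2 - 1.0)                   # Legendre polynomial P_2

field = 1.0 + 0.4 * P2 + 0.1 * mu                # toy data sampled on the sphere
norm = simpson(P2**2 * np.sin(theta), x=theta)   # analytically 2/5
a2 = simpson(field * P2 * np.sin(theta), x=theta) / norm
print(a2)                                        # recovers ~0.4
```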
|
we consider a series of distorted black hole initial data sets , and develop techniques to evolve them using the linearized equations of motion for the gravitational wave perturbations on a schwarzschild background . we apply this to 2d and 3d distorted black hole spacetimes . in 2d , waveforms for different modes of the radiation are presented , comparing full nonlinear evolutions of different axisymmetric modes with perturbative evolutions . we show how axisymmetric black hole codes solving the full , nonlinear einstein equations are capable of very accurate evolutions , and also how these techniques aid in studying nonlinear effects . in 3d , we show how the initial data for the perturbation equations can be computed , and we compare with analytic solutions obtained from a perturbative expansion of the initial value problem . in addition to exploring the physics of these distorted black hole data sets , in particular allowing an exploration of linear , nonlinear , and mode - mixing effects , this approach provides an important testbed for any fully nonlinear numerical code designed to evolve black hole spacetimes in 2d or 3d .
|
collaborative filtering ( cf ) is one of the most widely used approaches to recommender systems .it is based on the analysis of users previous activity ( likes , watches , skips , etc . of items ) and discovering hidden relations between users and items . among cf methods , matrix factorization techniques offer the most competitive performance . these models map users and items into a latent factor space which contains information about preferences of users w.r.t . items . due to the fact that cf approaches use only user behavioural data for predictions , but not any domain - specific context of users / items, they can not generate recommendations for new _ cold _ users or _ cold _ items which have no ratings so far . a very common approach to solve this _ cold - start problem _ , called _ rating elicitation _ , is to explicitly ask cold users to rate a small representative _ seed set _ of items or to ask a representative _ seed set _ of users to rate a cold item .one of the most successful approaches to rating elicitation is based on the maximal - volume concept .its general intuition is that the most representative seed set should consist of the most representative and diverse latent vectors , i.e. they should have the largest length yet be as orthogonal as possible to each other . formally , the degree to which these two requirements are met is measured by the volume of the parallelepiped spanned by these latent vectors . in matrix terms , the algorithm , called maxvol , searches very efficiently for a submatrix of a factor matrix with the locally maximal determinant .unfortunately , the determinant is defined only for square matrices , what means that a given fixed size of a seed set requires the same rank of the matrix factorization that may be not optimal .for example , the search for a sufficiently large seed set requires a relatively high rank of factorization , and hence a higher rank implies a larger number of the model parameters and a higher risk of overfitting , which , in turn , decreases the quality of recommendations . to overcome the intrinsic `` squareness '' of the ordinary maxvol , which is entirely based on the determinant ,we use the notion of rectangular matrix volume , a generalization of the usual determinant . searching a submatrix with high rectangular volumeallows to use ranks of the factorization that are lower than the size of a seed set .however , the problem of searching for the globally optimal rectangular submatrix is np - hard in the general case . in this paper, we propose a novel efficient algorithm , called rectangular maxvol , which generalizes original maxvol .it works in a greedy fashion and adds representative objects into a seed set one by one .this incremental update has low computational complexity that results in high algorithm efficiency . 
in this paper, we provide a detailed complexity analysis of the algorithm and its competitors and present a theoretical analysis of its error bounds .moreover , as demonstrated by our experiments , the rectangular volume notion leads to a noticeable quality improvement of recommendations on popular recommender datasets .let us briefly describe the organisation of the paper .section [ sec : background ] describes the background on the existing methods for searching representatives that is required for further understanding .sections [ sec : volume ] , [ sec : algorithm ] present our novel approach based on the notion of rectangular matrix volume and the fast algorithm to search for submatrices with submaximal volume . in sections[ sec : rectmaxvol_complexity ] and [ sec : bound ] , we provide a theoretical analysis of the proposed method .section [ sec : experiments ] reports the results of our experiments conducted on several large - scale real world datasets .section [ sec : related ] overviews the existing literature related to cf , the cold start problem and the basic maximal - volume concept papers .the rating elicitation methods , such as , are based on the same common scheme , which is introduced in this section .suppose we have a system that contains a history of users ratings for items , where only a few items may be rated by a particular user .denote the rating matrix by , where is the number of users and is the number of items , and the value of its entry describes the feedback of user on item .if the rating for pair is unknown , then is set to . without loss of generality and due to the space limit, the following description of the methods is provided only for the user cold start problem . without any modifications ,these methods for the user cold start problem can be used to solve the item cold start problem after the transposition of matrix .algorithm [ alg : elicitation ] presents the general scheme of a rating elicitation method .such procedures ask a cold user to rate a _ seed set _ of representative items with indices for modeling his preference characteristics , where , called _ budget _ , is a parameter of the rating elicitation system .warm rating matrix , cold user , budget predicted ratings of the cold user for all items compute indices of representative items that form a seed set elicit ratings of the cold user on items with indices predict ratings of the cold user for all items using the performance of a rating elicitation procedure should be measured using a quality of predictions . for this purpose ,we use ranking measures ( such as precision@ ) , which are well suitable for cf task ( see section [ sec : experiments ] for details ) .the major contribution of this paper is a novel method of performing step 1 , described in section [ sec : proposed_method ] .it is based on puresvd collaborative filtering technique , that is described in section [ sec : puresvd ] . in section [ sec : predicting ] , we discuss how to effectively perform step 3 using the similar factorization based approach . and in section [ sec : maximal ] , we talk about the baseline method for seeking a seed set ( step 1 ) , which is based on the maximal - volume concept .let us briefly describe the general idea of puresvd , which is a very effective cf method in terms of ranking measures and therefore used as a basis of our rating elicitation approach .puresvd provides a solution of the following optimization problem : where is the frobenius norm and is a parameter of puresvd called rank . 
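as a concrete sketch of this factorization step ( the truncated - svd solution is spelled out in the next paragraph ) , the rank - f puresvd factors might be computed as follows ; we write f for the rank parameter here , and the split of the singular values between the two factors is a convention rather than something fixed by the model .

```python
# A minimal PureSVD sketch (interface names are ours): unknown ratings are kept
# as explicit zeros in a sparse matrix and a rank-f truncated SVD is computed.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def pure_svd(rows, cols, vals, n_users, n_items, f):
    # rows, cols index the observed ratings; vals are the rating values as floats
    R = sp.csr_matrix((vals, (rows, cols)), shape=(n_users, n_items))
    U, s, Vt = svds(R, k=f)        # truncated SVD of the sparse rating matrix
    P = U * s                      # user factors: one row per user
    Q = Vt                         # item factors: one column per item
    return P, Q                    # R is approximated by P @ Q

# a predicted rating for user u and item i is then float(P[u] @ Q[:, i])
```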
according to eckart - young theorem ,the optimal solution can be found by computing the truncated sparse singular value decomposition of the sparse rating matrix .this factorization can be interpreted as follows .every user has a low dimensional embedding , a row in the matrix , and every item has an embedding , a column of the matrix .these embeddings are called _ latent vectors _ .the puresvd method provides an approximation of the unknown rating for a pair , which is computed as the scalar product of the latent vectors : low - rank factors and are used in the rating elicitation procedures that are described further .let us assume that some algorithm has selected a seed set with representative items with indices , and assume a cold user has been asked to rate only items , according to steps 1 - 2 of the rating elicitation scheme described by algorithm [ alg : elicitation ] . in this section , we explain how to perform step 3 , i.e. how to predict ratings for all items using only the ratings of the seed set .as shown in , the most accurate way to do it is to find a coefficient matrix that allows to linearly approximate each item rating via ratings of items from the seed set .each column of contains the coefficients of the representation of an item rating via the ratings of the items from the seed set .shortly , this approximation can be written in the following way : we highlight two different approaches to compute matrix c. first approach is called representative based matrix factorization ( rbmf ) .it aims to solve the following optimization task : in our paper , we use the matlab indexing notation : is the matrix whose column coincides with the column of , where is the component of vector . note that is not a part of , because there is still no information about a cold user ratings .this optimization task corresponds to the following approximation : the solution of ( [ eq : matrix_decomp ] ) is : since , the matrix is often well - conditioned .therefore , the regularization term used in is unnecessary and does not give a quality gain . 2 in this paper , we propose a more efficient second approach that considers the rank- factorization given by equation ( [ eq : puresvd_optimization ] ) , .let be the matrix formed by columns of that correspond to the items of the seed set .let us try to linearly recover all item latent vectors via the latent vectors from the seed set : it is a low - rank version of the problem given by ( [ eq : matrix_decomp ] ) and , therefore , is computationally easier .solution of this optimization problem can be also used for recovering all ratings using ( [ eq : r_coefs ] ) . unlike ( [ eq : matrix_decomp ] ) , the optimization problem given by ( [ eq : coef_factor_optimization ] ) does not have a unique solution in general case , because there are infinitely many ways to linearly represent an -dimensional vector via more than other vectors . therefore , we should find a solution of the underdetermined system of linear equations : where we denote . 
since the seed set latent vectors surely contain some noise and coefficients in show how all item latent vectors depend on the seed set latent vectors , it is natural to find `` small '' , because larger coefficients produce larger noise in predictions .we use the least - norm solution in our research , what is additionally theoretically motivated in section [ sec : bound ] .the least - norm solution of ( [ eq : coef ] ) should be computed as follows : where is the right pseudo - inverse of .actually , such linear approach to rating recovering results in the following factorization model .taking the latent vectors of the representative items as a new basis of the decomposition given by equation ( [ eq : puresvd_optimization ] ) , we have where . in this way, we approximate an unknown rating by the corresponding entry of matrix , where factor consists of the known ratings for the seed set items . this scheme is illustrated on fig .[ fig : scheme ] .this section introduces the general idea of the maximal - volume concept and maxvol algorithm for selecting a good seed set , what corresponds to step 1 in the rating elicitation scheme ( algorithm [ alg : elicitation ] ) .suppose we want to select representative items with indices .first of all , maxvol algorithm requires to compute the rank- svd factorization of given by equation ( [ eq : puresvd_optimization ] ) .after this , searching for an item seed set is equivalent to searching for a square submatrix in the factor matrix .note that every column of or is a latent vector corresponding to an item from the seed set .an algorithm of seeking for a set of representative items may rely on the following intuitions .first , it should not select items , if they are not popular and thus cover preferences of only a small non - representative group of users .that means that the latent vectors from the seed set should have large norms .second , the algorithm has to select diverse items that are relevant to different users with different tastes .this can be formalized as selecting latent vectors that are far from being collinear .the requirements can be met by searching for a subset of columns of that maximizes the volume of the parallelepiped spanned by them . this intuition is demonstrated in fig .[ fig : maxvol_demo ] , which captures a two - dimensional latent space and three seed sets .the volume of each seed set is proportional to the area of the triangle built on the corresponding latent vectors .the dark grey triangles have small volumes ( because they contain not diverse vectors or vectors with small length ) and hence correspond to bad seed sets .contrariwise , the light gray triangle has a large volume and represents a better seed set .overall , we have the following optimization task : the problem is np - hard in the general case and , therefore , suboptimal greedy procedures are usually applied .one of the most popular procedures is called maxvol algorithm and is based on searching for a _ dominant _ submatrix of .the dominant property of means that all columns of can be represented via a linear combination of columns from with the coefficients not greater than 1 in modulus .although , this property does not imply that has the maximal volume , it guarantees that is _ locally optimal _, what means that replacing any column of with a column of , does not increase the volume . 
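putting the pieces of this section into code , a minimal sketch of the least - norm coefficients via the right pseudoinverse , and of the linear recovery of a cold user 's ratings from the elicited seed - set ratings , could look as follows ( variable names are ours ) .

```python
# Least-norm coefficients C solving S C = Q, where S holds the latent vectors of
# the seed-set items, and linear prediction of a cold user's ratings from the
# ratings elicited on the seed set (unasked/unknown ratings entered as 0).
import numpy as np

def least_norm_coefficients(Q, seed):
    S = Q[:, seed]                  # f x l latent vectors of the seed set
    return np.linalg.pinv(S) @ Q    # l x n least-norm solution of S C = Q

def predict_cold_user(elicited, C):
    # elicited: length-l vector with the cold user's ratings of the seed items
    return elicited @ C             # predicted ratings for all n items

# Q is the f x n item factor from PureSVD; seed is a list of l item indices.
```

the experiments reported later in the paper use the variant that computes the coefficients from the full rating matrix rather than from the factor , but the structure of the prediction step is the same .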
at the initialization step , maxvol takes linearly independent latent vectors that are the pivots from lu - decomposition of matrix .practice shows that this initialization usually provides a good initial approximation to maximal volume matrix .after this , the algorithm iteratively swaps a `` bad '' latent vector inside the seed set with a `` good '' one out of it .the procedure repeats until convergence .see for more rigorous explanation of maxvol algorithm . in our paper, we also call this algorithm _ square maxvol _ , because it seeks for a square submatrix ( since determinant is defined only for square ) . furthermore , it is important to note that the original algorithm presented in has crucial speed optimizations for avoiding the expensive matrix multiplications and inversions , which are not presented in our paper due to the lack of space .let us analyse the complexity of maxvol .the lu - decomposition with pivoting takes operations .the iterative updates take operations , where is the number of iterations .typically , iterations are needed .the overall complexity of square maxvol can be estimated as .a more detailed complexity analysis of square maxvol is given in .the obvious disadvantage of this approach to rating elicitation is the fixed size of the decomposition rank , because the matrix determinant is defined only for square matrices . that makes it impossible to build a seed set with fixed size using an arbitrary rank of decompositionhowever , as we further demonstrate in section [ sec : experiments ] with experiments , using our rectangular maxvol generalization with a decomposition of rank smaller than the size of the seed set could result in better accuracy of recommendations for cold users .this section introduces a generalization of the maximal - volume concept to rectangular submatrices , which allows to overcome the intrinsic `` squareness '' of the ordinary maximal - volume concept , which is entirely based on the determinant of a square matrix .consider , .it is easy to see that the volume of a square matrix is equal to the product of its singular values . in the case of a rectangular matrix , its volume can be defined in a similar way : we call it _ rectangular volume_. the simple intuition behind this definition is that it is the volume of the ellipsoid defined as the image of a unit sphere under the linear transformation defined by : this can be verified using the singular value decomposition of and the unitary invariance of the spectral norm . moreover , in the case of a square matrix , the rectangular volume is equal to the ordinary square volume : note that , if , then .overall , searching for a seed set transforms to the following optimization task that is a generalization of problem ( [ eq : optimize_det ] ) : , where .it is important to note that this maximization problem does not depend on the basis of the latent vectors from .the simplest method to find a suboptimal solution is to use a greedy algorithm that iteratively adds columns of to the seed set .unfortunately , the straightforward greedy optimization ( trying to add each item to the current seed set and computing its rectangular volume ) costs , that often is too expensive considering typical sizes of modern recommender datasets and number of model hyperparameters .therefore , we developed a fast algorithm with complexity that is described in the following section .in this section , we introduce an algorithm for the selection of representative items using the notion of rectangular volume . 
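for reference , the straightforward greedy optimization just described might look like the sketch below . this is the expensive brute - force variant , shown only to make the definition concrete ; the fast algorithm of the next section avoids recomputing determinants from scratch .

```python
# Brute-force greedy maximization of the rectangular volume (illustration only).
import numpy as np

def rect_volume(S):
    # S is f x k with k >= f; rectvol(S) = sqrt(det(S S^T)), i.e. the product
    # of the singular values of S.
    return np.sqrt(max(np.linalg.det(S @ S.T), 0.0))

def greedy_rectvol(Q, seed, budget):
    seed = list(seed)               # start from a square f x f seed set
    while len(seed) < budget:
        rest = [j for j in range(Q.shape[1]) if j not in seed]
        gains = [rect_volume(Q[:, seed + [j]]) for j in rest]
        seed.append(rest[int(np.argmax(gains))])
    return seed
```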
at the first step, the algorithm computes the best rank- approximation of the rating matrix , puresvd ( see section [ sec : puresvd ] for details ) , and selects representative items with the pivot indices from lu - decomposition of or with maxvol algorithm .this seed set is further expanded by algorithm [ alg:2maxvol ] in a greedy fashion : by adding new representative items one by one maximizing rectangular volume of the seed set .further , we show that new representative item should have the maximal norm of the coefficients that represent its latent vector by the latent vectors of the current seed set .the procedure of such norm maximization is faster than the straightforward approach . at the end of this sectionwe describe the algorithm for even faster rank-1 updating norms of coefficients .suppose , at some step , we have already selected representative items with the indices .let be the corresponding submatrix of .on the next step , the algorithm selects a column and adds it to the seed set : , ] is an operation of horizontal concatenation of two matrices and .this column should maximize the following volume : \right).\ ] ] suppose is the current matrix of coefficients from equation ( [ eq : coef ] ) , and let be an -th column of matrix . then the updated seed set from ( [ eq : volume_update ] )can be written as following : =[s , sc_i]=s[i_l , c_i].\ ] ] then the volume of the seed set can be written in the following way : \right ) & = \sqrt{\det \left([s , q_i][s , q_i]^\top \right)}=\\ & = \sqrt{\det \left(ss^\top + sc_ic_i^\top s^\top \right)}. \end{split}\ ] ] taking into account the identity the volume ( [ eq : rectvol_before_coef ] ) can be written as following : \right)=\text{rectvol}(s)\sqrt{1+w_i},\ ] ] where .thus , the maximization of rectangular volume is equivalent to the maximization of the -norm of the coefficients vector , which we know only after recomputing ( [ eq : coef_pseudo ] ) .total recomputing of coefficient matrix on each iteration is faster than the straightforward approach described in section [ sec : volume ] and costs .however , in the next section , we describe even faster algorithm with an efficient recomputation of the coefficients .since the matrix of coefficients is the least - norm solution , after adding column to the seed set , should be computed using equation ( [ formula1 ] ) : ^\dagger q=[i_l , c_i]^\dagger s^\dagger q=[i_l , c_i]^\dagger c.\ ] ] the pseudoinverse from ( [ eq : recomputing_coef ] ) can be obtained in this way : ^\dagger=[i_l , c_i&]^\top \left([i_l , c_i][i_l , c_i]^\top \right)^{-1}=\\ & = \begin{bmatrix}i_l\\ c_i^\top \end{bmatrix}\left(i_l+c_ic_i^\top \right)^{-1 } , \end{split}\ ] ] where is an operation of vectical concatenation of and . the inversion in this formulacan be computed by the sherman - morrison formula : putting it into ( [ eq : recomputing_coef ] ) , we finally get the main update formula for : recall that we should efficiently recompute norms of coefficients . using equation ( [ eq : newc ] ) , we arrive at the following formula for the update of all norms : it is natural to see that coefficients norms are decreasing , because adding each new latent vector to the seed set gives more flexibility of representing all latent vectors via representative ones . equations ( [ eq : newc ] ) and ( [ eq : length_update ] ) allow to recompute and using the simple rank- update . 
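spelling the update out in code : with the current coefficient matrix and the squared column norms at hand , one greedy step amounts to the rank - 1 recomputation below ( a sketch in our notation , mirroring the two update equations above ) .

```python
# One greedy step of the fast update: pick the column of C with the largest
# squared norm, append that item to the seed set, and recompute C and the
# squared norms w with a rank-1 correction instead of a full pseudoinverse solve.
import numpy as np

def add_one_item(C, w, seed):
    i = int(np.argmax(w))                 # item whose coefficient vector is longest
    c_i = C[:, i]                         # length-l coefficient column
    denom = 1.0 + w[i]
    proj = c_i @ C                        # row vector c_i^T C, length n
    C_new = np.vstack([C - np.outer(c_i, proj) / denom,   # corrected old rows
                       proj[None, :] / denom])            # new row for item i
    w_new = w - proj**2 / denom           # updated squared column norms
    seed.append(i)
    return C_new, w_new, seed
```

each call costs only a few matrix - vector and outer products , which is where the low per - step cost quoted in the complexity analysis comes from .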
thus , the complexity of adding a new column into the seed set is low , what is shown in section [ sec : rectmaxvol_complexity ] .the pseudocode of the algorithm is provided in algorithm [ alg:2maxvol ] .rating matrix , number of representative items , rank of decomposition indices of representative items compute rank- puresvd of the matrix get the initial square seed set : pivot indices from lu - decomposition of , where is the -th column of ] the seed sets provided by the algorithm can be used for rating elicitation and further prediction of ratings for the rest of the items , as demonstrated in section [ sec : predicting ] .moreover , if the size of the seed set is not limited by a fixed budget , alternative stopping criteria is proposed in section [ sec : bound ] .the proposed algorithm has two general steps : the initialization ( steps 15 ) and the iterative addition of columns or rows into the seed set ( steps 612 ) .the initialization step corresponds to the lu - decomposition or square maxvol , which have complexity .addition of one element into the seed set ( steps 711 ) requires the recomputation of the coefficients ( step 10 ) and lengths of coefficient vectors ( step 11 ) .the recomputation ( step 10 ) requires a rank- update of the coefficients matrix and the multiplication , where is a column of .the complexity of each of the two operations is , so the total complexity of one iteration ( steps 711 ) is .since this procedure is iterated over , the complexity of the loop ( step 6 ) is equal to .so , in total , the complexity of algorithm [ alg:2maxvol ] is . in this section ,we theoretically analyse the estimation error of our method proposed in section [ sec : algorithm ] . according to section [ sec : puresvd ] we have a low - rank approximation of the rating matrix where is a random error matrix . on the other hand, we have rbmf approximation ( [ eq : r_coefs ] ) .let us represent its error via .first of all , we have since ( see section [ sec : predicting ] for details ) , the rbmf approximation of can be written in the following form : what means the smaller in modulus the noise terms are , the better approximation of we have .it means that we are interested in the small values of the matrix , such as the least - norm solution of ( [ eq : coef_factor_optimization ] ) .further , we prove a theorem providing an approximated bound for the maximal length of .similarly to square maxvol algorithm , a rectangular submatrix is called _ dominant _ , if its rectangular volume does not increase by replacing one row with another one from the source matrix .[ theorem ] let be a matrix of rank . assume is a vector of seed set element indices that produces rank- dominant submatrix of , where and .let be a matrix of least - norm coefficients , such that .then -norm of a column of for not from the seed set is bounded as : since is a dominant submatrix of the matrix , it has the maximal rectangular volume among all possible submatrices of ] , we get [s , q_i]^\top \right ) \le \frac{l_0 + 1}{l_0 + 1-f}\det(ss^\top ) .\ ] ] using equation ( [ eq : rect_det ] ) , we get : [s , q_i]^\top \right)}{\det(ss^\top ) } -1\le \frac{f}{l_0 + 1-f } , \end{split}\ ] ] what finishes the proof . the similar theoretical result was obtained in , however our proof seems to be much closely related to the notation used in our paper and in the proposed algorithm . 
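for reference , piecing the statement together from the fragment that survives at the end of the proof , the bound of the theorem appears to read ( our reconstruction , to be checked against the original paper )

\[
\| c_i \|_2 \;\le\; \sqrt{ \frac{ f }{ l_0 + 1 - f } } ,
\]

so that as soon as l_0 \ge 2f - 1 , every coefficient column has euclidean length at most 1 .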
theorem [ theorem ]demonstrates that if we have an existing decomposition with the fixed rank and the size of the seed set is not limited by a fixed budget it is enough to take items to the seed set for getting all coefficients norm less than 1 .this condition of representativeness has a very natural geometric meaning : all item latent vectors are inside the ellipsoid spanned by the latent vectors from the seed set .the numerical experiments with randomly generated matrices have shown , that algorithm [ alg:2maxvol ] requires only rows to reach upper bound 2 for the length of each row of and only to reach the upper bound 1 for the length of each row of .so , although , our algorithm does not guarantee that the seed set submatrix is dominant , the experiment results are fully consistent with the theory .further , we prove the supporting lemma .[ lemma ] let and .let be submatrix of without -th column and be submatrix of without -th row .then , from the cauchy - binet formula we get where is a vector of different indices .since contains all columns of except -th column , then is a submatrix of for any .since consists of different numbers , we have different , such that is a submatrix of .the same is true for the matrix .so get applying cauchy - binet formula to each summand .therefore , what finishes the proof .the proposed experiments compare two algorithms : square maxvol based ( our primary baseline ) and rectangular maxvol based ( section [ sec : proposed_method ] ) .other competitors have either an infeasible computational complexity ( see section [ sec : related ] for details ) or have a lower quality than our baseline , as it is shown in ( we reproduced the conclusions from but they are not demonstrated here due to the lack of space ) .moreover , it is important to note that the experiments in used smaller versions of the datasets .therefore , the performance of square maxvol on the extended datasets is different from that reported in .we used two popular publicly available datasets in our experiments .t first one is the movielens dataset which contains 20,000,263 ratings of 26,744 movies from 138,493 users .the analysis of the older and smaller version of this dataset is provided in .the second one is the netflix dataset .it contains 100,480,507 ratings of 17,770 movies from 480,189 users .the description of the dataset and the competition can be found in .the rating matrix was formed in the same way as in .our evaluation pipeline for the comparison of the rating elicitation algorithms is similar to the one introduced in .all our experiments are provided for both the user and the item cold start problems .however , without loss of generality , this section describes the evaluation protocol for the user cold start problem only .the item cold start problem can be evaluated in the same way after the transposition of the rating matrix .we evaluate the algorithms for selecting representatives by the assessing the quality of the recommendations recovered after the acquisition of the actual ratings of the representatives , what can be done as shown in section [ sec : predicting ] .note that users may not have ratings for the items from the seed set : if user was asked to rate item with unknown rating , then , according to puresvd model , is set to . 
in case of the user cold start problem ,all users are randomly divided into 5 folds of equal size , and the experiments are repeated 5 times , assuming that one part is a test set with cold users and the other four parts form the train set and the validation set contain warm users .analogically , in case of the item cold start , all items were divided into 5 folds .pointwise quality measures are easy to be optimized directly , but they are not very suitable for recommendation quality evaluation , because the goal of a recommender system is not to predict particular rating values , but to predict the most relevant recommendations that should be shown to the user .that is why , we use ranking measures to evaluate all methods . for evaluation , we divided all items for every user into relevant and irrelevant ones , as it was done in the baseline paper .one of the most popular and interpretable ranking measures for the recommender systems evaluation are precision@ and recall@ that measure the quality of top- recommendations in terms of their relevance .more formally , precision@ is the fraction of relevant items among the top- recommendations .recall@ is the fraction of relevant items from the top among all relevant items .our final evaluation measures were computed by averaging precision@ and recall@ over all users in the test set .note that in the case of the item cold start problem , precision@ and recall@ are computed on the transposed rating matrix .moreover , following the methodology from , we compare algorithms in terms of coverage and diversity .as we mentioned in section [ sec : predicting ] , there are two different ways to compute the coefficients for representing the hidden ratings via the ratings from a seed set . the first one is to compute them via the low - rank factors , as shown in equation ( [ eq : coef_pseudo ] ) .the second one is to compute them via the source rating matrix , as shown in equation ( [ eq : coef_matrix ] ) .our experiments show that the second approach demonstrates the significantly better quality .therefore , we use this method in all our experiments .we processed experiments for the seed set sizes from 5 to 100 with a step of 5 .these computations become possible for such dense grid of parameters , because of the high computational efficiency of our algorithm ( see section [ sec : related ] ) .the average computational time of rectangular maxvol on the datasets is seconds ( intel xeon cpu 2.00ghz , 256 gb ram ) .the average computational time of square maxvol is almost the same what confirms the theoretical complexity analysis . in the case of rectangular maxvol ,for every size of the seed set , we used the rank that gives the best performance on a separate evaluation set .[ fig : major ] demonstrates the superiority of our approach over the ordinal square maxvol for all cold start problems types ( user and item ) and for both datasets .moreover , it can be seen from the magnitudes of the differences that rectangular maxvol gives much more stable results that the square one .the same conclusions can be made for any combination of precision / recall , and seed set sizes , but they are not demonstrated here due to the lack of space . 
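for completeness , the averaged precision@k and recall@k just described can be computed along the following lines ( a sketch , writing k for the cut - off ; predicted holds one score vector over all items per test user , relevant holds each test user 's set of relevant item indices ) .

```python
# Averaged precision@k and recall@k over the test (cold) users.
import numpy as np

def precision_recall_at_k(predicted, relevant, k):
    precisions, recalls = [], []
    for scores, rel in zip(predicted, relevant):
        if not rel:
            continue                              # skip users with no relevant items
        top_k = np.argsort(scores)[::-1][:k]      # indices of the top-k recommendations
        hits = len(set(top_k.tolist()) & rel)     # relevant items among the top-k
        precisions.append(hits / k)
        recalls.append(hits / len(rel))
    return float(np.mean(precisions)), float(np.mean(recalls))
```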
as mentioned above , rectangular maxvol used the optimal rank value in our experiments .[ fig : opt_rank ] demonstrates the averaged optimal rank over all experiments for all datasets and for all cold start problem types .it is easy to see that , in each case , the required optimal rank is significantly smaller than the corresponding size of the seed set .this unequivocally confirms that the rectangular generalization of the square maximal - volume concept makes a great sense .moreover , since rectangular maxvol requires a smaller rank of the rating matrix factorization , it is more computationally and memory efficient . 2 2 [ fig : opt_rank ] [ fig : coverage_diversity ] on fig .[ fig : coverage_diversity ] , we can see that the coverage and diversity measures of the representative netflix items selected by rectangular maxvol are higher than the measures of square maxvol .the cases of representative users and movielens dataset lead to the same results , but the corresponding figures are not demonstrated here due to the lack of space . in the end, it is interesting to analyse the behaviour of the automatic stopping criterion that adds objects into the seed set until all latent vectors are covered by the ellipsoid spanned by the latent vectors of the representatives .the experiments show that increasing the rank results in a quality fall in the case of representative users and the ranks higher than 50 , what means an overfitting of puresvd . in case of the representative items ,the quality becomes almost constant starting from the same ranks .conventional cf methods do not analyse any domain - specific context of users / items , such as explicit user and item profiles , items text descriptions or social relations between users .therefore , they are domain- and data - independent and can be applied to a wide range of tasks , what is their major advantage .as shown in , cf approaches based on a factorization have high accuracy for the majority of datasets .while a particular choice of a factorization algorithm is not essential for our approach to the cold start problem , our methodology is based on the puresvd , which performs better than other popular methods such as svd++ .the simplest methods for the seed set selection rank users or items by some ad - hoc score which shows how representative they are and take the top- ranked entities as a seed set .an obvious drawback of such methods that is avoided in our approach is that these elements are taken from the seed set independently and diversity of the selected elements is limited .further in this section , we overview the methods that aim on a selection of a diverse seed set and that have better performance .this is why we do not use the scoring methods in our experiments . among them ,the most straightforward method is the greedyextend approach .unfortunately , the brute force manner of greedyextend implies very high computational costs .hence , it is hardly scalable , in contrast to the approaches that are empirically compared in this paper .this method greedily adds the item to the current seed set of indices that maximizes the target quality measure .the search of the best is computed in a brute force manner , i.e. the algorithm iteratively adds the best item into the seed set : ] and ) ] .the authors of this method reported the results only for an approach that uses similarities of items to predict the ratings via the seed set ratings .more effective linear approach described in section [ sec : predicting ] costs , where . 
at each step ,the least squares solution is computed for almost all items , i.e. times . since the algorithm has such steps , the total complexity is ( more than operations for the netflix dataset and the seed set size ) .therefore , we do not use this method in our experiments .another class of methods of searching for diverse representatives is based on the factorization of the rating matrix .since the selection of user or item representatives is equivalent to selecting a submatrix of the corresponding factor , these algorithms seek for the submatrix that maximizes some criterion .one such approach , called backward greedy selection , solves only the item cold start problem , but not the user one .this method is based on the techniques for transductive experimental design introduced in . to get the seed set, it greedily removes users from a source user set in order to get a good seed set minimizing the value where is a submatrix in the items factor of a rank- decomposition .each deletion of an item requires iterative look up of all the items in the data , where each iteration costs .so , one deletion takes operations . assuming that , the whole procedure takes operations , which is too expensive to be computed on real world datasets ( the authors have selected a small subset of users to perform their evaluation ) .therefore , we do not use this method in our experiments . the method presented in , called representative based matrix factorization ( rbmf ) , takes the diversity into account as well .it uses maximal - volume concept and the maxvol algorithm for searching the most representative rows or columns in the factors of a cf factorization .this approach is highly efficient and more accurate than all ad - hoc competitors , but it also has one important limitation .it must use the same rank of factorization as the desired number of representative users or items for the seed set . the algorithm proposed in our paperis a generalization of maxvol that allows to use different rank values .it often leads to a better recommendation accuracy , as shown in section [ sec : experiments ] .let us overview the computational complexity of the proposed rectangular maxvol and its competitors .some of these methods use low - rank factorizations of the matrix , whose detailed complexity analysis is provided in .however , as this is not a key point of our work , we neglect the computational cost of factorizations in the further analysis , because it is same for all rating elicitation algorithms and usually is previously computed for the warm cf method .the summary of the complexity analysis is shown in table [ tab : complexity ] .the detailed complexity analysis of square maxvol and rectangular maxvol is provided in sections [ sec : maximal ] and [ sec : rectmaxvol_complexity ] respectively ..complexity of the algorithms [ cols="<,<,<",options="header " , ] apart from rating elicitation methods , there were also different approaches to cold start problem proposed in the literature .additional context information ( e.g. , category labels or all available metadata ) may be used .moreover , there is a class of methods that use adaptive tree - based questionnaires to acquire the initial information about new users .moreover , the cold start problem can be viewed from the exploration - exploitation trade - off point of view .the methods from analyse the performance of cf methods w.r.t . 
the number of known ratings for a user .the maximal - volume concept , originally described in the field of low - rank approximation of matrices , provides an approach for a matrix approximation in a pseudo - skeleton form , which is a product of matrices formed by columns or rows of the source matrix .the algorithm , called maxvol , allows to efficiently find a well - conditioned submatrix with a high enough volume for building such an approximation .maximal volume submatrices are useful not only for low - rank approximations , but also in wireless communications , preconditioning of overdetermined systems , tensor decompositions , and recommender systems .our generalization of the maximal - volume concept to rectangular case offers additional degrees of freedom , what is potentially useful in any of these areas .in our paper , we overviewed the existing approaches for the rating elicitation and introduced the efficient algorithm based on the definition of rectangular matrix volume. moreover , in order to demonstrate the superiority of the proposed method , we provided the analytical and experimental comparison to the existing approaches .it seems to be an interesting direction of future work to apply the proposed framework to building tree - based cold - start questionnaires in recommender systems . another interesting direction for future work is to join approaches from two classes : based on the maximal - volume concept and based on optimal design criteria .they historically came from absolutely different fields : from computational lineal algebra and from statistical experimental analysis respectively .although all these methods are very similar from the mathematical point of view , it seems quite interesting to explore their similarities and differences .work on problem setting and numerical examples was supported by russian science foundation grant 14 - 11 - 00659 .work on theoretical estimations of approximation error and practical algorithm was supported by russian foundation for basic research 16 - 31 - 00351 mol_a .also we thank evgeny frolov for helpful discussions .
|
The cold start problem in collaborative filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called representative based matrix factorization, is based on the maxvol algorithm. Unfortunately, this approach has one important limitation: a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call rectangular maxvol. It allows the rank of the factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.
|
since von neumann and morgenstern have initiated the field of game theory , it has often proved of great value for the quantitative description and understanding of the competition and co - operation between individuals .game theory focusses on two questions : 1 . which is the optimal strategy in a given situation ?2 . what is the dynamics of strategy choices in cases of repeatedly interacting individuals ? in this connection game dynamical equations find a steadily increasing interest . although they agree with the replicator equations of evolution theory ( cf . sec .2 ) , they can not be substantiated in the same way .therefore , we will be looking for a foundation of the game dynamical equations which bases on individual actions and decisions ( cf .4 ) . in addition, we will formulate a stochastic version of evolutionary game theory ( cf .3 ) . this allows to investigate the effects of fluctuations on the dynamics of social systems . in order to illustrate the essential ideas , a concrete model for the self - organization of behavioral conventionsis presented ( cf .we will see that the game dynamical equations describe the average evolution of social systems only for a certain time period .therefore , a criterium for their validity will be developed ( cf .finally , we will present possible extensions to more general behavioral models and discuss the actual meaning of the game dynamical equations ( cf .sec . 7 ) .let with denote the _ proportion _ of individuals pursuing the _ behavioral strategy _ at time .we assume the considered strategies to be mutually exclusive .the set of strategies may be discrete or continuous , finite or infinite .the only difference will be that sums over are to be replaced by integrals in cases of continuous sets .by we will denote the possibly time - dependent _ payoff _ of an individual using strategy when confronted with an individual pursuing strategy .hence , his / her _ expected success _ will be given by the weighted mean value since is the probability that the interaction partner uses strategy .in addition , the _ average expected success _ will be assuming that the relative temporal increase of the proportion of individuals pursuing strategy is proportional to the difference between the expected success and the average expected success , we obtain the _ game dynamical equations _ \nonumber \\ & = & \nu p_x(t ) \big [ \langle e_x \rangle_t - \sum_y p_y(t ) \langle e_y \rangle_t \big ] \ , , \label{repli}\end{aligned}\ ] ] where the possibly time - dependent proportionality factor is a measure for the _ interaction rate _ with other individuals . according to ( [ repli ] ) , the proportions of strategies with an above - average success increase , whereas the other strategies will be diminished .note , that the proportion of a strategy does not necessarily increase or decrease monotonically .certain payoffs are related with an _ oscillatory _ or even _chaotic _ dynamics . equations ( [ repli ] ) are identical with the replicator equations from evolutionary biology .they can be extended to the _ selection - mutation equations _ \nonumber \\ & + & \sum_y \big [ p_y(t ) w_1(y\rightarrow x ) - p_x(t ) w_1(x\rightarrow y ) \big ] \ , .\label{mutation}\end{aligned}\ ] ] the terms which agree with ( [ repli ] ) describe a selection of superior strategies .the new terms correspond to the effect of mutations , i.e. 
to _ spontaneous _ changes from strategy to other strategies with possibly time - dependent _ transition rates _ ( last term ) and the inverse transitions .they allow to describe _ trial and error behavior _ or behavioral fluctuations .let us consider a social system consisting of a constant number of individuals .herein , denotes the number of individuals who pursue strategy at time .hence , the time - dependent vector reflects the _ strategy distribution _ in the social system and is called the _socioconfiguration_. if the individual strategy changes are subject to random fluctuations ( e.g. due to trial and error behavior or decisions under uncertainty ) , we will have a stochastic dynamics .therefore , given a certain socioconfiguration at time , for the occurence of the strategy distribution at a time we can only calculate a certain _ probability _ .its temporal change is governed by the so - called _ master equation _ \ , .\label{master}\ ] ] the sum over extends over all socioconfigurations fulfilling and ( [ sum ] ) . according to equation ( [ master ] ) , an _ increase _ of the probability of having socioconfiguration is caused by transitions from other socioconfigurations to . while a _ decrease _ of is related to changes from to other socioconfigurations .the corresponding changing rates are proportional to the _ configurational transition rates _ of changes to socioconfigurations _ given _ the socioconfiguration and to the probability of _ having _ socioconfiguration at time .the configurational transition rates have the meaning of transition probabilities per time unit and must be non - negative quantities .frequently , the individuals can be assumed to change their strategies independently of each other .then , the configurational transition rates have the form i.e. they are proportional to the number of individuals who may change their strategy from to another strategy with an _ individual transition rate _ . in relation ( [ trans ] ) , the abbreviation means the socioconfiguration which results after an individual has changed his / her strategy from to . it can be shown that the master equation has the properties for all times , if they are fulfilled at some initial time .therefore , the master equation actually describes the temporal evolution of a probability distribution .in order to connect the stochastic model to the game dynamical equations , we must specify the individual transition rates in a suitable way .therefore , we derive the mean value equations related to the master equation ( [ master ] ) and compare them to the selection - mutation equations ( [ mutation ] ) . the proportion is defined as the _ mean value _ of the number of individuals pursuing strategy , divided by the total number of considered individuals : taking the time derivative of and inserting the master equation gives \nonumber \\ & = & \sum_{\vec{n } } ( n'_x - n_x ) w(\vec{n } \rightarrow \vec{n}^{\,\prime } ) p(\vec{n},t)\ , , \label{not}\end{aligned}\ ] ] where we have interchanged and in the first term on the right hand side . 
taking into account relation ( [ trans ] ) , we get p(\vec{n},t ) \ , .\end{aligned}\ ] ] with ( [ av ] ) this finally leads to the _ approximate mean value equations _ \label{rate}\ ] ] however , these are only exact , if the individual transition rates are independent of the socioconfiguration .anyhow , they are _ approximately _ valid as long as the probability distribution is narrow , so that the mean value of a function can be replaced by the function of the mean value . this problem will be discussed in detail later on . comparing the rate equations ( [ rate ] ) with the selection - mutation equations ( [ mutation ] ), we find a complete correspondence for the case with and the _ success _ since whereas is again the mutation rate ( i.e. the rate of spontaneous transitions ) , the additional term in ( [ add ] ) describes _ imitation processes , _ where individuals take over the strategy of their respective interaction partner .imitation processes correspond to pair interactions of the form their frequency is proportional to the number of interaction partners who may convince an individual of strategy .the proportionality factor is the _ imitation rate_. relation ( [ prop ] ) is called the _ proportional imitation rule _ and can be shown to be the best learning rule .it was discovered in 1992 and says that an imitation behavior only takes place , if the strategy of the interaction partner turns out to have a greater success than the own strategy . in such cases ,the imitation rate is proportional to the difference between the success of the alternative and the previous strategy , i.e. strategy changes occur more often the greater the advantage of the new strategy would be .all specifications of the type \label{prop2}\ ] ] with an arbitrary parameter also lead to the game dynamical equations .however , individuals would then , with a certain rate , take over the strategy of the interaction partner , even if its success is smaller than that of the previously used strategy .moreover , if is not chosen sufficiently large , the individual transition rates can become negative . in summary , we have found a microscopic foundation of evolutionary game theory which bases on four plausible assumptions : 1 .individuals evaluate the success of a strategy as its average payoff in interactions with other individuals ( cf .( [ es ] ) ) .they compare the success of their strategy with that of the respective interaction partner , basing on observations or an exchange of experiences .3 . individuals imitate each others behavior .4 . in doing so, they apply the proportional imitation rule ( [ prop ] ) [ or ( [ prop2 ] ) ] .for illustrative reasons , we will now discuss an example which allows to understand how social conventions emerge .we consider the simple case of two alternative strategies and assume them to be equivalent so that the payoff matrix is symmetrical : if , the additional payoff reflects the _ advantage _ of using the same strategy like the respective interaction partner .this situation is , for example , given in cases of network externalities like in the historical rivalry between the video systems vhs and beta max . finally , the mutation rates are taken constant , i.e. . 
the resulting game dynamical equations are \big\ { w_1 + \nu a p_x(t ) \big [ p_x(t ) - 1 \big ] \big\ } \ , .\label{game}\ ] ] obviously , they have only _ one _ stable stationary solution if the ( control ) parameter is smaller than zero .however , for equation ( [ game ] ) can be rewritten in the form \left [ p_x(t ) - \frac{1 + \sqrt{\kappa}}{2 } \right ] \left [ p_x(t ) - \frac{1 - \sqrt{\kappa}}{2 } \right ] \ , .\ ] ] the stationary solution is unstable , then , but we have two new stable stationary solutions .that is , dependent on the detailled initial condition , one strategy will win the majority of users although both strategies are completely equivalent .this phenomenon is called _symmetry breaking_. it will be suppressed , if the mutation rate is larger than the advantage effect .the above model allows to understand how behavioral conventions come about .examples are the pedestrians preference for the right - hand side ( in europe ) , the revolution direction of clock hands , the direction of writing , or the already mentioned triumph of the video system vhs over beta max .it is very interesting how the above mentioned symmetry breaking affects the probability distribution of the related stochastic model ( cf .[ f1 ] ) and a broad initial probability distribution have been chosen . in each picture , the box is twice as high as the maximal occuring value of the probability . ] ) .for the probability distribution is located around and stays small so that the approximate mean value equations are applicable . at the so - called _ critical point _ , a _ phase transition _ to a qualitative different system behavior occurs and the probability distribution becomes very broad . as a consequence , the game dynamical equations do not correctly describe the temporal evolution of the mean strategy distribution anymore . for , a bimodal and symmetrical probability distribution evolves .that is , the likelihood that one of the two equivalent strategies will win through is much larger than the likelihood to find approximately equal proportions of both strategies . at the beginning , the initial state or maybe some random fluctuation determines ,which strategy has better chances to win .however , in the long run both strategies have exactly the same chance .it is clear , that in such cases the game dynamical equations fail to describe the mean system behavior ( cf .[ f2 ] ) , which would correspond to the average temporal evolution of an ensemble of identical social systems . in cases of oscillatory or chaotic solutions of the game dynamical equationsthe situation is even worse .in the last section we have seen that the approximate mean value equations with the so - called _ first jump moments _ ( cf .( [ not ] ) ) are not sufficient .this calls for corrected mean value equations and a criterium for the time period of their validity . 
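As a worked illustration of the symmetry breaking just described, the sketch below integrates the two-strategy game dynamical equation with a constant mutation rate w_1, advantage a and interaction rate nu. The prefactor (1 - 2 p_x) is not shown explicitly above and is reconstructed here from the quoted stationary solutions (1 +/- sqrt(kappa))/2 with kappa = 1 - 4 w_1/(nu a); it should therefore be read as an assumption consistent with that factorized form, not as a quote of the original equation.

```python
import numpy as np

def dp_dt(p, nu, a, w1):
    # two-strategy game dynamical equation with constant mutation rate w1;
    # the prefactor (1 - 2p) is reconstructed from the stationary solutions
    return (1.0 - 2.0 * p) * (w1 + nu * a * p * (p - 1.0))

def integrate(p0, nu=1.0, a=1.0, w1=0.1, dt=0.01, steps=5000):
    """Simple Euler integration of the symmetric coordination game."""
    p = p0
    for _ in range(steps):
        p += dt * dp_dt(p, nu, a, w1)
    return p

# with nu = a = 1 and w1 = 0.1: kappa = 0.6, stable fixed points ~ 0.113 and 0.887
for p0 in (0.45, 0.55):
    print(p0, "->", round(integrate(p0), 3))
```

For kappa > 0, trajectories starting slightly below or above p_x = 1/2 relax to different stable fixed points, i.e. one of the two equivalent strategies wins, while for kappa < 0 (mutation rate dominating the advantage effect) both initial conditions return to p_x = 1/2.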
if the individual transition rates depend on the socioconfiguration , the exact mean value can only be evaluated via formula ( [ av ] ) .this requires the calculation of the probability distribution and , therefore , the numerical solution of the respective master equation ( [ master ] ) .since the number of possible socioconfigurations is normally very large , an extreme amount of computer time would be necessary for this .luckily , it is possible to derive from ( [ not ] ) the _ corrected mean value equations _ by means of a suitable taylor approximation .this equation depends on the _ covariances _ which can be determined by means of the _ covariance equations _ \ , .\label{corrcov}\end{aligned}\ ] ] the functions are called the _ second jump moments_. equations ( [ corrmean ] ) and ( [ corrcov ] ) build a closed system of equations , but still no exact one , since this would depend on higher moments of the form . nevertheless , according to figure [ f2 ] the corrected mean value equations yield significantly better results than the approximate ones . as a consequence , they are valid for a much longer time period .suitable _ validity criteria _ are the _ relative variances _ since these are a measure for the relative width of the probability distribution .it can be shown that the covariances and all higher moments are small , if only is much smaller than 1 for every .numerical investigations indicate that the approximate mean value equations begin to separate from the exact ones as soon as one of the relative variances becomes greater than 0.04 .the corrected mean value equations and covariances remain reliable as long as is smaller than 0.12 for all ( cf .[ f2 ] ) .a more detailled discussion of the above matter is presented elsewhere .the above discussed behavioral model can be generalized in different respects . [[ modified - transition - rates ] ] modified transition rates : + + + + + + + + + + + + + + + + + + + + + + + + + + the strange cusp at in figure [ f1 ] , which comes from the discontinuous derivative of at , can be avoided by the modified imitation rates this ansatz agrees with relation ( [ prop2 ] ) in linear approximation for and , but it always yields non - negative imitation rates . 
similar to ( [ prop ] ) it guarantees two essential things : 1 .the imitation rate grows with an increasing gain of success .if the alternative strategy is inferior , the imitation rate is very small ( but , due to uncertainty , not negligible ) .the results of the corresponding stochastic behavioral model are presented in figure [ f3 ] .they show the usual flatness of the probability distribution at the critical point , where again a phase transition occurs .[ [ dynamics - with - expectations ] ] dynamics with expectations : + + + + + + + + + + + + + + + + + + + + + + + + + + + the decisions of individuals are often influenced by their _ expectations _ about the success of a strategy at future times .these will base on some kind of extrapolation of past experiences with the success of .if expected payoffs at future times are weighted exponentially with their distance from the present time , one would set [ [ other - kinds - of - pair - interactions ] ] other kinds of pair interactions : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + apart from imitative behavior , individuals also sometimes show an _ avoidance behavior _ especially if they dislike their interaction partner ( so - called ` snob effect ' ) .this can be taken into account by an additonal contribution to the individual interaction rates : denotes the _ avoidance rate_. [ [ several - subpopulations ] ] several subpopulations : + + + + + + + + + + + + + + + + + + + + + + + sometimes one has to distinguish different _ subpopulations _ , i.e. different kinds of individuals .this is necessary , if not all individuals have the same set of strategies .a similar thing holds , if the considered social system consists of competing groups , where only individuals of the same group behave cooperatively .the generalized behavioral equations are \ ] ] with individual interaction rates of the form \ , .\ ] ] [ [ inclusion - of - memory - effects ] ] inclusion of memory effects : + + + + + + + + + + + + + + + + + + + + + + + + + + + + if the strategy distribution at past times influences present decisions in a non - markovian way , the approximate mean value equations have the form \ , .\ ] ] for example , in cases of an exponentially decaying memory one would have have found a microscopic foundation of the game dynamical equations , basing on a certain kind of imitative behavior .moreover , a stochastic version of evolutionary game theory has been formulated .it allowed to understand the self - organization of social conventions as a phase transition which is related with symmetry breaking .moreover , we have seen that the game dynamical equations correspond to approximate mean value equations . normally , they agree with the mean value equations of stochastic game theory for a certain time period only , which can be determined by calculating the relative variances . for an improved description of the average system behaviorwe have derived corrected mean value equations which require the solution of additional covariance equations .the interpretation of the game dynamical equations follows by reformulating these in terms of a _ social force model , _ assuming a continuous strategy set : the force term delineates spontaneous strategy changes by individual , whereas is the _ interaction force _ which originates from individual and influences individual . here , denotes _dirac s delta function _ ( which yields a contribution for only ) . 
according to ( [ force ] ) ,the game dynamical equations describe the _ most probable strategy changes _ rather than the average ( representative ) evolution of a social system .therefore , they neglect the effects of fluctuations on the system behavior .
|
The game dynamical equations are derived from Boltzmann-like equations for individual pair interactions by assuming a certain kind of imitation behavior, the so-called proportional imitation rule. They can be extended to a stochastic formulation of evolutionary game theory which allows the derivation of approximate and corrected mean value and covariance equations. It is shown that, in the case of phase transitions (i.e. multi-modal probability distributions), the mean value equations do not agree with the game dynamical equations. Therefore, their exact meaning is carefully discussed. Finally, some generalizations of the behavioral model are presented, including effects of expectations, other kinds of interactions, several subpopulations, or memory effects.
|
a large number of proteins perform their biological activity under the shape of dimers ( or oligomers ) .a dimer is a protein whose native conformation is a globule build out of two disjoint chains .depending whether the two chains have the same sequence or not , they are referred to as homodimers or heterodimers .notwithstdanding this difference , they have been observed to fold through two major paradigms .some of the known dimers fold according to a three state mechanism ( d ) , where first the denaturated chains of the monomers ( d ) assume conformations rich of native structures independently of each other ( i : folding intermediate ) , and subsequently the two parts come together to form the dimer ( n : native ) .this is the case , for example , of aspartate aminotransfease , where one can observe three populated species , namely unfolded monomers ( d ) , partially folded monomers ( i ) and folded dimers ( n ) .a different behaviour is displayed by , for example , p22 arc repressor , whose chains dimerize without populating any monomeric native like intermediate ( two state process , d ) . in this caseone can only identify the unfolded monomers ( d ) and the native dimers ( n ) .the aim of the present work is to achieve , using model calculations , a basic understanding of the folding mechanism of dimers based solely on energetic arguments , as was already done in the case of small , single domain proteins ( monomers ) . in this case, it was found that good folders are those sequences whose total energy in the ground state conformation lies below a threshold value .this threshold energy is solely determined by the number of amino acids forming the chain , the standard deviation of the contact energies used in designing the sequences and by their composition .it is equal to the lowest energy that random heteropolymers of the same length and composition can achieve when they compact into the ensemble of conformations structurally dissimilar ) of the similarity parameter ( order parameter ) , defined as the ratio between the number of native contacts present in the structure to which the sequence has compacted , and the total number of contacts of the native structure used to design the sequence . in the case of dimers , it is convenient to introduce two similarity ( order ) parameters for any given conformation , namely and which correspond to the relative value of native contacts within each chain and across the two chains , respectively . ] to the native conformation . aside from being responsible for the thermodynamical uniqueness and stability of the ground state , the low energy character of single domain good folders is also essential for their dynamics . the only way a small, single domain lattice designed protein can display an energy below is by positioning ( few ) strongly interacting amino acids in some key sites of the protein .these sites , called `` hot '' sites in ref . participate in the formation of local elementary structures ( les ) , which bias the chain to its native conformation and which build the ( post critical ) folding nucleus when they get together , responsible not only for the stability of the protein , but also of its fast folding ability .amino acids in these sites are highly conserved in evolution and determine the topology of the space of folding sequences . in the followingwe will characterize the dynamic behaviour of model homodimers with respect to the energetic properties of their ground state conformation . 
since a reliable potential function for residue residue interaction is not available , we can compare the results of model calculations to real proteins only through the analysis of conservation patterns in families of analogous , measured as percentage of matching residues , are likely to be analogous . ]such an analysis has been performed for the analogs of aspartate aminotransfease and of p22 arc repressor , representative examples of a three state and a two state folding homodimer , respectively .the corresponding results are displayed in figs .[ arc ] and [ asp ] , and discussed in section v , where they are compared with the conservation patterns found in lattice model proteins .the model we use to study homodimers has been largely employed with success in the study of monomers .this is because , in spite of the strong simplifications introduced in the description of the proteins , aimed at making feasible dynamical simulations of the folding process , the model still contains the two main ingredients which are at the basis of the distinctive properties of proteins : polymeric structure and disordered interactions among the amino acids .the model is used to study the general thermodynamical and kinetical properties of notional dimers , independently on details concerning secondary structures , side chains , etc . of course , all these details may prove of relevance when addressing specific questions , as was e.g. done in refs . concerning the oligomerization equilibrium properties of leucine zippers .according to the model , a protein is a chain of beads on a cubic lattice , each bead representing an amino acid ( selected from twenty different types ) which interacts with its nearest neighbours through a contact potential with numbers taken from the statistical analisys of real proteins carried out by miyazawa and jernigan in ref . ( for details about the model see , e.g. , refs . ) . in the case of single domain proteinsit is , in principle , simple to design sequences which fold to a given target conformation . due to the fact that the thermodynamical and kinetical behaviour of a protein are essentially determined by its total energy, it is possible to design folding sequences by searching in the space of sequences for those having energy lower than . using a monte carlo algorithm, a sequence with energy in the target conformation has a probability to be selected proportional to , where is an intensive variable which plays the role of temperature and gives the degree of bias towards low energy sequences . in the evolutionary context, has the meaning of selective pressure with respect to the protein ability to fold : the lower is the value of , the stronger is this pressure and the better are the folding properties of the selected sequences . in particular , for values of lower than the temperature ( which is the temperature at which the mean energy is exactly ) , the average energy of the selected sequences is lower than .consequently one obtains in this way sequences with a unique and stable native conformation and able to find it rapidly . in other words ,the designed sequences display a unique ground state ( native conformation ) with energy into which they fold fast . in order to design homodimers, we first set a target conformation built out of two identical parts ( chosen equal to the native structure of a 36mer widely used in the literature in studies of small , single domain proteins ) , having a face in contact ( see fig .[ native ] ) . 
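The design move just described, i.e. energy-biased swapping of amino acids at fixed composition, can be summarized by the following Python sketch. The Boltzmann-like selection weight exp(-E/tau) is assumed from the description of tau as an evolutionary temperature, and the contact-energy matrix U is a placeholder for the Miyazawa-Jernigan parameters. For the homodimer each accepted swap would simply be repeated on the second chain, and the bulk and interface energy contributions weighted with their own temperatures, as described below.

```python
import math
import random

def design_sequence(seq, contact_map, U, tau, n_swaps=100_000):
    """Metropolis search in sequence space at selection temperature tau.

    seq         : list of residue type indices (composition preserved by swaps)
    contact_map : list of (i, j) contacts of the target conformation
    U           : 20 x 20 contact-energy table (placeholder for the MJ matrix)
    """
    def energy(s):
        return sum(U[s[i]][s[j]] for i, j in contact_map)

    E = energy(seq)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]              # trial swap
        dE = energy(seq) - E
        if dE <= 0 or random.random() < math.exp(-dE / tau):
            E += dE                                  # accept the swap
        else:
            seq[i], seq[j] = seq[j], seq[i]          # reject: undo it
    return seq, E
```

With this acceptance rule the selected sequences are distributed proportionally to exp(-E/tau), so lowering tau concentrates the sampling on sequences whose energy in the target conformation lies below the threshold discussed above.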
in the present modelthe monomers have been chosen to be mirror images of each other , consequently the overall structure is symmetrical with respect to the interface , a feature which simplifies the computational handling of the protein .the second step in the homodimer design is to choose a realistic ratio between the different kinds of amino acids and two evolutionary temperatures , and , which control the amino acids in the bulk and at the interface , respectively .we then use a multicanonical sampling algorithm ( see appendix a ) to select a set of sequences , according to the distribution of probability where and are the energies associated with the contacts between residues belonging to the same monomer and between residues across the interface , respectively , and is the partition function . during the sampling process , couples of residues belonging to one of the two monomers are swapped , the same swap being repeated on the other monomer , in such a way that the sequence remains identical in the two parts of the dimer . moreover , in this way the overall concentration of the different kinds of residues is kept constant . starting from a random sequence displaying a realistic composition ( i.e. the average ratio of different kinds of amino acids as found in natural proteins ) , we make swappings , sufficient to ensure a stationary distribution of energy , before selecting a sequence , repeating this process times , to obtain a statistically representative set of designed sequences . consequently , in constructing this set we sample only sequences of the total of sequences . in keeping with the fact that each of the selected sequences are separated from each other by swappings , it is reasonable to assume that is a set of evolutionary uncorrelated sequences whose properties ( e.g. , conservation patterns ) can be compared to those of non homologous sequences ( analogous ) families of real proteins .the use of two different temperatures and to select residues in the bulk and at the interface respectively , corresponding to two different evolutive pressures , allows one to control how the energy , and the amino acid conservation pattern , is distributed inside the dimer .details and caveats concerning the details and limitations of the design procedure are given in appendix b. examples of sequences selected at different evolutionary temperatures to fold to the structure displayed in fig .[ native ] are listed in table i.in the present and in the next sections we shall study the properties and the behaviour of the designed sequences in conformation space , making use of long monte carlo runs . in what follows we shall ( mainly ) concentrate on the thermodynamical properties of these sequences , while in sect .iv we dwell primarily on their dynamical behaviour .we first proceed to the calculation of the ground state energy of the designed sequences .because a complete enumeration of all possible homodimer conformations is out of question , the thermodynamical properties of the sequences selected at different values of and ( see e.g. table i ) are analyzed through a standard metropolis algorithm in the conformational space at fixed temperature ( temperature in `` real '' conformational space , not to be confused with the `` evolutionary '' temperatures and in the space of sequences ) .the simulations were performed with periodic boundary conditions , i.e. 
in a cubic wigner cell of dimension ( cf .the need for using a finite volume arises from the fact that , in an infinite volume each conformation with disjoint chains have an infinitely negative free energy , due to its infinite translational entropy , while that of the dimer is proportional to , being the volume allowed to the protein to move in . ] .long monte carlo simulations ( mc steps ) were performed for each pair of ( identical ) designed chains .two outcomes were observed : ( a ) folding into the dimeric native state , ( b ) aggregation into a set of disordered clumps . in what followswe shall only analyze the outcome ( a ) , while outcome ( b ) will be discussed in a forthcoming paper . in fig .[ dyn_phases ] we show the results obtained by carrying out the mc simulation at , for which the fractional population of the native state is .full simbols correspond to sequences displaying the behaviour ( a ) , while empty symbols correspond to sequences displaying the behaviour ( b ) .the solid line indicates the loci where the values of and correspond to the energy .we note that this line delimits the area in which the corresponding designed sequences display the behaviour of type ( a ) .this is not surprising , considering the fact that the lowest energy of a random conformation with the same number of contacts as the target conformation is ( evaluated in the approximation of the random energy model , being the standard deviation of the interaction matrix elements and the effective coordination number ) .as in the case of single domain proteins , conformations with energy lower than have not to compete with the sea of random structures , and consequently are good native conformations ( unique and stable ) in the native conformation ( i.e. , quantity closely connected with the z score , and where is the standard deviation of the interaction matrix ) is actually more general . in fact , because these monomers fold through local elementary structures ( les ) stabilized by few ( hot ) , strongly interacting amino acids , and because these amino acids are conserved for all sequences with , one can restate the large energy gap paradigm of good folders as follows : sequences which display a very small number of hot amino acids and which conserve , in any way , the energy gap . ] . in what followswe shall discuss , making again use of the results of long monte carlo runs and periodic boundary conditions because this was found the most appropriate from numerical considerations ( cf .c ) . ] , the properties of two of the sequences shown in table i , namely sequences and , chosen as representative examples of chains building homodimers which fold according to a two and to a three - state mechanism , respectively .they were both designed at low and , in the first case with and in the second case with . in both cases ,the designed sequences display a first order transition in conformational space from the denaturated state into the native dimeric structure , as in the case of single domain proteins , as testified by the discontinuous behaviour of the similarity ( order ) parameters at the critical temperature .in other words , the order parameters display at a double peak behaviour . 
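For reference, the two similarity parameters can be computed directly from contact lists, as in the short sketch below; the representation of a conformation as lists of (i, j) intra-chain and inter-chain contacts is an assumption about the data structure, not taken from the original code.

```python
def overlap(current_contacts, native_contacts):
    """Fraction of native contacts present in the current conformation."""
    native = set(native_contacts)
    return len(native & set(current_contacts)) / len(native)

def order_parameters(intra_a, intra_b, inter, native_intra, native_inter):
    """q1: average intra-chain similarity of the two monomers,
       q2: similarity of the interface (inter-chain) contacts."""
    q1 = 0.5 * (overlap(intra_a, native_intra) + overlap(intra_b, native_intra))
    q2 = overlap(inter, native_inter)
    return q1, q2
```

Histogramming q1 and q2 along a long run at the transition temperature then exhibits the double-peak structure mentioned above, which is the signature of the first order transition.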
as a rule , the critical temperatures and in conformational space will be different .in particular , in the case of sequence of table i , , in keeping with the fact that this sequence was designed setting larger than .this property has the consequence that the native state is stable at temperatures , being the critical temperature , at which the area of the peak associated to the unfolded state ( small ) is equal to that associated to the native state ( ) , and that dimerization happens already at .the fact that indicates that while it is possible to find a situation ( at such that ) in which the two chains build the native interface but the monomers are not folded ( cf .5(b ) ) , there is not a temperature at which the monomers are folded and separated ( cf .5(a ) ) .also sequence undergoes a first order transition , but in this case the critical temperature associated with the bulk is larger than that associated with the interface ( ) .this indicates that for sequences selected at comparatively high selective temperature ( , cf .table i ) , there exists a phase where the two chains are folded but separated ( cf .another interesting thermodynamical property of a dimer is the localization of the sites which are mostly responsible for the stabilization of the native state .these sites can be identified by the change in energy that mutations induce in the native conformation . for a given site ,19 mutations can be carried out . because may change markedly depending on the point mutation introduced in the protein , it is useful to define the average value taken over all possible substitutions . in keeping with the result of studies of single domain protein , in particular of the sequence of table i , `` hot '' sites ( sites which are most sensitive to point mutations , which as a rule , if mutated , denaturate the protein ) are those sites for which , where is the average value of ( ) taken over all the sites of the chain , while is the associated standard deviation .`` cold '' and `` warm '' sites in the nomenclature of ref . are those sites for which and respectively for the case of s , i.e. ( cold ) , ( warm ) , ( hot ) , in keeping with the fact that , in this case , and . ] . in fig .7 we display the values of associated with sequence , , and ( s monomer ) of table i. a marked difference in the pattern distribution of hot and warm sites associated with sequence ( two state folding ) and ( three - state folding ) is observed .in fact , sequence ( fig .7(a ) ) displays no hot sites and double as many warm sites than sequence ( fig .7(b ) ) , the properties of the sites of this last chain in the native conformation being essentially those found in the study of the isolated monomer s ( fig .7(d ) , cf . also ) .furthermore , the amino acids occupying the warm sites of chain are , in average , more strongly interacting ( 2.4 ) than those associated with the warm sites of chain ( 1.9 ) , althoug still much less than hot sites of this chain ( 3.1 ) .in other words , in sequence most of the binding energy is concentrated in few `` hot '' sites , while in the case of sequence the stabilization energy is spread more homogeneously throughout the monomers .we now concentrate our attention , making again use of the results of the monte carlo simulations already discussed in the last section , on the dynamics of the process that leads the system from a random conformation to the native dimer ( for details and caveats see appendix b ) . 
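The classification of sites by their sensitivity to point mutations can be sketched as follows. Because the model energy is a sum over contacts, only the contacts involving the mutated site contribute to the energy change; the cut-offs k_warm and k_hot are placeholders, since the exact thresholds used for the classification are not reproduced here.

```python
import numpy as np

def site_impact(seq, site, contact_map, U, n_types=20):
    """Average energy change in the native conformation over all point
    mutations of `site` (the 19 possible substitutions)."""
    partners = [j for i, j in contact_map if i == site] + \
               [i for i, j in contact_map if j == site]
    e_native = sum(U[seq[site]][seq[p]] for p in partners)
    deltas = [sum(U[a][seq[p]] for p in partners) - e_native
              for a in range(n_types) if a != seq[site]]
    return np.mean(deltas)

def classify_sites(impacts, k_warm=1.0, k_hot=2.0):
    """Label each site cold / warm / hot by how far its average mutation
    impact lies above the mean, in units of the standard deviation."""
    mu, sigma = np.mean(impacts), np.std(impacts)
    return ["hot" if d > mu + k_hot * sigma
            else "warm" if d > mu + k_warm * sigma
            else "cold" for d in impacts]
```

Applied to the dimeric native conformation, such a per-site profile reproduces the qualitative difference noted above: a few strongly interacting hot sites in the three-state case versus a more uniform spread of warm sites in the two-state case.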
the first issue to assessis whether the dimeric native state is accessible on a short time scale ( short with respect to the random search time , mc steps ) . for each of the sequences selected at various evolutionary temperatures ( some of them being listed in table i ) ,20 simulations of mc steps have been performed , recording the first passage time , i.e. the folding time ( cf . tableii ) into the dimeric native conformation .each set of simulations has been repeated at three temperatures , , and .we shall first discuss the case of sequences optimized at very low values of both and and with , so that the interface energy is close to its global minimum ( solid squares in fig .[ dyn_phases ] ) .this design procedure was followed in the expectation to obtain sequences which fold through a two state process , namely first dimerization and then folding . to check this scenario , we have determined through dynamical mc simulations the distribution of ( and ) associated with sequence of type ( cf .table i ) , and calculated at the instant in which ( or ) reach , for the first time , a value of the order of one and vary discontinuously as a function of the number of mc steps , we have used the mc step at which a value of or larger than is reached for the first time to define the instant at which to calculate the value of the other order parameter ( and thus the associated distribution ) or respectively . ] .the results displayed in fig .8(a ) show that , by the time the bulk of the two chains fold , is essentially equal to 1 , indicating that the interface between the two chains is already in place . on the other hand , at the time in which acquires for the first time a value , displays a rather flat distribution , its average value being .one can conclude that sequence of table i first dimerizes and then folds . the fact that in the present case the folding time increases with temperature ( cf .table ii ) indicates that the dimerization process is not diffusion limited .that is , is determined not only by the time needed by the two interfaces to come in contact , but also by the stability of the interface structure . in first approximation ,the diffusion time for the interfaces to meet , in the approximation that two interfaces search for each other randomly in the space of configurations , is steps , where 6 is the number of faces of an ideal cubic conformation , 4 takes into account the rotational symmetries of the faces and is the volume of the system .this time is much shorter than the folding time . 
on the other hand , at relative population of an isolated fragment containing the 12 interface residues in place ( forming the surface of one of the two monomers ) has been calculated to be ( determined through a mc simulation ) , so that the probability to have the two interfaces structured at the same time is .consequently , in this simple model , the folding time is predicted to be , which agrees well with the value found in detailed mc simulations and displayed in table ii .this picture explains also the increase of aggregation probability with temperature .in fact , the higher is the temperature , the longer is the time that partially structured surfaces which eventually dimerize move around in configurational space and can bind to wrong partners , causing aggregation .also the behaviour of unfolding times agrees with the picture of folding after in fact , starting from the native conformation , the average decay of with respect to time is ( cf .[ unfold ] ) at ( the values for other temperatures are listed in table ii ) . on the contrary ,the distribution of is best fitted by a stretched exponential in the form , with and , the average time being .the fact that does not follow an exponential law indicates that the unfolding of each of the two chains is not a process which depends only on its internal ( intra monomer ) contacts , like e.g. in the case of single domain proteins , cf . , but is subordinated to an external event . in keeping with this fact , and because the detaching time ( the characteristic decay time of ) is shorter than , one can conclude that the breaking of the native bonds associated with the internal structure of the two chains is a consequence of their detaching or , in other words , that the stability of the monomer structures relies on the presence of the interface .we now turn our attention to the sequences selected at in order to be close to the global minimum of the bulk energy ( ) and corresponding to the solid triangles in fig . [ dyn_phases ] .sequence of table i is an example of this class of designed homodimers .the dynamical behaviour of this sequence is quite different from that of sequence , although at it folds with probability to its native conformation , in an average time ( first passage time ( fpt ) ) of ( cf .table iii ) mc steps , quantities which is quite similar to those associated with sequence ( and mc steps , respectively ) .the analysis of the time dependence of and shows ( cf .8(b ) ) that for sequence is very small at the time in which , while at the time at which , is already essentially equal to 1 , indicating a three state scenario and the esistence of an intermediate state where the two monomers are folded and not dimerized .that is , sequence first fold to its monomeric native conformation and then forms the dimer . in the folding process , this sequence follows the hierarchical path typical of monomeric model proteins , building first local elementary structures ( les ) , then the ( postcritical ) folding nucleus , finding the monomeric native conformation shortly after .in addition , in the present case , there is a further step which consists in the association of the two monomers to build the dimer ( to be noted that sequence has the same local elementary structures and folding core than the monomeric protein studied in ref . , as testified by the fact that the hot sites are essentially the same ( cf . figs .7(b ) and ( d ) ) ) . 
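Returning to the unfolding curves discussed earlier in this section, the exponential and stretched-exponential forms can be fitted with a few lines of scipy. The arrays t and q are assumed to hold the time axis (in MC steps) and the run-averaged order parameter, normalized so that q(0) is close to 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, tau):
    return np.exp(-t / tau)

def stretched_exponential(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

def fit_unfolding(t, q, stretched=False):
    """Fit the run-averaged decay of an order parameter during unfolding."""
    if stretched:
        popt, _ = curve_fit(stretched_exponential, t, q,
                            p0=(t.mean(), 1.0),
                            bounds=([0.0, 0.0], [np.inf, 2.0]))
    else:
        popt, _ = curve_fit(exponential, t, q,
                            p0=(t.mean(),), bounds=(0.0, np.inf))
    return popt
```

A fitted exponent beta clearly different from 1 signals that the decay is not a simple single-step process, in line with the interpretation given above that the loss of intra-chain native contacts is subordinated to the detaching of the two chains.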
the same sequence , when studied as an isolated chain at the same temperature , folds in an average time of steps , indicating that the time limiting step is the association ( it takes steps ) .as temperature increases , the folding time decreases slightly ( see table iii ) .this decrease is much less pronounced than the decrease in the diffusion time as measured by the inverse of the diffusion coefficient ( which is exponential with temperature , see third column in table iii ) , indicating that the dimerization time is not purely diffusion limited . on the contrary, the stability of the interface ( which depends exponentially on temperature as well ) plays an important role in the dimerization time . to check the generality of the results discussed above, we have repeated the design and the folding for another three dimensional conformation , where the native structure of each of the monomers is the same , but the interface involves the opposite face than in the case of the dimeric conformation shown in fig .while the folding time of sequences which first fold and then aggregate is similar to that of sequence , the folding time for sequences behaving according to a two state mechanism is substantially higher ( of the order of mc steps ) .this is due to the much lower stability of the interface built out of the two , rather unspecific surfaces stabilized by the contacts between residues lying rather distant from each other along the chain ( e.g. 1932 , 1732 ) , while in the previous case the surfaces were pieces of incipient stabilized by quartets of residues ( e.g. 36 , 710 ) .nonetheless , also in this case it is possible to find sequences which fold according to the two paradigms discussed above .summing up , it is found that if the folding of the homodimer is controlled by stable les ( cf . ) stabilized by few hot , strongly interacting amino acids , the folding time decreases with temperatures , as the folding of the protein is ( post critical ) folding nucleus formation controlled . in fact , the folding nucleus is the result of the docking of the les in the appropriate way , and thus controlled by the diffusion time of the les , a process which is speed up by increasing the temperature .this happens until one reaches a temperature at which the stability of les is affected , beyond which the folding time rises again .since the stabilization energy of les is much larger than the interaction energy between pairs of amino acids , there exists a large window of temperatures at which folding time is short .this scenario reflects , to a large extent , the temperature dependence of the folding time of monomers of which the homodimer is made of ( cf .the situation is quite different in the case of a dimer built out of two sequences of type , which first dimerize and then fold ( two state process ) . in this case , the folding of the system is not controlled by les .this is because , were one to put `` hot '' amino acids ( needed to stabilize the les ) on the surfaces which dimerize , these surfaces will be so reactive that the system will aggregate with high probability ( open circles in the low region of fig .but if the dimerizing surfaces do not contain `` hot '' amino acids , the folding of the remaining amino acids ( `` volume '' ) can not depend on les either , otherwise the system will behave more like sequences of type . 
consequently ,if both the dimerizing surfaces and the remaining part of the ( `` volume '' ) system are marginally stable , the binding energy being essentially uniformly distributed among all the pairs of nearest neighbours amino acids ( low associated essentially with all sites ) , the fast folding temperature window is expected to be much narrower than in the case of homodimer based on sequence of type .in fact , already at low temperature ( ) , the system based on sequences of type is found to be passed the minimum of the curve shown in fig .10 , and one observes an increase of the folding time as a function of ( cf . table ii ) .a possible connection between the model of homodimer folding discussed above and real proteins can be achieved by studying the degree of amino acid conservation in each site of analogous dimers .this can be done by calculating the entropy in the space of sequences in each site , given by where is the probability of finding the amino acid of kind at site , a probability to be calculated over a large number of sequences folding to the same conformation . within the present model ,the calculation of is straightforward . given a dimeric native conformation ( fig .[ native ] ) , and a couple of evolutionary temperatures and , one selects from the corresponding set of designed sequences a representative sample ( e.g. ) of them . since these sequences are aligned , it is possible to calculate the associated values of and , consequently , .the entropy at each site ranges from , if only one kind of amino acids occupies that site in all sequences selected , to , if each kind of amino acid is equiprobable , the latter being the case at large values of the design temperature .we have shown elsewhere that , for monomeric proteins , low entropy sites are those where most of the stabilization energy is concentrated .the search of low entropy sites rather than of strongly interacting sites has the advantage that it can be easily performed for real proteins , for which a reliable energy function is not available . in figs .11 and 12 we display the entropy per site and the associated distribution of entropy for : ( a ) sequences displaying the folding behaviour of sequence ( solid squares in fig .4 ) , ( b ) for sequences displaying the folding behaviour of sequence ( solid triangles in fig .4 ) , ( c ) aggregating sequences ( empty circles in fig .4 ) , ( d ) for the monomer s , single domain protein , which folds into the native conformation corresponding to half of the homodimer shown in fig .3 ( sequence of table i ) . one observes a marked difference between the entropy per site shown in fig .11(a ) and that displayed in fig .in fact , sequences of type have many more conserved sites than sequences of type ( the ratio between these two numbers being 3.3 ) , the average entropy being a factor of 1.6 larger than that associated with sequences of type .furthermore , in the case of sequence only of the conserved sites lie on the surface , while in the case of sequence this ratio is . on the other hand , cases ( a ) and ( b ) can not be distinguished by looking at the interface / bulk average entropy , being 1.42 and 1.43 respectively in the case of sequence of type and 1.17 and 1.24 in the case of sequences of type situation is similar if one looks at the values of the average entropy associated with conserved sites lying on the interface ( ) as compared to the average entropy associated with all conserved sites ( cf .table iv ) . 
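The per-site entropy used throughout this analysis is just the Shannon entropy of each alignment column. A minimal sketch is given below (natural logarithm, so that a flat distribution over the twenty residue types gives the maximum value ln 20, approximately 3.0; this convention is an assumption of the sketch).

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon entropy  s = -sum_a p(a) ln p(a)  of one alignment column."""
    counts = Counter(column)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def entropy_profile(aligned_seqs):
    """Per-site entropy of a set of aligned sequences (one string per sequence)."""
    return [site_entropy(col) for col in zip(*aligned_seqs)]
```

For the designed lattice sequences the columns contain no gaps; for real alignments one still has to decide how to treat gaps, e.g. as an extra symbol or by skipping the sequences with a gap at that column, both of which appear among the variants used below for the real-protein profiles.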
for aggregating sequences ( empty circles in fig .4 , see fig .11(c ) ) , there is an evident difference between the entropy of the interface ( sites 112 ) , whose average entropy is 1.18 , and the bulk , whose average entropy is 1.71 .aggregation arises due to the fact that the surface , which essentially contains all of the `` hot '' sites of the protein ( of them ) , is too reactive . summing up, sequences of type give rise to a dimer where the conservation pattern is distributed much more uniformly among all sites and where there are quite a large number of important sites although much less conserved than the few hot sites of sequences of type .these results indicate that , from the conservation patterns of lattice designed sequences , it is possible to recognize sequences which aggregate from those which dimerize . among these we can recognize sequences which dimerize through a three state scenario or through a two state mechanism by looking at the overall distribution of entropy , but not at the difference between the entropy of the interface and of the bulk .these results agree with the findings by grishin and phillips , which analyze the conservation of residues on the surface of five oligomeric enzymes and find no signal of any larger conservationism on the interface with respect to the bulk .we suggest that one has to analyze the distribution of entropy , not the bulk / interface partition , to derive the behaviour of a family of sequences . to test the predictions of the model, we have calculated the entropy at each site for two dimeric proteins , i.e. p22 arc repressor and aspartate aminotransfease ( cf . figs .[ arc ] and [ asp ] ) . the former folds according to the paradigm of sequence of table i ( first dimerize and then fold ) , while the latter follows the opposite scenario ( the monomers first fold and then dimerize ) . from the fssp databasewe have selected , for each of the two proteins , a family of non homologous aligned sequences which have the same fold and the same interface ( z score larger than 2 ) , and from them we have calculated . the entropy was calculated in three different ways , in order to take care of the gaps in the alignment .the results indicated in figs .1 and 2 in terms of solid lines were obtained considering the positions in the gaps as filled by a kind of amino acid which is , by definition , different from all others ( including the other `` fake '' amino acids put in the gaps of the other sequences ) .dots indicate a calculations where amino acids are grouped into six classes ( cf . ref . ) , while the dotted line is calculated ignoring sequences which , at a given site , display a gap . below the x axis it is displayed , with a black stripe , the interface region .the relative number of conserved sites is considerably larger ( a factor 2.7 , cf .table iv ) in the case of p22 arc repressor than in the case of aspartate aminotransfease . of them , only lie on the interface of aspartate aminotransfease while does in the case of the two state p22 arc repressor .further confrontation between the results of lattice model calculations and real proteins can be carried out through the entropy distributions associated with fig .11 ( designed proteins ) and figs . 1 and 2 ( p22 arc repressor and aspartate aminotransfease ) , as displayed in figs . 
12 and 13 , respectively .the arc repressor distribution displays an abrupt increase of the entropy at a value somewhat larger than 1 which is the edge of a well defined peak , a behaviour which is very similar to that of sequences of kind shown in fig .12(a ) . on the other hand , aspartate aminotransfease displays a gradual increase in the distribution of entropy , and a two peak structure , which resembles the corresponding quantity associated with sequences of type ( cf .12(b ) ) , having few sites which are highly conserved . these results , as well as those displayed in table iv ,thus seem to confirm the overall picture emerging from lattice model calculations .we conclude this section by recalling the fact that the data used to derive the numbers displayed in figs . 1 and 2 ( 6 and 11 sequences respectively )have rather poor statistics , because not many analogous sequences with the same interface are available in the pdb .another caveat to be used in comparing the data with the model calculations is the fact that real proteins aside from structural features , display functional properties ( not present in model calculations ) , which can have conditioned the conservation of amino acids at precise sites , in particular surface and interface sites .in the present paper we have designed , with the help of a lattice model , sequences which either first dimerize and then fold or , conversely , first fold and then dimerize .they were obtained by minimizing their energy in a given ( dimeric ) conformation with respect to amino acid sequence at constant composition .the swapping of amino acids was carried out making use of monte carlo techniques .two design temperatures have been used to carry out the minimization process , one controlling the evolutionary pressure on the interface amino acids , the other controlling the pressure on the amino acids occupying the bulk of the dimer .we know that the way evolution solves the protein folding problem of small monoglobular systems , at least within lattice models , is by asigning the commanding role of the process to few , hot , highly conserved amino acids ( low entropy `` bump '' of fig .12(d ) ) . for the remaining amino acids one can essentially choose anyone among the twenty different types .in fact , we know that there are sequences sharing the same `` hot '' amino acids which fold to the native structure on which the monomeric sequence s ( sequence , table i ) was designed ( high entropy peak of fig .12(d ) ) . the further requirement of dimerization after folding, typical of sequences of type , seems to be solved by evolution through an increase in the number of commanding amino acids , that is , by increasing the number of highly conserved , low entropy monomers . in other words , by shifting amino acid ( sites ) from the high s peak of fig .12(d ) into the low s peak of the same figure , as testified by the entropy distribution associated with sequences of type shown in fig .12(b ) ( cf . also figs .7(b ) and 7(d ) ) .this change in the strategy of evolution is complete in the case of sequences of type , which first dimerize and then fold . in this case , all amino acids become essentially equally conserved , the small high s peak of fig .12(b ) being absorbed in the long tail of the single peak of fig .this is consistent with the low observed for the transition state of p22 arc repressor . 
from these results one can conclude that sequences which first dimerize and then fold must be much more difficult to come about than sequences that first fold and then dimerize , in keeping with the fact that the first type of sequences depend on a significantly larger set of conserved amino acids to reach the native structure in the folding process than the second type of proteins . the unusual feature of the present design process is that it requires different average energies for the bulk and for the interface . as a consequence , it is not possible to apply standard equilibrium thermodynamics . nevertheless , one can still proceed in parallel with the canonical ensemble picture and regard the system as composed of two interacting parts , each of them in contact with its own thermal bath . of course we are not interested in the true equilibrium of the system , which would require the two baths to reach the same temperature and the same average energy , but in a stationary state in which the average energies are constant . if we call p a generic distribution of states of the system , we can define the average energy functionals e_1[p]=\sum h_1 p and e_2[p]=\sum h_2 p of the two parts of the system and the entropy functional s[p]=-\sum p\log p . the stationary state is obtained by maximizing s[p] under the constraints e_1[p]=e_1^* and e_2[p]=e_2^* , that is , by introducing two lagrange multipliers and maximizing the resulting functional as a function of the energies . in parallel with equilibrium thermodynamics , we call temperatures the inverses of the two lagrange multipliers . to select sequences distributed according to p(e_1,e_2)\propto\exp(-e_1/t_1-e_2/t_2) , where e_1 and e_2 are the bulk and the interface energy of the sequence , respectively , and t_1 and t_2 are the corresponding design temperatures , we use a multicanonical technique ( see appendix b ) . the multicanonical method is an extension of the usual monte carlo sampling method . in the latter the phase space of a generic system kept at temperature is sampled by making trial random moves ( here , amino acid swappings ) and accepting the move with a probability , where is the equilibrium statistical weight of a given state of the system , which is of course the boltzmann distribution . this works efficiently at high temperatures , but becomes problematic when the temperature is decreased , due to the fact that the system can get trapped in local energy minima . to overcome this problem , the multicanonical method samples the phase space using unconventional statistical weights , namely , where is the energetic degeneracy of that state . coarse - graining the description of the system , each state can be labelled with its energy and the statistical weight associated with a given energy is . in other words , we are sampling a phase space which ( using these artificial weights ) is flat , so that the system cannot get trapped in metastable states . the problem is that the degeneracy , which defines the weights , is not known _ a priori _ . the algorithm therefore has to be self - consistent : a trial degeneracy is guessed , the phase space is sampled with the corresponding weights , from the results of the sampling a new distribution is obtained , and so on . when the procedure has converged , one has found the distribution of energies of the system ( which does not depend on the weights used to find it ! ) and from this one can derive all the other thermodynamic quantities of the system . the algorithm is computationally demanding , also because at each self - consistent step the information found in the preceding steps , save the last , is discarded . to solve this problem a very efficient method , which combines the information found during the whole calculation , has been developed by borg and is described extensively in ref . .
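the self - consistent multicanonical iteration just described can be summarised , for a generic system with a discrete ( or binned ) energy , by the following sketch . the energy function and the move generator are placeholders to be supplied by the user , and the naive update of ln g shown here is exactly the recursion that discards the histograms of the earlier iterations , i.e. the inefficiency that the method of borg mentioned above is designed to remove ; it is an illustration of the idea , not the code used in this work .

```python
import math
import random
from collections import defaultdict

def multicanonical(energy, propose_move, state, n_iter=10, n_sweeps=20000):
    """Naive self-consistent estimate of ln g(E).

    `energy(state)` must return a hashable (e.g. integer or binned) energy and
    `propose_move(state)` must return a new trial state without modifying the
    old one.  The sampling weight is w(E) = 1/g(E), so that the energy
    histogram becomes flat once ln g has converged."""
    ln_g = defaultdict(float)          # current guess for ln g(E); 0 = flat guess
    for _ in range(n_iter):
        hist = defaultdict(int)
        e = energy(state)
        for _ in range(n_sweeps):
            trial = propose_move(state)
            e_trial = energy(trial)
            # accept with prob min(1, w(E_trial)/w(E)) = min(1, g(E)/g(E_trial))
            if random.random() < math.exp(min(0.0, ln_g[e] - ln_g[e_trial])):
                state, e = trial, e_trial
            hist[e] += 1
        # energies visited often get a larger g, hence a smaller weight next round
        for e_bin, visits in hist.items():
            ln_g[e_bin] += math.log(visits)
    return ln_g, state
```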
in the sampling of the space of sequences we have used the multicanonical algorithm and checked the results with the extension of ref . strictly speaking , the monte carlo algorithm was designed to study equilibrium properties of systems with many degrees of freedom . nonetheless , it has been shown that , being equivalent to solving the fokker - planck equation for diffusion in a potential , it can be helpful also in studying the kinetic properties of complex systems , provided that the fokker - planck approximation is valid ( i.e. , moves are local and the potential changes smoothly on the diffusion length scale ) . furthermore , rey and skolnick have shown that the folding trajectories obtained for single - domain proteins with monte carlo simulations are consistent with those obtained with real molecular dynamics calculations . in the study of dimers there is an additional problem . since the monte carlo moves are local , it is not evident that this algorithm properly describes the diffusion of one chain with respect to the other . to make sure that the present algorithm is suitable to deal with diffusion , we have simulated the displacement of the center of mass of single monomeric heteropolymers . fig . [ com1 ] displays the mean square displacement of the center of mass as a function of time for five sequences designed , to different degrees , to fold to the same conformation . also displayed is the behaviour of a random sequence ( the first to the left ) . the calculations are performed at the temperature and the average is done over 50 independent runs , each time starting from a random conformation . all these sequences move in a diffusive regime , characterized by , where is time ( measured as number of mc steps ) and is the diffusion coefficient . in the inset we show the diffusion coefficient of each sequence ( in units of lattice units over number of mc steps ) with respect to its native energy ( on the horizontal axis is the energy ) . it is clear from this plot that the diffusion coefficient depends on the stability of the protein , a feature which is rather unphysical since it should only depend on the shape of the polymer and on the properties of the solvent . on the other hand , one can notice that for optimized sequences ( ) the diffusion coefficient decreases linearly with the energy , spanning a range per unit energy . consequently , we expect that the effects of this dependence on the folding mechanism of optimized sequences are negligible . the dependence of the diffusion coefficient on temperature for a fixed sequence ( e.g. of table i ) is displayed in fig . [ com2 ] . except at low temperatures ( ) it satisfies einstein's equation , where is the mobility of the chain , indicating that the chain undergoes brownian motion . the choice of the linear dimension of the wigner cell where the monte carlo simulations were carried out was found to be important in determining the dynamical evolution of the system . this quantity is the model parameter which reflects the density of chains in real systems , either the cell or a test tube , which eventually fold into the native conformation of the designed homodimer . if , we found that the chains get entangled in some non - native conformations , since there is not enough room for them to assume the native conformation ( which is a parallelepiped of ) .
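the diffusive behaviour of the centre of mass described above can be quantified with a few lines of analysis code . the sketch below assumes that the centre - of - mass position has been recorded at every mc step for an ensemble of independent runs ( the array layout is an assumption of this example ) and extracts the diffusion coefficient from a least - squares fit of the mean square displacement through the origin , msd = 2 d D t , with d the number of spatial dimensions .

```python
import numpy as np

def diffusion_coefficient(com_trajectories, dim=3):
    """Mean square displacement and diffusion coefficient of the centre of mass.

    com_trajectories: array of shape (n_runs, n_steps, dim) with the centre-of-
    mass position (lattice units) recorded at every MC step of each run."""
    disp = com_trajectories - com_trajectories[:, :1, :]   # displacement from t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=0)             # average over the runs
    t = np.arange(msd.size, dtype=float)
    # least-squares slope through the origin: msd ~ (2 * dim * D) * t
    D = (t @ msd) / (2.0 * dim * (t @ t))
    return msd, D
```

comparing the coefficient obtained in this way at different temperatures with the mobility of the chain then provides the check of einstein's relation mentioned above .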
if , the translational entropy of the disjoint chains is so large that the conformation corresponding to the two chains folded and separated becomes the equilibrium state . in other words , although the two - chain system is able to reach the native homodimer conformation , it is very unstable ( e.g. , the relative population of the homodimer native conformation is , for sequence at and , less than ) . in what follows we shall set , even if with this choice the system experiences some difficulties in reaching the native conformation , due to the narrowness of the available space . for example , in 20 long mc runs ( mc steps ) , sequence can find its correct dimeric native state 16 times out of 20 . in fact , in 4 cases it finds a conformation with energy , corresponding to a situation in which one of the two chains ( let us call them a and b ) is folded ( say , chain a ) , the monomers of chain b at the interface being in their native position , while chain b is in a ( well - defined ) conformation which has only similarity with its native structure ( similarity parameter ) . the reason for this result is to be found in the fact that chain b builds some contacts with the `` back '' of chain a , taking advantage of the periodic boundary conditions . these contacts are mostly between residues of chain b and partners which are of the right kind but belong to the wrong chain ( contacts 17a-32b , 23a-18b , 24a-17b , 25a-36b , 26a-35b , 35a-26b ) , while two of them are between monomers which cannot be in contact if they belong to the same chain ( 31a-32b , 32a-33b ) . while these processes may be of relevance , for example , in connection with the formation of amyloid aggregates , they arise in the present work due to an artifact of the model . in fact , they are connected with the fact that we are simulating a system of many chains by considering explicitly only two of them with periodic boundary conditions . as a consequence , a domain - swapping - like mechanism is likely to take place . in fact , if , after the two chains have built the native interface , a subdomain of one of the two sequences is not in place but moves around , it can find itself in the vicinity of its complement belonging to the other chain . we shall come back to phenomena of domain swapping in a future publication , where we shall consider more than two chains . in rows 2 - 5 we display examples of one of the two identical sequences designed , making use of the dimer native conformation displayed in fig . 3 and of the 20 - letter contact energy matrix of ref . [ 24 ] ( table vi ) , at different evolutionary temperatures and . the energy is associated with the bulk contacts of one of the two monomers , so that the total energy in the native conformation is , being the energy associated with the contacts across the interface . as indicated , sequences and display a two - state dimerization , while sequences and fold through a three - state mechanism . on the other hand , sequence aggregates . also displayed is the threshold energy , as well as the normalized gap , where ( ) is the standard deviation of the contact energies ( cf . also footnote number 6 ) . in the last row ( number 6 ) we give , for the sake of comparison , the properties of the monomeric , single - domain protein designed on one of the two identical halves of the conformation shown in fig . 3 , and known in the literature as s .
bennet , m. , phillips m. and eisenberg , d. ( 1995 ) protein sci . * 4 * , 24552458 xu , d. , tsai , c.
, and nussinov r. , ( 1998 ) protein science * 7 * , 533544 shakhnovich , e. , ( 1999 ) nature struct .* 6 * , 99102 mateu , m. g. , sanchez del pino m. m. , and fersht , a. r. , ( 1999 ) nature struct . biol .* 6 * , 191 herold m. and kirschner k. , biochemistry ( 1990 ) * 29 * , 19071913 milla m. e. and sauer r. t. , ( 1994 ) biochemistry * 33 * , 11251133 shakhnovich , e. and gutin , a. m. , ( 1990 ) nature * 346 * , 773775 broglia r. a. , tiana , g. , roman h. e. , vigezzi , e. and shakhnovich , e. ( 1999 ) phys . rev. lett . * 82 * , 4727 broglia , r. a. , and tiana , g. , ( 2001 ) j. chem . phys . * 114 * , 7267 tiana , g. , broglia , r. a. , roman , h. e. , vigezzi e. , and shakhnovich , e. ( 1998 ) j. chem . phys . * 108 * , 757 . dokholyam n. v. , buldryev s. v. , stanley h. e. and shakhnovich e. i. , ( 2000 ) j. mol . biol . *296 * , 11831188 dokholyan n. v. and shakhnovich , e. i. , ( 2001 ) j. mol . biol . *312 * , 289307 tiana , g. and broglia , r. a. , ( 2001 ) j. chem .phys . * 114 * 2503 broglia , r. a , tiana , g. , pasquali , s. , roman , h. e. , vigezzi , e. , ( 1998 ) proc .usa 95 , 12930 .abkevich , v. , gutin a. m. and shakhnovich e. , ( 1994 ) biochem . * 33 * , 1002610036 mirny l. and shakhnovich , e. , ( 1999 ) j. mol . biol . *291 * , 177 tiana , g. , broglia r. a. and shakhnovich e. i. , ( 2000 ) prot .39 * , 244 sander , c. and schneider , r. ( 1991 ) prot .. gen . * 9 * , 56 go , n. ( 1975 ) int .. res . * 7 * , 313 lau k. f. and dill k. , ( 1989 ) macromolecules * 22 * 3986 frauenfelder h. and wolynes p. g. , ( 1994 ) physics today , februray , 5864 mohanty d. , kolinski a. and skolnik j. , biophys .j. ( 1999 ) * 77 * , 5469 vieth m. , kolinski a. and skolnik j. , ( 1996 ) biochemistry ( 1996 ) * 35 * , 955967 miyazawa s. and jernigan r. , ( 1985 ) macromolecules * 18 * , 534 shakhnovich e. and gutin , a. m. , ( 1993 ) protein engin .* 6 * 793 t. e. creighton , ( 1993 ) _ proteins _ , j. freeman anc co. , new york metropolis , n. , rosenbluth , a. , rosenbluth , m. n. , teller a. h. and teller , e. ( 1953 ) j. chem . phys . * 21 * , 1087 tiana g. , and broglia , r. a. , ( to be published ) derrida , b. , ( 1981 ) phys .b * 24 * , 2613 orland , h. , itzykson c. , and de dominicis , c. , ( 1985 ) j. phys .* 46 * , l353 bowie , j. , luthey schulten r. , and eisenberg , d. ( 1991 ) , science * 253 * , 164 grishin n. , and phillips , m. , ( 1994 ) protein sci .* 3 * , 24552458 walburger , c. , johnson t. , and sauer , r. ( 1996 ) proc .usa * 93 * 2629 berg b. and neuhaus t. , ( 1991 ) phys .b * 267 * , 249253 borg , j. ( to be published ) kikuchi , k. , yoshida , m.,maekawa t. and watanabe h. , ( 1992 ) chem .196 , 57 rey , j. and skolnick , j. , ( 1991 ) chem . phys . * 158 * , 199
|
in a similar way in which the folding of single - domain proteins provides an important test in the study of self - organization , the folding of homodimers constitutes a basic challenge in the quest for the mechanisms which are at the basis of biological recognition . dimerization is studied by following the evolution of two identical 20 - letter amino acid chains within the framework of a lattice model and using monte carlo simulations . it is found that when design ( evolutionary pressure ) selects few , strongly interacting ( conserved ) amino acids to control the process , a three - state folding scenario follows , where the monomers first fold , forming the halves of the eventual dimeric interface independently of each other , and then dimerize ( `` lock and key '' kind of association ) . on the other hand , if design distributes the control of the folding process over a large number of ( conserved ) amino acids , a two - state folding scenario ensues , where dimerization takes place at the beginning of the process , resulting in an `` induced fit '' type of association . making use of the conservation patterns of families of analogous dimers , it is possible to compare the model predictions with the behaviour of real proteins . it is found that the theory provides an overall account of the experimental findings .
|
relativity is a crucial ingredient in a variety of astrophysical phenomena .for example the jets that are expelled from the cores of active galaxies reach velocities tantalizingly close to the speed of light , and motion near a black hole is heavily influenced by space - time curvature effects .in the recent past , substantial progress has been made in the development of numerical tools to tackle relativistic gas dynamics problems , both on the special- and the general - relativistic side , for reviews see .most work on numerical relativistic gas dynamics has been performed in an eulerian framework , a couple of lagrangian smooth particle hydrodynamics ( sph ) approaches do exist though .+ in astrophysics , the sph method has been very successful , mainly because of its excellent conservation properties , its natural flexibility and robustness . moreover , its physically intuitive formulation has enabled the inclusion of various physical processes beyond gas dynamics so that many challenging multi - physics problems could be tackled . for recent reviews of the methodwe refer to the literature .relativistic versions of the sph method were first applied to special relativity and to gas flows evolving in a fixed background metric .more recently , sph has also been used in combination with approximative schemes to dynamically evolve space - time .+ in this paper we briefly summarize the main equations of a new , special - relativistic sph formulation that has been derived from the lagrangian of an ideal fluid .since the details of the derivation have been outlined elsewhere , we focus here on a set of numerical benchmark tests that complement those shown in the original paper .some of them are `` standard '' and often used to demonstrate or compare code performance , but most of them are more violent and therefore more challenging versions of widespread test problems .an elegant approach to derive relativistic sph equations based on the discretized lagrangian of a perfect fluid was suggested in .we have recently extended this approach by including the relativistic generalizations of what are called `` grad - h - terms '' in non - relativistic sph . for details of the derivationwe refer to the original paper and a recent review on the smooth particle hydrodynamics method .+ in the following , we assume a flat space - time metric with signature ( -,+,+,+ ) and use units in which the speed of light is equal to unity , .we reserve greek letters for space - time indices from 0 ... 3 with 0 being the temporal component , while and refer to spatial components and sph particles are labeled by and .+ using the einstein sum convention the lagrangian of a special - relativistic perfect fluid can be written as l_pf , sr= - t^ u_u _ dv[eq : fluid_lag_srt ] , where t^= ( n[1 + u(n , s ) ] + p ) u^u^+ p ^ denotes the energy momentum tensor , is the baryon number density , is the thermal energy per baryon , the specific entropy , the pressure and is the four velocity with being proper time .all fluid quantities are measured in the local rest frame , energies are measured in units of the baryon rest mass energy obviously depends on the ratio of neutrons to protons , i.e. on the nuclear composition of the considered fluid .] , . for practical simulations we give up general covariance and perform the calculations in a chosen `` computing frame '' ( cf ) . 
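purely as a numerical illustration of the quantities just introduced ( it is not code from the paper ) , the four - velocity and the perfect - fluid energy - momentum tensor can be assembled as follows , with c = 1 and the flat metric of signature ( -,+,+,+ ) ; the rest - frame density , the thermal energy per baryon and the pressure are the inputs .

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat metric, signature (-,+,+,+), c = 1

def four_velocity(v):
    """u^mu = gamma * (1, v) for a three-velocity v in units of c."""
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    return gamma * np.concatenate(([1.0], v))

def stress_energy(n, u_th, p, v):
    """T^{mu nu} = (n [1 + u] + p) u^mu u^nu + p eta^{mu nu}, with n the baryon
    number density, u the thermal energy per baryon (in units of m0 c^2) and p
    the pressure, all measured in the local rest frame."""
    u = four_velocity(v)
    return (n * (1.0 + u_th) + p) * np.outer(u, u) + p * ETA
```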
in the general case ,a fluid element moves with respect to this frame , therefore , the baryon number density in the cf , , is related to the local fluid rest frame via a lorentz contraction n= n , [ rosswog::eq : n_vs_n ] where is the lorentz factor of the fluid element as measured in the cf .the simulation volume in the cf can be subdivided into volume elements such that each element contains baryons and these volume elements , , can be used in the sph discretization process of a quantity : f()= _ b f_b w(|-_b|,h),[eq : sph_discret ] where the index labels quantities at the position of particle , .our notation does not distinguish between the approximated values ( the on the lhs ) and the values at the particle positions ( on the rhs ) .the quantity is the smoothing length that characterizes the width of the smoothing kernel , for which we apply the cubic spline kernel that is commonly used in sph .applied to the baryon number density in the cf at the position of particle , eq .( [ eq : sph_discret ] ) yields : n_a= n(_a)= _ b _ b w(|_a-_b|,h_a).[eq : dens_summ_sr ] this equation takes over the role of the usual density summation of non - relativistic sph , . since we keep the baryon numbers associated with each sph particle , , fixed , there is no need to evolve a continuity equation and baryon number is conserved by construction .if desired , the continuity equation can be solved though , see e.g. .note that we have used s own smoothing length in evaluating the kernel in eq .( [ eq : dens_summ_sr ] ) . to fully exploit the natural adaptivity of a particle method, we adapt the smoothing length according to h_a= ( ) ^-1/d[eq : dens_summ_sr_n_b ] , where is a suitably chosen numerical constant , usually in the range between 1.3 and 1.5 , and is the number of spatial dimensions . hence , similar to the non - relativistic case , the density and the smoothing length mutually depend on each other and a self - consistent solution forboth can be obtained by performing an iteration until convergence is reached .+ with these prerequisites at hand , the fluid lagrangian can be discretized l_sph , sr= - _ b [ 1 + u(n_b , s_b)].[eq : sr : l_sph ] using the first law of thermodynamics one finds ( for a detailed derivation see sec . 4 in ) for the canonical momentum per baryon _ a = _ a _ a ( 1+u_a+ ) [ eq : can_mom ] , which is the quantity that we evolve numerically .its evolution equation follows from the euler - lagrange equations , - = 0 , as = - _ b _ b ( _ a w_ab(h_a ) + _ a w_ab(h_b ) ) , [ eq : momentum_eq_no_diss ] where the `` grad - h '' correction factor _ b1-k was introduced . as numerical energy variablewe use the canonical energy per baryon , _ a _ a ( 1+u_a+ ) - = _ a _ a + [ eq : sr : epsilon_a ] which evolves according to = - _ b _ b ( _ a w_ab(h_a ) + _ a w_ab(h_b ) ) .[ eq : ener_eq_no_diss ] as in grid - based approaches , at each time step a conversion between the numerical and the physical variables is required .+ the set of equations needs to be closed by an equation of state . 
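a minimal one - dimensional illustration of the density summation of eq . ( [ eq : dens_summ_sr ] ) together with the self - consistent adaptation of the smoothing length , eq . ( [ eq : dens_summ_sr_n_b ] ) , is sketched below . the cubic spline kernel is the standard m4 spline with support 2h , and the plain fixed - point iteration used here is only an illustrative choice ( a production code would typically use a newton - raphson type iteration ) , not the scheme of the actual code , which is not shown in the text .

```python
import numpy as np

def cubic_spline_1d(r, h):
    """Standard M4 cubic spline kernel with support 2h (1D normalisation 2/(3h))."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return 2.0 / (3.0 * h) * w

def density_and_h(x, nu, eta=1.4, tol=1e-10, max_iter=200):
    """Self-consistent computing-frame density N_a = sum_b nu_b W(x_a - x_b, h_a)
    and smoothing length h_a = eta * (nu_a / N_a) in one spatial dimension."""
    n = np.full(x.shape, nu.sum() / (x.max() - x.min()))   # crude initial guess
    h = eta * nu / n
    for _ in range(max_iter):
        n_new = np.array([np.sum(nu * cubic_spline_1d(xa - x, ha))
                          for xa, ha in zip(x, h)])
        h_new = eta * nu / n_new
        converged = np.max(np.abs(h_new - h) / h) < tol
        n, h = n_new, h_new
        if converged:
            break
    return n, h
```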
in all of the following tests , we use a polytropic equation of state , , where is the polytropic exponent ( keep in mind our convention of measuring energies in units of ) .to handle shocks , additional artificial dissipation terms need to be included .we use terms similar to ( ) _ diss= - _ b _ b _ ab _ ab= - ( _ a^-_b^ ) _ ab [ rosswog::eq : diss_mom ] and ( ) _ diss= - _ b _ b _ ab _ ab = - ( _ a^-_b^)_ab .[ rosswog::eq : diss_en ] here is a numerical constant of order unity , an appropriately chosen signal velocity , see below , , and is the unit vector pointing from particle to particle . for the symmetrized kernel gradient we use= . note that in was used instead of our , in practice we find the differences between the two symmetrizations negligible .the stars at the variables in eqs .( [ rosswog::eq : diss_mom ] ) and ( [ rosswog::eq : diss_en ] ) indicate that the projected lorentz factors _k^= are used instead of the normal lorentz factor .this projection onto the line connecting particle and has been chosen to guarantee that the viscous dissipation is positive definite .+ the signal velocity , , is an estimate for the speed of approach of a signal sent from particle to particle .the idea is to have a robust estimate that does not require much computational effort .we use v_sig , ab= max(_a,_b),[eq : vsig ] where _ k^=max(0,^_k ) with being the extreme local eigenvalues of the euler equations ^_k= and being the relativistic sound velocity of particle .these 1d estimates can be generalized to higher spatial dimensions , see e.g. .the results are not particularly sensitive to the exact form of the signal velocity , but in experiments we find that eq .( [ eq : vsig ] ) yields somewhat crisper shock fronts and less smeared contact discontinuities ( for the same value of ) than earlier suggestions .+ since we are aiming at solving the relativistic evolution equations of an _ ideal _ fluid , we want dissipation only where it is really needed , i.e. near shocks where entropy needs to be produced .to this end , we assign an individual value of the parameter to each sph particle and integrate an additional differential equation to determine its value . for the details of the time - dependent viscosity parameter treatment we refer to .in the following we demonstrate the performance of the above described scheme at a slew of benchmark tests .the exact solutions of the riemann problems have been obtained by help of the riemann_vt.f code provided by marti and mller . unless mentioned otherwise ,approximately 3000 particles are shown .this moderately relativistic ( maximum lorentz factor ) shock tube has become a standard touch - stone for relativistic hydrodynamics codes . it uses a polytropic equation of state ( eos ) with an exponent of and {\rm l}= [ 40/3 , 10 , 0] ] for the right - hand state . : sph results ( circles ) vs. exact solution ( red line ) . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] : sph results ( circles ) vs. exact solution ( red line ) . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] : sph results ( circles ) vs. exact solution ( red line ) . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] : sph results ( circles ) vs. exact solution ( red line ) . 
from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] as shown in fig .[ rosswog::fig:1 ] , the numerical solution at ( circles ) agrees nearly perfectly with the exact one . note in particular the absence of any spikes in and at the contact discontinuity ( near ) , such spikes had plagued many earlier relativistic sph formulations .the only places where we see possibly room for improvement is the contact discontinuity which is slightly smeared out and the slight over-/undershoots at the edges of the rarefaction fan .+ in order to monitor how the error in the numerical solution decreases as a function of increased resolution , we calculate l_1 _b^n_part |v_b - v_ex(r_b)|,[eq : l1 ] where is the number of sph - particles , the ( 1d ) velocity of sph - particle and the exact solution for the velocity at position .the results for are displayed in fig .[ rosswog::fig:2 ] .the error decreases close to ( actually , the best fit is ) , which is what is also found for eulerian methods in tests that involve shocks .therefore , for problems that involve shocks we consider the method first - order accurate . ) as a function of particle number for the relativistic shock tested in riemann problem 1 .the error decreases close to . ]the order of the method for smooth flows will be determined in the context of test 6 . )are shown as circles , the exact solution as red line . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] this test is a more violent version of test 1 in which we increase the initial left side pressure by a factor of 100 , but leave the other properties , in particular the right - hand state , unchanged : {\rm l}= [ 4000/3 , 10 , 0] ] .this represents a challenging test since the post - shock density is compressed into a very narrow `` spike '' , at near .a maximum lorentz - factor of is reached in this test . + in fig .[ rosswog::fig:3 ] we show the sph results ( circles ) of velocity , specific energy , the computing frame number density and the pressure at together with the exact solution of the problem ( red line ) .again the numerical solution is in excellent agreement with the exact one , only in the specific energy near the contact discontinuity occurs some smearing .this test is an even more violent version of the previous tests .we now increase the initial left side pressure by a factor of 1000 with respect to test 1 , but leave the other properties unchanged : {\rm l}= [ 40000/3 , 10 , 0] ] .the post - shock density is now compressed into a very narrow `` needle '' with a width of only , the maximum lorentz factor is 6.65 .+ ) are shown as circles , the exact solution as red line . 
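the convergence measurement of eq . ( [ eq : l1 ] ) used above ( and again for the smooth advection test below ) amounts to a mean absolute deviation from the exact riemann solution followed by a power - law fit in the particle number ; a short helper of the following kind is sufficient , the log - log least - squares fit being the obvious illustrative choice .

```python
import numpy as np

def l1_error(v_numerical, v_exact):
    """eq. (l1): mean absolute deviation of the particle values from the exact
    solution evaluated at the particle positions."""
    return np.mean(np.abs(np.asarray(v_numerical) - np.asarray(v_exact)))

def convergence_order(n_particles, errors):
    """Slope of log(L1) vs log(N): about -1 for shocks, about -2 for smooth flows."""
    slope, _ = np.polyfit(np.log(n_particles), np.log(errors), 1)
    return slope
```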
from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from leftto right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from left to right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] ) are shown as circles , the exact solution as red line . from leftto right , top to bottom : velocity ( in units of ) , specific energy , computing frame baryon number density and pressure.,title="fig : " ] fig .[ rosswog::fig:4 ] shows the sph results ( circles ) of velocity , specific energy , the computing frame number density and the pressure at together with the exact solution ( red line ) .the overall performance in this extremely challenging test is still very good .the peak velocity plateau with ( panel 1 ) is very well captured , practically no oscillations behind the shock are visible .of course , the `` needle - like '' appearance of the compressed density shell ( panel 3 ) poses a serious problem to every numerical scheme at finite resolution . at the applied resolution ,the numerical peak value of is only about half of the exact solution .moreover , this extremely demanding test reveals an artifact of our scheme : the shock front is propagating at slightly too large a speed .this problem decreases with increasing numerical resolution and experimenting with the parameter of eqs .( [ rosswog::eq : diss_mom ] ) and ( [ rosswog::eq : diss_en ] ) shows that it is related to the form of artificial viscosity , smaller offsets occur for lower values of the viscosity parameter . herefurther improvements would be desirable .this is a more extreme version of the test suggested by .it starts from an initial setup similar to a normal riemann problem , but with the right state being sinusoidally perturbed .what makes this test challenging is that the smooth structure ( sine wave ) needs to be transported across the shock , i.e. kinetic energy needs to be dissipated into heat to avoid spurious post - shock oscillations , but not too much since otherwise the ( physical ! ) sine oscillations in the post - shock state are not accurately captured .we use a polytropic exponent of and [ p , n , v]^l=[1000,5,0 ] ^r=[5,2 + 0.3 ( 50 x),0 ] . 
as initial conditions , i.e.we have increased the initial left pressure by a factor of 200 in comparison to .( blue ) and ( red ) are overlaid as solid lines.,title="fig : " ] ( blue ) and ( red ) are overlaid as solid lines.,title="fig : " ] ( blue ) and ( red ) are overlaid as solid lines.,title="fig : " ] ( blue ) and ( red ) are overlaid as solid lines.,title="fig : " ] the numerical result ( circles ) is shown in fig .[ rosswog::fig:5 ] together with two exact solutions , for the right - hand side densities ( solid blue ) and ( solid red ) .all the transitions are located at the correct positions , in the post - shock density shell the solution nicely oscillates between the extremes indicated by the solid lines .the initial conditions of the einfeldt rarefaction test do not exhibit discontinuities in density or pressure , but the two halfs of the computational domain move in opposite directions and thereby create a very low - density region around the initial velocity discontinuity .this low - density region poses a serious challenge for some iterative riemann solvers , which can return negative density / pressure values in this region . herewe generalize the test to a relativistic problem in which left / right states move with velocity -0.9/+0.9 away from the central position .for the left and right state we use {\rm l}= [ 1 , 1 , -0.9] ] and an adiabatic exponent of . note that here we have specified the local rest frame density , , which is related to the computing frame density by eq .( [ rosswog::eq : n_vs_n ] ) .the sph solution at is shown in fig .[ rosswog::fig:6 ] as circles , the exact solution is indicated by the solid red line .small oscillations are visible near the center , mainly in and , and over-/undershoots occur near the edges of the rarefaction fan , but overall the numerical solution is very close to the analytical one . in its current form , the code can stably handle velocities up to 0.99999 , i.e. lorentz factors , but at late times there are practically no more particles in the center ( sph s approximation to the emerging near - vacuum ) , so that it becomes increasingly difficult to resolve the central velocity plateau . everywhere , and .,title="fig : " ] everywhere , and .,title="fig : " ] everywhere , and .,title="fig : " ] everywhere , and .,title="fig : " ] in this test problem we explore the ability to accurately advect a smooth density pattern at an ultra - relativistic velocity across a periodic box .since this test does not involve shocks we do not apply any artificial dissipation .we use only 500 equidistantly placed particles in the interval $ ] , enforce periodic boundary conditions and use a polytropic exponent of .we impose a computing frame number density , a constant velocity as large as , corresponding to a lorentz factor of , and instantiate a constant pressure corresponding to , where and and .the specific energies are chosen so that each particle has the same pressure . with these initial conditions the specified density patternshould just be advected across the box without being changed in shape. 
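the set - up of this advection test ( and the conversions quoted with it ) rests on the lorentz contraction of eq . ( [ rosswog::eq : n_vs_n ] ) between the computing - frame and the local rest - frame baryon number density . the helper below sketches that bookkeeping ; the velocity , interval and sinusoidal pattern are placeholders , since not all of the exact numbers are reproduced in the text .

```python
import numpy as np

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2), with c = 1."""
    return 1.0 / np.sqrt(1.0 - np.asarray(v, dtype=float) ** 2)

def advection_setup(n_part=500, v=0.99999, x_min=-1.0, x_max=1.0):
    """Periodic 1D advection test: a smooth computing-frame density pattern
    moving with constant velocity v (all values here are illustrative)."""
    x = np.linspace(x_min, x_max, n_part, endpoint=False)
    n_cf = 1.0 + 0.3 * np.sin(2.0 * np.pi * (x - x_min) / (x_max - x_min))
    gamma = lorentz_factor(v)
    n_rest = n_cf / gamma            # local rest-frame density, since N = gamma * n
    return x, n_cf, n_rest, np.full(n_part, v)
```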
+ , lorentz factor ) of a density pattern across a periodic box .the advection is essentially perfect , the patterns after 50 ( blue circles ) and 100 ( green triangles ) times crossing the box are virtually identical to the initial condition ( red line ) .right : decrease of the error as a function of resolution , for smooth flows the method is second - order accurate.,title="fig : " ] , lorentz factor ) of a density pattern across a periodic box . the advection is essentially perfect , the patterns after 50 ( blue circles ) and 100 ( green triangles ) times crossing the box are virtually identical to the initial condition ( red line ) .right : decrease of the error as a function of resolution , for smooth flows the method is second - order accurate.,title="fig : " ] the numerical result after 50 times ( blue circles ) and 100 times ( green triangles ) crossing the interval is displayed in fig .[ rosswog::fig:7 ] , left panel .the advection is essentially perfect , no deviation from the initial condition ( solid , red line ) is visible .+ we use this test to measure the convergence of the method in the case of smooth flow ( for the case involving shocks , see the discussion at the end of test 1 ) .since for this test the velocity is constant everywhere , we use the computing frame number density to calculate similar to eq .( [ eq : l1 ] ) .we find that the error decreases very close to , see fig .[ rosswog::fig:7 ] , right panel , which is the behavior that is theoretically expected for smooth functions , the used kernel and perfectly distributed particles ( actually , we find as a best - fit exponent -2.07 ) .therefore , we consider the method second - order accurate for smooth flows .we have summarized a new special - relativistic sph formulation that is derived from the lagrangian of an ideal fluid . as numerical variablesit uses the canonical energy and momentum per baryon whose evolution equations follow stringently from the euler - lagrange equations .we have further applied the special - relativistic generalizations of the so - called `` grad - h - terms '' and a refined artificial viscosity scheme with time dependent parameters .+ the main focus of this paper is the presentation of a set of challenging benchmark tests that complement those of the original paper .they show the excellent advection properties of the method , but also its ability to accurately handle even very strong relativistic shocks . in the extreme shock tube test 3 , where the post - shock density shell is compressed into a width of only 0.1 % of the computational domain, we find the shock front to propagate at slightly too large a pace .this artifact ceases with increasing numerical resolution , but future improvements of this point would be desirable .we have further determined the convergence rate of the method in numerical experiments and find it first - order accurate when shocks are involved and second - order accurate for smooth flows .j. a. faber , t. w. baumgarte , s. l. shapiro , k. taniguchi , and f. a. rasio , _ dynamical evolution of black hole - neutron star binaries in general relativity : simulations of tidal disruption _ , physd * 73 * ( 2006 ) , no . 2 , 024012 .j. a. faber , f. a. rasio , and j. b. manor , _ post - newtonian smoothed particle hydrodynamics calculations of binary neutron star coalescence .binary mass ratio , equation of state , and spin dependence _ ,d * 63 * ( 2001 ) , no . 4 , 044012 .
|
in this paper we test a special - relativistic formulation of smoothed particle hydrodynamics ( sph ) that has been derived from the lagrangian of an ideal fluid . apart from its symmetry in the particle indices , the new formulation differs from earlier approaches in its artificial viscosity and in the use of special - relativistic `` grad - h - terms '' . here we benchmark the scheme in a number of demanding test problems . perhaps not too surprisingly for such a lagrangian scheme , it performs close to perfectly in pure advection tests . what is more , the method produces accurate results even in highly relativistic shock problems .
smoothed particle hydrodynamics , special relativity , hydrodynamics , shocks
|
understanding human behavior has long been recognized as one of the keys to understanding epidemic spreading , which has triggered intense research activity aimed at including social complexity in epidemiological models .age structure , human mobility and very detailed data at the individual level are now incorporated in most of the realistic models . however , much remains to be done .models based on social mobility and behavior have shown to be valuable tools in the quantitative analysis of the unfolding of the recent h1n1 pandemic , but it has become clear that societal reactions coupling behavior and disease spreading can have substantial impact on epidemic spreading thus defining limitations of most current modeling approaches .societal reactions can be grouped into different classes .first , there are changes imposed by authorities through the closure of schools , churches , public offices , and bans on public gatherings .second , individuals self - initiate behavioral changes due to the concern induced by the disease .behavioral changes vary from simply avoiding social contact with infected individuals and crowded spaces to reducing travel and preventing children from attending school . in all caseswe have a modification of the spreading process due to the change of mobility or contact patterns in the population .in general , these behavioral changes may have a considerable impact on epidemic progression such as the reduction in epidemic size and delay of the epidemic peak . + several studies have been carried out in order to evaluate the impact and role that organized public health measures have in the midst of real epidemics .however , only a few recent attempts have considered self - induced behavioral changes individuals adopt during an outbreak in order to reduce the risk of infection . in some approachesindividual behaviors were modeled by modifying contact rates in response to the state of the disease . in others new compartments representing individual responseswere proposed . finally , in some studiesthe spread of information in the presence of the disease was explicitly modeled and coupled with the spreading of the disease itself . however , we are still without a formulation of a general behavior - disease model . in this studywe propose a general framework to model the spread of information concerning the epidemic and the eventual behavioral changes in a single population .the emergent infectious diseases that we consider throughout the manuscript resemble the natural history of an acute respiratory infection with a short duration of infectiousness and have mild impact on the health status of individuals in that healthy status is recovered at the end of the infectious period .we modify the classic susceptible - infected - recovered ( sir ) model by introducing a class of individuals , , that represents susceptible people who self - initiate behavioral changes that lead to a reduction in the transmissibility of the infectious disease . in other words ,this class models the spread of ` fear ' associated with the actual infectious disease spread . individuals who fear the disease self - initiate social distancing measures that reduce the transmissibility of the disease .the spread of fear depends on the source and type of information to which individuals are exposed .we classify the general interaction schemes governing the transitions of individuals into and out of by considering behavioral changes due to different information spreading mechanisms , i.e. 
, belief - based versus prevalence - based and local versus global information spreading mechanisms .we provide a theoretical and numerical analysis of the various mechanisms involved and uncover a rich phenomenology of the behavior - disease models that includes epidemics with multiple activity peaks and transition points .we also show that in the presence of belief - based propagation mechanisms the population may acquire a collective ` memory ' of the fear of the disease that makes the population more resilient to future outbreaks .this abundance of different dynamical behaviors clearly shows the importance of the behavior - disease perspective in the study of realistic progressions of infectious diseases and provides a chart for future studies and scenario analyses in data - driven epidemic models .in order to describe the infectious disease progression we use the minimal and prototypical sir model . this model is customarily used to describe the progression of acute infectious diseases such as influenza in closed populations where the total number of individuals in the population is partitioned into the compartments , and , denoting the number of susceptible , infected and recovered individuals at time , respectively . by definitionit follows .the model is described by two simple types of transitions represented in figure ( [ trans ] ) .the first one , denoted by , is when a susceptible individual interacts with an infectious individual and acquires infection with transmission rate .the second one , denoted by , occurs when an infected individual recovers from the disease with rate and is henceforth assumed to have permanent immunity to the disease .the sir model is therefore described by the two following reactions and the associated rates : while the transition is itself a spontaneous process , the transition from depends on the structure of the population and the contact patterns of individuals .here we consider the usual homogeneous mixing approximation that assumes that individuals interact randomly among the population . according to this assumption the larger the number of infectious individuals among one individual s contacts the higher the probability of transmission of the infection .this readily translates in the definition of the force of infection in terms of a mass action law , that expresses the per capita rate at which susceptible individuals contract the infection .in order to simulate the sir model as a stochastic process we can consider a simple binomial model of transition for discrete individuals and discrete times .each member of the susceptible compartment at time has a probability during the time interval between and to contract the disease and transfer to the infected state at time , where is the unitary time scale considered that we have set to in simulations .as we assume to have independent events occurring with the same probability , the number of newly infected individuals generated during the time interval is a random variable that will follow the binomial distribution ] , where the number of independent trials is given by the number of infectious individuals that attempt to recover and the probability of recovery in the time interval is given by the recovery probability . 
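a minimal sketch of the chain - binomial simulation just outlined is the following ; the exponential forms of the per - capita infection and recovery probabilities over a time step are the standard choice for this construction and are assumed here , since the explicit expressions are not reproduced above .

```python
import numpy as np

def sir_chain_binomial(N, I0, beta, mu, dt=1.0, t_max=500, seed=0):
    """Discrete-time chain-binomial SIR: at every step the number of new
    infections is Binomial(S, p_inf) and the number of recoveries is
    Binomial(I, p_rec)."""
    rng = np.random.default_rng(seed)
    S, I, R = N - I0, I0, 0
    history = [(S, I, R)]
    for _ in range(int(t_max / dt)):
        p_inf = 1.0 - np.exp(-beta * I / N * dt)   # per-susceptible infection prob.
        p_rec = 1.0 - np.exp(-mu * dt)             # per-infectious recovery prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, p_rec)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
        if I == 0:
            break
    return np.array(history)
```

running many realisations of this chain for the same parameters and initial conditions is what produces the reference ranges and median profiles shown later for the behaviour - disease models .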
in this processes we recognize that the stochastic variables define a markov chain of stochastic events in which the current state of the system is determined only by the state of the system at the previous time steps .formally , we can indeed write the following markov chain relations : these equations can be readily used to simulate different stochastic realizations of the epidemic events with the same basic parameters and initial conditions .this allows us to analyze the model s behavior by taking into account statistical fluctuations and noise in the epidemic process .the equations can also be translated into the standard set of continuous deterministic differential equations describing the sir model by using expected values as the crucial parameter in the analysis of single population epidemic outbreaks is the basic reproductive number , which counts the expected number of secondary infected cases generated by a primary infected individual . under the assumption of homogeneous mixing of the populationthe basic reproductive number of the sir model is given by by the simple linearization of the above equations for it is straightforward to see that in the single population case any epidemic will spread to a nonzero fraction of the population only if .in this case the epidemic is able to generate a number of infected individuals larger than those who recover , leading to an increase in the overall number of infectious individuals .the previous considerations lead to the definition of a crucial epidemiological concept : the epidemic threshold .indeed , if the transmission rate is not large enough to allow a reproductive number larger than one ( i.e. , ) , the epidemic outbreak will be confined to a tiny portion of the population and will die out in a finite amount of time in the thermodynamic limit of . in the following we will use binomial stochastic processes to simulate numerically the progression of the epidemics and we will use the continuum limit to provide the analytical discussion of the models . schematic representation of the two types of transitions that will be recurrent in the paper .in panel ( a ) we show the first in which individuals in compartment interact with individuals in class , represented by the small square , becoming themselves . in general the compartment inducing the transition of individuals in could be any other compartment in the model , e.g. , different from the end - point of the transition .we assume the homogeneous mixing of the population so that the rate at which an individual in interacts with individuals in and changes status is simply given by the product of prevalence of and the transmission rate , .this type of reaction can be written as . in the case of the sir model and . in panel( b ) we show the second type .this is a spontaneous transition with rate in which an individual in compartment spontaneously moves to compartment .these types of reactions can be written as . in the sir model and .,scaledwidth=40.0% ] we need to classify the source and type of information concerning the disease that people use to conduct their behavior in order to model the coupling between behavioral changes and the disease spread . in other words , while the disease spreads in the population , individuals are exposed ( by local contacts , global mass media news , etc . 
) to information on the disease that will lead to changes in their behavior .this is equivalent to the coupled spread of two competing contagion processes : the infectious disease and the ` fear of the disease ' contagion processes .the fear of the disease is what induces behavioral changes in the population .for this reason we will assume that individuals affected by the fear of the disease will be grouped in a specific compartment of susceptible individuals .these individuals will not be removed from the population , but they will take actions such as reducing the number of potentially infectious contacts , wearing face masks , and other social distancing measures that change disease parameters . in the following we will consider that self - induced behavior changes have the effect of reducing the transmission rate of the infection , introducing the following reaction : with ( i.e. , ) .the above process corresponds to a force of infection on the individuals affected by the fear contagion . the parameter therefore modulates the level of self - induced behavioral change that leads to the reduction of the transmission rate .as the scope of the awareness of the disease or of the adopted behavioral changes is avoidance of infection , we assume that individuals in the compartment relax their behavioral changes upon infection and return back to their regular social behavior .while the above modeling scheme is a straightforward way to include social distancing in the system , a large number of possible scenarios can be considered in the modeling of the contagion process that induce susceptible individuals to adopt self - induced behavioral changes and transition to the state .in particular we consider three main mechanisms : * * local , prevalence - based spread of the fear of the disease*. in this scenario we assume that susceptible individuals will adopt behavioral changes only if they interact with infectious individuals .this implies that the larger the number of sick and infectious individuals among one individual s contacts , the higher the probability for the individual to adopt behavioral changes induced by awareness / fear of the disease .the fear contagion process therefore can be modeled as where in analogy with the disease spread , is the transmission rate of the awareness / fear of the disease .this process defines a transition rate for the fear of the disease that can be expressed by the usual mass - action law . ** global , prevalence - based spread of the fear of the disease*. in some circumstances , individuals adopt self - induced behavioral changes because of information that is publicly available , generally through newspapers , television , and the internet . in this casethe local transmission is superseded by a global mechanism in which the news of a few infected individuals , even if not in contact with the large majority of the population , is able to trigger a widespread reaction in the population . in this casethe fear contagion process is not well represented by the usual mass action law and has to be replaced by where .figure ( [ trans3 ] ) shows the schematic representation of this .+ schematic representation of the third type of interaction discussed . in this casethe transition into compartment is based on the absolute number of the individuals in the compartment ( shown by the small square ) . in general the inducing compartmentcould be different ( e.g. 
) than the end - point of the transition ., scaledwidth=30.0% ] + for small values of we have a pseudo mass action law of the first order in : .\ ] ] the above contagion process acts on the whole population even in the case of a very limited number of infectious individuals and the parameter identifies the characteristic number of infected individuals reported by the news above which the fear spreads quickly in the population similarly to a panic wave .* * local , belief - based spread of the fear of the disease*. in addition to the local prevalence - based spread of the fear of the disease , in this case we assume that the fear contagion may also occur by contacting individuals who have already acquired fear / awareness of the disease .in other words , the larger the number of individuals who have fear / awareness of the disease among one individual s contacts , the higher the probability of that individual adopting behavioral changes and moving into the class .the fear contagion therefore can also progress according to the following process : where the transmission rate is , with modulating the ratio between the transmission rate by contacting infected individuals and contacting individuals with fear of the disease .the transition rate is defined by the mass - action law .the fear / awareness contagion process is not only defined by the spreading of fear from individual to individual , but also by the process defining the transition from the state of fear of the disease back to the regular susceptible state in which the individual relaxes the adopted behavioral changes and returns to regular social behavior .we can imagine a similar reaction on a very long time scale in which individuals lose memory of the disease independent of their interactions with other individuals and resume their normal social behavior .this would correspond to spontaneous recovery from fear as proposed by epstein _however , our social behavior is modified by our local interactions with other individuals on a much more rapidly acting time - scale .we can therefore consider the following processes : and we can then define two mass - action laws : and .these mimic the process in which the interaction between individuals with fear and without fear , susceptible or recovered , leads the individual with fear to resume regular social behavior . both processes , occurring with rate ,tell us that the larger the number of individuals who adopt regular social behavior among one individual s contacts , the higher the probability for the individual to relax any behavioral changes and resume regular social behavior .the two interactions translate into a unique mass action law : .the fear contagion process is therefore hampered by the presence of large numbers of individuals acting normally .the spreading of fear is the outcome of two opposite forces acting on society , but is always initially triggered by the presence of infectious individuals . in table[ tab1 ] we report all the infection and recovery transitions for the disease and fear contagion dynamics and the corresponding terms and rates .we will use those terms to characterize different scenarios of interplay between the information and disease spreading processes . unless specified otherwise the numerical simulations will be performed by individual - based chainbinomial processes in discrete time and the analytical discussion will consider the continuous deterministic limit . 
in the comparison between the analytic conclusions with the numerical simulations we will always make sure to discuss the differences due to stochastic effects such as the outbreak probability at relatively small values of the reproductive number . in the following discussion will refer to the basic reproductive number of the sir model unless specified otherwise .[ cols="<,<,^,^ , < " , ]the first model ( model i ) we consider is the coupling of the sir model with local prevalence - based spread of the fear of the disease .the coupled behavior - disease model is described by the following set of equations : a schematic representation of the model is provided in figure ( [ modeli ] ) . schematic representation of model i. , scaledwidth=50.0% ] considering table [ tab1 ]we can write down all the terms , ,\\ \nonumber d_{t}s^{f}(t ) & = & -r_{\beta}\beta s^{f}(t)\frac{i(t)}{n}+\beta_{f}s(t)\frac{i(t)}{n}-\mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\\ \nonumber d_{t}i(t ) & = & -\mu i(t)+\beta s(t)\frac{i(t)}{n}+r_{\beta}\beta s^{f}(t)\frac{i(t)}{n},\\ \nonumber d_{t}r(t ) & = & \mu i(t ) , \end{aligned}\ ] ] in which ,\ ] ] meaning that the total number of individuals in the population does not change . in acute diseases ,the time scale of the spreading is very small with respect to the average lifetime of a person , allowing us to ignore birth and death processes and the demographic drift of the population .this is also the time scale over which it is more meaningful to consider the effect of the spread of behavioral changes .diseases with a longer time scale may be equally affected by behavioral changes emerging especially as cultural changes toward certain social behavior for instance sexual habits in the presence of a sexually transmitted disease with a long latency period but in this case the demography of the system should be taken into account .to explain the equations we can simply consider the negative terms . in particular the first term of the first equation in eq . ( [ f_o_f ] ) takes into account individuals in the susceptible compartment who through interaction with infected individuals become sick .the second term takes into account individuals in the susceptible compartment who through interaction with infected individuals change their own behavior .the first term of the second equation takes into account individuals in compartment who through interaction with infected individuals become sick .it is important to remember that the transmission rate for people in compartment is reduced by a factor due to the protection that they gain on account of membership in this class .the last term in the second equation takes into account people in compartment who through interaction with healthy individuals , , and recovered ones , , normalize their behavior and move back to compartment .the first term in the third equation takes into account the spontaneous recovery of sick individuals .it is natural to assume that in the beginning of the disease spreading process the population is fully susceptible except for the infectious seeds , which means that we can set . at this pointthe behavioral response is not active yet .if the disease proceeds to spread much faster than fear contagion , then the model reduces to the classic sir with basic reproductive number . in this case the initial spread is well described by .the number of individuals in the compartment is of the same order of infectious and recovered individuals . from the conservation ofthe number of individuals follows . 
since is the leading order , all the terms in the equations like in which both and are different from can be considered as second order .using this approximation we can linearize the system and reduce the equations to first - order ordinary differential equations that are easy to integrate .in particular for we can write which has the following solution : for fear will spread in the population since the condition is always satisfied .the growth of the fear contagion is due to the spread of the infection in the population .+ when fear spreads much faster than the disease , , everyone quickly becomes scared and our model reduces to an sir model with a reduced reproductive ratio that is dominated by the characteristics of the compartment .+ by considering both stochastic simulations of the model and direct integration of the equations , we explored numerically the intermediate regime between these two limits , i.e. . the spread of the fear of infection contagion in this regime does not significantly affect the timing of the disease spread , as showed in figure ( [ curves ] ) . in this figure the stochastic fluctuations are demonstrated by individual realizations and compared with the median profiles obtained by considering different stochastic realizations .the deterministic solution of the equation for , obtained by direct integration of the equations , is well inside the reference range of our stochastic simulations as shown in figure ( [ comparison ] ) . in this region of the model s phase space fear simply produces a mild reduction in the epidemic size . for , , , and .we show the medians of , evaluated using stochastic runs for the baseline ( sir model without fear of contagion ) and three realizations of the model for different values of .in particular in panel ( a ) we show the baseline sir model with the same disease parameters . in panel ( b ) we set . in panel ( c )we set . in panel( d ) we set .it is clear how the peak time is the same for all the scenarios and how the number of infected individuals at peak is reduced as increases ., scaledwidth=50.0% ] fixing , , and .we compare the solution of the deterministic equations ( red solid line ) with the reference ranges of our stochastic solutions . herewe consider runs that produced at least an epidemic size of of the population ( ).,scaledwidth=50.0% ] multiple waves of infection .fixing , , , , and we show stochastic runs of the infected profiles and the median evaluated considering runs in which the epidemic size is at least of the population.,scaledwidth=40.0% ] by increasing the value of it is possible to find a region of parameters characterized by multiple peaks . in figure ( [ two - peak_model1 ] )we show stochastic runs and the median profile obtained from runs for a set of parameters associated with multiple peaks .after the first wave of infection individuals leave the compartment and return to the susceptible state in which they are less protected from the disease .the second wave manifests if the number of infected individuals at this stage is not too small and if there is still a large enough pool of individuals susceptible to the infection . a closer inspection of the parameter space by numerical integration of the deterministic equations yields very rich dynamical behavior .figure ( [ multipeak_model1 ] ) displays the phase diagram of the model on - plane regarding different number of disease activity peaks for a set of model parameters . 
as increases , the region in which multiple peaks are encountered shifts to smaller values of and larger values of .fixing , increasing values of increase the number of infection peaks while an increase in leads to a decrease in the number of peaks .it is interesting to note that adding a simple modification to the basic sir model leads to scenarios with more than one peak .this is important not only from a mathematical point of view ( existence of states characterized by multiple and unstable stationary points in the function ) but also for practical reasons ; in historical data from the 1918 pandemic multiple epidemic peaks were observed . by increasing the value of to larger and larger values, the spread of the fear contagion becomes increasingly rapid with respect to the spread of the disease .it is natural to think in this regime that the reproductive number of the disease is characterized by the class .we then have two different scenarios : 1 . if , then the epidemic size is given by that of an sir model with , then fear completely stops the spreading of the disease .this is confirmed in figure ( [ rid_r_mod1 ] ) in which we plot the proportion of recovered individuals at the end of the epidemic , which is evaluated by the integration of the deterministic equations .we consider different values of and and hold fixed the other parameters .it is clear that for very large values of the spreading of the disease is characterized by the reproductive number .+ fixing , , and we evaluate the normalized epidemic size for different values of and through direct integration of the equations .once the product is smaller than unity , then the epidemic size goes to as .,scaledwidth=50.0% ] at the end of the disease epidemic the system enters the so - called ` disease - free ' stage .this region of the phase space is described by this regime can be easily derived by setting in the set of eqs .( [ f_o_f ] ) .the system is then reduced to ,\\ \nonumber d_{t}s^{f}(t ) & = & -\mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\\ \nonumber d_{t}i(t ) & = & 0 , \\\nonumber d_{t}r(t ) & = & 0.\end{aligned}\ ] ] from the last equation it is clear that , and the first and second equations are equivalent .it is then possible to find the solution for and by using the conservation of individuals .in particular the equation to solve is \nonumber\\ & = & -\mu_{f}s^{f}(t)\left[\frac{n - s^f(t)}{n}\right].\end{aligned}\ ] ] by integrating this equation directly it is easy to show that fear disappears exponentially : in the stationary state , for , the system reaches the disease- and fear - free equilibrium : there is no possibility of an endemic state of fear .fear can only be produced by the presence of infected people .as soon as the infection dies out , fearful people recover from their fear by interacting with all the susceptible and recovered individuals and become susceptible themselves .the second fear - inducing process we consider is the spread of the fear contagion through mass - media ( model ii ) . in order to increase ratings mass - mediawidely advertise the progress of epidemics , causing even the people that have never contacted a diseased person to acquire fear of the disease . in this formulation ,even a very small number of infected people is enough to trigger the spread of the fear contagion . 
to modelthis we consider a pseudo mass - action law in which the number of infected people is not rescaled by the total population .hence the absolute number of infected individuals drives the spread .the transition rate peculiar to this model can be written as $ ] .the equations describing the system read as a schematic representation of the model is provided in figure ( [ model3 ] ) .schematic representation of model ii .the pseudo mass - action law is represented by the dashed line ., scaledwidth=50.0% ] considering table [ tab1 ] we can explicitly introduce all the terms , + \mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\nonumber\\ d_{t}s^{f}(t ) & = & -r_{\beta}\beta s^{f}(t)\frac{i(t)}{n}+\beta_{f}s(t)\left[1-e^{-\delta i(t)}\right]-\mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\\ d_{t}i(t ) & = & -\mu i(t)+\beta s(t)\frac{i(t)}{n}+r_{\beta}\beta s^{f}(t)\frac{i(t)}{n},\nonumber\\ d_{t}r(t ) & = & \mu i(t),\nonumber\end{aligned}\ ] ] yielding that the population size is fixed , .\ ] ] as in the previous model , if the infection spreads faster than the fear contagion , then the reproductive number is simply . in the opposite limit it is easy to understand that the reproductive number is . in this latter limit , if , then the global prevalence - based spread of fear suppresses the spread of the disease .moreover , in general we will have a reduction in the epidemic size as a function of .the early time progression of is analogous to that of model i : the analogy is due to the fact that as in the first model the transition to is related only to the presence of infected individuals . even in this casethe condition is always satisfied so that if , then fear can spread in the population .+ interestingly , there is a region of the phase space in which this model and model i are equivalent . in both modelsthe transition to fear is related only to the presence of infected individuals . in the first model we use a mass - action law while in the second we use a pseudo mass - action law .it is possible to relate one of the transmission rates of fear to the other by tuning the parameters .let us focus our attention on small values of .we can approximate the transition rate by .\ ] ] let us consider the first order term only , i.e. , .the relation between the two transmission rates can easily be obtained by imposing which leads to where we define as the rate in the second model , given in the first .the above relation guarantees the equivalence of the two models at the first order on . in the small region in which the approximation ( [ app_iii ] ) holds ,model i and ii are mathematically indistinguishable for suitable values of the parameters , which indicates that even in the phase space of model ii we have multi - peak regions .these regions , of course , coincide with the regions in the first model . 
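The first-order equivalence just described can be checked numerically. The sketch below encodes the deterministic rate equations of model i and model ii with conventional symbol names (beta, beta_f, r_beta, mu, mu_f and delta are my reading of the stripped inline math, and the numerical values are purely illustrative) and matches the two fear-transmission rates through the relation suggested by the expansion above, which I read as beta_f2 * delta = beta_f1 / N. The infection curves should then nearly coincide as long as delta * I(t) stays small.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model_i(t, y, N, beta, beta_f, r_beta, mu, mu_f):
    """Model I: fear acquired at the mass-action rate beta_f * I / N."""
    S, Sf, I, R = y
    fear_gain = beta_f * S * I / N
    fear_loss = mu_f * Sf * (S + R) / N
    dS  = -beta * S * I / N - fear_gain + fear_loss
    dSf = -r_beta * beta * Sf * I / N + fear_gain - fear_loss
    dI  = beta * S * I / N + r_beta * beta * Sf * I / N - mu * I
    return [dS, dSf, dI, mu * I]

def model_ii(t, y, N, beta, beta_f2, delta, r_beta, mu, mu_f):
    """Model II: fear driven by the absolute number of infectious individuals,
    pseudo mass-action rate beta_f2 * (1 - exp(-delta * I))."""
    S, Sf, I, R = y
    fear_gain = beta_f2 * S * (1.0 - np.exp(-delta * I))
    fear_loss = mu_f * Sf * (S + R) / N
    dS  = -beta * S * I / N - fear_gain + fear_loss
    dSf = -r_beta * beta * Sf * I / N + fear_gain - fear_loss
    dI  = beta * S * I / N + r_beta * beta * Sf * I / N - mu * I
    return [dS, dSf, dI, mu * I]

# illustrative parameters; the matching beta_f2 * delta = beta_f1 / N is assumed
N, beta, r_beta, mu, mu_f = 1e6, 0.6, 0.3, 0.25, 0.5
beta_f1, delta = 0.9, 1e-7
beta_f2 = beta_f1 / (N * delta)
y0, t_span = [N - 10.0, 0.0, 10.0, 0.0], (0.0, 300.0)
t_eval = np.linspace(*t_span, 3001)
sol1 = solve_ivp(model_i,  t_span, y0, t_eval=t_eval,
                 args=(N, beta, beta_f1, r_beta, mu, mu_f))
sol2 = solve_ivp(model_ii, t_span, y0, t_eval=t_eval,
                 args=(N, beta, beta_f2, delta, r_beta, mu, mu_f))
print(np.max(np.abs(sol1.y[2] - sol2.y[2])) / N)  # small while delta*I(t) is small
```

The same integrators, with the parameters swept over a grid, are what one would use to reproduce the single- versus multi-peak phase diagrams discussed above, for instance by counting local maxima of the prevalence curve sol.y[2].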
+the disease - free equilibrium of this model does not allow for an endemic state of fear , as the transition to fear is induced by the presence of infected individuals only .as soon as the epidemic dies out the in - flow to the compartment stops , while the out - flow continues to allow people to recover from fear .when the number of infected individuals goes to zero , the media coverage vanishes , as does the fear it causes .+ even in this model the effect of fear results in a reduction of the epidemic size .this reduction is a function of and of all of the parameters .as increases the transition into fear becomes faster .since the people in compartment are more protected from the disease , the epidemic size inevitably decreases . while keeping the value of fixed, increasing reduces the epidemic size and drives it to its asymptotic value .the asymptotic value of as a function of depends on the product . if this product is bigger than , obtained through direct numerical integration of the equations as shown in figure ( [ exp_1])-a , then the asymptotic value is equal to the epidemic size of an sir model with .if the product is smaller than , obtained similarly through direct integration of the equation as shown in figure ( [ exp_1])-b , then the asymptotic value is zero ; the rate of the spread of awareness is infinitely faster than the spread of the disease .this dynamic can be thought as that of an sir with a reproductive number smaller than .+ reduction of the epidemic size as a function of for different values of and .we fix , , and . in panel ( a ) we assume for which .increasing the value of results in an asymptotic value of the epidemic size other than zero . in panel( b ) we consider . in this case , instead , . by increasing the value of the epidemic sizeis increasingly reduced .this effect is stronger for bigger values of .the values are obtained by numerical integration of the equations ., scaledwidth=50.0% ] in this section we introduce the last model ( model iii ) in which we also consider self - reinforcing fear spread which accounts for the possibility that individuals might enter the compartment simply by interacting with people in this compartment : fear generating fear . in this model peoplecould develop fear of the infection both by interacting with infected persons and with people already concerned about the disease .a new parameter , , is necessary to distinguish between these two interactions .we assume that these processes , different in their nature , have different rates . to differentiate them we consider that people who contact infected people are more likely to be scared of the disease than those who interact with fearful individuals .for this reason we set .+ let us consider the case of the limit in which no infected individuals are present in the population .the compartment can only grow through the interaction .it is possible to show that in the early stage this can be thought of as an sis - like model .let us consider the case in which there are no infected individuals and just one individual in the compartment , i.e. , . 
considering this limit , the set of equations of model iii could be written as we assume that in this early stage all the population is almost fully susceptible .the equation for is then \mu_fs^f(t).\ ] ] this is the typical early - time term for the ` infected ' individuals in an sis model .the spread of fear contagion will start if this allows us to define the reproductive number of fear by in isolation , the fear contagion process is analogous to the reproductive number of an sis or sir model with transmission rate .however , in the general case the spread of the fear of infection is coupled with the actual disease spread .the complete set of equations is a schematic representation of the model is provided in figure ( [ modelii ] ) .schematic representation of model iii . , scaledwidth=50.0% ] considering table [ tab1 ] we can write all of the terms explicitly , +\mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\nonumber \\ d_{t}s^{f}(t ) & = & -r_{\beta}\beta s^{f}(t)\frac{i(t)}{n}+ \beta_{f}s(t)\left[\frac{i(t)+\alpha s^{f}(t)}{n}\right]-\mu_{f}s^{f}(t)\left[\frac{s(t)+r(t)}{n}\right],\nonumber \\d_{t}i(t ) & = & -\mu i(t)+\beta s(t)\frac{i(t)}{n}+r_{\beta}\beta s^{f}(t)\frac{i(t)}{n } , \nonumber \\d_{t}r(t ) & = & \mu i(t).\end{aligned}\ ] ] also in this model we assume that the population size is fixed , .\ ] ] if we consider the case in which the disease spreads faster than the fear of it , then the reproductive ratio is . in the opposite casethe reproductive ratio is governed by the compartment so that and the epidemic size will be reduced depending on the value of . in this latter case , if , then the protection from infection gained in the compartment causes the disease to fade out . following the same linearization strategy shown in previous sections ,the early stage of the compartment is given by .\end{aligned}\ ] ] two different regions in the parameter space are then identified : one in which the rate of increase of fear is dominated by its own thought contagion process , , and one in which the rate of the local belief - based spread is dominated by the disease , . in the first case the fear spreads independently of the value of , and the epidemic size will be reduced due to the protection that individuals gain in the compartment .the new interaction , although intuitively simple , significantly complicates the dynamics of the model .in particular within several regions of the parameter space we observe two epidemic peaks as demonstrated in figure ( [ curves_ssf ] ) . in this figurewe plot the medians for two different values of evaluated considering at least runs in which the epidemic size is at least of the population .we also show stochastic runs of the model to explicitly visualize the fluctuation among them .this non - trivial behavior can be easily understood .fear reinforces itself until it severely depletes the reservoir of susceptible individuals , causing a decline in new cases . as a resultpeople are lured into a false sense of security and return back to their normal behavior ( recovery from fear ) causing a second epidemic peak that can be even larger than the first .some authors believe that a similar process occurred during the pandemic , resulting in multiple epidemic peaks .multiple waves of infection .fixing , , , , and we show stochastic runs and the medians evaluated considering runs for two different values of . in panel( a ) . 
in panel( b ) .,scaledwidth=50.0% ] we show in figure ( [ multipeak_model3 ] ) for a set of model parameters the phase diagram of the model on - plane regarding different number of disease activity peaks as obtained by numerical integration of the deterministic equations .the figure should be considered as illustrative as we do not have any analytical expression on the sufficient conditions yielding multiple infection peaks . at the end of the disease epidemicthe system enters the disease - free stage .setting and the epidemic size to the set of differential equations becomes ,\\ \nonumber d_{t}s^{f}(t ) = & + & \alpha \beta_f s(t)\frac{s^f(t)}{n } -\mu_{f}s^{f}(t)\left[\frac{n - s^f(t)}{n}\right],\\ \nonumber d_{t}i(t ) & = & 0,\\ \nonumber d_{t}r(t ) & = & 0.\end{aligned}\ ] ] conservation of the total number of individuals yields the following differential equation for : ,\ ] ] with the solution we have defined as where is a time - independent variable and is a function of the parameters of the model .interestingly , there are two possible disease - free equilibriums. one in which where fear dies along with the disease , and the one given by where fear and behavioral changes persist even after the end of the disease epidemic .the condition is necessary but not sufficient in order to have an endemic state of fear , while is sufficient to avoid an endemic state of fear .unfortunately , the parameter is an implicit function of the whole dynamics through the epidemic size .+ the presence of an endemic state , a societal memory of the disease , and associated fear are quite interesting features of the model induced by fear s self - reinforcement . in modeli transition to the compartment is possible only in the presence of infected individuals .however , in this model fear is able to sustain its presence in the population if the effective reproductive number of the local belief - based spread is larger than unity even if the disease dies out .unfortunately , this argument can not be used to fix the range of parameters in the phase space with these properties since any linearization at these stages of the compartments is not suitable .the possibility of having an endemic state of fear indicates that an event localized in time is capable of permanently modifying society with interesting consequences . in the case of a second epidemic, the presence of part of the population already in the compartment reduces the value of the basic reproduction number . to show thislet us consider the differential equation for the infected compartment after the re - introduction of the very same infectious virus ( meaning that the parameters and are equal to those of the first infectious disease ) : (t).\ ] ] the initial condition of the second disease epidemic could be considered to be the disease - free equilibrium of the first epidemic . 
by using eq .( [ second_d_f ] ) we can express the rate equation of the infected compartment during the early stage of the second disease as \mu i(t).\ ] ] let us define as the proportion of recovered individuals at the end of the first epidemic .in the case of the re - introduction of the disease into the population we will have an outbreak only if the argument in the parenthesis of the above equation is larger than zero , yielding the following condition for the reproductive number of a second outbreak : > 1.\ ] ] it is worth noting that the societal memory of the first outbreak increases the resistence in the population against the spread of the second outbreak in a non - trivial way .one might be tempted to conclude that the new reproductive number is simply provided by the reproductive number of an sir model with an equivalent proportion of removed individuals , but this is not the case as we have to factor in the behavioral changes of individuals in the compartment , obtaining to prove the last inequality we have to show that or the expressions on both sides of the above inequality are first - order polynomial functions of . for they assume the same value .it is important to stress that in this limit ( ) the model is indistinguishable from the classical sir .these two functions can only have one common point which occurs at .we will consider only the region in which as assumed in our model . to prove our proposition we have to confront the slopes of the functions andshow that the polynomial with smaller slope will always be below the other in the relevant region .( [ ine_1 ] ) can be rewritten as which is always satisfied , provided our assumption .this is an important result that confirms how an endemic state of behavioral change in the population reduces the likelihood and impact of a second epidemic outbreak .we note that such a state will inevitably fade out on a long time scale .this can be modeled with a spontaneous transition acting on a time scale longer than the epidemic process itself .reduction of the epidemic size as a function of and .fixing , , , , and .the three lines are curves of as a function of , keeping constant .we select three different values of which correspond to solid black , red , and dashed lines , respectively .the value is a special case that leads to .it divides the phase space in two different regions .all the values of below are characterized by . in this case for large values of the model is reduced to an sir with reproductive number below and the epidemic is halted .interestingly , this behavior starts in an intermediate regime of .there is a critical value of above which ( i.e. , ) the epidemic size is zero .this transition happens with a jump , as shown by the solid black line .all the values of above are instead characterized by .also in this case the model is reduced to an sir with reproductive number for large values of , but in this case this value is above .this results in a epidemic size that is always non - zero . in this region of parametersno jumps are present ( see the dashed line ) .the values shown in the plot are computed through numerical integration of the equations ., scaledwidth=40.0% ] a further interesting characteristic of this model resides in the reduction of the epidemic size as shown in figure ( [ reduction_ssf ] ) .in this plot we show , evaluated through direct integration of the equations , as a function of and , keeping fixed the other parameters . 
in this casethe self - reinforcement mechanism creates a more complicated phase space that allows for a jump in the epidemic size as increases above a critical value ( see the black solid line in figure ( [ reduction_ssf ] ) ) .this behavior , typical of the first - order phase transitions in cooperative systems , signals a drastic change in the dynamical properties of the behavior - disease model . if , then obviously the fear of the disease is not able to affect a large fraction of the population and the disease spreads as usual in the population , affecting at the end of its progression individuals .if we face two different scenarios or two different regions of separated by the red solid line in figure ( [ reduction_ssf ] ) : * in the case that ( i.e. , the dashed line in figure ( [ reduction_ssf ] ) ) the generation of a finite fraction of individuals in the compartment is not able to halt the epidemic .the behavioral changes are not enough to bring the reproductive number below the epidemic threshold and decreases smoothly because of the epidemic progress with a progressively lower effective reproductive number . *if , ( i.e. , the black solid line in figure ( [ reduction_ssf ] ) ) the individuals that populate the compartment keep the spread of the epidemic below the threshold . in principle , the state and would be possible . in general , the process needs to start with infectious individuals that trigger the first transitions and therefore a small number of individuals are generated .however , there will be a at which the growth of the fear contagion process is faster than the growth of the epidemic with a small . at this pointthe fear contagion process is accelerated by the growth of individuals in while the epidemic spread is hampered by it .the is quickly populated by individuals while the epidemic stops , generating a very small number of .this generates a jump in the amount of individuals that experience the infection as a function of .this is clearly illustrated by figure ( [ rb_0 ] ) where the behavior of both quantities and is plotted close to the transition point .the value at which the transition occurs also depends on the other parameters of the model including and .the extremely rich phase space of this model is important for two reasons : i ) we have a strong reduction in the cumulative number of infected individuals associated with discontinuous transition ; ii ) in the case of a second epidemic the memory of the system shifts the reproductive number towards smaller values .these are very interesting properties of the model due to the self - reinforcing mechanism that clearly creates non - trivial behaviors in the dynamics .we have tried different analytical approaches to get more insight into the phase transition .unfortunately , the discontinuous transition is triggered by model behavior out of the simple linearized initial state and it is extremely difficult to derive any closed analytical expression .an analytic description is beyond the scope of the present classification of behavior - disease models and is the object of future work on the model . and for , , , , , and .the values are obtained by numerical integration of the equations ., scaledwidth=50.0% ]we introduced a general framework with different mechanisms in order to consider the spread of awareness of a disease as an additional contagion process .three mechanisms were proposed . 
in the first , basic modelthe social distancing effects and behavioral changes are only related to the fraction of infected individuals in the population . in the second we modeled the spread of awareness considering only the absolute number of infected individuals as might happen in the case that the information the individuals rely on is mostly due to mass media reporting about the global situation . finally , in the third model we added the possibility that susceptible people will initiate behavioral changes by interacting with individuals who have already adopted a behavioral state dominated by the fear of being infected .this apparently simple interaction allows for the self - reinforcement of fear .we have found that these simple models exhibit a very interesting and rich spectrum of dynamical behaviors .we have found a range of parameters with multiple peaks in the incidence curve and others in which a disease - free equilibrium is present where the population acquires a memory of the behavioral changes induced by the epidemic outbreak . this memoryis contained in a stationary ( endemic ) prevalence of individuals with self - induced behavioral changes .finally , a discontinuous transition in the number of infected individuals at the end of the epidemic is observed as a function of the transmissibility of fear of the disease contagion . at this stagethe study of these properties has been mostly phenomenological and we have focused on minimal models that do not include demographic changes and spontaneous changes in the behavior of individuals such as the fading out of an epidemic over a long time .we should also note that the behavior - disease models we have suggested do not take into account the associated costs of social - distancing measures adopted by individuals , such as societal disruption and financial burden . a game theoretical approach would be well suited in order to account for factors in the decision making process for self - initiated behavioral changes .however , more features added to increase the realism of the models inevitably increase their complexity .moreover , the non - trivial dynamic behavior of the models emphasizes the importance of calibrating those features by appropriate choices of parameter values .unfortunately , in many cases we lack the data necessary for calibrating the behavioral models .the availability of real - world , quantitative data concerning behavioral changes in populations affected by epidemic outbreaks is therefore the major roadblock to the integration of behavior - disease models .any progress in this area certainly has to target novel data acquisition techniques and basic experiments aimed at gathering these data .this work has been partially funded by the nih r21-da024259 award and the dtra-1 - 0910039 award to av .the work has been also partly sponsored by the army research laboratory and was accomplished under cooperative agreement number w911nf-09 - 2 - 0053 .the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s .government .n. m. ferguson , d. a. t. cummings , s. cauchemez , c. fraser , s. riley , a. meeyai , s. iamsirithaworn , and d. s. burke .strategies for containing an emerging influenza pandemic in southeast asia ., * 437*:209 , 2005 .d. balcan , h. hu , b. gonalves , p. bajardi , c. poletto , j.j .ramasco , d. paolotti , n. perra , m. tizzoni , w. 
van den broeck , v. colizza , and a. vespignani . seasonal transmission potential and activity peaks of the new influenza a(h1n1 ) : a monte carlo likelihood analysis based on human mobility . , * 7*:45 , 2009 .p. bajardi , c. poletto , d. balcan , h. hu , b. gonalves , j.j .ramasco , d. paolotti , n. perra , m. tizzoni , w. van den broeck , v. colizza , and a. vespignani .modeling vaccination campaigns and the fall / winter 2009 activity of the new a(h1n1 ) influenza in the northern hemisphere . , *2*:e11 , 2009 .g. cruz - pacheco , l. duran , l. esteva , a.a .minzoni , m. lpez - cervantes , p. panayotaros , a. ahued , and i. villaseor .modelling of the influenza a(h1n1 ) outbreak in mexico city , april - may 2009 , with control sanitary . , * 14*:19254 , 2009 .h. markel , h.b .lipman , j.a .navarro , a. sloan , j.r .michalsen , a.m. stern , and cetron m.s .nonpharmaceutical interventions implemented by us cities during the 1918 - 1919 influenza pandemic . , * 298*:6 , 2007 .fenichel , c. castillo - chavez , m.g .ceddia , g. chowell , p.a .gonzalez parrae , g. j. hickling , g. holloway , r. horan , b. morin , c. perrings , m. springborn , l. velazquez , and c. villalobos .adaptive human behavior in epidemiological models ., * 108*:63066311 , 2011 .
|
the last decade saw the advent of increasingly realistic epidemic models that leverage on the availability of highly detailed census and human mobility data . data - driven models aim at a granularity down to the level of households or single individuals . however , relatively little systematic work has been done to provide coupled behavior - disease models able to close the feedback loop between behavioral changes triggered in the population by an individual s perception of the disease spread and the actual disease spread itself . while models lacking this coupling can be extremely successful in mild epidemics , they obviously will be of limited use in situations where social disruption or behavioral alterations are induced in the population by knowledge of the disease . here we propose a characterization of a set of prototypical mechanisms for self - initiated social distancing induced by local and non - local prevalence - based information available to individuals in the population . we characterize the effects of these mechanisms in the framework of a compartmental scheme that enlarges the basic sir model by considering separate behavioral classes within the population . the transition of individuals in / out of behavioral classes is coupled with the spreading of the disease and provides a rich phase space with multiple epidemic peaks and tipping points . the class of models presented here can be used in the case of data - driven computational approaches to analyze scenarios of social adaptation and behavioral change .
|
entanglement is a uniquely quantum mechanical phenomenon in which quantum systems exhibit correlations above and beyond what is possible for classical systems .entangled systems are thus an important resource for many quantum information processing protocols including quantum computation , quantum metrology , and quantum communication .much work has been done with respect to the identification and quantification of entanglement as well as explorations of entanglement evolution under a range of possible dynamics .an important area of research is to understand the possible degredation of entanglement under decoherence .decoherence , unwanted interactions between the system and environment , is the major challenge confronting experimental implementations of quantum computation , metrology , and communication .decoherence may be especially detrimental to highly non - classical , and hence the most potentially useful , entangled states .a manifestation of the detrimental affects of decoherence on entangled states is entanglement sudden death ( esd ) in which entanglement is completely lost in a finite time despite the fact that the loss of system coherence is asymptotic .this aspect of entanglement has been well explored in the case of bi - partite systems and there are a number of studies looking at esd in multi - partite systems .in addition , there have been several initial experimental esd studies .the esd phenomenon is interesting on a fundamental level and important for the general study of entanglement .however , it is not yet clear what the affect of esd is on quantum information protocols .are different quantum protocols helped , hurt , or indifferent to esd ?previous studies along these lines have been in the area of quantum error correction ( qec ) .an explicit study of the three - qubit phase flip code concludes that this specific code is indifferent to esd . in this paperi take a first step in studying the affect of esd on cluster state quantum computational gates .specifically , i study a four qubit cluster state to see how esd affects its utility as a means of implementing a general single qubit rotation for measurement based ( cluster state ) quantum computation. my approach will be to use an entanglement witness , the negativity and bi - partite concurrence as entanglement metrics and compare the behavior of these metrics under the influence of decoherence to the fidelity of the final state after the attempted single qubit rotation .in addition , i will study the entanglement that remains in the cluster state after two measurements and compare it to the fidelity of the state of the two unmeasured qubits .the cluster state is a specific type of entangled state that can be used as an initial resource for a measurement based approach to quantum computation .a cluster state can be created by first rotating all qubits into the state . desired pairs of qubitsare entangled by applying control phase ( cz ) gates between them . in a graphical picture of a cluster state ,qubits are represented by circles and pairs of qubits that have been entangled via a cz gate are connected by a line .a cluster state with qubits arranged in a two - dimensional lattice , such that each qubit has been entangled with four nearest neighbors , suffices for universal qc . 
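A small numerical sketch of the construction just described: rotate every qubit into |+> = H|0> and apply CZ between nearest neighbours of a linear chain. The qubit-ordering convention and the helper names below are mine.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def cz_on(i, j, n):
    """Controlled-phase gate between qubits i and j (0-indexed, qubit 0 is the
    leftmost tensor factor) embedded in an n-qubit register; this simple
    construction works because CZ is diagonal in the computational basis."""
    dim = 2 ** n
    diag = np.ones(dim, dtype=complex)
    for basis in range(dim):
        bi = (basis >> (n - 1 - i)) & 1
        bj = (basis >> (n - 1 - j)) & 1
        diag[basis] = CZ[2 * bi + bj, 2 * bi + bj]
    return np.diag(diag)

def linear_cluster(n):
    """Rotate every qubit into |+> = H|0>, then apply CZ along the chain."""
    plus = H @ np.array([1.0, 0.0], dtype=complex)
    psi = reduce(np.kron, [plus] * n)
    for q in range(n - 1):
        psi = cz_on(q, q + 1, n) @ psi
    return psi

c4 = linear_cluster(4)   # the four-qubit chain considered in this paper
```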
after constructing the cluster state , any quantum computational algorithmcan be implemented using only single - qubit measurements along axes in the - plane .these processing measurements are performed by column , from left to right , until only the last column is left unmeasured .the last column contains the output state of the quantum algorithm which can be extracted by a final readout measurement .one can view each row of the cluster - state lattice as the evolution of a single logical qubit in time .two ( logical ) qubit gates are performed via a connection between two rows of the cluster state .cz gates in particular are ` built - in ' to the cluster state and simple measurement automatically implements the gate .single qubit rotations can be performed when there is no conncetion between the measured qubit(s ) and qubits in another row .in such a case the logical gate implemented by measurement along an angle in the - plane is , where is the hadamard gate and ( ) ) is a - ( - ) rotation by an angle .the dependence of the logical operation on the outcome of the measurement is manifest in for measurement outcome .an arbitrary single qubit rotation can be implemented via three logical single - qubit rotations of the above sort yielding where are the euler angles of the rotation .for example , by drawing the euler angles according to the haar measure , a random single - qubit rotation can be implemented . as with all quantum computing paradigms , cluster state quantum computation , both during the constuction of the cluster state and during subsequent measurement , are subject to decoherence .we study a four qubit cluster chain , with no interaction between the qubits ( beyond the initial conditional phase gates used to construct the cluster state ) placed in a dephasing environment fully described by the kraus operators where we have defined the dephasing parameter .when all four qubits undergo dephasing we have 16 kraus operators each of the form where and .though all of the below calculations are done with respect to , i implicitly assume that increases with time , , at a rate , such that and only at infinite times .for now i also assume equal dephasing for all four qubits . in optical cluster state construction small ( few qubit )cluster states are fused together to form larger cluster states .the smaller states must be stored until they are needed and may be subject to decohence ( especially dephasing ) . in other cluster state implementations , where complete two - dimensional cluster states can be constructed in just a few steps , any four qubit chain may be attached to at least one other qubit . in this caseour results may not be exact . while entanglement is invariant to single qubit operations , decoherence is not and local operations may play a significant role in the entanglement dynamics of the state .thus , if a cluster state must be stored in a decohering environment one would ideally like to choose a cluster state representation ( within single qubit operations ) that has the greatest immunity to the decoherence so as retain as much entanglement as possible . with thisis mind a secondary aim of this paper is to study two representations of the four qubit chain cluster state and compare the affects of dephasing on these representations .the first representation of the four qubit cluster state is this representation minimizes the number of computational basis states having non - zero contribution . 
the second representation is : where is the single qubit hadamard gate on qubit .this is the state one would get by initially rotating each qubit into the state and applying contolled phase gates , , and .we note that ` connections ' between qubits may be added or removed by single qubit rotations ( though the entanglement stays constant ) thus changing the operation performed via measurement .the four qubit cluster has pure four qubit entanglement .thus , for example , there is no bi - partite concurrence between any of the two qubits . as an entanglement metricwe use the negativity , , for which we will simply use the most negative eigenvalue of the parital transpose of the density matrix .there are a number of inequivalent forms of the negativity for the four qubit cluster state : the partial transpose may be taken with respect to any single qubit , , or the partial transpose may be taken with respect to two qubits : qubits 1 and 2 , , qubits 1 and 3 , , or qubits 1 and 4 , .a further method of monitoring entanglement evolution is via the expectation value of the state with respect to an appropriate entanglement witness .entanglement witnesses are observables with positive or zero expectation value for all states not in a specified class and a negative expectation value for at least one state of the specified class .entanglement witnesses may allow for an efficient means of determining whether entanglement is present in a state ( as opposed to inefficient state tomography ) .this is especially important for experimental implementations as it may be the only practical means of deciding whether or not sufficient entanglement is present in the system .the entanglement witnesses i use are designed to detect cluster states and will be either or depending on the representation .( color online ) entanglement evolution as measured by ] ( solid line ) , ( large dashed line ) , ( chained line ) , ( medium dashed line ) , and ( small dashed line ) for intial states ( left ) and ( right ) as a function of dephasing strength on all four qubits . for intial state there is no esd for or , but esd is exhibited for at .the expectation value of the dephased state with respect to the entanglement witness is equivalent to . for initial state , and esd occurs at .this is the same value for which exhibits esd .esd for is exhibited at .the entanglement witness , fails to detect entanglement for . , title="fig:",width=151 ] our first step is to determine at what dephasing strength , , ( if any ) the four qubit cluster state exhibits esd .the final state of the four qubit system after dephasing is given by where .figure [ c4 ] shows the evolution of our chosen entanglement metrics for initial cluster states as a function of . for the intial state the expectation value of the final state after dephaing with respect to the entanglement witness , ,is given by .thus , cluster state entanglement can be detected by the entanglement witness for .interestingly , the expectation value with respect to the entanglement witness is equal to , the most negative eigenvalue of the partial transpose of the final state with respect to qubits 1 and 2 , which thus exhibits esd at the same value . , , and do not undergo esd . 
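The entanglement metrics used below only require a dephasing channel, a partial transpose and a witness operator. The sketch that follows applies a standard single-qubit dephasing Kraus pair to the four-qubit state c4 built above and returns the most negative eigenvalue of the partial transpose for a chosen bipartition; since the paper's exact parametrization of the dephasing parameter and its witness operators were lost with the inline math, the Kraus convention K0 = diag(1, sqrt(1-p)), K1 = diag(0, sqrt(p)) and the projector-based witness (1/2)*I - |c4><c4| are assumptions of this sketch.

```python
import numpy as np
from functools import reduce

def dephase(rho, p, qubits, n):
    """Independent dephasing of strength p on each listed qubit of an n-qubit
    density matrix.  Kraus pair assumed: K0 = diag(1, sqrt(1-p)),
    K1 = diag(0, sqrt(p)); the paper's parametrization of p may differ."""
    K0 = np.diag([1.0, np.sqrt(1.0 - p)]).astype(complex)
    K1 = np.diag([0.0, np.sqrt(p)]).astype(complex)
    I2 = np.eye(2, dtype=complex)
    for q in qubits:
        kraus = [reduce(np.kron, [K if k == q else I2 for k in range(n)])
                 for K in (K0, K1)]
        rho = sum(E @ rho @ E.conj().T for E in kraus)
    return rho

def min_eig_pt(rho, subsys, n):
    """Most negative eigenvalue of the partial transpose of an n-qubit state
    taken with respect to the qubits listed in subsys."""
    t = rho.reshape([2] * (2 * n))
    for q in subsys:
        t = np.swapaxes(t, q, n + q)
    return np.min(np.linalg.eigvalsh(t.reshape(2 ** n, 2 ** n)))

rho0 = np.outer(c4, c4.conj())              # pure four-qubit cluster from above
W = 0.5 * np.eye(16) - rho0                 # a common projector-based witness
for p in np.linspace(0.0, 1.0, 11):
    rho = dephase(rho0, p, qubits=range(4), n=4)
    print(p, min_eig_pt(rho, [0, 1], 4),    # partition {1,2} vs {3,4}
             min_eig_pt(rho, [0], 4),       # single-qubit partition
             np.real(np.trace(W @ rho)))    # witness expectation value
```

Scanning p in this way locates the dephasing strengths at which each negativity crosses zero, i.e. the sudden-death points discussed next.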
, the lowest eigenvalue of the partial transpose of the final state with respect to one qubit , is given by .the most negative eigenvalues of the partial transpose of the state with respect to qubits 1 and 3 ( ) and 1 and 4 ( ) are four times degenerate and given by .non - zero negativity for only some qubit partitions implies the presence of bound entanglement . for the initial state under dephasing bound entanglementis present in the state for . for the intial state with the most negative eignevalue of the partial transpose of the final state given by , where .both exhibit esd at . for the most negative eigenvalueis given by and is the last negativity to exhibit esd , which occurs when .for the lowest eigenvalue is doubly degenerate and given by .esd is exhibited at which is the same dephasing value at which exhibits esd .again note the presence of bound entanglement for .the expectation value of the final state with respect to the entanglement witness , is given by , thus , the witness fails to detect entanglement for .the evolution of the above entanglement metrics as a function of are shown in fig .as a function of the dephasing strength , and the sum of the first two measurement angles .the third measurement angle does not affect the fidelity .the unmeasured qubit is the final state of the cluster computational logical qubit after performance of an arbitrary single qubit rotation via measurement .there is no sign of any sort of discontinuity that might have been expected due to esd at .,width=188 ] having observed that some sort of esd occurs for both of our chosen representations of the four qubit cluster state , we now seek to determine whether esd affects the utilization of the cluster state as a means of implementing a general single qubit rotation in the measurement based cluster model of quantum computation .to implement such a rotation measurements at an angle with respect to the positive axis in the plane are performed on the first three qubits , , giving a one qubit final state as a function of the measurement angles and the dephasing strength , .we look at the fidelity of the state of the unmeasured qubit as compared to the same state without dephasing : .\ ] ] for convenience we have assumed that the outcome of each measurement is in the chosen measurement basis , such that and no extra rotations are necessary .a measurement of would simply add the necessity for an rotation .we note that the fidelity calculation was done only for initial states and while full process tomography is needed to completely determine the dynamics of the single qubit rotation . for initial state the fidelity can be determined analytically , notice that for this representation , cancels and the other measurement angles contribute only as .the fidelity is plotted in fig . [ f4 ] and shows an oscillating plane steadily and smoothly decreasing toward , but never reaching , .the amplitude of the oscillations decrease at high and low values of and reach a maximum at .we do not see any sort of sharp transition or discontinuity in the behavior of at as one might expect due to the sudden disappearance of for the complete four qubit cluster . as mentioned above, the initial state undergoes esd only with respect to .one may suggest that the reason esd is not manifest in the fidelity degradation of the unmeasured qubit for this initial state is because there is still some entanglement , , which does not exhibit esd , present in the state . 
to explore thiswe now look at the initial state which , under dephasing , exhibits esd for all negativity measures . following the above, we find the fidelity of the final single qubit state as a function of and measurement angles for the intial state to be : , fidelity of the state of the single unmeasured qubit such that an arbitrary rotation has been performed via the cluster state as a function of two of the measurement angles and .the curves are ( gray ) and ( light ) for .the black curve is the fidelity of the state of the single unmeasured qubit with dephasing for the intial state .this is plotted so as to compare the range of fidelities of the two initial states given the same evolution .right : fidelity as a function of dephasing strength and with and the two curves again equal to ( gray ) and ( light ) . as a function of see the overall fidelity decreases steadily toward .5 without any discontinuity.,title="fig:",width=151 ] , fidelity of the state of the single unmeasured qubit such that an arbitrary rotation has been performed via the cluster state as a function of two of the measurement angles and .the curves are ( gray ) and ( light ) for .the black curve is the fidelity of the state of the single unmeasured qubit with dephasing for the intial state .this is plotted so as to compare the range of fidelities of the two initial states given the same evolution .right : fidelity as a function of dephasing strength and with and the two curves again equal to ( gray ) and ( light ) . as a function of see the overall fidelity decreases steadily toward .5 without any discontinuity.,title="fig:",width=151 ] fig .[ f4h ] plots the fidelity as a function of the three measurement angles and ( see figure caption ) . as a function of fidelity decreases almost uniformly approaching , but not reaching , .again we do not see any discontinuity or change of behavior at the dephasing strengths where esd is exhibited for the complete cluster state , and .[ f4h ] ( left ) also shows the fidelity of the state of the single unmeasured qubit for the intial state and dephasing strength as a function of the measurement angles .note that the range of fidelity is the same for both initial states but the maximum and minimum points as a function of measurement angle are different .the equivalent fidelity range for the two cluster representations is in contrast to the disappearance of entanglement which occurs at different dephasing strengths for the two cluster state representations .so far our exploration of fidelity decay and entanglement as functions of dephasing indicate that esd does not affect the utility of a cluster state as a means of implementing an arbitrary logical single qubit rotation .however , the picture changes when we explore fidelities and sudden bi - partite entanglement death of two qubits after having measured the other two qubits . to quantify the bi - partite entanglement between the two unmeasured qubits i use the concurrence , .the concurrence between two qubits and with density matrix is usually defined as the maximum of zero and , where and the are the eigenvalues of in decreasing order . is the pauli matrix of qubit . for the purposes of clearly seeing at what point esdoccurs we will use as the concurrence noting that esd occurs when in finite time ( i. e. before ) . 
.the concurrence is plotted as a function of dephasing strength and measurement axes ( which contribute only as ) .there is no esd exhibited for this concurrence function ., width=188 ] we start with the intial state , with measurements performed on qubits 1 ( along the axis ) and 2 ( along the axis ) . the fidelity of the state of the two remaining qubits as a function of dephasing is given by eq .( [ eqfc4 ] ) , the fidelity of the final state of the fourth qubit after measurement on qubits 1 , 2 , and 3 .this is so because eq .( [ eqfc4 ] ) does not depend on .the concurrence between unmeasured qubits 3 and 4 is a function only of the sum of the two measurement angles , , and , and is given by where : and the concurrence is plotted in fig .we note that the fidelity of the state of the two unmeasured qubits never falls below .5 and no esd is exhibited due to the dephasing .( color online ) fidelity ( dashed line ) of the state of qubits 2 and 4 after measurements on qubits 1 and 3 of the initial state compared to the concurrence ( solid line ) between these same qubits .note that the fidelity crosses .5 ( horizontal light line ) at ( vertical light line ) which is the same dephasing strength where esd is exhibited by the concurrence between these two qubits and by of ., width=188 ] if measurements are carried out on qubits 1 and 3 the fidelity of the state that remains on qubits 2 and 4 with dephasing is completely independent of any measurement angle and is given by .the concurrence between qubits 2 and 4 after the measurements is also independent of measurement angle and is given by .note that the fidelity goes to .5 and the concurrence goes to zero at , the same value for which of the four qubit cluster state exhibits esd and the expectation value of the four qubit state with respect to goes to zero . while there is no discontinuity in the fidelity behavior at the dephasing strength that causes esd , the fidelity does cross the critical value of .5 at the same dephasing strength .thus , esd indicates the severity of the decreased correlation between the dephased and not dephased state .the correlation between these metrics is shown in fig .[ c4f24 ] . also note that in the previous case , where qubits 1 and 2 are measured , there is no exhibition of esd and the fidelity never reaches .5 .measurement on qubits 1 and 4 or qubits 2 and 3 give the exact same results as the measurements on 1 and 3 .( color online ) fidelity ( left ) of the state of qubits 3 and 4 after measurement on qubits 1 and 2 and concurrence ( right ) between those qubits as a function of dephasing strength and the measurement axes angles .top left : fidelity as a function of and for .bottom left : contours of fidelity equal to .5 for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . the fidelity in all cases converges to as goes to zero .top right : concurrence as a function of and for ( bottom ) , ( middle ) , and ( top ) .bottom right : contours of concurrence equal to zero showing where esd occurs ( values of as in previous contour plot ) .the dephasing values at which esd is exhibited approach .704 , the exact value for which the fidelity goes to .5 . 
( color online ) fidelity ( left ) of the state of qubits 3 and 4 after measurement on qubits 1 and 2 and concurrence ( right ) between those qubits as a function of dephasing strength and the measurement axes angles . top left : fidelity as a function of and for . bottom left : contours of fidelity equal to .5 for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . the fidelity in all cases converges to as goes to zero . top right : concurrence as a function of and for ( bottom ) , ( middle ) , and ( top ) . bottom right : contours of concurrence equal to zero showing where esd occurs ( values of as in previous contour plot ) . the dephasing values at which esd is exhibited approach .704 , the exact value for which the fidelity goes to .5 .

we see similar correlations between fidelity and entanglement metrics when measuring certain pairs of qubits of the initial state . the fidelity of the state of qubits 3 and 4 upon measuring qubits 1 and 2 is given by : as shown in fig . [ c4h34 ] , when the fidelity goes to .5 as approaches 0 or or when approaches . this is also the maximum dephasing value for which we find esd of the concurrence between unmeasured qubits 3 and 4 as shown in the figure ( we do not have an analytical solution for the concurrence ) . thus , while once again we do not have a change of fidelity behavior due to esd , the sudden death of concurrence does indicate the lowering of fidelity to the critical value of .5 .

( color online ) fidelity ( left ) of the state of qubits 2 and 4 after measurement on qubits 1 and 3 and concurrence ( right ) between those qubits as a function of dephasing strength and measurement axes angles . top left : fidelity as a function of and for . bottom left : contours of fidelity equal to .5 for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . as and go to zero the fidelity equals .5 contour goes to , the value at which the state exhibits esd for a number of entanglement measures . top right : concurrence as a function of and for . bottom right : contours of concurrence equal to zero showing where esd occurs ( values of as in previous contour plot ) . the maximum dephasing value at which esd is exhibited is .618 .

the fidelity of the state of qubits 2 and 4 upon measuring qubits 1 and 3 is given by : and is plotted in fig . [ c4h24 ] along with the concurrence between unmeasured qubits 2 and 4 . there does not appear to be a correlation between the fidelity and concurrence with respect to these two unmeasured qubits . however , the maximum at which the fidelity crosses .5 , when , is , the exact value where the four qubit state exhibits esd for , , and . the minimum value at which the fidelity crosses .5 is at .568 . the maximum at which esd of concurrence is exhibited is .618 .
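the concurrence calculations referred to throughout this section use the standard wootters formula , and the sudden - death behaviour is easy to reproduce numerically for a toy state . the sketch below applies independent phase damping to a simple two - qubit x - state ( a bell state mixed with separable noise ) and scans for the dephasing strength at which the concurrence first vanishes ; the state , the mixing weight of 0.8 and the resulting threshold near p = 0.5 are illustrative choices of this sketch , not the post - measurement cluster - state marginals or the .568 - .704 values discussed in the text .

```python
import numpy as np

def concurrence(rho):
    """wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    r = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def dephase(rho, p):
    """independent phase damping of strength p on each of the two qubits."""
    kraus = [np.sqrt(1 - p) * np.eye(2),
             np.sqrt(p) * np.diag([1.0, 0.0]),
             np.sqrt(p) * np.diag([0.0, 1.0])]
    out = np.zeros_like(rho, dtype=complex)
    for a in kraus:
        for b in kraus:
            k = np.kron(a, b)
            out += k @ rho @ k.conj().T
    return out

# toy x-state: a bell state mixed with separable noise (weights are arbitrary)
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
noise = np.diag([0.0, 0.5, 0.5, 0.0]).astype(complex)
rho0 = 0.8 * bell + 0.2 * noise

for p in np.linspace(0.0, 1.0, 201):
    if concurrence(dephase(rho0, p)) < 1e-10:
        print(f"concurrence first vanishes at dephasing strength p ~ {p:.3f} (esd)")
        break
```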
( color online ) fidelity ( left ) of the state of qubits 2 and 3 after measurement on qubits 1 and 4 and concurrence ( right ) between those qubits as a function of dephasing strength and measurement axes angles . top left : fidelity as a function of and for . bottom left : contours of fidelity equal to .5 for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . as and go to zero the fidelity equals .5 contour goes to , the value at which the state exhibits esd for a number of entanglement measures . top right : concurrence as a function of and for ( bottom ) , ( middle ) , and ( top ) . bottom right : contours of concurrence equal to zero showing where esd occurs for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . the maximum dephasing value at which esd is exhibited is .586 , which is the value at which esd is exhibited for the state .

the fidelity of the state of qubits 2 and 3 upon measuring qubits 1 and 4 is given by : and is plotted in fig . [ c4h23 ] along with the concurrence between unmeasured qubits 2 and 3 . as in the previous case , esd may be an indicator of fidelity . the maximum at which the fidelity crosses .5 , which occurs for , is , the exact value where the four qubit state exhibits esd for a number of entanglement measures . the minimum at which the fidelity crosses .5 is , which is also equal to the maximum at which esd of concurrence is exhibited . though the initial state in this example was , this is the value at which esd occurs for the initial state . such cross - correlation between the different cluster state representations can come from the measurements : measuring some of the qubits at certain angles transforms the state from one representation to the other .

( color online ) fidelity ( left ) of the state of qubits 1 and 4 after measurement on qubits 2 and 3 and concurrence ( right ) between those qubits as a function of dephasing strength and the measurement axes angles . top left : fidelity as a function of and for . bottom left : contours of fidelity equal to .5 for ( chained line ) , ( dotted line ) , ( dashed line ) , and ( solid line ) . as goes to the fidelity equals .5 contour goes to , when and go to zero the fidelity equals .5 contour goes to , the value at which the state exhibits esd for a number of entanglement measures . top right : concurrence as a function of and for . bottom right : contours of concurrence equal to zero showing where esd occurs ( same values as above ) . these curves are equivalent to those of the fidelity equals .5 curves .

the fidelity of the state of qubits 1 and 4 upon measuring qubits 2 and 3 is given by : the concurrence between unmeasured qubits 1 and 4 is given by . fig . [ c4h14 ] demonstrates the strong correlation between the dephasing value where esd is exhibited and the value where the fidelity goes to .5 . furthermore , the highest dephasing possible where esd occurs ( and when the fidelity goes to .5 ) is at , the same value for which we find esd for the state . again this points to the possibility of the measurement ` transforming ' between the two representations of the cluster state . the lowest dephasing at which esd occurs ( or where the fidelity goes to .5 ) is at .568 . all the above esd and fidelity results are summarized in table [ table ] below .
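the full - state esd values quoted above and collected in the summary table below refer to entanglement measures of the dephased four - qubit state itself . as a sketch of how such a scan can be set up , the code below builds a four - qubit linear cluster state , applies independent phase damping to every qubit and follows the negativity across the ( qubits 1,2 ) | ( qubits 3,4 ) cut ; the chosen representation , bipartition and dephasing parametrisation are assumptions of this sketch , so any threshold it reports need not coincide with the .586 , .618 or .704 values found in the paper .

```python
import numpy as np

def cz(state, i, j, n):
    """apply a controlled-z between qubits i and j of an n-qubit state vector."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[i], idx[j] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def dephase_all(rho, p, n):
    """independent phase damping of strength p applied to each of the n qubits."""
    kraus = [np.sqrt(1 - p) * np.eye(2),
             np.sqrt(p) * np.diag([1.0, 0.0]),
             np.sqrt(p) * np.diag([0.0, 1.0])]
    out = rho
    for q in range(n):
        acc = np.zeros_like(rho)
        for k in kraus:
            op = np.eye(1)
            for r in range(n):
                op = np.kron(op, k if r == q else np.eye(2))
            acc += op @ out @ op.conj().T
        out = acc
    return out

def negativity_12_34(rho):
    """negativity across the (qubits 1,2)|(qubits 3,4) bipartition of a 4-qubit state."""
    r = rho.reshape(4, 4, 4, 4)                    # indices (a, b, a', b')
    rpt = r.transpose(0, 3, 2, 1).reshape(16, 16)  # partial transpose on subsystem b
    evals = np.linalg.eigvalsh(rpt)
    return float(-evals[evals < 0].sum())

plus = np.ones(2) / np.sqrt(2)
psi = plus
for _ in range(3):
    psi = np.kron(psi, plus)
for i, j in [(0, 1), (1, 2), (2, 3)]:              # linear (chain) cluster state
    psi = cz(psi, i, j, 4)
rho0 = np.outer(psi, psi)

for p in np.linspace(0.0, 0.99, 100):
    if negativity_12_34(dephase_all(rho0, p, 4)) < 1e-10:
        print(f"negativity across 12|34 first vanishes at p ~ {p:.2f}")
        break
else:
    print("no sudden death of the 12|34 negativity before p = 0.99 for this construction")
```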
[ table ] : summary of the esd and fidelity results for the four measurement choices considered above .

finally , we note that in all of the above we have made a number of assumptions . first , we have assumed that the initial cluster state is constructed perfectly . one way to relax this assumption is by looking at initial states of the form : , where . preliminary explorations using this starting state indicate that there is merely a shift in entanglement values downwards but that there is no fundamental change in the behavior of the entanglement . another assumption is that the dephasing strength is equal on all four qubits . this is unrealistic for a number of reasons but especially so if not all of the measurements are performed at the same time ( non - simultaneous measurements are necessary when trying to implement a given logical rotation because the measurement axes for a given qubit depend on the outcome of the measurement on the previous qubit ) . a way to relax this assumption without significantly increasing the number of variables in the problem may be to add a term to the dephasing strength , where represents the dephasing that occurs during the time between subsequent measurements and is an integer .

in conclusion , i have studied the entanglement evolution of a four qubit ( chain ) cluster state in a dephasing environment . specifically , i have looked at two representations of the state differing by single qubit rotations . both of these representations exhibit entanglement sudden death under sufficient dephasing . the difference in the dephasing strength at which this occurs may be important when deciding in what representation to store a cluster state . the issue of storage is especially relevant during the construction of optical cluster states but may have relevance to other implementations as well . i asked whether esd affects the utility of the cluster state in implementing a general single qubit rotation in the cluster state measurement based quantum computation paradigm . judging from the fidelity decay of the single unmeasured qubit as a function of dephasing strength and the measurement axes angles of the three measurements , the answer would seem to be no . i see no indication in the fidelity behavior that esd has taken place . instead the fidelity decreases smoothly with increased dephasing with no discontinuities or dramatic changes in behavior . however , there are clear correlations ( sometimes total and sometimes at certain limits ) between the fidelity of the state of two qubits remaining from the four qubit cluster state after measurement on the other two qubits , and esd of the negativity for the entire cluster state or esd of the concurrence between the said two unmeasured qubits . this correlation does not appear as a discontinuity in the fidelity decay behavior but instead is manifest by the fidelity crossing the critical value of .5 . thus , we could say that esd may be an _ indicator _ of how badly a certain cluster state operation was carried out . however , this is not the same as saying that esd itself negatively affects quantum information protocols . the question of whether esd affects quantum information protocols requires further study and may be related to the more general issue of the role of entanglement in quantum computation .

m nielsen , i. chuang , _ quantum information and computation _ ( cambridge university press , cambridge , 2000 ) . for a recent review see r. horodecki , p. horodecki , m. horodecki , k. horodecki , arxiv : quant - ph/0702225 . c. simon and j. kempe , phys . a * 65 * , 052327 ( 2002 ) ; w.
dur and h .- j .briegel , phys .lett . * 92 * 180403 ( 2004 ) ; m. hein , w. dur , and h .- j .briegel , phys .a * 71 * , 032350 ( 2005 ) ; s. bandyopadhyay and d.a .lidar , phys .a * 72 * , 042339 ( 2005 ) ; o. guhne , f. bodosky , and m. blaauboer , phys . rev .a * 78 * , 060301 ( 2008 ) .l. diosi , in _ irreversible quantum dynamics _ , edited by f. benatti and r. floreanini , lect .notes phys .* 622 * , ( springer - verlag , berlin ) 157 ( 2003 ) ; p.j .dodd and j.j .halliwell , phys . rev . a * 69 * , 052105 ( 2004 ) . t. yu and j.h .eberly , phys .lett . * 93 * , 140404 ( 2004 ) ; _ ibid . _ * 97 * , 140403 ( 2006 ) . i. sainz and g. bjork , phys .a * 76 * , 042313 ( 2007 ) .l. aolita , r. chaves , d. cavalcanti , a. acin , and l. davidovich , phys . rev* 100 * , 080501 ( 2008 ) .lopez , g. romero , f. lastra , e. solano , and j.c .retamal , phys .* 101 * , 080503 ( 2008 ) .m. yonac , t. yu , j.h .eberly , j. phys .b * 39 * , 5621 ( 2006 ) ; _ ibid . _* 40 * , 545 ( 2007 ) .i. sainz and g. bjork , phys .rev a * 77 * , 052307 ( 2008 ) .weinstein , phys .rev a * 79 * , 0123318 ( 2009 ) .almeida , _ et al ._ , science * 316 * , 579 ( 2007 ) ; j. laurat , k.s .choi , h. deng , c.w .chou , and h.j .kimble , phys .* 99 * , 180504 ( 2007 ) ; a. salles , f. de melo , m.p .almeida , m. hor - meyll , s.p .walborn , p.h .souto ribeiro , and l. davidovich , phys .a * 78 * , 022322 ( 2008 ) . h. j. briegel and r. raussendorf , phys .lett . * 86 * , 910 ( 2001 ) .r. raussendorf and h. j. briegel , phys .rev . lett . * 86 * , 5188 ( 2001 ) .r. raussendorf , d. e. browne , and h. j. briegel , phys .a * 68 * , 022312 ( 2003 ) .browne and t. rudolph , phys .lett . , * 95 * , 010501 , ( 2005 ) .weinstein , c.s .hellberg , and j. levy , phys .a * 72 * , 020304 ( 2005 ) ; y.s .weinstein and c.s .hellberg , phys .lett . * 98 * , 110501 ( 2007 ) ; j.q .you , x. wang , t. tanamoto , and f. nori , phys .rev . a * 75 * , 052319 ( 2007 ) ; l. jiang , a.m. rey , o. romero - isert , j.j .garcia - ripoll , a. sanpera , and m.d .lukin , arxiv:0811.3049 .p. walther , k.j .resch , t. rudolph , e. schenk , h. weinfurter , v. vedral , m. aspelmeyer , and a. zeilinger , nature ( london ) * 434 * , 169 ( 2005 ) ; g. gilbert , m. hamrick , and y.s .weinstein , phys .a * 73 * , 064303 ( 2006 ) .g. vidal and r.f .werner , phys .a * 65 * 032314 ( 2002 ) .terhal , phys .a * 271 * , 319 ( 2000 ) ; m. lewenstein , b. kraus , j.i .cirac , and p. horodecki , phys .a * 62 * , 052310 ( 2000 ) . g. toth and o. guhne , phys .94 * , 060501 ( 2005 ) .s. hill and w.k .wootters , phys .lett * 78 * , 5022 ( 1997 ) .
|
i explore the entanglement evolution of a four qubit cluster state in a dephasing environment concentrating on the phenomenon of entanglement sudden death ( esd ) . specifically , i ask whether the onset of esd has an effect on the utilization of this cluster state as a means of implementing a single qubit rotation in the measurement based cluster state model of quantum computation . to do this i compare the evolution of the entanglement to the fidelity , a measure of how accurately the desired state ( after the measurement based operations ) is achieved . i find that esd does not cause a change of behavior or discontinuity in the fidelity but may indicate when the fidelity of certain states goes to .5 .
|
let be a sequence of random samples obtained from an unknown probability distribution .the corresponding random measure from samples is the monte carlo estimator of , where is the support of .the random measure is a maximum likelihood estimator of and is consistent : for all -measurable sets , sometimes estimating the entire distribution is of intrinsic inferential interest .in other cases , this may be desirable if there are no limits on the functionals of which might be of future interest .alternatively , the random sampling might be an intermediary update of a sequential monte carlo sampler no for which it is desirable that the samples represent the current target distribution well at each step .pointwise monte carlo errors are inadequate for capturing the overall rate of convergence of the realised empirical measure to .this consideration is particularly relevant if is an infinite mixture of distributions of unbounded dimension : in this case it becomes necessary to specify a degenerate , fixed dimension function of interest before monte carlo error can be assessed .this necessity is potentially undesirable , since the assessment of convergence will vary depending on which function is selected and that choice might be somewhat arbitrary .the work presented here considers sampling multiple target distributions in parallel .this scenario is frequently encountered in real - time data processing , where streams of data pertaining to different statistical processes are collected and analysed in fixed time - window updates .decisions on how much sampling effort to allocate to each target will be made sequentially , based on the apparent relative complexity of the targets , as higher - dimensional , more complex targets intuitively need more samples to be well represented .the complexities of the targets will not be known _ a priori _ , but can be estimated from the samples which have been obtained so far . as a consequence , the size of the sample drawn from any particular target distribution will be a realisation of a random variable , , determined during sampling by a random stopping rule governed by the history of samples drawn from that target and those obtained from the other targets . to extend the applicability of monte carlo error to entire probability measures , the following question is considered : if a new sample of random size were drawn from , how different to might the new empirical measure be ? if repeatedly drawing samples in this way led to relatively similar empirical measures , this suggests that the target is relatively well represented by samples ; whereas if the resulting empirical measures were very different , then there would be a stronger desire to obtain a ( stochastically ) larger number of samples . to formally address this question , a new _ monte carlo divergence error _is proposed to measure the expected distance between an empirical measure and its target .correctly balancing sample sizes is a non - trivial problem . apparently sensible , but _ad hoc _ , allocation strategies can lead to extremely poor performance , much worse than simply assigning the same number of samples to each target . 
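the consistency statement above can be seen directly in a few lines : for a discrete target , the empirical probability that the monte carlo estimator assigns to a fixed set converges to the target probability as the sample size grows . the target distribution and the set used below are arbitrary choices for this sketch .

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.15, 0.05])   # an arbitrary discrete target
a = [1, 3]                             # an arbitrary measurable set of interest

for n in [10, 100, 1000, 10000, 100000]:
    x = rng.choice(len(p), size=n, p=p)
    p_hat = np.isin(x, a).mean()       # empirical measure of the set a
    print(n, round(p_hat, 4), "target:", p[a].sum())
```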
here , a sample - based estimate of the proposed monte carlo divergence error of an empirical measure is derived ; these errors are combined across samplers through a loss function , leading to a fully - principled , sequential sample allocation strategy .section [ sec : mc_convergence ] formally defines and justifies monte carlo divergence as an error criterion .section [ sec : rival_samplers ] examines two different loss functions for combining sampler errors into a single performance score .section [ sec : other_methods ] introduces some alternative sample size selection strategies ; some are derived by adapting related ideas in the existing literature , and some are _ ad hoc_. the collection of strategies are compared on univariate and variable dimension target distributions in section [ sec : examples ] before a brief discussion in section [ sec : discussion ] .in this section the rate of convergence of the empirical distribution to the target will be assessed by information theoretic criteria . in information theory , it is common practice to discretise distributions of any continuous random variables ( see * ? ? ? * ) . without this discretisation ( or some alternative smoothing )the intersection of any two separately generated sets of samples would be empty , and distribution - free comparisons of their empirical measures would be rendered meaningless : for example , the kullback - leibler divergence between two independent realisations of will be always be infinite .when a target distribution relates to that of a continuous random variable , a common discretisation of both the empirical measure and notionally the target will be performed . for the rest of the article ,both and should be regarded as suitably discretised approximations to the true distributions when the underlying variables are continuous .when there are multiple distributions , the same discretisation will be used for all distributions . for univariate problems a large but finite grid with fixed spacing will be used to partition into bins ; for mixture problems with unbounded dimension , the same strategy will be used for each component of each dimension , implying an infinite number of bins . later in section [ sec : bin_width ] , consideration will be given to how the number of bins for each dimension should be chosen .for a discrete target probability distribution , let be the estimator for a prospective sample of size to be drawn from , and let be the same estimator when the sample size is a random stopping time . for , the monte carlo divergence error of the estimator will be defined as where is shannon s entropy function ; recall that if is a probability mass function , note that is the maximum likelihood estimator of .the monte carlo divergence error has a direct interpretation : it is _ the expected kullback - leibler divergence of the empirical distribution of a sample of size from the target _ , and therefore provides a natural measure of the adequacy of for estimating .the monte carlo divergence error of the estimator when is a random stopping time is defined as the expectation of with respect to the stopping rule , or equivalently where the expectation in is now with respect to both and the stopping rule .this more general definition of monte carlo divergence error should be interpreted as _ the expected kullback - leibler divergence of the empirical distribution of a sample of random size from the target . 
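the definition just given can be checked by brute force for a small discrete target : repeatedly draw a sample of size n , bin it , and average the kullback - leibler divergence of the binned empirical distribution from the target . the target , sample sizes and number of repetitions below are arbitrary , the fixed - n case is shown for simplicity ( a random stopping rule would simply be drawn inside the loop ) , and the ( k - 1)/(2n ) comparison in the final line is the usual first - order chi - squared approximation rather than a formula taken from this paper .

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.2, 0.07, 0.03])   # discrete target on k = 5 bins

def kl(p_hat, p):
    """kullback-leibler divergence kl(p_hat || p); empty bins contribute zero."""
    nz = p_hat > 0
    return float(np.sum(p_hat[nz] * np.log(p_hat[nz] / p[nz])))

def mc_divergence_error(n, reps=5000):
    """expected kl divergence of the empirical distribution of n samples from p."""
    total = 0.0
    for _ in range(reps):
        counts = np.bincount(rng.choice(len(p), size=n, p=p), minlength=len(p))
        total += kl(counts / n, p)
    return total / reps

for n in [20, 50, 100, 500]:
    # (k - 1) / (2 n) is the usual first-order approximation to this error
    print(n, round(mc_divergence_error(n), 4),
          "approx:", round((len(p) - 1) / (2 * n), 4))
```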
to provide a sampling based justification for this definition of monte carlo divergence error , for consider the empirical distribution estimates which would be obtained from independent repetitions of sampling from , where the sample size of each run is a random stopping time from the same rule .the jensen - shannon divergence of , measures the variability in these distribution estimates by calculating their average kullback - leibler divergence from the closest dominating measure , which is their average .the jensen - shannon divergence is a popular quantification of the difference between distributions , and its square root has the properties of a metric on distributions .just as monte carlo variance is the limit of the sample variance of sample means as , the monte carlo divergence error defined in is easily seen to be the limit of as : by the strong law of large numbers , and , ] , and let be the bin probabilities .the bayesian formulation of this histogram treats the probabilities as unknown , and a conjugate dirichlet prior distribution based on a lebesgue base measure with confidence level suggests . for samples , the marginal likelihood of observing bin counts under this modelis \prod_{i=1}^k\gamma\{\alpha(b - a)/k+n_i\}.\label{eq : density_ml}\ ] ] using standard optimisation techniques , identifying the pair that jointly maximise suggests that serves as a good number of bins for a regular histogram of the observed data .to calibrate the performance of the proposed method , some variations of the strategy for selecting sample sizes are considered .this section considers some alternative measures of the monte carlo error of a sampler , to be used in place of the divergence estimates or in the algorithm of section [ sec : alg ] . in the context of particle filters, proposed a method for choosing the number of samples required from a single sampler to guarantee that , under a chi square approximation , with a desired probability the kullback - leibler divergence between the binned empirical and true distributions does not exceed a certain threshold .this was achieved by noting an identity between times this divergence and the likelihood ratio statistic for testing the true distribution against the empirical distribution , assuming the true distribution had the same number of bins , , as the observed empirical distribution .since the likelihood ratio statistic should approximately follow a chi - squared distribution with degrees of freedom , this suggested a sample size of where is the quantile of that distribution .adapting this idea to the algorithm of section [ sec : alg ] simply requires a rearrangement of to give the approximate error as a function of sample size , this error estimate can be substituted directly into the algorithm in place of the monte carlo divergence error estimate to provide an alternative scheme for choosing sample sizes when using loss function .the same ( arbitrary ) value of must be used for each rival sampler , and here this was specified as although the results are robust to different choices . by the central limit theorem, the chi - squared distribution quantiles grow near - linearly with the degrees of freedom parameter for , so it should be noted that , which depends only on the number of bins , has much similarity , and almost equivalence , with the miller - madow estimate of entropy error cited in section [ sec : mc_divergence_estimation ] . 
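the bin - number selection described above can be sketched as follows . because the extraction has lost most of the marginal likelihood expression , the version below reinstates the standard dirichlet - multinomial form for a piecewise - constant density on [ a , b ] with per - bin prior weight alpha ( b - a )/k ; the ( k/(b - a ))^n prefactor and the grid of candidate ( k , alpha ) values searched over are assumptions of this sketch rather than details taken from the paper .

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(x, a, b, k, alpha):
    """log marginal likelihood of data x under a k-bin histogram density on [a, b]
    with a symmetric dirichlet prior of total mass alpha * (b - a)."""
    n = len(x)
    counts = np.histogram(x, bins=k, range=(a, b))[0]
    conc = alpha * (b - a) / k                  # prior weight per bin
    return (n * np.log(k / (b - a))
            + gammaln(alpha * (b - a)) - gammaln(alpha * (b - a) + n)
            + np.sum(gammaln(conc + counts) - gammaln(conc)))

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=2000)   # arbitrary positive-valued data
a, b = 0.0, float(x.max())

best = max((log_marginal_likelihood(x, a, b, k, alpha), k, alpha)
           for k in range(2, 200)
           for alpha in [0.01, 0.1, 1.0, 10.0])
print("selected number of bins:", best[1], "with alpha =", best[2])
```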
by the reasoning given in section [ sec : mc_divergence_estimation ] , use of this error function should show some similarity in performance with the proposed method , but be less robust to distinguishing differences in distributions beyond the number of non - empty bins .recall from section [ sec : sequential ] that the sequential allocation strategy for minimising the loss function requires an estimate of the expected reduction in error which would be achieved from obtaining another observation from a sampler .since this error criterion depends entirely upon the number of non - empty bins , in this case an estimate is required for the probability of the new observation falling into a new bin . a simple empirical estimate of the probability of falling into a new bin is provided by the proportion of samples after the first one that have fallen into new bins , given by .note that this estimate will naturally carry positive bias , since discovery of new bins should decrease over time , and so a sliding window of this quantity might be more appropriate in some contexts . as a convergence diagnostic for transdimensional samplers , proposed running replicate sampling chains for the same target distribution , and comparing the variability across the chains of the empirical distributions of a distance - based function of interest .the method requires that the target be a probability distribution for a point process , and maps multidimensional sampled tuples of _ events _ from to a fixed - dimension space . specifically , a set of _ reference points _ are chosen , and for any sampled tuple of events the distance from each reference point to the closest event in the tuple is calculated .thus is summarised by a -dimensional distribution , where is the number of reference points in .one example considered in is a bayesian continuous - time changepoint analysis of a changing regression model with an unknown number of changepoint locations .a variation of this example is analysed in section [ sec : results_multivariate ] in this article , where instead the analysis will be for the canonical problem of detecting changes in the piecewise constant intensity function ] is used .the convergence diagnostic of did not formally provide a method for calibrating error or selecting sample size . here , to compare the performance of the proposed sample size algorithm of section [ sec : alg ] , the sum across the reference points of the monte carlo variances of either of these functions of interestis used as the error criterion in the algorithm . to demonstrate the value of the sophisticated sample size selection strategies given above , two simple strategies which have similar motivation but are otherwise _ ad hoc _are included in the numerical comparisons of section [ sec : examples ] .these strategies are now briefly explained .the _ extent _ of a distribution is the exponential of its entropy , and was introduced as a measure of spread by .a simple strategy might be to choose sample size proportional to the estimated squared extent of , .note that the gaussian distribution , has an extent which is directly proportional to the standard deviation , and so in the univariate gaussian example which will be considered in section [ sec : results_univariate ] , this sample allocation strategy will be approximately equivalent to the optimal strategy when minimising the maximum monte carlo error of the sample means ( _ cf ._ section [ sec : loss ] ) . 
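the reference - point construction described above is simple to write down : each sampled tuple of events ( changepoint locations of varying number ) is mapped to the vector of distances from a fixed set of reference points to the nearest event , and the across - sample variance of that vector , divided by the sample size , gives a fixed - dimension monte carlo error summary . the reference points , the toy samples and the convention adopted for an empty tuple are choices made for this sketch .

```python
import numpy as np

def distance_summary(sample, refs, t_max):
    """map a variable-length tuple of event locations to the distance from each
    reference point to its nearest event (t_max is used when the tuple is empty)."""
    if len(sample) == 0:
        return np.full(len(refs), t_max)
    events = np.asarray(sample, dtype=float)
    return np.abs(refs[:, None] - events[None, :]).min(axis=1)

t_max = 1.0
refs = np.linspace(0.0, t_max, 11)          # reference points on [0, t_max]

# toy posterior samples of changepoint locations (variable dimension)
samples = [np.array([0.35]), np.array([0.3, 0.7]),
           np.array([]), np.array([0.4, 0.42, 0.9])]

d = np.array([distance_summary(s, refs, t_max) for s in samples])
mc_variance = d.var(axis=0, ddof=1) / len(samples)   # monte carlo variance at each reference point
print("summed monte carlo variance:", float(mc_variance.sum()))
```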
present a class of convergence tests for monitoring the stationarity of the output of a sampler from a single run which operate by splitting the current sample in two and quantifying the difference between the empirical distributions of the first half of the sample , and the second half of the sample . for univariate samplers the kolmogorov - smirnov test , for example ,is used to obtain a p - value as a measure of evidence that the second half of the sample is different from the first , and hence neither half is adequately representative of the target .the test statistics which are used condition on the sample size , and so the sole purpose of these procedures is to investigate how well the sampler is mixing and exploring the target distribution . to adapt these ideas to the current context, any mixing issues can first be discounted by splitting the sample for each target in half by allocating the samples into two groups alternately , so that the distribution of , say , can be compared with the distribution of .this method of splitting up the sample is also computationally much simpler in a streaming context , as incrementing the sample size does not change the required groupings of the existing samples .let and be the respective empirical distributions of these two subsamples .a crude variation on using the monte carlo divergence error criteria of is to estimate the error of the sampler by the jensen - shannon divergence of and , if sufficiently many samples have been taken for to be a good representation of the target distribution , then both halves of the sample should also provide reasonable approximations of the target and therefore have low divergence between one another . as in section[ sec : efficient_calculation ] , calculation of during sampling can be updated at each iteration very quickly .let be the bin in which the observation falls .then , for example , updating the first term of simply requires methodology from this article is demonstrated on three different data problems .the first two examples assume only two or three data processes respectively , to allow a detailed examination of how the allocation strategies differ .then finally a larger scale example with 400 data processes is considered , derived from the ieee vast 2008 challenge concerning communication network anomaly detection .two straightforward , synthetic examples are now considered .the first is a univariate problem of fixed dimension with two gaussian target distributions , and the second is a transdimensional problem of unbounded dimension , concerning the changepoints in the piecewise constant intensity functions of three inhomogeneous poisson processes . in both examples, it is assumed that _ a priori _ nothing is known about the target distributions and that computational limitations determine that only a fixed total number of samples can be obtained from them overall , which will correspond to an average of samples per target distribution .both loss functions from section [ sec : loss ] are considered , measuring either the maximum error or average error across the target samplers . 
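the split - sample diagnostic described above , allocating alternate draws to two halves and measuring the jensen - shannon divergence between their binned empirical distributions , takes only a few lines . the sketch below computes it in batch form rather than with the incremental update mentioned in the text , for a sampler output and bin grid chosen arbitrarily .

```python
import numpy as np

def jsd(p, q):
    """jensen-shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log(a[nz] / b[nz])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(3)
x = rng.normal(size=5001)                    # draws from the sampler
bins = np.linspace(-5, 5, 101)               # common bin grid

odd, even = x[0::2], x[1::2]                 # alternate draws into two halves
p = np.histogram(odd, bins)[0] / np.histogram(odd, bins)[0].sum()
q = np.histogram(even, bins)[0] / np.histogram(even, bins)[0].sum()
print("split-half jsd:", round(jsd(p, q), 5))
```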
for each loss function , the following sample size allocation strategies are considered : 1 .`` fixed '' the default strategy , samples are obtained from each sampler .dynamically , aiming to minimise the expected loss , with sampling error estimated using the following methods : 1 .`` grassberger '' monte carlo divergence error estimation from section [ sec : mc_divergence_estimation ] ; 2 .`` fox '' the goodness of fit statistic of section [ sec : fox ] ; 3 .`` sisson '' ( only for the transdimensional example ) the monte carlo variances of one of the two candidate fixed dimension functions from section [ sec : foi ] evaluated at 100 equally spaced reference points ( denoted `` sisson - i '' , for the intensity function , `` sisson - n '' for the distance to nearest changepoint function ) ; 4 .`` extent '' and `` jsd '' two _ ad hoc _ criteria from section [ sec : ad_hoc_strategies ] .each sample size allocation strategy is evaluated over a large number of replications , where or respectively in the two examples .good performance of a sample allocation strategy is measured by the chosen loss function when applied to the realised monte carlo divergence error for each sampler .good estimates of the true values of are obtained by calculating the jensen - shannon divergence of the monte carlo empirical distributions obtained from the runs ( _ cf ._ section [ sec : mc_divergence ] ) .note that in all simulations , the same random number generating seeds are used for all strategies , so that all strategies are making decisions based on exactly the same samples . in the first example, a total of 100,000 samples are drawn from two gaussian distributions , where one gaussian has twice the standard deviation of the other : , note that if these two distributions were considered on different scales they would be equivalent ; but when losses in estimating the distributions are measured on the same scale , then they are not equivalent . for discretising the distributions ,the following bins were used : , , , , , , .this corresponds to an interior range of plus or minus five times the largest of the standard deviations of the two targets , divided into 100 evenly spaced bins , along with two extra bins for the extreme tails .results are robust to allowing wider ranges or more bins , but are omitted from presentation . for further validation ,a simple experiment was conducted using the method from section [ sec : bayesian_bin_width ] on ] with different piecewise constant intensity functions . in each case , prior beliefs for the intensity functions were specified by a homogeneous poisson process prior distribution on the number and locations of the changepoints and independent , conjugate gamma priors on the intensity levels .the three rival target distributions for inference are the bayesian marginal posterior distributions on the number and locations of the changepoints for each of the three processes .each of the three simulated poisson processes had two changepoints , located at and .the intensity levels of the three processes were respectively : , , , so the processes differed only through magnitudes of intensity changes . to make the target distributions closer andtherefore make the inferential problem harder , in each case the prior expectation for the number of changepoints was set to 1 . 
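as a schematic of how the dynamic strategies listed above operate , the sketch below runs the two - gaussian setting of the first example under the worst - case loss : each new draw goes to the sampler whose current error estimate is largest . it uses the chi - squared ( ' fox ' - style ) error approximation of the previous section as the plug - in estimate , since the entropy - bias estimator is detailed in a part of the paper not shown here ; the standard deviations , initial sample sizes , bin grid , confidence level and total budget are stand - ins for the stripped values .

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
targets = [lambda m: rng.normal(0.0, 1.0, m),    # stand-ins for the two samplers
           lambda m: rng.normal(0.0, 2.0, m)]
bins = np.linspace(-10.5, 10.5, 101)             # common discretisation grid
delta = 0.05
quantile = {k: chi2.ppf(1 - delta, k) for k in range(1, len(bins))}  # cached chi2 quantiles

def error(counts, m):
    """chi-squared error approximation: quantile over 2m, degrees of freedom set
    by the number of occupied bins minus one."""
    k = max(int((counts > 0).sum()) - 1, 1)
    return quantile[k] / (2.0 * m)

counts = [np.histogram(t(500), bins)[0] for t in targets]   # initial allocation
n = [500, 500]
total = 20000

while sum(n) < total:
    i = int(np.argmax([error(counts[j], n[j]) for j in (0, 1)]))  # worst sampler under max-loss
    counts[i] += np.histogram(targets[i](1), bins)[0]
    n[i] += 1

print("final sample sizes:", n)   # the wider target should receive more samples
```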
for illustration of the differences in complexity of the resulting posterior distributions for the changepoints of the three processes , large sample estimates of the true , discretised posterior distributions are shown in fig .[ fig : targets ] , based upon one trillion reversible jump markov chain monte carlo samples .note that the different target distributions place different levels of mass on the number of changepoints , and therefore on the dimension of the problem .in all cases there is insufficient information to strongly detect both changepoints , and so much of the mass of the posterior distributions is localised at a single changepoint at , the midpoint of the two true changepoints . additionally , fig .[ fig : fois ] shows the posterior variance of two functions of interest identified in section [ sec : foi ] for ] was divided into 50 equally sized bins ; while for a single dimension this would be fewer bins than were used in the previous section , here the bins are applied to each dimension of a mixture model of unbounded dimension , meaning that actually a very large number of bins are visited ; computational storage issues can begin to arise when using an even larger number of bins , simply through storing the frequency counts of the samples .[ fig:3target_sample_sizes ] shows the distributions of sample sizes obtained from a selection of the strategies over repetitions , and tables [ tab : transdim_losses_max ] and [ tab : transdim_losses_ave ] show results from the different strategies examined for these more complex transdimensional samplers .performance is similar to the previous section , with the grassberger entropy bias correction method performing best .+ + for this transdimensional sampling example , it also makes sense to consider the fixed dimension function of interest methods of , using the mean intensity function of the poisson process or the distance to nearest changepoint , each evaluated at 100 equally spaced grid points on $ ] .the monte carlo variances used in these strategies estimate the variances displayed in the plots of fig .[ fig : fois ] at the reference points , divided by the current sample size .the performance of these fixed dimensional strategies is particularly poor under the loss function .importantly , it should also be noted that the sample sizes and performance vary considerably depending upon which of the two arbitrary functions of the reference points are used .this final example now illustrates how the method performs in the presence of a much larger number of target distributions , in the context of network security .the ieee vast 2008 challenge data are synthetic but realistically generated records of mobile phone calls for a small community of 400 individuals over a ten day period .the data can be obtained from www.cs.umd.edu/hcil/vastchallenge08 .the aim of the original challenge was to find anomalous behaviour within this social network , which might be indicative of malicious coordinated activity .one approach to this problem is to monitor the call patterns of individual users and detect any changes from their normal behaviour , with the idea that a smaller subset of anomalous individuals will then be investigated for community structure . in particular , this approach has been shown to be effective with these data when monitoring the event process of incoming call times for each individual . 
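both the synthetic changepoint targets above and the telephone - call analysis that follows treat the data as poisson processes with piecewise - constant intensity , which are easy to simulate segment by segment . the changepoints and intensity levels below are made up for illustration and are not the ( stripped ) values used in the paper .

```python
import numpy as np

def simulate_pp(changepoints, rates, t_end, rng):
    """simulate event times on [0, t_end] for a poisson process whose intensity is
    piecewise constant, changing value at the given changepoints."""
    edges = np.concatenate(([0.0], np.asarray(changepoints, float), [t_end]))
    events = []
    for lo, hi, rate in zip(edges[:-1], edges[1:], rates):
        count = rng.poisson(rate * (hi - lo))          # events in this segment
        events.append(rng.uniform(lo, hi, count))      # uniform positions within it
    return np.sort(np.concatenate(events))

rng = np.random.default_rng(5)
events = simulate_pp(changepoints=[0.4, 0.7], rates=[20.0, 60.0, 10.0], t_end=1.0, rng=rng)
print(len(events), "events; first few:", np.round(events[:5], 3))
```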
after correcting for diurnal effects on normal behaviour , this approach can be reduced to a changepoint analysis of the intensities of 400 poisson processes of the same character as section [ sec : results_multivariate ] . for the focus of this article ,it is of interest to see how such an approach could be made more feasible in real time by allocating computational resource between these 400 processes more efficiently .[ fig : vast ] shows the contrasting performance between an equal computational allocation of one million markov chain monte carlo samples to each process against the variable sample size approach using grassberger s entropy bias estimate , for the same total computational resource of 400 millions samples and using the loss function .the left hand plot shows the distribution of sample sizes for the individual processes over repetitions , using 5,000 initial samples and an average allocation of one million samples for each posterior target ; the dashed line represents the fixed sample size strategy .the sample sizes vary enormously across individuals .however , for each individual the variability between runs is much lower , showing that the method is robust in performance .the right hand plot shows the resulting monte carlo divergence errors of the estimated distributions from the targets .ideal performance under would have each of these errors approximately equal , and the variable sample size method gets much closer to this ideal .the circled case in the right hand plot indicates the process which has the highest error when using a fixed sample size , and this corresponds to the same individual process that always gets the highest sample size allocation under the adaptive sample size strategy in the left hand plot .this individual has a very changeable calling pattern , suggesting several possible changepoints : no calls in the first five days , then two calls one hour apart , then another two days break , and then four calls each day for the remainder of the period .it was remarked in the review paper of on transdimensional samplers that `` a more rigorous default assessment of sampler convergence '' than the existing technology is required , and this has remained an open problem .this article is a first step towards establishing such a default method from a decision theoretic perspective , proposing a framework and methodology which are rigorously motivated and fully general in their applicability to all distributional settings .note that when the samplers induce autocorrelation , which is commonplace with metropolis - hastings ( mh ) markov chain monte carlo simulation , then the decision rule for becomes more complicated since independence was assumed in the derivation of .if one or more of the samplers has very high serial autocorrelation , then drawing additional samples from those targets will become less attractive under , as with high probability very little extra information will be obtained from the next draw .it is still possible to proceed in this setting by adapting to admit autocorrelation ; for example , the rejection rate of the markov chain could be used to approximate the probability of observing the same bin as the last sample , and otherwise draws could be assumed to be more realistically drawn from the target . 
however , for reasons of brevity this is not pursued further in this work , and of course the efficacy would depend entirely on the specifics of the mh / other sampler .importantly , this issue should not be seen as a decisive limitation of the proposed methodology when using , since although thinning was used in the markov chain monte carlo examples of sections [ sec : results_multivariate ] and [ sec : vast_data ] to obtain the next sample for use in calculating the convergence criteria , this would not prevent the full sample from being retained and utilised without thinning for the actual inference problem .the amount of thinning could be varied between samplers if appropriate , and this could be counterbalanced by weighting the errors in accordingly .another related problem which could be considered is that of importance sampling . if samples can not be obtained directly from the target but instead from some importance distribution with the same support , then it would be useful to understand how these error estimates and sample size strategies can be extended to the case where the empirical distribution of the samples has associated weights . in addressing the revised question of how large an importance sample should be, there should be an interesting trade - off between the inherent complexity of the target distributions , which has been the subject of this article , and how well the importance distributions match those targets .
|
it is often necessary to make sampling - based statistical inference about many probability distributions in parallel . given a finite computational resource , this article addresses how to optimally divide sampling effort between the samplers of the different distributions . formally approaching this decision problem requires both the specification of an error criterion to assess how well each group of samples represent their underlying distribution , and a loss function to combine the errors into an overall performance score . for the first part , a new monte carlo divergence error criterion based on jensen - shannon divergence is proposed . using results from information theory , approximations are derived for estimating this criterion for each target based on a single run , enabling adaptive sample size choices to be made during sampling . sample sizes ; jensen - shannon divergence ; transdimensional markov chains
|
complex systems in physics , engineering , biology , economics , and finance , are often characterized by the occurence of fat - tailed probability distributions . in many casesthere is an asymptotic decay with a power - law . for these types of systems more general versions of statistical mechanicshave been developed , in which power laws are effectively derived from maximization principles of more general entropy functions , subject to suitable constraints .typical distributions that occur in this context are of the -exponential form . the -exponential is defined as , where is a real parameter , the entropic index .it has become common to call the corresponding statistics ` -statistics ' .a possible dynamical reason for -statistics is a so - called superstatistics . for superstatistical complex systemsone has a superposition of ordinary local equilibrium statistical mechanics in local spatial cells , but there is a suitable intensive parameter of the complex system that fluctuates on a relatively large spatio - temporal scale .this intensive parameter may be the inverse temperature , or the amplitude of noise in the system , or the energy dissipation in turbulent flows , or an environmental parameter , or simply a local variance parameter extracted from a suitable time series generated by the complex system .the superstatistics approach has been the subject of various recent papers and it has been applied to a variety of complex driven systems , such as lagrangian and eulerian turbulence , defect turbulence , cosmic ray statistics , solar flares , environmental turbulence , hydroclimatic fluctuations , random networks , random matrix theory and econophysics . if the parameter is distributed according to a particular probability distribution , the -distribution , then the corresponding superstatistics , obtained by integrating over all , is given by -statistics , which means that there are -exponentials and asymptotic power laws . for other distributions of the intensive parameter ,one ends up with more general asymptotic decays . in this paperwe intend to analyse yet another complex system where -statistics seem to play an important role , and where a superstatistical model makes sense .we have analysed in detail the probability distributions of delays occuring on the british rail network . the advent of real - time train information on the internet for the british network ( http://www.nationalrail.co.uk/ ldb / livedepartures.asp ) has made it possible to gather a large amount of data and therefore to study the distribution of delays .information on such delays is very valuable to the traveller .published information is limited to a single point of the distribution - for example , the fraction of trains that arrive with 5 minutes of their scheduled time .travellers thus have no information about whether the distribution has a long tail , or even about the mean delay .we find that the delays are well modelled by a -exponential function , allowing a characterization of the distribution by two parameters , and .we will relate our observations to a superstatistical model of train delays .this paper is organized as follows : first , we describe our data and the methods used for the analysis .we then present our fitting results .in particular , we will demonstrate that -exponentials provide a good fit of the train delay distributions , and we will show which parameters are relevant for the various british rail network lines . 
in the final section , we will discuss a superstatistical model for train delays .we collected data on departure times for 23 major stations for the period september 2005 to october 2006 , by software which downloads the real - time information webpage every minute for each station .as each train actually departs , the most recent delay value is saved to a database .the database now contains over two million train departures ; for a busy station such as manchester piccadilly over 200,000 departures are recorded .preliminary investigation led us to believe that the model would fit well ; here is the delay , and are shape parameters , and is a normalization parameter .we have as and as .these limiting forms allow an initial estimate of the parameters ; an accurate estimate is then obtained by nonlinear least - squares .we also have so that measures the deviation from an exponential distribution .an estimated larger than unity indicates a long - tailed distribution .we did not include the zero - delay value in the fitted models .typically 80% of trains record , indicating a delay of one minute or less ( the resolution of the data ) .thus , our model represents the conditional probability distribution of the delay , given that the train is delayed one minute or more . in order to provide meaningful parameter confidence intervals, we weighted the data as follows . since our data is in the form of a histogram , the distribution of the height of the bar representing the count of trains with delay will be binomial .in fact , it is of course very close to gaussian whenever is large enough , which is the case nearly always . the normalized height ( where is the total number of trains ) will therefore have standard deviation .we used these values as weights in the nonlinear least squares procedure , and hence computed parameter confidence intervals by standard methods , namely from the estimated parameter covariance matrix .we find that typically and have a correlation coefficient of about ; thus , the very small confidence intervals quoted in the figure captions for are not particularly useful ; typically acquires a larger uncertainty via its correlation with .we first fitted the model to all data , obtaining the fit shown in figure [ _ ] .this corresponds to a ` universality ' assumption - if all routes had the same distribution of delays , the parameter values would be the relevant ones .we may thus compare the parameters for specific routes with these .typical fits for three such routes are shown in fig .[ bth_pad ] , fig .[ swi_pad ] , and fig .[ rdg_pad ] .-exponential : , . ]delays typically build up over a train s journey , and are very unlikely at the initial departure station .thus , we choose to study delays at intermediate stations . at such stations ,a delayed departure almost certainly means the arrival was delayed . , .] , . ] , . ]we start with a very simple model for the local departure statistics of trains .the waiting time distribution until departure takes place is simply given by that of a poisson process here is the time delay from the scheduled departure time , and is a positive parameter .the symbol denotes the conditional probability density to observe the delay provided the parameter has a certain given value . 
clearly , the above probability density is normalized . large values of mean that most trains depart very well in time , whereas small describe a situation where long delays are rather frequent . the above simple exponential model becomes superstatistical by making the parameter a fluctuating random variable as well . these fluctuations describe large - scale temporal variations of the british rail network environment . for example , during the start of the holiday season , when there are many passengers , we expect that is smaller than usual for a while , resulting in frequent delays . similarly , if there is a problem with the track or if bad weather conditions exist , we also expect smaller values of on average . the value of is also influenced by extreme events such as derailments , industrial action , terror alerts , etc . the observed long - term distribution of train delays is then a mixture of exponential distributions where the parameter fluctuates . if is distributed with probability density , and fluctuates on a large time scale , then one obtains the marginal distributions of train delays as it is this marginal distribution that is actually recorded in our data files . let us now construct a simple model for the distribution . there may be different gaussian random variables , that influence the dynamics of the positive random variable in an additive way . we may thus assume as a very simple model that where and . in this case the probability density of is given by a -distribution with degrees of freedom : the average of is given by and the variance by the integral ( [ 9 ] ) is easily evaluated and one obtains where and . our model generates -exponential distributions of train delays by a simple mechanism , namely a -distributed parameter of the local poisson process .

table [ q_b_table ] : fitted parameters for each station .

station                 q       b       code
birmingham              1.257   0.271   bhm
cambridge               1.270   0.396   cbg
canterbury east         1.298   0.400   cbe
canterbury west         1.267   0.402   cbw
city thameslink         1.124   0.277   ctk
colchester              1.222   0.272   col
coventry                1.291   0.330   cov
doncaster               1.289   0.332   don
edinburgh               1.228   0.401   edb
ely                     1.316   0.393   ely
ipswich                 1.291   0.333   ips
leeds                   1.247   0.273   lds
leicester               1.231   0.337   lei
manchester piccadilly   1.231   0.332   man
newcastle               1.378   0.330   ncl
nottingham              1.166   0.209   not
oxford                  1.046   0.141   oxf
peterborough            1.232   0.201   pbo
reading                 1.251   0.268   rdg
sheffield               1.316   0.335   shf
swindon                 1.226   0.253   swi
york                    1.311   0.259   yrk
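the mixture mechanism just described can be verified numerically : draw the rate parameter from a chi - squared ( gamma ) distribution with n degrees of freedom and mean beta0 , draw an exponential delay for each rate , and compare the resulting histogram with a q - exponential . the closed forms q = 1 + 2/( n + 2 ) and b = beta0/(2 - q ) used below are the standard superstatistics result for this mixture and are assumed to be what the stripped formulas above state ; n = 4 and beta0 = 0.5 are arbitrary choices .

```python
import numpy as np

rng = np.random.default_rng(7)
n_dof, beta0 = 4, 0.5                       # degrees of freedom and mean of beta
n_samples = 500000

# chi-squared-distributed rate with n_dof degrees of freedom and mean beta0
beta = rng.gamma(shape=n_dof / 2.0, scale=2.0 * beta0 / n_dof, size=n_samples)
delays = rng.exponential(1.0 / beta)

q = 1.0 + 2.0 / (n_dof + 2.0)
b = beta0 / (2.0 - q)

edges = np.linspace(0.0, 20.0, 81)
width = edges[1] - edges[0]
counts, _ = np.histogram(delays, edges)
emp = counts / (n_samples * width)                     # empirical density
mid = 0.5 * (edges[:-1] + edges[1:])
pred = (2.0 - q) * b * (1.0 + b * (q - 1.0) * mid) ** (1.0 / (1.0 - q))

for i in [0, 10, 20, 40]:
    print(f"t = {mid[i]:5.2f}   empirical {emp[i]:.4f}   q-exponential {pred[i]:.4f}")
```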
in general , it makes sense to compare stations with the same ( the same number of external degrees of freedom of the network environment ) : the larger the value of , the better the performance of this station under the given environmental conditions .our analysis shows that two of the best performing busy stations according to this criterion are cambridge and edinburgh . c. tsallis , _ possible generalization of boltzmann - gibbs statistics _ , j. stat .* 52 * , 479 ( 1988 ) c. tsallis , r. s. mendes and a. r. plastino , _ the role of constraints within generalized nonextensive statistics _ , physica a * 261 * , 534 ( 1998 ) c. tsallis , _ nonextensive statistics : theoretical , experimental and computational evidences and connections _ ,. j. phys .* 29 * , 1 ( 1999 ) s. abe , y. okamoto ( eds . ) , _ nonextensive statistical mechanics and its applications _ , springer , berlin ( 2001 ) c. beck and e. g. d. cohen , _ superstatistics _ , physica a * 322 * , 267 ( 2003 ) c. beck , e. g. d. cohen , and h. l. swinney , _ from time series to superstatistics _ , phs . rev .e * 72 * , 026304 ( 2005 ) c. beck , _ superstatistics : theory and applications _ ,* 16 * , 293 ( 2004 ) h. touchette and c. beck , _ asymptotics of superstatistics _ ,e * 71 * , 016131 ( 2005 ) c. tsallis and a. m. c. souza , _ constructing a statistical mechanics for beck - cohen superstatistics _ ,e * 67 * , 026106 ( 2003 ) p .- h .chavanis , _ coarse grained distributions and superstatistics _ , physica a * 359 * , 177 ( 2006 ) c. vignat , a. plastino and a. r. plastino , _ superstatistics based on the microcanonical ensemble _ , cond - mat/0505580 a. k. rajagopal , _ superstatistics a quantum generalization _ , cond - mat/0608679 c. beck , _ lagrangian acceleration statistics in turbulent flows _ , europhys . lett . *64 * , 151 ( 2003 ) a. reynolds , _ superstatistical mechanics of tracer - particle motions in turbulence _ , phys . rev* 91 * , 084503 ( 2003 ) c. beck , _ superstatistics in hydrodynamic turbulence _ , physica d * 193 * , 195 ( 2004 ) k. e. daniels , c. beck , and e. bodenschatz , _ generalized statistical mechanics and defect turbulence _, physica d * 193 * , 208 ( 2004 ) c. beck , _ generalized statistical mechanics of cosmic rays _ , physica a * 331 * , 173 ( 2004 ) m. baiesi , m. paczuski and a. l. stella , _ intensity thresholds and the statistics of temporal occurence of solar flares _ , phys .* 96 * , 051103 ( 2006 ) s. rizzo and a. rapisarda , _ environmental atmospheric turbulence at florence airport _ , proceedings of the 8th experimental chaos conference , florence , aip conf. proc . * 742 * , 176 ( 2004 ) ( cond - mat/0406684 ) a. porporato , g. vico , and p. a. fay , _ superstatistics in hydro - climatic fluctuations and interannual ecosystem productivity _ ,* 33 * , l15402 ( 2006 ) s. abe and s. thurner , _ analytic formula for hidden variable distribution : complex networks arising from fluctuating random graphs _ ,e * 72 * , 036102 ( 2005 ) a. y. abul - magd , _ superstatistics in random matrix theory _ , physica a * 361 * , 41 ( 2006 ) m. ausloos and k. ivanova , _dynamical model and nonextensive statistical mechanics of a market index on large time windows _ , phys .e * 68 * , 046122 ( 2003 ) n. g. van kampen , _stochastic processes in physics and chemistry _ , north holland , amsterdam ( 1981 ) c. beck , _ dynamical foundations of nonextensive statistical mechanics _ ,lett . * 87 * , 180601 ( 2001 )
|
we demonstrate that the distribution of train delays on the british railway network is accurately described by -exponential functions . we explain this by constructing an underlying superstatistical model .
|
in this paper we consider the quantile hedging problem when the underlying market does not have an equivalent martingale measure .instead , we assume that there exists a _ local martingale deflator _ ( a strict local martingale which when multiplied by the asset prices yields a positive local martingale ) .we characterize the value function as the smallest nonnegative viscosity supersolution of a fully non - linear partial differential equation .this resolves the open problem proposed in the final section of ; also see pages 61 and 62 of .our framework falls under the umbrella of the stochastic portfolio theory of fernholz and karatzas , see e.g. , , ; and the benchmark approach of platen . in this framework ,the linear partial differential equation that the superhedging price satisfies does not have a unique solution ; see e.g. , , , and .similar phenomena occur when the asset prices have _ bubbles _ : an equivalent local martingale measure exists , but the asset prices under this measure are strict local martingales ; see e.g. , , , , , and . a related series of papers , , , , , , and addressed the issue of bubbles in the context of stochastic volatility models .in particular , gave necessary and sufficient conditions for linear partial differential equations appearing in the context of stochastic volatility models to have a unique solution .in contrast , we show that the quantile hedging problem , which is equivalent to an optimal control problem , is the smallest nonnegative viscosity supersolution to a fully non - linear pde . as in the linear case, these pdes may not have a unique solution , and , therefore , an alternative characterization for the value function needs to be provided .recently , , , and also considered stochastic control problems in this framework .the first reference solves the classical utility maximization problem , the second one solves the optimal stopping problem , whereas the third one determines the optimal arbitrage under model uncertainty , which is equivalent to solving a zero - sum stochastic game .the structure of the paper is simple : in section [ eq : model ] , we formulate the problem . in this sectionwe also discuss the implications of assuming the existence of a local martingale deflator . in section [ sec : quantile - hedging ] , we generalize the results of on quantile hedging , in particular the neyman - pearson lemma .we also prove other properties of the value function such as convexity .section [ sec : pde - characterization ] is where we give the pde characterization of the value function .we consider a financial market with a bond which is always equal to , and stocks which satisfy where is a -dimensional brownian motion .following the set up in ( * ? ? ?* section 8) , we make the following assumption .[ as : fassp ] let and be continuous functions .set and , which we assume to be invertible for all .we also assume that has a weak solution that is unique in distribution for every initial value .let denote the probability space specified by a weak solution .another assumption we will impose is that where , .we will denote by the right - continuous version of the natural filtration generated by , and by the -augmentation of the filtration .thanks to assumption [ as : fassp ] , the brownian motion of is adapted to ( see e.g. ( * ? ? ?* section 2 ) ) , every local martingale of has the martingale representation property , i.e. 
it can be represented as a stochastic integral , with respect to , of some -progressively measurable integrand ( see e.g. the discussion on p.1185 in ) , the solution of takes values in the positive orthant , and the exponential local martingale the so - called _ deflator _ is well defined .we do not exclude the possibility that is a strict local martingale .let be the set of -progressively measurable processes , which satisfies in which and with , , and .at time , an investor invests proportion of his wealth in the stock . the proportion gets invested in the bond .for each and initial wealth the associated wealth process will be denoted by .this process solves it can be easily seen that is a positive local martingale for any .let be a measurable function satisfying <\infty,\ ] ] and define thanks to assumption [ as : fassp ] , we have that ] .if no such sequence exists , then we say that nupbr holds ; see ( * ? ? ?* proposition 4.2 ) .in fact , the so - called _ no - free - lunch - with - vanishing - risk _ ( nflvr ) is equivalent to nupbr plus the classical _ no - arbitrage _ assumption .thus , in our setting ( since we assumed the existence of local martingale deflators ) , although arbitrages exist they remain on the level of cheap thrills " , which was coined by .( note that the results of karatzas and kardaras also imply that one does not need nflvr for the portfolio optimization problem of an individual to be well - defined .one merely needs the nupbr condition to hold . )the failure of no - arbitrage means that the money market is not an optimal investment and is dominated by other investments .it follows that a short position in the money market and long position in the dominating assets leads one to arbitrage .however , one can not scale the arbitrage and make an arbitrary profit because of the admissibility constraint , which requires the wealth to be positive .this is what is contained in nupbr , which holds in our setting .also , see , where these issues are further discussed .in this section , we develop new probabilistic tools to extend results of f and leukert on quantile hedging to settings where equivalent martingale measures need not exist .this is not only mathematically intriguing , but also economically important because it admits arbitrage in the market , which opens the door to the notion of optimal arbitrage , recently introduced in fernholz and karatzas .the tools in this section facilitate the discussion of quantile hedging under the context of optimal arbitrage , leading us to generalize the results of on this sort of probability - one outperformance .we will try to determine for ] ; see e.g. ( * ? ? ?* section 10 ) .it follows that for any ] ; see e.g. section 10.1 of . that is, there exists such that a.s .now if , we have . then it follows from that ] . 1 .there exists satisfying with equality and . as a result , holds with equality .2 . if , then letting we have * ( i ) * if there exists such that either or , then we can take or , thanks to and . in the rest of the proof we will assume that let be a brownian motion with respect to and define .let us define by .the function satisfies and .moreover , the function is continuous and nondecreasing .right continuity can be shown as follows : for the right continuity follows from observing that the last expression goes to zero as .one can show left continuity of in a similar fashion . since , thanks to the above properties of there exists satisfying .define .observe that and that satisfies . 
*( ii ) * this follows immediately from ( 1 ) : \\ & = \mathbb{e}[z(t)g(x(t ) ) 1_{a_a } ] + \mathbb{e}[z(t)g(x(t ) ) 1_{\partial a_a \cap b_{b^ * } } ] \\ & = v(t , x , f(a- ) ) + a \mathbb{p}(\partial a_a \cap b_{b^ * } ) \\ & = v(t , x , f(a- ) ) + a(p - f(a- ) ) .\end{array}\ ] ] [ rem : rafl ] note that when is a martingale , using the neyman - pearson lemma , it was shown in that = { \mathbb{e}}[z(t ) g(x(t ) ) \varphi^*],\ ] ] where \bigg| \mathcal{f}_t \hbox { measurable } , \mathbb{e } [ \varphi ] \geq p\right\}.\ ] ] the randomized test function is not necessarily an indicator function .using lemma [ t - pchar ] and the fine structure of the filtration , we provide in proposition [ c - pchar ] another optimizer of which is an indicator function .[ c - convex ] suppose assumption [ as : fassp ] holds .then , the map is convex and continuous on the closed interval ] . by proposition [ c - pchar ] , for any ] . since , clearly .\ ] ] for the other direction , it is enough to show that for any , we have \le \mathbb{e}[z(t ) g(x(t ) ) \varphi].\ ] ] indeed , since the left hand side is actually , we can get the desired result by taking infimum on both sides over . letting , we observe that - \mathbb{e}[z(t ) g(x(t ) ) 1_a ] \\\quad = \mathbb{e}[z(t ) g(x(t ) ) \varphi 1_a ] + \mathbb{e}[z(t ) g(x(t ) ) \varphi 1_{a^c } ] - \mathbb{e}[z(t ) g(x(t))1_a ] \\\quad = \mathbb{e}[z(t ) g(x(t ) ) \varphi 1_{a^c } ] - \mathbb{e}[z(t ) g(x(t ) ) 1_a ( 1 - \varphi ) ] \\\quad \ge { \rm ess\,inf}_{a^c } \{z(t)g(x(t))\}\mathbb{e}[\varphi 1_{a^c}]-m\mathbb{e}[1_a ( 1-\varphi)]\\ \quad \ge m \mathbb{e}[\varphi 1_{a^c } ] - m \mathbb{e}[1_a ( 1-\varphi ) ] \quad \hbox{(by \eqref{eq : t - pchar1 } ) } \\\quad = m\mathbb{e}[\varphi]-m\mathbb{e}[1_a]\geq 0 .\end{array}\ ] ] for ]. we will denote the class of such processes by .note that is nonempty , as the constant control obviously lies in .the next result obtains an alternative representation for in terms of .[ t - control ] under assumption [ as : fassp ] , <\infty.\ ] ] the finiteness follows from .define \bigg| \mathcal{f}_t \hbox { measurable } , \mathbb{e } [ \varphi ] = p\right\}.\ ] ] thanks to proposition [ c - pchar ] , there exists a set satisfying and such that \ge \inf_{\varphi \in \widetilde{{\mathcal{m } } } } \mathbb{e}[z(t ) g(x(t ) ) \varphi].\ ] ] since the opposite inequality follows immediately from proposition [ p - urep ] , we conclude that .\ ] ] therefore , it is enough to show that satisfies the inclusion is clear . to show the other inclusion we will use the martingale representation theorem : for any there exists a measurable -valued process satisfying a.s .such that = p + \int_0^t \psi(s ) ' d w(s),\ t\in[0,t].\ ] ] note that since takes values in ] for all ] satisfies with .we denote by the solution of starting from at time and by the solution of define the process by then we see from that satisfies we then introduce the value function ,\ ] ] where is defined in .note that the original value function can be written in terms of as .we also consider the legendre transform of with respect to the variable . to make the discussion clear , however ,let us first extend the domain of the map from ] . since , we see from and that is continuous on ] , we conclude that is also convex on . 
nowthanks to , the convexity and the lower semicontinuity of on imply that the double transform of is indeed equal to itself .that is , for any \times(0,\infty)^d\times\mathbb{r} ] .we will show that and derive various properties of .> from the definition of in , is the upper hedging price for the contingent claim , and potentially solves the linear pde this is not , however , a traditional black - scholes type equation because it is degenerate on the entire space .consider the following function which takes values in the space of matrices : \ ] ] degeneracy can be seen by observing that is only positive semi - definite for all . or, one may observe degeneracy by noting that there are risky assets , and , with only independent sources of uncertainty , . as a result, the existence of classical solutions to can not be guaranteed by standard results for parabolic equations . indeed , under the setting of example [ e - bessel ] , we have =(q - x)^+,\ ] ] which is not smooth . in this subsection , we will approximate by a sequence of smooth functions , constructed by elliptic regularization .we will then derive some properties of and investigate the relation between and .finally , we will show that , which validates the construction of . to perform elliptic regularization under our setting , we need to first introduce a product probability space . recall that we have been working on a probability space , given by a weak solution to the sde .now consider the sample space ;\mathbb{r}) ] . by and, we see that the processes and have the following relation .\ ] ] it then follows from , the fact that , and the definition of that .\ ] ] since is positive definite and continuous , it must satisfy the following ellipticity condition : for every compact set , there exists a positive constant such that for all and ; see e.g. ( * ? ? ?* lemma 3 ) . under assumption[ as : loc lips ] and , the smoothness of and the pde follow immediately from ( * ? ? ?* theorem 4.2 ) .finally , note that satisfies the boundary condition by definition .we will first compute , and then show that it is strictly increasing in from to .let and for .fix an arbitrary . 
for any ,define note that by construction , and are disjoint , and .it follows that \\ & = \frac{1}{\delta}\left\{\bar{\mathbb{e}}\left[\left((q+\delta)l_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t))\right)\text{1}_{\widetilde{a}_{q+\delta}}\right]-\bar{\mathbb{e}}\left[\left ( ql_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t))\right)\text{1}_{\widetilde{a}_q}\right]\right\}\\ & = \frac{1}{\delta}\bigg\{\bar{\mathbb{e}}\left[\left((q+\delta)l_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t))\right)\text{1}_{\widetilde{a}_q}\right]+\bar{\mathbb{e}}\left[\left((q+\delta)l_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t))\right)\text{1}_{e^\delta}\right]\\ & \hspace{0.5 in } -\bar{\mathbb{e}}\left[(ql_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t)))\text{1}_{\widetilde{a}_q}\right]\bigg\}\\ & = \bar{\mathbb{e}}[l_\varepsilon(t , t)1_{\widetilde{a}_q}]+\frac{1}{\delta}\bar{\mathbb{e}}\left[\left((q+\delta)l_\varepsilon(t , t)-z^{t , x,1}(t)g(x^{t , x}(t))\right)\text{1}_{e^\delta}\right ] .\end{split}\ ] ] by the definition of , \leq\frac{1}{\delta}\bar{\mathbb{e}}[\delta l_\varepsilon(t , t ) \text{1}_{e^\delta}]\\ & = & \bar{\mathbb{e}}[l_\varepsilon(t , t)1_{e^\delta}]\to 0,\ \text{as}\ \delta\downarrow 0 , \end{aligned}\ ] ] where we use the dominated convergence theorem .we therefore conclude that }=\bar{\mathbb{e}}[l_\varepsilon(t , t)1_{\widetilde{a}_q}].\ ] ] thanks to the dominated convergence theorem again , we have =0\ \text{and}\ \lim_{q\to\infty}\bar{\mathbb{e}}[l_\varepsilon(t , t)1_{\widetilde{a}_q}]=\bar{\mathbb{e}}[l_\varepsilon(t , t)]=1.\ ] ] it remains to prove that ] , * for any compact subset , converges to uniformly on \times(0,\infty)^d\times e ] * ( i ) * by , we observe that }z^{t , x,1}(t)q^{t , x , q}_\varepsilon(t)\right]=\bar{\mathbb{e}}\left[\sup_{\varepsilon\in(0,1]}q\exp\left\{-\frac{1}{2}\varepsilon^2(t - t)+\varepsilon(b(t)-b(t))\right\}\right]\nonumber\\ & \le & q \bar{\mathbb{e}}\left[\sup_{\varepsilon\in(0,1]}\exp\left\{\varepsilon ( b(t)-b(t))\right\}\right]\nonumber\\ & \le & q \bar{\mathbb{e}}\left[\sup_{\varepsilon\in(0,1]}\exp\left\{\varepsilon ( b(t)-b(t))\right\}1_{\{b(t)-b(t)\ge 0\}}\right]+ q \bar{\mathbb{e}}\left[\sup_{\varepsilon\in(0,1]}\exp\left\{\varepsilon ( b(t)-b(t))\right\}1_{\{b(t)-b(t ) < 0\}}\right]\nonumber\\ & \le & q\bar{\mathbb{e}}\left[\exp\left\{b(t)-b(t)\right\}\right]+q = q\left(\exp\left\{\frac{1}{2}(t - t)\right\}+1\right ) < \infty.\end{aligned}\ ] ] then it follows from the dominated convergence theorem that \nonumber\\ & = & \bar{\mathbb{e}}[(q - z^{t , x,1}(t)g(x^{t , x}(t)))^+]\nonumber\\ & = & \mathbb{e}[(q - z^{t , x,1}(t)g(x^{t , x}(t)))^+]= \widetilde{w}(t , x , q),\end{aligned}\ ] ] where the third equality is due to the fact that depends only on . *( ii ) * from , , and the observation that for any , \\ & = q\left[\left(1+\phi({\varepsilon}\sqrt{t - t})-\phi(-{\varepsilon}\sqrt{t - t})\right ) e^{{\varepsilon}^2 ( t - t)}-1\right]\\ & \le q\left[\left(1+\phi({\varepsilon}\sqrt{t})-\phi(-{\varepsilon}\sqrt{t})\right ) e^{{\varepsilon}^2 t}-1\right ] , \end{split}\ ] ] where is the cumulative distribution function of the standard normal distribution .note that the second line of follows from the inequality for ; this inequality holds because if , and if , .we can then conclude from that converges to uniformly on \times(0,\infty)^d\times e ] for any compact subset of . by lemmas [ lem : smooth twe ] and [ lem : tw twe ] ( ii ), the viscosity solution property follows as a direct application of ( * ? ? 
?* proposition 2.3 ) . andthe boundary condition holds trivially from the definition of .now we want to relate to to .given \times(0,\infty)^d ] , there exists such that .we can take two nonnegative numbers and with such that observe that .plugging this into the first line of , we get also note from that plugging this back into , we obtain it then follows from and that +\lambda_2[f(a-)q - u(t , x , f(a-))]\nonumber\\ & \le & \max\left\{f(a)q - u(t , x , f(a)),f(a-)q - u(t , x , f(a-))\right\}.\end{aligned}\ ] ] choose a sequence such that from the left as .thanks to proposition [ c - convex ] , is continuous on ] for all , the opposite inequality is trivial .we therefore conclude }\{pq - u(t , x , p)\}=\sup_{a\ge0}\{f(a)q - u(t , x , f(a))\}.\ ] ] now , thanks to , we have \nonumber\\ & = & \mathbb{e}[(q - z^{t , x,1}(t)g(x^{t , x}(t)))\text{1}_{\bar{a}_a}].\end{aligned}\ ] ] it follows from , and lemma [ lem : max = tw ] that =\widetilde{w}(t , x , q).\ ] ] let us extend the domain of the map from to the entire real line by setting and for . in this subsection, we consider the legendre transform of with respect to the variable we will first show that is a classical solution to a nonlinear pde. then we will relate to and derive the viscosity supersolution property of . under assumption[ as : loc lips ] , we have that and satisfies the equation +\inf_{a\in\mathbb{r}^d}\left((d_{xp}u_\varepsilon)'\sigma a + \frac{1}{2}|a|^2d_{pp}u_\varepsilon-\theta'ad_pu_\varepsilon\right ) + \inf_{b\in\mathbb{r}^d}\left(\frac{1}{2}|b|^2d_{pp}u_\varepsilon-\varepsilon d_pu_\varepsilon{\bf 1}'b\right),\ ] ] where , with the boundary condition moreover , is strictly convex in the variable for , with since from proposition [ prop : strict convex ] the function is strictly increasing on with its inverse function is well - defined on . moreover , considering that is smooth on , is smooth on and can be expressed as see e.g. . by direct calculations, we have in particular , we see that is strictly convex in for and satisfies .now by setting , we deduce from that -\frac{1}{2}(|\theta|^2+\varepsilon^2)q^2d_{qq}\widetilde{w}_\varepsilon - qtr[\sigma\theta d_{xq}\widetilde{w}_\varepsilon]\\ & = \partial_t u_\varepsilon+\frac{1}{2}tr[\sigma \sigma ' d_{xx}u_\varepsilon]-\frac{1}{2d_{pp}u_\varepsilon}tr[\sigma \sigma ' ( d_{px}u_\varepsilon)(d_{px}u_\varepsilon)']-\frac{1}{2}(|\theta|^2+\varepsilon^2)\frac{(d_p u_\varepsilon)^2}{d_{pp}u_\varepsilon}\\ & \hspace{0.2in}+\frac{d_p u_\varepsilon}{d_{pp}u_\varepsilon}tr[\sigma\theta d_{px}u_\varepsilon]\\ & = \partial_t u_\varepsilon+\frac{1}{2}tr[\sigma \sigma ' d_{xx}u_\varepsilon]+\left((d_{xp}u_\varepsilon)'\sigma a^*+\frac{1}{2}|a^*|^2d_{pp}u_\varepsilon-\theta'a^*d_p u_\varepsilon\right)+\left(\frac{1}{2}|b^*|^2d_{pp}u_\varepsilon-\varepsilon d_pu_\varepsilon { \bf 1}'b^*\right)\\ & = \partial_t u_\varepsilon+\frac{1}{2}tr[\sigma \sigma ' d_{xx}u_\varepsilon]+\inf_{a\in\mathbb{r}^d}\left((d_{xp}u_\varepsilon)'\sigma a+\frac{1}{2}|a|^2d_{pp}u_\varepsilon-\theta'a d_p u_\varepsilon\right ) + \inf_{b\in\mathbb{r}^d}\left(\frac{1}{2}|b|^2d_{pp}u_\varepsilon-\varepsilon d_pu_\varepsilon{\bf 1}'b\right ) , \end{split}\ ] ] where the minimizers and are defined by finally , observe that for any , the maximum of is attained at .therefore , by as a consequence of lemma [ lem : tw twe ] ( ii ) , is continuous at \times(0,\infty)^d\times(0,\infty) ] .it follows that where the second equality follows from proposition [ prop : w = tw ] . 
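it may help to record explicitly , in generic notation , the legendre-type duality that the argument above relies on ; write u for the value function ( convex and lower semicontinuous in its last variable ) and w for its transform . the symbols u and w are placeholders chosen for readability , not the exact labels used in the propositions above .

\[
w(t,x,q)=\sup_{p\in[0,1]}\bigl\{\,pq-u(t,x,p)\,\bigr\} , \qquad
u(t,x,p)=\sup_{q\ge 0}\bigl\{\,pq-w(t,x,q)\,\bigr\} .
\]

the statement that the double transform reproduces u itself is the fenchel-moreau biconjugation theorem , which is exactly where the convexity and lower semicontinuity established earlier enter .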
before we state the supersolution property for ,let us first introduce some notation .for any , define also consider the lower semicontinuous envelope of observe that by definition , [ prop : viscosity u ] under assumption [ as : loc lips ] , is a lower semicontinuous viscosity supersolution to the equation +g_*(x , d_{p}u , d_{pp}u , d_{xp}u),\ ] ] for , with the boundary condition note that the lower semicontinuity of is a consequence of lemma [ lem : u = liminfue ] , and the boundary condition comes from the fact that and the definition of as the following calculation demonstrates : let us now turn to the pde characterization inside the domain of .set .let be a smooth function such that attains a local minimum at and .note from that as , we must have .thus , the viscosity supersolution property is trivially satisfied .we therefore assume in the following that .let denote the right hand side of .observe from the calculation in that as , -\frac{1}{2\gamma}tr[\sigma(x)\sigma(x)'\lambda\lambda']-\frac{\beta^2}{2\gamma}(|\theta(x)|^2+\varepsilon^2)+\frac{\beta}{\gamma}tr[\sigma(x)\theta(x)\lambda].\ ] ] this shows that is continuous at every as long as .it follows that for any with , we have +\inf_{a\in\mathbb{r}^d}\left(\lambda'\sigma(x)a+\frac{1}{2}|a|^2\gamma-\theta(x)'a\beta\right).\ ] ] since we have from lemma [ lem : u = liminfue ] , we may use the same argument in ( * ? ? ?* proposition 2.3 ) and obtain that considering that , we see from and that this is the desired supersolution property. results similar to proposition [ prop : viscosity u ] were proved by , with stronger assumptions ( such as the existence of an equivalent martingale measure and the existence of a unique strong solution to ) , using the stochastic target formulation . here ,we first observe that the legendre transform of is equal to and that can be approximated by , which is a classical solution to a linear pde and is strictly convex in ; then , we apply the legendre duality argument , as carried out in , to show that , the legendre transform of , is a classical solution to a nonlinear pde .finally , the stability of viscosity solutions leads to the viscosity supersolution property of .instead of relying on the legendre duality we could directly apply the dynamic programming principle of for weak solutions to the formulation in section [ sec : stoc - cont ] .the problem with this approach is that it requires some growth conditions on the coefficients of , which would rule out the possibility of arbitrage , the thing we are interested in and want to keep in the scope of our discussion .* let us consider the pde satisfied by the superhedging price : unless additional boundary conditions are specified , this pde may have multiple solutions .the role of additional boundary conditions in identifying as the unique solution of the above cauchy problem is discussed in section 4 of . 
also see for a similar discussion on boundary conditions for degenerate parabolic problems on bounded domains .+ even when additional boundary conditions are specified , the growth of might lead to the loss of uniqueness ; see for example and theorem 4.8 of which give necessary and sufficient conditions on the uniqueness of cauchy problems in one and two dimensional setting in terms of the growth rate of its coefficients .we also note that develops necessary and sufficient conditions for uniqueness , in terms of the attainability of the boundary of the positive orthant by an auxiliary diffusion ( or , more generally , an auxiliary it ) process .* let be the difference of two solutions of - .then both and are solutions of ( along with its boundary conditions ) . as a result , whenever and has multiple solutions , so does the pde for the value function .we intend to characterize as the smallest solution among a particular class of functions , as specified below in proposition [ prop : charac ue ] .then , considering that from lemma [ lem : u = liminfue ] , this gives a characterization for . in determining numerically , one could use as a proxy for for small enough .additionally , we will characterize as the smallest nonnegative supersolution of in proposition [ prop : charac u ] .[ prop : charac ue ] suppose that assumption [ as : loc lips ] holds .let \times(0,\infty)^d\times[0,1]\mapsto[0,\infty) ] to the entire real line by setting for and for .then , we can define the legendre transform of with respect to the variable }\{pq - u(t , x , p)\}\ge 0,\ \text{for}\ q\ge0 , \end{aligned}\ ] ] where the positivity comes from the condition .first , observe that since is nonnegative , we must have }pq = q,\ \text{for any}\ q\ge0.\ ] ] next , we derive the boundary condition of from as }\{pq - u(t , x , p)\}=\sup_{p\in[0,1]}\{pq - pg(x)\}=(q - g(x))^+.\ ] ] now , since is strictly convex in for and satisfies , we can express as where is the inverse function of .we can therefore compute the derivatives of in terms of those of , as carried out in .we can then perform the same calculation in ( but going backward ) , and deduce from that for any , +\frac{1}{2}(|\theta|^2+\varepsilon^2)q^2d_{qq}w^u+qtr[\sigma\theta d_{xq}w^u].\ ] ] define the process for ] and for all ] . then from, we may apply the dominated convergence theorem to and obtain \\ & = \bar{\mathbb{e}}[z^{t , x,1}(t)(q^{t , x , q}_\varepsilon(t)-g(x^{t , x}(t)))^+ ] = \widetilde{w}_\varepsilon(t , x , q ) , \end{split}\ ] ] where the first equality is due to .it follows that [ prop : charac u ] suppose assumption [ as : loc lips ] holds .let \times(0,\infty)^d\times[0,1]\mapsto[0,\infty) ] . if is a lower semicontinuous viscosity supersolution to on with the boundary condition , then .let us denote by the legendre transform of with respect to .by the same argument in the proof of proposition [ prop : charac ue ] , we can show that , and are true .moreover , as demonstrated in ( * ? ? ?* section 4 ) , by using the supersolution property of we may show that is an upper semicontinuous viscosity subsolution on to the equation let be a nonnegative function supported in ,|(x , q)|\le 1\} ] .then for any , define by definition , is .moreover , it can be shown that is a subsolution to on ; see e.g. ( 3.23)-(3.24 ) in ( * ? ? ?* section 3.3.2 ) and ( * ? ? ? * lemma 2.7 ) .set . by, we see from the definition of that also , the continuity of implies that for every \times(0,\infty)^d\times(0,\infty)$ ] . 
considering that is a classical subsolution to, we have ,\ \text{for}\ n\in\mathbb{n},\ ] ] where . for each fixed , thanks to we may apply the dominated convergence theorem as we take the limit in .we thus get .\ ] ] now by applying the reverse fatou s lemma ( see e.g. ) to , we have \\ & \le&\mathbb{e}[z^{t , x,1}(t)w^u(t , x^{t , x}(t),q^{t , x , q}(t))]\\ & \le&\mathbb{e}[z^{t , x,1}(t)(q^{t , x , q}(t)-g(x^{t , x}(t)))^+]=w(t , x , q ) , \end{aligned}\ ] ] where the second inequality follows from the upper semicontinuity of and the third inequality is due to . finally , we conclude that where the first equality is guaranteed by the convexity and the lower semicontinuity of . one should note that and satisfy the assumptions stated in propositions [ prop : charac ue ] and [ prop : charac u ] , respectively .therefore , one can indeed see these results as pde characterizations of the functions and . in this paper , under the context where equivalent martingale measures need not exist, we discuss the quantile hedging problem and focus on the pde characterization for the minimum amount of initial capital required for quantile hedging . an interesting problem followingthis is the construction of the corresponding quantile hedging portfolio .we leave this problem open for future research .
|
our goal is to resolve a problem proposed by fernholz and karatzas ( 2008 ) : to characterize the minimum amount of initial capital with which an investor can beat the market portfolio with a certain probability , as a function of the market configuration and time to maturity . we show that this value function is the smallest nonnegative viscosity supersolution of a non - linear pde . as in fernholz and karatzas ( 2008 ) , we do not assume the existence of an equivalent local martingale measure but merely the existence of a local martingale deflator .
|
the effects of fractionated radiotherapy and single dose radiation may be quite different depending on the gap between consecutive fractions . the larger the gap is , the larger the difference , due to the tissue recovery capabilities characteristic times .fractionated therapies are usually modeled including correction factors in single dose expressions . here, we will explore how to include fractionation in a recently introduced model derived using the tsallis entropy definition and the maximum entropy principle . as can be seen in ( and other works in the same issue ) nonextensive tsallis entropy has become a successful tool to describe a vast class of natural systems .the new radiobiological model ( maxent model in what follows ) takes advantage of tsallis formulation to describe the survival fraction as function of the radiation dose , based on a minimum number of statistical and biologically motivated hypotheses .the maxent model assumes the existence of a critical dose , , that annihilates every single cell in the tissue .the radiation dose can be written as a dimensionless quantity in terms of that critical dose as , where is the radiation dose .then the support of the cell death probability density function , , in terms of the received dose , becomes ] relates equations and such that implies radiation fractions are completely correlated while means they are fully independent . according to both limits interpretation, values will depend on the time between fractions and also on tissue repair or recovery capabilities .a single radiation fraction with an effective dimensionless dose equal to the whole fractionated treatment can be found such that , after the -th fraction , the dimensionless effective dose becomes , assuming . when the -th fraction is given , then .all fractionated treatments sharing the same value of will provide the same value for the survival fraction .so , the same will provide the isoeffect criterion for the fractionated therapy . in order to check the model reliability ,it has been fitted to data from using a weighted least squares algorithm .those data sets are considered as a reliable source of clinical parameters ( as the relation of lq model ) .the results of the fit are shown in figure [ fig : isoeffects - fits ] .isoeffect relationship data reported for mouse lung by ( , ) , mouse skin by ( , ) and mouse jejunal crypt cells by ( , ) , fitted to our model.,scaledwidth=100.0% ] the obtained coefficients show a survival fraction behavior far from the pure -algebraic limits ( ) .since values for usual tissue reaction differ from limiting values , it is worth to further study the biophysical interpretation of this new parameter .isoeffect curves for mouse jejunal crypt cells by .curves are calculated based on fitted parameters and for different values of our model , shown for every plot.,scaledwidth=100.0% ] every value provides a different isoeffect relationship , as shown in figure [ fig : isoeffects - explain ] . once the involved coefficients for a treatment ( and ) are known it can be tuned to obtain the desired effective dose by changing and . assuming the same physical dose per fraction , ,as is the case in many radiotherapy protocols , expression becomes a recursive map , describing the behavior of the effective dose in a treatment . for a given is a critical value of , dividing the plane in two different regions ( see figure [ fig : nmap ] ) . 
for a treatment with , there will always be a surviving portion of the tissue since always .however , if , after enough fractions , meaning that effective dose has reached the critical value and every single cell of tissue has been removed by the treatment . then it is possible to find , the threshold value of , that kills every cell , for a given therapy protocol .this is shown in the inset of figure [ fig : nmap ] .the larger plot represents isolines as a function of and ( dashed lines ) above ( solid line ) ; below this line , killing all tissue cells is impossible .the small one represents critical values in terms of .,scaledwidth=100.0% ] if the desired result is the elimination of the radiated tissue cells , _i.e. _ surrounding tissue is not a concern for treatment planning , represents the minimum number of sessions needed to achieve this goal ; any session after that will be unnecessary . on the contrary , if the therapy goal requires the conservation of tissue cells ( for instance in order to preserve an organ ) , then the number of sessions must be lower than .the parameter is a cornerstone on isoeffect relationships .a fractionated therapy of fully independent fractions requires a greater radiation dose per fraction , or more fractions , in order to reach the same isoeffect as a treatment with more correlated fractions .the coefficient acts here as a relaxation term .immediately after radiation damage occurs ( ) tissue begins to recover , as decreases , until the tissue eventually reaches its initial radiation response capacity ( ) .in other words , the formerly applied radiation results in a decrease of the annihilation dose ( initially equal to ) describing the effect of the next fraction . the more correlated a fraction is to the previous one , the larger the value of and , thus , the larger the effect on the critical dose will be . notice that unlike , that characterizes the tissue primary response to radiation , characterizes the tissue trend to recover its previous radioresistance .correlation between fractions can be translated in terms of the late and acute tissue effects of radiobiology . indeed, damaged tissue repairing and recovering capabilities should determine the value of .given a dosage protocol , an early responding tissue would correspond to close to , whereas late responding tissue , would have closer to . notice that in current working models for hyperfractionated therapies this repair and recovery effects are introduced as empirical correction factors , as will be required for . as it was shown in ,nonextensivity properties of tissue response to radiation for single doses are more noticeable for higher doses than predicted by current models . on the contrary ,a lower dose per fraction brought out nonextensive properties for fractionated therapies .indeed , for high dosage a few fractions are applied in a treatment and a change in is not required for different values . however , in the lower dosage case , more radiation fractions need to be applied and the parameter may become crucial . 
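the threshold behaviour of the recursive map for the effective dose can be reproduced with a toy iteration . the sketch below uses a deliberately simplified linear map , x_k = d + a*x_{k-1} , with a playing the role of the correlation coefficient between fractions ; this is a stand-in for the composition rule of the model , not its exact form , but it shows the two regimes discussed above : below a critical dose per fraction the effective dose saturates below the annihilation dose , while above it the annihilation dose is reached after a finite number of sessions .

    def sessions_to_annihilation(d, a, max_sessions=10_000):
        """Iterate a toy effective-dose map x_k = d + a * x_{k-1} (doses are
        dimensionless, 1.0 being the annihilation dose).  Returns the first
        session index at which x_k >= 1, or None if the iteration saturates
        below 1.  The linear map is only a qualitative stand-in for the
        gamma-weighted composition described in the text."""
        x = 0.0
        for k in range(1, max_sessions + 1):
            x = d + a * x
            if x >= 1.0:
                return k
        return None

    # with a = 0.8 the toy threshold is d = 1 - a = 0.2
    for d in (0.18, 0.22, 0.30):
        print(d, sessions_to_annihilation(d, a=0.8))

in the low-dose-per-fraction regime , where many sessions are needed , the value of this correlation coefficient is precisely what decides on which side of the threshold a protocol falls .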
in this case values move away from each other for isoeffect treatments with different .so , in order to achieve the desired therapy effects , fractionated radiotherapy must be planned for a tissue described by , varying according to .this coefficient should be experimentally studied as its value tunes the annihilation dose along a radiotherapy protocol .for some radiation treatments as brachytherapy the irradiation is applied in a single session but for a prolonged period of time .if the discrete irradiation sessions were close enough could be written as , in continuous irradiation the effective dose is in general small , and is possible to assume and .then , ,\label{eq : dotxmix}\ ] ] where the terms of second order in and above have been neglected .it is obvious from dose additivity properties that in the continuous irradiation case and for two time instants and close enough , where is the dose rate per unit time .however if both instants of time are far enough to make relevant the tissue recovering capabilities this expression becomes invalid .so , whereas a usual integration process could become valid in a short time period this is not true for longer intervals .so , in a similar way as was already done for the sum operation , a new definition for integration must be introduced .this can be done following and introducing the -algebraic sum and difference , in those terms , a nonextensive derivative operation follows such that , then we can define the physical dose rate , , as the nonextensive time derivative of the equivalent dose , expression can be rewritten as a standard ode , which can be solved in the usual way taking into account that and are in general functions of time . due to the applied radiation ( ) the applied effective dose increases linearly .however a resistance force ( ) , that depends not only on tissue recovering characteristics but also on the dose rate and the effective dose itself , is slowing down this increase . in order to show behavior ,let us suppose is constant ( a common case in clinical practice ) and slowly varying in time , so that it can be also taken as a constant .then it is straightforwardly obtained , allowing to find the needed irradiation time to kill every cell in the tissue ( ) , and showing that effective dose increases at a decreasing speed , until tissue cells get annihilated at time ( ) . under continuous irradiation ,survival fraction decreases faster at the beginning of irradiation process .however , depending on dose rate and coefficient , the killing process speed slows down until eventually every cell is killed .if the recovery capacity is very high ( ) the radiation effects stack slowly and there will always be surviving tissue cells ( ) .those radiation damages stack faster as long as tissue cells are less capable to repair it and if there is no repair processes at all ( ) the effective radiation dose grows linearly in time and cells get killed faster ( ) . this time shortening behavior with decreasing repairing rate is also shown by other radiobiological models . comparing and we see that , in the limit of continuous dosage , they become the same expression with .however this relation may become invalid at high exposures as effective dose becomes larger and gets closer to . 
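a minimal numerical sketch of the continuous-irradiation picture : the rate law integrated below , dx/dt = r*(1 - c*x) , is only a qualitative stand-in for the nonextensive rate equation above ( r plays the role of the constant dose rate and c of the recovery term ) , but it reproduces the three regimes just described : linear growth when there is no repair , delayed annihilation for moderate repair , and saturation below the annihilation dose when repair is strong .

    def time_to_annihilation(r, c, t_end=50.0, dt=1e-3):
        """Euler-integrate a toy continuous-irradiation law dx/dt = r*(1 - c*x),
        with r a constant dose rate and c a recovery coefficient; doses are
        dimensionless with 1.0 the annihilation dose.  Returns the time at which
        x reaches 1, or None if x saturates below 1 before t_end.  This is an
        illustrative stand-in, not the exact rate equation of the model."""
        x, t = 0.0, 0.0
        while t < t_end:
            if x >= 1.0:
                return t
            x += dt * r * (1.0 - c * x)
            t += dt
        return None

    print(time_to_annihilation(r=0.1, c=0.0))   # no repair: linear growth, t = 10
    print(time_to_annihilation(r=0.1, c=0.5))   # moderate repair: annihilation delayed
    print(time_to_annihilation(r=0.1, c=1.2))   # strong repair: saturates below 1 (None)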
at this point , the fractionated and continuous treatments differ . so it must be studied independently of the fractionated coefficient ; but if a continuous alternative therapy is desired , the known values from the fractionated case can be a good starting point to find it .

the use of tsallis entropy and the second law of thermodynamics has allowed us to write a simple nonextensive expression for the single-dose survival fraction . the mathematical constraints required to define the composition of probabilities , such that the two limiting behaviors are described , introduce a new parameter relating the radiation sessions . the fits to available experimental data show that usual treatments have nontrivial values of this parameter , _ i.e. _ , they are not close to the limiting behaviors . this makes the study of this coefficient relevant for clinical treatments and experimental setups . the existence of a critical dosage arises from these composition rules , providing a criterion to adjust a treatment either to kill every tumor cell or to minimize the damage to healthy tissue . this can be achieved by changing the number of sessions or the radiation dose per session , allowing one to switch between isoeffective treatments . an expression for the effective dose in continuous irradiation treatments has also been found , showing that it is phenomenologically linked to the previous one . this has the potential to provide isoeffect relationships in continuous-dose treatments such as brachytherapy . besides , a relation between fractionated and continuous therapies could be established from the obtained coefficients .

the authors acknowledge the financial support from the spanish ministerio de ciencia e innovación under the itrenio project ( tec2008-06715-c02-01 ) .

o. sotolongo-grau , d. rodriguez-perez , j.c. antoranz , and o. sotolongo-costa . non-extensive radiobiology . in a. mohammad-djafari , j.-f. bercher , and p. bessiere , editors , _ bayesian inference and maximum entropy methods in science and engineering ( proceedings of the 30th international workshop on bayesian inference and maximum entropy methods in science and engineering , 4-9 july 2010 , chamonix , france ) _ , volume 1305 of _ aip conference proceedings _ , pages 219-226 . aip , 2010 .
van der kogel and c.c.r . calculation of isoeffect relationships . in g.g. steel , editor , _ basic clinical radiobiology for radiation oncologists _ , pages 72-80 . edward arnold publishers , london , 1993 .
l. a. m. pop , j. f. c. m. van den broek , a. g. visser , and a. j. van der kogel . constraints in the use of repair half times and mathematical modelling for the clinical application of hdr and pdr treatment schedules as an alternative for ldr brachytherapy . , 38(2):153-162 , 1996 .
|
the biological effect of a single radiation dose on living tissue has been described by several radiobiological models . fractionated radiotherapy , however , requires accounting for a new magnitude : time . in this paper we explore the biological consequences posed by the mathematical extension of such a model to fractionated treatment . nonextensive composition rules are introduced to obtain the survival fraction and the equivalent physical dose in terms of a time-dependent factor describing the tissue's trend towards recovering its radioresistance ( a kind of repair coefficient ) . interesting behaviors , both known and new , are described regarding the effectiveness of the treatment , which is shown to be fundamentally bound to this factor . the continuous limit , applicable to brachytherapy , is also analyzed in the framework of nonextensive calculus ; here too a coefficient arises that governs the time behavior . all the results are discussed in terms of the clinical evidence , and their major implications are highlighted . radiobiology , fractionated radiotherapy , survival fraction , entropy
|
the input is a set of messages , each with a probability and cost , and a parameter the number of channels .the output is ( finitely described ) infinite _ broadcast schedule _ for the messages specifying for each time and channel , a message ( if any ) to be broadcast at that time on that channel . the goal is to minimize the cost of the schedule , denoted and defined as the _ expected response time _ plus the _ broadcast cost _ of . for a finite schedule , the expected response time of , denoted , is defined as follows . at each time unit , each message is requested by some client with probability . once a message is requested , the client waits until the next time at which the message is scheduled on any channel ( or the end of the schedule , whichever comes first ) . is defined to be the expected waiting time for a random request at a random time .the broadcast cost of , denoted , is defined to be the total cost of scheduled messages , divided by the length of the schedule . throughout the paper ,if any real - valued function is defined with respect to finite schedules , then we implicitly extend it to any infinite schedule as follows : , where denotes restricted to the first time slots .thus , the above definitions of expected response time and broadcast cost implicitly extend to infinite schedules .all of the infinite schedules considered in this paper will be periodic , in which case this extension is particularly simple .the data broadcast problem and special cases were studied in .works studying applications and closely related problems include .some of the above works study the generalization allowing messages to have arbitrary lengths , which we do not consider here .ammar and wong proved that there always exists an optimal infinite schedule with finite period .they also formulated a natural relaxation of the problem that gives an explicit lower bound on the optimum ; the performance guarantee in this paper is proven with respect to that lower bound .more recently , constant - factor polynomial - time approximation algorithms have been shown , the best to date being a -approximation .although the problem itself is not known to be -hard , several variants are known to be .khanna and zhou state that it is unknown whether the problem is max - snp hard , even when and without broadcast costs . in this paper , we show that it is not ( unless p = np ) .we present the first deterministic polynomial time approximation scheme for the problem , assuming the and each cost is bounded by a constant . 
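to fix the definitions , the following python sketch evaluates the cost of a finite schedule directly as the expected response time plus the broadcast cost . the waiting convention spelled out in the docstring ( a request arriving in a slot is served by the next broadcast strictly after it , or waits until the end of the schedule ) is an assumption made for the sake of the example rather than the paper's exact convention .

    def schedule_cost(slots, prob, cost):
        """Cost (expected response time + broadcast cost) of a finite schedule.

        slots : one entry per time unit, each a list of message ids broadcast in
                that slot (at most k ids for a k-channel schedule)
        prob  : dict id -> request probability per time unit (assumed to sum to 1)
        cost  : dict id -> broadcast cost
        Convention assumed: a request arriving in slot t is served by the next
        broadcast strictly after t, or waits until the end of the schedule."""
        T = len(slots)
        bc = sum(cost[m] for slot in slots for m in slot) / T   # broadcast cost

        ert = 0.0
        for m, p in prob.items():
            w = [0.0] * T        # w[t] = waiting time of a request arriving in slot t
            for t in range(T - 1, -1, -1):
                if t == T - 1 or m in slots[t + 1]:
                    w[t] = 1.0
                else:
                    w[t] = w[t + 1] + 1.0
            ert += p * sum(w) / T
        return ert + bc

    # two equally popular unit-cost messages alternating on a single channel
    print(schedule_cost([["a"], ["b"], ["a"], ["b"]],
                        {"a": 0.5, "b": 0.5}, {"a": 1.0, "b": 1.0}))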
by `` polynomial time '' , we mean that the time taken to output the finite description of the infinite schedule is polynomial in the number of messages in the input .our algorithm is based on a simple new observation that works for a special case of the problem .we use fairly technical but to some extent standard techniques to extend it to the general case .we sketch the idea here , glossing over a fair amount of technical detail .ammar and wong relax the optimization problem by allowing messages to ( a ) be scheduled at non - integer times and ( b ) to _ overlap _ , while still insisting that the total _density _ of the scheduled messages is at most , the number of channels ( the extension to the multiple channel case is due to ) .the density of a message ( or set of messages ) is the total number of scheduled times , divided by the length of the schedule .standard calculus yields a solution to this relaxed problem .the solution specifies for each message a _ density _ , meaning that the message should be scheduled every time units .ammar and wong describe the following simple randomized rounding algorithm for producing a real schedule : _ for , for , choose a single message randomly so that is ; schedule in schedule slot . _they observe that the expected waiting time for a random request for is essentially in this schedule . since the expected waiting time in the relaxed schedule is essentially ( because an average request falls midway between two successive broadcasts of ) , this yields a 2-approximation w.r.t . expected response time .since the expected broadcast cost of is the same as the broadcast cost of the relaxed solution , the algorithm is a -approximation algorithm w.r.t . the total cost .ammar and wong also describe a greedy algorithm that bar - noy , bhatia , naor and schieber generalize in to the multiple channel case and prove to be essentially a derandomization of the randomized algorithm , with the same performance guarantee .* round - robin within groups .* since our goal is a ptas , we naturally group messages that are essentially equivalent ( i.e. have essentially the same cost and probability ) .our simple idea is the following variation of ammar and wong s rounding scheme , which is most simply described as follows : schedule the messages as ammar and wong do , but then , _ within each group , rearrange the messages so that they are scheduled in round - robin ( cyclic ) order . _the broadcast cost is unchanged , but the expected response time improves as follows .whereas before , a random request for a message in a group would have waited ( in expectation ) for messages from until finding its message , in the round - robin schedule , a random request for will wait ( by symmetry ) for messages from .that is , the expected wait in the round - robin schedule is times the expected wait in the ammar - wong schedule .since the ammar - wong schedule has performance guarantee , the round - robin schedule has performance guarantee .thus , when the groups are all large , the ammar - wong relaxation is essentially tight . * extending to the general case .* recall that for our purposes a group is a collection of messages with approximately ( w.r.t . ) the same probability and cost . as long as each group has size at least , the round - robin schedule gives a -approximation .to extend to the general case , we show the following . _any _ set of messages can be partitioned into three classes as follows : 1 . a constant number of _ important _ ( high probability ) messages .2 . 
messages belonging to _ large groups _ . 3 . leftover messages , contributing _ negligibly _ to the cost . the basic intuition for the existence of this partition is that , due to the rounding , the message-probabilities of the successive groups decrease exponentially fast . thus , for all but a constant number of groups ( where the message-probability is high ) , either the group is very large , or the total probability of the messages in the group is very small . although the intuition is basic , obtaining the proof with the appropriate parameters is somewhat involved and delicate . once we have the partition , we proceed as follows :
1 . find the density of messages in in a near-optimal schedule of and .
2 . compute an optimal `` short '' schedule of having density approximately .
3 . schedule the messages in in the slots not occupied by , using the group-round-robin algorithm .
4 . `` stretch '' the schedule , interspersing empty slots every time units , and schedule the messages for in these empty slots .
note that in order to `` cut and paste '' the schedules together , we have to explicitly control the density of and . this in itself requires little that is new . the main new difficulty is the following . in step 3 , we are using the round-robin algorithm to schedule , but in a schedule that is already partially filled by . for the analysis of the round-robin algorithm to continue to approximately hold , we require that the empty slots in schedule are sufficiently _ evenly distributed _ so that the scheduling of is not overly delayed at any time ( cost increases quadratically with delay ) . a priori , imposing this additional requirement on might increase the cost of too much . to show that this is not the case , we show ( using a non-constructive probabilistic argument ) that there is a schedule of that has _ constant-length period _ , density approximately , and cost approximately the cost of any optimal schedule of with density . since the period of this schedule is small , the empty slots are _ necessarily _ evenly distributed . the final output of the algorithm is a finite ( size linear in the input size ) description from which an infinite schedule with approximately optimal expected cost can be generated by a randomized algorithm in an `` on-line '' fashion , where each step requires time to schedule . the running time of the various steps is as follows . in step 1 , only a constant number of densities need to be considered : we can try them all and take the best . for each , the time for the remaining steps is as follows . step 2 can be done in constant time since the schedule we are looking for has constant length . step 3 can be done in randomized time in the size of the output . step 4 can also be done in randomized linear time in the size of the output . the final technical hurdle is showing that the algorithm can be derandomized ( extending the analysis of the greedy algorithm by bar-noy , bhatia , naor and schieber to this more complicated setting ) . the resulting deterministic algorithm outputs a polynomial-length schedule , the repetition of which gives the desired near-optimal infinite schedule . this is a rather technical contribution to the area of approximation schemes , which uses several usual techniques such as rounding , exhaustive search and treating `` large '' objects separately , but also requires a few additional ideas , specific to this problem , which we now point out .
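as an illustration of the `` stretch '' operation in step 4 above , the sketch below inserts one empty slot after every fixed number of original slots and fills the new slots with the leftover messages in round-robin order . the spacing parameter and the round-robin filling are simplified stand-ins for the choices made by the analysis .

    def stretch_and_fill(slots, leftovers, every):
        """Insert one extra slot after every `every` original slots and fill the
        extra slots with the leftover (negligible) messages in round-robin order.
        A simplified stand-in for step 4; the spacing used by the analysis is a
        specific function of the parameters and is left abstract here."""
        out, idx = [], 0
        for t, slot in enumerate(slots, start=1):
            out.append(list(slot))
            if t % every == 0:
                out.append([leftovers[idx % len(leftovers)]] if leftovers else [])
                idx += 1
        return out

    # intersperse two rarely requested messages into a four-slot schedule
    print(stretch_and_fill([["a"], ["b"], ["a"], ["b"]], ["x", "y"], every=2))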
as in all previous papers since ammar and wong s seminal work on the topic , we focus on the very informative lower bound , in which one can separate the contribution of each message to the objective function . since the lower bound is tight up to a factor of 2, it helps us identify the messages which contribute a lot to the objective function and must be treated with special care ( `` important '' messages ) .one apparently difficult case arises when there are many messages of similar costs and probabilities , which individually contribute very little to the objective function but are quite significant as a group .one simple but important new idea is that the lower bound is in fact essentially optimal in that case and is achieved by a round - robin - type heuristic .our algorithm thus starts by classifying messages into three categories : first , a constant number of important messages ( set ) ; second , messages belonging to large groups ( messages are grouped when they have the same probability and cost ) ; the other messages are negligible and can easily be dealt with in the end .now , imagine that we guess the fraction of time which the optimal schedule devotes to broadcasting messages from each category ( or _ density _ of each category ) .it seems that one could construct the optimal schedule of the `` important '' messages with respect to the density constraint , and then use the unoccupied time slots to broadcast the messages from the second category using the round - robin - type heuristic .however this is only provably efficient if one can relate the period of the schedule of to the size of the groups of the second category .studying the period of an optimal schedule of requires a new idea : cutting the schedule into pieces and glueing them back together in random order ( with special glue that prevents interactions between blocks ) has a smoothing effect which enables us to prove that the period can essentially be bounded by .furthermore , a structural lemma , requiring some further technical partitioning idea , proves that the groups can be assumed to have size at least for any .this is the key to the analysis of our approximation scheme .* plan of the paper*. we construct the algorithm gradually . 
after a preliminary section ( section [ sec : lb ] ) recalling the lower bound of ammar and wong and extending it suitably , we present ( section [ sec : simplecase ] ) a simple special case of message set ( every message has many identical sibling messages ) , for which we combine information from the lower bound with a round - robin type heuristic .we then study ( section [ sec : criticalcase ] ) a slightly more general case of message set ( every message , except for a constant number , has many identical sibling messages ) ; the proof of lemma [ lem : boundedperiod ] for bounding the period of the schedule of the important messages contains a random - shuffling type argument .section [ sec : negligible ] explains how to deal with negligible messages and is relatively straightforward .section [ sec : ptas ] puts the ideas together to construct a polynomial type randomized approximation scheme ; its validity rests on the structural lemma [ lem : partition ] ; we then proceed to show how the algorithm can be derandomized in a greedy fashion and how the period of the resulting schedule can be controlled .finally in section [ sec : technical ] we state several easy but useful lemmas which are used in several of the other proofs .[ sec : simplecase ] let the set of messages be partitioned into groups where group has size every message in has the same probability and broadcast cost .let be the desired maximum density of in the schedule. in this notation , ammar and wong s relaxation of the problem is : the minimization problem is a lower bound to the contribution of the messages of to the cost of any schedule over channels , in which has density .the problem has a unique solution satsifying : , for some . if , then ; otherwise , is the unique solution to : .the following important observation states that _ w.l.o.g ._ messages from the same group can be scheduled in round robin order .[ lem : roundrobin ] for any schedule , there exists a schedule in which the messages are broadcast in round robin order within each group , so that : + + in , consider a request for a message of , and the next broadcasts of messages of after that request : the expected response time to the request is minimized if all messages of are broadcast during those time slots .thus changing so as to reschedule the messages in in round robin order , decreases the expected response time while leaving the broadcast cost . note that if is periodic of period , then the resulting schedule is also periodic and of period at most .the next step consists in observing that the lower bound is _ tight _ in the setting of this section . according to the general lower bound, it is desired that a message is broadcast every , and then a random request for that message would have to wait for about on average . in the algorithm of ( specialized to this setting ) , each _ message _ is broadcast with probability , so that any request has to wait on average time : thus they obtain then a -approximation. on the other hand , in our algorithm [ algo : rando ] , we schedule the _ group _ with probability , and then the messages in round robin order within the groups .since a request for a message in , has to wait with equal probability for the 1 , the 2 , ... , the broadcast of messages in , it will be served after expected time .we will thus obtain a -approximation , which is a -approximation since all groups have size at least .we believe that this simple idea might be applicable to other problems . 
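a compact sketch of the randomized group / round-robin scheduler just described ( algorithm [ algo : rando ] ) . the per-group rates are taken as inputs here , in the paper they come from the solution of the relaxed problem , and the only ingredient beyond the ammar-wong rounding is the round-robin pointer kept for each group . the handling of empty slots and the data layout are illustrative choices .

    import random

    def group_round_robin_schedule(group_rates, group_sizes, k, T, seed=0):
        """Each channel slot picks a group j with probability rate_j / k (or stays
        empty with the leftover probability); within each group the members are
        emitted in round-robin order, as in the lemma below.  The rates are
        assumed to sum to at most k.  Returns T slots of (group, member) pairs."""
        rng = random.Random(seed)
        weights = [r / k for r in group_rates]
        weights.append(max(0.0, 1.0 - sum(weights)))       # probability of an empty pick
        options = list(range(len(group_sizes))) + [None]
        pointer = [0] * len(group_sizes)                   # round-robin position per group

        schedule = []
        for _ in range(T):
            slot = []
            for _ in range(k):
                j = rng.choices(options, weights=weights)[0]
                if j is None:
                    continue                               # leave this channel slot empty
                slot.append((j, pointer[j] % group_sizes[j]))
                pointer[j] += 1
            schedule.append(slot)
        return schedule

    # a large low-probability group round-robined against a small busier one
    print(group_round_robin_schedule([0.5, 0.25], [6, 2], k=1, T=12))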
[lem : rando ] in the setting of this section , the randomized algorithm [ algo : rando ] constructs a one - channel schedule whose expected cost is : $\sum_{j=1}^q \left( p_j \frac{g_j(g_j+1)}{2}\,\tau_j + \frac{c_j}{\tau_j} \right) - \frac{1}{2}$ . as explained above , a request for a message in waits on average until the end of the current time slot and then broadcasts of a message in on average . the expected response time is therefore $\frac{1}{2} + \sum_j p_j g_j \tau_j \,\frac{g_j+1}{2}$ , hence the claimed performance ratio . note that the law of large numbers implies that this expected cost is attained asymptotically with probability one . next we treat the case where the set of the messages can be partitioned into two sets and such that * consists of a constant number of messages * is partitioned into groups as in the previous section , such that each group has size at least , where will be defined later . a key point is to remark that , if we know the density of the messages of in an optimal schedule , and if we can schedule the messages of almost optimally subject to this density constraint while keeping the period of the schedule small , then we can complete the schedule by scheduling the messages of in the remaining empty slots according to the randomized algorithm given above . determining the densities of and thus allows us to treat those two sets of messages independently . recall from the discussion in the introduction that the challenge at this point is to show that there is a near - optimal schedule of with the appropriate density and in which the empty slots are relatively uniformly distributed . if so , then we can find the desired schedule for by exhaustive search , and then schedule into the empty space in the schedule using the round - robin algorithm previously described . to show the existence of the desired schedule for , we show there is a near - optimal schedule of with the appropriate density and with _ constant period _ ( independent of ) . [ lem : boundedperiod ] given a set of messages , with cost at most , some constant and a density , there exists a periodic schedule satisfying : 1 . the density of empty slots is approximately , and . let be the solution to the minimization problem . we obtain by scheduling the messages of on the first channel in the empty slots of , according to the randomized algorithm [ algo : rando ] with . lemma [ lem : rando ] and the scaling lemma [ lem : scaling ] ensure that the expected contribution of is bounded by . the algorithm above can easily be derandomized by trying all the starting points and choosing the one that minimizes the overall cost for the messages of , and by using the greedy algorithm [ algo : greedy ] to schedule . we now assume that we are in the general case . the aim of the section is to prove the following theorem , which is the main result of the paper . [ thm : ptas ] given and a set of messages , with message costs bounded by , algorithm [ algo : ptas ] constructs in time a periodic schedule with period , so that : we will first derive a ptras that will be derandomized in section [ sec : derando ] . we now need to put together the ideas developed for the special cases of the previous sections .
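the combination step just described ( a fixed , small - period schedule for the important messages whose empty slots are filled with the grouped messages ) can be sketched as follows . the array holding one period of the schedule of the important messages and the scheduler interface for the grouped messages are illustrative assumptions , not notation from the paper ; any scheduler for the grouped messages , for instance an adapter around the round - robin sketch shown earlier or its greedy derandomization , can be plugged in .
....
// A minimal sketch of combining a fixed periodic schedule of the important
// messages with a filler for the grouped messages: the period is cycled forever,
// and every slot it leaves empty is handed to the scheduler for the groups.
public class CombinedSchedule {
    /** Any scheduler for the grouped messages, e.g. the round-robin sketch above. */
    public interface GroupScheduler { int[] nextSlot(); }

    private final String[] importantPeriod;  // one period; null marks an empty slot
    private final GroupScheduler groups;
    private long t = 0;

    public CombinedSchedule(String[] importantPeriod, GroupScheduler groups) {
        this.importantPeriod = importantPeriod;
        this.groups = groups;
    }

    /** Content of the next time slot: an important message, a group message, or idle. */
    public String nextSlot() {
        String a = importantPeriod[(int) (t++ % importantPeriod.length)];
        if (a != null) return a;                 // slot reserved for the important messages
        int[] b = groups.nextSlot();             // fill the empty slot with a group message
        return (b == null) ? "idle" : "group " + b[0] + ", message " + b[1];
    }
}
....
this is the same left - to - right mapping of grouped messages into reserved empty slots that the mapping lemma analyzes later on .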
as a preliminary treatment ,we use standard rounding techniques to reduce the number of different messages .[ lem : rounding ] without loss of generality , we can assume that the request probabilities are a multiple of powers of and the broadcast costs are multiples of : where .standard and omitted .the following lemma is the main tool for putting together the various special cases studied so far , and is thus a key part of our construction .we would like to claim that similar ideas could be applied to other problems as well , however we were unable to abstract simple and general ideas from the technical proof .perhaps , if one believes that every approximation scheme rests on one `` structural lemma '' , it can be seen as the structural lemma for this problem .[ lem : partition ] given and , one can construct , in linear time in , a partition of the groups , of messages with probability ( where is the normalizing constant such that ) and cost , into three sets so that : 1 .the groups of have total size constant : , independent of .2 . the groups of are all large : + + 3 .the messages in have negligible contribution if they are scheduled rarely ( with density ) : + + since the proof is rather technical , we will only in this extended abstract give the construction of the partition into and in the case when there are no costs ( ) and there is only one broadcast channel ( ) ; this already contains the gist of the proof .let . in the case where there are no costs , the lower bound can be solved explicitly ( see ) even when there is a density constraint , to yield , for any subset of the message set : + [ [ the - construction ] ] _ the construction _+ + + + + + + + + + + + + + + + + + the construction is best understood by referring to figure [ fig : partition ] .we first deal with indices such that . let be some constant to be defined later , and define , and .( one can observe already that since the contributions of the messages of form the tail of a geometrically decreasing series , they will be negligible , and so they will end up in ; moreover , since and are both bounded for the definition of , set can only contain a small number of messages and so these messages will end up in ) .we now consider the more delicate case of the groups for which , for which we will need to use the pigeon hole principle .we partition their indices into blocks as follows : where is some constant to be defined later . according to , we can then rewrite the lower bound on the expected response time as , and the pigeon hole principle tells us that there exists at least one such that .we now define , , and .finally we set and as shown on figure [ fig : partition ] .it is now a simple matter to take our building blocks and deduce a randomized approximation scheme for the general data broadcast problem .[ pro : ptras ] given , the randomized algorithm [ algo : ptras ] yields a random schedule with cost : { \leqslant}(1 + 10{\varepsilon}){\ensuremath{\operatorname{opt}}}(m)\ ] ] round the probabilities and costs of the messages in , and partition the set of messages into three sets , according to lemma [ lem : partition ] with . schedule and with algorithm [ algo : a+b ] . 
insert the messages of into the schedule of and , with the algorithm described in lemma [ lem : schedulec ] .consider the rounded instance of the set of messages .according to the previous proposition [ pro : a+b ] and lemma [ lem : schedulec ] , we have : + { \leqslant}(1+{\varepsilon})(1 + 5{\varepsilon}){\ensuremath{\operatorname{opt}}}({{\ensuremath{\dot m } } } ) $ ] but lemma [ lem : rounding ] ensures that : + which yields the result .the insertion of can be done at the same time than the broadcast of and in algorithm [ algo : ptras ] .the ptras has one slight problem , namely , that it is not periodic , hence may be somewhat awkward to implement in some settings . in this sectionwe derandomize it using greedy choices , and show how to control the period of the resulting algorithm . [ not : greedystate ] we define the _ state _ at slot as the time period elapsed from the beginning of the of the last broadcasts of group to the end of slot , as shown figure [ fig : greedystate ] .[ lem : greedy ] given a set of messages partitioned into groups of size , and a set of reals so that , the greedy algorithm [ algo : greedy ] yields a one - channel schedule whose cost satisfies: if minimizes , we get a -approximation . add a dummy group , if needed . let be the state at time slot . let which minimizes : + schedule during slot , the next message of in the round robin order , if , and stay idle otherwise .the greedy choice at time slot is made in order to minimize the expected cost of the already allocated slots , if the schedule continues with the randomized algorithm [ algo : rando ] after time ; this property ensures that the greedy schedule is at least as good as the randomized one .the above greedy algorithm could conceivably have very large period .the lemma below shows that we can truncate it so as to obtain a periodic schedule of polynomial length .[ lem : periodicgreedy ] given a set of messages partitioned into groups of size , a set of reals such that , and any , algorithm [ algo : periodicgreedy ] yields a one - channel schedule with period , whose cost is bounded by : schedule during slot message . the greedy algorithm during slots . sort in increasing order the set and schedule in slots in order of increasing , the message of group in the round robin order .omitted .our main algorithm can now be found in algorithm [ algo : ptas ] .round the probabilities and costs , and partition into as in the ptras .compute and the density and periodic schedule of to minimizes , as in algorithm 2 .compute the greedy periodic schedule of with and with period .concatenate periods of and map into the empty slots in the natural order .compute the greedy periodic schedule of with where minimizes , and with period .choose the best starting point in and stretch the schedule of and by inserting a slot of on the first channel every and an empty slot on the other channels at that time .let be the resulting schedule .choose the best starting point in and construct by stretching by inserting the messages in fixed order on the first channel every . 
the schedule constructed in the last step is then structured into independent blocks of length . the cheapest block will be the period of our approximation . theorem [ thm : ptas ] is proved by analyzing algorithm [ algo : ptas ] . the analysis is derived from the analysis of the ptras . the first six steps are exactly the same , except that the periodic greedy algorithm [ algo : periodicgreedy ] is used instead of the randomized algorithm [ algo : rando ] . since the performance ratio of algorithm [ algo : periodicgreedy ] is better , the schedule obtained at step 6 is at least as good , and is periodic with period : we finally reduce the period in steps 7 - 8 by using the stretching lemma [ lem : stretching ] , which ensures that , at an increase of of the cost , we can extract from a block with length and : the lemmas in this section are useful for analyzing several of our constructions . the stretching lemma states that changing a schedule by inserting a few empty slots once in a while barely affects the expected response time . [ lem : stretching ] given a schedule on channels of and a positive integer , let . consider the schedule obtained from by inserting empty slots just before the time slots , where is a random time in . then the expected response time of the resulting schedule is at most $(1+\varepsilon)\,\operatorname{ert}(s)$ . [ lem : scaling ] given a set of messages and a schedule , let be the schedule obtained by scaling by a factor : schedule at time on some channel the same message as at time , and stay idle otherwise . then : the mapping lemma is used for analyzing the effect of inserting the messages from into the slots left empty in the density - constrained schedule of ; these slots may be spaced irregularly . [ lem : mapping ] given a set of messages , partitioned into groups of identical messages , such that all groups are larger than , consider a one - channel schedule of scheduling each group in round robin order , and a periodic sequence of reserved time - slots over channels with density and period . let be the schedule obtained by mapping the schedule into the reserved empty slots from left to right . then :
|
the data broadcast problem is to find a schedule for broadcasting a given set of messages over multiple channels . the goal is to minimize the cost of the broadcast plus the expected response time to clients who periodically and probabilistically tune in to wait for particular messages . the problem models disseminating data to clients in asymmetric communication environments , where there is a much larger capacity from the information source to the clients than in the reverse direction . examples include satellites , cable tv , internet broadcast , and mobile phones . such environments favor the `` push - based '' model where the server broadcasts ( pushes ) its information on the communication medium and multiple clients simultaneously retrieve the specific information of individual interest . this sort of environment motivates the study of `` broadcast disks '' in information systems . in this paper we present the first polynomial - time approximation scheme for the data broadcast problem for the case when and each message has arbitrary probability , unit length and bounded cost . the best previous polynomial - time approximation algorithm for this case has a performance ratio of .
|
the classic database textbook dedicates several chapters to schema design : carefully crafting an abstract model , translating it into a relational schema , which is then normalized .while walking their students through the course , scholars emphasize again and again the importance of an anticipatory , holistic design , and the perils of making changes later on .decades of experience in writing database applications have taught us this . yetthis waterfall approach no longer fits when building today s web applications . during the last decade, we have seen radical changes in the way we build software , especially when it comes to interactive , web - based applications : release cycles have accelerated from yearly releases to weekly or even daily , new deployments of beacon applications such as youtube ( quoting marissa meyer in ) .this goes hand in hand with developers striving to be agile . in the spirit of lean development ,design decisions are made as late as possible .this also applies to the schema .fields that _ might _ be needed in the future are not added presently , reasoning that until the next release , things might change in a way that would render the fields unnecessary after all .it is partly due to this very need for more flexibility , that schema - free nosql data stores have become so popular .typically , developers need not specify a schema up front .moreover , adding a field to a data structure can be done anytime and at ease .[ [ scope - of - this - work . ] ] scope of this work .+ + + + + + + + + + + + + + + + + + + we study aspects of schema management for professional web applications that are backed by nosql data stores .figure [ fig : architecture ] sketches the typical architecture .all users interact with their own instance of the application , e.g. a servlet hosted by a platform - as - a - service , or any comparable web hosting service .it is established engineering practice that the application code uses an object mapper for the mapping of objects in the application space to the persisted entities .we further assume that the nosql data store is provided as database - as - a - service , so we have no way of configuring or extending it .our work addresses this important class of applications . of course , there are other use cases for employing nosql technology , yet they are not the focus of our work . [[ case - study - blogging - applications . ] ] case study : blogging applications .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we introduce a typical example of a professional web application : an online blog . in the spirit of shippingearly and often , both the features of our application as well as the data will evolve .we use a nosql data store , which stores data as entities .we will establish our terminology in later chapters , and make do with a hand - wavy introduction at this point .each entity has an entity key , which is a tuple of an entity kind and an identifier .each entity further has a value , which is a list of properties : .... ( kind , i d ) = { comma - separated list of properties } .... let us dive right in . in our first release , users publish blogs ( with title and content ) and guests can leave comments on blogposts . for each blogpost , information about the author and the date of the post is stored . in the examplebelow , we use a syntax inspired by json , a lightweight data - interchange format widely used within web - based applications . 
....( blogpost , 007 ) = { title : " nosql data modeling techniques " , content : " nosql databases are often ... " , author : " michael " , date : " 2013 - 01 - 22 " , comments : [ { comment - content : " thanks for the great article ! " , comment - date : " 2013 - 01 - 24 " } , { comment - content : " i would like to mention ... " , comment - date : " 2013 - 01 - 26 " } ] } .... soon , we realize that changes are necessary : we decide to support voting , so that users may `` like '' blogposts and comments .consequently , we expand the structure of blogposts and add a `` likes '' counter . since we have observed some abuse , we no longer support anonymous comments . from now on ,users authenticate with a unique email address .users may choose a username ( ` user ` ) and link from their comments to their website ( ` url ` ) , as well as specify a list of interests .accordingly , we add new fields .we take a look at a new data store entity ` ( blogpost , 234708 ) ` and the state of an older entity ` ( blogpost , 007 ) ` that has been added in an earlier version of the application . for the sake of brevity ,we omit the data values : .... ( blogpost , 234708 ) = { title , content , author , date , likes , comments [ { comment - content , comment - date , comment - likes , user , email , url , interests [ ] } ] } .... .... ( blogpost , 007 ) = { title , content , author , date , comments [ { comment - content , comment - date } , { comment - content , comment - date } ] } .... next , we decide to reorganize our user management . we store user - related data in separate ` user ` entities .these entities contain the user s ` login ` , ` passwd ` , and ` picture ` . during this reorganizationwe further rename ` email ` to ` login ` in ` blogpost ` entities .the ` interests ` are moved from ` blogpost ` to the ` user ` entities , and the ` url ` is removed .below , we show blogpost ` ( blogpost , 331175 ) ` of this new generation of data , along with old generation blogposts that were persisted in earlier versions of the application .the structural differences are apparent . ....( blogpost , 331175 ) = { title , content , author , date , likes , comments [ { comment - content , comment - date , comment - likes , user , login } ] } ( user , 42 ) = { login , passwd , interests [ ] , picture } .... .... ( blogpost , 234708 ) = { title , content , author , date , likes , comments [ { comment - content , comment - date , comment - likes , user , email , url , interests [ ] } ] } .... .... ( blogpost , 007 ) = { title , content , author , date , comments [ { comment - content , comment - date } , { comment - content , comment - date } ] } .... after only three releases , we have accumulated considerable technical debt in our application code .it is now up to the developers to adapt their object mappers and the application logic so that all three versions of blogposts may co - exist .whenever a blogpost is read from the data store , the application logic has to account for the heterogeneity in the comments : some comments do not have any user information , while others have information about the user identified by ` email ` ( along with other data ) .a third kind of ` blogpost ` contains ` comments ` identified by the user s ` login ` .if new generation comments are added to old generation blogposts , we produce even a fourth class of blogposts .not only does this introduce additional code complexity , it also increases the testing effort . 
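to make the resulting technical debt concrete , the following java - style sketch shows the kind of branching that accumulates in the application code once the different generations of blogposts co - exist . the map - based representation of a comment and the method name are hypothetical ; a real application would go through its object mapper instead .
....
// A minimal sketch of the case distinctions forced on the application logic by
// heterogeneous comment structures. The Map<String, Object> stand-in for a
// comment and the method name are assumptions made for this illustration.
import java.util.Map;

public class CommentReader {
    /** Who wrote a comment, across all structural generations of blogposts. */
    public static String commenter(Map<String, Object> comment) {
        if (comment.containsKey("login")) {           // third generation: separate user entities
            return (String) comment.get("login");
        } else if (comment.containsKey("email")) {    // second generation: identified by email
            return (String) comment.get("email");
        } else {                                      // first generation: anonymous comments
            return "anonymous";
        }
    }
}
....
every further structural change adds another branch of this kind for every property that was touched .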
with additional case distinctions ,a good code coverage in testing becomes more difficult to obtain . in an agile setting where software is shipped early and often, developers would rather spend their time writing new features than fighting such forms of technical debt . at the same time , the nosql data store offers little , if any , support in evolving the data along with the application .our main contribution in this paper is an approach to solving these kinds of problems .[ [ schema - evolution - in - schema - less - stores . ] ] schema evolution in schema - less stores .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + while the sweet spot of a schema - less backend is its flexibility , this freedom rapidly manifests in ever - increasing technical debt with growing data structure entropy .once the data structures have degenerated , a nosql data store provides little support for getting things straightened out .most nosql data stores do not offer a _data definition language _ for specifying a global schema ( yet some systems , such as cassandra , actually do ) .usually , they merely provide basic read- and write operations for manipulating single entities , delegating the manipulation of sets of entities completely to the application logic .consequently , these systems offer no dedicated means for migrating legacy entities , and developers are referred to writing batch jobs for data migration tasks ( e.g. ) . in such batch jobs , entities are fetched one - by - one from the data store into the application space , modified , and afterwards written back to the store .worse yet , since we consider interactive web applications , migrations happen while the application is in use .we refer to data migration in batches as _ eager migration _ , since entities are migrated in one go . alas , for a popular interactive web - application , the right moment for migrating all entities may never come. moreover , a large - scale data store may contain legacy data that will never be accessed again , such as stale user accounts , blogposts that have become outdated , or offers that have expired . migrating this datamay be a wasted effort , and expensive , when you are billed by your database - as - a - service provider for all data store reads and writes . as an alternative, the developer community pursues what we call a _ lazy _ data migration strategy .entities of the old and new schema are allowed to co - exist .whenever an entity is read into the application space , it can be migrated . effectively, this will migrate only `` hot '' data that is still relevant to users .for instance , the objectify object mapper has such support for google datastore .however , all structure manipulations require custom code . as of today, there is no systematic way to statically analyze manipulations before executing them .moreover , from a database theory point - of - view , lazy migration is little understood ( if at all ) .this makes lazy migration a venture that , if applied incorrectly on production data , poses great risks .after all , once entities have been corrupted , there may be no way to undo the changes .[ [ desiderata . 
] ] desiderata .+ + + + + + + + + + + what is missing in today s frameworks is a means to systematically manage the schema of stored data , while at the same time maintaining the flexibility that a schema - less data store provides .what we certainly can not wish for is a rigorous corset that ultimately enforces a relational schema on nosql data stores .most systems do provide some kind of data store viewer , where single entities can be inspected , and even modified , or data can be deleted in bulk ( e.g. ) . yet to the best of our knowledge , there is no schema management interface that would work _ across _ nosql systems from different providers , allowing application administrators to manage their data s structure systematically .this entails basic operations such as adding or deleting fields , copying or moving fields from one data structure to another . from studying the discussions in developer forums ,we have come to believe that these are urgently needed operations ( e.g. to list just a few references ) .add , rename , and delete correspond to the capabilities of an `` alter table '' statement in relational databases .just as with relational databases , more complex data migration tasks would then have to be encoded programmatically . yet in the majority of nosql databases , _ any _ data structure maintenance affecting more than one entity must be coded manually .we still lack some of the basic tooling that one would expect in a nosql data store _ ecosystem _ , so that we may professionally maintain our production data in the long run .[ [ contributions . ] ] contributions .+ + + + + + + + + + + + + + the goal of this work is to address this lack of tooling .we lay the foundation for building a generic schema evolution interface to nosql systems .such a tool is intended for developers , administrators , and software architects to declaratively manage the structure of their production data . to this end, we make the following contributions : * we investigate the established field of schema evolution in the new context of schema - less nosql data stores . *we contribute a declarative _ nosql schema evolutionlanguage_. our language consists of a set of basic yet practical operations that address the majority of the typical cases that we see discussed in developer forums .* we introduce a generic _ nosql database programming language _ that abstracts from the apis of the most prominent nosql systems .our language clearly distinguishes the state of the persisted data from the state of the objects in the application space .this is a vital aspect , since the nosql data store offers a very restricted api , and data manipulation happens in the application code .* by implementing our schema evolution operations in our nosql database programming language , we show that they can be implemented for a large class of nosql data stores .* we investigate whether a proposed schema evolution operation is _ safe _ to execute . * apart from exploring _ eager _migration , we introduce the notion of _ lazy _ migration and point out its potential for future research in the database community . [ [ structure . ] ] structure .+ + + + + + + + + + in the next section , we start with an overview on the state - of - the - art in nosql data stores .section [ sec : evolution ] introduces our declarative language for evolving the data and its structure . 
in section [ sec : api ] , we define an abstract and generic nosql database programming language for accessing nosql data stores .the operations of our language are available in many popular nosql systems . with this formal basis, we can implement our schema evolution operations eagerly , see section [ sec : encoding_evolution ] .alternatively , schema evolution can be handled lazily .we sketch the capabilities of object mappers that allow lazy migration in section [ sec : lazy ] . in section [ sec : related ] , we discuss related work on schema evolution in relational databases , xml applications , and nosql data stores .we then conclude with a summary and an outlook on our future work .we focus on nosql data stores hosted in a cloud environment .typically , such systems scale to large amounts of data , and are schema - less or schema - flexible .we begin with a categorization of popular systems , discussing their commonalities and differences .we then point out the nosql data stores that we consider in this paper with their core characteristics . in doing so, we generalize from proprietary details and introduce a common terminology .nosql data stores vary hugely in terms of data model , query model , scalability , architecture , and persistence design .several taxonomies for nosql data stores have been proposed .since we focus on schema evolution , a categorization of systems by data model is most natural for our purposes .we thus resort to a ( very common ) classification into ( 1 ) key - value stores , ( 2 ) document stores , and ( 3 ) extensible record stores . often , extensible record stores are also called wide column stores or column family stores . [[ key - value - stores . ] ] ( 1 ) key - value stores .+ + + + + + + + + + + + + + + + + + + + + systems like redis ( * ? ? ?* chapter 8) or riak store data in pairs of a unique key and a value .key - value stores do not manage the structure of these values .there is no concept of schema beyond distinguishing keys and values .accordingly , the query model is very basic : only inserts , updates , and deletes by key are supported , yet no query predicates on values .since key - value stores do not manage the schema of values , schema evolution is the responsibility of the application .[ [ document - stores . ] ] ( 2 ) document stores .+ + + + + + + + + + + + + + + + + + + + systems such as mongodb or couchbase also store key - value pairs .however , they store `` documents '' in the value part .the term `` document '' connotes loosely structured sets of name - value pairs , typically in json ( javascript object notation ) format or the binary representation bson , a more type - rich format of json .name - value pairs represent the properties of data objects .names are unique , and name - value pairs are sometimes even referred to as key - value pairs .the document format is hierarchical , so values may be scalar , lists , or even nested documents .documents within the same document store may differ in their structure , since there is no fixed schema .queries in document stores are more expressive than in key - value stores . apart from inserting , updating , and deleting documents based on the document key , we may query documents based on their properties .the query languages differ from system to system .some systems , such as mongodb , have an integrated query language for ad - hoc queries , whereas other systems , such as couchdb ( * ? ? 
?* chapter 6 ) and couchbase , do not .there , the user predefines views in form of mapreduce functions .an interesting and orthogonal point is the behavior in evaluating predicate queries : when a document does not contain a property mentioned in a query predicate , then this property is not even considered in query evaluation .document stores are schema - less , so documents may effortlessly evolve in structure : properties can be added or removed from a particular document without affecting the remaining documents . typically , there is no schema definition language that would allow the application developer to manage the structure of documents globally , across all documents .[ [ extensible - record - stores . ] ] ( 3 ) extensible record stores .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + extensible record stores such as bigtable or hbase actually provide a loosely defined schema .data is stored as records .a schema defines families of properties , and new properties can be added within a property family on a per - record basis .( properties and property families are often also referred to as _ columns _ and _ column families_. ) typically , the schema can not be defined up front and extensible record stores allow the ad - hoc creation of new properties .however , properties can not be renamed or easily re - assigned from one property family to the other .so certain challenges from schema evolution in relational database systems carry over to extensible record stores .google datastore is built on top of megastore and bigtable , and is very flexible and comfortable to use .for instance , it very effectively implements multitenancy for all its users .the cassandra system is an exception among extensible record stores , since it is much more restrictive regarding schema .properties are actually defined up front , even with a `` create table '' statement , and the schema is altered globally with an `` alter table '' statement .so while cassandra is an extensible record store , it is not schema - less or schema - flexible . in this work, we will exclusively consider schema - less data stores .[ [ a - word - on - null - values . ] ] a word on null values .+ + + + + + + + + + + + + + + + + + + + + + the handling of null values in nosql data stores deserves attention , as the treatment of unknown values is a factor in schema evolution . in relational database systems ,null values represent unknown information , and are processed with a three - valued logic in query evaluation .yet in nosql data stores , there is no common notion of nulls across systems : * some systems follow the same semantics of null values as relational databases , e.g. . *some systems allow for null values to be stored , but do not allow nulls in query predicates , e.g. .* some systems do not allow null values at all , e.g. , arguing that null values only waste storage . while there is no common strategy on handling unknown values yet, the discussion is ongoing and lively .obviously , there is a semantic difference between a property value that is not known ( such as the first name for a particular user ) , and a property value that does not exist for a variant of an entity ( since home addresses and business addresses are structured differently ) .consequently , some nosql data stores which formerly did not support null values have introduced them in later releases ( * ? ? ?* ; * ? ? ?* chapter 6 ) . 
in section[ sec : api ] , we present a generic nosql data store programming language .as the approaches to handling null values are so manifold , we choose to disregard nulls as values and in queries , until a consensus has been established among nosql data stores . in this paper , we investigate schema evolution for feature - rich , interactive web applications that are backed by nosql data stores .this makes document stores and schema - less extensible record stores our primary platforms of interest . since key - value storesdo not know any schema apart from distinguishing keys and values , we believe they are not the technology of choice for our purposes ; after all , one can not even run the most basic predicate queries , e.g. to find all blogs posted within the last ten hours .we assume a high - level , abstract view on document stores and extensible record stores and introduce our terminology .our terminology takes after google datastore .we also state our assumptions on the data and query model . [sec : terminology ] [ [ data - model . ] ] data model .+ + + + + + + + + + + objects stored in the nosql data store are called _ entities_. each entity belongs to a _kind _ , which is a name given to groups of semantically similar objects .queries can then be specified over all entities of the same kind .each entity has a unique _ key _ , which consists of the entity kind and an _ id_. entities have several _ properties _ ( corresponding to attributes in the relational world ) .each entity property consists of a _ name _ and a _value_. properties may be scalar , they may be multi - valued , or consist of nested entities . [[ query - model . ] ] query model .+ + + + + + + + + + + + entities can be inserted and deleted based on their key .we can formulate queries against all entities of a kind . at the very least ,we assume that a nosql data store supports conjunctive queries with equality comparisons .this functionality is commonly provided by document stores and extensible record stores alike .[ [ freedom - of - schema . ] ] freedom of schema .+ + + + + + + + + + + + + + + + + + we assume that the global structure of entities can not be fixed in advance .the structure of a single entity can be changed any time , according to the developers needs . _the blogging application example from the introduction is coherent with this terminology and these assumptions .in schema - less nosql data stores , there is no explicit , global schema . yet when we are building feature - rich , interactive web applications on top of nosql data stores, entities actually do display an implicit structure ( or schema ) ; this structure manifests in the entity kind and entity property names .this especially holds when object mappers take over the mundane task of marshalling objects from the application space into persisted entities , and back .these object mappers commonly map class names to entity kinds , and class members to entity properties .( we discuss object mappers further in the context of related work in section [ sec : related ] . )thus , there is a large class of applications that use nosql data stores , where the data is _ somewhat _ consistently structured , but has no fixed schema in the relational sense .moreover , in an agile setting , these are applications that evolve rapidly , both in their features and their data . 
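as an illustration of this implicit structure , the following sketch shows how an object mapper typically induces a schema : the class name is mapped to the entity kind , the annotated identifier to the key , and the remaining members to entity properties . the annotations used here are made up for the example and do not belong to any particular mapper .
....
// A minimal sketch of an object-mapper-style entity class. The @Kind and @Id
// annotations are illustrative only; concrete mappers use their own annotations.
import java.util.List;

@interface Kind {}
@interface Id {}

@Kind
class BlogPost {
    @Id Long id;             // key: (kind "BlogPost", id)
    String title;            // property "title"
    String content;          // property "content"
    String author;           // property "author"
    long likes;              // property added in a later release
    List<String> comments;   // multi-valued property (nested comments simplified to strings)
    int version;             // per-entity schema version counter
}
....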
under these assumptions, we now define a compact set of declarative schema migration operations , that have been inspired by schema evolution in relational databases , and update operations for semi - structured data .while we can only argue empirically , having read through discussions in various developer forums , we are confident that these operations cover a large share of the common schema migration tasks ..... evolutionop : : = add | delete | rename | move | copy ; add : : = " add " property " = " value [ selection ] ; delete : : = " delete " property [ selection ] ; rename : : = " rename " property " to " pname [ selection ] ; move : : = " move " property " to " kname [ complexcond ] ; copy : : = " copy " property " to " kname [ complexcond ] ; selection : : = " where " conds ; complexcond : : = " where " ( joincond | conds | ( joincond " and " conds ) ) ; joincond : : = property " = " property ; conds : : = cond { " and " cond } ; cond : : = property " = " value ; property : : = kname " . " pname ; kname : : = identifier ; pname : : = identifier ; .... figure [ fig : ebnf ] shows the syntax of our _ nosql schema evolution language _ in extended backus - naur form ( ebnf ) .an evolution operation adds , deletes , or renames properties .properties can also be moved or copied .operations may contain conditionals , even joins .the property kinds ( derived from ` kname ` ) and the property names ( ` pname ` ) are the terminals in this grammar .we will formally specify the semantics for our operations in section [ sec : encoding_evolution ] .for now , we discuss some examples to develop an intuition for this language .we introduce a special - purpose numeric property `` version '' for all entities .the version is incremented each time an entity is processed by an evolution operator .this allows us to manage heterogeneous entities of the same kind .this is an established development practice in entity evolution .we begin with operations that affect all entities of one kind : the add operation adds a property to all entities of a given kind .a default value may be specified ( see example [ ex : add ] ) .the delete operation removes a property from all entities of a given kind ( see example [ ex : delete ] ) .the rename operation changes the name of a property for all entities of a given kind ( see example [ ex : rename ] ) . _[ ex : add ] below , we show an entity from our blogpost example before and after applying operation * add blogpost.likes = 0*. this adds a likes - counter to all blogposts , initialized to zero .we chose a compact tabular representation of entities and their properties . _c c|c c [ cols="<,<",options="header " , ] section [ sec : encoding_evolution ] formalizes the semantics and investigates the effort of our migration operations . as a prerequisite , we next introduce a generic nosql database programming language .relational databases come with a query language capable of joins , as well as dedicated data definition and data manipulation language . yet in programming against nosql data stores , the application logic needs to take over some of these responsibilities .we now define the typical operations on entities in nosql data stores , building a purposeful nosql database programming language .our language is particularly modeled after the interfaces to google datastore , and is applicable to document stores ( e.g. ) as well as schema - less extensible record stores ( e.g. 
) .we consider system architectures such as shown in figure [ fig : architecture ] .each user interacts with an instance of the application , e.g. a servlet .typically , the application fetches entities from the data store into the application space , modifies them , and writes them back to the data store .we introduce a common abstraction from the current state of the data store and the objects available in the application space .we refer to this abstraction as the _ memory state_. [ [ the - memory - state . ] ] the memory state .+ + + + + + + + + + + + + + + + + we model a memory state as a set of mappings from entity keys to entity values .let us assume that an entity has key and value .then the memory contains the mapping from this key to this value : .keys in a mapping are unique , so a memory state does not contain any mappings and with .the entity value itself is structured as a mapping from property names to property values .a property value may be from an atomic domain , either single - valued ( ) or multi - valued ( ) , or it may consist of the properties of a nested entity . _ [ ex : single_entity ] we model a memory state with a single entity managing user data .the key is a tuple of kind _ user _ and the i d .the entity value contains the user s login `` hhiker '' and password `` galaxy '' : . _ [ [ substitutions . ] ] substitutions .+ + + + + + + + + + + + + + we describe manipulations of a memory state by substitutions .a substitution is a mapping from a set ( e.g. the entity keys ) to a set ( e.g. the entity values ) and the special symbol . to access in a substitution ,we write .if , then this explicitly means that this mapping is not defined .let be the memory state , and let be a substitution .in _ updating the memory state by substitution _ , we follow a create - or - replace philosophy for each mapping in the substitution .we denote the updated memory by ] to abbreviate the substitution with a single mapping ] ; + above , $ ] is obtained from operation by first substituting each occurrence of in by , and next replacing all operands in query predicates by the value of `` getproperty( ) '' . _[ ex : safe_migrations ] we add a new property `` email '' to all user entities in the data store , and initialize it with the empty string ._ * foreach * * in * get( ) * do * + = setproperty( , ) ; + put( ) + since denormalization is vital for performance in nosql data stores , we show how to copy the property `` url '' from each user entity to all blogposts written by that user .* foreach * * in * get( ) * do * + * foreach * * in * get( ) * do * + = setproperty( , , getproperty( , ) ) ; + put( ) + * od + * od * * that we have a generic nosql database programming language , we can implement the declarative schema evolution operations from section [ sec : evolution ] .we believe the declarative operations cover common schema evolution tasks . 
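for concreteness , the copy migration from the example above might look roughly as follows when written against such a generic api . the datastore and entity interfaces below are stand - ins for the abstract operations get , getproperty , setproperty and put ; the join on the login property and the version number are assumptions made for this illustration .
....
// A minimal sketch of the eager copy migration (copy "url" from each user entity
// to the blogposts written by that user), expressed against a small stand-in for
// the generic NoSQL database programming language.
import java.util.Objects;

interface Entity {
    Object getProperty(String name);
    void setProperty(String name, Object value);
}

interface DataStore {
    Iterable<Entity> get(String kind);   // all entities of a kind
    void put(Entity e);                  // create-or-replace by key
}

public class CopyUrlMigration {
    public static void run(DataStore store) {
        for (Entity user : store.get("user")) {
            Object login = user.getProperty("login");
            Object url = user.getProperty("url");
            for (Entity post : store.get("blogpost")) {
                // join encoded in the application logic (assumed join key: "login")
                if (Objects.equals(post.getProperty("login"), login)) {
                    post.setProperty("url", url);
                    post.setProperty("version", 4);   // bump the version (value illustrative)
                    store.put(post);
                }
            }
        }
    }
}
....
note the two nested loops : the join has to be encoded in the application logic , which is also why checking the safety of copy and move operations takes quadratic time , as discussed below .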
for more complex migration scenarios, we can always resort to a programmatic solution .this matches the situation with relational databases , where an `` alter table '' statement covers the typical schema alterations , but where more complex transformations require an etl - process to be set up , or a custom migration script to be coded .figure [ fig : implementing_operations_adr ] shows the implementation for the operations add , delete , and rename .a for - loop fetches all matching entities from the data store , modifies them , and updates their version property ( as introduced in section [ sec : evolution ] ) .the updated entities are then persisted .figure [ fig : implementing_operations_cm ] shows the implementation for copy and move .again , entities are fetched from the nosql data store one by one , updated , and then persisted .this requires joins between entities .since joins are not supported in most nosql data stores , they need to be encoded in the application logic .this batch update corresponds to the recommendation of nosql data store providers on how to handle schema evolution ( e.g. ) .note that the create - or - replace semantics inherent in our nosql database programming language make for a _ well - defined _ behavior of operations .for instance , renaming the property `` text '' in blogposts to `` content '' ( c.f .example [ ex : rename ] ) effectively overwrites any existing property named content .moreover , the version property added to all entities makes the migration _ robust _ in case of interruptions .nosql data stores commonly offer very limited transaction support .for instance , google datastore only allows transactions to span up to five entities in so - called _ cross - group transactions _ ( or alternatively , provides the concept of entity groups not supported in our nosql database programming language ) .so a large - scale migration can not be performed as an atomic action . by restricting migrations to all entities of a particular version ( using the where - clause ) ,we may correctly recover from interrupts , even for move and copy operations .interestingly , not all migrations that can be specified are desirable .for instance , assuming a 1:n relationship between users and the blogposts they have written , the result of the migration does not depend on the order in which blogpost entities are updated . however , if there is an n : m relationship between users and blogposts , e.g. since we specify the copy operation as cross product between all users and all blogposts , then the execution order influences the migration result . naturally , we want to be able to know whether a migration is safe before we execute it .concretely , we say a migration is _ safe _ if it does not produce more than one entity with the same key .[ [ legend ] ] legend : + + + + + + + let be a kind , let be a property name , and let be a property value from . is a conjunctive query over properties .+ * foreach * * in * get( ) * do * + = setproperty( , , ) ; + setproperty( , , getproperty( , ) ) ; + put( ) + * od * * foreach * * in * get( ) * do * + = removeproperty( , ) ; + setproperty( , , getproperty( , ) ) ; + put( ) + * od * * foreach * * in * get( ) * do * + = setproperty( , , getproperty( , ) ) ; + removeproperty( , ) ; + setproperty( , , getproperty( , ) ) ; + put( ) + * od * [ [ legend-1 ] ] legend : + + + + + + + let be kinds and let be a property name .conditions and are conjunctive queries . has atoms of the form , where is a property name and is a value from . 
has atoms of the form or , where , and are property names . is a value from .+ * foreach * * in * get( ) * do * + * foreach * * in * get( ) * do * + = setproperty( , , getproperty( , ) ) ; + setproperty( , , getproperty( , ) ) ; + put( ) + * od ; + setproperty( , , getproperty( , ) ) ; + removeproperty( , ) ; + put( ) + * od * * * foreach * * in * get( ) * do * + * foreach * * in * get( ) * do * + = setproperty( , , getproperty( , ) ) ; + setproperty( , , getproperty( , ) ) ; + put( ) + * od + * od * * the following propositions follow from the implementations of schema evolution operators in figures [ fig : implementing_operations_adr ] and [ fig : implementing_operations_cm ] .an add , delete , or rename operation is safe . for a move or copy operation , and a data store state , the safety of executing the operation on can be decided in .deciding whether a copy or move operation is safe can be done in a simulation run of the evolution operator .if an entity has already been updated in such a `` dry - run '' and is to be overwritten with different property values , then the migration is not safe .in relational data exchange , the existence of solutions for relational mappings under constraints is a highly related problem .there , it can be shown that while the existence of solutions is an undecidable problem per - se , for certain restrictions , the problem is ptime - decidable ( c.f .corollary 2.15 in ) .moreover , the vehicle for checking for solutions is the chase algorithm , which fails when equality - generating dependencies in the target schema are violated .this is essentially the same idea as our dry - run producing entities with the same key , but conflicting values .since our schema evolution operations copy and move require two nested for - loops , we can check for safety in quadratic time .( keeping track of which entities have already been updated can be done efficiently , e.g. by maintaining a bit vector in the size of . )our nosql database programming language can also express operations for lazy migration . to illustrate this on an intuitive level , we encode some features of the objectify object mapper .we will make use of some self - explanatory additional language constructs , such as if - statements and local variables .additionally , we assume an operation `` hasproperty( , ) '' that tests whether the entity with key in the application state has a property by name . _the following example is adapted from the objectify documentation .it illustrates how properties are renamed when an entity is loaded from the data store and translated into a java object ._ the java class person is mapped to an entity .the annotation ` ` marks the identifier for this entity , the entity kind is derived from the class name .the earlier version of this entity has a property `` name '' , which is now renamed to `` fullname '' .legacy entities do not yet have the property `` fullname '' .when they are loaded , the object mapper assigns the value of property `` name '' to the class attribute `` fullname '' .the next time that the entity is persisted , its new version will be stored ..... public class person { long i d ; ("name " ) string fullname ; } .... in our nosql database programming language , we implement the annotation ` ` as follows .= key : = ; + hasproperty( , ) * do * + = setproperty( , , getproperty( , ) ) ; + removeproperty( , ) + * od * _ the following example is adapted from .the annotation ` ` specifies the migration for an entity when it is loaded . 
if the entity has properties street and city , these properties are moved to a new entity storing the address .these properties are then discarded from the person entity when it is persisted ( specified by the annotation ` ` ) .saving an entity is done by calling the objectify function ` ofy().save ( ) ` . _ .... public class person { long i d ; string street ; string city ; void onload ( ) { if ( this.street ! = null & & this.city ! = null ) { entity a = new entity("address " ) ; a.setproperty("person " , this.id ) ; a.setproperty("street " , this.street ) ; a.setproperty("city " , this.city ) ; ofy().save().entity(a ) ; } } } .... we implement the method with annotation ` ` as follows .= key : = ; + ( hasproperty( , ) hasproperty( , ) ) * do * + = key = ; + new( ) ; + setproperty( , , ) ; + setproperty( , , getproperty( , ) ) ; + setproperty( , , getproperty( , ) ) ; + put( ) ; + removeproperty( , ) ; + removeproperty( , ) ; + * od * it remains future work to explore lazy migrations in greater detail , and develop mechanisms to statically check them prior to execution : the perils of using such powerful features in an uncontrolled manner , on production data , are evident .lazy migration is particularly difficult to test prior to launch , since we can not foretell which entities will be touched at runtime .after all , users may return after years and re - activate their accounts , upon which the object mapper tries to evolve ancient data .it is easy to imagine scenarios where lazy migration fails , due to artifacts in the entity structure that developers are no longer aware of . in particular, we would like to be able to determine whether an annotation for lazy migration is safe . at the very least, we would like to check whether a lazy migration is _ idempotent _ , so that when transactions involving evolutions fail , there is no harm done in re - applying the migration .we define a nosql database programming language as an abstract interface for programming against nosql data stores . in recent work , present a calculus for nosql systems together with its formal semantics .they introduce a turing - complete language and its type system , while we present a much more restricted language with a focus on updates and schema evolution . for relational databases ,the importance of designing database programming languages for strong programmability , concerning both performance and usability , has been emphasized in .the language presented there can express database operators , query plans , and also capture operations in the application logic .however , the work there is targeted at query execution in relational databases , while we cover aspects of data definition and data manipulation in nosql data stores .moreover , we treat the data store itself as a black box , assuming that developers use a cloud - based database - as - a - service offering that they can not manipulate .all successful applications age with time , and eventually require maintenance or evolution . typically , there are two alternatives to handling this problem on the level of schema : schema versioning and schema evolution .relational databases have an established language for schema evolution ( `` alter table '' ) .this schema definition language is part of the sql standard , and is implemented by all available relational databases systems . 
for evolving xml - based applications ,research prototypes have been built that concentrate on the co - evolution of xml schemas and the associated xml documents .the authors of have developed a model driven approach for xml schema design , and support co - evolution between different abstraction levels . a dedicated language for xml evolutionis introduced in that formalizes xml schema change operations and describes the corresponding updates of associated xml documents .jsoniq is a quite new query language for json documents , the first version was published in april 2013 .future versions of jsoniq will contain an update facility and will offer operations to add , delete , insert , rename , and replace properties and values . our schema evolution language can be translated into corresponding update expressions . if jsoniq establishes itself as a standard for querying and updating nosql datastores , we can also base our schema evolution method on this language .the question whether an evolution is safe corresponds to the existence of ( universal ) solutions in data exchange .in particular , established practices from xml data exchange , using regular tree grammars to specify the source and the target schema , are highly relevant to our work .the use of object mappers translating objects from the application space into persisted entities can be seen as a form of schema specification .this raises an interesting question : provided that all entities conform to the class hierarchy specified by an object mapper , if we evolve entities , will they still work with our object mapper ?this boils down to checking for absolute consistency in xml data exchange , and is a current topic in database theory ( e.g. ) .it is therefore part of our plans to see how we can leverage the latest research on xml data exchange for evolving data in schema - less data stores .there are various object - relational mapping ( orm ) frameworks fulfilling well established standards such as the java persistence api ( jpa ) , and supporting almost all relational database systems .some orm mappers are even supported by nosql data stores , of course not implementing all features , since joins or foreign - keys are not supported by the backend ( e.g. see the jpa and jdo implementations for google datastore ) .so far , there are only few dedicated mappers for persisting objects in nosql data stores ( sometimes called object - data - store mappers ( odm ) ) .most of today s odms are proprietary , supporting a particular nosql data store ( e.g. morphia for mongodb , or objectify for google datastore ) .few systems support more than one nosql data store ( e.g. hibernate ogm ) . today , these objects - to - nosql mapping tools have at best rudimentary support for schema evolution .to the best of our knowledge , objectify and morphia go the furthest by allowing developers to specify lazy migration in form of object annotations .however , we could not yet find any solutions for systematically managing and expressing schema changes .at this point , the ecosystem of tools for maintaining nosql databases is still within its infancy .this work investigates the maintainability of feature - rich , interactive web applications , from the view - point of schema evolution .in particular , we target applications that are backed by schema - less _ document stores _ or _ extensible record stores_. 
this is an increasingly popular software stack , now that database - as - a - service offerings are readily available : the programming apis are easy to use , there is near to no setup time required , and pricing is reasonable .another sweet spot of these systems is that the data s schema does not have to be specified in advance .developers may freely adapt the data s structure as the application evolves . despite utter freedom, the data nevertheless displays an _implicit _ structure : the application class hierarchy is typically reflected in the persisted data , since object mappers perform the mundane task of marshalling data between the application and the data store . asan application evolves , so does its schema . yetschema - free nosql data stores do not yet come with convenient schema management tools . as of today, virtually all data migration tasks require custom programming ( with the exception of very basic data inspection tools for manipulating _ single _entities ) .it is up to the developers to code the migration of their production data `` on foot '' , getting the data ready for the next software release .worse yet , with weekly releases , the schema evolves just as frequently . in this paper , we lay the foundation for systematically managing schema evolution in this setting .we define a declarative _ nosql schema evolution language _ , to be used in a nosql data store administration console . using our evolution language , developers can specify common operations , such as adding , deleting , or renaming properties in batch .moreover , properties can be moved or copied , since data duplication and denormalization are fundamental in nosql data stores .we emphasize that we do not mean to enforce a relational schema onto nosql data stores .rather , we want to ease the pain of schema evolution for application developers . we regard it as one of our key contributions that our operations can be implemented for a large class of nosql data stores .we show this by an implementation in a generic _ nosql database programming language_. we also discuss which operations can be applied safely , since non - deterministic migrations are unacceptable .our nosql schema evolution language specifies operations that are executed _ eagerly _ , on all qualifying entities .an alternative approach is to migrate entities _lazily _ , the next time they are fetched into the application space .some object mappers already provide such functionality .we believe that lazy evolution is still little understood , and at the same time poses great risks when applied erroneously. we will investigate how our nosql schema evolution language may be implemented both safely _ and _ lazily .ideally , a dedicated schema evolution management tool would allow developers to migrate data eagerly for leaps in schema evolution , and to patch things up lazily for minor changes .
|
nosql data stores are commonly schema-less, providing no means for globally defining or managing the schema. while this offers great flexibility in early stages of application development, developers can soon experience the heavy burden of dealing with increasingly heterogeneous data. this paper targets schema evolution for nosql data stores, the complex task of adapting and changing the implicit structure of the stored data. we discuss the recommendations of the developer community on handling schema changes, and introduce a simple, declarative schema evolution language. with our language, software developers and architects can systematically manage the evolution of their production data and perform typical schema maintenance tasks. we further provide a holistic nosql database programming language to define the semantics of our schema evolution language. our solution does not require any modifications to the nosql data store, treating the data store as a black box. thus, we address application developers who use nosql systems as database-as-a-service. keywords: nosql data stores, schema evolution api for data stores, schema evolution language, schema management, eager migration, lazy migration, schema versioning
|
random number conversion ( rnc ) is a fundamental topic in information theory , and its asymptotic behavior has been well studied in the context of not only the first - order asymptotics but also the second - order asymptotics .the second - order analysis for the random number conversion is remarkable in the following sense .the second - order coefficients can not be characterized by use of the normal distribution in the case of random number conversion although all of second - order coefficients except for random number conversion are given by use of the normal distribution . to characterize the second - order coefficients in the random number conversion , the previous paper introduced rayleigh - normal distributions as a new family of distribution .this new family of distribution leads us to a new frontier of second order analysis , which is completely different from existing analysis of the second coefficients . in this paper, we focus on a realistic situation , in which one uses this conversion via a storage with a limited size , like a hard disk . in this case , first , initial random numbers are converted to other random numbers in a storage with a limited size , which is called _ random number storage _ or simply storage .second , the random numbers in the storage are converted to some desired random numbers .when the size of media for the conversion is limited , it is natural to consider the trade - off between the sizes of target random numbers and the storage . in this paper , we consider this problem when the initial and the target random random variables are given as multiple copies of respective finite random variables .that is , the initial random variables are subject to the -fold independent and identical distribution ( i.i.d . ) of a distribution with finite support and the target random variables are subject to the -fold i.i.d . of another distribution with finite support . in the problem ,since there is a freedom of the required number of copies of in the target distribution , we have to take care of the trade - off among three factors , the accuracy of the conversion , the size of the storage , and the required number of copies of in the output distribution . for simplicity , we fix the accuracy of the conversion , and investigate the trade - off between the size of the storage and the required number of copies of in the output distribution .we call this problem rnc via restricted storage . in particular , when , this problem can be regarded as random number compression to the given random number storage .one of our main purposes is to derive the maximum conversion rate when the rate of storage size is properly limited .if the size of storage is small , the maximum number of copies of target distribution should also be small since the conversion has to once pass through the small storage .thus , the allowable size of storage closely relates with the conversion rate of rnc via restricted storage.in this paper , we particularly investigate the region of achievable rate pairs for the size of storage and the number of copies of target distribution in the first- and the second - order settings . 
to clarify which rate pairs are truly important in the rate region, we introduce the relations named better " and simulate " between two rate pairs , and based on these two relations , we define the admissibility of rate pairs .although admissible rate pairs are only a part of the boundary of the region , those characterize the whole of the rate region , and hence , are of special importance in the rate region . here , remember that the second coefficients of the random number conversion are characterized by rayleigh - normal distribution .since the second - order asymptotic behaviour of other typical information tasks is often described by the standard normal distribution , the characterization by such non - normal distribution is a remarkable feature . to treat the second - order asymptotics of our problem, we introduce a new kind of probability distribution named generalized rayleigh - normal distribution as an extension of rayleigh - normal distribution .the generalized rayleigh - normal distributions are a family of probability distributions with two parameters and include the rayleigh - normal in as the limit case . using the generalized rayleigh - normal distributions, we can characterize the second - order rate region of rnc with restricted storage in a unified manner we also consider locc conversion for pure entangled states in quantum information theory .the asymptotic behavior of locc conversion has been intensively studied . however , unlike conventional settings of locc conversion , we assume that locc conversion passes through quantum system to store entangled states named _entanglement storage_. in the setting , an initial i.i.d .pure entangled state is once transformed into the entanglement storage with smaller dimension by locc and then transformed again to approximate a target i.i.d .pure state by locc .in particular , when the target pure entangled state is the same as the original pure entangled state , this problem can be regarded as locc compression of entangled states into the given entanglement storage . since the storage to keep the entangled statesis implemented with a limited resources , the analysis for locc compression is expected to be useful to store entanglement in small quantum system . since locc convertibility between pure entangled states can be translated to majorization relation between two probability distributions consisting of the squared schmidt coefficients of the states , we focus on the relation between majorization conversion and deterministic conversion which describes rnc to analyze the asymptotic behavior of locc conversion . then , it is shown that the performance of majorization conversion and deterministic conversion asymptotically coincide with each other as similar to the results of conventional rnc shown in .the paper is organized as follows . in section[ sec : family ] , we introduce the generalized rayleigh - normal distribution function as a function defined by an optimization problem .then we show its basic properties used in the asymptotics of rnc via restricted storage . in section [ sec :narnc ] , we formulate random number conversion ( rnc ) via restricted storage by two kinds of approximate conversion methods and give their relations in non - asymptotic setting . 
in section [ sec : arnc ] , we proceed to asymptotic analysis for rnc via restricted storage .then , we show the relation between the rates of the maximum conversion number and storage size and draw various rate regions in both frameworks of first and second - order asymptotic theory . in section [ sec : rcr ] , we see that conventional rnc without storage can be regarded as rnc via restricted storage with infinite size . in section [ sec : aqit ] , we consider locc conversion via entanglement storage for quantum pure states .using the results for rnc , we derive the asymptotic performance of optimal locc conversion .in particular , optimal locc compression rate is derived in the second - order asymptotics . in section [ sec :proof ] , we give technical details of proofs of theorems , propositions and lemmas . in section [ sec : conclusion ] , we state the conclusion of the paper .in this section , we introduce a new two - parameter probability distribution family on r which contains the rayleigh - normal distribution introduced in . a function on generally called a cumulative distribution function if is right continuous , monotonically increasing and satisfies and .then , there uniquely exists a probability distribution on whose cumulative distribution coincides with .that is , given a cumulative distribution function in the above sense , it determines a probability distribution on . to define the new probability distribution family , we give its cumulative distribution function . for and ,let and be the cumulative distribution function and the probability density function of the normal distribution with the mean and the variance .we denote and simply by and .to generalize rayleigh - normal distribution , we employ the continuous fidelity ( or the bhattacharyya coefficient ) for probability density functions and on defined by then , we generalize the rayleigh - normal distribution defined in as follows . for and , a generalized rayleigh - normal distribution function on is defined by where the set of functions ] such that . when a -achievable sequence is better than a sequence , the sequence is also -achievable obviously .moreover , the following lemma holds . when a -achievable sequence simulates a sequence , the sequence is also -achievable .we provide the proof of lemma [ simulate ] in section [ simulate.app ] . in this subsection ,let a sequence be represented by and with the first - order rates and and we focus on the first - order asymptotics of rnc via restricted storage in terms of and .then , we omit the term since it does not affect any result in this subsection . a first - order rate pair is called -_achievable _ when a sequence is -achievable . the set of -achievable rate pairs for and denoted by then , we have the following characterization . for , we have where and are the shanon entropy of and , respectively . we give the proof of theorem [ region1 ] in section [ region1.app ] . from theorem [ region1 ] , and with each other and do not depend on . in the following ,we denote the rate regions by simply .similar to the -achievability , we define that is better than or simulates by the relation between the sequences and .then , is better than if and only if and .similarly , simulates if and only if .when does not have any better achievable rate pair except for itself , the rate pair is called semi - admissible . 
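as a reading aid for the definitions above (and before the admissibility notions that follow), the continuous fidelity, i.e. the bhattacharyya coefficient, between two densities on the real line and the normal density entering the construction can be written in the standard way. the symbols p, q, phi and Phi below are our own notation for this restatement, not the paper's exact display.

```latex
% standard definitions, restated as a reading aid (notation is ours):
% continuous fidelity (bhattacharyya coefficient) between densities p and q on \mathbb{R}
F(p,q) \;=\; \int_{-\infty}^{\infty} \sqrt{p(x)\,q(x)}\;dx , \qquad 0 \le F(p,q) \le 1 ,
% normal probability density and cumulative distribution function with mean \mu and variance v
\phi_{\mu,v}(x) \;=\; \frac{1}{\sqrt{2\pi v}}\; e^{-\frac{(x-\mu)^2}{2v}} , \qquad
\Phi_{\mu,v}(x) \;=\; \int_{-\infty}^{x} \phi_{\mu,v}(t)\,dt .
```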
moreover ,when no other rate pair is better than or simulates except for itself , the rate pair is called admissible .we obtain the following corollary by theorem [ region1 ] .the set of semi - admissible rate pairs is given by and is the unique admissible rate pair .the rate region is illustrated as fig .then , the set of semi - admissible rate pairs are illustrated as the line with the slope and the admissible rate pair is dotted at the tip of the line . andthe thick line corresponds to the semi - admissible rate pairs ., width=302,height=207 ] in later discussion , we separately treat the problem according to whether an semi - admissible rate pair is the admissible rate pair or not . in this subsection, we fix a first - order rate pair of each sequence and assume it to be -achievable .let the sequence be represented by and with second - order rates and .then we focus on the second - order asymptotics of rnc via restricted storage in terms of and .then , we omit the term unless otherwise noted . a second - order rate pair is called -_achievable _ when a sequence is -achievable . the set of -achievable rate pairs for and denoted by where if the first - order rate pair is not semi - admissible , the second - order rate region is trivially the empty set or the whole of .thus , we assume that the first - order rate pair is semi - admissible in the following .let and be arbitrary probability distributions on finite sets . then , there is a function ] which has different forms depending on as follows . when , when , when and , wthen and , suppose that and satisfy , or and , or and .for an arbitrary , there exist real numbers which satisfy the following condition ( ) : + there exist and which satisfy the three conditions : then such satisfy the following inequality first , we simultaneously treat the case when and the case when and .we take a constant which satisfies and .we verify that and satisfy the condition ( ) in the following .first , there exists a real number such that and by the mean value theorem .moreover , since satisfies ( [ threshold0 ] ) , can be taken as .thus , the conditions ( i ) and ( ii ) in ( ) hold .next , since is monotonically decreasing on from lemma [ sol2 ] and lemma [ monotone ] , the condition ( iii ) in ( ) holds .therefore , and satisfy the condition ( ). then the following holds thus , the proof is completed for the case when and the case when and .next , we treat the case when and . then we can take as and in ( ) from lemma [ sol1 ] and lemma [ monotone ]. then the following holds . 
thus , the proof is completed for the case when and .the following lemma is obvious .suppose that and satisfy and , or and .then , the following equality holds the following lemma is given as lemma of .let and be probability distributions and satisfy .when is a probability distribution and satisfies for any , the following holds : moreover , the equation holds for if and only if .suppose that and satisfy , or and , or and .when real numbers satisfy the condition ( ) in lemma [ lem.direct ] , the following inequality holds we set a sequence for as .then , we have the following for an arbitrary in defined in definition [ rn ] : where the inequality ( [ ine1 ] ) is obtained from the schwartz inequality and the inequality ( [ ine2 ] ) is obtained from lemmas [ monotone ] and [ naiseki2 ] .the following lemma is obvious by the schwartz inequality .suppose that and satisfy and , or and .then , the following inequality holds let be the function defined in subsection [ zdir.app ] .when and satisfy , or and , or and , lemmas [ lem.direct ] and [ zcon ] derives similarly , when and satisfy and , or and , lemmas [ lem.direct2 ] and [ lem.converse2 ] derives ( [ zequal ] ) . from the direct calculation for each case , we obtain the concrete form of the generalized rayleigh - normal distribution as in theorem [ zform ] .first , we show that is monotonically increasing .we define a shift operator for a map by .then we have .thus when we define the set of functions ] for such that for due to the mean value theorem .then holds because of the relation and the assumption ( i ) . since is monotonically decreasing on by the assumption ( iii ), we have for .moreover , holds for since , and holds . from the above discussion , we can use lemma [ naiseki2 ] .therefore , the following hold : where we used and . since we obtain we treat the case when . here , we use lemma [ lem.converse ] .for any , the existence of such that and can be easily verified by the mean value theorem .moreover , when we take as , then and hold by lemma [ sol2 ] . from lemma [ monotone ], is monotonically decreasing on .since , thus ( iii ) holds . taking the limit in ( [ lem.con.ineq ] ) , we have the following inequality and thus , the proof is completed .then , we treat the case when first , we treat the case when . from ( [ jimei ] ) , where we used lemma [ lem.central ] in the last equality .next , we treat the case when . here, we use lemma [ lem.converse ] . for any , the existence of such that and can be easily verified by the mean value theorem .moreover , when we take as , then and hold by lemma [ sol0 ] . from lemma [ monotone ], is monotonically decreasing on , and thus ( iii ) holds for any and . taking the limit in ( [ lem.con.ineq ] ), we have the following inequality since the proof is completed .then , we treat the case when . at first, we treat the case when , where . for an arbitrary sequence of probability distributions which satisfies ,the monotonicity of the fidelity follows since we obtain next , we treat the case when . here, we use lemma [ lem.converse ] . by lemma [ sol1 ], satisfies and satisfies when we take as and in lemma [ lem.converse ] , those satisfy ( i ) and ( ii ) .moreover , from lemma [ monotone ] , is monotonically decreasing on . 
since , ( iii ) holds .thus , we have the following inequality and thus , the proof is completed .the function in ( [ 2nd - dil ] ) is obviously continuous and strictly monotonically decreasing on .we first prove the direct part .let .since the size of storage is greater than the size of support of , can be converted to itself in storage .thus , we have where the equality follows from lemma [ lem.uni ] .next , let . can be converted to under the condition that asymptotic fidelity of conversion is .thus , we have then , we prove the converse part .let .then , the following inequality obviously holds next , let . since an arbitrary probability distribution on defined in ( [ s1 ] )can be converted from the uniform distribution with size of bits by majorization conversion .thus , we have from ( [ fidelity - ineq ] ) , ( [ 14 - 1 ] ) , ( [ 14 - 1 ] ) , ( [ 14 - 2 ] ) and ( [ 14 - 2 ] ) , we obtain ( [ feq ] ) .the function in ( [ 2nd - con ] ) is obviously continuous and strictly monotonically decreasing on .we first prove the direct part .let . since the size of storage is greater than the size of support of , we have when , the direct part is obvious from lemma [ lem.uni ] .next , we prove the converse part .let .then , the following inequality obviously holds let . since an arbitrary probability distribution on can be converted from the uniform distribution with size of bits by majorization conversion .thus , we have from ( [ fidelity - ineq ] ) , ( [ 15 - 1 ] ) , ( [ 15 - 2 ] ) and ( [ 15 - 2 ] ) , we obtain ( [ feq ] ) .let be a pure state on with the suquared schmidt coefficient defined in ( [ pl ] ) . then , according to lemma [ opt trans ] , an arbitrary pure state on which can be converted from by locc can also be converted from via by locc .thus , if we convert to in the first step , the minimal error is attainable in the second step . here , was given when the optimal entanglement concentration was performed for and does not depend on .therefore , it is optimal to perform the entanglement concentration as locc in the first step and especially the optimal operation does not depend on .let be a pure state on a bipartite system .then , there exists a locc map which satisfies the following conditions : ( i ) : : , ( ii ) : : for any locc map , there exists a locc map such that . because of nielsen s theorem , there exists a locc map which satisfies ( i ) .next , we prove that such satisfies ( ii ) .let a locc map output a state with probability . then , because of jonathan - plenio s theorem , holds for any . since , we have for any where was defined in ( [ j ] ) .moreover , ( [ ineq ] ) holds for any . if it does not holds , it is a contradiction as follows. then , there are the minimum numbers such that and .moreover , the inequality ( [ ineq3 ] ) holds for any because is monotonically decreasing with respect to .thus , we have the following contradiction . as proved above , ( [ ineq ] ) holds for any , and thus , we obtain ( ii ) because of jonathan - plenio s theorem . from lemma [ universal ] with , we have thus , the proof is completed .we have considered random number conversion ( rnc ) via random number storage with restricted size . in particular , we derived the rate regions between the storage size and the conversion rate of rnc from the viewpoint of the first- and second - order asymptotics . 
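as an illustrative aside to the locc arguments above (before the concluding remarks continue): nielsen's theorem states that a bipartite pure state can be converted to another by locc exactly when its vector of squared schmidt coefficients is majorized by that of the target state. a minimal numerical check of this majorization condition is sketched below; the function names and the tolerance are our own choices.

```python
import numpy as np

def majorizes(q, p):
    """return True iff q majorizes p, i.e. the partial sums of q sorted in
    decreasing order dominate those of p (both assumed to sum to one)."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    return bool(np.all(np.cumsum(q) >= np.cumsum(p) - 1e-12))

def locc_convertible(schmidt_init, schmidt_target):
    """nielsen's criterion: |psi> -> |phi> by locc iff the squared schmidt
    coefficients of |psi> are majorized by those of |phi>."""
    return majorizes(schmidt_target, schmidt_init)

# example: a maximally entangled two-qubit state can be converted by locc to any
# less entangled two-qubit pure state, but not the other way around.
print(locc_convertible([0.5, 0.5], [0.8, 0.2]))   # True
print(locc_convertible([0.8, 0.2], [0.5, 0.5]))   # False
```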
in the first - order rate region, it was shown that there exists the trade - off when the rate of storage size is smaller than or equal to the entropy of the initial distribution as in fig .[ 1st ] and semi - admissible rate pairs characterize the trade - off .when rnc achieves a semi - admissible first - order rate pair , the non - trivial second - order rate regions were obtained as in figs .[ 2nd ] , [ fig - case2 ] , [ fig - case3 ] , [ 2nd - reg2-uni1 ] and [ 2nd - reg2-uni2 ] . especially , to derive the second - order rate at a semi - admissible rate pairs , we introduced the generalized rayleigh - normal distribution and investigate its basic properties .from the second - order asymptotics , we also obtained asymptotic expansion of maximum generation number with high approximation accuracy .then , we applied the results for probability distributions to an locc conversion via entanglement storage problem of pure states in quantum information theory . in the problem , we did not assume that an initial state and a target state are the same states , however , the locc conversion via storage can be regarded as compression process if the target state equals the initial state , and thus , our problem setting is a kind of generalization of locc compression for pure states . here, we give some special remarks on the admissibility of rate pairs . in the argument to characterization of the rate regions , we defined the simple relations called better " and simulate " between two rate pairs , and introduced the admissibility of rate pairs based on the relations in order to clarify essentially important rate pairs in the rate region .the admissible rate pairs can determine whether a rate pair is in the rate region .that is , for any rate pair in the rate region , there is an admissible rate pair such that the admissible rate pair simulates or is better than the rate pair . on the other hand ,an arbitrary rate pair not having such an admissible pair is not contained in the rate region .thus , the admissible rate pairs uniquely determine the whole of rate region although those are a subset of the boundary of the rate region .moreover , since any admissible rate pair does not simulate or is not better than another admissible one , a proper subset of the admissible rate pairs can not determine the rate region as above . in the sense, the admissible rate pairs can be regarded as the minimal generator " of the rate region , and hence , are of special importance in the rate pairs . to characterize the rate region , the above discussiontells us the importance of the characterization of the semi - admissible rate pairs . in the first order case, the characterization can be obtained by the interval between the specific point and the origin . 
however , in the second order case , it is not so trivial and has been obtained as lemma [ simulate-2nd ] in this paper first time .the characterization is related to the first order of the specific point as well .we note that , besides rnc via restricted storage , the notion of `` simulate '' was implicitly appeared in asymmetric information theoretic operations .for instance , fig .1 in represents the typical first - order rate region in the wiretap channel then the left side boundary of the region is characterized as an interval between the origin and the other edge point , and hence , the left side boundary is simulated by the edge point of the interval .besides of such an applicability of `` simulate '' , the notion of `` simulate '' has not been focused on , and thus , the admissibility in the sense of this paper has not been recognized .in particular , to our knowledge , it has not been appeared in the context of the second - order rate region in existing studies .since the notion of `` simulate '' plays an important role in the characterization of the rate region , it will be widely used also in the rate region in the sense of the first and second order .we refer some future studies .first , probability distributions or quantum states were assumed to be i.i.d . in this paper . to treat information sources with classical or quantum correlation , the extension from an i.i.d .sequence to general one is thought as a problem to be solved .second , we analyzed only the asymptotic performance of random number conversion and locc conversion .on the other hand , what we can operate has only finite size .therefore , it is expected that conversion via restricted storage are analyzed in finite setting .third , since only pure states were treated in quantum information setting although mixed entangled states can be appear in practice , the extension from pure states to mixed states is thought to be important .finally , we have shown that the problem of rnc via restricted storage has a non - trivial trade - off relation described by the second - order rate region although trade - off relation in the first - order rate region is quite simple . as is suggested by the results , even when two kinds of first - order rates in an information theoretical problem simply and straightforward relate with each other ,there is a possibility that the rate region has a non - trivial trade - off relation in the second order asymptotics .we can conclude that consideration of the second order asymptotics might bring a new trade - off relation in various information theoretical problems .wk was partially supported from grant - in - aid for jsps fellows no .mh is partially supported by a mext grant - in - aid for scientific research ( a ) no .23246071 and the national institute of information and communication technology ( nict ) , japan .the centre for quantum technologies is funded by the singapore ministry of education and the national research foundation as part of the research centres of excellence programme .w. kumagai , m. hayashi , a new family of probability distributions and asymptotics of classical and locc conversions , " arxiv:1306.4166 , ( 2013 ) ; the conference version of this paper is appeared in isit2014 , ieee international symposium on ( pp .2047 - 2051 ) .r. nomura , t. s. han , second - order resolvability , intrinsic randomness , and fixed - length source coding for mixed sources : information spectrum approach , " ieee trans .theory , * 59 * , 1 - 16 , ( 2013 ) .
|
we consider random number conversion (rnc) through random number storage with restricted size. we clarify the relation between the performance of rnc and the size of the storage in the framework of first- and second-order asymptotics, and derive the corresponding rate regions. then, we show that the results for rnc with restricted storage recover those for conventional rnc without storage in the limit of infinite storage size. to treat rnc via restricted storage, we introduce a new kind of probability distribution named the generalized rayleigh-normal distribution. using the generalized rayleigh-normal distributions, we can describe the second-order asymptotic behaviour of rnc via restricted storage in a unified manner. as an application to quantum information theory, we analyze locc conversion via entanglement storage with restricted size. moreover, we derive the optimal locc compression rate under a constraint on the conversion accuracy. keywords: random number conversion, locc conversion, compression rate, entanglement, second-order asymptotics, generalized rayleigh-normal distribution.
|
since 2006, inpop (intégration numérique planétaire de l'observatoire de paris) has become an international reference for space navigation (to be used for the gaia mission navigation and the analysis of the gaia observations) and for scientific research in the dynamics of solar system objects and in fundamental physics. a first version of inpop, inpop06, was published in 2008 (fienga et al. 2008). this version is very close to the reference ephemerides of jpl in its dynamical model and in its fit procedure. with mex and vex tracking data provided by esa, lunar laser ranging observations and the development of new planetary and moon ephemeris models and new adjustment methods, inpop08 (fienga et al. 2009) and inpop10a (fienga et al. 2011) were constructed. these versions have established inpop at the forefront of global planetary ephemerides, because its precision in terms of extrapolation of the planetary positions is equivalent to that of the jpl ephemerides. its dynamical model follows the recommendations of the international astronomical union (iau) in terms of i) compatibility between time scales (tt, tdb), ii) the metric in the relativistic equations of motion (consistency in the computation of the position of the barycenter of the solar system) and iii) the fit of the sun gravitational mass with a fixed au. inpop provides to the user positions and velocities of the planets and the moon, the rotation angles of the earth and the moon, as well as tt-tdb chebychev polynomials. inpop10a was the first planetary ephemeris in the world built with a direct estimation of the gravitational mass of the sun with a fixed astronomical unit, instead of the traditional adjustment of the au scale factor. with inpop10a, we have demonstrated the feasibility of such a determination, helping the iau to take the decision of fixing the astronomical unit (see resolution b2 of the 35th iau general assembly, 2012). inpop10e is the latest inpop version, developed for the gaia mission final release and available for users. compared to inpop10a, new sophisticated procedures related to the asteroid mass determinations have been implemented: bounded value least squares have been associated with a-priori sigma estimators (kuchynka 2010, fienga et al. 2011) and solar plasma corrections (verma et al.). very recent uranus observations provided by (viera martins and camaro 2012) have been added, as well as positions of pluto deduced from hst (tholen et al.). for the llr fit, additional observations are available from cerga, mlrs2, matera and apollo. the adjustment of the gravitational mass of the sun is performed as recommended by the iau resolution b2, as well as that of the sun oblateness ( j ), the ratio between the mass of the earth and the mass of the moon (emrat) and the mass of the earth-moon barycenter. estimated values are presented in table [ paramfita ].
|
the inpop ephemerides have undergone several improvements and evolutions since the first inpop06 release (fienga et al. 2008) in 2008. in 2010, anticipating the iau 2012 resolutions, the adjustment of the gravitational solar mass with a fixed astronomical unit (au) was implemented for the first time in inpop10a (fienga et al. 2011), together with improvements in the asteroid mass determinations. with the latest inpop10e version, such advancements have been enhanced, and studies of the solar corona have also been carried out (verma et al. 2012). the use of planetary ephemerides for several physical applications is presented here, from electron densities of the slow and fast solar winds to asteroid mass determinations and tests of general relativity performed with inpop10a. perspectives will also be drawn, especially related to the analysis of the messenger spacecraft data for the planetary orbits and the future computation of the time variations of the gravitational mass of the sun.
|
currently , oncologists rely upon the manual measurement of lesions to assess treatment response using criteria such as response evaluation criteria in solid tumors ( recist ) . due to the laborious nature of recist assessment , it is only applied to of all cancer patients who are enrolled in clinical trials .automated recist can be used for better assessment of treatment response and aggregate evidence to new , alternative biomarkers to recist .previous work directed toward automatic lymph node detection is limited .a special filter was used in to detect lymph nodes . this minimum directional difference( min - dd ) filter is constructed with the assumption that lymph nodes have uniform intensity , which is not always the case .the min - dd filter method was improved in by adding a hessian - based blobness measure for reducing false positives .several automatic algorithms have been proposed for liver lesion detection and segmentation , including combinations of adaptive multi - thresholding and morphological operators or k - means clustering on mean shift filtered images . however , these histogram - based methods require a good contrast between lesions and parenchyma . other techniques , such as adaboost , have been used in both semi - automatic approaches and in automatic settings to classify image textures . in another approach , shimizu _ _et al.__ trained two adaboost classifiers with a set of statistical and gradient features , as well as with features based on a convergence index filter that enhances blob - like structures .our work is innovative due to the following : \1 ) to our knowledge , we are the first to introduce a fully automated end - to - end pipeline that is generalizable across multiple organs .\2 ) the method is accurate and robust in its analysis of highly diverse lesions by effectively handling lesions with low contrast and substantial heterogeneity found within various organs .the organ detection step is based on marginal space learning ( see fig . [fig : pipelinesiemensorgandetection ] ) . in marginal space learning ,the detection is performed by using a sequence of learned classifiers , starting with classifiers with a few parameters ( e.g. , organ position without orientation and scale ) and ending with a classifier that models all the desired organ parameters ( e.g. 
, position , orientation and scale ) .each learned classifier is a probabilistic boosting tree ( pbt ) with 3d haar and steerable features and is trained via adaboost .the output probability can be within the range $ ] .the overall system architecture consists of two layers : 1 ) a discriminative anatomical network ( dan ) , and 2 ) a database - guided segmentation module .the dan supplies an estimate regarding the scale of the patient and the portion of the visible volume .furthermore , it detects a set of landmarks for navigating the volume .the database - guided segmentation module uses the output of the dan for the detection of the position , orientation , and scale of the organs visible in the given portion of the volume .liver lesions and lymph nodes are detected based on cascaded classifications .the detection pipeline has three steps .first , we exploit 3d haar - like features using an integral image of the organ sub - volume of interest .next , we use the adaboost classifier to perform feature selection and classifier training simultaneously .this is especially preferred since our haar - like feature pool contains many weak features .third , we use self - aligned image features to train another classifier to prune the candidates .the features are based on rays cast in 14 directions from each 3d candidate .the positions with maximum gradients along each ray are determined , and 24 local image features ( based on intensity , gradient , and orientation ) are extracted at each position .similarly , adaboost is used to train the classifier .the lesion detection algorithm is presented in algorithm [ lesiondetection ] : * input : * ct volumes of lymph nodes and liver lesions extract sub - volumes of interest ( ) obtain initial candidates as all locations with intensity within [ -100 , 200 ] hu of the candidates , keep the candidates that pass the * 3d haar detector * of the candidates , keep the candidates that pass the * self - aligning detector * obtain a rough segmentation with center at obtain a score from the detector based on the features extracted in discard from all with ( defined threshold ) , obtaining call non - maximal suppression on to obtain the detected lesions * output : * set of detected lesions and lymph nodes we conduct lesion segmentation by incorporating a convolutional neural network ( cnn ) and active contours .the performance of the level set segmentation depends not only upon the local region in which the energy statistics are calculated , but also upon the weighting parameters of the energy functional .the strength of the proposed method is its generalizability and ability to overcome a variety of segmentation challenges .thus , our novel technique utilizes the benefits of both approaches and overcomes their limitations to achieve significantly better results than either method alone .in , the authors proposed an iterative approach to calculate the adaptive size of the local window .the algorithm is applied for each point , at each iteration of the segmentation process , and for each lesion in the image database .the adaptive window is calculated using the lesion scale and its texture .texture analysis is accomplished by extracting haralick image features ( e.g. contrast , homogeneity ) from a second order statistics model , namely , from gray - level co - occurrence matrices ( glcm ) .our method incorporates both global and local texture in a single hybrid model .the functional parameters play a key role in the direction and magnitude of curve evolution . 
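as an aside on the detection stage described earlier in this section (before returning to the level-set parameters): the 3d haar-like features are box sums of voxel intensities, which an integral image (summed volume table) of the sub-volume makes computable in constant time per box. the sketch below illustrates that mechanism in numpy; the particular center-surround feature layout and the function names are our own illustrative choices, not the exact feature pool used in the detector.

```python
import numpy as np

def integral_volume(vol):
    """summed volume table: s[x, y, z] = sum of vol over the box [0:x, 0:y, 0:z]."""
    s = np.cumsum(np.cumsum(np.cumsum(vol, axis=0), axis=1), axis=2)
    return np.pad(s, ((1, 0), (1, 0), (1, 0)))   # zero-pad so box sums need no edge cases

def box_sum(s, lo, hi):
    """sum of the original volume over the half-open box [lo, hi) in O(1),
    via 3d inclusion-exclusion on the padded summed volume table."""
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return (s[x1, y1, z1] - s[x0, y1, z1] - s[x1, y0, z1] - s[x1, y1, z0]
            + s[x0, y0, z1] + s[x0, y1, z0] + s[x1, y0, z0] - s[x0, y0, z0])

def haar_center_surround(s, lo, hi):
    """one illustrative 3d haar-like feature: inner-box sum minus the
    surrounding shell, i.e. 2 * inner - outer."""
    lo = np.asarray(lo); hi = np.asarray(hi)
    inner_lo = lo + (hi - lo) // 4
    inner_hi = hi - (hi - lo) // 4
    return 2.0 * box_sum(s, inner_lo, inner_hi) - box_sum(s, lo, hi)

# usage on a random ct-like sub-volume (values roughly in the hounsfield range used above)
vol = np.random.randint(-100, 200, size=(32, 32, 32)).astype(float)
s = integral_volume(vol)
print(haar_center_surround(s, (4, 4, 4), (28, 28, 28)))
```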
in authors present a method to adaptively calculate those parameters .first , a convolutional neural network ( cnn ) is used to estimate the location of the zero level set contour ( zls ) relative to the lesion .three possible locations are considered : outside the lesion , near the lesion boundaries , or inside the lesion boundary .the cnn outputs a probability for each of three classes : inside the lesion and far from its boundaries ( ) , close to the boundaries of the lesion ( ) , or outside the lesion and far away from its boundaries ( ) . in the second step ,we use the cnn probabilities of the three classes to set the weighting parameters and . in general, tends to contract the contour while tends to expand it .these parameters are calculated using the following equations : the authors propose a generalized architecture for the cnn , which is composed of two convolutional layers followed by two fully connected layers , including the final three - class output layer .both convolutional layers use 5 x 5 kernels , as this size outperformed smaller kernels .each convolutional block of our cnn is composed of three layers : a filter bank layer , a nonlinearity layer composed of leaky rectified linear units ( relu ) , and a max - pooling layer .we obtained image data from the cancer genome atlas ( tcga ) liver hepatocellular carcinoma data collection , comprising 42 3d ct volumes of liver lesions , and from the cancer imaging archive ( tcia ) , comprising 86 3d ct volumes containing pathological lymph nodes in the abdomen .a slice thickness of 2.5 mm and an average pixel spacing of 0.894 mm were used .the cases are challenging due to the low contrast characteristics of the images and to the varying sizes , poses , shapes and sparsely distributed locations of cancer lesions .the detection step provides two points representing the corners of the lesion bounding box .those two points , which are considered to be the lesion diameter , were used to generate the initial circular zls contour . for lymph nodes ,the tcia offers a complete radiologist annotation of all lymph nodes .therefore , we can calculate the sensitivity and false positive rates automatically. however , no complete annotation of the liver lesions exists .consequently , we visually analyzed all detections , manually counting the quantity of false positives / negatives . in ct slices that had been marked as containing liver lesions, we evaluated the segmentation by calculating the dice criterion between the automated segmentation and the radiologist - annotated segmentation mask .for combined data sets of 595 lymph nodes and 42 liver lesions , a detection sensitivity of 0.53 was obtained . for the segmentation phase , only the true positive cases with valid ground truth have been analyzed .table 1 shows the results of our analyses .the automated segmentation was applied twice , as follows .first , by using a zls contour that was initialized by the 2 automatically detected points ( average dice between the automated and the manual contours was ) .second , by using 2-points that were obtained by a manual detection ( average dice of ) .detection and segmentation examples for different challenging lesions are presented in figure 2 ..detection and segmentation results for analysis of liver lesions and lymph nodes [ cols="^,^,^,^",options="header " , ]in this work , we present a novel , fully automated pipeline for the detection and segmentation of liver lesions and pathological lymph nodes . 
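before the concluding discussion, here is a minimal pytorch sketch of the three-class classifier outlined above: two 5x5 convolutional blocks, each with a leaky relu nonlinearity and max-pooling, followed by two fully connected layers ending in the three-class output. the patch size, channel widths, hidden width and leak slope are our own illustrative assumptions, not the paper's exact configuration, and the mapping from the class probabilities to the level-set weights is intentionally left out.

```python
import torch
import torch.nn as nn

class ZlsPositionNet(nn.Module):
    """three-class classifier: zls contour inside the lesion, near its boundary,
    or outside the lesion (architecture as described in the text; patch size and
    channel widths are illustrative assumptions)."""
    def __init__(self, in_channels=1, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2),  # filter bank layer
            nn.LeakyReLU(0.1),                                     # nonlinearity layer
            nn.MaxPool2d(2),                                       # max-pooling layer
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
        )
        feat = 32 * (patch // 4) * (patch // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, 64),
            nn.LeakyReLU(0.1),
            nn.Linear(64, 3),   # scores for (inside, near boundary, outside)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# usage: class probabilities for a batch of image patches sampled around the contour.
net = ZlsPositionNet()
patches = torch.randn(8, 1, 32, 32)
probs = torch.softmax(net(patches), dim=1)   # each row sums to one
```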
we could not directly compare our results with the literature, since methods were often validated on different datasets. we believe there is room for improvement in pathological lymph node detection if we can collect enough annotated training data to leverage the recent progress in deep learning; this will be our future work. furthermore, we note that the detection sensitivity for lymph nodes is significantly worse than that for the liver. lymph nodes are far more challenging to detect than liver lesions for two reasons. first, liver lesions occur only inside the liver. we have developed a robust algorithm to segment the liver from a ct scan, which provides a strong constraint for liver lesion detection. however, lymph nodes appear almost everywhere inside the human body; therefore we did not apply any spatial constraint during detection. second, lymph nodes are normally small and have an intensity very close to that of the surrounding tissues. liver lesions are often more easily discernible from normal liver tissue. we note that even when the lymph node boundaries are unclear, or when the liver lesion is very heterogeneous, the presented method can deal appropriately with both of these challenges. we also observe that the analysis of the lymph nodes is more robust than the analysis of the liver lesions, due to 1) a larger number of training cases and 2) a more accurate manual segmentation of those cases. because the liver lesion cohort is much smaller and includes a higher percentage of challenging cases for radiologists, the training set is less accurate. future work will include the extension of our current training set. we will also use a larger cohort with higher lesion diversity, including lesions in the colon, ovaries, kidneys, lymph nodes, and other organs. moreover, additional manual markings will be used in order to obtain a more accurate evaluation and training set for our technique. this work was supported in part by grants from the national cancer institute, national institutes of health, u01ca142555, 1u01ca190214, and 1u01ca187947.
|
we propose a fully - automated method for accurate and robust detection and segmentation of potentially cancerous lesions found in the liver and in lymph nodes . the process is performed in three steps , including organ detection , lesion detection and lesion segmentation . our method applies machine learning techniques such as marginal space learning and convolutional neural networks , as well as active contour models . the method proves to be robust in its handling of extremely high lesion diversity . we tested our method on volumetric computed tomography ( ct ) images , including 42 volumes containing liver lesions and 86 volumes containing 595 pathological lymph nodes . preliminary results under 10-fold cross validation show that for both the liver lesions and the lymph nodes , a total detection sensitivity of 0.53 and average dice score of for segmentation were obtained .
|
this problem was born in 1922 when a.friedmann wrote his famous cosmological solution for the homogeneous isotropic universe . however , during the next 35 years researches devoted their attention mainly to the physical processes after the big bang and there was no serious attempts to put under a rigorous analysis the phenomenon of the cosmological singularity as such .the person who inspired the beginning of such analysis was l.d.landau . in the late 1950s he formulated the crucial question whether the cosmological singularity is a general phenomenon of general relativity or it appears only in particular solutions under the special symmetry conditions .the large amount of work have been done in landau school before an answer emerges in 1969 in the form of the so - called `` bkl conjecture '' ( the present - day terminology ) .the basic reviews covering also the contemporary development are - .the bkl conjecture has its foundation in a collection of results and , first of all , it asserts that the general solution containing the cosmological singularity exists .this fundamental question of existence of such solution was the principal goal of our work , however , we succeeded also in describing the analytical structure of gravitational and matter fields in asymptotic vicinity to the singularity and we showed that in most general physical settings such solution has complicated oscillatory behaviour of chaotic character . in order to avoid misunderstandings let s stress that under cosmological singularity we mean the singularity in time , when singular manifold is space - like , and when the curvature invariants together with invariant characteristics of matter fields ( like energy density ) diverge on this manifold . an intuitive feeling that there are no reasons to doubt in existence of the general solution with cosmological singularity we have already in 1964 but another five years passed before the concrete structure have been discovered . in 1965 appeared the important theorem of roger penrose , saying that under some conditions the appearance of incomplete geodesics in space - time is unavoidable .this is also singularity but of different type since , in general , incompleteness does not means that invariants diverge .in addition the theorem can say nothing about the analytical structure of the fields near the points where geodesics terminate .then penrose s result was not of a direct help for us , nevertheless it stimulated our search .today it is reasonable to consider that the bkl conjecture and penrose theorem represent two sides of the phenomenon but the links are still far to be understandable .this is because bkl approach deal with asymptotic in the vicinity to the singularity and penrose theorem has to do with global space - time .it is worth to stress that some misleading statements are to be found in the literature in relation to the aforementioned results .first of all , from the bkl theory as well as from penrose theorem not yet follows that cosmological singularity is inevitable in general relativity .bkl showed that the general solution containing such singularity exists but general in the sense that initial data under which the cosmological singularity is bound to appear represent a set of nonzero measure in the space of all possible data .however , we do nt know how big this measure is and we have no proof that this set can cover the totality of initial data . 
in the non - linear system can be many general solutions ( that is , each containing maximal number of arbitrary functional parameters ) of different types including also a general solution without singularity . moreover there is the proof of the global stability of minkowski spacetime which means that at least in some small ( but finite ) neighbourhood around it exists a general solution without any singularity at any time .the same is true in relation to the all versions of penrose theorem : for these theorems to be applicable the nontrivial initial / boundary conditions are strictly essential to be satisfied , but an infinity of solutions can exists which do not meet such conditions .the thorough investigation of applicability of penrose theorem as well as all its subsequent variations the reader can find in .the second delusion is that the general solution with singularity can be equally applied both to the singularity in future ( big crunch ) and to the singularity in past ( big bang ) ignoring the fact that these two situations are quite different physically . to describewhat are going near cosmological big crunch ( as well as near the final stage of gravitational collapse of an isolated object in its co - moving system ) one really need the general solution since in the course of evolution inescapably will arise the arbitrary perturbations and these will reorganize any regime into the general one .the big bang is not the same phenomenon .we do nt know initial conditions at the instant of singularity in principle and there are no reasons to expect that they should be taken to be arbitrary .for example , we can not ruled out the possibility that the universe started exactly with the aid of the friedmann solution and it may be true that this does not means any fine tuning from the point of view of the still unknown physics near such exotic state . of course , the arbitrary perturbations familiar from the present day physics will appear after big bang but this is another story .the conclusion is that if somebody found the general cosmological solution this not yet means that he knows how universe really started , however he has grounds to think that he knows at least something about its end .sometimes one can find in literature the statement that in the bkl approach only the time derivatives are important near singularity and because of this the asymptotic form of einstein equations became the ordinary differential equations with respect to time .such statement is a little bit misleading since space - like gradients play the crucial role in appearing the oscillatory regime .one of the main technical advantage of the bkl approach consists in identification among the huge number of the space gradients those terms which are of the same importance as time derivatives . 
in the vicinity to the singularity these terms in no way are negligible , they act during the whole course of evolution ( although from time to time and during comparatively short periods ) and namely due to them oscillations arise .the subtle point here is that asymptotically these terms can be represented as products of effective scale coefficients , governing the time evolution of the metric , and some factors containing space - like derivatives .this nontrivial separation springing up in the vicinity to the singular point produce gravitational equation of motion which effectively are the ordinary differential equations in time because all factors containing space - like derivatives enter these equations solely as external parameters , though dynamically influent parameters .owing to these ordinary differential equations the asymptotic evolution can be described as motion of a particle in some external potential .the aforementioned dominating space gradients create the reflecting potential walls responsible for the oscillatory regime .for the case of homogeneous cosmological model of the bianchi ix type such potential have been described by misner .the literal assertion that only the time derivatives are important near singularity is correct just for those cases when the general solution is of non oscillatory character and has simple power asymptotic near singularity as , for instance , for the cases of perfect liquid with stiff - matter equation of state , pure gravity in space - time of dimension more than ten , or some other classes of `` subcritical '' field models .the character of the general cosmological solution in the vicinity to the singularity can most conveniently be described in the synchronous reference system , where the interval is of the form we use a system of units where the einstein gravitational constant and the velocity of light are equal to unity .the greek indices refer to three - dimensional space and assume the values 1,2,3 .latin indices will refer to four - dimensional space - time and will take the values 0,1,2,3 .the coordinates are designated as .the einstein equations in this reference system take the form where the dot signifies differentiation with respect to time and the tensorial operations on the greek indices , as well as covariant differentiation in this system are performed with respect to the three - dimensional metric .the quantity is a three - dimensional contraction : is a three - dimensional ricci tensor , expressed in terms of in the same way as is expressed in terms of .the quantities and are components of the energy - momentum tensor four - dimensional contraction of which is designated by . 
it turn out that the general cosmological solution of eqs .( 2)-(4 ) in the asymptotic vicinity of a singularity with respect to time is of an oscillatory nature and may be described by an infinite alternation of the so - called kasner epochs .the notions of a kasner epoch and of the succession of two of these epochs are the key elements in the dynamics of the oscillatory regime .it is most convenient to study their properties in the example of empty space , when and then take into account all the changes that may be observed in the presence of matter .this procedure is reasonable since , in general , the influence of matter upon the solution in the vicinity of the singularity appears to be either negligible or can be put under the control .so let s assume that the tensor in eqs.(2)-(4 ) equals zero .a kasner epoch is a time interval during which in eq .( 4 ) the three - dimensional ricci tensor may be neglected in comparison with the terms involving time differentiation .then from ( 2 ) and ( 4 ) we obtain the following equations in this approximation : here and elsewhere we shall assert that the singularity corresponds to the instant and we shall follow the evolution of the solution towards the singularity , i.e. , the variation of time as it decreases from certain values down to .( 3 ) in the general case is of no interest for the dynamics of the solution , since its role is reduced to the establishment of certain additional relations on arbitrary three - dimensional functions resulting from the integration of the eqs.(2),(4 ) ( that is of supplementary conditions for initial data ) . the general solution of eqs .( 8) may be written down in the form where by the big latin letters we designate the three - dimensional frame indices ( they take the values 1,2,3 ) .the exponents and vectors are arbitrary functions of the three - dimensional coordinates .we call the directions along as kasner axis , the triad represents the common eigenvectors both for metric and second form . the exponents satisfy two relations : it ensues from these relations that one of the exponents is always negative while the two others are positive .consequently , the space expands in one direction and contracts in two others .then the value of any three - dimensional volume element decreases since , according to ( 9)-(10 ) , the determinant of decreases proportionally to .of course , the solution ( 9)-(10 ) sooner or later will cease to be valid because the _ _ _ _ three - dimensional ricci tensor _ _ _ _ contain some terms which are growing with decreasing of time faster than the terms with time derivatives and our assumption that can be neglected will become wrong .it is possible to identify these `` dangerous '' terms in and include them into the new first approximation to the einstein equations , instead of ( 8) .the remarkable fact is that the asymptotic solution of this new approximate system can be described in full details and this description is valid and stable up to the singularity .the result is that the evolution to the singularity can be represented by a never - ending sequence of kasner epochs and the singularity is the point of its condensation .the durations of epochs tend to zero and transitions between them are very short comparatively to its durations .the determinant of the metric tensor tends to zero . on each kasner epochthe solution take the form ( 9)-(10 ) but each time with new functional parameters and . 
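for orientation, the kasner form referred to in eqs. (9)-(10) and the accompanying exponent relations can be restated in conventional notation as follows; here l, m, n denote the kasner axis vectors and p_1, p_2, p_3 the exponents. this is a hedged restatement of the standard expressions consistent with the description above, not a verbatim copy of the paper's equations.

```latex
% standard kasner expressions consistent with the description above:
g_{\alpha\beta} \;=\; t^{2p_1}\, l_\alpha l_\beta \;+\; t^{2p_2}\, m_\alpha m_\beta
\;+\; t^{2p_3}\, n_\alpha n_\beta ,
\qquad
p_1 + p_2 + p_3 \;=\; 1 , \qquad p_1^2 + p_2^2 + p_3^2 \;=\; 1 ,
% so that one exponent is negative and two are positive, while
\det\bigl(g_{\alpha\beta}\bigr) \;\propto\; t^{\,2(p_1+p_2+p_3)} \;=\; t^{2}.
```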
on each epoch the exponents the same relations ( 10 ) , that is the space expands in one direction and contracts in two others , however , from epoch to epoch these directions are different , i.e. on each new epoch the kasner axis rotate relatively to their arrangement at the preceding one .the effect of rotation of kasner axis make its use inconvenient for an analytical description of the asymptotic oscillatory regime because this rotation never stops . however , it turn out that another axis exist ( they are not eigenvectors for the second form ) , rotation of which are coming to stop in the limit and projection of the metric tensor into such `` asymptotically frozen '' ( terminology of the authors of ref . ) triad still is a diagonal matrix .the components of this matrix have no limit since their behaviour again can be described by the never - ending oscillations of a particle against some potential walls .this is an efficient way to reduce the description of asymptotic evolution of six components of the metric tensor to the three oscillating degrees of freedom . for the homogeneous model of bianchi type ixthis approach was developed in where the three - dimensional interval has been represented in the form with the standard bianchi ix differential forms ( where depends only on in that special way that with only non - vanishing structural constants ) .the diagonal matrix and three - dimensional _ orthogonal _ matrix depend only on time ( tilde means transposition ) .remarkably , the gravitational equations for this model shows that near singularity all three euler angles of matrix tends to some arbitrary limiting constants and three components of oscillate between the walls of potential of some special structure .we never tried to generalize this approach ( namely with orthogonal matrix ) to the inhomogeneous models but the recent development of the theory showed that even in most general inhomogeneous cases ( including multidimensional spacetime filled by different kind of matter ) there is analogous representation of the metric tensor leading to the same asymptotic freezing phenomenon of `` non - diagonal '' degrees of freedom and reducing the full dynamics to the few `` diagonal '' oscillating scale factors .this is so - called iwasawa decomposition first used in and thoroughly investigated in .the difference is that in general inhomogeneous case instead of orthogonal matrix it is more convenient to use an upper triangular matrix ( with components where upper index numerates the rows and lower index corresponds to columns ) {ccc}1 & n_{1 } & n_{2}\\ 0 & 1 & n_{3}\\ 0 & 0 & 1 \end{array } \right ) \text{\ \ }\label{11}\ ] ] and to write three - dimensional interval in the form . the diagonal matrix as well as matrix functions of all four coordinates but near the singularity matrix tends to some time - independent limit and components of oscillate between the walls of some potential .this asymptotic oscillatory regime has the well defined lagrangian . 
if one writes matrix as then the asymptotic equations of motion for the scale coefficients became the ordinary differential equations in time ( separately for each point three - dimensional space ) which follow from the lagrangian : {ll}l = g_{ab}\frac{d\beta^{a}}{d\tau}\frac{d\beta^{b}}{d\tau}-v(\beta^{a } ) , & \\ v(\beta^{a})=c_{1}e^{-4\beta_{{}}^{1}}+\ c_{2}e^{-2(\beta_{{}}^{2}-\beta ^{1})^{{}}}+c_{3}e^{-2(\beta_{{}}^{3}-\beta^{2})}\ .\label{12 } & \end{array}\ ] ] here we use the new time variable instead of original synchronous time . in asymptotic vicinity to the singularitythe link is where differentials should be understood only with respect to time , considering the coordinates in formally as fixed quantities .since tends to zero approximately like it follows that singular limit corresponds to the metric of three - dimensional space of scale coefficients are defined by the relation this is flat lorenzian metric with signature ( -,+,+ ) which can be seen from transformation after which one get all coefficients are time - independent and positive ; with respect to the dynamics they play a role of external fixed parameters .apart from the three differential equations of second order for which follow from the lagrangian ( 12 ) there is well known additional constraint which represents the component of the einstein equations . in particular case of homogeneous model of bianchi type ix equations( 12 ) and ( 13 ) gives exactly the same system which was described in , in spite of the fact that in these last papers the asymptotical freezing of `` non - diagonal '' metric components has been obtained using an orthogonal matrix instead of iwasawa s one analysis of the eqs .( 12)-(13 ) shows that in the limit the exponents are positive and all tend to infinity in such a way that the differences and also are positive and tend to infinity , that is each term in the potential tends to zero .then from ( 13 ) follows that each trajectory becomes `` time - like '' with respect to the metric , i.e. near singularity we have , though this is so only in the extreme vicinity to the potential walls and between the walls where the potential is exponentially small and trajectories become `` light - like '' , i.e. these periods of `` light - like '' motion between the walls corresponds exactly to the kasner epochs ( 9)-(10 ) ( with an appropriate identification of kasner axis during each period ) .it is easy to see that the walls itself are `` time - like '' what means that collisions of a `` particle moving in a light - like directions '' against the walls are inescapable and interminable .one of the crucial points discovered in is that in the limit the walls become infinitely sharp and of infinite height which simplify further the asymptotic picture and make transparent the reasons of chaoticity of such oscillatory dynamics . 
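to make the free - flight picture concrete , the snippet below evaluates the three - wall potential of eq . ( 12 ) and checks that a kasner velocity is null with respect to the flat superspace metric ; since the explicit matrix of that metric is not shown above , the standard dewitt - type form g_ab = delta_ab - 1 , of signature ( -,+,+ ) , is assumed here .

import numpy as np

G = np.eye(3) - np.ones((3, 3))             # assumed flat superspace metric of signature (-,+,+)

def wall_potential(beta, c=(1.0, 1.0, 1.0)):
    # the three exponential walls of eq. (12)
    b1, b2, b3 = beta
    return c[0]*np.exp(-4.0*b1) + c[1]*np.exp(-2.0*(b2 - b1)) + c[2]*np.exp(-2.0*(b3 - b2))

p = np.array([-1.0/3.0, 2.0/3.0, 2.0/3.0])  # a kasner point: sum p = 1, sum p^2 = 1
assert np.isclose(p @ G @ p, 0.0)           # free kasner flight is "light-like" between the walls
print(wall_potential([2.0, 5.0, 9.0]))      # exponentially small once b1, b2-b1, b3-b2 are all large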
because and near singularity all are positive we have .then by the transformation one can introduce instead of the `` radial '' coordinate ( when ) and `` angular '' coordinates subjected to the restriction the last condition pick out in -space the two - dimensional lobachevsky surface of constant negative curvature and each trajectory has the radially projected trace on this surface .the free kasner flights in three - dimensional -space between the walls are projected into geodesics of this two - dimensional surface .the walls are projected into three curves forming a triangle on the lobachevsky surface and reflections against these curves are geometric ( specular ) .if we introduce the new evolution parameter by the relation then the new lagrangian ( with respect to the `` time '' ) will be : {ll}l_{t}=-\left ( \frac{d\ln\rho}{dt}\right ) ^{2}+g_{ab}\text { } \frac { d\gamma^{a}}{dt}\frac{d\gamma^{b}}{dt}-\rho^{2}v(\rho\gamma^{a } ) , & \\ g_{ab}\gamma_{\text { } } ^{a}\gamma_{\text { } } ^{b}=-1.\text{\ \ } \label{14 } & \end{array}\ ] ] in the limit the new potential is exactly zero in the region between the walls where and becomes infinitely large at the points where the walls are located and behind them where quantities are negative .this means that near singularity potential depends only on -variables and can be considered as cyclic degree of freedom . in this way the asymptotic oscillatory regime can be viewed as the eternal motion of a particle inside a triangular bounded by the three stationary walls of infinite height in two - dimensional space of constant negative curvature .the important fact is that the area occupied by this triangle is finite .it is well known ( see references in , section 5.2.2 ) that the geodesic motion under the conditions described is chaotic .it is worth to mention that in case of homogeneous bianchi ix model the fact that its dynamics is equivalent to a billiard on the lobachevsky plane was established in .the numerical calculations confirming the admissibility of the bkl conjecture can be found in and .in papers we studied the problem of the influence of various kinds of matter upon the behaviour of the general solution of the gravitational equations in the neighbourhood of a singular point .it is clear that , depending on the form of the energy - momentum tensor , we may meet three different possibilities : ( i ) the oscillatory regime remains as it is in vacuum , i.e. the influence of matter may be ignored in the first approximation ; ( ii ) the presence of matter makes the existence of kasner epochs near a singular point impossible ; ( iii ) kasner epochs exist as before , but matter strongly affects the process of their formation and alternation .actually , all these possibilities may be realized .there is a case in which the oscillatory regime observed as a singular point is approached remains the same , in the first approximation , as in vacuum .this case is realized in a space filled with a perfect liquid with the equation of state for .no additional reflecting walls arise from the energy - momentum tensor in this case . 
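for reference , the change of variables described at the beginning of this passage reads , consistently with the constraint and with the argument rho gamma^a of the potential visible in ( 14 ) ,

\[
\beta^{a}=\rho\,\gamma^{a},
\qquad
\rho^{2}=-G_{ab}\,\beta^{a}\beta^{b},
\qquad
G_{ab}\,\gamma^{a}\gamma^{b}=-1 ,
\]

so that the gamma - variables parametrize the two - dimensional lobachevsky surface of constant negative curvature onto which the free kasner flights project as geodesics .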
if we have the `` stiff matter '' equation of state .this is the second of the above - mentioned possibilities when neither kasner epoch nor oscillatory regime can exist in the vicinity of a singular point .this case has been investigated in where it has been shown that the influence of the `` stiff matter '' ( equivalent to the massless scalar field ) results in the violation of the kasner relations ( 10 ) for the asymptotic exponents . instead we have where is an arbitrary three - dimensional function ( with the restriction ) to which the energy density of the matter is proportional ( in that particular case when the stiff - matter source is realized as a massless scalar field its asymptotic is and this is the formal reason why we use the index for the additional exponent ) .thanks to ( 15 ) , in contrast to the kasner relations ( 10 ) , it is possible for all three exponents to be positive . in has been shown that , even if the contraction of space starts with the quasi - kasner epoch ( 15 ) during which one of the exponents is negative , the asymptotic behaviour ( 9 ) with all positive exponents is inevitably established after a finite number of oscillations and remains unchanged up to the singular point .thus , for the equation of state the collapse in the general solution is described by monotonic ( but anisotropic ) contraction of space along all directions .the asymptotic of the general solution near cosmological singularity for this case we constructed explicitly in , see also .the disappearance of oscillations for the case of a massless scalar field should be consider as an isolated phenomenon which is unstable with respect to inclusion into the right hand side of the einstein equations another kind of fields .for instance , in the same paper we showed that if to the scalar field we add a vector one then the endless oscillations reappear .the cosmological evolution in the presence of an electromagnetic field may serve as an example of the third possibility . in this casethe oscillatory regime in the presence of matter is , as usual , described by the alternation of kasner epochs , but in this process the energy - momentum tensor plays a role as important as the three - dimensional curvature tensor .this problem has been treated by us in , where it has been shown that in addition to the vacuum reflecting walls also the new walls arise caused by the energy - momentum tensor of the electromagnetic field .the electromagnetic type of alternation of epochs , however , qualitatively takes place according to the same laws as in vacuum .in paper we have also studied the problem of the influence of the yang - mills fields on the character of the cosmological singularity . 
for definiteness ,we have restricted ourselves to fields corresponding to the gauge group su(2 ) .the study was performed in the synchronous reference system in the gauge when the time components of all three vector fields are equal to zero .it was shown that , in the neighbourhood of a cosmological singularity , the behaviour of the yang - mills fields is largely similar to the behaviour of the electromagnetic field : as before , there appears an oscillatory regime described by the alternation of kasner epochs , which is caused either by the three - dimensional curvature or by the energy - momentum tensor .if , in the process of alternation of epochs , the energy - momentum tensor of the gauge fields is dominating , the qualitative behaviour of the solution during the epochs and in the transition region between them is like the behavior in the case of free yang - mills fields ( with the abelian group ) .this does not mean that non - linear terms of the interaction may be neglected completely , but the latter introduce only minor , unimportant quantitative changes into the picture we would observe in the case of non - interacting fields .the reason for this lies in the absence of time derivatives of the gauge field strengths in those terms of the equations of motion which describe the interaction .the story resembling the aforementioned effect of dissappearance ( for scalar field ) and reconstruction ( after adding a vector field ) of oscillations occurred later in more general and quite different circumstances . in 1985 appeared very interesting and unexpected result that oscillatory regime near cosmological singularity in multidimensional spacetime ( for pure gravity ) holds for spacetime dimension up to but for dimension the asymptotic of the general solution follow the smooth multidimensional kasner power law . up to nowwe have no idea why this separating border coincides with dimension so significant for superstring theories , most likely it is just an accident .however , the important point is that if we will add to the vacuum multidimensional gravity the fields of -forms the presence of which is dictated by the low energy limit of superstring models , the oscillatory regime will reappear .this fact was established in and subsequently has been developed by t.damour , m.henneaux , h.nicolai , b.julia and their collaborates into the new interesting and promising branch of superstring theories . in articles it was demonstrated that bosonic sectors of supergravities emerging in the low energy limit from all types of superstring models have oscillatory cosmological singularity of the bkl character .let consider the action of the following general form:{ll}\displaystyle s=\int d^{d}x\sqrt{g}\text { } \biggl[r-\partial^{i}\varphi\partial_{i}\varphi- & \\ \displaystyle-\frac{1}{2}\sum_{p}\frac{1}{(p+1)!}e^{\lambda_{p}\varphi } f_{i_{1} ... i_{p+1}}^{(p+1)}f^{(p+1)i_{1} ... i_{p+1}}\biggr]\label{16 } & \end{array}\ ] ] where designates the the field strengths generated by the -forms , i.e. . the real parameters are coupling constants corresponding to the interaction between the dilaton and -forms .the tensorial operations in ( 16 ) are carring out with respect to -dimensional metric and . 
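reassembling the visible fragments of the display , the action ( 16 ) reads ( with D the spacetime dimension and varphi the dilaton ) :

\[
S=\int d^{D}x\,\sqrt{g}\,\Bigl[\,R-\partial^{i}\varphi\,\partial_{i}\varphi
-\frac{1}{2}\sum_{p}\frac{1}{(p+1)!}\,e^{\lambda_{p}\varphi}\,
F^{(p+1)}_{i_{1}\ldots i_{p+1}}F^{(p+1)\,i_{1}\ldots i_{p+1}}\Bigr],
\qquad
F^{(p+1)}=dA^{(p)} ,
\]

where the lambda_p are the dilaton couplings to the p - form field strengths .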
nowthe small latin indices refer to -dimensional space - time and greek indices ( as well as big latin frame indices and ) correspond to -dimensional space where also in this theory the kasner - like epochs exist which are of the form : {ll}g_{ik}dx^{i}dx^{k}=-dt^{2}+\eta_{ab}(t , x^{\alpha})l_{\mu}^{a}(x^{\alpha } ) l_{\nu}^{b}(x^{\alpha})dx^{\mu}dx^{\nu } , &... ,t^{2p_{d}(x^{\alpha})}],\text { \ \ } \label{17 } & \end{array}\]] however , in the presence of the dilaton the exponents instead of the kasner law satisfy the relations analogous to ( 15 ) : the approximate solution ( 17)-(19 ) follows from the -dimensional einstein equations by neglecting the energy - momentum tensor of -forms , -dimensional curvature tensor and spatial derivatives of .now one has to do the work analogous to that one for 4-dimensional gravity : it is necessary to identify in all neglected parts of the equations those `` dangerous '' terms which will destroy the solution ( 17)-(19 ) in the limit . then one should construct the new first approximation to the equations taking into account also these `` dangerous '' terms and try to find asymptotic solution for this new system .this is the same method which have been used in case of the 4-dimensional gravity with electromagnetic field and it works well also here . using the iwasawa decomposition for -dimensional frame metric where it can be shown that near singularity again the phenomenon of freezing of `` non - diagonal '' degrees of freedom of the metric tensor arise and the foregoing new approximate system reduces to the ordinary differential equations ( for each spatial point ) for the variables and where it is convenient to use the -dimensional flat superspace with coordinates and correspondingly new indices running over values the metric in this superspace is the asymptotic dynamics for -variables follows from the lagrangian of the form similar to ( 14 ) : {ll}l_{t}=-\left ( \frac{d\ln\rho}{dt}\right ) ^{2}+g_{\bar{a}\bar{b}}\text { } \frac{d\gamma^{\bar{a}}}{dt}\frac{d\gamma^{\bar{b}}}{dt}-\rho^{2}\sum _ { b}c_{b}e^{-2\rho w_{b}(\gamma ) } , & \\g_{\bar{a}\bar{b}}\gamma_{\text { } } ^{\bar{a}}\gamma_{\text { } } ^{\bar{b}}=-1.\label{21 } & \end{array}\ ] ] again component of the einstein equations gives additional condition to the equations of motion following from this lagrangian: here and time parameters and are defined by the evident generalization to the multidimensional spacetime of their definitions we used in case of 4-dimensional gravity : .all functional parameters in general are positive .the cosmological singularity corresponds to the the limit and in this limit potential term in lagrangian can be considered as -independent , asymptotically it vanish in the region of this space where and is infinite where sum in the potential means summation over all relevant ( dominating ) impenetrable barriers located at hypersurfaces where in the hyperbolic -dimensional -space .all are linear functions on therefore the free motion of between the walls in the original -dimensional -superspace is projected onto a geodesic motion of on hyperbolic -dimensional -space , i.e. to the motion between the corresponding projections of the original walls onto -space . 
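for reference , the relations ( 18)-(19 ) and the superspace metric ( 20 ) quoted in this passage take , in the standard conventions of the cosmological - billiard literature ( assumed here to coincide with the garbled displays ) , the form

\[
\sum_{A=1}^{d}p_{A}=1,
\qquad
\sum_{A=1}^{d}p_{A}^{2}+p_{\varphi}^{2}=1,
\]
\[
G_{\bar{a}\bar{b}}\,d\beta^{\bar{a}}d\beta^{\bar{b}}
=\sum_{A=1}^{d}\bigl(d\beta^{A}\bigr)^{2}
-\Bigl(\sum_{A=1}^{d}d\beta^{A}\Bigr)^{2}
+\bigl(d\varphi\bigr)^{2},
\]

which generalize the three - dimensional stiff - matter relations ( 15 ) and reduce to the pure multidimensional kasner law when the dilaton is absent .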
these geodesic motions from time to time are interrupted by specular reflections against the infinitely sharp hyperplanes .these hyperplanes bound a region in -space inside which a symbolic particle oscillates and the volume of this region , in spite of its non - compactness , is finite .the last property is of principle significance since it leads to the chaotic character of the oscillatory regime .of course , one of the central point here is to find all the aforementioned dominant walls and corresponding `` wall forms '' this depends on the spacetime dimension and menu of -forms . in papers the detailed description of all possibilities for the all types of supergravities (i.e. , eleven - dimensional supergravity and those following from the known five types of the superstring models in ten - dimensional spacetime ) can be found .it was shown that in all cases there is only 10 relevant walls governing the oscillatory dynamics .the large number of other walls need no consideration because they are located behind these principal ten and have no influence on the dynamics in the first approximation .the mentioned above region in -space where a particle oscillate is called `` billiard table '' and collection of its bounding walls forms the so - called coxeter crystallographic simplex , that is , in the cases under consideration , polyhedron with 10 faces in 9-dimensional -space with all dihedral angles between the faces equal to the numbers where belongs to some distinguished set of natural numbers ( or equal to infinity ) .this is very special geometric construction which ( when combined with the specular laws of reflections against the faces ) lead to the nontrivial huge symmetry hidden in the asymptotic structure of spacetime near cosmological singularity which symmetry coexists , nevertheless , with chaoticity .the mathematical description of the symmetry we are talking about can be achieved in the following way .consider the trajectories of a particle moving between the walls in the original 10-dimensional -superspace with coordinates and metric ( 20 ) .these trajectories are null stright lines with respect to the lorenzian metric wall forms are linear function on , that is where the set of constants depends on the choice of a supergravity model and on the type of the wall ( index ) in the chosen model .we see that for each wall the constants represent components of the vector orthogonal to to this wall .we can imagine all these vectors ( for different ) as arrows starting at the origin of the -space .all these vectors have fixed finite norm ( is inverse to ) and one can arrange the scalar products for each supergravity model in the form of the matrix: the crucial point is that , independently of a supergravity model , is the cartan matrix of indefinite type , i.e. with one negative principal value .any cartan matrix can be associated with some lie algebra and particular matrix ( 23 ) corresponds to the so - called lorenzian hyperbolic kac - moody algebra of the rank 10 .as was shown in the particle s velocity after the reflection from the wall changes according to the universal ( i.e. 
again independent of the model ) law : {ll}(v^{\bar{a}})_{after}=(v^{\bar{a}})_{before}-2\frac{(v^{\bar{b}})_{before}w_{a\bar{b}}}{(w_{a}\bullet w_{a})}w_{a}^{\bar{a } } , & \\ w_{a}^{\bar{a}}=g^{\bar{a}\bar{b}}w_{a\bar{b}},\text{\ \ \ ( no summation in } a\text{).}\label{24 } & \end{array}\ ] ] this transformation is nothing else but the already mentioned specular reflection of a particle by the wall orthogonal to the vector now it is clear that one can formally identify the ten vectors with the simple roots of the root system of kac - moody algebra , the walls with the weyl hyperplanes orthogonal to the simple roots , the reflections ( 24 ) with the elements of the weyl group of the root system and the region of -superspace bounded by the walls ( where a particle oscillates ) with the fundamental weyl chamber . for the readers less familiar with all these notions of the theory of generalized lie algebras ( especially in application to the question under consideration ) we can recommend the exhaustive review which is well written also from pedagogical point of view .the manifestation of lie algebra means that the corresponding lie symmetry group must somehow be hidden in the system .the hidden symmetry conjecture proposes that this symmetry might be inherent for the exact superstring theories ( assuming that they exist ) and not only for their classical low energy limits of their bosonic sectors in the vicinity to the cosmological singularity .the limiting structure near singularity should be considered just as an auxiliary instrument by means of which this symmetry is coming to light .as of now we have no comprehension where and how exactly the symmetry would act ( could be as a continuous infinite dimensional symmetry group of the exact lagrangian permitting to transform the given solutions of the equations of motion to the new solutions ) .if true the hidden symmetry conjecture could create an impetus for the third revolution in the development of the superstring theories .i would like to express the gratitude to the organizers of this conference for the excellent arrangement and for the warm hospitality in minsk .i am also grateful to thibault damour for useful comments which helped me to improve the present exposition of my talk .v.a.belinsky and i.m.khalatnikov `` on the influence of the spinor and electromagnetic field on the cosmological singularity character '' , preprint of landau institute for theoretical physics , chernogolovka 1976 ; rend.sem.mat.univ.politech .torino , * 35 * , 159 ( 1977 ) .m.henneaux `` kac - moody algebras and the structure of cosmological singularities : a new light on the belinskii - khalatnikov - lifshitz analysis '' , to appear in quantum mechanics of fundamental systems : the quest for beauty and simplicity - claudio bunster festsschrift " [ arxiv : hep - th/0806.4670 ] .
|
the talk given at the international conference in honor of ya . b. zeldovich 95th anniversary , minsk , belarus , april 2009 . it represents a review of the old results and of the contemporary development of the problem of cosmological singularity .

|
sequences , such as unimodular or low peak - to - average power ratio ( par ) , have many applications in both single - input single - output ( siso ) and multi - input multi - output ( mimo ) communication systems . for example , the -ary phase - shift keying techniques allow only symbols of constant - modulus , i.e. , unimodular , to be transmitted . in mimo radars and code - division multiple - access ( cdma ) applications ,the practical implementation demands from hardware , such as radio frequency power amplifiers and analog - to - digital converters , require the sequences transmitted to be unimodular or low par . in this paper, we consider the design of optimal unimodular or low par sequences for channel estimation .there is an extensive literature on designing single unimodular sequences with good correlation properties such that the autocorrelation of the sequence is zero at each nonzero lag . as such properties are usually difficult to achieve , metrics of `` goodness '' have been proposed instead where autocorrelation sidelobes are suppressed rather than literally set to zero , and optimization problems are thus formulated and solved with numerical algorithms .specifically , the work provides several cyclic algorithms ( ca ) for either minimizing integrated sidelobe level ( isl ) or maximizing isl - related merit factor ( mf ) . in , a computationally efficient algorithm called misl for minimizing isl is proposed , and it is demonstrated that misl results in lower autocorrelation sidelobes with less computational complexity . the good correlation property of a single unimodular sequence is also extended to mimo systems , where multiple sequences are transmitted .the good autocorrelation is defined for each sequence as that for a single sequence .meanwhile , good cross - correlation demands that any sequence be nearly uncorrelated with time - shifted versions of the other sequences . 
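as a concrete illustration of the isl metric discussed above , the python sketch below computes the aperiodic autocorrelation sidelobe energy of a unimodular sequence ; the lag convention and the names are ours and serve only as an example .

import numpy as np

def isl(u):
    # integrated sidelobe level: sum over nonzero lags of |aperiodic autocorrelation|^2
    r = np.correlate(u, u, mode='full')                    # r[k] = sum_n u_{n+k} * conj(u_n)
    return np.sum(np.abs(r)**2) - np.sum(np.abs(u)**2)**2  # subtract the zero-lag term

u = np.exp(2j * np.pi * np.random.rand(64))  # a unimodular sequence of random phases
print(isl(u))                                # low-isl designs (can, misl, ...) drive this value down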
in , algorithms ca - direct ( cad ) and ca - new ( can ) are developed to obtain sequence sets of low auto- and cross - correlation sidelobes .also proposes some efficient algorithms to minimize the same metric in .the aforementioned isl and isl - related metrics are both alternative ways to describe the impulse - like correlation characteristics .sequences with such properties enable matched filters at the receiver side to easily extract the signals backscattered from the range bin of interest and attenuate signals backscattered from other range bins .nevertheless , matched filters take no advantage of any prior information on the channel when the unimodular low - isl sequences are used for estimation .the unimodular constraint is actually a special case of the low par constraint , which imposes how the largest amplitude of the sequence compares with its average power .the low par constraint , as a structural requirement , has been well studied in the design of tight frames .although the individual vector norms of a frame could be adjusted to maximize the sum - capacity of ds - cdma links , the optimality in terms of any performance measures was not directly considered therein .furthermore , the algorithm they proposed is based on alternating projection that often suffers a slow convergence .as far as channel estimation is concerned , many studies have been conducted for both frequency - flat and frequency - selective fading channels under minimum mean square error ( mmse ) estimation and conditional mutual information ( cmi ) maximization .most of those obtained optimization problems , however , only address the power constraint without addressing the unimodular or low par constraints . in , training sequence design for flat mimo channelsis studied assuming some special structures , e.g. , kronecker product , on the prior covariance matrices of channel and noise .it is shown that the optimization problem can be reformulated as power allocations using the majorization theory , and the waterfilling solutions are obtained .meanwhile , problems of similar formulations have also been studied in joint linear transmitter - receiver design . to deal with arbitrarily correlated mimo channels ,some numerical algorithms based on block coordinate descent are proposed in .more related to our work is training sequence design for frequency - selective fading channels . under a total power constraint , channel capacityis investigated for siso channels and mimo channels .independent and identically distributed channel coefficients and noise are assumed to facilitate mathematical analysis . as a result ,impulse - like sequences for both types of channel are suggested for optimal estimation .optimal design for the mmse channel estimation has been studied in , where the noise is assumed to be white and the channel taps are uncorrelated ; however , such assumption is hardly satisfied in practice . andthere is no guarantee of finding an optimal solution for an arbitrary length of training or channel correlation .more important , their results can not be used when the unimodular or low par constraint is imposed on the sequences to be designed .we formulate the problem as the design of optimal unimodular sequences based on the mmse and the cmi .both problems are non - convex with the bothersome unimodular constraint . without assuming any amenable structures , e.g. , kronecker product , on the prior channel and noise covariances, the problems are also challenging even if only the power constraint is imposed . 
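to fix ideas before the algorithmic development , the sketch below evaluates the two figures of merit for a generic bayesian linear - gaussian model y = x h + n with h ~ cn(0 , r_h) and n ~ cn(0 , r_n) ; these are the standard closed - form expressions for that model , while the specific ( convolution ) structure and normalization of the training matrix used in the paper are not reproduced , and all names are illustrative .

import numpy as np

def mmse_and_cmi(X, R_h, R_n):
    # standard expressions for y = X h + n, h ~ CN(0, R_h), n ~ CN(0, R_n):
    #   mmse = tr[(R_h^{-1} + X^H R_n^{-1} X)^{-1}]
    #   cmi  = log det(I + R_h X^H R_n^{-1} X)
    XhRnX = X.conj().T @ np.linalg.solve(R_n, X)
    mmse = np.trace(np.linalg.inv(np.linalg.inv(R_h) + XhRnX)).real
    cmi = np.linalg.slogdet(np.eye(R_h.shape[0]) + R_h @ XhRnX)[1]
    return mmse, cmi

# example with hypothetical sizes (4 unknowns, 16 training samples)
rng = np.random.default_rng(0)
X = (rng.standard_normal((16, 4)) + 1j * rng.standard_normal((16, 4))) / np.sqrt(2.0)
print(mmse_and_cmi(X, R_h=np.eye(4), R_n=0.1 * np.eye(16)))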
to tackle those issues ,the majorization - minimization ( mm ) technique is employed to develop efficient algorithms . by rewriting the objective functions in a more appropriate way ,majorizing / minorizing functions can be obtained for minimization / maximization objective . as a result, the original problems are solved instead by a sequence of simple problems , each of which turns out to have a closed - form solution .convergence of our proposed algorithms is guaranteed , and an acceleration scheme is also given to improve the convergence rate .for low par constraints , similar problems can be formulated , and the developed algorithms need only a few modifications to be applied . the rest of this paper is organized as follows . in section [ sec : channel_problem ] , the channel model is described , on which the optimal unimodular sequence design problems are formulated . in section [ sec : algorithms_for_optimal ] , derivations of algorithms for both the mmse minimization and the cmi maximization are presented , followed by a brief analysis of convergence properties and an acceleration scheme .the optimal design under the low par constraints is discussed in [ sec : opt_seq_par ] .numerical examples are presented in section [ sec : simulations ] . andconclusion is then given in section [ sec : conclusion ] ._ notation : _ scalars are represented by italic letters .boldface uppercase and lowercase letters denote matrices and vectors , respectively . is the set of complex numbers .the identity matrix is denoted by with the size implicit in the context if undeclared .the superscripts , and denote respectively transpose , conjugate transpose and complex conjugate . with ,the vector is formed by stacking the columns of .the kronecker product is denoted by . takes the expectation of random variable . is the trace of a matrix . is frobenius norm of a matrix .we consider a block - fading or quasistatic multi - input multi - output ( mimo ) channel .assume the number of transmit antennas and receive antennas are and , respectively , and the channel impulse response is described as a length- sequence of matrices . in the training period , a length- sequence is sent through the channel from each transmit antenna or , equivalently , a length- vector from the set of transmit antennas at the time instant . for simplicity, we still call this sequence of vectors as a sequence , which is denoted by =\begin{bmatrix}\mathbf{u}_{1 } & \cdots & \mathbf{u}_{n}\end{bmatrix}^{t}\in \mathbb{c}^{n\times n_{t}} ] is a submatrix of with rows from to and columns from to , for , and . to find the next update , note that can be equivalently written as \right\|_{f}^{2}.\ ] ] and the minimum is achieved by projection onto a complex circle , which is \right)},\ ] ] where is taken element - wise .the whole procedure is summarized in algorithm [ alg : optimal_unimodular_sequence ] .the iterations of the algorithm is deemed to be converged , e.g. , when the difference between two consecutive updates for is no larger than some admitted threshold .set , and initialize . , and [eq : step3 ] \right)} ] . andthe algorithms are considered to be converged when the difference between two consecutive updates is no larger than , i.e. 
, .[ figs_unim_mmse_k19n10 ] shows the mse of different channel estimates after training with different unimodular sequences .both cap and can were proposed to design sequences with low sidelobes , or good correlation properties , and sequences designed by cap was employed to estimate channel impulse response with the matched filter .it was claimed that misl could further reduce the sidelobes of the designed unimodular sequences , with which channel estimate by matched filtering was also compared herein .the resulting mse of our proposed sequence , mmse - optimal accel ., by the accelerated scheme algorithm [ alg : accelerated_mm ] is lower than that of low sidelobes and that of random phases , especially in the low snr scenarios. therefore , the good correlation properties do not guarantee a good channel estimate when the length of the training sequence is limited with respect to the length of the channel impulse response .note that sequence mmse - optimal by algorithm [ alg : optimal_unimodular_sequence ] achieves almost the same performance as that of mmse - optimal accel . , but the resulting mse degrades a little bit in the high snr case as it needs more iterations to converge .the convergence of algorithm [ alg : optimal_unimodular_sequence ] and algorithm [ alg : accelerated_mm ] will be illustrated in section [ ssub : convergence ] .the obtained cmi for different unimodular sequences are shown in fig .[ fig_cmi_k29n50 ] .although by definition , the resulting cmi only depends on the channel statistics without being affected by the channel realizations , monte carlo simulations are still conducted for 200 times to avoid the effects from local minima .expectedly , sequences obtained by can and misl produces almost the same cmi . by incorporating the prior channel information into the sequence design ,however , the cmi obtained is improved . in this subsection, we compare the optimal unimodular sequences with those of good correlation properties or random phases for mimo channels .as in the case of siso channels , two performance metrics are considered , namely the channel mse and cmi .suppose the mimo channel has transmit antennas and receive antennas , with the length of the channel impulse .the vectorized channel impulse response is drawn from a circular complex gaussian distribution .each channel coefficient is associated with a triple set , where and are indices of transmit and receive antenna , respectively , and is the channel delay . andeach entry of the covariance matrix describes the correlation between the channel coefficient of the triple set and . without loss of generality , consider where and characterizes , respectively , the correlation between transmit antennas and the correlation between receive antennas , and is an exponentially decaying correlation with respect to the channel delay . for the true channel impulse response , we set and . 
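the covariance construction just described can be sketched as follows ; the exact decay factors and antenna numbers are not recoverable from the text above , so the sizes , the values and the ordering of the kronecker factors below are hypothetical choices for illustration only .

import numpy as np
from scipy.linalg import toeplitz

def exp_corr(n, r):
    # exponentially decaying correlation matrix, entry (i, j) = r**|i - j|
    return toeplitz(r ** np.arange(n))

# hypothetical sizes: 3 transmit antennas, 4 receive antennas, 8 channel taps
R_t, R_r, R_tap = exp_corr(3, 0.5), exp_corr(4, 0.6), exp_corr(8, 0.7)
R_h = np.kron(R_tap, np.kron(R_r, R_t))   # one possible ordering of the kronecker factors
R_noise = exp_corr(16, 0.4)               # toeplitz-correlated ("colored") noise covariance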
in the optimal unimodular training sequence design , the channel prior is assumed to follow a circularly complex gaussian distribution with zero mean and covariance matrix of the same correlation structure as and and .each column of noise matrix in model corresponds to a miso channel , and the vectorized noise is assumed to be colored with a toeplitz correlation and , with .the optimal unimodular training sequences , sequences of good auto- and cross - correlations properties , and sequences of random phases are transmitted and then the corresponding mmse channel estimators can be obtained .the mse for each estimate is calculated by with .the cmi is similarly defined by .the snr is defined as the setting for algorithm initialization and convergence are the same as the unimodular case . and the mse and cmi are averaged over 100 times monte carlo simulations for different values of snr .[ figs_unim_mimo_mmse ] shows the mse of mmse channel estimates with different unimodular training sequences and snr s .the length of sequence for each transmit antenna is .it is obvious that the optimal unimodular sequences , both mmse - optimal by algorithm [ alg : optimal_unimodular_sequence ] and mmse - optimal accel . by algorithm [ alg : accelerated_mm ] , produce smaller mse than that of random phases or good auto- and cross - correlation properties ( good - corr ) .also notice that there is a gap between two curves of mse of mmse - optimal and mmse - optimal accel .this is because algorithm [ alg : optimal_unimodular_sequence ] needs much more iterations to be converged for mimo channel training sequence design than that of the siso case .the convergence properties are shown in section [ ssub : convergence ] . in the cmi maximization for mimo channels , the performances of different unimodular sequencesare shown in fig .[ figs_unim_mimo_cmi ] with . for different snr, the optimal unimodular training sequences can achieve larger cmi than sequences of either random phase or good correlation properties . with power proportions among three antennas :the results are averaged over 100 monte carlo simulations.,width=326 ] consider the mimo channel of the same conditions described in section [ ssec : mimo_unim_simu ] .we employ algorithm [ alg : optimal_par_sequence ] and its accelerated scheme to design low par sequences for the application of mmse channel estimation . in fig .[ figs_lowpar_mimo_mmse ] , mmse - optimal and mmse - optimal accel .are obtained by algorithm [ alg : optimal_par_sequence ] and its accelerated scheme , respectively .it is demonstrated that both optimal training sequences achieve much smaller mse than low par sequences of random phases . like the results for algorithm [ alg : optimal_unimodular_sequence ] and algorithm [ alg : accelerated_mm ] in the previous subsections ,mmse - optimal renders an larger mse than mmse - optimal accel . especially in the high snr cases .an example of convergence of both algorithms are shown in section [ ssub : convergence ] . 
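for completeness , the peak - to - average power ratio used in these comparisons can be computed as below ( standard definition , under which a unimodular sequence has par equal to one and the low - par constraint reads par(u) <= rho for some rho between 1 and n ) :

import numpy as np

def par(u):
    # peak-to-average power ratio of a length-N sequence
    u = np.asarray(u)
    return u.size * np.max(np.abs(u)**2) / np.sum(np.abs(u)**2)

u = np.exp(2j * np.pi * np.random.rand(32))   # unimodular example
assert np.isclose(par(u), 1.0)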
in fig .[ figs_siso_unim_vs_lowpar_mmse ] , we also compare resulting mse of unimodular sequences and sequences of different values of par .experimental results are given to show the convergence properties of proposed algorithms for the mmse minimization problem and the cmi maximization problem with unimodular constraints or low par constraints .the setting for algorithm initialization and convergence criteria are the same as previous subsections .first , we experiment with algorithm [ alg : optimal_unimodular_sequence ] and algorithm [ alg : accelerated_mm ] for both mmse minimization and cmi maximization in siso channel unimodular training sequence design .[ fig_siso_unim_cvg ] shows the objective values with respect to algorithm iterations .in both problems , algorithm [ alg : optimal_unimodular_sequence ] converge monotonically to a stationary point though slowly . with acceleration techniques, however , algorithm [ alg : accelerated_mm ] renders an very fast convergence .the same convergence properties can be seen in fig .[ fig_mimo_unim_cvg ] , where unimodular sequences for mimo channel estimation are considered with , .within the same mimo channel setting , algorithm [ alg : optimal_par_sequence ] and its accelerated scheme are applied to design low par sequences .the convergence of both algorithms are shown in fig .[ fig_mimo_lowpar_cvg ] . note that in those three examples , the algorithms algorithm [ alg : optimal_unimodular_sequence ] and algorithm [ alg : optimal_par_sequence ] converge slower than the accelerated scheme especially in designing sequences for mimo channels with large values of snr .this is due to successive majorizations or minorizations applied in the derivation of algorithms and thus explains the difference between two training sequences in terms of the resulting mse and cmi .db.,width=326 ] , , and db.,width=326 ] , , and db.,width=326 ]in this paper , optimal training sequences with unimodular constraint and low par constraints are considered .the optimal sequence design problem is formulated by minimizing the mmse criterion and maximizing the cmi criterion .the formulated problems are nonconvex and efficient algorithms are developed based on the majorization - minimization framework . furthermore , the acceleration scheme is derived using the squarem method .all the proposed algorithms are guaranteed to monotonically converge to a stationary point .numerical results show that the optimal unimodular sequences can improve either the accuracy of channel estimate or the cmi compared with those of sequences with good correlation properties or random phases . under the same criteria ,the optimal sequence design with low par constraint is also studied , for which the similar algorithms to unimodular case are derived .numerical examples show that the optimal low par sequences perform better than that of random phases .h. he , p. stoica , and j. li , `` designing unimodular sequence sets with good correlations including an application to mimo radar , '' _ ieee trans . signal process ._ , vol .57 , no . 11 , pp . 43914405 , nov 2009 .y. liu , t. f. wong , and w. w. hager , `` training signal design for estimation of correlated mimo channels with colored interference , '' _ ieee trans . signal process ._ , vol .55 , no . 4 , pp .14861497 , apr .d. katselis , e. kofidis , and s. theodoridis , `` on training optimization for estimation of correlated mimo channels in the presence of multiuser interference , '' _ ieee trans .signal process ._ , vol . 56 , no . 
10 , pp .48924904 , oct .2008 .e. bjrnson and b. ottersten , `` a framework for training - based estimation in arbitrarily correlated rician mimo channels with rician disturbance , '' _ ieee trans . signal process ._ , vol .58 , no . 3 , pp . 18071820 , mar .d. p. palomar , j. m. cioffi , and m. a. lagunas , `` joint tx - rx beamforming design for multicarrier mimo channels : a unified framework for convex optimization , '' _ ieee trans .signal process ._ , vol .51 , no . 9 , pp . 23812401 , sep .d. p. palomar and y. jiang , `` mimo transceiver design via majorization theory , '' _ found .trends commun .inf . theory _, vol . 3 , no . 4 ,331551 , nov .[ online ] .available : http://dx.doi.org/10.1561/0100000018 d. katselis , c. r. rojas , m. bengtsson , e. bjrnson , x. bombois , n. shariati , m. jansson , and h. hjalmarsson , `` training sequence design for mimo channels : an application - oriented approach , '' _ arxiv preprint arxiv:1301.3708 _ , 2013 .q. shi , c. peng , w. xu , and y. wang , `` training signal design for mimo channel estimation with correlated disturbance , '' in _ 2014 ieee int .speech and signal process .( icassp ) _ , 2014 , pp. 57205724 .h. vikalo , b. hassibi , b. hochwald , and t. kailath , `` on the capacity of frequency - selective channels in training - based transmission schemes , '' _ ieee trans .signal process ._ , vol .52 , no . 9 , pp . 25722583 , sep. 2004 .a. beck and m. teboulle , `` gradient - based algorithms with applications to signal recovery , '' in _convex optimization in signal processing and communications _, d. palomar and y. eldar , eds.1em plus 0.5em minus 0.4emcambridge , u.k . :cambridge univ . press , 2009 , pp .
|
par - constrained sequences are widely used in communication systems and radars due to various practical needs ; specifically , sequences are required to be unimodular or of low peak - to - average power ratio ( par ) . for unimodular sequence design , plenty of efforts have been devoted to obtaining good correlation properties . regarding channel estimation , however , sequences of such properties do not necessarily help produce optimal estimates . tailored unimodular sequences for the specific criterion concerned are desirable especially when the prior knowledge of the channel is taken into account as well . in this paper , we formulate the problem of optimal unimodular sequence design for minimum mean square error estimation of the channel impulse response and conditional mutual information maximization , respectively . efficient algorithms based on the majorization - minimization framework are proposed for both problems with guaranteed convergence . as the unimodular constraint is a special case of the low par constraint , optimal sequences of low par are also considered . numerical examples are provided to show the performance of the proposed training sequences , with the efficiency of the derived algorithms demonstrated . unimodular sequence , peak - to - average power ratio ( par ) , channel estimation , majorization - minimization , minimum mean square error , conditional mutual information .
|
variable phenomena are common in the universe and multi - epoch surveys are producing very interesting scientific results ( see * ? ? ?* ) . among all the numerous existing surveys, gaia is really prominent and will have a tremendous impact on time domain astronomy ( see for example * ? ? ?* ; * ? ? ?* ) . within the gaia data processing and analysis consortium ( dpac ,see * ? ? ?* ) , one coordination unit ( cu7 ) is dedicated to the thematics of variability .the objective of cu7 is to populate the gaia catalogue with properties of sources having photometric and/or spectral variability .cu7 is composed of about 70 researchers , postdocs , students , and computers scientists who are distributed among about 20 countries .the top - level work - breakdown is represented in fig .[ figcu7wps ] .we describe briefly the different tasks hereafter .a first global release of all variable sources detected among the one billion observed sources is foreseen in the 4th release ( currently planned for 2018/2019 ) , the final release might come in 2022. however the goal of cu7 is to release at earlier dates specific groups of variables , once the estimations of completeness and contamination are thought to be reliable enough and reach acceptable levels . 99 distefano , e. , lanzafame , a. c. , lanza , a. f. , et al .2012 , mnras , 421 , 2774 dubath , p. , rimoldini , l. , sveges , m. , et al .2011 , mnras , 414 , 2602 dzigan , y. , & zucker , s. 2012 , apj , 753 , ll1 dzigan , y. , & zucker , s. 2013 , mnras , 428 , 3641 eyer , l. , & mowlavi , n. 2008 , journal of physics conference series , 118 , 012010 eyer , l. , palaversa , l. , mowlavi , n. , et al .2012 , astrophysics and space science , 341 , 207 eyer , l. , holl , b. , pourbaix , d. , et al .2013 , central european astrophysical bulletin , 37 , 115 mignard , f. , bailer - jones , c. , bastian , u. , et al .2008 , iau symposium , 248 , 224 nienartowicz , k. , ordez blanco , d. , guy , l. , et al .2014 , arxiv:1411.5943 rimoldini , l. , dubath , p. , sveges , m. , et al .2012 , mnras , 427 , 2917 tingley , b. 2011 , a&a , 529 , aa6 varadi , m. , eyer , l. , jordan , s. , & koester , d. 2011 , eas publications series , 45 , 167
|
we present the variability processing and analysis that is foreseen for the gaia mission within coordination unit 7 ( cu7 ) of the gaia data processing and analysis consortium ( dpac ) . a top - level description of the tasks is given .
|
the problem of testing whether two data generating distributions are equal has been studied extensively in the statistical and machine learning literatures .practical applications range from speech recognition to fmri and genomic data analysis .parametric approaches typically test for divergence between two distributions using statistics based on a standardized difference of the two sample means , _e.g. _ , student s -statistic in the univariate case or hotelling s -statistic in the multivariate case . a variety of non - parametric rank - based tests have also been proposed .more recently , and devised kernel - based statistics for homogeneity tests in a function space .in several settings of interest , prior information on the structure of the distribution shift is available as a graph on the variables .specifically , suppose we observe from a first multivariate normal distribution and from a second such distribution . in cases where an undirected graph encoding some type of covariance information in is given ,the putative _ location _ or _ mean shift _ may be expected to be coherent with .that is , viewed as a function of is _ smooth _ , in the sense that the shifts and for two connected nodes and are similar .classical tests , such as hotelling s -test , consider the null hypothesis against the alternative , without reference to the graph .our goal is to take into account the graph structure of the variables in order to build a more powerful two - sample test of means under smooth - shift alternatives . just as a natural notion of smoothness of functions on a euclidean spacecan be defined through the notion of dirichlet energy and controlled by fourier decomposition and filtering , it is well - known that the smoothness of functions on a graph can be naturally defined and controlled through spectral analysis of the graph laplacian . in particular , the eigenvectors of the laplacian provide a basis of functions which vary on the graph at increasing frequencies ( corresponding to the increasing eigenvalues ) . in this paper , we propose to compare two populations in terms of the first few components of the graph - fourier basis or , equivalently , in the original space , after filtering out high - frequency components .an important motivation for the development of our graph - structured test is the detection of groups of genes whose expression changes between two conditions .for example , identifying groups of genes that are differentially expressed ( de ) between patients for which a particular treatment is effective and patients which are resistant to the treatment may give insight into the resistance mechanism and even suggest targets for new drugs .in such a context , expression data from high - throughput microarray and sequencing assays gain much in relevance from their association with graph - structured prior information on the genes , _e.g. _ , gene ontology ( go ; http://www.geneontology.org ) or kyoto encyclopedia of genes and genomes ( kegg ; http://www.genome.jp/kegg ) .most approaches to the joint analysis of gene expression data and gene graph data involve two distinct steps .firstly , tests of differential expression are performed separately for each gene .then , these univariate ( gene - level ) testing results are extended to the level of gene sets , _e.g. 
_ , by assessing the over - representation of de genes in each set based on -values for fisher s exact test ( or a approximation thereof ) adjusted for multiple testing or based on permutation adjusted -values for weighted kolmogorov - smirnov - like statistics . another family of methods directly performs multivariate tests of differential expression for groups of genes , _ e.g. _ , hotelling s-test .it is known that the former family of approaches can lead to incorrect interpretations , as the sampling units for the tests in the second step become the genes ( as opposed to the patients ) and these are expected to have strongly correlated expression measures .this suggests that direct multivariate testing of gene set differential expression is more appropriate than posterior aggregation of individual gene - level tests . on the other hand , while hotelling s -statistic is known to perform well in small dimensions , it loses power very quickly with increasing dimension , essentially because it is based on the inverse of the empirical covariance matrix which becomes ill - conditioned .in addition , such direct multivariate tests on unstructured gene sets do not take advantage of information on gene regulation or other relevant biological properties . an increasing number of regulation networks are becoming available , specifying , for example , which genes activate or inhibit the expression of which other genes . as stated before , incorporating such biological knowledge in de tests is important . indeed ,if it is known that a particular gene in a tested gene set activates the expression of another , then one expects the two genes to have coherent ( differential ) expression patterns , _e.g. _ , higher expression of the first gene in resistant patients should be accompanied by higher expression of the second gene in these patients .accordingly , the first main contribution of this paper is to propose and validate multivariate test statistics for identifying distribution shifts that are coherent with a given graph structure .next , given a large graph and observations from two data generating distributions on the graph , a more general problem is the identification of smaller non - homogeneous subgraphs , _i.e. _ , subgraphs on which the two distributions ( restricted to these subgraphs ) are significantly different .this is very relevant in the context of tests for gene set differential expression : given a large set of genes , together with their known regulation network , or the concatenation of several such overlapping sets , it is important to discover novel gene sets whose expression change significantly between two conditions .currently - available gene sets have often been defined in terms of other phenomena than that under study and physicians may be interested in discovering sets of genes affecting in a concerted manner a specific phenotype .our second main contribution is therefore to develop algorithms that allow the exhaustive testing of all the subgraphs of a large graph , while accounting for the multiplicity issue arising from the vast number of subgraphs . as the problem of identifying variables or groups of variables which differ in distribution between two populations is closely - related to supervised learning , our proposed approach is similar to several learning methods . use filtering in the fourier space of a graph to train linear classifiers of gene expression profiles whose weights are smooth on a gene network. 
however , their classifier enforces global smoothness on the large regularization network of all the genes , whereas we are concerned with the selection of gene sets with locally - smooth expression shift between populations . in ,sparse learning methods are used to build a classifier based on a small number of gene sets .while this approach leads in practice to the selection of groups of variables whose distributions differ between the two classes , the objective is to achieve the best classification performance with the smallest possible number of groups . as a result , correlated groups of variables are typically not selected .other related work includes , who proposed an adaptive neyman test in the fourier space for time - series .however , as illustrated below in section [ sec : experiments ] , direct translation of the adaptive neyman statistic to the graph case is problematic , as assumptions on fourier coefficients which are true for time - series do not hold for graphs . in addition , the neyman statistic converges very slowly towards its asymptotic distribution and the required calibration by bootstrapping renders its application to our subgraph discovery context difficult .by contrast , other methods do not account for shift smoothness and try to address the loss of power caused by the poor conditioning of the -statistic by applying it after dimensionality reduction or by omitting the inverse covariance matrix and adjusting instead by its trace . recently proposed de tests , where a probabilistic graphical model is built from a gene network .however , this model is used for gene - level de tests , which then have to be combined to test at the level of gene sets .several approaches for subgraph discovery , like that of , are based on a heuristic to identify the most differentially expressed subgraphs and do not amount to testing exactly all the subgraphs . concerning the discovery of distribution - shifted subgraphs , a graph laplacian - based testing procedure to identify groups of interacting proteins whose genes contain a large number of mutations .their approach does not enforce any smoothness on the detected patterns ( smoothness is not necessarily expected in this context ) and the graph laplacian is only used to ensure that very connected genes do not lead to spurious detection . the gene expression network analysis ( gxna )method of detects differentially expressed subgraphs based on a greedy search algorithm and gene set de scoring functions that do not account for the graph structure . 
the rest of this paper is organized as follows : section [ sec : smooth ] introduces elements of fourier analysis for graphs which are needed to develop our method .section [ sec : test ] presents our graph - structured two - sample test statistic and states results on power gain for smooth - shift alternatives .section [ sec : discovery ] describes procedures for systematically testing all the subgraphs of a large graph .section [ sec : experiments ] presents results for synthetic data as well as breast cancer gene expression and kegg data .finally , section [ sec : discussion ] summarizes our findings and outlines ongoing work .the fundamental idea of harmonic analysis for functions defined on a euclidean space is to build a basis of the function space , such that each basis function varies at a different frequency .the basis functions are typically sinusoids .they were originally obtained in an attempt to solve the heat equation , as the eigenfunctions of the laplace operator , with corresponding eigenvalues proportional to the frequencies of the sinusoids .any function can then be decomposed on the basis as a linear combination of sinusoids of increasing frequency .the set of projections of the function on the basis sinusoids gives a dual representation of the function , often referred to as fourier transform .this representation is useful for filtering functions , by removing or shrinking coefficients associated with high frequencies , as these are typically expected to reflect noise , and then taking the inverse fourier transform .the resulting filtered function contains the same signal in the low frequencies as the original function .a related concept is the dirichlet energy of a function on an open subspace , defined as where is the gradient operator , a measure of variation that is consistent with the laplace operator . in particular ,the dirichlet energy of the basis functions is proportional to their associated frequencies . for functions on a euclidean space , natural notions of smoothness , along with the dirichlet energy and dual representation in the frequency domain by projection on a fourier basis ,are therefore classically defined from the laplace operator and its spectral decomposition .likewise , notions of smoothness for functions on graphs can be defined based on the graph laplacian . specifically , consider an undirected graph , with nodes , adjacency matrix , and degree matrix , where is a unit column - vector , is the diagonal matrix with diagonal for any vector , and .let denote a function that associates a real value to each node of the graph .the laplacian matrix of is typically defined as or for the normalized version .more generally , given any gradient matrix , defined on and associating to each function on the graph its variation on each edge , it is possible to derive a corresponding laplacian matrix following the classical definition of the laplace operator , , where is the divergence operator defined as the negative of the adjoint operator of the gradient . any desired notion of variation may be encoded in a gradient function and thus translated into its associated dirichlet energy , for a function defined on the graph . 
a common choice of gradient is the finite difference operator .this definition leads to the unnormalized laplacian above .the corresponding energy function is .let denote the spectral decomposition of the laplacian , where is the diagonal matrix of eigenvalues and the columns of the matrix are the corresponding eigenvectors .then , by definition , the eigenvectors of are functions of increasing energy , as for all . in the remainder of this paper, we denote by the fourier coefficients of a function defined on a graph .if the above two notions of smoothness are not appropriate for a particular application , other gradients , leading to other laplacian matrices , may be devised to build the function basis .for example , introducing weights on the edges of a graph and using these weights in the normalized version of the finite differences allows the incorporation of prior belief on where a shift in distributions is expected to be smooth . for applications like structuredgene set differential expression detection , one may use negative weights for edges that reflect an expected negative correlation between two variables , _e.g. _ , a gene whose expression inhibits the expression of another gene . in this case , a small variation of the shift on the edge between and should correspond to a small .accordingly , the gradient should be defined as , where is for negative interactions and for positive interactions .the eigenvectors of the corresponding laplacian are functions of increasing , an appropriate notion of smoothness for the application at hand .a signed laplacian can be recovered from the classical definition , where is allowed to have negative entries .note that such a smoothness function is used as a penalty for semi - supervised learning in .as an example , figure [ fig : ev ] displays the eigenvectors of the signed laplacian for a simple four - node graph with the first eigenvector , corresponding to the smallest frequency ( eigenvalue of zero ) , can be viewed as a `` constant '' function on the graph , in the sense that its absolute value is identical for all the nodes , but nodes connected by an edge with negative weight take on values of opposite in sign . by contrast , the last eigenvector , corresponding to the highest frequency , is such that nodes connected by positive edges take on values of opposite sign and nodes connected by negative edges take on values of the same sign . 
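a similar sketch can be written for the signed laplacian discussed above; the toy graph below (a four-node path with one inhibitory edge) is only an assumption for illustration and is not necessarily the graph of figure [ fig : ev ], but it reproduces the qualitative behavior described in the text:

```python
import numpy as np

# signed adjacency of a four-node path: +1 activation edge, -1 inhibition edge
a = np.array([[ 0,  1,  0,  0],
              [ 1,  0, -1,  0],
              [ 0, -1,  0,  1],
              [ 0,  0,  1,  0]], dtype=float)

n = a.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if a[i, j] != 0]

# signed gradient: a negative edge (i, j) penalizes f_i + f_j instead of f_i - f_j
grad = np.zeros((len(edges), n))
for r, (i, j) in enumerate(edges):
    grad[r, i] = 1.0
    grad[r, j] = -np.sign(a[i, j])

l_signed = grad.T @ grad   # for +/-1 weights this equals diag(|a|.sum(1)) - a
eigvals, u = np.linalg.eigh(l_signed)

print(np.round(eigvals, 3))   # smallest eigenvalue is 0 for this (balanced) signed graph
print(np.round(u[:, 0], 3))   # constant magnitude, sign flips across the inhibitory edge
```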
[ figure fig : ev : eigenvectors of the signed laplacian for a simple four-node graph. nodes are colored according to the value of the eigenvector, where green corresponds to high positive values, red to high negative values, and black to zero; `` t''-shaped edges have negative weights. ]

for multivariate normal distributions, hotelling's -test, a classical test of location shift, is known to be uniformly most powerful invariant against global-shift alternatives. the test statistic is based on the squared _mahalanobis norm_ of the sample mean shift and is given by , where , , and denote, respectively, the sample sizes, means, and pooled covariance matrix, for random samples drawn from two -dimensional gaussian distributions , . under the null hypothesis of equal means, follows a (central) -distribution, where . in general, follows a non-central -distribution, where the non-centrality parameter is a function of the mahalanobis norm of the mean shift , , which we refer to as _distribution shift_. in the remainder of this paper, unless otherwise specified, -statistics are assumed to follow the nominal -distribution, _e.g._, for critical value and power calculations.

for any graph-fourier basis, direct calculation shows that , _i.e._, the statistic in the original space and the statistic in the graph-fourier space are identical. more generally, for , the statistic in the original space after filtering out frequencies above is the same as the statistic restricted to the first coefficients in the graph-fourier space:
\[\begin{aligned}
& \frac{n_1 n_2}{n_1+n_2}\,(\bar{x}_1 - \bar{x}_2)^\top u_{[k]} \left(u_{[k]}^\top \hat{\sigma}\, u_{[k]}\right)^{-1} u_{[k]}^\top (\bar{x}_1 - \bar{x}_2) \\
&\quad = \frac{n_1 n_2}{n_1+n_2}\,(\bar{x}_1 - \bar{x}_2)^\top u 1_{k} u^\top \left(u 1_{k} u^\top \hat{\sigma}\, u 1_{k} u^\top\right)^{+} u 1_{k} u^\top (\bar{x}_1 - \bar{x}_2),
\end{aligned}\]
where denotes the generalized inverse of a matrix, the matrix is diagonal with its first diagonal entries equal to one and the remaining entries equal to zero, and contains the first columns of . let denote the distribution shift restricted to the first dimensions, _i.e._, computed from only the first elements of and and the first diagonal block of .
under the assumption that the distribution shift is smooth, _i.e._, lies mostly at the beginning of the graph spectrum, so that is nearly maximal for a small value of , lemma [ lem : das ] states that performing hotelling's test in the graph-fourier space restricted to its first components yields more power than testing in the full graph-fourier space. equivalently, the test is more powerful in the original space after filtering than in the original unfiltered space. note that this result holds because retaining the first fourier components is a _non-invertible_ transformation.

[ lem : das ] for any level and any , there exists such that , where is the power of hotelling's -test at level in dimension for a distribution shift , according to the nominal -distribution.

this lemma is a direct application of corollary in to hotelling's -test in the graph-fourier space. the bottom line of the proof of that result is that can be shown to be a continuous and strictly decreasing function of , so that a strictly positive increase in the non-centrality parameter of the -distribution is necessary to maintain power when increasing dimension. in particular, a direct application of lemma [ lem : das ] yields the following corollary:

[ cor : smooth ] if , then .

according to corollary [ cor : smooth ], if the distribution shift lies in the first fourier coefficients, then testing in this subspace yields strictly more power than using additional coefficients. in particular, if there exists such that ( _i.e._, the mean shift is smooth ) and is block-diagonal such that , then gains in power are obtained by testing in the first fourier components. although not necessary, this condition is plausible when the mean shift lies at the beginning of the spectrum, as the coefficients which do not contain the shift are not expected to be correlated with the ones that do contain it. note that the result in lemma [ lem : das ] is even more general, as testing in the first fourier components can increase power even when the distribution shift partially lies in the remaining components, as long as the latter portion is below a certain threshold. figure [ fig : shiftinc ] illustrates, under different settings, the increase in distribution shift necessary to maintain a given power level against the number of added coefficients.

if for some reason one expects that the mean shift is smooth ( rather than the distribution shift ), _i.e._, lies at the beginning of the spectrum, and that the covariance between coefficients that contain the shift and those that do not is non-zero, then one should use test statistics based on estimators of the unstandardized _euclidean norm_ of this shift, _e.g._, those proposed in or . results similar to lemma [ lem : das ] can be derived for these statistics. namely, the corresponding tests gain asymptotic power when applied at the beginning of the spectrum, provided the euclidean norm of only increases moderately as coefficients for higher frequencies are added. the results follow from the corresponding results in and , using the fact that, by cauchy's interlacing theorem, the trace of the square of any positive semi-definite matrix is larger than the trace of the square of any principal submatrix.
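the restricted statistic discussed above can be sketched as follows; this is only an illustrative implementation, assuming samples are stored as rows of numpy arrays and that the columns of `u` are the graph-fourier basis vectors ordered by increasing eigenvalue, with the nominal -distribution used for the critical value and p-value as in the text:

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_fourier(x1, x2, u, k, alpha=0.05):
    """two-sample hotelling statistic restricted to the first k
    graph-fourier components (columns of u)."""
    n1, n2 = x1.shape[0], x2.shape[0]
    y1, y2 = x1 @ u[:, :k], x2 @ u[:, :k]        # project onto the first k components
    diff = y1.mean(axis=0) - y2.mean(axis=0)
    s_pooled = np.atleast_2d(((n1 - 1) * np.cov(y1, rowvar=False) +
                              (n2 - 1) * np.cov(y2, rowvar=False)) / (n1 + n2 - 2))
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    # under the null and gaussian data, t2 * (n1+n2-1-k) / (k*(n1+n2-2))
    # follows an f-distribution with (k, n1+n2-1-k) degrees of freedom
    f_stat = t2 * (n1 + n2 - 1 - k) / (k * (n1 + n2 - 2))
    crit = f_dist.ppf(1 - alpha, k, n1 + n2 - 1 - k)
    pval = f_dist.sf(f_stat, k, n1 + n2 - 1 - k)
    return t2, f_stat > crit, pval
```

taking the number of retained components equal to the number of nodes recovers the classical statistic in the original space, consistent with the identity recalled earlier.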
[ figure fig : shiftinc : increase in distribution shift required for hotelling's -test to maintain a given power when increasing the number of tested fourier coefficients. power is computed under the non-central -distribution; each line corresponds to the fixed shift and power pair indicated in the legend. right panel: zoom on the first dimensions. ]

a systematic approach for discovering non-homogeneous subgraphs, _i.e._, subgraphs of a large graph that exhibit a significant shift in means, is to test all of them. in practice, however, this can represent an intractable number of tests, so it is important to be able to rapidly identify sets of subgraphs that all satisfy the null hypothesis of equal means. to this end, we devise a pruning approach based on an upper bound on the value of the test statistic for any subgraph containing a given set of nodes.

given a large graph with nodes, we adopt the following classical branch-and-bound-like approach to test subgraphs of size at level . we start by checking, for each node in , whether the hotelling -statistic in the first graph-fourier components of any subgraph of size containing this node can be guaranteed to be below the level- critical value ( _e.g._, the -quantile of the distribution ). if this is the case, the node is removed from the graph. we then repeat the procedure on the edges of the remaining graph and, iteratively, on the subgraphs up to size , at which point we test all the remaining subgraphs of size .

specifically, for a subgraph of of size , hotelling's -statistic in the first graph-fourier components of is defined as
\[ \frac{n_1 n_2}{n_1+n_2}\,(\bar{x}_1(g) - \bar{x}_2(g))^\top u_{[k]} \left( u_{[k]}^\top \hat{\sigma}(g)\, u_{[k]} \right)^{-1} u_{[k]}^\top (\bar{x}_1(g) - \bar{x}_2(g)), \]
where contains the first eigenvectors of the laplacian of ( we omit the dependence on to ease notation ), and , , and are, respectively, the empirical means and pooled covariance matrix restricted to the nodes in . we make use of the following upper bound on .

[ lem : neighbound ] for any subgraph of of size , any subgraph of of size , and any , we have , where is the -neighborhood of , that is, the union of the nodes of and the nodes whose shortest path to a node of is less than or equal to .

the proof involves the following result:

[ lem : bessel ] let be an invertible matrix and , , be a matrix with orthonormal columns. for any , .

first note that, by orthonormality of the columns of , is indeed invertible, and that , where is an orthogonal projection, with eigenvalues either or . thus, is positive semi-definite, as its eigenvalues are also either or . the result follows from properties of products of positive semi-definite matrices. we can now prove lemma [ lem : neighbound ].
by lemma [ lem : bessel ], , and applying lemma [ lem : bessel ] a second time, with the compression from to the nodes of , yields the result.

note that the bound takes into account the fact that the -statistic is eventually computed in the first few components of a basis which is not known beforehand: at each step, for each potential subgraph which would include the subgraph which we consider for pruning, the statistic that we need to upper bound depends on the graph laplacian of . for `` small-world '' graphs above a certain level of connectivity and large enough , the -neighborhood of , , tends to be large, at least at the beginning of the above exact algorithm, and the number of tests actually performed will not decrease much compared to the total number of possible tests.

one can, however, identify much more efficiently the subgraphs whose sample mean shift in the first components of the graph-fourier space has euclidean norm \( \|u_{[k]}^\top(\bar{x}_1(g) - \bar{x}_2(g))\| \) larger than a given threshold. for any , if this threshold is low enough, all the subgraphs with are included in this set. performing the actual -test on these pre-selected subgraphs yields exactly the set of subgraphs that would have been identified using the exact procedure of section [ sec : exact ]. more precisely, we have the following result:

[ lem : nc ] for any threshold , , and any subgraph of size such that \( \|u_{[k]}^\top(\bar{x}_1(g) - \bar{x}_2(g))\|^2 < \theta \), .

letting \( \lambda_{\min}(\hat{\tilde{\sigma}}_{[k]}(g)) \) denote the smallest eigenvalue of \( \hat{\tilde{\sigma}}_{[k]}(g) = u_{[k]}^\top \hat{\sigma}(g)\, u_{[k]} \), it follows that, for any ,
\[ x^\top \big(\hat{\tilde{\sigma}}_{[k]}(g)\big)^{-1} x \;\leq\; \frac{\|x\|^2}{\lambda_{\min}(\hat{\tilde{\sigma}}_{[k]}(g))}. \]
lemma [ lem : nc ] states that any subgraph which would be detected by hotelling's -statistic but not by the euclidean criterion must have a very small smallest empirical variance \( \lambda_{\min}(\hat{\tilde{\sigma}}_{[k]}(g)) \); thus, the remark on variances holds for both the graph-fourier and original spaces. however, if is large, we expect to be very small, while filtering somehow controls the conditioning of the covariance matrix.

testing for homogeneity over the potentially large number of subgraphs investigated as part of the above algorithms immediately raises the issue of multiple testing. however, the present multiplicity problem is unusual, in the sense that one does not know in advance the total number of tests and which tests will be performed specifically. standard multiple testing procedures, such as those in , are therefore not immediately applicable. in an attempt to address the multiplicity issue, we apply a permutation procedure to control the number of false positive subgraphs under the complete null hypothesis of identical distributions in the two populations. specifically, one permutes the class / population labels ( 1 or 2 ) of the observations and applies the non-homogeneous subgraph discovery algorithm to the permuted data to yield a certain number of false positive subgraphs.
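the two ingredients just described, the euclidean-norm pre-selection and the label-permutation estimate of the number of false positives, can be sketched as follows; `discover` stands for any subgraph-discovery routine returning a list of detected subgraphs and is a placeholder, not the authors' implementation:

```python
import numpy as np

def euclidean_screen(x1_g, x2_g, u_g, k, theta):
    """cheap pre-selection for a candidate subgraph g: keep g only if the
    squared euclidean norm of its mean shift in the first k graph-fourier
    components (columns of u_g, from the laplacian of g) reaches theta."""
    delta = u_g[:, :k].T @ (x1_g.mean(axis=0) - x2_g.mean(axis=0))
    return float(delta @ delta) >= theta

def permutation_false_positive_counts(x1, x2, discover, n_perm=100, seed=0):
    """estimate the null distribution of the number of detected subgraphs
    by permuting the class labels of the pooled observations."""
    rng = np.random.default_rng(seed)
    x = np.vstack([x1, x2])
    n1 = x1.shape[0]
    counts = []
    for _ in range(n_perm):
        idx = rng.permutation(x.shape[0])
        counts.append(len(discover(x[idx[:n1]], x[idx[n1:]])))
    return counts
```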
repeating this procedure a sufficiently large number of times produces an estimate of the distribution of the number of type i errors under the complete null hypothesis of identical distributions .we evaluate the empirical behavior of the procedures proposed in sections [ sec : test ] and [ sec : discovery ] , first on synthetic data , then on breast cancer microarray data analyzed in context of kegg pathways .the performance of the graph - structured test is assessed in cases where the distribution shift satisfies the smoothness assumptions described in section [ sec : test ] .we first generate a connected random graph with nodes .next , we generate datasets , each comprising gaussian random vectors in , with null mean shift for datasets and mean shift for the remaining . for the latter datasets ,the non - zero shift is built in the first fourier coefficients ( the shift being zero for the remaining coefficients ) and an inverse fourier transformation is applied to random vectors generated in the graph - fourier space .we consider two covariance settings : in the first one , the covariance matrix in the graph - fourier space is diagonal with diagonal elements at . in the second one, correlation is introduced between the shifted coefficients only . specifically , for , if , otherwise .figure [ fig : rocs ] displays receiver operator characteristic ( roc ) curves for mean shift detection by the standard hotelling -test , in the first fourier coefficients , in the first principal components ( pc ) , the adaptive neyman test of , and a modified version of this test where the correct value of is specified .note that we do not consider sparse learning approaches , but it would be straightforward to design a realistic setting where such approaches are outperformed by testing , _e.g. _ , by adding correlation between some of the functions under .the first important comparison is between the classical hotelling -test versus the -test in the graph - fourier space . as expected from lemma [ lem : das ] , testing in the restricted space where the shift lies performs much better than testing in the full space which includes irrelevant coefficients .the difference can be made arbitrarily large by increasing the dimension and keeping the shift unchanged .the graph - structured test retains a large advantage even for moderately smooth shifts , _e.g. _ , when and .of course , this corresponds to the optimistic case where the number of shifted coefficients is known .figure [ fig : miss ] shows the power of the test in the graph - fourier space for various choices of . even when missing some coefficients ( ) or adding a few non - relevant ones ( ) , the power of the graph - structured test is higher than that of the -test in the full space . 
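the simulation protocol described above can be sketched as follows, reusing a graph-fourier basis `u`; the default constants (sample size, shift norm, correlation level) are illustrative and not the exact values used in the experiments:

```python
import numpy as np

def sample_smooth_shift(u, k, n=20, shift_norm=3.0, rho=0.0, seed=0):
    """draw two gaussian samples whose mean shift lives in the first k
    graph-fourier coefficients of the basis u; rho adds equal correlation
    between the shifted coefficients (it must keep the covariance positive
    definite), mimicking the second covariance setting described above."""
    rng = np.random.default_rng(seed)
    p = u.shape[0]
    mu_hat = np.zeros(p)
    mu_hat[:k] = shift_norm / np.sqrt(k)       # smooth shift, zero above frequency k
    cov_hat = np.eye(p)
    cov_hat[:k, :k] += rho * (1 - np.eye(k))   # correlate only the shifted coefficients
    y1 = rng.multivariate_normal(np.zeros(p), cov_hat, size=n)
    y2 = rng.multivariate_normal(mu_hat, cov_hat, size=n)
    # inverse graph-fourier transform back to the node domain
    return y1 @ u.T, y2 @ u.T
```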
the principal component approach is shown because it was proposed for the application which motivated our work and as it illustrates that the performance improvement originates not only from dimensionality reduction, but also from the fact that this reduction is in a direction that does not decrease the shift. we emphasize that power entirely depends on the nature of the shift and that a pc-based test would outperform our fourier-based test when the shift lies in the first principal components rather than fourier coefficients. the statistics of and are also largely outperformed by our graph-structured statistic ( roc curves not shown in figure [ fig : rocs ] for the sake of readability ), which illustrates that working in the graph-fourier space solves the problem of high-dimensionality for which these statistics were designed. here again, for a non-smooth shift, the comparison would be less favorable. finally, we consider the adaptive neyman test of , which takes advantage of smoothness assumptions for time-series. this test differs from our graph-structured test, as fourier coefficients for stationary time-series are known to be asymptotically independent and gaussian. for graphs, the asymptotics would be in the number of nodes, which is typically small, and necessary conditions such as stationarity are more difficult to define and unlikely to hold for data like gene expression measurements. in the uncorrelated setting, the modified version of the statistic based on the true number of non-zero coefficients performs approximately as well as the graph-structured statistic. however, for correlated data, it loses power and both versions can have arbitrarily degraded performance. this, together with the need to use the bootstrap to calibrate this test, illustrates that direct transposition of the test to the graph context is not optimal.

[ figure fig : miss : power of the -test in the graph-fourier space for various numbers of retained coefficients, with an actual mean shift evenly distributed among the first coefficients. ]

to evaluate the performance of the subgraph discovery algorithms proposed in section [ sec : discovery ], we generated a graph of nodes formed by tightly-connected hubs of sizes sampled from a poisson distribution with parameter 10 and only weak connections between these hubs ( figure [ fig : randg ] ). such a graph structure mimics the typical topology of gene regulation networks. we randomly selected one subgraph of nodes to be non-homogeneous, with smooth shift in the first fourier coefficients. the mean shift was set to zero on the rest of the graph. we set the norm of the mean shift to and the covariance matrix to identity, so that detecting the shifted subgraph is impossible by just looking at the mean shift on the graph. we evaluated run-time for full enumeration, the exact branch-and-bound algorithm based on lemma [ lem : neighbound ] ( section [ sec : exact ] ), and the approximate algorithm based on the euclidean norm ( section [ sec : euclidean ] ). we also examined run-time on data with permuted class labels, as the subgraph discovery procedure is to be run on such data to evaluate the number of false positives and adjust for multiple testing. averaging over runs, the full enumeration procedure took seconds per run and the exact branch-and-bound seconds on the non-permuted data and seconds on permuted data.
over runs ,the approximation at ( ) took seconds ( on permuted data ) and the approximation at ( ) took seconds ( on permuted data ) .the latter approximation missed the non - homogeneous subgraph in of the runs . while neither the exact nor the approximate bounds are efficient enough to allow systematic testing on huge graphs for which the exact approach would be impossible , they allow a significant gain in speed , especially for permuted data , and will thus prove to be very useful for multiple testing adjustment .we also validated our methods using the microarray dataset of , which comprises expression measures for genes in patients treated with tamoxifen . using distant metastasis free survival as a primary endpoint , are labeled as resistant to tamoxifen and are labeled as sensitive to tamoxifen .our goal was to detect structured groups of genes which are differentially expressed between resistant and sensitive patients .we first tested individually connected components from kegg pathways corresponding to known gene regulation networks , using the classical hotelling -test and the -test in the graph - fourier space retaining only the first fourier coefficients ( ) .for each of the graphs , ( unadjusted ) -values were computed under the nominal -distributions and , respectively .figure [ fig : path ] shows the pathway for which the ratio of graph - fourier to full space -values is the lowest ( _ i.e. _ , most significant for graph - structured test relative to classical test ) and the pathway for which it is the highest .as expected , the former corresponds to a shift which appears to be coherent with the network ( even on edges corresponding to inhibition ) , while the latter is a small network with non - smooth shift . more generally , the classical approach tends to select very small networks .the coherent pathway selected by our graph - structured test corresponds to _ leukocyte transendothelial migration_. to the best of our knowledge , this pathway is not specifically known to be involved in tamoxifen resistance . however , its role in resistance is plausible , as leukocyte infiltration was recently found to be involved in breast tumor invasion ; more generally , the immune system and inflammatory response are closely - related to the evolution of cancer . 
[ figure fig : path : top : regulation network with the lowest ratio of graph-fourier to full space -values. bottom : regulation network ( alzheimer's disease ) with the highest ratio of graph-fourier to full space -values. nodes are colored according to the value of the difference in means, with green corresponding to high positive values, red to high negative values, and black to zero. red arrows denote activations, blue arrows inhibition. ]

we then ran our branch-and-bound non-homogeneous subgraph discovery procedure on the cell cycle pathway, which, after restriction to edges of known sign ( inhibition or activation ), has nodes and edges. specifically, we sought to detect differentially expressed subgraphs of size , after pre-selecting those for which the squared euclidean norm of the empirical shift exceeded ; for a test in the first fourier components at level , this corresponded to and to an expected removal of of the subgraphs under the approximation that the squared euclidean norm of the subgraphs follows a -distribution. for , none of the runs on permuted data gave any positive subgraph and overlapping subgraphs ( figure [ fig : firstgraphsign ] ) were detected on the original data, corresponding to a connected subnetwork of genes. some of these genes have large individual differential expression, namely tp53 whose mutation has been long-known to be involved in tamoxifen resistance. e2f1, whose expression level was recently shown to be involved in tamoxifen resistance, is also part of the identified network, as well as ccnd1. some other genes in the network have quite low -statistics and would not have been detected individually. this is the case of ccne1 and cdk2, which were also described in as part of the same mechanism as e2f1. similarly, cdkn1a was recently found to be involved in anti-estrogen treatment resistance and in ovarian cancer, which is also a hormone-dependent cancer. the network also contains rb1, a tumor suppressor whose expression or loss is known to be correlated to tamoxifen resistance. rb1 is inhibited by cdk4, whose inhibition has been described in as acting synergistically with tamoxifen or trastuzumab. more generally, a large part of the network displayed on figure 2a of is included in our network, along with other known actors of tamoxifen resistance. our system-based approach to pathway discovery therefore directly identifies a set of interacting important genes and may prove to be more efficient than iterative individual identification of single actors.

[ figure fig : firstgraphsign : the subgraphs detected on the cell cycle pathway, forming a connected subnetwork of genes. nodes are colored according to the value of the difference in means, with green corresponding to high positive values, red to high negative values, and black to zero. red arrows denote activations, blue arrows inhibition.
]we developed a graph - structured two - sample test of means , for problems in which the distribution shift is assumed to be smooth on a given graph .we proved quantitative results on power gains for such smooth - shift alternatives and devised branch - and - bound algorithms to systematically apply our test to all the subgraphs of a large graph .the first algorithm is exact and reduces the number of explicitly tested subgraphs .the second is approximate , with no false positives and a quantitative result on the type of false negatives ( with respect to the exact algorithm ) .the non - homogeneous subgraph discovery method involves performing a larger number of tests , with highly - dependent test statistics .however , as the actual number of tested hypotheses is unknown , standard multiple testing procedures are not directly applicable .instead , we use a permutation procedure to estimate the distribution of the number of false positive subgraphs . such resampling procedures ( bootstrap or permutation )are feasible due to the manageable run - time of the pruning algorithms of section [ sec : discovery ] .results on synthetic data illustrate the good power properties of our graph - structured test under smooth - shift alternatives , as well as the good performance of our branch - and - bound - like algorithms for subgraph discovery .very promising results are also obtained on the drug resistance microarray dataset of .future work should investigate the use of other bases , such as graph - wavelets , which would allow the detection of shifts with spatially - located non - smoothness , for example , to take into account errors in existing networks .more systematic procedures for cutoff selection should also be considered , _e.g. _ , two - step method proposed in or adaptive approaches as in .the pruning algorithm would naturally benefit from sharper bounds .such bounds could be obtained by controlling the condition number of all covariance matrices , using , for example , regularized statistics which still have known non - asymptotic distributions , such as those of . concerning multiple testing, procedures should be devised to exploit the dependence structure between the tested subgraphs and to deal with the unknown number of tests .the proposed approach could also be enriched to take into account different types of data , _e.g. _ , copy number for the detection of de gene pathways .more subtle notions of smoothness , _e.g. _ , `` and '' and `` or '' logical relations , could also be included .an interesting alternative application would be to explore the list of pathways which are known to be differentially expressed ( or detected by the classical -test ) , but which are not detected by the graph - fourier approach , to infer possible mis - annotation in the network .other applications of two - sample tests with smooth - shift on a graph include fmri and eqtl association studies . 
finally , it would be of interest to compare our testing approach with structured sparse learning , for the purpose of identifying expression signatures that are predictive of drug resistance .methods should be compared in terms of prediction accuracy and stability of the selected genes across different datasets , a central and difficult problem in the design of such signatures .the comparison should also take into account the merits of the sparsity - inducing norm over the hypothesis testing - based selection , as well as the influence of the smoothness assumption .the latter could indeed also be integrated in a sparsity - inducing penalty by applying , _e.g. _ , to the reduced graph - fourier representation of the pathways , yielding a special case of multiple kernel learning .the authors thank zad harchaoui , nourredine el karoui , and terry speed for very helpful discussions and suggestions , and the uc berkeley center for computational biology genentech innovation fellowship and the cancer genome atlas project for funding .
|
we consider multivariate two - sample tests of means , where the location shift between the two populations is expected to be related to a known graph structure . an important application of such tests is the detection of differentially expressed genes between two patient populations , as shifts in expression levels are expected to be coherent with the structure of graphs reflecting gene properties such as biological process , molecular function , regulation , or metabolism . for a fixed graph of interest , we demonstrate that accounting for graph structure can yield more powerful tests under the assumption of smooth distribution shift on the graph . we also investigate the identification of non - homogeneous subgraphs of a given large graph , which poses both computational and multiple testing problems . the relevance and benefits of the proposed approach are illustrated on synthetic data and on breast cancer gene expression data analyzed in context of kegg pathways .
|
scheduling problems have been studied extensively from the point of view of the objectives of the enterprise that stands to gain from the completion of a set of jobs. we take a new look at the problem from the point of view of the workers who perform the tasks that earn the company its profits. in fact, it is natural to expect that some employees may lack the motivation to perform at their peak levels of efficiency, either because they have no stake in the company's profits or because they are simply lazy. the following example illustrates the situation facing a `` typical '' office worker, who may be one small cog in a large bureaucracy:

_example._ it is 3:00 p.m., and dilbert goes home at 5:00 p.m. dilbert has two tasks that have been given to him: one requires 10 minutes, the other requires an hour. if there is a task in his `` in-box, '' dilbert must work on it, or risk getting fired. however, if he has multiple tasks, dilbert has the freedom to choose which one to do first. he also knows that at 3:15, another task will appear: a 45-minute personnel meeting. if dilbert begins the 10-minute task first, he will be free to attend the personnel meeting at 3:15 and then work on the hour-long task from 4:00 until 5:00. on the other hand, if dilbert is part way into the hour-long job at 3:15, he may be excused from the meeting. after finishing the 10-minute job by 4:10, he will have 50 minutes to twiddle his thumbs, iron his tie, or enjoy engaging in other mindless trivia. naturally, dilbert prefers this latter option.
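the trade-off in the example can be checked with a few lines of arithmetic; the encoding below is only a sketch of the two options described above (durations in minutes, with a two-hour horizon from 3:00 to 5:00 p.m.):

```python
# option a: 10-minute task first, so dilbert is free at 3:15 and must
# attend the 45-minute meeting, then the hour-long task until 5:00.
option_a = [10, 45, 60]

# option b: start the hour-long task before 3:15 (excused from the
# meeting), then the 10-minute task; idle from 4:10 until 5:00.
option_b = [60, 10]

horizon = 120  # minutes between 3:00 and 5:00 p.m.
for name, tasks in (("a", option_a), ("b", option_b)):
    work = sum(tasks)
    print(f"option {name}: {work} minutes of work, {horizon - work} minutes free")
```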
an historical example of a situation where it proved crucial to schedule tasks inefficiently is documented in the book / movie _schindler's list_. it was essential for the workers and management of schindler's factory to appear to be busy at all times in order for the factory to stay in operation, but they simultaneously sought to minimize their contribution to the german war effort.

these examples illustrate a general and natural type of scheduling problem, which we term the `` lazy bureaucrat problem '' ( lbp ); the goal of the lbp is to schedule jobs as _inefficiently_ ( in some sense ) as possible. there exists a vast literature on scheduling; see e.g., some of the recent surveys. the lbp studies these traditional problems `` in reverse. '' several other combinatorial optimization problems have also been studied in reverse, leading, e.g., to maximum tsp, maximum cut, and longest path; such inquiries often lead to a better understanding of the structure and algorithmic complexity of the original optimization problem.

in this paper we schedule a set of jobs having processing times ( lengths ) respectively. job _arrives_ at time and has its _deadline_ at time . we assume throughout this paper that , , and have nonnegative integral values. the jobs have _hard deadlines_, meaning that each job can only be executed during its allowed interval . in the nonpreemptive setting, a job, once begun, is run to completion. in the preemptive setting the constraints that govern whether or not a job can be executed are more complicated; see section [ sec : preempt ]. in traditional scheduling problems, if it is impossible to complete the set of all jobs by their deadlines, one typically optimizes according to some objective, e.g.
, to maximize a weighted sum of on - time jobs , to minimize the maximum lateness of the jobs , or to minimize the number of late jobs . for the lbp we consider three different objective functions , which naturally arise from the bureaucrat s goal of inefficiency : 1 ._ minimize the total amount of time spent working _ this objective naturally appeals to a `` lazy '' bureaucrat .2 . _ minimize the weighted sum of completed jobs _ in this paper we usually assume that the weight of job is its length , ; however , other weights ( e.g. , unit weights ) are also of interest .this objective appeals to a `` spiteful '' bureaucrat whose goal it is to minimize the fees that the company collects on the basis of his labors , assuming that the fee ( in proportion to the task length , or a fixed fee per task ) is collected only for those tasks that are actually completed .minimize the _ makespan _, the maximum completion time of the jobs _ this objective appeals to an `` impatient '' bureaucrat , whose goal it is to go home as early as possible , at the completion of the last job he is able to complete .he cares about the number of hours spent at the office , not the number of hours spent doing work ( productive or otherwise ) at the office .+ note that , in contrast with standard scheduling problems on one processor , the makespan in the lbp varies ; it is a function of which jobs have passed their deadlines and can no longer be executed . as with most scheduling problems , additional parameters of the model must be set .for example , we must explicitly allow or forbid _ preemption _ of jobs .when a job is _ preempted _ , it is interrupted and may be resumed later at no additional cost . if we forbid preemption , then once a job is begun , it must be completed without interruptions .we must also specify whether scheduling occurs _ on - line _ or _off - line ._ a scheduling algorithm is off - line if all the jobs are known to the scheduler at the outset ; it is on - line if the jobs are known to the scheduler only as they arrive . in this paperwe restrict ourselves to off - line scheduling ; we leave the on - line case as an open problem .# 1 [ cols="<,^,^",options="header " , ] recently hepner and stein published a pseudo - polynomial - time algorithm for minimizing the makespan subject to preemption constraint ii , thus resolving an open problem from an earlier version of this paper .they also extend the lbp to the parallel setting , in which there are multiple bureaucrats .in this section , we assume that no job can be preempted : if a job is started , then it is performed without interruption until it completes .we show that the lazy bureaucrat problem ( lbp ) without preemption is strongly np - complete and is not approximable to within any factor for the three metrics we consider .these hardness results distinguish our problem from traditional scheduling metrics , which can be approximated in polynomial time , as proved in .we show , however , that several special cases of the problem have pseudo - polynomial - time algorithms , using applications of dynamic programming .we begin by describing the relationship between the three different objective functions from section [ sec : hard ] in the case of no preemption . the problem of minimizing the total work ( objective function 1 ) is a special case of the problem of minimizing the weighted sum of completed jobs ( objective function 2 ) , because without preemption every job that is executed must be completed .( the weights become the job lengths . 
) furthermore , if all jobs have the same arrival time , say time zero , then the two objectives minimizing the total amount of time spent working and minimizing the makespan ( go home early ) are equivalent ( objective functions 1 and 3 ) , since no feasible schedule will have any gaps .our first hardness theorem applies therefore to all three objective functions from section [ sec : hard ] .[ thm : hard1 ] the lazy bureaucrat problem with no preemption is ( weakly ) np - complete for objective functions ( 1)-(3 ) , and is not approximable to within any fixed factor , even when all arrival times are the same .we use a reduction from the subset sum problem : given a set of integers and a target integer , does there exist a subset , such that ?we construct an instance of the lbp having jobs , each having release time zero ( for all ) . for ,job has processing time and deadline .job has processing time and deadline ; thus , job can be started at time or earlier . because job is so long , the bureaucrat wants to avoid executing it , but can do so if and only if he selects a subset of jobs from to execute whose lengths sum to exactly . in summary , the large job is executed if and only if the subset problem is solved exactly and executing the long job leads to a schedule whose makespan ( i.e. , total work executed ) is not within any fixed factor of the optimal solution .we now show that the lbp with no preemption is strongly np - complete . as we will show in section [ pseudo] , the lbp from theorem [ thm : hard1 ] when all arrival times are equal , has a pseudo - polynomial - time algorithm .however , if arrival times and deadlines are arbitrary integers , the problem becomes strongly np - complete .thus , the following theorem subsumes theorem [ thm : hard1 ] when arrival times and deadlines our unconstrained , whereas theorem [ thm : hard1 ] is more generally applicable .[ thm : hard2 ] the lazy bureaucrat problem with no preemption is strongly np - complete for objective functions ( 1)(3 ) , and is not approximable to within any fixed factor .clearly the problem is in np , since any solution can be represented by an ordered list of jobs , given their arrival times . to show hardness, we use a reduction from the 3-partition problem : given a set of positive integers and a positive integer bound such that , for and , does there exist a partitioning of into disjoint sets , , such that for , ? ( note that , by the assumption that , each set must contain exactly 3 elements . )objective function ( 1 ) is a special case of objective function ( 2 ) because without preemption , any job that is begun must be completed .furthermore , hard instances will be designed so that there are no gaps , ensuring that the optimal solution for objective function ( 1 ) is also the optimal solution for objective function ( 3 ) .we construct an instance of the lbp containing three classes of jobs : * _ element jobs _ we define one `` element job '' corresponding to each element , having arrival time , deadline , and processing time . * _ unit jobs _ we define `` unit '' jobs , each of length . the -th unit job ( for )has arrival time and deadline .note that for these unit - length jobs we have ; thus , these jobs must be processed immediately upon their arrival , or not at all . 
*_ large job _ we define one `` large '' job of length , arrival time , and deadline .note that in order to complete this job , it must be started at time or before .as in the proof of theorem [ thm : hard1 ] , the lazy bureaucrat wants to avoid executing the long job , but can do so if and only if all other jobs are actually executed .otherwise , there will be a time when the large job is the only job in the system and the lazy bureaucrat will be forced to execute it .thus , the unit jobs must be done immediately upon their arrival , and the element jobs must fit in the intervals between the unit jobs .each such interval between consecutive unit jobs is of length exactly .refer to figure [ fig : no - preempt - hard ] . in summary ,the long job is not processed if and only if all of the element and unit jobs can be processed before their deadlines , which happens if and only if the corresponding instance of 3-partition is a `` yes '' instance .note that since can be as large as we want , this also implies that no polynomial - time approximation algorithm with any fixed approximation bound can exist , unless p = np .consider the special case of the lbp in which all jobs have unit processing times .( recall that all inputs are assumed to be integral . )the latest due date ( ldd ) scheduling policy selects the job in the system having the latest deadline .note that this policy in nonpreemptive for unit - length jobs , since all jobs have integral arrival times .consider the latest deadline first scheduling policy when jobs have unit lengths and all inputs are integral .the ldd scheduling policy minimizes the amount of executed work .assume by contradiction that no optimal schedule is ldd .we use an exchange argument . consider an optimal ( non - ldd ) schedule that has the fewest pairs of jobs executed in non - ldd order .the schedule must have two neighboring jobs such that in the schedule but , and is in the system when starts its execution .consider the first such pair of jobs .there are two cases : \(1 ) the new schedule with and switched , is feasible .it executes no more work than the optimal schedules , and is therefore also optimal .\(2 ) the schedule with and switched is not feasible .this happens if s deadline has passed .if no job is in the system to replace , then we obtain a better schedule than the optimal schedule and reach a contradiction .otherwise , we replace with the other job and repeat the switching process .we obtain a schedule executing no more work than an optimal schedule , but with fewer pairs of jobs in non - ldd order , a contradiction .consider now the version in which jobs are large in comparison with their intervals , that is , the intervals are `` narrow . ''let be a bound on the ratio of window length to job length ; i.e. , for each job , . we show that a pseudo - polynomial algorithm exists for the case of sufficiently narrow windows , that is , when .[ lem : unique - ordering ] assume that for each job , .then , if job can be scheduled before job , then job can not be scheduled before job .we rewrite the assumption : for each , .the fact that job can be scheduled before job is equivalent to the statement that , since the earliest that job can be completed is at time and the latest that job can be started is at time .combining these inequalities , we obtain which implies that job can not be scheduled before job . 
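a small simulation of the ldd rule for unit-length jobs with integral arrival times and deadlines, as discussed above, is sketched below; jobs are (arrival, deadline) pairs and a job may be run in the unit slot starting at time t only if it has arrived and can still finish by its deadline:

```python
def ldd_unit_schedule(jobs):
    """simulate latest-due-date scheduling of unit-length jobs.
    jobs: list of (arrival, deadline) pairs with integral values.
    returns the number of jobs executed (= total work for unit jobs)."""
    horizon = max(d for _, d in jobs)
    remaining = list(jobs)
    executed = 0
    for t in range(horizon):
        # jobs runnable in slot [t, t+1): already arrived, still completable
        avail = [j for j in remaining if j[0] <= t and t + 1 <= j[1]]
        if avail:                                  # the bureaucrat may not idle
            pick = max(avail, key=lambda j: j[1])  # latest deadline first
            remaining.remove(pick)
            executed += 1
    return executed

# three unit jobs: ldd executes only 2 of them, while an earliest-due-date
# schedule would be forced to execute all 3
print(ldd_unit_schedule([(0, 1), (0, 3), (1, 2)]))
```

on this instance the ldd rule skips the job due at time 1, which then expires and can no longer be executed, illustrating how serving the latest deadline first minimizes the executed work.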
under the assumption that for each , the ordering of any subset of jobs in a schedule is uniquely determined .suppose that for each job , .let .consider the problem of minimizing objective functions ( 1)-(3 ) from section [ subsec - model ] , in the nonpreemptive setting .then the lbp can be solved in time .[ fixed - window - dp ] we use dynamic programming to find the shortest path in a directed acyclic graph ( dag ) .there are states the system can enter .let denote the state of the system when the processor begins executing the -th unit of work of job at time .thus , , , and .transitions from state to state are defined according to the following rules : 1 .no preemption : once a job is begun , it must be completed without interruptions . 2 .when a job is completed at time , another job must begin immediately if one exists in the system .( by lemma [ lem : unique - ordering ] , we know this job has not yet been executed . ) otherwise , the system is idle and begins executing a job as soon as one arrives .3 . state is an end state if and only if when job completes at time , no jobs can be executed subsequently .the start state has transitions to the jobs that arrive first .the goal of the dynamic program is to find the length of a shortest path from the start state to an end state .depending on how we assign weights to the edges we can force our algorithm to minimize all three metrics from section [ subsec - model ] . to complete the time analysis , note that only of the states have more than constant outdegree , and these states each have outdegree bounded by . for we know of no efficient algorithm without additional conditions .let be a bound on the ratio of longest window to shortest window , and let be a bound on the ratio of the longest job to the shortest job .note that bounds on and imply a bound on , and bounds on and imply a bound on .however , a bound on alone is not sufficient for a pseudo - polynomial - time algorithm .[ thm : bounded - delta ] even with a bound on the ratio , the lbp with no preemption is strongly np - complete for objective functions ( 1)-(3 ) .it can not be approximated to within a factor of , for any , unless p = np .modify the reduction from 3-partition of theorem [ thm : hard2 ] , by changing all the fixed `` unit '' jobs to have length , and adjust the arrival times and deadlines accordingly . instead of onevery long job as in the proof from theorem [ thm : hard2 ] , we create a sequence of bounded - length jobs that serve the same purpose .one unit before the deadline of the `` element '' jobs ( see theorem [ thm : hard2 ] ) a sequence of longer jobs arrives .each job entirely fills its window and so can only be executed directly when it arrives .job arrives at the deadline of job . in addition , a sequence of shorter jobs arrives , where each shorter job also entirely fills its window and can only be executed when it arrives .shorter job overlaps and ; it arrives one unit before the deadline of job .jobs have length and jobs have length .thus , if all the jobs comprising the -partition problem can be executed , jobs will be avoided by executing jobs .otherwise , jobs must be executed .the index of can be adjusted to any .bounds on both and are sufficient to yield a pseudo - polynomial algorithm : let . 
given bounds on and , the lazy bureaucrat problem with no preemption can be solved in for objective functions ( 1)-(3 ) .we modify the dynamic programming algorithm of theorem [ fixed - window - dp ] for this more complex situation .the set of jobs potentially available to work on in a given schedule at time are the jobs that have not yet been executed , for which .our state space will encode the _complement _ of this set for each time , specifically , the set of jobs that were executed earlier but could otherwise have been executed at time .the bounds on and together imply an upper bound on the number of subsets of jobs active at time that could have been executed prior to time .let be the length of the shortest job potentially active at time .we can partition all potentially active jobs into classes , where the class consists of the jobs of size greater than or equal to and less than .the earliest possible arrival time of any class- job is , since each job has an -bounded window .only jobs from class can be executed within this window .summing over all the classes implies that at most jobs potentially active at time could have been executed in a non - preemptive schedule by time . as before the choice of weights on the edges determines the metric that is optimized .the time bound on the running time follows by observing that each of the states has outdegree at most . in the next version of the problemall jobs are released at time zero , i.e. , for all .this problem can be solved in pseudo - polynomial time by dynamic programming , specifically , reducing the problem to that of finding the shortest path in a directed acyclic graph .the dynamic programming works because of the following structural result : there exists an optimal schedule that executes the jobs earliest due date ( edd ) .in fact this problem is a special case of the following general problem : minimizing the weighted sum of jobs not completed by their deadlines .a similar problem was solved by , using the same structural result .the lbp can be solved in pseudo - polynomial time for all three metrics when all jobs have a common release time .specifically , let ; then the running time is this section we consider the lazy bureaucrat problem in which jobs may be preempted : a job in progress can be set aside , while another job is processed , and then possibly resumed later .it is important to distinguish among different constraints that specify which jobs are available to be processed .we consider three natural choices of such constraints : constraint i : : : in order to work on job at time , we require only that the current time lies within the job s interval : .constraint ii : : : in order to work on job at time , we require not only that the current time lies within the job s interval , but also that the job has a _ chance _ to be completed , e.g. 
, if it is processed without interruption until completion .+ this condition is equivalent to requiring that , where is the _ adjusted critical time _ of job : is the latest possible time to start job , in order to meet its deadline , given that an amount of the job has already been completed .constraint iii : : : in order to work on job , we require that .further , we require that any job that is started is eventually completed .we divide this section into subsections , where each subsection considers one of the three objective functions ( 1)(3 ) from section [ subsec - model ] , in which the goals are to minimize ( 1 ) the total time working ( regardless of which jobs are completed ) , ( 2 ) the weighted sum of completed jobs , or ( 3 ) the makespan of the schedule ( the `` go home '' time ) . for each metricwe see that the constraints on preemption can dramatically affect the complexity of the problem .constraint iii makes the lbp with preemption quite similar to the lbp with no preemption .in fact , if all jobs arrive at the same time ( for all ) , then the three objective functions are equivalent , and the problem is hard : [ hard - pre-3 ] the lbp with preemption , under constraint iii ( one must complete any job that is begun ) , is ( weakly ) np - complete and hard to approximate for all three objective functions .we use the same reduction as the one given in the proof of theorem [ thm : hard1 ] .note that any schedule for an instance given by the reduction , in which all jobs processed must be completed eventually , can be transformed into an equivalent schedule with no preemptions .this makes the problem of finding an optimal schedule with no preemption equivalent to the problem of finding an optimal schedule in the preemptive case under constraint iii .note that we can not use a proof similar to that of theorem [ thm : hard2 ] to show that this problem is strongly np - complete , since preemption can lead to improved schedules in that instance .[ thm : preempt - i.1 ] the lbp with preemption , under constraint i ( one can work on any job in its interval ) and objective ( 1 ) ( minimize total time working ) , is polynomially solvable. the algorithm schedules jobs according to latest due date ( ldd ) , in which at all times the job in the system with the latest deadline is being processed , with ties broken arbitrarily .an exchange argument shows that this is optimal .suppose there is an optimal schedule that is not ldd .consider the first time in which an optimal schedule differs from ldd , and let opt be an optimal schedule in which this time is as late as possible .let opt be executing a piece of job , and ldd executes a piece of job , .we know that .we want to show that we can replace the first unit of by one unit of , contradicting the choice of opt , and thereby proving the claim .if in opt , job is not completely processed , then this swap is feasible , and we are done . on the other hand , if all of is processed in opt , such a swap causes a unit of job later on to be removed , leaving a gap of one unit .if this gap can not be filled by any other job piece , we get a schedule with less work than opt , which is a contradiction .therefore assume the gap can be filled , possibly causing a later unit gap. 
continue this process , and at its conclusion , either a unit gap remains contradicting the optimality of opt , or no gaps remain , contradicting the choice of opt .the lbp with preemption , under constraint ii ( one can only work on jobs that can be completed ) and objective ( 1 ) ( minimize total time working ) , is ( weakly ) np - complete . if all arrival times are the same , then this problem is equivalent to the one in which the objective function is to minimize the makespan , which is shown to be np - complete in theorem [ thm : preempt - ii.3 ] . the lbp with preemption , under constraint i ( one can work on any job in its interval ) and objective ( 2 ) ( minimize the weighted sum of completed jobs ) , is polynomially solvable . without loss of generality ,assume that jobs are indexed in order of increasing deadlines .we show how to decompose the jobs into separate components that can be treated independently .schedule the jobs according to edd ( if a job is executing and its deadline passes , preempt and execute the next job ) .whenever there is a gap ( potentially of size zero ) , where no jobs are in the system , the jobs are divided into separate components that can be scheduled independently and their weights summed .now we focus on one such set of jobs ( having no gaps ) .we modify the edd schedule by preempting a job units of time before it completes. then we move the rest of the jobs of the schedule forward by time units and continue to process . at the end of the schedule, there are two possibilities .( 1 ) the last job is interrupted because its deadline passes ; in this case we obtain a schedule in which no jobs are completed ; ( 2 ) the last job completes and in addition all other jobs whose deadlines have not passed are also forced to complete . the proof is completed by noting the following : * there is an optimal schedule that completes all of its jobs at the end ; and * the above schedule executes the maximum amount of work possible .( in other words , edd ( `` minus '' ) allows one to execute the maximum amount of work on jobs 1 through without completing any of them . ) the lbp with preemption , under constraint ii ( one can only work on jobs that can be completed ) and objective ( 2 ) ( minimize the weighted sum of completed jobs ) , is ( weakly ) np - complete .consider the lbp under constraint ii , where the objective is to minimize the makespan .the proof of theorem [ thm : preempt - ii.3 ] will have hard instances where all jobs have the same arrival time , and where the optimal solution completes any job that it begins .thus , for these instances the metric of minimizing the makespan is equivalent to the metric of minimizing the weighted sum of completed jobs , for weights proportional to the processing times .we assume now that the bureaucrat s goal is to go home as soon as possible .we begin by noting that if the arrival times are all the same ( , for all ) , then the objective ( 3 ) ( go home as soon as possible ) is in fact equivalent to the objective ( 1 ) ( minimize total time working ) , since , under any of the three constraints i iii , the bureaucrat will be busy nonstop until he can go home .observe that if the _ deadlines _ are all the same ( , for all ) , then the objectives ( 1 ) and ( 3 ) are quite different . 
consider the following example .job 1 arrives at time and is of length , job 2 arrives at time and is of length , job 3 arrives at time and is of length , and all jobs have deadline .then , in order to minimize total time working , the bureaucrat will do jobs 1 and 3 , a total of 4 units of work , and will go home at time 10 .however , in order to go home as soon as possible , the bureaucrat will do job 2 , performing 9 units of work , and go home at time 9 ( since there is not enough time to do either job 1 or job 3 ) .[ thm : preempt - i.3 ] the lbp with preemption , under constraint i ( one can do any job in its interval ) and objective ( 3 ) ( go home as early as possible ) , is polynomially solvable . the algorithm is to schedule by latest due date ( ldd ) .the proof is similar to the one given in theorem [ thm : preempt - i.1 ] .if instead of constraint i we impose constraint ii , the problem becomes hard : [ thm : preempt - ii.3 ] the lbp with preemption , under constraint ii ( one can only work on jobs that can be completed ) and objective ( 3 ) ( go home as early as possible ) , is ( weakly ) np - complete , even if all arrival times are the same .we give a reduction from subset sum .consider an instance of subset sum given by a set of positive integers , , , and target sum .we construct an instance of the required version of the lbp as follows . for each integer , we have a job that arrives at time , has length , and is due at time , where is a small constant ( it suffices to use ) .in addition , we have a `` long '' job , with length , that arrives at time and is due at time .we claim that it is possible for the bureaucrat to go home by time if and only if there exists a subset of that sums to exactly .if there is a subset of that sums to exactly , then the bureaucrat can perform the corresponding subset of jobs ( of total length ) and go home at time ; he is able to avoid doing any of the other jobs , since their critical times fall at an earlier time ( or ) , making it infeasible to begin them at time , by our assumption .if , on the other hand , the bureaucrat is able to go home at time , then we know the following : 1 . _the bureaucrat must have just completed a job at time . _+ he can not quit a job and go home in the middle of a job , since the job must have been completable at the instant he started ( or restarted ) working on it , and it remains completable at the moment that he would like to quit and go home ._ the bureaucrat must have been busy the entire time from 0 until time . _ + he is not allowed to be idle for any period of time , since he could always have been working on some available job , e.g. , job .if the bureaucrat starts a job , then he must finish it . _+ first , we note that if he starts job and does at least of it , then he must finish it , since at time less than remains to be done of the job , and it is not due until time , making it feasible to return to the job at time ( so that he can not go home at time ) .+ second , we must consider the possibility that he may perform very small amounts ( less than ) of some jobs without finishing them . 
however , in this case , the _ total _ amount that he completes of these barely started jobs is at most .this is a contradiction , since his total work time consists of this fractional length of time , plus the sum of the integral lengths of the jobs that he completed , which can not add up to the integer .thus , in order for him to go home at exactly time , he must have completed every job that he started .+ finally , note that he can not use job as `` filler '' , and do part of it before going home at time , since , if he starts it and works at least time on it , then , by the same reasoning as above , he will be forced to stay and complete it .thus , he will not start it at all , since he can not complete it before time ( recall that ) .we conclude that the bureaucrat must complete a set of jobs whose lengths sum exactly to .thus , we have reduced subset sum to our problem , showing that it is ( weakly ) np - complete .note that the lbp we have constructed has non integer data .however , we can `` stretch '' time to get an equivalent problem in which all the data is integral . letting , we multiply all job lengths and due dates by . _ remark . _hepner and stein recently published a pseudo - polynomial - time algorithm for this problem , thus resolving an open problem from an earlier version of this paper .we come now to one of the main results of the paper .we emphasize this result because it uses a rather sophisticated algorithm and analysis in order to show that , in contrast with the case of identical arrival times , the lbp with identical deadlines is polynomially solvable .specifically , the problems addressed in theorems [ thm : preempt - ii.3 ] and [ thm : preempt - ii.3-same - dead ] are identical except that the flow of time is reversed .thus , we demonstrate that in contrast to most classical scheduling problems , in the lbp , when time flows in one direction , the problem is np - hard , whereas when the flow of time is reversed , the problem is polynomial - time solvable .the remainder of this section is devoted to proving the following theorem : [ thm : preempt - ii.3-same - dead ] the lbp with preemption , under constraint ii ( one can only work on jobs that can be completed ) and objective ( 3 ) ( go home as early as possible ) , is solvable in polynomial time if all jobs have the same deadlines ( , for all ) .we begin with a definition a `` forced gap : '' there is a _ forced gap _ starting at time if is the earliest time such that the total work arriving by time is less than .this ( first ) forced gap ends at the arrival time , , of the next job .subsequently , there may be more forced gaps , each determined by considering the scheduling problem that starts at the end , , of the previous forced gap .we note that a forced gap can have length zero . under the `` go home early '' objective , we can assume , without loss of generality , that there are no forced gaps , since our problem really begins only at the time that the _ last _ forced gap ends .( the bureaucrat is certainly not allowed to go home before the end of the last forced gap , since more jobs arrive after that can be processed before their deadlines . 
)while an optimal schedule may contain gaps that are not forced , the next lemma implies that there exists an optimal schedule having no unforced gaps .consider the lbp of theorem [ thm : preempt - ii.3-same - dead ] , and assume that there are no forced gaps .if there is a schedule having makespan , then there is a schedule with no gaps , also having makespan .consider the first gap in the schedule , which begins at time .because the gap is not forced , there is some job that is not completed , and whose critical time is at time .this is because there must be a job that arrived before that is not completed in the schedule , and at time it is no longer feasible to complete it , and therefore its critical time is before .the interval of time between and may consist of ( 1 ) gaps , ( 2 ) work on completed jobs , and ( 3 ) work on jobs that are never completed .consider a revised schedule in which , after time , jobs of type 3 are removed , and jobs of type 2 are deferred to the end of the schedule .( since a job of type 2 is completed and all jobs have the same deadline , we know that it is possible to move it later in the schedule without passing its critical time . it may not be possible to move a ( piece of a ) job of type 3 later in the schedule , since its critical time may have passed . ) in the revised schedule , extend job to fill the empty space .note that there is enough work in job to fill the space , since a critical time of means that the job must be executed continuously until deadline in order to complete it .consider an lbp of theorem [ thm : preempt - ii.3-same - dead ] in which there are no forced gaps .any feasible schedule can be rearranged so that all completed jobs are ordered by their arrival times and all incomplete jobs are ordered by their arrival times .the proof uses a simple exchange argument as in the standard proof of optimality for the edd ( earliest due date ) policy in traditional scheduling problems .our algorithm checks if there exists a schedule having no gaps that completes exactly at time .assume that the jobs are labeled so that .the main steps of the algorithm are as follows : 1 .determine the forced gaps .this allows us to reduce to a problem having no forced gaps , which starts at the end of the last forced gap .+ the forced gaps are readily determined by computing the partial sums , , for , and comparing them to the arrival times .( we define . ) the first forced gap , then , begins at the time and ends at time .( if ; if there are no forced gaps . ) subsequent forced gaps , if any , are computed similarly , just by re - zeroing time at , and proceeding as with the first forced gap .2 . let be the length of time between the common deadline and our target makespan .a job for which is called _ short _ ; jobs for which are called _ long_. +if it is _ not _ possible to schedule the set of short jobs so that each is completed and they are all done by time , then our algorithm stops and returns `` no , '' concluding that going home by time is impossible .otherwise , we continue with the next step of the algorithm .+ the rationale for this step is the observation that any job of length at most must be completed in any schedule that permits the bureaucrat to go home by time , since its critical time occurs at or after time .3 . 
create a schedule of all of the jobs , ordered by their arrival times , in which the amount of time spent on job is if the job is short ( so it is done completely ) and is if the job is long .+ for a long job , is the maximum amount of time that can be spent on this job without committing the bureaucrat to completing the job , i.e. , without causing the adjusted critical time of the job to occur after time .+ if this schedule has no gaps and ends at a time after , then our algorithm stops and returns `` yes . '' a feasible schedule that allows the bureaucrat to go home by time is readily constructed by `` squishing '' the schedule that we just constructed : we reduce the amount of time spent on the long jobs , starting with the latest long jobs and working backwards in time , until the completion time of the last short job exactly equals .this schedule completes all short jobs ( as it should ) , and does partial work on long jobs , leaving all of them with adjusted critical times that fall _ before _ time ( and are therefore not possible to resume at time , so they can be avoided ) .if the above schedule has gaps or ends before time , then is not a feasible schedule for the lazy bureaucrat , so we must continue the algorithm .+ our objective is to decide _ which _ long jobs to complete , that is , is there a set of long jobs to complete that will make it possible to go home by time .this problem is solved using the dynamic programming algorithm _ schedule - by- _ , which is described in detail below .let be the sum of the gap lengths that occur before time in schedule .then , we know that in order to construct a gapless schedule , at least long jobs ( in addition to the short jobs ) from must be completed. for each we have such a constraint ; collectively , we call these the _ gap constraints_. if for each gap in schedule , there are enough long jobs to be completed in order to fill the gap , then a feasible schedule ending at exists .we devise a dynamic programming algorithm as follows .let be the earliest completion time of a schedule that satisfies the following : 1 .it completes by time ; 2 .it uses jobs from the set ; 3 .it completes exactly jobs and does no other work ( so it may have gaps , making it an infeasible schedule ) ; 4 .it satisfies the gap constraints ; and 5 .it completes all short jobs ( of size ) .the boundary conditions on are given by : : : ; : : , which implies that at least one of the jobs must be completed ; : : for ; : : if there exist constraints such that at least jobs from must be completed , some of the jobs from must be completed because they are short , and some additional jobs may need to be completed because of the gap constraints .note that this implies that is equal to zero or infinity , depending on whether gap constraints are disobeyed . in general , is given by selecting the better of two options : where is the earliest completion time if we choose not to execute job ( which is a legal option only if job is long ) , giving and is the earliest completion time if we choose to execute job ( which is a legal option only if the resulting completion time is by time ) , giving there exists a feasible schedule completing at time if and only if there exists an for which .if for all , then , since the gap constraints apply to any feasible schedule , and it is not possible to find such a schedule for any number of jobs , there is no feasible schedule that completes on or before .if there exists an for which , let be the smallest such . 
then , by definition , .we show that the schedule obtained by the dynamic program can be made into a feasible schedule ending at .consider jobs that are not completed in the schedule ; we wish to use some of them to `` fill in '' the schedule to make it feasible , as follows .ordered by arrival times of incomplete jobs , and doing up to of each incomplete job , fill in the gaps . notethat by the gap constraints , there is enough work to fill in all gaps .there are two things that may make this schedule infeasible : ( i ) some jobs are worked on beyond their critical times , and ( ii ) the last job to be done must be a completed one ._ fixing the critical time problem:_consider a job that is processed at some time , beginning at , after its critical time , .we move all completed job pieces that fall between and to the end of the schedule , lining them up to end at ; then , we do job from time up until this batch of completed jobs .this is legal because all completed jobs can be pushed to the end of the schedule , and job can not complete once it stops processing .( ii ) . _ fixing the last job to be a complete one:_move a `` sliver '' of the last completed job to just before time . if this is not possible ( because the job would have to be done before it arrives ), then it means that we _ must _ complete one additional job , so we consider , and repeat the process .note that a technical difficulty arises in the case in which the sum of the gap lengths is an exact multiple of : do we have to complete an additional job or not ? this depends on whether we can put a sliver of a completed job at .there are several ways to deal with this issue , including conditioning on the last completed job , modifying the gap constraint , or ignoring the problem and fixing it if it occurs ( that is if we can not put a sliver , then add another job which must be completed to the gap constraint ) .this completes the proof of our main theorem , theorem [ thm : preempt - ii.3-same - dead ] . _remark._even if all arrival times are the same , and deadlines are the same , and data is integer , an optimal solution may not be integer .in fact , there may not be an optimal solution , only a limiting one , as the following example shows : let and , for all .jobs have length 51 , while job has length .a feasible schedule executes of each of the first jobs , where , and all of job , so the total work done is .note that each of the first jobs have remaining to do , while there is only time left before the deadline .now , by making arbitrarily close to , we can make the schedule better and better .we thank the referees for constructive comments and suggestions that improved the presentation of the paper .a. i. barvinok , d. s. johnson , g. j. woeginger , and r. woodroofe .the maximum traveling salesman problem under polyhedral norms . in _ proc .6th conference on integer programming and combinatorial optimization ( ipco ) _ , _ lecture notes in computer science _ , vol . 1412 , pages 195201 , 1998 .e. l. lawler , j. k. lenstra , a. h. g. rinnooy kan , and d. b. shmoys .sequencing and scheduling : algorithms and complexity .graves , p.h .zipkin , and a.h.g .rinnooy kan ( eds . ) in _ logistics of production and inventory : handbooks in operations research and management science _ , volume 4 , pages 445522 .north - holland , amsterdam , 445 - 522 , 1993 .
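to make the forced-gap step of the identical-deadlines algorithm above concrete, the following is a minimal sketch of ours (not code from the paper) of the partial-sum computation; it assumes jobs are given as (arrival, length) pairs with a common deadline, and the function names are hypothetical.

```python
def forced_gaps(jobs):
    """Forced gaps of a common-deadline LBP instance.

    jobs: iterable of (arrival, length) pairs.
    A forced gap starts when all work that has arrived so far is already
    finished before the next job arrives, so the bureaucrat is necessarily
    idle.  Zero-length gaps are omitted here for simplicity.
    """
    busy_until = 0
    gaps = []
    for arrival, length in sorted(jobs):       # partial sums in arrival order
        if busy_until < arrival:               # work exhausted before next arrival
            gaps.append((busy_until, arrival))
        busy_until = max(busy_until, arrival) + length
    return gaps


def problem_start(jobs):
    """The 'go home early' problem effectively starts when the last forced gap ends."""
    gaps = forced_gaps(jobs)
    return gaps[-1][1] if gaps else 0
```

as described above, the instance can then be re-based at the end of the last forced gap before the short/long classification and the dynamic program are applied.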
|
we introduce a new class of scheduling problems in which the optimization is performed by the worker ( single `` machine '' ) who performs the tasks . a typical worker s objective is to minimize the amount of work he does ( he is `` lazy '' ) , or more generally , to schedule as inefficiently ( in some sense ) as possible . the worker is subject to the constraint that he must be busy when there is work that he _ can _ do ; we make this notion precise both in the preemptive and nonpreemptive settings . the resulting class of `` perverse '' scheduling problems , which we denote `` lazy bureaucrat problems , '' gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules . * keywords:*scheduling , approximation algorithms , optimization , dynamic programming , np - completeness , lazy bureaucrat . 68m20 , 68q25 , 90b35 , 90b70 .
|
in the last few years , complex networks have attracted a growing interest from a wide circle of researchers .the reason for this boom is that complex networks describe various systems in nature and society , such as the world wide web ( www ) , the internet , collaboration networks , and sexual network , and so on .extensive empirical studies have revealed that real - life systems have in common at least two striking statistical properties : power - law degree distribution , small - world effect including small average path length ( apl ) and high clustering coefficient . in order to mimic real - word systems with above mentioned common characteristics , a wide variety of models have been proposed . at present , it is still an active direction to construct models reproducing the structure and statistical characteristics of real systems . in our previous papers , on the basis of the well - known sierpinski fractal ( or sierpinski gasket ) , we have proposed a deterministic network called deterministic sierpinski network ( dsn ) , and a stochastic network named random sierpinski network ( rsn ) , respectively .both the dsn and rsn possess good topological properties observed in some real systems . in this paper, we suggest a general scenario for constructing evolutionary sierpinski networks ( esns ) controlled by a parameter .the esns can also result from sierpinski gasket and unify the dsn and rsn to the same framework , i.e. , the dsn and rsn are special cases of rsns .the esns have a power - law degree distribution , a very large clustering coefficient , and a small intervertex separation .the degree exponent of esns is changeable between and .moreover , we introduce a generating algorithm for the esns which can realize the construction of our networks . in the end , the cooperation behavior of the evolutionary prisoner s dilemma game on two limiting cases ( i.e. , dsn and rsn ) of the esns is discussed .the first two stages of construction of the sierpinski gasket ( a ) and its corresponding network ( b).,scaledwidth=60.0% ]we first introduce sierpinski gasket , which is also known as sierpinski triangle .the classical sierpinski gasket denoted as after generations , is constructed as follows : start with an equilateral triangle , and denote this initial configuration as .perform a bisection of the sides forming four small copies of the original triangle , and remove the interior triangles to get .repeat this procedure recursively in the three remaining copies to obtain , see fig .[ network](a ) . in the infinite limit, we obtain the famous sierpinski gasket . from sierpinski gasketwe can easily construct a network , called deterministic sierpinski network , with sides of the removed triangles mapped to nodes and contact to edges between nodes . for uniformity, the three sides of the initial equilateral triangle at step 0 also correspond to three different nodes .figure [ network](b ) shows a network based on .analogously , one can construct the random sierpinski network derived from the stochastic sierpinski gasket , which is a random variant of the deterministic sierpinski gasket .the initial configuration of the random sierpinski gasket is the same as the deterministic sierpinski triangle .then in each of the subsequent generations , an equilateral triangle is chosen randomly , for which bisection and removal are performed to form three small copies of it .the sketch map for the random fractal is shown in the left panel of fig .[ web ] . 
from this fractal we can easily establish the random sierpinski network with sides of the removed triangles mapped to nodes and contact to links between nodes . the right panel of fig . [ web ] gives a network derived from the random sierpinski gasket . ( color online ) the sketch maps for the construction of the random sierpinski gasket ( left panel ) and its corresponding network ( right panel ) . in this section , we introduce an evolving unified model for the deterministic and random sierpinski networks . first we give a new variation of the sierpinski gasket , called the evolving sierpinski gasket ( esg ) . the initial configuration of the esg is the same as that of the deterministic sierpinski gasket . then in each of the subsequent generations , for each equilateral triangle , with probability , bisection and removal are performed to form three small copies of it . in the infinite generation limit , the esg is obtained . in a special case , the esg is reduced to the classic deterministic sierpinski gasket . if approaches but is not equal to , it coincides with the random sierpinski gasket described in ref . the proposed unified model is derived from this esg : nodes represent the sides of the removed triangles and edges correspond to the contact relationships . as in the construction of the deterministic and random sierpinski networks , the three sides of the initial equilateral triangle ( at step 0 ) of the esg are also mapped to three different nodes . in the construction process of the esg , for each equilateral triangle at an arbitrary generation , once we perform a bisection of its sides and remove the central down - pointing triangle , three copies of it are formed . when building the unifying network model , this is equivalent to saying that for each group of three newly - added nodes , three new triangles are generated , which may create new nodes in the subsequent generations . according to this , we can introduce an iterative algorithm to create the esns . using the proposed algorithm , one can conveniently write a computer program to simulate the networks and study their properties . ( color online ) iterative construction method for the network . we denote the esns after ( ) iterations by ; the proposed algorithm to create the esns is then as follows . initially , has three nodes forming a triangle . at step 1 , with probability , we add three nodes into the original triangle . these three new nodes are connected to one another , forming a new triangle , and both ends of each edge of the new triangle are linked to a node of the original triangle . thus we obtain , see fig . [ iterative ] . for , is obtained from . for the convenience of description , we give the following definition : for each existing triangle in , if there is no node in its interior and among its three nodes there is only one youngest node ( i.e. , the other two are strictly older than it ) , we call it an active triangle ( with the initial triangle as an exception ) . at step , for each existing active triangle , with probability it is replaced by the connected cluster on the right of fig . [ iterative ] , and is produced . the growing process is repeated until the network reaches a desired order .
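since the text notes that the iterative rule lends itself to a computer program, here is a minimal generation sketch of our own (not code from the authors). the way each new node is attached to exactly two corners of the fired triangle, and the choice of the three new active triangles, follow one consistent reading of fig. [ iterative ] and may differ in detail from the authors' implementation; what is preserved is that each firing adds three nodes and nine edges and turns one active triangle into three.

```python
import random

def evolving_sierpinski_network(generations, q, seed=None):
    """Grow an ESN for a given number of generations with firing probability q.

    Returns (number_of_nodes, set_of_edges).  q = 1 should reproduce the
    deterministic network (DSN); small q > 0 approaches the random one (RSN).
    """
    rng = random.Random(seed)
    edges = {(0, 1), (0, 2), (1, 2)}        # initial triangle
    active = [(0, 1, 2)]                    # the initial triangle is active
    n = 3
    for _ in range(generations):
        next_active = []
        for (u, v, w) in active:
            if rng.random() < q:            # this active triangle fires
                a, b, c = n, n + 1, n + 2
                n += 3
                new = [(a, b), (b, c), (c, a),   # new inner triangle
                       (a, u), (a, v),           # every new node is also
                       (b, v), (b, w),           # linked to two corners of
                       (c, w), (c, u)]           # the fired triangle
                edges.update(tuple(sorted(e)) for e in new)
                # the fired triangle is deactivated; each corner now sits in
                # two new triangles that contain exactly one youngest node
                next_active += [(u, v, a), (v, w, b), (w, u, c)]
            else:
                next_active.append((u, v, w))    # stays active for later steps
        active = next_active
    return n, edges

# example: n, e = evolving_sierpinski_network(generations=6, q=0.7, seed=1)
```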
when , the network is exactly the same as the dsn . if , the network grows randomly . in particular , as approaches zero but does not equal zero , the network reduces to the rsn studied in detail in ref . . next we compute the order ( number of all nodes ) and size ( number of all edges ) of . denote as the number of active triangles at step . then , . by construction , we can easily derive that . let and be the number of nodes and edges created at step , respectively . note that each active triangle in will ( see fig . [ iterative ] ) lead to three new nodes and nine new edges in with probability . then , at step , we add expected new nodes and new edges to . after simple calculations , one can obtain that at step the number of newly - born nodes and edges is and , respectively . thus the average number of total nodes and edges present at step is and , respectively . so for large , the average degree is approximately . obviously , we have . moreover , according to the connection rule , no two edges in the esns ever cross each other . thus , the class of networks under consideration consists of maximal planar networks ( or graphs ) . in the following we will study the topological properties of , in terms of the degree distribution , clustering coefficient , and average path length . when a new node is added to the network at step , it has a degree of . let be the expected number of active triangles at step that will create new nodes connected to the node at step . then at step , . from the iterative generation process of the network , one can see that at any subsequent step each two new neighbors of generate two new active triangles involving , and one of its existing active triangles is deactivated simultaneously . we define as the degree of node at time ; then the relation between and satisfies : . now we compute . by construction , . considering the initial condition , we can derive . then at time , the degree of vertex becomes . since the degree of each node has been obtained explicitly as in eq . ( [ degree ] ) , we can get the degree distribution via its cumulative distribution , i.e. , , where denotes the number of nodes with degree . the detailed analysis is given as follows . for a degree , there are nodes with this exact degree , all of which were born at step . all nodes born at time or earlier have this or a higher degree . so we have . thus , the cumulative degree distribution is given by . substituting for in this expression using gives . when is large enough , one can obtain . so the degree distribution follows a power - law form with the exponent . note that the degree exponent is a continuous function of , and belongs to the interval . the clustering coefficient of the whole network is the average of over all nodes in the network . for the esns , the analytical expression of the clustering coefficient for a single node with degree can be derived exactly . when a node is added into the network , its and are both . at each subsequent discrete time step , each of its active triangles increases both and by and , respectively . thus , for all nodes at all steps . so there is a one - to - one correspondence between the clustering coefficient of a node and its degree . for a node of degree , we have \frac{3k-4}{k(k-1)}=\frac{4}{k}-\frac{1}{k-1} , which is inversely proportional to in the limit of large . the scaling of has been empirically observed in many real - life networks .
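as a quick numerical sanity check of the relation just stated, one can measure the degree and the clustering coefficient of every node of a network produced by the generation sketch given earlier and compare them with 4/k - 1/(k-1). this is again illustrative code of ours; it assumes the edge set returned by that sketch, and for a freshly added node (k = 4) the relation evaluates to 2/3.

```python
from collections import defaultdict

def degree_and_clustering(edges):
    """Return {node: (k_i, c_i)} with c_i = 2 e_i / (k_i (k_i - 1))."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    result = {}
    for i, ns in nbrs.items():
        k = len(ns)
        # e_i: number of links among the k neighbours of node i
        e_i = sum(1 for x in ns for y in ns if x < y and y in nbrs[x])
        result[i] = (k, 2.0 * e_i / (k * (k - 1)) if k > 1 else 0.0)
    return result

def follows_analytical_form(edges, tol=1e-9):
    """Check c(k) = 4/k - 1/(k-1) for every node (all degrees are >= 2 here)."""
    return all(abs(c - (4.0 / k - 1.0 / (k - 1))) < tol
               for k, c in degree_and_clustering(edges).values())
```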
the clustering coefficient of the whole network as a function of the size of the network for various . results are averaged over ten network realizations for each datum . after generation evolutions , the average clustering coefficient of the network is given by , where the sum runs over all the nodes and is the degree of those nodes created at step , which is given by eq . ( [ degree ] ) . in the infinite network order limit ( ) , eq . ( [ acc1 ] ) converges to a nonzero value , see fig . [ cluster01 ] . moreover , it can be easily proved that both and increase with . exact analytical computation shows that when increases from to , grows from to . therefore , the evolutionary networks are highly clustered . figure [ cluster02 ] shows the average clustering coefficient of the network as a function of , which is in accordance with our conclusions above . from figs . [ cumulatedegree ] and [ cluster02 ] , one can see that both the degree exponent and the clustering coefficient depend on the parameter . the mechanism resulting in this relation deserves further study ; a biased choice of the active triangles at each iteration may be a possible explanation ( see ref . ) . the dependence of the average clustering coefficient on the parameter . from the above discussion , one knows that the present model shows both the scale - free nature and the high clustering at the same time . in fact , our model also possesses the small - world property . next , we will show that our networks have at most a logarithmic average path length ( apl ) with the number of nodes , where the apl means the minimum number of edges connecting a pair of nodes , averaged over all pairs of nodes . using a mean - field approach similar to that presented in refs . , one can predict the apl of our networks analytically . by construction , at each time step , the number of newly - created nodes is different . in order to distinguish different nodes , we construct a node sequence in the following way : when nodes are added at a given time step , we label them as , where is the total number of the pre - existing nodes . eventually , every node is labeled by a unique integer , and the total number of nodes is at time . we denote as the apl of the esns with order . it follows that , where is the total distance , and where is the smallest distance between node and node . note that the distances between existing node pairs are not affected by the addition of new nodes . as in the analysis of , we can easily derive that in the infinite limit of . then , . thus , there is a slow growth of the apl with the network order . this logarithmic scaling of with network order , together with the large clustering coefficient obtained in the preceding subsection , shows that the considered graphs have a small - world effect . in particular , in the case of , we can exactly compute the average path length . a previously reported analytical result has shown that the apl for this special case grows logarithmically with the order of the network . in fig . [ apl ] , we report the simulation results on the apl of the esns for different . from fig . [ apl ] , one can see that the apl decreases with increasing . for all , the apl increases logarithmically with the network order . semilogarithmic graph of the apl versus the network order . the ultimate goal of studying network structure is to understand the workings of the systems built upon those networks .
recently ,some researchers have focused on the analysis of functional or dynamical aspects of processes occurring on networks .one particular issue attracting much attention is using evolutionary game theory to analyze the evolution of cooperation on different types of networks .cooperation is ubiquitous in the real - life systems , ranging from biological systems to economic and social systems .understanding the emergence and survival of cooperative behavior in these systems has become a fundamental and central issue . after studying the relevant characteristics of network structure , which is described in the previous section , we will study the evolutionary game behavior on the networks , with focus on the game of prisoner s dilemma ( pd ) . in the simple , one - shot pd game , both receive under mutual cooperation and under mutual defection , while a defector exploiting a cooperator gets amount and the exploited cooperator receives , such that . as a result , it is better to defect regardless of the opponent s decision , which in turns makes cooperators unable to resist invasion by defectors , and the defection is the only evolutionary stable strategy in fully mixed populations .we now investigate the evolutionary pd game on our networks to reveal the influences of topological properties on cooperation behavior .here we only study two limiting cases : and ( but ) . as usual in the studies , we choose the payoffs as , , and , and implement the finite population analogue of replicator dynamics . during each generation , each individual plays the single given game with all its neighbors , and their accumulated payoff being stored in .after each round of the game , the individual is allowed to update its strategy by selecting at random a neighbor among all its neighbors , , and comparing their respective payoffs and .if , the individual will keep the same strategy for the next generation . on the contrary , the individual will adopt the strategy of its neighbor with a probability dependent on the payoff difference ( ) as .all individuals update its strategies synchronously during the evolution process .frequency of cooperators as a function of the temptation to defect . ] in fig .[ game ] , we report the simulated results ( i.e. the dependence of the equilibrium frequency of cooperators on the temptation to defect ) for both dsn and rsn .simulations are performed for both networks with order .each data point is obtained by averaging over 100 simulations for each of ten different network realizations . from fig .[ game ] , one can see that for the cooperators in the deterministic sierpinski network is dominant over defectors . and similar phenomenonis also observed for its corresponding random version .thus , both of the network structures are in favor of cooperation upon defection for a wide range of .figure [ game ] also shows that in both networks the frequency of cooperators makes a steady decrease when the temptation changes from to , and then drops dramatically when the temptation increases to . on the other hand , for large ( such as ) , the equilibrium frequency of dsn is higher than that in rsn , and this phenomenon is more obvious when the temptation is larger and goes up to . the observed phenomena in fig .[ game ] can be explained according to the underlying network structures . 
since both dsn and rsn are scale - free networks , this heterogeneous network architecture makes cooperation become the dominating trait over a wide range of temptation to defect .although both networks have scale - free property , dsn is more heterogeneous than rsn since the former has a smaller exponent of power - law degree distribution than the latter ; at the same time , the average clustering coefficient of dsn is larger than that of rsn .these two different characteristics between dsn and rsn can account for the dissimilar cooperation behavior in both networks : a higher value of average clustering coefficient , together with a smaller exponent of power - law degree distribution , produces an overall improvement of cooperation in dsn , even for a very large temptation to defect , which is compared to that in rsn .in summary , on the basis of sierpinski gasket , we have proposed and studied one kind of evolving network : evolutionary sierpinski networks ( esns ) . according to the network construction processwe have presented an algorithm to generate the networks , based on which we have obtained the analytical and numerical results for degree distribution , clustering coefficient , as well as average path length , which agree well with a large amount of real observations .the degree exponent can be adjusted continuously between and 3 , and the clustering coefficient is very large .moreover , we have studied the evolutionary pd game on two limiting cases of the esn .it should be stressed that the network representation introduced here is convenient for studying the complexity of some real systems and may have wider applicability .for instance , a similar recipe has been recently adopted for investigating the navigational complexity of cities ; on the other hand , it is frequently used in rna folding research ; moreover , earlier links associating this network representation with polymers have proven useful to the study of polymer physics .thus , our study provides a paradigm of representation for the complexity of many real - life systems , making it possible to study the complexity of these systems within the framework of network theory .because of its three important properties : power - law degree distribution , small intervertex separation , and large clustering coefficient , the proposed networks possess good structural features in accordance with a variety of real - life networks .additionally , our networks are maximal planar graphs , which may be helpful for designing printed circuits .finally , it should be mentioned that although our model can reproduce a few topological characteristics of real - life systems , it remains unknown whether the model can capture a true underlying mechanism responsible for those properties observed in real networks .this belongs to the issue of model evaluation , which is beyond the scope of the present paper but deserves further study in future .we thank yichao zhang for preparing this manuscript .this research was supported by the national basic research program of china under grant no .2007cb310806 , the national natural science foundation of china under grant nos .60704044 , 60873040 and 60873070 , shanghai leading academic discipline project no .b114 , and the program for new century excellent talents in university of china ( ncet-06 - 0376 ) .
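for readers who want to reproduce the kind of experiment reported in fig. [ game ], here is a bare-bones sketch of the synchronous update dynamics described above. it is our own code, not the authors'; the weak-pd payoff convention T = b, R = 1, P = S = 0 and the particular normalisation of the adoption probability are assumptions on our part, since the exact values and expression are not reproduced in the text. the network can be any edge set, e.g. one produced by the generation sketch given earlier.

```python
import random
from collections import defaultdict

def pd_cooperation_frequency(edges, b, rounds, coop_init=0.5, seed=None):
    """Fraction of cooperators after a number of synchronous update rounds."""
    rng = random.Random(seed)
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    nodes = list(nbrs)
    strat = {i: 1 if rng.random() < coop_init else 0 for i in nodes}  # 1 = cooperate

    def payoff(i):
        # assumed payoffs: R = 1, T = b, S = P = 0 (weak prisoner's dilemma)
        return sum(1 if strat[i] and strat[j]
                   else (b if (not strat[i]) and strat[j] else 0)
                   for j in nbrs[i])

    for _ in range(rounds):
        pay = {i: payoff(i) for i in nodes}
        new_strat = {}
        for i in nodes:
            j = rng.choice(nbrs[i])              # one randomly chosen neighbour
            if pay[j] > pay[i]:
                # adoption probability grows with the payoff difference;
                # this normalisation keeps it within [0, 1] (our assumption)
                p = (pay[j] - pay[i]) / (b * max(len(nbrs[i]), len(nbrs[j])))
                new_strat[i] = strat[j] if rng.random() < p else strat[i]
            else:
                new_strat[i] = strat[i]
        strat = new_strat                        # synchronous update
    return sum(strat.values()) / len(nodes)
```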
|
in this paper , we propose an evolving sierpinski gasket , based on which we establish a model of evolutionary sierpinski networks ( esns ) that unifies deterministic sierpinski network [ eur . phys . j. b * 60 * , 259 ( 2007 ) ] and random sierpinski network [ eur . phys . j. b * 65 * , 141 ( 2008 ) ] to the same framework . we suggest an iterative algorithm generating the esns . on the basis of the algorithm , some relevant properties of presented networks are calculated or predicted analytically . analytical solution shows that the networks under consideration follow a power - law degree distribution , with the distribution exponent continuously tuned in a wide range . the obtained accurate expression of clustering coefficient , together with the prediction of average path length reveals that the esns possess small - world effect . all our theoretical results are successfully contrasted by numerical simulations . moreover , the evolutionary prisoner s dilemma game is also studied on some limitations of the esns , i.e. , deterministic sierpinski network and random sierpinski network . + complex networks , scale - free networks , fractals 89.75.hc , 89.75.da , 05.10.-a
|
an * emoticon * , such as ` ;-) ` , is shorthand for a facial expression .it allows the author to express her / his feelings , moods and emotions , and augments a written message with non - verbal elements .it helps to draw the reader s attention , and enhances and improves the understanding of the message .an * emoji * is a step further , developed with modern communication technologies that facilitate more expressive messages .an emoji is a graphic symbol , ideogram , that represents not only facial expressions , but also concepts and ideas , such as celebration , weather , vehicles and buildings , food and drink , animals and plants , or emotions , feelings , and activities .emojis on smartphones , in chat , and email applications have become extremely popular worldwide .for example , instagram , an online mobile photo - sharing , video - sharing and social - networking platform , reported in march 2015 that nearly half of the texts on instagram contained emojis .the use of emojis on the swiftkey android and ios keybords , for devices such as smartphones and tablets , was analyzed in the swiftkey emoji report , where a great variety in the popularity of individual emojis , and even between countries , was reported .however , to the best of our knowledge , no large - scale analysis of the emotional content of emojis has been conducted so far .sentiment analysis is the field of study that analyzes people s opinions , sentiments , evaluations , attitudes , and emotions from a text . in analyzing short informal texts , such as tweets , blogs or comments, it turns out that the emoticons provide a crucial piece of information .however , emojis have not been exploited so far , and no resource with emoji sentiment information has been provided . in this paperwe present the emoji sentiment ranking , the first emoji sentiment lexicon of 751 emojis .the lexicon was constructed from over 1.6 million tweets in 13 european languages , annotated for sentiment by human annotators . in the corpus ,probably the largest set of manually annotated tweets , 4% of the tweets contained emojis .the sentiment of the emojis was computed from the sentiment of the tweets in which they occur , and reflects the actual use of emojis in a context .[ [ background . ] ] background .+ + + + + + + + + + + an emoticon is a short sequence of characters , typically punctuation symbols .the use of emoticons can be traced back to the 19 century , when they were used in casual and humorous writing .the first use of emoticons in the digital era is attributed to professor scott fahlman , in a message on the computer - science message board of carnegie mellon university , on september 19 , 1982 . in his message, fahlman proposed to use ` :-) ` and ` :-( ` to distinguish jokes from more serious posts . within a few months, the use of emoticons had spread , and the set of emoticons was extended with hugs and kisses , by using characters found on a typical keyboard .a decade later , emoticons had found their way into everyday digital communications and have now become a paralanguage of the web .the word ` emoji ' literally means ` picture character ' in japanese .emojis emerged in japan at the end of the 20 century to facilitate digital communication . 
a number of japanese carriers ( softbank , kddi , docomo ) provided their own implementations , with incompatible encoding schemes .emojis were first standardized in unicode 6.0 the core emoji set consisted of 722 characters .however , apple s support for emojis on the iphone , in 2010 , led to global popularity .an additional set of about 250 emojis was included in unicode 7.0 in 2014 .as of august 2015 , unicode 8.0 defines a list of 1281 single- or double - character emoji symbols .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + sentiment analysis , or opinion mining , is the computational study of people s opinions , sentiments , emotions , and attitudes . it is one of the most active research areas in natural - language processing andis also extensively studied in data mining , web mining , and text mining .the growing importance of sentiment analysis coincides with the growth of social media , such as twitter , facebook , book reviews , forum discussions , blogs , etc .the basis of many sentiment - analysis approaches is the sentiment lexicons , with the words and phrases classified as conveying positive or negative sentiments .several general - purpose lexicons of subjectivity and sentiment have been constructed .most sentiment - analysis research focuses on english text and , consequently , most of the resources developed ( such as sentiment lexicons and corpora ) are in english .one such lexical resource , explicitly devised to support sentiment classification and opinion mining , is sentiwordnet 3.0 .sentiwordnet extends the well - known wordnet by associating each synset with three numerical scores , describing how ` objective ' , ` positive ' , and ` negative ' the terms in the synset are .emoticons have proved crucial in the automated sentiment classification of informal texts . in an early work , a basic distinction between positive and negative emoticonswas used to automatically generate positive and negative samples of texts .these samples were then used to train and test sentiment - classification models using machine learning techniques .the early results suggested that the sentiment conveyed by emoticons is both domain and topic independent .in later work , these findings were applied to automatically construct sets of positive and negative tweets , and sets of tweets with alternative sentiment categories , such as the angry and sad emotional states .such emoticon - labeled sets are then used to automatically train the sentiment classifiers .emoticons can also be exploited to extend the more common features used in text mining , such as sentiment - carrying words .a small set of emoticons has already been used as additional features for polarity classification . a sentiment - analysis framework that takes explicitly into accountthe information conveyed by emoticons is proposed in .there is also research that analyzes graphical emoticons and their sentiment , or employs them in a sentiment classification task .the authors in manually mapped the emoticons from unicode 8.0 to nine emotional categories and performed the sentiment classification of tweets , using both emoticons and bag - of - words as features .ganesan et al . presents a system for adding the graphical emoticons to text as an illustration of the written emotions .several studies have analyzed emotional contagion through posts on facebook and showed that the emotions in the posts of online friends influence the emotions expressed in newly generated content .gruzd et al . 
examined the spreading of emotional content on twitter and found that the positive posts are retweeted more often than the negative ones .it would be interesting to examine how the presence of emojis in tweets affects the spread of emotions on twitter , i.e. , to relate our study to the field of emotional contagion . [ [ contributions . ] ] contributions .+ + + + + + + + + + + + + + emojis , a new generation of emoticons , are increasingly being used in social media .tweets , blogs and comments are analyzed to estimate the emotional attitude of a large fraction of the population to various issues .an emoji sentiment lexicon , provided as a result of this study , is a valuable resource for automated sentiment analysis .the emoji sentiment ranking has a format similar to sentiwordnet , a publicly available resource for opinion mining , used in more than 700 applications and studies so far , according to google scholar .in addition to a public resource , the paper provides an in - depth analysis of several aspects of emoji sentiment .we draw a sentiment map of the 751 emojis , compare the differences between the tweets with and without emojis , the differences between the more and less frequent emojis , their positions in tweets , and the differences between their use in the 13 languages .finally , a formalization of sentiment and a novel visualization in the form of a sentiment bar are presented .the sentiment of emojis is computed from the sentiment of tweets .a large pool of tweets , in 13 european languages , was labeled for sentiment by 83 native speakers .sentiment labels can take one of three ordered values : _ negative _ _ neutral _ _ positive_. a sentiment label , , is formally a discrete , 3-valued variable .an emoji is assigned a sentiment from all the tweets in which it occurs .first , for each emoji , we form a discrete probability distribution ( , , ) .the sentiment score of the emoji is then computed as the mean of the distribution .the components of the distribution , i.e. , , , and denote the negativity , neutrality , and positivity of the emoji , respectively . the probability is estimated from the number of occurrences , , of the emoji in tweets with the label .note that an emoji can occur multiple times in a single tweet , and we count all the occurrences .a more detailed formalization of the sentiment representation can be found in the methods section .we thus form a sentiment lexicon of the 751 most frequent emojis , called the emoji sentiment ranking .the complete emoji sentiment ranking is available as a web page at http://kt.ijs.si / data / emoji_sentiment_ranking/. the 10 most frequently used emojis from the lexicon are shown in fig [ fig : sentiment - tab ] . .the average _ position _ ranges from 0 ( the beginning of the tweets ) to 1 ( the end of the tweets ) . , , are the negativity , neutrality , and positivity , respectively . is the sentiment score.,width=491 ] first we address the question of whether the emojis in our lexicon are representative .we checked * emojitracker * ( http://emojitracker.com/ ) , a website that monitors the use of emojis on twitter in realtime . in the past two years, emojitracker has detected almost 10 billion emojis on twitter ! from the ratio of the number of emoji occurrences and tweets in our dataset ( ) , we estimate that there were about 4 billion tweets with emojis . 
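in code, the construction of a lexicon entry as formalized above is straightforward. the sketch below is ours, not from the paper: it assumes the tweets come as (text, label) pairs with the labels already mapped to -1, 0, +1, and that the set of characters to treat as emojis is supplied, which sidesteps the fact that some emojis span several code points. with this mapping, the mean of the discrete distribution is simply p_plus - p_minus.

```python
from collections import Counter, defaultdict

def emoji_sentiment(tweets, emoji_set):
    """Sentiment distribution and score per emoji.

    tweets    : iterable of (text, label) pairs, label in {-1, 0, +1}
    emoji_set : the set of characters to be treated as emojis
    Returns {emoji: (p_minus, p_zero, p_plus, score)}; every occurrence is
    counted, including repeated occurrences inside a single tweet.
    """
    counts = defaultdict(Counter)
    for text, label in tweets:
        for ch in text:
            if ch in emoji_set:
                counts[ch][label] += 1
    result = {}
    for e, c in counts.items():
        n = sum(c.values())
        p_minus, p_zero, p_plus = c[-1] / n, c[0] / n, c[+1] / n
        # the score is the mean of the label distribution, i.e. p_plus - p_minus
        result[e] = (p_minus, p_zero, p_plus, p_plus - p_minus)
    return result
```

called on the full annotated corpus, this yields the negativity, neutrality, positivity and score values of the kind shown in fig [ fig : sentiment - tab ].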
in our dataset of about 70,000 tweets , we found 969 different emojis , 721 of them in common with emojitracker .we compared the emojis in both sets , ordered by the number of occurrences , using pearson s and spearman s rank correlation .we successively shorten our list of emojis by cutting off the least - frequent emojis .the results for two thresholds , and , with the highest correlation coefficients , are shown in table [ tab : emojitracker ] .both correlation coefficients are high , significant at the 1% level , thus confirming that our list of emojis is indeed representative of their general use on twitter . between the two options, we decided to select the list of emojis with at least occurrences , resulting in the lexicon of 751 emojis .the sentiment scores for the emojis with fewer then occurrences are not very reliable ..*overlap with emojitracker .* correlations are between the occurrences of emojis in the emoji sentiment ranking and emojitracker , for two minimum occurrence thresholds .the numbers in parenthesis are the emojis that are common in both sets .the correlation values , significant at the 1% level , are indicated by * . [ cols="<,>,^,^,^,^ " , ] [ tab : inter_noemojis ] in machine learning , a classification model is automatically constructed from the training data and evaluated on a disjoint test data .a common , and the simplest , measure of the performance of the model is , which measures the agreement between the model and the test data . here , we use the same measure to estimate the agreement between the pairs of annotators . is defined in terms of the observed disagreement : is simply the fraction of the diagonal elements of the coincidence matrix .note that it does not account for the ( dis)agreement by chance , nor for the ordering between the sentiment values .another , more sophisticated measure of performance , specifically designed for 3-class sentiment classifiers , is ( ) : ( ) implicitly takes into account the ordering of the sentiment values by considering only the _ negative _ and _ positive _ labels , and ignoring the middle , _ neutral _ label . in general , ( known as the f - score ) is a harmonic mean of precision and recall for class . in the case of a coincidence matrix , which is symmetric ,the ` precision ' and ` recall ' are equal , and thus degenerates into : in terms of the annotator agreement , is the fraction of equally labeled tweets out of all the tweets with label .this work was supported in part by the ec projects simpol ( no . 610704 ) , multiplex ( no . 317532 ) and dolfins ( no . 640772 ) , and by the slovenian arrs programme knowledge technologies ( no .p2 - 103 ) .we acknowledge gama system ( http://www.gama-system.si ) who collected most of the tweets ( except english ) , and sowa labs ( http://www.sowalabs.com ) for providing the goldfinch platform for the sentiment annotation of the tweets .we thank sao rutar for generating the emoji sentiment ranking web page , andrej blejec for statistical insights , and vinko zlati for suggesting an emoji distribution model .swiftkey pt .most - used emoji revealed : americans love skulls , brazilians love cats , the french love hearts [ blog ] ; 2015 ./ en / blog / americans - love - skulls - brazilians - love - cats - swiftkey - emoji - meanings - report/. boia m , faltings b , musat cc , pu p. a :) is worth a thousand words : how people attach sentiment to emoticons and words in tweets . in: intl . conf . on social computing ( socialcom ) .ieee ; 2013 .p. 
345350 .zollo f , novak kralj p , del vicario m , bessi a , mozeti i , scala a , et al. emotional dynamics in the age of misinformation .2015;10(9):e138740 . available from : http://dx.doi.org/10.1371/journal.pone.0138740 .smailovi j , kranjc j , grar m , nidari m , mozeti i. monitoring the twitter sentiment during the bulgarian elections . in : proc .. conf . on data science andadvanced analytics ( dsaa ) .ieee ; 2015 .sluban b , smailovi j , battiston s , mozeti i. sentiment leaning of influential communities in social networks .computational social networks .2015;2(9 ) .available from : http://dx.doi.org/10.1186/s40649-015-0016-5 .ranco g , aleksovski a , caldarelli g , grar m , mozeti i. the effects of twitter sentiment on stock price returns .2015;10(9):e138441 . available from : http://dx.doi.org/10.1371/journal.pone.0138441 .smailovi j , grar m , lavra n , nidari m. stream - based active learning for sentiment analysis in the financial domain .information sciences .available from : http://dx.doi.org/10.1016/j.ins.2014.04.034 .
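the two agreement measures defined in the methods part above translate into a few lines of code. this is an illustrative sketch of ours: it takes the 3x3 coincidence matrix directly, assumes the matrix is symmetric (so precision and recall coincide per class), and reads the combined measure as the average of the F-scores of the negative and positive classes, the neutral class being ignored as described.

```python
def agreement_scores(coincidence):
    """Accuracy and F(-,+) from a 3x3 coincidence matrix.

    coincidence[i][j] counts tweet pairs labelled i by one annotator and j by
    the other, with indices 0, 1, 2 standing for negative, neutral, positive.
    """
    total = sum(sum(row) for row in coincidence)
    accuracy = sum(coincidence[i][i] for i in range(3)) / total

    def f_class(c):
        row = sum(coincidence[c])            # tweets carrying label c
        return coincidence[c][c] / row if row else 0.0

    f_pm = (f_class(0) + f_class(2)) / 2.0   # neutral class ignored
    return accuracy, f_pm
```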
|
there is a new generation of emoticons , called emojis , that is increasingly being used in mobile communications and social media . in the past two years , over ten billion emojis were used on twitter . emojis are unicode graphic symbols , used as a shorthand to express concepts and ideas . in contrast to the small number of well - known emoticons that carry clear emotional contents , there are hundreds of emojis . but what are their emotional contents ? we provide the first emoji sentiment lexicon , called the emoji sentiment ranking , and draw a sentiment map of the 751 most frequently used emojis . the sentiment of the emojis is computed from the sentiment of the tweets in which they occur . we engaged 83 human annotators to label over 1.6 million tweets in 13 european languages by the sentiment polarity ( negative , neutral , or positive ) . about 4% of the annotated tweets contain emojis . the sentiment analysis of the emojis allows us to draw several interesting conclusions . it turns out that most of the emojis are positive , especially the most popular ones . the sentiment distribution of the tweets with and without emojis is significantly different . the inter - annotator agreement on the tweets with emojis is higher . emojis tend to occur at the end of the tweets , and their sentiment polarity increases with the distance . we observe no significant differences in the emoji rankings between the 13 languages and the emoji sentiment ranking . consequently , we propose our emoji sentiment ranking as a european language - independent resource for automated sentiment analysis . finally , the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar .
|
the notion of zero - error capacity was introduced by shannon in 1956 to characterize the ability of noisy channels to transmit classical information with zero probability of error . since shannon s seminal work, the study of this notion and the related topics has grown into a vast field called _ zero - error information theory _ .the main motivation is partly due to the following facts : ( 1 ) in many real - world critical applications no errors can be tolerated ; ( 2 ) in practice , the communication channel can only be available for a finite number of times ; ( 3 ) deep connections to other research fields such as graph theory and communication complexity theory have been established .these works indicate that unlike the ordinary capacity , computing the zero - error capacity of classical channels is essentially a combinatorial optimization problem about graphs , and is extremely difficult even for very simple graphs . despite the fact that numerous interesting and important results have been reported ( see for an excellent review ) , the theory of zero - error capacity is still far from complete even for classical channels .the generalization of zero - error capacity to quantum channels is somewhat straightforward but nontrivial as the input states of the channel may be entangled between different uses , and the information transmitted may be classical or quantum .at least two notions of zero - error capacity of quantum channels exist : one is the zero - error classical capacity , the least upper bound of the rates at which one can send classical information perfectly through a noisy quantum channel , denote .if replacing classical information with quantum information in the definition of , we have another notion , the zero - error quantum capacity .a careful study of these generalizations will not only help us to exploit new features of quantum information , but also be useful in building highly reliable communication networks .the notion of has been extensively investigated in the context of quantum - error correction . in this paperwe mainly focus on of which little was known .a few preliminary works have been done towards to a better understanding of the zero - error classical capacity of quantum channels . in particular ,some basic properties of of quantum channels were observed in .later , it was shown that the zero - error classical capacity for quantum channels is in general also extremely difficult to compute . however , in these works the only allowable input states for channels were restricted to be product states and entangled uses of the channel were prohibited .consequently , many of the properties of this notion is similar to the classical case and it was not clear what kind of role the additional quantum resources such as entanglement will play in zero - error communication . in a recent work it was demonstrated that the zero - error classical capacity of quantum channels behaves dramatically different from the corresponding classical capacity .more precisely , it was shown that in the so - called multi - user communication scenario , there is noisy quantum channel of which one use can not transmit any classical information perfectly yet two uses can . to achieve this , one needs to encode the classical message using entangled states as input and thus to make two uses of the channel entangled .this is a purely quantum effect that can not happen for any classical channels .furthermore , it can not be observed under the assumptions of refs . 
where only product input states between different uses are allowed .one drawback of the channel constructed in is that we have at least two senders or two receivers and require the senders or the receivers to perform local operations and classical communication ( locc ) only .this locc restriction is a reasonable assumption in practice as it captures the fact that the quantum communication among the senders or the receivers would be relatively expensive .if this local requirement is removed , one use of these channels are able to transmit classical information perfectly .thus a major open problem left is to ask whether there is quantum channel with only one sender and one receiver enjoying the same property .the purpose of this paper is to further develop the theory of zero - error capacity for quantum channels .our first main result ( theorem [ th1 ] ) is an affirmative answer to the above open problem .more precisely , we show by an explicit construction that there does exist quantum channel with one sender and one receiver such that one use of can not transmit classical information perfectly while two uses of can transmit at least one bit without any error . fig .[ pic1 ] demonstrates our construction . in our construction we do nt construct directlyinstead , we construct two quantum channels and such that both of them can not transmit classical information perfectly by a single use while can transmit at least one bit if employed jointly .this confirms the usefulness of entangled input for perfect transmission of classical information . is a noisy quantum channel from alice to bob .with one use of , alice can not transmit classical information to bob perfectly .interestingly , by using twice , alice can transmit a classical bit " perfectly to bob . to do so, alice carefully encodes the bit " into a bipartite entangled state and applies twice . by decoding the output state , bob can perfectly recover the bit " . ]similar to the previous work , our main tool is the notion of unextendible bases ( or equivalently , completely entangled subspaces ) .the key ingredient in our construction is to partition a bipartite hilbert space into two orthogonal subspaces which are both completely entangled , or equivalently , unextendible .this kind of partitions has been found before and has been demonstrated very useful in quantum information theory. 
however , all these previous partitions are not sufficient for our purpose .additional requirements make the construction rather difficult and tricky .our second main result ( theorem [ th3 ] ) is to show that both the zero - error quantum and classical capacities of noisy quantum channels are strongly super - additive .this is achieved by introducing a class of special quantum channels which can be treated as the generalizations of retro - correctible channels .it was known that the zero - error capacity of classical channels are super - additive in the following sense : there are and such that .this is very different from the ordinary classical capacity of classical channels , which is always additive .however , any classical channels and satisfying the super - additivity must have the ability to transmit classical information perfectly , that is and .it remains unknown whether the above super - additivity still holds if one of the quantum channels are with vanishing zero - error capacity .here we show that for quantum channels such type of stronger super - additivity can exist .actually , we show that there are quantum channels and such that , , but , where is dimension of the input state space of .the channel can be chosen as a noiseless qubit channel .if we are only concerned with zero - error classical capacity , then can be made entanglement - breaking ( theorem [ th2 ] ) .furthermore , if a maximally entangled state is shared between the sender and the receiver or allowing two - way classical communication that is independent from the message sending from the main protocol , one use of can be used to send noiseless qubits .this type of has the following weird property : it is not able to communicate any classical information perfectly ; however , with a small amount of auxiliary resources ( such as one noiseless qubit channel , or one ebit , or two - way classical communication independent from the messages sending through main protocol ) , the channel acts as a noiseless quantum channel with large perfect quantum capacity ( achieving zero - error quantum capacity ) .intuitively , the hiding zero - error communication ability of channel can be activated by these auxiliary resources .our last main result is to study the role of classical feedback in zero - error communication .as pointed out by shannon , for classical channels , the classical feedback can not increase the ordinary channel capacity but may increase the zero - error capacity .however , a necessary condition for such a feedback improvement is that the channel should be able to communicate classical information perfectly , i.e. , with non - vanishing zero - error capacity .it is of great interest to ask that whether this requirement can be removed for quantum channels .surprisingly , this answer is yes .specifically , we construct a quantum channel with a two - dimensional input state space and vanishing zero - error classical capacity such that when assisted with classical feedback enables perfect transmission of classical and quantum information ( theorem [ cfb ] ) . 
in other words ,the zero - error capacity of quantum channels can be activated from to positive by classical feedback .this remarkable phenomenon , demonstrates that the zero - error communication ability of a quantum channel may be recovered when assisted with classical feedback .we notice that very recently several important super - activation effects about different type of capacities of quantum channels , namely quantum capacity , classical capacity , and the private capacity , were discovered . clearly , these results are incomparable to ours due to the special zero - error transmission requirement .let alice be the sender with state space , and let bob be the receiver with output state space .a _ quantum channel _ is a completely positive map from to that can be written into the form where is the set of kraus operators and the completeness condition is satisfied .a _ super - operator _ is a completely positive map for which the completeness condition does nt need to be satisfied . for simplicity ,sometimes we identify a super - operator with kraus operators by .a given quantum channel can be used for zero - error communication as follows : alice starts with , and encodes a message into a quantum state by a quantum operation , say .bob receives , and decodes the message by suitable quantum operations .define to be the maximum integer with which there exist a set of states such that can be perfectly distinguished by bob .it follows from the linearity of super - operators that a set achieving can be assumed without loss of generality to be orthogonal pure states . in termed as _ the quantum clique number _ of .intuitively , one use of can be used to transmit bits of classical information perfectly . when it is clear that by a single use of alice can not transmit any classical information to bob with zero probability of error .the _ zero - error classical capacity _ of , , is defined as follows : if we are concerned with the transmission of quantum information instead of classical information , the notion of zero - error quantum capacity can be similarly introduced .let be the maximum integer so that there is a -dimensional subspace of can be perfectly transmitted through .that is , there is a recovery trace - preserving quantum channel from to such that for any .clearly , the quantity represents the optimal number of intact qubits one can send by a single use of .the _ zero - error quantum capacity _ of , , is defined as follows : in the following discussion , we mainly focus on the properties of and .we will frequently employ the notion of unextendible bases ( ub ) .although this notion can be defined on arbitrary multipartite state space ( see ref . ) , for our purpose here it suffices to focus on matrix spaces .let be a set of matrices on . is said to be a ub if contains no rank - one matrix ; otherwise is said to be extendible . clearly , when is a ub , any nonzero matrix in with rank at least two . in this casewe say is completely entangled .if is a ub and can be spanned by rank - one matrix only , we say an unextendible product bases ( upb ) . 
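To make these definitions concrete, the following sketch (not code from the paper; the dephasing-like qubit channel at the end is an illustrative assumption) numerically tests whether two pure input states are mapped by a channel, specified through its Kraus operators, onto perfectly distinguishable outputs. This is exactly the condition under which a single use of the channel carries one bit with zero error, i.e. the quantum clique number is at least 2: the two outputs are orthogonal precisely when every matrix element of the operators E_i†E_j between the two inputs vanishes.

```python
import numpy as np

def channel_output(kraus, psi):
    """Apply the channel with the given Kraus operators to |psi><psi|."""
    rho = np.outer(psi, psi.conj())
    return sum(K @ rho @ K.conj().T for K in kraus)

def outputs_orthogonal(kraus, psi, phi, tol=1e-10):
    """True if the two pure inputs are mapped to orthogonal outputs.
    Tr[Phi(|psi><psi|) Phi(|phi><phi|)] = sum_{ij} |<psi|K_i^dag K_j|phi>|^2,
    so orthogonality is equivalent to all these matrix elements vanishing."""
    vals = [abs(psi.conj() @ (Ki.conj().T @ Kj) @ phi)
            for Ki in kraus for Kj in kraus]
    return max(vals) < tol

# Toy example (an assumption, not a channel from the paper): a
# dephasing-like qubit channel with Kraus operators diag(1,0), diag(0,1).
K0 = np.diag([1.0, 0.0]).astype(complex)
K1 = np.diag([0.0, 1.0]).astype(complex)
e0 = np.array([1.0, 0.0], dtype=complex)
e1 = np.array([0.0, 1.0], dtype=complex)
plus = (e0 + e1) / np.sqrt(2)

print(outputs_orthogonal([K0, K1], e0, e1))    # True  -> one perfect bit per use
print(outputs_orthogonal([K0, K1], plus, e1))  # False -> these two inputs collide
```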
the properties of ub , in particular upb , have been extensively studied in literature .we just mention two of them here .the first one is that the tensor product of two upb is again another upb .the second one is that if the dimension of a matrix subspace is small enough , say , is always extendible .suppose that is classical ( a so - called memoryless stationary channel ) , that is , for some states diagonalized under the computational basis .then if and only if for all pairs of and , . thus and only if for any . therefore , if and only if .in fact , we can prove that for all entanglement - breaking channel of the form , where is a generalized measurement satisfying , it always holds that implies that .( see corollary [ e - b ] below for a proof ) we will show that for quantum channels it would be very different . let , where .let us define plays an important role in determining the properties of zero - error capacity , mainly due to the following useful lemma : [ zero - error]let be a quantum channel .then if and only if is extendible , i.e. , contains a rank - one matrix .* proof . *necessity : implies there are pure states and such that and are orthogonal . substituting kraus sum representation of into we have that for any . in other words , is extendible . reversing the above arguments we can easily verify the sufficiency. combining the properties of ub mentioned above, we have the following immediate corollary .[ e - b ] let be a quantum channel with input state space .then we have i ) if , then ; ii ) if is spanned by a set of rank - one matrices , then implies .in particular , any entanglement - breaking channel satisfies this property .for any quantum channel with a set of kraus operators and input state space , one can readily verify that satisfies : a ) ; and b ) .a somewhat surprising fact is that for a given matrix subspace , these two properties guarantee the existence of a quantum channel such that . herewe define .[ keylemma]let be a matrix subspace of .then there is a quantum channel from to for some integer such that if and only if and . * proof . *necessity is trivial .we only prove sufficiency .first it is easy to see that when , we can choose a hermitian basis for . actually , for any matrix , we know that . on the other hand , and can be spanned by two hermitian matrices and .so we can choose a hermitian basis for , say .second we show this basis can be made positive definite .let us choose a positive real number and consider .since is hermitian , for sufficiently small , all can be made positive definite .consider .similarly , choose sufficiently small we can guarantee that is positive definite .so we have a set of positive definite matrices such that and .third , for each operator , we will construct a super - operator from to , where and are pairwise orthogonal for .take the spectral decomposition of , and let be an orthonormal basis for .define a super - operator , where it is clear that is from to and . nowthe desired quantum operation is given by the sum of , namely .the output space . to prove that one only needs to notice that . the above lemma greatly simplifies the study of zero - error classical capacity of noisy quantum channels .it enables us to focus on the matrix subspaces satisfying two very easily grasped conditions .some remarks are as follows : a. the condition b ) ensures that a trace - preserving super - operator can be found . 
for our purpose here , we only need there is a positive definite matrix .then a super - operator with kraus operators such that can be similarly constructed .based on we can further construct a trace - preserving quantum operation with kraus operations . it is easy to check that .( here we assume is also defined for any super - operator such that is positive definite ) b. none of the conditions a ) and b ) can be further relaxed .this can be seen from a one - dimensional matrix spanned by a hermitian matrix with both negative and positive eigenvalues . c. in general itselfmay not satisfy conditions a ) and b ) .however , sometimes we may find two nonsingular matrices and so that satisfies conditions a ) and b ) .the extendibility of remains the same as that of .that is , for any matrix subspace , is extendible if and only if is extendible .d. after we construct a set of positive semi - definite matrices such that and , we can use a more compact construction of the corresponding channel . to do thiswe introduce an auxiliary output system and construct from to as follows : where is the positive root of , and is an orthonormal basis for . intuitively , can be treated as a friendly environment who also outputs its measurement outcome after the interaction .one can readily verify that .note that here the output of is classical information so that a classical system is sufficient for our purpose here .this is an example of quantum communication with classical control .the following lemma shows that the function of quantum clique number is strongly super - multiplicative .[ main - lemma ] there are noisy quantum channels and such that and .* by lemmas [ zero - error ] and [ keylemma ] , we only need to construct two unextendible matrix subspaces and both satisfy conditions a ) and b ) , and are extendible .let be a matrix subspace spanned by the following matrix bases : where is a parameter .let , and let , where is the orthogonal complement via hilbert - schmidt inner product .more explicitly , is spanned by the following matrix bases : we choose as instead of so that satisfies the hermitian condition and contains the identity matrix .this is a key difference from the previous work . by the above lemma, we can define quantum channels and such that and . for any , we will show that and satisfy the following useful properties : a. both and are completely entangled and unextendible .b. are extendible .property ( ii ) holds as is orthogonal to the following rank - one element , where .we now prove property ( i ) .let be a rank one matrix orthogonal to , where and .then we have for , that is , suppose that .assume without loss of generality that .then substituting and into , we have . similarly substituting and into we have .if then .hence , which is a contradiction as both and are nonzero .thus .by we know that . substituting and into the last equation we have .that is , . againa contradiction .therefore .note that if and for a nonzero constant , , then . applying this inference rule many times ,one concludes that all , in both and cases .thus , and is unextendible . by the same technique, we can prove that is also unextendible . applying lemma [ zero - error ] , we know that . on the other hand , by property ( ii ) we know that .actually , alice can use and to encode 0 " and 1 " , respectively , and bob can recover this bit by distinguish between and , which are orthogonal by our construction . 
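The compact construction sketched in remark (d) above is easy to experiment with numerically. The toy example below (random positive operators in small dimensions; the form A_k = |k⟩ ⊗ √E_k of the Kraus operators is my reading of the elided formula, not a verbatim quotation) builds a channel from positive semidefinite operators E_k summing to the identity, and verifies both trace preservation and the property A_j†A_k = δ_jk E_k, which makes the associated operator subspace K(Φ) equal to the span of the E_k.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d, n = 3, 2   # input dimension and number of operators (toy sizes, an assumption)

def random_psd(dim, rng):
    G = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return G @ G.conj().T

# Positive semidefinite operators E_1..E_n rescaled so that sum(E) = I.
P = [random_psd(d, rng) for _ in range(n)]
S_inv_sqrt = np.linalg.inv(sqrtm(sum(P)))
E = [S_inv_sqrt @ Pk @ S_inv_sqrt for Pk in P]

# Compact construction: Kraus operators A_k = |k>_aux (tensor) sqrt(E_k),
# mapping the d-dimensional input into an (n*d)-dimensional output.
basis = np.eye(n)
A = [np.kron(basis[:, [k]], sqrtm(E[k])) for k in range(n)]   # shape (n*d, d)

# Check 1: trace preservation, sum_k A_k^dag A_k = I.
print(np.allclose(sum(Ak.conj().T @ Ak for Ak in A), np.eye(d)))

# Check 2: A_j^dag A_k = delta_{jk} E_k, hence span{A_j^dag A_k} = span{E_k}.
ok = all(np.allclose(A[j].conj().T @ A[k], E[k] if j == k else 0)
         for j in range(n) for k in range(n))
print(ok)
```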
in the above construction , and are not identical .however , using the direct sum construction , we can find a quantum channel enjoying similar property .now we are ready to present our main result : [ th1 ] there is a classical of quantum channels such that and .hence . ** the idea is to take as the direct sum of and , say .more explicitly , where and are the projections on the input state spaces of and , respectively , and is the projection of the whole input state space for . the function of can be understood as follows : for any input state , we first perform a projective measurement .if the outcome is , then we apply to the resulting state ; otherwise we apply .it is clear that .furthermore , we have for channels and constructed above , we have and . based on our previous work about ub , we know that any channel with the property in theorem [ th1 ] should be at least with a -dimensional input state space .it remains unknown how to construct quantum channel with similar property and with smaller input and output dimensions .the construction in lemma [ keylemma ] suggests us to consider a special class of quantum channels , which can be treated as a generalization of retro - correctible channels introduced in .consider a quantum channel from to as follows : where both and are super - operators for each .usually we choose to be a set of quantum channels and is a set of super - operators such that is trace - preserving .: 1 ) perform a measurement to the control input system ; 2 ) if the measurement outcome is , apply to the data input ; 3 ) output both the control and the data inputs to and , respectively . herethe input dimensions and are not required to be the same as the output dimensions and , respectively . ] imposing special constraints on and , we can construct some quantum channels with desirable properties .in particular , if the receiver bob can distinguish between , he will be able to determine the quantum operation performed on the data input exactly .thus the net effect of the channel will reduce to some of . in the case that has a large amount of classical or quantum capacity , the above channel will also have large capacity .symmetrically , if bob can distinguish between then he will be able to know the measurement operator performed by the environment , and then is able to correct the errors .for example , if we choose , and , and choose to be the unitary ( isometry ) from to , and let control operators be a set of generalized measurement from to , then we have the following channel : if we ignore the one - dimensional data input , the above channel can be simplified as follows : this is precisely the channel we introduced in the previous section , where a similar interpretation has been presented .another special case is that or are not distinguishable in general , but would be distinguishable if an entangled state is provided .that is , the set of is distinguishable . 
intuitively , can not be distinguishable means that the channel is very noisy .so the capacity without any assistance would be generally small .however , supplying additional resources such as shared entanglement will greatly improve the capacity .the class of retro - correctible channels introduced by bennett et al is a typical example .it was known that the zero - error classical capacity of classical channels are super - additive in the following sense : there are classical channels and such that .this is very different from the ordinary capacity , which is always additive .however , any classical channels and satisfying the super - additivity must have the ability to transfer classical information perfectly , that is and . it remains unknown whether the above super - additivity still holds if one or two quantum channels are with vanishing zero - error capacity . herewe will show that both and satisfy a stronger type of super - additivity .let s consider first .[ th2 ] there is an entanglement - breaking channel on such that and , where is one qubit noiseless quantum channel . proof .consider the quantum channel where , , and .in particular , and are two orthonormal bases such that . by choicewe have that if and only if or .it is also clear that , and the set of input states can be chosen as .these facts will be useful later .first we show that .clearly is an entanglement breaking channel as it has a set of rank - one kraus operators .thus it suffices to show that .take an input state and calculate where for simplicity we assume and , etc .similarly , for another input state , we have if and are orthogonal , we should have from the first equation we know that or . without loss of generality , assume that .it follows from the second equation as .however this would imply that both and are nonzero , thus the third equation can not hold . withthat we complete the proof of .the next step is to show that if a noiseless qubit channel is supplied between alice and bob , alice can send messages perfectly to bob using .let .the key here is that and are distinguishable by in the sense that and are orthogonal .if alice encodes message into and transmits it to bob via , the received states by bob are which are mutually orthogonal . that completes the proof of . it is easy to see that the role of the noiseless qubit channel can be replaced with a pre - shared maximally entangled state between alice and bob . to encode the message , alice simply sends together with her half of entangled state to bob .the received states by bob are the same as above .with a more careful analysis we can easily see that and are locally distinguishable in the following way : bob performs a projective measurement according to , and then sends outcome to alice . 
if then alice measures her particle using the same basis , otherwise using diagonal basis .the outcomes correspond to , while correspond to .so a more economic way to achieve the perfect transmission is that : alice locally prepares a bell state and then send and one half of to bob .bob feedbacks his measurement outcome on the control qubit to alice .based on bob s information , alice performs the measurement on the left half of and forwards the measurement outcome to bob .bob will then know which of and is performed on the data input and can perfectly decode the message .note here we use two - way classical communication which is usually not allowable .however , from the above analysis we can see these communications are independent from the message we send in our main protocol . to summarize, we have the following for the quantum channel constructed in above theorem , we have 1 ) , where the subscript means one ebit available ; 2 ) , where the subscript denotes the two - way classical communication independent of the message sending through the main protocol . so far we havent touched the zero - error quantum capacity yet . using a similar construction, we can prove the strong super - additivity of .a somewhat surprising fact is that even for quantum channel with vanishing zero - error classical capacity , the super - activation effect remains possible .actually we have the following [ th3 ] there is quantum channel with input state space such that and , where is the noiseless qubit channel .* outline of proof*. consider the following quantum channel : where , , is an orthogonal basis for , `` '' is the complex conjugate according to , and is a set of unitary operations on .the function of can be understood as follows : first , randomly choose an integer , and perform a projective measurement on the control input qubit .if the outcome is no action to the data input ; otherwise perform .second , output the classical information but keep the measurement outcome hidden .this is exactly one special instance of retro - correctible channel .it is easy to see that is given by where and recall that .we can see that we will show that by choosing , , appropriately , the above inequality holds with equality . to achieve this , we only need to choose and so that spans , and spans . this can be done easily as and now it is clear that contains the following set of rank - one matrices : which is clearly a upb as its orthogonal complement is a completely entangled subspace .thus is unextendible for any , which follows .the argument that the above channel is able to communicate quantum information perfectly is similar to the analysis for the retro - correctible channels .the key idea is that with the assistance of shared entanglement , the hidden measurement outcome can be revealed .suppose that is supplied to alice and bob .then by inputting an arbitrary state into the data slot , and alice s half of the into control slot , bob receives the following output state note that is an orthonormal basis for .thus bob can perfectly distinguish them by a projective measurement .if the outcome is , then the data output is .if the outcome is , then the data output is . 
applying to the data output , we can recover .that means together with one ebit can be used to perfectly transfer a -dimensional quantum system , or in other words , with entanglement - assisted zero - error quantum capacity at least qubits .it is not difficult to see that the role of shared entanglement can be replaced by a noiseless qubit channel .that immediately implies .notice further that is an orthonormal basis for that is locc distinguishable .so the assistance of two - way classical communications that are independent from the quantum information sending through the main protocol can be used to transmit noiseless qubits .the analysis is similar to the previous theorem and we omit the details here. there is quantum channel with input state space such that and 1 ) ; 2 ) ; 3 ) , where the subscript denotes two - way classical communications that are independent of the message sending through the main protocol ; 4 ) , where the subscript means one ebit available .the above corollary indicates the behaviors of zero - error capacity of quantum channels is very weird : there are quantum channels which have a large amount quantum capacity but with vanishing zero - error classical capacity .however , the channel can be unlocked for zero - error quantum communication if a small amount of additional resources such as two - way classical communication independent of the messages sending through the main protocol , shared entanglement , or a noiseless quantum channel is available .in this section we will study the role of classical feedback .a well known result in classical information theory is that a noiseless classical feedback channel from bob to alice can not increase the capacity of a classical channel . for quantum channels, it remains an open problem whether a classical feedback can strictly increase the capacity .however , in the special case that a quantum channel with zero classical capacity it should be a constant channel , i.e. , it sends any input quantum state to a fixed state . clearly , the classical feedback can not increase the capacity under this special assumptionthe situation is very different for quantum capacity , which can be increased by classical feedback even the unassisted quantum capacity is zero .a typical example is the quantum erasure channel with erasure probability more than , which has vanishing quantum capacity but nonzero classical feedback assisted quantum capacity .it was pointed out in that for certain classical noisy channels a noiseless classical feedback channel from bob to alice may strictly increase the zero - error classical capacity .all these channels should satisfy .in other words , without any assistance they can be used to communicate classical information perfectly .thus a question of interest is to ask whether this assumption can be removed .we provide an affirmative answer to this question as follows : [ cfb ] there is quantum channel such that but and , where the subscript cfb " represents classical feedback from bob to alice .* the quantum channel constructed in eq . ( [ super ] ) is exactly one such channel when . to see this ,one only needs to show that one use of together with back communication can generate a shared entangled state .the protocol is as follows .first alice prepares and send half of them to bob .second bob measures the control output and feedbacks the outcome .for the moment he has already known the shared entangled state between him and alice should be one of or . 
after receiving ,alice performs a measurement according to .if is obtained , the final shared entangled state is ; if is obtained , the final shared entangled state is , and she only needs to perform to the left half of , thus the final resulting state is again . if , we already know that together with this entangled state can be used to send one noiseless qubit . in total ,two uses of and classical feedback enable one noiseless qubit transmission . therefore if in eq .( [ super ] ) we use a -dimensional control input instead of a -dimensional one , we will know that but there is , however , a quantum channel with only a two - dimensional input state space enjoying the same property .due to its simplicity , let us give a detailed analysis here . consider the following matrix subspace where are pauli matrices . by lemma [ keylemma ] , we know there is a quantum channel such that .we can construct one such channel by the arguments in the proof of lemma [ keylemma ] . herewe will carefully construct one satisfying special requirement . first choose a positive definite bases for such that and where .we have chosen such that .take and construct the following quantum channel from to : where is an orthonormal basis for auxiliary system .noticing that is a upb , we have that for any .thus . on the other hand , .so if a maximally entangled state is shared between alice and bob , alice can send one bit to bob without any error . to do this, alice first encodes 0 " by applying and 1 " by applying to her half of the shared entangled state , respectively , and sends her half of to bob .the received states by bob are and respectively . by our assumption on , and orthogonal .thus bob can decode the bit perfectly .now the whole problem is reduced to generate a maximally entangled state between alice and bob using and classical feedback only . fortunately ,this can be done as follows : step 1 .alice locally prepares and sends one half of to bob through .bob measures the auxiliary system according to .if the outcome is , bob will know that an entangled state with schmidt coefficient vector is generated between him and alice .repeat steps and once more , alice and bob will share a state , with schmidt coefficient vector .( however only bob knows the exact form of as alice does nt know the measurement outcomes and ) .bob feedbacks the measurement outcomes and to alice .so alice also knows the exact form of the shared entangled state between them .bob and alice transform the shared entangled state into a bell state with standard form . by nielsens theorem , this can be achieved with certainty as .furthermore , the transformation can be done using local measurements and classical communications from bob to alice only .combining the above discussions , we know that uses together with back communication can transmit one bit perfectly from alice to bob .thus .moreover , we can send a qubit by sending two bits and consuming one ebit .easily see that uses of can transmit one noiseless qubit .hence . 
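The final step above invokes Nielsen's majorization criterion: a pure bipartite state can be transformed into another with certainty by LOCC exactly when its Schmidt coefficient vector is majorized by that of the target. The sketch below checks this criterion numerically; the Schmidt vector (0.7, 0.3) is purely illustrative (the coefficients actually produced by the channel are fixed by its construction), but it shows the mechanism used in the proof: a single partially entangled pair cannot in general be converted into a Bell pair deterministically, whereas the four-component Schmidt vector of two independently generated pairs can be, as soon as its largest entry does not exceed 1/2.

```python
import numpy as np

def majorized_by(x, y, tol=1e-12):
    """True if x is majorized by y: every partial sum of the decreasingly
    sorted x is <= the corresponding partial sum of y (vectors zero-padded
    to a common length)."""
    n = max(len(x), len(y))
    xs = np.sort(np.pad(x, (0, n - len(x))))[::-1]
    ys = np.sort(np.pad(y, (0, n - len(y))))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

def locc_convertible(schmidt_in, schmidt_target):
    """Nielsen's criterion: |psi> -> |phi> deterministically by LOCC
    iff the Schmidt vector of |psi> is majorized by that of |phi>."""
    return majorized_by(schmidt_in, schmidt_target)

bell = [0.5, 0.5]

# One run leaves a partially entangled pair: not enough for a Bell state.
one_run = [0.7, 0.3]
print(locc_convertible(one_run, bell))          # False

# Two independent runs: the joint Schmidt vector is the tensor product.
two_runs = np.outer(one_run, one_run).ravel()   # [0.49, 0.21, 0.21, 0.09]
print(locc_convertible(two_runs, bell))         # True (largest entry <= 1/2)
```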
it seems that the retro - correctible channel introduced in might enjoy the same property as above .however , we do nt know how to determine the value of and consequently , it remains unknown whether is vanishing or not .in sum , we have demonstrated that for a class of quantum channels , a single use of the channel can not be used to transmit classical information with zero probability of error , while multiple uses can .this super - activation property is enabled by quantum entanglement between different uses , thus can not be achieved by classical channels .we also have shown that additional resources such as classical communications independent of sending messages , shared entanglement , and noiseless quantum communication would be greatly improve the zero - error capacity for certain channels .in particular , both the zero - error classical capacity and zero - error quantum capacity are strongly super - additive even one of the channels is with vanishing zero - error classical capacity .finally we construct a special class of quantum channels to show that the classical feedback enables perfect transmission of both classical and quantum information even when the quantum channel has vanishing zero - error classical capacity .these results suggest that a new quantum zero - error information theory would be highly desirable .many interesting problems remain open , and here we mention two of them . the first one is to show whether the following strongest super - additivity is possible : find quantum channels and such that and . according to lemma [ keylemma ] , this is equivalent to find two matrix subspaces and such that 1 ) and ; 2 ) are unextendible for any , ; and 3 ) are unextendible .the quantum channels presented in lemma [ main - lemma ] may be eligible candidates .however , we are not able to answer this question at present as we do nt have a feasible way to check whether is extendible for .the second one is to study corresponding problems about the zero - error quantum capacity . in this casewe do nt even know whether is super - multiplicative . 
a result similar to lemma [ main - lemma ] would be highly desirable .all these problems can be successfully solved for another notion of unambiguous capacity , which is a generalization of zero - error capacity by requiring the decoding process to be unambiguous .* note added : * after the completion of this work , the author happened to know that cubitt , chen , and harrow also independently obtained some super - activation results about the zero - error classical capacity which partially overlap with ours .more precisely , they employed the choi - jamiokowski isomorphism between quantum channels and a class of bipartite mixed states to establish a theorem similar to lemma [ keylemma ] here .then two quantum channels and with four - dimensional input state spaces such that and were explicitly constructed .they further applied some powerful techniques from algebraic geometry to show that a pair of quantum channels satisfying the strongest super - additivity does exist , and thus solved the open problem mentioned above .( one of these techniques is a result about strongly unextendible bases that was previously proven and used in to demonstrate a similar super - activation effect for unambiguous capacity of quantum channels ) clearly , their remarkable result established the strongest type of super - additivity , which they termed as the super - activation of the asymptotic zero - error classical capacity of quantum channels .interestingly , it is not difficult to show that all channels we constructed in theorems [ th2]-[cfb ] can not be activated by any quantum channel with .thus it is still a surprising fact that these channels do satisfy certain type of super - activation effects which are definitely impossible for any classical channel .part of this work was completed when the author was visiting the university of michigan , ann arbor .many thanks were given to yaoyun shi for his hospitality and for many inspiring discussions .the author was also grateful to john smolin and graeme smith for discussions about zero - error capacity during their short visit to tsinghua university .in particular , the quantum channel presented in eq . ( [ super ] ) was very much motivated by smolin s suggestions .delightful discussions with prof .mingsheng ying , yuan feng , jianxin chen , and yu xin were sincerely acknowledged .special thanks were given to t. cubitt , j. x. chen , and a. harrow for sending me a copy of their manuscript prior to publication and for useful discussions .this work was partially supported by the national natural science foundation of china ( grant nos .60702080 , 60736011 ) and the hi - tech research and development program of china ( 863 project ) ( grant no .2006aa01z102 ) .this work was also partially supported by the national science foundation of the united states under awards 0347078 and 0622033 .
|
We study various super-activation effects in the following zero-error communication scenario: one sender wants to send classical or quantum information through a noisy quantum channel to one receiver with zero probability of error. First, we show that there are quantum channels of which a single use is not able to transmit classical information perfectly, yet two uses can. This is achieved by employing input states that are entangled between different uses of the given channel, and thus cannot happen for classical channels. Second, we exhibit a class of quantum channels with vanishing zero-error classical capacity such that, when a noiseless qubit channel or one ebit of shared entanglement is available, they can be used to transmit noiseless qubits, the number of which grows with the dimension of the input state space. Third, we further construct quantum channels with vanishing zero-error classical capacity that, when assisted with classical feedback, can be used to transmit both classical and quantum information perfectly. These striking findings not only indicate that both the zero-error quantum and classical capacities of quantum channels satisfy a strong super-additivity beyond anything possible for classical channels, but also highlight the activation power of auxiliary physical resources in zero-error communication.
|
[ sec : intro ] this paper is devoted to the numerical simulation of the process of cellular spatial organization driven by chemotaxis . the effective mechanism by which individual cells undergo directed motion varies among organisms .here we are particularly interested in bacterial migration , characterized by the smallness of the cells , and their ability to move to several orders of magnitude in the attractant concentration .several models , depending on the level of description , have been developed mathematically for the collective motion of cells .we refer to for a review on parabolic , hyperbolic and kinetic models and to for traveling waves drivn by growth and chemotaxis . among them the kinetic model introduced by othmer , dunbar and alt , describes a population of bacteria in motion ( for instance the _ e. coli _ ) in interactions with a chemoattractant concentration .these cells are so small that they are not able to measure any head - to - tail gradient of the chemical concentration , and to choose directly some preferred direction of motion towards high concentrated regions. therefore they develop an indirect strategy to select favorable regions , by detecting a sort of time derivative in the concentration along their pathways , and to react accordingly .more precisely , they undergo a jump process where free runs are followed by a reorientation phenomenon called tumble .for instance it is known that _e. coli _ increases the time spent in running along a favourable direction .this jump process can be described by two different informations .first cells switch the rotation of their flagella , from counter - clockwise , called free runs , to clockwise called reorientation or tumbling phase , and conversely .this decision is the result of a complex chain of reactions inside the cells , driven by the external concentration of the chemoattractant . then bacteria select a new direction , but they are unable to choose directly a favourable direction , so they randomly choose a new orientation . duringthe `` run '' phases a bacterium moves with a constant speed in a given direction while during a `` tumbling '' event it changes direction randomly .in the simple situation , c. s. patlak and e. f. keller & l. a. segel considered a density of cells which interacts with two chemical substances .the cells consume nutrients which drive the migration and excrete a chemoattractant that prevents the dispersion of the population . however , this approach is not always sufficiently precise to describe the evolution of bacteria movements .hence , this phenomenon of run and tumble can be modeled by a stochastic process called the velocity - jump process , and has been introduced by alt and further developed in .a kinetic transport model to describe this velocity jump process consists to study the evolution of the bacterial population by the local density of cells located in position , at time and swimming in the direction . herethe set of all possible velocities is bounded and symmetric in general . in two dimensions ,the modulus of the speed is a constant , hence circle centred in with a radius .a kinetic transport model to describe this velocity - jump process has been introduced by w. 
alt inspired by the boltzmann equation where the tumbles appear as scattering events and all the fluxes are explicitly introduced .we consider the boltzmann type equation where is the boltzmann type tumbling operator the contribution of the tumbles is introduced with a transition ( scattering ) kernel which stands for the change of velocity from to ; is the division rate of the bacteria ( where is the mean doubling time ) .this equation is indeed a variant of the boltzmann equation for gases , where collisions are delocalized via the secretion or consumption of chemical cues . in ( [ kinetic : eq ] ) , the transition kernel also depends on the local concentration of chemoattractant and nutrient . to estimate the respective contributions on pulse speed of the biais of the run lengths and of preferential reorientation , it is possible to split this transition kernel in two contributions , one being the tumbling rate , and the other one the reorientation effect during tumbles : with the condition where the function accounts for the persistence of the trajectories . for simplicitywe consider the case of the absence of such an angular persistence , hence the turning kernel is only proportional to the tumbling rate , _i.e. _ is constant . for the tumbling rate , we assume that bacteria are sensitive to the temporal variations of attractant molecules via a logarithmic sensing mechanism .therefore , the tumble frequency only depends on the local gradients of nutrient and attractant , both gradients having independent and additive contributions .this gives the nutrient and chemoattractant response functions and are both positive and decreasing , expressing that cells are less likely to tumble ( thus perform longer runs ) when the external chemical signal increases .these functions are smooth and characterized by their characteristic time and and their tumble frequency and . here , we have chosen the following analytical form that encompassed these characteristics : where is the mean tumbling frequency , and the parameters and are the modulation of tumble frequency . in order to define completely the mathematical problem , suitable boundary conditions on should be applied .here we consider wall type boundary conditions , for which emerging particles have been reflected elastically at the wall .more precisely , for the smooth boundary is assumed to have a unit inward normal and for , we assume that at the solid boundary we have ,\,\,\,\mathbf{x}\in\partial\omega,\,\,\,\mathbf{v}\cdot\mathbf{n}(\mathbf{x})\geq 0 , \label{eq : bcingoing}\ ] ] with = f(t,\mathbf{x},\mathbf{v}-2(\mathbf{v}\cdot\mathbf{n}(\mathbf{x}))\mathbf{n}(\mathbf{x})).\ ] ] this boundary condition guarantees the global conservation of mass .the equations describing the behaviors of nutrients density and chemottractant are the same as in : where , and are respectively the degradation rate of the chemoattractant , its production rate and the consumption rate of the nutrient by the bacteria , whereas and are the molecular diffusion coefficients . finally , these equations are completed with homogeneous neumann boundary conditions , _i.e. 
_ the purpose of this work is to present a numerical scheme for ( [ kinetic : eq])-([eq : bc : parabolique ] ) and to investigate numerically the occurrence of cells aggregation , pattern formation or travelling waves when it takes place , and the convergence to equilibrium otherwise for different geometries .several numerical methods have already been developed to solve the patlak - keller - segel model for chemotaxis using finite element methods , finite volume methods , and the references therein .other models have also been investigated numerically .however , it seems that none of the above - mentioned numerical approaches have been studied for kinetic models ( [ kinetic : eq])-([eq : bc : parabolique ] ) . in the present paperwe propose a numerical scheme for ( [ kinetic : eq])-([eq : bc : parabolique ] ) and investigate the influence of the geometry on the collective behavior of bacteria .we now briefly outline the contents of the paper . in the next section ,we introduce the numerical approximation of ( [ kinetic : eq ] ) and ( [ ns : eq ] ) and describe the numerical approximation of the boundary , , .two points are worth mentioning here .first , we restrict ourselves to the case of specular reflection which seems to be the most appropriate for the study of bacteria .one difficulty in the approximation of kinetic models for chemotaxis , is related to the fact that it can exhibits very different phenomena as finite time blow - up , cell aggregation , wave propagations . at the discrete level ,our approximation should also be able to describe a similar property .secondly , an important step is to discretize appropriately the effect of boundary .the final section is devoted to numerical simulations performed with the numerical scheme presented in section [ sec : scheme ] .we investigate numerically cells aggregation , convergence to equilibrium , and wave propagation in a bounded domain .we consider a computational domain \times[y_{\min},y_{\max}]\times v ] .the computational domain is covered by an uniform cartesian mesh , where , are defined by the mesh steps are respectively , and .let us denote an approximation of the distribution function .we introduce the following finite difference scheme where , is a second - order approximation of the transport operator , and is an approximation of the boltzmann type tumbling operator .we will now focus on searching the approximation . by using the trapezoidal rule, we have where is an approximation of the transition kernel .we assume that the nutrient density and the chemottractant are known at time .moreover with the hypothesis that is constant , the condition implies that .thus it remains to search an approximation of the tumbling rate , _i.e. 
_ .it is also equivalent to search the local gradients of nutrient and attractant .we study only the local gradients of nutrient , since the local gradient of attractant has the same expression as .we discretize the local gradient of nutrient as follows where is a discrete time derivative and will be given in the section [ sec : discratisation : parabolique ] .moreover we use centred difference approximation for , which yields in summary , the discrete tumbling rate reads finally we reduce the tumbling operator as follows the equations for nutrient density and chemottractant are parabolic equations with source terms depending on the distribution function of density .we study again the discretization for nutrient density , since the one for chemottractant is similar .the euler implicit scheme is used for time integration .hence the scheme for nutrient density reads where is an approximation of time derivative as follows then we use a five points finite difference scheme to discretize finally , a trapezoidal rule is applied for the integration , _ as we mentioned at the end of section [ sec : model ] , an appropriate discretization of boundary condition is important to exhibit very different phenomena .therefore , we present respectively the numerical approximations for specular reflection boundary condition and neumann boundary condition .the specular reflection boundary condition in 2d reads as (t,\mathbf{x},\mathbf{v})=f(t,\mathbf{x},\mathbf{v}'),\ ] ] with where is the point at the boundary , is the interior normal at point .we note that this specular reflection is just like a mirror .for example , if we follows the characteristic of the flux , its reflected flow is corresponding to the velocity .we thus use a mirror procedure to construct at each ghost point .for instance from the ghost point , we can find an inward normal , which crosses the boundary at ( see figure [ fig:2ddomain ] ) . for velocity ,its reflected velocity with respect to is .thus instead of computing at the ghost point , we compute at mirror point with respect to the boundary as follows spatially two - dimensional cartesian mesh . is interior point , is ghost point , is the point at the boundary , is the point for extrapolation , the dashed line is the boundary.,width=377 ] the last step is to approximate using of interior domain . let us assume that the values of the distribution function on the grid points in are given .we first construct a stencil composed of grid points of for interpolation or extrapolation .for instance as it is shown in figure [ fig:2ddomain ] , the inward normal intersects the grid lines , , at points , , . then we choose the three nearest points of the cross point , in each line , _ i.e. _ marked by a large circle . 
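Before the interpolation stencil just described is assembled, the mirror procedure itself is simple to write down. The sketch below assumes a circular domain described by a signed distance function, and uses plain bilinear interpolation in place of the nine-point Lagrange stencil and WENO safeguard discussed here; it only illustrates how a ghost value is obtained from the mirror point and the specularly reflected velocity v' = v - 2(v·n)n.

```python
import numpy as np

R = 1.0                                   # assumed domain: disk of radius R

def signed_distance(x):
    return np.linalg.norm(x) - R          # > 0 outside, < 0 inside

def inward_normal(x):
    # For a disk, the normal direction at the ghost point coincides with
    # the normal at the boundary crossing point along the same ray.
    return -x / np.linalg.norm(x)

def mirror_point(x_ghost):
    """Reflect a ghost point across the boundary along the inward normal."""
    d = signed_distance(x_ghost)
    return x_ghost + 2.0 * d * inward_normal(x_ghost)

def reflected_velocity(v, n):
    """Specular reflection v' = v - 2 (v.n) n."""
    return v - 2.0 * np.dot(v, n) * n

def bilinear(f, xmin, ymin, h, p):
    """First-order interpolation of grid values f at point p (a stand-in
    for the higher-order Lagrange stencil described in the text)."""
    i = int((p[0] - xmin) // h); j = int((p[1] - ymin) // h)
    tx = (p[0] - (xmin + i * h)) / h; ty = (p[1] - (ymin + j * h)) / h
    return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
            + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])

# Ghost value: evaluate the distribution at the mirror point, with the
# velocity reflected about the inward normal.
x_ghost = np.array([0.95, 0.45])          # just outside the disk
v       = np.array([0.6, 0.8])
n       = inward_normal(x_ghost)
x_m     = mirror_point(x_ghost)
v_ref   = reflected_velocity(v, n)

h = 0.05
xs = np.arange(-1.2, 1.2 + h, h)
f_grid = np.exp(-5 * (xs[:, None] ** 2 + xs[None, :] ** 2))   # placeholder data
print("mirror point:", x_m, " reflected velocity:", v_ref)
print("ghost value :", bilinear(f_grid, xs[0], xs[0], h, x_m))
```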
From these nine points we can build a Lagrange polynomial, evaluate it at the mirror point, and thus obtain an approximation of the distribution function there. We distinguish two cases of mirror points: when the mirror point is surrounded by interior points, we interpolate at the mirror point; otherwise a WENO-type extrapolation is used to prevent spurious oscillations, as detailed below. Note that in some cases we cannot find a stencil of nine interior points; for instance, when the interior domain has a small acute corner, the normal may not have three crossing points inside the domain, or we may not find three nearest interior points on each grid line. In such a case we alternatively use a first-degree polynomial with a four-point stencil, or even a zero-degree polynomial with a one-point stencil, constructed in the same way as above.

*Two-dimensional WENO-type extrapolation.* A WENO-type extrapolation, an extension of the WENO scheme, was developed to prevent oscillations while maintaining accuracy. The key point is the definition of smoothness indicators, designed to choose automatically between high-order accuracy and a lower-order but more robust extrapolation; a slightly modified version of the method makes the smoothness indicators invariant with respect to the scaling of the solution. In the 2D case, the substencils for extrapolation are chosen around the inward normal so that Lagrange polynomials of increasing degree can be constructed (for instance, the three substencils shown in Figure [fig:2ddomain]). Once the substencils are chosen, one builds the Lagrange polynomials on each of them; the WENO extrapolation is then a convex combination of their values at the mirror point, with nonlinear weights obtained, as in the standard WENO construction, from smoothness indicators defined through the derivatives of each polynomial (indexed by a multi-index) integrated over the cell $[x_p-\Delta x/2,\,x_p+\Delta x/2]\times[y_p-\Delta y/2,\,y_p+\Delta y/2]$.

For the numerical test, the tumbling kernel describes the frequency of reorientation,
$$K[S](\mathbf{v}',\mathbf{v})=\psi_s\left(\partial_t \log S\,+\,\mathbf{v}'\cdot\nabla_\mathbf{x}\log S\right),$$
and the chemical signal $S$ is secreted by the cells, following the reaction-diffusion equation. We use a square domain, uniformly divided by a Cartesian mesh; the velocity space is the unit circle, uniformly divided into equal parts. All the parameters are chosen as in the cited reference and are listed in Table [tab:parameters]. The initial chemoattractant concentration is equal to 0, and the initial distribution function is independent of the velocity and normalized by the total mass.

Table [tab:parameters]: Summary of the values used in the simulation.

In this paper we present a new algorithm based on a Cartesian mesh for the numerical approximation of kinetic models, set on an arbitrary geometry, modelling chemosensitive movements. We first present a kinetic model for chemotactic bacteria interacting with two chemical substances, i.e.,
_ nutrient and chemoattractant . then we give the numerical discretization for this kinetic model and the numerical method for the boundary conditions based on a cartesian mesh . a wide variety of numerical tests are shown and compared with biological experiments . we conclude that , on the one hand , this kinetic model represents well the behavior of chemotactic bacteria and , on the other hand , our numerical method is accurate and efficient for numerical simulation .

nikhil mittal , elena o. budrene , michael p. brenner , and alexander van oudenaarden , motility of escherichia coli cells in clusters formed by chemotactic aggregation , _ proceedings of the national academy of sciences of the united states of america _ , * 100 * ( 2003 ) .

j. saragosti , v. calvez , n. bournaveas , b. perthame , a. buguin and p. silberzan , directional persistence of chemotactic bacteria in a traveling concentration wave , _ proceedings of the national academy of sciences of the united states of america _ , ( 2011 ) .
|
we present a new algorithm based on a cartesian mesh for the numerical approximation of kinetic models for chemosensitive movements set in an arbitrary geometry . we investigate the influence of the geometry on the collective behavior of bacteria described by a kinetic equation interacting with nutrients and chemoattractants . numerical simulations are performed to verify accuracy and stability of the scheme and its ability to exhibit aggregation of cells and wave propagations . finally some comparisons with experiments show the robustness and accuracy of such kinetic models . keywords . bacterial chemotaxis , chemical signaling , kinetic theory
|
the analysis of lifetime or failure time data has been of considerable interest in many branches of statistical applications . in many experiments , censoring is very common due to inherent limitations , or time and cost considerations on experiments .the data are said to be censored when , for observations , only a lower or upper bound on lifetime is available .thus , the problem of parameter estimation from censored samples is very important for real - data analysis . to obtain the parameter estimate ,some numerical optimization methods are required to find the mle .however , ordinary numerical methods such as the gauss - seidel iterative method and the newton - raphson gradient method may be very ineffective for complicated likelihood functions and these methods can be sensitive to the choice of starting values used . for censored sample problems , several approximations of the mle and the best linear unbiased estimate ( blue )have been studied instead of direct calculation of the mle . for example, the problem of parameter estimation from censored samples has been treated by several authors . has studied the mle and provided the blue for type - i and type - ii censored samples from an normal distribution . has derived the blue for a symmetrically type - ii censored sample from a laplace distribution for sample size up to . has given an approximation of the mle of the scale parameter of the rayleigh distribution with censoring . has given an approximation of the mle for a type - ii censored sample from an normal distribution . has given the blue for a type - ii censored sample from a laplace distribution .the blue needs the coefficients and , which were tabulated in , but the table is provided only for sample size up to . in addition , the approximate mle and the blue are not guaranteed to converge to the preferred mle .the methods above are also restricted only to type - i or type - ii ( symmetric ) censoring for sample size up to only .these aforementioned deficiencies can be overcome by the em algorithm . in many practical problems, however , the implementation of the ordinary em algorithm is very difficult .thus , proposed to use the mcem when the em algorithm is not available .however , the mcem algorithm presents a serious computational burden because in the e - step of the mcem algorithm , monte carlo random sampling is used to obtain the expected posterior log - likelihood .thus , it is natural to look for a better method .the proposed method using the quantile function instead of monte carlo random sampling has greater stability and also much faster convergence properties with smaller sample sizes .moreover , in many experiments , more general incomplete observations are often encountered along with the fully observed data , where incompleteness arises due to censoring , grouping , quantal responses , etc .one general form of an incomplete observation is of interval form .that is , a lifetime of a subject is specified as . in this paper , we deal with computing the mle for this general form of incomplete data using the em algorithm and its variants , mcem and qem .this interval form can handle right - censoring , left - censoring , quantal responses and fully - observed observations .the proposed method includes the aforementioned existing methods as a special case .this proposed method can also handle the data from intermittent inspection which are referred to as _ grouped data _ which provide only the number of failures in each inspection period . 
and have given an approximation of the mle under the exponential distribution only . described maximum likelihood methods , but the mle should be obtained by ordinary numerical methods .the proposed method enables us to obtain the mle through the em or qem sequences under a variety of distribution models .the rest of the paper is organized as follows . in section [ sec : em ] , we introduce the basic concept of the em and mcem algorithms . in section [ sec : qem ] , we present the quantile implementation of the em algorithm . in section [ sec : model ] , we provide the likelihood construction with interval data and its em implementation issues . section [ sec : parameter ] deals with the parameter estimation procedure of exponential , normal , laplace , rayleigh , and weibull distributions with interval data . in order to compare the performance of the proposed method with the em and mcem methods , monte carlo simulation study is presented in section [ sec : simulation ] followed up with examples of various applications in section [ sec : examples ] .this paper ends with concluding remarks in section [ sec : conclusion ] .in this section , we give a brief introduction of the em and mcem algorithms .the em algorithm is a powerful computational technique for finding the mle of parametric models when there is no closed - form mle , or the data are incomplete .the em algorithm was introduced by to overcome the above difficulties . for more details about this em algorithm ,good references are , , , and .when the closed - form mle from the likelihood function is not available , numerical methods are required to find the maximizer ( _ i.e. _ , mle ) .however , ordinary numerical methods such as the gauss - seidel iterative method and the newton - raphson gradient method may be very ineffective for complicated likelihood functions and these methods can be sensitive to the choice of starting values used .in particular , if the likelihood function is flat near its maximum , the methods will stop before reaching the maximizer .these potential problems can be overcome by using the em algorithm .the em algorithm consists of two iterative steps : ( i ) expectation step ( e - step ) and ( ii ) maximization step ( m - step ) .the advantage of the em algorithm is that it solves a difficult incomplete - data problem by constructing two easy steps .the e - step of each iteration only needs to compute the conditional expectation of the log - likelihood with respect to the incomplete data given the observed data .the m - step of each iteration only needs to find the maximizer of this expected log - likelihood constructed in the e - step , which only involves handling `` complete - data '' log - likelihood function .thus , the em sequences repeatedly maximize the posterior log - likelihood function of the complete data given the incomplete data instead of maximizing the potentially complicated likelihood function of the incomplete data directly .an additional advantage of this method compared to other direct optimization techniques is that it is very simple and it converges reliably .in general , if it converges , it converges to a local maximum .hence in the case of the unimodal and concave likelihood function , the em sequences converge to the global maximizer from any starting value .we can employ this methodology for parameter estimation from interval data because interval data models are special cases of incomplete ( missing ) data models . 
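the two - step structure can be written as a short generic skeleton ( a sketch for a scalar parameter ; the e - step and m - step are problem - specific functions to be supplied by the user ) :

    def em(theta0, e_step, m_step, tol=1e-8, max_iter=500):
        # e_step(theta): expected complete-data sufficient statistics given the
        #                observed data and the current parameter value
        # m_step(stats): maximizer of the resulting expected log-likelihood
        theta = theta0
        for _ in range(max_iter):
            theta_new = m_step(e_step(theta))
            if abs(theta_new - theta) <= tol * (abs(theta) + tol):  # scalar parameter
                return theta_new
            theta = theta_new
        return theta

for a vector - valued parameter the same loop applies with the absolute difference replaced by a norm .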
here , we give a brief introduction of the em and mcem algorithms .denote the vector of unknown parameters by . then the complete - data likelihood is where and we denote the observed part of by and the incomplete ( missing ) part by . denote the estimate at the -th em sequences by .the em algorithm consists of two distinct steps : * ` compute ` + where . * ` find ` + which maximizes in . in some problems ,the implementation of the e - step is difficult . propose to use the mcem to avoid this difficulty .the e - step is approximated by using monte carlo integration .simulating from the conditional distribution , we can approximate the expected posterior log - likelihood as follows : where .this method is called the monte carlo em ( mcem ) algorithm .major drawback to mcem is that it is very slow because it requires a large sample size in order to possess stable convergence properties .this problem can be overcome by the proposed method using the quantile function .the key idea underlying the quantile implementation of the em algorithm can be easily illustrated by the following example .the data in the example were first presented by and have since then been used very frequently for illustration in the reliability engineering and survival analysis literature including and .an experiment is conducted to determine the effect of a drug named 6-mercaptopurine ( 6-mp ) on leukemia remission times .a sample of size leukemia patients is treated with 6-mp and the times of remission are recorded .there are individuals for whom the remission time is fully observed , and the remission times for the remaining 12 individuals are randomly censored on the right . letting a plus ( + )denote a censored observation , the remission times ( in weeks ) are _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xxx= xxx= xxx= xxx= xxx= xxx=xxx= xxx= xxx= xxx= xxx 6 6 6 7 10 13 16 + 23 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ using an exponential model , we can obtain the complete likelihood function and the conditional pdf where is a right - censoring time of test unit . using the above conditional pdf , we have the expected posterior log - likelihood then the monte carlo approximation of the expected posterior log - likelihood is given by where a random sample is from . in the monte carlo approximation , the term is approximated by this approximation can be improved by using the quantile function . for the conditional pdf ,the quantiles of , denoted by , are given by for .we can choose from any of the fractions , , , , etc . using the quantile function, we have the following approximation it is noteworthy that that a random sample in the monte carlo approximation is usually generated by using the above quantile function with from a random sample having a uniform distribution between and . 
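for this exponential example the three updates can be written out explicitly . the sketch below ( illustrative numbers , not the 6-mp data ) parametrizes the exponential by its mean ; the exact mle for right - censored exponential data , the total time on test divided by the number of observed failures , provides a convenient reference value .

    import numpy as np

    rng = np.random.default_rng(1)

    # synthetic right-censored data: delta = 1 for an observed failure,
    # delta = 0 for a right-censored time
    t     = np.array([6.0, 6.0, 7.0, 10.0, 13.0, 16.0, 23.0, 6.0, 9.0, 11.0, 20.0, 25.0])
    delta = np.array([1,   1,   1,   1,    1,    1,    1,    0,   0,   0,    0,    0])
    n = len(t)

    def em_update(theta):
        # E[Z | Z > c] = c + theta, by the lack of memory of the exponential
        z = np.where(delta == 1, t, t + theta)
        return z.sum() / n

    def mcem_update(theta, m=100):
        z = t[delta == 0][:, None] + rng.exponential(theta, size=(np.sum(delta == 0), m))
        return (t[delta == 1].sum() + z.mean(axis=1).sum()) / n

    def qem_update(theta, m=100):
        p = (np.arange(1, m + 1) - 0.5) / m                   # p_k = (k - 1/2)/m
        z = t[delta == 0][:, None] - theta * np.log(1.0 - p)  # conditional quantiles c - theta*log(1-p)
        return (t[delta == 1].sum() + z.mean(axis=1).sum()) / n

    theta = 10.0                      # starting value
    for _ in range(20):
        theta = qem_update(theta)     # swap in em_update or mcem_update to compare
    print(theta, t.sum() / delta.sum())   # the second number is the exact mle

with the midpoint fractions p_k = ( k - 1/2 ) / m the qem update is deterministic given the current iterate , whereas the mcem update fluctuates from iteration to iteration unless m is large ; for moderate m the qem limit already lies very close to the exact mle .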
]fig.[fig : qfunction ] presents the mcem and qem approximations of expected posterior log - likelihood functions for ( dashed curve ) , 100 ( dotted curve ) and 1000 ( dot - dashed curve ) at the first step ( ) , along with the exact expected posterior log - likelihood ( solid curve ) .the mcem and qem algorithms were run with starting value .as can be seen in the figure , the mcem and qem both successfully converge to the expected posterior log - likelihood as gets larger .note that the qem is much closer to the true expected posterior log - likelihood for smaller values of . ) .[ fig : iteration ] ] fig.[fig : iteration ] displays the iterations of the em and qem sequences in the example from the starting value .the horizontal solid lines indicate the mle ( ) .the figures clearly show that the qem is stable and converges very fast to the mle .even with very small quantile sizes , the qem outperforms the mcem .it should be noted that the qem with performs better than the mcem with . another way to viewthe quantile implementation idea is by looking at the riemann - stieltjes integral .for simplicity of presentation , we consider the case where is one - dimensional .denote .let us consider a following riemann - stieltjes sum , in the limit as , we have using a change - of - variable integration technique with , we have hence the quantile approximation of the expectation posterior log - likelihood is a riemann - stieltjes sum which converges to the true integration . with , this sumis also known as the extended midpoint rule and it is accurate to the order of ; see . that is , where . on the other hand ,the accuracy of the monte carlo approximation can be assessed as follows . by the central limit theorem , we have and this is accurate to the order of .it is immediate from the weak law of large numbers that . using this and ( [ eq : clt ] ) , we have from this , it is clear that the qem is much more accurate than the mcem .we can generalize the above result as follows .in the e - step , we replace the monte carlo approximation with the quantile approximation where with , and is any fraction . in this paper, we use .it should be noted that the approximation of the expected posterior log - likelihood in the proposed method can be viewed as being similar to a quasi - monte carlo approximation in the sense that the quasi - monte carlo approximation also uses a _ deterministic _ sequence rather than a _ random _ sample .in fact , shows that there exist such sequences in the normalized integration domain , which ensure an accuracy to the order of , where is the dimension of the integration space ( see * ? ? ? * ) .thus , using the quasi - monte carlo sequences in the normalized integration domain , one can improve the accuracy of the integration in the e - step of the mcem algorithm which leads to accuracy on the order of with .however , we should point that the proposed qem method leads to accuracy on the order of . therefore , although using the quasi - monte carlo approximation can improve the mcem , the inaccuracy in that case will be greater than that of the proposed qem method . 
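the two rates of convergence can be checked on a toy expectation with a known value . in the sketch below ( illustrative , not one of the simulations reported later ) , z follows a standard exponential distribution and g(z) = ( 1 - e^{-z} )^2 , so that e[g(z)] = 1 - 2(1/2) + 1/3 = 1/3 exactly ; in the transformed variable the integrand is simply p^2 , and the midpoint error is 1/(12 m^2) .

    import numpy as np

    rng = np.random.default_rng(0)

    g = lambda z: (1.0 - np.exp(-z)) ** 2    # E[g(Z)] = 1/3 for Z ~ Exp(1)
    Q = lambda p: -np.log(1.0 - p)           # quantile function of Exp(1)
    exact = 1.0 / 3.0

    for m in (10, 100, 1000):
        p = (np.arange(1, m + 1) - 0.5) / m              # midpoints p_k = (k - 1/2)/m
        quant_err = abs(g(Q(p)).mean() - exact)          # deterministic, O(1/m^2)
        mc_err = np.sqrt(np.mean([(g(rng.exponential(size=m)).mean() - exact) ** 2
                                  for _ in range(500)])) # root-mean-square error, O(1/sqrt(m))
        print(f"m={m:5d}  quantile error={quant_err:.2e}  monte carlo rms error={mc_err:.2e}")

the quantile ( midpoint ) error drops by a factor of about 100 per tenfold increase of m , while the monte carlo error drops only by a factor of about 3 , in agreement with the orders stated above .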
also incorporating the quantiles from the proposed method into the m - step to obtain the mle is quite straightforward .on the other hand , if the quasi - monte carlo sequences in the normalized integration domain are used , it would be irrelevant to the use of its sequences in the m - step to obtain the maximizer .thus , the focus of the paper is to construct the em algorithm using the quantiles so that the mles can be straightforwardly obtained .in this section , we develop the likelihood functions which can be conveniently used for the em , mcem and qem algorithms .the general form of an incomplete observation is often of interval form .that is , the lifetime of a subject may not be observed exactly , but is known to fall in an interval : .this interval form includes censored , grouped , quantal - response , and fully - observed observations .for example , a lifetime is left - censored when and a lifetime is right - censored when .the lifetime is fully observed when .suppose that are observations on random variables which are independent and identically distributed and have a continuous distribution with the pdf and cdf . interval data from experiments can be conveniently represented by pairs with ] notation which implies ] ) for , it is easily seen from lhospital rule that .if the observation is right - censored ( _ i.e. _ , ] and ] that where ] .* + differentiating with respect to and and setting this to zero , we obtain arranging for , we have the equation of the em sequence of is the solution of the above equation . after finding , we obtain the em sequence of in this weibull case , it is extremely difficult or may be impossible to find the explicit expectations of ] in the e - step , but the quantile function of the random variable at the -th step can be easily obtained . from ( [ eq : fz ] ), we have ^{1/\beta^{(s)}}.\ ] ] using the above quantiles , we obtain the following qem algorithm .* + denote the quantile approximation of by . then , we have * + differentiating with respect to and and setting this to zero , we obtain arranging for , we have the equation of the qem sequence of is the solution of the above equation . 
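a compact sketch of the resulting iteration for interval data ( l , r ] under the weibull model is given below . it is an illustration rather than a transcription of the equations above : the conditional quantiles are generated through the weibull distribution function , the m - step maximizes the quantile - weighted complete - data log - likelihood by solving the standard weibull profile score with a bracketed root search ( in the spirit of the bounds discussed in the appendix ) , and the data set , the bracket and the use of scipy s brentq are choices made only for the example .

    import numpy as np
    from scipy.optimize import brentq

    def weib_cdf(z, eta, beta):
        return 1.0 - np.exp(-(z / eta) ** beta)

    def weib_quantile(u, eta, beta):
        return eta * (-np.log(1.0 - u)) ** (1.0 / beta)

    def conditional_quantiles(l, r, eta, beta, m=20):
        # quantiles of Z conditioned on l < Z <= r at the current (eta, beta);
        # r = np.inf encodes right censoring, l = 0 left censoring
        p = (np.arange(1, m + 1) - 0.5) / m
        Fl = weib_cdf(l, eta, beta)
        Fr = 1.0 if np.isinf(r) else weib_cdf(r, eta, beta)
        return weib_quantile(Fl + p * (Fr - Fl), eta, beta)

    def weighted_weibull_mle(z, w):
        # weighted complete-data weibull mle: bracketed root search for the shape,
        # then the scale in closed form
        lz = np.log(z)
        mean_lz = np.average(lz, weights=w)
        def score(beta):
            y = (z / z.max()) ** beta        # rescaling avoids overflow for large beta
            return np.sum(w * y * lz) / np.sum(w * y) - 1.0 / beta - mean_lz
        beta = brentq(score, 1e-3, 1e3)
        eta = (np.sum(w * z ** beta) / np.sum(w)) ** (1.0 / beta)
        return eta, beta

    # illustrative interval data (l, r]: exact observations have l == r,
    # right-censored ones have r = np.inf
    data = [(2.1, 2.1), (3.0, 4.0), (0.0, 1.5), (5.2, np.inf), (2.8, 2.8), (4.0, 6.0)]

    eta, beta = 3.0, 1.0
    for _ in range(200):
        zs, ws = [], []
        for l, r in data:
            q = np.array([l]) if l == r else conditional_quantiles(l, r, eta, beta)
            zs.append(q)
            ws.append(np.full(q.size, 1.0 / q.size))  # each observation has total weight 1
        eta, beta = weighted_weibull_mle(np.concatenate(zs), np.concatenate(ws))
    print(eta, beta)

right - censored , left - censored , grouped and fully observed data are all covered by the same loop , since they only differ in the pair ( l , r ) .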
after finding , we obtain the qem sequence of in the m - step , we need to estimate the shape parameter by numerically solving ( [ eq : eebeta ] ) , but this is only a one - dimensional root search and the uniqueness of this solution is guaranteed .lower and upper bounds for the root are explicitly obtained , so with these bounds we can find the root easily .we provide the proof of the uniqueness under quite reasonable conditions and give lower and upper bounds of in the appendix .in order to examine the performance of the proposed method , we use the monte carlo simulations with 5000 replications .we present the performance of this new method with the em and mcem estimators by comparing their estimated biases and the mean square errors ( mse ) .the biases are calculated by the sample average ( over 5000 replications ) of the differences between the method under consideration and the mle .the mse are also obtained by the sample variance of the differences between the method under consideration and the mle .l location & + estimator & & & + + em & 1.342988 & 1.779955 & + mcem + & 7.169276 & 8.381887 & 2.123573 + & 2.223300 & 8.170053 & 2.178633 + & 7.135135 & 8.417492 & 2.114590 + & 2.265630 & 8.433365 & 2.110610 + qem + & 2.511190 & 2.558357 & 6.957412 + & 2.382535 & 2.305853 & 7.719289 + & 2.349116 & 2.432084 & 7.318639 + & 3.232357 & 2.176558 & 8.177841 + scale & + estimator & & & + + em & 3.706033 & 1.143094 & + mcem + & 1.139404 & 2.133204 & 5.358577 + & 3.540433 & 2.069714 & 5.522955 + & 1.137090 & 2.130881 & 5.364419 + & 3.621140 & 2.150602 & 5.315227 + qem + & 5.580507 & 1.272262 & 8.984739 + & 5.890585 & 1.418158 & 8.060413 + & 6.315578 & 1.644996 & 6.948917 + & 9.699447 & 4.529568 & 2.523627 + first , a random sample of size was drawn from an normal distribution with and and the largest five have been right - censored .all the algorithms were stopped after 10 iterations ( ) .the results are presented in table [ mcem : simulation1 ] . to help compare the mse, we also find the simulated relative efficiency ( sre ) which is defined as from the result , the em is as efficient as the mle ( the mse of the em is almost zero ) .compared to the mcem , the qem has much smaller mse and much higher efficiency .for example with , the sre of the mcem is only for and for . on the other hand ,the sre of the qem is for and for . comparing the results in table [ mcem : simulation1 ], the qem with only performs better than the mcem with .next , we draw a random sample of size from a rayleigh distribution with with the largest five being right - censored .we compare the qem only with the mcem because the em is not available .the results are presented in table [ mcem : simulation2 ] for rayleigh data .the results also show that the qem clearly outperforms the mcem ..estimated biases , mse , and sre of estimators under consideration with rayleigh data .[ cols="<,^ , > , > " , ]in this paper , we have shown that the qem algorithm offers clear advantages over the mcem .it reduces the computational burden required when using the mcem because a much smaller size is required . unlike the mcem, the qem also possesses very stable convergence properties at each step .the qem algorithm provides a flexible and useful alternative when one is faced with a difficulty with the implementation of the ordinary em algorithm .a variety of examples of application were also illustrated using the proposed method .analogous to the approach of , the uniqueness of the solution of ( [ eq : eebeta ] ) can be proved as follows . 
for convenience , letting we rewrite ( [ eq : eebeta ] ) by the function is strictly decreasing from to on $ ] , while is increasing because it follows from the jensen s inequality that \ge 0.\ ] ] now , it suffices to show that for some .since where , we have for some unless for all and . this condition is extremely unrealistic in practice .next , we provide upper and lower bounds of .these bounds guarantee the unique solution in the interval and enable the root search algorithm to find the solution very stably and easily .since is increasing , we have that is , denote the above lower bound by .then , since is again increasing , we have , which leads to freireich , e. j. , gehan , e. , frei , e. , schroeder , l. r. , wolman , i. j. , anbari , r. , burgert , e. o. , mills , s. d. , pinkel , d. , selawry , o. s. , moon , j. h. , gendel , b. r. , spurr , c. l. , storrs , r. , haurani , f. , hoogstraten , b. , and lee , s. ( 1963 ) . the effect of 6-mercaptopurine on the duration of steroid - induced remissions in acute leukemia : a model for evaluation of other potentially useful therapy . , * 21 * , 699716 .
|
the expectation - maximization ( em ) algorithm is a powerful computational technique for finding the maximum likelihood estimates for parametric models when the data are not fully observed . the em is best suited for situations where the expectation in each e - step and the maximization in each m - step are straightforward . a difficulty with the implementation of the em algorithm is that each e - step requires the integration of the posterior log - likelihood function . this can be overcome by the monte carlo em ( mcem ) algorithm . this mcem uses a random sample to estimate the integral in each e - step . but this mcem converges very slowly to the true integral , which causes computational burden and instability . in this paper we present a quantile implementation of the expectation - maximization ( qem ) algorithm . this proposed method shows a faster convergence and greater stability . the performance of the proposed method and its applications are numerically illustrated through monte carlo simulations and several examples . * keywords : * em algorithm , grouped data , incomplete data , interval data , maximum likelihood , mcem , missing data , quantile .
|
in two recent works we obtained analytical expressions for the solid angle subtended by a cylindrical shaped detector at a point cosine source in the cases where the source axis is orthogonal or parallel prata2003c to the cylinder axis of revolution . as ancillary results we also derived expressions for the solid angle defined by a circular disc in the cases where the source axis is orthogonal ( ) or parallel ( ) to the symmetry axis of the disc . this latter result ( ) appeared in a previous work by hubbell _et al _ ( * ? ? ?29 ) , where it is also credited to other authors herm00,foot15 . in that work , a quite general treatment of the radiation field due to a circular disc source with axial symmetryis given in terms of a legendre expansion and appears as a subsidiary result which is interpreted as the response of a plane detector parallel to a lambertian uniformly distributed disc source .circular apertures and sources are often considered in optics and radiation physics ; and disc - shaped detectors are widely used in nuclear science ( e.g. neutron activation analysis ) .references to other works on this subject in the context of nuclear physics can be found in and .while the case of a point isotropic source has been treated to great extent jaff54,mack56,mask56,mask57,gard71,prata2003b , the case of a point cosine source has , to the best of our knowledge , attracted little attention , with the already mentioned exception of . for these reasons , in the present workwe extend the scope of the previous results by performing the calculation of the solid angle defined by a circular disc and a point cosine source pointing at an arbitrary direction , under the sole restriction that the disc lies in the half - space illuminated by the source .the solid angle subtended by a given surface at a point source located at the origin can be defined by where is the source angular distribution . in the case of a point cosinethe distribution is defined in relation to some direction axis specified by the unit vector and it is given by , the factor ensuring that .for it follows that so that the source only emits into the hemisphere around . because of this , the integration limits in the rhs of eq .[ eq_omega_surf1 ] are to be determined from the conditions that and that each included direction hits the surface . in the followingwe shall assume that the position of the disc is always chosen is such a manner that for each point on the disc .this restriction greatly simplifies the calculation without , we believe , reducing the practical interest of the expressions .this is so because , in actual situations , the source is distributed over some planar surface ; the source axis is coincident with the normal to the surface and then the restriction simply requires that the detector be held somewhere on the illuminated side of the radiating surface or eventually directly on the surface but does not intersect it .to proceed it is advantageous to consider two coordinate systems ( s and s ) with a common origin also coincident with the source position . in the s system ( fig .[ fig1 ] ) the axis is aligned with the source direction ; the position of the disc center ( c ) is specified by and ; and the symmetry axis of the disc is specified by the angles and , being the angle between the source and the disc axes . when working in the s system ( fig . [ fig2 ] ) , the axis is parallel to the disc axis , c is located by means of and ; and is given by angles and . 
due to the symmetry of the source it is possible to choose the axes so that the coordinate of c is zero in each coordinate system . also from the symmetry of the sourceit is clear that the solid angle is an even function of or and in the following calculations we will thus assume that and . finally , it is sufficient to consider .the restriction that over the whole disc is easily expressed in s by being the disc radius .this is automatically satisfied , provided , but otherwise , it further reduces the range of variation of to it is readily found that the coordinates of a point in s and s ( and ) are related by an orthogonal matrix ( ) : where is given by and its inverse is given by the transpose let and denote the coordinates of the disk center in s and s , respectively .as previously said , it is possible to choose the axes in each coordinate system so that . starting in the s system , setting and fixing and ,one then uses eq .[ eq_plin_m_p ] to obtain .the value of is determined by imposing that , which gives or , for , ~. \label{eq_gamma_arccos}\end{gathered}\ ] ] the values of and are obtained from and eq . [ eq_gamma_arccos ] can then be written as ~. \label{eq_gamma_r}\ ] ] conversely , starting in the s system with , using eq .[ eq_p_mt_plin ] to obtain and imposing that , there results ~. \label{eq_alfa_arccos}\end{gathered}\ ] ] and again , eq .[ eq_alfa_arccos ] can be cast as ~. \label{eq_alfa_d}\ ] ] the substitution of the expression for ( eq . [ eq_h_ld ] ) on the rhs of eq .[ eq_hbigger_r ] and a bit of algebra gives the restriction expressed in terms of s variables : if , the previous eq .can be rewritten as the solid angle ( eq . [ eq_omega_surf1 ] ) is best calculated in s. with the notation described in fig .[ fig2 ] , it is seen that and where , for a given direction , is the polar angle from the axis and is the azimuthal angle in the plane measured from the axis .the solid angle is then given by because the position of the disc is such that ( eq . eq_lbigger_r or , equivalently , eq . [ eq_hbigger_r ] ) , the integrations limits are determined only by the condition that each included direction hits the detector . referring to figs .[ fig2 ] and [ fig3 ] , it follows that where then , where and the values of and previously obtained . from ( * ? ? ?29 ) , or ( * ? ? ?33 ) , \label{eq_omega_parallel}\ ] ] and from ( * ? ? ?20 and 55 ) , ~ , \label{eq_omega_ortho}\ ] ] where in eq .[ eq_omega_ortho ] the absolute value of is used since is restricted only by eq . [ eq_lbigger_r ] and can thus be negative .we emphasize two cases : ( i ) the axis of symmetry of the disc is parallel to that of the source ( ) ; and ( ii ) the center of the disc is located on the source axis ( ) . * when , from eqs .[ eq_gamma_arccos ] , [ eq_h_ld ] and [ eq_r_ld ] there results that , and .the same results could of course be obtained from [ eq_alfa_arccos ] , eq_l_hr and [ eq_d_hr ] .the restriction imposed by either of the equivalent eqs .[ eq_lbigger_r ] or [ eq_hbigger_r ] gives .using eqs .[ eq_omega_3 ] , [ eq_omega_parallel ] and [ eq_m_def ] it is seen that the solid angle is independent from as expected and + + it was shown in that is continuous except for since + * when , eqs .[ eq_alfa_arccos ] , [ eq_l_hr ] and eq_d_hr give , and .+ using eqs .[ eq_omega_3 ] , [ eq_omega_parallel ] , [ eq_omega_ortho ] and [ eq_m_def ] , a little algebra yields + + is then independent from , as expected .. [ eq_omega_r_eq_0 ] can be approximated by + ~,r\ll h~. 
\label{eq_omega_r0_approx}\ ] ] + in the case where , it is straightforward to simplify eq .eq_omega_r_eq_0 to : + we present sample plots of the solid angle . in the following we choose to work with s parameters ( ) and consider throughout a disc of radius .we first address the case , so that , and . as said before ( section section_special_cases ) is not continuous as .this is illustrated in fig .[ fig4 ] where is plotted as a function of for different values . in fig .[ fig5 ] the situation where the disc center is on the source axis ( ) is represented . here, does not depend on ( section [ section_special_cases ] ) and plots of the solid angle as a function of are shown for different values , including the special case where , regardless of . to avoid restricting the range of variation of ( see eq .[ eq_beta_range_h_less_r ] ) , the values of were chosen such that . in figs .[ fig6 ] , [ fig7 ] and fig8 the effect of offsetting the position of the disc center from the source axis is shown for three offset values ( , and ) and for two values of ( , ) . as arguedbefore , the solid angle is an even function of and it is seen that for the larger tilting angle ( ) the dependence on is enhanced . when plotting as a function of while holding all other parameters constant , there _ can _ be a zero , i.e. a value of such that .looking at fig .[ fig2 ] it is clear that if * and * . setting in eq . eq_l_hr and solving for gives an equation for the zero , ~ , \label{eq_gamma0}\ ] ] which has no solution for .thus , this qualitative feature is dependent on the sign of , which actually distinguishes the situations where the disc always presents the same face to the source from those where the face presented depends on the value of .this is schematically explained in fig .[ fig9 ] where the effect of changing from to is shown as seen in s ( see also fig.[fig1 ] ) , when looking along the axis .if ( fig .[ fig9]a ) , the source always sees the lower face of the disc and for one has and , consequently , if ( fig . [fig9]b ) the source always looks at the lower face of the disc but is never zero . finally , for ( fig .[ fig9]c ) , it is seen that , as the disc swirls with increasing , the upper face is first shown ( e.g. ) and then the lower face is presented ( e.g. ) , which means that at some point in between .this behaviour is illustrated in fig .[ fig10 ] , for and .the zero shows up for .one should notice that in the preceding discussion it was implicitly assumed that when .we now proceed to show that eq .eq_hbigger_r , in the strict form guarantees that . using eq .[ eq_gamma0 ] to eliminate in eqs .[ eq_hbigger_r_strict ] and [ eq_d_hr ] gives and respectively . since therefore expressions for the solid angle subtended by a circular disc at a point cosine source were obtained , under the single restriction that the disc is located in the half - space illuminated by the source ( eq . eq_hbigger_r or eq .[ eq_lbigger_r ] ) .it was shown ( eq .[ eq_omega_3 ] ) that the solid angle can be decomposed into the combination of two components ( and ) corresponding to the situations where the symmetry axis of the disc is parallel ( eq .eq_omega_parallel ) or orthogonal ( eq . [ eq_omega_ortho ] ) to the source direction . 
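as a rough numerical cross - check of these expressions ( not part of the original derivation ) , one can sample directions from the cosine distribution around the source axis and count the fraction of rays that hit the disc . in the sketch below the source sits at the origin with its axis along + z , the disc lies in the illuminated half - space , and the hit fraction corresponds to the solid angle up to the overall normalization chosen for the cosine distribution ( here the angular weight integrates to one over the hemisphere ) ; the disc position , radius and orientation are arbitrary example values .

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_cosine_directions(n):
        # directions distributed proportionally to cos(chi) over the hemisphere
        # around the +z axis (the source axis of the s frame)
        u = rng.random(n)
        phi = 2.0 * np.pi * rng.random(n)
        sin_chi, cos_chi = np.sqrt(u), np.sqrt(1.0 - u)
        return np.column_stack((sin_chi * np.cos(phi), sin_chi * np.sin(phi), cos_chi))

    def hit_fraction(center, normal, radius, n=10 ** 6):
        # fraction of cosine-distributed rays from the origin that intersect the disc
        center = np.asarray(center, float)
        normal = np.asarray(normal, float)
        normal = normal / np.linalg.norm(normal)
        v = sample_cosine_directions(n)
        denom = v @ normal
        with np.errstate(divide="ignore", invalid="ignore"):
            t = (center @ normal) / denom            # ray parameter at the disc plane
        t = np.where(np.isfinite(t) & (t > 0), t, -1.0)
        hit = t[:, None] * v
        inside = np.linalg.norm(hit - center, axis=1) <= radius
        return np.mean((t > 0) & inside)

    # disc of radius 1 centred on the source axis at height h = 3, with its axis
    # parallel to the source axis; the cosine-weighted fraction is then
    # R**2 / (R**2 + h**2) = 0.1, a handy sanity check
    print(hit_fraction(center=[0.0, 0.0, 3.0], normal=[0.0, 0.0, 1.0], radius=1.0))

for tilted or off - axis discs the same routine reproduces the qualitative features discussed above , including the vanishing of the solid angle when the source sees the disc exactly edge - on .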
can be calculated relatively to two alternative coordinate systems ( s and s ) shown in figs .[ fig1 ] and [ fig2 ] .the parameters pertaining to each system ( , ) are related through eqs .[ eq_gamma_arccos ] , [ eq_h_ld ] , [ eq_r_ld ] , [ eq_alfa_arccos ] , [ eq_l_hr ] and eq_d_hr .i m grateful to joo prata for rewiewing this manuscript .i would like to thank professor john h. hubbell for providing a copy of the works by a.v .masket , a.h .jaffey and hubbell _ et al _ .this work was partially supported by fundao para a cincia e tecnologia ( grant bd/15808/98 - programa praxis xxi ) .prata , m.j . , 2003. analytical calculation of the solid angle defined by a cylindrical detector and a point cosine source with parallel axes . accepted rad .phys . chem .( rpc3140 ) .e - print : math - ph/0302003 .
|
we derive analytical expressions for the solid angle subtended by a circular disc at a point source with cosine angular distribution ( ) under the sole condition that the disc lies in the half - space illuminated by the source ( ) . the expressions are given with reference to two alternative coordinate systems ( s and s ) , s being such that the axis is parallel to the symmetry axis of the disc and s such that the axis is aligned with the source direction . sample plots of the expressions are presented . solid angle , point cosine source , disc detector , circular disc , analytic expressions 29.40.-n , 42.15.-i
|
dynamics of collective opinion formation is widely studied in various disciplines including statistical physics . in typical models of opinion formation ,agents interact and dynamically change the opinion , which i call the state , according to others states and perhaps the agent s own state .the voter model is a paradigmatic stochastic model of this kind . in the voter model, each agent flips the binary state at a rate proportional to the number of neighboring agents possessing the opposite state . in arbitrary finite contact networks and in some infinite networks ,the stochastic dynamics of the voter model always ends up with perfect consensus of either state . the time required before the consensusis reached has been characterized in many cases .the possibility of consensus and the relaxation time , among other things , have also been examined in other opinion formation models .the voter model as well as many other opinion formation models assume that the population is homogeneous .in fact , real agents are considered to be heterogeneous in various aspects .the agents heterogeneity has been incorporated into the voter model in the form of , for example , heterogeneous degrees ( i.e. , number of neighbors ) in the contact network , positions in the so - called watts - strogatz small - world network , heterogeneity in the flip rate , and zealosity . in the present study ,i examine extensions of the voter model in which some agents are not like - minded voters .such contrarian agents would transit to the state opposite to that of others and were first studied in ref .it should be noted that contrarians are assumed to dynamically change their states ; contrarians are assumed to be zealots ( i.e. , those that never change the state ) in a previous study . in models in which consensus is the norm in the absence of contrarians , contrarians often prohibit the consensus to be reached such that the dynamics finally reaches the coexistence of different states .this holds true for the majority vote model , ising model , so - called sznajd model , a model with a continuous state space , and a general model including some of these models .these models show phase transitions between a consensus ( or similar ) phase and a coexistence phase when the fraction of contrarians in the population ( i.e. , quenched randomness ) or the probability of the contrarian behavior adopted by all the agents in the population ( i.e. , annealed randomness ) is varied .the effects of contrarians have been also examined in the so - called minority game .in contrast to these nonlinear models , i focus on three linear extensions of the voter model with contrarian agents ( i.e. , quenched randomness ) . by linearity ,i mean to pertain to stochastic mass interaction . a previous study numerically examined coevolutionary dynamics of a linear extension of the voter model with contrarians and network formation . in contrast , i focus on a fixed and well - mixed population .i show that even a small density of contrarians changes the collective dynamics of the extended voter models from the consensus configuration to the coexistence configuration .i also analytically quantify the fluctuations in the agents behavior in the coexistence equilibrium .i consider three variants of the voter model with contrarians . the agent that obeys the state transition rule of the standard voter in the voter modelis referred to as congregator . 
the fraction of congregators and that of contrarians are denoted by and , respectively .contrarian is assumed to be a quenched property . in other words ,an agent is either congregator or contrarian throughout the dynamics .each agent , either congregator or contrarian , takes either state or state at any time .i denote the mean fraction of congregators in state within the congregator subpopulation by and that of contrarians in state within the contrarian subpopulation by ( ) .the mean fractions of congregators and contrarians in state in the congregator and contrarian subpopulations are given by and , respectively .i assume that the population is well mixed and contains agents .the continuous - time stochastic opinion dynamics is defined as follows .each congregator in state independently flips to state with the rate equal to the number of agents , no matter whether they are congregators and contrarians . likewise, each congregator in state flips to state with the rate equal to the number of agents .this assumption is common to the three models .the behavior of the congregator in the present models is the same as that of the voter in the standard voter model .the three models are different in the behavior of contrarians as follows . in model 1, it is assumed that contrarians oppose congregators and like contrarians . in other words ,each contrarian in state independently flips to state with the rate equal to the sum of the number of congregators and that of contrarians . in model 2 ,contrarians like congregators and oppose contrarians . in other words , each contrarian in state flips to state with the rate equal to the sum of the number of congregators and that of contrarians . in model 3 , contrarians oppose both congregators and contrarians . in other words ,each contrarian in state flips to state with the rate equal to the sum of the number of agents . in all models ,the parallel definition is applied to the flip rate for the contrarian to transit from to .it should be noted that the cognitive demand for the agents is considered to be the lowest for model 3 because the contrarian does not have to recognize the type of other agents when possibly updating its state .the definition of the three models is summarized in table [ tab:3 models def ] ..agents behavior in the three models .[ cols="^,^,^,^",options="header " , ]the rate equations for model 1 are given by ,\label{eq : x}\\ \frac{{\rm d}y}{{\rm d}t } = & ( 1-y)\left[x(1-x)+yy\right ] - y\left[xx+y(1-y)\right ] .\label{eq : y model 1}\end{aligned}\ ] ] if , the steady state is given by where denotes the values in the equilibrium .it should be noted that putting and in eq . yields the standard voter model . in this case , we obtain , which implies that is conserved .it is an artefact of the mean - field equation .in fact , stochastic dynamics of the voter model drives the population toward or , which are absorbing configurations .in contrast , and with whatever values are not absorbing in the present model with . using the relationship ,the following characteristic equation is obtained for the mean - field dynamics in the steady state given by eq .: because the real parts of the two eigenvalues obtained from eq .are negative , the steady state is stable .therefore , consensus is not asymptotically reached in this model , and the dynamics starting from an arbitrary initial condition tends to the steady state given by eq ., regardless of the density of contrarians , . 
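the stochastic dynamics itself is straightforward to simulate . the sketch below is an illustration written from the verbal flip rules stated above ( with the two states labelled 0 and 1 , and with the convention that an agent may count itself , which is immaterial for large populations ) ; it runs a gillespie simulation of the aggregated counts of state-1 congregators and state-1 contrarians for any of the three models . the population size , the contrarian density and the simulation length are arbitrary example values , and time is measured in the raw units of the per - agent rates .

    import numpy as np

    rng = np.random.default_rng(0)

    def contrarian_rates(model, Cx, Cy, Nx, Ny):
        # aggregated flip rates of contrarians (state 0 -> 1, state 1 -> 0)
        x0, x1 = Nx - Cx, Cx        # congregators in state 0 / 1
        y0, y1 = Ny - Cy, Cy        # contrarians  in state 0 / 1
        if model == 1:              # oppose congregators, like contrarians
            return y0 * (x0 + y1), y1 * (x1 + y0)
        if model == 2:              # like congregators, oppose contrarians
            return y0 * (x1 + y0), y1 * (x0 + y1)
        return y0 * (x0 + y0), y1 * (x1 + y1)   # model 3: oppose everybody

    def gillespie(model, Nx, Ny, Cx, Cy, t_max):
        # continuous-time simulation of the counts (Cx, Cy) of state-1
        # congregators and state-1 contrarians in a well-mixed population
        t, traj = 0.0, [(0.0, Cx, Cy)]
        while t < t_max:
            up_x = (Nx - Cx) * (Cx + Cy)              # congregator 0 -> 1
            dn_x = Cx * ((Nx - Cx) + (Ny - Cy))       # congregator 1 -> 0
            up_y, dn_y = contrarian_rates(model, Cx, Cy, Nx, Ny)
            rates = np.array([up_x, dn_x, up_y, dn_y], dtype=float)
            total = rates.sum()
            if total == 0.0:                          # absorbed (possible only if Ny == 0)
                break
            t += rng.exponential(1.0 / total)
            event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
            Cx += (event == 0) - (event == 1)
            Cy += (event == 2) - (event == 3)
            traj.append((t, Cx, Cy))
        return np.array(traj)

    # example: 100 agents, a contrarian density of 0.1, model 1
    N, density = 100, 0.1
    Ny = int(density * N)
    Nx = N - Ny
    traj = gillespie(model=1, Nx=Nx, Ny=Ny, Cx=Nx // 2, Cy=Ny // 2, t_max=5.0)
    print(traj[-1])

for model 1 the trajectory fluctuates around the symmetric coexistence point instead of being absorbed at consensus , in line with the stability of the steady state derived above .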
if or , the two eigenvalues are real , such that the dynamics overdamps to the equilibrium .if , the two eigenvalues have imaginary parts such that the relaxation accompanies an oscillation .the equilibrium fraction of agents in either state is equal to , for both the congregator subpopulation and contrarian subpopulation , regardless of the density of contrarians in the population .the influence of even just a few number of contrarians on the dynamics can be huge ; they prevent the consensus .the behavior of the model is very different from that of the voter model , for which consensus is necessarily reached via diffusion . in the limit ,the larger eigenvalue , which determines the decay rate of the dynamics to the steady state , is approximately equal to .therefore , for a small density of contrarian , the actual dynamics would fluctuate around the steady state in a long run .i will quantify fluctuations of the stochastic dynamics in secs .[ sub : van kampen ] and [ sub:1 contrarian ] . for model 2 ,the rate equations are given by eq . and - y\left[x(1-x)+yy\right ] .\label{eq : y model 2}\ ] ] the equilibrium is in fact given by eq . .the characteristic equation in the equilibrium is given by which has two eigenvalues with negative real parts , implying that the equilibrium given by eq .is stable .however , the leading eigenvalue when is given by , which is much closer to zero than for model 1 ( i.e. , ) .therefore , the fluctuation in the equilibrium for model 2 is expected to be much larger than that for model 1 .this is in fact the case , as shown in sec .[ sub : van kampen ] . for model 3 ,the rate equations are given by eq . and - y(xx+yy ) .\label{eq : y model 3}\ ] ] the equilibrium is again given by eq . .the characteristic equation in the equilibrium is given by which has two eigenvalues with negative real parts .therefore , the equilibrium given by eq .is stable .when , the eigenvalue scales as , the same as for model 1 . to understand the fluctuation around the equilibrium of the mean - field dynamics ,i carry out the small - fluctuation approximation of the master equation developed by van kampen for the three models .the van kampen expansion reveals the relationship between the system size and the magnitude of fluctuation under the gaussian assumption of the quantities of interest . to this end ,let us shift from the density description used in sec .[ sub : rate equations ] to the number description .the number of congregators and that of contrarians are denoted by and , respectively .let and represent the number of state congregators and that of state contrarians , respectively .it should be noted that , , and .the ansatz for the van kampen small - fluctuation approximation is given by where and are the mean densities of state congregators and state contrarians in the congregator and contrarian subpopulations , respectively , as introduced in sec .[ sub : rate equations ] . and are stochastic variables , which are assumed to be intensive quantities .i represent the probability that there are state congregators and state contrarians by . 
for model 1 , the master equation in terms of given by + ( e_x^{-1}-1)\left[\left(n_x - n_x\right)\left(n_x+n_y\right)p\right]\notag\\ + & ( e_y-1)\left[n_y\left(n_x+n_y - n_y\right)p\right ] + ( e_y^{-1}-1)\left[\left(n_y - n_y\right)\left(n_x - n_x+n_y\right)p\right ] , \label{eq : master equation with p model 1}\end{aligned}\ ] ] where , , , and are the operators representing an increment in by one , a decrement in by one , an increment in by one , and a decrement in by one , respectively .for example , the first term on the right - hand side of eq . represents the inflow and outflow of the probability induced by a decrement in by one .the operators are given by by substituting eqs ., , , , , and in eq . andreplacing the time derivative of by that of , i obtain \pi\notag\\ & + \left(-\frac{1}{\sqrt{n_x}}\frac{\partial}{\partial\xi } + \frac{1}{2n_x}\frac{\partial^2}{\partial\xi^2 } \right ) \left[n_x\left(1-x\right)-\sqrt{n_x}\xi\right ] \left(n_x x+\sqrt{n_x}\xi + n_{y}y+\sqrt{n_y}\eta\right)\pi\notag\\ & + \left(\frac{1}{\sqrt{n_y}}\frac{\partial}{\partial\eta } + \frac{1}{2n_y}\frac{\partial^2}{\partial\eta^2 } \right ) \left(n_yy+\sqrt{n_y}\eta\right ) \left[n_xx+\sqrt{n_x}\xi + n_{y}\left(1-y\right)-\sqrt{n_y}\eta\right]\pi\notag\\ & + \left(-\frac{1}{\sqrt{n_y}}\frac{\partial}{\partial\eta } + \frac{1}{2n_y}\frac{\partial^2}{\partial\eta^2 } \right ) \left[n_y\left(1-y\right)-\sqrt{n_y}\eta\right ] \left(n_x\left(1-x\right)-\sqrt{n_x}\xi + n_yy+\sqrt{n_y}\eta\right)\pi .\label{eq : master equation with pi model 1}\end{aligned}\ ] ] the highest order terms on the right - hand side , where and are regarded to be of the order of , are equal to \frac{\partial\pi}{\partial\xi } + n_x\sqrt{n_y } \left[xy-\left(1-x\right)\left(1-y\right)\right]\frac{\partial\pi}{\partial\eta}. \label{eq : highest model 1}\end{aligned}\ ] ] by comparing eq . to the highest order terms on the left - hand side of eq ., i obtain ,\label{eq : mf rho_x}\\ \frac{{\rm d}y}{{\rm d}t } = & \frac{n_x}{n } \left[\left(1-x\right)\left(1-y\right ) - xy\right],\label{eq : mf rho_y}\end{aligned}\ ] ] which are equivalent to the mean - field dynamics given by eqs . and .by equating the second highest order terms in eq ., i obtain \frac{\partial^2\pi}{\partial\xi^2}\notag\\ + & n_x\frac{\partial}{\partial\eta}(\eta\pi ) + \sqrt{n_xn_y}\xi\frac{\partial\pi}{\partial\eta } + \left[\frac{n_x}{2}(1-x - y+2xy)+ n_yy(1-y)\right]\frac{\partial^2\pi}{\partial\eta^2}. \label{eq:2nd highest model 1}\end{aligned}\ ] ] application of and to eq . yields and respectively . because the characteristic equation for the jacobian of the dynamics given by eqs . and coincides with eq . , and converge to the unique equilibrium given by .application of , , and to eq .yields and respectively . by substituting in eqs . , , and and setting the left - hand sides to 0 , i obtain in terms of the original variables , i obtain and in the infinite time limit .therefore , in terms of the fraction of congregators in the congregator subpopulation and that of contrarians in the contrarian subpopulation , i obtain where stands for the standard deviation .the results obtained from direct numerical simulations of model 1 are compared with the theoretical results given by eqs . 
and in fig . [ fig : fluctuation ] . i set . the numerical results agree well with the theory except when is small . the van kampen expansion assumes that the relevant distributions are gaussian . numerically calculated distributions of the fraction of congregators in state and that of contrarians in state are compared with the gaussian distributions with mean 0 and standard deviations as given by eqs . and in fig . [ fig : distribution ] . i set and examined the cases ( fig . [ fig : distribution](a ) ) , ( fig . [ fig : distribution](b ) ) , and ( fig . [ fig : distribution](c ) ) . the numerically obtained distributions are very close to the theoretical ones when is not small ; in figs . [ fig : distribution](b ) and [ fig : distribution](c ) , the numerical and theoretical results almost completely overlap each other for both and . in contrast , the numerical and theoretical distributions are not similar when is small ( fig . [ fig : distribution](a ) ) . discrepancies between the numerical and theoretical results for small values are also nonnegligible in fig . [ fig : fluctuation](a ) . the deviation in the case of small owes at least partly to the fact that the distributions are significantly affected by the boundary conditions at 0 and 1 . the deviation may be also due to the fact that the distribution of is very discrete when is small .

caption of fig . [ fig : fluctuation ] : ... within the congregator subpopulation ( i.e. , ) and that in the fraction of contrarians in state within the contrarian subpopulation ( i.e. , ) . i set and varied . the distributions are calculated on the basis of the results from through in a single run starting from . this condition is common to the following numerical results unless otherwise stated . ( a ) model 1 ; ( b ) model 2 ; ( c ) model 3 .

caption of fig . [ fig : distribution ] : ... congregators ( i.e. , ) and that of state contrarians ( i.e. , ) in the equilibrium . i set . ( a ) model 1 with , ( b ) model 1 with , ( c ) model 1 with , ( d ) model 2 with , ( e ) model 2 with , ( f ) model 2 with , ( g ) model 3 with , ( h ) model 3 with , and ( i ) model 3 with . in ( d ) and ( e ) , the distributions are calculated using the results obtained from through in a single run , 10 times longer simulation time than in the other cases . this was done because the convergence of the distributions is much slower in the cases shown in ( d ) and ( e ) than in the other cases .
equations and imply the following . first , if the fluctuation of the fraction , not the number , of state congregators and that of state contrarians are compared , they are of the same order . however , when the contrarians are rare , and are different by a factor of 3 . second , substitution of and , where ( ) is the density of contrarians ( section [ sub : rate equations ] ) , in eqs . and yields when is fixed , . when is fixed , it holds that as .

for model 2 , the calculations in appendix a yield when is fixed , the result is the same as that for model 1 . when is fixed , it holds that as . this scaling is different from that for model 1 . model 2 generates larger fluctuations than model 1 when the contrarians are rare . the numerically obtained and values are compared with eqs . and in fig . [ fig : fluctuation](b ) . the numerical and theoretical results agree well when . it should be noted that the fluctuation is larger for model 2 than for model 1 when takes intermediate values , i.e. , . the numerically obtained distributions of and are compared with the gaussian distributions whose standard deviations are given by eqs . and in figs . [ fig : distribution](d ) , [ fig : distribution](e ) , and [ fig : distribution](f ) for three values . the numerical and theoretical distributions agree well when is large enough ( i.e. , ; fig . [ fig : distribution](f ) ) . in fig . [ fig : distribution](f ) , the numerical and theoretical results almost completely overlap each other for both and . however , when is smaller , the numerically obtained distributions of and have peaks at and 1 such that they are far from the gaussian distributions shown by the dotted lines in figs . [ fig : distribution](d ) and [ fig : distribution](e ) . it should be noted that the theoretical results for and that for are indistinguishable in figs . [ fig : distribution](d ) and [ fig : distribution](e ) . in this range of , the small - fluctuation expansion breaks down , which is consistent with fig . [ fig : fluctuation](b ) .

for model 3 , the calculations in appendix b yield when is fixed , . when is fixed , it holds that as . the scaling is the same as that for model 1 . the numerically obtained and values are compared with eqs .
for model 3 , the calculations in appendix b yield the following . when is fixed , . when is fixed , it holds that as . the scaling is the same as that for model 1 . the numerically obtained and values are compared with eqs . and in fig . [ fig : fluctuation](c ) . the numerical results agree well with the theory unless is small . the numerically obtained distributions of and are compared with the gaussian distributions whose standard deviations are given by eqs . and in figs . [ fig : distribution](g ) , [ fig : distribution](h ) , and [ fig : distribution](i ) for three values . the theoretical results agree well with the numerical results if is not small ( figs . [ fig : distribution](h ) and [ fig : distribution](i ) ) . in figs . [ fig : distribution](h ) and [ fig : distribution](i ) , the numerical and theoretical results almost entirely overlap each other . the results for model 3 are similar to those for model 1 . the small - fluctuation approximation can not capture the behavior of the model when is small ( sec . [ sub : van kampen ] ) . to better understand this situation , i calculate the stationary distribution of the fokker - planck equation for the single - contrarian case , i.e. , . in this extreme case , the single contrarian does not find other contrarians in the population . therefore , model 1 and model 3 are equivalent , and i analyze this model in the following ; model 2 is reduced to the standard voter model and is therefore irrelevant . there are congregators . i denote by ( ) the probability that there are congregators in state and the single contrarian in state . the normalization is given by the condition that these probabilities sum to 1 . after discarding terms , which is justified unless or , i obtain $\cdots + \frac{\partial^2}{\partial x^2}\left[x^2\,p(x,1)\right] = 0$ and $-g(x)-\frac{\partial}{\partial x}\left[(1-x)\,p(x,1)\right] + \frac{\partial^2}{\partial x^2}\left[x(1-x)\,p(x,1)\right] = 0$ . by summing these two equations , i obtain $\cdots + \frac{\partial^2}{\partial x^2}\left[x\,p(x,1)\right] = 0$ . the general solution of this equation is given by , where and are constants . this solution and the symmetry relationship yield . for this quantity to be of order ( see eq . ) , is required . it should be noted that i have already discarded terms in deriving the equations above ; therefore , the solution is reduced to . the normalization condition , which in fact should be applied with caution because the solution may be invalid near and , leads to . finally , i obtain . equations and imply that the fraction of congregators that is not conditioned by the state of the contrarian , i.e. , , is uniformly distributed on the unit interval .
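the uniform - distribution prediction for the single - contrarian case can be checked directly by simulation . the sketch below makes the same assumptions as the earlier one ( well - mixed population , asynchronous copy / oppose updates ) and uses a single contrarian that opposes anybody , which covers models 1 and 3 in this limit as noted above ; the population size , run length and bin count are illustrative choices , not values from the paper , and the agreement with a flat histogram is only approximate near the boundaries because mixing there is slow .

```python
import random
from collections import Counter

def single_contrarian_histogram(n_agents=100, burn_in=100_000,
                                n_samples=20_000, thin=200, n_bins=10, seed=0):
    """histogram of the fraction x of congregators in state 1 when agent 0 is
    the only contrarian (models 1 and 3 coincide in this limit)."""
    random.seed(seed)
    states = [random.randint(0, 1) for _ in range(n_agents)]

    def step():
        i = random.randrange(n_agents)
        j = random.randrange(n_agents)
        while j == i:
            j = random.randrange(n_agents)
        if i == 0:
            states[i] = 1 - states[j]   # the lone contrarian opposes anybody
        else:
            states[i] = states[j]       # congregators copy anybody

    for _ in range(burn_in):
        step()
    counts = Counter()
    for _ in range(n_samples):
        for _ in range(thin):
            step()
        x = sum(states[1:]) / (n_agents - 1)
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    total = sum(counts.values())
    return [counts[b] / total for b in range(n_bins)]

if __name__ == "__main__":
    for b, frac in enumerate(single_contrarian_histogram()):
        print(f"x in [{b / 10:.1f}, {(b + 1) / 10:.1f}): {frac:.3f}")
```

if the stationary distribution is indeed close to uniform , each of the ten bins should collect roughly 10 % of the samples .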
|
in the voter and many other opinion formation models , agents are assumed to behave as congregators ( also called conformists ) ; they are attracted to the opinions of others . in this study , i investigate linear extensions of the voter model with contrarian agents . an agent is either a congregator or a contrarian and assumes a binary opinion . i investigate three models that differ in the behavior of the contrarian toward other agents . in model 1 , contrarians mimic the opinions of other contrarians and oppose ( i.e. , try to select the opinion opposite to ) those of congregators . in model 2 , contrarians mimic the opinions of congregators and oppose those of other contrarians . in model 3 , contrarians oppose anybody . in all models , congregators are assumed to like anybody . i show that , in all three models , even a small number of contrarians prevents consensus from being reached in the entire population . i also obtain the equilibrium distributions using the van kampen small - fluctuation approximation for the case of many contrarians and the fokker - planck equation for the case of a single contrarian . i show that the fluctuation around the symmetric coexistence equilibrium is much larger in model 2 than in models 1 and 3 when contrarians are rare .
|
in pseudoscalar meson photoproduction , the reaction is completely described by four amplitudes that are functions of hadronic mass and center - of - mass scattering angle ( or , equivalently , and ) . if one were able to extract these amplitudes ( allowing , of course , for an overall phase ) at or points , there would be nothing else one could measure that would alter how one could interpret the physics of the reaction . this observation is especially important in the study of the spectrum of baryon resonances . despite several decades of investigation , it is still not clear whether certain states that are predicted by quark models exist or not . the signatures of any hitherto undiscovered states must be very subtle , to the extent that they are not readily apparent from cross section measurements alone . if one could unpick the reaction amplitudes from suitable observables , that would constitute the most comprehensive test for models . in the case of establishing -channel resonances , extraction of the four amplitudes may not even be enough . partial - wave analyses will be required , and these can lead to finite ambiguities that require additional information to resolve . in any event , a potential new physical effect would have to manifest itself clearly , or be declared unproven . in order to extract the amplitudes , it is necessary to measure several polarization observables . in addition to the cross section , there are three single - spin observables : ( photon beam asymmetry ) , ( recoil polarization ) and ( target polarization ) , which can be labelled as -type measurements . there are also four beam - recoil ( -type ) , four beam - target ( -type ) and four recoil - target ( -type ) observables . all these observables are bilinear combinations of the four reaction amplitudes , and are not independent . in principle , therefore , it is not necessary to measure all of them to be able to infer the amplitudes . as we have now entered an era in which single - and double - polarization measurements can be made , there exists a real opportunity for progress in understanding pseudoscalar meson photoproduction reactions , and for potential discovery of new states . the problem of finding a minimum set of measurements that allows the unambiguous extraction of amplitudes was addressed by barker , donnachie and storrow . they found that , in addition to the single - polarization set , five more double - polarization observables were needed to remove all ambiguities in the quadrants for each relative phase angle . more recently , chiang and tabakin carried out a detailed analysis of the algebra of observables using fierz identities , and showed that the selection of just four suitably chosen double - polarization observables was sufficient to remove the ambiguities . such sets have been designated as `` complete '' sets . the fierz identity analysis led to a large number of identities among observables . work by artru et al . extended this by using positivity constraints to derive many inequalities .
this means that the measurement of a subset of observables places limits on the possible values of the undetermined observables , so the inequalities provide useful guides to whether the values of experimental data are physical . labelling sets of observables as `` complete '' somehow implies that one has reached an ultimate state of knowledge . however , the reality is that all experimental measurements of observables carry with them a finite uncertainty , so the concept of completeness is not well defined . one might be tempted to regard this as an experimental failing , but in practice any experiment has to be performed within constraints of time and technological feasibility ; the experiment with zero uncertainty can only be accomplished in an infinite time . the alternative is to embrace experimental uncertainty and include it in the interpretation of results . the problem of uncertainty due to noise in communication channels led shannon to develop the foundations of information theory . in that seminal work , the concept of entropy was used as a means of quantifying an amount of information . one can also apply this to measurements . to introduce the idea with a concrete example , suppose one measured a quantity and obtained a measured value with an uncertainty . the reporting of this measurement would usually be in the form , but this is really shorthand for a gaussian probability density function ( pdf ) . the entropy is then , which for a gaussian pdf is . if a more accurate measurement were to be made , resulting in a reduced uncertainty , the gain in information can be quantified as . by extending this idea to the uncertainty in the reaction amplitudes , it is possible to quantify how much information is gained following the measurement of one or more observables . this article represents a preliminary study of information entropy as applied to pseudoscalar meson photoproduction . section [ sec : measuring - information ] develops the idea encapsulated by eq . ( [ eq : entropy ] ) for the reaction amplitudes , and introduces a means of calculating it . in section [ sec : results ] examples of hypothetical measurements are given , which show how the magnitudes and relative phases of the amplitudes can be determined . in addition to this , section [ sec : comparison - of - models ] briefly considers how the information content of measured data can be used as a guide to estimating whether the measurement could in principle reduce uncertainty in derived physical quantities . a full analysis of reactions will involve measurements over all scattering angles and cover the mass range of interest . to develop the concept of information content , however , we restrict ourselves to considering one region ( or `` bin '' ) in space . the ideas can be straightforwardly extended to include many regions , since entropies are additive . the issue of whether different experiments ( measuring different observables ) have covered the same space has been avoided .
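as a worked version of the gaussian entropy argument above — a standard result , written here with base - 2 logarithms so that information comes out in bits ; the symbols $\sigma_1$ and $\sigma_2$ for the uncertainties before and after the improved measurement are introduced only for this illustration :

```latex
% entropy of a gaussian pdf with standard deviation \sigma, in bits
H(\sigma) = -\int_{-\infty}^{\infty} p(x)\,\log_2 p(x)\,\mathrm{d}x
          = \tfrac{1}{2}\log_2\!\left(2\pi e\,\sigma^{2}\right),
% so a measurement that reduces the uncertainty from \sigma_1 to \sigma_2 gains
\Delta I = H(\sigma_1) - H(\sigma_2) = \log_2\frac{\sigma_1}{\sigma_2},
% e.g. halving the uncertainty gains exactly one bit of information.
```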
in this work , the transversity basis is used , where the spins of the target nucleon and recoiling baryon are projected onto the normal to the scattering plane , and the linear polarization of the photon is either normal or parallel to the scattering plane . it is assumed that differential cross section measurements have been performed to a level of accuracy of , say , a few percent , so that further measurement would be unlikely to improve knowledge of the amplitudes significantly . the information gain to be studied here is solely due to an increased accuracy in the knowledge of the polarization observables . since all these observables are asymmetries , no generality is lost if we rescale the amplitudes such that , so that the cross section provides an overall scale factor . applying this rescaling , we have . since these reduced amplitudes are complex , this represents the equation of a unit 7-sphere , i.e. the eight numbers that are the real and imaginary parts are constrained to lie on the surface of a unit hypersphere in 8 dimensions ( a unit 8-ball ) . the definitions of the observables in terms of the reduced amplitudes are given in appendix [ sec : definitions - of - observables ] . one side - effect of choosing transversity amplitudes is that measurement of the -type observables leads to the extraction of the magnitudes , leaving just the relative phases to be determined . there is often a tacit assumption that it is easier to perform single - spin asymmetry measurements . for that reason many analyses start from a point where values of the -type observables have been determined . the entropy associated with the state of knowledge of the amplitudes is a multidimensional extension of eq . ( [ eq : entropy ] ) : , where represents the values of the real and imaginary parts of the amplitudes . before the measurement of any polarization observable , there is no knowledge of , other than the constraint imposed by eq . ( [ eq:7-sphere ] ) . to encode this as a pdf , we can spread the probability uniformly over the surface area of the unit 7-sphere to give , which results in a pleasingly simple entropy of . the act of measurement can be viewed as a compression of this `` uniform '' pdf into as small a region of space as possible . as a rough example , consider a set of measurements that results in a multi - dimensional gaussian pdf in amplitude space . the entropy of an -dimensional gaussian is , where is the determinant of the covariance matrix . while the four complex amplitudes have eight numbers in total , representing real and imaginary parts , all observable quantities are invariant to the choice of an overall phase angle , so the effective number of parameters to extract is seven . in this case , a 7-dimensional gaussian is used to estimate information gain . the projection of the gaussian onto the 7-sphere will induce off - diagonal correlations in , but for simplicity we ignore any correlations and take the standard deviation in each of the to be the same ( , say ) . the resulting approximate expression is . the gain in information is the difference between this and the initial uniform pdf over the 7-sphere : . a plot of this quantity as a function of the standard deviation is shown in fig . ( [ fig : rough - guide - to ] ) . the choice of logarithm base is arbitrary , but for this work we select it to be 2 . this means that the unit of information is the `` bit '' ( i.e. knowing whether a quantity is 1 or 0 ) .
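the rough - guide curve described above can be reproduced numerically . the sketch below is an approximation under the assumptions just stated ( an isotropic 7-dimensional gaussian with a common standard deviation , correlations ignored , base - 2 logarithms , and the exact surface area of the unit 7-sphere , $\pi^4/3$ ) ; it is not the paper's own expression , which may treat the projection onto the sphere differently .

```python
import math

def uniform_sphere_entropy(n_dim=8):
    """entropy (bits) of a uniform pdf on the surface of the unit (n_dim-1)-sphere;
    for n_dim = 8 the surface area is 2*pi**4 / gamma(4) = pi**4 / 3."""
    area = 2.0 * math.pi ** (n_dim / 2) / math.gamma(n_dim / 2)
    return math.log2(area)

def gaussian_entropy(sigma, n_eff=7):
    """entropy (bits) of an isotropic n_eff-dimensional gaussian with common
    standard deviation sigma, correlations ignored."""
    return 0.5 * n_eff * math.log2(2.0 * math.pi * math.e * sigma ** 2)

def rough_information_gain(sigma):
    """difference between the uniform 7-sphere entropy and the gaussian entropy."""
    return uniform_sphere_entropy() - gaussian_entropy(sigma)

if __name__ == "__main__":
    for sigma in (0.2, 0.1, 0.05, 0.02, 0.01):
        print(f"sigma = {sigma:5.2f}  ->  gain ~ {rough_information_gain(sigma):5.1f} bits")
```

with these assumptions the gain is roughly 21 bits near $\sigma \simeq 0.05$ and grows only logarithmically as the accuracy improves further .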
this unit system is convenient for considering quantities related to polarization ; determining whether an asymmetry is positive or negative is equivalent to a gain of one bit of information , whereas determining a phase angle quadrant is a gain of two bits . [ ( color online ) rough guide to information gain as a function of the standard deviation in the real and imaginary parts of the amplitudes . ] from fig . ( [ fig : rough - guide - to ] ) it can be seen that if one wants to have a measured accuracy of the amplitudes to a value , the gain in information is roughly 21 bits ( see the dashed vertical line on the graph ) . attempting to achieve much better accuracy than this from experiments is not likely to be practical , so we should regard the 21-bit information gain as a target figure to aim for , if we want to be able to say that we have extracted amplitudes . furthermore , if two models differ by only a few percent in the values of their amplitudes , it is not reasonable to expect that comparison with data would ever lead to being able to differentiate them . while the calculation sketched out above is a useful rough guide , when an actual set of observables has been measured , eq . ( [ eq : nd - entropy ] ) will need to be evaluated numerically . the number of dimensions in this system indicates the use of monte carlo techniques , and a simple implementation of this is as follows . sample points are generated randomly in amplitude space with uniform density on the surface of the unit 7-sphere . the number of points needs to be sufficiently large to minimize monte carlo sampling uncertainty . for each point , the observables are evaluated according to the algebra of table [ tab : definition - of - observable ] in the appendix . the use of random values of amplitudes was described in , in order to establish , for combinations of observables , the limits of regions in observable space that are allowed by positivity constraints , and to use this as a guide for deriving inequalities . the present work goes further by not only taking into account these positivity constraints , but also estimating the pdfs of the combinations of observables . one can then simulate the process of measuring an observable by weighting all the points by another pdf representing the measured observable . in practice , the pdf of an asymmetry is likely to be something like a beta distribution ( or a gaussian approximation thereof ) . for illustrative purposes , however , we can use a simple top - hat function , which for a single observable is equivalent to reducing the range of values from [ -1 , 1 ] to a narrow interval centred on the measured result , with a width set by its uncertainty .
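the sampling - and - cutting procedure just described can be sketched in a few lines . the code below is a minimal illustration , not the analysis code of the paper : the uniform sampling on the unit 7-sphere ( normalised gaussian deviates ) and the log - ratio estimate of the information gain follow the text , while the expression used for the example observable — recoil polarization as a signed combination of the squared magnitudes in the transversity basis — is an assumed convention standing in for the appendix table , so signs and orderings may differ from the paper's .

```python
import math
import random

def random_amplitudes():
    """one point uniformly distributed on the unit 7-sphere, returned as
    four complex transversity amplitudes b1..b4 (8 real numbers in total)."""
    g = [random.gauss(0.0, 1.0) for _ in range(8)]
    norm = math.sqrt(sum(v * v for v in g))
    return [complex(g[2 * k] / norm, g[2 * k + 1] / norm) for k in range(4)]

def recoil_polarization(b):
    """single-spin observable built as a +/- combination of |b_i|^2.
    the sign pattern is an assumed convention, not the paper's table."""
    m = [abs(x) ** 2 for x in b]
    return m[0] - m[1] + m[2] - m[3]

def top_hat_gain(value, half_width, n_points=200_000, seed=1):
    """information gain (bits) from a top-hat 'measurement' value +/- half_width,
    estimated as log2(n_total / n_surviving)."""
    random.seed(seed)
    kept = 0
    for _ in range(n_points):
        if abs(recoil_polarization(random_amplitudes()) - value) <= half_width:
            kept += 1
    return math.log2(n_points / kept) if kept else float("inf")

if __name__ == "__main__":
    gain = top_hat_gain(value=0.3, half_width=0.05)
    print(f"gain from one top-hat measurement of recoil polarization: {gain:.2f} bits")
```

with the numbers chosen here the cut keeps of order 5 - 10 % of the sampled points , corresponding to a gain of roughly 3 - 4 bits from a single observable .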
if the uniform probability density on a multi - dimensional surface is , the entropy of a uniform distribution in a volume is then , as illustrated by the value for the 7-sphere in eq . ( [ eq:7-sphere-1 ] ) . if the surface is reduced by a cut , say from to , the probability density will be uniform in and zero otherwise , so the gain in information is simply the log of the ratio of the two surface areas : . when cuts representing the measurement of a combination of observables are imposed , the number of remaining points is an estimate of the remaining volume , so in order to gain the 21 bits of information , the surface area in amplitude space ( and hence the number of points ) needs to be reduced by a factor of $2^{21} \approx 2 \times 10^{6}$ . this is best illustrated with a simple example , such as the measurement of one polarization observable , recoil polarization , say . figure [ fig : distribution - of - values ] shows in the light shade the distribution of points when sampling is done uniformly in amplitude space . the dark shaded region shows the 126045 points that survive when a simulated measurement of is imposed . the result is an information gain of bits , where the uncertainty is an estimate of the monte carlo error . so we can expect that a measurement of one polarization observable to an accuracy of % will give us about 3 bits of information . [ ( color online ) distribution of values of recoil polarization from the uniform pdf in amplitude space . the shaded region represents the possible values remaining after a `` measurement '' . ] note that the `` uncut '' or prior distribution is quadratic in shape , not only for recoil polarization , but for all observables . this is a consequence of the observables being bilinear combinations of the amplitudes . for the extraction of amplitudes , it is usually assumed that the -type observables ( , and ) have to be measured . let us examine how much information one gains by making such measurements . as shown in , the constraints among observables carve out a tetrahedron inside the cube $[-1,1]^{3}$ in -space . to approximate a measurement of , and , we define a spherical region of radius , i.e. , where are the coordinates of the sphere centre . this spherical cut can be moved to various points within the tetrahedron , and the effect on the distributions of magnitudes and phases studied . a typical example is depicted in fig . [ fig : bottom - left - panel ] . the bottom left panel shows a projection of the distributions , which highlights the tetrahedral region . recall that the points in the light shaded region have been initially sampled over amplitude space , so this represents a projection into -space , and affirms the constraints defined by eq . ( [ eq : brt - constraints ] ) . the points in the dark sphere are those selected by the choice of cut region . the radius of the spherical cut is 0.1 , which is equivalent in information gain to a measured accuracy in each observable of better than ( see later ) . it is unlikely , when statistical and systematic uncertainties are taken into account , that experiments will be able to determine observables to much greater accuracy than this . in the example of fig . [ fig : bottom - left - panel ] , the spherical cut is just touching the midpoint of one of the tetrahedron faces . the top row shows the magnitudes of the amplitudes , and it is clear that values for each one can now be estimated . note , however , that there is much greater uncertainty in than in the other ones . the relative phase angles are displayed in the remaining panels .
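the effect of such a spherical cut can be sketched with the same monte carlo machinery as before . as in the previous sketch , the signed - combination expressions for the -type observables are an assumed transversity - basis convention rather than the paper's appendix table , and the cut is placed at the centre of the tetrahedron purely for simplicity ( the example in the text places it at the midpoint of a face ) ; the number of sampled points is an illustrative choice .

```python
import math
import random

def random_amplitudes():
    """one point uniformly distributed on the unit 7-sphere (four complex amplitudes)."""
    g = [random.gauss(0.0, 1.0) for _ in range(8)]
    norm = math.sqrt(sum(v * v for v in g))
    return [complex(g[2 * k] / norm, g[2 * k + 1] / norm) for k in range(4)]

def s_type_observables(b):
    """(beam asymmetry, target asymmetry, recoil polarization) as signed
    combinations of |b_i|^2 -- an assumed convention, not the paper's table."""
    m = [abs(x) ** 2 for x in b]
    return (m[0] + m[1] - m[2] - m[3],   # beam asymmetry
            m[0] - m[1] - m[2] + m[3],   # target asymmetry
            m[0] - m[1] + m[2] - m[3])   # recoil polarization

def spherical_cut(centre=(0.0, 0.0, 0.0), radius=0.1, n_points=300_000, seed=2):
    """keep points whose s-type observables lie inside the sphere; report the
    information gain in bits and the surviving range of |b_1|."""
    random.seed(seed)
    kept_mags = []
    for _ in range(n_points):
        b = random_amplitudes()
        if sum((o - c) ** 2 for o, c in zip(s_type_observables(b), centre)) <= radius ** 2:
            kept_mags.append(abs(b[0]))
    gain = math.log2(n_points / len(kept_mags)) if kept_mags else float("inf")
    return gain, (min(kept_mags), max(kept_mags)) if kept_mags else None

if __name__ == "__main__":
    gain, b1_range = spherical_cut()
    print(f"gain ~ {gain:.1f} bits; |b_1| confined to "
          f"[{b1_range[0]:.2f}, {b1_range[1]:.2f}]")
```

with only this -type cut imposed , the magnitudes are confined to narrow ranges ( only the first one is printed here ) while the relative phases remain completely undetermined , in line with the discussion in the text .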
while only three relative angles are independent , all six possibilities are shown . this is because , for situations in which the magnitudes of two amplitudes and are almost equal ( as in this case ) , very small uncertainties in the relative phase of the two amplitudes with respect to a third ( and ) could lead to very large uncertainties in their relative phase . it is to be expected that there should be no relative phase information for transversity amplitudes if only -type measurements are made , and this is apparent from the distributions in fig . [ fig : bottom - left - panel ] . the observed increase towards is due to the fact that the relative angles are formed from the difference of two uniform random variables . [ ( color online ) light shade : uniform sample of amplitude space ; dark shade : region surviving the cut . panel ( a ) is the projection of the brt tetrahedron , ( b ) - ( e ) show the magnitudes of the amplitudes , and the other panels are the distributions of relative phase angles ( in degrees ) . ] by examining the variations in the distributions of magnitudes and phases for different positions in the tetrahedron , one can deduce some general heuristics governing the relation between what we shall call a measurement and the magnitudes . these are listed in table [ tab : guide - to - relative ] .
|
information entropy is applied to the state of knowledge of reaction amplitudes in pseudoscalar meson photoproduction , and a scheme is developed that quantifies the information content of a measured set of polarization observables . it is shown that this definition of information is a more practical measure of the quality of a set of measured observables than whether the combination is a mathematically complete set . it is also shown that when experimental uncertainty is introduced , complete sets of measurements do not necessarily remove ambiguities , and that experiments should strive to measure as many observables as practical in order to extract amplitudes .
|