A classical game can be considered an abstract mathematical entity that is connected to the physical world [abbottshalizidavies] in at least three recognizable ways: a) it describes a strategic interaction among the participating players; b) it is implemented using a classical physical system that the players share in order to play the game; c) it is played in the presence of a referee who ensures that the participating players abide by its rules. Quantum games retain a) and c), but they are distinguished from classical games in that the physical system used in the implementation of the game is quantum mechanical. This naturally gives rise to the central question for the area of quantum games: how do the quantum mechanical features of the shared physical system, used in the physical implementation of the game, express themselves in terms of the outcome or solution of the game? For a faithful answer to this question it seems natural to establish, as a first step, a correspondence between the classical features, or classicality, of the shared physical system and the classical game with its particular outcome. Establishing this correspondence paves the way for the next step, which asks what impact the replacement of the classical features of the shared physical system by quantum features has on the outcome or solution of the game. The physical system used in a two-party Einstein-Podolsky-Rosen-Bohm (EPR-Bohm) experiment [epr, bell, bell(a), bell1, bell2, aspect, peres, cereceda, winsbergfine, fine] is known to have genuinely quantum features. This naturally motivates the use of a two-party EPR-Bohm physical system to play a two-player quantum game. Motivated by developing this approach towards quantum games, we proposed in ref. a scheme to play quantum games using the EPR-Bohm experiment. We reported that this scheme is able to construct genuine quantum games from quantum mechanical probabilities only. This is accomplished in the proposed scheme without referring to the quantum mechanical state vectors, and with little reliance on the mathematical tools of quantum mechanics. We proposed this scheme in view of Jarrett's position [jarrett], stating that the experimentally observed violations of Bell inequalities in EPR-Bohm experiments are due to violations of the conjunction of two probabilistic constraints: locality and completeness. Jarrett concluded that the predictions of quantum mechanics, in good agreement with the experimental results, satisfy locality but violate completeness. Winsberg and Fine prefer the wording "factorizability" for Jarrett's "completeness"; we adopted Winsberg and Fine's terminology in ref. as well as in the present paper. That is, the quantum features of EPR-Bohm experiments emerge for non-factorizable joint probabilities.
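In probabilistic terms (our own notation, since the original symbols did not survive extraction: $a, b = \pm 1$ denote the measurement outcomes and $i, j \in \{1,2\}$ the players' choices of observable), the three constraints that recur below are
\[
\sum_{a,b=\pm 1} P(a,b \mid i,j) = 1 \quad \text{(normalization)},
\]
\[
\sum_{b} P(a,b \mid i,j) \ \text{independent of } j, \qquad \sum_{a} P(a,b \mid i,j) \ \text{independent of } i \quad \text{(locality)},
\]
\[
P(a,b \mid i,j) = p_i(a)\, q_j(b) \quad \text{(factorizability)}.
\]
Factorizability implies locality but not conversely, which is why dropping factorizability while keeping normalization and locality isolates the genuinely quantum feature.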
by constructing quantum games from unusual non - factorizable joint probabilitiesthis scheme provides a unifying perspective for both quantum and classical games , and also presents a more easily accessible analysis of quantum games for researchers working outside the domain of quantum physics .this scheme was developed for quantum games and applied it to analyze the games of prisoner s dilemma ( pd ) , stag hunt , and chicken .for the pd game our analysis showed that , contrary to the widely held belief , no new solution that is different from the classical solution emerges when a quantum version of this game is constructed using an epr - bohm setting .however , within the same setting , for three - player pd iqbalcheonm , iqbalcheonabbott a new solution indeed emerges that is also found to be pareto - optimal .moreover , we showed that for the two - player quantum chicken game , new solution(s ) arise for two identified sets of quantum mechanical joint probabilities that maximally violate the clauser - horne - shimony - holt ( chsh ) sum of correlations .the classical game of pd has a unique nash equilibrium ( ne ) consisting of a pair of identical pure strategies and , in the two - player case , its quantum version in the scheme using the epr - bohm setting , it does not generate a new outcome .this motivates us , in the present paper , to study a quantum version of a two - player game , within the same scheme , that has a unique mixed ne .the well - known game of matching pennies ( mp ) provides such an example . using the scheme based on epr - bohm experiments to play this game , we find the impact on the solution of this game when the factorizability condition on joint probabilities is dropped , while the conditions describing normalization and locality are retained . another motive behind investigating the mp game , played using the epr - bohm setting , is as follows .we notice that when multiple ne emerge in a classical game , the analysis of its quantum version generates a separate set of constraints on joint probabilities corresponding to that particular ne .these constraints ensure that the classical game and its particular outcome remains embedded within the quantum game . as the mp game has a unique mixed classical ne, it presents an ideal situation to study how dropping the factorizability condition on joint probabilities may change the outcome of the game .in the game of mp each of the two players , henceforth labelled as alice and bob , have a penny that each secretly flips to heads or tails .no communication takes place between bob and alice and they disclose their choices simultaneously to a referee , who organizes the game and ensures that its rules are respected by the participating players .if the referee finds that the pennies match ( both heads or both tails ) , he takes one dollar from bob and gives it to alice ( for alice , for bob ) . if the pennies do not match (one heads and one tails ) , the referee takes one dollar from alice and gives it to bob ( for alice , for bob ) .as one player s gain is exactly equal to the other player s loss , the game is zero - sum and is represented with the payoff matrix : where we take and .it is well known that mp has no pure strategy nash equilibrium rasmusen and instead has a unique mixed strategy ne . 
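For reference, the calculation described next can be reconstructed as follows (one assumption on our part: we take the stakes to be one dollar, so the matrix entries are $\pm 1$). With Alice the row player choosing heads (H) or tails (T), the payoff matrix reads
\[
\begin{array}{c|cc}
 & \mathrm{H} & \mathrm{T} \\ \hline
\mathrm{H} & (1,-1) & (-1,1) \\
\mathrm{T} & (-1,1) & (1,-1)
\end{array}
\]
If Alice plays H with probability $x$ and Bob plays H with probability $y$, then
\[
\Pi_A(x,y) = (2x-1)(2y-1) = -\Pi_B(x,y),
\]
and the equilibrium conditions $\Pi_A(x^{\star},y^{\star}) - \Pi_A(x,y^{\star}) = 2(2y^{\star}-1)(x^{\star}-x) \ge 0$ and $\Pi_B(x^{\star},y^{\star}) - \Pi_B(x^{\star},y) = -2(2x^{\star}-1)(y^{\star}-y) \ge 0$, required for all $x, y \in [0,1]$, force $x^{\star} = y^{\star} = 1/2$, at which both players receive zero.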
for completeness of this paperwe describe here how this is found .consider repeated play of the game in which and are the probabilities with which is played by alice and bob , respectively .the pure strategy is then played with probability by alice , and with probability by bob , and the players payoff relations read a strategy pair is a ne when for the matrix ( [ matrix ] ) these inequalities read and and generate the strategy pair as the unique ne of the game . at thisne the players payoffs work out as the first step in our quantization scheme for the mp game consists of translating the game into a classical arrangement using a physical system that involves joint probabilities .the arrangement we use consists of two players sharing biased coins to play the game , assuming that the referee has the means to set constraints on their biases .the referee has coins and s / he marks them as .s / he identifies to be alice s coins and to be bob s coins . in a run , the referee hands over the coins to alice and the coins bob .alice s and bob s strategies consist of choosing one coin out of the two that each player receives in a run .the pair of chosen coins in a run is one of the .the players return the two chosen coins to the referee who tosses them together and records the outcome .the referee collects the coins ( tossed and untossed ) and repeats the same procedure over a large number of runs .referee defines and makes public the players payoff relations that depend on a ) the outcomes of a large number of tosses of biased coins , while coins are tossed in each run b ) the players strategies and c ) the real numbers defining the matrix of the game .we now state that the statistical behavior of the biased coins , expressed over a large number of tosses , is described by : where the state of a coin is denoted by and the state by .the joint probabilities are factorizable for coins , that is , one can find numbers and ] also assumes locality .notice that for a factorizable set of joint probabilities ( [ factorizability ] ) the locality constraints ( [ locality constraints ] ) always hold .( [ factorizability ] ) state that the joint probabilities can be written in terms of ] that allows this , it is assumed that joint probabilities satisfy the locality constraints ( [ locality constraints ] ) .we now refer to a result , reported by cereceda stating that , because of normalization ( [ normalization ] ) , half of the eqs .( locality constraints ) are redundant thus making eight among sixteen probabilities independent .cereceda has reported that a convenient solution of the system ( [ normalization ] , [ locality constraints ] ) , is the one for which the set of variables : is expressed in terms of the remaining set of variables : is given as these relationships arise because the quantum mechanical joint probabilities fulfill both the normalization condition ( [ normalization ] ) as well as the locality constraints ( [ locality constraints ] ) .notice that using ( [ table ] ) the correlation , for example , can be found as the correlations , , , and can similarly be worked out .the chsh sum of correlations is then defined as and the chsh inequality : which holds for any theory of local hidden variables .cereceda has reported that there exist two sets of joint probabilities that maximally violate the quantum prediction of the clauser - holt - shimony - horne ( chsh ) sum of correlations .the first set is given as whereas the second set is given as where and are defined in ( [ first set of 
probabilities],[second set of probabilities ] ) .that is , these two sets provide the maximum absolute limit of for .now , alongside the constraints ( [ constraint on joint probabilities ] ) there is another set of constraints on joint probabilities that are imposed by the _ cirelson limit _ , saying that the quantum prediction of the chsh sum of correlations , defined in ( chsh(a ) ) , is bounded in absolute value by i.e. . taking into account the normalization condition ( [ normalization ] ) ,the quantity is then equivalently expressed as in the following , the epr setting , introduced in this section , is used to play the quantum version of the matching pennies game .essentially , our quantum mp game corresponds when the joint probabilities , that appear in the payoff relations ( [ q payoffs ] ) , are obtained using the epr - bohm setting , instead of using a large number of tosses performed on biased coins .the players payoff relations in the quantum mp game , therefore , remain exactly the same as they are defined and made public by the referee in eq .( [ q payoffs ] ) for the translated game that uses factorizable joint probabilities .players strategies also remain exactly the same as they are in the classical game .the referee is free to prepare any quantum pure or mixed bi - partite state and to forward it to the players .s / he also fixes the available directions at the start of the game ( refer to fig .1 ) that can not be changed as the game progresses and large number of its runs are carried out .a player s strategic choices do not go beyond choosing between the two assigned directions . referring to eq .( [ constraint on indepedent probs ] ) we recall that it expresses the constraints on the coin probabilities .we also notice that the factorizability , expressed by ( [ factorizability ] ) , permits one to write the coin probabilities in terms of joint probabilities : which allows us to rewrite the constraints ( [ constraint on indepedent probs ] ) on coin probabilities as this provides the the key for embedding the classical game within the quantum game .s / he makes prior ( experimental ) arrangements in the epr - bohm setup ensuring that the constraints ( [ constraint on joint probabilities ] ) on joint probabilities hold during the whole course of playing the game .when this is the case the classical game remains embedded within the corresponding quantum game in that the quantum game attains classical interpretation with the joint probabilities becoming factorizable .however , the joint probabilities that the epr - bohm setting can generate can also be non - factorizable .this permits playing a quantum game in which the constraints ( [ constraint on joint probabilities ] ) hold , while the factorizability condition on joint probabilities is dropped .we now look at how dropping the factorizability condition for joint probabilities affects the outcome of the game . with the constraints ( constraint on joint probabilities ) continuing to hold , the referee can then find a pair of ne strategies in the quantum game using the inequalities ( [ ne ] ) as usual .because of non - factorizable joint probabilities the strategy pair may be different from the one which comes out for factorizable joint probabilities .notice that the rewards at the ne are identical to the ones given in ( payoffs at the ne ) .that is , when the joint probabilities become factorizable , the ne and the players payoffs become identical to the ones obtained in the usual mixed strategy solution of the mp game . 
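As a quick numerical check of the CHSH machinery introduced above, here is a hedged sketch: the probabilities below are the textbook ones for the two-qubit singlet state, not Cereceda's two specific sets (whose entries did not survive extraction), and all function names are ours.

```python
import numpy as np

# Singlet-state joint probabilities when Alice measures along angle a and
# Bob along angle b (standard quantum mechanics):
#   P(+,+) = P(-,-) = (1 - cos(a - b)) / 4
#   P(+,-) = P(-,+) = (1 + cos(a - b)) / 4,  so  E(a, b) = -cos(a - b).
def joint_probs(a, b):
    c = np.cos(a - b)
    return {(+1, +1): (1 - c) / 4, (-1, -1): (1 - c) / 4,
            (+1, -1): (1 + c) / 4, (-1, +1): (1 + c) / 4}

def E(a, b):
    return sum(s * t * p for (s, t), p in joint_probs(a, b).items())

# One standard CHSH combination, with angle choices known to saturate the
# Cirel'son bound 2*sqrt(2):
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
delta = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(delta), 2 * np.sqrt(2))        # both ~ 2.8284

# These joint probabilities satisfy normalization and locality, but are
# non-factorizable: a product form would require
# P(+,+)*P(-,-) == P(+,-)*P(-,+), which fails unless cos(a - b) == 0.
for x, y in [(a1, b1), (a1, b2), (a2, b1), (a2, b2)]:
    p = joint_probs(x, y)
    print(p[1, 1] * p[-1, -1] - p[1, -1] * p[-1, 1])   # nonzero
```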
also , the joint probabilities , even when they are non - factorizable and , therefore , violate one or more of the set of eqs .( [ factorizability2 ] ) , will always satisfy the normalization constraints ( [ normalization ] ) as well as the locality constraints ( [ locality constraints ] ) . to be consistent with the standard setting for playing a two - player two - strategy game ,the referee considers it reasonable to require that in the epr setting a player plays a pure strategy if s / he chooses the same direction over all the runs and that s / he plays a mixed strategy if s / he has a probability distribution with which s / he chooses between the two directions at her / his disposal . however , identifying pure and mixed strategies in such a way is not of much help as the payoff relations , which referee uses to reward the players , generate the classical mixed strategy game even when the players play ` pure strategies . 'this , however , remains consistent with the known result in the area of quantum games stating that a pure product initial state leads to the classical mixed strategy game .we now find the ne that comes out from a set of non - factorizable ( and thus quantum mechanical ) joint probabilities when the players payoff relations in the quantum game are obtained from the eq .( [ q payoffs ] ) .for the inequalities defining the ne in the quantum game we obtain (x^{\star } -x)\geq 0 , \notag \\ \pi _ { b}(x^{\star } , y^{\star } ) -\pi _ { b}(x^{\star } , y)=[x^{\star } \left\ { \pi _ { b}(s_{1},s_{1}^{\prime } ) -\pi _ { b}(s_{1},s_{2}^{\prime } ) -\pi _ { b}(s_{2},s_{1}^{\prime } ) + \pi _ { b}(s_{2},s_{2}^{\prime } ) \right\ } \notag \\ + \left\ { \pi _ { b}(s_{2},s_{1}^{\prime } ) -\pi _ { b}(s_{2},s_{2}^{\prime } ) \right\ } ] ( y^{\star } -y)\geq 0 , \label{qne}\end{gathered}\ ] ] where eqs .( [ qpayoffsparts ] ) and the matrix ( [ matrix ] ) gives where the right sides of these equations express the fact that the quantum game is a zero - sum game as is the classical game .using eqs .( [ dependent probabilities ] ) we eliminate the probabilities from the inequalities ( [ qne ] ) that gives the inequalities for the ne in the quantum game in terms of the probabilities appearing in the set ( [ second set of probabilities ] ) : (x^{\star } -x)\geq 0 , \notag \\ \pi _ { b}(x^{\star } , y^{\star } ) -\pi _ { b}(x^{\star } , y)=-2[x^{\star } \left\ { ( 1+p_{1}+p_{4})-(p_{5}+p_{8}+p_{9}+p_{12}+p_{14}+p_{15})\right\ } \notag \\ + ( p_{9}+p_{12}+p_{14}+p_{15}-1)](y^{\star } -y)\geq 0. \label{qne(a)}\end{gathered}\ ] ] as some of the joint probabilities are constrained by ( [ constraint on joint probabilities ] ) , using ( [ dependent probabilities ] ) we rewrite these constraints as now , adding the two equations in ( [ constraints(a ) ] ) and subtracting the second from the first gives and we write in order to eliminate arbitrarily the probabilities and from the inequalities ( [ qne(a ) ] ) to obtain (x^{\star } -x)\geq 0 , \notag \\ \pi _ { b}(x^{\star } , y^{\star } ) -\pi _ { b}(x^{\star } , y)=-2[x^{\star } \left\ { ( 1+p_{1}+p_{4}+p_{8})-(3p_{5}+2p_{9}+2p_{14})\right\ } \notag \\ + \left\ { 2(p_{5}-p_{8}+p_{9}+p_{14})-1\right\ } ] ( y^{\star } -y)\geq 0 . 
\label{qne(b)}\end{gathered}\ ] ] the right sides of these inequalities involve six joint probabilities , which we treat as ` independent ' and these are .these inequalities guarantee that for a factorizable set of joint probabilities the classical mixed strategy game of mp emerges .refer to the probability sets ( [ first set],[second set ] ) that maximally violate the chsh inequality .probabilities in these sets are non - factorizable as for both sets a solution for obtained from the eqs .( [ factorizability ] ) makes one or more of the probabilities to be negative or greater than one .this is also equivalent to stating that for either of the sets ( [ first set],[second set ] ) one or more of the equations ( [ factorizability2 ] ) does not hold , when $ ] and the constraints ( [ locality constraints ] ) imposed by locality hold . now a natural question arising here is to ask if these two probability sets can be used for the quantum game of mp .this will indeed be possible if for each of these two sets the constraints given by ( [ constraint on joint probabilities ] ) hold ensuring that the classical mp game is embedded within the quantum .for both the sets ( [ first set],[second set ] ) we find that the constraint ( [ constraint on joint probabilities ] ) hold , thus these probability sets , maximally violating the chsh sum of correlations , can legitimately be used in the quantum mp game . forthe first set ( [ first set ] ) the inequalities ( [ qne(b ) ] ) work out as which give the strategy pairs as ne .at the strategy pair the players payoffs are obtained from eqs .( [ q payoffs],[qpayoffspartsexplicit ] ) as whereas at the strategy pair the players payoffs are obtained to be the same i.e. .similarly , for the second set ( [ second set ] ) the ne inequalities ( qne(b ) ) are giving the strategy pairs as the ne . at the strategy pair the players payoffs work out as whereas at the strategy pair the players payoffs are obtained as the same i.e. .this paper is motivated by the observation that by having a unique mixed ne the classical mp game offers an opportunity for seeing more clearly how dropping the factorizability condition on joint probabilities may affect this unique ne , which emerges for factorizable joint probabilities in the quantization scheme based on epr - bohm experiments .notice that in the scheme based on epr - bohm experiments the referee s role is significantly increased as compared to other schemes for playing quantum games .this is because s / he is free to provide any pair of directions to each player and makes quantum measurement(s ) on any pure or mixed bi - partite states .the available options for the players are , therefore , reduced in comparison to what is the case in other quantization schemes ewl , marinattoweber , and they have exactly the same options as in the classical game . in a classical two - player two - strategy game each player can play a linear combination ( with real and normalized coefficients ) of two pure strategies and this remains exactly the same in the our scheme for playing a two - player quantum game .joint probabilities in epr - bohm experiments , performed on entangled bipartite states , are known to become non - factorizable when players make their strategic choices along certain pairs of directions .this provides the opportunity to look at the possible new outcomes of the game that non - factorizable joint probabilities may generate . 
in the quantization scheme based on epr - bohm experiments the constraints placed on probabilities guarantee that the classical game remains embedded within the quantum game , while probabilities may become non - factorizable . by constructing quantum games directly from quantum probabilities the suggested approach contributes towards an understanding and potential use of quantum probabilities in the area of game theorythat is , the question addressed in this paper asks whether quantum probabilities have more to offer to game theory .the answer to this we find is ` yes ' .the possibility that chsh inequality can be rephrased in terms of two - player cooperative games has been reported in literature . in ref . cleve et al . have reported a game based on chsh inequality in which the maximum probability of winning the classical game is whereas , using a quantum strategy , the players can win this game with probability , which , as they show using cirelson s limit , is optimal . also , in ref .cheon cheon has reported a quantum game in which both players maximize a quantity ( utility ) defined from spin projections of two particles which they share , whereas the payoff operators are the measurement operator for epr - bohm experiment .cheon then finds the ne of the quantum game that rewards the players far better for particles with maximum entanglement compared to when the particles are uncorrelated .both of these studies show that epr - bohm experiments can be translated into special games . the contribution of the quantization scheme developed in ref . , and that of the present paper , however , is that it shows that , along with the reported possibility of translating epr - bohm experiments as special games , one can in fact quantize any two - player game using the framework of epr - bohm experiments .secondly , that we can analyze our quantum game using the non - factorizable property of quantum mechanical joint probabilities .nonfactorizability is known to be a necessary but insufficient condition for the violation of bell s inequality , the chsh form of which we consider here .that is , a set of joint probabilities that violates bell s inequality will always be non - factorizable , whereas one can find a set of joint probabilities that is non - factorizable and still does not violate the chsh form of bell s inequality .this known result has the following implications when it is considered in our scheme for playing quantum games using epr - bohm experiments : as a new solution of the game , which emerges because of dropping the factorizability condition , the relevant joint probabilities may not violate the bell s inequality ( in its chsh form)only those outcomes of the quantum game are to be considered to have a _ bona fide _ quantum aspect for which the corresponding set of joint probabilities violates the chsh form of bell s inequality .the ne of the quantum game for which the bell s inequality is not violated will , therefore , have a pseudoclassical aspect .using bell s inequality one can identify the pseudoclassical domain from the quantum domain as follows . with the constraints ( [ constraint on joint probabilities ] ) the chsh inequality ( [ chsh ] ) using ( [ delta ] , p12&p15 ) reduces itself to .now , if a set of joint probabilities results in a ne in the quantum game and for this set we have then this ne has the pseudoclassical aspect .however , if for this set we have then it has a _ bona fide _ quantum aspect . 
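In symbols, writing $\Delta$ for the CHSH sum of correlations, the classification just described presumably reads
\[
|\Delta| \le 2 \;\Rightarrow\; \text{pseudoclassical aspect}, \qquad 2 < |\Delta| \le 2\sqrt{2} \;\Rightarrow\; \textit{bona fide} \text{ quantum aspect},
\]
since $2$ is the local-hidden-variable bound of ([chsh]) and $2\sqrt{2}$ the Cirel'son limit.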
Note that in the quantum MP game the strategy pairs identified above emerge as NE for the first set ([first set]); for these NE we obtain the maximal CHSH value $|\Delta| = 2\sqrt{2}$. Similarly, the corresponding strategy pairs emerge as NE for the second set ([second set]), and for these NE we again obtain $|\Delta| = 2\sqrt{2}$. These four NE, therefore, have a bona fide quantum aspect. Here, by quantum mechanical probabilities we mean the probabilities that are obtained from squaring the probability amplitudes. In this paper we do not consider the negative probabilities that are sometimes introduced to give another perspective on quantum phenomena. That is, in each run the referee's measurement generates one of the four possible outcome pairs, where the first entry in a bracket is the measurement outcome along Alice's chosen direction (one of her two assigned directions) and, similarly, the second entry corresponds to Bob's chosen direction (one of his two assigned directions).
A quantum version of the matching pennies (MP) game is proposed that is played using an Einstein-Podolsky-Rosen-Bohm (EPR-Bohm) setting. We construct the quantum game without using state vectors, considering only the quantum mechanical joint probabilities relevant to the EPR-Bohm setting. We embed the classical game within the quantum game such that the classical MP game results when the quantum mechanical joint probabilities become factorizable. We report new Nash equilibria in the quantum MP game that emerge when the quantum mechanical joint probabilities maximally violate the Clauser-Horne-Shimony-Holt form of Bell's inequality. Keywords: quantum games, Nash equilibrium, EPR-Bohm experiment, quantum probability.
The search for wireless power transfer techniques is as old as the invention of electricity. From Tesla, through the vast technological development of the 20th century, till recent days, many proposals have been made and implemented in this research field. Established techniques for wireless energy transfer are known both in the near- and far-field coupling regimes. Examples of the former can be found in resonant inductive electric transformers, optical waveguides and cavity couplers. In the far field, one can find the mechanism of transferring electromagnetic power by beaming a light source to a receiver, where it is converted to usable electrical energy. Although these techniques enable sufficiently efficient energy transfer, they suffer either from the short-range interaction in the near field, or from the requirement of line of sight in the far-field approaches. Recently, it was shown that weakly radiative wireless energy transfer between two identical classical resonant coils is possible with sufficiently high efficiency [kurs, karalis, hamam]. This breakthrough was made possible by the application of coupled-mode theory to the realm of power transfer. In this experiment, Kurs et al. showed that energy can be transferred wirelessly at distances of about 2 meters (mid-range) with an efficiency of about 40%. Currently, most efficient wireless energy transfer devices rely upon the constraint of exact resonance between the frequencies of the emitter (the source) and the receiver (the device, or the drain) coils [kurs, karalis, hamam]. When the frequency of the source is shifted from the frequency of the device, due to a lack of similarity between the coils or to random noise (introduced, for example, by external objects placed close to either coil), a significant reduction of the transfer efficiency occurs. In such a case, one may implement a feedback circuit, as suggested in ref., in order to correct the reduction of the transfer efficiency. In this paper, we suggest a different approach to resolving the issues of the resonant energy transfer process. We present a novel technique for robust and efficient mid-range wireless power transfer between two coils, by adapting the process of adiabatic passage (AP) for a coherently driven two-state quantum system, as will be explained in the following sections. The adiabatic technique promises to be both efficient and robust against variations of the parameters driving the process, such as the resonant frequencies of the coils and the coupling coefficient between them. [Figure caption fragment: (b) the proposed adiabatic technique, with time-varying coil frequency, can transfer energy to multiple devices even when the coils are off resonance most of the time.]

We follow the description of coupled-mode theory in the context of wireless energy transfer as described in detail by Kurs et al. The interaction between two coils, in the strong-coupling regime, is described by coupled-mode theory through the following set of two differential equations:
\[
i\,\frac{d}{dt}\left[\begin{array}{c} a_s(t) \\ a_d(t) \end{array}\right]
= \left[\begin{array}{cc}
\omega_{s}(t) - i\gamma_{s} & \kappa(t) \\
\kappa(t) & \omega_{d}(t) - i\gamma_{d} - i\gamma_{w}
\end{array}\right]
\left[\begin{array}{c} a_s(t) \\ a_d(t) \end{array}\right].
\label{wireless equations}
\]
Here, $a_s(t)$ and $a_d(t)$ are defined so that the energies contained in the source and the drain are $|a_s(t)|^2$ and $|a_d(t)|^2$, respectively.
$\gamma_s$ and $\gamma_d$ are the intrinsic loss rates (due to absorption and radiation) of the source and the drain coils, respectively, and the extraction of work from the device is described by the term $\gamma_w$. The intrinsic frequencies of the source and drain coils are $\omega_s(t)$ and $\omega_d(t)$; these are given explicitly as
\[
\omega_{s,d}(t) = \frac{1}{\sqrt{L_{s,d}(t)\,C_{s,d}(t)}},
\]
where $L_{s,d}$ and $C_{s,d}$ are the inductance and the capacitance, respectively, of the source and the drain coils. The coupling coefficient between the two coils reads (in the standard coupled-mode form for inductively coupled resonators)
\[
\kappa(t) = \frac{M(t)\,\sqrt{\omega_s(t)\,\omega_d(t)}}{2\sqrt{L_s L_d}},
\]
where $M(t)$ is the mutual inductance of the two coils. The source coil is a part of the driving circuit and is periodically recharged, while the energy is transferred wirelessly to the device coil. The dynamics of such a process in the case of static (time-independent) resonance frequencies, as described in ref., is illustrated in fig. [wireless dynamics] (top).

The evolution of eq. ([wireless equations]) is connected to the dynamics of the Schrödinger equation for a two-state atom written in the rotating-wave approximation. The variables $a_s$ and $a_d$ can be identified as the probability amplitudes for the ground state (corresponding to the source) and the excited state (corresponding to the drain), respectively. The coupling $\kappa$ between the coils is analogous to the coupling coefficient of the two-state atom (also known as the Rabi frequency), which is proportional to the atomic transition dipole moment $d$ and the laser electric field amplitude $E(t)$: $\Omega(t) = d\,E(t)/\hbar$. The difference between the resonant frequencies of the two coils corresponds to the detuning in the two-state atom: $\Delta(t) = \omega_s(t) - \omega_d(t)$. The power transfer method was demonstrated for the resonant case $\Delta = 0$, which is the case of zero detuning in atomic physics. However, the power transmitted between the coils drops sharply as the system is detuned from resonance, i.e. for the case $\Delta \neq 0$. Also, any time-dependent dynamics or change of the coupling strength between the coils can result in lower energy transfer between the coils. In the following, we develop a systematic framework of adiabatic criteria in the context of wireless energy transfer. The technique of adiabatic passage was successfully implemented in other research fields, such as nuclear magnetic resonance (NMR), the interaction of coherent light with two-level atoms, and sum-frequency conversion techniques in nonlinear optics. This dynamical solution requires a time-dependent change of the intrinsic frequency of the source coil. The variation of the frequency should be adiabatic (very slow) compared to the internal dynamics of the system, which is determined by the coupling coefficient.

We will first assume that the loss rates $\gamma_s$, $\gamma_d$ and $\gamma_w$ are zero and write eq. ([wireless equations]) in the so-called adiabatic basis (for the two-state atom this is the basis of the instantaneous eigenstates of the Hamiltonian):
\[
i\,\frac{d}{dt}\left[\begin{array}{c} b_-(t) \\ b_+(t) \end{array}\right]
= \left[\begin{array}{cc}
-\varepsilon(t) & i\dot{\vartheta}(t) \\
i\dot{\vartheta}(t) & \varepsilon(t)
\end{array}\right]
\left[\begin{array}{c} b_-(t) \\ b_+(t) \end{array}\right],
\]
where the dot denotes a time derivative, $\varepsilon(t) = \sqrt{\kappa^2(t) + \Delta^2(t)/4}$, the mixing angle is $\vartheta(t) = \tfrac{1}{2}\arctan[2\kappa(t)/\Delta(t)]$, and the connection between the original amplitudes $a_s$, $a_d$ and the adiabatic ones $b_-$, $b_+$ is given (up to phase conventions) by
\[
a_s = b_-\cos\vartheta - b_+\sin\vartheta, \qquad
a_d = b_-\sin\vartheta + b_+\cos\vartheta.
\label{adiabatic states}
\]
When the evolution of the system is adiabatic, $|b_-|$ and $|b_+|$ remain constant. Mathematically, adiabatic evolution means that the non-diagonal terms in the equation above are small compared to the diagonal terms and can be neglected. This restriction amounts to the following adiabatic condition on the process parameters, $|\dot{\vartheta}| \ll \varepsilon$, i.e. (up to a numerical factor of order unity)
\[
\left|\dot{\kappa}(t)\,\Delta(t) - \kappa(t)\,\dot{\Delta}(t)\right| \ll \left[\kappa^2(t) + \Delta^2(t)/4\right]^{3/2}.
\]
Hence adiabatic evolution requires a smooth time dependence of the coupling and the detuning,
long interaction time , and large coupling and/or large detuning . in the adiabatic regime , but the energy contained in the source and the drain coil will _ vary _ if the mixing angle varies ; thus adiabatic evolution can produce energy transfer between the two coils . if the detuning sweeps slowly from some large negative value to some large positive value ( or vice versa ) , then the mixing angle changes from to ( or vice versa ) . with the energy initially in the first coil , the system will stay adiabatically in thus the energy will end up in the second coil .therefore the detuning sweep ( i.e. the frequency chirp ) will produce complete energy transfer .furthermore , ap is not restricted to the shape of the coupling and the detuning as far as the condition is fulfilled and the mixing angle changes from to 0 ( or vice versa ) .the variation of the detuning can be achieved by changing the capacitance ( or the inductance ) of one , or the two coils .the time variation of the coupling can be achieved , for example , with the rotation of one coil ( or two coils ) , thereby changing the geometry and thus the mutual inductance of the two coils .s ( right frames ) . in all the graphs, solid line refers to the source coil , and dashed line refers to the device coil . for the static case , the functions and are given by eqs . with the following parameters : s , s ,whereas for the ap case , they follow eqs . with the following parameters : s , s , s.,width=377 ]when the loss rates are nonzero , the dynamics become more complicated and more realistic .nevertheless , the essence of ap remains largely intact , if one follow another important constraint , states that the coupling coefficient also should be larger than the loss rates , and that the initial and final detunings are larger than the coupling coefficient .the physical reasoning behind it is that the dynamics should be faster then the damping rates that exist in the system ( mainly on the device ) and not only adiabatic . in fig .[ fig2 ] we compare the resonant ( static ) and adiabatic mechanisms , without losses ( left frames ) and with losses ( right frames ) . for the numerics , we used the following coupling and detuning for the resonant case : [ resonance ] and for the adiabatic mechanism : [ ap ] where for our simulations we set . as can be seen in fig .[ fig2 ] the energy in the static case oscillates back and forth between the two coils . 
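Before moving on, here is a minimal numerical sketch of the comparison just described. Everything below is hedged: the pulse shapes standing in for eqs. ([resonance]) and ([ap]) and all parameter values are illustrative choices of ours (in units where $\kappa = 1$), not the paper's, and the efficiency ratio anticipates the definition given shortly.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Coupled-mode equations in the frame of the drain coil, so that only the
# detuning eps(t) = omega_s(t) - omega_d appears on the diagonal:
#   i d/dt (a_s, a_d)^T = [[eps - i*g_s, k], [k, -i*(g_d + g_w)]] (a_s, a_d)^T
# State packed as [Re a_s, Im a_s, Re a_d, Im a_d]; energies are |a|^2.
def rhs(t, y, eps, kap, g_s, g_d, g_w):
    a = np.array([y[0] + 1j * y[1], y[2] + 1j * y[3]])
    H = np.array([[eps(t) - 1j * g_s, kap(t)],
                  [kap(t), -1j * (g_d + g_w)]])
    da = -1j * (H @ a)
    return [da[0].real, da[0].imag, da[1].real, da[1].imag]

def run(eps, kap, T=60.0, g_s=0.005, g_d=0.005, g_w=0.2):
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 0.0],
                    args=(eps, kap, g_s, g_d, g_w), max_step=0.02, rtol=1e-8)
    Es = sol.y[0]**2 + sol.y[1]**2          # energy in the source coil
    Ed = sol.y[2]**2 + sol.y[3]**2          # energy in the drain coil
    Is, Id = trapezoid(Es, sol.t), trapezoid(Ed, sol.t)
    # efficiency = work extracted / total energy dissipated; the loss
    # factors of 2 from the amplitude convention cancel in the ratio
    return g_w * Id / (g_s * Is + (g_d + g_w) * Id)

kap = lambda t: 1.0                                   # constant coupling
chirp = lambda d0: (lambda t: d0 + 0.3 * (t - 30.0))  # linear detuning sweep

for d0 in (0.0, 4.0):   # matched coils, then a strong static frequency offset
    eta_static = run(lambda t, d=d0: d, kap)  # fixed detuning d0 throughout
    eta_ap = run(chirp(d0), kap)              # sweep still crosses resonance
    print(d0, eta_static, eta_ap)
# Expected trend: eta_static degrades as d0 grows, while the chirped scheme
# keeps transferring because the sweep crosses resonance regardless of d0.
```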
in ap ,once the energy is transferred to the drain coil it stays there .this feature of ap is used to minimize the energy losses from the source coil .we see that when following the adiabatic constraints , the ap process outperforms the static resonant method .to describe the efficiency of the proposed technique we use the efficiency coefficient , which is the ratio between the work extracted from the drain for the time interval divided by the total energy ( absorbed and radiated ) for the same time interval , in the static steady state case , the efficiency reads as can be seen from eq ., in order to maximize , one should reduce the time that the energy stays in the source coil .this can not be obtained in the resonant ( static ) case , because the energy oscillates back and forth between the source and the drain coils .nevertheless , in the ap mechanism , there is only one transition between the source and the device , so it should be chosen to be as early as possible .the ap dynamics ( or any other time - dependent dynamics ) should also repeat itself after some repetition time .this is illustrated in fig .the time scale is of the order of several `` loss times '' ( equal to ) , in this way we ensure that each cycle of coil charging begins , after the consumption of all the energy from the previous cycle , otherwise interference appears , which will be difficult to be predicted analytically . for the propose ap techniquewe assume that energy is instantly loaded into the source coil without loss in the beginning of each cycle .another important measurement is the amount of energy transferred from the source coil to the device , which is the useful energy consumed as a function of time .this measurement is in fact equal to the nominator in eq . .( right y - axes ) and the useful energy consumed ( left y - axes ) as a function of the static detuning .the blue solid line depicts ap and the red dotted line is for the static method .the functions and are given by eqs .for the static method and for ap , with the following parameter values : ( top frames ) s , s , s , s .( bottom frames ) s , s , s , s ., width=302 ] for ap ( top frame ) and the static case ( bottom frame ) versus the loss rate and the coupling coefficient .the functions and are given by eqs .for the static method and for ap , with the following parameter values : s , s , s . , width=302 ]in order to compare ap and the static mechanisms for energy transfer , several sets of simulations were performed , measuring both conversion efficiency and total energy consumed by the device : * comparison between efficiencies as a function of the detuning , for different distances between the coils ( determined by the ratio ) ; * influence of variations in the coupling and the loss coefficients ; * robustness of the adiabatic energy transfer , for time - dependent coupling coefficient .first , the effect of different resonant frequency between the source coil and the device coil on the total wireless transfer efficiency were examined . forthat we changed only the resonance frequency of the device in each numerical run , for two different distances between the coils : 0.8 meter and 1 meter between the coils ( where we used the same units and notations as reported in ref . ) . 
fig .[ fig4 ] shows that ap is less sensitive to the static detuning .as can be shown , the maximal conversion efficiency is achieved for the resonance approach is symmetric about zero detuning , while ap is asymmetric with its maximal efficiency value shifted toward the positive detuning .the explanation is that then the energy transfer occurs at early stage and therefore the energy stays less time in the source coil .next , we examine the effect of different coupling and loss coefficients .[ fig5 ] shows contour plots of the efficiency coefficient as a function of the coupling and the loss rate of the source and the drain coils ( where we assumed ) , where we assume fix values of ( i.e. can not be changed for different value of coupling and loss coefficients ) .the upper frame presents results for the ap technique , while the lower frame is for the static method .ap is obviously more robust to the change in the parameter values compared to the static method .the scheme proposed here , which does not require feedback control , is therefore an alternative to the scheme suggested in .we also wanted to check the effect of time - dependent coupling to the dynamics of energy transfer , which is a more realistic modelling to the process . when the detuning is varied between the two coils , the coupling changes as well (can be inferred from eq . ) .the maximal coupling coefficient value is expected to be obtained when the detuning is zero .for that , we chose the following time dependent detuning and coupling coefficient , respectively : [ shapes time dependent ] where for our simulations we set .[ fig6 ] shows the corresponding energy transfer efficiency as a function of time , along with the detuning and coupling .s , s , s , s ., width=302 ]in conclusion , we have shown that the technique of adiabatic passage , which is well known in quantum optics and nuclear magnetic resonance , has analog in the wireless energy transfer process between two circuites .the factor that enables this analogy is the equivalence of the schrdinger equation for two - state system , to the coupled - mode equation which describes the interaction between two classical coils in the strong - coupling regime .the proposed procedure transfers energy wirelessly in effective , robust manner between two coils , without being sensitive to any resonant constraints and noise compared to the resonant scheme demonstrated previously .the application of this mechanism enables efficient energy transfer to several devices as well as optimizing the transfer for several distances and noise interferences .a. karalis , _ novel photonic phenomena in nanostructured material systems with applications and mid - range efficient insensitive wireless energy - transfer _ ( scd thesis , massachusetts institute of technology , 2008 ) .
We propose a technique for efficient mid-range wireless power transfer between two coils, by adapting the process of adiabatic passage for a coherently driven two-state quantum system to the realm of wireless energy transfer. The proposed technique is shown to be robust against noise, resonant constraints, and other interferences that exist in the neighborhood of the coils. Keywords: wireless energy transfer, adiabatic passage, robust and efficient power transfer, coupled-mode theory. PACS: 05.45.Xt, 32.80.Xx, 84.32.Hh, 85.80.Jm.
Flows involving free surfaces lend themselves to observation, and thus have been scrutinized for hundreds of years. The earliest theoretical work was concerned almost exclusively with the equilibrium shapes of fluid bodies, and with the stability of the motion around those shapes. Experimentalists, always being confronted with physical reality, were much less able to ignore the strongly non-linear nature of hydrodynamics. Thus many of the non-linear phenomena that are the focus of attention today had already been reported 170 years ago. However, with no theory in place to put these observations into perspective, non-linear phenomena took the back seat to other issues, and were soon forgotten. Here we report on the periodic rediscovery of certain non-linear features of drop formation, by retracing some of the history of experimental observation of surface-tension-driven flow. Recently there has been some progress on the theoretical side, which relies on the self-similar nature of the dynamics close to pinching.

Modern research on drop formation begins with the seminal contribution of Savart. He was the first to recognize that the breakup of liquid jets is governed by laws independent of the circumstance under which the jet is produced, and concentrated on the simplest possible case of a circular jet. Without photography at one's disposal, experimental observation of drop breakup is very difficult, since the timescale on which it takes place is very short. [Figure: a figure from Savart's original paper ( ), showing the breakup of a liquid jet 6 mm in diameter; it clearly shows the succession of main and satellite drops as well as drop oscillations.] Yet Savart was able to extract a remarkably accurate and complete picture of the actual breakup process using his naked eye alone. To this end he used a black belt, interrupted by narrow white stripes, which moved in a direction parallel to the jet. This effectively allowed a stroboscopic observation of the jet. To confirm beyond doubt the fact that the jet breaks up into drops and thus becomes discontinuous, Savart moved a "slender object" swiftly across the jet, and found that it stayed dry most of the time. Being an experienced swordsman, he undoubtedly used this weapon for his purpose ( ). Savart's insight into the dynamics of breakup is best summarized by fig. [fig1], taken from his paper ( ). To the left one sees the continuous jet as it leaves the nozzle. Perturbations grow on the jet, until it breaks up into drops at a point labeled "a". Near "a" an elongated neck has formed between two bulges which later become drops. After breakup, in between two such drops, a much smaller "satellite" drop is always visible. Owing to perturbations received when they were formed, the drops continue to oscillate around a spherical shape. Only the very last moments leading to drop formation are not quite resolved in fig. [fig1]. From a theoretical point of view, what is missing is the realization that surface tension is the driving force behind drop breakup, the groundwork for whose description was laid by Young and Laplace. Savart, however, makes reference to a mutual attraction between molecules, which makes a sphere the preferred shape, around which oscillations take place. The crucial role of surface tension was recognized by Plateau, who confined himself mostly to the study of equilibrium shapes. This allows one to predict whether a given perturbation imposed on a fluid cylinder will grow or not.
Namely, any perturbation that leads to a reduction of surface area is favored by surface tension, and will thus grow. This makes all sinusoidal perturbations with wavelength longer than the circumference $2\pi r$ of the jet unstable. At the same time as Plateau, Hagen published very similar investigations, without quite mastering the mathematics behind them ( ). The ensuing quarrel between the two authors, published as letters to Annalen der Physik, is quite reminiscent of similar debates over priority today. A little earlier, Plateau had developed his own experimental technique to study drop breakup ( ), by suspending a liquid bridge in another liquid of the same density in a so-called "Plateau tank", thus eliminating the effects of gravity. Yet this research was focused on predicting whether a particular configuration would be stable or not. However, Plateau also included some experimental sketches (cf. fig. [plateau]) that offer interesting insight into the nonlinear dynamics of breakup for a viscous fluid: first a very thin and elongated thread forms, which has its minimum in the middle. However, the observed final state of a satellite drop in the center, with even smaller satellite drops to the right and left, indicates that the final stages of breakup are more complicated: the thread apparently broke at four different places, instead of in the middle.

Following up on Plateau's insight, Rayleigh added the flow dynamics to the description of the breakup process. At low viscosities, the time scale of the motion is set by a balance of inertia and surface tension:
\[
\tau = \sqrt{\frac{\rho\,r^{3}}{\gamma}},
\]
where $r$ is the radius of the (water) jet, $\rho$ the density, and $\gamma$ the surface tension. For the jet shown in fig. [fig1], this amounts to roughly $0.02$ s, a time scale quite difficult to observe with the naked eye. Rayleigh's linear stability calculation of a fluid cylinder only allows one to describe the initial growth of instabilities as they initiate near the nozzle. It certainly fails to describe the details of drop breakup leading to, among others, the formation of satellite drops. Linear stability analysis is, however, quite a good predictor of important quantities like the continuous length of the jet. [Figure: two photographs of water jets, taken using a short-duration electrical spark.] [Figure: a sequence of pictures of a drop of water falling from a pipette ( ); for the first time, the sequence of events leading to satellite formation can be appreciated.] Rayleigh was well aware of the intricacies of the last stages of breakup, and published some experimental pictures himself ( ).
Unfortunately, these pictures were produced by a single short spark, so they transmit only a rough idea of the dynamics of the process. However, it is again clear that satellite drops, or entire sequences of them, are produced by elongated necks between two main drops. Clearly, what is needed for a more complete understanding is a sequence of photographs showing one stage evolving into the other. The second half of the 19th century was an era that saw a great resurgence of interest in surface-tension-related phenomena, both from a theoretical and an experimental point of view. The driving force was the central role surface tension plays in the quest to understand the cohesive force between fluid particles ( ), for example by making precise measurements of the surface tension of a liquid. Many of the most well-known physicists of the day contributed to this research effort, some of whom are known today for their later contributions to other fields ( ). A particular example is the paper by Lenard, who observed the drop oscillations that remain after breakup, already noted by Savart. By measuring their frequency, the value of the surface tension can be deduced. To record the drop oscillations, Lenard used a stroboscopic method, which allows one to take an entire sequence with a time resolution that would otherwise be impossible to achieve. As more of an aside, Lenard also records a sequence showing the dynamics close to breakup, leading to the separation of a drop. It shows for the first time the origin of the satellite drop: first the neck breaks close to the main drop, but before it is able to snap back, it also pinches on the side toward the nozzle. The presence of a slender neck is intimately linked to the profile near the pinch point being very asymmetric: on one side it is very steep, fitting well to the shape of the drop; on the other side it is very flat, forcing the neck to be flat and elongated. [Figure: a drop of water (left) and a glycerol-alcohol mixture (right) falling from a pipette ( ); the drop of viscous fluid pulls out long necks as it falls.] However, as noted before, few people took note of the fascinating dynamics close to breakup.
From a theoretical point of view, tools were limited to Rayleigh's linear stability analysis, which does not allow one to understand satellite formation. Many years later, the preoccupation was still to find simple methods to measure surface tension, one of them being the "drop weight method" ( ). The idea of the method is to measure surface tension by measuring the weight of a drop falling from a capillary tube of defined diameter. Harold Edgerton and his colleagues looked at time sequences of drops of fluids of different viscosities falling from a faucet ( ), rediscovering some of the features observed originally by Lenard, but adding some new insight. Fig. [edgerton] shows a water drop falling from a faucet, forming quite an elongated neck, which then decays into several satellite drops. The measured quantity of water thus comes from the main drop as well as from some of the satellite drops; some of the satellite drops are projected upward, and thus do not contribute. The total weight thus depends on a very subtle dynamical balance, which can hardly be a reliable measure of surface tension. In addition, as fig. [edgerton] demonstrates, a high-viscosity fluid like glycerol forms extremely long threads, which break up into myriads of satellite drops. In particular, the drop weight cannot be a function of surface tension alone, but also depends on viscosity, making the furnishing of appropriate normalization curves unrealistically complicated. [Figure: a high-resolution sequence showing the bifurcation of a drop of water ( ).] [Figure: a sequence of interface profiles of a jet of glycerol close to the point of breakup ( ); corresponding analytical solutions based on self-similarity of the entire profile are superimposed.]

After Edgerton's paper, the next paper that could report significant progress in illuminating non-linear aspects of drop breakup was published in 1990 ( ). Firstly, it contains a detailed sequence of a drop of water falling from a pipette, renewing efforts to understand the underlying dynamics. Secondly, it was proposed that close to pinch-off the dynamics actually becomes quite simple, since any external scale cannot play a role. Namely, if the minimum neck radius $h_{\min}$ is the only relevant length scale, and if viscosity does not enter the description, then at a time $t_0 - t$ away from breakup one must have, for dimensional reasons,
\[
h_{\min} \propto \left(\frac{\gamma}{\rho}\right)^{1/3} (t_0 - t)^{2/3}.
\label{inviscid}
\]
At some very small scale, one expects viscosity to become important. The only length scale that can be formed from the fluid parameters alone is
\[
\ell_{\nu} = \frac{\rho\,\nu^{2}}{\gamma},
\]
so the validity of ([inviscid]) is limited to the range between the external scale and this inner viscous scale. These simple similarity ideas can in fact be extended to obtain the laws for the entire profile, not just the minimum radius ( ). Namely, one supposes that the profile around the pinch point remains the same throughout, while it is only its radial and axial length scales which change. In accordance with ([inviscid]), these length scales are themselves power laws in the time distance from the singularity. In effect, by making this transformation one has reduced the extremely rapid dynamics close to breakup to a static theory, and simple analytical solutions are possible. The experimental pictures in fig. [kowalewski] are again taken using a stroboscopic technique, resulting in a very high time resolution ( ).
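Written out, the similarity ansatz just described takes the following form in the inviscid regime (a hedged reconstruction in our own symbols: $\phi$ is a dimensionless profile function, and $z_0$, $t_0$ locate the pinch point in space and time; the exponents follow from the same dimensional argument as ([inviscid])):
\[
h(z,t) \simeq \left(\frac{\gamma}{\rho}\right)^{1/3}(t_0-t)^{2/3}\;\phi\!\left(\frac{z-z_0}{(\gamma/\rho)^{1/3}\,(t_0-t)^{2/3}}\right),
\]
so that both the radial and the axial scale shrink as $(t_0-t)^{2/3}$, and substituting this ansatz into the equations of motion indeed reduces the time-dependent problem to a static one for $\phi$.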
Since for each of the pictures the temporal distance from breakup is known, the form of the profile can be predicted without adjustable parameters. The result of the theory is superimposed as black lines on the experimental pictures of a glycerol jet breaking up. In each picture the drop about to form is seen on the right; a thin thread forms on the left. The neighborhood of the pinch point is described quite well; in particular, the theory reproduces the extreme asymmetry of the profile. We already singled out this asymmetry as responsible for the formation of satellite drops.

One of the conclusions of this brief overview is that research works in a fashion that is far from straightforward. Times of considerable interest in a subject are separated by relative lulls, and often known results, published in leading journals of the day, had to be rediscovered. From a broader perspective, however, one observes a development from questions of (linear) stability and the measurement of static quantities to a focus that is more and more on the (non-linear) dynamics that makes fluid mechanics so fascinating.
Surface-tension-related phenomena have fascinated researchers for a long time, and the mathematical description pioneered by Young and Laplace opened the door to their systematic study. The time scale on which surface-tension-driven motion takes place is usually quite short, making experimental investigation quite demanding. Accordingly, most theoretical and experimental work has focused on static phenomena, and in particular on the measurement of surface tension, by physicists like Eötvös, Lenard, and Bohr. Here we will review some of the work that has eventually led to a closer scrutiny of time-dependent flows, highly non-linear in nature. Often this motion is self-similar, such that it can in fact be mapped onto a pseudo-stationary problem, amenable to mathematical analysis.
In the last decades, the importance of copulas in mathematical modeling has been recognized by many researchers; see e.g. ( ). Many applications come from actuarial and financial mathematics, where the joint distribution of a vector of random variables is studied frequently. Typical problems are the pricing of basket options or the derivation of the value at risk of a portfolio. In this context, an interesting question concerns the best or worst case when the marginal distributions are given but the dependence structure of the underlying random vector is unknown or only partially known. Such situations appear frequently, since dependence structures are in general more difficult to calibrate from empirical data than marginal distributions. Thus we are interested in maximizing the value of an integral by considering all possible copulas as integrators.

The underlying problem is in general open; however, solutions exist for some particular classes of integrand functions. For instance, Rapuch and Roncalli consider basket option pricing when no information on the dependence of the underlying random variables is available. They derive bounds for the prices of several options of European type in the Black-Scholes model, where the integrand function has a mixed second derivative with constant sign on the unit square. Tankov extends these results to the greater class of two-increasing (or supermodular) functions, a definition of which will be given in the next section. Furthermore, the author gives an extension to option pricing problems under partial information on the dependence of the underlying random variables. Note that the above results are based on classical findings due to Tchen.

Similar results and applications in number theory are presented by Fialová and Strauch. They consider bounds for functionals which depend on two uniformly distributed point sequences. Under conditions similar to those in ( ), they show that the Fréchet-Hoeffding bounds $W$ and $M$ are the copulas for which the extremal values are obtained. We remark that the underlying problem was formulated as an open problem in the unsolved problem collection of Uniform Distribution Theory. A more detailed introduction to applications in uniform distribution theory is given in section [udt] of this article.

A list of results for a different class of functions exists in the context of financial risk theory; see e.g. Puccetti and Rüschendorf or Albrecher et al. There, the authors derive sharp bounds for quantiles of the loss of a portfolio, represented by a finite sum of dependent random variables, when no or only partial information on the dependence structure within the portfolio is available. Such quantities play an important role in actuarial and financial mathematics, for instance in the computation of the value at risk. Recently this approach has been generalized to derive optimal bounds for the expected shortfall of a portfolio; see Puccetti. Many of these results rely on the so-called rearrangement method due to Rüschendorf. Note that the application of the rearrangement method requires a rather strong regularity of the integrand function; see e.g. ( ). The optimal bounds in the articles mentioned above are attained by using the so-called shuffles-of-$M$ class of copulas, which we define in the next section.

The structure of our paper is the following: in the next section, after a short introduction to copulas, we present our main results, which are bounds on integrals of piecewise constant functions. Furthermore, we formulate an approximation technique for a very general class of integrand functions. In the third section, we apply our results to problems in uniform distribution theory and financial mathematics.

In the sequel we consider expectations
\[
\mathbb{E}[f(X,Y)] = \int_{[0,1]^2} f(x,y)\,dC(x,y),
\]
where $f$ is a function on $[0,1]^2$ and $X$, $Y$ are uniformly distributed random variables on the unit interval. In this situation the joint distribution function $C$ of $X$ and $Y$ is a copula.

[cop] Let $C$ be a positive function on the unit square. Then $C$ is called a (two-)copula iff for every $x, y \in [0,1]$
\[
C(x,0) = C(0,y) = 0, \qquad C(x,1) = x, \qquad C(1,y) = y,
\]
and for every $x_1 \le x_2$ and $y_1 \le y_2$ with $x_i, y_i \in [0,1]$,
\[
C(x_2,y_2) - C(x_2,y_1) - C(x_1,y_2) + C(x_1,y_1) \ge 0.
\]
A function which satisfies the latter inequality is called two-increasing or supermodular. In the sequel we denote by $\mathcal{C}$ the set of all two-copulas. Note that the restriction to uniformly distributed marginals is insignificant, since by Sklar's theorem (see e.g. [?, Theorem 2.3.3]) we can write every continuous two-dimensional distribution function $H$ as
\[
H(x,y) = C(F(x), G(y)),
\]
where $F$ and $G$ denote the marginal distributions of $H$ and $C$ is a copula. Moreover, if $F$ and $G$ are continuous, then $C$ is unique and we have
\[
C(u,v) = H(F^{-1}(u), G^{-1}(v)),
\]
where $F^{-1}$ and $G^{-1}$ denote the inverse distribution functions of the marginals.

Copulas can be ordered stochastically, where the upper and lower bounds are called the Fréchet-Hoeffding bounds (see e.g. [?, Theorem 2.2.3]). More precisely, for every two-copula $C$ we have
\[
W(x,y) := \max(x + y - 1,\, 0) \;\le\; C(x,y) \;\le\; \min(x,\, y) =: M(x,y).
\]
It is also well known that the Fréchet-Hoeffding lower and upper bounds $W$ and $M$ are themselves copulas in the two-dimensional setting. For higher dimensions an analogon of this inequality exists; however, the lower bound is in general not a copula, see [?, Theorems 3.2 and 3.3]. For a detailed introduction to copulas see ( ).

Thus, according to the discussion in the beginning of section [intro], we are interested in bounds of the form
\[
\int_{[0,1]^2} f\,dC_* \;\le\; \int_{[0,1]^2} f\,dC \;\le\; \int_{[0,1]^2} f\,dC^* \qquad \text{for all } C \in \mathcal{C},
\]
where $C_*$ and $C^*$ are copulas. As mentioned above, a particularly interesting subclass of copulas for our problems are the so-called shuffles of $M$, see [?, Section 3.2.3].

[shuf] Let $\{J_1, \dots, J_n\}$, $J_i = [s_{i-1}, s_i]$, be a partition of the unit interval with $0 = s_0 < s_1 < \dots < s_n = 1$, let $\pi$ be a permutation of $\{1, \dots, n\}$, and let $\omega : \{1, \dots, n\} \to \{-1, 1\}$. We define the partition $\{S_1, \dots, S_n\}$ by $S_i = J_i \times K_{\pi(i)}$, where $\{K_1, \dots, K_n\}$ is the partition of the unit interval, ordered from left to right, whose interval lengths satisfy $|K_{\pi(i)}| = |J_i|$, so that each $S_i$ is a square. A copula $C$ is called a shuffle of $M$ with parameters $(n, \{J_i\}, \pi, \omega)$ if it is defined in the following way: for all $i$, if $\omega(i) = 1$, then $C$ distributes a mass of $s_i - s_{i-1}$ uniformly spread along the diagonal of $S_i$, and if $\omega(i) = -1$, then $C$ distributes a mass of $s_i - s_{i-1}$ uniformly spread along the antidiagonal of $S_i$. Note that the two Fréchet-Hoeffding bounds are trivial shuffles of $M$ with parameters $(1, \{[0,1]\}, \mathrm{id}, 1)$ and $(1, \{[0,1]\}, \mathrm{id}, -1)$, respectively. Furthermore, it is well known that every copula can be approximated arbitrarily closely with respect to the supremum norm by a shuffle of $M$; see e.g. [?, Theorem 3.2.2]. In the sequel we denote by $\{I_1, \dots, I_n\}$ the partition of the unit interval which consists of $n$ intervals of equal length.

In the next theorem we illustrate the close relation of our problem to problems in optimization theory, namely linear assignment problems of the form
\[
\max_{\sigma \in S_n} \sum_{i=1}^{n} a_{i,\sigma(i)},
\]
where $S_n$ is the set of all permutations of $\{1, \dots, n\}$. Such problems are well understood and can be solved efficiently, for example by using the celebrated Hungarian algorithm due to Kuhn.
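As a concrete sketch of this connection (hedged: the 4x4 matrix below is an arbitrary illustration, not data from the paper; scipy.optimize.linear_sum_assignment is a standard solver implementing a Jonker-Volgenant-type variant of the Hungarian method), the copula bounds of the theorem that follows can be computed directly with an assignment solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Piecewise-constant integrand on the n x n grid of squares I_i x I_j:
# f = a[i, j] on I_i x I_j, where I_1, ..., I_n partition [0, 1] into
# intervals of equal length.  The matrix is an arbitrary illustration.
a = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [3.0, 2.0, 1.0, 0.0]])
n = a.shape[0]

# max_C int f dC = (1/n) * max_sigma sum_i a[i, sigma(i)], attained by a
# shuffle of M putting mass 1/n on each square I_i x I_sigma(i); the lower
# bound is obtained by minimizing over permutations instead.
rows, cols = linear_sum_assignment(a, maximize=True)
upper = a[rows, cols].sum() / n
rows, cols = linear_sum_assignment(a)
lower = a[rows, cols].sum() / n

# For comparison: the independence copula spreads mass uniformly over the
# grid and therefore yields the plain average of the matrix entries.
print(lower, a.mean(), upper)   # here: 0.0  1.25  2.0
```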
the optimal bounds in the articles mentioned above are attained by using the so-called shuffles of $m$ class of copulas, which we define in the next section.

the structure of our paper is the following: in the next section, after a short introduction to copulas, we present our main results, which are bounds on integrals of piecewise constant functions. furthermore, we formulate an approximation technique for a very general class of integrand functions. in the third section, we apply our results to problems in uniform distribution theory and financial mathematics.

in the sequel we consider expectations \[ e\left[ f(x, y) \right], \] where $f$ is a function on $[0,1]^2$ and $x, y$ are uniformly distributed random variables on the unit interval. in this situation the joint distribution function of $x$ and $y$ is a copula. [cop] let $c$ be a positive function on the unit square. then $c$ is called a (two-)copula iff \[ c(u, 0) = c(0, u) = 0 \quad \text{and} \quad c(u, 1) = c(1, u) = u \] for every $u \in [0,1]$, and \[ c(u_2, v_2) - c(u_2, v_1) - c(u_1, v_2) + c(u_1, v_1) \ge 0 \] for every $u_1 \le u_2$ and $v_1 \le v_2$. a function which satisfies the latter inequality is called two-increasing or supermodular. in the sequel we denote by $\mathcal{c}$ the set of all two-copulas. note that the restriction to uniformly distributed marginals is insignificant since, by sklar's theorem (see e.g. theorem 2.3.3 of ), we can write every continuous two-dimensional distribution function $h$ as \[ h(x, y) = c\left( f(x), g(y) \right), \] where $f, g$ denote the marginal distributions of $h$ and $c$ is a copula. moreover, if $f$ and $g$ are continuous, then $c$ is unique and we have \[ c(u, v) = h\left( f^{-1}(u), g^{-1}(v) \right), \] where $f^{-1}, g^{-1}$ denote the inverse distribution functions of the marginals.

copulas can be ordered stochastically, where the upper and lower bounds are called the fréchet-hoeffding bounds (see e.g. theorem 2.2.3 of ). more precisely, for every two-copula $c$ we have \[ w(u, v) := \max(u + v - 1, 0) \le c(u, v) \le \min(u, v) =: m(u, v). \] it is also well known that the fréchet-hoeffding lower and upper bounds $w$ and $m$ are copulas in the two-dimensional setting. for higher dimensions an analogon of this inequality exists; however, the lower bound is in general not a copula, see ( theorems 3.2 and 3.3 of ). for a detailed introduction to copulas see .

thus, according to the discussion at the beginning of section [intro], we are interested in bounds of the form \[ \int_{[0,1]^2} f \, dc_* \;\le\; \int_{[0,1]^2} f \, dc \;\le\; \int_{[0,1]^2} f \, dc^* \quad \text{for all } c \in \mathcal{c}, \] where $c_*, c^*$ are copulas. as mentioned above, a particularly interesting subclass of copulas for our problems are the so-called shuffles of $m$; see ( section 3.2.3 of ).

[shuf] let $n \ge 1$, let $\{ j_i \}_{i=1}^{n}$ be a partition of the unit interval with $0 = j_0 < j_1 < \dots < j_n = 1$, let $\sigma$ be a permutation of $\{1, \dots, n\}$, and let $\omega : \{1, \dots, n\} \to \{-1, 1\}$. we define a partition $\{ s_i \}_{i=1}^{n}$ of the unit square such that each $s_i$ is a square with base $(j_{i-1}, j_i)$, the vertical positions of the squares being determined by rearranging the widths $j_i - j_{i-1}$ according to $\sigma$. a copula $c$ is called a shuffle of $m$ with parameters $(n, \{j_i\}, \sigma, \omega)$ if it is defined in the following way: for all $i = 1, \dots, n$, if $\omega(i) = 1$ then $c$ distributes a mass of $j_i - j_{i-1}$ uniformly spread along the diagonal of $s_i$, and if $\omega(i) = -1$ then $c$ distributes a mass of $j_i - j_{i-1}$ uniformly spread along the antidiagonal of $s_i$. note that the two fréchet-hoeffding bounds are trivial shuffles of $m$, with parameters $(1, \{0, 1\}, \mathrm{id}, 1)$ and $(1, \{0, 1\}, \mathrm{id}, -1)$, respectively. furthermore, it is well known that every copula can be approximated arbitrarily closely with respect to the supremum norm by a shuffle of $m$; see e.g. ( theorem 3.2.2 of ). in the sequel we denote by $p_n$ the partition of the unit interval which consists of $n$ intervals of equal length.

in the next theorem we illustrate the close relation of our problem to problems in optimization theory, namely linear assignment problems of the form \[ \max_{\sigma \in s_n} \sum_{i=1}^{n} a_{i \sigma(i)}, \] where $s_n$ is the set of all permutations of $\{1, \dots, n\}$. such problems are well understood and can be solved efficiently, for example by using the celebrated hungarian algorithm due to kuhn. for a detailed description of assignment problems and related solution algorithms we refer to .
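as a minimal illustration of this connection, and anticipating the theorem below, the following hedged python sketch computes the two extremal values for a piecewise constant integrand. scipy's `linear_sum_assignment` (a kuhn-munkres-type solver) stands in for the hungarian algorithm; the function name and the convention that `a[i, j]` holds the value of the integrand on the cell $i_i \times i_j$ are our own illustrative choices, not notation from the text.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def copula_integral_bounds(a):
    """extremal values of int f dc over all copulas, for f piecewise
    constant with value a[i, j] on cell i_i x i_j of the equidistant
    n x n partition of the unit square.  the optimal shuffle of m puts
    mass 1/n on the diagonal of one square per row, so the integral
    reduces to the mean of one matrix entry per row."""
    rows, sig_max = linear_sum_assignment(-a)  # maximize sum_i a[i, sigma(i)]
    rows, sig_min = linear_sum_assignment(a)   # minimize for the lower bound
    return a[rows, sig_min].mean(), a[rows, sig_max].mean(), sig_min, sig_max

# example: a random piecewise constant integrand on a 6 x 6 grid
rng = np.random.default_rng(0)
lo, hi, s_min, s_max = copula_integral_bounds(rng.standard_normal((6, 6)))
print(lo, hi)
```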
[main1] let $n \ge 1$, let $(a_{ij})_{i,j=1}^{n}$ be a real-valued matrix and let the function $f$ be defined as \[ f(x, y) = a_{ij} \quad \text{for } (x, y) \in i_i \times i_j, \] where $i_1, \dots, i_n$ denote the intervals of the equidistant partition $p_n$. then the copula which maximizes $\int_{[0,1]^2} f \, dc$ is given as a shuffle of $m$ with parameters $(n, p_n, \sigma^*, 1)$, where $\sigma^*$ is the permutation which solves the assignment problem \[ \max_{\sigma \in s_n} \sum_{i=1}^{n} a_{i \sigma(i)}. \] moreover, the maximal value of the integral is given as \[ \frac{1}{n} \sum_{i=1}^{n} a_{i \sigma^*(i)}. \] let $\mathcal{s}_n$ be the set of all shuffles of $m$ with parameters of the form $(n, p_n, \sigma, 1)$, and let $c_{\sigma^*} \in \mathcal{s}_n$ denote the shuffle with permutation $\sigma^*$, where $\sigma^*$ is given in the statement of the theorem. then $c_{\sigma^*}$ is always a copula satisfying \[ \int_{[0,1]^2} f \, dc_{\sigma^*} = \frac{1}{n} \sum_{i=1}^{n} a_{i \sigma^*(i)}. \]

for an arbitrary copula $c$ we define the matrix $b = (b_{ij})$ as $b_{ij} = n \, \mu_c(i_i \times i_j)$, where $\mu_c$ denotes the measure induced by $c$. it follows by definition [cop] that $b$ is doubly stochastic, and by definition [shuf] that the matrix corresponding to a shuffle in $\mathcal{s}_n$ is a permutation matrix. furthermore, it follows by the birkhoff-von neumann theorem that the set of doubly stochastic matrices coincides with the convex hull of the set of permutation matrices; see e.g. . thus for every $b$ there exist $\lambda_1, \dots, \lambda_k \ge 0$ with $\sum_{k} \lambda_k = 1$ such that $b$ is the corresponding convex combination of permutation matrices, and hence \[ \int_{[0,1]^2} f \, dc = \frac{1}{n} \sum_{i,j} a_{ij} b_{ij} = \sum_{k} \lambda_k \, \frac{1}{n} \sum_{i} a_{i \sigma_k(i)} \le \frac{1}{n} \sum_{i} a_{i \sigma^*(i)}. \] note that the maximal copula in theorem [main1] is by no means unique, since, for instance, the value of the integral is independent of the choice of .

obviously, we can derive a lower bound in theorem [main1] by considering $-f$. furthermore, it is easy to see that theorem [main1] applies to all functions which are constant on sets of the form $[a, b) \times [c, d)$, where the endpoints are rational numbers.

the following generalization of our approach applies to a wide class of functions on the unit square. [main2] let $f$ be a continuous function on $[0,1]^2$, and for $n \ge 1$ let $\overline{f}_n$ and $\underline{f}_n$ denote the piecewise constant functions given by the maximum and the minimum of $f$ on the cells induced by $p_n$. then the extremal values of theorem [main1] applied to $\overline{f}_n$ and $\underline{f}_n$ converge to $\max_{c \in \mathcal{c}} \int_{[0,1]^2} f \, dc$ as $n \to \infty$. since $f$ is uniformly continuous on $[0,1]^2$, we have that for every $\varepsilon > 0$ there exists an integer $n$ such that \[ 0 \le \overline{f}_n(x, y) - \underline{f}_n(x, y) \le \varepsilon \quad \text{for all } (x, y) \in [0,1]^2. \] moreover, by theorem [main1], for every $n$ we can write \[ \max_{c \in \mathcal{c}} \int_{[0,1]^2} \overline{f}_n \, dc = \frac{1}{n} \sum_{i=1}^{n} \overline{a}_{i \sigma(i)} \] for a permutation $\sigma$ and a real-valued matrix $\overline{a}$ with entries given by the cell-wise maxima of $f$. using $\underline{f}_n \le f \le \overline{f}_n$, we get that \[ \max_{c} \int \underline{f}_n \, dc \le \max_{c} \int f \, dc \le \max_{c} \int \overline{f}_n \, dc, \] and thus, combining this with the uniform bound above, we get the claim. the assumption that $f$ is continuous can, perhaps, be relaxed to the case that $f$ is continuous a.e. with respect to every copula measure; this is required to make sure that $\int f \, dc$ exists for all $c$.

by defining the function families differently, we might get an approximation technique which converges faster to the optimal value; for instance, we could use . furthermore, the mini- and maximization steps can be time-consuming, for instance when these problems are not explicitly solvable. however, the advantage of the present approach lies in the fact that we get an upper and a lower bound for the optimal value for every $n$, which is obviously useful for numerical applications.

in numerical investigations where the cell-wise extrema could not be computed explicitly, we used mini- and maximization over a fixed grid in each cell. this results in a fast computation; however, we obviously lose the property of upper and lower bounds for every $n$.

by assuming lipschitz-continuity of $f$, we can describe the rate of convergence of our method. let the assumptions of theorem [main2] hold and, in addition, assume that $f$ is lipschitz-continuous on $[0,1]^2$ with parameter $l$. then, following the proof of theorem [main2] and using the lipschitz-continuity of $f$, we get \[ 0 \le \overline{f}_n(x, y) - \underline{f}_n(x, y) \le \frac{\sqrt{2}\, l}{n} \quad \text{for all } (x, y) \in [0,1]^2, \] and thus the approximation error of our method is of order $o(1/n)$.
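to make the approximation technique concrete before turning to the applications, here is a small hedged sketch. it follows the grid-based mini-/maximization shortcut admitted above, so for a finite sample density the two returned numbers are only approximate bounds; the function name and the per-cell sampling parameter `k` are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def approx_extremal_integral(f, n, k=16):
    """approximate max over copulas of int f dc for continuous f: replace
    f on each cell of the n x n partition by cell-wise maxima (and minima),
    then solve the two induced assignment problems.  the extrema are taken
    over a k x k sample grid per cell, so the strict bound property of the
    exact construction is lost for finite k."""
    t = (np.arange(n * k) + 0.5) / (n * k)      # midpoints of a fine grid
    v = f(*np.meshgrid(t, t, indexing="ij")).reshape(n, k, n, k)
    a_hi = v.max(axis=(1, 3))                   # cell-wise maxima
    a_lo = v.min(axis=(1, 3))                   # cell-wise minima
    r, c = linear_sum_assignment(-a_hi)
    upper = a_hi[r, c].mean()
    r, c = linear_sum_assignment(-a_lo)
    lower = a_lo[r, c].mean()
    return lower, upper

# sanity check: for the two-increasing f(x, y) = x * y the maximum is
# attained by the upper frechet-hoeffding bound m, with value
# int_0^1 t^2 dt = 1/3, which the two numbers should bracket
print(approx_extremal_integral(lambda x, y: x * y, n=32))
```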
in this section we present two numerical examples in which we apply the approximation technique presented in theorem [main2]. we use an implementation of the hungarian algorithm in matlab, which makes it possible to derive the solution of the linear assignment problem for a given matrix of size within seconds. the involved mini- or maximization of the integrand function on a given grid can be done efficiently, since the integrand functions are piecewise smooth.

a deterministic sequence $(x_n)_{n \ge 1}$ of points in $[0, 1)$ is called uniformly distributed (u.d.) iff \[ \lim_{N \to \infty} \frac{1}{N} \# \{ 1 \le n \le N : x_n \in [a, b) \} = b - a \] for all intervals $[a, b) \subseteq [0, 1)$. furthermore, we call $g$ the asymptotic distribution function (a.d.f.) of a point sequence in $[0,1)^2$ if the analogous limit relation holds in every point of continuity of $g$; for a survey of classical results in this field see . in , fialová and strauch consider limits of the form \[ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} f(x_n, y_n), \] where $(x_n)$ and $(y_n)$ are u.d. sequences in the unit interval and $f$ is a continuous function on $[0,1]^2$; see also . in this case the a.d.f. of the two-dimensional sequence $(x_n, y_n)$ is always a copula $c$, and we can write the limit as $\int_{[0,1]^2} f \, dc$. now we can derive upper bounds by maximizing over the set of all copulas. this has already been done in for functions $f$ whose mixed second derivative has constant sign on the unit square. note that this condition is equivalent to the two-increasing property of $f$ provided that the mixed derivative exists on the unit square.

as a numerical example, we consider the integrand . the numerical results are illustrated in table [tab:t2]. note that the approximations of the lower bound can be easily computed using the symmetry of the sine function.

a further interesting question concerns the sequences which maximize the limit above. let $(x_n)$ be a u.d. sequence and $c$ a shuffle of $m$; then it is easy to see that the two-dimensional sequence obtained by mapping $(x_n)$ through the support of $c$ is u.d. thus, if $c$ is the shuffle of $m$ which attains the maximum, an optimal two-dimensional sequence is given by pairing an arbitrary u.d. sequence with its image under the support of $c$. in figure [fig:max1], we present the copula which attains the upper bound for the maximum in our approximation when .

although we cannot give a rigorous proof, by increasing $n$ it seems that the copula which attains the maximum is the shuffle of $m$ with parameters . in this case we have , where denotes the support of the copula. ( table [tab:t2]: upper and lower bounds for the maximum with respect to $n$; seven columns, numerical values not reproduced here. ) ( figure [fig:max1]: the copula attaining the upper bound in the approximation; two panels. )

the method presented in this paper can be used to derive sharp bounds for integrals of piecewise constant functions with respect to copulas. this extends the scientific literature on a topic where the general problem is still open. the numerical effectiveness of our method was illustrated in two examples from different branches of applied mathematics.

a starting point for further research is an extension of the presented technique to higher-dimensional problems, since finding bounds for multidimensional integrals with respect to copulas has several applications in fields of mathematics such as number theory and financial and actuarial mathematics. of course, our aim is to study general problems and to find links between different branches of mathematics. nevertheless, since the resulting so-called multi-index assignment problems are in general np-hard, we plan to investigate heuristics; see e.g. . the authors would like to thank prof. robert tichy from tu graz and prof. oto strauch from the slovak academy of science for helpful remarks and suggestions. furthermore, the authors are indebted to two anonymous referees who helped to improve the paper.
|
we consider the integration of two-dimensional, piecewise constant functions with respect to copulas. by drawing a connection to linear assignment problems, we can give optimal upper and lower bounds for such integrals and construct the copulas for which these bounds are attained. furthermore, we show how our approach can be extended in order to approximate extremal values in very general situations. finally, we apply our approximation technique to problems in financial mathematics and uniform distribution theory, such as the model-independent pricing of first-to-default swaps.
|
kelvin-helmholtz instability (khi) is the name given to the primary instability that occurs when velocity shear is present within a continuous fluid or across fluid boundaries. the shear is converted into vorticity that, subject to secondary instabilities, cascades, generating turbulence. the khi is one of the most important hydrodynamical instabilities and plays a significant role in various parts of astrophysics. it is believed to be responsible for additional mixing in differentially rotating stellar interiors, and to keep a finite-thickness layer of dust around the midplane of protoplanetary disks. it also contributes to convective mixing at stiff convective boundaries in deep stellar interiors, for instance in asymptotic giant branch stars or novae. moreover, khi can lead to the destruction of cool, gravitationally bound objects moving in a hot ambient medium, such as galaxies in the intracluster medium, substellar companions engulfed by a giant star, and comets entering a planetary atmosphere. khi plays a role in the interaction of the magnetopause with the solar wind and has been observed in the solar corona. in order to understand these phenomena and their implications, it is therefore important to define a well-posed method to quantify how accurately khi can be modeled by different numerical techniques. verifying the correct treatment of khi has attracted increased interest following the conclusions made by , including vigorous discussions of khi in lagrangian schemes. the main conclusion reached was that smoothed particle hydrodynamics (sph) fails to resolve khi due to a surface-tension effect between the sph particles at the shear interface. however, the test was done at a sharp shear and contact discontinuity. attempted to address the problem with khi growth from a sharp contact discontinuity in by adding an artificial thermal conductivity to sph. a prescription achieving a similar end by adding a diffusion motivated by a subgrid turbulence model to sph was proposed by . in a case where traditional sph largely fails to reproduce khi at a sharp interface, demonstrated that growth of khi can be obtained using a godunov-sph formulation with zeroth- and first-order consistency. using a voronoi-mesh-based scheme, showed improvement over sph in a sharp contact discontinuity khi test, but compared the compressible solution to the growth rate for an incompressible flow and did not perform a convergence study. with the arepo voronoi-mesh godunov code, ran a sharp contact discontinuity khi test and pointed out the difference seen in the secondary instabilities that developed when the mesh was given a motion following the flow (a quasi-lagrangian motion) or kept fixed. in , the same code is used for an extended discussion, with both a sharp contact discontinuity khi test and a smooth transition test, but comparing both of the compressible results to the growth rate for an incompressible, sharp contact discontinuity initial condition. pointed out the zeroth-order inconsistency in sph, and designed a kernel to minimize these effects, achieving better qualitative results on a sharp contact discontinuity khi test. zeroth-order inconsistency is the inability of sph interpolation to reproduce a constant function at any finite resolution. a quantitative analysis of the growth of khi from a sharp contact discontinuity was performed by with sph and grid-based godunov codes.
with a focus on sph, argued that a sharp contact discontinuity khi test is not ideal, and proposed an alternative sph smoothing kernel which yields improved results. qualitatively compared a grid-based method and sph, using both a cubic and a quintic kernel, on a sharp contact discontinuity khi test, finding that the choice of a quintic kernel improved the sph results significantly. one of the few well-posed convergence tests for khi was done in , but that was in a study of galilean invariance restricted to fixed-mesh schemes. the test by is a well-posed problem influenced by , but the evaluation of the sph result was done by comparison to an analytic solution for a sharp transition initial condition and incompressible flow in an infinite domain, not for the problem posed with a softened transition in compressible flow in a finite periodic domain. the commonly used solution for the khi growth rate in numerical tests is for a sharp transition at the shear interface. however, for numerical approximations the interface should be smoothed to yield an initial value problem with finite spatial derivatives, as argued by . for a sharp interface, the initial approximation of the derivatives across the interface does not converge with resolution. to obtain convergence, a smooth interface must be used. we pose the problem in such a way that the analytic result for the incompressible limit is known for an infinite domain, as this type of analytic result is usually used to compare numerical results. however, a difficulty with kelvin-helmholtz problems is that the unstable modes are global, so solutions in a finite periodic domain, as commonly tested, are different from the solution in an infinite domain. to circumvent this difficulty, we perform an exhaustive convergence study to establish a fully compressible nonlinear solution with a very small and rigorously derived uncertainty. in the following discussion, we will refer to different types of discretizations used for numerical solutions to the chosen governing equations of hydrodynamics or magnetohydrodynamics.
to clarify, most fixed-grid (or structured-mesh) codes use a square eulerian grid as a basis for either a point-value (values of the fields at grid points) or volume-average (average value of the field in a grid cell) discretization. the distinction here is that a finite-volume scheme can be arranged to solve the integral form of the governing equations, while the point-value discretization can only solve the differential form of the equations. unstructured-mesh discretizations do not impose the restriction of the discretization mesh being a regular grid; however, the nodes are logically connected by edges, and the mesh cells form a tessellation of the computational volume. moving-mesh voronoi tessellation discretizations have begun to appear in astrophysical applications. these are a case of an unstructured mesh, where the mesh is defined by the voronoi tessellation of a set of mesh-generating points. the mesh cells may be used to define a finite-volume discretization. when the mesh-generating points are allowed to move in time, making a moving-mesh discretization, the voronoi mesh is recalculated on the new point distribution at every step, yielding a new set of cells and new mesh edges connecting them. the mesh movement can be arbitrary, but a quasi-lagrangian mesh movement is a particularly good choice as this minimizes numerical errors associated with advection across the grid. it is also possible to define meshless discretizations that represent the fields on a set of points or particles without specifying a set of mesh edges to connect these points. meshless discretizations then do not form a strict tessellation of the computational volume. these discretizations are commonly used to define lagrangian methods, in the sense that the points or particles are comoving with the fluid. if finite-mass particles are used, then the particles form a partition of the total computational mass; the form of discretization used in sph is then obtained, and the integral form of the governing equations can be solved. however, if the meshless points carry only field values at those points, then a point-value type discretization is obtained, and the differential form of the governing equations must be solved. in section [sec_setup] we give the problem setup used, and in section [sec_analysis] we discuss the methods used to extract measured quantities from the results. the codes used in this paper are listed in section [sec_codes]. the detailed convergence study used to generate the reference compressible solution is presented in section [sec_reference]. the results and comparison from several codes with different underlying algorithms and discretizations are presented in section [sec_results]. we discuss the various results and implications in section [sec_discussion]. extended discussion of the sph results and analysis of extra experiments is in section [sec_sph]. in section [sec_secondary] we discuss secondary instabilities arising from the problem setup in this work, and the difficulty of determining whether they are produced in a physically meaningful manner. our conclusions are summarized in section [sec_conclusions]. our motivation in choosing the initial condition is that the initial conditions are smooth, reflect as closely as possible a configuration that can be treated analytically, and can be represented easily in a wide variety of codes.
in all codes, we solve the inviscid compressible euler equations. the setup we use is chosen to be a periodic version of that used in the analysis of kelvin-helmholtz instability in : the domain is 1 unit by 1 unit in the $x$ and $y$ directions if two-dimensional, and of arbitrary thickness in the $z$ direction if needed for a three-dimensional code. runs with resolutions of , , and cells, or equivalent, were used in the comparison. all boundaries are periodic. the initial condition is smooth and periodic, as illustrated in figure [figic]. the density is given by : where , with , , and the smoothing parameter . the $x$-direction velocity is given by : where , with , , and as in the density, so that the smooth transition in density and velocity occurs over the same interval. the background shear is perturbed by adding some velocity in the $y$-direction with the form . an ideal gas equation of state with is used. the internal energy is set such that the pressure is initially uniform with value . the problem is run until at least time . analysis is done on snapshots spaced at a minimum of . however, in most cases the snapshots will not be spaced exactly, as codes often do output or analysis on an approximate interval, e.g. at the first time step after the specified snapshot or analysis time. the test can be run in two dimensions in a structured-grid code, but for unstructured meshes or mesh-free methods, two-dimensional and three-dimensional simulations may yield slightly different results depending on how the resolution elements are arranged in the initial condition. for unstructured-mesh and meshless methods the results will also differ between a disordered node distribution and a regularly gridded one.
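as a concrete illustration of this setup, the following python sketch generates the fields on a uniform grid. since the parameter values are not reproduced above, the densities, shear velocities, smoothing length, perturbation amplitude, and pressure used here are assumptions chosen only to match the qualitative description (two density strips, smooth exponential joins at y = 1/4 and y = 3/4, and a single seeded mode).

```python
import numpy as np

# illustrative parameters: assumptions standing in for the elided values
rho1, rho2 = 1.0, 2.0        # outer and inner strip densities
u1, u2 = 0.5, -0.5           # outer and inner shear velocities
ell = 0.025                  # interface smoothing length
amp = 0.01                   # y-velocity perturbation amplitude
n = 512                      # grid cells per side
x = (np.arange(n) + 0.5) / n
xx, yy = np.meshgrid(x, x, indexing="xy")

def smooth_strips(y, f_out, f_in):
    """smooth profile: f_out near y = 0 and y = 1, f_in on the middle
    strip, with exponential ramps of scale ell joined at y = 1/4, 3/4"""
    fm = 0.5 * (f_out - f_in)
    out = np.empty_like(y)
    m = y < 0.25
    out[m] = f_out - fm * np.exp((y[m] - 0.25) / ell)
    m = (y >= 0.25) & (y < 0.5)
    out[m] = f_in + fm * np.exp((0.25 - y[m]) / ell)
    m = (y >= 0.5) & (y < 0.75)
    out[m] = f_in + fm * np.exp((y[m] - 0.75) / ell)
    m = y >= 0.75
    out[m] = f_out - fm * np.exp((0.75 - y[m]) / ell)
    return out

rho = smooth_strips(yy, rho1, rho2)          # density field
vx = smooth_strips(yy, u1, u2)               # background shear
vy = amp * np.sin(4.0 * np.pi * xx)          # single-mode seed perturbation
press = 2.5 * np.ones_like(rho)              # uniform pressure (assumed)
```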
to quantitatively describe the growth of the kelvin-helmholtz instability, we use two measurements: the amplitude of the $y$-velocity mode of the instability, and the maximum $y$-direction kinetic energy density. these two quantities are a useful pair, as the mode amplitude is a smoothed quantity while the maximum $y$-direction kinetic energy density is very sensitive to noise in the computed velocity field. as a loose guide, the analysis of treats a non-periodic, incompressible version of the problem studied in this paper. their linear perturbation theory yields growth rates for the two quantities studied in this work in the infinite-domain, incompressible-flow limit. however, as we run our test in periodic boundaries with a compressible flow, we must go further than their analysis. the maximum $y$-direction kinetic energy is the simpler of the two quantities to compute. this quantity is the maximum value of computed over all resolution elements (cells, points, or particles) in the computational volume at each time. in the non-periodic, incompressible limit, the growth of this quantity should be ( equation 18 of ). in practice, the growth will start from a finite perturbation and will reflect both erroneous velocities occurring at the interface due to unbalanced pressures at the cell scale and any velocity and density noise in the bulk flow. it is also important to note that the test posed here, and those commonly used in other works, are actually posed in a periodic domain with a compressible flow. to obtain a basis for comparison we use a numerical reference solution to the problem as posed, and establish the uncertainty on this reference solution in a rigorous manner in section [sec_reference]. to extract the amplitude of the $y$-velocity mode of the instability, a more involved calculation is required. we wish to define the measurement in a manner that can be made consistent across different types of discretizations. a simple fourier transform defined on a grid would be entirely appropriate for point-based finite-difference schemes or pseudospectral schemes, but is somewhat less well motivated for finite-volume schemes, and inappropriate for meshless or unstructured-mesh schemes. to state the analysis in a manner that is straightforward to describe for all codes, and which treats all results in the same manner, we use a discrete convolution. for the case of a uniform grid this amplitude is given by : where ranges over all grid points or cell centers ( total grid points or cell centers) and the positions are grid points or cell centers. this expression can be used in two or three dimensions. the mode amplitude is calculated at each time snapshot. for sph simulations, each particle needs a different weighting in the sums, as the particle density varies. in the case of variable-smoothing-length sph, where the smoothing length is set to encompass a fixed number of neighbors, we can use the smoothing length for particle to do this weighting. in the following formulas, is the number of dimensions the simulation is run in. the quantities and are defined for each particle from the position and the $y$-velocity of that particle. an advantage of the definition used here is that we can directly analyze the sph particle values as simulated, without introducing an additional interpolation to a fixed grid. this feature carries over to an unstructured-mesh or meshless code. for an unstructured-mesh code, or a meshless code that defines quadrature volumes for the points, the appropriate general form would be : where is the area or volume of cell , or the quadrature volume for point , and the positions are the cell centers or point positions. for an infinite domain with incompressible flow, the growth of the velocity mode should be ( eq. of ). the growth rate for a kelvin-helmholtz instability with these two conditions has been used before as a comparison for results obtained in periodic domains with a compressible flow, but the two problems are formally different, and, depending on the parameters, the growth rates may differ. again, to circumvent this difficulty, we compare results to the numerical solution of the test problem specified, and establish the uncertainty on this reference solution in a rigorous manner in section [sec_reference].
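a hedged sketch of the two diagnostics on a uniform grid follows. the seeded-mode wavenumber and the y-envelope concentrating the convolution near one interface are our own assumptions standing in for the formulas that are not reproduced above; for unstructured or particle data the plain mean would be replaced by a sum weighted with the cell volume or $h_i^d$ as described in the text.

```python
import numpy as np

def mode_amplitude(vy, xx, yy):
    """discrete-convolution estimate of the seeded-mode amplitude on a
    uniform grid.  the envelope exp(-4*pi*|y - 1/4|) localizing the
    measurement on the lower interface is an illustrative assumption."""
    w = np.exp(-4.0 * np.pi * np.abs(yy - 0.25))
    s = (vy * w * np.sin(4.0 * np.pi * xx)).mean()   # (1/n) sum over cells
    c = (vy * w * np.cos(4.0 * np.pi * xx)).mean()
    return 2.0 * np.hypot(s, c)

def max_y_kinetic_energy(rho, vy):
    """maximum y-direction kinetic energy density over all cells; drop
    rho for the 'specific' variant referred to in some of the figures."""
    return 0.5 * float((rho * vy**2).max())
```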
( table [prefixtable]: the two-letter simulation prefixes used for each code. ) in this paper, we compare the results from six codes to the reference solution, which is itself produced with the pencil code. the pencil code is a fixed-eulerian-mesh, non-conservative, finite-difference mhd code that uses sixth-order centered spatial derivatives and a third-order runge-kutta time-stepping scheme, being primarily designed for weakly compressible turbulent hydromagnetic flows. for the problem in question, in order to keep the reynolds number low at the grid scale while keeping the integral and intermediate scales nearly inviscid, explicit sixth-order hyperdiffusion and hyperviscosity are added to the mass and momentum equations, as specified in . the other codes, enzo, athena, ndspmhd and phurbas, are introduced below. enzo is a three-dimensional, eulerian, adaptive-mesh-refinement hybrid (hydrodynamics + n-body) grid-based code. for this problem the euler equations are solved using a third-order piecewise parabolic method (ppm) with the two-shock approximate riemann solver. time-stepping is constrained by a courant condition for the gas with a courant factor c = 0.4. the run-time ppm diffusion, flattening, and steepening parameters were set to zero. enzo version 1.5 was used. athena is a three-dimensional eulerian grid code that (among other algorithms) implements a higher-order godunov method for hydrodynamics. specifically, we have used the third-order cell reconstructions with the hllc approximate riemann solver and the unsplit corner-transport-upwind (ctu) second-order time-integration algorithm. otherwise, the options used were as specified in the two-dimensional test problem supplied with the code, with a courant number . we used athena version 4.1, obtained from the project website. ndspmhd is a one-, two-, and three-dimensional reference implementation of sph and a platform for experimentation. we obtained ndspmhd version 1.0.1 from the author's website. ndspmhd was run on this problem in two dimensions, using both the cubic and quintic kernel options. the cubic kernel is the conventional choice for sph, whereas the quintic kernel delivers higher accuracy at the cost of computational expense. describes the ndspmhd implementation of sph as converging as higher-order kernels are used; that is, the result on the test problem shown here should converge with the combination of using more particles and using a higher-order kernel. ndspmhd also supports the artificial thermal conductivity described in . the results of sph simulations may depend strongly on the initial particle distribution used. to generate the initial condition, we relaxed a set of equal-mass particles to an approximate equilibrium with an artificially imposed pressure field which produced the required density profile. the particles settle into a roughly hexagonal grid, although with the dislocations required to produce the spatially varying density. the number of particles used at each resolution matched the number of cells or points used for the grid codes ( , , ). otherwise, the code was run with the default parameters used in test 6 of the ndspmhd examples package. phurbas is a meshless, adaptive, lagrangian code for magnetohydrodynamics. phurbas uses third-order least-squares fits to derive spatial derivatives, and a second-order scheme for time integration. stabilization is achieved through an artificial bulk viscosity. it is run here in three dimensions, using volumes with height , , and in thickness in the $z$-direction. phurbas does not use a grid, so instead we use a spatially constant resolution and set the resolution parameter to the cell size used in the grid codes.
to produce the initial particle distribution, we first used a tiling procedure as in , and then further relaxed the distribution to one that would arise naturally in a shearing flow by running the problem to and restarting the test with the initial condition defined on the resulting particle distribution. because the disordered particle distribution is inherently three-dimensional, the results at a given resolution cannot be strictly compared to the two-dimensional runs performed in the other codes here. ( figure: output at time . ) to produce a solution to the full nonlinear, periodic, compressible case as run in this work, we performed an extensive convergence study with the pencil code. this convergence study allows us to establish not only a very high quality reference solution, but also a notion of the uncertainty in this reference solution. the importance of the unusual step of establishing the uncertainty of the reference result is that we can then assert with confidence that the differences seen between other, lower-quality results and this reference result are overwhelmingly due to errors in the lower-quality solutions. in the results in section [sec_results] the pencil code is shown to be well suited to the smooth, subsonic problem posed here. we use grids of , , , , , and points, specified so that every second grid coordinate overlaps on successive refinements, and with the time-stepping scheme in the pencil code modified to provide outputs at exact time-unit intervals. this set of outputs enables a resolution study at each output time for the convergence of the mode amplitude. establishing the empirical rate of convergence of the mode amplitude allows a richardson-extrapolation-based estimate of the uncertainty in the most resolved measurement. hence, we are able to make comparisons of the results from other codes to the highest-resolution pencil code result while knowing in a rigorous manner that the errors in this reference result are negligible. first, we can calculate the empirical rate of convergence of the mode amplitudes defined by equations [eqmodebegin]-[eqmodeend] for a set of three results with a refinement ratio of 2 between each resolution as \[ p = \frac{\ln\left( \dfrac{f_2 - f_1}{f_3 - f_2} \right)}{\ln 2}, \] where $f_1$ is the value of the mode amplitude on the coarsest grid and $f_2$, $f_3$ are the values on the medium and finest grids, respectively ( equation 5.10.6.1 of ). this rate of convergence tells us how fast the series of values from each resolution is converging towards the correct result. once we have identified the convergence rate of the series of results, we can apply a generalized form of richardson extrapolation to estimate the converged result, and hence derive an indication of the uncertainty in our highest-resolution result. this indication of uncertainty is the grid convergence index (gci, ), a uniform method of reporting the uncertainty of such a convergence study, given as \[ \mathrm{gci} = \frac{f_s \left| f_3 - f_2 \right|}{\left| f_3 \right| \left( 2^{p} - 1 \right)}, \] from ( equation 5.6.1 of ). the value of the safety factor $f_s$ we use is . this value is that suggested by ( section 5.9 of ) as being appropriate when the rate of convergence is explicitly determined with a convergence study, as in this work. a density plot at time from the highest-resolution ( ) calculation is shown in figure [figpencilkhrho4096]. the results of evaluating equation [eqorder] for each set of three resolutions are shown in figure [empiricalorderplot]. this figure shows that the convergence rate settles at approximately for most of the time interval when the highest-resolution results are considered.
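both estimates are simple to evaluate; a minimal sketch follows. the safety factor $f_s = 1.25$ is an assumption in place of the elided value (it is the value commonly recommended when the rate of convergence is measured directly from a three-grid study).

```python
import numpy as np

def observed_order(f1, f2, f3, r=2.0):
    """empirical convergence rate from a coarse/medium/fine triplet of
    measurements with refinement ratio r, cf. equation [eqorder]"""
    return np.log(np.abs((f2 - f1) / (f3 - f2))) / np.log(r)

def gci_fine(f2, f3, p, r=2.0, fs=1.25):
    """grid convergence index of the fine solution: a richardson-style
    relative uncertainty estimate; fs = 1.25 is an assumed safety factor"""
    eps = np.abs((f3 - f2) / f3)        # relative change, fine vs. medium
    return fs * eps / (r**p - 1.0)

# example with a mock second-order quantity f(h) = f* + c * h^2
fstar, cc = 1.0, 0.3
f1, f2, f3 = (fstar + cc * h**2 for h in (1 / 128, 1 / 256, 1 / 512))
p = observed_order(f1, f2, f3)          # -> 2.0 for this mock data
print(p, gci_fine(f2, f3, p))
```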
using the observed rate of convergence at each time, we can assign the uncertainty on the result with equation [eqgci], which is shown in figure [pencilrefsolplot]. the high-resolution results used are necessary to establish the well-behaved convergence in figure [empiricalorderplot], which means that the uncertainty is only well known when the uncertainty itself is very small. to demonstrate the convergence behavior, and the magnitude of the changes between successive resolutions, more explicitly, we have plotted the differences in the $y$-velocity values between successive resolutions in one quadrant of the domain in figure [fig_refvydiff]. the greatest changes between successive resolutions are localized to the density-change interface, and show no suggestion of the presence of secondary instabilities. a similar plot for the density is shown in figure [fig_refrhodiff], again showing no suggestion of secondary instabilities. the simulations are identified by a two-letter prefix, as outlined in table [prefixtable], and the resolution ( , ). results for the $y$-velocity unstable mode amplitude itself, and the growth of that quantity, are plotted for all codes in figure [fig7.pdf]. figure [allenergyplot] gives the results for all codes for the maximum $y$-direction kinetic energy. the following two subsections discuss these two measured quantities. in interpreting the $y$-velocity unstable mode amplitude (figure [fig7.pdf]) it is important to note that an exact relative comparison of the solution quality of the codes cannot be made. for the unstructured-mesh and meshless methods, the code performance in two dimensions and three dimensions is expected to differ notably, as the possible arrangements of cells and particles differ. a strict comparison between phurbas and the other codes cannot be drawn, as phurbas was run in three dimensions, not two. for eulerian grid codes, the problem is grid-aligned, and performance will differ as the problem is rotated against the grid. with these caveats, we proceed to comment on the results obtained. the results for the growth of the $y$-velocity unstable mode amplitude in the pencil code, enzo, and athena are very similar at the level of this comparison. here, the main difference is a variation between the codes of the growth rate at the lowest resolution. reassuringly, the resolution mode amplitude growth curves from the two piecewise parabolic method variations used in enzo and athena resemble each other more than they do the result from the pencil code. these results demonstrate that the pencil code reference result is reasonable. for phurbas, unstable mode amplitudes converge with increasing resolution from below the reference value, but at the resolution the growth rate at late times exceeds the reference growth rate while the amplitude stays below the reference value. in comparing the absolute values from phurbas to the other codes, one must remember that the phurbas simulation is in three dimensions with an unstructured particle distribution. however, at low resolutions the results for the growth rates are definitely lower than those obtained in the grid codes. as the resolution is increased, a definite convergence towards the reference result is observed.
from the mode amplitude plotted for ndspmhd in , and notwithstanding the aforementioned limitations to making comparisons in two dimensions, it is clear that cubic-kernel sph is the least accurate method for the problem studied in this work. the result given here is, however, for a single initial arrangement of sph particles. results with sph do depend, and in this problem depend strongly, on the initial particle arrangement. in general, the ndspmhd simulations show values and growth rates for the $y$-velocity unstable mode amplitude which are too small. at the lowest resolution ( ), the simulation with artificial thermal conductivity (ne) gives a slightly improved result over the simulation without that addition (nc), but the dependence is minimal, and more so at the higher resolutions. the cubic-kernel sph results do not depend strongly on the use of a thermal conductivity term, unlike in the sharp-transition khi test specified by . this is demonstrated by the nc simulation, where the thermal conductivity was turned off, yielding results very similar to the ne simulation. this illustrates that the artificial conductivity is not so much a patch for correcting kelvin-helmholtz in sph as a means of ensuring that contact discontinuities stay well resolved. quintic-kernel sph, labeled as simulation no, uses a larger number of neighbors and has smaller zeroth-order sph inconsistencies. this gives a more accurate result than cubic-kernel sph for the same number of particles. the pair of ndspmhd results nc and no demonstrate that sph converges in a limit that is a combination of increasing particle number and neighbor number. the importance of using the quintic kernel, over simply increasing the number of sph neighbor particles, is to avoid particle clumping, which would effectively lower the resolution, undermining the intent of a convergence study. unlike in , we do not see sph starting at an acceptable growth rate at low resolutions and converging to a lower growth rate at high resolution. we observe a much less surprising behavior, wherein the growth rate at low resolutions is too low and the solution appears to improve with increasing resolution, though the absolute error is significant. ( figure [allenergyplot]: maximum $y$-direction kinetic energy in all codes. ) the behavior of the maximum $y$-direction kinetic energy is qualitatively different from that of the mode amplitude, as this measurement tracks a maximal value, not a smooth average. maximum $y$-direction specific kinetic energy histories are shown for all simulations in figure [allenergyplot]. here the velocity noise in sph resulting from pressure-force errors can be seen clearly in the overview figure, while all other codes behave in a roughly similar manner. the convergence study does not establish an uncertainty on the maximum $y$-direction kinetic energy, but the highest-resolution pencil code result, plotted as the reference curve, can be taken as a useful indicator of the correct nonlinear solution. in pencil, enzo, athena, and phurbas, at late times at low resolution, when the unstable velocity mode value is low, the maximum $y$-direction kinetic energy is also low. this is the opposite of the situation found in ndspmhd, where at late times at low resolution the unstable velocity mode value is low but the maximum $y$-direction kinetic energy is too high. at lower resolutions in phurbas, the influence of velocity noise at the interface can be clearly seen.
at early times the maximum $y$-direction kinetic energy is too high and the unstable mode amplitude is too low. pencil does not suffer from this to the same extent. enzo and athena have the best initial behavior at the interface, as they are finite-volume schemes and hence the initial pressure equilibrium is well represented across the interface. the initial maximum kinetic energy and the initial mode amplitude are both too low at low resolution in these codes. the resolution dependence of the velocity noise is illustrated for the cubic-kernel sph with artificial conductivity (ne). neglecting the artificial conductivity yields virtually the same result, as shown in figure [allenergyplot] (simulation nc). quintic-kernel sph, with smaller zeroth-order inconsistency errors than cubic-kernel sph, does show smaller velocity noise, but it is still very large (simulation no). we show gray-scale slices of the density field at in figure [figendrho512]. all the images have the same limits on the gray scale, between densities of and , the density extremes in the highest-resolution result of the pencil code convergence study. the results for pencil, enzo, and athena are largely similar, as at high resolution these codes agree well with each other and with the reference result. the result with phurbas strongly resembles the reference result, although it clearly shows more diffusion. the sph results from ndspmhd (only ne and no shown) reflect the slow growth of the unstable $y$-velocity mode already discussed. the simulation no result, using quintic-kernel sph, shows less diffusion than simulation ne, using cubic-kernel sph. especially in simulation ne, secondary features of a filamentary appearance can be seen along the interface; these are less apparent in the simulation no result. the quintic-kernel result (no) overall shows better agreement with the reference than the cubic-kernel result (ne). overall, the grid-based codes pencil code, athena, and enzo had very similar performance. for these codes, the test problem in this work (run to ) confirms their correctness. this shows that the test as outlined here can be used to discriminate among numerical schemes.
in this test, we demonstrated that phurbas and ndspmhd, while both using meshless lagrangian schemes, give significantly different convergence behaviors. though phurbas was run in three dimensions and ndspmhd in two, the strikingly different qualitative behavior bears some explanation. a primary observation is that phurbas differs from ndspmhd in that phurbas uses a third-order accurate and consistent spatial discretization, while ndspmhd uses an sph discretization, which has zeroth-order inconsistency. this issue is sufficiently complex that it is discussed in a separate section (section [sec_sph]). we also note that no code developed obvious signs of secondary instabilities in the solution by time , in agreement with the findings of the convergence study performed on the reference result. how, and when, secondary instabilities may arise in a khi test such as this is discussed in section [sec_secondary]. the results for the maximum $y$-direction kinetic energy in section [sec_ydirener] show significant noise appearing in the velocity in the sph simulations ne, nc, and no. this section is devoted to exploring the source and behavior of this noise. it has been argued that maintaining particle order is vital to achieving good results with sph. particle ordering in sph can be expressed as a condition that the lagrangian of the system of particles is minimized ( section 2.5 of ). to seek this minimum, the particles must have some re-meshing motion in addition to the pure fluid motions ( section 5.2 of ). these re-meshing motions mean that in sph one always has some motions which are not physical, but purely related to the sph particles attempting to relax to an ordered state ( section 5.2 of ). the re-meshing motions are also shown in the post-shock state in ( figure 10 of ). the re-meshing motions are provided by the linear errors in the sph pressure forces, which are in turn a result of the zeroth-order inconsistency of sph interpolation. that is, the zeroth-order inconsistency in sph interpolation produces a linear error in the pressure force which causes two particles that approach each other to repel. re-meshing motions created by this repulsion are in turn damped by the artificial viscosity to encourage the particle distribution to relax. in this way, the zeroth-order inconsistency in the pressure estimate is vital to maintaining particle order, and the artificial viscosity cannot simply be disabled. further, though more advanced artificial viscosities can be designed, the identification of the particle velocity with the fluid velocity means that the need for motions preserving particle order will necessarily corrupt the fluid velocity itself to some degree. the root cause of this situation is the zeroth-order inconsistency in sph interpolation, so the parameter we vary is the one which controls this error: the choice of sph smoothing kernel. as the change from the cubic kernel to the quintic kernel decreases the size of the zeroth-order inconsistency, the re-meshing pressure forces are smaller, and the resulting velocities are smaller. consequently, the level of $y$-direction kinetic energy noise seen in simulation no is smaller than that seen in simulation ne. recently, and have connected zeroth-order inconsistency in sph to poor results for related khi test problems, and has demonstrated the connection in the context of low-mach-number turbulence.
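the zeroth-order inconsistency is easy to exhibit directly: on a disordered particle distribution, the sph partition of unity $\sum_j (m_j / \rho_j)\, w(|r_i - r_j|, h)$ fails to return exactly 1. the following sketch is a minimal demonstration, assuming a standard 2d cubic spline kernel, equal-mass particles of unit density, and an o(n^2) pair sum kept only for brevity; the jitter amplitude and smoothing length are illustrative choices.

```python
import numpy as np

def w_cubic(q, h):
    """standard 2d cubic spline kernel, normalization 10 / (7 pi h^2)"""
    sig = 10.0 / (7.0 * np.pi * h * h)
    return sig * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                          np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

# equal-mass particles of unit density on a jittered periodic lattice
rng = np.random.default_rng(1)
n = 32
g = (np.arange(n) + 0.5) / n
xy = np.stack(np.meshgrid(g, g, indexing="ij"), -1).reshape(-1, 2)
xy = (xy + 0.2 / n * rng.standard_normal(xy.shape)) % 1.0
h = 1.2 / n
m = 1.0 / len(xy)                      # total mass 1 -> rho = 1 exactly

# sph estimate of the constant field 1: sum_j (m_j / rho_j) w(|r_ij|, h),
# with minimum-image periodic distances (o(n^2) pair sum, demo only)
d = xy[:, None, :] - xy[None, :, :]
d -= np.round(d)                       # periodic wrap to [-1/2, 1/2)
q = np.sqrt((d**2).sum(-1)) / h
unity = m * w_cubic(q, h).sum(axis=1)
print(unity.min(), unity.max())        # scatters around 1 when disordered
```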
in this work we have demonstrated this connection by first showing that, on a smooth test problem, the artificial conduction of does not significantly affect the results. then, we have demonstrated that when the quintic kernel is used, with smaller inconsistency errors than the cubic kernel, the test results improve significantly. the size of the inconsistency errors is reflected in the maximum $y$-direction specific kinetic energy statistic, as pressure-gradient errors drive spurious particle motion. finally, phurbas, while being meshless and lagrangian like sph, uses a third-order consistent interpolation. the consequence for the test in this paper is that it performs much better than sph, as the pressure forces are accurate enough to keep the velocity and density noise much smaller than in sph. hence, phurbas has a qualitatively different behavior on this test, and convergence does not depend on varying the number of neighboring particles used in the interpolation. to demonstrate that the velocity-noise behavior seen in this work is general, we have run a series of additional tests. the first test is a version of the khi setup in section [sec_setup] with a uniform density of 1.0. for sph, this is a particularly simple choice, as a uniform hexagonal close-packed grid of particles is the unique relaxed distribution in two dimensions. hence, initially the setup does not suffer from any velocity noise. figure [sphisorho] shows that, regardless of this initially relaxed distribution, the maximum $y$-direction kinetic energy still reflects the growth of the velocity noise. again, as in the previous tests, the noise grows sooner at higher resolutions. we note that the velocity noise in the highest-resolution quintic-kernel case of our iso-density test appears to be triggered by the growth of the primary khi instability. to simplify the setup further, we remove the $y$-direction velocity perturbation from the initial condition, yielding a smooth, unperturbed shear flow. for the maximum $y$-direction specific kinetic energy measurement, the trivial analytic solution for this problem is a value of zero for all times. figure [sphshear], upper panel, shows that in this setup, run with the cubic kernel, the maximum $y$-direction kinetic energy grows to the same level as before in the iso-density khi test, although it takes longer. this growth happens at earlier times for higher resolutions. figure [sphshear], lower panel, displays the same behavior for the quintic kernel, although the timescales involved are longer. these simulations are stopped abruptly when they succumb to the particle pairing instability ( section 5.4 of ) and two particles approach within length units. some recently proposed modifications to sph reduce or eliminate this instability for large kernels. with each kernel choice, the time interval until the velocity noise jumps is shorter as the number of particles is increased, but for a given number of particles the time interval until the velocity noise jumps is longer if the quintic kernel is used. that is, the results converge towards the analytic solution as the zeroth-order inconsistency in the sph interpolation is reduced.
we have shown that, given a convergence test stated in a well-posed manner, all the methods tested appear to converge towards the correct result for the growth of the primary instability. recent discussion of kelvin-helmholtz tests has broadened to include secondary instabilities. shows secondary instabilities developing from a similar initial condition. the reference solution we compute shows no indication of these structures. suggested that their moving-mesh code is able to resolve secondary khi billows which cannot be resolved in their fixed-mesh code because the latter is too diffusive. though the solutions in a fixed-mesh and a moving-mesh code should not be expected to be equivalent at any finite resolution, whether a given code develops secondary kelvin-helmholtz instabilities is not simply a function of the diffusivity of the code; it is also dependent on the seeding of such instabilities. in general, we can categorize the possibilities for why our reference case and the tests done at lower resolutions in this work do not show the development of any secondary instabilities into three cases: 1. the secondary billows should grow physically, either due to the nature of the initial perturbation fed into the problem, or due to the interaction between some combination of the initial perturbation and modes of the instability directly seeded by the initial perturbation. in this case the secondary billows should eventually show up at some resolution in any convergent code, but should arise at a particular location and time. 2. the secondary billows grow due to the balance of numerical perturbations and numerical diffusion. in this case the billows seen at some time should disappear at some resolution in any convergent scheme, as the significant power in the numerical perturbations should eventually move to spatial scales too small to seed the secondary instability efficiently. 3. the slight differences between the setup of and our setup make the difference between seeing physical growth of secondary billows and failing to produce them. in the first case, developing the secondary instabilities is merely a matter of using a sufficiently large resolution. at lower resolutions, a resolution study should still suggest a significant uncertainty, or change between simulations of different resolutions, as the secondary billows are damped less and less. in the second case, the resolution necessary to make the secondary billows disappear may be quite large, as a numerical mechanism introducing noise at a small scale may still have significant power at larger wavelengths, depending on the spatial correlation of the mechanism introducing the noise. however, the resolution study performed with the pencil code to much higher resolution shows no indication that these modes grow. the third case cannot be ruled out explicitly, as does not specify the exact details of the setup used. however, we can show for our problem that the secondary instabilities that do develop are of purely numerical origin. this strongly suggests that the secondary billows seen in are a numerical artifact, so the observation that a fixed-grid code does not develop them on the same problem does not imply that the fixed-grid code is too diffusive to support the modes. to demonstrate how numerical effects can seed secondary khi, we have performed a test with athena. we ran the khi test at resolutions of , and until . in figure [figatt30] the density in a region centered on a single primary khi billow is shown at time . secondary khi billows can be seen growing in the case, and this pattern is successively suppressed at the higher resolutions, suggesting it is an artifact of the finite resolution and is converging away.
however, at the resolution a different set of secondary instabilities can be seen growing at much shorter wavelengths in the central winding of the primary billow. as the resolution is increased, the numerical seeding of the secondary instabilities changes, and the secondary modes which are excited change. figure [figatt32] shows the same region at time . by this point, the secondary instabilities in the resolution simulation have become apparent. surprisingly, a new set of secondary billows has appeared on the outer winding of the primary billow. we cannot reach well-justified conclusions about a particular mode of the secondary instability from this study, as we cannot reproduce the same instability at two different resolutions. though the growth of secondary instabilities is likely a physical reality at the reynolds numbers involved in astrophysical problems, relying on numerical effects to seed them will not result in a true physical model of the phenomena, as the seeding, and hence the growth, of these instabilities will be inherently dependent on parameters such as resolution. models relying on numerically seeded instabilities, even if the presence of the instability is physical, make it difficult to separate numerical effects from physical behavior, which in turn makes it difficult to come to strong conclusions about the effect of the instability. conclusions about the instability must consist of a measurement and some characterization of the error in that measurement. in order to characterize the error in the model of the instability, a convergence study must be performed. to perform this convergence study, a fixed seeding of the instability must be possible across all resolutions. if the seeding is a numerical effect caused by the finite resolution, it will not be fixed between two resolutions. hence, the tests in this work do not show that any code used cannot, at any of the resolutions tested, resolve secondary kelvin-helmholtz instabilities, as these have not been seeded in a controlled way, or even in an avoidable manner. from the results of the convergence study in section [sec_reference] we propose that if a code develops secondary kelvin-helmholtz billows in this test by , it is due to the growth of numerical perturbations. the less rigorous study performed with athena suggests that the same conclusion should hold to at least . in the limit of infinite resolution, any convergent code should reproduce the correct result. however, if at finite resolution a code shows a tendency to produce secondary instabilities, then the scheme can be improved by adding a diffusive operator to damp the noise leading to the instability. particularly with respect to moving-mesh tessellation codes, kelvin-helmholtz tests are not the only place where behavior suggests that some additional numerical diffusion should be used to damp grid-scale noise. development of secondary kelvin-helmholtz instability after is likely due to the presence of spurious noise in the solution. for example, in the preparation of this work, we discovered that the evolution of the test problem here differed greatly at high resolution ( ) between enzo versions 1.5 and 2.0.
in enzo 2.0, a bug existed that caused slightly incorrect pressure reconstructions. this caused small sound waves to launch from the interface and propagate through the periodic domain, interacting with themselves and forming small, short-wavelength perturbations. the discovery of this bug was fortuitous, however, because it demonstrates again how artificial, numerical perturbations can give rise to secondary kelvin-helmholtz instabilities in this test problem if they are able to overwhelm the dissipation of the scheme. the underlying cause of the tendency of many schemes to develop secondary khi in this test problem is that the shear interface becomes increasingly steep as it stretches in the primary khi billow. eventually, the width of the interface approaches the grid scale and it becomes susceptible to numerically seeded secondary instabilities. this behavior is also commonly seen in the initial evolution of grid-aligned, sharp-transition versions of the khi test; two examples are ( figure 13 of ) and ( figure 8, upper right panel, of ). we suggest, then, that even fixed-grid finite-volume godunov schemes may be improved in pathological cases of unresolved shear interfaces by the addition of a diffusive flux. this flux should be chosen to spread the interface over enough grid cells to suppress the numerically seeded instabilities. another lesson to be derived here is a cautionary one. not all new instabilities seen as resolution is increased when solving the discretized euler equations are physically real. new numerical instabilities can reveal themselves as resolution is increased, as the flow can enter new regimes where it is more sensitive to the inevitable numerical noise in a method. in the high-resolution set of athena simulations, this can be seen in the magnitude of the density gradients. in figure [figatrhograd30] the density gradient in a slice through a primary billow at time is plotted for the three athena simulations used in this section, calculated with a four-point, second-order finite-difference stencil. as the resolution is increased, the maximum gradient achieved increases. mathematically, when solving the euler equations, this behavior arises because the modified equations that are actually solved by the method change as resolution is increased: the diffusive effects become smaller. one route around this difficulty can be to solve the navier-stokes or boltzmann equations instead, with a fixed viscosity or particle mean free path. since these equations have a physical scale where diffusion dominates dynamics, the reliable elimination of numerically generated instabilities for arbitrarily long run times can be obtained by fixing the physical diffusive scale and reducing the grid scale far below the diffusive scale. the same point illustrates how the transition to turbulence and mixing must be studied when the euler equations are used. to produce the secondary instabilities that break up the flow, the nonlinear interaction of modes, or the seeding of secondary instabilities, must be done in a controlled manner. this job cannot be left to numerical noise, or the time and manner in which the flow breaks up will be a reflection of numerical issues and not of physical reality. finally, we suggest that it is possible to produce a controlled test of the growth of secondary billows from definite perturbations, similar to the study performed by in an incompressible flow.
such a setup could be useful in determining the appropriate and minimal diffusion to add to a scheme to suppress the numerical seeding of secondary instabilities in given conditions. we have constructed a reference solution with a well characterized uncertainty, along with defining a general manner in which the test can be analyzed. this methodology was applied to example codes from the major families of numerical techniques used in astrophysics. all codes tested showed convergence towards the reference result when the resolution was increased in the appropriate manner. for sph, the use of an artificial thermal conductivity does not significantly affect the results, but using a higher-order kernel (and hence a larger number of neighbors) does improve the results. we conclude then that the fundamental reason for the poor performance of sph in khi is the zeroth-order inconsistency of sph interpolation. visually, to time in the test problem there are no secondary instabilities that arise in the reference solution. by examining the relative behavior of different types of code, we argue that the presence of secondary instability on this test is caused by having a numerical diffusion that is very low compared to the grid noise in the method. hence, we propose that it is advantageous in some methods, particularly moving-mesh tessellation methods, but also in fixed-grid godunov schemes, to include an extra diffusion operator to smooth the solution such that grid noise does not drive small scale instabilities. we are indebted to the authors of ndspmhd, athena, enzo and the pencil code for making the codes freely available to us. the sph visualizations were created with splash. we thank the anonymous referee for constructive comments that significantly improved the organization of the paper. we thank daniel price for feedback on a draft of this manuscript, and for suggesting the use of fully relaxed sph initial conditions and the iso-density special case. we thank mordecai-mark mac low for his support and useful discussions. we thank paul duffel for useful discussions about moving-mesh schemes. j.-c.p. thanks orsola de marco and falk herwig for their support. this work has been supported by national science foundation (nsf) grants ast-0835734 and ast-0607111. w.l. gratefully acknowledges partial financial support from the nsf under grant no. ast-1009802. w.l. completed co-writing this work at the jet propulsion laboratory, california institute of technology, under a contract with the national aeronautics and space administration. this work used the extreme science and engineering discovery environment (xsede), which is supported by national science foundation grant number oci-1053575. this work used computer time provided partially by westgrid and compute canada.
|
recently, there has been a significant level of discussion of the correct treatment of kelvin-helmholtz instability in the astrophysical community. this discussion relies largely on how the khi test is posed and analyzed. we pose a stringent test of the initial growth of the instability. the goal is to provide a rigorous methodology for verifying a code on two dimensional kelvin-helmholtz instability. we ran the problem in the pencil code, athena, enzo, ndspmhd, and phurbas. a strict comparison, judgment, or ranking between codes is beyond the scope of this work, though this work provides the mathematical framework needed for such a study. nonetheless, the way the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity, yet it still shows the poor performance of smoothed particle hydrodynamics. we then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in smoothed particle hydrodynamics interpolation. we comment on the tendency of some methods, particularly those with very low numerical diffusion, to produce secondary kelvin-helmholtz billows on similar tests. though the lack of a fixed, physical diffusive scale in the euler equations lies at the root of the issue, we suggest that in some methods an extra diffusion operator should be used to damp the growth of instabilities arising from grid noise. this statement applies particularly to moving-mesh tessellation codes, but also to fixed-grid godunov schemes.
|
many engineering problems are governed by partial differential equations which can only be solved numerically. the prospect of parallelising such simulations in the time dimension has attracted attention from the scientific community, and the parareal algorithm is popular and widely studied. parallelism in the time dimension can also be achieved by abandoning the distinction between time and space, in what are normally referred to as timespace methods. the technique is often used for solving hyperbolic problems with discontinuous galerkin (dg) formulations, as demonstrated for the full four dimensional case in the context of rotor flows. for that application all the complications of a time varying geometry disappear when switching to a timespace method. this advantage was also exploited for dynamic stall prediction as simulated with the navier-stokes equations. that work utilised spacetime adaptation and, as is often the case, a compromise was made in the form of timeslabs, which allows for a tunable problem size. anisotropic mesh adaptation is an established technique for ensuring computational efficiency in the context of multiscale problems. there exist many heuristic methods for applying the technique to transient problems, but the only way to exploit the anisotropy in the physics as seen in timespace is by using a timespace method. this is also a conceptually simple way to avoid the interpolation errors and heuristics associated with combining conventional timestepping algorithms with mesh adaptation. the benefits of anisotropic mesh adaptation generally increase with dimensionality, due to the fact that elements can be stretched in more dimensions, and one could thus expect significant benefits from the use of a four dimensional anisotropic mesh adaptation algorithm, but (to our knowledge) anisotropic mesh adaptation has only been demonstrated for two and three dimensions. we are not aware of any studies involving the use of three dimensional anisotropic meshes for solving a timespace problem.
in this work, we focus on a miscible viscous fingering problem, which is transient and has two spatial dimensions. the problem is known to exhibit chaotic behaviour, which causes some issues for the combination of a timespace method and anisotropic mesh adaptation. we attribute this to the fact that the numerical noise can grow in time. this could mean that a non-linear solver is unable to converge to the error level that one would expect for the computational resources utilised. we suggest using the method of timeslabs, where the slab thickness is a kind of timestep size. the method is used to investigate the relation between the timeslab thickness, the computational resources used and the growth of numerical errors. mesh adaptation is the art of choosing the discretisation that minimises the error level for a given computational cost. it is important to distinguish between p- and h-adaptation, which relate to the discretisation order and length scale, respectively. anisotropic mesh adaptation is the most general type of h-adaptation, because it not only considers the local element size, but also the element orientation. the optimal size and orientation can be expressed in terms of a metric tensor field, . this has units of inverse square length, and it can be used to map elements to metric space, where the optimal element has unit edge lengths. the metric that minimises the interpolation error of a variable with hessian is

\[ \begin{aligned} \underline{\underline{M}} = \gamma \left( \det\left[ \mathrm{abs}\left( \underline{\underline{H}} \right) \right] \right)^{-\frac{1}{2q+d}} \mathrm{abs}\left( \underline{\underline{H}} \right) \label{eqn:m}, \end{aligned} \]

where d is the number of dimensions, q is the error norm to be minimised, det is the determinant, \gamma is a scaling factor and abs takes the absolute value in the principal frame. the metrics of several variables can be combined with the inner ellipsoid method, illustrated for two dimensions in figure [fig:ellipse]. we use 1st order polynomials for all fields, so we compute the hessian by first performing a galerkin projection of the gradient onto 1st order polynomials and then repeating the process to get a nodal hessian. the nodal metric can then be calculated explicitly using equation ([eqn:m]). we use the technique of local mesh modifications to adapt the mesh to the metric. there are four operation types: coarsening, swapping, refinement and smoothing, as shown in figure [fig:meshmod]. coarsening removes edges that are short in metric space without regard to mesh quality, but the other modifications only allow the worst local element quality to go up. the mesh quality is quantified by means of the vassilevski functional. we use an octave/matlab implementation, which is suboptimal in terms of performance, especially compared to a similar c++ implementation. we consider a simple two-phase miscible problem, where the viscosity, , depends exponentially on the saturation, , with and corresponding to the low and high viscosity fluid, respectively, where determines the viscosity ratio.
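as a concrete illustration of the metric construction in equation ([eqn:m]), here is a minimal sketch (python/numpy rather than the octave/matlab implementation used in this work; the scaling factor gamma and the default values of q and d are illustrative assumptions) of turning a symmetric nodal hessian into a nodal metric:

```python
import numpy as np

def nodal_metric(H, gamma=1.0, q=2, d=3):
    """metric tensor from a symmetric hessian H (d x d), following
    M = gamma * det(abs(H))**(-1/(2q+d)) * abs(H), with abs taken
    in the principal frame (absolute values of the eigenvalues)."""
    lam, V = np.linalg.eigh(H)                 # principal frame of H
    lam = np.maximum(np.abs(lam), 1e-12)       # abs(H); floor avoids det = 0
    absH = V @ np.diag(lam) @ V.T
    return gamma * np.prod(lam) ** (-1.0 / (2 * q + d)) * absH

M = nodal_metric(np.diag([4.0, 1.0, 0.25]))    # strongly anisotropic example
```

combining the metrics of several fields would then require the inner ellipsoid intersection mentioned above, which is more involved and omitted here.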
the velocity, , is given by the darcy equation, where is the pressure and is the permeability. mass conservation leads to a poisson equation for the pressure, where is the density. this drops out of the equations, because we choose to study the case of equal densities for the two fluids. the saturation is convected with the velocity given in equation ([eqn:darcy]); the equation is treated as a convective equation with a three dimensional velocity vector, and it is stabilised with streamline upwind petrov/galerkin diffusion. the local characteristic length scale for the mesh is calculated by computing the steiner ellipsoid for every element and projecting this along the velocity direction. the governing equations ([eqn:p]) and ([eqn:phi]) are solved with the boundary conditions illustrated in figure [fig:setup]. the use of a 13-sided polygon to approximate a circle avoids extrapolation when comparing solution fields on different meshes. the perfectly circular initial condition for the saturation makes the solution undefined, which causes the growth of numerical perturbations to become very obvious. all equations are solved with the finite element method using fenics, an open source finite element package. both pressure and saturation are discretised with first order polynomials. a direct solver (lu) is used for equations ([eqn:p]) and ([eqn:phi]), while an iterative solver (cg+ilu) is used for galerkin projections. all computations are single threaded. [figure fig:setup caption: and are prescribed at the inlet.] we non-dimensionalise the equations with as the characteristic pressure and as the characteristic length scale, where , , are dimensionless space, time and pressure, respectively. is the simulation time. is a numerical parameter, which determines the relative resolution of space and time. we fix it at unity. another numerical parameter is the norm of the interpolation error to be minimised, and we go with the popular choice of for this. the saturation at the initial time is imposed by means of a dirichlet boundary condition. we initialise the non-linear solver with and , where is the radial coordinate. we use a segregated fixed point method to deal with the non-linearity of the equations. the timespace problem is solved in slabs with a thickness of (omitting the tilde), i.e.

* initialise the mesh, and set , .
* solve equation ([eqn:p]) for the pressure with fixed saturation.
* solve equation ([eqn:phi]) for the saturation by taking the viscosity to be constant using the previous saturation.
* if it is the first iteration at this timeslab, go to #1. otherwise, we calculate metric fields associated with the pressure and the saturation. these are combined with the inner ellipsoid method (see figure [fig:ellipse]), the mesh is updated and the fields are interpolated onto the new mesh. if it is not the first timeslab, the mesh is fixed at the boundary matching up to the previous timeslab.
* if 20 iterations have been carried out with the current timeslab, we proceed to the next one by mirroring the mesh in a plane normal to the time direction, and set , . we then go to #1.

the mesh is changed in every iteration, and therefore the non-linear solver cannot converge to machine precision. a typical timeslab is shown in figure [fig:funky] together with the result of a full timespace simulation. both figures illustrate the value of full timespace anisotropy.
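the segregated fixed-point structure of the loop above can be sketched as follows (a runnable python toy; the two solve functions are deliberately trivial stand-ins for the fenics pressure and saturation solves, and the mesh adaptation and mirroring steps are omitted, so this only illustrates the iteration logic and the residual used as the convergence measure):

```python
import numpy as np

# toy stand-ins for the pde solves; real versions would call fenics
# (these bodies are illustrative placeholders, not library calls)
def solve_pressure(phi):
    return 1.0 / (1.0 + phi)          # placeholder "pressure" field

def solve_saturation(p, phi):
    return 0.5 * (phi + p)            # placeholder saturation update

def run_slab(phi, n_iter=20, tol=1e-10):
    """one timeslab: alternate the two solves with the other field frozen;
    the residual ||phi_new - phi|| between consecutive iterations is the
    convergence measure used in the text."""
    res = np.inf
    for _ in range(n_iter):
        p = solve_pressure(phi)
        phi_new = solve_saturation(p, phi)
        res = np.linalg.norm(phi_new - phi)
        phi = phi_new
        if res < tol:
            break
    return phi, res

phi, res = run_slab(np.linspace(0.0, 1.0, 50))
print(res)
```

in the real solver the mesh is also updated within every iteration after the first, which is precisely why the residual stalls at the discretisation error level instead of converging to machine precision.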
[figure fig:funky caption: is plotted in terms of a wireframe of the final iteration of slab #5 (left) and, for all slabs, in terms of the isosurface (right). the time dimension is from left to right, pointing out of the plane. large elements clearly show up away from the interface, but even at the interface the anisotropy in timespace can be exploited such that very few elements are needed. note that the mesh is fully tetrahedral; the quadrilaterals on the isosurface representation appear when cutting tetrahedrons with two nodes on both sides of the threshold.]

we show results for timeslab thicknesses = 0.05, 0.1, 0.2 and 0.5 with , as well as for mesh tolerances = 0.04, 0.02 and 0.01 with . the result at the end time is plotted in figures [fig:dt] and [fig:eta], respectively. the final iteration number, time, value, value and final total computational time are printed above each image. as one would expect, the shape and number of fingers clearly varies with , but not with .

[figure fig:dt caption: and = 0.05, 0.1, 0.2 and 0.5. the shape of the fingers varies, but the number and length seem independent of .]

[figure fig:eta caption: and = 0.04, 0.02 and 0.01. both the shape and number of fingers seem highly dependent on .]

we measure convergence in terms of the difference between saturation fields in consecutive iterations; this is plotted in figure [fig:conv](a-b). figure [fig:conv](a) clearly shows that simulations become more accurate as is lowered, but the convergence is different when is decreased, at least in the last half of the simulations: the error seems to scale with , but it scales down to a certain level with .
by comparing figures [fig:conv](a) and (b), we can see that this is the level of the overall discretisation error. we attribute this effect to the fact that perturbations due to the mesh grow with time, owing to the non-linear nature of the problem. the extreme case of this is illustrated in figure [fig:nconv], where snapshots of the contour are shown for three different iterations with . there are significant differences due to the growth of mesh perturbations, particularly between iterations 16 and 20. this is to say that the solution is not well defined and therefore has an oscillatory component arising from the changing mesh. the error associated with this component increases with time, so it can be smaller than the overall discretisation error if is sufficiently small. focusing on this case with and , it looks like is a good value.

[figure fig:conv caption: the residual is plotted as a function of normalised iteration numbers. (a) shows varying at , while (b) shows varying at . the peaks occur whenever the timeslab is advanced in time.]

[figure fig:nconv caption: the contour is plotted for a simulation with and for iteration numbers 16, 18 and 20. note how different the fingers between iterations 16 and 20 are. we expect the mean radial position of the contours to be well defined, but not the actual fingers; they are determined by the mesh at the initial time, which changes at every iteration.]

the results show that it is possible to simulate chaotic phenomena with a timespace method. a cfl-like condition still seems to exist, so the timeslab cannot be too thick. this has to do with the fact that the chaotic nature of the problem allows for a range of solutions, and the difference between these solutions should not exceed the overall discretisation error. the lower limit for the timeslab thickness is high enough to indicate that the total number of timesteps can be reduced by orders of magnitude compared to a conventional method. it would be interesting to compare an established method against an optimised timespace implementation in the context of large scale simulations and anisotropic mesh adaptation. if practical problems are to be solved, a four dimensional anisotropic mesh adaptation algorithm will ultimately be needed. this work is supported by the villum foundation.

guillaume bal and yvon maday. a "parareal" time discretization for non-linear pdes with application to the pricing of an american put. in recent developments in domain decomposition methods, pages 189-202. springer, 2002.
charbel farhat and marion chandesris. time-decomposed parallel time-integrators: theory and feasibility studies for fluid, structure, and fluid-structure applications. international journal for numerical methods in engineering, 58(9):1397-1434, 2003.
christiaan m. klaij, jaap j. w. van der vegt, and harmen van der ven. time discontinuous galerkin method for the compressible navier-stokes equations. journal of computational physics, 217(2):589-611, 2006.
j. j. w. van der vegt and h. van der ven. time discontinuous galerkin finite element method with dynamic grid motion for inviscid compressible flows: i.
general formulation. journal of computational physics, 182(2):546-585, 2002.
wagdi g. habashi, julien dompierre, yves bourgault, djaffar ait-ali-yahia, michel fortin, and marie-gabrielle vallet. anisotropic mesh adaptation: towards user-independent, mesh-independent and solver-independent cfd. part i: general principles. international journal for numerical methods in fluids, 32(6):725-744, 2000.
c. c. pain, a. p. umpleby, c. r. e. de oliveira, and a. j. h. goddard. tetrahedral mesh optimisation and adaptivity for steady-state and transient finite element calculations. computer methods in applied mechanics and engineering, 190(29):3771-3796, 2001.
frédéric alauzet, pascal j. frey, paul-louis george, and bijan mohammadi. 3d transient fixed point mesh adaptation for time-dependent problems: application to cfd simulations. journal of computational physics, 222(2):592-623, 2007.
anders logg, kent-andre mardal, garth n. wells, et al. automated solution of differential equations by the finite element method. springer, 2012. isbn 978-3-642-23098-1. doi: 10.1007/978-3-642-23099-8.
|
we report findings related to a two dimensional viscous fingering problem solved with a timespace method and anisotropic elements. timespace methods have attracted interest for the solution of time dependent partial differential equations due to the implications of parallelism in the temporal dimension, but there are also attractive features in the context of anisotropic mesh adaptation; not only are heuristics and interpolation errors avoided, but slanted elements in timespace also correspond to long and accurate timesteps, i.e. the anisotropy in timespace can be exploited. we show that our timespace method is restricted by a minimum timestep size, which is due to the growth of numerical perturbations. the lower bound on the timestep is, however, quite high, indicating that the number of timesteps can be reduced by several orders of magnitude for practical applications. keywords: anisotropic mesh adaptation, viscous fingering, timespace
|
quantum correlation plays a crucial role in many quantum computation and information processing tasks. quantum discord, which goes beyond the traditional measure of quantum correlation, i.e., quantum entanglement, is proposed to be responsible for the power of mixed state quantum algorithms with vanishing or negligible entanglement. it has potential applications in detecting critical points of quantum phase transitions even at finite temperatures. quantum discord is also found to be the necessary resource for remote state preparation, quantum state discrimination, and quantum locking. a connection between discord consumption and the quantum advantage for encoding information has been identified as well. these findings have prompted a huge surge of interest in understanding quantum discord from different perspectives, such as its dynamical behaviors under decoherence, its operational interpretation via quantum state merging and teleportation fidelity, and the generation of discord via local operations and the discording power of nonlocal unitary gates; see a recent review paper for more results. due to their fundamental significance and potential applications, various measures of quantum discord, as well as other related measures of quantum correlations, have been introduced. general positive operator valued measurements (povms) are proposed in the original definitions of these measures. on the other hand, in view of the generally negligible improvement gained by minimizing over full povms, measures of quantum correlation are usually evaluated by restricting to projective measurements only. the process of projective measurement is performed by constructing a set of orthogonal projectors in the hilbert space of a hermitian operator , and the possible outcomes of the measurements are given by the spectrum of . this process usually induces strong perturbations to the measured system, and possibly constrains one's ability to extract as much quantum correlation as possible. weak measurement can provide new insights into the study of some fundamental problems of quantum mechanics and has already been realized experimentally. it can also be used practically for signal amplification and for state tomography. particularly, these measurement processes are universal, in that any generalized measurement can be decomposed into a sequence of weak measurements.
given its fundamental role in quantum theory and practical applications, it is natural to consider the quantumness of correlations under weak measurement. weak measurements can be implemented by coupling the system weakly to the measurement apparatus, and they generally have only a small influence on a quantum state because of the partial collapse of the measured wavefunction. this distinguishes them from the projective measurements performed in standard quantum discord. the quantum correlation based on weak measurements is proposed as super quantum discord. this new quantifier has been shown to play a potential role in the protocol of optimal assisted state discrimination, where entanglement is not necessary at all, and it has also stimulated other related definitions of quantum correlations. the super discord is always larger than the normal discord defined by the strong (projective) measurements, and this may be regarded as a figure of merit of using weak measurements for characterizing the quantumness of correlations in a state. but just as every coin has two sides, here we will show that the use of weak measurements in defining quantum correlations can also induce other counterintuitive effects. as explicit examples, we will show that the super discord captured by weak measurements with different strengths can impose different orderings of quantum states. this phenomenon is very different from the states-ordering results obtained in the literature, which are easy to understand as they are induced by different correlation measures, e.g., the entropic measure of discord and the geometric measure of discord. moreover, we will also show that the super discord can change the monogamy nature for certain classes of states. detailed examples show that this change is present in a wide class of quantum states, and may therefore result in the failure of certain quantum tasks, such as the protocol that distinguishes the generalized greenberger-horne-zeilinger (ghz) states from the generalized _w_ states by using the monogamy conditions of quantum discord. in this section, we will introduce the concept of super discord. its definition is somewhat similar to that of the normal discord introduced by ollivier and zurek. the only difference is that the original projective operators are replaced by the weak measurement operators of the following form, where the strength of the measurement process is parameterized by a parameter . and are the orthogonal projectors summing to the identity, and therefore . along this line of measurement formalism, one can then obtain the nonselective postmeasurement state as

\[ \rho_{B|\pm} = \frac{\mathrm{tr}_A\left[ \left( P(\pm x) \otimes I \right) \rho \left( P(\pm x) \otimes I \right) \right]}{p_\pm}, \]

after the weak measurements are performed on party , where p_\pm is the probability distribution for the measurement outcomes. then the super discord is defined as , where the minimization is taken over the complete set of projection-valued measurements. the conditional von neumann entropy for the premeasurement state is denoted by , while the averaged conditional von neumann entropy for the postmeasurement state is denoted by , with being the reduced density operator of , and representing the von neumann entropy.
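to make the measurement formalism concrete, here is a minimal sketch (python/numpy) of the qubit weak measurement operators and the resulting conditional states of party b. the parameterization P(x) = sqrt((1 - tanh x)/2) Π0 + sqrt((1 + tanh x)/2) Π1 (the oreshkov-brun form used in the super discord literature) is our reading of the stripped equation above, and the basis argument is an illustrative way of scanning projective directions for the minimization:

```python
import numpy as np

def weak_ops(x, U=np.eye(2)):
    """weak measurement operators of strength x for the projective basis
    given by the columns of U; in the x -> infinity limit they reduce to
    the projectors themselves (strong measurement)."""
    P0 = np.outer(U[:, 0], U[:, 0].conj())
    P1 = np.outer(U[:, 1], U[:, 1].conj())
    t = np.tanh(x)
    Pp = np.sqrt((1 - t) / 2) * P0 + np.sqrt((1 + t) / 2) * P1
    Pm = np.sqrt((1 + t) / 2) * P0 + np.sqrt((1 - t) / 2) * P1
    return Pp, Pm      # satisfy Pp @ Pp + Pm @ Pm = identity

def conditional_states(rho, x, U=np.eye(2)):
    """weakly measure party a of a two-qubit state rho; return the outcome
    probabilities p_+/- and the conditional states of party b."""
    probs, states = [], []
    for P in weak_ops(x, U):
        K = np.kron(P, np.eye(2))                 # acts on party a only
        sigma = K @ rho @ K.conj().T
        p = np.real(np.trace(sigma))
        rho_b = sigma.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2) / p
        probs.append(p)
        states.append(rho_b)
    return probs, states

# usage: probs, states = conditional_states(np.eye(4) / 4, x=0.5)
```

the entropies of these conditional states, averaged with the probabilities and minimized over the basis, give the quantity entering the super discord.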
for the entanglement measures of the _concurrence_ and the _negativity_, or the quantum correlation measures of the _entropic discord_ and the _geometric discord_, it is a well accepted fact that the condition presented in eq. can be violated by certain two-qubit mixed states; see, for example, refs. . for higher dimensional systems, entanglement quantified by rényi entropies may also violate this condition. but these violations are conceptually easy to understand, as they are induced by correlation measures defined from different perspectives. here, as an unexpected result, we will show that even under the framework of weak measurements with different strengths, the resulting super discords similarly do not necessarily imply the same orderings of quantum states. in the following, we illustrate the above argument through an explicit example. we consider a family of two-qubit states with maximally mixed marginals , where denotes the identity operator, and are the usual pauli operators. physical are those with confined to the tetrahedron with vertices , , , and . these states are usually termed the bell-diagonal states, as they can be decomposed into linear combinations of the four bell states. the super discord for is calculated as , where . in fig. [fig:1], we present an exemplified plot of the super discord as a function of for the bell-diagonal states of eq. with and . the four curves from top to bottom are obtained by choosing the controlling parameters as , , , and (which corresponds to the normal discord), respectively, from which one can clearly observe that there are states having different orderings induced by the super discord under weak measurements of different strengths. this counterintuitive phenomenon can be further confirmed by the cyan (gray) shaded region shown in the inset of fig. [fig:1], which marks the valid for which of eq. with different can have different states orderings. it provides an intriguing perspective on the super discord, in that it implies the quantum correlation in a state is not only measurement-method-dependent but is also strongly dependent on the internal structures of the related measurements. the lack of a unique states ordering may result in completely different dynamical behaviors of quantum correlations with respect to the super discord. as an example, we consider the case of two qubits prepared initially in the bell-diagonal state of eq. and subject to the same phase damping channels, a process which preserves the bell-diagonal form of , with however the time dependence of the three parameters being given by , , and , where is the phase damping rate. for this kind of local dissipative channel, it has been found that there is a sudden transition from the dynamical regime of classical decoherence (cd) to that of quantum decoherence (qd) when considering the normal discord defined via projective measurements; see, for example, the blue (gray) solid and red dashed lines shown in fig. [fig:2]. when considering quantum and classical correlations based on the paradigm of weak measurements, however, the original two distinct regimes disappear. as can be seen from the black solid line shown in fig. [fig:2], the super discord decays with increasing even in the cd regime, which is in sharp contrast to the normal discord, which is constant in time during the same regime.
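for readers who want to experiment, here is a minimal sketch (python/numpy) that builds a bell-diagonal state from its three correlation parameters and checks that they lie in the physical tetrahedron; the names c1, c2, c3 are our own labels for the stripped symbols above, and the decay law in dephased() is the standard one for local phase damping, stated here as an assumption standing in for the stripped expressions:

```python
import numpy as np

I2 = np.eye(2)
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SY = np.array([[0.0, -1j], [1j, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

def bell_diagonal(c1, c2, c3):
    """rho = (I(x)I + c1 sx(x)sx + c2 sy(x)sy + c3 sz(x)sz) / 4;
    raises if (c1, c2, c3) falls outside the physical tetrahedron."""
    rho = (np.kron(I2, I2) + c1 * np.kron(SX, SX)
           + c2 * np.kron(SY, SY) + c3 * np.kron(SZ, SZ)) / 4.0
    if np.linalg.eigvalsh(rho).min() < -1e-12:
        raise ValueError("parameters outside the physical tetrahedron")
    return rho

def dephased(c1, c2, c3, gamma, t):
    """local phase damping: c1 and c2 decay as exp(-2 gamma t), c3 is
    constant (assumed standard decay law for this channel)."""
    g = np.exp(-2.0 * gamma * t)
    return c1 * g, c2 * g, c3

rho = bell_diagonal(*dephased(1.0, -0.6, 0.6, gamma=1.0, t=0.5))
```

together with conditional_states() from the previous sketch, one can scan measurement strengths and directions on such states to reproduce state-ordering comparisons numerically.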
inspired by the connection between the normal discord and the classical correlation, we further define the super classical correlation as , with being the quantum mutual information. then, with the same parameters as those for the super discord in fig. [fig:2], we present the dynamics of as the green dash-dot line in the same figure, from which one can note that it displays qualitatively similar behavior to that of the normal classical correlation, i.e., it decays with time in the cd regime, and remains constant in the qd regime. we thus see that although weak measurements can change the dynamical behavior of the super discord in the cd regime, they have no influence on the qualitative behavior of the classical correlation. in fact, the unique ordering of states with the super classical correlation is universal for the class of bell-diagonal states of eq. , for which we always have , which can be shown to be a monotonically increasing function of for arbitrary , and therefore it always imposes the same ordering for the bell-diagonal states. but it should be noted that the above argument does not hold in general, as there are and such that the unique ordering condition is violated. we now turn to discuss the role weak measurements play in exploring the monogamous character of quantum correlations. due to the asymmetry of the super discord, there are two possible lines of research on this problem, which can be illustrated explicitly through the following two monogamy inequalities, where the first one is formulated with the measurements being performed on different subsystems and of , and the second one is formulated with the measurements being performed on the same subsystem. for convenience of later presentation, we further define ( ) as the related discord monogamy score. as the super discord is an extension of the normal discord, and the normal discord has been found to be monogamous for certain classes of quantum states (e.g., the generalized ghz-class states), it is natural to ask whether this monogamy nature is universal for the super discord with arbitrary measurement strengths, or whether the super discord still respects monogamy for these states. here, we will show through some explicit examples that the answer to this question is indeed state-dependent; that is to say, these states can be monogamous as well as polygamous with respect to the super discord. our first exemplification is that of the generalized ghz states, which are known to be monogamous with respect to the normal discord with infinite measurement strength. when considering the super discord defined with finite , the two monogamy conditions in eq. are in fact equivalent, due to the exchange symmetry of . in fig. [fig:3] we plot the related monogamy score against with the measurement strengths (which corresponds to the normal discord) and , respectively. the states are monogamous whenever takes positive values. this figure shows evident transitions from observation to violation of monogamy for the super discord. more specifically, the super discord does not respect monogamy in the small and large regions of . this result is interesting, as it implies that the monogamy property of discord, even for measures defined under the same formalism of measurements, is not only state-dependent but is also determined by the intrinsic properties, e.g., the measurement strengths, of the related measurements.
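since both the super classical correlation and the dynamics of fig. [fig:2] require the quantum mutual information, here is a minimal sketch (python/numpy) of the entropic ingredients, reusing bell_diagonal() and dephased() from the previous sketch; the log base 2 normalization is a conventional choice, not taken from the paper:

```python
import numpy as np

def vn_entropy(rho):
    """von neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def mutual_information(rho):
    """I(rho) = S(rho_a) + S(rho_b) - S(rho_ab) for a two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)
    rho_a = r.trace(axis1=1, axis2=3)   # trace out party b
    rho_b = r.trace(axis1=0, axis2=2)   # trace out party a
    return vn_entropy(rho_a) + vn_entropy(rho_b) - vn_entropy(rho)

# example: mutual information along the dephasing trajectory
# for t in np.linspace(0.0, 2.0, 9):
#     rho = bell_diagonal(*dephased(1.0, -0.6, 0.6, gamma=1.0, t=t))
#     print(round(t, 2), round(mutual_information(rho), 4))
```

subtracting the minimized weak-measurement conditional entropy from this mutual information then gives the super classical correlation defined above.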
as another example, we consider the generalized _w_-class states, for which the two monogamy inequalities in eq. are no longer equivalent; when evaluated via the normal discord, the first one is always satisfied, while the second one may be satisfied or violated. when considering quantum correlations captured by the super discord, our results reveal that the first monogamy condition in eq. remains satisfied for weak measurements of arbitrary strengths. but if we consider the second condition, the situation is very different. as can be seen from the contour plots shown in fig. [fig:4], the monogamy property turns out to be state-dependent, and in contrast to that for the generalized ghz-class states, here the regions of monogamy are enlarged with finite measurement strength. from a practical point of view, the monogamous nature of the normal discord can be used to distinguish two stochastic local operations and classical communication (slocc) inequivalent classes of tripartite states, i.e., the generalized ghz and _w_ classes. but when evaluated via the super discord, the above results reveal that this application is no longer valid, as the ghz-class states can be either monogamous or polygamous in this case (see fig. [fig:3]). this exhibits another perspective of the weak measurements in defining quantum correlations, which is beyond our expectation. finally, we remark that the change to the monogamy nature of quantum states with respect to the super discord can also happen when the weak measurements are performed on different subsystems. see, for example, the results displayed in fig. [fig:5], which is plotted for the tripartite pure states . weak measurements are an important complement to the standard measurements in quantum theory. by reconsidering the processes of weak measurements with different strengths, we showed that while they are able to capture more quantum correlation, they can also induce other counterintuitive effects. as the first exemplification, we showed that weak measurements with different strengths can impose different orderings of quantum states. this effect is very different from those observed with different quantum correlation measures, e.g., the entropic and the geometric measures of the normal discord, and it may result in unexpected dynamical behaviors of quantum correlations. moreover, we have also shown that the monogamous nature of the normal discord for certain classes of quantum states can be changed by the weak-measurement-defined super discord, and this change can even invalidate the feasibility of some quantum tasks, such as the detection of two slocc-inequivalent classes of tripartite states based on monogamy. in view of these facts, we conclude that weak measurements play a dual role in defining quantum correlations. on the other hand, since different physical systems may naturally interact strongly or weakly with probing systems, the full description of measurement-dependent quantum correlation may be complete only when weak measurements with adjustable strengths are considered.
this may also provide a full quantification of quantum correlation restricted experimentally to some specified quantum systems. in particular, in cases where the quantum correlation based on weak measurements may enhance or diminish its usefulness in some protocols, a complete view of super quantum discord is necessary and may shed light on our understanding of other quantum characteristics. this work was supported by nsfc (11205121, 10974247, 11175248), the "973" program (2010cb922904), the nsf of shaanxi province (2010jm1011), and the scientific research program of the education department of shaanxi provincial government (12jk0986).

r. horodecki, p. horodecki, m. horodecki, and k. horodecki, rev. mod. phys. 81, 865 (2009).
h. ollivier and w. h. zurek, phys. rev. lett. 88, 017901 (2001); l. henderson and v. vedral, j. phys. a 34, 6899 (2001).
b. dakić, v. vedral, and č. brukner, phys. rev. lett. 105, 190502 (2010).
k. modi, t. paterek, w. son, v. vedral, and m. williamson, phys. rev. lett. 104, 080501 (2010).
a. datta, a. shaji, and c. m. caves, phys. rev. lett. 100, 050502 (2008).
t. werlang, c. trippe, g. a. p. ribeiro, and g. rigolin, phys. rev. lett. 105, 095702 (2010); t. werlang and g. rigolin, phys. rev. a 81, 044101 (2010); y.-c. li and h.-q. lin, ibid. 83, 052323 (2011).
et al., nat. phys. 8, 666 (2012).
l. roa, j. c. retamal, and m. alid-vaccarezza, phys. rev. lett. 107, 080401 (2011); b. li, s. m. fei, z. x. wang, and h. fan, phys. rev. a 85, 022328 (2012).
a. datta and s. gharibian, phys. rev. a 79, 042325 (2009); s. wu, u. v. poulsen, and k. mølmer, ibid. 80, 032319 (2009).
et al., nat. phys. 8, 671 (2012).
t. werlang, s. souza, f. f. fanchini, and c. j. villas-boas, phys. rev. a 80, 024103 (2009); f. f. fanchini, t. werlang, c. a. brasil, l. g. e. arruda, and a. o. caldeira, ibid. 81, 052107 (2010); r. lo franco, b. bellomo, e. andersson, and g. compagno, ibid. 85, 032318 (2012); r. lo franco, b. bellomo, s. maniscalco, and g. compagno, int. j. mod. phys. b 27, 1245053 (2013).
v. madhok and a. datta, phys. rev. a 83, 032323 (2011); d. cavalcanti, l. aolita, s. boixo, k. modi, m. piani, and a. winter, ibid. 83, 032324 (2011).
s. adhikari and s. banerjee, phys. rev. a 86, 062313 (2012).
a. streltsov, h. kampermann, and d. bruß, phys. rev. lett. 107, 170502 (2011).
x. hu, h. fan, d. l. zhou, and w.-m. liu, phys. rev. a 85, 032102 (2012); x. hu, h. fan, d. l. zhou, and w.-m. liu, ibid. 87, 032340 (2013).
m. gessner, e.-m. laine, h.-p. breuer, and j. piilo, phys. rev. a 85, 052122 (2012); t. abad, v. karimipour, and l. memarzadeh, ibid. 86, 062316 (2012).
k. modi, a. brodutch, h. cable, t. paterek, and v. vedral, rev. mod. phys. 84, 1655 (2012).
g. l. giorgi, b. bellomo, f. galve, and r. zambrini, phys. rev. lett. 107, 190501 (2011).
s. luo and s. fu, phys. rev. lett. 106, 120401 (2011); s. luo, phys. rev. a 77, 022301 (2008); b. bellomo, r. lo franco, and g. compagno, ibid. 86, 012312 (2012); b. bellomo, g. l. giorgi, f. galve, r. lo franco, g. compagno, and r. zambrini, ibid. 85, 032104 (2012).
s. hamieh, r. kobes, and h. zaraket, phys. rev. a 70, 052325 (2004); f. galve, g. l. giorgi, and r. zambrini, europhys. lett. 96, 40005 (2011).
a. n. korotkov and a. n. jordan, phys. rev. lett. 97, 166805 (2006); q. sun, m. al-amri, and m. s. zubairy, phys. rev. a 80, 033838 (2009).
s. wu and m. żukowski, phys. rev. lett. 108, 080403 (2012).
a. miranowicz, phys. lett. a 327, 272 (2004).
y. yeo, j.-h. an, and c. h. oh, phys. rev. a 82, 032340 (2010).
m. l. hu and h. fan, ann. phys. (ny) 327, 851 (2012).
m. okrasa and z. walczak, europhys. lett. 98, 40003 (2012).
j. cui, m. gu, l. c. kwek, m. f. santos, h. fan, and v. vedral, nat. commun. 3, 812 (2012).
g. l. giorgi, phys. rev. a 84, 054301 (2011).
r. prabhu, a. k. pati, a. sen(de), and u. sen, phys. rev. a 85, 040102 (2012); sudha, a. r. usha devi, and a. k. rajagopal, phys. rev. a 85, 012103 (2012).
a. streltsov, g. adesso, m. piani, and d. bruß, phys. rev. lett. 109, 050503 (2012).
m. n. bera, r. prabhu, a. sen(de), and u. sen, phys. rev. a 86, 012319 (2012).
h. c. braga, c. c. rulli, t. r. de oliveira, and m. s. sarandy, phys. rev. a 86, 062106 (2012).
f. f. fanchini, m. c. de oliveira, l. k. castelano, and m. f. cornelio, phys. rev. a 87, 032317 (2013).
x. j. ren and h. fan, quantum inf. comput. 13, 0469 (2013).
m. a. nielsen and i. l. chuang, quantum computation and quantum information (cambridge university press, cambridge, 2000).
w. k. wootters, phys. rev. lett. 80, 2245 (1998).
g. vidal and r. f. werner, phys. rev. a 65, 032314 (2002).
r. horodecki and m. horodecki, phys. rev. a 54, 1838 (1996).
l. mazzola, j. piilo, and s. maniscalco, phys. rev. lett. 104, 200401 (2010).
|
the information-theoretic definition of quantum correlation, e.g., quantum discord, is measurement dependent. by considering more general quantum measurements, namely weak measurements, which include the projective measurement as a limiting case, we show that while weak measurements can enable one to capture more quantumness of correlation in a state, they can also induce other counterintuitive quantum effects. specifically, we show that general measurements with different strengths can impose different orderings on the quantum correlations of some states. they can also modify the monogamous character of certain classes of states, which may diminish the usefulness of quantum correlation as a resource in some protocols. in this sense, we say that weak measurements play a dual role in defining quantum correlation.
|
beta pictoris, a type a5 iv star at a distance of pc from the earth, is one of the youngest close stars, with an approximate age of myr. observational evidence indicates that this star is surrounded by a planetary debris disk. the close proximity of such a young circumstellar disk has made pictoris an ideal candidate for the study of the evolution of protoplanetary disks and planetary system formation. in the past several years, there have been a number of reports of the detection of symmetric warps in the pictoris debris disk. models have best explained these warps as gravitational perturbations caused by planetary companions. among these reports, the disk warps discovered by were explained as the edge-on projection of debris rings orbiting the central star. models of the flux density allow some parameters of these rings to be determined via fitting. proposed a multiple planet system to account for ring formation, noting that all adjacent rings are in mean-motion resonances. in consideration of these findings, we undertake here a study of the dynamical evolution of a multibody system similar to that proposed by . we perform a statistical analysis of the stability of randomly generated systems within a portion of the total available parameter-space and briefly analyze the relationship between the parameters of the system and its orbital stability. the planetary system proposed by consists of four planets with radial distances and orbital inclinations equal to those of their corresponding warps (see their table 1). to study the dynamics of this planetary system, we explore a parameter-space which includes the masses of the four planets, their radii, and their positions and velocities. the mass and radius of pictoris are taken to be and , respectively. we consider planets with masses randomly chosen between one and three jupiter masses. for each value of the mass of a planet, we calculate its radius assuming an average density equal to that of jupiter (1.33 g per cubic cm). we also assume that all planets are initially on direct keplerian circular orbits and that their orbital phases are chosen randomly from the range . to explore the orbital stability of this planetary system, we integrated the system for 50 myr using the mercury integrator package vi. we considered the system to be stable if no planet came closer to another than three hill radii or reached a radial distance larger than 1000 au. we ran a total of 20457 simulations using randomly generated values of the planet masses and initial orbital phases. of this total, 14409 simulations used unique parameter sets. the remaining 6048 systems were exact replications of systems in the unique set, a consequence of the random number generation routine. the duplicate systems have been kept for the sake of statistical analysis, since they are randomly distributed throughout the phase space. table 1 shows the statistical data for all simulations, grouped in five 10 myr intervals. the middle two columns show the number and percentage of simulations that became unstable within their corresponding time intervals. the rightmost column shows the percentage of the systems remaining stable beyond their respective time interval. as shown here, approximately 41% of the systems remained stable after the first 10 myr.
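for concreteness, the sampling of initial conditions and the proximity criterion described above can be sketched as follows (python/numpy; the semi-major axes in a_au and the stellar mass are placeholders standing in for the stripped values of the proposed system, not the actual parameters used):

```python
import numpy as np

G = 39.478          # au^3 yr^-2 msun^-1 (equal to 4 pi^2)
M_STAR = 1.75       # stellar mass in solar masses (placeholder value)
MJUP = 9.546e-4     # jupiter mass in solar masses
RJUP_AU = 4.779e-4  # jupiter radius in au

def draw_system(a_au, rng):
    """random masses in [1, 3] mjup, radii at jupiter's mean density
    (r scales as m**(1/3) at fixed density), circular keplerian speeds,
    and random orbital phases in [0, 2 pi)."""
    a = np.asarray(a_au)
    m = rng.uniform(1.0, 3.0, a.size) * MJUP
    radius = RJUP_AU * (m / MJUP) ** (1.0 / 3.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, a.size)
    v = np.sqrt(G * M_STAR / a)
    return m, radius, phase, v

def hill_unstable(a, m, min_sep):
    """flag a close encounter: separation below three hill radii."""
    r_hill = a * (m / (3.0 * M_STAR)) ** (1.0 / 3.0)
    return min_sep < 3.0 * r_hill.max()

rng = np.random.default_rng(1)
m, radius, phase, v = draw_system([10.0, 20.0, 40.0, 80.0], rng)  # placeholder radii
```

each drawn system would then be handed to the n-body integrator, with the hill criterion (and the 1000 au ejection distance) checked along the trajectory.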
from these systems, 6.7% were still stable after 20 myr from the beginning of the integrations, the upper estimate of the lifetime of pictoris (20 myr). during the last 10 myr, only 0.1% of all systems were still stable.

[table 1 caption: statistical data on five-body system stability.]

figure [stable_hist] shows how stability lifetimes are distributed across a random sampling of the parameter-space. this histogram shows the number of systems that became unstable in year intervals. it can be seen from this figure that for a system chosen randomly within our assumed parameter-space, it is most probable that the stability lifetime of the system is approximately 7 myr. it is also seen that no system became unstable at an age of less than 1.0 myr. we note that 15 systems remained stable for the entire 50 myr and are therefore not accounted for in the histogram. in a preliminary attempt to determine the relationship between initial conditions and stability lifetime, we have plotted the mass of each planet versus its initial phase-angle for those systems that remained stable for 50 myr (fig. ). the initial phase of planet 1 was set to for all simulations. it is interesting to note that none of these systems contain a planet with a mass greater than . we have analyzed the stability of the proposed five-body planetary system embedded in the pictoris debris disk for over 14000 initial conditions. our results indicate that the majority of systems became unstable between 1 and 10 million years. there were only 8 unique simulations that remained stable for the entire 50 myr integration time. these systems contained planets with masses less than 2.4 times the mass of jupiter. this work is partially supported by the carnegie institution of washington internship program, and also by an reu site for undergraduate research training in geoscience, nsf ear-0097569, for jhc, and by the nasa origins of the solar system program under grant nag5-11569, and also the nasa astrobiology institute under cooperative agreement ncc2-1056, for nh.

wahhaj, z., koerner, d. w., ressler, m. e., werner, m. w., backman, d. e., and sargent, a. i., astroph. j., 584, l27-l31 (2003).
zuckerman, b., song, i., bessel, m. s., and webb, r. a., astroph. j., 562, l87-l90 (2001).
smith, b. a., and terrile, r. j., science, 226, 1421-1424 (1984).
hobbs, l. m., vidal-madjar, a., ferlet, r., albert, c. e., and gry, c., astroph. j., 293, l29-l33 (1985).
weinberger, a. j., becklin, e. e., and zuckerman, b., astroph. j., 584, l33-l37 (2003).
burrows, c. j., krist, j. e., and stapelfeldt, k. r., bul. am. astron. soc., 27, 1329 (1995).
mouillet, d., larwood, j. d., papaloizou, j. c. b., and lagrange, a. m., mon. not. r. astron. soc., 292, 896-904 (1997).
heap, s. r., lindler, d. j., lanz, t. m., cornett, r. h., hubeny, i., maran, s. p., and woodgate, b., astroph. j., 539, 435-444 (2000).
carroll, b. w., and ostlie, d. a., an introduction to modern astrophysics, addison-wesley, new york, pp. a13 (1996).
chambers, j. e., mon. not. r. astron. soc., 304, 793-799 (1999).
|
it has recently been stated that the warp structure observed around the star pictoris may be due to four planets embedded in its debris disk. it therefore becomes important to investigate for what range of parameters, and for how long, such a multibody system will be dynamically stable. we present the results of the numerical integration of the suggested planetary system for different values of the masses and radii of the planets, and their relative positions and velocities. we also present a statistical analysis of the results in an effort to understand the relation between the different regions of the parameter-space of the system and the duration of the orbital stability of the embedded planets. addresses: new mexico institute of mining and technology, 801 leroy place, socorro, nm 87801; department of terrestrial magnetism and nasa astrobiology institute, carnegie institution of washington, 5241 broad branch road, washington, dc 20015
|
the last few decades have witnessed an explosion of high-dimensional data in applied fields including biology, engineering, finance and many other areas. given a dataset consisting of , where is the response and is the covariate for the observation, the main interest is often to conduct a regression analysis between and , the simplest model for which takes the linear form . an important assumption in linear regression is usually that the observations are all generated from the same model. in many applications, however, the data collected often contain contaminated or noisy observations, due to a plethora of reasons. those observations exerting great influence on statistical analysis, thus named influential points, can seriously distort all aspects of data analysis, such as altering the estimation of the regression coefficient and swaying the outcome of statistical inference. thus, when influential points are present, fitting the model based on a clean data assumption leads to at best a very crude approximation of the model and at worst a completely wrong solution. for fixed dimensional models, we refer the reader to, among many others, . for high-dimensional models, found that influential observations could negatively impact many attractive methods recently developed for dealing with high dimensionality, such as lasso for variable selection (tibshirani, 1996) and sis for variable screening (fan and lv, 2008). as a result, influence diagnosis has long been recognized as a central problem and is routinely recommended in statistical analysis. an entire line of research has been devoted to devising robust methods that are less prone to influential observations; see, for example, two excellent books on robust regression by and when is fixed. when , and investigated m-estimation and proposed algorithms to find the optimal loss function. and , among others, devised robust methods for variable selection when heavy-tailed noise is present. these papers made no attempt to quantify the influence of individual points, which can often be the main question of interest in practice. as highlighted by , it is worth investigating the factors causing influentialness in order to determine the best course of action, as influential points are often the best indicators of the stability of a model. for multivariate data containing only s, proposed to find outliers in a high-dimensional space via projection, while used a robust covariance matrix estimator to define a distance for detecting outliers. is among the first to study outlier detection in regression by using a nonconvex penalized likelihood. their method was developed mainly for problems, and a theory on how successful their approach is in terms of identifying influential observations is lacking. it is also found empirically that she and owen's method is outperformed by the new approach studied in this paper (section 4). when is fixed, there are many measures proposed for quantifying the influence of each observation, notably cook's distance, studentized residuals, dffits, and welsch's distance. these measures have now been implemented in most statistical software, such as r and sas. since these measures are all based on ordinary least squares (ols) estimation, they are not applicable to high-dimensional data due to the lack of degrees of freedom.
on the other hand, the problem of influence diagnosis in a high-dimensional setting has received little attention despite its obvious importance. this is mainly due to the difficulty of establishing a coherent theoretical framework and the lack of easily implementable procedures. appears to be the first work in this setting. in particular, they proposed a new high-dimensional influence measure (him) based on marginal correlations and established its asymptotic properties. the asymptotic theory further permits the development of a multiple testing based procedure for detecting influential points. similar to many fixed dimensional measures, him is based on the idea of leave-one-out. that is, to quantify the influence of an observation, one compares a predefined measure evaluated on the whole dataset with the measure evaluated on a subset of the data leaving out the observation under investigation. because of this, him is effective in detecting the presence of a single influential point. in practice, however, multiple influential observations are more commonly encountered, and it is not appropriate to apply a test for a single influential point sequentially in order to detect multiple ones. on the other hand, detecting multiple influential observations is much more challenging, due to the notorious "masking" and "swamping" effects. specifically, masking occurs when an influential point goes undetected as such, while swamping occurs when a non-influential point is classified as an influential one. masking is one reason that applying a single influential point test sequentially can fail. for example, if there are multiple influential points, masking may cause the influence test for the first influential point to return a conclusion of no influential points, so that testing for any additional influential points is never performed. to handle the masking and swamping effects in fixed dimensional models, many group deletion methods have been proposed (rousseeuw and zomeren, 1990; hadi and simonoff, 1993; imon, 2005; pan et al., 2000; nurunnabi et al., 2014; roberts et al., ). dealing with masking and swamping effects for high-dimensional data, however, is much more challenging and is currently an open problem. the main aim of this paper is to propose a new procedure for detecting multiple influential points in high-dimensional data that is theoretically sound and operationally simple. to substantially extend the scope of the new method, we study the problem in the context of a multiple response linear model where . this encompasses the univariate response linear model in as a special case. the high-dimensional linear model with multiple responses has been extensively studied for statistical learning and dimension reduction, but not for influence diagnosis. in extending him to the multiple response context, our first contribution is to define a new influence measure that explicitly takes into account the covariance structure of , and to derive its theoretical properties. based on this new influence measure and the idea of random group deletion, we propose a novel procedure named mip, short for multiple influential point detection for high-dimensional data.
along the way, we propose two novel quantities, named the max and min statistics, to assess the extremeness of each point when the data are subsampled. our theoretical studies show that these two statistics have complementary properties. the min statistic is useful for overcoming the swamping effect but less effective for masked influential observations, while the max statistic is well suited for detecting masked influential observations but is less effective in handling the swamping effect. combining their advantages, we propose a computationally efficient yet simple min-max algorithm for obtaining a clean subset of the data that contains no influential points. this clean set of data then serves as the benchmark for assessing the influence of other observations, which in turn permits the use of false discovery rate approaches such as the Benjamini-Hochberg method for assessing influence. remarkably, the theoretical properties of the max and min statistics can be studied and are rigorously established in this paper. we point out that establishing the theoretical results for these two extreme statistics is very challenging and that similar results do not exist even for fixed-dimensional problems. this can be seen as our second contribution. to the best of our knowledge, MIP is the first influence diagnosis procedure that is theoretically justified for multiple influential point detection in a high-dimensional setting. before we proceed, we highlight the usefulness of the max and min statistics via an analysis of the microarray data in section 4.3. figure [fig1] plots the logarithms of the p-values associated with the max statistic in (a) and the min statistic in (b) versus the observation indices, respectively. with the false discovery rate prespecified, using the min statistic we identify a set of influential observations, represented as the blue points in plots (a) and (b). it is interesting that the MIP procedure, combining the strengths of the two statistics, identifies the same set of influential points. on the other hand, using the max statistic, additional observations, represented as red triangles in plot (a), are declared influential. these findings are consistent with our theory that the max statistic tends to identify more influential observations, making it more suitable for overcoming the masking effect, but may suffer from the swamping effect. on the other hand, the fact that the min statistic gives the same set of influential points as MIP in plot (b) implies that there may not exist any masking effect in this dataset. further analysis in section 4.3 shows that the reduced data, obtained by removing the influential observations identified by MIP, results in a sparser model with a better fit when the lasso is applied for model fitting. in addition, the estimated coefficient based on the reduced data is almost orthogonal to that based on the full data. therefore, being able to detect influential observations may be helpful for achieving better variable selection and more accurate coefficient estimation. [figure 1: logarithms of the p-values from the max statistic (panel a) and the min statistic (panel b) plotted against the observation indices.] the rest of this paper is organized as follows.
in section 2, we review the high-dimensional influence measure of Zhao et al. (2013), extend it to the multiple response linear model accounting for the covariance of the responses, and establish the asymptotic behaviour of this new influence measure. in particular, we show in theorem 1 that, when there is no influential point, the proposed influence measure for each observation converges to a $\chi^2_q$ distribution, where $q$, the number of responses, is the degrees of freedom. in section 3, based on the idea of random group deletion, or leave-many-out, we propose the max and min statistics for assessing extremeness and establish their theoretical properties. the max and min statistics for a given point are, respectively, the maximum and the minimum of the influence measures defined over randomly subsampled data. we show in theorem 2 that, surprisingly, when there is no influential point, these two statistics both follow a $\chi^2_q$ distribution. when there are influential points, theorem 3 and theorem 4 show that for a non-influential point, its max and min statistics still follow a $\chi^2_q$ distribution. furthermore, in the presence of influential points, theorems 3 and 4 demonstrate that, under suitable conditions, the max and min statistics can identify the influential points with large probability. we then argue that these two statistics are complementary in detecting influential observations and that the min-max algorithm can suitably combine their strengths. simulation results and data analysis, showing the competitive performance of MIP in comparison to HIM and the method of She and Owen, are presented in section 4. in section 5, we provide further discussions, including the use of alternative quantities for defining influence, parallel computing for alleviating the computational demand, and generalizations of MIP beyond linear models. finally, all the proofs are relegated to the appendix. we close this section with the notation used throughout the paper. for any set, we write $|\cdot|$ for its cardinality, and we reserve index sets for the influential and the non-influential observations, respectively. we denote by $\|\cdot\|$ the norm of a vector; for any matrix, $\|\cdot\|_F$ denotes its Frobenius norm and $\|\cdot\|_2$ its spectral norm. finally, we use $C$ to denote a generic constant that may change depending on the context. to motivate the development of a new influence measure for multiple response regression, we first give a short review of the high-dimensional influence measure (HIM) of Zhao et al. (2013) for problems with a scalar response. towards this end, consider the following linear model $$y_i = \mathbf{x}_i^\top\boldsymbol{\beta} + \epsilon_i, \qquad \text{(eq:model)}$$ where the pair $(\mathbf{x}_i, y_i)$, $1 \le i \le n$, denotes the observation of the $i$-th subject, $y_i$ is the response variable, $\mathbf{x}_i$ is the associated $p$-dimensional predictor vector with zero mean, $\boldsymbol{\beta}$ is the coefficient vector, and $\epsilon_i$ is a mean zero, normally distributed random noise. the idea of HIM is to define the influence of a point by measuring its contribution to the average marginal correlation between the response and the predictors. specifically, define the marginal correlation between the $j$-th variable and the response as $\rho_j$. given the data, we obtain its sample estimate $\hat\rho_j$, for $1\le j\le p$, by plugging in the sample estimates of the means and variances involved. the sample marginal correlation with the $k$-th observation removed, $\hat\rho_j^{(k)}$, is similarly defined. HIM then measures the influence of the $k$-th observation by comparing the sample correlations with and without this observation, defined as the average squared difference $p^{-1}\sum_{j=1}^p(\hat\rho_j - \hat\rho_j^{(k)})^2$. intuitively, the larger this quantity is, the more influential the corresponding observation.
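to make the leave-one-out construction above concrete, here is a minimal python sketch of the HIM measure, assuming the moment estimates described in the text; the helper names (`marginal_corr`, `him_influence`) are our own and not from the paper.

```python
import numpy as np

def marginal_corr(X, y):
    """sample marginal correlations between each covariate and the response."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    return Xs.T @ ys / len(y)

def him_influence(X, y):
    """HIM leave-one-out measure: p^{-1} sum_j (rho_j - rho_j^{(k)})^2 for each k."""
    n, _ = X.shape
    rho_full = marginal_corr(X, y)
    D = np.empty(n)
    for k in range(n):
        keep = np.arange(n) != k
        D[k] = np.mean((rho_full - marginal_corr(X[keep], y[keep])) ** 2)
    return D
```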
when there is no influential point, it is proved under mild conditions that the suitably normalized measure converges in distribution to $\chi^2_1$, the chi-square distribution with one degree of freedom. based on this result, we can formulate the problem of influential point detection as a multiple hypothesis testing problem where one tests $n$ hypotheses, one for each observation, stating that the observation under investigation is non-influential. subsequently, the Benjamini-Hochberg procedure for multiple testing can be used to control the false discovery rate. since HIM is based on the leave-one-out idea, the derived distribution is invalid whenever there are one or more influential points. that is, the presence of one single influential point can distort the null distribution of HIM for a non-influential point according to the definition above. similarly, the presence of more than one influential point can distort the HIM of an influential point as well. this is the manifestation of a more general difficulty of multiple influential point detection, where the masking and swamping effects greatly hinder the usefulness of any leave-one-out procedure. in the language of multiple testing, masking is the problem of getting false negatives and swamping is the problem of getting false positives. to appreciate how the masking and swamping effects negatively impact the performance of HIM, we quickly look at examples 1 and 2 in section 4. the data are generated such that there exists a strong masking effect in example 1 and a strong swamping effect in example 2; the magnitude of these effects depends on a parameter. figure [fig2] presents a comparison of HIM and the MIP method proposed in this paper for detecting influence, with the nominal level used for declaring a point influential in the Benjamini-Hochberg procedure prespecified. from plot (a) of figure [fig2], we see that the true positive rates (TPRs) of HIM are much lower than those of MIP; that is, HIM identifies far fewer influential points as influential and thus suffers severely from the masking effect. meanwhile, the false positive rates (FPRs) of HIM are also much larger than the nominal level, especially when the parameter becomes large; that is, HIM identifies many more non-influential points as influential, meaning that HIM also suffers from the swamping effect. from plot (b), we see that HIM suffers greatly from the swamping effect, as the FPRs can be very close to 1 for large values of the parameter. on the other hand, for both examples, the FPRs of the MIP procedure are controlled well below the nominal level while its TPRs are monotone functions of the parameter and eventually become one. we are now ready to extend HIM to deal with multiple response models. assume that the non-influential observations are from the following model $$\mathbf{y}_i = \mathbf{B}^\top\mathbf{x}_i + \boldsymbol{\epsilon}_i, \qquad 1 \le i \le n,$$ where $\mathbf{y}_i$ is the $q$-dimensional response, $\mathbf{B}$ is the coefficient matrix and $\boldsymbol{\epsilon}_i$ is the noise vector. in this paper, we consider the situation where $q$ is fixed and $p$ diverges with $n$, such that both $n$ and $p$ diverge to infinity. without loss of generality, we assume that the responses and covariates have mean zero; otherwise, we can always center the data.
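as a rough illustration of how the asymptotic null is used operationally, the sketch below converts the HIM measures into p-values; the `n**2` scaling is an assumption of ours standing in for the normalizing constants in the theory, and should be replaced by the exact constants from the paper.

```python
from scipy.stats import chi2

def him_pvalues(D, n):
    # assumed normalization: n^2 * D_k treated as asymptotically chi2(1)
    # under the null of no influential points
    return chi2.sf(n ** 2 * D, df=1)
```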
to detect the influential observations in a high-dimensional linear model with multiple responses, we need to define a proper measure. similar to HIM, to circumvent the instability of the OLS estimator, we use the marginal correlations between the responses and each of the covariates. different from HIM, however, since the response is multivariate with a covariance matrix $\boldsymbol{\Sigma}_y$, the correlation among the responses must be taken into account. we first introduce the necessary notation. let $\tilde{\mathbf{y}}_i = \boldsymbol{\Sigma}_y^{-1/2}(\mathbf{y}_i - \boldsymbol{\mu}_y)$ be the standardized version of $\mathbf{y}_i$. define the marginal correlation between the $j$-th covariate and the response vector accordingly, and write $\boldsymbol{\Xi}$ for the resulting cross-correlation matrix between the responses and the predictors. when the means and covariances are unknown, a simple choice is to replace them by their corresponding sample moment estimates. another natural choice is to employ robust estimators, for example the median, the median absolute deviation (MAD) and a robust covariance matrix estimate. regardless of which estimates are used, we denote the resulting estimator of the cross-correlation matrix by $\hat{\boldsymbol{\Xi}}$, and we define a second estimator $\hat{\boldsymbol{\Xi}}^{(k)}$ by deleting the $k$-th observation, with all means and variances re-estimated without that observation. then, similar to HIM, we define our influence measure for the $k$-th observation as $$T_k = p^{-1}\big\|\hat{\boldsymbol{\Xi}} - \hat{\boldsymbol{\Xi}}^{(k)}\big\|_F^2.$$ that is, $T_k$ assesses the average difference between the two correlation matrices between the covariates and the responses, one defined on the complete dataset and the other on the dataset without the $k$-th observation. this is basically the delete-one-out idea. the statistic reduces to the one in section 2.1 if $q = 1$. if the $k$-th observation is not influential, the value of $T_k$ will be small. in order to quantify how small is small, we need to study its asymptotic behavior. to this end, we define the corresponding population quantities analogously and make the following assumptions. * (c1) for each response, the corresponding coefficient vector is constant and does not change as the dimension increases. * (c2) for the covariance matrix of the covariates with its eigen-decomposition, we assume a decay condition on its eigenvalues indexed by a constant. * (c3) the predictor follows a multivariate normal distribution and the random noise follows a multivariate normal distribution with mean zero and an unknown covariance. * (c4) the eigenvalues of $\boldsymbol{\Sigma}_y$ are assumed bounded by some constant. * (c5) certain moments of the estimators of the means and variances are finite. assumption (c1) is a natural extension of (c.1) in Zhao et al. (2013), as $q$ is assumed fixed. assumption (c2) and the assumption on the predictor in (c3) are also made in Zhao et al. (2013). the assumption on the noise in (c3) is an extension of the univariate linear model considered there. assumption (c4) on the eigenvalues of $\boldsymbol{\Sigma}_y$ is reasonable for a fixed $q$. assumption (c5) is a natural extension of (c.4) in Zhao et al. (2013), which requires that the estimates of the means and variances are consistent and that certain moments exist. since the observations are i.i.d. normal, it can be shown that these assumptions hold when moment estimates are used; if the sample mean and the sample covariance matrix are used, by similar arguments and noting (c4), it can be shown that the assumptions on the response estimates in (c5) also hold. in section 3, we strengthen (c5) further to (c7). theorem [th1]: assume a fixed $q$. under assumptions (c1)-(c5), when there are no influential observations, the suitably normalized $T_k$ converges in distribution to $\chi^2_q$ as $n, p \to \infty$, where $\chi^2_q$ is the chi-square distribution with $q$ degrees of freedom.
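the following python sketch, assuming the moment estimates and using helper names of our own, computes the multiple-response measure $T_k$; the $\boldsymbol{\Sigma}_y^{-1/2}$ standardization follows the definition above.

```python
import numpy as np
from scipy.linalg import sqrtm

def xi_matrix(X, Y):
    """cross-correlation estimate between standardized responses and covariates (q x p)."""
    n = len(Y)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    S = np.atleast_2d(np.cov(Y, rowvar=False))
    Ys = (Y - Y.mean(axis=0)) @ np.linalg.inv(np.real(sqrtm(S)))
    return Ys.T @ Xs / n

def influence_measure(X, Y):
    """T_k = p^{-1} ||Xi_hat - Xi_hat^{(k)}||_F^2 for every observation k."""
    n, p = X.shape
    full = xi_matrix(X, Y)
    T = np.empty(n)
    for k in range(n):
        keep = np.arange(n) != k
        T[k] = np.sum((full - xi_matrix(X[keep], Y[keep])) ** 2) / p
    return T
```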
as alternatives to the sample moment estimates, robust estimates of the means, scales and covariance matrix can also be used in practice. for example, we can estimate the means by the sample median and the scales by the median absolute deviation (MAD) estimator. these estimates satisfy assumption (c5) by noting the normality of the observations. for the response covariance, write $\boldsymbol{\Sigma}_y = \mathbf{D}\mathbf{R}\mathbf{D}$, where $\mathbf{R}$ is the correlation matrix of the response and $\mathbf{D}$ is a diagonal matrix of standard deviations. we use the MAD estimator for $\mathbf{D}$, denoted $\hat{\mathbf{D}}$. the matrix $\mathbf{R}$ can be estimated as follows. when the response follows an elliptical distribution, which includes the normal distribution as a special case, the Pearson correlation $r$ between two responses satisfies $r = \sin(\pi\tau/2)$, where $\tau$ is the Kendall's tau correlation between them. we can therefore estimate each correlation as $\sin(\pi\hat\tau/2)$ using the sample Kendall's tau, yielding $\hat{\mathbf{R}}$. consequently, the robust estimate of $\boldsymbol{\Sigma}_y$ is simply $\hat{\mathbf{D}}\hat{\mathbf{R}}\hat{\mathbf{D}}$. because the response is normal and $q$ is fixed, it has been shown that the estimation error of the sample Kendall's tau decays to zero exponentially fast; combining these results with the properties of the MAD estimator, it can be checked that (c5) holds under the normality assumption. these robust estimates are the quantities used in our numerical examples. as discussed before, any measure based on the leave-one-out approach may be ineffective when there are multiple influential observations, due to the "masking" and "swamping" effects. since the number of influential observations is generally unknown in practice, it is natural to employ a notion of leave-many-out, or group deletion. group deletion has also been used for fixed-dimensional problems in identifying multiple influential points, but not in a way similar to how we define our statistics, which are theoretically tractable. recall that the index sets of the influential and non-influential observations have sizes $n_{\inf}$ and $n_{non}$, respectively, with $n_{\inf} + n_{non} = n$. for any fixed $k$, to check whether the $k$-th point is influential or not, we draw at random, with replacement, subsets $A_1, \ldots, A_m$ that do not include $k$, each of a common size. these subsets are repeatedly drawn in the hope that there exists some subset that contains no influential observations. if such a clean set can be found, then the statistic associated with any non-influential point, as developed in section 2.2, has the $\chi^2_q$ distribution as in theorem [th1]. a conservative choice is to take subsets of size about half the sample, because the number of non-influential points is usually larger than that of the influential points. formally, we make the following assumption. * (c6) the proportion of influential observations, which is allowed to vary with $n$, is bounded by a constant independent of $n$. assumption (c6) allows the number of influential observations to grow with $n$. for each $r$, let $B_r$ be the subset of non-influential observations in $A_r$ and denote its size as $n_{B_r}$. under (c6), the number of non-influential observations in any subset does not vanish. for each $r$ and each fixed $k$, consider the augmented subset $A_r \cup \{k\}$, and compute the generalized influence measure of the $k$-th point with respect to the random subset $A_r$ as $$\hat{D}_{r,k} = p^{-1}\big\|\hat{\boldsymbol{\Xi}}_{A_r\cup\{k\}} - \hat{\boldsymbol{\Xi}}_{A_r}\big\|_F^2,$$ where $\hat{\boldsymbol{\Xi}}_{A_r\cup\{k\}}$ and $\hat{\boldsymbol{\Xi}}_{A_r}$ denote the estimates based on the observations in $A_r\cup\{k\}$ and $A_r$, respectively. we are now ready to define the two extreme statistics $$T_{\min,k} = \min_{1\le r\le m}\hat{D}_{r,k} \qquad \text{and} \qquad T_{\max,k} = \max_{1\le r\le m}\hat{D}_{r,k}.$$ we name them the min and max statistic, respectively, as they measure the extremeness of the generalized influence measure with respect to the randomness of the subsampling scheme. to establish the asymptotic behaviours of $T_{\min,k}$ and $T_{\max,k}$, we first study the behaviour of a key quantity measuring the estimation error contributed by the non-influential observations within each subset.
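a small sketch of the robust covariance estimate just described, combining MAD scales (with the usual normal-consistency constant, an assumption on our part) and the $\sin(\pi\hat\tau/2)$ transform of Kendall's tau:

```python
import numpy as np
from scipy.stats import kendalltau

def robust_sigma_y(Y):
    """robust estimate D_hat R_hat D_hat of the response covariance:
    MAD for the scales, sin(pi/2 * kendall-tau) for the correlations."""
    q = Y.shape[1]
    # 1.4826 makes the MAD consistent for the standard deviation under normality
    mad = 1.4826 * np.median(np.abs(Y - np.median(Y, axis=0)), axis=0)
    R = np.eye(q)
    for i in range(q):
        for j in range(i + 1, q):
            tau, _ = kendalltau(Y[:, i], Y[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2)
    D = np.diag(mad)
    return D @ R @ D
```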
by definition, this key quantity is the Frobenius-norm error associated with the non-influential observations in the subset only; its population counterpart is denoted accordingly. we make the following assumptions, which can be seen as strengthened versions of (c5). * (c7) the assumptions on the estimators in (c5) still hold; furthermore, there exist constants, independent of $n$ and $p$, such that the estimation errors satisfy exponential tail bounds. in assumption (c7), the mean estimators are assumed to have sub-Gaussian tails and the variance estimators sub-exponential tails. this assumption is satisfied by the sample mean and the sample variance under normality; the sub-exponential tail assumption is stronger than that in (c5), where only the eighth moments are required. we now quantify the magnitude of the maximum effect of the non-influential points, which is a key quantity for establishing the asymptotic properties of the min and max statistics. lemma [lemma1]: assume that the non-influential observations satisfy (c1)-(c4) and that (c6) and (c7) hold, together with a mild rate condition. then the maximum effect of the non-influential points is uniformly negligible with high probability; in particular, it vanishes if $\log m$ grows slowly enough relative to $n$. here the number of subsamples $m$ is allowed to grow very fast to help us understand the approach, as explained in the next section, although in practice we only need $m$ to be large. based on lemma [lemma1], we have the following conclusion when there is no influential observation. theorem [th2]: suppose that all observations are non-influential. under the assumptions of lemma [lemma1], both $T_{\min,k}$ and $T_{\max,k}$, suitably normalized, converge in distribution to $\chi^2_q$. theorem [th2] seems surprising at first glance, since we always have $T_{\min,k} \le T_{\max,k}$. an explanation is in order. it will be shown that $\hat{D}_{r,k}$ can be decomposed into two parts: the first part, depending on the quantity $E_k$ defined in the next paragraph, represents the effect of the $k$-th observation, and the second part is controlled by lemma [lemma1]. since the second part is negligible by lemma [lemma1], the asymptotic distributions of $T_{\min,k}$ and $T_{\max,k}$ are mainly determined by the first part. thanks to the blessing of dimensionality, we can show that this part asymptotically has a $\chi^2_q$ distribution. from theorem [th2], when $T_{\min,k}$ or $T_{\max,k}$ is larger than $\chi^2_{1-\alpha}(q)$, the $(1-\alpha)$ quantile of the $\chi^2_q$ distribution for some prespecified $\alpha$ such as 0.05, we can label the $k$-th point as an influential observation. recall that $B_r$ is the set consisting of the indices of the non-influential observations in $A_r$; let $O_r$ be its complement in $A_r$. for each $r$, the size of $O_r$ is at most the number of influential points. similar to the proof of theorem [th2], we have the decomposition (up to normalizing constants) $$\begin{aligned} \hat{D}_{r,k} &= p^{-1}\big\|\hat{\boldsymbol{\Xi}}_{A_r\cup\{k\}} - \hat{\boldsymbol{\Xi}}_{A_r}\big\|_F^2 = p^{-1}\Big\|\sum_{t\in A_r,\,t\neq k}\tilde{\mathbf{y}}_t\tilde{\mathbf{x}}_t^\top - \tilde{\mathbf{y}}_k\tilde{\mathbf{x}}_k^\top\Big\|_F^2 \\ &= p^{-1}\Big\|\sum_{t\in B_r}\tilde{\mathbf{y}}_t\tilde{\mathbf{x}}_t^\top + \sum_{t\in O_r}\tilde{\mathbf{y}}_t\tilde{\mathbf{x}}_t^\top - \tilde{\mathbf{y}}_k\tilde{\mathbf{x}}_k^\top\Big\|_F^2 := p^{-1}\big\|W_{non,k,r} + W_{\inf,k,r} - \tilde{\mathbf{y}}_k\tilde{\mathbf{x}}_k^\top\big\|_F^2, \end{aligned} \qquad \text{(dec-d-rk)}$$ where $W_{non,k,r}$ and $W_{\inf,k,r}$ are associated with the non-influential and influential observations, respectively. define $E_k$, which represents the effect of the $k$-th observation, and let the maximum and minimum joint effects of the influential observations be defined accordingly. the asymptotic behavior of $T_{\min,k}$ and $T_{\max,k}$ depends on the magnitudes of $E_k$ and of these joint effects when multiple influential observations are present; see theorem [th3] in section 3.1 and theorem [th4] in section 3.2. we state the properties of $T_{\max,k}$ and $T_{\min,k}$ separately. in theorem [th2], we derived the null distributions of $T_{\min,k}$ and $T_{\max,k}$ when there is no influential point. we now study $T_{\max,k}$ when there are influential observations and develop the corresponding detection procedure. recall the quantities in (c6), denote the ratio of influential to non-influential points within the subsets, and define the associated rate quantities; a simple calculation in the proof of theorem [th3] bounds them. we have the following results for $T_{\max,k}$.
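the subsampled min and max statistics can be sketched as follows, reusing `xi_matrix` from the earlier sketch; the subset size of roughly $n/2$ and the drawing of subsets follow the description above.

```python
import numpy as np

def min_max_statistics(X, Y, k, m, rng=None):
    """min and max of the generalized influence measure D_{r,k} over m random
    subsets A_r that exclude observation k."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    others = np.delete(np.arange(n), k)
    size = n // 2                      # conservative subset size, about n/2
    stats = np.empty(m)
    for r in range(m):
        A = rng.choice(others, size=size, replace=False)
        Ak = np.append(A, k)
        diff = xi_matrix(X[Ak], Y[Ak]) - xi_matrix(X[A], Y[A])
        stats[r] = np.sum(diff ** 2) / p
    return stats.min(), stats.max()
```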
theorem [th3]: under the assumptions of lemma [lemma1], when there are influential observations, the following two conclusions hold. * (i) suppose further that the joint effect of the influential observations within the subsets vanishes. if observation $k$ is non-influential, then both $T_{\min,k}$ and $T_{\max,k}$ converge to $\chi^2_q$ in distribution. * (ii) for an influential point $k$, if its effect $E_k$ satisfies the max-unmask condition for some small prespecified $\alpha$, where $\chi^2_{1-\alpha}(q)$ is the $(1-\alpha)$ quantile of a $\chi^2_q$ distribution, then $T_{\max,k}$ exceeds the detection threshold with probability tending to one. under the condition in (i), for any non-influential observation, the asymptotic distributions of $T_{\min,k}$ and $T_{\max,k}$ are the same as those in theorem 2. that is, the distributions of the min and max statistics of a non-influential observation are not affected by the presence of influential observations; as such, a non-influential point can be identified as non-influential with high probability, and the swamping effect can be overcome under the condition in (i). a sufficient condition for (i) is that the number of influential points in each subset is small and their effects are bounded. this condition might be violated, however, if the proportion of influential points does not vanish or some influential observations have large effects. this implies that deleting points with large effects first is helpful for alleviating the swamping effect. for an influential observation, the max-unmask condition in (ii) gives the requirement on its signal strength for it to be identified as influential. as the threshold quantity decreases, the condition becomes weaker and easier to satisfy, and the point is easier to detect. this provides an opportunity to identify the influential observations that are masked by others, as long as we can make this quantity small enough; in fact, as argued below, it can be very small if $m$ is sufficiently large. we now discuss the upper bound in (ii) of theorem [th3]. recall that $O_r$ denotes the indices of the influential observations in $A_r$. by letting $m$ grow, it is easy to see that the bound is a decreasing function of $m$, since there are many subsets that contain no influential observations under assumption (c6); therefore the bound can approach its minimum. of course, in practice an infinite $m$ is not achievable. under additional conditions, the bound will be small for large $m$ and $n$; even if the number of influential points is unbounded but grows slowly, the bound converges to 0 as $n \to \infty$. generally, when $m$ and $n$ are large, the bound will be small under mild conditions. therefore, $T_{\max,k}$ has advantages in overcoming the masking effect if $m$ is large. we formally formulate a multiple testing problem to test the influentialness of individual observations, with null hypotheses $H_{0,k}$: the $k$-th observation is non-influential, $1 \le k \le n$. by (ii) of theorem [th3] and the above discussions, we can estimate the set of the influential observations as the set of points whose p-values, computed under $H_{0,k}$ from the $\chi^2_q$ null, fall below thresholds determined by the specific procedure used to control the error rate.
here the thresholds can be independent of the data if we aim to control the familywise error rate by the Bonferroni test. alternatively, the thresholds can be data-dependent if we want to control the false discovery rate (FDR) at a level $\alpha$. for example, for the Benjamini-Hochberg procedure, the hypotheses with the $\hat{k}$ smallest p-values are rejected, where $\hat{k}$ is the largest $k$ such that $p_{(k)} \le \alpha k / n$ and $p_{(1)} \le \cdots \le p_{(n)}$ are the ordered p-values. we now state the theory for the Benjamini-Hochberg procedure and will use it later for numerical illustration, although other procedures developed for controlling FDR can also be used. proposition [proposition2]: suppose that the Benjamini-Hochberg procedure is used to control FDR at level $\alpha$. if the max-unmask condition in (ii) of theorem [th3] holds with the threshold replaced by the constant defined there, then under the conditions in lemma [lemma1], all influential points are identified with probability tending to one. note that this constant, discussed further after theorem [th3], is independent of the data. proposition [proposition2] shows that all the influential points will be identified as influential with high probability; that is, the true positive rate is well controlled. in addition, if the condition in (i) of theorem [th3] holds, there will be no swamping effect, and the statistic under the null follows the $\chi^2_q$ distribution. when the Benjamini-Hochberg procedure is applied and there is no swamping effect, the estimated false positive rate (FPR) will be controlled. however, the condition in (i) is strong and may fail if the joint effect of the influential observations does not converge to zero; in this case, the FPR may be out of control. to summarize, the detection procedure based on the max statistic is effective in overcoming the masking effect, but it is somewhat aggressive in that the FPR may not be controlled well without strong conditions. on the other hand, we point out that the procedure based on $T_{\max}$ is computationally efficient compared with that based on $T_{\min}$ below. we have argued that the $T_{\min}$ statistic is effective in alleviating the swamping effect; we formally state this in the following theorem. theorem [th4]: under the assumptions of lemma [lemma1], the following two conclusions hold. * (i) under a mild rate condition, for any non-influential point, the min statistic converges to the $\chi^2_q$ null. * (ii) for any influential point, if the min-unmask condition holds, then it is detected with probability tending to one, up to a small constant. compared with (i) of theorem [th3], the condition required in (i) of theorem [th4] is much weaker; therefore, the $T_{\min}$ statistic is less sensitive to the swamping effect. on the other hand, the minimum joint effect of the influential observations is involved in the min-unmask condition in (ii), which is much stronger than the max-unmask condition in (ii) of theorem [th3]. that is, an influential observation will not be identified as influential unless its signal is very strong. thus, the min statistic is efficient in preventing the swamping effect but may be conservative in identifying influential points. combined with the result in section 3.1 that the max statistic is effective in overcoming the masking effect but is aggressive, we conclude that the max statistic and the min statistic are complementary to each other. if the min-unmask condition holds for all influential points simultaneously, then they will all be detected correctly when a suitable error control procedure is used; for example, similar to proposition [proposition2], one can show that the Benjamini-Hochberg procedure can correctly detect the influential observations. however, the min-unmask condition is very strong and may not be satisfied for all influential points simultaneously. we provide a sufficient condition for this condition to hold.
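for completeness, here is a standard implementation of the Benjamini-Hochberg rule described above; this is the textbook procedure rather than anything specific to this paper.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """return a boolean mask of rejected hypotheses, controlling FDR at alpha."""
    pvals = np.asarray(pvals)
    n = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, n + 1) / n
    k_hat = np.nonzero(below)[0].max() + 1 if below.any() else 0
    reject = np.zeros(n, dtype=bool)
    reject[order[:k_hat]] = True
    return reject
```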
without loss of generality, we assume the influential observations come first and write their effects $E_{(1)} \ge E_{(2)} \ge \cdots$ ranked in decreasing order. proposition [prop3]: if the ordered effects satisfy a uniform strengthening of the min-unmask condition, then the min-unmask condition holds simultaneously for all the influential points. the condition in proposition [prop3] is strong: when the number of influential points is large, it does not require any single effect to be too small, but it may nonetheless be violated easily. a remedy is to sequentially remove the influential observations that have been detected so far and then apply the detection procedure recursively on the remaining data, as we explain below. to simplify the description, we introduce some notation. for any subset of the data and any observation outside it, we can draw at random, with replacement, random subsets of the same cardinality as before. similar to the earlier construction, we define the generalized influence measures and the min statistic on this reduced dataset, together with the minimum joint effect of the influential observations whose indices remain; obviously, when the reduced dataset is the full data, these quantities coincide exactly with the earlier ones. generally, suppose that the influential observations can be separated into several groups in successive order of effect size, with group boundaries $m_0 = 0 < m_1 < \cdots < m_L$. for simplicity, we assume that the thresholds are independent of the data and that the sufficient condition in proposition [prop3] holds within each group, that is, $$E_{(m_j)}^{1/2} > r\,E_{(m_{j-1}+1)}^{1/2} + \big(\chi^2_{1-\alpha}(q)\big)^{1/2}, \qquad 1 \le j \le L, \qquad \text{(gmin-unmask)}$$ which is referred to as the gmin-unmask condition for simplicity. then, similarly to the argument of proposition [prop3], we see that the min-unmask condition holds simultaneously for every influential point in the first group on the full dataset; consequently, their min statistics will be larger than the threshold with high probability. if the influential observations in the earlier groups are detected correctly and removed sequentially, the influential observations in each subsequent group can be detected successfully with high probability. we remark that the gmin-unmask condition is much weaker than the condition in proposition [prop3]. this motivates us to consider the following multi-round procedure. define the set of influential observations identified in the $j$-th round analogously to section 3.1, with thresholds depending on the specific procedure used. finally, we estimate the influential set by the union of the sets identified over the rounds, stopping at the first round in which no new points are flagged; let the false positive rate associated with this estimate be defined accordingly. proposition [proposition3]: suppose that (c6) holds and that FDR is controlled at a fixed level in each round; then the overall FDR of the multi-round estimate remains controlled at that level. although the above iterative procedure can improve the ability of the min statistic to overcome the masking effect, requiring only the weaker gmin-unmask condition in (gmin-unmask), the computation will be more costly if the number of rounds is large. on the other hand, the gmin-unmask condition is easier to satisfy for a larger number of groups $L$; theoretically, $L$ can be as large as the number of influential points, in which case the gmin-unmask condition reduces to a pairwise separation requirement that is much weaker than the condition in proposition [prop3]. however, a larger $L$ demands more intensive computing, and if an early stopping strategy is adopted, the procedure may still suffer from the masking effect.
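a rough sketch of the multi-round procedure, assuming the helper functions from the earlier sketches and, again, an assumed $\chi^2_q$ normalization; the exact scaling and thresholds should follow the theory above.

```python
import numpy as np
from scipy.stats import chi2

def multi_round_min_detection(X, Y, q, m=100, alpha=0.05, max_rounds=10):
    """repeatedly test the remaining points with the min statistic, removing
    those declared influential in each round (the gmin-unmask regime)."""
    active = list(range(len(Y)))
    flagged = []
    for _ in range(max_rounds):
        Xa, Ya = X[active], Y[active]
        mins = np.array([min_max_statistics(Xa, Ya, i, m)[0]
                         for i in range(len(active))])
        n_sub = len(active) // 2 + 1
        pvals = chi2.sf(n_sub ** 2 * mins, df=q)   # assumed normalization
        rej = benjamini_hochberg(pvals, alpha)
        if not rej.any():
            break
        flagged += [a for a, r in zip(active, rej) if r]
        active = [a for a, r in zip(active, rej) if not r]
    return flagged
```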
as a quick summary, the $T_{\max}$ statistic is more efficient in dealing with the masking effect, because the strength of the influential observations required by the max-unmask condition in (ii) of theorem [th3] is much weaker than the gmin-unmask condition (gmin-unmask) required by $T_{\min}$ when $m$ is large. moreover, any procedure based on $T_{\max}$ is computationally efficient, identifying the influential observations in just one round. however, it may suffer from the swamping effect if the strong condition of theorem [th3] is violated. on the other hand, the estimate based on the $T_{\min}$ statistic can maintain a good FPR at the expense of more intensive computation. taking advantage of both statistics, we propose the following computationally efficient algorithm for identifying a clean set that contains no influential points and can serve as the benchmark for assessing the influence of other points. *min-max algorithm for estimating a clean set*. step 0: let the active set be the full data and fix a small number $K$; repeat steps 1 and 2 until stop. * *min-step*. for the data indices in the active set, compute the min statistics and remove the points declared influential; alternatively, we may simply remove the set of indices with the $K$ smallest p-values for some small number $K$. update the active set. * *max-step*. estimate the influential set as in section 3.1 based on the observations in the active set and denote its complement as an estimate of the clean set. if the clean set is large enough, then stop; otherwise, go to the min-step. this algorithm identifies a clean dataset containing no influential points, with cardinality at least a prespecified fraction of the sample, by successively removing potential influential points. here the thresholding is specified by the procedure that controls the error rate and can be determined in the same way as in section 3.1. the main rationale of this algorithm is, as argued, that the max statistic is aggressive in declaring points influential while the min statistic is conservative. we first run a min-step to eliminate those influential observations with strong signals, to alleviate the swamping effect. combined with the efficiency of the max statistic in overcoming the masking effect, it is very likely that a clean set of large size is obtained in one iteration. if the clean set is not large enough, we run the min-step again to remove further influential observations with strong signals. in our numerical studies, we find that this algorithm is computationally very efficient, usually stopping in one or two rounds. with some abuse of notation, write the final clean set obtained by the min-max algorithm as the benchmark set; its complement is then an estimate of the set containing all potential influential observations. however, this complement may still contain non-influential observations, as the procedure for obtaining a clean set only aims to find a subset of the non-influential points. a further step checks, if necessary, whether any point in the complement is truly influential. this step, however, is easy since we now have a clean dataset. we outline the exact procedure: for any candidate point, consider the clean data with and without that point appended; we then compute the statistic as in section 2, where the two cross-correlation estimates are computed on these two datasets, respectively.
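the min-max algorithm can be sketched as follows; `k_drop`, the `n // 2` stopping size and the $\chi^2$ normalization are assumptions in this illustration, standing in for the error-rate-controlled choices in the text.

```python
import numpy as np
from scipy.stats import chi2

def min_max_clean_set(X, Y, q, m=100, alpha=0.05, k_drop=5):
    """alternate a min-step (drop the k_drop most extreme points under the min
    statistic) and a max-step (flag points via the max statistic); stop once
    the unflagged points cover at least half of the data."""
    n = len(Y)
    active = list(range(n))

    def stats_on(idx, which):              # which: 0 -> min, 1 -> max
        Xa, Ya = X[idx], Y[idx]
        return np.array([min_max_statistics(Xa, Ya, i, m)[which]
                         for i in range(len(idx))])

    while True:
        n_sub = len(active) // 2 + 1
        # min-step: remove points with the smallest p-values under T_min
        pv_min = chi2.sf(n_sub ** 2 * stats_on(active, 0), df=q)
        drop = set(np.argsort(pv_min)[:k_drop])
        active = [a for i, a in enumerate(active) if i not in drop]
        # max-step: points not flagged by T_max form the clean-set estimate
        pv_max = chi2.sf(n_sub ** 2 * stats_on(active, 1), df=q)
        flagged = benjamini_hochberg(pv_max, alpha)
        clean = [a for a, f in zip(active, flagged) if not f]
        if len(clean) >= n // 2 or len(active) <= n // 2:
            return clean
```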
since the clean set is a good estimate of the clean data containing no influential points, this leave-one-out approach will be effective for testing the multiple null hypotheses of non-influentialness. if the clean set is good, then according to the results in section 2, the statistic will follow the $\chi^2_q$ distribution under the null by theorem [th1]. the Benjamini-Hochberg procedure can then be applied to control FDR, and those points whose corresponding hypotheses are rejected by the FDR procedure can be labeled as influential observations. the algorithm for detecting multiple influential observations, called the min-max-checking algorithm, is summarized as follows. *min-max-checking algorithm* * estimate a clean subset by the min-max algorithm; * check, for each point outside the clean subset, whether the observation is influential. we evaluate the performance of MIP for detecting multiple influential points and compare it to HIM whenever possible. because HIM is developed only for a scalar response and does not use any group deletion procedure, it is not applied to the multiple response examples. throughout the simulation study, we fix the sample size and the number of predictors, and we generate observations from $$\mathbf{y}_i = \mathbf{B}^\top\mathbf{x}_i + \boldsymbol{\epsilon}_i, \qquad 1 \le i \le n. \qquad \text{(model1)}$$ we then replace the first $n_{\inf}$ points by observations generated differently; the resulting dataset thus may contain influential points. the coefficient matrix $\mathbf{B}$ and how the influential observations are generated are specified below. we evaluate performance by assessing the success in identifying influential and non-influential points, the accuracy in estimating $\mathbf{B}$ in the model, and the success in identifying the support of $\mathbf{B}$. given the index set of the influential points and its estimate by either HIM or MIP, we first compute TPR, the true positive rate for influential observation detection, and FPR, the corresponding false positive rate. denoting FNR as the false negative rate, we also compute the f-score; obviously, the larger the f-score, the better the corresponding method. we compare estimates of $\mathbf{B}$ based on the full data (full), on a reduced dataset after HIM is applied (HIM), and on a reduced dataset after MIP is applied (MIP). in this paper, we estimate $\mathbf{B}$ via the lasso in the scalar response case, or via the group lasso, treating the rows of $\mathbf{B}$ as groups, in the multiple response case. the accuracy of the estimation is evaluated by an estimation-error metric, and we compare the accuracy of full, HIM and MIP. for the support, we report the success in identifying the zero and nonzero rows of $\mathbf{B}$. in the following simulations, the random subsets all have the same cardinality, about half the sample. we repeat each experiment independently many times and report the means of the quantities defined above. in implementing MIP, we fix the number of random subsets for examples 2-4; for example 1, we take two different values to assess the effect of the number of subsets. in table 2, because the FPR of HIM can be large, we do not compute the coefficient estimates based on the reduced data when the FPR is large, to save space. finally, the FDR level is fixed throughout. in the following examples, examples 1 and 2 focus on scalar responses and examples 3 and 4 on multiple responses. we simulate the data such that there exists a strong masking effect in example 1 and a strong swamping effect in example 2. denote by $\mathbf{0}_p$ a $p$-dimensional zero vector and by $\mathbf{1}_p$ a $p$-dimensional vector of ones. *example 1* (strong masking effect). we first generate non-influential observations from (model1).
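putting the pieces together, here is a sketch of the full MIP pipeline with the final checking step; everything downstream of `min_max_clean_set` follows the leave-one-out-against-the-benchmark idea described above, with the same assumed normalization.

```python
import numpy as np
from scipy.stats import chi2

def mip_detect(X, Y, q, m=100, alpha=0.05):
    """MIP: estimate a clean benchmark set, then test each remaining point by
    comparing cross-correlation estimates with and without that point."""
    n, p = X.shape
    clean = sorted(min_max_clean_set(X, Y, q, m=m, alpha=alpha))
    candidates = [k for k in range(n) if k not in set(clean)]
    pvals = []
    for k in candidates:
        with_k = clean + [k]
        Tk = np.sum((xi_matrix(X[with_k], Y[with_k])
                     - xi_matrix(X[clean], Y[clean])) ** 2) / p
        pvals.append(chi2.sf(len(with_k) ** 2 * Tk, df=q))  # assumed scaling
    rej = benjamini_hochberg(np.array(pvals), alpha)
    return [k for k, r in zip(candidates, rej) if r]
```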
given the masking parameter, we then replace the first $n_{\inf}$ non-influential observations by influential ones constructed from subsets of the clean observations chosen independently with replacement. this example is designed such that the influential observations are clustered together, and consequently many influential observations are masked by other influential ones; HIM, based on leave-one-out, will likely fail to identify many influential points. the simulation results are presented in table [table1] and plot (a) of figure [fig2]. [table 1: simulation results of example 1 with different values of the masking parameter.] from the computing-time comparison, we see that in all cases the computational time of the non-parallel algorithm is about 2.5 times that of the parallel algorithm. note that only 4 CPUs are used for parallel computing in this simple comparison; a much higher computational gain is expected if more CPUs are used. *models beyond linear models*. the MIP procedure may be extended beyond the linear model, and here we provide a short discussion. motivated by the marginal correlation idea for linear regression, Zhao et al. (2013) discussed an extension of HIM to generalized linear models by replacing the marginal correlation with the marginal maximum likelihood estimator, which has been used for variable screening. the group deletion procedure employed by MIP may then be used for defining new min and max statistics. however, substantial theoretical challenges exist in understanding their asymptotic distributions. generalizing the method in this paper to other models is an interesting topic for future research. *final comment*. in conclusion, we hope that this paper can bring to the attention of the statistics community the importance of influence diagnosis, and how one might think about defining influence and devising automatic procedures for assessing influence in a theoretically justified fashion. with the rapid advance of big data analytics, we believe that the issue of influence diagnosis will only become more relevant, and we hope that this paper can serve as a catalyst to stimulate more research in this area. due to the complexity of the statistic, we complete the proof of theorem [th1] in two major steps. in step 1, we establish the asymptotic distribution of an intermediate statistic $\dot{T}_k$, defined assuming that the population means and variances are known; in step 2, we derive the results for $T_k$ by analyzing the difference between $T_k$ and $\dot{T}_k$. here $\dot{T}_k$ is used only to simplify the proof. when the means and variances are known, without loss of generality, we assume they are zero and one, respectively, and define $\dot{T}_k$ by replacing all estimated quantities in $T_k$ with their population counterparts. lemma [lemmadotdk]: under assumptions (c1)-(c4), the conclusion of theorem [th1] holds for $\dot{T}_k$. the proof of lemma [lemmadotdk] is similar to that of Zhao et al. (2013), where the response is a scalar variable following a normal distribution. in fact, the main requirement on the response in the proof there is that its eighth moment exists, which holds under the normal assumption. for the multivariate responses considered in this paper, $\boldsymbol{\Sigma}_y$ has bounded eigenvalues by (c4), so the main argument of Zhao et al. (2013) can be applied here with some minor revision; we only give a brief description of the proof. after some algebra, we have, for each $k$ and up to normalizing constants, $$\begin{aligned} \dot{T}_k &= p^{-1}\big\|\dot{\boldsymbol{\Xi}} - \dot{\boldsymbol{\Xi}}^{(k)}\big\|_F^2 = p^{-1}\sum_{j=1}^p\Big\|\sum_{1\le t\le n,\,t\neq k}\dot{\mathbf{y}}_t x_{tj} - \dot{\mathbf{y}}_k x_{kj}\Big\|^2 \\ &= \sum_{t=1}^n \|\dot{\mathbf{y}}_t\|^2 K_{p,tt} + \|\dot{\mathbf{y}}_k\|^2 K_{p,kk} + \sum_{t\neq s}\dot{\mathbf{y}}_t^\top\dot{\mathbf{y}}_s K_{p,ts} - \sum_{t=1,\,t\neq k}^n \dot{\mathbf{y}}_k^\top\dot{\mathbf{y}}_t K_{p,tk} := A_1 + A_2 + A_3 - 2A_4, \end{aligned}$$ where the $K_{p,ts}$ collect the corresponding covariate cross-products. due to assumption (c4) and the rate assumption, the moments of these quantities are controlled.
under assumptions (c1)-(c3), by nearly the same argument used to prove proposition 1 of Zhao et al. (2013), the expectations of the terms above can be computed up to $E(\|\dot{\mathbf{y}}_k\|^2)E(K_{p,kk})$ plus an $o(n^{-2}p^{-1}l_p^{1/2})$ remainder. then $$\begin{aligned} \dot{T}_k &= E(\dot{T}_k) + [\dot{T}_k - E(\dot{T}_k)] = E(\dot{T}_k) + \Big[\sum_{i=1,2,3}\big(A_i - E(A_i)\big) - 2\big(A_4 - E(A_4)\big)\Big] \\ &= E(\|\dot{\mathbf{y}}_k\|^2)E(K_{p,kk}) + \big(\|\dot{\mathbf{y}}_k\|^2 - E(\|\dot{\mathbf{y}}_k\|^2)\big)E(K_{p,kk}) + o_p(p^{r/2-1}) + o_p(n^{-3/2}) \\ &= \|\dot{\mathbf{y}}_k\|^2 E(K_{p,kk}) + o_p(p^{r/2-1}) + o_p(n^{-3/2}), \end{aligned}$$ so that, after normalization, $\dot{T}_k = \|\dot{\mathbf{y}}_k\|^2 + o_p(1)$, where we use the fact that $\|\dot{\mathbf{y}}_k\|^2$ follows a $\chi^2_q$ distribution. since the remainder is negligible, the conclusion follows; this completes the proof of lemma [lemmadotdk]. _proof of theorem [th1]_. we now establish the asymptotic results for $T_k$ by analyzing the difference between $T_k$ and $\dot{T}_k$. the proof is similar to that of proposition 1 in Zhao et al. (2013). for clarity and simplicity, we work with the general version obtained by replacing the population means and variances with generic estimators in the definitions; obviously, this does not change the asymptotic distribution. when the means and variances are unknown, we use the corresponding estimates and obtain the estimated standardized responses and covariates; the corresponding statistics are denoted with hats. we show that the difference between the hatted and dotted statistics is small, with an argument similar to proposition 2 of Zhao et al. (2013), where $q = 1$. we briefly review the conditions required in proposition 2 of Zhao et al. (2013). assumptions (c.2) and (c.3) there concern the covariates; the requirement on the response in Zhao et al. (2013) is a moment bound, which is met when the response is normal. furthermore, the estimators of the parameters are required to satisfy certain moment conditions in (c.4) there. in the multivariate case considered in this paper, the argument still holds with some minor revisions. recall that assumptions (c2) and (c3) are exactly the same as (c.2) and (c.3) in Zhao et al. (2013); therefore the requirement on the covariates in the proof of Zhao et al. (2013) is met under (c2) and (c3) of this paper. there are only two points to be handled: (i) we need to show that the eighth moment of the standardized response exists; (ii) assumption (c5) in this paper is slightly different from (c.4) of Zhao et al. (2013), where condition (c.4) is used to establish a moment bound on the standardization errors; therefore, we need to show that this bound still holds under (c5). _step 1_. we prove point (ii). recalling that the population means are zero and variances one, we can write $$\frac{x_{tj} - \hat{\mu}_{xj}}{\hat{\sigma}_{xj}} = x_{tj} + x_{tj}\Big(\frac{1}{\hat{\sigma}_{xj}} - 1\Big) + \frac{-\hat{\mu}_{xj}}{\hat{\sigma}_{xj}} := x_{tj} + e_{1j} + e_{2j}.$$ we then have $$\hat{K}_{p,tt} - K_{p,tt} = 2p^{-1}\sum_{j=1}^p x_{tj}e_{1j} + 2p^{-1}\sum_{j=1}^p x_{tj}e_{2j} + p^{-1}\sum_{j=1}^p (e_{1j} + e_{2j})^2 := 2F_{1n} + 2F_{2n} + F_{3n}.$$ we only analyze $F_{1n}$; the other terms can be handled similarly. due to the normality of the covariates, the required moments exist, and by (c5) the fourth moments of the estimation errors are finite. combining these facts, $F_{1n}$ is of the required small order, and by a similar approach one can show that $F_{2n}$ and $F_{3n}$ are of the same order. therefore, the moment bound of Zhao et al. (2013) still holds. _step 2_. we now prove point (i) by showing that the eighth moment of the standardized response exists. recalling the definition, it is easy to see that $$\tilde{\mathbf{y}}_t = \hat{\boldsymbol{\Sigma}}_y^{-1/2}(\mathbf{y}_t - \hat{\boldsymbol{\mu}}_y) = \boldsymbol{\Sigma}_y^{-1/2}(\mathbf{y}_t - \boldsymbol{\mu}_y) + \hat{\boldsymbol{\Sigma}}_y^{-1/2}(\boldsymbol{\mu}_y - \hat{\boldsymbol{\mu}}_y) + \big(\hat{\boldsymbol{\Sigma}}_y^{-1/2} - \boldsymbol{\Sigma}_y^{-1/2}\big)(\mathbf{y}_t - \boldsymbol{\mu}_y) := \dot{\mathbf{y}}_t + \mathbf{q}_y + \mathbf{b}_n.$$ since $\dot{\mathbf{y}}_t$ is normal, its eighth moment exists. by assumption (c4), we have $$\|\mathbf{b}_n\| \le \big\|\hat{\boldsymbol{\Sigma}}_y^{-1/2} - \boldsymbol{\Sigma}_y^{-1/2}\big\|\big(\|\mathbf{y}_t - \boldsymbol{\mu}_y\| + \|\boldsymbol{\mu}_y - \hat{\boldsymbol{\mu}}_y\|\big), \qquad \big\|\hat{\boldsymbol{\Sigma}}_y^{-1/2} - \boldsymbol{\Sigma}_y^{-1/2}\big\|_F \le \big\|\hat{\boldsymbol{\Sigma}}_y^{-1/2}\boldsymbol{\Sigma}_y^{1/2} - \mathbf{I}_q\big\|_F\,\big\|\boldsymbol{\Sigma}_y^{-1/2}\big\|_F.$$ by assumptions (c4) and (c5), the moments of these terms are finite, and consequently, similar to the proof in Zhao et al. (2013), the eighth moment of $\tilde{\mathbf{y}}_t$ exists. this completes the proof of theorem [th1]. observe that the main idea of the next proof is to show that the two terms on the right-hand side are small.
for simplicity, we assume that each element of the covariate vector has population mean 0 and variance 1. before the proof, we review some facts. by lemma 1 of Zhao et al. (2013), the relevant covariate cross-product moments are of the standard orders and are uniformly controlled. let $W_{x,n}^{(1)}$ and $W_{x,n}^{(2)}$ denote the maximum estimation errors of the covariate variances and means, respectively. by (c7) and a simple calculation, we have, for some constant $C$, $$P\big(n^{1/2}W_{x,n}^{(1)} > C\sqrt{\log p}\big) \le p^{-3}, \qquad P\big(n^{1/2}W_{x,n}^{(2)} > C\sqrt{\log p}\big) \le p^{-3}, \qquad \text{(eq-uni-sigm-mu)}$$ that is, $$W_{x,n}^{(1)} = O_p\big(\sqrt{\log p}\,n^{-1/2}\big), \qquad W_{x,n}^{(2)} = O_p\big(\sqrt{\log p}\,n^{-1/2}\big). \qquad \text{(eq-max-sigm-mu)}$$ similarly, let $\dot{\mathbf{y}}_t = \boldsymbol{\Sigma}_y^{-1/2}(\mathbf{y}_t - \boldsymbol{\mu}_y)$, which follows a standard normal distribution. then $$\tilde{\mathbf{y}}_t = \hat{\boldsymbol{\Sigma}}_y^{-1/2}(\mathbf{y}_t - \hat{\boldsymbol{\mu}}_y) = \dot{\mathbf{y}}_t + \big(\hat{\boldsymbol{\Sigma}}_y^{-1/2}\boldsymbol{\Sigma}_y^{1/2} - \mathbf{I}_q\big)\dot{\mathbf{y}}_t + \hat{\boldsymbol{\Sigma}}_y^{-1/2}(\boldsymbol{\mu}_y - \hat{\boldsymbol{\mu}}_y) := \dot{\mathbf{y}}_t + \mathbf{U}_n\dot{\mathbf{y}}_t + \boldsymbol{\delta}_n,$$ where $\mathbf{U}_n$ and $\boldsymbol{\delta}_n$ are defined accordingly, with magnitudes $W_y^{(1)}$ and $W_y^{(2)}$. note that $W_y^{(1)}$ is small according to the assumption in (c7), and similarly $W_y^{(2)}$ is small by (c4) and (c7). recalling the definitions of $K_{p,t_1t_2}$ and $F_{p,t_1t_2}$ and their estimates, define $A_{t_1t_2}$ so that $$|A_{t_1t_2}| \le |\hat{K}_{p,t_1t_2}||\hat{F}_{p,t_1t_2} - F_{p,t_1t_2}| + |F_{p,t_1t_2}||\hat{K}_{p,t_1t_2} - K_{p,t_1t_2}|.$$ by assumption (c6), the subset sizes $n_{B_r}$ all have the same order as $n$. by simple calculations, it then follows that $$\max_{1\le r\le m}|\cdot| \le \Big(\max_r n_{B_r}^{-1}\Big)\max_{1\le t\le n}|A_{tt}| + \max_{t_1\neq t_2,\,1\le t_1,t_2\le n}|A_{t_1t_2}|. \qquad \text{(max-r-dotr)}$$ _step 3_. we study the terms above. because the $\|\dot{\mathbf{y}}_t\|^2$ are i.i.d. variables with a $\chi^2_q$ distribution, the tail probability of the $\chi^2$ distribution controls their maximum, and by the Cauchy-Schwarz inequality the cross terms are controlled. in addition, by the results in step 1, applying the Cauchy-Schwarz and triangle inequalities, we have $$\max_t\|\tilde{\mathbf{y}}_t - \dot{\mathbf{y}}_t\|^2 \le \max_t\|\mathbf{U}_n\dot{\mathbf{y}}_t + \boldsymbol{\delta}_n\|^2 \le 2\Big[\big(\max_t\|\dot{\mathbf{y}}_t\|^2\big)W_y^{(1)} + \big(W_y^{(2)}\big)^2\Big] = O_p(\log(n)/n) + O_p(n^{-1}) = O_p(\log(n)/n).$$ moreover, $$\max_{t_1,t_2}\big|\hat{F}_{p,t_1t_2} - F_{p,t_1t_2}\big| \le 2\max_{t_1,t_2}\big|\dot{\mathbf{y}}_{t_1}^\top(\tilde{\mathbf{y}}_{t_2} - \dot{\mathbf{y}}_{t_2})\big| + \max_{t_1,t_2}\big|(\tilde{\mathbf{y}}_{t_1} - \dot{\mathbf{y}}_{t_1})^\top(\tilde{\mathbf{y}}_{t_2} - \dot{\mathbf{y}}_{t_2})\big| \le 2\max_t\|\dot{\mathbf{y}}_t\|\max_t\|\tilde{\mathbf{y}}_t - \dot{\mathbf{y}}_t\| + \max_t\|\tilde{\mathbf{y}}_t - \dot{\mathbf{y}}_t\|^2 = O_p\big(\log(n)/\sqrt{n}\big).$$ _step 3.2_. we bound the covariate term; it is easy to see that $$\max_{t_1,t_2}\big|\hat{K}_{p,t_1t_2} - K_{p,t_1t_2}\big| = \max_{t_1,t_2}\Big|p^{-1}\Big[\mathbf{x}_{t_1}^\top(\hat{\mathbf{x}}_{t_2} - \mathbf{x}_{t_2}) + (\hat{\mathbf{x}}_{t_1} - \mathbf{x}_{t_1})^\top\mathbf{x}_{t_2} + (\hat{\mathbf{x}}_{t_1} - \mathbf{x}_{t_1})^\top(\hat{\mathbf{x}}_{t_2} - \mathbf{x}_{t_2})\Big]\Big|.$$ since the covariates are standard normal, combining with (eq-max-sigm-mu) in step 1 bounds the first two terms; by similar arguments, the cross-error term $p^{-1}|(\hat{\mathbf{x}}_{t_1} - \mathbf{x}_{t_1})^\top(\hat{\mathbf{x}}_{t_2} - \mathbf{x}_{t_2})|$ is bounded by the maximum coordinate errors, where the second term has been analyzed in step 3.2 and the first term is controlled since the covariates are standard normal, by arguments similar to before. thus, we have the conclusion of step 3.3. finally, combining all the results in step 3 with (max-r-dotr), we obtain the conclusion of step 3, and part (i) follows; this completes the proof of part (i). for the expectation bounds, $$T_1 \le (n\gamma_1)^{-2}\,n\,E|F_{t_1t_1}K_{p,t_1t_1}| \le (n\gamma_1)^{-2}\,n\,\big[E(F_{t_1t_1})^2\big]^{1/2}\big[E(K_{p,t_1t_1})^2\big]^{1/2},$$ which is bounded by the moment conditions, so that $T_1$ is negligible. similarly, we have $$T_2 := E\Big\{\Big[\min_r n_{B_r}\Big]^{-2}\sum_{1\le t_1,t_2\le n,\,t_1\neq t_2}|F_{t_1t_2}K_{p,t_1t_2}|\Big\} \le (n\gamma_1)^{-2}\,n(n-1)\,E|F_{t_1t_2}K_{p,t_1t_2}| \le \gamma_1^{-2}\,\big[E(F_{t_1t_2})^2\big]^{1/2}\big[E(K_{p,t_1t_2})^2\big]^{1/2},$$ where the Cauchy-Schwarz inequality is used in the second inequality.
by the Cauchy-Schwarz inequality, the relevant second moments are finite. denote the rank of each influential point's p-value in the ordered series of p-values. according to the rejection rule of the Benjamini-Hochberg procedure, all hypotheses whose ordered p-values fall below the data-driven threshold will be rejected. noting the bound on the p-value ranks of the influential points, we have, in probability tending to one, that the number of rejections is at least $n_{\inf}$; therefore, all influential points will be rejected by the Benjamini-Hochberg procedure. the proof of theorem [th4] is similar to that of theorem [th3]. we first prove conclusion (i) of theorem [th4]: by the decomposition of the min statistic and the vanishing of the error terms, the stated convergence holds for any non-influential point. we now turn to conclusion (ii) of theorem [th4]. according to the argument in the proof of proposition [proposition2], the relevant constant is independent of the data; combining this with the min-unmask assumption, we have the conclusion as desired. finally, we prove proposition [prop3]. the sufficient condition in proposition [prop3] is derived from the ordering of the effects and the min-unmask condition of theorem [th4]. for proposition [proposition3], we consider only the case of two rounds; the proof of the general case is similar. denote the numbers of hypotheses rejected in rounds 1 and 2, respectively. since the FDR level is controlled in each round, the number of falsely rejected hypotheses in each round is bounded by the level times the total number of rejections in that round. therefore, the FDR of the combined estimate is still controlled at the same level, where the false positives are the non-influential observations falsely labeled as influential and the true positives are the influential observations correctly identified; the final bound uses the assumption in (c6).

chiang, a. p., beck, j. s., yen, h. j., tayeh, m. k., scheetz, t. e., swiderski, r. e., nishimura, d. y., braun, t. a., kim, k. y., huang, j., elbedour, k., carmi, r., slusarski, d. c., casavant, t. l., stone, e. m., and sheffield, v. c. (2006). homozygosity mapping with snp arrays identifies trim32, an e3 ubiquitin ligase, as a bardet-biedl syndrome gene (bbs11). proceedings of the national academy of sciences of the united states of america, 103, 6287-6292.

nurunnabi, a. a. m., hadi, a. s., and imon, a. h. m. r. (2014). procedures for the identification of multiple influential observations in linear regression. journal of applied statistics, 41, 1315-1331.

yuan, m., ekici, a., lu, z., and monteiro, r. (2007). dimension reduction and coefficient estimation in multivariate linear regression. journal of the royal statistical society, series b, 69, 329-346.
|
influence diagnosis should be routinely conducted when one aims to construct a regression model. despite its importance, the problem of influence quantification is severely under-investigated in a high-dimensional setting, mainly due to the difficulty of establishing a coherent theoretical framework and the lack of easily implementable procedures. although some progress has been made in recent years, existing approaches are ineffective in detecting multiple influential points, especially due to the notorious "masking" and "swamping" effects. to address this challenge, we propose a new group deletion procedure referred to as MIP, by introducing two novel quantities named the max and min statistics. these two statistics have complementary properties, in that the max statistic is effective for overcoming the masking effect while the min statistic is useful for overcoming the swamping effect. combining their strengths, we further propose an efficient algorithm that can detect influential points with prespecified guarantees. for wider applicability, we develop the new proposal for the multiple response regression model, encompassing the univariate response linear model as a special case. the proposed influential point detection procedure is simple to implement, efficient to run, and enjoys attractive theoretical properties. its effectiveness is verified empirically via an extensive simulation study and data analysis. *keywords*: false discovery rate, group deletion, high-dimensional linear regression, influential point detection, masking and swamping, multiple responses, robust statistics. *running title*: multiple influential point detection.
|
online social media has become an ecosystem of overlapping and complementary social networking services, inherently multiplex in nature, as multiple links may exist between the same pair of users. multiplexity is a well studied property in the social sciences, and it has been explored in social networks from renaissance florence to the internet age. despite the broad contextual differences, multi-relational ties are consistently found to exhibit greater intensity of interactions across different communication channels, and therefore a stronger bond. nevertheless, _there is a lack of research about online social networks and their value from a multiplex perspective_. recently, empirical models of multilayer networks have emerged to address the multi-relational nature of social networks. in such models, interactions are considered as layers in a systemic view of the social network. we adopt such a model in our analysis, where we shift the concepts of a link and a neighbourhood to encompass more than one network. this allows us to study interactions and structural properties across online social networks (OSNs), addressing the need for further understanding of their complementary and overlapping nature, and of multiplexity online. although there have been some recent comparative studies of multiple online social networks and their intersection, the application of multiplex network properties to OSNs is yet to be substantially addressed. in this work, we explore intersecting networks, multiplex ties, and their application to link prediction across OSNs. link prediction systems are key components of social networking services due to their practical applicability to friend recommendations and social network bootstrapping, as well as to understanding the link generation process. link prediction is a well-studied problem, explored in the context of both OSNs and location-based social networks (LBSNs). however, only very few link prediction works tackle multiple networks at a time, while _most link prediction systems only employ features internal to the network under prediction_, without considering additional link information from other OSNs. our main contributions can be summarised as follows: * we generalise the notion of a _multilayer online social network_, and extend definitions of neighbourhood to span multiple networks, adapting measures of overlap in social networks, such as the adamic/adar coefficient, to the multilayer context. * we find that _pairs with links on both Twitter and Foursquare exhibit significantly higher interaction on both social networks_ in terms of the number of mentions and colocations within the same venues, as well as a lower distance and a higher number of common hashtags in their tweets. * a significantly _higher overlap can be observed between the neighbourhoods of nodes with links on both networks_, in particular with relation to the adamic/adar measure of neighbourhood overlap, which is significantly more expressed in the multilayer neighbourhood. * in our evaluation, _we predict Twitter links from Foursquare features and vice versa_, and we achieve this with AUC scores of up to 0.86 on the different datasets. in predicting links which span both networks, we achieve the highest AUC score of 0.88 from our multilayer feature set, _proving the multilayer construct to be a useful tool for social bootstrapping and friend recommendations_.
the remainder of this work details these contributions and summarises related work, concluding with a discussion of the implications, limitations, and applications of the proposed framework. our work relates to three main areas: multi-relational social networks, media multiplexity, and link prediction in online social networks. we summarise the state of the art in these areas in the following sections. multi-relational or multilayer networks have been explored in the context of a wide range of systems, from global air transportation to massively multiplayer online games. a comprehensive review of multilayer network models can be found in the literature. in the context of social networks, it is generally accepted that the more information we can obtain about the relationship between people, the more insight we can gain. a recent large-scale study on the subject has demonstrated the need for multi-channel data when comprehensively studying social networks. despite the observable multilayer nature of the composite OSNs of users, most research efforts have been focused on theoretical modelling, with little to no empirical work exploiting data-driven applications in the domain of multilayer OSNs, especially with respect to how location-based and social interactions are coupled in the online social space. we attempt to fill these gaps in the present work by presenting a generalisable online multilayer framework applied to classic problems such as link prediction in OSNs. our framework is strongly motivated by the theory of media multiplexity, which we review next. media multiplexity is the principle that tie strength is observed to be greater when the number of media channels used to communicate between two people is greater (higher multiplexity). early work studied the effects of media use on relationships in an academic organisation and found that those pairs of participants who utilised more types of media (including email and videoconferencing) interacted more frequently and therefore had a closer relationship, such as friendship. more recently, multiplexity has been studied in light of multilayer communication networks, where the intersection of the layers was found to indicate a strong tie, while single-layer links were found to denote a weaker relationship. the strength of social ties is an important consideration in friend recommendations and link prediction, and we employ the previously understudied multiplex properties of OSNs to such ends in this work. the problem of link prediction was first introduced in the seminal work of Kleinberg et al. and has since been applied in various network domains. for instance, place features in location-based services have been exploited to recommend friendships, and a new model based on supervised random walks has been proposed to predict new links in Facebook. most of these works build on features that are endogenous to the system that hosts the social network of users. in our evaluation, however, we train and test on heterogeneous networks. in a similar spirit, it has been shown that using both location and social information from the same network significantly improves link prediction. our approach differs in that it frames the link prediction task in the context of multilayer networks and empirically shows the relationship between two different systems - Foursquare and Twitter - by mining features from both.
before presenting our framework and analysis , we will next state the research questions we are interested in answering through this work . in light of the related work presented above , our goal is to bridge the gap between multilayer network models , media multiplexity properties , and link prediction systems . more specifically , we address the following research questions in this work : + * rq1 : * _ how do structural properties such as degree extend into the multilayer neighbourhood ? _ we propose a multilayer version of the network neighbourhood , which extends it to multiple networks ( layers ) , and we observe how such structural properties are manifested across twitter and foursquare . + + * rq2 : * _ what are the structural and behavioural differences between single network and multiplex links ? _ in order to understand the value of multiplex links ( users connected on more than one network ) , we observe how they compare to single network links in terms of neighbourhood overlap , twitter interaction , similarity and mobility in foursquare . + + * rq3 : * _ can we use information about links from one layer to predict links on the other ? _ many online social systems suffer from a lack of initial user adoption . although many social networks nowadays incorporate the option of importing contacts from another pre - existing network and copying links , this method does not offer a ranking of users by relevance targeted towards the specific platform . + + * rq4 : * _ can we predict links which exist on more than one network ( i.e. , multiplex links ) ? _ media multiplexity is a valuable source of tie strength information , and has further structural implications , which are of interest to osn services and link prediction systems . we would like to explore the potential of identifying such links for building more successful online communities . + + we will next present our multilayer framework for osns , and study user behaviour and properties across twitter and foursquare , extending our analysis to multiplex links in comparison with single - layer links . we finally integrate this into a link prediction system for osns , where we evaluate the utility of the metrics and features described in this work in the hope of answering the questions posed above .

the network of human interactions is usually represented by a graph where the nodes in set represent people and the edges represent interactions . while this representation has been immensely helpful for the uncovering of many social phenomena , it is focused on a single - layer abstraction of human relations . in this section , we describe a model which represents the multiplexity of osns by supporting multiple friendship and interaction links . we represent the parallel interactions between nodes across osns as a _ multilayer network _ , an ensemble of graphs , each corresponding to an osn . we indicate the -th layer of the multilayer as , where and are the sets of vertices and edges of the graph . we can then denote the sequence of graphs composing the -layer multilayer graph as . the graphs are brought together as a multilayer system by the common members across layers , as illustrated in figure [ fig : mdat ] . multilayer social networks are a natural representation of media multiplexity , as each layer can depict an osn . figure [ fig : multi ] illustrates the case at hand , where there are two osn platforms represented by and . members need not be present in all layers , and the multilayer network is not limited to two layers .
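to make the model concrete , the following sketch shows one minimal way such a two - layer structure could be represented in code ; the class name , the layer labels and the toy users are illustrative assumptions , not part of the original framework . python is used here and in the remaining examples .

```python
import networkx as nx

# a minimal sketch of the multilayer model described above : each osn is a
# layer ( an undirected graph ) , and layers are coupled by shared user ids
class MultilayerNetwork:
    def __init__(self, layer_names):
        self.layers = {name: nx.Graph() for name in layer_names}

    def add_link(self, layer, u, v):
        self.layers[layer].add_edge(u, v)

    def members(self):
        # union of members across layers ; members need not appear in all layers
        return set().union(*(set(g.nodes) for g in self.layers.values()))

m = MultilayerNetwork(["twitter", "foursquare"])
m.add_link("twitter", "alice", "bob")
m.add_link("foursquare", "alice", "bob")   # alice - bob is a multiplex link
m.add_link("twitter", "alice", "carol")    # a single - layer ( twitter only ) link
```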
while each platform can be explored separately as a network in its own right , this does not capture the dimensionality of online social life , which spans across multiple osns . figure [ fig : links ] illustrates three link types as observed in figure [ fig : multi ] for the case of a two layer network . firstly , we define a _ multiplex link _ between two nodes and as a link that exists between them _ at least in two layers _ . second , we say that a _ single - layer link _ between two nodes and exists if the link appears _ only in one layer _ in the multilayer social network . in systems with more layers , multiplexity can take on a value depending on how many layers the link is present on . in the case at hand , given layer and layer , we denote the set of all links present in the multilayer network as , which yields the global connectivity . we also define the set of multiplex links as and the set of all single - layer links on layer only as . these multilayer edge sets can be further extended to the layer network by considering more layers as part of the intersection or union of graphs . the presence of multiplex and single - layer links in the above edge sets defines the multilayer neighbourhood of nodes in the network , as expanded upon next .

following our definition of a multilayer online social network , we can redefine the ego network of a node as the _ multilayer neighbourhood _ . while the simple node neighbourhood is the collection of nodes one hop away from the ego , the multilayer global neighbourhood ( denoted by ) of a node can be derived as the total set of unique neighbours across layers : and their global multilayer degree as : which provides insight into the entire connectivity of nodes across layers , and can therefore be interpreted as a global measure of the immediate degree of a node . we can similarly define the core neighbourhood ( denoted by ) of a node across layers of the multilayer network as : and their core multilayer degree as : where we only consider neighbours which exist across all layers . this simple formulation allows for powerful extensions of existing metrics of local neighbourhood similarity . we can define the overlap ( jaccard similarity ) of two users and s global neighbourhoods as : where the number of common friends is divided by the number of total friends of and . the same can be done for the core degree of two users . the jaccard coefficient , often used in information retrieval , has also been widely used in link prediction . we can further extend our definition of the multilayer neighbourhood to the adamic / adar coefficient for link likelihood , which considers the overlap of two neighbourhoods based on the popularity of common friends ( originally through web pages ) in a single - layer network as : where it is applied to the global common neighbours between two nodes but can be equally applied to their core neighbourhoods . this metric has been shown to be successful in link prediction in its original single - layer form in both social networks and location - based networks .
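as a rough illustration of these definitions , the sketch below computes the global ( union ) and core ( intersection ) neighbourhoods and a multilayer adamic / adar score over plain adjacency dictionaries ; the guard against common neighbours of degree one is our own implementation detail , not something specified in the text .

```python
import math

def layer_neighbours(layers, u):
    # layers : list of adjacency dicts {user: set(of neighbours)} , one per osn
    return [layer.get(u, set()) for layer in layers]

def global_neighbourhood(layers, u):
    return set().union(*layer_neighbours(layers, u))        # union across layers

def core_neighbourhood(layers, u):
    return set.intersection(*layer_neighbours(layers, u))   # intersection across layers

def jaccard(n_u, n_v):
    union = n_u | n_v
    return len(n_u & n_v) / len(union) if union else 0.0

def multilayer_adamic_adar(layers, u, v, neighbourhood=global_neighbourhood):
    # common ( global or core ) neighbours , down - weighted by their own
    # multilayer degree , in the spirit of the original adamic / adar score
    score = 0.0
    for z in neighbourhood(layers, u) & neighbourhood(layers, v):
        deg = len(neighbourhood(layers, z))
        if deg > 1:                      # guard against log(1) = 0
            score += 1.0 / math.log(deg)
    return score
```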
in the present work , we aim to show its applicability to the multilayer space in predicting online social links across and between twitter and foursquare . we will next describe the specific datasets to which we apply this framework .

twitter and foursquare are two of the most popular social networks , both with respect to research efforts and user base . they have distinct broadcasting functionalities - microblogging and check - ins . while twitter can reveal a lot about user interests , foursquare check - ins provide a proxy for human mobility . in foursquare , users check - in to venues that they visit through their location enabled devices , and share their visit or opinion of a place with their friends . foursquare is two years younger than twitter and its broadcasting functionality is exclusively for mobile users ( 50 m to date ) , while 80% of twitter s 284 m users are active on mobile . twitter generally allows anyone to `` follow '' and `` be followed '' , where followers and followed do not necessarily know one another . on the other hand , foursquare supports undirected links , referred to as `` friendship '' . a similar undirected relationship can be constructed from twitter , where a link can be considered between two users if they both follow each other reciprocally . since we are ultimately interested in predicting friendship , we consider only reciprocal twitter links throughout this work .

our dataset was collected from twitter and foursquare in the united states between may and september 2012 , where tweets and check - ins were downloaded for users who had checked - in during that time , and where those check - ins were shared on twitter . this allows us to study the intersection of the two networks through a subset of users who have accounts and are active on both twitter and foursquare , and have chosen to share their check - ins to twitter .

property & new york & chicago & sf & all
& 6,401 & 2,883 & 1,705 & 10,989
& 9,101 & 5,486 & 1,517 & 16,104
& 13,623 & 7,949 & 1,776 & 23,348
& 6,394 & 4,202 & 863 & 11,459
& 4.55 & 6.12 & 2.44 & 4.63
& 1.42 & 1.9 & 0.89 & 1.47
& 2,509,802 & 1,288,865 & 632,780 & 4,431,447
& 228,422 & 105,250 & 46,823 & 380,495
& 24,110 & 11,773 & 6,934 & 42,817

we focus our analysis on the top three cities in terms of activity during the period . table [ tab : datt ] shows the details for each city , in terms of activity and venues , multilayer edges and degrees for each network , where denotes the set of edges which exist on both twitter and foursquare , and and are the sets of edges on twitter only and foursquare only respectively . figure [ fig : sf ] additionally illustrates the case of san francisco , where blue edges represent single - layer links on either foursquare or twitter , and pink edges represent multiplex links on both . we use a fruchterman - reingold graph layout to show the core - periphery structure of the network , with larger nodes having a higher global degree .
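the reciprocity filter described above could be derived from directed follower lists along the lines of the following sketch ; the input format ( a dictionary from each user to the set of accounts they follow ) is an assumption for illustration .

```python
def reciprocal_edges(followings):
    # followings : dict user -> set of users they follow ( directed twitter graph ) ;
    # returns undirected edges where both directions exist . the u < v comparison
    # simply deduplicates pairs and assumes orderable ( e.g. string ) user ids
    edges = set()
    for u, outs in followings.items():
        for v in outs:
            if u in followings.get(v, set()) and u < v:
                edges.add((u, v))
    return edges

follows = {"a": {"b", "c"}, "b": {"a"}, "c": set()}
assert reciprocal_edges(follows) == {("a", "b")}   # a - c is one - directional
```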
in the following section , we discuss the implications of these sets in detail , where we consider all three cities together , and later evaluate each one separately .

we begin our analysis by exploring the intersection between the twitter and foursquare social networks . we observe the degree properties of users across the two networks at a larger scale for all three cities , while later we perform our evaluation on each city separately . we introduced two degree metrics based on the multilayer neighbourhood of a node in equations 2 and 4 , where the _ global neighbourhood _ is equivalent to the union of neighbours on both networks , and the _ core neighbourhood _ is equivalent to the intersection of neighbours across both networks . in this section we consider how the degrees relate to user activity and to each other . in both cases ( figures [ fig : ideg ] and [ fig : udeg ] ) , users with high activity on both networks , and in particular with high twitter activity , have the highest degrees in both the core and global neighbourhoods . when we compare the two in figure [ fig : uvi ] , we observe that their joint distribution follows the long tail exhibited in single - layer social networks as well . further , we observe the multiplex overlap ratio of the core to global neighbourhood degrees in figure [ fig : odeg ] . this is simply the core over the global degree : which indicates the percentage of multiplex links in s multilayer neighbourhood . high activity nodes across both layers at the centre of figure [ fig : odeg ] have the highest overlap . in figure [ fig : uvi ] , we compare the two multilayer degrees . we note that the majority of users have a low degree in both , and there is a relationship between the two . the core degree is bounded by the global degree and is always a fraction of it , while the global degree may never exceed the sum of the individual layer degrees . this relationship is apparent in the figure , where _ the highest degree users are those who have a large number of links which overlap ( multiplex links ) _ . this can be due to the fact that these users are more engaged across the two platforms . we further explore the value of link multiplexity in the following section .

we study the three types of links as described in our multilayer model above : multiplex links on both twitter and foursquare , which we denote as _ tf _ for simplicity ; single - layer links on foursquare only ( denoted as _ fo _ ) ; single - layer links on twitter only ( denoted as _ to _ ) , and compare these to unconnected pairs of users ( denoted as _ na _ ) . we consider reciprocal twitter links only , where . reciprocal relationships in twitter have been considered as equivalent to undirected ones in other osns . the number of common friends has been shown to be an important indicator of a link in social networks . moreover , the neighbourhood overlap weighted on the popularity of common neighbours between two users has been shown to be a good predictor of friendship in online networks . figure [ fig : structure ] shows the adamic / adar metric of neighbourhood similarity across the various single and multilayer neighbourhoods described in section 3 , and the four link types . the adamic / adar metric is distinctly higher for multiplex links . in agreement with previous studies of tie strength , we observe that multiplex links share a greater overlap in all single and multilayer neighbourhoods .
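the overlap ratio and the four link - type labels ( _ tf _ , _ to _ , _ fo _ , _ na _ ) can be expressed compactly , as in this sketch ; the toy graphs at the end are illustrative only .

```python
import networkx as nx

def overlap_ratio(layers, u):
    # core ( intersection ) over global ( union ) neighbourhood degree of user u
    neigh = [set(g[u]) if u in g else set() for g in layers]
    global_n = set().union(*neigh)
    core_n = set.intersection(*neigh) if neigh else set()
    return len(core_n) / len(global_n) if global_n else 0.0

def link_type(tw, fsq, u, v):
    on_tw, on_fsq = tw.has_edge(u, v), fsq.has_edge(u, v)
    if on_tw and on_fsq:
        return "tf"    # multiplex link
    if on_tw:
        return "to"    # twitter only
    if on_fsq:
        return "fo"    # foursquare only
    return "na"        # unconnected pair

tw = nx.Graph([("a", "b"), ("a", "c")])
fsq = nx.Graph([("a", "b")])
assert link_type(tw, fsq, "a", "b") == "tf"
assert abs(overlap_ratio([tw, fsq], "a") - 0.5) < 1e-9
```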
in single - layer neighbourhoods ( figures [ fig : aat ] and [ fig : aaf ] ) we observe that after multiplex links , those links internal to the network under consideration have a higher overlap than exogenous ones ( _ to _ in figure [ fig : aat ] and _ fo _ in figure [ fig : aaf ] ) , followed by unconnected pairs , which have the least overlap . with respect to the multilayer neighbourhoods , we can observe a much more pronounced overlap across the link types . while the global neighbourhood overlap follows a similar distribution to the single - layer neighbourhoods but at a much lower scale , in figure [ fig : aai ] we can observe more clearly that unconnected pairs share few if any neighbours , while multiplex links have a significant overlap . with respect to the global neighbourhood ( figure [ fig : aau ] ) , both foursquare only and twitter only links share significantly more overlap ( the scale is higher on the x axis ) than when observing the single - layer neighbourhoods in figures [ fig : aat ] and [ fig : aaf ] . this indicates that some common neighbours lie across layers , and not just within , _ the global neighbourhood revealing a more complete image of connectivity , which stretches beyond the single network _ . the core neighbourhood overlap is most prominent for multiplex links ( figure [ fig : aai ] ) , which indicates that they share more friends across networks than any other type of link . while this is expected , it _ confirms that the neighbourhood overlap is a good indicator of multiplexity in ties _ , and is particularly strengthened in its weighted form through the adamic / adar metric of neighbourhood similarity .

the volume of interactions between users is often used as a measure of tie strength . in this section we compare how the volume of interactions reflects on multiplex and single - layer links . we consider the following interactions on twitter and foursquare : + * number of mentions : * this interaction feature simply measures the number of times user has mentioned user on twitter during the period . any user on twitter can mention any other user and need not have a directed or undirected link to the user they are mentioning . + * number of common hashtags : * similarity between users on twitter can be captured through common interests . topics are commonly expressed on twitter with hashtags using the # symbol . similar individuals have been shown to have a greater likelihood of forming a tie through the principles of homophily . + * number of colocations : * the number of times two users have checked - in to the same venue within a given time window .
in order to reduce false positives , we consider a shorter time window of 1 hour only . two users being at the same place , at the same time , on multiple occasions increases the likelihood of them knowing each other ( and having a link on social media ) . we weight each colocation on the popularity of a place in terms of the total user visits , to reduce the probability that colocation is by chance at a large hub venue such as an airport or train station . + * distance : * human mobility and distance play an important role in the formation of links , both online and offline , and have been shown to be highly indicative of social ties and useful for link prediction . we calculate the distance between the geographic coordinates of two users most frequent check - in locations as the haversine distance , the most common measure of great - circle spherical distance : where the coordinate pairs for are of the places where those users have checked - in most frequently , equivalent to the mode in the multiset of venues where they have checked - in . we only consider users who have more than two check - ins over the whole period , and resolve ties by picking an arbitrary venue location from the top ranked venues of a user .

in figures [ fig : men ] to [ fig : dist ] , we observe four types of geographic and social interaction on the two social networking services , where each box - and - whiskers plot represents an interaction between multiplex links ( _ tf _ ) , twitter only ( _ to _ ) , foursquare only ( _ fo _ ) , and unconnected pairs ( _ na _ ) on the x axis . on the y axis we can observe the distribution in four quartiles , representing 25% of values each . the dark line in the middle of the box represents the median of the distribution , while the dots are the outliers . the `` whiskers '' represent the top and bottom quartiles , while the boxes are the middle quartiles of the distribution . in terms of twitter mentions ( figure [ fig : men ] ) , multiplex ties and non - connected pairs of users exhibit an overall greater number of mentions than any other group , including the twitter only group . it is uncommon for pairs connected on foursquare only to mention each other . mentions are quite common between users who are not connected on any network , which may be a result of mentioning celebrities and other commercial accounts . this is not the case for hashtags , where we find that almost all unconnected users share 10 or fewer hashtags , with the exception of outliers . hashtags distinguish the link type between users better than mentions . with regards to foursquare interaction , multiplex ties have the highest probability of multiple colocations , with foursquare only and twitter only ties having less , and unconnected pairs even less so , with the exception of some outliers .
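a sketch of the two mobility features follows : the haversine distance and a popularity - weighted colocation count . the 1/visits weighting is one plausible reading of `` weight each colocation on the popularity of a place '' ; the exact weighting used by the authors is not specified here .

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great - circle distance between two coordinate pairs , in kilometres
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def weighted_colocations(checkins_u, checkins_v, venue_visits, window_h=1.0):
    # checkins_* : list of ( venue_id , unix_time ) ; venue_visits : total visits
    # per venue . each colocation is down - weighted by venue popularity ( an
    # assumed 1 / visits form ) , so chance co - presence at large hubs such as
    # airports or train stations contributes little
    score = 0.0
    for venue_u, t_u in checkins_u:
        for venue_v, t_v in checkins_v:
            if venue_u == venue_v and abs(t_u - t_v) <= window_h * 3600:
                score += 1.0 / max(venue_visits[venue_u], 1)
    return score
```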
in terms of distance , twitter only and unconnected pairs are the furthest apart in terms of most frequented location , making multiplex and foursquare links more distinguishable through this feature , as those pairs have a smaller distance between their most frequented locations . although _ there is certainly greater interaction between multiplex links _ , followed by twitter only and foursquare only links , we would like to eliminate the randomness introduced by the positive results for unconnected pairs ( _ na _ ) . we propose two multilayer interaction metrics combining heterogeneous features from both networks in order to better distinguish between the different link types . firstly , we define the global similarity as the twitter similarity over foursquare distance as : where can be replaced with any type of similarity , which is the mass or sum of that similarity for a pair of users , and are exponents which can be tuned to optimise the features . figure [ fig : mult2 ] shows how this feature captures the different levels of links ( a=2 , b=1 ) . we additionally frame a feature which captures the complete interaction across layers of social networks : where can be any type of interaction of layer . this can be further refined by giving a weight to each interaction , but in our case we consider the coefficient to be equal to 1 and use colocations from the foursquare layer and mentions from the twitter layer to express the global interaction of two users in the multilayer network . this feature allows us to capture the levels of different link types significantly better , as shown in figure [ fig : mult1 ] . although we base our analysis on only two of many possible communication channels online , we are nonetheless able to observe the greater overlap of neighbourhoods and higher intensity of interaction characteristic of multiplex links , which is consistent with the theory of media multiplexity . we evaluate the predictive performance of the union of these features in the following section .

in this section we address the link prediction problem across layers of social networks , and aim to answer our final two research questions : _ can we predict one network using information from the other ? _ , and _ can we predict multiplex links in osns ? _ we evaluate the likelihood of forming a social tie as a process that depends on a union of factors , using the foursquare , twitter , and multilayer features we have defined up until now in a supervised learning approach , and comparing their predictive power in terms of auc scores for each feature set respectively . the main motivation for considering multiple social networks in a multilayer construct is that each layer carries with it additional information about the links between the same users , which can potentially enhance the predictive model . in light of the multilayer nature of osns , we are also interested in whether we can achieve better prediction by combining features from multiple networks .
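the two combined metrics defined earlier in this section could be implemented along these lines ; the small epsilon added to the distance is our own guard against division by zero for co - located pairs and is not part of the original definition .

```python
def global_similarity(sim_uv, dist_uv, a=2.0, b=1.0, eps=1e-6):
    # twitter similarity ( e.g. number of common hashtags ) over foursquare
    # distance , with tunable exponents a and b ( a=2 , b=1 in the figures )
    return (sim_uv ** a) / ((dist_uv + eps) ** b)

def global_interaction(interactions_per_layer, weights=None):
    # sum of per - layer interaction counts ( mentions on the twitter layer ,
    # colocations on the foursquare layer ) ; the text takes all layer
    # coefficients equal to 1
    if weights is None:
        weights = [1.0] * len(interactions_per_layer)
    return sum(w * x for w, x in zip(weights, interactions_per_layer))

# example : 12 mentions and a colocation score of 3.5 for one pair of users
print(global_interaction([12, 3.5]))
```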
formally , for two users , where the nodes ( users ) are present in any layer of the multilayer network , we employ a set of features that output a score so that all possible pairs are ranked according to their expectation of having a link on a specific layer in the network . we specify and evaluate two distinct prediction tasks . our first goal is to rank pairs of users based on their interaction on one social network in order to predict a link on the other . this entails using mobility interactions to predict social links on twitter , and using social interactions on twitter to predict links on foursquare . subsequently , we are interested in predicting the multiplex links at the cross - section of the two networks using multilayer features . this type of link has both structural and social tie implications , as we have demonstrated in this work , which makes them desirable to identify . we perform our evaluation on the three datasets described at the start of this work in section 5 , where we have twitter , foursquare , and the derived multilayer features for the cities of san francisco , chicago , and new york . we adopt a supervised learning approach for the prediction tasks for each city , which is considered as an independent multilayer network , and we train and test on different layers . supervised learning methodologies have been proposed as a better alternative to unsupervised models for link prediction . we compare the performance of feature sets using the random forest classifier with a -fold cross - validation testing strategy : for each test we train on all but one fold of the data and test on the remaining fold . for every test case the user pairs in the test set were ranked according to the scores returned by the classifiers for the positive class label ( i.e. , for an existing link ) , and subsequently , area under the curve ( auc ) scores were calculated by averaging the results across all folds . we use auc scores as a measure of performance because they consider all possible thresholds of probability in terms of true positive ( tp ) and false positive ( fp ) rates , which are computed by comparing the predicted output against the target labels of the test data . in terms of algorithmic implementation , we have used public versions of the algorithms available in . the features presented earlier in this work , which comprise each feature set , are summarised in table [ tab : feats ] . we denote the twitter neighbourhood as and the foursquare neighbourhood as . next , we specify each prediction task and present the results of the supervised learning evaluation in terms of the predictive power of each feature set in both tasks .

the receiver operating characteristic ( roc ) curves ( defined as the true positive versus false positive rate for varying decision thresholds ) and the corresponding area under the curve ( auc ) scores are shown in figure [ fig : roc1 ] for the three datasets . we now discuss these results with respect to each task . in the first prediction task , for a pair of users and we define a feature vector encoding the values of the users feature scores on layer in the multilayer network . we also specify a target label representing whether the user pair is connected on the layer under prediction . we use the supervised random forest classifier ( 45 trees , optimised with tree depth = 25 ) to predict links from one layer using features from the other .
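the evaluation loop described above might look as follows with scikit - learn ; the tree count ( 45 ) and tree depth ( 25 ) come from the text , while the number of folds was lost in extraction and 10 is used here as a placeholder .

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def evaluate_feature_set(X, y, n_folds=10, n_trees=45, depth=25, seed=0):
    # X : numpy array of pair features for one feature set ; y : 1 if the pair
    # has a link on the layer under prediction , else 0 . returns the mean auc
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        clf = RandomForestClassifier(n_estimators=n_trees, max_depth=depth,
                                     random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        # rank pairs by the classifier score for the positive ( link ) class
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))
```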
figure [ fig : roc_fsq ] shows the roc curves and respective auc scores for each dataset in predicting foursquare links from twitter features , ranging from 0.7 for the new york dataset to 0.81 for chicago , with 0.73 for san francisco . we compare this to the reverse task of predicting twitter links using foursquare features in figure [ fig : roc_mfsq0 ] , where we obtain auc scores of 0.86 , 0.73 , and 0.79 for the three cities respectively . we observe slightly higher results for twitter links , and we note that this may be a result of the higher number of twitter links in our dataset or of the greater difficulty of the inverse task . + in our second prediction task , we are interested in evaluating the performance of each feature set in predicting link multiplexity . given a feature vector , we would like to predict a target label , where a link exists on both layers ( + 1 ) or none ( -1 ) . we compare the performance of the multilayer features to the twitter and foursquare sets . in this task , we use all three feature sets to predict multiplex links , which generally exhibit signs of a stronger online bond through interaction and structural properties , as we have seen in the first part of this work . in figures [ fig : m_fsq ] and [ fig : rm_twt ] , we observe how twitter and foursquare features perform in predicting multiplex links using the random forest algorithm again , with the highest auc scores of 0.82 and 0.84 for each set respectively . the foursquare feature set performs better in terms of auc scores , but the multilayer feature set outperforms both ( auc = 0.88 for chicago ) , due to its inclusion of features from different layers and cross - layer structural properties . in conclusion , it is possible to predict links between heterogeneous social networks and to predict multiplex links spanning multiple networks using multilayer features , as we have seen in our subset of users . we discuss the applications of these results in the following section .

in this work we have demonstrated the structural and interaction properties of links across two online social networks and have also shown the value of multilayer features in predicting links on both twitter and foursquare , and multiplex links . we believe that the primary contribution is methodological , since it provides a novel framework for investigating multiplexity across different social networks . the techniques discussed in this work are general and can potentially be used to investigate other scenarios for which datasets containing information about social interactions across multiple networks are available .
in this section , we discuss the implications , limitations and real - world applications of these results . recently , social media has been increasingly alluded to as an _ ecosystem _ . the parallel comes after the emergence of multiple osns , interacting as a system , while competing for the same resources - users and their attention . we have addressed this system aspect by modelling multiple social networks as a multilayer online social network in this work . we have also identified two extensions of the node neighbourhood . the global neighbourhood or degree gives insight into a user s full connectivity across services . this is especially important when considering users with asymmetric activity and degree across networks , since their centrality in the online ecosystem can be under - or over - estimated . we additionally defined the core degree , which on the other hand reveals the intersection across networks , and therefore the stronger online ties - those relevant on multiple networks . the strength of ties manifested through multiplexity is expressed through a greater intensity of interactions and greater similarity across attributes , in both the offline and the online context , as we have seen in this work . we have introduced a number of features which take into consideration the multilayer neighbourhood of users in osns . the adamic / adar coefficient of neighbourhood similarity in its core neighbourhood version proved to be a strong indicator of multiplex ties . additionally , we introduced combined features , such as the global interaction and similarity over distance , which reflect the type of link that exists between two users more distinctively than their single - layer counterparts . these features can be applied across multiple networks and can be flexible in their construction according to the context of the osns under consideration . media multiplexity is fascinating from the social networks perspective as it can reveal the strength and nature of a social tie given the full communication profile of people across all media they use . unfortunately , full online and offline communication profiles of individuals were not available , and our analysis is limited to two social networks . nevertheless , we have observed some evidence of media multiplexity manifested in the greater intensity and structural overlap of multiplex links , and have gained insight into how we can utilise these properties for link prediction . certainly , considering more osns and further relating media multiplexity to its offline manifestation is one of our future goals , and we believe that with the further integration of social media services and availability of data this will be possible in the near future .
our data is limited to a sub - sample of users who we know have active accounts on both networks in three us cities , with foursquare check - ins also limited to those posted on twitter . this excludes a number of users who may have foursquare accounts but have not linked them on twitter . nevertheless , we were able to show that it is possible to predict one social network from the other in a cross - network manner , and we hope to extend our prediction and analysis to a greater scale and geographical scope in the future . + most new osns use contact list integration with external existing networks , such as copying friendships from facebook through the open graph protocol . copying links from pre - existing social networks to new ones results in higher social interaction between copied links than between links created natively in the platform . we propose that extending this copied network with a rank of relevance of contacts using multiplexity can provide even further benefits for newly launched services . in addition to fostering multiplexity , however , new osns , especially interest - driven ones such as pinterest , may benefit from similarity - based friend recommendations . in this work , we apply mobility features and neighbourhood similarity from foursquare to predict links on twitter and vice versa , highlighting the relationship between similar users across heterogeneous platforms . similarly in , the authors infer types of relationships across different domains such as mobile and co - author networks . although they use a knowledge transfer framework , and not exogenous interaction features as we do , the authors also agree that integrating social theory in the prediction framework can greatly improve results . the present work is a step towards understanding the composite nature of online social network services and , hopefully , towards enhancing their functionality and purpose . this work was supported by the project lasagne , contract no . 318132 ( strep ) , funded by the european commission .
|
online social systems are multiplex in nature as multiple links may exist between the same two users across different social networks . in this work , we introduce a framework for studying links and interactions between users beyond the individual social network . exploring the cross - section of two popular online platforms - twitter and location - based social network foursquare - we represent the two together as a composite _ multilayer online social network_. through this paradigm we study the interactions of pairs of users differentiating between those with links on one or both networks . we find that users with multiplex links , who are connected on both networks , interact more and have greater neighbourhood overlap on both platforms , in comparison with pairs who are connected on just one of the social networks . in particular , the most frequented locations of users are considerably closer , and similarity is considerably greater among multiplex links . we present a number of structural and interaction features , such as the multilayer adamic / adar coefficient , which are based on the extension of the concept of the node neighbourhood beyond the single network . our evaluation , which aims to shed light on the implications of multiplexity for the link generation process , shows that multilayer features , constructed from properties across social networks , perform better than their single network counterparts in predicting links across networks . we propose that combining information from multiple networks in a multilayer configuration can provide new insights into user interactions on online social networks , and can significantly improve link prediction overall with valuable applications to social bootstrapping and friend recommendations .
|
many fields , such as high - energy astrophysics , may involve flows at speeds close to the speed of light or influenced by large gravitational potentials , such that the relativistic effect should be taken into account . relativistic flows appear in numerous astrophysical phenomena , from stellar to galactic scales , e.g. super - luminal jets , gamma - ray bursts , core collapse super - novae , coalescing neutron stars , formation of black holes , and so on . the governing equations of relativistic hydrodynamics ( rhd ) are highly nonlinear , so that their analytical treatment is extremely difficult . a primary and powerful approach to understanding the physical mechanisms in rhds is numerical simulation . the pioneering numerical work may date back to the finite difference code by may and white with the artificial viscosity technique for spherically symmetric general rhd equations in the lagrangian coordinate . wilson first attempted to numerically solve multi - dimensional rhd equations in the eulerian coordinate by using the finite difference method with the artificial viscosity technique , which was systematically introduced in . since the 1990s , the numerical study of rhds has attracted considerable attention , and various modern shock - capturing methods based on exact or approximate riemann solvers have been developed for the rhd equations ; the readers are referred to the early review articles and more recent works on numerical methods for the rhd equations in . recently , second - order accurate direct eulerian generalized riemann problem ( grp ) schemes were developed for both 1d and 2d special rhd equations , and the third - order accurate extension to the 1d case was also presented in . the grp scheme , as an analytic high - order accurate extension of the godunov method , was originally devised for non - relativistic compressible fluid dynamics by utilizing a piecewise linear function to approximate the `` initial '' data and then analytically resolving a local grp at each interface to yield the numerical flux ; see the comprehensive description in . there exist two versions of the original grp scheme : the lagrangian and the eulerian . the eulerian version is always derived by using the lagrangian framework with a transformation , which is quite delicate , particularly for the sonic case and multi - dimensional applications . to avoid those difficulties , second - order accurate direct eulerian grp schemes were respectively developed for the shallow water equations , the euler equations , and a more general weakly coupled system , by directly resolving the local grps in the eulerian formulation via the riemann invariants and rankine - hugoniot jump conditions . a recent comparison of the grp scheme with the gas - kinetic scheme showed the good performance of the grp solver for some inviscid flow simulations . combined with the moving mesh method , the adaptive direct eulerian grp scheme was developed in with improved resolution as well as accuracy .
the accuracy and performance of the adaptive grp scheme were further studied in simulating 2d complex wave configurations formulated with the 2d riemann problems of the non - relativistic euler equations . recently , the adaptive grp scheme was also extended to unstructured triangular meshes . the third - order accurate extensions of the direct eulerian grp scheme were studied for the 1d and 2d non - relativistic euler equations in and the general 1d hyperbolic balance laws in .

the aim of this paper is to develop a second - order accurate direct eulerian grp scheme for the spherically symmetric general rhd equations . traditional godunov - type schemes based on exact or approximate riemann solvers can be extended to the general rhd equations from the special rhd case through a local change of coordinates , exploiting the fact that the spacetime metric is locally minkowskian . a similar idea can be found in the development of the so - called locally inertial godunov method for spherically symmetric general rhd equations . however , such an approach can not be used to develop the direct eulerian grp scheme for the general rhd equations , because it is necessary to resolve the local grp with the local change of the metrics taken into account . moreover , the metrics should be approximately obtained at the cell interface by an accurate scheme for the sse equations to keep the continuity of the approximate metric functions . in short , developing the grp scheme for the general rhd equations is not trivial and much more technical than in the special relativistic case .

the paper is organized as follows . section [ sec : goeq ] introduces the governing equations of general rhds in spherically symmetric spacetime and the corresponding riemann invariants as well as their total differentials . the second - order accurate direct eulerian grp scheme is developed in section [ sec : scheme ] . the outline of the scheme is first given in section [ sec : outline ] . then the local grps are analytically resolved in section [ sec : resolugrp ] , where sections [ sec : resolurare ] and [ sec : resolushock ] resolve the rarefaction and shock waves by using the riemann invariants and the rankine - hugoniot jump conditions , respectively , section [ sec : pupt ] derives the limiting values of the time derivatives of the conservative variables at the `` initial '' discontinuous point along the cell interface for both nonsonic and sonic cases , and section [ sec : acoustic ] discusses the acoustic case . several numerical experiments are conducted in section [ sec : experiments ] to demonstrate the performance and accuracy of the proposed grp scheme . section [ sec : conclude ] concludes the paper with several remarks .

the general rhd equations consist of the local conservation laws of the current density and the stress - energy tensor , where the indexes and run from 0 to 3 , and stands for the covariant derivative associated with the four - dimensional spacetime metric ; that is , the proper spacetime distance between any two points in the four - dimensional spacetime can be measured by the line element . the current density is given by , where represents the fluid four - velocity and denotes the proper rest - mass density . the stress - energy tensor for an ideal fluid is defined by , in which and denote the rest energy density ( including rest - mass ) in the fluid frame and the pressure , respectively , and with denoting the kronecker symbol . the rest energy density can be expressed in terms of the rest - mass density and the internal energy as ,
where denotes the speed of light in vacuum . an additional equation for the thermodynamical variables , i.e. the so - called equation of state , is needed to close the system for a fixed spacetime . this paper focuses on the equation of state describing barotropic fluids , where is a function of and satisfies . it is worth noting that the equations form a closed system if is given . in the general theory , the einstein gravitational field equations relate the curvature of spacetime to the distribution of mass - energy in the following form , where is the einstein coupling constant , is newton s gravitational constant , and and denote the ricci tensor and the scalar curvature , respectively . for the sake of convenience , units in which the speed of light and newton s gravitational constant are equal to one will be used throughout the paper .

the general rhd system in spherically symmetric spacetime is a simple but good `` approximate '' model for investigating several astrophysical phenomena , e.g. gamma - ray bursts , spherical accretion onto compact objects , and stellar collapse , etc . its numerical methods have also received much attention ; see e.g. . the spherically symmetric gravitational metrics in standard schwarzschild coordinates are given by the line element , where is called the _ lapse function _ , are temporal and radial coordinates , and is the spacetime coordinate system . this paper is only concerned with the numerical method for the system in spherically symmetric spacetime . assume that the spherically symmetric metrics are lipschitz and the stress - energy tensor is bounded in sup - norm ; then the system is weakly equivalent to the following system

\[ \begin{aligned} \label{eq : rhd_1a } & \frac{\partial m}{\partial r } = \frac12 \kappa r^2 { \cal t}^{00 } , \\ \label{eq : rhd_1b } & \frac{1}{b } \frac{\partial b}{\partial r } = \frac{1 - a}{ar } + \frac{\kappa r}{a } { \cal t}^{11 } , \end{aligned} \]

where and the mass function is related to by . here are the stress - energy tensor in locally flat minkowski spacetime , related to by , and is the lorentz factor with the velocity . eq . may be replaced with to derive another equivalent system , , and . the eigenvalues of the jacobian matrix of with are , where denotes the local sound speed . corresponding right eigenvectors may be given as follows , and the inverse of the matrix is . the condition implies that ; thus the system is strictly hyperbolic . moreover , both characteristic fields related to are genuinely nonlinear if and only if the function further satisfies , which does always hold for . the riemann invariants associated with the characteristic field can be obtained as follows , which will play a pivotal role in resolving the centered rarefaction waves in the direct eulerian grp scheme for the rhd equations . in the smooth region , by using and , the rhd equations can be reformed in the primitive variable vector as follows , where

\[ { \bf h } = ( h_1 , h_2 )^{\rm t } = - \frac{\sqrt{ab}}{r ( 1 - v^2 c_s^2 ) } \left ( \begin{array}{c } 2 v ( \rho + p ) \left ( 1 - \frac{\kappa r^2 ( \rho + p )}{4a } \right ) \\ ( 1 - v^2 ) \left ( - 2 v^2 c_s^2 + \frac{ ( 1 - a ) ( 1 - v^2 c_s^2 ) }{2a } + \frac{\kappa r^2 ( p + \rho v^2 c_s^2 ) }{2a } \right ) \end{array } \right ) . \]

by using , one can derive the following differential relations of the riemann invariants , where denote the total derivative operators along the characteristic curves .
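for orientation , the sketch below evaluates the two characteristic speeds , assuming they take the familiar special - relativistic velocity - addition form scaled by the metric factor \sqrt{ab } , consistent with the 1/\sqrt{ab } scaling that appears in the rankine - hugoniot relation later in the paper ; this scaling is our reading of the ( extraction - damaged ) formula , not a quotation of it .

```python
import math

def characteristic_speeds(v, cs, a, b):
    # two eigenvalues of the quasilinear system , sketched under the assumption
    # that they take the special - relativistic form ( v -+ cs ) / ( 1 -+ v cs ) ,
    # scaled by sqrt( a * b ) for the metric functions a and b ; v and cs are
    # measured in units of the speed of light
    root = math.sqrt(a * b)
    lam_minus = root * (v - cs) / (1.0 - v * cs)
    lam_plus = root * (v + cs) / (1.0 + v * cs)
    return lam_minus, lam_plus

# example : a mildly relativistic flow on a nearly flat metric
print(characteristic_speeds(v=0.5, cs=0.1, a=0.99, b=1.01))
```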
for the sake of simplicity ,the equally spaced grid points are used for the spatial domain and the cell is denoted by ] is also divided into the ( non - uniform ) grid with the time step size determined by , , and approximate the values of , and at the point , respectively , and is the cfl number .assume that the `` initial '' data at time are piecewise linear functions as follows \left ( \begin{array}{c } a_h ( t_n , r ) \\b_h ( t_n , r ) \\\end{array } \right ) = \dfr{{r_{j + \frac{1}{2 } } - r}}{{\delta r}}\left ( \begin{array}{c } a_{j - \frac{1}{2}}^n \\b_{j - \frac{1}{2}}^n \\\end{array } \right ) + \dfr{r - r_{j - \frac{1}{2 } } } { \delta r } \left ( \begin{array}{c } a_{j + \frac{1}{2}}^n \\ b_{j + \frac{1}{2}}^n \\\end{array } \right ) , \end{array } \right.\ ] ] for , where and are continuous at cell interfaces .step i. evaluate the point values approximating by where is the values at of the solutions to the following riemann problem of the homogeneous hyperbolic conservation laws { \mbox{\boldmath \small }}(t_n , r)= \begin{cases } { \mbox{\boldmath \small }}_{j+\frac12,l}^n : = { \mbox{\boldmath \small }}_h ( t_n , r_{j+\frac12 } -0 ) , & r < r_{j+\frac{1}{2}},\\ { \mbox{\boldmath \small }}_{j+\frac12,r}^n : = { \mbox{\boldmath \small }}_h ( t_n , r_{j+\frac12 } + 0 ) , & r > r_{j+\frac{1}{2 } } , \end{cases } \end{cases}\ ] ] and is analytically derived by a second order accurate resolution of the local generalized riemann problem ( grp ) at each point , i.e. { \mbox{\boldmath \small }}(t_n , r)= \begin{cases } { \mbox{\boldmath \small }}^n_{j}(r ) , & r < r_{j+\frac{1}{2}},\\ { \mbox{\boldmath \small }}^n_{j+1}(r ) , & r > r_{j+\frac{1}{2}}. \end{cases } \end{cases}\ ] ] the calculation of is one of the key elements in the grp scheme and will be given in section [ sec : resolugrp ] .calculate the point values and , which are approximation of and , respectively , by step iii .approximately evolve the solution vector at time of by a second - order accurate godunov - type scheme \label{eq : evolve } & + \dfr{{\delta t_n } } { 2 } \bigg ( { { \mbox{\boldmath \small } } \left ( { r_{j - \frac{1}{2 } } , a_{j - \frac{1}{2}}^{n + \frac{1}{2 } } , b_{j - \frac{1}{2}}^{n + \frac{1}{2 } } , { \mbox{\boldmath \small }}_{j - \frac{1}{2}}^{n + \frac{1}{2 } } } \right ) + { \mbox{\boldmath \small }}\left ( { r_{j + \frac{1}{2 } } , a_{j + \frac{1}{2}}^{n + \frac{1}{2 } } , b_{j + \frac{1}{2}}^{n + \frac{1}{2 } } , { \mbox{\boldmath \small }}_{j + \frac{1}{2}}^{n + \frac{1}{2 } } } \right ) } \bigg).\end{aligned}\ ] ] step v. calculate and by & \ln b_{j + \frac{1}{2}}^{n + 1 } = \ln b_{j - \frac{1}{2}}^{n + 1 } + { \delta r } \left ( \frac{1 - a_j^{n + 1 } } { a_j^{n + 1 } r_j } + \frac{\kappa r_j } { a_j^{n + 1 } } { \cal t}^{11 } \left ( { \mbox{\boldmath \small }}_j^{n + 1 } \right ) \right ) , \quad a_j^{n+1 } : = \frac12 \big ( a_{j-\frac12}^{n+1 } + a_{j+\frac12}^{n+1 } \big).\end{aligned}\ ] ] step iv .update the slope component - wisely in the local characteristic variables by where , the parameter , and the paper does not pay much attention to the treatment of singularity in the source of and the imposition of boundary conditions at the symmetric center for the grp scheme , the readers are referred to for the details .this subsection resolves the grp in order to get in . 
for the sake of convenience , the subscript and the superscript will be ignored , and the local grp is transformed with a linear coordinate transformation to the `` non - local '' grp for with the initial data

\[ \begin{cases } { \mbox{\boldmath \small }}_l + ( r - r_0 ) { \mbox{\boldmath \small }}'_l , & r < r_0 , \\ { \mbox{\boldmath \small }}_r + ( r - r_0 ) { \mbox{\boldmath \small }}'_r , & r > r_0 , \end{cases } \]

where and are corresponding constant vectors . the notations and will also be simply replaced with and , respectively , which also denote the limiting states at , as . since both and are locally lipschitz continuous , the initial structure of the solution to the grp for with may be determined by the solution of the associated ( classical ) riemann problem ( rp )

\[ { \mbox{\boldmath \small }}(0 , r ) = \begin{cases } { \mbox{\boldmath \small }}_l , & r < r_0 , \\ { \mbox{\boldmath \small }}_r , & r > r_0 , \end{cases } \]

and the local wave configuration around the singularity point of the grp for with depends on the values of the four constant vectors and consists of two nonlinear waves , each of which may be a rarefaction or a shock wave . fig . [ fig : wave - pattern - a ] shows the schematic description of a local wave configuration : a rarefaction wave moving to the left and a shock to the right . fig . [ fig : wave - pattern - b ] displays the corresponding local wave configuration of the rp . in those schematic descriptions , denotes the limiting state at , as , and and are the characteristic coordinates within the rarefaction wave , which will be introduced in section [ sec : resolurare ] . although there are other local wave configurations , we will restrict our discussion to the local wave configuration shown in figs . [ fig : wave - pattern - a ] and [ fig : wave - pattern - b ] . other local wave configurations can be dealt with similarly and are considered in the code . the solutions to the grp inside the left , intermediate and right subregions are denoted by , and , respectively . for any variable , which may be or the derivatives or etc . , the symbols and are used to denote the limiting values of as in the left and right subregions adjacent to the -axis , respectively , and is used to denote the limiting values of as in the intermediate subregions . the main task of the direct eulerian grp scheme is to form a linear algebraic system

\[ \begin{cases } a_l \left ( \dfr{\pt \rho}{\pt t } \right ) _ * + b_l \left ( \dfr{\pt v}{\pt t } \right ) _ * = d_l , \\ a_r \left ( \dfr{\pt \rho}{\pt t } \right ) _ * + b_r \left ( \dfr{\pt v}{\pt t } \right ) _ * = d_r , \end{cases } \]

by resolving the left wave and the right wave as shown in figure [ fig : wave - pattern - a ] , respectively . solving this system gives the values of the derivatives and , and closes the calculation in .

this section resolves the left rarefaction wave shown in figure [ fig : wave - pattern - a ] for the grp and , and gets the first equation in . the relation for the riemann invariant will be used to resolve the left rarefaction wave by tracking the directional derivatives in the rarefaction fan . for this purpose , a local coordinate transformation is first introduced within the rarefaction wave , i.e. the characteristic coordinates , similar to those in . the region of the left rarefaction wave can be described by the set \big\ { ( \al , \be ) : -\infty < \al \leq 0 \big\ } .
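once the coefficients of the two wave relations are known , the 2x2 linear system above is solved for the instantaneous time derivatives ; a direct sketch :

```python
import numpy as np

def solve_grp_derivatives(a_l, b_l, d_l, a_r, b_r, d_r):
    # solve the 2x2 linear system assembled from the resolved left and right
    # nonlinear waves :
    #   a_l * ( d rho / d t )_* + b_l * ( d v / d t )_* = d_l
    #   a_r * ( d rho / d t )_* + b_r * ( d v / d t )_* = d_r
    m = np.array([[a_l, b_l], [a_r, b_r]], dtype=float)
    rhs = np.array([d_l, d_r], dtype=float)
    drho_dt, dv_dt = np.linalg.solve(m, rhs)
    return drho_dt, dv_dt
```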
setting in and substituting it into may give the expression of in and complete the proof .

if , one has , and for this case , the integral in can be expressed as

\[ = \frac{1}{4\sigma } \big [ ( \sigma - 1 )^2 \ln ( 1 + \varpi ) - ( \sigma + 1 )^2 \ln ( 1 - \varpi ) \big]_{\varpi = \hat \beta / \sqrt{a_* b_* }}^{\varpi = \beta / \sqrt{a_* b_* } } . \]

[ rem : rightrare ] if the right rarefaction wave associated with the eigenvalue appears in the grp , then the above derivation can be used by means of the `` reflective symmetry '' transformation , where and denote the primitive variables before and after the reflective transformation , respectively . specifically , the `` reflective symmetry '' transformation is first used to transfer the `` real '' right rarefaction wave into a `` virtual '' left rarefaction wave ; theorem [ thm:001 ] is then directly applied to the `` virtual '' left rarefaction wave , and finally using the inverse transformation gives the linear equation of and for the right rarefaction wave .

this section resolves the right shock wave for the grp and in figure [ fig : wave - pattern - a ] and gives the second equation in by differentiating the shock relation along the shock trajectory . let be the shock trajectory which is associated with the field , and assume that it propagates with the speed to the right ; see figure [ fig : wave - pattern - a ] . denote the left and right states of the shock wave by and , respectively , i.e. and . the rankine - hugoniot relation across the shock wave is \[ = s \left [ { \mbox{\boldmath \small } } \right ] , \] or equivalently , \[ = ( s / \sqrt{ab } ) \left [ { \mbox{\boldmath \small } } \right ] , \] where \left [ \cdot \right ] denotes the jump across the shock wave .

outflow boundary conditions have been specified at the inner boundary . it can be seen that the velocity approaches the speed of light when the gas approaches the black hole , while the proposed grp scheme exhibits good robustness . fig . [ fig : accretion2 ] displays the convergence history of the residuals with respect to time on three meshes of 200 , 400 and 800 uniform cells , respectively . it can be seen that the correct steady solutions are obtained by the grp scheme with residuals less than .

before simulating the shock wave models , we first consider several continuous models in the fully general relativistic case , which are two transformations of the friedmann - robertson - walker ( frw ) metrics ( denoted by frw-1 and frw-2 respectively ) and the tolman - oppenheimer - volkoff ( tov ) metric presented in . the exact solutions to those continuous models are smooth and may be used to test the accuracy of the proposed grp scheme . [ example : frw1 ] consider the conformally flat frw metric , where the distance is measured by the line element , where is the time since the big bang , and the cosmological scale function is defined by . under the coordinate transformation , eq . goes over to with the metric components . the exact fluid variables at are , where is einstein s coupling constant , , and denotes the parameter in . this model is solved by using the proposed grp scheme from to on several different uniform meshes for the spatial domain .

m. liebendörfer , s. rosswog , and f . - k . thielemann , an adaptive grid , implicit code for spherically symmetric , general relativistic hydrodynamics in comoving coordinates , _ astrophys . j. suppl . ser . _ , 141 ( 2002 ) , 229 - 246 .
m. liebendörfer , o.e.b . messer , a. mezzacappa , s.w . bruenn , c.y . cardall , and f . - k . thielemann , a finite difference representation of neutrino radiation hydrodynamics for spherically symmetric general relativistic supernova simulations , _ astrophys . j. suppl . ser . _ , 150 ( 2004 ) , 263 - 316 . b. temple and j. smoller , expanding wave solutions of the einstein equations that induce an anomalous acceleration into the standard model of cosmology , _ proc . natl . acad . sci . _ , 106 ( 2009 ) , 14213 - 14218 .
|
the paper proposes a second - order accurate direct eulerian generalized riemann problem ( grp ) scheme for the spherically symmetric general relativistic hydrodynamical ( rhd ) equations and a second - order accurate discretization for the spherically symmetric einstein ( sse ) equations . the former directly uses the riemann invariants and the rankine - hugoniot jump conditions to analytically resolve the left and right nonlinear waves of the local grp in the eulerian formulation , together with the local change of the metrics , to obtain the limiting values of the time derivatives of the conservative variables along the cell interface and the numerical flux for the grp scheme , while the latter utilizes the energy - momentum tensor obtained in the grp solver to evaluate the fluid variables in the sse equations and keeps the continuity of the metrics at the cell interfaces . several numerical experiments show that the grp scheme can achieve second - order accuracy and high resolution , and is effective for spherically symmetric general rhd problems . spherically symmetric general relativistic hydrodynamics ; godunov - type scheme ; generalized riemann problem ; riemann problem ; riemann invariant ; rankine - hugoniot jump condition .
|
the paradigm shift from reductionism to holism is in common use nowadays , turning scientists interests to the interdisciplinary approach . this endeavor may be accomplished as far as the fundamental concepts of complex network theory are applied to problems that may arise from many areas of study , like social networks , communication , economy , financial markets , computer science , the internet , the world wide web , transportation , electric power distribution , molecular biology , ecology , neuroscience , linguistics , climate networks , and so on . as the study of objects under the network paradigm goes on , some classes may arise as a function of their general structure or topology . given these structures , some patterns may be perceived in the network measures that , in addition to one another , may determine a class of networks . considering this , one can define some sort of taxonomy of networks that can be built by simple comparison of their topological properties . in conformity with this reasoning , a notion of universality may also be used as a key concept to unify the various kinds of networks into characteristic groups . based on this concept of universality , mac carron and kenna proposed an analysis method to discriminate a given narrative as real or fictional , based on the social network it would represent . specifically , they analyzed three classical narratives with uncertain historicity , which were : beowulf , the iliad , and táin bó cúailnge . from these were created social networks where nodes represent characters and edges their social relations in the tale . as a result , this generated what were denominated mythological networks . this type of network is essentially a social network with distinguished topological properties . in order to determine what is distinguished , we must first determine what is common in topological terms . real social networks are known to be usually small world , hierarchically organized , highly clustered , assortatively mixed by degree , and scale free . in addition to these basic characteristics , real social networks may also show a power law dependence of the degree distribution , hold a giant component with less than 90% of the total number of vertices , be vulnerable to targeted attacks , and be robust to random attacks . at the other extreme of social network configurations derived from a narrative , there are the fictional social networks . these can be characterized as being small world , bearing a hierarchical structure , holding an exponential dependence of the degree distribution , not being scale free , having a giant component with more than 90% of the total vertices , showing no assortativity by degree , and being robust to random or targeted attacks . their properties show some features resembling those of real social networks ; however , a deeper analysis would reveal their artificial nature . based on this set of classificatory properties , the authors of the preceding research found that the iliad would be more realistic than fictional in terms of its social network , whilst the other two ( beowulf and the táin ) would need some simple and reasonable modifications to render their networks realistic . despite this manipulation , they managed to synthesize a way to analyze folktales , myths , or other classical poems , epics , or narratives . this synthesis can be used to identify sociological structure , which is valuable as a tool in the field of comparative mythology .
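the classificatory criteria listed above can be condensed into a simple decision rule , sketched below ; the thresholds follow the 90% giant - component cut and the sign of the assortativity stated in the text , while packaging them as boolean flags is our own simplification .

```python
def classify_social_network(props):
    # props : dict of measured values ; a heuristic sketch of the criteria
    # listed above , not the exact procedure of the original study
    realistic = (
        props["small_world"]
        and props["hierarchical"]
        and props["power_law_degree"]            # rather than exponential
        and props["giant_component_fraction"] < 0.9
        and props["assortativity"] > 0
        and props["vulnerable_to_targeted_attack"]
    )
    return "real" if realistic else "fictional"

props = {"small_world": True, "hierarchical": True, "power_law_degree": False,
         "giant_component_fraction": 0.95, "assortativity": -0.1,
         "vulnerable_to_targeted_attack": False}
print(classify_social_network(props))   # -> "fictional"
```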
conversely , it is worth noting and citing campbell's work the hero with a thousand faces , which brings to light the notion that mythological narratives from diverse cultures frequently share the same structure , denominated the monomyth . with carron's and kenna's results and campbell's statement , we can build the idea that classic narratives tend overall to be based on some historicity mixed with some sort of myth or legend , making a historical document more attractive to be passed down through generations . inspired by this preceding speculation , we propose a social network analysis of the homeric greek epic odyssey . looking for some meaning in terms of its network topology , we will attempt to classify the resulting net as real or imaginary , as well as consider its implications for comparative mythology . in addition to this methodology , we shall also run an algorithm to discriminate the so - called `` communities '' or modules in the network . the novelty of this work is to verify whether these sub - social groupings ( i.e. communities ) may have meaning , in terms of their character composition and internal topology ; in short : what is their contribution to the odyssey's social network ? we shall perform this task through a random walk algorithm . once the communities are found , an interpretation concerning their character composition , internal topology and importance to the whole topological structure may follow . the first fundamental properties that appear with a network are the total number of vertices n along with the total number of edges between nodes e. as the net is formed , each node will have a certain number of edges that make the connection to other vertices ; this will be the degree k of the vertex . averaging over all degrees gives us the mean degree of the network . exploring the degree property a bit further , we can derive the degree distribution p(k) , which represents the probability that a given node has degree k ; for most real networks the degree distribution holds p(k) ~ k^{-γ} for a positive and constant γ [eq1] . this is the power law dependency of the degree distribution . for a network this reflects that the nodes are sparsely connected , or that there are few nodes with high degree and numerous vertices with low degree . the scale free characteristic of a network is maintained if [eq1] is satisfied . some other important structural properties are also to be collected in the light of graph theory , namely : the average path length , the longest geodesic and the clustering coefficient . consider a graph and its set of vertices . let d(i , j) be the shortest distance between the vertices i and j , where i ≠ j , with a suitable convention when j cannot be reached from i . given these conditions we can define the average path length as l = ( 1 / ( n ( n - 1 ) ) ) Σ_{i≠j} d(i , j) , where n is the number of vertices of the graph . the longest geodesic , often known as the diameter of a graph , consists simply in the largest value of d(i , j) , or in other terms , the longest topological separation between all pairs of vertices of the graph . the third property quantifies to what extent a given neighborhood of the network is cliqued .
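as a concrete illustration of these definitions , a minimal sketch using networkx is given below ; the edge - list file name is a hypothetical placeholder rather than a data file from this work .

# a minimal sketch, assuming networkx; "odyssey_edges.txt" is a hypothetical
# edge list with one "character_a character_b" pair per line
import networkx as nx
from collections import Counter

G = nx.read_edgelist("odyssey_edges.txt")

n, e = G.number_of_nodes(), G.number_of_edges()
mean_degree = 2 * e / n                      # <k> = 2e/n for an undirected graph

# empirical degree distribution p(k): fraction of nodes having degree k
counts = Counter(d for _, d in G.degree())
p_k = {k: c / n for k, c in sorted(counts.items())}

# clustering and assortativity, as defined just below in the text
c = nx.average_clustering(G)
r = nx.degree_assortativity_coefficient(G)

# path length and diameter are defined on connected graphs, so we evaluate
# them on the giant component
giant = G.subgraph(max(nx.connected_components(G), key=len))
l = nx.average_shortest_path_length(giant)
diameter = nx.diameter(giant)

print(n, e, mean_degree, l, diameter, c, r)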
if a vertex i has k_i neighbors , the maximal number of potential links between them will be k_i ( k_i - 1 ) / 2 . analogously , we define e_i as the actual number of links between the neighbors of i ; the clustering coefficient of the node shall then be defined as c_i = 2 e_i / ( k_i ( k_i - 1 ) ) [eq2] , and the clustering coefficient for the whole network is simply the average of [eq2] over all nodes . many real networks show a modular structure , implying that groups of nodes organize in a hierarchical manner into increasingly larger groups . this feature can be observed as a power - law fit of the averaged clustering coefficient versus degree : c(k) ~ k^{-1} [eq3] . additionally , we test the small - world phenomenon on the network . for that we require that the network be small world if l ≳ l_rand and c ≫ c_rand are both satisfied , where l_rand and c_rand are , respectively , the average path length and the clustering coefficient of a random network built with the same size and degree distribution . we also intend to measure the assortative mixing by degree , which captures the notion that nodes of high degree often associate with similarly highly connected nodes , while nodes with low degree associate with other less linked nodes . this quantity is given by the pearson correlation coefficient r of the degrees at either end of every edge of the network . newman showed that real social networks tend to be assortatively mixed by degree ; conversely , gleiser sustained that disassortativity of a social network may signal artificiality , and in our context , a fictional social network . the size of the giant component is an important network property which , in some fashion , measures the connectivity by capturing the maximal connected component of a network . it is also stated that in scale free networks , removal of influential nodes causes the giant component to break down quickly , demonstrating vulnerability . this is an important feature that real social networks may have . however , the process depends on how we define the importance of a node in the network . as well as degree , the `` betweenness '' centrality of a given node indicates how influential that node is in the net . this measure can be defined as an amalgamation of the degree and the total number of geodesics that pass through a vertex . if σ_st is the number of geodesics between vertices s and t , and the number of these which pass through node v is σ_st(v) , then the betweenness centrality of v shall be given by g(v) = Σ_{s≠v≠t} σ_st(v) / σ_st ; suitably normalized , g(v) will be 1 if all geodesics pass through v . with this node importance defined , it is possible to perform the targeted attack , which is the removal of the most important nodes , seeing how the size of the giant component behaves after the removal ( a sketch of this procedure is given below ) . in addition to the targeted attack , we shall perform a random attack where , differently from the targeted attack , the vertices to be removed are chosen at random . the main difference between these two kinds of attacks may show us some kind of intrinsic organization within the social network .
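the attack procedure itself can be sketched in a few lines ; here the removal order by betweenness is computed once up front , a common simplification that may differ from the exact protocol used in the original analyses .

# a minimal sketch: remove a fraction of nodes (by betweenness, or at random)
# and report the surviving giant-component size relative to the original network
import random
import networkx as nx

def giant_size(G):
    return len(max(nx.connected_components(G), key=len)) if G.number_of_nodes() else 0

def attack(G, targeted=True, fraction=0.2, seed=0):
    H = G.copy()
    n0 = G.number_of_nodes()
    if targeted:
        bc = nx.betweenness_centrality(H)
        order = sorted(bc, key=bc.get, reverse=True)   # most central nodes first
    else:
        order = list(H.nodes())
        random.Random(seed).shuffle(order)
    H.remove_nodes_from(order[:int(fraction * n0)])
    return giant_size(H) / n0

# real social networks are expected to degrade much faster under the targeted
# attack than under the random one:
# print(attack(G, targeted=True), attack(G, targeted=False))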
as a complement to all topological measurements described until now , we also applied an algorithm called walktrap that captures the dense subgroups within the network , often known as `` communities '' , via random walks . this method allows us to describe the composition of the `` communities '' in terms of their character arrays and topological configurations . along with iliad , the odyssey of homer expresses with fierceness and beauty the wonders of the remote greek civilization . the epics date to around the viii century b.c . , after the development of the writing system using the phoenician alphabet . it is also known that odyssey carries some echoes from the trojan war narrated mainly in iliad . recalling again carron's and kenna's paper , of the three myths analyzed , the network of characters from iliad showed properties most similar to those of real social networks . in addition they maintained that this similarity perhaps reflects the archeological evidence supporting the historicity of some conflict occurring during the xii century b.c . the poem's title ( odyssey ) comes from the name of the protagonist , odysseus ( or ulysses , in the roman adaptation ) , son and successor of laertes , king of ithaca and husband of penelope . the epic has as its central scenario the protagonist's journey back home after his participation in the trojan war . this saga takes odysseus ten years to reach ithaca after the ten years of warring . the epic poem is composed of 24 chants in hexameter verses , where the tale begins 10 years after the war in which odysseus fought siding with the greeks . it is worth noting that the narrative has inverse order : it starts with the closure , or the assembly of the gods when zeus decides odysseus's journey back home . the text is structured in four main parts : the first ( chants i to iv ) , entitled `` assembly of the gods '' ; the second ( chants v to viii ) , `` the new assembly of the gods '' ; the third ( chants ix to xii ) , `` odysseus's narrative '' ; and the fourth ( chants xiii to xxiv ) , `` journey back home '' . odyssey , a masterwork after all , holds a set of adventures often considered more complex than iliad ; it has many epopee aspects that are close to human nature , while the predominant aspect of iliad is to be heroic , legendary and of godlike wonders . however , there is a consensus that odyssey completes iliad's picture of the greek civilization , and together they bear witness to the very genius of homer , both being pieces of fundamental importance to universal poesy in the occident . as a careful textual analysis was performed , we managed to identify 342 unique characters bound socially by 1747 relations [fig1] . we should point out that this network may be socially limited ; it rather captures some spotlights on the societies of that time . we define a social relation between two characters when they have met in the story , speak directly to each other , cite one another to a third party , or when it is clear they know each other .
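a minimal sketch of the community step , assuming python - igraph , whose community_walktrap method implements the random - walk algorithm named above ; the toy edges are placeholders , not the actual character list .

# walktrap merges nodes that short random walks tend to keep together,
# and returns a dendrogram of communities
import igraph as ig

edges = [("odysseus", "penelope"), ("odysseus", "telemachus"),
         ("penelope", "telemachus"), ("zeus", "athena"), ("athena", "odysseus")]
g = ig.Graph.TupleList(edges, directed=False)

dendrogram = g.community_walktrap(steps=4)   # length of the random walks
clusters = dendrogram.as_clustering()        # cut at maximal modularity
for i, members in enumerate(clusters):
    print("community", i, [g.vs[v]["name"] for v in members])
print("modularity:", clusters.modularity)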
to avoid possible misleading interpretations of the poem's social relations , we studied different translations and editions of odyssey . the basic differences among the odyssey translations generated no significant deviation in the network creation process . a summary ( table [tab1] ) of the topological properties found was compiled along with carron's and kenna's results for their described mythological networks . as expected , the social network analyzed has an average path length similar to a random network built with the same size and average degree . additionally , it also has a high clustering coefficient compared to the random network , indicating the small world phenomenon . the hierarchical feature of the network is displayed in [fig2] , where the mean clustering coefficient per degree is plotted . it is possible to verify that nodes with smaller degree present higher clustering than those with higher degree ; the decay of this relation approximately follows [eq3] . we may interpret that high degree vertices integrate the small communities , generating the unification of the whole network . ( table [tab1] caption : size ( n ) , number of edges ( e ) , average path length ( l ) , diameter ( d ) , clustering ( c ) , size of giant component ( g_c ) and assortativity ( r ) . odyssey* , beowulf* and táin* are the same original networks with some character modification . )
|
the intriguing nature of classical homeric narratives has always fascinated occidental culture , contributing to philosophy , history , mythology and , straightforwardly , to literature . however , what would be so intriguing about homer's narratives ? at first gaze we shall recognize the very literary appeal and aesthetic pleasure presented on every page across homer's chants in odyssey and rhapsodies in iliad . secondly , we may perceive a biased aspect of its story contents , varying from real - historical to fictional - mythological . encompassing this glance , there are some new archeological findings that support the historicity of some events described within iliad , and consequently within odyssey . considering these observations and using complex network theory concepts , we built and analyzed a social network gathered across the classical epic , odyssey of homer . longing for further understanding , topological quantities were collected in order to classify its social network qualitatively as real or fictional . it turns out that most of the properties found belong to real social networks , besides assortativity and the giant component's size . in order to test the network's possibility of being real , we removed some mythological members that could imprint a fictional aspect on the network . carrying out this maneuver , the modified social network resulted in assortative mixing and a reduction of the giant component , as expected for real social networks . overall we observe that odyssey might be an amalgam of fictional elements plus real human relations , which corroborates other authors' findings for iliad and archeological evidence .
|
since its inception by bennett and brassard in 1984 , quantum cryptography has made the transition from a concept to a technology mature enough for commercial development . there are several flavors of quantum cryptography or quantum key distribution . the initial formulation , and all current commercial systems , implement so - called prepare and send ( pas ) protocols , where some degree of freedom of light is prepared by one party , alice , and sent to the other party , bob , who then measures it in one out of several complementary bases . estimation of the errors in the measurement results of the receiver allows both parties to place an upper bound on the knowledge of an eavesdropper , and is used for a subsequent removal of this knowledge in a privacy amplification step . another family of protocols evolved out of a proposal by ekert in 1991 ( e91 ) . these protocols use entanglement as the main resource , and combine some of the measurements on the biphotons such that a bell inequality can be tested , or the state is tomographically estimated to evaluate the knowledge of an eavesdropper . an important development was an explicit way to calculate the amount of information leaked to an eavesdropper out of a less than perfect violation of a bell inequality , which makes it possible to implement this idea in a practical system with imperfect sources and measurement devices . these protocols reduce the assumptions about the physical implementation ( like e.g. the size of the hilbert space used to encode information ) in comparison with most pas qkd schemes . an entanglement - based bb84-type qkd scheme was described by bennett , brassard and mermin in 1992 ( bbm92 ) . there , the prepare part of bb84 is replaced by a measurement scheme similar to the receiver side , but the knowledge of an eavesdropper is still evaluated from the observed errors . this results in a larger fraction of final key bits than under a full e91 protocol in its quantitative version , and probably maintains the insensitivity against an unknown size of the hilbert space , as long as the measurement devices can be trusted . furthermore , this scheme retrieves the randomness for the key used for encryption directly out of the measurement process on a quantum system , and does not need to provide for an active choice of a key bit . this qkd scheme has been demonstrated in the field using optical fiber links without amplifiers or signal regeneration stages . if the link is to be established ad hoc , e.g. in a mobile environment , or it is not feasible to have a fiber deployed ( e.g. in the satellite qkd proposals ) , propagation of the photons through free space is necessary . a free space transmission channel using polarization encoding of the qubits has the advantage of not inducing decoherence ( negligible birefringence of air ) , and has low absorption under clear weather conditions . so far , such entanglement based qkd systems over free space have been demonstrated at night , taking advantage of low background light levels .
in this paper we demonstrate daylight operation of a qkd system implementing a bbm92 protocol . continuous operation over a full day / night cycle brings free space entanglement based qkd one step closer to the stage of development of free space pas protocols , where such daylight operation has been shown . the main challenge for operating over a free space channel in daylight is to handle the high background from the sun . first , actively quenched avalanche photodiode ( apd ) detectors may be subject to irreversible destruction when exposed to an excessive amount of light ; such a situation may occur if there is excessive scattering in the optical communication link . for passively quenched apds this is not a problem , since the electrical power deposited into the device can be limited to a safe operation regime at all times . second , saturation of detectors leads to a reduced probability of detecting photons at high light levels . this effect can usually be modeled by a dead time or recovery time for the device . for passively quenched apds , this time is on the order of a microsecond , but may be over an order of magnitude smaller for actively quenched devices . while modeling the saturation with a single dead time may not completely reflect the details of the re - arming of a detector , it gives a useful estimation of the fraction of time a detector can register photoevents . given an initial photoevent rate r ( i.e. , the rate a detector with no recovery time would report ) , a detector with dead time τ will register a rate of r_det = r / ( 1 + r τ ) ( [ eq : saturation ] ) . third , a high background level will lead to detection events which are mistaken for the detection of a photon pair . these are uncorrelated in their polarization and lead to an increase in the quantum bit error ratio ( qber ) , which is used to establish a bound for the knowledge of an eavesdropper . in the following , we estimate the operational limit for generating a useful key under such conditions , assuming an implementation of a symmetrical bbm92 protocol , i.e. , both complementary measurement bases are chosen with an equal probability of 50% on both measurement units .
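the dead - time correction reconstructed above is easy to tabulate ; a minimal sketch , with an assumed 1 μs dead time :

# registered rate for a detector with dead time tau (non-paralyzable model):
# r_det = r / (1 + r * tau); saturates at 1/tau for large r
def detected_rate(r, tau=1e-6):
    return r / (1.0 + r * tau)

for r in (1e4, 1e5, 1e6, 1e7):
    print(f"initial {r:.0e} /s  ->  registered {detected_rate(r):.3e} /s")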
assuming that all quoted rates already include detector efficiencies , we can characterize a pair source by its single event rates , denoted here s_1 and s_2 , and its coincidence rate r_c . we denote the transmission of the entire optical channel as t , in which we include absorptive losses in optical components , the air , geometrical losses due to imperfect mode transfer from an optical fiber , and losses in spatial filters . the signal or raw key rate for a symmetric bbm92 protocol is given by half of the detected coincidence rate , r_s = t r_c / 2 . for an external background event rate r_b , a coincidence time interval of τ_c , and assuming no correlations between source and background events , the accidental coincidence rate with matching bases is given by r_acc = s_1 r_b τ_c / 2 , assuming that only one of the detectors , here with index 2 , is exposed to the background events . imperfections in a practical entangled photon pair source and the detector projection errors are often characterized by the visibilities of polarization correlations v_hv and v_45 . the intrinsic qber of the qkd system with a symmetric usage of both bases is then given by q_i = ( 2 - v_hv - v_45 ) / 4 . the polarization of background events on one side can be assumed to be uncorrelated with the photons detected in the other arm , thus the qber due to accidentally identified coincidences is 50% . the total qber of the complete ensemble is given by the weighted average over both components , q = ( q_i r_s + r_acc / 2 ) / ( r_s + r_acc ) . the detector saturation modifies both signal and accidental rates similarly to equation ( [ eq : saturation ] ) by the same dead time correction factor , where we assume an equal distribution of photoevents over all four detectors , resulting in an effective dead time constant of τ / 4 . therefore , the resulting qber in equation ( [ eq : qbertot ] ) does not get affected . however , the signal rate does , leading to the modified expression r_s ' = r_s / ( 1 + r_2 τ / 4 ) , where r_2 is the total event rate on the receiver side . for typical parameters in our experiment ( s_1 = 78kcps , s_2 = 71kcps , r_c = 11kcps , t = 15% , q_i = 4.3% , τ_c = 2ns ) , the total detector rate on the receiver side , the available raw key bit rate and the resulting qber are plotted as a function of the external background rate in figure [ fig : saturation ] . above a certain background rate , q would exceed the limit of 11% for which a secret key can be established against individual attack schemes . it is instructive to consider the excess qber due to background events : in a parameter regime useful for key generation , r_acc ≪ r_s and q_i ≪ 1/2 , this quantity can be approximated by δq ≈ s_1 r_b τ_c / ( 2 t r_c ) . while the source property r_c / s_1 and the channel transmission t are typically optimized already , the only way to reduce the excess error is to reduce the background rate r_b and the coincidence time window τ_c . the limitation on reducing τ_c is the timing jitter of all detectors , which in our case is on the order of a nanosecond . emphasis thus has to be drawn to reducing the background rate . we prepare the polarization - entangled photon pairs in a source based on type - ii parametric down conversion ( pdc ) in a non - collinear configuration . it is pumped with a cw free - running diode laser with a power of 30mw and a center wavelength of 407 nm , producing pairs at a degenerate wavelength around 814 nm ( similar to ) in single mode fibers .
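a minimal sketch of this background model ; the symbol assignments ( s_1 , s_2 , r_c , t , q_i ) are assumptions recovered from the garbled source text , so the numbers are illustrative rather than an exact fit . reassuringly , at a background rate near 5e5 cps the model returns a qber of about 6.5% , matching the value quoted later in the text .

# signal rate, accidental rate and total QBER versus external background rate,
# following the reconstructed equations; dead time tau_d is an assumed value
s1, s2, rc = 78e3, 71e3, 11e3     # source singles and coincidence rates (1/s)
T, q_i = 0.15, 0.043              # channel transmission, intrinsic QBER
tau_c, tau_d = 2e-9, 1e-6         # coincidence window, per-detector dead time (s)

def model(r_b):
    r_s = 0.5 * T * rc                       # raw key rate (matching bases)
    r_acc = 0.5 * s1 * r_b * tau_c           # matching-basis accidentals
    q = (q_i * r_s + 0.5 * r_acc) / (r_s + r_acc)
    sat = 1.0 / (1.0 + (T * s2 + r_b) * tau_d / 4)   # four detectors share the load
    return r_s * sat, r_acc * sat, q         # q is unaffected by the common factor

for r_b in (0.0, 1e4, 1e5, 5e5):
    r_s, r_acc, q = model(r_b)
    print(f"r_b={r_b:8.0f}  r_s={r_s:7.1f}  r_acc={r_acc:6.2f}  qber={q:.4f}")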
when directly connected to single photon detectors , we typically observe single rates per arm of 78kcps and 71kcps , with a coincidence rate of 12kcps . the visibilities of polarization correlations in the hv and ±45° bases are % and % , respectively . while these sources have been substantially surpassed in quality and brightness , this particular device is both simple and robust . the minimal incident angle between the sun and the line of sight was about . as endpoints in our transmission channel , we use a pair of custom telescopes to transmit one member of the entangled photon pair across a distance of 350 m for convenient logistics . the relative orientation of both telescopes is adjusted using manual tip / tilt stages with an angular resolution of , mounted on tripods intended for mobile satellite links . the telescopes are not actively stabilized , but this could be added for spanning larger distances , or to compensate for thermal drifts in the mounting stages . similarly to , the sending telescope consists of a fiber port , a small achromat with mm to reduce the effective numerical aperture of the single mode fiber , and a main achromat with mm and 75 mm diameter , transforming the optical mode of the fiber to a collimated gaussian beam with a waist parameter of 20 mm . nominally this results in a rayleigh length of 1.6 km at our operation wavelength , well above our target distance . a combination of spectral , spatial and temporal filtering is used to reduce the background to tolerable levels . at the receiving end , an identical mm achromat as the front lens focuses the incoming light onto a pinhole of 30 μm diameter at its focal position for spatial filtering . assuming diffraction - limited performance of that lens , this corresponds to a solid angle of . the pinhole is then imaged with a magnification of 6.8 through an interference filter onto the passively quenched silicon avalanche detectors with an active diameter of 500 μm in a compact module that passively performs the random basis selection for the measurement ( see figure [ fig : setup ] ) . our pair source has a measured spectral width of 8.7 nm , given by the phase matching conditions and the geometry of the collection . an interference filter with a peak transmission of 72% and a full width at half - maximum of 6.7 nm was chosen to maximize the amount of signal transmitted and eliminate the background outside of the spectral region of the source ( see figure [ fig : spectra ] ) . this filter reduces the ambient background level by about two orders of magnitude , much less than what can be achieved in pas experiments based on extremely narrow band lasers and matching spectral filters . a significant reduction of background events was also achieved by addressing scattering from various elements in the field - of - view ( fov ) of the detector , and from scatterers close to the optical channel ( see figure [ fig : telescopes ] ) . reduction of the fov with a smaller pinhole is ultimately limited by diffraction ; we found a 30 μm pinhole to be the optimal choice when considering pointing accuracy and signal transmission . this corresponds to a fov of mm diameter for our test range which will strongly contribute to daylight background counts . a circular area with a diameter of about 3 fov as well as the inside of the sending telescope is covered with low scattering blackout material . the blackout area was also shielded against direct sunlight . together , these steps reduce the background by about 12db .
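the quoted rayleigh length can be sanity - checked from the two numbers that survive in the text :

# rayleigh length of a collimated gaussian beam: z_R = pi * w0**2 / lambda
import math
w0 = 20e-3        # waist parameter, 20 mm (from the text)
lam = 814e-9      # operation wavelength, 814 nm
z_R = math.pi * w0**2 / lam
print(f"z_R = {z_R:.0f} m")   # ~1540 m, consistent with the quoted 1.6 km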
a set of apertures at the receiver telescope removes light coupled to the detector by multiple reflections from outside the line - of - sight . five concentric apertures extending 30 cm upstream , and seven apertures with tapered diameters downstream of the main receiver lens , matched the receiving mode and reduced the background by about 3 - 4db . the processing of detection events into a final key has been described in , and the software is available as open source . each detector event results in a nim pulse which is sent to a custom timestamp unit with a nominal resolution of 125ps , referenced to a local rb oscillator . our time stamp units exhibit a dead time of 128ns , and are able to transfer up to events per second to a commodity host pc via a usb connection . the timing information on one of the sides is then losslessly encoded as differences between consecutive events with an adaptive resolution , and , together with the basis information , sent to the other side on a classical channel , in our case over a standard wireless tcp / ip connection . the encoding , together with a small overhead , consumes about 13% more bandwidth than necessary according to the shannon limit . to minimize the bandwidth for this communication , the timing information was sent from the source side with the lower overall detection rate during daylight conditions . to identify corresponding photon pair events , the temporal correlation of the two photons generated in the pdc process is used , with a coincidence time window determined by the combined timing jitter from both photodetector sets , the timestamp electronics , and the time difference servoing . for that process to work , an initial time difference between the two receiver units due to different timing origins and light propagation is determined to a resolution of 2ns using a tiered cross correlation technique on a set of detector events acquired over seconds . once that time difference is established , coincidences are identified within a time window of . its center drift due to residual frequency differences between the two reference clocks is tracked with a servo loop with an integration time constant of 2s for events falling in a time window around the expected center for coincidence events . we were able to resynchronize the system during daylight conditions at a coincidence rate of 1500cps up to an ambient light level of 250kcps , well below saturation of the detectors . to maintain a common time frame when no useful signal is available for servoing , one of the clock frequencies was manually adjusted such that the relative frequency difference was . this would allow a loss of signal over a period of two hours without loss of timing lock . again , the tight time correlation of the photon pairs emerging in pdc acts as a natural way of comparing time differences and synchronizing clocks at a distance . the ability to resynchronize during daytime and the use of the pdc signal for mutual calibration of the clocks makes this system very robust against signal interruptions or temporal unavailability of the channel . in the discussion of temporal filtering we have assumed that all detectors on one side have the same relative lag , or more generally , that their temporal response is identical .
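a toy stand - in for the offset search and coincidence identification described above ; the 3 ns window is an assumed value , not the experiment's actual setting .

# find the clock offset between two timestamp streams (in ns) by
# cross-correlating binned arrival times, then count coincidences in a window
import numpy as np

def find_offset(t_a, t_b, bin_ns=2.0, span_ns=1e6):
    bins = np.arange(0.0, span_ns, bin_ns)
    h_a, _ = np.histogram(t_a, bins)
    h_b, _ = np.histogram(t_b, bins)
    # circular cross-correlation via FFT; the peak position gives the lag
    corr = np.fft.irfft(np.fft.rfft(h_a) * np.conj(np.fft.rfft(h_b)), n=len(h_a))
    lag = int(np.argmax(corr))
    if lag > len(h_a) // 2:
        lag -= len(h_a)                      # unwrap negative lags
    return lag * bin_ns                      # convention: t_a ~ t_b + offset

def count_coincidences(t_a, t_b, offset_ns, window_ns=3.0):
    t_b = np.sort(np.asarray(t_b) + offset_ns)
    n = 0
    for t in t_a:
        i = np.searchsorted(t_b, t)
        if any(0 <= j < len(t_b) and abs(t_b[j] - t) <= window_ns / 2
               for j in (i - 1, i)):
            n += 1
    return n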
this detector equivalence is not guaranteed , but is necessary for efficient time filtering and , more importantly , to prevent information leakage to an eavesdropper . figure [ fig : histos ] shows the measured time differences between those pairs of detector combinations which contribute to the key generation . the figure shows that detectors from the same basis are well matched , but there is a significant difference between the two bases . given our detector assignment , the information leakage is 0.52% and 0.44% in the hv and ±45° bases , respectively . for continuously pumped sources , extraction of timing information by an eavesdropper would need a measurement of the presence of a photon in the communication channel without disturbing the polarization state ; with pulsed sources , however , the problem becomes more acute , as the pulse train provides a clock with which to compare the publicly exchanged timing information . the experiment was run continuously over a period from 9.11.2008 , 18:00 sgt to 14.11.2008 , 2:00 sgt over four consecutive days . in this period we saw extremely bright sunlight , tropical thunderstorms and partly cloudy weather ; over the whole period the rate of detected pairs and background events varied by about 2 orders of magnitude . in figure [ fig:24hrs ] we show the results collected over two consecutive days . on the second day we identified raw coincidences . after sifting , this resulted in of raw uncorrected bits , with a total of errors corrected using a modified cascade protocol , which was carried out over blocks of at least 5000 bits to a target bit error ratio of . for the privacy amplification step , we arrive at a knowledge of an eavesdropper on the error - corrected raw key determined by ( a ) the actual information revealed in the error correction process , and ( b ) the asymptotic ( i.e. assuming infinite key length ) expression for the eavesdropping knowledge inferred from the actually observed qber of an equivalent true single photon bb84 protocol . privacy amplification itself is carried out by binary multiplication / addition of blocks of raw key vectors with a length of at least 5000 bits with a rectangular matrix filled with a pseudorandom balanced bit stream from a 32 bit linear - feedback shift register , seeded with a number from a high - entropy source for each block . we are left with of secure bits for this 24 hour period , corresponding to an average key generation rate of 385 bits per second ( bps ) . in these conditions , the key generation rates are far from uniform during the acquisition period ; we see a maximum secure key generation rate of 533bps in darkness and a minimum of 29bps around noon in rainy conditions . the raw key compression ratio in the privacy amplification step should actually also take care of a limited entropy in the raw key due to part - to - part variation in detector efficiencies . this information was obtained before the main key generation process by establishing the complete correlation matrix ( see table [ tab1 ] ) out of an ensemble of 148493 coincidence events with matching bases . ( table [ tab1 ] caption : correlation events between each of the four detectors on both sides . ) the asymmetry between 0 and 1 results in the hv basis is , and in the ±45° basis . using again entropy as a simple measure of information leakage , this detector asymmetry would allow an eavesdropper to obtain 0.45% of the raw key for events in the hv basis , and 0.18% in the ±45° basis .
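a minimal sketch of the privacy amplification step ; the lfsr taps are a standard maximal - length choice , not necessarily the register used in the experiment , and the 1 - 2h(q) key fraction is a shor - preskill - style stand - in for the combined bound described above .

# privacy amplification: GF(2) product of the error-corrected key with a
# rectangular matrix of pseudorandom balanced bits from a 32-bit LFSR
# (pure-python bit generation is slow; this is for illustration only)
import numpy as np

def lfsr_bits(seed, n, taps=(31, 21, 1, 0)):
    """Fibonacci LFSR over a 32-bit state (taps of a maximal-length polynomial)."""
    state = seed & 0xFFFFFFFF
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        out[i] = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 31)
    return out

def privacy_amplify(raw_key, out_len, seed):
    """Compress raw_key (0/1 array) to out_len bits via a pseudorandom matrix."""
    m = lfsr_bits(seed, out_len * len(raw_key)).reshape(out_len, len(raw_key))
    return (m.astype(np.int64) @ np.asarray(raw_key, dtype=np.int64)) % 2

def secret_fraction(qber):
    """Asymptotic BB84-style rate 1 - 2*h(q) with binary entropy h."""
    h = -qber * np.log2(qber) - (1 - qber) * np.log2(1 - qber)
    return max(0.0, 1.0 - 2.0 * h)

key = np.random.randint(0, 2, 5000)
out = privacy_amplify(key, int(len(key) * secret_fraction(0.043)), seed=0xDEADBEEF)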
at the moment , however , it is not obvious that a simple reduction of the final key size in the privacy amplification step due to various information leakage channels would be sufficient to ensure that the eavesdropper has no access to any elements of the final key . we also note that the choice between the two measurement bases is not completely balanced ; the ratio of hv vs. ±45° coincidences is . furthermore , this asymmetry varies over time . for the combined asymmetry between logical 0 and 1 bits in the raw key we find around 51.5% during night time , and 54.0% during daytime . a system which captures this variability in detection efficiencies ( and would also allow one to discover selective detector blinding attacks ) would have to monitor this asymmetry continuously . as introduced in section [ chap : theory ] , we can estimate how well the experiment performs for a given number of background events . figure [ fig : saturation ] shows theoretical values for background and signal rates according to equations ( [ eq : qbertot ] ) and ( [ eqn : rs ] ) , and experimental data for the 25000 recorded outputs of the error correction module during the two days of the experiment . the dead - time affected detector response is also shown assuming . the night time periods with and a total dark count rate of 7kcps contribute events in the low background regime of the experiment , forming a vertical line to the left of figure [ fig : saturation ] . we can not differentiate between fluctuations due to changes in the source and those in the transmission channel , but since the source itself was protected against thermal fluctuations , we attribute them to variations in transmission due to changes in the coupling of the telescopes . the strongly fluctuating background during daytime contributes to the broadly scattered data between r_b = 20 and 500kcps . if the source properties and channel coupling were constant , the deviations of and from the theoretical values would both be randomly distributed . figure [ fig : saturation ] , however , shows more structure in than in , which we attribute to changes in the coupling between the telescopes due to thermal expansion . nevertheless , the experimental values fit the theoretical prediction well . we note that saturation of the detectors is never a problem . ( caption of figure [ fig : saturation ] , for parameters representative of our experiment : the detected background rate shows saturation due to the intrinsic dead time of the four detectors ; fewer counts are detected at higher count rates . the observed background rate increases up to 450kcps , which also leads to a reduction of the sifted key rate by 20% and an increase of the resulting qber up to 6.5% . the efficient filtering of the ambient light prevents a higher background , which would lead to an increase of the qber above the threshold of 11% , where no private key can be established between the two parties , at a count rate of 1.8 . this threshold is not reached during the whole experiment , thus continuous operation is possible when the coupling between the parties is maintained . ) there are two contributors to the variability in the key generation rate : first , atmospheric conditions such as rainfall reduce the transmission and thus the number of raw key events before the error correction and privacy amplification steps , but the qber remains unchanged . on the other hand , we have extremely bright conditions where accidental coincidences increase significantly .
in this regime , as the background rises , the signal rate is reduced due to the dead time of the detectors . furthermore , the qber increases according to equation ( [ eq : qbertot ] ) , occasionally preventing the generation of a secure key . but even under bright conditions , the system still keeps track of the time drift between the two reference clocks with the time - correlated coincidences from the source , without a need for re - synchronization . we have demonstrated the continuous running of a free space entanglement - based qkd system over several full day - night cycles in variable weather conditions . a combination of filtering techniques is used to overcome the highly variable illumination and transmission conditions . the software and synchronization scheme can tolerate the remaining 16db variation in light levels without interruption of the key generation . we continuously generate error - corrected , privacy - amplified key at an average rate of 385bps . with the newly available bright sources , larger distances and / or higher key generation rates are possible .
|
many quantum key distribution ( qkd ) implementations using a free space transmission path are restricted to operation at night time in order to distinguish the signal photons used for a secure key establishment from background light . here , we present a lean entanglement - based qkd system overcoming that limitation . by implementing spectral , spatial and temporal filtering techniques , we were able to establish a secure key continuously over several days under varying light and weather conditions .
|
with the progress of modern physics and astronomy , our outlook on the universe is dramatically changing from the stationary universe to the dynamic , ever changing , universe . the dynamical phenomena in the universe appear as variations at various time - scales ranging from the cosmological evolutionary time scale to less than a millisecond . these time - variations are now actually observable with the advent of modern observing equipment and technology . among them , time - variations arising from extreme gravity , as best exemplified by black holes , and from degenerate objects , such as white dwarfs and neutron stars , have been receiving extreme attention from various fields of modern science , as natural laboratories of general relativity and quantum mechanics , which best represent the glorious success of the `` century of physics '' . as can be easily expected from the extreme conditions , astronomical phenomena under strong gravity or in degenerate conditions have extremely short time - scales , and are known to be usually very unpredictable . these astronomical phenomena are now generally called `` transient phenomena '' , or referred to as `` transient objects '' . the concept of _ transient object astronomy _ appeared very late in the history of astronomy and is now flourishing as a new modality of astrophysical research . ( the vsnet is the earliest group which began using the term _ transient objects _ in the present context of astronomical significance . ) this success greatly owed to the recent great advancement of observing modalities , information technology and computational astrophysics . the variable star network ( vsnet ) , the objective of this review , is one of the earliest and most successful international groups that led to the modern success of transient object astronomy . in the research history of _ transient object astronomy _ , there were two major breakthroughs in the early 1990s . one is the development of easy availability of ccds and personal computers , and the other is the advent of the internet . these two breakthroughs played a key role in establishing _ transient object astronomy _ as one of the most popular contemporary astronomy topics . from the traditional viewpoint , ccds were usually used as a `` faint - end '' extension of the former photon detection methods , e.g. photoelectric and photographic observations . this naturally led to a pursuit of observing fainter stars on long - exposure ccd images . the founder of the vsnet was one of the first to break this tradition , and was virtually the first person who systematically turned modern ccd equipment to bright , transient objects , such as classical novae and outbursting dwarf novae ( see the later sections for the scientific achievements ) . the traditional time - resolved observations of classical novae and outbursting dwarf novae were almost restricted to so - called target - of - opportunity ( too ) observations . the best traditional examples include the 1978 outburst of wz sge and the 1986 outburst of sw uma . this kind of observation was usually severely limited by the telescope time allocation , and many important transient phenomena ( e.g.
the 1985 historical long outburst of u gem ) faded away without receiving sufficient observational coverage . traditional proposals for telescope time were also limited because of the transient and unpredictable nature of these phenomena ; there is no guarantee that there is a suitable transient target at the time of the allocated observation . for this reason , systematic observational research on these objects was severely restricted to short - period , less unpredictable objects , with an enormous effort of world - wide coordination ( e.g. vw hyi and yz cnc ) . timely circulation of alerts on transient objects or phenomena is also crucially important , particularly for too - type observations . before the wide availability of the internet , the typical way of communicating such alerts was a phone call from an observer ( usually an amateur astronomer watching variable stars ) to a variable star organization , which was typically relayed ( with some delay ) to local observers for confirmation . the information , if it was recognized as particularly important , was then distributed to world - wide observers usually from the central bureau of astronomical telegrams ( cbat ) via telegrams , direct phone calls , or slow postcards . it usually took , even in the best cases , a day or more before this crucial information was relayed to the actual observer undertaking a too observation . the early stage of transient objects was usually missed because of this delay . for example , the detection of the 1986 historical outburst of sw uma was relayed via an astronomical telegram only when the object reached a historical brightness of , although the outburst was initially reported .5 below the peak brightness . there had been very few early stage observations ( i.e. within a day of the event detection ) of transient objects before the 1990s . this situation drastically changed with the public availability of the internet . in the early times ( around 1990 - 1991 ) , there were only sporadic internet communications on observations , mainly via personal e - mails and on public bulletin board systems . although this strategy worked slightly better than in the past , the situation was basically unchanged in that most observers had to rely on occasional communications or a slow access to news materials . from the necessity of publicly and electronically disseminating urgent astronomical phenomena , there appeared e - mail exploders ( mailing lists ) . the scandinavian _ varstars _ list and the ( mainly ) professional _ novanet _ by the arizona state university team played an early important role in publicly relaying information on transient objects . the early - time progress of these electronic communications is summarized in the _ vsnet - history _ list.http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-history/ + maillist.html . ] the scientific role of the wide availability of these e - mail exploders was recognized upon the appearance of sn 1993j in m 81 . this supernova showed an unusual early - time light curve and a spectral transition from a type - ii to a type - ib supernova .
in communicating nightly rapid changes and distributing the most up - to - date observation strategies , the e - mail exploders played a more crucial role than ever . another advantage of e - mail exploders as a _ standardization tool _ of observations became evident ( figure [ fig : sn93jcht ] ) . early - time non - standard observations were quickly corrected using the updated photometric comparison stars , and questionable observations were examined real - time to clarify the cause . this led to a huge world compilation of sn 1993j photometry updates ( see figure [ fig : sn93j ] ) contributed by a number of volunteers , including the vsnet founder.ftp://vsnet.kusastro.kyoto-u.ac.jp/pub/vsnet/sne/ + sn1993j / sn.mag ] this high - quality , uniform compilation of real - time observations greatly contributed to real - time theoretical modeling of this object , spectroscopy and photometry . we published our own results in . we also contributed to a number of international astronomical union circulars ( iaucs ) . the complete history of this sn 1993j story can also be seen in the _ vsnet - history _ archive . upon the recognition of the importance of e - mail exploders on the occasion of sn 1993j , more systematic efforts were taken to standardize the communication and data reporting methods . in relation to reporting observations , we started widely disseminating regular variable star observations , mainly those submitted to the variable star observers league in japan ( vsolj),http://www.kusastro.kyoto-u.ac.jp/vsnet/vsolj/vsolj.html and see http://vsolj.cetus-net.org/ for the vsolj variable star bulletin page . ] and those personally reported to us . people started recognizing the scientific importance of widely disseminating regular observations , which can be readily reflected in scheduling new observations . new findings based on widely reported observations ( e.g. the superhump detection of a dwarf nova ) were also relayed real - time , which worked as a positive feedback to the original observers . the prototype of vsnet - type e - mail exploders was thus established in 1993 . the next major astronomical event at this stage of the history was the discovery of nova cas 1993 ( v705 cas ) . this nova showed a considerable degree of early - time fluctuations , as well as a later dust - forming episode . during all of the stages of evolution of the nova explosion , the data circulating strategy established at the time of sn 1993j played an impressive role : the comprehensive compilation of v705 cas by yasuto takenaka ( see figure [ fig : v705 ] ) was cited in a nature paper as the best authenticated optical record of this nova . the nova was later even symbolically called _ an electronic nova _ , representing the opening of the new electronic era of transient object astronomy . the information on these transient objects and regular variable star observations was initially relayed manually , or relayed on existing less specialized e - mail exploder systems .
in 1994 , our own e - mail exploder system ( vsnet ) started working . this service smoothly took over the past manual e - mail announcement systems , and immediately received wide attention both from the amateur and professional communities . the establishment of the vsnet thus became the `` prototype '' of world - wide amateur - professional collaborations based on public e - mail communication . this initiative later led to the flourishing vsnet collaboration ( section [ sec : vsnetcollab ] ) . the early history was reviewed by d. nogami et al . ( 1997 ) in `` electronic publishing , now and the future '' , joint discussion 12 of the 23rd iau general assembly . considering the historical significance in the advent of _ transient object astronomy _ and the current unavailability of this document in a solid publication , we reproduce the presented contents in appendix [ sec : app : iaupos ] ( in order to preserve the original contents , we only corrected minor typographical errors ) . the vsnet mailing list system now has more than 1300 subscribers from more than 50 countries all over the world . during the very initial stage of the development of the vsnet , we simply relayed observations to those who ( potentially ) needed the data . however , it soon became evident , from the experiences with sn 1993j and v705 cas ( subsection [ sec : earlyelec ] ) , that there was a need for a newly designed reporting system adapted for electronic data exchanges . since we already had sufficient experience with relaying vsolj reports to the world - wide variable star observers , it was a natural solution to extend the vsolj format to an international version . these changes were minimal , introducing a universal time ( ut)-based system and an extension of the coding system of observers . the details of the reporting system are described in appendix [ sec : app : report ] . by globally collecting data , we soon recognized the necessity of setting up a dedicated e - mail list for reporting observations , _ vsnet - obs_.http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-obs/ + maillist.html . because of the large number of articles , the online archive is subdivided ; see http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/index.html . for the complete message archive . ] the alert list _ vsnet - alert_http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-alert/ + maillist.html . ] was prepared at the same time , and has been one of the most renowned and reliable sources of noteworthy phenomena of variable stars and transient objects ; the messages in _ vsnet - alert _ have been frequently cited in the professional literature as the primary source of information . the vsnet played an important role in the standardization of variable star observing . there are several steps in standardization : ( 1 ) standardization of the reporting format ( see subsection [ sec : rephist ] ) , ( 2 ) standardization of comparison stars , and other minor steps . in standardizing comparison stars , the vsnet group also took an initiative in the history of transient object astronomy . the earliest examples include sn 1993j and classical novae , for which reliable ccd - based ( or sometimes photoelectric ) comparison star sequences were determined and distributed through the vsnet lists . before these standardizing efforts were taken , nova researchers had to cope with often unreliable early reports in iaucs , which were often based on various sources of comparison stars with notoriously diverse photometric quality .
with the advent of the vsnet , modern - day nova observations have now become as reliable as those of other variable stars with well - established comparison star sequences . the vsnet group has been paying attention to the quality of the original discovery reports , and has issued several magnitude updates superseding the iauc announcements . the same effort has been taken for supernova photometry , although the faintness and the large number of target objects have made it a more difficult task than for classical novae . in observations of ( non - transient ) ordinary variable stars , the vsnet took the initiative to standardize the comparison star magnitudes to the modern system , from various old systems including the traditional harvard visual photometric system . the effort was initially taken to revise the faint - end magnitudes for cataclysmic variables ( cvs ) and peculiar variables using the ccd camera at ouda station , kyoto university . these results were continuously released as `` vsnet charts '' through the vsnet lists ( see figure [ fig : gmchart ] ) . several independently contributed efforts , by rafael barbera ( grup destudis astronomics)http://www.astrogea.org/ . ] and fraser farrell ( astronomy society of south australia),http://www.assa.org.au/ . ] as well as those by the vsnet administrator team , were made to write software packages to graphically display this vsnet chart format . brian skiff has been continuously contributing to the vsnet in photoelectrically standardizing sequences for selected variable stars ; this initiative was globally taken over with the ccd works , notably by arne henden and bruce sumner.ftp://ftp.nofs.navy.mil/pub/outgoing/aah/sequence/sumner/ . ] these standardized sequences and charts have their own lists , _ vsnet - sequence _ and _ vsnet - chart_.http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/ + vsnet - sequence / maillist.html and http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-chart/maillist.html . ] for the bright end , we were the first group to extensively use hipparcos and tycho magnitudes at the earliest epoch ( 1997 ) of the public release of these catalogs . we immediately made public variable star charts based on hipparcos and tycho magnitudes.ftp://vsnet.kusastro.kyoto-u.ac.jp/pub/vsnet/charts/hiptyc/ ] since then , this adoption of the standard v - band system ( selected for colors ) for visual photometry has been gradually becoming the global standard . since the public release of the tycho-2 catalogue , this standard was extended to a slightly fainter magnitude.ftp://vsnet.kusastro.kyoto-u.ac.jp/pub/vsnet/charts/ + tycho-2/ ] for poorly observed faint objects , we also took an initiative ( vsnet - chat 700 , in 1998)http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-chat/ + msg00700.html ] to calibrate usno catalog magnitudes since the early release of the usno a1.0 catalog . this calibration has been widely used when an alternative sequence is not readily available .
from the very beginning of its history , the vsnet has been in continuous collaboration with the vsolj . the activity includes hosting the vsolj alert and discussion mailing lists , distributing the vsolj reports and predictions of variable stars , and hosting the public vsolj database and light curves ( see subsection [ sec : publc ] ) . a part of the standardization schemes ( subsection [ sec : standard ] ) has been developed in collaboration with the vsolj administrator group ( see also appendix [ sec : app : report ] ) and through a discussion with world - wide variable star leaders at the international vsolj meeting held in conjunction with the 23rd iau general assembly . the other pioneering aspect of the vsnet - vsolj collaboration was the introduction of ccds for cv photometry . this work was mainly done in collaboration with makoto iida , who took the world initiative to monitor faint cvs with a commercial ccd . this collaboration led to fruitful scientific results ( ny ser , dv dra , bc uma ; see http://www.kusastro.kyoto-u.ac.jp/vsnet/dne/bcuma.html ) . these successful results were demonstrated in a number of international conferences , including the padova cv conference in 1995 and the keele cv conference in 1997 , and the results gradually became digested by the professional community . this pioneering amateur - professional collaboration on transient objects and cvs finally led to the most successful vsnet collaboration ( section [ sec : vsnetcollab ] ) , and the strategy was taken over , albeit with a lesser degree of the original `` flavor '' of amateur - professional relations and publicity policy , by a number of following world - wide groups with similar strategies . since the vsnet service initially started with a coordinating role among mostly northern hemisphere observers , there were at first few reports from southern observers . with the outstanding activity of the vsnet , there arose a number of requests from professional astronomers who were planning too observations of southern dwarf novae . this situation has been progressively improved by increasing contributions from the southern observers , particularly by the members of the royal astronomical society of new zealand ( rasnz).http://www.rasnz.org.nz/ . ] by now , these contributions , notably by rod stubbings , have enabled a number of rare outburst detections and the early circulation of these phenomena . together with the collaboration with southern ccd observers , the scientific achievements of transient object astronomy in the southern hemisphere are explosively growing ( e.g. the microquasar v4641 sgr and southern su uma - type dwarf novae ) , which will be reviewed in later sections . most recently , we started collaborating with the nagoya university team in identifying x - ray transients ( see subsection [ sec : sci : xraynova ] ) with the simultaneous - 3color infrared imager for unbiased survey ( sirius ) camera installed at the infrared survey facility ( irsf)http://www.saao.ac.za/facilities/irsf/irsf.html . ] situated at the south african astronomical observatory , sutherland , south africa . several scientific achievements have already been published . a work on v359 cen in collaboration with the microlensing observations in astrophysics ( moa ) projecthttp://moa.scitec.auckland.ac.nz/ . ] team has also been published .
a message can be relayed ( unless the sender otherwise specifies the usage ) to third - party members , or can be posted to a different mailing list or a public news service . in the earliest times , these messages were archived ( and sometimes made public ) by individual receivers . from the necessity of publicly providing standardized charts and related materials ( see subsection [ sec : standard ] ) , we initially used a private anonymous ftp service operated at the department of astronomy , kyoto university . this was replaced by the official vsnet anonymous ftp service in 1995 july . contributed programs , notably vsncht written by rafael barbera , to graphically display vsnet - format charts were made public from the start of the service . magnitude summaries of selected objects such as new novae , and our own and contributed ccd images of outbursting dwarf novae , were soon made available through the anonymous ftp service . we set up the official vsnet world - wide web ( www ) servicehttp://www.kusastro.kyoto-u.ac.jp/vsnet/ . ] in 1995 june . the web pages have been continuously updated , particularly in announcing newly discovered transient object phenomena . even during these updates , we have made every effort to preserve the original urls for future reference ; almost all pages that existed in the past can be tracked with the original urls even now . the other design feature of the vsnet www system was the combined usage of the www and ftp services . in the earliest times , not all internet users were able to use window - based browsers . we thus set up two ways ( www and anonymous ftp ) of access to the desired data . with this feature , a user is able to get the necessary images or programs even without a browser or a fast internet connection . this feature became , however , less important with the wide availability of window - based browsers . the ftp service soon included a complete archive of the vsnet mailing list messages . in 1997 april , we started the public www service of all archival messages posted to the vsnet lists . the www archive is automatically updated by using the mhonarc system.http://www.mhonarc.org/ . ] we once implemented the namazu - basedhttp://www.namazu.org/ for the namazu project . ] full - text search engine on the vsnet www service , but this was discontinued because publicly available search engines now have equivalent functions . with the development of the vsnet www service , we started a public light curve archive service in 1996 august.http://www.kusastro.kyoto-u.ac.jp/vsnet/lcs/index.html . ] these light curves are drawn and regularly updated from _ vsnet - obs _ reports , which are incorporated into the vsnet database ( appendix [ sec : app : report ] ) . at present , the regular updates of these ( static ) light curve archives are performed by a java - based light curve generator engine wrapping the linux - based vsnet database system . the vsnet light curve archive also hosts light curves drawn from the vsolj and association française des observateurs d'étoiles variables ( afoev)http://cdsweb.u-strasbg.fr/afoev/ . ] public databases , by courtesy of the respective organizations . these light curves can be easily reached from the vsnet top page , as well as from the afoev website . we also implemented individual variable star pageshttp://www.kusastro.kyoto-u.ac.jp/vsnet/gcvs/index.html .
generated from the gcvs electronic edition. these pages provide handy links to the light curves (vsnet, vsolj, afoev) and links to the relevant pages and charts on the vsnet. soon after the establishment of the data reporting system, we also implemented a common gateway interface (cgi)-based data search engine in 1996 june (http://www.kusastro.kyoto-u.ac.jp/vsnet/etc/searchobs.html). this service returns the observations of a specified variable star selected from the vsnet public database, which is coherently updated from the regular variable star reports to the vsnet. this type of interactive variable star data browser was the first in the world, and became a prototype of subsequent similar services. this interactive data search engine has been widely used and frequently referred to in the professional literature. a number of www services, including the well-known "latest supernovae" page (http://www.rochesterastronomy.org/snimages/) by david bishop, provide links to the most up-to-date compilations of variable star reports by directly referring to the vsnet data search engine. this www-based data search engine provides the modern-day extension of the early-time magnitude summaries (subsection [sec:openingera]). in 1998 march, this vsnet data search engine furthermore started providing a java applet-based interactive light curve interface (http://www.kusastro.kyoto-u.ac.jp/vsnet/etc/drawobs.html), with which a user can freely browse the data (both vsnet and vsolj observations) with a graphical user interface (gui); a toy version of this report-to-light-curve flow is sketched below. this was one of the earliest java applications in a public astronomical service. as described in section [sec:vsoljcolab], the vsnet played a pioneering and essential role in establishing a new modality of internet-based amateur-professional collaboration. this direction was one of the main aims of the vsnet from the very start of its internet presence. among various kinds of variable stars, cvs are the "canonical" class of transient objects, as briefly reviewed in subsection [sec:newwindow]. with the common interest in cvs and related systems, the vsnet amateur-professional collaboration originally focused mainly on cvs, especially on unpredictable outbursts of dwarf novae. the actual collaborative studies were done on the existing vsnet lists, most frequently on _vsnet-alert_ and _vsnet-obs_.
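to illustrate the report-to-database-to-light-curve flow concretely, here is a minimal sketch in python. it is not the vsnet implementation (which, as noted, was built around a database with java and cgi front ends); the one-line report layout, the object name, and all magnitudes below are simplified, illustrative assumptions (the actual report grammar is defined in appendix [sec:app:report]).

```python
# minimal sketch of the report -> database -> light-curve flow.
# assumed simplified line format: "OBJECT YYYYMMDD.fraction MAG OBSCODE";
# the real vsnet grammar (see the reporting-format appendix) is richer.
import matplotlib.pyplot as plt

def parse_report_line(line):
    """parse one observation line; returns
    (object, date, magnitude, is_upper_limit, observer) or None."""
    fields = line.split()
    if len(fields) != 4:
        return None                       # comment or malformed line
    name, date_s, mag_s, obs = fields
    try:
        date = float(date_s)              # e.g. 19950405.512 (ut)
        limit = mag_s.startswith("<")     # "<14.0" = fainter-than estimate
        mag = float(mag_s.lstrip("<"))
    except ValueError:
        return None
    return name, date, mag, limit, obs

def light_curve(lines, target):
    """collect (date, mag) pairs for one object, skipping upper limits."""
    rows = filter(None, map(parse_report_line, lines))
    return sorted((d, m) for n, d, m, lim, _ in rows
                  if n == target and not lim)

# illustrative dummy reports, not real observations
reports = ["ALCOM 19950405.512 12.8 ABC",
           "ALCOM 19950406.498 13.1 XYZ",
           "ALCOM 19950410.533 <14.0 ABC"]
dates, mags = zip(*light_curve(reports, "ALCOM"))
plt.plot(dates, mags, "o")
plt.gca().invert_yaxis()                  # brighter is up in a light curve
plt.xlabel("date (yyyymmdd.fraction)")
plt.ylabel("magnitude")
plt.savefig("alcom_lc.png")
```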
in most cases , real - time reports of visual detections of outbursts in dwarf novae , usually after some verification process involving the vsnet administrator team , triggered the actual observing campaigns .this process was usually performed within several hours and a day of the detection .this prompt reaction to event triggers of transient objects later enabled an efficient reaction to gamma - ray burst ( grb ) triggers ( section [ sec : sci : grb ] ) .the unique feature of the vsnet campaigns on transient objects is that they are a collaborative effort between visual observers and ccd observers .the vsnet is historically the first organization that realized the high productivity involving traditional visual variable star observations , although this importance in transient object astronomy had long been stressed and had been a dream among researchers .the early - time successful cooperative works include : recurrent nova v3890 sgr ( , phone call ) , detection of superhumps in aq eri ( , phone call and e - mail ) , detection of superhumps in v1251 cyg ( ; , phone call and e - mail ) , wx cet in 1991 ( independent detection and e - mail ) , ef peg ( ; , phone call and e - mail ) , hv vir in 1992 ( , e - mail ) , sw uma in 1992 ( , phone call ) .since then , most information is relayed by e - mail and e - mail exploder systems ( most of the works were conducted upon response to real - time outburst detection alerts reported through the vsnet ) : v344 lyr ( ) , hy lup = nova lup 1993 ( ) , ak cnc ( ; ) , t leo ( ) , kv and ( ) , ay lyr ( ) , cy uma ( ) , kv and in 1994 ( ) , fo and ( ) , tt boo ( ) , discovery of er uma ( ) , which will be described in subsection [ sec : sci : dwarfnova ] , pu per ( ) , hs vir ( ) , go com ( ) , dh aql ( ) , v1159 ori ( ) , hv aur ( ) , v725 aql ( , own detection ) , rz lmi ( ) , v1028 cyg in 1995 ( ) .this continuous stream of scientific reports based on the vsnet amateur - professional cooperations brought a great impact on the community . during this early - epoch collaborative works through the vsnet , several noteworthy rare ( typically once in a decade to several decades ) phenomena occurred and were studied in unprecedented detail through the vsnet : al com in 1995 ( ; : the outburst was detected by an aavso member , soon relayed to the vsnet , which enabled the detection of early superhumps " , which are observed only for several nights after the start of the outburst .see ; for our summaries of this event).http://www.kusastro.kyoto - u.ac.jp / vsnet / dne / alcom1.html .the star was nominated as the star of the year " in this conference . ]this information on the vsnet further enabled extensive follow - up studies by different groups ( ; ; ; ; ; ) .the 19961997 outburst of eg cnc ( ) was a great surprise .this outburst was detected by patrick schmeer , a vsnet member , and immediately relayed through the vsnet alert system .together with the early detection of early superhumps ( ; ) , the object showed an unexpected sequence of post - superoutburst rebrightenings ( ) .this unexpected phenomenon was discovered by ourselves and the collaborative efforts through the vsnet.http://www.kusastro.kyoto-u.ac.jp/vsnet/dne/egenc.html .it was our greatest pleasure that a number of speakers in the wyoming cv conference in 1997 presented this vsnet webpage as representing the most unusual activity of a dwarf nova .] 
this detection of the spectacular rebrightening phenomenon, which most clearly illustrated the power of real-time information exchange, also led to a number of observational and theoretical papers. this impressive phenomenon exhibited to the public, through the vsnet, "real-time science" in the making. the 1999 outburst of the recurrent nova u sco (figure [fig:usco]; http://www.kusastro.kyoto-u.ac.jp/vsnet/novae/usco.html) again illustrated the ability of the international alert network system provided by the vsnet. the outburst detection was made by patrick schmeer, and the alert was immediately disseminated through the vsnet. with this quick notification, a very stringent upper limit became immediately available, which had been obtained less than four hours before the outburst detection. the early news enabled an american observer to catch the real optical maximum (mag 7.6), which was more than one magnitude brighter than had been supposed for this recurrent nova. it is very likely that delays in delivering information were partly responsible for the underestimate of the maximum magnitudes in past outbursts. almost all important findings near maximum light (within one day of detection) were obtained before the relevant iauc was issued. the vsnet collaborative study on this outburst additionally led to the first-ever detection of eclipses during the outburst. this outburst again produced a rich scientific outcome from various researchers: when compared to the results from the previous outbursts (1979, 1987), the 1999 result could even be referred to as the "victory in the electronic era" of transient object astronomy. [figure [fig:usco] (fig5.eps): the 1999 outburst of u sco.] after experiences with these and other spectacular transient phenomena (which will be reviewed in section [sec:science]), we set up new lists (_vsnet-campaign_, http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign/maillist.html, and other sublists), which primarily deal with campaigns on selected targets. with the establishment of these dedicated lists, we started referring to our amateur-professional world-wide collaboration group as the vsnet collaboration. this (not too tightly bound) group has come to harbor the subsequent intensive studies undertaken in the vsnet. figure [fig:obsmap] shows the global distribution of contributors. the vsnet campaign list and sublists have thus been progressively established since 2000, encapsulating a wide range of transient astronomical phenomena. the vsnet campaign lists are subdivided into categories based on object classes (e.g. _vsnet-campaign-dn_ for dwarf novae, _vsnet-campaign-xray_ for x-ray binaries). most recently, following the discovery of very unusual objects or phenomena of public interest, we have sometimes set up a separate list focused on a single object: e.g. _vsnet-campaign-v4641sgr_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-v4641sgr/maillist.html) for the microquasar v4641 sgr (subsection [sec:sci:xraynova]); _vsnet-campaign-v838mon_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-v838mon/maillist.html) for the most unusual stellar explosion (v838 mon) with an astounding light echo; and _vsnet-campaign-sn2002ap_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-sn2002ap/maillist.html)
for the nearest hypernova sn 2002ap (see subsection [sec:sci:sn]). these new features enabled interested theoreticians to share real-time information on these most unusual objects. the number of individual vsnet campaign lists is 36 (2003 august). summaries of the activities of the vsnet collaboration have been compiled by makoto uemura and issued on a weekly basis (vsnet campaign news, http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-news/maillist.html). this information has been summarized as a yearly review of the activity of the vsnet collaboration (http://www.kusastro.kyoto-u.ac.jp/vsnet/summary/). these vsnet campaign news and summaries comprise an important (authorized) part of the nearly real-time contribution from the vsnet to other organizations (section [sec:otheralert]), and are sometimes cited in themselves as a convenient record of the activities of particular objects. nowadays the vsnet campaigns are more or less continuously undertaken. in order to inform campaign contributors of the current targets of interest, we have recently set up a notification list, _vsnet-campaign-target_. [figure [fig:obsmap] (fig6.eps): the global distribution of vsnet contributors.] from the very start of the astroalert system (http://skyandtelescope.com/observing/proamcollab/astroalert/) maintained by the sky publishing co., the vsnet has been nominated as one of the authorized information providers. we now regularly contribute "news from vsnet" to this alert system, primarily to give notice of the transient astronomical phenomena of current interest. we also occasionally issue alerts on particularly urgent phenomena and nova discoveries. the news has been relayed to the aavso and the _astro-l_ mailing list, and is widely distributed as the primary source of information on transient astronomical phenomena. in addition to the main scientific achievements (section [sec:science]), the vsnet group has been engaged in various activities in the field of variable stars and variable star-related matters. here we show our representative activities. the vsnet group has historically been engaged in providing updated information to the general catalogue of variable stars (gcvs, http://www.sai.msu.su/groups/cluster/gcvs/gcvs/) team, and has provided a large number of variable star identifications and suggestions, mainly based on examination of the historical literature with modern technology. the most relevant lists at present are _vsnet-id_ and _vsnet-gcvs_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-id/maillist.html and http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-gcvs/maillist.html), which deal with variable star identifications and the gcvs revision project, respectively. the identifications reported to _vsnet-id_ are also relayed to the simbad (http://simbad.u-strasbg.fr/simbad) office. there is a series of solid papers on systematic variable star identifications. there have also been collaborations with the multitudinous image-based sky-survey and accumulative observations (misao) project (http://www.aerith.net/misao/), which used its own ccd images to identify variable objects near the cataloged positions. these collaborative works were also conducted in collaboration with the vsolj.
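the identification work described above ultimately reduces to positional cross-matching of a variable against existing catalogs. the following is a minimal sketch of that step; the catalog entries, names, and the 10-arcsec matching radius are illustrative placeholders, not vsnet data.

```python
# minimal sketch of positional cross-identification: find cataloged
# sources (x-ray sources, emission-line stars, ...) near a variable.
from math import radians, degrees, sin, cos, acos

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """angular separation (degrees) between two (ra, dec) positions."""
    r1, d1, r2, d2 = map(radians, (ra1, dec1, ra2, dec2))
    c = sin(d1) * sin(d2) + cos(d1) * cos(d2) * cos(r1 - r2)
    return degrees(acos(min(1.0, max(-1.0, c))))  # clamp rounding error

def identify(ra, dec, catalog, radius_arcsec=10.0):
    """return catalog sources within radius_arcsec, nearest first."""
    hits = [(ang_sep_deg(ra, dec, c["ra"], c["dec"]) * 3600.0, c)
            for c in catalog]
    return sorted((s, c["name"]) for s, c in hits if s <= radius_arcsec)

# placeholder catalog mixing an x-ray source and an emission-line star
catalog = [{"name": "RX Jhhmm.m+ddmm", "ra": 123.4567, "dec": -12.3456},
           {"name": "EM* nnn",         "ra": 200.0000, "dec": 45.0000}]
for sep, name in identify(123.4580, -12.3450, catalog):
    print("%s: %.1f arcsec" % (name, sep))
```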
in identifying variable stars, our own chart-plotting system (appendix [sec:app:chart]) has played an important role. the vsnet has also been serving as a world center for reporting newly discovered variable stars. there is a dedicated mailing list, _vsnet-newvar_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-newvar/maillist.html). the reported new variable stars are usually checked by the vsnet administrator group for identification with other astronomical sources (e.g. x-ray sources, infrared sources, and emission-line stars). these identification processes are also conducted on the public mailing list. the reports on new variable stars on _vsnet-newvar_ are now regarded as a primary source of original information by the gcvs team. a preliminary list (newvar.cat) of newly reported objects, or objects with some peculiarity (which can be candidate variable stars), has been kept updated and made public (ftp://vsnet.kusastro.kyoto-u.ac.jp/pub/vsnet/others/newvar.cat) as a reference for those who are looking for new variable stars and variable star identifications. examination of the properties of poorly known variable stars is also part of our regular work (many results have been reported through the _vsnet-gcvs_ list, as well as through some solid publications based on the asas-3 public data; see subsection [sec:novaconfirm]). confirmation of reported nova candidates is another regular task conducted by the vsnet administrator team, as well as scientific work on novae (subsection [sec:sci:nova]). this work takes advantage of the established vsnet software and catalog systems for variable star identification (subsection [sec:vsid]) and for confirming new variable stars (subsection [sec:newvar]). the vsnet system for screening and confirming newly reported nova candidates is one of the most reliable and efficient among all the existing nova confirmation systems. in particular, the vsnet team has even succeeded in recognizing novae from otherwise dismissed new variable star reports (e.g. v1548 aql). the recent discovery of v463 sct is another example of the reliability of the vsnet nova identification system (see http://www.kusastro.kyoto-u.ac.jp/vsnet/novae/hadv46.html for the full story). there are also reverse cases, in which the vsnet played an important role in disqualifying a suspected nova. the most recent example was v4006 sgr, which was originally reported as a probable nova in an iauc. this object was soon identified with a known variable star through the public discussion in the vsnet (http://www.kusastro.kyoto-u.ac.jp/vsnet/novae/v4006sgr.html); the initial discovery announcement was readily corrected within a day of the original announcement. most recently, we refer to the public photometric real-time database provided by the asas-3 team (http://www.astrouw.edu.pl/.../asas/asas_asas3.html) for confirming southern nova suspects. the most recent successful example includes v2573 oph = nova oph 2003. this application of asas-3 public data also holds for confirming new variable stars (e.g. v2552 oph = had v98) and for better characterizing already known variable stars. the individual variable star pages on the vsnet website provide convenient links to the asas-3 pages (see subsection [sec:publc]).
upon recognition of new novae, the vsnet team makes an announcement to nova researchers and the variable star community to enable early - time confirmation and follow - up observations .the vsnet is now recognized as one of the most powerful media for disseminating such a kind of alerts to the open community , and is relied on by many professional nova researchers . for this purpose, vsnet takes an `` open policy '' of any nova ( and supernova , variable star etc . )discovery announcements , i.e. such announcements will be immediately released and made public .the full reasoning of the policy and the actual recommended reporting procedure is described in ://www.kusastro.kyoto - u.ac.jp / vsnet / etc / discovery.html and the links from this page .the vsnet has a dedicated list for announcing on new novae _ vsnet - discovery - nova_,http://www.kusastro.kyoto - u.ac.jp / vsnet / mail/ + vsnet - discovery - nova / maillist.html . ]although this kind of information is usually also posted to _ vsnet - alert _ from the convention . in any case , readers are strongly recommended to refer to the above page before attempting to make an actual nova discovery report . as well as confirmation of nova candidates , confirmation of suspected supernovae is also undertaken from the very beginning of the vsnet .the first example was sn 1995d in ngc 2962 , which was discovered by reiki kushida .the object was confirmed on the same night in japan by three individual observers ( vsnet - alert 30 , 31 , 32 , 1995 february)http://www.kusastro.kyoto - u.ac.jp / vsnet / mail / vsnet - alert/ + msg00030.html and so on . ] , and the discovery announcement led the multicolor photometry starting soon after the discovery .spectroscopic confirmation of several supernovae has also been reported on the vsnet lists .the earliest one was for sn 1995al ( vsnet - alert 266 , 1995 november),http://www.kusastro.kyoto - u.ac.jp / vsnet / mail / vsnet - alert/ + msg00266.html . ] which was distributed well earlier than the formal publication in iauc .such a quick distribution of spectral type of supernova made it possible to schedule larger telescopes or space - borne instruments to observe it .the nearest type - ic hypernova sn 2002ap ( vsnet - alert 7120 , 2002 january)http://www.kusastro.kyoto - u.ac.jp / vsnet / mail / alert7000/ + msg00120.html .] was the most successful recent example ( , ) , which was followed by many instruments including the subaru telescope or the xmm - newton .plentiful information , including prediscovery observations , has been posted to the specially made sublist _ vsnet - campaign - sn2002ap_. these contributions to early - time observation are summarized in subsection [ sec : sci : sn ] , as well as the scientific results drawn from them .the vsnet has been playing a role providing an electronic medium for announcements made by other variable star - related organizations .the examples include the early announcements of aavso newsletters ( vsnet - alert 14 , 1995 january ) , alert notices ( vsnet - alert 58 , 1995 march ) , news flashes ( vsnet - alert 344 , 1996 february ) , website announcement ( vsnet - alert 178 , 1995 august ) , alexishttp://nis - www.lanl.gov/ / alxhome/ . ] transients ( vsnet - alert 321 , 1996 february ) , international toad watch websitehttp://www - astro.physics.ox.ac.uk/ / itw/ . ] ( vsnet - alert 439 , 1996 june ) , website for hungarian astronomical association variable star sectionhttp://vcssz.mcse.hu/ . 
] (haa/vss, vsnet 756, 1996 september), the website of the center for backyard astrophysics (cba, vsnet 872, 1996 november; http://cba.phys.columbia.edu/), the website of the british astronomical association variable star section (baavss, vsnet 893, 1996 november; http://www.britastro.com/vss/), the website of the group of amateur astronomers, czechia (gama, vsnet-alert 596, 1996 november), iau commission 42's bibliography on close binaries (bcb, vsnet 981, 1997 january; http://a400.sternwarte.uni-erlangen.de/ftp/bcb/), and numerous announcements of calls for observations, electronic publications, and international conferences (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-conference/maillist.html). the vsnet website and mailing lists have thus been _a world center of variable star-related announcements_. dwarf novae have historically been the best-studied class of objects by the vsnet collaboration and its precedent mailing list-based cooperation (hereafter referred to as the vsnet collaboration throughout this section). figure [fig:dn] shows a comparison of the long-term light curves of three representative classes of dwarf novae drawn from the vsnet observations. [the data can be obtained from the data search engine (subsection [sec:publc]), or upon request to the vsnet administrator (vsnet-adm). there is no specially requested authorization for the usage of these data in scientific publications other than the usual scientific manner of acknowledgement (please refer to the message from the data search engine for a recommended form of reference). the data usually cover observations since 1995, and are particularly convenient for correlation analyses with other modern observations (e.g. spectroscopy and multiwavelength observations).] [figure [fig:dn] (fig7.eps): long-term light curves of three representative classes of dwarf novae from the vsnet observations.] the early work (up to 1996) almost immediately doubled the number of su uma-type dwarf novae through the detection of superhumps during their superoutbursts. in typical su uma-type dwarf novae, superoutbursts occur only approximately once per year, and typically last for about two weeks. the intervals between superoutbursts (supercycles) are not constant, making it difficult to plan a scheduled observation. observations of su uma-type dwarf novae in the past were therefore more or less target-of-opportunity (too)-type observations (subsection [sec:newwindow]). we first prepared a comprehensive list of candidate su uma-type dwarf novae, selected by various criteria from the literature and from our own observations. this strategy was in some ways similar to other projects, such as: * the high-galactic latitude cv search; * its descendant concept of "tremendous outburst amplitude dwarf novae" (toads); * the recurrent object programme (http://www.theastronomer.org/recurrent.html) and the related extensive work; although all of these other projects or works basically relied on catalog selections or were based on the (usually poorly known) past activities. our strategy was different in that: (1) our selection was based on a comprehensive and extensive search through all available literature (this work was conducted in collaboration with the vsolj), (2) we had examined the palomar observatory sky survey (poss) plates and our own systematic ccd survey to obtain unique identifications and more reliable outburst amplitudes of the objects, (3) we applied a theoretical background
for selecting the objects, and (4) we had amateur collaborators who shared our interest in monitoring faint dwarf novae with a ccd (subsection [sec:vsoljcolab]). the combination of these factors brought an unprecedented success in discovering new su uma-type dwarf novae. since the early success story was already described in subsection [sub:cvcenter], we mainly focus here on the recent contributions of the vsnet collaboration to this field. in the canonical picture of su uma-type dwarf novae, the mass-transfer (driven by angular momentum loss from the binary) in these systems was believed to be the result of gravitational wave radiation (gwr), since the fully convective secondary at these short (below about 2 hr) orbital periods is generally considered unlikely to sustain magnetic braking, which plays a major role in longer-period (above about 3 hr) cvs. since the angular momentum loss by gwr is a unique function of the component masses and the orbital separation, and the masses of the primary white dwarfs and of the main-sequence secondary stars have only a small degree of diversity, the su uma-type stars were historically considered to be a "one-parameter system", i.e. the basic properties are determined by a single parameter, the orbital period. this concept so widely prevailed that statistics and classifications were usually given following it, and numerical simulations implicitly assumed this canonical picture. observationally, searches for new su uma-type dwarf novae were usually restricted to dwarf novae with long recurrence times (the best example being the recurrent object programme). our discovery of the er uma stars, however, completely changed this picture. er uma (= pg 0943+521) was originally classified as a novalike object selected from its ultraviolet excess (palomar-green survey). this object was, together with other pg cvs, regularly monitored by the vsolj members for potential activity. in 1992, makoto iida (vsolj) noticed that this object shows dwarf nova-like outbursts (see the _vsnet-history_ archive for full details of the public reports circulated in 1992-1993). because of the presence of long-lasting states of intermediate brightness (which later turned out to be superoutbursts), this star was originally considered to be a z cam-type dwarf nova, which is characterized by the presence of standstills in addition to dwarf nova-type outbursts (see e.g. sect. 5.4 of the standard reference). in early 1994, following the detection of a bright outburst by gary poyner, our ccd observations revealed the presence of superhumps. combined with the electronically reported visual observations, this object was finally identified as an su uma-type dwarf nova with an unexpectedly short (43 d) supercycle. the shortest supercycle known before the discovery of er uma had been the 134 d of yz cnc. it later turned out, through a discussion at the padova cv conference in 1994, that this object had been independently studied by at least two other groups, including the roboscope team.
among all groups, the vsnet team was the first to unambiguously identify the nature of this object. spectroscopic as well as photometric determinations of the orbital period later confirmed this identification. once this discovery was announced, new members of the group of er uma stars were immediately identified through impetuous real-time competition on the vsnet public lists: v1159 ori and rz lmi. it later turned out that the supercycle of v1159 ori had been independently recognized elsewhere, but had only been interpreted within the classical framework. the supercycle of rz lmi is exceptionally short (19 d), which remains the shortest supercycle on record. later additions to the er uma stars include di uma, an rz lmi-like system, and ix dra, both of which were discovered by the vsnet collaboration. from the standpoint of the disk-instability model, these discoveries immediately led to theoretical interpretations, and significantly contributed to the "unified theory of dwarf novae". these theoretical calculations indicate an unexpectedly high mass-transfer rate (unexpected, that is, from the canonical picture based on gwr-driven angular momentum loss). the extremely short supercycle of rz lmi requires an additional (still poorly understood) mechanism. this theoretical effort recently led to ramifications of ideas, including the effect of an extremely low mass ratio (q = m_2/m_1) or the effect of irradiation, which need to be investigated by future work. it has been speculated that these mechanisms are partly responsible for the manifestation of the unusual properties of the still poorly understood wz sge-type dwarf novae, which will be discussed later (subsection [sec:sci:wzsge]). the required high mass-transfer rate is still a mystery. although there have been a number of suggestions, including the long-term effect of nova explosions (in some sense a modern extension of the "old" discussion of the nova hibernation scenario) and irradiation-induced mass-transfer feedback, none of them has succeeded in explaining the required high mass-transfer rates in er uma. recent observations suggest that the high mass-transfer rates in er uma stars are less likely to result from a secular evolutionary effect, but may be more related to an activity cycle in the secondary star. the origin of the unusually high activity of er uma stars is still an open question. from the observational side, there has been a systematic search, mainly conducted by the vsnet collaboration with the relevant results communicated to the public in real time, for intermediate systems between er uma stars and classical su uma-type dwarf novae. several interesting objects were recognized as a result: ny ser = pg 1510+234 (an su uma-type dwarf nova in the period gap), hs vir = pg 1341 (an su uma-type dwarf nova with very short outburst recurrence times), sx lmi (a low-amplitude su uma-type dwarf nova), ci uma (an su uma-type dwarf nova with irregular supercycle behavior), v1504 cyg (an su uma-type dwarf nova with a short supercycle), v503 cyg (an unusual su uma-type dwarf nova with unusual outburst behavior and a short supercycle), ss umi (a normal su uma-type dwarf nova with a short supercycle), bf ara (a normal su uma-type dwarf nova with the shortest supercycle of its class), v344 lyr (a large-amplitude su uma-type
dwarf nova with a short supercycle), and mn dra = var73 dra (an su uma-type dwarf nova with a short supercycle). the evolution of superhumps in er uma stars is also known to be unusual. early observations reported the presence of large-amplitude superhumps, which later turned out to be part of an unexpected early phase reversal of the superhumps. this phenomenon resembles the so-called late superhumps seen in the late stages of superoutbursts of su uma stars, but the striking difference is that they appear in the early stage of the superoutburst. the origin of this phase reversal is not yet understood. some peculiar features in the superhumps of er uma have also been reported, and a possible link between er uma stars and x-ray transients has been suggested by comparing the evolution of their superhumps. to summarize, the discovery of the er uma stars marked a revolutionary turning point in dwarf nova studies: the su uma-type dwarf novae are no longer "one-parameter systems". research on short-period cvs (mainly su uma-type dwarf novae) is continuously broadening, and is now becoming one of the "mainstreams" of cv research. [figure (fig8.eps): see text.] wz sge-type dwarf novae are a peculiar subtype of su uma-type dwarf novae. the properties of wz sge-type dwarf novae include: (1) a long (10 yr or more) outburst recurrence time, (2) a large (8 mag or more) outburst amplitude, (3) very few, or sometimes no, (isolated) normal outbursts, (4) the presence of "early superhumps", modulations having periods very close to the orbital period, during the earliest stage of superoutbursts, and (5) the frequent occurrence of post-superoutburst rebrightenings. these properties are difficult to explain, even with the recent progress of the disk-instability theory, and the wz sge-type dwarf novae have continuously provided challenging problems to both theoreticians and observers. one of the main difficulties resides in their extremely long recurrence times. if one assumes the standard disk-instability model, the recurrence time is limited by the diffusion time in quiescence. in order to avoid a thermal instability resulting from this diffusion, one needs to assume an extremely low viscosity parameter in quiescence (an order-of-magnitude version of this timescale argument is sketched at the end of this passage). although the origin of such a low viscosity is becoming positively resolved by considering a very cold disk with a low electric conductivity, there still exist a number of arguments against an extremely low quiescent viscosity. for example, some authors assumed evaporation or truncation of the inner disk to prevent the thermal instability from occurring, and others presented slight modifications of these ideas. these models, however, are expected to show much shorter outburst durations than the low-viscosity model. some models thereby assume an enhanced mass-transfer during a superoutburst; there is, however, no concrete observational evidence supporting this supposed enhanced mass-transfer. a careful analysis of the observations of the 2001 outburst of wz sge also supports this lack of enhanced mass-transfer (r. ishioka et al. in preparation). some authors ascribed the origin of the required low mass-transfer rate to the existence of a brown-dwarf secondary star. this possibility, together with the expectations from the theoretical viewpoint
, led to a wide interest in searching for brown dwarfs in wz sge-type dwarf novae. although earlier reports tended to suggest the presence of a brown dwarf, the evidence is less clear from more recent detailed studies. most recently, the observational lack of cvs having brown-dwarf secondaries is even becoming a serious problem. in all aspects of disk-instability problems, the origin of disk viscosity, and the late-stage evolution of compact binaries, wz sge-type dwarf novae continue to be key objects. since the outbursts of wz sge-type dwarf novae are quite rare, these objects have best illustrated the ability and the achievements of the vsnet as a real-time network. before the vsnet was established, most (presumable) wz sge-type outbursts were only poorly studied (e.g. al com, uz boo, pq and, gw lib, and other systems). the establishment of the vsnet has been one of the greatest steps toward understanding the wz sge-type dwarf novae. in particular, without the collaboration with the vsolj and without the vsnet alerts, the early part of the 2001 outburst of wz sge would have remained a mystery. the earliest work on wz sge-type dwarf novae was on hv vir in 1992. although this object had originally been recorded as a classical nova in 1929, we started monitoring it for a future potential outburst as a candidate wz sge-type dwarf nova. the outburst was discovered by patrick schmeer, and was immediately relayed through the alert network. the vsnet team was the first to record periodic modulations (early superhumps) in the light curve. the final result on this superoutburst, once published, has been chosen as a best modern reference on new wz sge-type stars. years later, the object again went into superoutburst, which was observed in detail by the vsnet collaboration. the next object was uz boo in 1994. the outburst detection was also immediately relayed through the vsnet alert network. owing to the short visibility, only a preliminary superhump period of 0.0619 d was obtained, although the post-superoutburst rebrightenings reported through the vsnet raised a possible link between wz sge-type dwarf novae and soft x-ray transients (sxts, or x-ray novae; subsection [sec:sci:xraynova]). this suggestion has since been further substantiated, and this relation is becoming one of the major contemporary topics in sxt outbursts. in 1995 and 1996, the outstanding wz sge-type outbursts of al com and eg cnc occurred. these outburst detections were immediately relayed through the vsnet, and produced a wealth of scientific results, as already introduced in subsection [sec:vsnetcollab]. al com again underwent a superoutburst in 2001. a timely outburst announcement by steve kerr on the vsnet enabled early-time observations while the object was still rising. the detection of growing early superhumps was reported, which was actually the first detection of growing early superhumps before the 2001 spectacular outburst of wz sge itself.
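to put the recurrence-time argument of this subsection in a formula (the order-of-magnitude sketch promised above; a textbook alpha-disk estimate, not a calculation from the cited works): with viscosity $\nu = \alpha c_{\rm s} H$, the viscous diffusion time at radius $r$ is

\[
t_{\rm visc} \sim \frac{r^{2}}{\nu} = \frac{r^{2}}{\alpha c_{\rm s} H}
\sim \frac{1}{\alpha}\left(\frac{r}{H}\right)^{2}\Omega_{\rm K}^{-1},
\]

where $H$ is the disk scale height, $c_{\rm s} \simeq H\Omega_{\rm K}$ the sound speed, and $\Omega_{\rm K}$ the keplerian angular frequency. for a fixed disk geometry the recurrence time thus scales as $1/\alpha$, so decade-long quiescent intervals require a far smaller quiescent $\alpha$ than in ordinary dwarf novae, which is the essence of the low-viscosity requirement discussed above.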
in late 2000, another spectacular outburst occurred, that of rz leo, which had long been suspected to be a candidate wz sge-type dwarf nova. the vsnet collaboration succeeded in detecting both early superhumps and ordinary superhumps, giving credence to the wz sge-type nature of this dwarf nova. the unexpected outburst of the prototype wz sge in 2001 was one of the greatest astronomical phenomena in recent years. it was detected by a japanese amateur observer, tomohito ohshima, and was immediately relayed to the vsnet, enabling early coverage, which was one of the greatest achievements of the vsnet as an alert network (see figure [fig:wzlc]). this outburst produced a burst of scientific results from a number of researchers, both ground-based and satellite-borne: the detection of early superhumps, the growth of ordinary superhumps, the spectroscopic detection of spiral patterns, "real-time" modeling of the outburst, a chandra observation, an hst observation, far-ultraviolet spectroscopy, infrared spectroscopy, extensive photometry (r. ishioka et al. in preparation), and theoretical modeling of the superhump light curve. other wz sge-type dwarf novae observed and reported by the vsnet collaboration include uw tri, ll and, v2176 cyg, cg cma, and v592 her. among them, v2176 cyg showed a "dip" phenomenon, seen for the first time since the 1995 outburst of al com, and v592 her was confirmed to be a dwarf nova with an exceptionally large outburst amplitude. the vsnet collaboration was the first to systematically demonstrate that all well-observed wz sge-type dwarf novae show "early superhumps" during the earliest stage of their superoutbursts (which may be the best defining characteristic of wz sge-type dwarf novae). these early superhumps are usually double-wave (sometimes more complex) variations, which have periods extremely close to the orbital periods. although there are historical and modern versions of the interpretation that the phenomenon results from an enhanced mass-transfer, it is now understood as the result of some sort of resonance in the disk (a 2:1 resonance, or a vertical 2:1 resonance). the identification of rz leo as a wz sge-type dwarf nova is a surprise from this point of view, since the superhump period (0.078529 d) of rz leo is anomalously long compared to the canonical picture of wz sge-type dwarf novae. this identification indicates that neither a brown-dwarf secondary nor an extreme mass ratio enabling the 2:1 resonance may be a necessary condition for the wz sge-type outburst phenomenon. this implication is presently under discussion for wz sge-type dwarf novae. as concerns early superhumps, a smooth transition from the orbital to the superhump period was detected in a more usual su uma-type dwarf nova, t leo. this phenomenon may somehow be related to the evolution of early superhumps.
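for orientation, the location of the n:1 resonance invoked above follows from a standard test-particle estimate (not from the observations discussed here): setting the local keplerian frequency around the white dwarf equal to $n$ times the binary orbital frequency gives

\[
\sqrt{\frac{G M_{1}}{r^{3}}} = n\,\Omega_{\rm orb}
\quad\Longrightarrow\quad
\frac{r_{n:1}}{a} = (1+q)^{-1/3}\,n^{-2/3},
\qquad q = M_{2}/M_{1},
\]

so the 2:1 resonance lies at $r \approx 0.63\,a$ for small $q$. this radius is close to, or beyond, the tidal limit of the disk unless $q$ is very small, which is why early superhumps (attributed to the 2:1 resonance) are usually taken as a signature of an extreme mass ratio, and why the anomalously long superhump period of rz leo is surprising in this picture.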
several large-amplitude su uma-type dwarf novae share some common properties with wz sge-type dwarf novae, particularly in the lengthening of the superhump period. the periods of the superhumps are usually not constant, but show a significant period derivative. these derivatives are usually negative in classical su uma-type dwarf novae. this negative derivative is usually considered to be the result of a decrease in the angular velocity of precession of a shrinking eccentric disk. the decrease may also be a result of inward propagation of the eccentricity wave. a small number of su uma-type dwarf novae, notably many wz sge-type dwarf novae, are known to show positive period derivatives. this effect was first clearly detected in v1028 cyg, although the phenomenon had first appeared in solid publications on sw uma, v485 cen, and the wz sge-type star al com. the su uma-type dwarf novae newly identified by the vsnet collaboration as having positive period derivatives include hv vir (a wz sge-type star), wx cet, eg cnc (a wz sge-type star), and xz eri. the true origin of this phenomenon is not yet well understood. post-superoutburst rebrightenings are also a renowned feature of the wz sge-type dwarf novae, for which an interpretation based on the slow viscosity decay in the early post-superoutburst state has been presented. this mechanism would require a mass reservoir in the outer disk, an idea whose original observational implication had been proposed earlier. the existence, or non-existence, of post-superoutburst rebrightenings has been systematically studied by the vsnet collaboration in almost all su uma-type dwarf novae. recent examples include go com, wx cet, v1028 cyg, v725 aql, and the unusual system ei psc (see subsection [sec:sci:ultrashort]); the statistics have also been presented. a missing link between wz sge-type dwarf novae and ordinary su uma-type dwarf novae has been sought. ct hya was suggested to be one such system, though a more recent statistical analysis suggests a more rigid segregation between wz sge-type dwarf novae and ordinary su uma-type dwarf novae. several systems possibly related to wz sge-type dwarf novae, such as cc scl = rx j2315.5, have also been studied by the vsnet collaboration. su uma-type dwarf novae with long recurrence times and rare outbursts have also been systematically studied by the vsnet collaboration; the objects include ef peg and v725 aql. the origin of the necessary low mass-transfer rate in such systems is still a problem. from the standard evolutionary scenario of compact binaries, there should be a "period minimum" at which the mass-losing secondary star becomes degenerate and the binary period starts to lengthen. this period is observationally determined to be about 1.3 hr, which is about 10% longer than the theoretical predictions. this discrepancy has not yet been resolved, although several attempts have been made to reconcile theory with observation. there exist, however, hydrogen-rich systems with periods well below this theoretical minimum period. the "classical" object is v485 cen; the faintness of this object, however, prevented a detailed observational study.
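the period derivatives discussed above are, in practice, measured by fitting the observed times of superhump maxima with a quadratic ephemeris, $t(E) = t_{0} + P E + \frac{1}{2} P \dot{P} E^{2}$. the following is a minimal sketch of that fit; the cycle counts and timings are fabricated for illustration, not vsnet measurements.

```python
# minimal sketch: derive a superhump period derivative from times of
# maxima via a quadratic fit t(E) = c0 + c1*E + c2*E**2, so that the
# mean period is c1 and dP/dt = 2*c2/c1 (dimensionless).
import numpy as np

def period_derivative(cycles, times):
    """quadratic fit to maximum timings; returns (period, dP/dt)."""
    c2, c1, c0 = np.polyfit(cycles, times, 2)
    return c1, 2.0 * c2 / c1

# fabricated maxima for a 0.06-d superhump with dP/dt = +5e-5
E = np.arange(0, 120, 6, dtype=float)
t = 0.5 + 0.06 * E + 0.5 * 0.06 * 5e-5 * E**2
P, pdot = period_derivative(E, t)
print("P = %.5f d, dP/dt = %+.1e" % (P, pdot))
```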
in the past few years , the vsnet team found a hydrogen - rich , nearby bright system ( ei psc = 1rxs j232953.9 + 062814 ) having a short period comparable to v485 cen ( ; ; ) .both the radial - velocity study ( ) and the superhump period analysis ( ; ) independently confirmed that the secondary star of this system is more massive than what is expected for this orbital period , suggesting that the mass donor is an evolved core with a thin hydrogen envelope . from this finding , suggested , following the evolutionary calculations by , that both ei psc and v485 cen can be ancestors of helium cvs ( or am cvn stars , ; ) consisting of a white dwarf and a mass - losing helium white dwarf .if this interpretation is confirmed , this object would become the first direct observational evidence that helium cvs are descendants of a certain class of cvs with hydrogen - rich appearance ( cf . ; ; ; ) .this object , given its proximity and relatively heavy component masses , is also considered to be an excellent candidate for next generation experiments of directly detecting gravitational wave radiation ( ; ) . in clarifying the nature of ei psc , proper motion studies played an independent important role in identifying the object as a nearby object .this finding was soon confirmed by later researchers ( ) , and the same technique has been applied to different sorts of objects by the vsnet collaboration ( rx j2309.8 + 2135 : , v379 peg : , cw mon : ) .this application of astrometry soon became the global standard in studying cvs and related systems ( , see also recent entries of a survey of proper motions in downes et al .online cv cataloghttp://icarus.stsci.edu/ / cvcat/ . ] ) .helium cvs have been one of the best observed targets by the vsnet collaboration . among them , cr boo has been identified as the first helium er uma star " with a supercycle of 46.3 d ( cf .subsection [ sec : sci : eruma ] ) by .the vsnet team also joined a campaign to study superhumps in cr boo . identified a similar supercycle in the helium cv , v803 cen . further identified standstills in v803 cen , which were initially suggested for cr boo in . later detected a transition of cr boo to a state of a short supercycle ( 14.7 d ) , which they called the second supercycle " .this phenomenon is still difficult to explain . studied v803 cen for its long - term behavior and its 2003 june superoutburst , and revealed that the object ( and probably also cr boo ) shows outburst behavior similar to wz sge . from these studies ,both cr boo and v803 cen have been well - established helium counterparts " to hydrogen - rich su uma - type dwarf novae , in contrast to the traditional vy scl - type ( variable mass - transfer rate from the secondary , see subsection [ sec : sci : vyscl ] ) explanation of high and low states in these systems ( ; ) .this interpretation is perfectly in line with the dwarf nova - type interpretation , although this interpretation was not originally correctly applied to observation ( ; ) .outburst detections of other helium dwarf novae ( e.g. kl dra = sn 1998di ) have been also announced through the vsnet , and provided necessary fundamentalshttp://www.kusastro.kyoto - u.ac.jp / vsnet / dne / kldra.html . ] for detailed research ( e.g. 
). since cvs are close binary systems, high-inclination systems show eclipses. the presence of eclipses in cvs has historically provided the most crucial information on the geometry and fundamental physics of the accretion disk and the accretion stream, and on the clarification of the cause of outbursts. in recent years, the eclipse mapping technique has been used to geometrically resolve the accretion disk by numerically modeling the eclipse light curves (and sometimes line variations) of cvs. this and analogous methods have also been used to study the time-evolution of the accretion disk during dwarf nova outbursts, to discuss the presence of a spiral pattern (which may be the theoretically predicted spiral shocks) in dwarf novae, to map superoutbursting disks, to spectrally resolve the accretion disk, and to directly obtain physical parameters of the accretion disk. the vsnet collaboration has played an important role in studying eclipsing cvs, especially eclipsing dwarf novae. the initial efforts were made to follow the eclipses during the rising phase of an ip peg outburst. the vsnet alert lists took the initiative in systematic studies when the new northern eclipsing dwarf nova ex dra (= hs 1804+6753) was discovered. this action was soon extended to observations of the rare outbursts of an eclipsing su uma-type dwarf nova, dv uma. the 1995 outburst of ht cas, a well-known eclipsing dwarf nova, was also spectacular. in 1995, the vsnet team received a request for an optical ground-based campaign coordinated with the hubble space telescope (hst). the observations by the vsnet team succeeded in correcting the eclipse ephemeris, which was readily reflected in the hst observing schedule, when the object suddenly jumped into an outburst! thanks to this coincidence, we were able to obtain eclipse information only two days prior to the outburst maximum, which precluded an enhancement of the hot spot as would be expected from a mass-transfer burst. the eclipses during this outburst were also followed by another group, who observed this object in response to the outburst detection. the results of the eclipse mapping, together with later outburst observations, have recently been reported. ir com (= s 10932), a system very similar to ht cas, has also been extensively studied by the vsnet team. in particular, we detected the 1996 january outburst and succeeded in taking the earliest eclipse observations; the true nature of this object had remained unclear before then. the real-time circulation of this outburst detection and eclipse information enabled third-party follow-up observations. in the most recent years, the vsnet collaboration discovered a deeply eclipsing bright su uma-type dwarf nova (iy uma = tmz v85) in the northern hemisphere. for the first time in history, this observation yielded the simultaneous discovery of superhumps and eclipses. this system, the only bright normal su uma-type dwarf nova suitably situated for northern telescopes, has been proposed as the best candidate object for next-generation detectors on huge telescopes. this and the subsequent outbursts were followed by a number of teams, resulting in rich physical insights.
with strong emission lines of he ii and c iii/n iii in outburst (such objects are known to be quite rare), this object is a good candidate for spatially resolving a superoutbursting disk by the emission-line eclipse mapping method, as well as with classical doppler tomography of the velocity field. we also succeeded in identifying the supercycle. the other outstanding object is dv uma, which was observed during the entire course of the 1999 december superoutburst following the outburst report by timo kinnunen (http://www.kusastro.kyoto-u.ac.jp/vsnet/dne/dvuma9912.html). this observation was the first to fully cover the early evolution of eclipses in this rarely outbursting system. in 2002 february, a collaborative effort on gy cnc = rx j0909.8+1849 led to the discovery of the eclipsing nature of this dwarf nova. eclipse observations during the 2001 november outburst revealed the noticeable absence of the hot spot during the late stage of an outburst. this observation suggested that gy cnc may be the first long-period object sharing common properties with ht cas and ir com. recent detailed outburst (or superoutburst) observations of eclipsing dwarf novae include xz eri and ou vir (r. ishioka et al. in preparation). both stars show prominent superhumps as well as eclipses. xz eri is the first eclipsing su uma-type dwarf nova with a positive period derivative. ou vir is another object continuously receiving world-wide attention, for which we succeeded in determining the first reliable orbital and superhump periods. v2051 oph is another eclipsing cv, which had been thought to be a low-field polar, and for which the vsnet collaboration first provided an unambiguous clarification of its su uma-type nature by securely detecting superhumps and supercycles. with the help of this clear identification, this object has also been receiving special attention in both ground-based and satellite-borne observations. the recovery and clarification of the nature of the "lost" dwarf nova v893 sco is another noteworthy achievement of the vsnet collaboration. this object had long been lost when katsumi haseda (vsolj) reported an outbursting object (at a nominally different position from that of the originally reported v893 sco) to the vsnet. after careful research of the discovery material, as part of the confirmation process for a new variable star (subsection [sec:newvar]), this newly reported object was eventually identified with the lost v893 sco. this was only the beginning of the story; the object soon turned out to be the brightest, and presumably one of the nearest, eclipsing dwarf novae below the period gap. this object has been extensively studied since its recovery. cw mon shows grazing eclipses during certain stages of outbursts. together with the transient appearance of pulsed signals, the presence of a premaximum halt in the outburst light curve, and relatively strong x-ray radiation, this object has been suspected to be an intermediate polar (see also subsection [sec:sci:ip]).
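accurate eclipse ephemerides underpin studies like those above (e.g. the ht cas/hst scheduling episode). here is a minimal sketch of refining a linear ephemeris $t_{\rm min}(E) = T_{0} + P E$ from measured eclipse times; the timings and starting values are fabricated examples, not vsnet data.

```python
# minimal sketch: least-squares refinement of a linear eclipse
# ephemeris T_min(E) = T0 + P*E from observed minimum times.
import numpy as np

def refine_ephemeris(tmins, t0_guess, p_guess):
    """returns refined (T0, P) and the O-C residuals in days."""
    E = np.round((tmins - t0_guess) / p_guess)   # integer cycle counts
    A = np.vstack([np.ones_like(E), E]).T
    (t0, p), *_ = np.linalg.lstsq(A, tmins, rcond=None)
    return t0, p, tmins - (t0 + p * E)

# fabricated timings for a 0.07365-d eclipser
tmins = np.array([2450000.1234, 2450003.7323, 2450011.0236])
t0, p, oc = refine_ephemeris(tmins, 2450000.12, 0.0736)
print("T0 = %.4f  P = %.6f d  max|O-C| = %.4f d"
      % (t0, p, np.abs(oc).max()))
```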
quasi-periodic oscillations (qpos) are short-period oscillations of limited coherence, widely observed in accreting binary systems including cvs. qpos in cvs are usually subdivided into two classes. one is the dwarf nova oscillations (dnos) observed during outbursts of dwarf novae. dnos have short periods (usually 19-29 s) and long coherence times. the other class comprises qpos proper, which have longer periods (40 to several hundred seconds) and shorter coherence times. we discovered a potentially new class of qpos (super-qpos) during the 1992 superoutburst of sw uma. these super-qpos have long (several hundred seconds) periods and long coherence times (more than several tens of wave numbers). in some cases, the amplitude can be quite large (up to 0.2 mag). the most outstanding feature of the super-qpos is that they are observed only during certain stages of su uma-type superoutbursts. in sw uma (1992) and ef peg, the super-qpos were observed during the growing stage of the superhumps. a similar, but less striking, probable appearance of super-qpos was also recorded by the vsnet collaboration during the early stage of a superoutburst of nsv 10934. during the 2000 superoutburst of sw uma, similar super-qpos temporarily appeared during the decay phase of the superoutburst (vsnet-alert 4331; http://www.kusastro.kyoto-u.ac.jp/vsnet/dne/swuma00.html). these observations suggest that the appearance of super-qpos is closely related to the growth and decay of superhumps, or related to the existence of heating/cooling waves (it has been proposed that some sort of qpos can be an excitation of trapped oscillations around a discontinuity of the physical parameters). different interpretations have also been suggested; for example, the super-qpos may be a result of an interaction between the weak magnetism of the white dwarf and some kind of wave in the inner accretion disk. although this explanation would be compatible with the suggested presence of a weak magnetic field in sw uma, a different mechanism would be needed to explain why these super-qpos appear only during particular stages of superoutbursts. in ef peg, a rapid decrease in the periods of the super-qpos was recorded; this finding suggested a rapid removal of angular momentum from an orbiting blob in the accretion disk, via a reasonable viscosity in a turbulent disk. the origin of the super-qpos is still an open question, but their prominent profile is expected to provide crucial information about the origin of qpos in cvs. although some nova-like cvs (vy scl-type stars, subsection [sec:sci:vyscl]) are best known to show "low states", during which the mass-transfer from the secondary is reduced, this phenomenon has not been clearly confirmed to occur in dwarf novae. although there have been claims of "low states" (ht cas, ir com, ww cet, bz uma), it is not evident whether or not these phenomena directly reflect a reduced mass-transfer from the secondary, since a state change in the disk (especially in the viscosity parameter) would reproduce similar phenomena. extensive studies of selected well-observed dwarf novae have found no evidence for a long-term variation of the mass-transfer. during the extensive work by the vsnet collaboration, we discovered that the z cam star rx and underwent a deep fading in 1996 september (vsnet-obs 3750).http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/obs3000/msg00750.html .
] this fading ( and the lack of outbursts ) lasted until 1997 january , which yielded the first unambiguous detection of a temporarily reduced mass - transfer in dwarf novae .this phenomenon was thoroughly studied by .careful research on the historical light curve indicated that similar phenomena were sporadically observed in rx and ( ; ) , but had been overlooked mainly because of the confusion of the true quiescent identification , and the lack of real - time circulation of information .the vsnet collaboration succeeded in detecting such a phenomenon through real - time , regular monitoring of light curves of dwarf novae , and through a prompt reaction when an anomalous change was observed ( the deep quiescence of rx and was originally confirmed by our own ccd observation ) .this is another aspect how effectively the vsnet alert network worked , besides other outburst - type transient events .this detection , announced world - wide , thereby led to a prompt hst observation , which revealed the presence of a hot white dwarf . further reported the detection of short fading episodes in rx and and su uma , which may be a result of temporary reduction of mass - transfer rate .other dwarf nova - related works , not covered by the above subsections , by the vsnet collaboration include : 1 .dwarf novae in the period gap : gx cas , v419 lyr ( ) ( see subsection [ sec : sci : eruma ] for ny ser and mn dra ) 2 .time - variation in more usual su uma - type dwarf novae and candidates : aw gen ( ) , rz sge ( ) , v1113 cyg ( ; ) , cc cnc ( ; ) , vz pyx ( ) , cy uma ( ; ) , pu per ( ) , aq eri ( ; ) , ct hya ( ) , v364 peg ( ) , qw ser ( ; ) , bz uma ( ) , ci gem ( ) , ty vul ( ) , kv dra ( ) , v844 her ( ) , qy per ( ) , ty psc ( ; ) , v630 cyg ( ) , v369 peg ( ) , uv gem , fs and , as psc ( ) , rx cha ( ) , yz cnc ( ) , ir gem ( ) , ft cam ( ; ) , gz cnc , nsv 10934 ( ; ) , dm dra ( ) , su uma ( ) , dm lyr ( ) 3 .time - variation in ss cyg - type dwarf novae : v1008 her ( ) , v1101 aql ( ) , dk cas ( ) , hh cnc = tmz v36 ( ) , is del ( ) , iz and ( ) , dx and ( ) , ah eri ( ) , cg dra ( ) , 4 .standstills of z cam - type dwarf novae : vw vul ( ) , at cnc ( , doppler tomography ; ) , z cam ( ) , hl cma ( ) , fx cep ( ) , v363 lyr ( ) , ey cyg ( ) , iw and ( unusual z cam star : ) , 5 .quiescent dwarf novae : uv per ( ) , go com ( ) 6 .classification : bf eri ( ; ) , lx and ( ; ) , hp and ( with subaru , ) 7 .statistics and compilation : ; most of the work was done with the help of the alert network and the collaboration described in subsections [ sec : vsoljcolab ] and [ sub : cvcenter ] .as the vsnet has been mediating an enormous number of outburst alerts of dwarf novae since its very early history , these alerts , as well as long - term observations , contributed to world - wide dwarf nova studies by different teams .since they are so numerous , we only list representative ones : * v485 cen : ; * pv per : * tu crt : * ks uma : ; * kv dra : * wy tri : * kx aql : * xy psc : * ah her : ; * cvs in the 2mass survey : * v844 her : * v2051 oph : ; * qz ser : , another peculiar dwarf nova discovered by katsumi haseda ( had v04 ) , and announced in collaboration with the vsnet ( subsection [ sec : newvar ] ) * ip peg : * rx j0944.5 + 0357 : * v1504 cyg : * gz cnc : * v1141 aql : * qw ser : * fs aur : * em cyg : * short - period dwarf novae : * faint cvs survey : * su uma stars : from the beginning of the vsnet , novae and recurrent novae have been widely studied as one of the classical representatives of 
transient objects. the earliest observations include the recurrent nova v3890 sgr (1990), whose exact identification was clarified by us. the next advancement was with v838 her (nova her 1991), whose eclipsing nature and exact orbital period were clarified by our observation ( ). this is the first classical nova whose evolution of eclipses was caught from the early decline stage of the outburst ( ; ; ). the "nova of the century", v1974 cyg (nova cyg 1992), was followed with the advent of the e-mail alert list (see _vsnet-history_ messages). this nova later turned out to be a permanent superhumper system ( ; ; ; ). the vsnet collaboration later contributed to the international observing campaigns of the superhumps (a. retter et al., in preparation). the next major step was with v705 cas (nova cas 1993), as introduced in subsection [sec:openingera]. after this nova, the vsnet has continuously provided public pages on individual novae (http://www.kusastro.kyoto-u.ac.jp/vsnet/novae/novae.html), which are referenced as a primary resource on recent novae. v723 cas (nova cas 1995) has been one of the best studied novae in the vsnet history. the object was discovered by minoru yamamoto, whose report immediately triggered early follow-up observations. several early reports argued against the classical nova-type classification (e.g. ). we were the first, with the enormous amount of information collected by the vsnet, to predict that the object is a premaximum-phase slow nova resembling hr del (vsnet 223; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet/msg00223.html). this prediction was later confirmed by a number of works ( ; ; ; ; ). although it was not a classical nova, v4334 sgr (sakurai's object) in 1996 brought a major breakthrough in "stellar evolution in real time" ( ; ). this star is one of the best studied and discussed variable stars since the late 1990s ( ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ). the vsnet not only relayed this breathtaking discovery of a final helium flash object, but also promptly provided prediscovery observations by kesao takamizawa (vsnet-alert 341; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-alert/msg00341.html) (finally published by ), which have been frequently referenced and employed for theoretical modeling. see subsection [sec:sci:rcb] for its relation with r crb-like stars (cf. ). in recent years, the vsnet collaboration has been issuing scientific results, as well as early announcements and identifications, on individual novae: v2487 oph (nova oph 1998, a recurrent nova candidate: ; ), v4444 sgr (nova sgr 1999, for which the possibility of being a recurrent nova has been discussed: ), v463 sct (nova sct 2000, a fast nova with an unusually prominent premaximum halt: ; ), v445 pup (nova pup 2000, an unusual nova with no indication of hydrogen features: ; ; ; ; m. uemura et al. in preparation), v1548 aql (nova aql 2001, a slow nova initially reported as a more usual variable star: ; ), v1178 sco (nova sco 2001, an object originally reported, confusingly, as a novalike object, which later turned out to be a genuine nova with early-stage oscillations: ; ; ), v2540 oph (nova oph 2002, a large-amplitude slow nova with strong post-outburst oscillations: ; ). the vsnet collaboration also joined multiwavelength campaigns on novae with satellites (e.g.
v4743 sgr: ). the vsnet has recently been relaying discovery, independent and prediscovery detections of novae, as well as spectroscopic confirmations and early photometric observations: v2274 cyg (nova cyg 2001: ), v4643 sgr (nova sgr 2001: ), v4740 sgr (nova sgr 2001 no. 3: ); v4741 sgr (nova sgr 2002: ; ), v4742 sgr (nova sgr 2002 no. 2: ), v4743 sgr (nova sgr 2002 no. 3: ; ), v4744 sgr (nova sgr 2002 no. 4: ), v4745 sgr (nova sgr 2003: ; ), a possible nova (2002) in ngc 205 ( ), and a possible nova (2003) in scutum ( ). the discovery of the very unusual eruptive object v838 mon ( ; ; ) was relayed through the vsnet alert system during the early stage of its eruption, enabling early-stage observations ( ; ). the identification of the object with gsc, iras and 2mass objects was reported. the early stage of this object was most unusual, showing an m-type spectrum at outburst maximum. on 2002 february 2, the object underwent a second major brightening, which was quickly relayed via the vsnet alert system worldwide, and the star became world-famous within a day. the object subsequently faded (through obscuration by the forming dust) and showed a prominent light echo. there have been a number of works based on these observations ( ; ; ; ; ; ; ), which more or less employed vsnet observations and findings when discussing the peculiarity of this object. the origin of this eruption is still a mystery. based on an idea proposed for this object, an attempt was also made to explain the historical mysterious eruption of ck vul. the last two decades have dramatically changed our view of recurrent novae. the discoveries of recurrent outbursts of v394 cra in 1987 ( ; ; ), v745 sco in 1989 ( ; ; ), v3890 sgr in 1990 ( ; ; ) and nova lmc 1990 no. 2 ( ; ) resulted in a dramatic increase of our knowledge of recurrent novae (cf. ; ; ). another epoch of recurrent nova discoveries arrived like a flurry in the late 1990s and early 2000s. all of these recurrent nova outbursts were mediated via the vsnet and followed in detail. the initial object in this series of recurrent nova outbursts was u sco in 1999, whose early history was described in detail in subsection [sub:cvcenter]. this outburst of u sco first enabled eclipse observations in real time during outburst [the eclipsing nature of u sco was revealed only in 1990; the 1999 outburst was the first outburst since this discovery; see also ( ) for the retrospective detection of an eclipse during the 1987 outburst]. this observation led to the first detection of the period change in this system. this finding severely constrained the mass-transfer rate in quiescence ( ; ), which makes u sco the most promising candidate for an immediate precursor of a type-ia supernova.
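the chain of reasoning from a detected period change to a constraint on the quiescent mass-transfer rate can be sketched with the textbook relation for fully conservative mass transfer; we quote it here only for orientation, not as the actual analysis of the works cited above. for transfer from a donor of mass \(m_2\) to an accretor of mass \(m_1\) at a rate \(\dot{m} = -\dot{m}_2 > 0\), the orbital period \(p\) obeys
\[
\frac{\dot{p}}{p} = \frac{3\,\dot{m}\,(m_1 - m_2)}{m_1 m_2},
\]
so a period change measured between two outbursts, combined with estimates of the component masses, yields an order-of-magnitude estimate of the mean mass-transfer rate in quiescence.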
the next object in this series was ci aql. this object had long been suspected to be a dwarf nova, based on the (apparently) small outburst amplitude of the 1917 outburst. based on this potential identification, the object had been monitored by amateur variable star observers, notably by vsolj members and by members of the recurrent object programme (see subsection [sec:sci:dwarfnova]). the proposed quiescent counterpart, however, was found to be an eclipsing binary which did not show cv characteristics. the same conclusion had been reached with snapshot spectroscopy, showing no indication of hydrogen emission lines. with this information, almost all observers stopped monitoring for an outburst, although there was the unexplained presence of a heii emission line. the news of a possible nova detection by kesao takamizawa, on films taken on 2000 april 28, arrived at the vsnet on april 29. the reported position was extremely close to that of ci aql. minoru yamamoto independently detected this phenomenon, and reported it to be a brightening of ci aql. after careful examination of the identification, this possible nova was identified as a recurrent outburst of ci aql, 83 years after the 1917 discovery ( ; ). the nova-type nature of the outburst was soon clarified with spectroscopy, confirming that ci aql is a new recurrent nova. since the object was already known to be an eclipsing binary, the evolution of the outburst light curve and the eclipse profile were precisely followed ( ; ). in particular, a dip-like sudden fading in 2000 november was noted. with these observational constraints, the light curve was successfully modeled, and the model was further refined and extended ( ; ) to explain the unique high/low transitions in the supersoft x-ray source rx j0513.9 in the lmc. its galactic counterpart, v sge, has very recently been identified. these objects are now considered to be promising candidates for precursors of type-ia supernovae (e.g. ; ; for earlier and other suggestions, see e.g. ; ; ; ). the discoveries and modern detailed observations of the u sco and ci aql outbursts thus provided firm observational evidence for recurrent novae and supersoft x-ray sources as immediate precursors of type-ia supernovae. other representative works (outside the vsnet) on ci aql include photometry, spectroscopy ( ; ), a chandra x-ray observation, modeling, and a re-examination of the 1917 outburst. the next discovery of the series was that of im nor (a possible nova in 1920) in 2002 january. the outburst detection by william liller was quickly relayed to the vsnet, and enabled early astrometric work to first firmly identify the quiescent counterpart and its recurrent nova nature ( ; ). the light curve of im nor was published more than 50 years after the 1920 outburst. there had been a suggestion of an identification with the uhuru x-ray source 2u 1536, which was later confirmed to be spurious ( , see also ). although the outburst light curve in 1920 resembled that of the slow recurrent nova t pyx, the unusually faint quiescence inferred from it had been a mystery. the exact identification with a new outburst solved this mystery, through the detection of a considerable variation in quiescence. this suggestion was later confirmed by the detection of a short-period variation with eclipse-like fadings. from the light curve and spectroscopic appearance, it was suggested that both ci aql and im nor are members of a new class of recurrent novae having intermediate properties between classical novae and fast recurrent novae. intermediate polars (ips) are a class of magnetic cvs (mcvs) having a magnetic white dwarf rotating asynchronously with the orbital motion (they are sometimes referred to as dq her stars; for recent reviews, see e.g.
; ; ; ; and chapters 8 and 9 in ). although many "classical" ips are novalike systems with thermally stable accretion disks, there are a number of ips showing transient outbursts. some of them look like dwarf novae (subsection [sec:sci:dwarfnova]), including such well-known objects as gk per ( ; ; ; ; ), do dra (sometimes called yy dra; see, for the official nomenclature, ; ), and ex hya ( ; ; ; ). there has been a long-standing discussion of whether these ip outbursts originate from disk instabilities or from mass-transfer bursts ( ; ; ). the modern understanding is that at least some of them are better understood as mass-transfer events (tv col: ), while others can be understood as disk-instability events (gk per: ; ). rapid circulation of outburst alerts is extremely important for short-period systems, because these ip outbursts are usually very brief (usually less than 1 d) and require prompt follow-up observations (e.g. tv col: ; ; ; ; , ex hya: ; ; ; ; ; ; ; , do dra: ; ; ; ; ; ; ). although these outburst detections have historically been relayed via iaucs, the circulation was not usually rapid enough to enable early-stage observations of these outbursts. the vsnet collaboration has played a role in detecting, and rapidly relaying, these ip outbursts. since ip outbursts tend to cluster (cf. , m. uemura et al. in preparation), rapid electronic announcements of these outbursts have dramatically increased the chance of detailed follow-up observations, including simultaneous observations with satellites. the most remarkable recent example is the simultaneous x-ray/optical observations of two outbursts of do dra, whose optical coverage was based on observations by the vsnet collaboration. the ip outbursts for which the vsnet collaboration played an important role in early notification include those of gk per in 1996 ( ; ; ) and in 1999; the occurrence of the latter was notably predicted by a vsnet member, tsutomu watanabe (vsnet-future 2; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-future/msg00002.html), and its actual rise was observed in detail by the vsnet collaboration (vsnet-alert 2652; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert2000/msg00652.html). these two outbursts were of special importance because detailed modern time-resolved ccd observations set a stringent limit on the expected occurrence of eclipses during outburst. these outbursts also provided an opportunity to study time-resolved spectroscopy of the qpos, as well as x-ray/optical detections of long-period qpos ( ; ). multicolor observations of the 1996 outburst also provided observational constraints on the outburst models. the 2002 outburst of gk per provided an opportunity to study magnetic accretion. the long-term visual data, including the data reported to the vsnet, have been used for analyses of the outburst properties. in most recent years, the vsnet collaboration has succeeded in characterizing the outburst properties of the unusual short-period intermediate polar ht cam. the outbursts are extremely brief, showing precipitous declines during the late part of the outburst.
from the time difference between the outburst maximum and the maximum appearance of the ip pulses during the 2001 outburst, it was concluded that the outburst was triggered by a dwarf nova-type disk-instability phenomenon. the existence of a precipitous later decline can be explained by the truncation of the inner accretion disk. the 2001 outburst of ht cam was also studied by another group. the outburst of ht cam has thus been one of the milestones in the study of the ip-dwarf nova relation. short-period, nearly coherent qpos were also reported in v592 cas, a nova-like star in the period gap ( ). this object has been suggested to be a unique object in the period gap showing both the properties of superhumps and occasional ip-like, nearly coherent photometric oscillations. several other dwarf novae have been suspected of showing ip-type signatures, and were briefly discussed in the relevant parts of subsection [sec:sci:dwarfnova]. bright polars (mcvs with synchronously rotating white dwarfs) have been regularly monitored by the vsnet collaboration. many polars, notably am her (figure [fig:amher]) and v834 cen, occasionally show low states, whose occurrences have been announced through the vsnet alert system. long-term ccd monitoring of faint polars has been regularly reported by berto monard. coordinated multi-wavelength observations have been conducted on several occasions. the targets include am her, ar uma, st lmi, vv pup, an uma and ef eri. some of these collaborative studies have already been published as solid papers. vy scl-type stars are novalike cvs with occasional low states, or fading episodes (cf. ; ; see figure [fig:mv] for the vsnet light curve of mv lyr). in some systems, these low states occur very infrequently (the best example being tt ari: ; ; ; ; ; ). such low states provide a unique opportunity to study the white dwarf atmosphere or to directly detect the secondary star (e.g. ; ). from the viewpoint of the disk-instability model, the decreasing mass-transfer rate would produce a dwarf nova-type disk instability if there is no special mechanism to suppress the instability ( ; ). observations, however, tend to show smooth, monotonous declines ( ; ). this makes a clear contrast to the "low states" in dwarf novae (cf. subsection [sec:sci:dnlow]). there must be a mechanism in vy scl-type stars that somehow thermally stabilizes the disk when the mass-transfer is reduced (cf. ). dense observational coverage of vy scl-type stars is therefore highly needed immediately after the start of their declines. it is also known that vy scl-type stars tend to show superhumps (cf. ). since the mass ratios ( ) of vy scl-type stars are not usually considered sufficiently small to enable excitation of the 3:1 resonance to produce superhumps (cf. ; ; ; ; ), there is apparently a need for an explanation of the cause of the superhumps.
the variation in the mass-transfer rate in vy scl-type stars was considered, and a working hypothesis was presented as to why vy scl-type stars, with their intermediate mass ratios, can show superhumps. this hypothesis also needs to be tested by more observations of vy scl-type stars during different brightness states. the vsnet collaboration succeeded in promptly announcing a rare fading of v751 cyg, whose vy scl-type nature had been suspected more than 20 years ago, but which had shown no comparable fading in recent years. the 1997 fading of v751 cyg was originally reported to the vsnet by laszlo szentasko, and its progress was followed in detail by the vsnet collaboration members, notably with ccd photometry at ouda station. this fading not only provided authentication of v751 cyg as a genuine vy scl-type star, but also enabled x-ray observations which led to the discovery of transient supersoft x-ray emission ( ; ). this observation, suggesting a possible extension of the luminous supersoft x-ray sources (ssxs) toward less-massive white dwarfs, led to a revolutionary change in our view of vy scl-type stars. other rare low states of vy scl-type stars announced through the vsnet collaboration include those of lq peg (= pg 2133+115) in 1999 (the second historical fading: ; ; ) and bz cam in 1999 (the second historical fading: ; ; ; ). the 1999 fading of bz cam is notable in that transient superhumps were detected during the fading, which may give support to the above explanation. the vsnet collaboration also succeeded in presenting the first-ever light curve of v504 cen, which had been suspected to be a vy scl-type star from spectroscopy, but had no solid photometric record establishing its vy scl-type nature. well-known vy scl-type stars, such as mv lyr, have long been among the best-observed targets of the vsnet collaboration. some of these long-term observations were employed to characterize the light curves of vy scl-type stars. the well-known vy scl-type star kr aur was intensively studied by the vsnet collaboration, which led to the detection of short-term variations having power-law type temporal properties. the other notable object is v425 cas, whose low state in 1998 was announced by timo kinnunen through the vsnet, which led to our own discovery of short-term (2.65 d), large-amplitude (up to 1.5 mag) variations. such variations had never been seen in any class of hydrogen-rich cvs; it was suggested that they are dwarf nova-type instabilities in a moderately stabilized disk. this discovery was introduced as the "shortest period dwarf nova" in astrophysics in 2002. x-ray binaries are close binary systems which consist of a compact object and a normal star. mass accretion from the normal star onto the compact object generates strong x-ray emission. a number of x-ray binaries have been discovered as transients (cf. ). their outbursts can be observed at all wavelengths; hence, simultaneous multi-wavelength observations have played a key role in revealing the nature of x-ray binaries and x-ray transients (e.g. ; ). their outburst cycle is generally longer than a year, and in some cases longer than decades. prompt observations of the early outburst phase are hence important. the vsnet has been providing information about x-ray transients which enables prompt observations, not only for optical observers, but also for x-ray, uv, ir, and radio observers.
besides these recent studies, the earliest work by the authors includes an infrared quiescent observation of v404 cyg = gs 2023+338, which first revealed the existence of a photometric period of 5.76 hr. another outstanding early result was the discovery of superhumps and orbital variation in the outbursting x-ray transient gro j0422+32 = v518 per ( ; ; ). this observation has been one of the most comprehensive studies of superhumps in "classical" soft x-ray transients up to now (cf. ). here we focus on the soft x-ray transients for which the vsnet collaboration conducted intense world-wide campaigns. soft x-ray transients are also called x-ray novae, whose common characteristics were established in the mid-1990s ( ; ): their light curves are typically described with a fast rise and an exponential decay (fred). the e-folding time is 30-40 d during the decay phase. in the fred-type outburst, a reflare, or secondary maximum, is observed several tens of days after the outburst maximum. the outbursts are considered to occur due to a sudden increase of the mass accretion rate in an accretion disk ( ; ). radial velocity studies of the secondary star have revealed that a dozen soft x-ray transients contain stellar-mass black holes. soft x-ray transients thus provide an ideal laboratory for the physics of accretion onto black holes, and they are called black hole x-ray transients. in the framework of classical soft x-ray transients, the optical and x-ray emissions originate from the outer and the inner regions of the accretion disk, respectively. simultaneous optical and x-ray observations therefore enable us to study the evolution of the accretion disk and the mechanism of its activity. the vsnet collaboration enables us to obtain densely sampled data throughout outbursts. based on these observations, we summarize the progress of studies of classical soft x-ray transients in subsection [sec:xray:cxt]. on the other hand, several recent transients have exhibited characteristics which are difficult to explain within the classical framework. the prompt observations by the vsnet collaboration have played an important role, in particular, in the research on the luminous fast transients ( ) and rapid optical variations ( ; ; ). we summarize our studies on these new classes of activity in subsections [sec:xray:fxt] and [sec:xray:rapid]. it has been proposed that the outbursts of classical soft x-ray transients are induced by thermal instability of the accretion disk ( ; ). this model can explain the large e-folding time during their decay phase by considering a strong x-ray irradiation which stabilizes the outer disk. this standard picture cannot, however, explain two atypical outbursts, namely those of the fast transients v4641 sgr ( ; ) and ci cam ( ). their outburst duration was only a few days, which is too short to be interpreted as the viscous decay of the classical soft x-ray transients.
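to make this time-scale argument concrete, here is a toy python comparison of fred-type decays; the functional form and all numbers are illustrative assumptions, not fits to v4641 sgr or ci cam.

    import numpy as np

    def fred(t, t_peak=0.0, rise=2.0, tau=35.0):
        """toy fast-rise exponential-decay (fred) outburst profile,
        normalized to unit peak flux; t, rise and tau are in days.
        the form is an illustrative assumption, not a fitted model."""
        t = np.asarray(t, dtype=float)
        return np.where(t < t_peak,
                        np.exp((t - t_peak) / rise),
                        np.exp(-(t - t_peak) / tau))

    days = np.array([1.0, 3.0, 10.0, 40.0])
    for name, tau in [("classical, tau = 35 d", 35.0), ("fast, tau = 1 d", 1.0)]:
        flux = fred(days, tau=tau)
        print(name, " ".join(f"{f:.3f}" for f in flux))
    # the classical curve is still at ~30% of peak after 40 d, while the
    # fast transient has faded by many e-foldings within a few days

a decay with tau of order a day is thus incompatible with the 30-40 d e-folding time that the irradiated viscous-decay picture naturally produces, which is why a different accretion mode is invoked below.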
on 1998 march 31, the all-sky monitor (asm) of the rossi x-ray timing explorer (rxte) detected a new x-ray transient named xte j0421+560 ( ). we were among the first to point out the presence of a supposed symbiotic star, ci cam ( ), within the error region (vsnet-alert 1621; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert1000/msg00621.html). this information was immediately relayed to x-ray and optical observers through the vsnet, and the proposed identification was securely confirmed with the discovery of an outbursting object at the location of ci cam ( ). in contrast to classical x-ray transients, this object started a rapid fading, with an e-folding time of d, just after the outburst maximum ( ). its x-ray spectrum can be described by an absorbed power-law model with a high-energy cutoff, which is atypical for x-ray transients ( ). optical observations reported to the vsnet show that ci cam brightened to 8.8 mag on april 3 (http://www.kusastro.kyoto-u.ac.jp/vsnet/xray/cicam.html), and faded by mag within two days. spectroscopic observations revealed that ci cam is a b[e] x-ray binary ( ). the nature of the compact object is still unknown ( ; ; ; ). the vsnet collaboration further obtained quiescent observations, which revealed the presence of weak activity. v4641 sgr is a variable whose optical spectrum is that of an a-type star ( ; note that the object was not correctly identified in the literature until we published a reliable chart, see subsection [sec:standard]). in 1999 august, tsutomu watanabe, a member of the vsolj, noticed that the object had entered an active state in the optical range ( ). this state was characterized by the presence of a large-amplitude variation having a possible periodicity of 2.5 d. on september 15, the state was terminated by a short outburst reaching 8.8 mag, independently detected by rod stubbings and the kyoto team, following berto monard's detection of a brightening preceding this event (vsnet-alert 3475, 3477, 3478; see http://www.kusastro.kyoto-u.ac.jp/vsnet/xray/gmsgr.html for the full story). all these reports were immediately circulated through the vsnet alert network; the resultant vigil eventually enabled the historic detection of this perfectly unexpected giant outburst. following our notification, a corresponding x-ray outburst was found in the backlog recorded with the rxte/asm; the x-ray outburst had not been discovered in "real time" even with an x-ray all-sky monitor. the x-ray outburst, after reaching an astonishing flux of 12 crab at the x-ray maximum, rapidly faded and returned to the pre-outburst level within only 2 hours ( ). the duration of the optical outburst was also short ( d) ( ). the vsnet collaboration team performed prompt observations of this short x-ray outburst, and revealed that the optical variation exhibited an anti-correlation with the x-ray variation ( ). spectroscopic observations revealed that v4641 sgr consists of a black hole and a late b-type star ( ). as reported below, v4641 sgr experienced active phases in the next couple of years ( ; ; ); however, no outburst comparable to the 1999 september giant one has yet been observed. while their binary components are totally different, the short outbursts of v4641 sgr and ci cam have several common characteristics: first, spatially resolved radio jets were associated with the outburst in both systems ( ; ). second, the peak luminosities reached the eddington luminosity ( ; ). their high luminosity implies that supercritical accretion occurred. since matter falls on almost a free-fall time scale in supercritical accretion, the problem of the short duration of the outbursts can be reconciled. the optical-x-ray anti-correlation of v4641 sgr may be understood with the scenario that the optically thick supercritical accretion
flow absorbed the x-ray emission and re-emitted it as optical emission ( ; ). it is, however, still unknown how the supercritical accretion was induced. in the classical picture, the optical emission is thermal emission from the outer portion of the accretion disk, where the temperature is relatively low ( k). the observed time scales of optical variations are hence long, as for superhumps (e.g. ) and orbital period variations (e.g. ). the black hole binary system gx 339-4 is, however, known to show rapid optical variations on time scales of seconds ( ; ). such short time-scale variations indicate that they originate from an inner portion of the accretion flow. they have been proposed to be cyclo-synchrotron emission from the inner region ( ); however, the mechanism generating the emission and its variations is poorly known. the vsnet collaboration team has recently observed rapid optical variations of two sources, namely xte j1118+480 = kv uma ( ) and v4641 sgr ( ; ). xte j1118+480 was discovered with the rxte/asm on 2000 march 29 ( ). we discovered an optical counterpart at 12.92 mag on march 30 ( ). the optical light curve showed many fluctuations, which suggested the presence of rapid optical variations. short-term (a few tens of seconds) optical variations which correlate with the x-ray variations were then detected ( ). these rapid optical variations have also been proposed to be synchrotron emission from the inner accretion flow ( ). since xte j1118+480 remained in a low/hard state throughout the outburst, the inner region is considered to be filled not by the standard disk, but by an advection-dominated accretion flow (adaf: ). in the adaf region, the gas density is so low that the magnetic pressure is dominant ( ). the rapid optical variations are probably generated at shock regions in such an inner region, which are formed by magnetic reconnection or collisions of blobs in the magnetically dominated accretion flow ( ). after the luminous, short outburst in 1999 september, v4641 sgr experienced a new active phase in 2002 and 2003. the vsnet collaboration succeeded in detecting rapid optical variations during the 2002 active state (figure [fig:v4641flash]), whose detailed features and interpretation are reported elsewhere in this volume ( ). superhumps, which were originally studied in su uma-type dwarf novae, are also observed in soft x-ray transients (e.g. ).
observing the evolution of superhumps was difficult in the case of soft x-ray transients: their rare outbursts, relatively long orbital periods, and long outburst durations made dense sampling throughout an outburst difficult. our intense campaign on xte j1118+480, however, first revealed the evolution of superhumps in soft x-ray transients ( ). the superhump period was at first 0.43% longer than the orbital period, and then decreased during the main outburst. superhumps in su uma-type dwarf novae also exhibit this behavior, which can be interpreted as the contraction of the elliptical accretion disk or the inward propagation of the eccentricity wave. it has been proposed that the reflare is induced by the growing tidal dissipation ( ; ). this model was developed based on the fact that superhumps appeared only after the reflare ( ). on the other hand, superhumps appeared even before the reflare in xte j1118+480 ( ). the vsnet collaboration obtained a dense sample around the reflare of xte j1859+226, which is reported elsewhere in this issue ( ). we detected periodic variations, which may be superhumps, even before the reflare of xte j1859+226. these observations are unfavorable for the above scenario for the reflare. it has also been proposed that strong x-ray irradiation onto the outer accretion disk may induce the reflare (e.g. ).
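the 0.43% period excess quoted above is the kind of number that is often converted into a rough binary mass ratio. a minimal python sketch, assuming a simple linear calibration eps ~ 0.2 q between the fractional superhump excess and the mass ratio (the coefficient and the orbital period below are illustrative assumptions only; published calibrations differ, and this is not the analysis of the works cited above):

    # fractional superhump period excess: eps = (p_sh - p_orb) / p_orb
    eps = 0.0043            # the 0.43% excess quoted in the text

    # invert an assumed linear calibration eps ~ 0.2 * q into a mass ratio;
    # the coefficient 0.2 is an illustrative rule of thumb, not a fit
    q = eps / 0.2
    print(f"implied mass ratio q ~ {q:.3f}")        # ~0.02

    # with a hypothetical orbital period (value chosen only for
    # illustration), the corresponding superhump period is
    p_orb = 0.170                                   # days
    p_sh = p_orb * (1.0 + eps)
    print(f"p_sh = {p_sh:.5f} d vs p_orb = {p_orb:.5f} d")

even with a generous allowance on the calibration, such a small excess points to an extreme mass ratio, which is consistent with a black hole primary and a low-mass donor.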
ss 433 = v1343 aql is an active high-mass x-ray binary (hmxb) with relativistic jets, and the nature of the binary system is still mysterious in many aspects. its optical magnitude is frequently monitored by vsnet observers, and remarkable (brightening or fading) behavior is reported via _vsnet-campaign-xray_ when it occurs. in 1995, 1998, and 2000, simultaneous multi-wavelength observations were organized by the asca team of riken (n. kawai et al., in preparation) in order to determine an accurate ephemeris of the eclipse and to compare light curves in the x-ray and optical wavelengths, which yield clues to understanding the emitting regions in the binary system. in these campaigns, the vsnet played the role of a medium for exchanging information on the optical side, such as calls for optical information, explanations of the background, and practical conditions and notes for observation (e.g. vsnet 103, for the 1995 campaign; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet/msg00103.html). our contributions to the spectroscopic type determination of supernovae are summarized in table [tab:snspec]. confirmatory observations, identifications and similar activities reported in iaucs are also listed in table [tab:snid]. we also run the mailing list _vsnet-campaign-sn_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-sn/msg00785.html), which is now widely known as the most reliable and up-to-date source of information on supernovae. the "latest supernovae" page (subsection [sec:datasearch]) has been collaborating with it; the page also provides usno-a2.0 based charts, made by odd trondal, for almost every supernova brighter than 20th magnitude. the charts give blue and red magnitudes for the photometric reference stars; the latter have been widely used for "cr" measurements (see appendix [app:format]). besides the objects described in subsections [sec:earlyelec] and [sec:conf:sn], we mention here some remarkable examples. sn 1997ef in ugc 4107 was discovered by yasuo sano, one of the most active participants of the vsnet. it was originally announced as a "possible supernova", because its spectrum was quite unusual and could not convincingly be classified as that of a supernova. further spectroscopic observations revealed that it was likely an explosion with massive (several m ) ejecta. the object consequently received the designation sn 1997ef ( , ). it was further suggested that it was likely the explosion of a stripped very massive star, a suggestion supported by theoretical modeling. it is the first example of the so-called "type-ic hypernova", a concept developed after the suggested association of sn 1998bw with grb 980425 ( , ), sn 2002ap (see subsection [sec:conf:sn]), and the unambiguous identification of the supernova signature (sn 2003dh) in grb 030329 (see subsection [sec:sci:grb]). sn 1997ei was discovered by masakatsu aoki in ngc 3963. the first spectroscopy reported in the iauc indicated that it was a type-ia supernova. our spectroscopy showed some peculiarity, and we reported that it could be a peculiar type-ia supernova. from later spectroscopic observations, it finally turned out to be a type-ic supernova ( , ). the sn 1998t case taught us the importance of identification. it appeared in a pair of interacting galaxies, and some catalogues of galaxies gave discrepant designations for them. the blobby nature of the host galaxies also led to misidentifications of the supernova. the discussion on _vsnet-chat_, including the consultation of the ngc/ic project (http://www.ngcic.com/), led us to a correct identification of the galaxy, which was accompanied by precise astrometry. sn 1998bu in m 96 was discovered by marko villi. it was the nearest supernova since sn 1993j in m 81. the first spectroscopy reported in the iauc was a high-dispersion one, which could only determine the depth of the interstellar absorption within our galaxy and the host galaxy. our spectrum revealed that it was a type-ia supernova, and this report was naturally posted to vsnet-alert 1785 (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert1000/msg00785.html). it was distributed earlier than the relevant iauc, which also included other spectroscopy. in response to the type determination, the comptel instrument was pointed towards sn 1998bu in order to detect the line gamma-rays of radioactive decay, which had been detected only from sn 1991t, the peculiar luminous type-ia supernova.
although sn 1998bu is as close to us as sn 1991t, the gamma-ray lines were not detected, which may suggest a diversity among type-ia supernovae in the line gamma-ray emission, as well as in the light curves and in the spectra. the first spectroscopic observation of sn 1998es indicated that it was an intrinsically bright type-ia supernova like sn 1991t. our report confirmed it, giving in addition the spectral evolution and the interstellar extinction. sn 1999dn was a case similar to sn 1997ei. our spectrum suggested that it was of type ic with weak hei lines. the same iauc also contained two other spectroscopic observations, one of which reached the same conclusion, but the other suggested it was of type ia. later spectroscopy revealed that sn 1999dn was an intermediate event between type ib and type ic. sn 2000ch was a very subluminous supernova. it was originally announced as a variable star in the field of ngc 3432. we noticed that the object could be seen on dss images taken since 1998 (vsnet-chat 2908; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/chat2000/msg00908.html); on the other hand, spectroscopy of this object suggested that it is located within ngc 3432. from these findings, the object was assigned a supernova designation, resembling the "type-v" sn 1961v. the comment of the discoverer (vsnet-chat 2944; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/chat2000/msg00944.html) finally supported this classification. sn 2001bf was yet another example similar to sn 1997ei and sn 1999dn. despite its low signal-to-noise ratio, our spectrum clearly showed a deep siii absorption feature, from which we estimated that it was a type-ia supernova. another group suggested that it was of type ic, but the later spectral evolution revealed that it was indeed a type-ia supernova. sn 2002ao was a slightly different case. the first report in the iauc quoted the resemblance to type-iib supernovae ( , ). we estimated that it was of type ic, which is consistent with a later report indicating the resemblance to the type-ic sn 1999cq. the rapid decline of sn 2002ao was another feature in common with sn 1999cq. other iauc issues in relation to the activity of the vsnet administrators include iauc 7033 (sn 1998eg: ), iauc 8101 (sn 2003cg: ), and iauc 8171 (sn 2003gs: ).

table [tab:snspec]: spectroscopic type determinations of supernovae.

sn name   type      vsnet article   relevant iauc   remark
1995al    ia        alert 266       6256            see subsection [sec:conf:sn]
1997ei    ic        alert 1399      6800            see text
1998an    ia        alert 1684      6878
1998aq    ia        alert 1681      6878
1998bu    ia        alert 1785      6905            see text
1998es    ia, pec   alert 2394      7059            see text
1999bg    ii        alert 2816      7137
1999dn    ib/c      alert 3380      7244            see text
1999gn    ii        alert 3842      7336
1999gq    ii        alert 3862      7339
2001bf    ia        alert 5873      7625            see text
2001bg    ia        alert 5871      7622
2001dp    ia        alert 6299      7683
2002ao    ic        camp-sn 339     7810            see text
2002ap    ic, pec   alert 7120      7811            see subsection [sec:conf:sn]
2002bj    ii(n?)    camp-sn 363     7844            luminous
2002bo    ia        alert 7241      7848
2002bu    iin       alert 7259      7864
2002fk    ia        alert 7516      7976
2003j     ii        camp-sn 534     8048
2003k     ia        camp-sn 534     8048

table [tab:snid]: confirmatory observations, identifications and similar activities reported in iaucs.

sn name   vsnet articles                        relevant iaucs            remark
1998t     chat 775, 779, 780, 786, 787          6859                      see text
1999et                                          7344
2000m     alert 4320                            7373
2000p     alert 4363, 4365, 4366, 4368, 4369    7378, 7379 (corrigendum)
2000ch    chat 2908, 2941, 2944                 7415, 7417, 7419, 7421    see text
2000cm    alert 4944, chat 3035                 7436, 7437, 7438
2001dp    alert 6317                            7683
2002ao    alert 7190                            7836                      see text
2002ap    camp-sn2002ap 154                     7836                      prediscovery
2002dm    camp-sn 440                           7921, 7923
2002ed    alert 7441, camp-sn 454               7940, 7943
2003ez                                          8141, 8142

bright symbiotic variables (cf. ; ; ; ) have been well observed by many members of the vsnet collaboration (see figure [fig:chcyg]). these observations provided a number of detections of outbursts and eclipses, which were immediately relayed to more specialized researchers for detailed study. the eclipse phenomenon in the outbursting object fn sgr was discovered through the vsnet regular activity (http://www.kusastro.kyoto-u.ac.jp/vsnet/symbio/fnsgr.html); this work was summarized in ( ). the outburst and possible eclipse phenomenon in v343 ser = as 289 were discovered by kesao takamizawa (= tmz v17, vsnet-obs 8957; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/obs8000/msg00957.html) and minoru wakuda; the final publication is ( ). v1413 aql = as 338 is another object whose eclipsing symbiotic nature (cf. figure [fig:v1413]) was revealed by amateur astronomers (see http://www.kusastro.kyoto-u.ac.jp/vsnet/docs/v1413aql.html for a full story; see also , , ). the outbursts and eclipses were regularly announced in the vsnet, and have been followed by a number of researchers (e.g. ; ). the similarity of the light curves of ch cyg and the supersoft x-ray sources (v sge and rx j0513.9) was reported based on vsnet observations ( ). the 1997 and 2000 outbursts of z and were reported in vsnet-alert 938 (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-alert/msg00938.html), and vsnet-alert 5232, 5233 (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert5000/msg00232.html and http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert5000/msg00233.html), respectively. these outbursts enabled, more or less owing to the vsnet alerts, modern observations of this classical symbiotic binary ( ; ; ; ). short-term variations of v694 mon = mwc 560 were also studied ( ). during the entire period of the observations, the object showed pronounced flickering activity. this work has been referred to as one of the most intensive photometric observations of this peculiar symbiotic variable ( ; see also ( ) for a recent survey work). we now have a dedicated mailing list for symbiotic stars, _vsnet-symbio_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-symbio/maillist.html),
for informing about recent activities, particularly announcements of outbursts and eclipses, of symbiotic variables. r crb stars are hydrogen-deficient carbon stars which show occasional fadings caused by dust formation. a representative light curve of r crb from vsnet observations is shown in figure [fig:rcrb]. early announcements of the fadings of r crb stars can provide the best opportunities to study the formation mechanism of dust in these stars. before the vsnet era, these fadings had been widely announced only when the objects had unmistakably faded (typically 1 mag or more below their usual maximum); the early decline stage had usually been overlooked. the real-time communication via the vsnet public lists broke this historical limitation. the most dramatic instance was with the first-ever fading of fg sge (also known as a final helium flash object) in late 1992 august-september (cf. the history partly recorded in vsnet-history 200, 202). iaucs were issued only when the object had already faded by 1 mag ( ; ; ). this detection of a possible fading (1992 august 30), reported by nobuhiro makiguchi (vsolj, see subsection [sec:vsoljcolab]), was immediately followed up by a number of world-wide observers. this fading was the first of a series of fadings successively occurring up to now (cf. ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ). the discovery of the fading in fg sge brought a breakthrough in the understanding of late-time low-mass stellar evolution: our understanding of fg sge had been slow and limited before this phenomenon (cf. ; ; ), although the unusual nature of this object was recognized more than 30 yr ago and had long been discussed from different standpoints, including a binary hypothesis and a thermal pulse in stellar evolution (e.g. ; ; ; ; ; ; ; ; ; ; ; ). it is now widely believed that fg sge, v605 aql, and v4334 sgr (cf. subsection [sec:sci:cn]), as well as some unusual r crb stars (cf. v348 sgr, e.g. ), comprise a sequence of final helium-flash objects (cf. ). the dramatic fading of es aql, which had been suspected to be an r crb-type star, was first announced through the vsnet alert network; peter williams was the first to detect this object getting fainter than 14.0 on 2001 march 22. based on this information, the r crb-type nature of this object was successfully clarified ( ). the other r crb-type star recognized through the vsnet activity is v2552 oph = had v98 ( , ; see subsection [sec:novaconfirm]). we now have a dedicated mailing list for r crb stars and related objects, _vsnet-rcb_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-rcb/maillist.html), which is best employed by the researchers of this field. more recent announcements of rare r crb-type fadings include the 2003 fading of v3795 sgr (vsnet-rcb 585; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-rcb/msg00585.html). an attempt was also made to predict the future light curve of r crb using vsnet observations, although it was not very successful ( ).
among recent phenomena in be stars (b-type emission-line stars), the case of scorpii is still fresh in our memory. the star, which had been considered to be a non-variable b-type star, underwent a dramatic change in 2000 july. sebastian otero, a vsnet member, visually noticed a 0.1 mag brightening in scorpii and issued an alert through the vsnet (vsnet-be 2; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-be/msg00002.html). the supposed transition to a be star was subsequently confirmed by spectroscopic observations. the star further brightened to a maximum of .8 around 2000 july 30. such a dramatic change in a bright naked-eye star is extremely rare. the only comparable precedent was cassiopeiae in 1937, which brightened to .6. this news was widely distributed through the public news media, as originating from the vsnet, and became one of the most popular astronomical phenomena of that year. this object, after reaching a temporary minimum just following the initial peak, has continued to show remarkable activity up to 2003 ( ; ). on several occasions in 2002 and 2003, the star even brightened close to .5, slightly surpassing in brightness the historical event of cassiopeiae. in the aftermath of this event, visual monitoring of bright be stars has been conducted by a number of vsnet members, most intensively by otero. these observations have detected a number of outbursts, e.g. in centauri (cf. ) and canis majoris (cf. ). the vsnet runs dedicated lists on the be-star phenomenon, _vsnet-be_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-be/maillist.html) and _vsnet-campaign-be_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-be/maillist.html). wolf-rayet (wr) stars are massive, luminous, and hot stars which have lost their hydrogen envelopes, and are considered to be immediate precursors of some kinds of supernovae, and likely of grbs. in spite of their astrophysical importance, wr stars were less conspicuous objects in terms of optical variability. the only known categories of variability in wr stars had been occasional short-period obscurations by dust production, instabilities in the wind, or possible pulsation (see ; ; for recent references). this situation was dramatically changed by the discoveries of the two most actively variable wr stars by the vsnet team (wr 104 = had v82 = v5097 sgr: , wr 106 = had v84 = v5101 sgr: ). both objects were initially reported as variable stars by katsumi haseda, detected during his search for novae. the variables soon turned out to be identical with known wr stars. in particular, wr 104 is a well-known binary consisting of a late-type wr star and an ob star. the most remarkable feature of this object is the presence of a dusty "pinwheel nebula" ( ; ) co-rotating with the interferometric binary. the importance of the detection of a large-amplitude optical variation was immediately recognized and communicated via the vsnet alert network. such a large variation in wr 104 required a non-classical interpretation.
with these conspicuous discoveries, the vsnet significantly broadened the scope of variability studies of wr stars, and this field is now becoming one of the contemporary topics in the study of wr-type activity. combined with the recent advances in grb astronomy, a fundamental understanding of the various phenomena in wr stars will be a matter of key importance. some pre-main sequence stars show dramatic outbursts (fu ori stars: fuors) and smaller outbursts (ex lup stars: exors), probably originating from some kind of instability in the circumstellar disk (e.g. ). outbursts of these objects were occasionally followed by the vsnet collaboration. the best example was the 1995 outburst of v1143 ori (baba et al. in preparation). outbursts of v1118 ori have also been occasionally reported (e.g. ). among pre-main sequence stars, herbig ae/be stars have been among the best-observed objects by vsnet members. the recently discovered likely herbig ae/be star with large-amplitude variations, misv1147, has been widely studied through the vsnet (m. uemura et al. in preparation). other objects of this class, well observed and promptly reported through the vsnet, include ux ori, rr tau, cq tau, ab aur, rz psc and ww vul (see figure [fig:rrtau]). remarkable variations in these objects have been relayed through a dedicated list of pre-main sequence variables, _vsnet-orion_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-orion/maillist.html), and utilized for detailed follow-up studies. some peculiar (or unique) variable stars were also studied by the vsnet collaboration. the most striking objects include v651 mon (the central star of the planetary nebula ngc 2346). this binary system containing a b-type subdwarf underwent a totally unexpected fading in 1981-1985 (for a summary of this event, see ). although no similar phenomenon was recorded in the century-long past photographic records, the object underwent another unexpected fading episode in 1996-1997. this phenomenon was detected by danie overbeek, and immediately reported to the vsnet (vsnet-alert 548; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-alert/msg00548.html). following this announcement, the phenomenon was successfully recorded in detail ( ). there was a sharply defined transient clearing (brightening) even during this fading, which was ascribed to a sharply defined, small (several times cm) lucent structure within the obscuring body. see figure [fig:v651mon] for the recent light curve.
although pulsating variables were only occasionally selected as intensive targets of the vsnet collaboration, the vsnet public data archive (cf. figure [fig:rsct]), as well as the vsolj database, have been frequently used in period analyses and in correlation studies with other observational modalities. these references to the vsnet/vsolj data have been so numerous that we list only the most recent ones: a historical archive for cephei ( ), the evolution of r hya ( ), dust formation in l pup ( ), possible chaotic behavior in r cyg ( ), the period determination of v648 oph ( ), the non-variability of ek and ( ), and the period variation in t umi ( ). numerous new variable stars reported in _vsnet-newvar_ (subsection [sec:newvar]) have been studied in detail, and have been reported in a large number of papers published in the information bulletin on variable stars (ibvs).
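as a concrete illustration of the kind of period analysis for which such long-term archives are used, the following python sketch implements a minimal phase-dispersion-minimization search; the implementation and the synthetic mira-like data are illustrative assumptions, not the method of any particular paper listed above.

    import numpy as np

    def pdm_theta(t, mag, period, nbins=10):
        """stellingwerf-style theta statistic: pooled within-phase-bin
        variance divided by the overall variance; minima of theta mark
        candidate periods."""
        phase = (t / period) % 1.0
        overall_var = np.var(mag, ddof=1)
        num, den = 0.0, 0
        for k in range(nbins):
            sel = (phase >= k / nbins) & (phase < (k + 1) / nbins)
            if sel.sum() > 1:
                num += np.var(mag[sel], ddof=1) * (sel.sum() - 1)
                den += sel.sum() - 1
        return (num / den) / overall_var

    rng = np.random.default_rng(0)
    # synthetic mira-like light curve: 300 visual estimates over ~8 years
    t = np.sort(rng.uniform(0.0, 3000.0, 300))          # days
    mag = (8.0 + 2.5 * np.sin(2.0 * np.pi * t / 331.0)
               + 0.3 * rng.normal(size=t.size))

    periods = np.linspace(200.0, 500.0, 1500)
    theta = np.array([pdm_theta(t, mag, p) for p in periods])
    print(f"best period: {periods[np.argmin(theta)]:.1f} d")   # ~331 d

phase-dispersion minimization is well suited to the unevenly sampled, non-sinusoidal light curves typical of visual archives, which is why it (alongside fourier methods) is a standard tool in this kind of work.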
rho cas, originally classified as a yellow semiregular variable, is now considered to be a low-temperature counterpart (yellow hypergiants: ; ; ; ) of the extremely luminous hot hypergiants (luminous blue variables: lbvs). this object occasionally undergoes temporary optical fadings caused by huge mass-loss events, usually once in a decade or decades (figure [fig:rhocas]). the most spectacular recent event occurred in 2000 ( ). independent detections of this phenomenon, including one by one of the authors (tk), were circulated through the vsnet (vsnet-alert 5186, 5187; http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert5000/msg00186.html and http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/alert5000/msg00187.html), which enabled the first dense optical coverage of this kind of phenomenon. upon recognition of the astrophysical significance of this event, we set up a dedicated mailing list, _vsnet-campaign-rhocas_, in 2000 august (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-rhocas/maillist.html). these observations have provided the primary resource on this rare event, and are referenced on lobel's dedicated page on rho cassiopeiae (http://cfa-www.harvard.edu//). in 2003 there was a small signature of a line variation similar to the precursor event in 2000. this news was widely announced through _vsnet-campaign-rhocas_, and the object has been intensively observed. although classical (other than cv-type or symbiotic-type) binaries are usually not intensive targets for the vsnet collaboration, there have been several calls for observations, particularly for long-period eclipsing binaries (e.g. ow gem, ee cep). visual and ccd/photoelectric observations, as well as comparison star sequence information, have been exchanged on the vsnet lists. the discovery of eclipses in the bright naked-eye star delta velorum was another piece of breaking news mediated through the vsnet (see also ). a dedicated list for delta velorum observations has been set up, _vsnet-campaign-deltavel_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-deltavel/maillist.html). the vsnet also runs a dedicated list on eclipsing binaries, _vsnet-ecl_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-ecl/maillist.html), which is now recognized as one of the world-wide networks for exchanging information on eclipsing binaries. kazuo nagai (vsolj) has been summarizing the times of minima of eclipsing binaries reported to _vsnet-ecl_. blazars (bl lac objects and optically violently variable quasars) are also among the targets of long-term and intensive observing campaigns. we run a dedicated list, _vsnet-campaign-blazar_ (http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-campaign-blazar/maillist.html), for exchanging information on blazar activities and upcoming collaborations with other world-wide blazar groups. from the very establishment of the whole earth blazar telescope (webt: ; http://www.to.astro.it/blazars/webt/), we have been continuously in collaboration with this group. in addition to continuous visual monitoring campaigns on strongly active blazars, such as oj 287, markarian 421, and 3c 279, quick electronic circulation and prompt feedback via the vsnet discussion group led to the notable discovery of an unexpectedly large, short-term intranight variation of bl lac in 1997 (http://www.kusastro.kyoto-u.ac.jp/vsnet/bllac/bllac.html). this intensive observation was initiated by a report of a bright state of bl lac in an iauc, which initially revealed "outbursts" every five to ten days. following this stage, the object entered a more active phase from mid-july to early august. during this stage, real-time comparisons of visual observations through the vsnet discussion group revealed substantial discrepancies depending on the observers' longitudes. these discrepancies, which were much larger than what had been recognized as blazar microvariability, were initially considered to be a result of an inhomogeneous magnitude system. this possibility was soon disproved by the real-time distribution of modern photoelectric comparison star magnitudes. by the end of 1997 july, the incredible short-term variation (0.8 mag in four hours) in bl lac had become an undoubted phenomenon. in response to these visual observations, the kyoto university team obtained long runs of time-resolved ccd photometry at ouda station, revealing unprecedented complexity and fast variation in the light curve. a similar conclusion was reached with time-resolved ccd photometry by tonny vanmunster, which was also rapidly communicated via the vsnet. the variation has power-law temporal properties, analogous to those of agn variability (although the variation in bl lac was much more violent and rapid), and can be tracked down to five minutes. this is one of the shortest time scales ever recorded in blazar optical variation. the early results by the vsnet team were presented at the 23rd iau general assembly held in kyoto in 1997 august. since then, several studies have been performed in collaboration with the webt ( ; ). long-term light curves of selected blazars from vsnet observations have been widely used for correlation with multiwavelength data.
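the "power-law temporal properties" mentioned above are commonly quantified with a first-order structure function. the python sketch below, on synthetic red-noise data (the random-walk generator and all numbers are illustrative assumptions, not the bl lac data), shows the basic computation:

    import numpy as np

    rng = np.random.default_rng(2)

    # synthetic red-noise light curve: a random walk in magnitude sampled
    # every 5 minutes for ~7.5 hours, loosely mimicking violent intranight
    # activity (all numbers are illustrative)
    n = 90                                   # 90 points * 5 min = 7.5 hr
    mag = np.cumsum(0.03 * rng.normal(size=n))

    def structure_function(mag, max_lag):
        """first-order structure function: mean squared magnitude
        difference as a function of time lag (in sampling units)."""
        lags = np.arange(1, max_lag)
        sf = np.array([np.mean((mag[l:] - mag[:-l]) ** 2) for l in lags])
        return lags, sf

    lags, sf = structure_function(mag, 30)
    # slope of log sf versus log lag; ~1 is expected for a random walk,
    # and a break would mark a characteristic variability time scale
    slope = np.polyfit(np.log(lags), np.log(sf), 1)[0]
    print(f"structure-function slope ~ {slope:.2f}")

the shortest lag at which the structure function rises above the noise floor is, in essence, how a statement such as "the variation can be tracked down to five minutes" is made quantitative.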
originally extended from _ vsnet - grb _ , currently provides gcn circular information as well as our own observations by the vsnet collaboration . table [ tab : grbobs ] summarizes our own grb afterglow observations ( not all observations are listed ) , which is an extension of table 1 in . see the corresponding gcn circulars for more details .
grb name | alert type | epoch ( d ) | telescope ( cm ) | magnitude | gcn circular
001025a | ipn | 1.7 | 60 | .0 | 866
001025b | ipn | 1.6 | 60 | .0 | unpublished
001212 | ipn | 2.0 | 25 | .5 | 902
010214 | sax | 0.29 | 30 | .0 | 948
010220 | sax | 1.5 | 30 | .5 | unpublished
010222 | sax | 0.23 | 30 | 17.8 | 984
010222 | | 0.55 | 30 | 19.3 |
011030 | sax | 0.28 | 25 | .0 | unpublished
011130 | hete | 0.47 | 25 | of | unpublished
020124 | hete | 0.079 | 25,30 | of | unpublished
020331 | hete | 0.033 | 25,30 | 17.9 | 1363
020812 | hete | 0.018 | 60 | .2 | 1515
020812 | | 0.038 | 25,30 | .1 | 1521
020813 | hete | 0.36 | 60 | | unpublished
020819 | hete | 0.14 | 60 | .0 | unpublished
020903 | hete | 0.18 | 25 | of or .5 | 1537
021004 | hete | 0.036 | 25,30 | 16.3 | 1566
030227 | hete | 0.046 | 25 | .0 | 1899
030329 | hete | 0.053 | 25,30 | 12.6 | 1989 , 1994 , 2147
030528 | hete | 0.004 | 25 | .0 | 2252
030823 | hete | 1.24 | 60 | .5 | 2370
here we summarize our significant detections and their importance in grb astronomy : _ grb010222 _ : our observation of the afterglow of grb010222 covered a period around the jet break ( ) . this is the first detection of a grb afterglow in japan . this observation encouraged observers with small telescopes in japan to try observations of grb afterglows . _ grb020331 _ : we succeeded in observing an early afterglow ( 17.5 - 18.7 mag ; 1 - sigma limits ) 64 min after the burst ( ) . this is the earliest observation of this afterglow . our observation revealed that the light curve can be described with a single power - law from the early phase ( hr ) of the afterglow . _ grb021004 _ : owing to the prompt identification by the hete-2 satellite ( ) , we first revealed the continuous behavior of the grb afterglow around 1 hr after the burst ( ; ) . an early afterglow was also observed in grb990123 ( ) ; however , our observation covered a period corresponding to an observing gap in the early afterglow light curve of grb990123 . in the light curve of grb021004 , the initial fading phase was terminated by a short plateau phase that lasted for about 2 hours , from 0.024 to 0.10 d after the burst ( ) . the object then entered an ordinary power - law fading phase . we propose that the plateau phase is evidence that the maximum of the synchrotron emission from a forward shock region appears around d after the burst , as expected from theoretical calculations ( ; ) . the initial fading phase can be interpreted as part of the optical flash , such as was recorded in grb990123 . if this is the case , the color of the afterglow would have dramatically changed from blue to red around the maximum , although no color information is available in this early phase ( ) . detection of the color change in the early phase will be an important future step for grb astronomy . on the other hand , the feature around 0.1 d after the burst can be one of a series of bumps observed 1 d after the burst ( ; ; ; ; ) .
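to make the single power - law descriptions used above concrete , the following short sketch ( python ; our illustration , not part of the original analysis , and the input values are made up ) converts two magnitude measurements of an afterglow into the corresponding power - law decay index , using the standard relation f ~ t^(-alpha) , which in magnitudes reads m(t2) - m(t1) = 2.5 alpha log10( t2 / t1 ) .

import math

def decay_index(t1, m1, t2, m2):
    # a flux decay f ~ t**(-alpha) corresponds in magnitudes to
    # m2 - m1 = 2.5 * alpha * log10(t2 / t1)
    return (m2 - m1) / (2.5 * math.log10(t2 / t1))

# illustrative numbers only: magnitude 17.9 at 0.033 d and 20.0 at 0.5 d
print(decay_index(0.033, 17.9, 0.5, 20.0))  # about 0.7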
_ grb030329 _ : the world - famous `` monster grb '' grb030329 ( figure [ fig : g0329 ] ) occurred closest to us ( ) , which gave us a chance to study the detailed structure of the grb afterglow ( ; ; ) . we detected a bright , 12th - mag optical afterglow 76 min after the burst ( ; ; ) . with the international collaboration through the vsnet , we obtained an hr continuous light curve of this afterglow ( ) . our observation revealed that the afterglow experienced repetitive modulations even in this early phase . in conjunction with public data reported to the gcn circulars , the light curve was rich in unexpectedly complicated structures throughout the days after the burst . it is a surprise that the amplitude of the modulations was almost constant with time . this feature of the modulations is difficult to understand in terms of density variations of the interstellar medium ( ) . the energy in the shock region must have changed with time , while the mechanism to generate additional energy is an open issue ( ) . the vsnet team also contributed to another collaborative work on the earliest - stage afterglow . variable star network ( vsnet ) is a global professional - amateur network of researchers in variable stars and related objects , particularly in transient objects , such as cataclysmic variables , black hole binaries , supernovae and gamma - ray bursts . the vsnet has been playing a pioneering role in establishing the field of _ transient object astronomy _ , by effectively incorporating modern advances in observational astronomy and global electronic networks , as well as collaborative progress in theoretical astronomy and astronomical computing . the vsnet is now one of the best - featured global networks in this field of astronomy . we have reviewed the historical progress , design concept and associated technology . we have also reviewed the breathtaking scientific achievements , as well as regular variable star work , particularly focusing on dwarf novae ( the discovery of er uma stars , work on wz sge - type dwarf novae and on more usual su uma - type dwarf novae , eclipsing dwarf novae ) , black hole x - ray transients ( the discoveries of an unexpectedly violent outburst of v4641 sgr and of rapid optical variations in the same object ) , and recent achievements in gamma - ray bursts . we are grateful to seiji masuda and katsura matsumoto , who greatly contributed to the activities of the vsnet administrator group . we are grateful to the many vsnet members who have been continuously supporting our activity . we are grateful to emile schweitzer ( afoev ) , keiichi saijo and makoto watanabe ( vsolj ) for kindly allowing us to use the afoev and vsolj public databases for drawing light curves . we are also grateful to dave monet for making the usno a1 cd - roms readily available to us . this work is partly supported by a grant - in - aid [ 13640239 , 15037205 ( tk ) , 14740131 ( hy ) ] from the japanese ministry of education , culture , sports , science and technology . part of this work is supported by a research fellowship of the japan society for the promotion of science for young scientists ( mu , ri ) . this research has made use of the astronomical catalogs at the astronomical data centers operated by the national astronomical observatory , japan , and nasa . this research has also made use of the digitized sky survey produced by stsci , the eso skycat tool , and the vizier
catalogue access tool , and the electronic edition of the gcvs . * development of a mailing list on variable stars , vsnet * ( daisaku nogami , taichi kato , hajime baba , chatief kunjaya ) * abstract * as the computer environment has been developing drastically in recent years , the style of astronomical study has changed . the key words of these changes are thought to be `` real - time '' and `` interactivity '' . suspecting that these have the potential to have a great effect on the study of transient objects ( cataclysmic variables , x - ray binaries , supernovae , and so on ) , we set up the mailing list vsnet in 1994 . our policy on vsnet since the start is that vsnet is open world - wide to all kinds of researchers , including professionals and amateurs , observers and theorists . subscribers have increased with time and now number over 400 from over 40 countries . although vsnet started as a single mailing list , it at present consists of five sub - mailing lists ( vsnet , vsnet - alert , vsnet - obs , vsnet - chat , and vsnet - chart ) , and each of these lists works independently for a different purpose . taking advantage of the different characteristics of these sub - lists , various types of studies have been proposed and carried out . vsnet will be developed further with the cooperation of subscribers . if you have any comments or questions , please feel free to contact the vsnet administrators ( vsnet-adm.kyoto-u.ac.jp ) . the computer environment is making remarkable progress with the development of infrastructure and machine power . the world - wide web ( www ; berners - lee et al . 1992 ) is explosively coming into wide use as well , redrawing common sense about information distribution . these developments have a great effect on astronomical study ; for example , 1 ) complicated rapid control of telescopes ( adaptive optics , etc . ) becomes possible , 2 ) the scale of simulations is getting much larger , 3 ) editors of various journals encourage electronic submission , 4 ) electronic publication of papers on the www or by cd - rom is discussed and partly realized , 5 ) iau circulars are distributed by e - mail , 6 ) preprints are usually distributed by e - mail or on web pages ( e.g.
http://www.lanl.gov ; ginsparg 1996 ) , 7 ) searching for papers has become quite easy thanks to the foundation of the astrophysics data system ( nasa ; accomazzi et al . 1995 ) , and so on . the key words of this revolution are thought to be `` real - time '' and `` interactivity '' . could these characteristics be quite useful in the study of transient objects ? we had an eye on the mailing list , since this system makes it possible to share much information among many people quite quickly and can be used interactively by many people . then , in july 1994 , we started a mailing list , vsnet ( vsnet.kyoto-u.ac.jp ) , mainly on cataclysmic variables , supernovae , x - ray binaries , and so on , with the policy of contributing to the astronomical community by providing a mechanism for sharing information and a room for discussing all aspects of the astronomy of those transient objects among subscribers from all over the world , irrespective of status ( amateur or professional ) , style of astronomical study ( observer or theorist ) , and other properties . the first members of vsnet were several tens of amateurs belonging to amateur associations ( the american association of variable star observers ( aavso , see http://www.aavso.org/ ) , the association française des observateurs d'étoiles variables ( afoev ) , the variable star observers league in japan ( vsolj ) , and so on ) and a few tens of professionals . subscribers have since increased through word - of - mouth advertising and our invitations to authors of papers in astronomical journals . now the number of subscribers , from about 50 countries , exceeds 400 , which means that a major part of the researchers in this field have already subscribed . at first , all e - mails other than administrative ones for subscription and unsubscription were distributed to all subscribers . however , accepting a request , mainly from theorists , who do not need daily data but want to know the final results deduced from observations , we modified the system in october 1994 by dividing vsnet into three sub - mailing lists : 1 ) vsnet - obs ( vsnet-obs.kyoto-u.ac.jp ) for reporting daily observations , 2 ) vsnet - alert ( vsnet-alert.kyoto-u.ac.jp ) for alerts on discoveries of supernovae , novae , rare outbursts , discoveries of new variable stars , dramatic changes of known variable stars , and so on , and 3 ) vsnet ( vsnet.kyoto-u.ac.jp ) for general information not suited to the former two lists , for example , compiled data , finding charts , preprints , calls for observations in international co - observation campaigns , and so on . in order to refer to old logs correctly , serial numbers have been added to the subject ( e.g. [ vsnet - obs 1997 ] ) since january 1995 . in june 1995 , the vsnet web pages ( http://www.kusastro.kyoto-u.ac.jp/vsnet/ ) were opened . on the www , you can see all articles ever posted to vsnet , as well as light curves drawn from observations distributed via vsnet - obs in the recommended format . the most important results among the vsnet - alert logs , and public information such as conference announcements , are gathered on the top page , which is updated almost daily . many useful tools developed and provided by various groups are available on the pages as well . at the same time we started the vsnet anonymous ftp service ( ftp.kusastro.kyoto-u.ac.jp/vsnet/ ) , where you can get almost the same items as on the www .
in january 1997 , we added a new sub - mailing list , vsnet - chat ( vsnet-chat.kyoto-u.ac.jp ) , in order to discuss various subjects . though vsnet had been used almost exclusively for the distribution of information , this modification widened its usage . with this background , vsnet at present consists of four sub - mailing lists ( vsnet , vsnet - alert , vsnet - obs , and vsnet - chat ) as well as the vsnet web pages and the anonymous ftp service . however , one of the sub - lists , vsnet , is now closed , since commercial information unrelated to astronomy , so - called spam , was posted again and again . this type of problem , although common to all mailing lists , is hard to solve completely . the number of e - mails posted per day is 20 - 40 for vsnet - obs and a few for vsnet - alert and vsnet - chat , although this changes from day to day . tables show several e - mails posted to vsnet and figures show light curves available on the www . the major usages of vsnet are : 1 ) to forecast the behavior of variable stars from the daily observations available via vsnet - obs and plan an observing strategy , 2 ) to check the optical status of a variable star on vsnet - obs at the time of observations in the uv or x - ray , 3 ) to publish new results on vsnet - alert ( discoveries of supernovae , determinations of superhump periods in su uma - type dwarf novae , and so on ) , 4 ) to call for follow - up observations of transient objects detected in the uv or x - ray , 5 ) to call for co - operation in international co - observation campaigns , 6 ) to announce newly organized conferences , 7 ) to discuss various subjects ranging from the first steps of observation to theoretical interpretations of new interesting phenomena , and so on . for reference , tables list some of the subjects of e - mails posted to vsnet . these usages were born in the active environment created by the combination of `` real - time '' , `` interactivity '' , and the vsnet policy . in addition , new original applications of vsnet are appreciated at all times , and will actually be proposed . we , the vsnet administrators , would be very glad if studies carried out using vsnet contribute to the progress of astrophysics . if you have any questions , requests , or suggestions , please do not hesitate to contact the vsnet administrators ( vsnet-adm.kyoto-u.ac.jp ) . part of this work was supported by a research fellowship of the japan society for the promotion of science for young scientists ( d.n . ) .
accomazzi , a. , eichhorn , g. , grant , c. s. , murray , s. s. , and kurtz , m. j. 1995 , vistas in astronomy , 39 , 63
berners - lee , t. j. , et al . 1992 , in `` electronic networking : research , applications and policy '' ( meckler publishing , westport ) , vol . 2 , no . 1 , p . 52
ginsparg , p. 1996 , http://xxx.lanl.gov/blurb/pg96unesco.html
as described in subsection [ sec : rephist ] , the vsnet data handling , database managing , and data analysis tools are mostly common to the ones developed for the vsolj database project , and were later adapted for a wider range of observations . the programs were originally written for microcomputers running on ms - dos , and later ported to linux . the following information is mainly an excerpt from , partly rewritten to reflect the recent changes in the vsnet management . the vsnet reporting format is a ut extension of the vsolj electronic reporting format , whose design was established in 1987 . standard format vsnet observations are composed of lines separated by new - line characters .
since a full description of the format used in these data or files may not be necessary for all readers , we give here the minimal requirements to interpret the data which are made available through the vsnet world - wide web service . each line contains the following items :
( a ) name of the object
( b ) time of the observation in decimals of ut
( c ) observed magnitude
( d ) magnitude system ( ccd / photoelectric ) or film and filter ( photographic )
( e ) observer s code
these items were designed to express original observations as exactly as possible ( mainly in terms of significant digits ) . comparison stars , charts used , or any other text information can be written as a comment following the last item ; such comments are usually used for future reference or examination , and are not directly used at present in the vsnet regular database management . each item is separated by one or more space characters ( ascii code 20 in hexadecimal ) and does not contain spaces within the item . the name , item ( a ) , is an identifier for the object . if the name is unique enough to discriminate the object from other celestial objects , any expression is basically allowed ; the use of the names listed in the general catalogue of variable stars ( gcvs ) , beyer names , numbers in the new catalogue of suspected variable stars ( nsv ) , and durchmusterung numbers ( bd , cd , cpd ) , however , is strongly recommended . when there is no specific relevant catalog , gsc and usno identifiers can be used . for gcvs and beyer names , the three - letter iau code of the constellation ( in upper - case letters ) precedes the name of the star in the constellation . greek letters are written in their standard english expressions . translation of reported names into standard expressions , when necessary , is performed either automatically or manually using the _ alias _ database . this process becomes particularly necessary when new gcvs names are released in the form of regular name - list updates , or when a new gcvs designation is given to a new nova . all the name translation rules are centralized in the _ alias _ database and reflected in the entire vsnet system , eliminating additional efforts to modify individual observation reports or expressions in vsnet circulars . this is a great advantage of the vsnet database managing process , and a newly released name - list update ( containing some hundreds of newly designated variable stars ) can usually be reflected within a day of the release . observers are allowed to continue using the old expressions as long as the expressions can be uniquely and automatically translated into the new expressions . in order to facilitate detecting errors by eye , times , item ( b ) , are expressed in the following decimal format using utc . example : 20030701.123 ( 2003 july 1.123 utc ) . no heliocentric or barycentric corrections are introduced at this stage . the conversion to tai or td ( in any expression , including julian date ) , and helio-(or bary-)centric corrections , are left to data analysis software and users . these measures are partly because widely used software packages at the observer s end are known to frequently contain problems . the other reason is that leap seconds do not allow fixed conversion tables to be distributed beforehand .
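as an illustration of the conversion that is left to the user s software , the following minimal python sketch ( ours , not part of the vsnet tools ) turns the decimal utc format above into a julian date using the standard fliegel - van flandern algorithm ; leap - second handling ( conversion to tai or td ) is deliberately omitted , for the reasons discussed above .

def decimal_ut_to_jd(stamp):
    # stamp is a string such as "20030701.123" (2003 july 1.123 utc)
    datepart, _, fracpart = stamp.partition(".")
    y, m, d = int(datepart[0:4]), int(datepart[4:6]), int(datepart[6:8])
    frac = float("0." + fracpart) if fracpart else 0.0
    # fliegel & van flandern (1968) integer formula for the julian day
    # number at 12:00 ut of the given gregorian calendar date
    a = (14 - m) // 12
    yy = y + 4800 - a
    mm = m + 12 * a - 3
    jdn = d + (153 * mm + 2) // 5 + 365 * yy + yy // 4 - yy // 100 + yy // 400 - 32045
    # the day fraction in the report counts from 0:00 ut, so shift by 0.5 d
    return jdn - 0.5 + frac

print(decimal_ut_to_jd("20030701.123"))  # 2452821.623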
decimal points are explicitly used in the magnitude , item ( c ) , to show the significant digits . an upper limit observation ( non - detection ) is expressed by the prefix ` < ' . the expression may be followed by ` : ' or ` ? ' to show uncertainty . if the observation is visually performed , item ( d ) is not necessary . otherwise , the code of the magnitude system ( or the film and the filter if the observation is done photographically ) follows item ( c ) without an intervening space . well - defined standard photoelectric systems ( e.g. the johnson - cousins bands ) are used in the usual sense . other vsnet - specific codes include `` c '' ( unspecified unfiltered ccd magnitude ) , `` cr '' ( unfiltered ccd magnitude calibrated to the r - band ) and `` p '' ( unspecified photographic magnitude ) . the complete list of the codes representing the films and filters at present is available at the vsnet website ( http://www.kusastro.kyoto-u.ac.jp/vsnet/etc/format.html ) . example : <15.5rc ( fainter than 15.5 in the rc - band ) . the observer s code , item ( e ) , is usually a three - letter code , as canonically used by the american association of variable star observers ( aavso ; http://www.aavso.org ) and the vsolj . the code may be immediately followed by a period mark ( ` . ' ) and an organization code . when there is no fixed affiliation for the observer , the vsnet manager group issues a code ; this scheme also enables the archiving of historical or literature observations . with this format , each line of observation has all information in itself ; any operation of moving lines in a data file ( such as sorting ) does not affect the properties of the data . this design of the electronic data has greatly facilitated all aspects of the electronic data management described below , and this pioneering concept has been taken over by a number of world - wide variable star organizations .
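the format rules above can be summarized in a short parser sketch ( python ; ours , not part of the vsnet codebase ) . it splits a report line on whitespace , strips the upper - limit prefix and the uncertainty flags from the magnitude , separates a trailing magnitude - system code , and keeps everything after the observer s code as a comment . the object name and observer code in the example are invented .

import re

def parse_vsnet_line(line):
    # e.g. "SSCYG 20030701.123 <15.5Rc ABC comparison star note"
    fields = line.split()
    name, time_ut, mag, observer = fields[0], fields[1], fields[2], fields[3]
    comment = " ".join(fields[4:])          # optional free-text comment
    fainter_than = mag.startswith("<")      # upper limit (non-detection)
    mag = mag.lstrip("<")
    uncertain = mag.endswith((":", "?"))    # uncertainty flags
    mag = mag.rstrip(":?")
    # the system code (e.g. Rc, C, CR) follows the magnitude without a space
    m = re.match(r"([0-9.]+)([a-zA-Z]*)$", mag)
    value, system = float(m.group(1)), m.group(2) or "visual"
    return {"name": name, "time_ut": time_ut, "magnitude": value,
            "system": system, "fainter_than": fainter_than,
            "uncertain": uncertain, "observer": observer, "comment": comment}

print(parse_vsnet_line("SSCYG 20030701.123 <15.5Rc ABC"))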
most of the programs are written in the language c , and were originally compiled with turbo - c ( borland international ) . many of the machine - independent source codes ( handling text - based data ) can be compiled by ansi c ( ansi x3.159 ) compilers ( such as gcc ) without much correction . in the actual vsnet database management , these codes are compiled either on windows / dos personal computers or on linux workstations . most of the text - based work is currently done on linux workstations . besides being stored as original text files ( as reported in vsnet - obs ) , the data are also incorporated into an observational database on rewritable media allowing random access . several interesting graphic programs and data analysis programs run in this mode . the maximum number of observations handled at one time depends only on the capacity of the media and the addressing capacity of the operating system . for example , we now handle dynamic and random access to the entire vsnet observations , consisting of more than 1.2 million visual observations ( mb ) of randomly accessible data , and we have registered more than 1.3 million ccd observations submitted to the vsnet collaboration . we have also confirmed that a combination test of the vsnet , vsolj and afoev public data ( about 3 million observations ) has yielded satisfactory efficiency in handling a huge volume of randomly accessible data . in addition to the observational database , there also exist other system databases , including _ vartype _ ( a database of individual variable star types ) and _ alias _ ( a name resolver database ) . a programmer on this system can query these databases for the type of variability or the standard expression of the name of the object specified by a given identifier . because the number of data and objects is very large , and both random and sequential access are necessary for easy operation on light curves , we adopted a combination of a b+tree and a bidirectional linear list . this basic structure of the database was originally established in 1989 to fully incorporate the vsolj database . complete data of observations ( other than remarks ) are stored in a file equivalent to an index file of a relational database , and there is no need to read an additional data file . the size of a storage block is 16 kilobytes , but this can be modified at compile time . main memory is dynamically allocated to simulate a virtual memory in order to minimize access to the storage media . with this feature , the main database module requires only of the order of a few hundred megabytes of actual memory . the kernel of the database module is written transparently to the upper modules , so upper modules have only to pass the key or the virtual address to the kernel to query the next or the previous data . the modules are written to enable simultaneous handling of multiple databases ( e.g. the observational database and _ vartype _ ) without interference . the database functions are provided in the form of a c - language application programming interface ( api ) , but we skip the details of the individual apis because they are too technical to be presented in this paper . basic user operations on the database can be performed by command - line tasks on ms - dos or linux . the most frequently used and basic commands include :
1 . creates a new database .
2 . merges a text file into a database .
3 . deletes data specified by a text file from a database .
4 . merges an observational data file in the standard format into the observational database , and lists potential errors by referring to the previously registered data and the _ vartype _ database .
5 . lists the data of a given object for a given chronological period .
6 . replaces an identifier for an object by another .
7 . lists all data in a database .
8 . lists all stars in a database .
9 . selects data from a text file by specifying types of variability , by referring to the _ vartype _ database .
10 . checks whether an observation data file contains newly reported objects , by comparison with the existing observational database .
11 . checks an observation data file for grammar , and lists discordant data and potential errors by referring to the previously registered data and the _ vartype _ database .
12 . returns the variable star type by referring to the _ vartype _ database .
13 . sets or modifies the variable star type in the _ vartype _ database .
14 . converts a variable star name to the standard name by referring to the _ alias _ database .
15 . sets or modifies the entry in the _ alias _ database ( this command has a different name on linux because of a collision with a shell built - in command ) .
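the combination of keyed random access and bidirectional sequential access described above can be illustrated with a toy sketch ( python ; this only mimics the access pattern , not the actual c implementation with its b+tree and 16 - kilobyte blocks ) . records are kept sorted by ( object , time ) for keyed lookup , and each record is linked to its neighbours so that a light curve can be walked forward or backward without further searches .

import bisect

class ObservationIndex:
    # a sorted key list stands in for the b+tree; records form a
    # doubly linked list in key order for sequential traversal
    def __init__(self):
        self.keys = []   # sorted (object, time) keys
        self.recs = {}   # key -> {"mag": ..., "prev": ..., "next": ...}

    def insert(self, obj, time, mag):
        key = (obj, time)
        i = bisect.bisect_left(self.keys, key)
        prev_key = self.keys[i - 1] if i > 0 else None
        next_key = self.keys[i] if i < len(self.keys) else None
        self.keys.insert(i, key)
        self.recs[key] = {"mag": mag, "prev": prev_key, "next": next_key}
        if prev_key is not None:
            self.recs[prev_key]["next"] = key
        if next_key is not None:
            self.recs[next_key]["prev"] = key

    def light_curve(self, obj, t_from):
        # random access to the first matching record, then follow links
        i = bisect.bisect_left(self.keys, (obj, t_from))
        key = self.keys[i] if i < len(self.keys) else None
        while key is not None and key[0] == obj:
            yield key[1], self.recs[key]["mag"]
            key = self.recs[key]["next"]

idx = ObservationIndex()
idx.insert("SSCYG", 20030702.101, 12.0)
idx.insert("SSCYG", 20030701.123, 11.8)
print(list(idx.light_curve("SSCYG", 20030701.0)))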
most basic operations on the vsnet data are done with these basic commands , and they are frequently used in combination , regulated by a shell script . a separate family of interactive programs uses a gui . these packages are architecture - specific , and presently run in limited environments ( e.g. nec pc-9801 machines ) , although efforts have been made to port these applications to the windows operating system , or to write equivalent wrapper gui applications in java . figure [ fig : bl ] shows a sample image from the * grp * interactive light curve viewer program .
1 . displays light curves , and enables interactive zooming , data selection and data editing .
2 . displays and prints light curves automatically ; an equivalent package written in java is presently used to produce the vsnet online light curves .
3 . automatically produces vsnet cv circulars .
besides those for drawing light curves , there also exist several kinds of scientific data - analysis programs . some require databases and others require only data files in the text format . a few samples are shown here . many of the general - purpose programs ( mostly with source code ) are available from the `` tools and programs '' section of the vsnet website ( http://www.kusastro.kyoto-u.ac.jp/vsnet/etc/prog.html ) . we have implemented conventional heliocentric corrections for the observed jds by using the well - known newcomb expansion for the planets . we have also implemented barycentric corrections by numerically integrating the de 200 ephemeris produced by nasa . the original source code for the de 200 ephemeris was imported from the novas software package . we have our own implementations of the discrete fourier transform ( dft ) , phase dispersion minimization ( pdm : ) , the discrete wavelet transform ( dwt : ) and other period analysis tools . the pdm package was ported to windows by andreas wijaya , and has been conveniently used by many users .
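as an illustration of the period analysis mentioned above , here is a compact sketch of phase dispersion minimization ( python ; written for this text , not the vsnet implementation , and simplified to a single binning rather than the overlapping covers of the full method ) . for each trial period the observations are folded in phase and binned , and the ratio of the pooled within - bin variance to the overall variance ( the pdm theta statistic ) is computed ; the best period is the one that minimizes theta .

def pdm_theta(times, mags, period, nbins=10):
    n = len(mags)
    mean = sum(mags) / n
    total_var = sum((m - mean) ** 2 for m in mags) / (n - 1)
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    # pooled variance of the phase bins
    num, den = 0.0, 0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            num += sum((m - bm) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / total_var  # theta is small for a good period

def best_period(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: pdm_theta(times, mags, p))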
cv circulars were originally prepared monthly and issued by the vsolj , from the observations reported by the end of the next month . in the modern vsnet service , these circulars are issued almost daily to fully incorporate the daily changes in rapidly varying cvs . the corresponding list is _ vsnet - cvcirc _ ( http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-cvcirc/maillist.html ) . cv circulars contain information about all reported outbursts and standstills of dwarf novae in the form of nightly averages , and nightly averages of other peculiar objects ( cvs other than dwarf novae , x - ray binaries , symbiotic variables , eruptive variables such as r crb stars , s dor stars , fu ori stars , and supernovae , active galactic nuclei and other objects of special interest ) . the unpredictable brightness variations of these objects make it extremely difficult to check observations as automatically as is done for pulsating variables . together with the faintness of these objects ( which implies the existence of a large number of negative observations ) , cross checks between different observers are indispensable . the present program , * circ * , referring to a list of more than 1000 objects containing their properties ( type of variability , normal range of variation , ephemerides of eclipses , etc . ) , checks all observations using the databases . the list is updated by the editor whenever new information becomes available , in order to keep the circulars up - to - date . the results are then listed as a file in the prototypical form of the cv circulars , containing reports of potentially discordant data ( or rapid intrinsic changes ) with special marks . the present version can handle more than tens of thousands of observations per month , and produces a circular in a minute . the same program , slightly modified to produce long - term averages , is used to produce the vsnet mira circulars on _ vsnet - miracirc _ ( http://www.kusastro.kyoto-u.ac.jp/vsnet/mail/vsnet-miracirc/maillist.html ) . when identifying variable stars and locating the ccd field of view , chart - plotting and star identification programs are very useful , particularly when network resources are unavailable . for this purpose , one of the authors ( tk ) developed a chart - plotting computer program in 1990 , when the machine - readable guide star catalog ( gsc ) 1.0 was released . the program was designed to run on a stand - alone personal computer running ms - dos with non - extended ( less than 640 kb ) memory . since the distribution form of gsc 1.0 was composed of huge ascii tables on two cd - roms ( 1.2 gb ) , and since the objects were randomly arranged within individual files covering 2 - 3 degrees square , it was an absolute requirement to compress these data and make them quickly and randomly accessible . in this software , we subdivided the entire gsc data into 0.5 - degree bins in declination , and sorted the objects within the same bin according to right ascension . in order to compress the data , we used binary files which can be directly mapped onto c structures . in order to avoid redundancy in the right ascensions and to enable quick random access , we used a separate jump table which records the file positions of given coordinate meshes . individual entries in the 0.5 - deg bins contain only residuals to the mesh coordinates . in the original compression of gsc 1.0 , each catalog entry corresponded to 5 bytes . by combining the information of the coordinate mesh in the jump table and the individual entries , one can obtain the fully decoded coordinates . these functions ( including sequential reading functions for a given box ) were implemented as transparent apis , many of which were designed to take a pointer to the display function as a call - back function . with this compression , the entire gsc 1.0 ( disregarding the object names and plate numbers ) can be compressed into 100 mb . this program was one of the earliest chart - plotting software packages that used the gsc as a source catalog and implemented encoded compressed catalogs , and the design was taken over in various successive third - party applications . the program is also able to display the gcvs and nsv variable star catalogs , and iras psc objects , which were compressed in a manner similar to that employed in the gsc compression . the program was later updated in 1998 to accommodate the gsc 1.1 and usno a1.0 catalogs , and other catalogs ( with variable - length object labels ) . with the increase in the capacity of storage media , the present version uses 12 bytes for one gsc 1.1 full entry and 8 bytes for one usno a1.0 entry .
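the residual - plus - jump - table compression described above can be illustrated as follows ( python ; the byte layout below is invented for illustration and is not the actual gsc / usno encoding used by the program ) . a mesh origin is stored once in the jump table , and each catalog entry stores only fixed - point offsets from that origin , so a few bytes per star suffice .

import struct

# hypothetical 8-byte entry: ra residual (4 bytes, units of 0.01 arcsec),
# dec residual (2 bytes signed, units of 0.1 arcsec), magnitude * 100 (2 bytes)
ENTRY = struct.Struct("<IhH")

def encode(ra_deg, dec_deg, mag, mesh_ra, mesh_dec):
    dra = round((ra_deg - mesh_ra) * 3600 * 100)
    ddec = round((dec_deg - mesh_dec) * 3600 * 10)
    return ENTRY.pack(dra, ddec, round(mag * 100))

def decode(blob, mesh_ra, mesh_dec):
    dra, ddec, m = ENTRY.unpack(blob)
    return (mesh_ra + dra / (3600 * 100),
            mesh_dec + ddec / (3600 * 10), m / 100)

# the jump table would map (mesh_ra, mesh_dec) to a file offset; here we
# simply round-trip one star against its mesh origin
blob = encode(123.456789, -0.123456, 11.25, 123.0, -0.25)
print(len(blob), decode(blob, 123.0, -0.25))  # 8 bytes per entry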
with this compression method , the entire usno a1.0 data ( more than 4.8 x 10^8 objects ) can be stored in 4 gb of storage and can be quickly and randomly accessed . the present version is equipped with a function to handle the 2mass point source catalog in the same manner . the program also provides a name resolver , using the same database engine described in appendix [ sec : app : database ] . the entire program has presently been ported to windows and linux using the _ xlib _ graphic library ( see figure [ fig : v4743 ] for an example ) . with this program and its apis , one can very quickly identify new and known variable stars , either interactively or in a batch . the online vsnet charts with hipparcos and tycho magnitudes ( subsection [ sec : standard ] ) have been prepared with this software operated in batch mode . the object databases have been regularly updated , especially when a name list of new variable stars is released . the source codes are available upon request to the author .
|
variable star network ( vsnet ) is a global professional - amateur network of researchers in variable stars and related objects , particularly in transient objects , such as cataclysmic variables , black hole binaries , supernovae and gamma - ray bursts . the vsnet has been playing a pioneering role in establishing the field of _ transient object astronomy _ , by effectively incorporating modern advances in observational astronomy and global electronic networks , as well as collaborative progress in theoretical astronomy and astronomical computing . the vsnet is now one of the best - featured global networks in this field of astronomy . we review the historical progress , design concept , associated technology , and a wealth of scientific achievements powered by the vsnet .
|
phishing webpages ( `` phishs '' ) lure unsuspecting web surfers into revealing their credentials . as a major security concern on the web , phishing has attracted the attention of many researchers and practitioners . there is a wealth of literature , tools and techniques for helping web surfers to detect and avoid phishing webpages . nevertheless , phishing detection remains an arms race with no definitive solution . state - of - the - art large - scale real - time phishing detection techniques are capable of identifying phishing webpages with high accuracy ( > 99% ) while achieving very low rates of misclassifying legitimate webpages ( < 0.1% ) . however , many of these techniques , which use machine learning , rely on millions of static features , primarily taking the bag - of - words approach . this implies two major weaknesses : ( a ) they need a huge amount of labeled data to train their classification models ; and ( b ) they are language- and brand - dependent and not very effective at identifying new phishing webpages targeting brands that were not already observed in previous attacks . commercial providers of phishing detection solutions struggle with obtaining and maintaining labeled training data . from the deployability perspective , solutions that require minimal training data are thus very attractive . in this paper , we introduce a new approach that avoids these drawbacks . our goal is to identify whether a given webpage is a phish , and , if it is , to identify the _ target _ it is trying to mimic . our approach is based on two core conjectures : * * modeling phisher limitations * : to increase their chances of success , phishers try to make their phish mimic its target closely and obscure any signal that might tip off the victim . however , in crafting the structure of the phishing webpage , phishers are restricted in two significant ways . first , external hyperlinks in the phishing webpage , especially those pointing to the target , are to domains _ outside the control _ of the phishers . second , while phishers can freely change most parts of the phishing page , the latter part of its domain name is _ constrained _ , as they are limited to domains that the phishers control . * * measuring consistency in term usage * : a webpage can be represented by a collection of key terms that occur in multiple parts of the page , such as its body text , title , domain name , other parts of the url , etc . we conjecture that the way in which these terms are used in different parts of the page will be different in legitimate and phishing webpages . based on these conjectures , we develop and evaluate a phishing detection system . we use comparatively few ( 212 ) but relevant features . this allows our system , even with very little labeled training data , to have high accuracy and a low rate of mislabeling legitimate websites .
by modeling inherent phisher limitations in our feature set , the system is resilient to adaptive attackers who dynamically change a phish to circumvent detection .our basic phishing detector component ( section [ sec : classification ] ) does not require online access to centralized information and is fast .therefore , it is highly suited for a privacy - friendly client - side implementation .our target brand identification component ( section [ sec : target ] ) uses a simple technique to extract a set of _ keyterms _ characterizing a webpage and , in case it is a phish , uses the keyterms set to identify its target .both components eschew the bag - of - words approach and are thus not limited to specific languages or targeted brands .we claim the following contributions : * a new set of features to detect phishing webpages ( section [ subsec : feat_comp ] ) and a classifier , using these features , with the following properties that distinguish it from previous work : * * it learns a generalized model of phishing and legitimate webpages from * a small training set * ( few thousands ) . * * it is * language- and brand - independent*. * * it is resilient to adaptive attackers . * * its features are extracted only from information retrieved by a web browser from the webpage and it does not require online access to centralized information . hence it * admits a client - side - only implementation * that offers several advantages including ( a ) better privacy , ( b ) real - time protection and ( c ) resilient to phishing webpages that return different contents to different clients . * comprehensive evaluation of this system , showing that its accuracy ( > 99% ) and misclassification rate ( < 0.1% ) are comparable to prior work while using significantly smaller training data .( section [ subsec : exp_classif ] ) * a fast target identification technique ( section [ sec : target ] ) for phishing webpages with accuracy ( 90 - 97% ) comparable to previously reported techniques . it can also be used to remove false positives from the basic phishing detection component described above .( section [ subsec : target_eval ] ) this research report is an extended version of an icdcs 2016 paper .a proof of concept of this technique has been implemented as a phishing prevention browser add - on refers to the class of attacks where a victim is lured to a fake webpage masquerading as a target website and is deceived into disclosing personal data or credentials .phishing campaigns are typically conducted using spam emails to drive users to fake websites .impersonation techniques range from technical subterfuges ( email spoofing , dns spoofing , etc . ) to social engineering .the former is used by technically skilled phishers while unskilled phishers resort to the latter .phishing webpages mimic the look and feel of their target websites . in order to make the phishing webpages believable, phishers may embed some content ( html code , images , etc . )taken directly from the target website and use relatively little content that they themselves host .this includes outgoing links pointing to the target website .they also use keywords referring to the target in different elements of the phishing webpage ( title , text , images , links ) . 
in this paper , our focus is on the detection of phishing webpages created by an attacker and hosted on his own web server or on someone else s compromised web server . webpages are addressed by a uniform resource locator ( url ) . [ fig : url ] shows the relevant parts in the structure of a typical url . it begins with the _ protocol _ used to access the page . the fully qualified domain name ( _ fqdn _ ) identifies the server hosting the webpage . it consists of a registered domain name ( _ rdn _ ) and a prefix which we refer to as _ subdomains _ . a phisher has full control over the _ subdomains _ portion and can set it to any value . the _ rdn _ portion is constrained , since it has to be registered with a domain name registrar . the _ rdn _ itself consists of two parts : a _ public suffix _ ( _ ps _ ) preceded by a _ main level domain _ ( _ mld _ ) . the url may also have _ path _ and _ query _ components which , too , can be changed by the phisher at will . we use the term _ freeurl _ to refer to those parts of the url that are fully controllable by the phisher . consider an example url : _ https://www.amazon.co.uk/ap/signin?_encoding=utf8 _ . we can identify the following components :
* _ protocol _ = _ https _
* _ fqdn _ = _ www.amazon.co.uk _
* _ rdn _ = _ amazon.co.uk _
* _ mld _ = _ amazon _
* _ freeurl _ = { _ www _ , _ /ap/signin?_encoding=utf8 _ }
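the url decomposition just described can be sketched in a few lines ( python ; our illustration , not the paper s implementation ) . note that splitting the rdn from the subdomains requires a public suffix list ; the tiny hardcoded set below is only a stand - in for a real list such as the mozilla public suffix list .

from urllib.parse import urlsplit

PUBLIC_SUFFIXES = {"co.uk", "com", "org", "net"}  # stand-in for a real ps list

def decompose(url):
    parts = urlsplit(url)
    labels = parts.hostname.split(".")
    # find the public suffix, then rdn = mld + "." + ps
    for i in range(1, len(labels)):
        ps = ".".join(labels[i:])
        if ps in PUBLIC_SUFFIXES:
            return {"protocol": parts.scheme,
                    "fqdn": parts.hostname,
                    "rdn": ".".join(labels[i - 1:]),
                    "mld": labels[i - 1],
                    "ps": ps,
                    # freeurl: subdomains plus path and query
                    "freeurl": [".".join(labels[:i - 1]),
                                parts.path + ("?" + parts.query if parts.query else "")]}
    return None

print(decompose("https://www.amazon.co.uk/ap/signin?_encoding=utf8"))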
from analyzing phishing webpages , we identify the following data sources , available to a web browser when it loads a webpage , that can be useful in detecting phishing webpages :
* _ starting url _ : the url given to the user to access the website . it can be distributed in emails , instant messages , websites , documents , etc .
* _ landing url _ : the final url pointing to the actual content presented to the user in his web browser . this is the url present in the browser address bar when the page is completely loaded .
* _ redirection chain _ : the set of urls crossed to go from the starting url to the landing url ( including both ) .
* _ logged links _ : the set of urls logged by the browser while loading the page . they point to the sources from which embedded content ( code , images , etc . ) in the webpage is loaded .
* _ html _ : the html source code of the webpage and of iframes included in the page . we consider four elements extracted from this source code :
* * _ text _ : text contained between _ < body > _ html tags ( actually rendered on the user s display ) .
* * _ title _ : text contained between _ < title > _ html tags ( appears in the browser tab title ) .
* * _ href links _ : the set of urls representing outgoing links in the webpage .
* * _ copyright _ : the copyright notice , if any , in the text .
* _ screenshot _ : an image capture of the loaded webpage .
before describing the detailed design ( in sections [ sec : classification ] and [ sec : target ] ) , we start with an overview . in section [ subsec : url_structure ] , we saw that even on systems they control , phishers are constrained from freely constructing urls to the pages they host . similarly , in section [ subsec : phishing ] , we saw that in order to maximize the believability of their phishing sites , phishers include content from urls outside their control . we conjecture that by taking these constraints and this level of control into account in selecting and grouping features for our classification , we can improve classification performance . thus , we divide the data sources from section [ subsec : data_source ] into subcategories according to the level of _ control _ phishers may have over them and the _ constraints _ on phishers . * control * : urls from _ logged links _ and _ href links _ are subdivided into _ internal _ and _ external _ according to their _ rdn _ . the set of _ rdn _ s extracted from the urls involved in the redirection chain is assumed to be under the control of the webpage owner . any urls that include these _ rdn _ s are marked _ internal _ . other _ rdn _ s are assumed to be possibly outside the control of the webpage owner ; urls containing such _ rdn _ s are marked _ external _ . * constraints * : within a url , we distinguish between the _ rdn _ , which can not be freely defined by the webpage owner , and the rest ( _ freeurl _ ) , which can be . the primary technique of a phisher is essentially social engineering : fooling a victim into believing that the phishing webpage is the target . thus , it is plausible that lexical analysis of the data sources will help in identifying phishing webpages : we conjecture that legitimate webpages and phishing webpages differ in the way terms are used in _ different locations _ in those pages . to incorporate measurements of such term usage consistency , we first define what `` terms '' are and how they are extracted from a webpage . let a denote the set of the 26 lowercase english letters , { a , ... , z } . we extract terms from a data source as follows :
* canonicalize letter characters by mapping upper - case characters , accented characters and special characters to a matching letter in a ( e.g. , upper - case and accented variants of b all map to b ) .
* split the input into substrings whenever a character outside a is encountered .
* throw away any substring whose length is less than 3 .
let t denote the set of all possible terms . suppose that a term t_i in t was extracted from a data source s and occurs in it with probability p_i . the set of pairs ( t_i , p_i ) , with p_i in [ 0 , 1 ] and i in { 1 , ... , m } , represents the _ term distribution _ of s .
[ table : term distributions ]
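a direct implementation of the three extraction steps and of the term distribution is straightforward ( python ; ours , for illustration ) . the canonicalization below uses unicode decomposition to strip accents , matching the spirit of the mapping described above .

import re
import unicodedata
from collections import Counter

def extract_terms(text, min_len=3):
    # canonicalize: strip accents, lower-case, keep only the letters a-z
    decomposed = unicodedata.normalize("NFKD", text)
    ascii_text = decomposed.encode("ascii", "ignore").decode("ascii").lower()
    # split whenever a character outside a-z is encountered,
    # then drop substrings shorter than min_len
    return [s for s in re.split(r"[^a-z]+", ascii_text) if len(s) >= min_len]

def term_distribution(text):
    counts = Counter(extract_terms(text))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}  # pairs (t_i, p_i)

print(term_distribution("Bénéficiez de PayPal, payez sur paypal.com !"))
# 'paypal' occurs twice out of six extracted terms, so its p_i is 1/3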
the obfuscation and mimicry characteristics of phishing webpages have been the basis of several solutions proposed for phishing detection and target identification . * phishing webpage detection : * analysis of the content and code execution ( _ e.g. _ the use of javascript , pop - up windows , etc . ) of a webpage provides relevant information for identifying phishing webpages . some detection methods rely on url lexical obfuscation characteristics and webpage hosting features to render a decision about the legitimacy of a webpage . the visual similarity of a phishing webpage to its target has also been exploited to detect phishs . phishing detection based on visual similarity presupposes that a potential target is known a priori ; in contrast , our approach is to discover the target . multi - criteria methods have proved the most efficient at detecting phishing websites . these techniques use a combination of webpage features ( html terms , links , frames , etc . ) , connection features ( html header , redirection , etc . ) and host - based features ( dns , ip , asn , geolocation , etc . ) to infer webpage legitimacy . they are implemented as offline systems checking the content pointed to by urls in order to automatically build blacklists . this process induces a delay of several hours , which is problematic in the context of phishing detection , since phishing attacks have a median lifetime of a few hours . in addition , it is reportedly costly and uses some proprietary features , preventing usage on end - user devices . one identification method uses machine learning techniques fed with hundreds of thousands of features . these features are mostly static and learned from training sets containing data such as ip addresses , autonomous system numbers ( asn ) , and bags of words for different data sources ( webpage , url , etc . ) . this limits the generalizability of the approach , as it requires large training datasets , numbering hundreds of thousands of webpages . other methods have focused , as we do , on the study of the terms that compose the data sources of a webpage . cantina was among the first systems to propose a lexical analysis of the terms that compose a webpage . in cantina , key terms are selected using tf - idf to provide a unique signature of a webpage . using this signature in a search engine , cantina infers the legitimacy of a webpage . a similar method , based on tf - idf and google search , checks for inconsistency between a webpage s identity and the identity it impersonates in order to identify phishs . the main difference between these methods and ours is language independence , since these methods rely on tf - idf computation to infer their keyterms . table [ tab : compare ] presents comparative performance results of our phishing detection system against the most relevant state - of - the - art systems . it presents the size of the testing sets used to evaluate each system and the provenance of the legitimate set , showing how representative the set is . for example , using popular websites ( such as top alexa sites ) as the legitimate set is not representative . the ratio of training to testing instances indicates the scalability of the method , and the ratio of legitimate to phishing instances shows the extent to which the experiments represent a real - world distribution ( ) . we also identify the evaluation method ( e.g. , cross - validation vs. training with old data and testing with new data ) . finally , we present several metrics for assessing the classification performance . if data for any of the columns were missing from the original paper describing a system , we estimated them .
for comparison purposes , if several experimental setups were proposed in a paper , we selected the most relevant one to assess practical efficacy , using the following ordered criteria : 1 . learning and testing instances are different , 2 . the ratio of legitimate to phishing instances in the testing set is representative of real - world observations ( ) , 3 . the learning set is older than the testing set , 4 . the false positive rate ( fpr ) is minimized . we can see that among the eight most relevant state - of - the - art techniques , only two have false positive rates comparable to ours ( ) . a low false positive rate is paramount for a phishing detection technique , since it determines the proportion of legitimate webpages to which a user will be incorrectly denied access . the technique proposed by ma et al . has a lower accuracy than our system ( ) . in addition , they use a testing set that does not represent a real - world distribution ( 3 legitimate / 4 phishing ) and use a cross - validation that does not assess the scalability of the approach , with a 1/1 ratio of learning to testing instances . another reported system achieves results similar to ours on several metrics . however , it uses a _ huge _ training set ( > 9 m instances ) , and its test set is actually _ smaller _ than the training set ( a sixth , at 1.5 m ) ! its scalability and language / brand independence are likely to be poor , since it uses 100,000 mostly static features ( bag - of - words ) . in contrast to the state of the art in phishing detection , our solution is language - independent , scalable , requires much smaller training sets than test sets , and does not rely on real - time access to external sources , while performing better than or as well as the state of the art . * target identification : * one proposal was to use a technique similar to cantina , with keyword retrieval and google search , to discover a list of potential targets as the top results of the search , but the authors do not report accuracy figures for target identification . href links have been used to build community graphs of webpages . by counting the mutual links between two webpages and further performing visual similarity analysis between suspicious webpages , liu et al . identify the target of a given phishing website with an accuracy of 92.1% . however , this technique is slow because of the need to crawl many additional websites to build the community graph . conditional random fields and latent dirichlet allocation ( lda ) have been applied to phishing email content to identify the target with a success rate of 88.1% . the technique we propose , in contrast to previous techniques , is language - independent in its keyterm inference . it is as efficient as any state - of - the - art solution , achieving a maximum success rate of 90.5 - 97.3% . we presented novel techniques for efficiently and economically identifying phishing webpages and their targets . by using a set of features that capture inherent limitations that phishers face , our system has excellent performance and scalability while requiring much smaller amounts of training data . we have also implemented a fully client - side phishing prevention browser add - on implementing this technique . this work was supported in part by the academy of finland ( grant 274951 ) and the intel collaborative research center for secure computing ( icri - sc ) . we thank craig olinsky , alex ott and edward dixon for valuable discussions .
s. marchal , k. saari , n. singh , and n. asokan , `` know your phish : novel techniques for detecting phishing sites and their targets , '' in proceedings of the ieee 36th international conference on distributed computing systems ( icdcs ) , 2016 .
s. hardy , m. crete - nishihata , k. kleemola , a. senft , b. sonne , g. wiseman , p. gill , and r. j. deibert , `` targeted threat index : characterizing and quantifying politically - motivated targeted malware , '' in 23rd usenix security symposium , 2014 , pp . 527 - 541 .
g. xiang and j. i. hong , `` a hybrid phish detection approach by identity discovery and keywords retrieval , '' in proceedings of the 18th international conference on world wide web ( www ) , 2009 , pp . 571 - 580 .
k. thomas , c. grier , j. ma , v. paxson , and d. song , `` design and evaluation of a real - time url spam filtering service , '' in proceedings of the 2011 ieee symposium on security and privacy ( sp ) , 2011 , pp . 447 - 462 .
y. zhang , j. i. hong , and l. f. cranor , `` cantina : a content - based approach to detecting phishing web sites , '' in proceedings of the 16th international conference on world wide web ( www ) , 2007 , pp . 639 - 648 .
z. li , s. alrwais , x. wang , and e. alowaisheq , `` hunting the red fox online : understanding and detection of mass redirect - script injections , '' in ieee symposium on security and privacy ( sp ) , 2014 .
v. ramanathan and h. wechsler , `` phishing detection and impersonated entity discovery using conditional random field and latent dirichlet allocation , '' computers & security , vol . 34 , pp . 123 - 139 , 2013 .
n. singh , h. sandhawalia , n. monet , h. poirier , and j. coursimault , `` large scale url - based classification using online incremental learning , '' in proceedings of the 11th international conference on machine learning and applications ( icmla ) , 2012 , pp . 402 - 409 .
z. li , s. alrwais , y. xie , f. yu , and x. wang , `` finding the linchpins of the dark web : a study on topologically dedicated hosts on malicious web infrastructures , '' in proceedings of the 2013 ieee symposium on security and privacy ( sp ) , 2013 , pp . 112 - 126 .
t. vissers , w. joosen , and n. nikiforakis , `` parking sensors : analyzing and detecting parked domains , '' in proceedings of the 22nd network and distributed system security symposium ( ndss ) , 2015 , pp . 1 - 14 .
p. agten , w. joosen , f. piessens , and n. nikiforakis , `` seven months worth of mistakes : a longitudinal study of typosquatting abuse , '' in proceedings of the 22nd network and distributed system security symposium , 2015 , pp . 1 - 14 .
f. maggi , a. frossi , s. zanero , g. stringhini , b. stone - gross , c. kruegel , and g. vigna , `` two years of short urls internet measurement : security threats and countermeasures , '' in proceedings of the 22nd international conference on world wide web ( www ) , 2013 , pp . 861 - 872 .
g. xiang , j. hong , c. p. rose , and l. cranor , `` cantina+ : a feature - rich machine learning framework for detecting phishing web sites , '' acm transactions on information and system security , vol . 14 , no . 2 , pp . 21:1 - 21:28 , 2011 .
j. ma , l. k. saul , s. savage , and g. m. voelker , `` beyond blacklists : learning to detect malicious web sites from suspicious urls , '' in proceedings of the 15th acm sigkdd international conference on knowledge discovery and data mining , 2009 , pp . 1245 - 1254 .
vigna , `` eyes of a human , eyes of a program : leveraging different views of the web for analysis and detection , '' in _ research in attacks , intrusions and defenses _ , 2014 , pp . 130 - 149 . a. blum , b. wardman , t. solorio , and g. warner , `` lexical feature based phishing url detection using online learning , '' in _ proceedings of the 3rd acm workshop on artificial intelligence and security _ , 2010 . m. n. feroz and s. mengel , `` examination of data , rule generation and detection of phishing url using online logistic regression , '' in _ proceedings of the ieee conference on big data _ , 2014 , pp . 241 - 250 . j. ma , l. k. saul , s. savage , and g. m. voelker , `` identifying suspicious urls : an application of large - scale online learning , '' in _ proceedings of the 26th annual international conference on machine learning _ , 2009 , pp . 681 - 688 . g. stringhini , c. kruegel , and g. vigna , `` shady paths : leveraging surfing crowds to detect malicious web pages , '' in _ proceedings of the 2013 acm sigsac conference on computer & communications security _ , 2013 , pp . 133 - 144 . e. medvet , e. kirda , and c. kruegel , `` visual - similarity - based phishing detection , '' in _ proceedings of the 4th international conference on security and privacy in communication networks ( securecomm ) _ , 2008 .
|
phishing is a major problem on the web . despite the significant attention it has received over the years , there has been no definitive solution . while the state - of - the - art solutions have reasonably good performance , they require a large amount of training data and are not adept at detecting phishing attacks against new targets . in this paper , we begin with two core observations : ( a ) although phishers try to make a phishing webpage look similar to its target , they do not have unlimited freedom in structuring the phishing webpage ; and ( b ) a webpage can be characterized by a small set of key terms ; how these key terms are used in different parts of a webpage differs between legitimate and phishing webpages . based on these observations , we develop a phishing detection system with several notable properties : it requires very little training data , scales well to much larger test data , is language - independent , fast , resilient to adaptive attacks and implemented entirely client - side . in addition , we developed a target identification component that can identify the target website that a phishing webpage is attempting to mimic . the target identification component is faster than previously reported systems and can help minimize false positives in our phishing detection system .
|
angiogenesis , the formation of new blood vessels from existing vessels , is important in numerous mechanisms in health and disease , including wound healing and tumor development . as a natural response to hypoxia , normal cells and tumor cells secrete a range of growth factors , including vascular endothelial growth factors ( vegfs ) and fibroblast growth factors ( fgfs ) . these activate quiescent endothelial cells to secrete proteolytic enzymes , to migrate from the blood vessel and to organize into an angiogenic sprout . angiogenic sprouts are led by tip cells , a highly migratory , polarized cell type that extends numerous filopodia . tip cells express high levels of the vegf receptor vegfr2 , delta - like ligand 4 ( dll4 ) and , _ in vitro _ , cd34 . the tip cells are followed by stalk cells , a proliferative and less migratory type of endothelial cell , which expresses low levels of dll4 and , _ in vitro _ , has undetectable levels of cd34 . the behavior of tip and stalk cells during angiogenic sprouting has been well characterized in mouse retina models and in endothelial spheroids . from a mechanistic point of view , however , it is not well understood why two types of endothelial cells are involved in angiogenesis . experimental and computational lines of evidence suggest that in the absence of tip and stalk cell differentiation , endothelial cells can form blood - vessel - like structures , albeit with abnormal morphological parameters . in cell cultures , endothelial cells organize into network - like structures without obvious differentiation into tip and stalk cells , although the individual endothelial cells were found to vary in other aspects of their behavior , e.g. , in their tendency to occupy the nodes of vascular networks . computational models have suggested a range of biologically plausible mechanisms by which populations of identical endothelial cells can self - organize into vascular network - like structures , and by which sprout - like structures can form from endothelial spheroids . experimental interference with tip and stalk cell differentiation modifies , but does not stop , the endothelial cells ability to form networks . in mouse retinal vascular networks , inhibition of notch signaling increases the number of tip cells and produces denser and more branched vascular networks , while in gain - of - function experiments of notch the fraction of stalk cells is increased , producing less extensive branching . _ in vitro _ , similar effects of altered notch signaling are observed . taken together , these observations suggest that differentiation between tip and stalk cells is not required for vascular network formation or angiogenic sprouting . instead , tip and stalk cells may fine - tune angiogenesis , e.g.
, by regulating the number of branch points in vascular networks . the exact mechanisms that regulate the differentiation of tip and stalk cell fate are subject to debate . activation of vegfr2 by vegf - a , which is secreted by hypoxic tissue , upregulates dll4 expression . dll4 binds to its receptor notch in adjacent endothelial cells , where it induces the stalk cell phenotype , which includes downregulation of dll4 . the resulting lateral inhibition mechanism , together with increased vegf signaling close to the sprout tip , may stimulate endothelial cells located at the sprout tip to differentiate into tip cells `` in place '' . detailed fluorescence microscopy of growing sprouts _ in vitro _ and _ in vivo _ shows that endothelial cells move along the sprout and `` compete '' with one another for the tip position . endothelial cells expressing a lower amount of vegfr2 , and therefore producing less dll4 , are less likely to take the leading tip cell position , while cells that express less vegfr1 , which is a decoy receptor for vegf , are more likely to take the tip cell position . these results suggest that the vegf - dll4-notch signaling loop is constantly re - evaluated and that tip cell fate is thereby continuously reassigned . a series of recent observations , however , supports an opposing view in which tip cells differentiate more stably . tip cells express the sialomucin cd34 , making it possible to produce `` tip cell '' ( cd34 + ) and `` stalk cell '' ( cd34- ) cultures using fluorescence - activated cell sorting ( facs ) . cd34 + cells have a significantly lower proliferation rate than cd34- cultures during the first 48 hours , suggesting that during this time they do not redifferentiate into stalk cells . in cultures of cd34 - negative endothelial cells ( stalk cells ) , the wild - type ratio of tip and stalk cells reestablishes only after around ten days . thus , within the time frame of _ in vitro _ vascular network formation of around 24 to 48 hours , cross - differentiation between tip and stalk cells is relatively rare . these data suggest that the differentiation between tip and stalk cells depends on a balance between ( a ) lateral inhibition via the dll4-notch pathway , and ( b ) a stochastically , `` temporarily stabilized '' tip or stalk cell fate , potentially correlated with cd34 expression . to develop new hypotheses on the role of tip and stalk cell differentiation during angiogenesis , we developed an explorative approach inspired by long _ et al . _ , who used a genetic algorithm to identify the transition rules between endothelial cell behaviors that could best reproduce _ in vitro _ sprouting . here we use a cell - based , computational model of angiogenesis that is based on the cellular potts model ( cpm ) . we extend the model with tip and stalk cell differentiation , and systematically vary the parameters of the tip cells to search for properties that make the `` tip cells '' behave in a biologically realistic manner : i.e.
, they should move to the sprout tip and affect the overall branching morphology . we consider both a `` pre - determined '' model , in which endothelial cells are stably differentiated into tip and stalk cells throughout the simulation time of the model , and a `` lateral inhibition '' model , in which tip and stalk cells cross - differentiate rapidly via dll4-notch signaling . we compare the tip cell properties that our model predicts with differential gene expression data , and perform initial experimental tests for the resulting gene candidate _ in vitro _ . to develop new hypotheses on the role of tip cells during angiogenesis , we took the following `` agnostic '' approach that combines bottom - up modeling , bioinformatic analysis and experimental validation . we started from a previously published computational model of de novo vasculogenesis and sprouting angiogenesis . briefly , the model simulates the formation of sprouts and vascular networks from a spheroid of identical `` endothelial cells '' , driven by an autocrine , diffusive chemoattractant that drives endothelial cells together ( see ref . and materials and methods for details ) . in the first step , we assumed that a fraction of the cells ( the tip cell fraction ) are `` tip cells '' and the remaining cells are `` stalk cells '' , hence assuming that cross - differentiation between tip and stalk cells does not occur over the course of the simulation . we next systematically varied the model parameters of the tip cells to look for cell behavior that ( a ) takes the tip cells to the sprout tips , and ( b ) changes the morphology of the simulated vascular networks formed in the model . the predicted differences between tip cell and stalk cell behavior were then expressed in gene ontology terms , so as to compare them with published gene expression differences between tip and stalk cells . the analysis yielded a gene candidate that was further tested in an _ in vitro _ model of spheroid sprouting . as a computational model for angiogenesis , we used our previous cell - based model of de novo vasculogenesis and sprouting angiogenesis . the model assumes that endothelial cells secrete an autocrine , diffusive chemoattractant to attract one another . due to the resulting attractive forces between the endothelial cells , the cells aggregate into a spheroid - like configuration . if the chemoattractant sensitivity of the endothelial cells is restricted to the interfaces between the endothelial cells and the surrounding ecm by means of a contact inhibition mechanism , the spheroids sprout into microvascular - network - like configurations . although our group and others have suggested numerous plausible alternative mechanisms for de novo vasculogenesis and sprouting , in the absence of a definitive explanatory model of angiogenesis we have selected the contact inhibition model for pragmatic reasons : it agrees reasonably well with experimental observation , it focuses on a chemotaxis mechanism amenable to genetic analysis , and it has proven applicability in studies of tumor angiogenesis , age - related macular degeneration , and toxicology .
the computational model is based on a hybrid , cellular potts and partial differential equation model . the cellular potts model ( cpm ) represents biological cells as patches of connected lattice sites on a finite box of a regular 2d lattice , with each lattice site containing a _ cell identifier _ that uniquely identifies each cell . each cell is also associated with a cell type . to mimic amoeboid cell motility , the method iteratively attempts to move the interfaces between adjacent cells , depending on the amplitude of active membrane fluctuations ( expressed as a `` cellular temperature '' ) and on a force balance of the active forces the cells exert on their environment ( e.g. , due to chemotaxis or random motility ) and the reactive adhesive , cohesive and cellular compression forces . assuming overdamped motility , the cpm solves this force balance as a hamiltonian energy minimization problem ( see materials and methods for details ) . the angiogenesis model includes the following endothelial cell properties and behaviors : cell - cell and cell - matrix adhesion , volume conservation , cell elasticity , and chemotaxis at cell - ecm interfaces . to describe cell - cell adhesion we define a contact energy $j ( \tau , \tau ' )$ that represents the interfacial tension between cells of type $\tau$ and $\tau '$ . this term lumps contributions due to cell - cell adhesion and cortical tensions . we assume that cells resist compression and expansion by defining a resting area . in practice the cells fluctuate slightly around their resting area , depending on the elasticity parameter $\lambda$ . the cells secrete a diffusive chemoattractant $c$ at a rate $\alpha$ , with $\partial c / \partial t = d \nabla^2 c + \alpha \mathbb{1}_{\mathrm{cell}} - \epsilon ( 1 - \mathbb{1}_{\mathrm{cell}} ) c$ , where $d$ is a diffusion coefficient and $\epsilon$ is a degradation rate , which is zero inside cells . chemotaxis at cell - ecm interfaces is incorporated by biasing active cell extensions and retractions up the chemoattractant gradient with a factor $\chi$ , which is the chemoattractant sensitivity . we start the analysis from the set of nominal parameters listed in table [ tab : par ] ; these yield the nominal collective cell behavior shown in fig [ fig : parsweep]*a * . the parameters are set according to experimental values as far as possible . based on the cross - sectional area of the endothelial cells measured in the cell cultures ( see materials and methods for detail ) , we set the target area of the cells to 100 lattice sites . the diffusion coefficient , secretion rate and degradation rate of the chemoattractant were set equal to those used in our previous work ; note that the diffusion coefficient is set to a value lower than the one reported , e.g. , for vegf in watery conditions ( see ref . ) , because of its binding to ecm proteins . in the absence of detailed experimental data on endothelial cell - cell and cell - ecm adhesive forces , cell stiffness , and the chemotactic response , for the corresponding parameters we used the values from ref . ; the exact values of these parameters do not qualitatively affect the results of the model , and have modest quantitative impact ; for a detailed sensitivity analysis see refs . . table [ tab : par ] lists the parameter values for the angiogenesis and tip cell selection model ; underlined parameters are varied in the screen for tip cell behavior . we set up a screen for differences in the parameters of tip cells and stalk cells that affect the outcome of the model ( a sketch of the screening loop is given below ) . in particular , we looked for parameters for which the tip cells lead sprouts in such a way that it affects the network morphology .
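the screening loop itself is conceptually simple . the sketch below illustrates one way to organize it ; the callables ` run_simulation ` and ` tip_occupancy ` are hypothetical stand - ins for the compucell3d model run and the sprout - tip analysis described in the materials and methods , and the parameter names and sweep ranges are illustrative placeholders , not the values of table [ tab : par ] .

```python
# Illustrative sketch of the one-parameter-at-a-time tip cell screen.
# run_simulation and tip_occupancy are hypothetical stand-ins injected by
# the caller; parameter names and sweep ranges are placeholders.
import itertools

NOMINAL_TIP = {"chemoattractant_sensitivity": 1.0,
               "secretion_rate": 1.0,
               "tip_ecm_adhesion": 1.0,
               "motility": 1.0}

SWEEP_FACTORS = [0.25, 0.5, 1.0, 2.0, 4.0]   # multipliers of the nominal value
TIP_FRACTIONS = [0.1, 0.2, 0.3, 0.4, 0.5]

def screen(run_simulation, tip_occupancy, steps=10_000):
    """Vary one tip cell parameter at a time and record how often tip cells
    end up on sprout tips; stalk cell parameters stay at their nominal values."""
    results = []
    for name, (factor, f_tip) in itertools.product(
            NOMINAL_TIP, itertools.product(SWEEP_FACTORS, TIP_FRACTIONS)):
        tip_params = dict(NOMINAL_TIP)
        tip_params[name] = NOMINAL_TIP[name] * factor
        morphology = run_simulation(tip_params, tip_fraction=f_tip, steps=steps)
        results.append((name, factor, f_tip, tip_occupancy(morphology)))
    return results
```

a parameter setting is retained whenever its recorded occupancy exceeds that of the control run with identical tip and stalk cells , mirroring the selection criterion described in the text .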
in the angiogenesis model , a fraction of the endothelial cells is assumed to be `` tip cells '' and the remaining fraction is set to be `` stalk cells '' . we assigned the nominal parameters shown in table [ tab : par ] to both `` tip cells '' and `` stalk cells '' . we varied the underlined parameters in table [ tab : par ] to change the behavior of `` tip cells '' and ran the simulation for 10 000 time steps for a series of tip cell fractions and a series of parameters . the behavior of `` stalk cells '' was fixed , because the nominal parameters , which were thoroughly studied in our previous work , are based on _ in vitro _ experiments in which no tip cells were observed . to keep this initial analysis computationally feasible , we tested only one parameter at a time instead of searching through the complete parameter space ( see ref . for a more systematic parameter study of the initial , single - cell - type model , based on a sobol analysis ) . also , in this initial screening we have limited the analysis to parameters that we could possibly associate directly with differentially expressed genes in tip and stalk cells . for this reason , we have omitted cell size differences , and we fixed the tip - tip cell adhesion strength . fig [ fig : parsweep]*b * illustrates a typical range of morphologies that we obtained in this way . we analyzed the position of tip cells in each morphology ( fig [ fig : parsweep]*c * ) and analyzed the morphology of the vascular network as a function of the tip cell fraction . to evaluate whether tip cells occupy sprout tips , we simulated the model with a tip cell fraction chosen in accordance with published observations : 11.9% in a huvec monolayer and a higher percentage in the growing front of the retinal vasculature . because we assume that tip cell fate is strongly inhibited in a monolayer and tip cells are overexpressed in the growing front , we set the tip cell fraction at 20% , which is roughly the average of the two . at the end of each simulation we detected sprouts with tip cells on the tip using an automated method , as detailed in the materials and methods section . we then counted the percentage of sprouts with at least one tip cell at the sprout tip . if more sprout tips were occupied by a tip cell than in the control experiment with identical tip and stalk cells , the parameter values were retained for further analysis . fig [ fig : parsweep_tips ] shows the percentage of sprout tips occupied by one or more tip cells for all parameters tested . more sprouts are occupied by tip cells that : ( a ) are less sensitive to the autocrine chemoattractant than stalk cells , ( b ) adhere more strongly to the ecm than stalk cells , ( c ) adhere more strongly to stalk cells than stalk cells do to one another , ( d ) secrete the chemoattractant at a lower rate than stalk cells , or ( e ) have a higher active motility than stalk cells . for the parameters associated with cell - cell and cell - ecm adhesion , we observed a non - monotonic trend in fig [ fig : parsweep_tips ] . a slight change in an adhesion parameter affects the relative positions of tip and stalk cells , whereas a larger change can completely change the morphology of the network . for example , if the tip cells adhere slightly more strongly to the ecm than the stalk cells , the tip cells tend to be pushed to the sprout tip ( * a * ) . the stalk cells surround the tip cells if they adhere much more strongly to the ecm than the tip cells do ( * b * ) , an effect that differential chemotaxis counteracts .
in these simulations , the tip cells tended to cluster together . because tip cells do not cluster together in experiments , we excluded reduced stalk - ecm adhesion from further analysis . out of the cell behaviors that turned out to make cells move to the sprout tips , we next selected cell behaviors that also affect network morphology . we quantified network morphology using two measures ( a sketch of both is given below ) . the _ compactness _ is the ratio of the area of the largest cluster of connected cells and the area of the convex hull enclosing the connected cluster . it approaches 1 for a disk and tends to 0 for a sparse network . we also counted the number of `` gaps '' in the network , or lacunae . for details see the materials and methods section . figs [ fig : parsweep_morph]*a*-*f * take a selection of the tip cell parameters identified in the previous section , and plot the compactness ( black curves ) and the number of lacunae ( blue curves ) as a function of the tip cell fraction . the results for the remaining parameter values are shown in the supporting information . for each tip cell fraction tested , the outcome is then compared with simulations in which the tip cells were identical to the stalk cells ( i.e. , as in fig [ fig : parsweep]*a * ) . closed symbols indicate a significant difference with the respective reference simulation ( welch s t - test ) . tip cell parameters that affected network morphologies for at least half of the tip cell fractions tested were kept for further analysis . the screening selected three ways in which tip cells could differ from stalk cells to change network morphology : reduced chemoattractant sensitivity ( see fig [ fig : parsweep_morph]*a * ) , reduced chemoattractant secretion by tip cells ( see fig [ fig : parsweep_morph]*e * ) , and increased tip - ecm adhesion ( see figs [ fig : parsweep_morph]*b*-*c * ) . it turned out that increased ecm adhesion by tip cells was best modeled by reducing the adhesion of stalk cells with the ecm instead , because otherwise ( fig [ fig : parsweep_morph]*c * ) networks could not form with too many tip cells . the results of the screening held for the other parameter values tested , with two exceptions : ( 1 ) the networks disintegrated if tip cells did not respond sufficiently strongly to the chemoattractant ( * j * ) , and ( 2 ) for some parameter values the tip cells spread out over the stalk cells to cover the whole network ( * k * ) . also , the conclusions were confirmed in a screening relative to three additional nominal parameter sets . altogether , the computational screening presented in this section identified three tip cell parameters that affect tip cell position in the sprout and the morphology of the networks formed in our computational model : reduced secretion of the chemoattractant , reduced sensitivity to the chemoattractant , and increased tip - ecm adhesion . it is possible , however , that these effects are due to spatial or temporal averaging of tip and stalk cell parameters , not due to the interaction of two different cell types . the next section will introduce a control for such effects . the computational screening highlighted three tip cell parameters that affected both the position of tip cells in the sprouts and the morphology of the networks : ( 1 ) increased tip cell - ecm adhesion , ( 2 ) reduced chemoattractant secretion by tip cells , and ( 3 ) reduced chemoattractant sensitivity of tip cells .
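as a minimal sketch of these two morphometrics , the snippet below computes the compactness and the lacuna areas from a boolean lattice mask of cell - occupied sites . it uses numpy and scipy in place of the union - find and graham - scan implementations described in the materials and methods ; that substitution is an illustration choice , not the paper 's code .

```python
# Sketch of the compactness and lacunae morphometrics on a 2D boolean mask
# (True where a lattice site belongs to a cell, False in the ECM).
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def compactness(cell_mask: np.ndarray) -> float:
    """Area of the largest connected cell cluster over its convex hull area."""
    labels, n = ndimage.label(cell_mask)              # connected components
    if n == 0:
        return 0.0
    sizes = ndimage.sum(cell_mask, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    pts = np.argwhere(largest)
    hull_area = ConvexHull(pts).volume                # 'volume' is area in 2D
    return float(largest.sum()) / hull_area

def lacunae_areas(cell_mask: np.ndarray) -> np.ndarray:
    """Areas of ECM regions fully enclosed by cells (gaps not on the border)."""
    labels, n = ndimage.label(~cell_mask)
    border = np.unique(np.concatenate([labels[0, :], labels[-1, :],
                                       labels[:, 0], labels[:, -1]]))
    areas = [np.sum(labels == i) for i in range(1, n + 1) if i not in border]
    return np.array(areas)
```

the length of ` lacunae_areas ( mask ) ` gives the lacuna count used here , and its standard deviation gives the lacuna - size variability used to compare network regularity later in the paper .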
because it was unclear whether these effects were due to ( a ) the differential cell behavior of tip and stalk cells , or ( b ) temporal or spatial averaging of the parameters differentially assigned to tip and stalk cells , we compared the results against a control model that had only one cell type with `` averaged '' parameters , each averaged parameter being the mean of the tip cell parameter value and the stalk cell parameter value , weighted by the tip cell fraction . for each of the three parameters identified in the first step of the computational screening , we compared the morphologies formed in the control model after 10 000 mcs with the morphologies formed in the original model with mixed cell types ( fig [ fig : tcpropmetric ] ) . figs [ fig : tcpropmetric]*a , f , and k * show example configurations formed in the original model , in comparison with example configurations formed in the corresponding `` averaged '' model ( figs [ fig : tcpropmetric]*b , g , and l * ) . in the `` mixed '' model the tip cells ( red ) tend to move to the periphery of the branches , in contrast to the `` averaged '' model in which all cells have the same parameter values . * b * , * g * , and * l * morphologies for averaged cells . * c*-*e * , * h*-*j * , and * m*-*o * morphometrics for a range of tip cell fractions for both the control and the mixed model . the morphometrics were calculated for 50 simulations at 10 000 mcs ( error bars represent the standard deviation ) . p - values were obtained with a welch s t - test for the null hypothesis that the means of the mixed model and the control model are identical . ] we next tested if networks formed in the `` mixed '' model differed from those formed in the corresponding `` averaged '' model for tip cell fractions ranging from 0 ( no tip cells ) to 1 ( only tip cells ) . although the measures differed for individual morphometrics and tip cell fractions in all three scenarios ( figs [ fig : tcpropmetric]*c - e , h - j , m - o * ) , only in the model where tip cells had reduced chemoattractant sensitivity did all morphometrics differ significantly for practically all tip cell fractions tested ( figs [ fig : tcpropmetric]*m*-*o * ) . the analysis was repeated for three additional parameter values per scenario ; although in all three scenarios the morphometrics differed between the `` mixed '' and `` averaged '' models for a number of tip cell fractions , only in the `` reduced chemoattractant sensitivity '' scenario did the differential behavior of tip and stalk cells consistently affect the morphometrics . we thus retained only this model for further analysis . the parameter screening indicated that tip cells that are less sensitive to the chemoattractant than stalk cells tend to move to the front of the sprouts , affecting in this way the network morphology . to better understand how such differential chemoattractant sensitivity can affect angiogenic sprouting , we analyzed the migration of a cell pair consisting of one tip cell and one stalk cell . as shown in figs [ fig : cellpair]*a*-*c * , cell pairs with a large difference in chemoattractant sensitivity migrated much further than cell pairs with a smaller or no difference in chemoattractant sensitivity . to quantify this observation , we used the mccutcheon index , the ratio of the distance between the initial and final position and the total path length ( a sketch is given below ) . as shown in fig [ fig : cellpair]*d * , the mccutcheon index decreases as the tip cell s chemoattractant sensitivity approaches that of the stalk cell .
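a minimal sketch of this index , computed from a sampled centroid trajectory of the migrating cell pair ; the trajectory array is an assumed input , and this is not the paper 's analysis code .

```python
import numpy as np

def mccutcheon_index(track: np.ndarray) -> float:
    """Net displacement divided by total path length for a (T, 2) array of
    centroid positions; a value of 1.0 means a perfectly straight path."""
    steps = np.diff(track, axis=0)                 # displacement per time step
    path_length = np.linalg.norm(steps, axis=1).sum()
    net_displacement = np.linalg.norm(track[-1] - track[0])
    return net_displacement / path_length if path_length > 0 else 0.0
```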
this indicates that a strong difference in chemotaxis causes the cell pair to move along a straighter path . these results suggest that , in a self - generated gradient , heterogeneous chemoattractant sensitivity improves migration speed and persistence . in the context of angiogenesis , this effect speeds up sprouting and sprout elongation . * d * mccutcheon index as a function of the tip cell chemoattractant sensitivity . the values were averaged over 100 simulations and error bars depict the standard deviation . ] in the parameter screenings presented in the previous sections , to a first approximation we assumed that a subpopulation of endothelial cells is `` predetermined '' to become tip cells , e.g. , due to prior expression of cd34 . it is likely , however , that tip cell fate is continuously `` re - evaluated '' in a dll4-notch - vegfr2 signaling loop . tip cells express dll4 on their cell membranes , which binds to the notch receptor on adjacent cell membranes . this leads to the release of the notch intracellular domain ( nicd ) , activating the stalk cell phenotype . via this lateral inhibition mechanism , cells adjacent to tip cells tend to differentiate into stalk cells . to simulate such `` dynamic tip cell selection '' , a simplified genetic regulatory network ( grn ) model of dll4-notch signaling was added to each simulated cell , as described in detail in the materials and methods ( a minimal sketch of the selection rule follows below ) . briefly , the level of nicd in each cell is a function of the amount of dll4 expressed in adjacent cells , weighted according to the proportion of the cell membrane shared with each adjacent cell . if the nicd concentration of a tip cell exceeds a threshold , the cell cross - differentiates into a stalk cell ; conversely , if the nicd concentration of a stalk cell drops below the threshold , it differentiates into a tip cell . *a*-*f * networks formed with predefined tip cells at 10 000 mcs . *g*-*l * networks formed with the tip cell selection model for varying nicd thresholds at 10 000 mcs . *m * standard deviation of the lacuna area in a network after 10 000 mcs . *n*-*q * close - up of the evolution of a network with 20% predefined tip cells ( marked area in * b * ) . *r*-*t * comparison of the morphometrics for networks formed with predefined and selected tip cells with reduced chemoattractant sensitivity at 10 000 mcs . for the simulations with tip cell selection , the average tip cell fraction was calculated for each nicd threshold . for all plots ( * m * and * r*-*t * ) the values were averaged over 50 simulations and error bars depict the standard deviation . ] fig [ fig : ptcvsdtc ] shows the behavior of the initial `` static '' model ( figs [ fig : ptcvsdtc]*a*-*f * ) in comparison with the `` dynamic tip cell selection '' model ( figs [ fig : ptcvsdtc]*g*-*l * ) . in the dynamic model the tip cell fraction was set via the nicd threshold , such that the exact tip cell fractions depended on the local configurations . in comparison with the initial , `` static '' model ( figs [ fig : ptcvsdtc]*a*-*f * ) , the model with `` dynamic '' selection ( figs [ fig : ptcvsdtc]*g*-*l * ) seems to form more compact and regular networks .
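the selection rule sketched below uses illustrative expression levels and threshold ; the paper 's values are elided here , and the hysteresis in the notch levels is omitted for brevity .

```python
# Simplified Dll4-Notch lateral inhibition; levels and threshold are
# illustrative placeholders, not the values used in the paper.
NOTCH = {"tip": 1.0, "stalk": 1.0}   # assumed fixed Notch level per cell type
DELTA = {"tip": 1.0, "stalk": 0.2}   # tip cells express more membrane-bound Dll4

def nicd_level(cell_type, neighbor_types, interface_lengths):
    """NICD: Dll4 of each neighbor weighted by the shared membrane fraction."""
    total = sum(interface_lengths)
    if total == 0:
        return 0.0
    signal = sum(DELTA[t] * l for t, l in zip(neighbor_types, interface_lengths))
    return NOTCH[cell_type] * signal / total

def updated_type(nicd, threshold=0.5):
    """Above the NICD threshold a cell (re)differentiates to stalk, below to tip."""
    return "stalk" if nicd > threshold else "tip"
```

raising the threshold makes the stalk fate harder to induce and so increases the average tip cell fraction , which is how the nicd threshold controls the tip cell fraction in the simulations below .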
to quantify this difference in network regularity , we determined the variation of the areas of the lacunae of the networks at the final time step of a simulation . fig [ fig : ptcvsdtc]*m * shows this measurement averaged over 50 simulations for a range of tip cell fractions . lacunae in networks formed from mixtures of stalk cells and 10% to 60% `` static '' tip cells have more variable sizes than lacunae in networks formed by the `` dynamic tip cell '' model . to further analyze how dynamic tip cell selection regularized network morphologies in our model , we studied in detail how tip cells contributed to network formation in the `` static '' and `` dynamic '' tip cell models . figs [ fig : ptcvsdtc]*n*-*q * show the evolution of a part of a network formed with 20% `` static '' tip cells . at first , some tip cells are located at sprout tips and others are located adjacent to or within the branches ( fig [ fig : ptcvsdtc]*n * ) . the chemoattractant gradually accumulates `` under '' the branches , with a curvature effect producing slightly higher concentrations at the side of the lacunae . this attracts the stalk cells ( fig [ fig : ptcvsdtc]*o * ) , `` squeezing '' the tip cells out of the branch and away from the lacuna , due to their reduced chemoattractant sensitivity ( fig [ fig : ptcvsdtc]*p * ) . the resulting layered configuration with tip cells at the outer rim drives a drift away from the lacuna ( fig [ fig : ptcvsdtc]*q * ) : due to their stronger chemoattractant sensitivity , the stalk cells attempt to move to the center of the configuration , pushing the tip cells away , thus leading to directional migration driven by the mechanism outlined in the previous section ( see also ref . ) . in the `` dynamic tip cell selection '' mechanism , the persistent migration will be confined to the sprout tips . the model thus suggests that tip cells could assist in producing a local , self - generated gradient mechanism that directs the migration of sprouts , a mechanism that requires tip cells to differentiate only at sprout tips . for tip cells to `` drag '' just the sprouts , only a limited number of tip cells must be present in the network . to test this idea , we compared network morphologies for the `` dynamic '' and the `` static '' tip cell models for a range of tip cell fractions ( figs [ fig : ptcvsdtc]*r*-*t * ) . indeed , the network morphologies were practically identical for high tip cell fractions , whereas they differed significantly for all three morphometrics for tip cell fractions between 0.1 and 0.3 : in the dynamic selection model the networks become more disperse ( fig [ fig : ptcvsdtc]*r * ) and form more branches ( fig [ fig : ptcvsdtc]*s * ) and lacunae ( fig [ fig : ptcvsdtc]*t * ) than in the `` static '' model . to validate the `` dynamic '' tip cell model , we compared the effect of the tip cell fraction on network morphology with published experimental observations . the _ in vivo _ mouse retinal angiogenesis model is a good and widely used model for tip / stalk cell interactions during angiogenesis . networks formed with an increased abundance of tip cells become denser and form a larger number of branches than wild type networks . our computational model is consistent with this trend for tip cell fractions up to around 0.2 ( figs [ fig : ptcvsdtc]*r*-*t * ) , but for higher tip cell fractions the vascular morphologies become less branched ( figs [ fig : ptcvsdtc]*s*-*t * ) .
to investigate to what extent our model is consistent with these experimental observations , we tested the effect of the tip cell fraction in the `` dynamic '' tip cell selection model in more detail . in particular , we were interested in how the difference in chemoattractant sensitivity between tip and stalk cells affected network morphology . fig [ fig : dtc_ct ] shows the effect of the nicd threshold ( increasing the nicd threshold is comparable to inhibiting dll4 expression or notch signaling , and hence controls the tip cell fraction ) for a range of tip cell chemoattractant sensitivities . when the difference in chemoattractant sensitivity between tip and stalk cells is relatively small , increasing the nicd threshold results in the formation of denser networks with fewer lacunae . in contrast , when the difference in chemoattractant sensitivity between tip and stalk cells is larger , there exists an intermediate state in which the networks are both compact and have a large number of branch points ( figs [ fig : dtc_ct]*a4 * and * b4 * ) . this intermediate state resembles the dense , highly connected networks that are observed when tip cells are abundant in the mouse retina . thus , when the difference in the chemoattractant sensitivity of tip and stalk cells is sufficiently large , the model can reproduce both normal angiogenesis and the excessive angiogenic branching observed for an abundance of tip cells . networks formed for a range of tip cell chemoattractant sensitivities and nicd thresholds . ] the comparative , computational model analysis of the role of tip cells in angiogenesis predicted that , among the models tested , a model where tip cells show reduced sensitivity to an autocrine chemoattractant best matches tip cell phenomenology : the tip cells lead the sprouts , and facilitate the formation of vascular networks of regular morphology for tip cell fractions of up to around 0.2 . could a chemoattractant with these , or very similar , properties be involved in vascular development ? to answer this question , we evaluated four comparative studies of gene expression in tip and stalk cells . these studies identified three receptors involved in endothelial chemotaxis that were differentially expressed in tip cells and stalk cells : vegfr2 , cxcr4 , and apj . vegfr2 is upregulated in tip cells . vegfr2 is a receptor for the chemoattractant vegf that is secreted by hypoxic tissue . whether or not vegf is secreted at sufficiently high levels to act as an autocrine chemoattractant between endothelial cells has been under debate , with the emerging consensus being that it is most likely a long - range guidance cue of angiogenic sprouts secreted by hypoxic tissues ( reviewed in ref . ) . the chemokine cxcl12 and its receptor cxcr4 are both upregulated in tip cells , suggesting that tip cells would have higher , not lower , sensitivity to cxcl12 signaling than stalk cells . interestingly , cxcl12 and cxcr4 are key components of a self - generated gradient mechanism for directional tissue migration in the lateral line primordium . because of the key role of cxcl12/cxcr4 in angiogenesis ,
it is therefore tempting to speculate that cxcl12/cxcr4 may be part of a similar , self - generated gradient mechanism during angiogenesis . however , because cxcl12 expression is upregulated in tip cells relative to stalk cells , not downregulated , we will focus here on a third receptor / ligand pair differentially expressed in tip and stalk cells : apj and apelin . apj is a receptor for the endothelial chemoattractant apelin , which is secreted by endothelial cells . apelin expression is upregulated in tip cells , whereas its receptor apj is not detected in tip cells . thus the expression pattern of apelin and its receptor apj fits with our model prediction : apelin is an endothelial chemoattractant that is secreted by endothelial cells , and tip cells are less responsive to apelin than stalk cells . in our model the chemoattractant is secreted at the same rate by tip and stalk cells , whereas apelin is preferentially expressed in tip cells . the next section will therefore add preferential secretion of apelin by tip cells to the model , and test if and how this changes the predictions of our model . the computational analyses outlined in the previous sections suggest that apelin and its receptor apj might act as an autocrine chemoattractant in the way predicted by our model : both stalk cells and tip cells secrete apelin , and the tip cells do not express the apj receptor . gene expression analyses also suggest that tip cells secrete apelin at a higher rate than stalk cells . we therefore tested if the simulation results still held if we changed the model assumptions accordingly : in addition to a reduced chemoattractant sensitivity in tip cells , we assumed that tip cells secrete the chemoattractant at a higher rate than stalk cells . although the absence of apj expression in tip cells suggests that tip cells are insensitive to the chemoattractant , we retained a reduced , non - zero chemoattractant sensitivity for tip cells to reflect the phenomenological observation that endothelial cells are attracted to one another . such intercellular attraction could , e.g. , be mediated by cell - cell adhesion , by alternative chemoattractant - receptor pairs ( e.g. , cxcr4-cxcl12 ) , or by means of mechanical endothelial cell interactions via the extracellular matrix . fig [ fig : dtc_apelin ] shows how the apelin secretion rate in tip cells affects the morphology of the vascular networks formed in our model , as expressed by the compactness . for tip cell secretion rates of up to around ten times the stalk cell rate , the model behavior does not change . the networks become more compact and exhibit thicker branches for higher tip cell chemoattractant secretion rates . this result does not agree with the observation that apelin promotes vascular outgrowth . the increased compactness is a model artifact : stalk cells were so strongly attracted to tip cells that they engulfed the tip cells and thereby inhibited the tip cell phenotype . a similar increase in compactness and branch thickness is observed in a model where tip cells are not sensitive to the chemoattractant , which indicates that an excessively large apelin secretion rate of tip cells destabilizes sprout elongation . altogether , these results suggest that , as long as the apelin secretion rate of tip cells does not become more than ten times larger than that of stalk cells , our model produces similar results independent of the tip cell secretion rate of apelin ( the sketch below illustrates how such differential secretion enters the model ) .
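differential secretion can be expressed by making the source term of the chemoattractant equation cell - type dependent . the function below is illustrative only ; the rates and the ` type_of ` mapping are placeholders , not the paper 's implementation .

```python
import numpy as np

def secretion_field(sigma, type_of, alpha_stalk=1e-3, tip_ratio=5.0):
    """Per-lattice-site secretion rate: stalk sites secrete at alpha_stalk,
    tip sites at tip_ratio times that rate, ECM sites (sigma == 0) at zero."""
    alpha = np.zeros(sigma.shape)
    for cell_id in np.unique(sigma[sigma > 0]):
        factor = tip_ratio if type_of[cell_id] == "tip" else 1.0
        alpha[sigma == cell_id] = alpha_stalk * factor
    return alpha
```

this per - site rate would replace the uniform source term in the diffusion update of the chemoattractant field .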
compactness for a range of tip cell apelin secretion rates , with example morphologies as insets . except for the secretion rates , all parameters have the values listed in table [ tab : par ] . data points show average values for the simulations , with error bars giving the standard deviation . ] previous studies have shown that apelin promotes angiogenesis of retinal endothelial cells seeded on matrigel , as well as in _ in vivo _ systems such as the mouse retina , _ xenopus _ embryo , and chick chorioallantoic membrane . furthermore , _ in vivo _ inhibition of apelin or apj reduced sprouting in _ xenopus _ embryos , zebrafish , and the mouse retina . to assess the relation between tip - stalk cell interaction and apelin signaling , we inhibited apelin signaling in an _ in vitro _ model of angiogenic sprouting in which the fraction of cd34- ( `` stalk '' ) cells could be controlled . spheroids of immortalized human microvascular endothelial cells ( hmec-1s ) were embedded in collagen gels and in collagen enriched with vegf . after culturing the spheroids for 24 hours at 37 degrees celsius under 5% co2 , the cultures were photographed ( figs [ fig : silexp]a - f ) . the spheroids did not form network structures within the culturing time , whereas the computational model simulated both angiogenic sprouting and subsequent vascular plexus formation ( fig [ fig : parsweep]a ) . in order to assess the effect of apelin and apj silencing on sprouting in the _ in vitro _ and _ in silico _ models , we assessed the morphologies formed by the _ in silico _ model after 750 mcs . for each model the degree of sprouting was assessed by counting the number of sprouts using the semi - automated image analysis software imagej . we compared sprouting in a `` mixed '' spheroid of hmec-1s with a population enriched in `` stalk cells '' , i.e. , a population of cd34- hmec-1s sorted using facs . to inhibit apelin signaling , the spheroids were treated with an sirna silencing translation of apelin ( siapln ) or of its receptor ( siapj ) . see materials and methods for details of the normalization and statistical analysis . *i*-*l * example morphologies formed in the computational angiogenesis model ( 750 mcs ) ; * ( i - j ) * model including tip cells , in absence ( * i * ) and in presence ( * j * ) of chemoattractant inhibition ; * ( k - l ) * model with reduced tip cell number , in presence ( * k * ) and in absence ( * l * ) of chemoattractant inhibition . *m * number of sprouts after 750 mcs ; error bars show the standard deviation ; asterisks denote p - values obtained with welch s t - test in comparison with controls ( no inhibition ) . ] figs [ fig : silexp]*a*-*f * and * k*-*l * show how the number of sprouts per spheroid changes , relative to the treatment with non - translating sirna ( sint ) , due to the silencing rna treatments . to determine significance , anova was performed on each data set , one for the `` mixed '' spheroids and one for the `` stalk cell '' spheroids , followed up by pairwise comparisons using tukey s range test ( see materials and methods for detail ) . relative to a control treated with non - translating sirna ( sint ) , `` mixed '' spheroids in vegf - enriched collagen formed fewer sprouts ( figs [ fig : silexp ] * a*-*c * and * g * ) when treated with siapj or siapln . interestingly , when the collagen gels are not enriched with vegf , siapj or siapln did not significantly affect the number of sprouts .
since vegf can induce tip cell fate , this may suggest that without vegf there are too few tip cells present to observe the effects of inhibiting apelin signaling . in `` stalk cell '' spheroids , sirna treatments interfering with apelin signaling slightly improved sprouting in some replicates and had no clear effect in others . thus these results suggest that apelin signaling requires a sufficient mix of cd34 + ( `` tip '' ) and cd34- ( `` stalk '' ) cells , in support of our hypothesis that differential chemotaxis of stalk and tip cells to apelin drives the sprout forward . we next asked if the observed reduction of sprouting associated with inhibition of apelin signaling also occurred in the computational model . to mimic application of siapln in the computational model , we reduced the secretion rate of the chemoattractant in both tip and stalk cells . to mimic wild - type spheroids we used an nicd threshold that yields a mix of cd34 + and cd34- cells . to mimic spheroids enriched in stalk cells , we lowered the nicd threshold such that all endothelial cells became stalk cells . figs [ fig : silexp]*i*-*l * show how the model responds to the inhibition of apelin signaling , showing reduced sprouting after inhibiting the chemoattractant . to quantify these observations , we repeated the simulations ten times for 750 mcs . we converted the resulting images to gray scale images and counted the number of sprouts using imagej , thus using the same quantification procedure as that used for the _ in vitro _ cultures . in both the _ in silico _ `` wild type '' spheroids and the _ in silico _ `` stalk cell '' spheroids , inhibition of apelin signaling reduced sprouting . however , the simulations did not reproduce the experimental observation that in `` stalk cell '' spheroids silencing of apelin signaling had little effect in absence of vegf and slightly promoted sprouting in vegf - treated cd34- cultures . in this work we asked how and by what mechanisms tip cells can participate in angiogenic sprouting . we employed a suitable computational model of angiogenic network formation , which was extended with tip and stalk cell differentiation . in the extended model , the behavior of tip and stalk cells could be varied independently by changing the model parameters . instead of testing preconceived hypotheses on tip and stalk cell behavior , we took a `` reversed approach '' in which we could rapidly compare series of alternative parameter settings , each representing different tip cell behavior : we systematically searched for parameters that led tip cells to occupy the sprout tips , and that changed the morphology of the angiogenic networks relative to a nominal set of simulations in which tip and stalk cells have identical behavior . we studied two cases , reflecting the two extremes in the range of known molecular mechanisms regulating tip and stalk cell differentiation . in the first case , we assumed that endothelial cells are differentiated stably between a tip and stalk cell phenotype within the characteristic time scale of angiogenic development ( approximately 24 to 48 hours ) . in the second case , we assumed a much more rapidly acting lateral inhibition mechanism , mediated by dll4 and notch .
here endothelial cells can switch back and forth between tip and stalk cell fate at time scales of the same order as cell motility . our analysis showed that in a model driven by contact - inhibited chemotaxis to a growth factor secreted by endothelial cells , tip cells that respond less to the chemoattractant move to the tips of the sprouts and speed up sprout extension . under the same conditions , more regular and denser networks formed if endothelial cells switched between tip and stalk cell fate due to lateral inhibition . this limits tip cells to growing sprouts ; due to their stronger chemoattractant sensitivity , the stalk cells push the tip cells forwards , leading to faster sprout extension in a mechanism reminiscent of a `` self - generated gradient '' . we next asked if a growth factor with the predicted properties is involved in angiogenic sprouting . to this end we looked for matching differential gene expression patterns in published data sets of gene expression in tip and stalk cells . in particular , the apelin - apj ligand - receptor pair turned out to be a promising candidate : apelin is a chemoattractant for endothelial cells that is secreted by endothelial cells , and the receptor apj is only detected in stalk cells . in agreement with our simulations , _ in vitro _ experiments on endothelial spheroids showed that inhibition of apelin or its receptor apj reduced _ in vitro _ spheroid sprouting . thus the reversed , bottom - up simulation approach employed in this study helped identify a candidate molecule mediating the interaction between tip and stalk cells during angiogenesis . our approach was inspired by a prior study that used a computational model to identify what cell behavior changed when endothelial cells were treated with certain growth factors . this study used an agent - based , 3d model of angiogenesis in which sprouts extend from a spheroid . with a genetic algorithm , the parameters for which the model reproduces experimental results are derived . in this way long _ et al . _ could hypothesize what changes in cell behavior the growth factors caused , and successfully derived how certain growth factors affect cell behavior in 3d sprouting assays . here , we used a similar approach to study what behavior makes tip cells lead sprouts and affect network formation , using high - throughput parameter studies instead of objective optimization approaches . tip - stalk cell interactions have been studied before with several hypothesis - driven models where specific behavior was assigned to the tip cells based on experimental observations , and tip cells were either defined as the leading cell or tip cell selection was modeled such that the tip cell could only differentiate at the sprout tip . these models have been used to study how extracellular matrix ( ecm ) density , ecm degradation , ecm inhomogeneity , a porous scaffold , cell migration and proliferation , tip cell chemotaxis and toxins affect sprouting and angiogenesis . thus these studies asked how a specific hypothesis of tip cell behavior and tip cell position affected the other mechanisms and observables in the simulation . our approach aims to develop new models for the interaction between tip and stalk cells that can reproduce biological observations . these new hypotheses can be further refined in hypothesis - driven model studies , as we do here , e.g.
, in fig [ fig : dtc_apelin ] . in order to make this reversed approach possible , we have simplified the underlying genetic regulatory networks responsible for tip - stalk cell differentiation . these molecular networks , in particular dll4-notch signaling , have been modeled in detail by bentley _ et al . _ their model describes a strand of endothelial cells , and was used to study how lateral inhibition via dll4-notch signaling , in interaction with vegf signaling , participates in tip cell selection . with this model , bentley and coworkers predicted that the shape of the vegf gradient determines the rate of tip cell selection , and that for very high levels of vegf the intracellular levels of dll4 and vegfr2 oscillate . based on their experimental observations that tip cells migrate within a sprout , cell movement has been added to the model by allowing cells to switch positions along the sprout . bentley and coworkers reproduced tip cell migration in the sprout and showed that the vegfr2 levels in a cell determine the chance of an endothelial cell to become a tip cell . the migration of tip cells in a sprout was further studied using a model that included a cell migration model . bentley and coworkers thus showed that the differences in ve - cadherin expression between tip and stalk cells could cause tip cell migration to the sprout tip . altogether , these models gave useful insights into the role of dll4-notch signaling and vegf signaling in tip cell selection in a growing sprout . here , instead of focusing on single sprouts , we focused on the scale of a vascular network . by combining a tip cell selection model with a cell - based model of angiogenesis , we showed that tip cell selection can aid the development of dense networks by limiting the destabilizing effects of tip cells . the model prediction that tip cells respond less to a chemoattractant secreted by all endothelial cells fits with the expression pattern of the chemoattractant apelin , which is secreted by all endothelial cells and of which the receptor is not detected in tip cells . previous studies indicated that apelin induces angiogenesis _ in vitro _ . apelin - apj signaling is necessary for vascular development in _ in vivo _ systems such as the mouse retina , frog embryo , and chicken chorioallantoic membrane . furthermore , high levels of vascularization in human glioblastoma are correlated with high expression levels of apelin and apj . based on these observations , apelin is considered to be a pro - angiogenic factor .
similar to other pro - angiogenic factors such as vegf , apelin is expressed near areas where blood vessels develop , and apelin expression is induced by hypoxia . the pro - angiogenic role of apelin is linked to its role as a chemoattractant and mitogenic factor . however , the role of apelin in proliferation may be disputed , because apelin did not promote proliferation in a series of sprouting assays with human umbilical vein endothelial cells , human umbilical arterial endothelial cells , and human dermal microvascular endothelial cells . our models propose a scenario where apelin can promote angiogenesis as an autocrine chemoattractant , in contrast to the previous studies where the source of apelin was external . such a mechanism would fit with the observation that the apelin receptor apj is only expressed in stalk cells . inhibition of sprouting is manifested as a decrease in the number of sprouts . as mentioned previously , apelin may promote proliferation , and thus inhibition of apelin signaling may result in a reduced proliferation rate . a reduced proliferation rate could result in a reduced sprout length , but a reduced number of sprouts is an unlikely effect of a decreased proliferation rate . this indicates that the mechanism that drives sprouting is affected by the inhibition of apelin signaling . however , whereas in the model inhibition of apelin signaling inhibits sprouting in all tested cases , in the experimental assays the effects of apelin or apj inhibition depended on the fraction of tip cells and the environment . in mixed spheroids , apelin and apj inhibition reduced sprouting in spheroids embedded in vegf - enriched collagen . in cd34- spheroids , i.e. , spheroids enriched with stalk cells , apelin or apj inhibition had no effect in plain collagen and slightly enhanced sprouting in a vegf - rich environment . this suggests that , in a vegf - rich environment , apelin - apj signaling inhibits sprouting by stalk cells . vegf has been shown to induce tip cell fate , as well as apj expression . however , it remains unclear how the combination of a vegf - rich environment and apelin signaling could inhibit sprouting , and therefore further experiments studying the interaction between vegf and apelin signaling in vascular sprouting are needed . further _ in vitro _ experiments are also needed to study the effects of apelin signaling on network formation , which follows the initial sprouting phase . our model predicts that inhibition of apelin signaling would also block network formation . however , because the 3d sprouting assay does not mimic vascular network formation , this prediction could not be verified experimentally . the importance of vegf in our validation experiments suggests that we cannot ignore vegf in our tip cell selection model . as mentioned above , vegf may interact with apelin - apj signaling . furthermore , vegf and apelin are both involved in endothelial cell proliferation .
besides the link between vegf and apelin , vegf is also involved in tip cell selection . dll4-notch signaling and vegf signaling interact directly in two ways . first , dll4 is upregulated by signaling between vegf and vegf receptor 2 ( vegfr2 ) . second , dll4-notch signaling downregulates vegfr2 and upregulates vegf receptor 1 ( vegfr1 ) , which acts as a decoy receptor for vegf . because _ in vivo _ vegf acts as an external guidance cue for angiogenesis , the interplay between vegf signaling and dll4-notch signaling could promote tip cell selection in the growing sprouts . the expression levels of vegfr2 also directly reduce adhesion between cells , because vegfr2-vegf binding causes endocytosis of ve - cadherin . this reduced adhesion may enable cells with high vegfr2 levels , such as tip cells , to migrate to the sprout tip . because of this complex interplay between cell behavior and dll4 , notch , vegf , and the vegf receptors , future studies will replace the simplified tip cell selection model with a tip cell selection model with explicit levels of dll4 , notch , vegf , vegfr1 and vegfr2 , and link those levels directly to tip and stalk cell behaviors . furthermore , future studies should include explicit levels of apelin and apj to study if and how vegf - induced apelin secretion affects network formation . such an extended model will provide more insight into how the interaction between stalk cell proliferation , ecm association of vegf , and pericyte recruitment and interaction , which all have been linked to apelin signaling and/or vegf signaling , affects angiogenesis . in the cellular potts model , cells are represented on a finite box $\Lambda$ within a regular square lattice . each lattice site $\vec{x}$ represents a portion of a cell or of the extracellular matrix , and is associated with a cell identifier $\sigma ( \vec{x} )$ . lattice sites with $\sigma = 0$ represent the extracellular matrix ( ecm ) , and groups of lattice sites with the same $\sigma > 0$ represent one cell . each cell has a cell type $\tau ( \sigma )$ . the balance of adhesive , propulsive and compressive forces that cells apply onto one another is described using a hamiltonian , $h = \sum_{ ( \vec{x} , \vec{x} ' ) } j ( \tau ( \sigma ( \vec{x} ) ) , \tau ( \sigma ( \vec{x} ' ) ) ) ( 1 - \delta ( \sigma ( \vec{x} ) , \sigma ( \vec{x} ' ) ) ) + \sum_{\sigma} \lambda ( a ( \sigma ) - A ( \sigma ) )^2$ , with $( \vec{x} , \vec{x} ' )$ a set of adjacent lattice sites , $j$ the contact energy , $\delta$ the kronecker delta , $\lambda$ the elasticity parameter , $a ( \sigma )$ the actual cell area , and $A ( \sigma )$ the target area . to mimic random pseudopod extensions , the cpm repeatedly attempts to copy the state $\sigma ( \vec{x} )$ of a randomly chosen lattice site $\vec{x}$ into an adjacent lattice site $\vec{x} '$ selected at random among the eight nearest and next - nearest neighbors of $\vec{x}$ . the copy attempt is accepted with probability $p ( \delta h ) = \min ( 1 , e^{ - \delta h / \mu } )$ , where $\mu$ is the cell motility . one monte carlo step ( mcs ) , the unit time step of the cpm , consists of as many random copy attempts as there are lattice sites in the simulation box . the endothelial cells secrete a chemoattractant at rate $\alpha$ that diffuses and decays in the ecm , $\partial c / \partial t = d \nabla^2 c + \alpha \mathbb{1}_{\sigma > 0} - \epsilon ( 1 - \mathbb{1}_{\sigma > 0} ) c$ [ eq : diffusion ] , with $c$ the chemoattractant concentration , $d$ the diffusion coefficient , and $\epsilon$ the decay rate . after each mcs , equation [ eq : diffusion ] is solved numerically with a forward euler scheme using 15 steps of $\delta t = 2 \, \mathrm{s}$ and a lattice spacing coinciding with that of the cellular potts lattice , with absorbing boundary conditions at the boundaries of $\Lambda$ ; thus one mcs corresponds with 30 seconds . chemotaxis is modeled with a gradient - dependent term in the change of the hamiltonian associated with a copy attempt from $\vec{x}$ into $\vec{x} '$ : $\delta h_{\mathrm{chemotaxis}} = - \chi ( \tau , \tau ' ) \left( \frac{ c ( \vec{x} ' ) }{ 1 + s \, c ( \vec{x} ' ) } - \frac{ c ( \vec{x} ) }{ 1 + s \, c ( \vec{x} ) } \right)$ , with $\chi ( \tau , \tau ' )$ the chemoattractant sensitivity of a cell of type $\tau$ towards a cell of type $\tau '$ and vice versa , and $s$ the receptor saturation ( a code sketch of these update rules follows below ) .
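the update rules above can be summarized in a short sketch . the snippet is illustrative only : ` delta_h_rest ` stands in for the adhesion and area terms of the hamiltonian , and the numerical parameter values are placeholders rather than the ( partly elided ) values of table [ tab : par ] .

```python
# Illustrative CPM copy attempt with Metropolis acceptance and chemotaxis,
# plus one forward-Euler diffusion update of the chemoattractant field.
# delta_h_rest (adhesion + area terms) is an assumed callable; numeric
# values are placeholders, not the parameter set used in the paper.
import numpy as np

rng = np.random.default_rng()
MU = 50.0      # cell motility / "cellular temperature"
CHI = 500.0    # chemoattractant sensitivity (cell-ECM copies only)
SAT = 1e-3     # receptor saturation s

def delta_h_chemotaxis(c, x, xp):
    """-chi * (c(x')/(1+s c(x')) - c(x)/(1+s c(x))) for a copy from x into x'."""
    cx, cxp = c[x], c[xp]
    return -CHI * (cxp / (1.0 + SAT * cxp) - cx / (1.0 + SAT * cx))

def copy_attempt(sigma, c, delta_h_rest):
    """Try to copy the state of a random site into one of its 8 neighbors."""
    nx, ny = sigma.shape
    x = (int(rng.integers(1, nx - 1)), int(rng.integers(1, ny - 1)))
    xp = (x[0] + int(rng.integers(-1, 2)), x[1] + int(rng.integers(-1, 2)))
    if sigma[x] == sigma[xp]:
        return                      # nothing to copy
    dh = delta_h_rest(sigma, x, xp) + delta_h_chemotaxis(c, x, xp)
    if dh < 0 or rng.random() < np.exp(-dh / MU):
        sigma[xp] = sigma[x]        # accept: the cell at x extends into x'

def diffusion_step(c, inside, D=1.0, alpha=1e-3, eps=1e-3, dt=2.0, dx=1.0):
    """One forward-Euler step of dc/dt = D lap(c) + alpha*inside
    - eps*(1-inside)*c, with absorbing boundaries; called 15 times per MCS."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx**2
    c = c + dt * (D * lap + alpha * inside - eps * (1.0 - inside) * c)
    c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = 0.0
    return c
```

one mcs would call ` copy_attempt ` once per lattice site and then apply 15 diffusion steps , matching the time scale given above .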
in the angiogenesis model we assumed that chemotaxis only occurs at cell - ecm interfaces ( contact - inhibited chemotaxis ; see for detail ) ; hence we set $\chi = 0$ if neither of the two sites belongs to the ecm . for the remaining , non - zero chemoattractant sensitivities we use the shorthand notation $\chi$ . the differentiation between tip and stalk cells is regulated by a simplified tip and stalk cell selection model . the model is based on lateral inhibition via dll4-notch signaling : if dll4 binds to notch on an adjacent cell it causes the dissociation of notch , resulting in the release of notch intracellular domain ( nicd ) . we assume that tip cells express notch at a permanent level of and delta at a level of ; stalk cells express delta and notch at permanent levels of and . the level of nicd in a cell $\sigma$ , $i(\sigma)$ , is given by $i(\sigma) = n_{\tau(\sigma)} \sum_{\sigma'} \ell(\sigma,\sigma')\, d_{\tau(\sigma')}$ , in which $n_{\tau}$ and $d_{\tau}$ are the levels of notch and delta in a cell of type $\tau$ , and $\ell(\sigma,\sigma')$ is the length of the interface between cells $\sigma$ and $\sigma'$ . to model differentiation between the stalk and tip cell type in response to the release of nicd , the cell type is a function of the cell 's nicd level , with a threshold representing the nicd level above which the cell differentiates into a stalk cell . to prevent rapid cell type changes , we introduced a hysteresis effect by setting the notch levels to : and . the dll4 levels are set according to the experimental observation that tip cells express more membrane bound dll4 than stalk cells : and . to quantify the results of the sprouting simulations we calculated the compactness of the morphology and detected the lacunae , branch points and end points . the compactness is defined as $c = a_{\mathrm{cells}} / a_{\mathrm{hull}}$ , with $a_{\mathrm{cells}}$ the total area of a set of cells and $a_{\mathrm{hull}}$ the area of the convex hull around these cells . for the compactness we used the largest connected component of lattice sites with $\sigma > 0$ . this connected component was obtained using a standard union - find with path compression . the convex hull around these lattice sites is the smallest convex polygon that contains all lattice sites , which is obtained using the graham scan algorithm . lacunae are defined as connected components of lattice sites with $\sigma = 0$ ( ecm ) completely surrounded by lattice sites with $\sigma > 0$ . these areas are detected by applying the _ label _ function of mahotas on the binary image , i.e. , the image obtained if medium pixels are set to 1 and all other pixels are set to 0 . the number of labeled areas in this image is the number of lacunae , and the number of lattice sites in a labeled area is the area of a lacuna . to identify the branch points and end points , the morphology is reduced to a single pixel morphological skeleton . for this , first the morphology is obtained as the binary image . rough edges are removed from the binary image by applying a morphological closing with a disk of radius 3 . then , 8 thinning steps are performed in which iteratively all points that are detected by a hit - and - miss operator are removed from the image . in the skeleton , pixels with more than two first order neighbors are branch points and pixels with only one first order neighbor are end points . the skeleton may contain superfluous nodes . therefore , all sets of nodes that are within a radius of 10 lattice units are collected and replaced by a single node at their average position . all morphological operations are performed using the python libraries mahotas and pymorph . mahotas implements standard morphological operations , except for the closing and thinning operations required for skeleton generation .
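as an illustration of these morphometrics , here is a minimal python sketch of the compactness and lacunae measurements . for brevity it uses mahotas labeling in place of our union - find implementation and scipy 's convex hull in place of the graham scan , so it conveys the definitions rather than the exact code we ran .

```python
import numpy as np
import mahotas as mh
from scipy.spatial import ConvexHull

def compactness(sigma):
    """C = area of the largest connected cell cluster / area of its convex hull."""
    labeled, n = mh.label(sigma > 0)          # components (mahotas default connectivity)
    sizes = mh.labeled.labeled_size(labeled)
    sizes[0] = 0                              # label 0 is the background, not a cluster
    largest = labeled == np.argmax(sizes)
    hull = ConvexHull(np.argwhere(largest))   # in 2D, hull.volume is the enclosed area
    return largest.sum() / hull.volume

def lacuna_areas(sigma):
    """Areas of ECM components completely surrounded by cells."""
    labeled, n = mh.label(sigma == 0)
    # any ECM component touching the image border is not enclosed
    border = np.unique(np.concatenate([labeled[0], labeled[-1],
                                       labeled[:, 0], labeled[:, -1]]))
    sizes = mh.labeled.labeled_size(labeled)
    return [int(sizes[i]) for i in range(1, n + 1) if i not in border]
```

the closing and thinning operations required for the skeleton are not part of this sketch , nor of mahotas .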
for these we use pymorph , which implements a more complete set of morphological operations than mahotas . however , as it is implemented in pure python , it is computationally less efficient than mahotas . cells at the sprout tips were automatically detected in two steps : ( 1 ) detection of the sprouts in the network ; ( 2 ) detection of the cells on the sprout tip . for the first step , detecting sprouts , a sprout is defined as a connection between a branch point , $b$ , and an end point , $e$ . to find the branch point that is connected to end point $e$ , all nodes , except $e$ , are removed from the morphological skeleton ( fig [ fig : tipdetection]*b * ) . in the resulting image one part of the skeleton is still connected to $e$ ; this is the branch . then , all nodes are superimposed on the image with the branch ( fig [ fig : tipdetection]*c * ) and the node connected to the branch is the branch point $b$ . next , we search for the cells at the tip of the sprout , which are the cells in the sprout furthest away from $b$ . to find these cells we use a graph representation of the morphology . in this graph , each vertex represents a cell and vertices of neighboring cells share an edge ( fig [ fig : tipdetection]*d * ) . now , we calculate the shortest path between each vertex and the vertex belonging to the cell at the branch point using dijkstra 's algorithm . then , we iteratively search for vertices with the longest shortest path to $b$ , starting at the vertex associated with $e$ . to limit the search to a single sprout , the search is stopped when $b$ is reached . when the search is finished , the node or nodes with the longest shortest path to $b$ represent the cell or cells that are at the sprout tip . [ figure caption : * b * all nodes , except $e$ , are removed . * c * the union of the nodes and the connected component in * b * that contains $e$ . the node that , in * c * , is part of the same connected component as $e$ is the branch point . * d * detection of cells at the sprout tip ( red vertices ) , which are farthest away from the branch point ( black vertex ) . ] the simulations were implemented using the cellular potts modeling framework _ compucell3d _ , which can be obtained from http://www.compucell3d.org . the simulation script is deposited in . this file also includes two extensions to _ compucell3d _ , called steppables , which we developed for the simulations presented in this paper . steppable _ randomblobinitializer _ is used to initialize the simulations with a blob of cells , and steppable _ tcs _ contains the tip cell selection model . to efficiently set up , run , and analyze large parameter sweeps , including the ones presented in this paper , we have developed a pipeline for running large numbers of simulations of cell - based models on parallel hardware using software like _ compucell3d _ , described in detail elsewhere . briefly , the pipeline automatically generates simulation scripts for a list of parameter values , runs the simulations on a cluster , and analyzes the results using the morphometric methods described in sections morphometrics and tip cell detection . immortalized human dermal endothelial cells ( hmec-1s ) were cultured in 2% gelatin - coated culture flasks at 37 under 5% with m199 medium ( gibco , grand island , ny , usa ) supplemented with 10% foetal calf serum ( biowhittaker , walkersville , md , usa ) , 5% human serum and 1% penicillin - streptomycin - glutamine ( gibco ) . the hmec-1 cells used in this study were a kind gift of prof . p .
hordijk ( sanquin , amsterdam , the netherlands ) and were derived from ref . cell suspensions were obtained from the cultures by tryple ( gibco ) treatment of adherent endothelial cell monolayers . after the cells were extracted from the culture they were seeded in methylcellulose - containing medium ( sigma - aldrich ) to allow spheroid formation . after 18 hours , the spheroids were embedded in a collagen gel containing human serum . in the period that these experiments were performed , the lab had to change collagen gels because of availability issues . therefore , the following three gels were used : purecol bovine collagen ( nutacon , leimuiden , the netherlands ) , nutacon bovine collagen ( nutacon , leimuiden , the netherlands ) , and cultrex rat collagen i ( r&d systems , abingdon , united kingdom ) . where indicated , the gels were supplemented with vegf - a ( 25 ng / ml ) . after 24 h , images of the sprouts were obtained using phase - contrast microscopy . using imagej with the neuronj plugin , the number of sprouts and the length of the sprouts in the image were counted . to compare the _ in silico _ simulations with the _ in vitro _ experiments , _ in silico _ morphologies at 750 mcs were analyzed following the same method . to prevent biases in this manual analysis due to prior knowledge , black and white images in which tip and stalk cells were indistinguishable ( see ) were counted by a technician . to study sprouting in the absence of tip cells , cd34 negative hmec-1s were extracted using fluorescence - activated cell sorting ( facs ) . for this , the cells were washed in pbs containing 0.1% bovine serum albumin . cells were incubated with anti - cd34-phycoerythrin ( anti - cd34-pe ; clone qbend-10 ) and analyzed by flow cytometry on a facscalibur ( becton dickinson , franklin lakes , nj , usa ) with flowjo 6.4.7 software ( tree star , san carlos , ca , usa ) . to inhibit apelin signaling , hmec-1s were transfected with a silencing rna ( sirna ) against apelin ( siapln ) or against the apelin receptor apj ( siapj ) , and a non - translating sirna ( sint ) was used as a control . for each sirna , the hmec-1s were transfected with 25 nm ( final concentration ) sirna ( dharmacon , lafayette , co , usa ) and 2.5 nm dharmafect 1 ( dharmacon ) for 6 hours using the reversed transfection method . transfection efficiency was evaluated with qpcr and a knockdown of rna expression above 70% was considered an effective transfection . for both the unsorted hmec-1s and the cd34 negative hmec-1s , the experiments were repeated several times , resulting in 4 biological replicates for the unsorted hmec-1s and 5 biological replicates for the cd34 negative hmec-1s .
to combine the results of the biological replicates , the number of sprouts $n_{i , r}$ of spheroid $i$ in replicate $r$ was normalized : $\bar{n}_{i , r} = n_{i , r} / \langle n^{\mathrm{nt}}_{r} \rangle$ , with $\langle n^{\mathrm{nt}}_{r} \rangle$ the average number of sprouts formed with the non - translating sirna treatment in biological replicate $r$ . next , we computed the average number of sprouts per replicate : $\bar{n}_{r} = \frac{1}{m_{r}} \sum_{i} \bar{n}_{i , r}$ , with $m_{r}$ the number of spheroids in replicate $r$ . this resulted in four data points for the unsorted hmec-1s and five data points for the cd34 negative hmec-1s . then , the significance of each treatment was analyzed in a two - step procedure . first , groups in which the means differ significantly were identified with analysis of variance ( anova ) . second , to identify which means in a group differ , we used tukey 's range test to compare the results of the treatments in plain collagen with the sint treatment in plain collagen and the treatments in vegf - enriched collagen with the sint treatment in vegf - enriched collagen . all experimental measurements are included in s1 dataset together with the python script used to perform the statistical analysis . an archive containing the photographs of the hmec spheroids used for the image analysis is included as . the spheroid assay was performed as described above . gels were fixed with 4% paraformaldehyde for 15 min at room temperature and blocked with blocking buffer containing 1% fbs , 3% triton x-100 ( sigma ) , 0.5% tween-20 ( sigma ) , 0.15% sodium azide for 2 hours . cells were incubated with antibodies directed against f - actin ( phalloidin , life technologies , carlsbad , ca , usa ) . three - dimensional image stacks were recorded using confocal microscopy . within those , images containing the largest cross - section were selected visually , and measurements were obtained using the imagej polygonal selection tool . the image stacks and measurements are included as . * cells aggregate instead of forming a network with 20% predefined tip cells and .* * close up of tip cells on the side of a branch that cause network expansion . * for this simulation , 20% of the cells were predefined as tip cells with .* selected tip cells do not pull apart the network in a simulation with and .* * sprouting is strongly inhibited for and 90% inhibition of apelin secretion ( and ) . * * when the model is adapted for apelin , ` predefined ' tip cells get surrounded by stalk cells .* for this simulation 10% of the cells were predefined as tip cells with and .* archive containing the results of morphological analysis of the _ in vitro _ endothelial sprouting assays . * the archive contains one text file for each treatment for each replicate and the python script used to perform the statistical analysis .* archive containing the photographs of the hmec-1 spheroids .* images in tiff format , as used for the image analysis . the archive also includes the output files of the neuronj plugin to imagej ( file extension `` .ndf '' ) as well as xml - files containing microscopy settings . the dataset ( 1.6 gb zipped archive ) is available via data archiving and networked services ( dans ) at http://dx.doi.org/10.17026/dans-x4d-b642 . * archive containing fluorescently stained photographs of individual endothelial cells and whole hmec-1 spheroids , as used for area estimation .
* images in tiff format . the measurements ( performed using imagej ) are given in file `` cellareas.xlsx '' . * simulation script and code needed to run the simulations in the cpm modeling framework _ compucell3d _ . * the simulation script ( angiogenesis.xml ) can be used when the two cc3d steppables , randomblobinitializer and tcs , are compiled and installed . randomblobinitializer is needed to initialize a simulation with a circular blob and this steppable may be replaced with cc3d 's blobinitializer . tcs is the steppable that runs the dll4-notch genetic network and should be omitted to run simulations with predefined tip cells . + * effects of increasing ecm adhesion for stalk cells . * * a * stalk cells that adhere more strongly to the ecm than tip cells will engulf tip cells . * b * stalk cells that adhere slightly more to the ecm than tip cells do engulf tip cells , because chemotaxis has the same effect on tip and stalk cells . * a - b * are the results of a simulation of 10 000 mcs with 20% tip cells . + * effects of varying tip cell chemotaxis . * ( * a*-*c * ) , tip cell chemoattractant secretion rate ( * d*-*f * ) and stalk - ecm adhesion ( * g*-*i * ) . the morphometrics were obtained after 10 000 mcs and are the average of 50 simulations ( error bars represent standard deviation ) . p - values were obtained with a welch 's t - test for the null hypothesis that the mean of the sample is identical to that of a reference where all cells have the default properties . * j * the network disintegrates with and 20% tip cells . * k * tip cells over the network for and 20% tip cells . + * differences in cell properties can enable cells of one type to occupy sprout tips for three alternative parameter sets . * for * a * the decay rate was reduced , for * b * the decay rate was increased and for * c * receptor saturation was included in the model . the percentage of sprout tips occupied by at least one tip cell was calculated at 10 000 mcs . error bars show the standard deviation over 50 simulations . in each simulation 20% of the cells were predefined as tip cells . for each simulation one tip cell parameter was changed , except for the control experiment where the nominal parameters were used for both tip and stalk cells . p - values were obtained with a one sided welch 's t - test for the null hypothesis that the number of tip cells at the sprout tips is not larger than in the control simulation . + * effects of tip cells with . * ( * a*-*c * ) , ( * d*-*f * ) or ( * g*-*i * ) on the network morphology for the three alternative parameter sets . the morphometrics were obtained after 10 000 mcs and are the average of 10 simulations ( error bars represent standard deviation ) . p - values were obtained with a welch 's t - test for the null hypothesis that the mean of the sample is identical to that of a reference sample in which all cells have the default properties . + * comparison of networks formed with mixed cells and cells with average properties for additional values of , and . * the morphometrics were calculated for 50 simulations at 10 000 mcs ( error bars represent the standard deviation ) . p - values were obtained with a welch 's t - test for the null hypothesis that the means of the mixed model and the control model are identical . + * effects of increasing tip cell apelin secretion rate for tip cells that do not respond to apelin . * compactness of the final network ( 10 000 mcs ) with the morphologies for tip cell apelin secretion rates of , , , , and as insets .
to enable network formation without tip cell chemotaxis , and were reduced to 25 . data points show average values for simulations with error bars giving the standard deviation . + * effect of siapj and siapln on sprout lengths for all experiments . * the boxes show the first to third quartile of the data . the whiskers show q1 - 1.5 iqr to q3 + 1.5 iqr , with iqr = q3 - q1 the interquartile range , or the most extreme observations if those fall within the range of the whisker . superimposed on the box plots are the data points . note that the experiments are done with different collagen gels : purecol collagen ( * a*,*e * ) , nutacon collagen ( * b * , * f * , * g * ) , and cultrex rat collagen ( * c * , * d * , * h * , * i * ) . + * black and white images of the morphologies produced in the _ in silico _ apelin silencing experiments as provided to the technician . * the authors thank indiana university bloomington and the biocomplexity institute for providing the cc3d modeling environment ( www.compucell3d.org ) . this work was carried out on the dutch national e - infrastructure with the support of surf cooperative ( www.surfsara.nl ) . gerhardt h , golding m , fruttiger m , ruhrberg c , lundkvist a , abramsson a , et al . . j cell biol . available from : http://dx.doi.org/10.1083/jcb.200302047 claxton s , fruttiger m. . gene expr patterns . available from : http://dx.doi.org/10.1016/j.modgep.2004.05.004 siemerink mj , klaassen i , vogels imc , griffioen aw , van noorden cjf , schlingemann ro . angiogenesis . 2012 mar;15(1):151 - 163 . available from : http://dx.doi.org/10.1007/s10456-011-9251-z jakobsson l , franco ca , bentley k , collins rt , ponsioen b , aspalter im , et al . nat cell biol . 2010 sep;12(10):943 - 953 . available from : http://dx.doi.org/10.1038/ncb2103 . arima s , nishiyama k , ko t , arima y , hakozaki y , sugihara k , et al . development . 2011 sep;138(21):4763 - 4776 . available from : http://dx.doi.org/10.1242/dev.068023 . folkman j , haudenschild c. . available from : http://dx.doi.org/10.1038/288551a0 . califano j , reinhart - king ca . . cell mol bioeng . 2008 aug;1(2):122 - 132 . available from : http://dx.doi.org/10.1007/s12195-008-0022-x . parsa h , upadhyay r , sia sk . . p natl acad sci usa . 2011 mar;108(12):5133 - 5138 . available from : http://dx.doi.org/10.1073/pnas.1007508108 . szabó a , perryn ed , czirók a. . phys rev lett . available from : http://dx.doi.org/10.1103/physrevlett.98.038102 . szabó a , méhes e , kósa e , czirók a. biophys j. 2008;95(6):2702 - 2710 . available from : http://dx.doi.org/10.1529/biophysj.108.129668 . siekmann af , lawson nd . . available from : http://dx.doi.org/10.1038/nature05577 . long bl , rekhi r , abrego a , jung j , qutub aa . j theor biol . 2012 dec;326(7):43 - 57 . available from : http://dx.doi.org/10.1016/j.jtbi.2012.11.030 . graner f , glazier ja . . phys rev lett . available from : http://dx.doi.org/10.1103/physrevlett.69.2013 . glazier ja , graner f. . phys rev e. 1993;47(3):2128 - 2154 . available from : http://dx.doi.org/10.1103/physreve.47.2128 . gamba a , ambrosi d , coniglio a , de candia a , di talia s , giraudo e , et al . . phys rev lett . available from : http://dx.doi.org/10.1103/physrevlett.90.118101 . namy p , ohayon j , tracqui p. j theor biol . 2004 mar;227(1):103 - 120 . available from : http://dx.doi.org/10.1016/j.jtbi.2003.10.015 . savill nj , hogeweg p. . j theor biol . 1996 feb;184(3):229 - 235 . available from : http://dx.doi.org/10.1006/jtbi.1996.0237 . angelini te , hannezo e. . proc natl acad sci usa . 2011 feb;108(12):4714 - 4719 .
available from : dx.doi.org/10.1073/pnas.1010059108 . donà e , barry jd , valentin g , quirin c , khmelinskii a , kunze a , et al . directional tissue migration through a self - generated chemokine gradient . nature 2013;503(7475):285 - 289 . available from : http://dx.doi.org/10.1038/nature12635 . harrington ls , sainson rca , williams ck , taylor jm , shi w , harris al . . microvasc res . available from : http://dx.doi.org/10.1016/j.mvr.2007.06.006 . del toro r , prahst c , mathivet t , siegfried g , kaminker js , larrivée b , et al . blood . 2010 nov;116(19):4025 - 4033 . available from : http://dx.doi.org/10.1182/blood-2010-02-270819 . seghezzi g , patel s , ren cj , gualandris a , pintucci g , robbins es , et al . j cell biol . 1998 jul;141(7):1659 - 1673 . available from : http://www.ncbi.nlm.nih.gov/pubmed/9647657 . franco m , roswall p , cortez e. . cell . available from : http://dx.doi.org/10.1182/blood-2011-01-331694 . geudens i , gerhardt h. development . 2011 sep;138:4569 - 4583 . available from : http://dx.doi.org/10.1242/dev.062323 . salcedo r , oppenheim jj . role of chemokines in angiogenesis : cxcl12/sdf-1 and cxcr4 interaction , a key regulator of endothelial cell responses . microcirculation 2003;10(3 - 4):359 - 370 . available from : http://dx.doi.org/10.1038/sj.mn.7800200 . caolo v , van den akker nms , verbruggen s , donners mmpc , swennen g , schulten h , et al . j biol chem . 2010 dec;285(52):40681 - 40689 . available from : http://dx.doi.org/10.1074/jbc.m110.176065 . milde f , bergdorf m , koumoutsakos p. biophys j. 2008 oct;95(7):3146 - 3160 . available from : http://dx.doi.org/10.1529/biophysj.107.124511 . bauer al , jackson tl , jiang y. biophys j. 2007 may;92(9):3105 - 3121 . available from : http://dx.doi.org/10.1529/biophysj.106.101501 . bauer al , jackson tl , jiang y. plos comput biol . 2009 jul;5(7):e1000445 . available from : http://dx.doi.org/10.1371/journal.pcbi.1000445 . artel a , mehdizadeh h , chiu yc , brey em , cinar a. . tissue eng part a. 2011;17(17):2133 - 2141 . available from : http://dx.doi.org/10.1089/ten.tea.2010.0571 . gavard j , gutkind js . . nat cell biol . available from : http://dx.doi.org/10.1038/ncb1486 . kasai a , ishimaru y , higashino k , kobayashi k , yamamuro a , yoshioka y , et al . 2013 may;16(3):723 - 734 . available from : http://dx.doi.org/10.1007/s10456-013-9349-6 . ruhrberg c , gerhardt h , golding m , watson r , ioannidou s , fujisawa h , et al . genes dev . 2002 oct;16(20):2684 - 2698 . available from : http://dx.doi.org/10.1101/gad.242002 . ribatti d , nico b , crivellato e. dev biol . 2011 jan;55(3):261 - 268 . available from : http://dx.doi.org/10.1387/ijdb.103167dr . graham rl . . inf process lett . 1972;1:132 - 133 . guidolin d , vacca a , nussdorfer gg , ribatti d. . microvasc res . available from : http://dx.doi.org/10.1016/j.mvr.2003.11.002 . dougherty er , lotufo ra . . spie press ; 2003 . dijkstra ew . . numer math . available from : http://dx.doi.org/10.1007/bf01386390 swat mh , thomas gl , belmonte jm , shirinifard a , hmeljak d , glazier ja . . in : asthagiri ar , arkin ap , editors . methods cell biol . academic press ; 2012 . p. 325 - 366 . available from : http://dx.doi.org/10.1016/b978-0-12-388403-9.00013-8 . palm mm , merks rmh . large - scale parameter studies of cell - based models of tissue morphogenesis using compucell3d or virtualleaf . in : nelson cm , tissue morphogenesis : methods and protocols . 1189 of methods in molecular biology . new york : springer ; 2014 . p. 301 - 322 . available from : http://dx.doi.org/10.1007/978-1-4939-1164-6_20 korff t , augustin h.
. j cell biol . 1998;143(5):1341 - 1352 . available from : http://dx.doi.org/10.1083/jcb.143.5.1341 . schneider ca , rasband ws , eliceiri kw . . nat methods . available from : http://dx.doi.org/10.1038/nmeth.2089 . meijering e , jacob m , sarria jcf , steiner p , hirling h , unser m. . cytometry a. 2004;58(2):167 - 176 . available from : http://dx.doi.org/10.1002/cyto.a.20022 . accessed 15 july 2014 . available from : http://www.thermoscientificbio.com/uploadedfiles/resources/reverse-transfection-of-sirna-protocol.pdf .
|
angiogenesis involves the formation of new blood vessels by sprouting or splitting of existing blood vessels . during sprouting , a highly motile type of endothelial cell , called the tip cell , migrates from the blood vessels , followed by stalk cells , an endothelial cell type that forms the body of the sprout . to get more insight into how tip cells contribute to angiogenesis , we extended an existing computational model of vascular network formation based on the cellular potts model with tip and stalk differentiation , without making a priori assumptions about the differences between tip cells and stalk cells . to predict potential differences , we looked for parameter values that make tip cells ( a ) move to the sprout tip , and ( b ) change the morphology of the angiogenic networks . the screening predicted that if tip cells respond less effectively to an endothelial chemoattractant than stalk cells , they move to the tips of the sprouts , which impacts the morphology of the networks . a comparison of this model prediction with genes expressed differentially in tip and stalk cells revealed that the endothelial chemoattractant apelin and its receptor apj may match the model prediction . to test the model prediction we inhibited apelin signaling in our model and in an _ in vitro _ model of angiogenic sprouting , and found that in both cases inhibition of apelin or of its receptor apj reduces sprouting . based on the prediction of the computational model , we propose that the differential expression of apelin and apj yields a `` self - generated '' gradient mechanism that accelerates the extension of the sprout .
|
a community is a fundamental element in social media for communication and collaboration . understanding community structure in social media is critical due to its broad applications such as friend recommendations , link predictions and collaborative filtering . however , there is no widely accepted definition of community in the literature . informally , a community is a densely connected group of nodes that is sparsely connected to the rest of the network . in other words , a community should have more internal than external connections . the community detection problem has been extensively studied . various algorithms have been proposed to minimize or maximize a goodness measurement , such as modularity , eigenvector , and conductance , etc . in general , the community detection algorithms can be categorized into disjoint and overlapping algorithms . comparison has been conducted to measure the performance of various community detection algorithms . furthermore , some studies validated the obtained communities against ground truth , which is the known community membership of users . however , previous research in community detection algorithm design and evaluation has ignored an important metric , the size of the community . [ 1 ] suggests that the size of a community with strong ties in social media should be limited to 150 due to the cognitive and time constraints of human beings in both traditional social networks and internet - based social networks , i.e. , social media . therefore , too large communities contain weak connections and are therefore not stable . in addition , too small communities do not have practical value . in this paper , we use the term _ desirable community _ to refer to communities of size in the range [ 4 - 150 ] . we study community detection from the following perspectives : 1 ) size of the detected communities , 2 ) percentage of users assigned to a desirable community , called the coverage of the communities , 3 ) extended modularity , 4 ) triangle participation ratio ( tpr ) , and 5 ) the interest of users in the same community .
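since community coverage is used repeatedly below , one way to make the metric precise is the following sketch , where the [ 4 - 150 ] bounds encode the desirable - community range and ` communities ` may contain overlapping node sets :

```python
def coverage(communities, n_users, lo=4, hi=150):
    """Fraction of users assigned to at least one desirable community."""
    covered = set()
    for members in communities:      # each community is an iterable of user ids
        if lo <= len(members) <= hi:
            covered.update(members)
    return len(covered) / n_users
```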
for user interest , we collect the top 10 hashtags tweeted by a user and manually inspect the hashtags to decide the user interest . in addition , we propose a simple and intuitive community detection algorithm called the clique augmentation algorithm ( caa ) , which augments the cliques in the network into communities . we use a growing threshold and an overlapping threshold to control the size of the community and the amount of overlap among communities . we evaluated five widely used community detection algorithms on a twitter topology of over 318,233 users we collected in 2013 . experimental results show that the well - accepted infomap outperforms newman 's leading eigenvector , fast greedy , and multilevel algorithms in terms of community size distribution , community coverage , and user interest . for example , infomap assigns of users in the network into meaningful communities of size in the range of 4 to 150 , while of the size of the communities generated by the eigenvector algorithm falls in the range of 1 to 3 , which leads to less than of users being assigned to a meaningful community . we also observe that our proposed caa algorithm produces communities of desired size and coverage . for example , of users are assigned to meaningful communities by caa . in addition , caa outperforms all other algorithms in triangle participation ratio and demonstrates decent modularity . finally , we show that the users in caa communities show stronger similarity than in communities obtained by all other algorithms . our contributions in this paper are the following : 1 . we investigate an important but overlooked metric , the size of the community , to evaluate the quality of communities . to the best of our knowledge , this is the first paper that carries out an empirical study on the size of the community and the modularity of communities with different sizes . 2 . we discover that existing algorithms which optimize modularity can be significantly improved by considering community size during the optimization process . 3 . we investigate the community theme through hashtags . 4 . we demonstrate that a heuristic clique augmentation algorithm can produce high quality overlapping communities , which are needed by social media . the remainder of this paper is organized as follows . section 2 describes related work in community detection algorithms and performance comparison . section 3 introduces the proposed clique augmentation algorithm . performance evaluation is given in section 4 and section 5 concludes this paper and outlines our future work . much work has been conducted in the area of community detection along with ways to determine the quality of the identified communities . one comprehensive survey of recent advances discusses a wide range of existing algorithms including traditional methods , modularity based methods , spectral algorithms , dynamic algorithms , and more . similarly , it has been pointed out that it is important for the community detection algorithm to extract functional communities based on ground truth , where a functional ground - truth community is described as a community in which an overall theme exists . another relevant paper discusses overlapping community detection algorithms along with various quality measurements . community detection often tries to optimize various metrics such as modularity as described by girvan and newman , or conductance . other work discusses many of the various objective functions currently in use and how they perform . there exist many community detection algorithms in the literature . we can categorize them
into disjoint algorithms and overlapping algorithms , based on whether the identified communities have overlap or not . infomap stands out as the most popular and widely used disjoint algorithm . the infomap algorithm is based on random walks on networks combined with coding theory with the intent of understanding how information flows within a network . multilevel is a heuristic algorithm based on modularity optimization . multilevel first assigns every node to a separate community , then selects a node and checks the neighboring nodes , attempting to group the neighboring node with the selected node into a community if the grouping results in an increase in modularity . newman 's leading eigenvector works by moving the maximization process to the eigenspectrum to maximize modularity by using a matrix known as the modularity matrix . fast greedy is based upon modularity as well . it uses a greedy approach to optimize modularity . in the category of overlapping algorithms , the clique percolation method is the most prominent ; it merges two cliques into a community if they overlap more than a threshold . in this section we propose a clique based algorithm to find communities and we call it the clique augmentation algorithm ( caa ) . caa is built on the following two principles : 1 ) users in a maximal clique belong to a stable community since a clique is densely connected internally ; 2 ) a neighboring node that is highly connected to a clique should be part of the community since it keeps the triadic closure property among all nodes in the community . given a social network topology , the caa algorithm discovers communities in the topology using the following steps : + step 1 : find all maximal cliques in the topology . + step 2 : filter the overlapping cliques . we sort the cliques based on their size and then use an _ overlapping threshold _ to control the amount of overlap between two cliques . the overlapping threshold is defined as the percentage of overlapping nodes in the smaller clique . for example , consider two cliques $c_1$ and $c_2$ , where $c_1$ is of size 10 and $c_2$ is of size 5 . suppose the overlapping threshold is 0.7 . if $c_2$ only has 2 nodes overlapping with $c_1$ , we consider $c_1$ and $c_2$ as two independent cliques since $2 < 0.7 \times 5 = 3.5$ . if $c_2$ had 4 overlapping nodes with $c_1$ , we would discard $c_2$ since $4 > 3.5$ . + step 3 : grow each clique into a community by adding new nodes one by one . the _ growing threshold _ is utilized for controlling the growth of each community . the growing threshold is defined as the ratio of the number of incoming edges from the new node to other nodes in the community over the size of the community . for example , if a community has a size of 10 , and the growing threshold is set to 0.7 , then for a neighboring node to be added into the community , it must have at least 7 edges coming into the community . the algorithm checks the neighboring nodes for each node within the current community . this process is repeated for the updated community until no more nodes can be added . the growing threshold allows us to zoom into or out of the graph around the clique . + it is worth noting that caa takes a different approach than the clique percolation method ( cpm ) , where two adjacent cliques are merged into a community structure .
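a compact sketch of the three steps using networkx is given below ; the tie - breaking details and the order in which candidate nodes are tested are simplifications of our implementation , so the sketch conveys the logic rather than the exact code we ran .

```python
import networkx as nx

def caa(g, overlap_t=0.6, grow_t=0.7, min_size=3):
    """Clique augmentation: filter overlapping cliques, then grow each into a community."""
    # step 1: all maximal cliques, largest first
    cliques = sorted((c for c in nx.find_cliques(g) if len(c) >= min_size),
                     key=len, reverse=True)
    # step 2: keep a clique only if its overlap with every kept (larger)
    # clique stays below overlap_t times its own size
    kept = []
    for c in cliques:
        if all(len(set(c) & set(k)) < overlap_t * len(c) for k in kept):
            kept.append(c)
    # step 3: grow each clique by absorbing neighbors with enough edges inward
    communities = []
    for c in kept:
        comm = set(c)
        grown = True
        while grown:
            grown = False
            frontier = {n for m in comm for n in g[m]} - comm
            for n in frontier:
                if sum(1 for m in g[n] if m in comm) >= grow_t * len(comm):
                    comm.add(n)
                    grown = True
        communities.append(comm)
    return communities
```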
in caa , instead of merging neighboring cliques , we simply grow the community structure by adding individual nodes to the community sequentially . caa has a few nice features : 1 ) it tends to be faster than cpm and manages to produce similar results . 2 ) caa tries to capture the natural growth process of a community in the sense that if a user befriends many users in a densely connected community , the user will most likely grow as part of the community . 3 ) experimental results show caa tends to produce decent sized communities and users in the same community share strong interests . we carry out the comparative research on a twitter arizona user follower topology that we collected in summer 2013 , which contains arizona twitter users with directed edges , and we call it aztopology . an undirected graph is derived from aztopology by removing all non - mutual edges and the isolated nodes . we call it aztopologymutual . the aztopologymutual network contains nodes and undirected edges . in this section , we measure the impact of the growing threshold and the overlapping threshold on community size and the number of communities in order to give suggestions on the parameter selection for the caa algorithm . first we look into the effect of the growing threshold . we find all cliques of size 3 and larger in azmutualtopology and set the overlapping threshold to 0 to find all non - overlapping cliques . we set the growing threshold to 0.5 , 0.7 , and 0.9 , indicating a neighboring node can only be added to the community if it is connected to at least 50% , 70% , or 90% of nodes in the community . the result is plotted in fig . [ fig : growingthreshold ] . the x - axis is the community size range and the y - axis is the number of communities whose size falls in the range . it can be seen that the cliques do not grow much with growing threshold since most communities are of size between 3 and 9 . with the growing threshold set to , the number of communities with size in range [ 3 - 9 ] drops significantly , from around 11,000 to 5,000 , and there are more communities in the range of [ 10 - 150 ] . therefore , we recommend setting the growing threshold to . another interesting observation is that there is no significant difference in the distribution of community size for growing threshold and . next we investigate the effect of the overlapping threshold . we choose all cliques of size over 15 and increase the overlapping threshold from 0 to 1 . intuitively , by increasing the overlapping threshold , fewer cliques are filtered , and therefore the number of communities increases . as can be seen in fig . [ fig : overlappingthreshold ] , where the x - axis is the overlapping threshold value and the y - axis is the number of cliques , the number of cliques increases significantly for overlapping threshold . in general , we suggest choosing an overlapping threshold less than 0.6 to avoid having heavily overlapping communities . previous survey papers and have carried out comparative research on community detection algorithms and proposed different evaluation metrics . however , they ignored an important factor , that is , the size of the communities . extremely large communities do not represent strong and stable communities . for this research we are interested in smaller communities where users are actually communicating and an overall theme amongst users exists .
as described by dunbar , the dunbar number 150 applies to social media as well . this indicates that community size is an important factor and communities with more than 150 users are less desirable communities . on the other hand , communities of size 1 , 2 , and 3 are trivial . therefore , we propose to study communities of size in [ 4 - 150 ] and we call such communities desirable communities . users in such communities have stronger influence among each other and are less likely to leave the community . we propose to compare different community detection algorithms with the following criteria : community size , community coverage , extended modularity , triangle participation ratio , and the hash - tag similarity among users in the same community . we adopt the graph package for the implementation of newman 's leading eigenvector , infomap , the multilevel algorithm , fast greedy optimization of modularity , label propagation , edge betweenness , and the clique percolation method ( cpm ) , and implement our proposed caa algorithm . the growing threshold of caa is set to 0.7 and its overlapping threshold is set to 0 . the communities grow from cliques of size . each algorithm was given the aztopologymutual graph as the input and five hours to complete . out of the eight algorithms , newman 's leading eigenvector , infomap , multilevel , fast greedy and caa finished within five hours . so we only show the performance of these five algorithms . we run the algorithms on the undirected graph to ensure fairness since not all algorithms support directed graphs . only infomap , label propagation , edge betweenness , and caa can run on directed graphs . in this section , we present the number of communities and the size distribution of the communities . table [ tab : totalcomm ] summarizes the number of communities and the size of the largest community revealed by each algorithm . modularity maximization algorithms such as multilevel , eigenvector , and fastgreedy all group users into large communities . for example , the largest community eigenvector produces has a size of 136,403 , that is , over of all users are grouped into one large community . there is a lack of strong connections among community users in such a large community and we can hardly put this large community to practical use .. total number of detected communities and the size of the largest community [ cols="^,^,^",options="header " , ] additionally , we inspect large communities of size between 150 - 300 to check whether these communities make sense . we did this for the infomap method because this is one of the most popular community detection algorithms . table [ tab : badcomm ] shows a sampling of three randomly selected communities . here the hashtags are sorted so that the top hashtag appears first . by interpreting the meaning of these hashtags , where # wcw stands for women - crush wednesday , # mcm stands for man - crush monday , # smh stands for shake my head , and # oomf stands for one of my followers , we find that each individual community in general does not make much sense . this is because the hashtags we found are commonly used everyday terms on twitter . as such , the communities lack an overall theme . this is consistent with what we expected to see at larger community sizes . in this paper , we study the problem of evaluating community detection algorithms by introducing three new measurements , the community size , community coverage and user interest .
we propose a simple clique based algorithm , caa , as a baseline and compare the performance of four popular algorithms . caa discovers overlapping communities and is therefore a good fit for social media . our findings indicate that both infomap and caa are able to discover desirable communities which consist of users sharing similar interests , while many existing algorithms generate too small or too big communities . we plan to automate the user interest labeling by adopting topic modeling in our future work . we also plan to design new algorithms to maximize modularity while considering the size of the community . dunbar , r. i. m. do online social media cut through the constraints that limit the size of offline social networks ? in _ royal society open science 3.1 ( 2016 ) : 150292 ._ ding , z. et al . overlapping community detection based on network decomposition . in _ sci . rep . 6 , 24115 ; doi : 10.1038/srep24115 ( 2016 ) ._ harenberg , steve , gonzalo bello , l. gjeltema , stephen ranshous , jitendra harlalka , ramona seay , kanchana padmanabhan , and nagiza samatova . community detection in large - scale networks : a survey and empirical evaluation . in _ wiley interdisciplinary reviews : computational statistics 6.6 ( 2014 ) : 426 - 39 . _ yang , jaewon , and jure leskovec . defining and evaluating network communities based on ground - truth . in _ 2012 ieee 12th international conference on data mining ( 2012 ) . _ xie , jierui , stephen kelley , and boleslaw k. szymanski . overlapping community detection in networks . in _ acm computing surveys 45.4 ( 2013 ) : 1 - 35 ._ leskovec , jure , kevin j. lang , and michael mahoney . empirical comparison of algorithms for network community detection . in _ proceedings of the 19th international conference on world wide web - www 10 ( 2010 ) ._ fortunato , santo . community detection in graphs . in _ physics reports 486.3 - 5 ( 2010 ) : 75 - 174 . _ lazar , a. , d. abel , and t. vicsek . modularity measure of networks with overlapping communities . in _ epl ( europhysics letters ) 90.1 ( 2010 ) : 18001 . _ shen , huawei , xueqi cheng , kai cai , and mao - bin hu . detect overlapping and hierarchical community structure in networks . in _ physica a : statistical mechanics and its applications 388.8 ( 2009 ) : 1706 - 1712 ._ rosvall , m. , and c. t. bergstrom . `` maps of random walks on complex networks reveal community structure , '' in _ proceedings of the national academy of sciences 105.4 . _ blondel , vincent d. , jean - loup guillaume , renaud lambiotte , and etienne lefebvre . `` fast unfolding of communities in large networks . '' in _ journal of statistical mechanics : theory and experiment . _ fortunato , s. , and m. barthelemy . `` resolution limit in community detection . '' in _ proceedings of the national academy of sciences 104.1 ( 2006 ) : 36 - 41 ._ raghavan , usha nandini , réka albert , and soundar kumara . near linear time algorithm to detect community structures in large - scale networks . in _ physical review e 76.3 ( 2007 ) ._ java , akshay , xiaodan song , tim finin , and belle tseng . why we twitter . in _ proceedings of the 9th webkdd and 1st sna - kdd 2007 workshop on web mining and social network analysis - webkdd / sna - kdd 07 ( 2007 ) . _ newman , m. e. j. finding community structure in networks using the eigenvectors of matrices .
in _ physical review e 74.3 ( 2006 ) ._ palla , gergely , imre derényi , illés farkas , and tamás vicsek . `` uncovering the overlapping community structure of complex networks in nature and society . '' in _ nature 435.7043 ( 2005 ) : 814 - 18 . _ newman , m. e. j. fast algorithm for detecting community structure in networks . in _ physical review e 69.6 ( 2004 ) ._ newman , m.e.j . , girvan m. finding and evaluating community structure in networks . in _ phys . rev . e 69 , 026113 ( 2004 ) ._ a. clauset , m. e. j. newman , and c. moore . finding community structure in very large networks . in _ phys . rev . e 70 , 066111 ( 2004 ) ._ girvan , m. , and m. e. j. newman . community structure in social and biological networks . in _ proceedings of the national academy of sciences 99.12 ( 2002 ) : 7821 - 7826 . _
|
understanding community structure in social media is critical due to its broad applications such as friend recommendations , link predictions and collaborative filtering . however , there is no widely accepted definition of community in the literature . existing work uses structure related metrics such as modularity and function related metrics such as ground truth to measure the performance of community detection algorithms , while ignoring an important metric , the size of the community . prior work suggests that the size of a community with strong ties in social media should be limited to 150 . as we discovered in this paper , the majority of the communities obtained by many popular community detection algorithms are either very small or very large . too small communities do not have practical value and too large communities contain weak connections and are therefore not stable . in this paper , we compare various community detection algorithms considering the following metrics : size of the communities , coverage of the communities , extended modularity , triangle participation ratio , and user interest in the same community . we also propose a simple clique based algorithm for community detection as a baseline for the comparison . experimental results show that both our proposed algorithm and the well - accepted disjoint algorithm infomap perform well in all the metrics .
|
we are carrying out an experiment to measure the electric dipole moment ( edm ) of the electron to test the conservation of the time - reversal symmetry assumed in the fundamental theory that governs the electron . this edm search employs a solid - state technique using a gadolinium gallium garnet ( ggg , gd ) paramagnetic insulator at low temperatures . the experiment aims to measure a small magnetic field ( ft ) generated by the stark - induced magnetization in the solid through the unique edm coupling to an external electric field . in our current experimental setup , a superconducting quantum interference device ( squid ) is used as the magnetometer to monitor the magnetic signal , as the polarity of the applied electric field is modulated at a frequency of a few hz . at 10 mk , the spin alignment in the ggg sample is enhanced as the thermal fluctuation is reduced , and we would expect an induced magnetic flux of 17 with an applied electric field of 10 kv / cm , if the edm of each unpaired electron in the solid were as large as cm . here is the flux quantum . this edm - induced magnetic flux is large enough to be measured using a standard dc squid magnetometer operated at 4 k. with a typical squid transfer function of 2 v/ and a % flux coupling efficiency from the sample to the magnetometer , we expect a voltage signal output from the squid electronics to be about 340 nv , without further amplification . a typical data acquisition ( daq ) system with a 16-bit resolution ( i.e. , 0.3 mv resolution in a v input range ) is not sufficient to measure such a small voltage signal . in addition , the experiment requires simultaneous sampling of voltage signals from the magnetometer , high voltage monitors , and leakage current monitors , each with a very different voltage scale . to meet these stringent requirements , we developed an ultra - high precision 24-bit daq system with eight input channels for simultaneously sampling the analog voltage signals of interest . because the expected edm - induced magnetic signal to be measured by the squid sensor is very small , any possible signal contamination from other voltage monitoring channels through capacitive coupling is intolerable . in particular , the high voltage monitoring channels have very large voltages that are in phase with the magnetization signal . to address this problem , which has been plaguing the experiment since the very beginning , we took extra effort to design a custom daq system that has each of the analog input channels individually shielded in its own isolated heavy - duty radio frequency ( rf ) shielding enclosure , with galvanic isolation from the rest of the system . fiber optic communications to the master board are used for the control of the measurement sequences and the retrieval of the digitized data . with these features , the daq system is expected to minimize cross - talk between channels , reduce electromagnetic interference , and eliminate the possibility of unwanted currents flowing in ground loops that result in increased noise .
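as a rough cross - check of the numbers above , the expected squid output and the resolution of a generic 16-bit digitizer can be compared directly . the 1% coupling efficiency and the +/-10 v input span used in this sketch are assumptions chosen to be consistent with the quoted 340 nv and 0.3 mv figures :

```python
phi0_signal = 17e-6   # expected EDM-induced flux, in units of phi_0
transfer    = 2.0     # SQUID transfer function, V per phi_0
coupling    = 0.01    # assumed sample-to-SQUID flux coupling efficiency

v_signal = phi0_signal * coupling * transfer
print(f"expected SQUID output: {v_signal*1e9:.0f} nV")   # -> 340 nV

# resolution of a 16-bit DAQ over an assumed +/-10 V input span
lsb = 20.0 / 2**16
print(f"16-bit LSB: {lsb*1e3:.2f} mV")   # -> 0.31 mV, roughly 1000x too coarse
```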
finally , to reduce the random noise and reach the desired edm sensitivity , we need to repeat the edm experiment over many polarity modulation cycles and average the accumulated data sets . therefore , ensuring that the daq system has no sources of non - gaussian noise at the level of the required voltage sensitivity is essential for the success of the edm experiment . a system capable of such performance requirements is currently not commercially available . our custom daq system can be used by other experiments that require simultaneous monitoring of several voltage sources . the system can accommodate both very low - level and large - amplitude signals with the large dynamic range that comes with a 24-bit resolution . more importantly , much care is given to ensure good rejection of spurious couplings between channels , through individual shielding of each input channel with a dedicated adc board and reduction of ground loops through optical communications . we explain the hardware and software of the daq system in sec . [ 2 ] and evaluate its overall performance in sec . [ 3 ] . as shown in fig . [ daq ] , the daq system has a master board to control eight independent modular adc boards , each containing a 24-bit adc chip and supporting electronic components . the front - end of the adc boards connects to the various analog voltage sources in the experiment that need to be measured , digitized , and recorded . the adc boards can be placed as close as possible to the experiment , with long optical fibers transmitting the digitized signals back to the master board for temporary data buffering . the master board is equipped with an fpga chip that can be programmed for specific tasks and daq sequences . the communication between the master board and the adc boards is implemented through serial fiber optic data links in both directions . the data is sent to the daq computer at specified intervals . any pc running a matlab program can be used as the daq computer to interface with the master board through an optically coupled ethernet port . the device acquires an ip address and thus can be remotely accessed through any computer connected to the internet . this daq computer provides the overall control of data acquisition , data storage and analysis . the daq system is triggered through an external trigger source ( with ttl signals ) which can be controlled independently by the daq computer . in addition to the daq function , we have added to the system a precision 20-bit digital - to - analog converter ( dac ) board , to supply a low drift analog voltage source . the dac board will be used to drive a high voltage amplifier in the next generation edm experiment . the individual components of the daq system will be discussed in detail in the following . block diagram of the daq system . ] the adc board uses a differential - input , 24-bit delta - sigma adc chip ( ltc2440 ) made by linear technology . since a high dynamic range with a low noise level is the essential feature of this custom system , we paid extra attention to the noise from different parts of the system . the intrinsic noise of the adc chip is estimated to be 200 n when sampled at 6.9 hz ( with lots of internal oversampling ) . the sample rate can be increased up to 3.5 khz at the cost of larger noise .
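to make the rate / noise trade - off concrete , here is a back - of - the - envelope sketch anchored to the 6.9 hz figure . both scalings used below ( output rate proportional to 1/osr , rms noise proportional to 1/sqrt(osr ) ) are assumptions for illustration , and the noise scaling is expected to hold only down to moderate oversampling ratios before modulator quantization noise takes over :

```python
import math

osr_ref, noise_ref, rate_ref = 32768, 200e-9, 6.9  # quoted anchor point

def estimate(osr):
    """Assumed scalings: rate ~ 1/OSR, rms noise ~ 1/sqrt(OSR)."""
    rate = rate_ref * osr_ref / osr
    noise = noise_ref * math.sqrt(osr_ref / osr)
    return rate, noise

for osr in (32768, 4096, 256):
    rate, noise = estimate(osr)
    print(f"OSR {osr:5d}: ~{rate:7.1f} Hz, ~{noise*1e9:5.0f} nV rms")
```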
to preserve this noise figure , we implement the analog front - end with low - noise , precision operational amplifiers ( lt1007 ) that have a high common - mode rejection . the voltage input can be connected single - endedly or differentially . the low degree of voltage noise and fluctuation is further ensured by the use of a very low - noise voltage reference ( adr445 ) with an adjustment to optimize the common - mode rejection ratio ( cmrr ) . simplified schematic diagram of the low noise front - end of the adc board . ] the schematic diagram of the analog front - end is shown in fig . [ analog ] . the input stage is comprised of a low pass filter ( r , c , and r , c for each differential input ) to remove high frequency noise , and a unity gain buffer ( u and u ) to provide a high input impedance of 7 g to prevent any significant loading and perturbation of the voltage source . the resistor r provides a bypass ground path for the operational amplifier ( op - amp ) bias current when the source ground is not common to the adc box . the attenuation stage ( r-r ) linearly attenuates the v input signal , the voltage swing from a typical physics experiment , to a .5 v signal that is compatible with the input voltage range of the adc chip ( ltc2440 ) . the input stage is followed by the buffer stage that is comprised of an anti - aliasing filter ( r , c , and r , c for each differential input ) and a unity gain buffer ( u and u ) . the anti - aliasing filter attenuates any signal with frequencies above half of the sampling frequency to prevent high frequency interference from being shifted into the frequency band of interest during sampling . the high impedance of the buffer prevents over - loading of the attenuation stage . finally , the feedback stage ensures that the differential voltage is centered in the adc input range around 0 v. the cmrr adjustment is made with potentiometer r . the overall gain accuracy is approximately 0.3 % . the adc control through optical communications is implemented in a complex programmable logic device ( cpld ) on the same adc board . the adc sample clock signal is recovered from the encoded data received over the serial optical interface ( see sec . 2.2 ) with a phase - locked loop ( pll ) . each voltage input channel has a dedicated adc board , which is mounted in its own metal rf shielding enclosure . fig . [ adc ] shows the photo of an assembled adc board in the metal box , which is 12 cm in size . this adc board is powered by a clean 12 vdc car battery supply at 110 ma to eliminate power line frequency noise and the switching regulator rf interference present in most modern power supplies . the required v and v supplies for the internal circuitry are generated on - board . note that the enclosure is connected to the zero voltage reference set by the battery . the bnc input shield is isolated from the chassis to prevent ground loops .
assembled adc board inside a heavy - duty rf shielding enclosure that is 12 cm in size . the analog voltage signal is input into the bnc connector on the left , and the 12 vdc power is connected through the black and red banana connectors on the right . two toslink optical modules ( transmitter and receiver ) are also on the right ., width=307 ] the serial optical interface between the master board and each adc board is implemented with inexpensive toslink optical modules and cables , commonly used for digital audio . each adc board has its own pair of optical fibers that can be connected to the master board . this optical interface is used to provide galvanic isolation of the adc boards from each other and from the master board . this feature significantly reduces the possibility of ground loop formations and spurious noise pickups . serial communication is carried out with a custom data encoding scheme that ensures a 50 % duty cycle , allowing its use with both the optical transmitter and receiver . the data encoding scheme also embeds the clock signal in the transmitted data so that the synchronized clock signal ( sent by the master board ) can be easily recovered using the low cost pll chip on the individual adc board . block diagram of the master board . ] the master board controls the daq sequence , handles communications with the adc boards , implements packetization of the digitized data from the adcs , and provides ethernet connectivity with the daq computer . fig . [ main ] shows a functional block diagram of the master board . all major functions are contained within a spartan-3e field programmable gate array ( fpga ) made by xilinx . the fpga parses ethernet packets from , and transmits ethernet packets to , the daq computer . ethernet connectivity between the computer and the fpga is implemented with the use of the lantronix xport embedded ethernet device server that provides an rs-232 serial port interface connected to the fpga . data is transferred between the device server and the fpga by way of a universal asynchronous receiver / transmitter ( uart ) that resides within the fpga . all uart functions within the fpga operate on the 14.7456 mhz clock oscillator , from which all standard baud rates can be derived . the ethernet downlink from the daq computer contains data words that control the oversampling rate ( osr ) of the adc . the fpga transmits the osr value from the ethernet downlink to every adc board over the optical downlink on every rising edge of the trigger input . every adc samples its analog input upon receiving the osr value .
data transactions between the master board and the adc board occur at a baud rate of 625 kbps , which is derived from the 10 mhz clock oscillator . the cpld on the adc board transmits the current sample over the optical uplink each time an osr value is received . the master board receives the adc sample from each adc board over the optical uplink . once parsed , all adc samples are multiplexed , along with a time stamp , into a first - in - first - out ( fifo ) buffer that is implemented in the fpga . these data frames are stored in the fifo until a request for data is made by the daq computer through the daq software ( such as matlab ) , at which point they are sent to the device server by way of the rs-232 interface . finally , the device server transmits the data frames , which contain a time stamp and a sample from every adc channel , over the ethernet uplink to the daq computer . to expand the capability of the system , the master board also contains an additional optical interface ( dac in and dac out ) and a general purpose output ( gpo1 ) . the dac interface , which may be used to control a dac , is identical to the adc board interfaces ; it uses the same optical connectors . the gpo1 output is capable of driving a 50 Ω load , making it useful for triggering other devices in sync with the daq system . simplified schematic diagram of the precision dac board . ] we have also built a precision dac board which can be controlled by the master board in the same way as the adc board . a simplified schematic of the dac board is shown in the figure . the serial optical interface with the master board is implemented with the same optical modules ( torx147 and totx147 ) and optical fibers as the adc board . the same pll as on the adc board is used to recover the sample clock signal . the dac function is accomplished with a 20-bit analog devices dac chip ( ad5791 ) and supporting components with low noise and low temperature drift . the dac chip offers a total harmonic distortion of -97 db , a low noise of 7.5 nv/√hz , and a low temperature drift of 0.05 ppm/°c . this dac chip is connected in a force - sense reference configuration that uses the low - noise op - amp ( ad8675 ) for the reference buffer and the ultra - low drift voltage references ( lt1236 ) to apply a very stable ±10 v voltage reference , in order to minimize errors caused by varying reference currents . this reference circuit is very important , as the dac output is derived from the voltage reference inputs . the force - sense reference buffers are required to accurately sense and compensate any voltage drop on the reference inputs . this reference configuration provides the best linearity performance of the dac chip . the dac output is buffered using another ad8675 op - amp in a unity - gain configuration . because the output impedance of the dac chip is 3.4 kΩ , the output buffer is required for driving low resistive loads . the analog power ( lt1962 and lt1964 ) and digital power ( lt1764a ) are isolated using the digital isolator ( adum1401 ) , which , along with inductor l , prevents digital noise from spreading to the analog supply . this dac board is operated by a bipolar dc power supply . the analog output waveform from this custom dac board is created by the daq computer , and the output range is ±10 v .
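returning to the data path described at the start of this section : the exact frame layout used on the fpga is not spelled out in the text , but purely as an illustration of the multiplexing of samples with a time stamp , the sketch below packs one frame into a fixed binary layout . the field widths and byte order are assumptions ; the channel count of eight matches this system .

```python
import struct

NUM_CHANNELS = 8  # one sample per ADC board, as in this system

def pack_frame(timestamp, samples):
    # hypothetical layout: one unsigned 32-bit time stamp followed by one
    # signed 32-bit word per channel, big-endian
    assert len(samples) == NUM_CHANNELS
    return struct.pack(">I%di" % NUM_CHANNELS, timestamp, *samples)

def unpack_frame(frame):
    fields = struct.unpack(">I%di" % NUM_CHANNELS, frame)
    return fields[0], list(fields[1:])

ts, data = unpack_frame(pack_frame(42, list(range(NUM_CHANNELS))))
assert ts == 42 and data == list(range(NUM_CHANNELS))
```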
the daq software , written in matlab , provides data acquisition control , data storage , and data analysis . ethernet connectivity with the master board is achieved with the free tcp / udp / ip toolbox . the function is implemented as a mex - file , which allows one to interface c subroutines ( dynamic link libraries ) to matlab . this daq software collects data from each voltage monitoring channel at a fixed trigger rate and stores the data to disk on the daq computer . functions for data analysis , such as numerical averaging or data filtering , are implemented by the user . evaluation of the performance of this custom daq system is necessary prior to its use in the edm experiment . the significant characteristics that need to be evaluated include the intrinsic root - mean - squared ( rms ) noise ( without load ) , the cross - talk between channels , the settling time , the cmrr , the power supply rejection ratio ( psrr ) , and the linearity of the daq system . the intrinsic rms noise of the daq system stems from noise of the adc chips and supporting circuitry . on the level of the adc chip , we expect the intrinsic rms noise to vary with the osr value , which defines the effective bandwidth of the on - chip digital filter , and with the voltage level , in the following ways : the rms noise increases by approximately √2 each time the osr decreases by a factor of 2 , from osr=32768 down to osr=256 . the exception is that the rms noise at osr=128 and osr=64 has additional contributions from the internal modulator quantization noise ( see ref . ) . the conversion between the osr and the sampling rate can be found in table [ osr ] , which lists the maximum bandwidths and enobs at all available osr values . histograms of intrinsic rms noise of the daq system at osr=16384 ( a ) and osr=128 ( b ) . the analog input is terminated in these measurements . the red lines are gaussian fits ; the standard deviation of each fit gives the rms noise of the corresponding histogram . ] to assess the rms noise at different osr values , we collected and analyzed a large number of data from the daq system with the analog input terminated on the adc board under test . fig . [ histogram ] shows the histograms of a typical set of data with a total of 30,000 samples , collected at osr=16384 and osr=128 with sampling rates of 15 hz and 1.5 khz respectively . the voltage distributions can be fitted by a gaussian function , with the standard deviation corresponding to an intrinsic rms noise of 1.39 μv and 14.7 μv at osr=16384 and 128 respectively . these noise figures agree quite well with the noise values from the adc chip specification . the effective number of bits ( enob ) at osr=16384 and 128 is measured to be 23.8 and 20.4 respectively , just slightly less than the specified values of 24.4 and 20 listed on the data sheet of the ltc2440 adc chip . the system as a whole does not introduce significantly more noise on top of the intrinsic noise of the adc chip .
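the rms - noise extraction just described ( a gaussian fit to the histogram of terminated - input samples , with the fitted standard deviation taken as the rms noise ) can be reproduced with standard tools . a minimal sketch , using synthetic random data in place of the real adc record :

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# synthetic stand-in for 30,000 terminated-input samples (in microvolts)
samples = np.random.normal(0.0, 1.39, size=30000)

counts, edges = np.histogram(samples, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])
p0 = [counts.max(), samples.mean(), samples.std()]
popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
rms_noise = abs(popt[2])  # fitted standard deviation = intrinsic rms noise
```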
power spectral density ( psd ) spectra of the intrinsic rms noise of the daq system at osr=16384 ( a ) and 128 ( b ) . the vertical axis is in log scale . ] in addition , the power spectral density ( psd ) spectra of the noise measurement ( shown in fig . [ fft ] ) do not show any observable peaks , in particular at the ac power supply frequency of 60 hz and its higher harmonics . this demonstrates that the system as a whole has no significant ground - loop pickup above the intrinsic noise level . more importantly , there are no additional sources of spurious noise from trigger or digital switching that would give rise to non - gaussian noise , which cannot be suppressed by taking longer averages . the base level of the noise power spectrum of the whole daq system is measured to be 0.21 μv/√hz at 1 hz . table [ osr ] shows a comprehensive list of the maximum bandwidths ( equal to half of the maximum sampling rate ) and the measured enobs at all available osr values . note that the maximum sampling rate at osr=64 is limited by the maximum xport baud rate of 921600 bps . intrinsic rms noise of the daq system as a function of the input voltage . the red curve shows the model fit . ] the intrinsic rms noise of the daq system also depends on the voltage level of the analog input . the procedure for this noise test is the same as in the preceding evaluation , except that a test voltage source is connected to the analog input of the adc board . for low - noise performance , the test voltage source is made of 1.5 v batteries ( energizer industrial aa ) connected in series . in practice , the battery pack has its own intrinsic noise , which would be added to that of the daq system , so that the noise test might not correctly reflect the intrinsic noise of the daq system . we used a normalized covariance computation to further test whether the noise of the battery is small enough to be negligible . in the test , we utilized two adc boards to simultaneously sample the voltage output from the same battery pack , with a sampling rate of 15 hz and an osr of 16384 . we then estimated the extent to which the fluctuations of the data sets collected from the two adc boards are correlated . a strong correlation would indicate that the noise contribution from the battery pack is large compared to that from the daq system . the analysis shows that the normalized covariance is less than 0.1 , signifying weak correlation and verifying that the battery noise is negligible in these noise measurements . the measured rms noise of the daq system as a function of the input voltage level is plotted in fig . [ input ] . the rms noise increases as the input voltage rises . the error bars are statistical and correspond to one standard deviation . we fit the functional dependence of the rms noise with the model σ(v) = sqrt( σ0^2 + ( α v )^2 ) , where σ0 characterizes the intrinsic rms noise without the analog input , and α characterizes the dependence of the noise on the input voltage level . the fit model was chosen to include , in quadrature , the independent contributions from the psd of the noise without load and the psd of the noise that varies with the voltage input . the result shows that the rms noise without load ( at v=0 ) is ( 1.33 ± 0.44 ) μv , matching the result of the preceding noise test within the error bar .
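the normalized - covariance test described above is just the correlation coefficient between the two simultaneously sampled records ; a short sketch of the computation :

```python
import numpy as np

def normalized_covariance(a, b):
    # covariance of the two records normalized by the product of their
    # standard deviations (the pearson correlation coefficient); a value
    # near 0 means the shared battery noise is negligible compared with
    # the independent noise of the two ADC boards
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.cov(a, b, ddof=1)[0, 1] / (a.std(ddof=1) * b.std(ddof=1))
```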
channel cross - talk measurement . ( a ) digitized output of the victim channel , averaged over 41548 cycles of the square waveform . ( b ) psd spectra of the aggressor channel ( blue curve ) and the victim channel ( red curve ) in log - log scale . ] cross - talk between individual channels due to any feedthrough coupling , such as mutual capacitance coupling , is one of the primary systematic effects in a daq system . to measure the level of channel cross - talk , a 1.5 hz square wave with an amplitude of 19 v peak - to - peak ( vpp ) , at 95 % of the full - scale input range , was applied to one adc board , which served as the aggressor channel . at the same time , the adjacent adc board , with its analog input terminated , served as the victim channel and was sampled with an osr of 512 and a sampling rate of 450 hz . the digitized data from the victim channel are then averaged over the cycle of the square waveform of the aggressor channel to reduce the random noise , thereby revealing any small contribution of the cross - talk . fig . [ cross](a ) displays the digitized signal averaged over 41548 cycles from the victim channel . the absence of any square waveform indicates negligible cross - talk effects . the psd spectra ( fig . [ cross](b ) ) of both the aggressor and the victim channels also show no measurable correlations between the two channels . in the aggressor channel , peaks at 1.5 hz and its harmonics are evident , whereas in the victim channel no corresponding peaks are found at these frequencies . in conclusion , our custom daq system has a channel cross - talk below our measurement limit , much lower than that of any commercially available system . the reduction of the systematic effect from the channel cross - talk ( in particular between the hv - monitoring and the squid - monitoring channels ) is an indispensable requirement to accomplish the solid - state edm experiment .
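the cycle averaging used above to pull a possible cross - talk square wave out of the random noise can be sketched as follows ; it assumes the victim - channel record is phase - locked to the aggressor waveform and contains an integer number of samples per cycle ( e.g. 450 hz sampling of a 1.5 hz square wave gives 300 samples per cycle ) .

```python
import numpy as np

def cycle_average(samples, samples_per_cycle):
    # fold the record into complete cycles and average across cycles;
    # uncorrelated noise shrinks as 1/sqrt(n_cycles) while any coherent
    # cross-talk waveform survives the averaging
    n_cycles = len(samples) // samples_per_cycle
    folded = np.asarray(samples[:n_cycles * samples_per_cycle], dtype=float)
    return folded.reshape(n_cycles, samples_per_cycle).mean(axis=0)
```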
settling time measurement . ( a ) a step function input from 9.5 v to 0 v , and ( c ) a step function from -9.5 v to 0 v. ( b ) and ( d ) are zoomed - in voltage plots around the region of the voltage transition in ( a ) and ( c ) . it takes 3 samples for the digitized output to settle to 22-bit resolution ( see ( a ) , ( c ) ) , and 760 samples to settle to 24-bit resolution ( see ( b ) , ( d ) ) . ] the settling time is defined as the elapsed time during which the output of the daq system settles to a desired accuracy . for an accurate edm measurement , the settling time should be much shorter than the period with which the polarity of the high voltage applied to the ggg samples is modulated . to measure the settling time of the daq system , we supplied a step function as the analog voltage input to the adc board under test . the step function should settle much faster than the daq system itself ; we therefore employed a photomos relay ( aqv22 ) as the test pulser to generate a nearly instantaneous step function , with a high - speed switching time of around 0.03 ms . two types of step input generated from the pulser were applied to the adc board ( see ( a ) and ( c ) in fig . [ settle ] ) : one step ( a ) decreases from 9.5 v to zero and the other step ( c ) increases from -9.5 v to zero , with a frequency of 3 mhz ( millihertz ) . using the adc board under test , we sampled the step function for 200 cycles with an osr of 16384 and a sampling rate of 15 hz . the averaged results are shown in fig . [ settle ] . upon the voltage switch , the digitized output settles to 22-bit resolution within three samples , corresponding to a settling time of 200 ms ( fig . [ settle](a ) and ( c ) ) . to settle to 24-bit resolution , it takes 760 samples and thus a much longer time of around 51 s ( fig . [ settle](b ) and ( d ) ) . common - mode rejection test . ( a ) digitized output averaged over 7199 cycles . ( b ) the psd spectrum of the output of 10,000 samples . the high and low analog inputs are connected to a common voltage source . ] before the analog signal is sent to the adc to be digitized , some undesirable common - mode noise ( picked up from ambient sources ) is always present on both the high and low input wires of the adc board , equal in both phase and amplitude . this common - mode noise is quite often generated by capacitive couplings between the wires and ground . for the best performance , the daq system must suppress the common - mode noise sufficiently , so as not to add additional noise , while at the same time not distorting the voltage input of interest . to measure the cmrr , we connect both the high and low inputs of the adc board under test to a common voltage source , a 1.5 hz square waveform with a 4 v amplitude . data were collected at osr=16384 with a sampling rate of 15 hz . fig . [ common](a ) shows the digitized data averaged over 7199 cycles . notice that even with a square - wave input , the averaged output waveform is distorted , because of the discrepancies in phase and amplitude between the high and low analog inputs that remain after the common - mode rejection adjustment . the psd spectrum is shown in fig . [ common](b ) , where a peak at 1.5 hz ( and at 4.5 hz ) is measurable , indicating a finite residual common - mode coupling . by comparing the amplitude of the output , 5.68 μv , to the applied waveform amplitude of 4 v , the cmrr is estimated to be about 1 ppm , which is good enough for the edm experiment . even with the most careful choice , it is inevitable that the dc power supply used to operate the adc board has some degree of noise , such as voltage ripple , that can affect the performance of the daq system . this unwanted noise from the power supply can couple parasitically through the circuitry to the analog voltage input , adding undesirable noise to the digitized outputs . we quantify the ability of the daq system to reject power supply noise by a ) mixing a 1 v , 1.5 hz square wave with the 12 vdc to create a rippled supply voltage , b ) terminating the input of the adc board , and c ) collecting digitized data with an osr of 16384 at a sampling rate of 15 hz . the resulting data , averaged over 4798 cycles , are plotted in fig . [ power](a ) , with the psd spectrum plotted in fig . [ power](b ) . the time trace of the averaged data does not show any square wave corresponding to the power supply ripple . no observable peak at the frequency of the ripple is found in the psd spectrum either . in summary , the psrr of this daq system is quite high , and the noise from the power supply is negligible even with a bad power supply with large ripples . power supply noise measurement . ( a ) digitized data averaged over 4798 cycles . ( b ) the psd spectrum of the output ( with a terminated input ) of 10,000 samples . ]
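both the cmrr estimate and the psrr check above come down to reading a residual amplitude at the stimulus frequency out of the digitized record and comparing it with the applied amplitude . a sketch that takes the amplitude from the fft bin nearest the stimulus frequency ( for a square - wave stimulus this picks out the fundamental , which is adequate for an order - of - magnitude estimate ) :

```python
import numpy as np

def tone_amplitude(samples, fs_hz, f0_hz):
    # amplitude of the spectral component nearest f0 in a real-valued record
    x = np.asarray(samples, dtype=float)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    k = int(np.argmin(np.abs(freqs - f0_hz)))
    return 2.0 * np.abs(spec[k]) / len(x)

def rejection_ppm(output_samples, fs_hz, f0_hz, input_amplitude_v):
    # residual feedthrough relative to the applied stimulus amplitude
    residual = tone_amplitude(output_samples, fs_hz, f0_hz)
    return 1e6 * residual / input_amplitude_v
```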
the linearity of the daq system characterizes how accurately a digitized result reflects the analog input . measurement of the linearity of a high - resolution daq system is especially difficult due to the lack of a good calibration source . we tried several commercial voltage sources with sinusoidal waveform output , but found that their total harmonic distortion and phase errors are too large for them to be useful in this test . instead , we used the newly developed low - distortion , 20-bit precision dac board ( described in sec . [ sec : dac ] ) as the voltage source . an input 0.01 hz triangular waveform with a 19 v amplitude , generated by the dac board , is fed into the adc board under test and sampled at osr=4096 with a sampling rate of 50 hz . we performed a least - squares line fit on one cycle of the digitized output , separating the ramping - down half - cycle from the ramping - up half - cycle . the non - linearity is quantified by the residual deviation from the ideal triangular waveform . the non - linearity of the daq system as a function of the input voltage . ( a ) with a 0.01 hz triangular waveform . ( b ) with a 0.04 hz triangular waveform . the maximum non - linearity of the daq system is at the ppm level over the full input range . ] fig . [ linearity](a ) shows the non - linearity ( in ppm ) as a function of the input voltage . the line fit has goodness - of - fit values of 0.94 and 0.75 on the ramp - down and ramp - up halves respectively . the maximum non - linearity of the daq system is estimated to be at the few - ppm level over the full input range ( and smaller within a reduced voltage range ) . this is good enough for the edm experiment . the non - linearity of this daq system arises mainly from the ltc2440 adc chip , with some small contributions from the op - amps and resistors in the input stage and buffer stage of the adc board ( sec . [ sec : adc ] ) . in practice , this measured non - linearity is the combined effect of the dac board and the daq system ; however , without an independently calibrated voltage source we cannot isolate the non - linearity of the daq system alone . nevertheless , the measured non - linearity is already close to the specification of the adc chip , which implies that errors from the dac board are probably insignificant . the small 1 ppm discrepancy between the ramp - up and ramp - down data sets ( fig . [ linearity](a ) ) is probably a result of a temperature change lagging some time behind the voltage change . increasing the frequency of the triangular waveform reduces this discrepancy . fig . [ linearity](b ) shows the non - linearity of the daq system measured with the 0.04 hz triangular waveform .
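the linearity analysis ( a least - squares line fit to each ramp half of the triangular wave , with residuals expressed in ppm of the full input range ) can be sketched as below ; the 20 v full scale corresponds to the ±10 v input range assumed earlier .

```python
import numpy as np

def nonlinearity_ppm(times, volts, full_scale_v=20.0):
    # fit v = a*t + b to one ramp half of the triangular wave and return
    # the residual deviations in parts per million of the full scale
    t = np.asarray(times, dtype=float)
    v = np.asarray(volts, dtype=float)
    a, b = np.polyfit(t, v, 1)
    return 1e6 * (v - (a * t + b)) / full_scale_v
```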
we have developed a high - resolution 24-bit daq system with special attention to noise performance . this daq system is currently being used for the solid - state electron edm experiment . in this paper , we have shown detailed characterizations of the relevant parameters of the daq system . the measured enob can be as high as 24.1 when sampled at 7 hz . the edm measurement requires a higher sampling rate , at which the daq system still provides an enob of 21.1 . the most important performance requirement is the ultra - low channel cross - talk , which reduces the leading systematic effect observed in our edm experiment . using this custom daq system , we have not only demonstrated the feasibility of the solid - state method for the electron edm search , but also obtained the first background - free limit on the electron edm , on the order of 10^-25 e cm . this work was supported by nsf grants 0457219 and 0758018 . we also acknowledge the support of the iu center for spacetime symmetries . s. k. lamoreaux , physical review a * 66 * , 022109 ( 2002 ) . c .- y . liu and s. k. lamoreaux , modern physics letters a * 19 * , 1235 ( 2004 ) . s. k. mitra , digital signal processing : a computer - based approach ( mcgraw - hill , new york , 2nd edition , 2001 ) . y. j. kim , c .- y . liu , s. k. lamoreaux , and g. reddy , experimental search for the electron electric dipole moment using solid state techniques , proceedings of the 24th international nuclear physics conference , vancouver , canada , 2010 , journal of physics : conference series , accepted for publication . y. j. kim , c .- y . liu , s. k. lamoreaux , g. visser , b. kunkler , a. v. matlashov , and t. g. reddy , new experimental limit on the electric dipole moment of the electron in a paramagnetic insulator , physical review d , submitted .
|
we have built a high - precision ( 24-bit ) data acquisition ( daq ) system with eight simultaneously sampling input channels for the measurement of the electric dipole moment ( edm ) of the electron . the daq system consists of two main components : a master board and eight individual analog - to - digital converter ( adc ) boards . this custom daq system provides galvanic isolation , via fiber optic communication , between the master board and each adc board , to reduce the possibility of ground - loop pickup . in addition , each adc board is enclosed in its own heavy - duty radio frequency shielding enclosure and powered by dc batteries , to attain ultra - low levels of channel cross - talk . in this paper , we describe the implementation of the daq system and scrutinize its performance .
|
this paper is based on a talk given by the last author at the workshop " phylogenetic models : linguistics , computation , and biology " organized by robert berwick at the csail department of mit in may 2016 . the reconstruction of phylogenetic trees of language families is a crucial problem in the field of historical linguistics . the construction of an accurate family tree for the indo - european languages accompanied and originally motivated the development of historical linguistics , and has been a focus of attention for linguists for the span of two centuries . in recent years , historical linguistics has seen a new influx of mathematical and computational methods , originally developed in the context of mathematical biology to deal with species phylogenetic trees ; see for instance , , , , , . a considerable amount of controversy arose recently in relation to the accuracy and effectiveness of these methods and the related problem of phylogenetic inference . in particular , claims regarding the phylogenetic tree of the indo - european languages made in were variously criticized by historical linguists ; see the detailed discussion in . most of the literature dealing with computational phylogenetic trees in the context of linguistics has focused on the use of lexical data , in the form of swadesh lists of words , and on the encoding as binary data of the counting of cognate words ; see for instance the articles in . other reconstructions used phonetic data and sound change , as in , or a combination of several types of linguistic data ( referred to as " characters " ) , including phonetic , lexical , and morphological properties , as in , . a different approach to linguistic phylogenetic reconstruction , based on syntactic parameters , was developed recently in , , , , . this method is known as the parametric comparison method ( pcm ) . a coding theory perspective on the pcm was given in . the notion of syntactic parameters arises in generative linguistics , within the principles and parameters model developed by chomsky in , . a more expository account of syntactic parameters is given in . syntactic parameters are conceived as binary variables that express syntactic features of natural languages . the notion of syntactic parameters has undergone changes , reflecting changes in the modeling of generative grammar : for a recent overview of the parametric modeling of morphosyntactic features , see . a main open problem in the parametric approach to comparative generative grammar is understanding the space of syntactic parameters , identifying dependence relations between parameters , and possibly identifying a fundamental set of such variables that would represent a good system of coordinates for the space of languages . recently , the use of mathematical methods for the study of the space of syntactic parameters of world languages was proposed in , , . at present , the only available extensive database of binary parameters describing syntactic features is the sswl database , which collects data on 115 parameters over 253 world languages . it is debatable whether the binary variables collected in sswl represent fundamental syntactic parameters : surface orders , for instance , are often confounded with the deep underlying parameter values . moreover , sswl does not record any dependence relations between parameters .
different data on syntactic parameters have been used in , , with dependence relations taken into account , and more data are being collected by these authors and will hopefully be available soon . for the purpose of this paper , we will use the terminology " syntactic parameters " loosely , for any collection of binary variables describing syntactic features of natural languages . we work with the sswl data simply because it is presently the most extensive available database of syntactic structures . in section [ phylipsec ] of this paper we show that just using the hamming distance between vectors of binary variables extracted from the sswl data , together with the neighbor - joining method for phylogenetic inference , gives very poor results as far as linguistic phylogenetic trees are concerned . we identify several different sources of problems , some inherent to the sswl data , some to the inference methodology , and some more generally related to the use of syntactic parameters for phylogenetic linguistics . in section [ agsec ] we review the method of phylogenetic algebraic geometry of and the main results of and on phylogenetic ideals and phylogenetic invariants that we need for applications to the analysis of syntactic phylogenetic trees . in section [ agtreesec ] we show how one can use techniques from phylogenetic algebraic geometry to test the reliability of syntactic parameter data for phylogenetic linguistics , by using known phylogenetic trees that are considered reliable , and to test the reliability of candidate phylogenetic trees assuming a certain degree of reliability of the syntactic data . in section [ geomsec ] we argue that dependencies between the syntactic variables recorded in the sswl database should be taken into consideration in order to improve the reliability of these data for phylogenetic reconstruction . in particular , the presence of geometry / topology in this set of data , and the presence of different degrees of recoverability of some of the sswl syntactic variables in kanerva network tests , indicate that an appropriately weighted use of the data that accounts for these phenomena may improve the results . the first author is supported by a summer undergraduate research fellowship at caltech . part of this work was performed as part of the activities of the last author's mathematical and computational linguistics lab and cs101/ma191 class at caltech . the last author is partially supported by nsf grants dms-1201512 and phy-1205440 . we discuss here the problems that occur in a naive analysis of the sswl database using the phylogenetic tree software phylip . we identify the main types of errors that occur and the possible sources of the problems . we will discuss in [ agsec ] how one can eliminate some of the problems and obtain more accurate phylogenetic trees from sswl data using different methods . we acquired the syntactic language data from the sswl database with two different methods : one consisting of downloading the data as a _ .csv _ file directly , with the results separated in the format _ " language " _ , and one achieved by scraping the data into a _ .json _ file , formatted as a list of lists of binary variables , in the format _ ' language ' : ' parameters ' : ' values ' _ . this was done with a python script ` data_obtainer.py ` which went through all of sswl and dumped the data as desired . the sswl data , stored in a more convenient .json file format produced by the first author , are available as the file full_langs.json , which can be downloaded at the url address .
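to make the data format concrete , here is a small sketch of how a scraped .json file of this kind can be loaded and turned into per - language binary vectors . the exact structure of full_langs.json is an assumption based on the format quoted above ( ' language ' : ' parameters ' : ' values ' ) , so the key layout in the comment may differ from the authors' actual file .

```python
import json

def load_parameter_vectors(path, parameter_names):
    # returns {language: [value, ...]} with one entry per parameter name;
    # parameters not mapped for a language are recorded as None, which is
    # exactly the missing-data situation discussed in the text
    with open(path) as f:
        data = json.load(f)  # assumed structure: {language: {parameter: 0/1}}
    return {lang: [params.get(name) for name in parameter_names]
            for lang, params in data.items()}
```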
we created , for each language in the database , a vector of binary variables representing the syntactic traits of that language as recorded in the sswl database , with value 1 indicating that the language possesses the respective trait , and value 0 indicating that the language does not possess the trait . one of the main sources of problems in the use of sswl data arises already at this stage : not all languages in the database have the same parameters mapped . the lack of information about a certain number of parameters for certain languages alters the counting of the hamming distances , as it requires a choice of normalization of the string length , with additional entries added to represent the lack of information . this clearly generates problems , as the inconsistency produces mistakes in the counting of hamming distances and in the tree reconstruction . in [ treesec ] we will illustrate specific examples where this problem occurs . the hamming distance algorithm hf.py takes two equal - length binary sequences , throwing an error if this length requirement is violated , and returns the sum of all bitwise xors between them , that is , the total number of differences . in this way , we construct with ` distance_matrix_checker.py ` the hamming distance matrix , whose entries are the hamming distances between the vectors of binary syntactic parameters of each pair of languages . for example , germanic languages on average have normalized hamming distance in the range - . old saxon and old english have a hamming distance of from german , while swiss german has distance . modern english has below average differences at . while these distances may appear reasonable , one can easily detect another major source of problems in the use of sswl data for phylogenetic reconstruction . many languages belonging to very different families have small hamming distance : for example , the indo - european hindi ( 60 % mapped in sswl ) and the sino - tibetan mandarin ( 87 % mapped in sswl ) receive a normalized distance of . this is certainly in large part due to the different levels of accuracy with which the two languages are mapped in the same database . however , one can also observe syntactic similarities between languages belonging to different families which are not due to poor recording of the respective data , but are a genuine consequence of the syntactic properties being described . this matrix of hamming distances was then given as input to the phylip package for phylogenetic tree reconstruction , which is widely used in mathematical biology . given the hamming distance matrix , the phylip software provides several options for tree construction from distance matrix data : additive tree model , ultrametric model , neighbor - joining method , and average linkage clustering ( upgma ) .
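a minimal reconstruction of this pipeline , from the description of hf.py and distance_matrix_checker.py above to the neighbor - joining step , might look as follows ; it is not the authors' actual code , it uses biopython's tree constructor in place of phylip , and the three - language parameter vectors are placeholders .

```python
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

def hamming(a, b):
    # total number of bitwise differences between two equal-length vectors
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(x ^ y for x, y in zip(a, b))

vectors = {"italian": [1, 0, 1, 1], "spanish": [1, 0, 0, 1],
           "french": [1, 1, 0, 1]}  # placeholder parameter vectors
names = list(vectors)
L = len(next(iter(vectors.values())))  # parameters per language

# lower-triangular matrix of normalized hamming distances, zero diagonal
matrix = []
for i, a in enumerate(names):
    row = [hamming(vectors[a], vectors[b]) / L for b in names[:i]]
    matrix.append(row + [0])

tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
# Bio.Phylo.draw_ascii(tree) would print it much like phylip's outfile
```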
the resulting tree produced by phylip , containing all 253 languages in the sswl database , is contained in outfile , where the tree in the text file is drawn with dashes and exclamation points . the information of the output tree and distances is also given in the output file outtree in newick format , with parentheses and commas . the accompanying file key.txt contains the key that indicates the full language name corresponding to each two - letter string in outfile . the output files can be opened in any text editor . the python code and the output files , prepared by the second , third and fourth authors of this paper , are available at http://www.its.caltech.edu//phylogeneticsswl a quick inspection of the output file obtained by running phylip on the sswl data immediately reveals that there are many problems with the resulting phylogenetic tree . we will give explicit examples here that illustrate some of the main types of problems one encounters . there are many more such examples that one can easily find by inspecting the output tree available in the repository at the url indicated above . an important problem in computational phylogenetic reconstruction is how to validate the model statistically . there are well known problems inherent in using the hamming distance as a source for phylogenetic trees . estimating tree branch lengths is a hard problem . distance matrices can be non - additive due to error , and it is typically difficult to distinguish distances that deviate from additivity due to change from deviations due to error . this problem is significant even in the context of biology , where the use of dna data is more reliable than the use of vectors of binary variables coming from linguistic properties . for a discussion of some of these issues in biology , see . for a comparison of phylogenetic methods ( not including syntactic parameters ) in linguistics , see . as we discuss with individual specific examples in the subsections that follow , there are several different sources of problems that combine to create different kinds of errors in the resulting phylogenetic tree . the main problems are the following : 1 . inherent problems in the computational method based on hamming distances , as discussed above ; 2 . problems with non - uniform coverage of syntactic data across different languages and language families in the sswl database ; 3 . the nature of the syntactic variables recorded in the sswl database ( for instance with respect to surface versus deep structure ) and the presence of relations between these variables ; 4 . the existence of languages belonging to unrelated linguistic families that can be similar at the level of syntactic structures . clearly , some of these problems are of a linguistic nature , like the last one listed , while others are of a computational nature , like the first one , and others depend on the nature and accuracy of the sswl data . it is difficult to disentangle the effects of each individual problem on the output tree , but the examples listed below illustrate cases where one can identify one of the problems listed here as the most likely origin of the mistakes one sees in the resulting phylogenetic tree . this type of problem occurs when a group of languages is correctly identified as belonging to the same subfamily of a given historical - linguistic family , but the internal structure of the subfamily tree appears inconsistent with the structure generally agreed upon based on other linguistic data .
in the naive phylip analysis of the sswl database we see an example of this kind by considering the subtree of the latin languages within the indo - european family . the shape of this subtree , as it appears in the output file , is of the form illustrated in figure [ latinfig ] . we see here that , although these languages are correctly grouped together as belonging to the same subfamily , the relative positions within the subtree do not agree with what historical - linguistic methods have established . indeed , one can easily see , for instance , that portuguese is incorrectly placed closer to italian and sicilian than to spanish and catalan . this example is interesting because the error does not appear to be due to poor mapping of the parameters for these languages : italian and sicilian are mapped in sswl and spanish , catalan , and portuguese are mapped . so these are among the best recorded languages in the database , and still their respective positions in the phylogenetic tree do not agree with reliable reconstructions from historical linguistics . it is interesting to compare the reconstruction obtained in this way with the one obtained , on a different set of syntactic data , by longobardi's parametric comparison in , which has italian and french as a pair of two nearby branches , and spanish and portuguese as another pair of nearby branches . this example appears to outline an issue arising from the way syntactic variables are classified in the sswl ( as opposed to the different list of syntactic parameters used in ) . we discuss in [ geomsec ] below some of the problems of dependencies between the sswl syntactic variables that may be at the source of this kind of problem . ] another type of mistake one finds in the naive phylogenetic tree reconstruction from sswl syntactic data is illustrated by the germanic languages in figure [ germfig ] . in this case , we find that most of the languages in this subtree are correctly grouped together as germanic , but a language that clearly belongs to a different subfamily is also placed in the same group . it is very puzzling why ancient neapolitan ends up incorporated in the tree of germanic languages rather than near italian and the other dialects of italian in the subtree of latin languages of figure [ latinfig ] . linguistically , one could perhaps argue that ancient neapolitan did in fact have several germanic influences due to the ostrogoths , but it is more reasonable to expect such influences to appear at the lexical rather than the syntactic level . moreover , the specific placement within the germanic tree , near faroese , norwegian and icelandic , does not necessarily reflect this hypothesis . in terms of the accuracy with which these languages are recorded in the sswl database , ancient neapolitan is mapped , while its nearest neighbors on this phylip output tree include norwegian , which is also mapped with a similar accuracy , and faroese and icelandic , which are mapped with a lower accuracy . it is possible that this example already reflects a problem with the different accuracy of mapping of different languages in the sswl database , or it may be a problem with the algorithmic reconstruction method itself . there are several similar instances in the output tree , which point to a problem that is systematic , hence likely generated by the method of phylogenetic reconstruction adopted in this naive analysis .
] another type of problem that occurs frequently in the output tree of this naive analysis is the case of completely unrelated languages ( from completely different language families ) being placed in adjacent positions in the tree . we see an example in figure [ mayafig ] , where the mayan kiche language and georgian ( kartvelian family ) are placed next to each other in the tree . both kiche and georgian are mapped in the sswl database . although this is not as accurate a mapping as for some of the languages we discussed in the previous examples , it is nonetheless the same level of precision available , for instance , for some of the germanic languages in the previous example , which were at least placed correctly in the germanic subtree . thus , the type of problem we see in this example is not entirely due to poor mapping of the languages involved . it must also be an effect of other factors , like the computational reconstruction method used , as in the previous class of examples . however , there can also be some purely linguistic factors involved . namely , there are known cases of languages belonging to unrelated historical - linguistic families that may appear close at the syntactic level . this type of phenomenon may be responsible for at least part of the cases where one finds unrelated languages placed in close proximity in the output tree . this is an indication that one should not rely on syntactic data alone , without accompanying them with other linguistic data , which can provide , for example , a prior subdivision of languages into language families . using the same method of phylogenetic tree reconstruction on data already grouped into linguistic families , with individual family trees constructed separately , improves the accuracy of the resulting trees . other combinations of syntactic and lexical / morphological data can also be used to improve accuracy .
in this case , the algorithm correctly captures the close syntactic proximity between ancient greek and latin , but it does not place these two languages correctly with respect to either the tree of latin languages nor the modern part of the hellenic branch .this problem can be improved by first subdividing the data into language families and smaller subfamilies and then perform the phylogenetic tree reconstruction on the subfamilies separately , so that the corresponding ancient language is placed correctly , and then related the resulting trees by proximity of the ancient languages .however , this method clearly applies only where enough other linguistic information is available , in addition to the syntactic data .it should be noted , moreover , that , while ancient greek is correctly placed in proximity to latin , homeric greek is entirely misplaced in the phylip tree reconstruction and does not appear in proximity of the ancient greek of the classical period , even though both homeric and ancient greek are mapped with the best possible accuracy ( mapped ) in the sswl database . ] ] although the many problems illustrated above render a phylogenetic reconstruction based solely on sswl data unreliable , it is still worth commenting on what one obtains with this method regarding some of the controversial early branchings of the indo - european tree .again , the same type of systematic problems illustrated above occur repeatedly when one analyzes these regions of the output tree . for example ,tocharian a and b are treated by the phylip reconstruction as modern languages leaves of the tree and placed in immediate proximity of hittite and in close proximity of some of the modern indo - iranic languages , like pashto and punjabi , and a further step away from some turkic languages like tuvan .the proximity of tocharian and hittite suggests here a tocharian - anatolian branching .the placement of the indo - iranic languages in proximity of this tocharian - anatolian branching is likely arising from the fact that the indo - iranic branch of the indo - european family is very poorly mapped in the sswl database , with the ancient languages entirely missing and very few of the modern languages recorded , hence the reconstructed tree necessarily skips over all these missing data .the complete absence of sanskrit from the current version of the sswl database ( the entry in the database is just an empty place holder ) in particular causes the phylogenetic reconstruction to miss entirely the proximity of the indo - iranic and the hellenic branches . near the subtree shown in figure [ tochfig ]one finds several instances of misplaced languages of the type discussed in [ proxsec ] above . 
] the situation with the armenian branch is very problematic in the phylip analysis of the sswl data . there are three entries recorded in the database : western armenian is mapped , while eastern armenian appears as two different entries in the database , one mapped and the other only mapped . classical armenian only appears as an empty place holder with no data in the current version of the database . these three data points are not placed in proximity of one another in the phylip reconstruction . western armenian ends up completely misplaced ( it appears in proximity of korean and japanese ) . this misplacement might be corrected if one first subdivides the data by language families and then runs the phylogenetic reconstruction only on the indo - european data . the better mapped entry for eastern armenian is placed in proximity of the subtree of figure [ tochfig ] containing the tocharian - anatolian branch and some indo - iranian languages ( plus some other misplaced languages from other families ) . the nearest neighbors that appear in this region of the tree are digor ossetic and iron ossetic : again , this is likely an effect of the poor mapping of the indo - iranic branch of the indo - european family , as in the case of figure [ tochfig ] . another error due to misplacement from an entirely different family occurs here as well , with the uto - aztecan pima placed in this same subtree ; see figure [ armfig ] . this subtree is placed adjacent to a subtree containing a group of balto - slavic languages ( and some misplaced languages ) , with both of these branches then connecting to the subtree of figure [ tochfig ] . the poorly mapped eastern armenian entry is placed as a single leaf attached to an otherwise deep inner node of the tree . another language that is often difficult to position in the indo - european tree , albanian ( mapped ) , is misplaced in the phylip reconstruction and placed next to gulf arabic ( mapped ) . these examples confirm that a naive phylogenetic analysis of the sswl database cannot deliver any reliable information on the question of the early branchings of the indo - european tree .
( key for figure [ armfig ] : ee = pima ( misplaced uto - aztecan ) , ai = digor ossetic , dh = iron ossetic . ) ] we verified that the same types of problems illustrated in the previous subsections occur when the sswl data are analyzed using phylogenetic networks instead of the phylip phylogenetic trees . we compiled the sswl data , using only the indo - european languages , which have more complete parameter information , as a sample set . as in the tree analysis discussed before , we input the syntactic parameters as a sequence of binary strings into the phylogenetic network programs . the splitstree 4 program generated a split tree , which is intuitively a confidence interval on trees : the farther the generated graph is from being tree - like , the less any given tree is able to describe the evolution of the languages . the output of this program indicated that the phylogenetics of languages , analyzed on the basis of sswl syntactic parameters , diverges strongly from being tree - like . as discussed before , this may be regarded as further indication of systematic problems that create high uncertainties in the candidate trees . these are again an illustration of the effect of a combination of the factors ( 1 ) - ( 4 ) listed in [ listsec ] . we also fed the same data to the network 5 program . this generated a phylogenetic network , using the median - joining algorithm , which represents all of the shortest - path - length ( maximum parsimony ) trees that are possible given the data . ] we discuss below some aspects of the network generated by splitstree 4 in comparison with some of the outputs described above obtained with the phylip phylogenetic trees . figure [ sswlnetwork ] illustrates a large region of the phylogenetic network produced by splitstree 4 using the entire set of sswl data . it is evident that some of the same problems we have discussed before occur in this case as well , in particular the misplacement of the ancient languages with respect to their modern descendants ( see the position of latin and ancient greek , for example ) . however , with respect to the phylip results discussed above , we see fewer instances of languages that get completely misplaced within the wrong family . for example , as one can see from figures [ germanicnetwork ] and [ latinnetwork ] , ancient neapolitan now appears correctly placed among the latin languages ( and near spanish ) rather than misplaced among the germanic languages as in figure [ germfig ] . however , other problems that occurred in the phylip reconstructions for this group of languages are still present in the splitstree 4 network . for example , as in figure [ latinfig ] , portuguese appears closer to italian than to spanish in the network of figure [ latinnetwork ] , contrary to the general understanding of the phylogenetic tree of the latin languages . ( we will discuss the case of the subtree of the latin languages more in detail in [ agtreesec ] below .
) misplacements of languages within these smaller subfamilies are still occurring , however : one can see this , for example , in the positioning of the romance language occitan in the region of the phylogenetic network in proximity of germanic languages like old norse and icelandic in figure [ germanicnetwork ] . the results of the splitstree 4 phylogenetic network analysis of the indo - european languages are available as the file indo_euro.nex , which can be downloaded at the url ] ] given the unsatisfactory results one obtains in analyzing the sswl database with software aimed at phylogenetic reconstruction , one can turn the problem on its head and try to obtain specific quantitative estimates of the level of reliability or unreliability of specific subsets of the sswl data for the purpose of phylogenetic inference , by relying on existing reconstructions of linguistic phylogenetic trees , obtained by other linguistic methods and other sources of data , which are considered reliable . the problem is then to test the distribution at the leaves of the tree obtained from the sswl data against specific polynomial invariants associated to a given reliable tree . such invariants vanish on any probability distribution at the leaves obtained from an evolutionary process modeled by a markov model on the tree ; hence we can use an estimate of how far their values are from zero as a numerical measure of the degree of unreliability of the data for phylogenetic reconstruction . again , this does not explicitly identify the source of the problem among the various possible causes outlined above , but it still gives a numerical estimate that can be useful in trying to improve the results . we propose here to use methods from phylogenetic algebraic geometry to achieve this goal . we first give a quick review of the main setting of phylogenetic algebraic geometry , and then we illustrate in some specific examples how we intend to use these techniques for the purpose described here . the basic setup for linguistic phylogenetic models consists of a _ dynamical process _ of language change ( which in our case means change of syntactic parameters ) , considered as a markov process on a _ binary tree _ ( a finite tree with all internal vertices of valence 3 ) . it can be argued whether trees really give the best account of language change based on syntactic data , rather than more general non - simply - connected graphs ( generally referred to as " networks " ) . we will return to discuss some empirical reasons in favor of phylogenetic networks instead of trees in [ geomsec ] below . the mathematics of phylogenetic networks is discussed at length in and . about the use of phylogenetic networks in linguistics , see .
another general assumption of phylogenetic models , which requires careful examination in the case of applications to linguistics , is the usual assumption that the variables ( binary variables in the case of syntactic parameters ) behave like _ independent _ identically distributed variables , whose dynamics evolves according to _ the same _ markov process . this assumption is especially problematic when dealing with syntactic parameters , because of the presence of relations between parameters that are not entirely understood , so that it is currently extremely hard to ensure that one is using a set of independent binary variables . moreover , while acceptable as a first approximation , even the assumption that the underlying markov model driving the change should be the same for all syntactic parameters appears problematic : the fact that different syntactic parameters have very different frequencies of occurrence among world languages certainly suggests otherwise . we will return to this point in [ geomsec ] and suggest a possible approach , based on the results of , to correct , at least in part , for this problem . the leaves of the tree correspond to the modern languages , with observed values of the parameters giving a joint probability distribution p_{i_1 ... i_n} , with each index i_k in { 0 , 1 } and with n the number of leaves . here the quantity p_{i_1 ... i_n} represents the frequency with which the syntactic parameters of the n languages at the leaves of the tree have the values i_1 , ... , i_n , respectively . in the usual setting of markov models for phylogenetic reconstruction , one further assumes that all the _ inner nodes _ are hidden variables and that only the distribution at the leaves of the tree is known . here again we encounter a problem with respect to applications to linguistics . in certain language families , like the indo - european family , several ancient languages have known parameters . in the sswl database , for instance , ancient greek is one of the very few languages that are 100 % mapped with respect to the list of 115 parameters . thus , one needs to consider some of the inner vertices as known rather than hidden . one way to do that is to consider a single leaf coming out of some of the inner vertices , which will correspond to the known values of the parameters at that vertex . as we discussed in [ treesec ] above , one encounters problems with the placement of the ancient languages in the phylip reconstruction of the syntactic phylogenetic trees , which should be corrected for . better results are obtained when one first separates the data into language families and subfamilies and builds trees for smaller subfamilies first , including the known data about the ancient languages , and then combines these trees into a larger tree . this procedure avoids the type of problem mentioned in [ treesec ] , by which the greater syntactic similarity between some of the ancient indo - european languages , like latin and ancient greek , is detected correctly , but in turn prevents their correct placement with respect to the modern languages that originated from them . for a given set of n leaves , there are ( 2n - 3 )!! different possible rooted binary tree topologies . clearly , it is not a computationally efficient strategy to analyze all of them . however , one would like to have some computable invariants that one can associate to a given candidate tree t , which estimate how accurate t is as a phylogenetic tree , among all the possible choices , given knowledge of the joint probability distribution at the leaves .
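concretely , the observed distribution at the leaves can be estimated from the sswl vectors by counting how often each boolean pattern occurs across the parameters that are mapped for all the languages in the subfamily ; from it one can also build the edge - flattening matrices whose rank conditions give the invariants discussed in the next subsection . a sketch ( the rank bound stated in the final comment is the allman - rhodes condition for two - state models ) :

```python
import numpy as np
from collections import Counter
from itertools import product

def leaf_distribution(vectors):
    # vectors: one equal-length 0/1 list per language (one leaf each);
    # returns {(i1, ..., in): estimated p_{i1...in}}
    n_params = len(vectors[0])
    counts = Counter(tuple(v[j] for v in vectors) for j in range(n_params))
    return {pat: c / n_params for pat, c in counts.items()}

def edge_flattening(p, n_leaves, side):
    # an edge splits the leaves into 'side' and the rest; arrange the joint
    # probabilities into a matrix indexed by the patterns on the two sides
    other = [k for k in range(n_leaves) if k not in side]
    rows = {pat: r for r, pat in enumerate(product([0, 1], repeat=len(side)))}
    cols = {pat: c for c, pat in enumerate(product([0, 1], repeat=len(other)))}
    m = np.zeros((len(rows), len(cols)))
    for pat, prob in p.items():
        m[rows[tuple(pat[k] for k in side)],
          cols[tuple(pat[k] for k in other)]] = prob
    return m

# for two-state models, the flattening along a true edge of the tree has
# rank at most 2, so all 3x3 minors vanish; how far the minors (or the
# third singular value) are from zero measures how badly the tree fits
```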
the phylogenetic algebraic geometry approach ( see , and the survey ) aims at constructing such phylogenetic invariants using algebraic geometry and commutative algebra . we review the main ideas in the next subsection . we consider here the jukes - cantor model describing a markov process on a binary rooted tree with n leaves . the stochastic behavior of the model is determined by the datum of a probability distribution at the root vertex ( the frequency of expression of the 0 and 1 values of the syntactic parameters at the root ) and the datum of a bistochastic matrix along each edge of the tree . these data are often referred to in the literature as the parameters of the model . in order to avoid confusion with our use of the term parameter for the syntactic binary variables , we will refer to them as " stochastic parameters " . for a tree with n leaves and variables with k states , the number of stochastic parameters grows linearly with the number of edges of the tree ; in our case , with binary variables ( k = 2 ) , the root distribution contributes a single free parameter and each 2 x 2 bistochastic matrix along an edge contributes one more , so the number of stochastic parameters of the model is simply the number of edges plus one . _ phylogenetic invariants _ are polynomial functions that vanish on all the expected distributions at the leaves of the tree , for all values of the stochastic parameters . the simplest example of such an invariant is the linear polynomial ( sum_{i_1 , ... , i_n} p_{i_1 ... i_n} ) - 1 , since the joint distribution at the leaves is normalized by sum_{i_1 , ... , i_n} p_{i_1 ... i_n} = 1 . this invariant is uninteresting , in the sense that it is independent of the tree , hence it does not provide any information for distinguishing between candidate phylogenetic trees . in general , one seeks other , more interesting , phylogenetic invariants , and the minimum number of such invariants required for phylogenetic inference . an answer to this question is provided by algebraic geometry , as shown in , , , . consider the polynomial ring c [ p_{i_1 ... i_n} ] generated by the joint probabilities at the leaves . as a concrete example , consider the case of latin and four of its modern descendants , with the joint frequencies p_{i_1 ... i_5} computed among the 106 vectors of sswl parameters above . only a small number of these frequencies are nonzero . note how these frequencies confirm some well known facts about the latin languages . syntactic parameters ( as recorded in sswl ) are very likely to have remained the same across all five languages in the family , with a higher probability of a feature not allowed in latin remaining not allowed in the other languages than of a feature allowed in latin remaining allowed in the other languages . it is also very likely that a feature is the same in all the modern languages but different from latin , with a much higher incidence of cases of a feature allowed in latin becoming disallowed in all the other languages than the other way around . among the remaining possibilities , we see incidences where french has an allowed feature that is missing or disallowed in the other languages , and cases where latin and portuguese have the same feature allowed , which is disallowed in the other languages ; all other nonzero entries have two or fewer occurrences . the resulting matrices for the edge flattenings of the tree of figure [ treeflatp ] are then as computed in [ agtreesec ] . a. bouchard - côté , d. hall , t. l. griffiths , d. klein , _ automated reconstruction of ancient languages using probabilistic models of sound change _ , proceedings of the national academy of sciences ( pnas ) vol . 110 ( 2013 ) n. 11 , 4224 - 4229 . r. bouckaert , p. lemey , m. dunn , s. j. greenhill , a. v. alekseyenko , a. j. drummond , r. d. gray , m. a. suchard , q. d. atkinson , _ mapping the origins and expansion of the indo - european language family _ , science , vol . 337 ( 2012 ) 957 - 960 . g. longobardi , l. bortolussi , m. a. irimia , n. radkevich , a. ceolin , c. guadagno , d.
michelioudakis, a. sgarro, _mathematical modeling of grammatical diversity supports the historical reality of formal syntax_, in "proceedings of the leiden workshop on capturing phylogenetic algorithms for linguistics", 2016. g. longobardi, s. ghirotto, c. guardiano, f. tassi, a. benazzo, a. ceolin, g. barbujani, _across language families: genome diversity mirrors linguistic variation within europe_, am. j. phys. anthropol. vol. 157 (2015) n. 4, 630-640.
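as a postscript to the edge-flattening computation described above (the code is placed here so as not to interrupt the reference entries): a minimal numpy sketch of the rank test behind the flattening matrices. the joint distribution used below is a random stand-in, not data from the paper; for a distribution that genuinely arises from a markov model on a tree containing a given edge, the flattening along that edge has rank at most 2 when the variables are binary, so all of its 3x3 minors vanish and their magnitudes score a candidate split.

```python
import numpy as np

# Random stand-in for the empirical joint distribution of 5 binary
# syntactic variables at the leaves (in the paper these frequencies
# come from counting SSWL parameter vectors).
rng = np.random.default_rng(0)
P = rng.random((2, 2, 2, 2, 2))
P /= P.sum()

def edge_flattening(P, side_a):
    """Flatten the joint leaf distribution along the split side_a | rest:
    rows index joint states of the leaves in side_a, columns the rest."""
    side_b = [i for i in range(P.ndim) if i not in side_a]
    Q = np.transpose(P, axes=list(side_a) + side_b)
    return Q.reshape(2 ** len(side_a), 2 ** len(side_b))

M = edge_flattening(P, [0, 1])          # split {L1, L2} | {L3, L4, L5}
s = np.linalg.svd(M, compute_uv=False)
print("singular values:", np.round(s, 4))
# If P came from a tree model containing this edge, rank(M) <= 2:
# the 3rd and 4th singular values would be ~0 (all 3x3 minors vanish);
# their size measures how badly the candidate split fails the invariant.
```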
|
in light of recent controversies surrounding the use of computational methods for the reconstruction of phylogenetic trees of language families ( especially the indo - european family ) , a possible approach based on syntactic information , complementing other linguistic methods , appeared as a promising possibility , largely developed in recent years in longobardi s parametric comparison method . in this paper we identify several serious problems that arise in the use of syntactic data from the sswl database for the purpose of computational phylogenetic reconstruction . we show that the most naive approach fails to produce reliable linguistic phylogenetic trees . we identify some of the sources of the observed problems and we discuss how they may be , at least partly , corrected by using additional information , such as prior subdivision into language families and subfamilies , and a better use of the information about ancient languages . we also describe how the use of phylogenetic algebraic geometry can help in estimating to what extent the probability distribution at the leaves of the phylogenetic tree obtained from the sswl data can be considered reliable , by testing it on phylogenetic trees established by other forms of linguistic analysis . in simple examples , we find that , after restricting to smaller language subfamilies and considering only those sswl parameters that are fully mapped for the whole subfamily , the sswl data match extremely well reliable phylogenetic trees , according to the evaluation of phylogenetic invariants . this is a promising sign for the use of sswl data for linguistic phylogenetics . we also argue how dependencies and nontrivial geometry / topology in the space of syntactic parameters would have to be taken into consideration in phylogenetic reconstructions based on syntactic data . a more detailed analysis of syntactic phylogenetic trees and their algebro - geometric invariants will appear elsewhere .
|
most quantum experiments produce results that cannot be predicted with certainty. no matter how we improve our experimental techniques, this uncertainty will not diminish. any attempt to explain why this is so will of course depend on which theory is used to describe the quantum world. in this paper, we will explain the origin of quantum randomness and uncertainty from the standpoint of adopting (non-relativistic) bohmian mechanics as that theory. both classical mechanics and bohmian mechanics are deterministic theories. this means that the specification of the complete state of a (classical or bohmian) system at one time $t$, together with the laws of the theory, is compatible with only one state of the system at any other time. so, if the complete state of a (classical or bohmian) system is specified at one time, the whole history of the system is determined. obviously, the two theories differ in terms of their laws and also in what needs to be specified in order to provide a complete characterization of a system. in classical mechanics, the state of a system is fully characterized if the velocities and positions of the particles that constitute the system are provided at an initial time. in contrast, a complete characterization of a bohmian system requires both the initial positions of its constituent particles to be given and also the initial wave function that guides them. very often, determinism is associated with absolute predictability. however, this association is unjustified. for instance, if a classical system is chaotic, since its state can only be known with a finite degree of precision, its evolution after a period of time will be uncertain. moreover, if the system is macroscopic and contains an enormous number of particles, measurement of all the initial conditions (the position and the velocity of each particle in the system) is not practically possible. but, without knowledge of the initial conditions that completely characterize the state of the system, its future evolution cannot be predicted. these sources of unpredictability in classical systems are also found in bohmian mechanics. there are, however, two fundamental differences between classical and bohmian unpredictability. the first difference has to do with the experimental accessibility of the state, i.e., the initial conditions. we usually determine the initial conditions of a system by measuring them.
in classical mechanics, despite all the practical difficulties mentioned above, the idea that the initial conditions of a system can (theoretically) be measured is not problematic. we will show that in bohmian mechanics this is not the case. it follows from the dynamical laws of bohmian theory themselves that the wave function of a system cannot be measured. in addition, if the wave function is somehow known and we want to subsequently measure the positions of the particles, this measurement process itself will unavoidably modify the initial wave function. the second difference between classical and quantum theories has to do with the typical initial conditions that can be assumed in scenarios where their exact values cannot be known. most quantum experiments are analyzed by assuming the initial wave function of the system to be given (say, by means of a suitable procedure of preparation of the system). thus, in order to analyze quantum experiments, only a statistical assumption regarding the distribution of the initial positions of the particles is needed. the so-called quantum equilibrium hypothesis states that the distribution of the positions of particles is given by the modulus squared of their wave function. this hypothesis is of paramount importance, since it allows us to quantify precisely the degree of quantum uncertainty in an experiment and it guarantees the empirical equivalence of bohmian mechanics and the standard quantum-mechanical approach. now that we have given this brief introduction to the topic, we can advance that the structure of the rest of the paper is as follows. in section [sec2], we discuss why, according to bohmian mechanics, the initial conditions cannot be properly measured. first, in subsection [sec21], we briefly introduce the dynamical laws of the theory and we discuss the distinction between the universal wave function, the conditional wave function and the effective wave function of a subsystem, since these distinctions are important for understanding the origin of uncertainties in bohmian mechanics. then, in subsection [sec22], after explaining why in classical mechanics the initial conditions can in principle be measured, we show why the wave function cannot be measured; and finally, we show that a measurement of position typically disturbs the wave function. in section [sec3], we introduce the quantum equilibrium hypothesis, explaining its meaning in subsection [sec31], its justification in subsection [sec32] and, in subsection [sec33], the absolute unpredictability of quantum mechanics that emerges as a direct consequence of the quantum equilibrium hypothesis. next, in section [sec4], we add some further considerations concerning the relation between randomness, measurement and equilibrium. then finally, in section [sec5], we summarize our conclusions. in this section, we want to highlight the differences between classical measurements and bohmian measurements. as we have just mentioned, we will show that, according to bohmian mechanics, there are fundamental limits to the possibility of determining the initial conditions of a given system through measurement, a sort of limit that we do not find in classical measurements. due to this inherent limitation on the determination of the initial conditions, the subsequent evolution of the system cannot be predicted.
throughout this sectionwe will be mainly concerned with the dynamics of bohmian mechanics .so , in what follows we present the basic dynamical laws of the theory .we take advantage of this presentation to introduce three different types of ( bohmian ) wave functions that are relevant for our later discussion . for ease of exposition and understanding ,we will focus on a non - relativistic bohmian world with spinless particles living in a one - dimensional physical space .the generalization to a 3d physical space inhabited by particles with spin does not change any of our conclusions .bohmian mechanics is a quantum theory in which the complete state of the whole system of particles in the universe , say , is given by the _universal _ wave function with , and by the actual positions of the particles .the universal wave function evolves according to the many - particle schrdinger equation : where is the mass of the -th particle and the hamiltonian operator contains , both , the kinetic energy term and the potential energy term , , of all the particles .in addition , the trajectory of each particle is given by the integral of the particles velocity defined through the so - called _ guidance equation _: in order to derive the bohmian trajectories from eq .( [ velo ] ) , we need to specify the initial position of all the particles , . in turn , in order to solve the schrdinger equation given in eq .( [ scho ] ) , the universal wave function at a given initial time , , must be specified .as we say above in the introduction , bohmian mechanics is a completely deterministic theory in the sense that specification of the complete state of the universe at one initial time , , together with the laws given in eqs .( [ scho ] ) and ( [ velo ] ) , is compatible with just one state of the universe for any other time , . in this regard ,the dynamical laws of bohmian mechanics are as deterministic as the dynamical ( newtonian or hamiltonian ) laws of classical mechanics. the following continuity equation for the modulus squared of the wave function can be straightforwardly obtained from eq .( [ scho ] ) : now , suppose that we do not know the initial positions of the particles , but that we assume that they are in accordance with a given statistical distribution .it follows from eq .( [ continuity ] ) that if this initial distribution is given by , then the dynamics preserves the form of this distribution ; that is , for any other future time , the distribution is .when the distribution is preserved by the dynamics , as happens for , we say that this distribution is equivariant . in situations of practical interest ,we are not concerned with the whole universe but with smaller subsystems , i.e. , a particle or a collection of particles in our laboratory . however , the wave function that appears in the postulates of bohmian mechanics , eqs .( [ scho ] ) and ( [ velo ] ) , is the universal wave function , .if we want the theory to make contact with real applications , we need some tools that enable us to refer to and to characterize arbitrary subsystems of the universe . in bohmian mechanics ,those tools are the so - called _ conditional _ and _ effective _ wave function of a given system .let us first define the _conditional _ wave function of a system . for simplicity, we will focus on a specific particle , labeled , with the configuration variable . 
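before moving on to subsystems: the three displays referred to above as eqs. ([scho]), ([velo]) and ([continuity]) did not survive extraction, so we restate them here in standard bohmian notation (our reconstruction; the original's symbols may differ cosmetically). for $N$ particles with positions $x = (x_{1}, \dots, x_{N})$ and actual configuration $X(t)$:

```latex
% many-particle Schrodinger equation (eq. [scho])
i\hbar \frac{\partial \Psi(x,t)}{\partial t}
  = \left[ \sum_{k=1}^{N} -\frac{\hbar^{2}}{2 m_{k}}
      \frac{\partial^{2}}{\partial x_{k}^{2}} + V(x) \right] \Psi(x,t)

% guidance equation (eq. [velo]) for the k-th trajectory
\frac{d X_{k}}{dt} = v_{k}\bigl(X(t), t\bigr)
  = \frac{\hbar}{m_{k}} \,\mathrm{Im}\!
    \left[ \frac{1}{\Psi} \frac{\partial \Psi}{\partial x_{k}} \right]_{x = X(t)}

% continuity equation (eq. [continuity]), the source of equivariance
\frac{\partial |\Psi|^{2}}{\partial t}
  + \sum_{k=1}^{N} \frac{\partial}{\partial x_{k}}
    \bigl( |\Psi|^{2}\, v_{k} \bigr) = 0
```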
by definition ,the conditional wave function is : where are the positions at of all the particles except .it is usual to refer to the set of all particles except as the environment of .note that whereas the universal wave function is a function defined in the -dimensional configuration space of the universe , the conditional wave function is a function only of ( and time ) .it follows from eq .( [ velo ] ) , that the velocity of is given by the function through the corresponding guidance equation : therefore , all the dynamics of a quantum ( sub)system can be inferred from its conditional wave function . however , in general , the conditional wave function does _ not _ evolve according to the schrdinger equation , but it obeys its own ( more complicated ) equation : where is the potential that appears in eq .( [ scho ] ) evaluated at .( [ eq_conditional ] ) demonstrates that such a single - particle wave equation exists ; however , we do not know the exact form of the terms and because to know these terms we would need to have complete knowledge of the ( universal ) wave function .our ignorance of these terms is certainly a source of uncertainty in our predictions .this source of uncertainty is of the sort that appears in ( classical or quantum ) open systems due to the interchange of particles and energy with the ( unknown ) environment . while the evolution of the conditional wave function for a quantum subsystem is given by eq .( [ eq_conditional ] ) , which is not exactly the schrdinger equation , bohmian theory provides the concept of the _ effective _ wave function of a quantum subsystem , whose equation of motion is exactly the schrdinger equation .once again , let us focus on particle and let be the position variables of s environment .now suppose that the universal wave function , over some time interval ( for example , for ] is the usual schrdinger equation : and the bohmian velocity is just : we should note that the form of the universal wave function in eq .( [ effective ] ) implies a separable potential .we emphasize that the effective wave function of a quantum ( sub)system does not always exist ; however , when the effective wave function exists , it is equal to the conditional wave function in eq .( [ effective ] ) and recalls that , by assumption , lies within the support of .] . in exactly the same way as we developed eq .( [ continuity ] ) , a new continuity equation can be deduced from eq .( [ eq_effective ] ) : whenever in orthodox quantum mechanics a definite wave function is attributed to a given subsystem , that wave function corresponds with the bohmian effective wave function .we will see in subsection [ sec31 ] that eq .( [ continuity2 ] ) guarantees equivariance , which will be relevant to ensure the empirical equivalence between bohmian and orthodox quantum mechanics .bohmian mechanics is a holistic theory . strictly speaking , its postulates eqs .( [ scho ] ) and ( [ velo ] ) only apply to the universe as a whole . therefore , when it comes to assessing the ontology of the theory , the fundamental objects are the particles with positions and the universal wave function .as we have seen , in order to deal with subsystems of the universe , the conditional wave function is introduced .the conditional wave function of a given subsystem is not fundamental but it supervenes on the fundamental and . 
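for reference, the two notions just introduced can be written compactly; the displays below are our reconstruction in standard notation, since the originals were lost in extraction. the conditional wave function evaluates the universal wave function at the actual environment configuration $Y(t)$, and the effective wave function exists when the universal wave function splits as shown.

```latex
% conditional wave function of the subsystem with variable x
\psi_{t}(x) = \Psi\bigl(x,\, Y(t),\, t\bigr)

% effective wave function: if, over some time interval,
\Psi(x, y, t) = \psi_{t}(x)\,\Phi_{t}(y) + \Psi^{\perp}(x, y, t),
% with \Phi_t and \Psi^\perp having macroscopically disjoint y-supports
% and Y(t) inside the support of \Phi_t, then \psi_t is the effective
% wave function and obeys its own Schrodinger equation (eq. [eq_effective]).
```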
from eq .( [ eq - guid - cwf ] ) , it can be seen that the complete dynamics of a subsystem is determined once both its conditional wave function and the initial positions of its constituent particles are specified .thus , we can say that the complete state of a subsystem is given by its conditional wave function and the position of its particles even though its conditional wave function is not a primitive object in bohmian mechanics . the conditional wave function for a subsystem does not evolve according to the schrdinger equation , but is a solution of the more complicated eq .( [ eq_conditional ] ) .yet we have seen that , when a subsystem is sufficiently decoupled from its environment so that the conditions for it to have a well - defined effective wave function are satisfied , then its effective wave function actually does obey the schrdinger equation , eq .( [ eq_effective ] ) . in this case , once again the evolution of the system is determined when both the initial positions of its constituent particles and its effective wave function are specified . in the rest of the paper ,when we refer to the wave function of a given subsystem of the universe , we are of course referring to its conditional wave function ( or to its effective wave function , if the system has one ) even if we do not make it explicit .we next want to highlight the contrast between classical and bohmian mechanics when it comes to experimentally determining the initial conditions of a system . in order to do so , in subsection [ sec221 ]we discuss a very simple model of a classical measurement interaction represented by an impulsive hamiltonian .it will follow from this discussion that , in principle , both the position and the velocity of a particle can be measured without an appreciable perturbation .then , in subsection [ sec222 ] , we show why the wave function can__not _ _ be measured ; and in subsection [ sec223 ] , we show that a bohmian measurement of the position typically disturbs the wave function .we consider a one - dimensional classical system ( object ) of mass and the measurement of the property of the system , where is the position coordinate of the object and its momentum . to perform the measurement, we consider a pointer , whose position and momentum are denoted by and respectively .we assume that the hamiltonian governing the interaction between the object and the apparatus has the following form : where is a coupling constant .the total hamiltonian of the system - plus - apparatus is composed of the interaction term in eq .( [ al1 ] ) plus the kinetic energy of the object and of the apparatus .we consider that during the course of the interaction is the only relevant term in the hamiltonian .then , the classical equations of motion can be obtained by substituting eq .( [ al1 ] ) into hamiltons equations : it can be seen from eq .( [ al3 ] ) that the value of the time derivative of the position of the apparatus is correlated with the value of . from eqs .( [ al2 ] ) and ( [ al4 ] ) , we see , however , that the variables and are disturbed during the measurement process . however, such disturbance can be made arbitrarily small by assuming . from eq .( [ al5 ] ) it follows that the momentum of the pointer remains constant during the whole interaction ( its temporal derivative is zero ) .therefore , if we consider at the initial time , the expressions eqs .( [ al2 ] ) and ( [ al4 ] ) are greatly simplified . 
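the hamilton equations referred to as eqs. ([al2])-([al5]) were garbled in extraction; for the interaction hamiltonian $H_{\mathrm{int}} = \lambda\, A(x,p)\, p_{y}$ of eq. ([al1]), with pointer position $y$ and pointer momentum $p_{y}$, they take the standard form (our reconstruction):

```latex
\dot{y} = \frac{\partial H_{\mathrm{int}}}{\partial p_{y}} = \lambda\, A(x,p),
\qquad
\dot{p}_{y} = -\frac{\partial H_{\mathrm{int}}}{\partial y} = 0,

\dot{x} = \frac{\partial H_{\mathrm{int}}}{\partial p}
        = \lambda\, p_{y}\, \frac{\partial A}{\partial p},
\qquad
\dot{p} = -\frac{\partial H_{\mathrm{int}}}{\partial x}
        = -\lambda\, p_{y}\, \frac{\partial A}{\partial x} .
```

the pointer velocity records $A$, the pointer momentum is a constant of the motion, and the back-action on $(x,p)$ is proportional to $p_{y}$, which is why taking $p_{y} = 0$ at the initial time removes the disturbance.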
in this case , the above equations can be easily integrated , yielding : now it can be seen that neither the position of the object nor its momentum are altered during the measurement process .we would like to note that from the second expression in eq .( [ al9b ] ) , the value of can be determined if the initial and final positions of the pointer , and , are known .given these results , it is easy to show how we can measure the position and momentum of the object at the same time without causing a significant disturbance their values .to do this , we just assume an interaction with two pieces of apparatus ( whose pointers are represented on this occasion by the variables , and , , respectively ) so that the interaction hamiltonian is now : where and are the coupling constants between the object and the first and second pieces of apparatus , respectively . applying simplifications analogous to those in the previous case ,the following equations of motion are obtained : if the position of the respective pointers at the end of the interaction is known , then both the position and the initial velocity of the object can be directly inferred from eq . ( [ al9 ] )having this knowledge is a necessary ( but not sufficient ) condition for precisely predicting the posterior behavior of the particle .as we will see next , in bohmian mechanics the detailed initial conditions of a system can not be properly measured , so in the context of this theory not even this necessary condition can be met .as we have repeatedly stressed , the initial conditions of a quantum system are its wave function and the initial positions of its constituent particles . as opposed to what happens in the classical case , in bohmian mechanics one of the initial conditions the wave function cannot be properly measured .the argument leading to the non - measurability of the wave function is rather simple and only the linearity of the quantum dynamical laws needs to be assumed .we are interested in developing an apparatus that is capable of measuring the wave function of an object with position variable .we will assume that such measuring apparatus is an additional quantum system whose final pointer position allows us to infer the wave function that is subjected to the measurement .let be the ( effective ) wave function of the apparatus when it is in the initial state ready for the measurement where is the position variable of the apparatus .when needed , we will refer to the ( effective ) wave function of the composite system as .we consider , for the sake of our argument , two possible initial wave functions of the object , and , which allows us to define two different initial wave functions of the composite system , and .a proper measurement interaction must provoke a correlation between the wave function of the system and the pointer of the apparatus , so that the two initial wave functions , and , are associated with two different values of the pointer s position at the end of the measurement .the temporal evolution of the joint wave function leading to the desired correlation between the object and the apparatus is given by the linear schrdinger equation : where the term is responsible for the proper correlation between any and .note that the evolution of during the measurement is not separable . 
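for completeness, here is our reconstruction of the integrated readout relations (eq. [al9]) for the two-pointer hamiltonian $H = \lambda_{1}\, x\, p_{y_{1}} + \lambda_{2}\, p\, p_{y_{2}}$ with both pointer momenta set to zero initially:

```latex
x(t) = x(0), \qquad p(t) = p(0),

y_{1}(T) = y_{1}(0) + \lambda_{1}\, x\, T,
\qquad
y_{2}(T) = y_{2}(0) + \lambda_{2}\, p\, T ,
```

so the two pointer displacements reveal $x$ and $p$ simultaneously and without disturbance; this is the classical benchmark against which the bohmian case is compared next.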
in order for the interaction to constitute an ( ideal ) measurement of the wave function of the object, the evolution of the global wave function from till , through the corresponding schrdinger equation in ( [ schobis2 ] ) , must be : where is the final wave function of the apparatus indicating that the measured wave function is .in the same way : where is the final wave function of the apparatus indicating that the measured wave function is . the wave functions and need to have macroscopically disjoint supports , , as the set of all points of its domain such that the value of the wave function is significantly different from zero .] so that when we observe , we take this as an unambiguous indication that the result of the measurement is that the wave function of the object is .similarly , when we observe , we take this as an unambiguous indication that the result of the measurement is that the wave function of the object is .now , let the initial wave function to be measured be .note that such a state always exists in the hilbert space of the object system .if we use the same measuring device as before , it follows from ( [ 1 ] ) and ( [ 2 ] ) , and the linearity of the schrdinger equation that the evolution of the joint state will be : given the final state , the laws of bohmian mechanics entail that , at the end of the measurement , the pointer s position will lie either within the support of or within the support of . however , if , we have assumed that the result of the measurement is that the wave function is ( and not ) ; alternatively , if , the pointer s position is taken as an indication that the result of the measurement is that the wave function is ( and not ) . in conclusion , it can be seen that it is not possible to have an apparatus whose operation is based on the linearity of the schrdinger equation that measures all wave functions could imply the inclusion of a -dependent term which breaks the linearity of eq .( [ schobis2 ] ) . ] . if our measuring apparatus is designed to measure the wave functions and , it will be unable to measure correctly a new wave function constructed as a linear combination of the previous two , such as . in the previous subsection ,we show that the wave function can not be measured .since the motion of a quantum system is governed by the wave function , this has important consequences for the unpredictability of the future behavior of quantum systems . at this point, the reader may object that , in many situations , physicists assume that they know what the wave function of a given system is .for instance , by measuring the energy of a system , we can assume that the effective wave function of the system ( after the measurement ) is an eigenstate corresponding to the measured energy eigenvalue .however , strictly speaking , this procedure for obtaining information on the wave function can not be considered a measurement of the wave function , since the wave function has not been measured ; only the energy has . in the literature, this procedure is defined as `` preparation '' of the wave function . 
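the argument of this subsection compresses into three lines; writing $U$ for the linear unitary evolution generated by eq. ([schobis2]) and using the notation above (our reconstruction):

```latex
U\bigl[\psi_{1}(x)\,\Phi_{0}(y)\bigr] = \psi_{1}(x)\,\Phi_{1}(y),
\qquad
U\bigl[\psi_{2}(x)\,\Phi_{0}(y)\bigr] = \psi_{2}(x)\,\Phi_{2}(y),

U\!\left[\tfrac{1}{\sqrt{2}}\bigl(\psi_{1} + \psi_{2}\bigr)\,\Phi_{0}\right]
  = \tfrac{1}{\sqrt{2}}\bigl(\psi_{1}\,\Phi_{1} + \psi_{2}\,\Phi_{2}\bigr) .
```

because $\Phi_{1}$ and $\Phi_{2}$ have macroscopically disjoint supports, the pointer ends in one of them and the apparatus reports either $\psi_{1}$ or $\psi_{2}$, never the superposition $(\psi_{1}+\psi_{2})/\sqrt{2}$ that was actually fed in.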
in this subsection, we bolster our results from the previous subsection by showing that , if we assume that we know the initial wave function thanks to a specific preparation technique , and we want to measure the bohmian position of the particle in the system in order to determine both initial conditions , the measurement of the position perturbs the initial wave function so we can not have knowledge of both .let us consider , again , that we have two systems : the object with position variable and the apparatus with position variable .we can assume that this latter position represents the center of mass of the pointer .we have prepared the object system in such a way that we know its wave function and we subsequently want to measure its position .the ( effective ) two - dimensional wave function of system plus apparatus is and its initial value is , for example , the product of two gaussian wave functions at rest with initial dispersions of and respectively : the modulus at the initial time , , together with the initial positions of the system and pointer , , is plotted in fig .[ fig1 ] . and ( filled - in ( blue ) circle )the corresponding ( bohmian ) position in the two - dimensional system - plus - apparatus configuration space .( color figure online ) .] we will assume that the interaction is governed by the following interaction hamiltonian , similar to the hamiltonian we use in eq .( [ al1 ] ) for modelling a classical measurement . here , , and the function where is equal to the integer number when with .in the limit we have as in eq .( [ al8 ] ) .therefore , in our simulation , we use the following equation : the interacting term generates a correlation between the position of the system and the position of the pointer .such a correlation allows us to look at the position of the pointer and infer from it the position of the system ( with some technical uncertainty related to the finite value of , which would disappear in the limit ) . in fig .[ fig2 ] , the pointer of the apparatus indicates a final value at the final time after the measurement , which is perfectly correlated with the position of the bohmian particle . at this final time, the one - dimensional ( effective or conditional ) wave function of the quantum system alone can be defined as .the relevant point is that the one - dimensional ( conditional ) wave function strongly depends on the pointer position and , even worse , it does not resemble the initial conditional wave function at all .the moral of this analysis should now be clear : measuring the position of the system involves a great perturbation of its wave function . at the final time and ( filled - in ( blue ) circle )the corresponding ( bohmian ) position in the two - dimensional system - plus - apparatus configuration space .the ( effective ) unitary schrdinger equation of the system plus apparatus does not depend on the bohmian position , but its evolution allows a correlation between different and different .such correlation is a measurement of the bohmian position of the particle , with the corresponding perturbation of the wave function ( color figure online ) . ]it is important to note that if we use a highly localized wave function in the x - direction , instead of the gaussian wave packet in eq .( [ eq : gauss ] ) , then we will obtain information on the position without disturbing this localized wave packet . here , we are referring to a wave packet whose support is localized inside a step interval ( i.e. 
, eigenfunctions of the position operator when ) .then , for this localized wave packet , the term has no spatial dependence and only affects the wave function . in other words ,( [ schobis ] ) is separable and the measurement of the position does not perturb the ( localized ) wave function .unfortunately , most of the wave functions of practical interest are not eigenfunctions of the position operator and measuring the position implies perturbation of the wave function. it should be noted , in addition , that the time evolution of a spatially extremely narrow wave function ( close to a delta function ) , due to the guidance equation ( [ velo ] ) , implies such a large dispersion in the velocity of the particle that an infinitesimal variation of the initial position inside the initial wave packet provokes a large variation in its final position ( as in a chaotic system ) . in short ,if the wave function is close to a position eigenstate , we are in the situation of approximately knowing the wave function and the position at the initial time through a measurement ; however , this situation also leads to an effective unpredictability of the future behavior of the system .so far we have shown that , assuming a particular hamiltonian interaction , , a measurement of the position disturbs the initial conditional wave function ( unless the latter is a position eigenstate ) .obviously , this is not a general result and the reader may wonder whether , by means of another interaction between particles and , the position of can be measured without disturbing the wave function . we want to argue , next , that this can not be the case . in order to do so, we will assume an interaction between and such that it does not provoke a modification of the conditional wave function of which means that we have eliminated the x - dependence of the function in eq .( [ schobis ] ) .therefore , from eq .( [ eq : gauss ] ) , the x - wave packet evolves independently of the y - wave packet at all times .one can clearly see that , as desired , the velocity in the x - direction is zero because the phase of the gaussian in the x - direction remains x - independent all the times : . if this interaction is to be a measurement of the position , then obviously it should connect different initial positions of with different final positions of the pointer , so that by looking at the final position of the pointer , the position of can be inferred .it is clear then that a necessary condition for the interaction to constitute a measurement is that it needs to induce a `` channelization '' of in the -space so that at the final time , several positions of the pointer can be discriminated . now , if this latter requirement is combined with the former ( namely , that the conditional wave function of is not disturbed as a result of the interaction ) , we obtain a situation similar to that depicted in fig .[ fig3 ] . 
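before continuing with the situation of fig. [fig3]: the qualitative content of figs. [fig1] and [fig2] can be reproduced in a few lines of numpy if the interaction of eq. ([schobis]) is idealized as purely impulsive (kinetic terms neglected while the coupling acts), so that $H = \lambda\,\hat{x}\,\hat{p}_{y}$ merely shears the initial product of gaussians, $\Psi(x, y, T) = \psi_{0}(x)\,\phi_{0}(y - \lambda T x)$. this is a simplified stand-in for the paper's full simulation, and every numerical value below is arbitrary.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2000)
sigma_x, sigma_y = 2.0, 0.5            # initial widths (arbitrary units)
lam, T = 3.0, 1.0                      # coupling strength and duration
psi0 = np.exp(-x**2 / (4.0 * sigma_x**2))   # object packet at t = 0

def conditional_wf(Y: float) -> np.ndarray:
    """psi(x) = Psi(x, Y, T) for a pointer found at y = Y, with
    Psi(x, y, T) = psi0(x) * phi0(y - lam*T*x) in the impulsive limit."""
    phi = np.exp(-((Y - lam * T * x) ** 2) / (4.0 * sigma_y**2))
    psi = psi0 * phi
    return psi / np.sqrt(np.trapz(np.abs(psi) ** 2, x))

for Y in (-6.0, 0.0, 6.0):
    psi = conditional_wf(Y)
    mean = np.trapz(x * np.abs(psi) ** 2, x)
    width = np.sqrt(np.trapz(x**2 * np.abs(psi) ** 2, x) - mean**2)
    print(f"pointer at Y = {Y:+.1f}: <x> = {mean:+.2f}, width = {width:.3f}")
# The conditional packet is centered near Y/(lam*T) with width about
# sigma_y/(lam*T) ~ 0.17, far narrower than the prepared sigma_x = 2:
# reading the pointer has strongly perturbed the wave function.
```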
here , we see that the different branches of associated with the different possible final positions of the pointer all have the same shape in the -space .note , that this is in striking contrast with the situation depicted in fig .[ fig2 ] where each branch of associated with one possible final position of the pointer clearly has a different projection on the -axis .but the former result is imposed by the requirement that , for all possible final positions of the pointer , the conditional / effective wave function must be equal to the initial effective wave function of : .the evolution represented in fig .[ fig3 ] has a dramatic consequence .it does not produce any dynamical correlation between and .it can be accepted that , by chance , the pointer may indicate nm when the position of the system is nm .yet there will be no such coincidence for other possible initial positions of the pointer .in fact , according to the quantum equilibrium hypothesis , which we discuss in more detail in the next section , there are infinitely many bohmian trajectories such that , while the pointer ends in position nm , the position of is not nm but any other value .so it is clear that the interaction represented in fig .[ fig3 ] can not be considered a measurement of position .a final remark is required here .we can conceive a rather special universe in which , for all systems initially characterized by the wave function that undergo the interaction we are now considering , the available initial positions are initially distributed in such a way that the pointer shows an apparent correlation with in spite of both variables and evolving independently .for example , imagine that our special universe only contains the following 5 initial pairs of particles positions that are deterministically associated with the final pairs of particle positions indicated : at the final time and ( filled - in circle ) the corresponding ( bohmian ) position in the two - dimensional configuration space for one particular experiment .the evolution of implies a perturbation in the direction , but an unperturbed evolution in the direction .such an interaction can not be considered as a good measurement of the position because , due to equivariance and the guidance law of effective wave functions , eq .( [ velo_eff ] ) , the final value of the pointer is compatible with many different positions of the system .( color figure online ) . 
] in fact , the apparatus is not doing anything ; the initial conditions of our universe are so special that it seems that there is a perfect correlation between and : when the system is in the final pointer indicates without distorting the wave function of the system , when the system is in the final pointer indicates without distorting the system wave - function , and so on .such a universe is certainly possible , but highly atypical .assuming that such an atypical universe ( or other universes with similar pathologies ) can be disregarded , we have shown that a measurement of the initial position that does not disturb the wave function is not possible .summing up , the results in subsections [ sec221 ] , [ sec222 ] and [ sec223 ] show the fundamental differences between classical and bohmian mechanics regarding the possibility of experimentally ascertaining the initial conditions of a system .both are deterministic theories but , while according to the former we can in principle measure the initial conditions and possibly make a precise prediction of the future evolution of a system , this is not the case according to the latter .we can now see these same results under a new light .since in bohmian mechanics we can not measure the state of a system , this entails that we do not have any means to experimentally select or produce a set of systems that are guaranteed to be identical in the same bohmian state .imagine that we have a collection of electrons all prepared so that their effective wave function is the same . from the standpoint of the orthodox quantum - mechanical approach ( and of any quantum theory that assumes that the wave function alone provides a _ complete _ description of a system ) , these electrons are identical as they have the same wave function .if we measure a property and we obtain different results , it will be puzzling .we are then forced to accept indeterminism and that there is no cause of the different results . from the standpoint of bohmian mechanics ,however , the electrons are not identical : even if they have the same effective wave function , their positions ( relative to the wave function ) need not be the same .therefore , there is no puzzle if they behave differently and produce different results upon measurement .different initial ( bohmian ) states give different final results ! given that we can not make a finer - grained selection procedure ( discerning at the same time the wave function and the positions ) , the fact that we can not make precise predictions concerning the future evolution of each electron is naturally accounted for : we can not do so because we lack information on the complete state of each individual system . as we will see in the next section ,the most we can do is adopt a statistical treatment of our ensemble , making a guess as to the distribution of the positions given the wave function .on the one hand , we have explained in subsection [ sec222 ] that the wave function can not be measured . on the other hand ,we have shown in subsection [ sec223 ] that , even when the wave function is properly `` prepared '' , a measurement of the position of a particle necessarily perturbs the `` prepared '' wave function ( unless the latter is very close to a delta function in the position representation ) .these results concerning our ignorance of the initial state of the wave function and the initial positions of the particles imply an unavoidable unpredictability in the deterministic bohmian theory . 
at this point , the reader may wonder whether these conclusions are in conflict with the weak measurement " techniques that have recently attracted a lot of attention both from a theoretical and an experimental point of view .in particular , the work entitled `` direct measurement of the wave function '' shows the first experimental reconstruction of the transverse spatial wave function of a photon .the experimental procedure for such a reconstruction is the following .two consecutive measurements are done in a quantum system with a well prepared wave function .the first measurement performs a _ weak _ measurement of the position , which implies a _ weak _ perturbation of the wave function .the second measurement is done on the previously weakly perturbed wave function through a projective strong measurement of the momentum that finally _ collapses _ the wave function .the above two consecutive measurements are repeated in many experiments , each one with an initial wave function prepared " in an identical way .then , from all the experiments , only the experiments whose measured momentum gives a particular value are selected ( and the rest of experiments are disregarded ) .the average value of the measured position computed only from the set of experiments that have been previously selected allows to reconstruct the wave function .a discussion of this experimental procedure can be found in .the average value obtained from the above procedure ( that provides information beyond that obtained from standard projective strong measurements ) is called the `` weak value '' .does the so - called `` direct measurement '' of the wave function in ref . contradict the main result of subsection [ sec222 ] ?obviously , no . throughout this paper (unless otherwise indicated ) , we refer to a measurement as the procedure to get an outcome from a single experiment .the value of a measurement of a single experiment can be related to a position of a ( measuring apparatus ) pointer while , by its own construction , a weak value can not be related with the position of a pointer in a unique experiment .therefore , the result of ref . is not the type of `` direct measurement '' that we were dealing with in this paper .the authors of ref . do also use the word `` direct '' in their title to differentiate their experiment with weak values from another experimental technique , quantum tomography , which estimates the wave function from a collection of projective ( strong ) measurements of observables .each observable in this tomographic technique is obtained from a different experiment with an identically prepared wave function .we prefer here to use the words `` direct measurement '' when the information is obtained from an individual quantum system in a unique experiment ( with a single- or multi - time measurement ) . with our definition , extracting information of the wave function from quantum tomography or from weak values can not be considered a direct measurement of the wave function , because it is not done in a single experiment . 
for the same reason, experiments involving weak values of the (bohmian) velocity cannot be considered as direct measurements of the (bohmian) trajectories. when it comes to discussing phenomena such as uncertainty, unpredictability and randomness from the standpoint of bohmian mechanics, most authors start with the quantum equilibrium hypothesis, which is, in fact, the essential statistical consideration assumed in bohmian mechanics in order to determine the initial positions of particles probabilistically. as we will see, this hypothesis is crucial for establishing the empirical content of the theory and it guarantees the empirical equivalence of bohmian mechanics with ordinary non-relativistic quantum mechanics. yet here, we have preferred a different approach, showing first that certain limits on what can be measured arise in bohmian mechanics. this has been done without explicitly invoking the details of the quantum equilibrium hypothesis, focusing instead on the dynamical laws of the theory. these limits, which do not arise in classical mechanics, in turn imply limits on what can be predicted. we ended the last section with the idea that if we have a collection of systems prepared similarly (with the same effective wave function), then we cannot control their positions, since measuring the positions would perturb the wave function. while it could therefore seem as if no prediction can actually be made, the quantum equilibrium hypothesis comes to our rescue with an explicit suggestion of how the positions of systems with the same wave function are statistically distributed. in what follows, when describing the quantum equilibrium hypothesis and its implications, we are reviewing in general the work of dürr, goldstein and zanghì from 1992 in ref. the quantum equilibrium hypothesis can be stated as follows: *quantum equilibrium hypothesis:* for an ensemble of identical systems, each having the wave function $\psi$, the empirical distribution of the configuration $x$ of the particles is given by $|\psi(x)|^{2}$. all quantum experiments (without exception) provide strong empirical support for this hypothesis. it is well known that for any quantum theory to be compatible with the empirical results of all available data regarding quantum phenomena, the bohmian theory among them, it has to satisfy the born rule. this rule states that, if a system has a well-known (effective) wave function $\psi$ at time $t$, the probability of finding particles with configuration $x$ in volume $dx$ is equal to $|\psi(x,t)|^{2}\,dx$.
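in symbols (our reconstruction and notation, since the original displays were lost): the hypothesis, the equivariance property that sustains it over time, and the born rule it underwrites read

```latex
% quantum equilibrium at an initial time ...
\rho(x, t_{0}) = |\psi(x, t_{0})|^{2}

% ... is preserved by the dynamics (equivariance), so
\rho(x, t) = |\psi(x, t)|^{2} \quad \text{for all } t,

% which is exactly the Born rule for position measurements:
\mathrm{Prob}\bigl(X_{t} \in dx\bigr) = |\psi(x, t)|^{2}\, dx .
```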
in consequence , if we want to ensure that quantum mechanics and bohmian mechanics have the same empirical content , the born rule has to hold in bohmian mechanics as well .but in bohmian mechanics , the measured configuration always corresponds with the actual configuration of the system .therefore , the empirical equivalence of bohmian mechanics and quantum mechanics is satisfied when the quantum equilibrium hypothesis is considered .for bohmian mechanics to predict the results determined experimentally , we have to assume the quantum equilibrium hypothesis , which enters the theory either as a postulate or as a consequence of some deeper physical consideration .an insightful justification for the quantum equilibrium hypothesis has been proposed by drr et al .they argue that the quantum equilibrium hypothesis is just a consequence of living in a _ typical _ universe . given that eqs .( [ scho ] ) and ( [ velo ] ) are deterministic , picking an initial configuration for all the bohmian particles , , amounts to picking one complete possible history of the universe .consider a history with many subsystems of the universe that , at different places and times , have the same conditional wave function , ( with respect to each one s own subsystem coordinates ) .we will say that the quantum equilibrium hypothesis is satisfied in this particular history if the actual empirical distribution of the configurations of these subsystems suitably approximates the distribution . using the law of large numbers and assuming that initial configurations ( and therefore histories ) are weighted with the measure given by , drr et al . show that _ most _ initial configurations lead to histories that satisfy the quantum equilibrium hypothesis ( if we understand as giving a measure of typicality over the initial global configuration of particles ; that is , over the set of bohmian histories ) is a subtle and complicated matter. understood literally as a probabilistic distribution , it invites one either to think of a supernatural being playing with the initial conditions or to a subjectivist reading .we do not favor either of these .with respect to this question , bell eloquently asserts : `` a single configuration of the world will show statistical distributions over its different parts .suppose , for example , this world contains an actual ensemble of similar experimental set - ups .[ ... ] it follows from the theory that the ` typical ' world will approximately realize quantum mechanical distributions over such approximately independent components .the role of the hypothetical ensemble is precisely to permit definition of the word ` typical ' . ''( , p. 129 ) . herewe want to go along with bell in considering that is a measure of typicality not a probabilistic measure and to stress that the only role of the hypothetical ensemble is precisely to permit definition of the word ` typical . ' for more details on this issue , see . 
] . note that this account is not circular, since dürr et al. derive the quantum equilibrium hypothesis (which is a constraint applied to subsystems of the universe with well-defined effective or conditional wave functions) from the assumption that the initial configuration of the whole universe is distributed according to $|\Psi_{0}|^{2}$, which is a constraint on bohmian histories. if it is assumed that the configuration of the whole universe at the initial time is distributed according to $|\Psi_{0}|^{2}$ and given that $|\Psi_{t}|^{2}$ is equivariant, the following *fundamental conditional probability formula* can easily be derived: $$\mathbb{P}\bigl(X_{t} \in dx \,\big|\, Y_{t} = y\bigr) = \frac{|\Psi_{t}(x, y)|^{2}\, dx}{\int |\Psi_{t}(x', y)|^{2}\, dx'} = |\psi_{t}(x)|^{2}\, dx .$$ recall that here, $x$ is the variable of the (sub)system of interest, while $y$ represents the variables of the environment and $|\Psi_{t}(x,y)|^{2}$ is the joint distribution of the configuration. in addition, $\psi_{t}(x) = \Psi_{t}(x, y) / (\int |\Psi_{t}(x', y)|^{2}\, dx')^{1/2}$ is the normalized conditional wave function. it follows from eq. ([cond-prob]) that even in the case when the exact detailed configuration of the whole environment, $y$, is precisely known (information which is not accessible via measurements), there is no more information on the position of the subsystem studied than that expressed in the right-hand side of eq. ([cond-prob]). in other words, even if an experimenter could know exactly all the positions of the particles composing the measuring apparatus or environment associated with a quantum subsystem, the maximum information on the position of the particle that constitutes the subsystem being studied is the modulus squared of its conditional wave function. in this sense, dürr _et al._ have called the uncertainty implied by eq. ([cond-prob]) _absolute uncertainty_. all the empirical statistical contents of bohmian mechanics follow from this formula. it is worthwhile making the following point: the absolute uncertainty and conditional probability formula just introduced do not imply that precise information on the configuration of the subsystem cannot be obtained. the crucial aspect expressed in eq. ([cond-prob]) is that such knowledge (precise information on the subsystem) must be mediated by $\psi_{t}$. but in bohmian mechanics, $\psi_{t}$ does not merely represent our knowledge of the system; it also has an important and crucial dynamical aspect, i.e. it guides the motion of the particles. this implies that the absolute uncertainty just introduced above embodies, when the specific dynamics is considered, _absolute unpredictability_. in section [sec2], we used dynamical arguments to show that we cannot know the position and the wave function of a subsystem of interest at the same time, because measuring the position always disturbs the wave function. in what follows, we want to show that this is in perfect agreement with the statistical considerations introduced above in this subsection. in order to see this, we again consider the subsystems $x$ and $y$ mentioned in the numerical example of subsection [sec223], with an initial (conditional or effective) wave function with some spread along the $x$-axis, as represented in fig. we will also consider that the interaction between the subsystems $x$ and $y$ constitutes a measurement of the position of $x$: after the interaction, the final position of the pointer, $Y$, is suitably correlated with $X$, so we can infer the latter with precision by looking at the former. finally, let us assume (for the sake of the argument) that the (effective) wave function of the system at the final time is equal to that function at the initial time, i.e. $\psi_{\mathrm{final}} = \psi_{\mathrm{initial}}$
.now , this last condition contradicts eq .( [ cond - prob ] ) for the following reason .given the knowledge of the environment , we know the position of perfectly .for instance , the probability distribution of the position of ( conditioned by our knowledge that the final pointer position is nm ) is approximately . from eq .( [ cond - prob ] ) , it then follows that the ( conditional ) wave function after the measurement is , indeed , a delta function .however , above we also require that .the two requirements are incompatible , because ( unless our initial wave function was in fact a delta function , which contradicts our supposition ) .as can be seen from this example , it follows from the fundamental conditional probability formula that if , after an experiment , by knowing the position of the pointer we can infer with precision the position of the object , then the conditional wave function of the object after the experiment must approximate a delta function regardless of what the initial conditional wave function was .in section [ sec2 ] , we relate quantum randomness with our inability to measure the initial conditions of a quantum system .next , in section [ sec3 ] we claimed that quantum randomness arises because our universe is in quantum equilibrium .these two clarifications of the concept of quantum randomness may seem unconnected . in this section , we want to briefly argue that this is not the case and that the notion of measurement is indeed closely related with the statistical notions of equilibrium and non - equilibrium . measuring implies obtaining knowledge of a system through the use of an apparatus . independently of whether we are considering a classical or a quantum measurement, such knowledge requires the existence of a correlation between the system and the apparatus ; a correlation that must be stable and robust enough to lead to the formation of a permanent record ( e.g. , a black spot on a photographic plate , a computer printout , etc . ) .we usually take the possibility of knowledge ( that is , the possibility of the existence of strong and robust correlations between different subsystems ) for granted .yet the fact that this type of correlations does exist depends on how our universe works ; and its working could easily have been otherwise .consider , for instance , the toy - model of a classical measurement that we introduced in subsection [ sec221 ] .the very fact that there is a macroscopic pointer that correlates with some property , , of a system , as represented by the hamiltonian in eq .( [ al1 ] ) , presupposes a scenario that is clearly out of equilibrium .if the whole universe were in thermodynamic equilibrium , no such macroscopic correlations could arise .therefore , the very possibility of measuring is related to the equilibrium conditions that hold in a particular situation .when it comes to thermodynamic equilibrium , the universe can be considered to be a sea of non - equilibrium with some islands of equilibrium .this fact regarding our universe allows for the ubiquitous existence of measuring - type interactions , such as those described by the hamiltonian in eq .( [ al1 ] ) . when it comes to quantum equilibrium , however , the situation is not the same .all the empirical evidence we have suggests that our universe is _ globally _ in quantum equilibrium .therefore , _ all _ systems are in quantum equilibrium and the limitations that this condition imposes upon measurements admit no exception . 
in the next subsection , we explore these limitations again . we have shown that in classical mechanics it is possible to establish a perfect correlation between a pointer and the position of a particle . here , we now argue why the same measurement of position is not possible in a universe in quantum equilibrium .a quantum measurement of position also requires a correlation between and .the two positions are , however , in quantum equilibrium . because of this equilibrium , when we predict the ( bohmian ) position of the particle we have to treat and as random variables distributed according to .therefore , different experiments are associated with different random initial positions that , via the guiding law in equation ( [ velo_eff ] ) , will correspond to different and at the final time . by equivariance , after the experiment , the probability distribution of and is given by .therefore , the only way to really establish a close correlation between and in all experiments is through the quasi - delta function shown in fig .in contrast , quantum equilibrium implies that the type of wave function depicted in fig .[ fig3 ] will never provide perfect correlation between the system and the pointer ( the same value of the pointer indicating is compatible with many different positions of the system in different experiments ) . finally , let us stress again that , as opposed to other types of equilibrium ( thermal , electrostatic or thermodynamical ) , quantum equilibrium does not require relaxation times .as evidenced by all experiments , our universe is in quantum equilibrium , and all subsystems satisfy the quantum equilibrium hypothesis at any time .this is just a consequence of the equivariant property of the universal wave function discussed in section [ sec2 ] .thus , there are unavoidable limitations to our knowledge of the positions of particles . in fact , once the wave function is prepared , there is an _absolute uncertainty _ regarding the positions of the particles .we have tried to answer the question : _ how does quantum uncertainty emerge from deterministic bohmian mechanics ? _ the equations of motion of classical and bohmian mechanics are both fully deterministic. however , to be able to determine the output of a ( classical or quantum ) experiment with certainty , we have to know the initial conditions of the system .we have seen that in classical mechanics , there are many scenarios where the initial conditions can be measured and therefore we can predict the future evolution of the system with certainty . 
meanwhile , in other classical systems ( such as chaotic systems or those involving a very large number of particles ), we can not determine the initial conditions with enough precision to make predictions about the future with certainty .in bohmian mechanics , the initial state of a system ( including both the positions of the particles and the wave function ) can not be determined experimentally .therefore , as our knowledge of the initial conditions is constrained , we can not predict the future evolution of the quantum system with certainty .the reader may realize that the foregoing answer as to how a ( classical or quantum ) deterministic theory becomes unpredictable is quite trivial .however , a look at the history of science shows that the unavoidable randomness of quantum systems opened an intense debate on the impossibility of using explanations based on deterministic laws for quantum phenomena .the famous ( and incorrect ) von neumann theorem and the related impossibility proofs are the most evident examples of the intensity of the arguments against deterministic quantum theories . in section [ sec2 ]we show that the bohmian dynamical laws do not allow the measurement of a wave function in a single experiment . even assuming a _ preparation _ of the wave function ( not a measurement ) ,then the same laws do not allow us to determine the position of the particle without modifying the wave function ( unless the initial wave function is a position eigenstate ) . without the possibility of having experimental access to the initial conditions ,the deterministic bohmian theory becomes a theory that involves uncertainty in its predictions .we have not only provided an answer to the question regarding how the unpredictability of the results of quantum experiments can follow from a deterministic theory , but we have also introduced the fundamental statistical hypothesis that quantifies the amount of randomness that appears in quantum experiments . in classical systems , when the initial conditions are inaccessible , classical statistical mechanics provides probabilistic information on typical initial conditions . 
in quantum systems , since the initial positions of particles are always inaccessible via measurement ( without converting the initial wave function into a position eigenstate ) , the quantum equilibrium hypothesis determines the probability of different initial positions .this hypothesis is merely a consequence of the fact that our universe is always in quantum equilibrium ( in the sense discussed in section [ sec3 ] ) .all the dynamical and statistical insights of bohmian mechanics can be summarized in the fundamental conditional probability formula , which states that the maximum information on the position of a particle can be obtained from the modulus squared of its ( conditional ) wave function : .stated simply , any knowledge of the particle positions must unavoidably be mediated by the wave function .this imposes an _ absolute uncertainty _ on quantum mechanics , which is the fundamental key to understanding how bohmian mechanics , despite being deterministic , can account for all quantum predictions , including quantum randomness and uncertainty .this work has been partially supported by the fondo europeo de desarrollo regional ( feder ) and ministerio de economa y competitividad through the spanish projects no .tec2012 - 31330 and no .tec2015 - 67462-c2 - 1-r , the generalitat de catalunya ( 2014 sgr-384 ) , and by the european union seventh framework program under the grant agreement no .604391 of the flagship initiative `` graphene - based revolutions in ict and beyond '' .a.s.s work was supported by the project ffi2012 - 37354 funded by the spanish ministry of economy and competitiveness .n.z . was supported in part by the italian _ istituto nazionale di fisica nucleare_. j. von neumann , `` mathematische grundlagen der quantenmechanik '' ( springer verlag , berlin , 1932 ) , english translation by : r.t .beyer , mathematical foundations of quantum mechanics ( princeton university press , princeton , 1955 )
|
bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function . in this theory , the state of a system of particles is defined by the actual positions of the particles and the wave function of the system ; and the state of the system evolves deterministically . thus , the bohmian state can be compared with the state in classical mechanics , which is given by the positions and momenta of all the particles , and which also evolves deterministically . however , while in classical mechanics it is usually taken for granted and considered unproblematic that the state is , at least in principle , measurable , this is not the case in bohmian mechanics . due to the linearity of the quantum dynamical laws , one essential component of the bohmian state , the wave function , is not directly measurable . moreover , it turns out that the measurement of the other component of the state , namely the positions of the particles , must be mediated by the wave function ; a fact that in turn implies that the positions of the particles , though measurable , are constrained by _ absolute uncertainty _ . this is the key to understanding how bohmian mechanics , despite being deterministic , can account for all quantum predictions , including quantum randomness and uncertainty .
|
hydraulic tomography ( ht ) is a method for characterizing the subsurface that consists of applying pumping in wells while aquifer pressure ( head ) responses are measured . using the data collected at various locations , important aquifer parameters ( e.g. , hydraulic conductivity and specific storage ) are estimated . an example of such a technique is transient hydraulic tomography ( reviewed in ) . oscillatory hydraulic tomography ( oht ) is an emerging technology for aquifer characterization that involves a tomographic analysis of oscillatory signals . here we consider that a sinusoidal signal of known frequency is imposed at an injection point and the resulting change in pressure is measured at receiver wells . consequently , these measurements are processed using a nonlinear inversion algorithm to recover estimates for the desired aquifer parameters . oscillatory hydraulic tomography has notable advantages over transient hydraulic tomography ; namely , a weak signal can be distinguished from the ambient noise , and by using signals of different frequencies , we are able to extract additional information without having to drill additional wells . using multiple frequencies for oht has the potential to improve the quality of the image . however , it involves considerable computational burden . solving the inverse problem , i.e. , reconstructing the hydraulic conductivity field from pressure measurements , requires several applications of the forward ( and adjoint ) problem for multiple frequencies . as we shall show in section [ sec : application ] , solving the forward ( and adjoint ) problem involves the solution of shifted systems for multiple frequencies . for finely discretized grids , the cost of solving the system of equations corresponding to each frequency can be high , to the extent that it might prove computationally prohibitive when many frequencies are used , for example , on the order of . the objective is to develop an approach in which the cost of solving the forward ( and adjoint ) problem for multiple frequencies is not significantly higher than the cost of solving the system of equations for a single frequency ; in other words , the cost should depend only weakly on the number of frequencies . direct methods , such as sparse lu , cholesky or ldl factorization , are suited to linear systems in which the matrix bandwidth is small , so that the fill - in is somewhat limited . an additional difficulty that direct methods pose is that , for solving a sequence of shifted systems , the matrix has to be re - factorized for each frequency , resulting in a considerable computational cost . by contrast , krylov subspace methods for shifted systems are particularly appealing since they exploit the shift - invariant property of krylov subspaces to obtain approximate solutions for all frequencies by generating a single approximation space that is shift independent . several algorithms have been developed for dealing with shifted systems . some are based on lanczos recurrences for symmetric systems ; others use the unsymmetric lanczos iteration , and some others use the arnoldi iteration . shifted systems also occur in several other applications such as control theory , time dependent partial differential equations , structural dynamics , and quantum chromodynamics ( see and references therein ) .
hence , several other communities can benefit from advances in efficient solvers for shifted systems . the krylov subspace method that we propose is closest in spirit to . however , as we shall demonstrate , we have extended their solver significantly . * contributions * : our major contributions can be summarized as follows : * we have extended the flexible arnoldi algorithm discussed in for shifted systems of the form to systems of the form for , which employs multiple preconditioners of the form . in addition , we provide some analysis for the convergence of the solver . * when an iterative solver is used to apply the preconditioner , we derive an error analysis that gives us stopping tolerances for monitoring convergence without constructing the full residual . * our motivation for the need for fast solvers for shifted systems comes from oscillatory hydraulic tomography . we describe the key steps involved in inversion for oscillatory hydraulic tomography , and discuss how the computation of the jacobian can be accelerated by the use of the aforementioned fast solvers . * limitations * : the focus of this work has been on the computational aspects of oscillatory hydraulic tomography . although the initial results are promising , several issues remain to be resolved for application to realistic problems of oscillatory hydraulic tomography . for example , we are inverting for the hydraulic conductivity assuming that the storage field is known . in practice , the storage is also unknown and needs to be estimated from the data . moreover , under more realistic conditions ( higher variance in the log conductivity field , and measurement noise added in a realistic manner ) , the addition of information from different frequencies may significantly improve the performance . we will deal with these issues in another paper . the paper is organized as follows . in section [ sec : krylov ] , we discuss the krylov subspace methods for solving shifted linear systems of equations based on the arnoldi iteration using preconditioners that are also shifted systems . in section [ sec : geneigen ] , we discuss the convergence of the iterative solver and its connection to the convergence of the eigenvalues of the generalized eigenvalue problem . in section [ sec : inexact ] , we discuss an error analysis when an iterative method is used to invert the preconditioner matrices . in section [ sec : application ] , we discuss the basic constitutive equations in oht , which can be expressed as shifted linear systems of equations , and discuss the geostatistical method for solving inverse problems . finally , in section [ sec : numerical ] we present some numerical results on shifted systems and then discuss numerical results involving the inverse problem arising from oht . we observe significant speed - ups using our krylov subspace solver . the goal is to solve systems of equations of the form . note that the shifts , for , are ( in general ) complex . we assume that none of these systems are singular . in particular , for our application , and are stiffness and mass matrices , respectively , and are positive definite , but our algorithm only requires that they be invertible .
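before the formal review , the shift - invariance idea can be made concrete for the special case in which the second matrix is the identity . the sketch below is our own illustration ( function names are ours ) : it builds a single arnoldi basis and reuses it to obtain fom approximations for every shift .

```python
import numpy as np

def arnoldi(A, b, m):
    """build the orthonormal basis V_{m+1} and (m+1) x m hessenberg H for K_m(A, b)."""
    n = len(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                     # modified gram-schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def shifted_fom(A, b, shifts, m=50):
    """approximate (A + sigma I)^{-1} b for all shifts from ONE arnoldi basis,
    exploiting K_m(A + sigma I, b) = K_m(A, b)."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m); e1[0] = np.linalg.norm(b)
    # projected shifted system: (H_m + sigma I) y = beta e1,  x = V_m y
    return [V[:, :m] @ np.linalg.solve(H[:m, :] + sigma * np.eye(m), e1)
            for sigma in shifts]

# small sanity check
rng = np.random.default_rng(1)
n = 200
A = np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
for sigma, x in zip([0.1, 1.0, 10.0], shifted_fom(A, b, [0.1, 1.0, 10.0])):
    print(sigma, np.linalg.norm((A + sigma * np.eye(n)) @ x - b))
```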
by using a finite volume or lumped mass approach , the mass matrices become diagonal , but this assumption is not necessary . later , in sections [ sec : forward ] and [ sec : sensitivity ] , we will show how such equations arise in our applications . [ figure [ fig : arnoldi ] : a schematic of the arnoldi relation . ] as a brief introduction , we review the krylov based iterative solvers for the system of equations . in particular , we describe the variants generated by the arnoldi iteration , such as the full orthogonalization method ( fom ) and the generalized minimum residual method ( gmres ) . krylov solvers typically generate a sequence of orthonormal vectors . these vectors form a basis for the krylov subspace : . at the end of the iteration , a typical relation is obtained of the form ( see figure [ fig : arnoldi ] ) , where the matrices involved are defined in equation . by using inexact applications of the preconditioner , the vectors for are no longer the same vectors generated from algorithm [ alg : arnoldimod ] . in particular , is no longer a krylov subspace generated by . however , by construction , is still an orthogonal matrix . having constructed the matrix , we seek approximate solutions spanned by the columns of , i.e. , solutions of the form . the true residual corresponding to the approximate solution can be computed as follows , . the columns of the matrix are not computed in practice because they require an additional matrix - vector product with . as a result , computing the true residual is expensive . however , in order to monitor the convergence of the iterative solver , we need bounds on the true residual . using such bounds , we can derive stopping criteria for the flexible krylov solvers for shifted systems with inexact preconditioning . to do this , we first derive bounds on the norm of the inexact residual and a bound on the difference between the true and the inexact residual . a simple application of the triangle inequality for vector norms leads us to the desired bounds on the true residual . the inexact residual is defined as . the expression for is similar to the exact residual , ignoring the error due to early termination of the inner iterative solver , i.e. , . it is easy to verify that . we now derive an expression for the norm of the difference between the true and the inexact residuals . finally , the norm of the true residual can be bounded using the following relation . this bound on the true residual gives us a convenient expression to monitor the convergence of the iterative solver for each system corresponding to a given shift . we can also derive specialized results for the flexible fom / gmres for shifted systems with inexact preconditioning , using an argument similar to proposition 4.1 in .
let and be the true residuals resulting from the flexible fom and gmres for shifted systems , respectively . we have the following error bounds . one of the main results of that work is that it provides theory for why the residual norm due to inexact preconditioning can be allowed to grow at the later outer iterations . in particular , it provides computable bounds for monitoring the outer krylov solver residual when the termination criterion for the inner preconditioning is allowed to change at each iteration , from which efficient termination criteria can be derived . we have not pursued this issue , and the reader is referred to for further details . in this section , we briefly review the application of oscillatory hydraulic tomography and the geostatistical approach for solving the resulting inverse problem . the equations governing ground water flow through an aquifer for a given domain with boundary are given by , where ( with units [ 1 / l ] ) represents the specific storage and ( with units [ l / t ] ) represents the hydraulic conductivity . in the case of one source oscillating at a fixed frequency [ radians / t ] , is given by . to model periodic simulations , we will assume the source to be a point source oscillating at a known frequency and peak amplitude at the source location . in the case of multiple sources oscillating at distinct frequencies , each source is modeled independently with its corresponding frequency as in , and the results are then combined to produce the total response of the aquifer . since the governing equation is linear , we assume the solution ( after some initial time has passed ) can be represented as , where denotes the real part and is known as the phasor ; is a function of space only and contains information about the phase and amplitude of the signal . assuming this solution , the equations in the phasor domain are . the differential equation along with the boundary conditions is discretized using fenics with standard linear finite elements . solving it for several frequencies results in systems of shifted equations of the form , where and are the stiffness and mass matrices , respectively , that arise precisely from the discretization of . the geostatistical approach ( described in the following papers ) is one of the prevalent approaches for solving stochastic inverse problems . the idea is to represent the unknown field as the sum of a few deterministic low - order polynomials and a stochastic term that models small - scale variability . inference from the measurements is obtained by invoking bayes ' theorem , through the posterior probability density function , which is the product of two parts : the likelihood of the measurements and the prior distribution of the parameters . let be the function to be estimated , here the log conductivity , and let it be modeled by a gaussian random field . after discretization , the field can be written as .
here is a matrix of low - order polynomials , are a set of drift coefficients to be determined , is a covariance matrix with entries , and is a generalized covariance kernel . the measurement equation can be written as , where represents the noisy measurements and is a random vector of observation error with mean zero and covariance matrix . the matrices , and are part of a modeling choice , and more details on choosing them can be obtained from the references . the operator is known as the parameter - to - observation map or _ measurement operator _ , with entries that are the coefficients of the oscillatory terms in the expression , where is the location of the measurement sensor and , where is the number of measurement locations . at each measurement location , two coefficients are measured for every frequency . in all , we have measurements , where is the number of frequencies . following the geostatistical method for quasi - linear inversion , we compute and corresponding to the maximum - a - posteriori probability , which is equivalent to computing the solution to a weighted nonlinear least squares problem . to solve the optimization problem , the gauss - newton algorithm is used . starting with an initial estimate for the field , the procedure is described in algorithm [ alg : quasi ] : 1 . compute the jacobian as ; 2 . solve the system of equations ; 3 . compute the update as ; 4 . repeat steps 1 - 3 until the desired tolerance has been reached ( if necessary , add a line search ) . algorithm [ alg : quasi ] requires , at each iteration , computation of the matrices and . since the prior covariance matrix is dense , a straightforward computation of can be performed in . however , for fine grids , i.e. , when the number of unknowns is large , storing can be expensive in terms of memory and computing can be computationally expensive . for regular equispaced grids and covariance kernels that are stationary or translation invariant , an fft based method can be used to reduce the storage costs of the covariance matrix to and the cost of a matrix - vector product to . for irregular grids , the hierarchical matrix approach can be used to reduce the storage costs and the cost of an approximate matrix - vector product to for a wide variety of covariance kernels . thus , in either situation , the computation of can be done in and the cost of computing is . computing the jacobian matrix at each iteration is often an expensive step . although explicit analytical expressions for the entries are nearly impossible to obtain , several approaches exist . one simple approach is to use finite differences , but this approach is expensive because it requires one more run of the forward problem than the number of parameters to be estimated . for large problems and on finely discretized grids , the number of unknowns can be quite large , and so this procedure is not feasible . to reduce the computational cost associated with calculating the sensitivity matrix , we use the adjoint state method ( see , for example , ) . this approach is exact and is computationally advantageous when the number of measurements is far smaller than the number of unknowns . for a complete derivation of the adjoint state equations for oscillatory hydraulic tomography , refer to .
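a minimal numpy sketch of one iteration of algorithm [ alg : quasi ] may help . it uses the standard block form of the quasi - linear geostatistical update , assumes the jacobian has already been computed ( e.g. , with the adjoint method discussed next ) , and all names are ours .

```python
import numpy as np

def quasi_linear_step(s, y, h, J, Q, R, X):
    """one gauss-newton update of the quasi-linear geostatistical method.

    s : current estimate of the (log conductivity) field, shape (n,)
    y : measurements, shape (m,)
    h : forward model output h(s), shape (m,)
    J : jacobian dh/ds at s, shape (m, n)
    Q : prior covariance (n, n); R : noise covariance (m, m)
    X : drift matrix (n, p)
    """
    JQ, JX = J @ Q, J @ X
    p = X.shape[1]
    # cokriging-type block system for the weights xi and drift coefficients beta
    A = np.block([[JQ @ J.T + R, JX],
                  [JX.T, np.zeros((p, p))]])
    rhs = np.concatenate([y - h + J @ s, np.zeros(p)])
    sol = np.linalg.solve(A, rhs)
    xi, beta = sol[:len(y)], sol[len(y):]
    return X @ beta + Q @ (J.T @ xi)   # next iterate s_{k+1}
```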
for the type of measurements described in , the entries of the sensitivity matrix can be calculated by the following expression : $\left\{ \int \left( [\,\cdot\,]\,\psi_{\omega } + \frac{\partial k({\textbf{x}})}{\partial s_j } \nabla \phi \cdot \nabla \psi_{\omega } \right) d{\textbf{x}} \right\}$ . since at each measurement location , corresponding to each frequency , two measurements are obtained from the coefficients of the oscillatory terms , the jacobian matrix has entries , where . here , is known as the _ adjoint solution _ that depends on the measurement location and the forcing frequency . it satisfies the following system of equations , where is the measurement location and is the particular frequency . the procedure for calculating the sensitivity matrix can thus be summarized as follows : 1 . for a given field , solve the forward problem for ; 2 . for each measurement and frequency , solve the adjoint problem for ; 3 . compute the integral in to calculate the sensitivity . since is evaluated for all for each measurement , the adjoint state method requires only forward model solves to compute the sensitivity matrix . thus , when the number of measurements is far fewer than the number of unknowns , the adjoint state method provides a much cheaper alternative for computing the entries of the jacobian matrix . this is typically the case in hydraulic tomography , where having several measurement locations is infeasible because it requires digging new wells . further , we realize that equation takes the same form as equation for multiple frequencies . thus , we can use the algorithms developed in section [ sec : krylov ] to solve the system of equations for as many right hand sides as measurements . it is possible to devise algorithms for multiple right hand sides in the context of shifted systems , but we will not adopt this approach . we present numerical results for the krylov subspace solvers and their application to oht . as mentioned before , we use the fenics software to discretize the appropriate partial differential equations . we use the python interface to fenics , with the ublas sparse linear algebra back - end . for the direct solvers we use the superlu package , interfaced through scipy , whereas for the iterative solver we use the algebraic multigrid package pyamg , with smoothed aggregation , along with the bicgstab iterative solver . in the following sections , for brevity , we mostly show results for the fom solver , but we observed similar results for the gmres method as well . this is also suggested by the result in proposition [ prop : gmres ] . in this section , we present some of the results of the algorithms that we have described in section [ sec : krylov ] . we now describe the test problem that we shall use for the rest of the section . we consider an aquifer in a rectangular domain with dirichlet boundary conditions on the boundaries . for the log - conductivity field , we consider a random field generated from an exponential covariance kernel using the algorithm described in . other parameters used for the model problem are summarized in table [ tab : parameters ] . we choose frequencies evenly spaced between the minimum and maximum frequencies , which results in systems , each of size . table [ tab : parameters ] : parameters chosen for the test problem .
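for reference , one standard way to draw such a random field on a regular grid is circulant embedding , which exploits the same fft diagonalization of stationary covariance kernels mentioned earlier . the sketch below is our own illustration and not necessarily the cited algorithm .

```python
import numpy as np

def sample_exponential_field(nx, ny, dx, corr_len, variance=1.0, seed=0):
    """draw a gaussian field with exponential covariance on a regular grid
    via circulant embedding (stationary kernel -> fft-diagonalizable)."""
    # covariance evaluated on the torus of the 2x-extended periodic grid
    x = dx * np.minimum(np.arange(2 * nx), 2 * nx - np.arange(2 * nx))
    y = dx * np.minimum(np.arange(2 * ny), 2 * ny - np.arange(2 * ny))
    r = np.sqrt(x[:, None] ** 2 + y[None, :] ** 2)
    c = variance * np.exp(-r / corr_len)
    lam = np.fft.fft2(c).real          # eigenvalues of the circulant embedding
    lam = np.maximum(lam, 0.0)         # clip tiny negative values, if any
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(c.shape) + 1j * rng.standard_normal(c.shape)
    f = np.fft.fft2(np.sqrt(lam / c.size) * z)
    return f.real[:nx, :ny]            # one sample of the log-conductivity field

logK = sample_exponential_field(64, 64, dx=1.0, corr_len=10.0)
```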
the number of preconditioners chosen varies based on the distribution of the shifts . a good rule of thumb is that the systems having shift will converge faster if there is a preconditioner with a nearby shift . when the size of the linear systems is much larger , direct solvers are much more expensive . in such cases , preconditioning would be done using iterative solvers . the error analysis in section [ sec : inexact ] provides insight into monitoring approximate residuals without constructing the true residuals . one can naturally extend the ideas in this paper to systems with multiple shifts and multiple right hand sides using either block or deflation techniques . we applied the flexible krylov solver to an application problem that benefited significantly from fast solvers for shifted systems . in particular , oscillatory hydraulic tomography is a technique for aquifer characterization . however , since drilling observation wells to obtain measurements is expensive , one of the advantages of oscillatory hydraulic tomography is obtaining more informative measurements by pumping at different frequencies using the same pumping locations and measurement wells . in future studies we aim to study more realistic conditions for tomography , including a joint inversion for storage and conductivity . this would ultimately be beneficial to practitioners . we envision that fast solvers for shifted systems would be beneficial for rapid aquifer characterization using oscillatory hydraulic tomography . the research in this work was funded by nsf award 0934596 , `` cmg collaborative research : subsurface imaging and uncertainty quantification '' and by nsf award 1215742 , `` collaborative research : fundamental research on oscillatory flow in hydrogeology . '' the authors would also like to thank their collaborators michael cardiff and warren barrash for useful discussions and the two anonymous reviewers for their insightful comments .
|
we discuss efficient solutions to sequences of shifted linear systems arising in computations for oscillatory hydraulic tomography ( oht ) . the reconstruction of hydrogeological parameters such as hydraulic conductivity and specific storage , using limited discrete measurements of pressure ( head ) obtained from sequential oscillatory pumping tests , leads to a nonlinear inverse problem . we tackle this using the quasi - linear geostatistical approach . this method requires repeated solution of the forward ( and adjoint ) problem for multiple frequencies , for which we use flexible preconditioned krylov subspace solvers specifically designed for shifted systems , based on ideas in . the solvers allow the preconditioner to change at each iteration . we analyze the convergence of the solver and perform an error analysis when an iterative solver is used for inverting the preconditioner matrices . finally , we apply our algorithm to a challenging application taken from oscillatory hydraulic tomography to demonstrate the computational gains of the resulting method .
|
single - radio multi - channel ( sr - mc ) wireless ad hoc networks ( wanets ) have gained significant attention in the past few years because of their great promise of low cost , high throughput and spectral efficiency . by using multiple orthogonal channels with a single radio , we can enhance spatial reuse , alleviate jamming attacks , and enable dynamic access to the scarce spectrum resource . several typical multi - channel macs have been proposed to fully utilize the single - radio multi - channel capability . broadcast is a fundamental operation in wireless networks for routing discovery , information dissemination , and so on . here we focus on the minimum latency broadcast scheduling operation , where broadcast latency is defined as the end - to - end latency by which all nodes in the network receive the broadcast message from the source node . such a concern is very important in various applications such as military communications , disaster relief and rescue operations . for some sr - mc networks , broadcast packets can be delivered in a dedicated control channel ( dcc ) . however , the dcc can be a bottleneck , vulnerable to jamming attacks , and even unavailable . therefore , in this paper , we consider sr - mc wanets without the dcc . given no dcc , the solutions to the mlbs problem in single - channel wanets can not work directly , because a single transmission can not reach all the neighboring nodes if the radios of neighboring nodes are tuned to different channels . in other words , it may cost multiple transmissions to deliver a message to all its neighbors . this property is similar to the _ partial broadcast property _ in duty - cycled wanets . however , parallel transmissions can still happen at different nodes within several channels at the same time ( we refer to this property as the _ multi - partial broadcast property _ ) , which is not allowed in duty - cycled networks . hence , solutions from duty - cycled networks can not achieve the best performance , and can be further optimized . on the other hand , compared with solutions for mr - mc wanets , the single - radio mode does not allow a node to transmit simultaneously in several channels , resulting in fewer parallel transmission opportunities . therefore , the _ multi - partial broadcast property _ brings a new challenge for designing efficient , collision - free broadcast protocols . in this paper , we investigate the minimum latency broadcast scheduling ( mlbs ) problem in sr - mc wanets . we first show that this problem is np - hard , and then design efficient algorithms with performance guarantees . to solve the problem , we construct a shortest - path tree ( spt ) , and schedule the transmissions layer by layer . by utilizing the _ multi - partial broadcast property _ , we can schedule the cross - layer and same - layer transmissions with polynomial - time complexity . our main contributions are summarized as follows : * we show that a basic transmission scheduling ( bts ) algorithm with an approximation ratio of 4k+12 can be obtained by modifying existing approaches properly , where k is the number of available orthogonal channels .
* we present an enhanced transmission scheduling ( ets ) algorithm by utilizing the parallel transmission opportunities , which has an improved approximation ratio of . the performance is evaluated through extensive simulations . the rest of the paper is organized as follows . section [ rw ] gives the related work . the network model and problem statement are presented in section [ pre ] . we first propose bts in section [ bts ] , and give ets in section [ ets ] . then we validate our results by simulations in section [ eval ] . section [ con ] concludes our paper . channel assignment in sr - mc wanets is the work most related to ours . in general , there are three kinds of channel assignment approaches : _ fixed _ , _ semi - dynamic _ and _ dynamic _ . in the fixed channel assignment method , nodes are assigned fixed channels for permanent use , and radios do not change the operating frequency . in _ semi - dynamic _ approaches , though the assigned reception channel is fixed , nodes can still change their transmission channel to communicate with neighbors that have different reception channels . in _ dynamic _ approaches , nodes are not assigned static channels , and can switch their channel dynamically according to a pre - defined rule , e.g. , quorum sequences . moreover , some works consider channel assignment and other problems jointly , e.g. , minimizing interference , fast data dissemination . in contrast , here we consider the minimum latency broadcast scheduling problem after channel assignment , and assume the _ semi - dynamic _ strategy . collision - free minimum latency broadcast scheduling is well studied in single - channel wanets . gandhi et al . show that the mlbs problem in udgs is np - hard . recently , huang et al . give an algorithm with an approximation ratio of . for duty - cycled wanets , hong et al . show mlbs to be np - hard too , and present an algorithm with approximation ratio , where is the length of one scheduling period . our sr - mc scenario has the multi - channel dimension , which is not considered in single - channel and duty - cycled wanets . qadir et al . propose several algorithms for minimum latency broadcasting in mr - mc , multi - rate wireless meshes . however , the proposed algorithms depend on the multi - radio capability ( i.e. , multi - connection links ) , and all the heuristic algorithms are evaluated by simulations without theoretical analysis . to the best of our knowledge , is the only paper to consider minimal latency broadcast directly in multi - channel cognitive radio networks , which is very close to ours . the key difference is that we allow channel switching while assumes not . moreover , shows the closeness of their solution to the optimal solution through simulations . instead , we give two algorithms with performance guarantees . the sr - mc wanets can be modeled as a unit disk graph ( udg ) , where is the set of nodes ( ) and is the set of links . an edge exists iff and are within each other ' s communication range . we also assume that time is slotted . each time slot is of equal length , and long enough for one packet transmission and reception . moreover , the slot boundaries are almost aligned , which can be achieved by local synchronization protocols . we further assume that reception is error - free if no collision happens , which is quite accurate because control packets are often well protected by the physical layer , e.g.
, a minimum data rate of 6 mbps in the ieee 802.11 a / g standard . both the synchronization and error - free assumptions are widely adopted by previous works . the sr - mc wanets have a total of orthogonal channels denoted by , and each node is equipped with only one radio . the radio interface can be set on any channel to transmit or listen , but not simultaneously . the reception channel is chosen randomly from during network initialization , which can be defined using a channel assignment function , for , where . neighboring nodes may have different reception channels . in order to enable connectivity , we assume that transmitting nodes can switch their channels to set up connections . note that in , the edge definition depends on topology instead of channel , since we allow channel switching . we also assume that the neighbors ' reception channels are known beforehand , which is often achieved during neighbor discovery . here we consider the single - source broadcast problem . suppose the source node is , and the broadcast task completes when all the other nodes receive the message sent from . assume starts the broadcast operation at time - slot . then we formulate the mlbs problem ( decision version ) in sr - mc wanets as follows ( mlbs - srmc ) : _ given a udg with channel assignment function , and a positive integer , is there an assignment of time slots and transmission channels to nodes , such that the broadcast scheduling is collision - free and the schedule length is no more than ? _ the mlbs - srmc problem is np - hard . we prove this theorem using the restriction technique . if we restrict the function to map to a single channel , our problem is exactly the mlbs problem in single - channel wanets , which is np - hard . hence the mlbs - srmc problem is np - hard . our objective can be interpreted as finding a broadcast schedule , where is the set of transmitting instances at time slot , i.e. , at time slot , only can transmit . after time slots , all nodes in receive messages from . our problem can be converted to minimizing . note that if we set the cost of each edge to one unit , we can construct a shortest - path tree ( spt ) rooted at . then the lower bound for broadcast is the depth of the spt , denoted by , i.e. , . let be an undirected udg . the subgraph of induced by a subset of is denoted by .
for a udg , it is well - known that the node coloring of induced by a smallest - degree - last ordering uses at most colors . an independent set ( is ) of a graph is a set of vertices in , no two of which are adjacent . a maximal independent set of is not a subset of any other is of . each node in can be adjacent to at most five nodes in any is of , and can have at most nineteen two - hop neighbors in any is of . table [ terminology ] summarizes the main notation used in this paper : the number of available orthogonal channels in ; the neighboring set of nodes in ; the shortest - path tree rooted in ; the depth of ; the nodes of layer in ; the nodes using channel in ; the nodes of layer using channel in ; the maximal independent set ( dominators ) of ; the set of parent nodes of set in ; the parent nodes ( connectors ) of which are selected greedily ; and the broadcast tree constructed by algorithm [ broadcast ] , including nodes , edges and cover function . in this section , we give an algorithm , basic transmission scheduling ( bts ) , for the minimum latency broadcast scheduling problem in udgs , which is a simple extension of existing approaches . let be the nodes of layer in , the nodes using channel in , and the nodes in layer using channel ( ) . then , bts can find a maximal independent set for each by adding eligible nodes sequentially . let be the set of parent nodes of in . note that nodes in are not guaranteed to be on reception channel . the key idea of bts is to schedule collision - free transmissions layer by layer , and channel by channel . taking layer as an example , bts consists of two steps : 1 . sequentially for ; 2 . simultaneously for . we call the layer- _ dominators _ , and the layer- _ connectors _ . for step 1 ) , we schedule transmissions channel by channel to avoid _ same - node _ collision , which means a node can be a parent of nodes in and nodes in ( ) . then we use distance2 - coloring of to achieve collision - free scheduling to cover . the distance-2 coloring method is widely used to schedule collision - free transmissions to avoid _ cross - node _ collision , which means that if two nodes within two hops transmit in the same slot , there is a collision at common neighbors . for step 2 ) , we also schedule collision - free transmissions in different channels simultaneously using distance2 - coloring of . the details are shown in algorithm [ a1 ] . it is easy to verify that the time complexity of bts is . [ algorithm [ a1 ] : bts pseudocode , taking the spt rooted at the source and its layers as input ; listing omitted . ] then we give a theorem that proves the correctness of bts and shows the upper bound of the latency given by bts . [ bts ] algorithm bts is correct , and provides a collision - free broadcast scheduling with latency at most . for algorithm bts , the transmissions are scheduled layer by layer . the transmissions in layer do not start until layer ends . we consider layer ( ) . assume nodes in are not covered , and nodes in are covered . for step 1 ) , we color in layer front - to - end by distance2 - coloring ; it is collision - free and can cover . furthermore , transmissions in different channels are sequential , thus we can avoid _ same - node _ collisions . after that , nodes in are covered . for step 2 ) , because is the maximal independent set of , which is a dominating set of , the smallest - degree - last distance2 - coloring of guarantees that collision - free transmissions of can cover . also , the parallel transmissions in different channels are collision - free .
then all nodes in are covered . thus algorithm bts is correct and collision - free . we now analyze the broadcast latency for layer . the cross - layer transmissions from layer to are channel by channel . hence we can consider a single channel , and then multiply . note that is still an independent set in the udg . hence a node in can have at most four neighbors in , because has a parent in layer in , which is independent of nodes in . then the distance2 - coloring of uses at most four colors . otherwise , if a node had a fifth color , it would mean that shares five neighbors with nodes in , i.e. , connects five neighbors in , which contradicts . hence four time slots are enough for single - channel cross - layer transmission . for transmissions from to , because is a maximal independent set , the smallest - degree - last ordering distance2 - coloring of uses at most 12 colors . since the same - layer transmissions in different channels can be in parallel , we do not need to multiply . hence , twelve time slots are enough . given that our analysis applies from layer to layer , the overall broadcast latency is at most . in other words , the bts algorithm has an approximation ratio of . in this section , we present an enhanced algorithm , ets , which has an approximation ratio of . we notice that bts uses sequential channel transmissions for the cross - layer phase , which is too conservative . also , the strict constraint that the next layer can not start transmissions before the last layer ends fails to utilize the natural collision - freedom of multi - channel transmissions . based on these two observations , we propose ets . ets is a broadcast tree based algorithm . if is the parent of , then is responsible for transmitting packets to collision - free . the formal description of constructing the broadcast tree is shown in algorithm [ broadcast ] . as stated in bts , we have _ connectors _ to connect _ dominators _ . ets differs from bts mainly in the selection of _ connectors _ . bts simply selects as _ connectors _ , and schedules transmissions in separate channels to avoid _ same - node _ collision . ets selects _ connectors _ greedily , i.e. , it selects parent nodes to cover the maximum number of uncovered _ connectors _ . note that here we select nodes from . all the nodes selected to cover are recorded in . for the _ dominators _ to cover , it is similar . [ algorithm [ broadcast ] : broadcast tree construction pseudocode ; listing omitted . ] the broadcast scheduling is shown in algorithm [ scheduling ] . though we still find available transmission slots channel by channel and layer by layer , we break the layered transmission constraint . in other words , layer can start before layer ends , but only if nodes in layer do not bring collisions to the already scheduled transmissions . let step 1 ) be , and step 2 ) be . note that for step 1 ) and 2 ) transmissions , we have and recorded , respectively . hence , we first select a tx node from ( ) sequentially , and then select the minimum time larger than the reception time that satisfies the no - collision constraints : ( 1 ) does not bring collisions to already scheduled transmissions at common neighbors ; ( 2 ) can not transmit in a slot in which it has already been assigned to other channels . the collision slots are recorded in . after scheduling all nodes in , we can find the maximal transmission time . from algorithms [ broadcast ] and [ scheduling ] , we can find that the time complexity of ets is also .
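the smallest - degree - last distance2 - coloring subroutine that both bts and ets rely on can be sketched as follows ; this is our own illustration , assuming the ( sub)graph to be colored is given as an adjacency dictionary .

```python
def smallest_degree_last_order(adj):
    """repeatedly remove a minimum-degree vertex; color in the REVERSE
    of the removal order (smallest-degree-last ordering)."""
    deg = {v: len(adj[v]) for v in adj}
    removed, order = set(), []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        order.append(v); removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return order[::-1]

def distance2_coloring(adj):
    """greedy coloring in which no two vertices within two hops share a color;
    colors correspond to collision-free time slots."""
    color = {}
    for v in smallest_degree_last_order(adj):
        two_hop = set(adj[v]) | {w for u in adj[v] for w in adj[u]}
        used = {color[u] for u in two_hop if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color
```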
[ algorithm [ scheduling ] : scheduling pseudocode ; listing omitted . ] we first give a lemma about the correctness of ets . [ correctness ] algorithm ets is correct and provides a collision - free broadcast scheduling . ets uses the no - collision rule to select transmission slots , so it must be collision - free . we only need to prove that ets provides a broadcast scheduling . assume all nodes in ( ) are covered now . we show that after the transmissions of and ( ) , all nodes in are covered . for any node , must belong to a particular set . if , it is covered by . if , it must be covered by . because nodes in transmit before nodes in , nodes in are fully covered . the proof is complete . [ dominators ] let . for any , where , . note that must be in some , since we select by channel . in other words , for . given any channel , for any node , , because . note that all interfering nodes of are also in . let denote the set of nodes consisting of an independent set within two hops from any node ; for udgs . in other words , for any , the maximal number of interfering nodes is 19 , because is an independent set of . hence , . because our argument holds for any channel , this completes the proof of lemma [ dominators ] . [ connectors ] let . for any , where , . for ets , is dominated by some node in . due to lemma [ dominators ] , . let be the set of interfering slots . must be less than the number of interfering nodes . for , the collision includes _ cross - node _ collision and _ same - node _ collision . assume . is responsible for transmitting packets to . hence , . because can connect at most five neighbors in an independent set of udgs , and has a parent in layer in , . it is trivial that , since we only have channels . though u can be in ( ) , this constraint holds for . because for selects transmission slots greedily , we only need to consider the maximal constraint over all channels . therefore , . this completes the proof . combining lemmas [ correctness ] and [ connectors ] , we have the following theorem . [ ets ] algorithm ets is correct , and provides a collision - free broadcast scheduling with an approximation ratio of . for the source , it needs at most time slots , i.e. , . nodes in transmit , and then nodes in transmit ( ) . finally , nodes in transmit , and at most 20 time slots are enough . hence , the overall latency is at most . thus , the approximation ratio is . in this section , we run simulations to study the performance of ets . since there are no directly applicable algorithms in sr - mc wanets , we use bts as the benchmark . the metric is broadcast latency . to show the optimal result , we also plot the lower bound of broadcast latency using the depth of . we consider the impact of the number of nodes and the number of orthogonal channels . to increase the depth of the constructed , we vary the network area with respect to . for example , when , the area size is . all nodes are randomly deployed in the corresponding areas , and their reception channels are randomly chosen from . we run the simulation 10 times , and show the average results .
for each time , we generate a new topology and channel assignment . first we evaluate the impact of , which ranges from to with step . the simulation results with different are shown in figure [ fig1 ] . it is obvious that ets performs better than bts . more importantly , the performance of ets is close to the lower bound , which demonstrates the gain of parallel multi - channel transmissions . furthermore , when becomes larger , the broadcast latency also rises due to the larger number of nodes in each channel set , but the linear trend persists . note that , in terms of the approximation ratio , the measured performance of both algorithms is much smaller than the theoretical results . this can be explained by the fact that our theoretical analysis considers the worst case , and the random deployments are probably not the worst case . then we study the impact of , which is set from 5 to 30 with step 5 . the simulation results with are shown in figure [ fig2 ] . the performance of ets is still better than that of bts , and close to the lower bound for the same reason mentioned above . note that the scale of the y - axis in figure 2(a ) is smaller . here , the depth of remains almost constant since does not change ( i.e. , the network size does not change ) . however , for bts , the broadcast latency grows sub - linearly with respect to . intuitively , with larger and the same , though the number of sequential channel transmissions grows linearly with respect to , the number of transmissions in each channel decreases since there are fewer nodes in each channel . for figure [ fig1 ] , with larger and constant , the transmissions in a single channel increase , but the number of sequential channel transmissions stays the same . this explains the linear and sub - linear phenomena in figures [ fig1 ] and [ fig2 ] . figure [ var ] shows the variance of the approximation ratio for different deployments . we can find that the variance is small . here we take , for example ; the results for other parameters are similar . as we would predict , different topologies and different reception channels leave the result almost stable . for bts , the broadcast latency increases almost linearly with , since the nodes in each layer of increase almost linearly with , and cross - layer transmissions are scheduled sequentially . note that as increases , the broadcast latency and approximation ratio of bts also increase accordingly , while those of ets stay almost constant . this is easy to understand , because we use sequential channel transmissions to avoid _ same - node _ collision in the cross - layer phase ; the cost is higher when is larger . both results show that ets outperforms bts in different scenarios . in our simulations , we compare our bts and ets algorithms with a modified existing greedy algorithm ( ga ) proposed by . the greedy algorithm first constructs a bfs tree . after that , ga runs in layers and selects a tx instance ( including node and channel ) which covers the maximum number of uncovered nodes in each layer . ga also calculates the rank of each node . for a node , a high rank means it is responsible for relaying packets further . in this paper , we consider the minimum latency broadcast scheduling problem in sr - mc wanets . we first identify the challenge and opportunity in such networks . to solve the np - hard problem , we give an algorithm , bts , with an approximation ratio of , which is modified from classical algorithms . then we propose an algorithm , ets , with approximation ratios of and . both have time complexity . the simulation results show that ets improves the performance over bts significantly , and comes close to the lower bound .
in the future , we want to complete our work by considering distributed algorithms and their performance bounds , and by designing channel - hopping based algorithms for broadcast scheduling under the _ dynamic _ channel assignment strategy . g. zhou , c. huang , t. yan , t. he , j.a . stankovic , and t.f . abdelzaher , `` mmsn : multi - frequency media access control for wireless sensor networks '' , in proc . infocom , 2006 . w. xu , w. trappe , y. zhang , and t. wood , `` the feasibility of launching and detecting jamming attacks in wireless networks '' , in proc . mobihoc , 2005 , pp . 46 - 57 . i.f . akyildiz , w. lee , and k.r . chowdhury , `` crahns : cognitive radio ad hoc networks '' , ad hoc networks , 2009 , pp . 810 - 836 . e. aryafar , o. gurewitz , and e.w . knightly , `` distance-1 constrained channel assignment in single radio wireless mesh networks '' , in proc . infocom , 2008 , pp . 762 - 770 . r. maheshwari , h. gupta , and s.r . das , `` multichannel mac protocols for wireless networks '' , in proc . secon , 2006 , pp . 393 - 401 . k. bian , j.m . park , and r. chen , `` a quorum - based framework for establishing control channels in dynamic spectrum access networks '' , in proc . mobicom , 2009 , pp . 25 - 36 . r. vedantham , s. kakumanu , s. lakshmanan , and r. sivakumar , `` component based channel assignment in single radio , multi - channel ad hoc networks '' , in proc . mobicom , 2006 , pp . 378 - 389 . d. starobinski and w. xiao , `` asymptotically optimal data dissemination in multichannel wireless sensor networks : single radios suffice '' , ieee / acm trans . netw . , 2010 , pp . 695 - 707 . r. gandhi , a. mishra , and s. parthasarathy , `` minimizing broadcast latency and redundancy in ad hoc networks '' , ieee / acm trans . netw . , 2008 , pp . 840 - 851 . s.c.h . huang , p. wan , x. jia , h. du , and w. shang , `` minimum - latency broadcast scheduling in wireless ad hoc networks '' , in proc . infocom , 2007 , pp . 733 - 739 . r. gandhi , y. kim , s. lee , j. ryu , and p. wan , `` approximation algorithms for data broadcast in wireless networks '' , accepted by ieee trans . on mobile computing . j. hong , j. cao , w. li , s. lu , and d. chen , `` sleeping schedule - aware minimum latency broadcast in wireless ad hoc networks '' , in proc . icc , 2009 , pp . 1 - 5 . b. tang , b. ye , j. hong , k. you , and s. lu , `` distributed low redundancy broadcast for uncoordinated duty - cycled wanets '' , in proc . globecom , 2011 , pp . 1 - 5 . j. qadir , c.t . chou , a. misra , and j.g . lim , `` minimum latency broadcasting in multiradio , multichannel , multirate wireless meshes '' , ieee trans . comput . , 2009 , pp . 1510 - 1523 . c.j.l . arachchige , s. venkatesan , r. chandrasekaran , and n. mittal , `` minimal time broadcasting in cognitive radio networks '' , in proc . icdcn , 2011 , pp . 364 - 375 . d.w . matula and l.l . beck , `` smallest - last ordering and clustering and graph coloring algorithms '' , j. acm , vol . 30 , no . 3 , pp . 417 - 427 , 1983 . m.r . garey and d.s . johnson , `` computers and intractability : a guide to the theory of np - completeness '' , w.h . freeman , 1979 . this heuristic algorithm was proposed by , and has no theoretical analysis ; here , we list it for comparison . note that for our scenario , we not only decide which node transmits , but also select which channel . we name the combination a tx instance , which means node transmits in channel . the algorithm can be summarized as follows : 1 . construct a bfs tree layered to ; 2 .
from layer to , calculate the rank of each node as , where is the set of nodes covered by , i.e. , ; 3 . from layer to , construct the broadcast tree by greedily selecting tx instances covering the maximum number of nodes . for example , for layer and , select a tx instance < > to cover the maximum number of uncovered nodes , where is in layer , and ; 4 . from layer to , schedule the selected tx instances without collision , i.e. , at each time slot , schedule the tx instances s = < > , where are covered , has maximal rank , and < > does not induce collision . [ algorithm listings : ga broadcast tree construction , transmission time calculation , scheduling , and collision checking ; pseudocode omitted . ]
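as an illustration of step 3 , the greedy tx - instance selection for one layer can be sketched as follows ( our own python illustration ; it assumes every next - layer node has at least one parent in the current layer , which the bfs tree guarantees ) .

```python
def greedy_layer_selection(layer_nodes, next_layer, adj, rx_channel):
    """repeatedly pick the (node, channel) tx instance that covers the most
    uncovered next-layer nodes, until all of them are covered."""
    uncovered = set(next_layer)
    chosen = []
    while uncovered:
        best, best_cov = None, set()
        for v in layer_nodes:
            # a transmission on channel ch reaches only those neighbors of v
            # whose fixed reception channel is ch
            by_ch = {}
            for u in adj[v]:
                if u in uncovered:
                    by_ch.setdefault(rx_channel[u], set()).add(u)
            for ch, cov in by_ch.items():
                if len(cov) > len(best_cov):
                    best, best_cov = (v, ch), cov
        chosen.append(best)          # best is never None under the bfs assumption
        uncovered -= best_cov
    return chosen
```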
|
we study the minimum latency broadcast scheduling ( mlbs ) problem in single - radio multi - channel ( sr - mc ) wireless ad - hoc networks ( wanets ) , which are modeled by unit disk graphs . nodes with this capability have fixed reception channels , but can switch their transmission channels to communicate with their neighbors . the single - radio multi - channel model prevents existing algorithms for single - channel networks from achieving good performance . first , the common assumption that one transmission reaches all the neighboring nodes does not hold naturally . second , the multi - channel dimension provides new opportunities to schedule the broadcast transmissions in parallel . we show that the mlbs problem in sr - mc wanets is np - hard , and present a benchmark algorithm , basic transmission scheduling ( bts ) , which has an approximation ratio of . here is the number of orthogonal channels in sr - mc wanets . then we propose an enhanced transmission scheduling ( ets ) algorithm , improving the approximation ratio to . simulation results show that ets achieves better performance than bts , and the performance of ets approaches the lower bound .
|
two - way relaying communications have recently attracted considerable attention due to their various applications . in this communication scenario , two users attempt to communicate with each other with the help of a relay . to this end , physical layer network coding ( plnc ) [ 1 ] along with the conventional decode - and - forward ( df ) or amplify - and - forward ( af ) relaying strategies has been commonly employed [ 2 - 4 ] to improve the system throughput [ 5 ] . a novel relaying technique , known as compute - and - forward ( cmf ) [ 6 ] , has been designed for multi - user applications with the aim of increasing the physical layer network coding throughput . in this scheme , each relay , based on a received noisy combination of simultaneously transmitted signals of the users , attempts to recover an equation , i.e. , a linear integer - combination , of the users ' messages , instead of recovering each individual message separately . to enable the relay to recover the equation , the cmf scheme is usually implemented using a proper lattice code [ 7 ] . since the equation coefficients are selected according to the channel coefficients , this method is also called physical layer network coding [ 8 ] . the relay then transmits the decoded equation to the destination . the destination recovers the desired messages by receiving a sufficient number of decoded equations from the relays . in fact , in contrast to conventional af and df relaying techniques , the cmf method exploits rather than combats the interference towards a better network performance . by applying cmf in point - to - point mimo systems , a linear receiver , named the integer forcing linear receiver ( iflr ) , has been proposed in [ 9 ] , in which sufficient independent equations with maximum rate are recovered to extract the users ' messages . since the number of wireless communication users will continuously increase , independently designed one - pair two - way relay systems can scarcely accommodate a vast number of users . that is , with simultaneous transmission of pairs of users , the messages interfere with each other , and hence , arbitrary transmission and reception of the messages are not an efficient solution . to solve the problem , in [ 10 - 12 ] , centrally designed mimo multi - pair two - way transmission schemes with the help of a multi - antenna relay have been proposed . in [ 10 - 11 ] , the af method has been utilized in the relay . that is , the relay simply amplifies and forwards the received signal . in [ 12 ] , df relaying is used , in a scheme named denoise - and - forward , in which the relay , after applying a projection filter , first decodes each pair ' s signal - aligned message individually and then precodes and transmits the decoded messages . the design criterion for the precoder and projection filters in [ 10 - 12 ] is the minimization of the sum of the users ' mean squared errors ( sum mse ) . in [ 10 ] , the maximum of the users ' mean squared errors ( max mse ) is also considered for the transceiver design . in the simple case of a single - antenna one - pair two - way relay system , we have applied cmf by introducing the aligned compute - and - forward ( a - cmf ) scheme [ 13 ] , which outperforms af and df based schemes significantly .
in this paper, we consider a more general case of two-way communications that involves multiple pairs of multiple-antenna source nodes, considering both the multiple-access and broadcast phases. we propose a new transmission scheme named integer forcing-and-forward (iff). we exploit the signal alignment proposed in [14-15] such that the two signals received from the two users in a pair can be network-coded together at the relay. furthermore, we apply iflr to harness the inter-pair interference in terms of equations. in the proposed scheme, the equations are decoded with higher rate than the individual messages at the relay. in addition, after all recovered equations are transmitted to the users, different ways to select the equations that each user needs to recover its pair's message can be utilized. therefore, our scheme has two advantages in comparison with the df-based scheme in [12], in which each pair's message is recovered for transmission to the respective user. in the proposed scheme, the precoders at the transmitting nodes, including the users and the relay, and the projection filters at the receiving nodes are designed based on minimizing mse criteria. _for the first time_, we introduce the sum of the equations' mean squared errors (sum-equation mse) and the maximum of the equations' mean squared errors (max-equation mse) criteria for the equation recovery problem associated with the multiple-access phase precoding and filter design. these proposed equation-based mse algorithms are proven to be convergent. moreover, we use the traditional mse criteria, i.e., sum mse and max mse, proposed for individual message recovery, for the broadcast phase precoding and filter design. by means of an alternating optimization approach, we present tractable solutions for these mse problems. we evaluate the performance of our proposed scheme and compare the results with those of the previous methods. our numerical results indicate that the proposed scheme substantially outperforms the previous methods in terms of the outage probability and the network throughput. in addition, the max-based mse precoding design, using max-equation mse in the multiple-access phase and max mse in the broadcast phase, shows better performance than the sum-based mse precoding design, using sum-equation mse and sum mse, at the expense of more complexity. we extend our proposed schemes to the case of imperfect channel state information (imperfect csi). first, we propose a modified iflr, taking into account the effect of channel estimation errors in the conventional iflr receiver structure. then, accordingly, a robust transceiver design is proposed. simulation results show that the robust design improves the performance over the non-robust design, which assumes exact knowledge of csi, in the presence of channel estimation error. the remainder of this paper is organized as follows. in section ii, the system model and the integer forcing-and-forward scheme are briefly described. section iii presents the transceiver precoder and projection filter design under the assumption that perfect knowledge of csi is available. in section iv, the modified iflr and the related design are presented. numerical results are given in section v. finally, section vi concludes the paper. *notations:* the superscripts and stand for the conjugate transposition and the norm of a vector, respectively.
, , and stand for the trace, the pseudo-inverse, and the -th column vector of a matrix. the symbol is the absolute value of a scalar, while denotes . is the expectation of a random variable. denotes the identity matrix. and represent the matrix vectorization and its inverse operation, respectively. denotes the kronecker product. we consider a mimo multi-pair two-way relaying system with pairs, i.e., users, and one relay r, as shown in fig. 1. in this system, in each pair, users and attempt to exchange their messages, i.e., message vectors and , each of dimension , with the help of the relay r. each user exploits a lattice encoder with normalized power to project its message vector to a length- complex-valued codeword vector such that . we assume that user and relay r have and antennas, respectively. the matrix denotes the channel matrix from user to the relay, with dimension . the elements of are assumed to be independent identically distributed (i.i.d.) rayleigh variables with variance . user precodes its message with matrix , with dimension , and transmits the precoded signal . for each pair, the following power constraint on the sum power is considered: in the integer forcing-and-forward (iff) scheme, we use the multiple access broadcast (mabc) protocol introduced in [16]. that is, in the first time slot, named the multiple-access phase, the users transmit simultaneously, and therefore the signal received by the relay r can be written as where denotes the received noise at the relay and has a gaussian distribution with variance . we use the signal alignment scheme proposed in [14-15] such that the signals received from the users in each pair are aligned at the relay, i.e., hence, the precoder of user , , versus its pair's precoder, i.e., , is given by [17] where is the pseudo-inverse of . we can rewrite as where we define , named the -th pair sum message. in addition, we can rewrite in a different form, similar to a mimo point-to-point channel, as where . after projecting with matrix , with dimension , the relay transmits the result to the users. we consider a power constraint for the relay transmission, i.e., we assume that , with dimension , is the channel coefficient matrix from relay r to user . the elements of the matrix are assumed i.i.d. rayleigh variables with identical variance . the signal received by each user is given by where denotes the receiver noise, having a gaussian distribution with variance . user exploits a projection filter , a matrix with dimension , to recover the equation vector using a traditional linear receiver as according to (13), which shows a point-to-point mimo channel, the rate of recovering the equation by user is given by [20] this achievable rate can be improved using successive interference cancellation (sic) [21]. therefore, the overall rate of recovering the equation with ecv , i.e., , by user is where is given in (11). among the received equations, each user uses the best ones, with the maximum overall rate, that can help it recover its pair's messages.
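the alignment condition above fixes one precoder of each pair in terms of the other through a pseudo-inverse; a minimal numpy sketch of this single step is given below. the matrix names and sizes (h1, h2, p1, four antennas, two streams) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

# a minimal sketch of pair-wise signal alignment: pick the second user's
# precoder so that both users in a pair reach the relay through the same
# effective channel, h1 @ p1 == h2 @ p2.
rng = np.random.default_rng(0)
nr, nu, l = 4, 4, 2   # relay antennas, user antennas, streams (assumed sizes)

def cgauss(shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

h1, h2 = cgauss((nr, nu)), cgauss((nr, nu))   # user-to-relay channels
p1 = cgauss((nu, l))                          # precoder of the first user

p2 = np.linalg.pinv(h2) @ h1 @ p1             # alignment precoder of its pair

# the relay now observes only the pair's sum message through one channel
assert np.allclose(h1 @ p1, h2 @ p2)
```

exact alignment requires the second user's relay-side channel to have full row rank, which is why signal alignment schemes impose antenna-dimension conditions [14-15].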
in comparison with the df-based scheme in [12], not only is a higher rate achieved by decoding equations at the relay instead of the messages [9], but also more flexibility is provided for the users, who have different ways to recover their pairs' messages according to the ecvs of the relay's transmitted equations. note that even in the worst case each user can still recover its pair's messages, because the relay transmits as many independent equations as there are messages of all the pairs. in this section, based on the proposed iff scheme, we investigate the transceiver design, i.e., finding the precoding and projection filter matrices for all the nodes to minimize the mse, assuming that perfect csi is available. according to the proposed scheme presented in section ii, we have to select the design matrices for the two phases, multiple access and broadcast, separately. first, we consider the multiple-access phase, in which the users' transmit precoding matrices and the relay's receive projection matrix are optimized. similarly, in subsection iii.b, we consider the broadcast phase, and obtain the relay's precoder matrix and the users' projection matrices. at first, we design the related matrices in the multiple-access phase by introducing the max-equation mse criterion for our equation-based problem, to ensure qos equivalency between the different recovered equations. however, since some users may not use all of the equations to recover their pairs' messages, we also introduce the sum-equation mse criterion, which in addition has lower complexity in the ecv search problem, as will be discussed. from (6) and (9), the effective noise in recovering the equation from the projection of the received signal onto vector is equal to now, the users' precoding vectors, the equation matrix , and the projection matrix, including the vectors in (10), must be selected so as to minimize the maximum effective noise over all of the recovered equations, i.e., subject to where from (5) and (17), can be expanded as where is the -th pair coefficient of the -th equation. by substituting from (10) and with some straightforward simplifications, we can rewrite as where using the alternating method, we solve the given optimization problem. that is, in the first step, assuming the precoding vectors are known, the matrix $\mathbf{a}$ is obtained as subject to $\det(\mathbf{a}) \ne 0$ and $\mathbf{a}_k \in \mathbb{z}^l$, $k = 1, \ldots, l$. this optimization problem, named ecv search, can be solved efficiently by using the schemes proposed in [18-19].
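to make the ecv search step concrete, the sketch below brute-forces a full-rank integer matrix minimizing the worst per-equation effective noise, assuming that the noise of an equation with ecv a can be written as a quadratic form a^t m a with m positive definite; the efficient lattice-based methods of [18-19] replace this enumeration in practice, and the search radius and shortlist size are assumptions of the sketch.

```python
import itertools
import numpy as np

def ecv_search(m, l, radius=2, shortlist=30):
    """brute-force ecv search: a full-rank integer matrix (rows in z^l,
    entries bounded by radius) minimizing the largest per-equation
    effective noise a_k^t m a_k; the lattice methods of [18-19] replace
    this enumeration in practice."""
    cands = [np.array(v) for v in
             itertools.product(range(-radius, radius + 1), repeat=l)
             if any(v)]
    cands.sort(key=lambda a: float(a @ m @ a))   # cheapest rows first
    best = None
    for rows in itertools.combinations(cands[:shortlist], l):
        a = np.array(rows)
        if abs(np.linalg.det(a)) < 1e-9:         # enforce det(a) != 0
            continue
        worst = max(float(r @ m @ r) for r in rows)
        if best is None or worst < best[0]:
            best = (worst, a)
    return best

m = np.array([[2.0, 0.4],
              [0.4, 1.0]])                       # assumed noise quadratic form
worst_noise, a = ecv_search(m, l=2)
print(a, worst_noise)
```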
in the second step, by substituting the values obtained at the first step for the matrices and , the precoding vectors are calculated as follows. by introducing a new variable that serves as an upper bound on , the optimization problem of the precoding matrices can be rewritten as subject to with the definitions of and as , in (19) can be rewritten as hence, with the help of the equation in [17], this leads to accordingly, the optimization problem of the transmit precoding matrices can be rewritten as subject to this optimization problem is a second order cone programming (socp) problem [22], due to the fact that the objective function is linear and the constraints are second order cones. it can be efficiently solved by a standard socp solver [23] or by cvx, a software package developed for convex optimization problems. algorithm 1 summarizes the above procedure. the proposed max-equation mse minimization algorithm is convergent. let be the overall mse. thus, in the first step of algorithm 1 for the iteration, we have , and in the second step, . hence, at the end of the iteration. therefore, in each iteration the overall mse, which is lower bounded by zero, decreases. hence, the proposed max-equation mse minimization algorithm is convergent. algorithm 1: initialize and ; iterate: (1) ecv search: update and from (22) and (10) for fixed ; (2) update , i.e., , by solving the socp problem (28) for fixed and ; until convergence. the optimization problem which minimizes the total effective noise of all the recovered equations can be considered as subject to where and according to (20), we have we can rewrite (31) in a simpler form as again, we solve this problem by the alternating method. in the first step, the matrix $\mathbf{a}$ is obtained as subject to $\det(\mathbf{a}) \ne 0$ and $\mathbf{a}_k \in \mathbb{z}^l$, $k = 1, \ldots, l$. we can solve this problem by using the schemes proposed in [18-19] with some straightforward changes. however, since we can optimize at once, this problem is significantly simpler and more tractable than (22). in the second step, the precoding vectors can be calculated as follows. the kkt conditions for the -th pair precoder can be written as where is the kkt coefficient related to the pair. from (30) and (33), we obtain hence, we have from (4), its pair can be calculated as here, is determined from the second kkt condition given in (34). we consider two cases, namely and , as mentioned in [22]. if , or in other words when the optimum solution is in the feasible region, we should have . on the other hand, if , or equivalently, when the optimum solution is on the constraint border, we have . for the latter case, we can find efficiently by applying the bisection method [22]. the above procedure is summarized in algorithm 2. the parameter used in the algorithm determines the convergence tolerance. the proposed sum-equation mse minimization algorithm is convergent; the proof is similar to the one given for theorem 1. algorithm 2: multiple-access phase sum-equation mse based precoding and projection filter design.
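the epigraph trick used above is the standard route to an socp; a minimal cvxpy sketch of one precoder-update step is given below. the maps f_k (stacked precoder to per-equation noise vector) and g (power constraint) are random placeholders, and real variables are used for brevity; cvx and sedumi [23] solve the same cone program in matlab.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3     # stacked precoder length, number of equations (assumed)
f = [rng.normal(size=(4, n)) for _ in range(k)]  # per-equation noise maps (placeholders)
g = rng.normal(size=(5, n))                      # power-constraint map (placeholder)
p_max = 10.0

x = cp.Variable(n)     # stacked precoding variables
t = cp.Variable()      # epigraph variable bounding every equation's mse
constraints = [cp.norm(fk @ x) <= t for fk in f]       # second order cones
constraints.append(cp.norm(g @ x) <= np.sqrt(p_max))   # sum-power constraint
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
print(prob.status, float(t.value))
```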
the transceiver proposed in the previous section requires perfect csi. however, in practice, csi is not perfect due to factors such as channel estimation error or feedback delay. in this section, we propose a robust precoding and projection filter design for the iff scheme with imperfect csi. we model the csi error as and , where and are the estimated channel matrices from user to relay r and vice versa, respectively. in addition, and are the estimation error matrices for the related channels. we assume the components of the error matrices and have independent gaussian distributions with and , respectively. first, we introduce the modified iflr. we then derive the optimum precoder and projection matrices in subsections iv.b and iv.c. after signal alignment in each pair based on the estimated channels as and therefore from (2), we can write the received signal as where . similar to section ii, to recover an equation with ecv , is projected onto the vector , as: hence, the effective noise variance for this recovery is given by with some straightforward simplifications, (58) can be rewritten as by considering the error matrices with and , the messages with and , the matrices , and the vector , we have the proof is given in appendix i. according to theorem 1, the expression in (59) becomes accordingly, the computation rate for the equation with ecv is given by note that an equation with message transmission power and effective recovery noise variance has computation rate [6]. the optimum projection vector for recovering the equation with ecv is and hence the projection matrix becomes the proof is given in appendix ii. by substituting (63) into (61) and after some straightforward simplifications, the effective noise variance is obtained as where the other concepts, with (10) replaced by (63), are similar to section ii. here, we consider the sum-equation mse and max-equation mse criteria for the transceiver design with imperfect csi. problems (22) and (32) are solved by considering the new in (66). from (57) and (58), the sum-equation mse minimization problem considering the estimated channel matrix can be written as subject to the objective function in (67) can be simplified to similar to the procedure of subsection iii.a.2, using the kkt conditions, we have thus, we have the parameter can be obtained as proposed in subsection iii.a.2. algorithm 2 can be used by replacing (36) with (70). the can be written as where , similar to subsection iii.a.1, the optimization problem of the transmit precoding matrices can be written as subject to similarly, the above optimization problem is a socp problem, and algorithm 1 can be used by replacing (28) with (75). here, in the second phase, we consider the sum mse and max mse criteria with imperfect csi. the minimization problems defined in (38) and (39), considering the estimated channel matrix, can be modified to subject to where to solve the problem, with kkt conditions similar to the solution of the problem presented in subsection iii.b.1, we have moreover, to find , according to the kkt condition in (45), we can write hence, we have the parameter can be obtained similarly to what was explained in subsection iii.b.1. algorithm 3 can be used by replacing (43) and (48) with (78) and (80), respectively.
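the gaussian csi-error model above is straightforward to emulate; the sketch below measures how the pair-alignment residual on the true channels grows with the error power when the precoders are designed from the estimated channels only, which is the mismatch the robust design addresses. sizes, seeds and error powers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
nr = nu = 4

def cgauss(shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

for err_var in (0.0, 0.1, 0.4):      # error powers matching the later experiments
    res = []
    for _ in range(500):
        h1, h2 = cgauss((nr, nu)), cgauss((nr, nu))       # true channels
        e1 = np.sqrt(err_var) * cgauss((nr, nu))          # estimation errors
        e2 = np.sqrt(err_var) * cgauss((nr, nu))
        h1_hat, h2_hat = h1 - e1, h2 - e2                 # channels seen by the design
        p1 = cgauss((nu, 2))
        p2 = np.linalg.pinv(h2_hat) @ h1_hat @ p1         # alignment on estimated csi
        res.append(np.linalg.norm(h1 @ p1 - h2 @ p2))     # residual on true channels
    print(err_var, np.mean(res))
```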
we consider the following optimization problem: subject to where from (77), the is given by this problem can be solved by the alternating optimization method. in the first step, for , we consider subject to in the second step, for , we consider subject to similarly, the above optimization problems are socp problems. algorithm 4 can be used by replacing (51) and (52) with (84) and (85), respectively. in this section, we evaluate the performance of our proposed schemes and compare the results with the existing work in the literature. for the simulation evaluation, we consider a two-pair two-way system, i.e., . the rayleigh channel parameters are equal to . the channel noises are assumed to have unit variance, i.e., . the parameter in the algorithms is set to , and a target rate of bit/channel use is considered. fig. 2 shows the mse distribution among the equations and the total mse for the proposed sum-equation mse minimization scheme and the max-equation mse scheme, for the case in which each node has two antennas, i.e., , considering perfect csi. in this figure, for simplicity, we suppose that each user sends only one message. hence, the relay has to recover two independent equations according to the proposed algorithms. we can see that the proposed sum-equation mse minimization scheme achieves the minimum total mse, i.e., the sum of the mses of the equations, while the proposed max-equation mse scheme has a lower mse for the worst equation, which has the lower rate. fig. 3 shows the average number of cases in which each user utilizes only one of the two equations transmitted by the relay. as observed, this average decreases as the snr increases, which indicates that at high snr using all of the transmitted equations can be more beneficial to each user. hence, since the users recover their messages by using all of the transmitted equations with a probability higher than 0.6, we expect the max-equation mse, which guarantees the mse of the worst equation among all of the equations, to have a better performance than the sum-equation mse. fig. 4 compares the outage probability of our proposed scheme in the case of perfect csi with the ones introduced in [10], which uses af relaying, and in [12], which uses df relaying, i.e., denoise-and-forward, for . as observed, the proposed scheme has better performance at all snrs, and provides at least 1 db snr improvement in comparison with the best conventional relaying scheme. in addition, the max-based mse precoding and filter design, using max-equation mse and max mse, performs better than the sum-based mse precoding and filter design, using sum-equation mse and sum mse. this result justifies what we expected from fig. 3. note that, as discussed before, the max-based mse has more complexity than the sum-based mse due to the ecv search problem. in fig. 5, the average sum rate of the proposed scheme is compared with the conventional precoding and filter designs, considering the availability of perfect csi, for . it can be observed that our proposed scheme performs significantly better than the conventional strategies at all snrs. for example, at a sum rate of 7 bit/channel use, the proposed scheme has a 1.5 db improvement in comparison with the best conventional relaying scheme. moreover, the max-based mse design outperforms the sum-based mse transceiver. the results of figs. 4 and 5 demonstrate that exploiting the interference in terms of equations is significantly superior to treating the interference as additional noise, as in the conventional af and df schemes.
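outage curves such as those of fig. 4 are typically obtained by monte carlo estimation of p(rate < target); a generic sketch is below, with the standard log-det rate of a rayleigh mimo link standing in for the scheme-specific equation rates (an assumption of the sketch).

```python
import numpy as np

rng = np.random.default_rng(3)

def outage(snr_db, r_target=4.0 / 3.0, n=4, trials=20000):
    """monte carlo estimate of p(rate < r_target) for an n x n rayleigh
    mimo link; the log-det rate is only a placeholder for the per-scheme
    rate formulas of the compared transceivers."""
    snr = 10.0 ** (snr_db / 10.0)
    fails = 0
    for _ in range(trials):
        h = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(n) + (snr / n) * h @ h.conj().T)
        if logdet / np.log(2.0) < r_target:
            fails += 1
    return fails / trials

for snr_db in (0, 5, 10, 15):
    print(snr_db, outage(snr_db))
```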
in fig. 6, the effect of the number of antennas, i.e., , on the performance of the system is assessed. as can be observed, and as expected, the sum rate of the proposed scheme increases for higher . for example, at a sum rate of 5 bit/channel use, the system with performs 4.5 db better than the one with . in fig. 7, we investigate the effect of channel estimation errors on the performance of the system with , where the error power is . the plots are provided for two precoder and filter designs: the non-robust design, which neglects the presence of csi error, and the robust design. as expected, the robust design has a better performance than the non-robust design, and the improvement becomes more pronounced as the error power increases. for instance, when the error power is 0.1, the robust design performs 2 db better at a sum rate of 5 bit/channel use, and at error power 0.4, about 4 db better at a sum rate of 4 bit/channel use. also, as can be observed, as the error power goes up the performance is degraded even in the robust design case. for example, at a sum rate of 6 bit/channel use, the design with perfect csi has 2.5 db better performance in comparison with the robust design under imperfect csi with error power 0.1, and the robust design with error power 0.1 performs significantly better than the one with error power 0.4. in addition, the max-based mse design performs better at the different error powers. in this paper, we have proposed the integer forcing-and-forward scheme for the mimo multi-pair two-way relaying system based on the integer forcing linear receiver structure. we designed the precoder and projection matrices using the proposed equation-based mse criteria, i.e., sum-equation mse and max-equation mse in the multiple-access phase, and the conventional user-based mse criteria, i.e., sum mse and max mse, in the broadcast phase. we also derived the precoder and filter designs in the presence of csi error. we have introduced a modified integer forcing linear receiver to overcome the channel estimation error efficiently. for these schemes, we have proposed algorithms in which the alternating method is applied, and thus the optimum solution can be achieved. the proposed scheme shows a significantly better performance, in terms of the sum rate and the outage probability, in comparison with conventional designs. moreover, in the case of imperfect csi, the proposed robust transceiver design improves the system performance compared with the non-robust design, in which the effect of channel estimation error is neglected. by expanding , we have since and , this leads to on the other hand, for any random vector with mean and covariance , and an matrix , we have [17]: from (88) and the fact that and , we have and therefore , this leads to so the theorem is proved. from (62), the optimum value of is obtained by minimizing the following function: the optimum value of is the solution of hence , thus the theorem is proved. 1. s. zhang, s.c. liew, and p.p. lam, ``hot topic: physical-layer network coding,'' in _proc. of the international conference on mobile computing and networking_, new york, usa, 2006. p. popovski and h. yomo, ``physical network coding in two-way wireless relay channels,'' _ieee international conference on communications_, glasgow, uk, 2007. zhou, y. li, f.c.m. lau, and b. vucetic, ``decode-and-forward two-way relaying with network coding and opportunistic relay selection,'' _ieee trans._, pp. 3070-3076, 2010. t. cui, t. ho, and j.
kliewer, ``memoryless relay strategies for two-way relay channels,'' _ieee trans._, pp. 3132-3143, 2009. w. nam, s.y. chung, and y.h. lee, ``capacity of the gaussian two-way relay channel to within 1/2 bit,'' _ieee trans. inf. theory_, vol. 56, no. 11, pp. 5488-5494, 2010. b. nazer and m. gastpar, ``compute-and-forward: harnessing interference through structured codes,'' _ieee trans. inf. theory_, pp. 6463-6486, 2011. u. erez and r. zamir, ``achieving 1/2 log(1+snr) on the awgn channel with lattice encoding and decoding,'' _ieee trans. inf. theory_, pp. 2293-2314, 2004. b. nazer and m. gastpar, ``reliable physical layer network coding,'' _proceedings of the ieee_, pp. 438-460, 2011. j. zhan, b. nazer, u. erez, and m. gastpar, ``integer-forcing linear receivers,'' _ieee trans. inf. theory_, vol. 99, p. 1, 2014. m. zhang, h. yi, h. yu, h. luo, and w. chen, ``joint optimization in bidirectional multi-user multi-relay mimo systems: non-robust and robust cases,'' _ieee trans. vehicular tech._, vol. 62, no. 7, pp. 3228-3244, 2013. z. ding and h.v. poor, ``a general framework of precoding design for multiple two-way relaying communications,'' _ieee trans. signal process._, vol. 61, no. 6, pp. 1531-1535, 2013. z. zhao, m. peng, z. ding, w. wang, and h.h. chen, ``denoise-and-forward network coding for two-way relay mimo systems,'' _ieee trans. vehicular tech._, vol. 63, no. 2, pp. 775-788, 2014. azimi-abarghouyi, m. hejazi, and m. nasiri-kenari, ``compute-and-forward two-way relaying,'' _iet commun._, to appear, 2014, available online: http://arxiv.org/abs/1408.2855. n. lee, j.b. lim, and j. chun, ``degrees of freedom of the mimo y channel: signal space alignment for network coding,'' _ieee trans. inf. theory_, vol. 56, no. 7, pp. 3332-3342, 2010. r. zhou, z. li, c. wu, and c. williamson, ``signal alignment: enabling physical layer network coding for mimo networking,'' _ieee trans. wireless commun._, vol. 12, no. 6, pp. 3012-3023, 2013. kim, p. mitran, and v. tarokh, ``performance bounds for bi-directional coded cooperation protocols,'' _ieee trans. inf. theory_, pp. 5235-5241, 2008. k.b. petersen and m.s. pedersen, _the matrix cookbook_. technical university of denmark, 2006. l. wei and w. chen, ``compute-and-forward network coding design over multi-source multi-relay channels,'' _ieee trans. wireless commun._, vol. 9, pp. 3348-3357, 2012. l. wei and w. chen, ``integer-forcing linear receiver design with slowest descent method,'' _ieee trans. wireless commun._, vol. 12, no. 6, pp. 2788-2796, 2013. d. tse and p. viswanath, _fundamentals of wireless communication_. cambridge: cambridge univ. press, 2005. m. varanasi and t. guess, ``optimum decision feedback multiuser equalization with successive decoding achieves the total capacity of the gaussian multiple-access channel,'' in _proceedings of the 31st asilomar conference on signals, systems and computers_, 1997. s. boyd and l. vandenberghe, _convex optimization_. cambridge: cambridge univ. press, 2004. j.f. sturm, ``using sedumi 1.02, a matlab tool for optimization over symmetric cones,'' _optim. methods softw._, vol. 11-12, pp. 625-653, 1999.
|
in this paper, we propose a new transmission scheme, named integer forcing-and-forward (iff), for communications among multiple pairs of multiple-antenna users in which each pair exchanges its messages with the help of a single multi-antenna relay in the multiple-access and broadcast phases. the proposed scheme utilizes the integer forcing linear receiver (iflr) at the relay, which uses equations, i.e., linear integer-combinations of messages, to harness the inter-pair interference. accordingly, we propose the design of mean squared error (mse) based transceivers, including precoder and projection matrices for the relay and users, assuming that perfect channel state information (csi) is available. in this regard, in the multiple-access phase, we introduce two new mse criteria for the related precoding and filter designs, i.e., the sum of the equations' mse (sum-equation mse) and the maximum of the equations' mse (max-equation mse), to exploit the equations at the relay. in addition, the convergence of the proposed criteria is proven. moreover, in the broadcast phase, we use the two traditional mse criteria, i.e., the sum of the users' mean squared errors (sum mse) and the maximum of the users' mean squared errors (max mse), to design the related precoding and filters for recovering the relay's equations at the users. then, we consider a more practical scenario with imperfect csi. for this case, the iflr receiver is modified, and another transceiver design is proposed, which takes into account the effect of channel estimation error. we evaluate the performance of our proposed strategy and compare the results with the conventional amplify-and-forward (af) and denoise-and-forward (df) strategies for the same scenario. the results indicate the substantial superiority of the proposed strategy in terms of the outage probability and the sum rate.
|
this article is part of a series of papers , , , , devoted to the investigation of water-hammer problems in fluid-filled pipes, from both the experimental and theoretical perspective. water-hammer experiments are a prototype model for many situations in industrial and military applications (e.g., trans-ocean pipelines and communication networks) where fluid-structure interaction occurs with a consequent propagation of shock waves. after the pioneering work of korteweg (1878) and joukowsky (1900), who modeled water-hammer waves by neglecting the inertia and bending stiffness of the pipe, a more comprehensive investigation, developed by skalak in the fifties, considered inertial effects both in the pipe and the fluid, including longitudinal and bending stresses of the pipe. skalak combined shell theory for the tube deformation with an acoustic model of the fluid motion. he showed that two waves coexist, traveling at different speeds: the precursor wave (of small amplitude and of speed close to the sound speed of the pipe wall) and the primary wave (of larger amplitude and lower speed). additionally, a simplified four-equation one-dimensional model is derived based on the assumption that the pressure and axial velocity of the fluid are constant across cross-sections. later studies of tijsseling - have regarded the modeling of isotropic thin pipes, including an analysis of the effect of thickness on isotropic pipes based on the four-equation model. while all these papers consider the case of elastically isotropic pipes, the investigation of anisotropy in water-filled pipes of composite materials was first carried out in , where stress wave propagation is investigated for a system composed of a water-filled thin pipe with symmetric winding angles. in the same geometry, a platform of numerical computations, based on the finite element method, was developed in to describe the fluid-structure interaction during shock-wave loading of a water-filled carbon fiber-reinforced plastic (cfrp) tube coupled with a solid-shell and a fluid solver. more complex situations involve systems of pipes mounted coaxially, where the annular regions between the pipes can be filled with fluid. in this scenario, bürmann has considered the modeling of the non-stationary flow of compressible fluids in pipelines with several flow sections. his approach consists of reducing the system of partial differential equations governing the fluid-structure interaction in coaxial pipes to a 1-dimensional problem by the method of characteristics. later works have appeared on the modeling of sound dispersion in a cylindrical viscous layer bounded by two elastic thin-walled shells and of wave propagation in coaxial pipes filled with either fluid or a viscoelastic solid. motivated by the recent experimental effort of j.
shepherd's group on the investigation of the water-hammer in annular geometries , , , , we extend the modeling work of and to investigate the propagation of stress waves inside an annular geometry delimited by two water-filled coaxial pipes, in elastically isotropic and cfrp pipes. a projectile impact causes the propagation of a water pressure wave, which deforms the pipes. positive extension in the radial direction of the outer pipe, accompanied by negative extension (contraction) in the radial direction of the internal pipe, causes an increase in the annular area, thus activating the fluid-structure interaction mechanism. the architecture of the paper is as follows. after reviewing the work of you and inaba on the modeling of elastically anisotropic pipes, we present the six-equation one-dimensional model (paragraph [1311081717]) that governs the fluid-solid interaction in a two-pipe system. in section [1311081625] we compare our theoretical findings with experimental data obtained during a series of water-hammer experiments. finally, in the case of fiber-reinforced pipes, the wave propagation and the computation of hoop and axial strain are described in full detail in paragraph [1312181514]. here and in what follows the subscript is set equal to either 1 (when we refer to the internal pipe) or 2 (external pipe). the calculated primary and precursor waves are shown in fig. [1305101143] as functions of . similar to the one-pipe scenario of , as increases the result is a reinforcement of the hoop stiffness and an increase in the primary wave speed, which corresponds to the breathing mode of the pipes. conversely, the speed of the precursor waves (corresponding to a longitudinal mode of the pipes) diminishes due to the decreased axial stiffness. we remark that the computation of the physical variables is based on the incident wave because its magnitude is much larger than those of the precursor waves. in this case, by eq. ([1305091618]) we have and therefore the graph of the fluid pressure is not reported. the precursor wave speeds and are very similar when the parameter is very small, which happens for either small or large . this is consistent with eq. ([1309302314]) where for we have . it is possible to prove that this corresponds to the cases for which the coupling stiffness, as a function of , is minimal. on the other hand, it follows that for intermediate values of the wavespeed becomes slower while becomes larger and is maximized at when is also large. proof of the dependence of on the coupling term requires a more detailed asymptotic analysis and is therefore left to a forthcoming paper.
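for orientation, the classical korteweg speed mentioned in the introduction gives the primary (breathing-mode) wave speed for a single thin-walled elastic pipe; a minimal sketch is below. the numerical values are illustrative, not taken from the experiments, and the paper's six-equation model generalizes this result to two coupled, possibly anisotropic, pipes.

```python
import math

def korteweg_speed(k_fluid, rho_fluid, e_pipe, d_pipe, thickness):
    """classical korteweg water-hammer speed for a thin-walled elastic
    pipe: c = sqrt(k/rho) / sqrt(1 + k*d/(e*t)); neglects pipe inertia,
    bending stiffness and axial coupling."""
    c_fluid = math.sqrt(k_fluid / rho_fluid)
    return c_fluid / math.sqrt(1.0 + k_fluid * d_pipe / (e_pipe * thickness))

# illustrative values: water in an aluminium tube (assumed, not from the paper)
k = 2.2e9      # bulk modulus of water [pa]
rho = 1000.0   # density of water [kg/m^3]
e = 70e9       # young's modulus of aluminium [pa]
d = 0.04       # inner diameter [m]
t = 0.002      # wall thickness [m]
print(korteweg_speed(k, rho, e, d, t))   # ~1.16e3 m/s, below c_fluid ~ 1.48e3 m/s
```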
plots of the hoop and axial strain in pipes 1 and 2 calculated from eqs. ([1311100014]) and ([1309291512]) are reported in fig. [1305101157]. as the impulsive impact by the projectile generates a positive water pressure, pipe 2 undergoes a positive expansion in the radial direction (hoop strain is positive) while pipe 1 is contracted in the radial direction (hoop strain is negative). note that the radial expansion of pipe 2 accompanies a contraction in the axial direction, while the radial contraction of pipe 1 accompanies an expansion in the axial direction; this explains why hoop and axial strains have opposite signs. the hoop strain is essentially determined by the hoop compliance, and therefore the absolute values of the hoop strains in both pipes 1 and 2 are large for small and decrease with increasing . then, to a first order of approximation, the axial strain is mainly determined by the coupling term, and therefore the axial strains in both pipes 1 and 2 are maximized (in absolute value) for . [figure: plot of the precursor wave speeds and as functions of the winding angle, and of the coefficient (non-dimensional, multiplied by) as a function of ; the web version of this article contains the figure in color.] [figure: plots of the maximum hoop and axial strains accompanying the water-hammer wave, normalized by m/s, as functions of the winding angle (eqs. ([1311100014]) and ([1309291512])).] we have investigated the propagation of stress waves in water-filled pipes in an annular geometry. a six-equation model that describes the fluid-structure interaction has been derived and adapted to both elastically isotropic and anisotropic (fiber-reinforced) pipes. the natural frequencies of the system (eigenvalues) and the amplitude of the pressure and velocity of the fluid, along with the mechanical strains and stresses in the pipes (eigenvectors), have been computed and compared with experimental data from water-hammer tests. it is observed that the projectile impact causes a positive expansion of the external pipe in the radial direction and a contraction in the axial direction (poisson's effect). vice versa, the internal pipe is contracted in the radial direction and expanded axially.
in the last section of the paper , which is a benchmark for future experimental investigation , we have analyzed in full detail the propagation of waves in cfrp pipes with a special emphasis on the influence of the winding angle on the wave speeds and the axial and hoop strains in the pipes .most interestingly , we found that the speed of the primary wave ( breathing mode ) increases with the increasing winding angle due to increasing hoop stiffness in both pipes .this is in agreement with the one - pipe model and analysis of .conversely , the profile of the speed of the precursor waves ( longitudinal modes ) is large when the winding angle is small ( and therefore the axial stiffness is large ) and it decreases with increasing winding angle .additionally , we have observed that the two precursor waves travel at almost the same speed when the fiber winding angle is equal to either or while a separation of the velocities is observed for in between these values , a phenomenon which is the object of further analysis .the authors are indebted to prof .k. bhattacharya and j. shepherd for many useful discussions and their advice .p.c . acknowledges support from the department of energy national nuclear security administration under award number de - fc52 - 08na28613 .this work was written when p.c .was a postdoctoral student at the california institute of technology .bitter , n. , shepherd , j. , 2013 .dynamic buckling and fluid - structure interaction of submerged tubular structures , in : blast mitigation : experimental and numerical studies , editors a. shukla , y. rajapakse , and m.e .hynes , springer .bitter , n. , shepherd , j. , 2013 .dynamic buckling of submerged tubes due to impulsive external pressure , proceedings of 2013 sem annual conference and exposition on experimental and applied mechanics , june 3 - 5 , 2013 , lombard , il .cirovic , s. , walsh , c. , fraser , w.d . , 2002 .wave propagation in a system of coaxial tubes filled with incompressible media : a model of pulse transmission in the intercranial arteries .journal of fluids and structures , 16 ( 8) , pp .1029 - 1049 .inaba , k. , shepherd , j. , 2008 .impact generated stress waves and coupled fluid - structure responses , proceedings of the ( sem ) xi international congress and exposition on experimental and applied mechanics , june 2 - 5 , orlando , fl usa .perotti , l.e . ,deiterding , r. , inaba , k. , shepherd , j. , ortiz , m. , 2013 . elastic response of water - filled fiber composite tubes under shock wave loading , international journal of solids and structures , vol .50 , issues 3 - 4 .tijsseling , a.s . , 1993 .fluid - structure interaction in case of waterhammer with cavitation , phd thesis , delft university of technology , faculty of civil engineering , communications on hydraulic and geotechnical engineering , report no .93 - 6 , issn 0169 - 6548 , delft , the netherlands .valentin , r.a . ,phillips , j.w . ,walker , j.s .reflection and transmission of fluid transients at an elbow , transactions of structural mechanics in reactor technology 5 , berlin , germany , paperb 2/6 .
|
the fluid-structure interaction is studied for a system composed of two coaxial pipes in an annular geometry, for both homogeneous isotropic metal pipes and fiber-reinforced (anisotropic) pipes. multiple waves, traveling at different speeds and amplitudes, result when a projectile impacts the water filling the annular space between the pipes. in the case of carbon fiber-reinforced plastic thin pipes we compute the wave speeds, the fluid pressure and the mechanical strains as functions of the fiber winding angle. this generalizes the single-pipe analysis of j. h. you and k. inaba, _fluid-structure interaction in water-filled pipes of anisotropic composite materials_, j. fl. str. 36 (2013). comparison with a set of experimental measurements seems to validate our models and predictions. keywords: fluid-structure interaction; water-hammer; homogeneous isotropic piping materials; carbon-fiber reinforced thin plastic tubes.
|
the finest level at which micromixing in fluid flows can be investigated is determined by the local gradient of a scalar .the gradient indeed rules molecular diffusion , but also gives a precise insight into the small - scale structure of scalar fields and mixing patterns .actually , it is the mean dissipation rate of the energy of scalar fluctuations , , the so - called scalar dissipation with the molecular diffusivity and the scalar gradient , which reveals the efficiency of micromixing .modelling small - scale mixing in process and chemical engineering or in combustion flow computation thus needs to understand the very mechanisms of scalar gradient production . on the basic level ,the study of the scalar gradient is relevant to the general problem of vector transport in fluid flows including the kinematics of vectors defining material lines or surfaces , the vorticity vector properties , the dynamics of the vorticity gradient in two - dimensional flows as well as the production of the magnetic field by the motion of a conducting fluid . a number of studies have tackled the connection of the detailed features of the scalar gradient and of the scalar dissipation with the properties of the flow field determined by the local velocity gradients . clearly , in scalar gradient production , strain through its intensity , persistence and the respective alignments of scalar gradient and principal axes is the chief mechanism , while vorticity , at least its magnitude , is immaterial . because of the tight interaction between strain and vorticity , however , vorticity properties are likely to be indirectly involved .in fact , the role of strain in the vicinity of vorticity structures has early been established .models based on stretched vortices have also been shown to reproduce the physics of scalar transport and mixing in turbulent flows .more recently the influence of vortical structures on mixing has been clearly shown .the present work , too , has to do with the relationship between mixing properties and the local features of the flow field .the production of scalar dissipation , and thus the mechanisms of micromixing , are probed through the kinematic features of the scalar gradient in terms of vorticity geometry .since vorticity alignments arise from the dynamics of the velocity field and are closely connected to the inner , detailed structure of turbulent flows , alignment of vorticity with respect to strain principal axes is especially considered .this article reports an extension of the findings presented in reference . the stochastic lagrangian model used in the study is described in section [ sec2 ] and its ability to predict statistics conditioned on vorticity alignments is checked in section [ sec3 ] .making use of the model results , the scrutiny of scalar dissipation in terms of vorticity geometry , including the analysis based on local flow structure , is achieved in section [ sec4 ] .conclusion is drawn in section [ sec5 ] .the model for the velocity gradient tensor has been derived by chevillard and meneveau and has been shown to predict the essential geometric properties and anomalous scalings of incompressible , isotropic turbulence .starting from an eulerian - lagrangian change of variables and using the recent fluid deformation approximation the modelled equation for the velocity gradient tensor , , is derived as in which is the integral time scale and is a model for the cauchy - green tensor , , where is the kolmogorov time scale . 
forcing is ensured by the increment of a tensorial wiener process, , where is a tensorial, gaussian, delta-correlated noise with and . this model has been extended to the gradient of a passive scalar. the modelled equation for the scalar gradient is written where is the scalar integral time scale and is the increment of a wiener process, where is a vectorial gaussian noise such that and . in the model represented by eqs. ([eq1]) and ([eq2]) stretching is exactly taken into account, while models are devised for the pressure hessian second term of eq. ([eq1]), viscous effects third term of eq. ([eq1]) and molecular diffusion second term of eq. ([eq2]). meneveau has given a detailed discussion of this class of stochastic lagrangian models. time scales are normalised by the integral time scale ( ). as in reference , the kolmogorov time scale and the scalar integral time scale are respectively prescribed as , which corresponds to a taylor microscale reynolds number, , close to 150, and . equations ([eq1]) and ([eq2]) are solved using a second-order predictor-corrector scheme. the calculation is run for with time step , and the statistics of the velocity and scalar gradients are derived from their respective stationary time signals. the model retrieves the main features of the scalar gradient statistics and kinematics, namely the non-gaussian properties of the scalar gradient components, the probability density functions (p.d.f.s) of the production of the scalar gradient norm, the statistical alignments with respect to the strain principal axes and vorticity, as well as more subtle features already underlined in two-dimensional turbulence, such as the existence of special preferential alignments. it has also been shown to reproduce the statistics of the scalar gradient in rotating turbulence. this lagrangian approach has been used to model the evolution of the turbulent magnetic field as well. additional assessment of the model in connection with the present study relates to statistics conditioned on vorticity alignments. the latter are taken from the direct numerical simulations (dns) by tsinober _et al._, and comparisons with the model predictions are made in figs. [fig1]-[fig3]. the strain eigenvalues are denoted by and the corresponding eigenvectors by ; the s are such that with and , and define, respectively, the extensional, intermediate and compressional strain principal axes. figure [fig1] displays the normalised average of the intermediate strain eigenvalue conditioned on the alignment of vorticity, , with respect to the intermediate strain eigenvector, and shows that the increase of with is reasonably predicted by the model for the whole field as well as for small vorticity. figure [fig2] relates to enstrophy production. the model overpredicts the production rate of enstrophy for the strongest alignments between vorticity and the intermediate eigenvector, but displays the right trend for both the whole field and small vorticity, namely the rise of the enstrophy production rate as the alignment gets tighter. the differences between the model predictions and the dns data shown in figs. [fig1] and [fig2] are not explained by a reynolds number dependence. in fact, tsinober _et al._ suggest that their dns results, although derived at , are likely to be almost reynolds-number independent. the model results, displayed for 150 and 75, are consistent with this surmise. in agreement with the numerical simulations of tsinober _et al._ (see their fig. 11), the model also correctly predicts the normalised mean enstrophy production terms conditioned on the alignment between vorticity and the extensional strain eigenvector (fig. [fig3]).
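a minimal integration sketch of this class of models is given below, assuming the recent-fluid-deformation closure of chevillard and meneveau for the drift term; the paper uses a second-order predictor-corrector scheme and a specific forcing normalization, while plain euler-maruyama and a unit forcing amplitude are used here for brevity, so the parameters may need tuning to obtain a statistically stationary signal.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
tau, dt, nsteps = 0.1, 1e-3, 20000   # kolmogorov time, time step (integral scale = 1)
a = np.zeros((3, 3))                 # velocity gradient tensor

def drift(a):
    # assumed recent-fluid-deformation closure (chevillard & meneveau 2006):
    # modeled cauchy-green tensor, pressure-hessian and viscous terms.
    c = expm(tau * a) @ expm(tau * a.T)
    cinv = np.linalg.inv(c)
    return (-a @ a
            + (np.trace(a @ a) / np.trace(cinv)) * cinv   # pressure hessian model
            - (np.trace(cinv) / 3.0) * a)                 # viscous relaxation model

series = []
for _ in range(nsteps):
    dw = rng.normal(size=(3, 3)) * np.sqrt(dt)   # simplified tensorial forcing
    dw -= (np.trace(dw) / 3.0) * np.eye(3)       # keep the increment trace-free
    a = a + drift(a) * dt + dw
    series.append(a.copy())
```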
as scalar dissipation is in direct proportion to the square of the scalar gradient norm, the production mechanisms of the scalar gradient reveal the way in which it is promoted by the flow field. using the model, we specifically focus on the case of significant alignment of vorticity with the strain principal axes. the latter is defined by , where is a threshold spanning the range 0.7 to 0.99, and alignment of vorticity with a strain eigenvector, , is denoted by . the major part 92% of the results corresponds to vorticity making an angle smaller than ( ) with one of the strain eigenvectors; more precisely, 22% for , 55% for and 15% for . vorticity aligning with is mostly stretched; the intermediate eigenvalue, , is positive in more than of the -events 83% for , 86% for . figure [fig4] clearly shows the dependence of scalar gradient production on vorticity alignments. the averages of the scalar gradient norm, , and of its production term, (where is the strain tensor), conditioned on display the same trend: their largest values correspond to strong alignment of vorticity with , while mean values conditioned on alignment with are closer to the unconditioned averaged values. alignment of vorticity with the compressional eigenvector, , corresponds to the smallest production. these results thus suggest that the most intense scalar dissipation occurs for , while the mean scalar dissipation is rather well represented by the set of events for which . as shown in fig. [fig5], when vorticity aligns with a strain eigenvector the intermediate strain causes destruction of the scalar gradient, and the compressional and the extensional strains, as expected, essentially cause production and destruction, respectively. in addition, the differences in scalar gradient production resulting from the vorticity alignments stem from rather subtle mechanisms. from fig. [fig5] it is clear that both the weakest production and destruction occur for . production by the compressional strain as well as destruction by the extensional strain are the largest for , while in this latter case destruction by the intermediate strain coincides with its unconditioned value. for , production by the compressional strain is close to the unconditioned mean value, destruction by the extensional strain is weak, and destruction by the intermediate strain is the largest. these results are supported by the p.d.f.s of and of the production term (fig. [fig6]) conditioned on vorticity alignments. in particular, it is clear that extreme values of are the most probable when . it also appears that both the largest destruction and production of the scalar gradient occur in this case. the above picture is explained by both the respective intensities of the strain components and the alignments of the scalar gradient with respect to the strain principal axes (figs. [fig7] and [fig8]). when , the scalar gradient alignment with both and is rather good, but the weak intensities of the extensional and intermediate strains result in a weak destruction. however, a small compressional strain intensity together with a poor alignment of the scalar gradient with also brings about a weak production.
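the conditional statistics discussed above can be reproduced from any time series of modelled gradients with a short post-processing step, sketched below; the 0.85 alignment threshold (inside the stated 0.7-0.99 range) and the production term written as -g^t s g are assumptions of the sketch, and the function and variable names are illustrative.

```python
import numpy as np

def conditioned_production(a_samples, g_samples, threshold=0.85):
    """classify samples by the strain eigenvector that vorticity aligns
    with, and average the scalar-gradient production -g^t s g in each
    class (a sketch of the conditioning described above)."""
    prod = {"compressional": [], "intermediate": [], "extensional": []}
    names = ["compressional", "intermediate", "extensional"]  # eigh: ascending order
    for a, g in zip(a_samples, g_samples):
        s = 0.5 * (a + a.T)                                   # strain-rate tensor
        w = np.array([a[2, 1] - a[1, 2],                      # vorticity vector
                      a[0, 2] - a[2, 0],
                      a[1, 0] - a[0, 1]])
        _, vecs = np.linalg.eigh(s)
        cos = np.abs(vecs.T @ (w / np.linalg.norm(w)))        # |cos| with each axis
        i = int(np.argmax(cos))
        if cos[i] >= threshold:                               # significant alignment
            prod[names[i]].append(-g @ s @ g)
    return {k: float(np.mean(v)) for k, v in prod.items() if v}
```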
with regard to the difference in scalar gradient production when or : the extensional strain eigenvalue is the largest and alignment of the scalar gradient with is the best for which explains the largest destruction in this case ; however , for , the compressional strain eigenvalue is the largest and alignment of with is slightly better which makes production larger when than when ; furthermore , because the difference in the statistics of the intermediate strain eigenvalue between cases and is small and aligns better with for , destruction by the intermediate strain is larger when . better alignment of with for and with for results from the trend of the scalar gradient and vorticity to be normal to each other which is predicted by the model .in addition , greater absolute values of the s for are consistent with the fact that moderate and strong production of strain occurs when vorticity strongly aligns with the intermediate strain eigenvector and is rather misaligned with respect to the extensional strain .it has also been shown that strong alignment of vorticity with is correlated with large strain intensity .the model reproduces the latter property as shown in fig .[ fig9 ] which compares rather well with fig .9(d ) of reference .to summarize , this analysis suggests that the budget of scalar gradient production resulting in a more intense scalar dissipation for than for is explained by the following mechanisms : both extensional strain intensity and scalar gradient alignment explain the difference in the destruction of scalar gradient by the extensional strain , while alignment of the scalar gradient is the main mechanism resulting in a difference in destruction by the intermediate strain ; and the difference in the production by compressional strain is to be essentially put down to the compressional strain intensity .+ strain persistence was originally defined in two - dimensional flows .it can be extended to the three - dimensional case when vorticity is closely aligned with a strain eigenvector and used to check whether the flow is locally strain- or rotation - dominated .the strain persistence parameters are computed for and are respectively given by : , and with ; the s are the components of the rotation rate of strain principal axes computed as : , and where the s are the components of the pressure hessian tensor , with and standing for density and pressure , respectively modelled as shown in section [ sec2.1 ] .hatted quantities indicate components in the strain basis .prevailing strain is defined by , while indicates prevailing rotation . by including the rotation rate of strain principal axes in addition to vorticity, strain persistence considers the effective rotation rate and was shown to give a better estimate of local stirring properties than criteria just allowing for vorticity .the mechanisms of scalar gradient production can be analysed in terms of prevailing strain _ vs. _ prevailing rotation from table [ tab1 ] . within the text conditioned mean values such as .[ tab1 ] in the -sample strain is as frequent as rotation , while the -sample is slightly rotation - dominated which is consistent with a larger scalar dissipation for than for .this result also suggests that strain is statistically more persistent when vorticity aligns with the intermediate strain eigenvector and may explain the slightly better alignment with the compressional strain direction in this case ( fig .[ fig8 ] ) . 
the sample , by contrast , is found to be strongly rotation - dominated ; in these compressed - vorticity events small compressional strain intensity together with a poor alignment of the scalar gradient with result in low levels of production for both prevailing strain and rotation .as expected , the largest values of as well as the largest production are found for and . in agreement with previous studies ,production mainly results from prevailing - strain events .however , is significant for prevailing rotation as it takes values greater than the unconditioned average for both and .in addition , the difference between and clearly arises from the rotation - dominated events ; the difference in is indeed 3.4% for and respectively , 7.95 and 8.22 , while it reaches 27% for and respectively , 5.39 and 6.85 . for , is smaller than its unconditioned value for both prevailing strain and rotation .the net production confirms the role of rotation events in the difference between cases and : the same amount , 17.4 , is found for prevailing strain , while for prevailing rotation the net production is equal to 6.36 and 9.66 , respectively , namely a difference as large as 52% . as the differences in total destruction by the extensional and the intermediate strains are of the same order for both prevailing strain and prevailing rotation , the large difference in net production for prevailing rotation is mainly explained by production resulting from the compressional strain : 12% for prevailing strain 19.7 and 22.0 for and , respectively and 83% for prevailing rotation 8.04 and 14.7 , respectively .it is also worth noting that production for prevailing rotation when is not insignificant since both production by the compressional strain 14.7 and the net production 9.66 are greater than the corresponding unconditioned values , 12.0 and 9.35 , respectively .the mechanisms put forward in section [ sec4.1 ] to explain the difference in scalar gradient production between and are emphasized by prevailing rotation .the mean values of the extensional and compressional strains , and , are greater when than when for both prevailing strain and rotation , but the difference is larger for prevailing rotation : for the difference is 38% for prevailing strain respectively , 1.87 and 2.58 and 63% for prevailing rotation respectively 1.55 and 2.52 ; for these differences are respectively 19% and 54% .although the difference in the alignment of with respect to is almost the same for prevailing strain and prevailing rotation 27% and 28% , the difference in the alignment with is larger for prevailing rotation 14% and 38% .scalar dissipation has been analysed through scalar gradient production using a stochastic lagrangian model for the velocity and the scalar gradients which reproduces the essential dynamic and kinematic properties of isotropic turbulence .the study was specifically focused on the connection between vorticity geometry and scalar dissipation , and thus small - scale mixing .the model results show that scalar dissipation is mainly found when vorticity is stretched .more precisely , for vorticity aligning with the extensional strain , scalar dissipation is close to its unconditioned mean value , while the most intense scalar dissipation and therefore the most efficient small - scale mixing occurs for vorticity aligning with the intermediate strain .scalar dissipation is significantly lower when vorticity is compressed .the difference in scalar dissipation when vorticity aligns with either the extensional or 
the intermediate strain is to be put down to the interplay of mechanisms involving the strain intensities and the alignment of the scalar gradient with respect to the strain principal axes. In brief, when vorticity aligns with the intermediate strain, both a larger extensional strain intensity and a better alignment of the scalar gradient with the extensional strain result in a larger destruction of the scalar gradient norm than when vorticity aligns with the extensional strain; however, this effect is exceeded by production caused by compression, essentially through a larger compressional strain intensity; in addition, it is mainly the misalignment of the scalar gradient that causes a lesser destruction by the intermediate strain. For vorticity aligning with the compressional strain direction, weak production of the scalar gradient results from both the small intensity of the compressional strain and the misalignment between the scalar gradient and this direction. Finally, the latter mechanisms, and especially those explaining the difference in scalar gradient production between the two stretched-vorticity alignments, are retrieved in the analysis in terms of local flow structure. Although scalar gradient production mostly stems from prevailing-strain events, it appears that this difference is largest in rotation-dominated events, especially with regard to the extensional and compressional strain intensities.

J. Bałdyga and R. Pohorecki, Turbulent micromixing in turbulent reactors: a review, Chem. Eng. J. 58 (1995).
R.W. Bilger, Some aspects of scalar dissipation, Flow Turbul. Combust. 72 (2004), pp. 93-114.
P.G. Saffman, On the fine-scale structure of vector fields convected by a turbulent fluid, J. Fluid Mech. 16 (1963), pp. 545-572.
S.S. Girimaji and S.B. Pope, Material-element deformation in isotropic turbulence, J. Fluid Mech. 220 (1990).
K.K. Nomura and G.K. Post, The structure and dynamics of vorticity and rate of strain in incompressible homogeneous turbulence, J. Fluid Mech. 377 (1998).
B. Lüthi, A. Tsinober, and W. Kinzelbach, Lagrangian measurements of vorticity dynamics in turbulent flows, J. Fluid Mech. 528 (2005).
T. Dubos and A. Babiano, Comparing the two-dimensional cascades of vorticity and a passive scalar, J. Fluid Mech. 492 (2003).
B. Favier and P.J. Bushby, Small-scale dynamo action in rotating compressible convection, J. Fluid Mech. 690 (2012).
R.M. Kerr, Higher-order derivative correlations and the alignment of small-scale structures in isotropic numerical turbulence, J. Fluid Mech. 153 (1985), pp. 31-58.
W.T. Ashurst, A.R. Kerstein, R.M. Kerr, and C.H. Gibson, Alignment of vorticity and scalar gradient with strain rate in simulated Navier-Stokes turbulence, Phys. Fluids 30 (1987), pp. 2343-2353.
G.R. Ruetsch and M.R. Maxey, The evolution of small-scale structures in homogeneous isotropic turbulence, Phys. Fluids A 4 (1992), pp. 2747-2760.
A. Pumir, A numerical study of the mixing of a passive scalar in three dimensions in the presence of a mean gradient, Phys. Fluids 6 (1994), pp. 2118-2132.
M. Holzer and E.D. Siggia, Turbulent mixing of a passive scalar, Phys. Fluids 6 (1994).
K.A. Buch and W.J.A. Dahm, Experimental study of the fine-scale structure of conserved scalar mixing in turbulent shear flows. Part 1, J. Fluid Mech. 317 (1996).
K.A. Buch and W.J.A. Dahm, Experimental study of the fine-scale structure of conserved scalar mixing in turbulent shear flows. Part 2, J. Fluid Mech. 364 (1998).
Z. Warhaft, Passive scalars in turbulent flows, Annu. Rev. Fluid Mech. 32 (2000), pp. 203-240.
P. Vedula, P.K. Yeung, and R.O. Fox, Dynamics of scalar dissipation in isotropic turbulence: a numerical and modelling study, J. Fluid Mech. 433 (2001).
G. Brethouwer, J.C.R. Hunt, and F.T.M. Nieuwstadt, Micro-structure and Lagrangian statistics of the scalar field with a mean gradient in isotropic turbulence, J. Fluid Mech. 474 (2003).
G. Gulitski, M. Kholmyanski, W. Kinzelbach, B. Lüthi, A. Tsinober, and S. Yorish, Velocity and temperature derivatives in high-Reynolds-number turbulent flows in the atmospheric surface layer. Part 3. Temperature and joint statistics of temperature and velocity derivatives, J. Fluid Mech. 589 (2007).
H. Abe, R.A. Antonia, and H. Kawamura, Correlation between small-scale velocity and scalar fluctuations in a turbulent channel flow, J. Fluid Mech. 627 (2009).
A. Tsinober, Vortex stretching versus production of strain/dissipation, in Turbulence Structure and Vortex Dynamics, J.C.R. Hunt and J.C. Vassilicos, eds., Cambridge University Press, 2000.
D.I. Pullin and T.S. Lundgren, Axial motion and scalar transport in stretched spiral vortices, Phys. Fluids 13 (2001), pp. 2553-2563.
B. Kadoch, K. Iyer, D. Donzis, K. Schneider, M. Farge, and P.K. Yeung, On the role of vortical structures for turbulent mixing using direct numerical simulation and wavelet-based coherent vorticity extraction, J. Turbulence 12 (2011).
A. Tsinober, Is concentrated vorticity that important?, Eur. J. Mech. B/Fluids 17 (1998), pp. 421-449.
M. Gonzalez and P. Paranthoën, Influence of vorticity alignment upon scalar gradient production in three-dimensional, isotropic turbulence, J. Phys.: Conf. Ser. 318 (2011), 052041; http://iopscience.iop.org/1742-6596/318/5/052041.
L. Chevillard and C. Meneveau, Lagrangian dynamics and statistical geometric structures of turbulence, Phys. Rev. Lett. 97 (2006), 174501.
L. Chevillard, C. Meneveau, L. Biferale, and F. Toschi, Modeling the pressure Hessian and viscous Laplacian in turbulence: comparisons with DNS and implications on velocity gradient dynamics, Phys. Fluids 20 (2008), 101504.
M. Gonzalez, Kinematic properties of passive scalar gradient predicted by a stochastic Lagrangian model, Phys. Fluids 21 (2009), 055104.
C. Meneveau, Lagrangian dynamics and models of the velocity gradient tensor in turbulent flows, Annu. Rev. Fluid Mech. 43 (2011).
W.C. Welton and S.B. Pope, PDF model calculations of compressible turbulent flows using smoothed particle hydrodynamics, J. Comput. Phys. 134 (1997).
G. Lapeyre, P. Klein, and B.L. Hua, Does the tracer gradient vector align with the strain eigenvectors in 2D turbulence?, Phys. Fluids 11 (1999).
Y. Li, Small-scale intermittency and local anisotropy in turbulent mixing with rotation, J. Turbulence 12 (2011).
T. Hater, H. Homann, and R. Grauer, Lagrangian model for the evolution of turbulent magnetic and passive scalar fields, Phys. Rev. E 83 (2011), 017302.
A. Tsinober, L. Shtilman, and H. Vaisburd, A study of properties of vortex stretching and enstrophy generation in numerical and laboratory turbulence, Fluid Dyn. Res. 21 (1997).
Z.-S. She, E. Jackson, and S.A. Orszag, Structure and dynamics of homogeneous turbulence: models and simulations, Proc. R. Soc. Lond. A 434 (1991), pp. 101-124.
M. Tabor and I. Klapper, Stretching and alignment in chaotic and turbulent flows, Chaos, Solitons & Fractals 4 (1994).
A. Garcia and M. Gonzalez, Analysis of passive scalar gradient in a simplified three-dimensional case, Phys. Fluids 18 (2006), 058101.
|
The mechanisms promoting scalar dissipation through scalar gradient production are scrutinized in terms of vorticity alignment with respect to the strain principal axes. For that purpose, a stochastic Lagrangian model for the velocity gradient tensor and the scalar gradient vector is used. The model results show that the major part of scalar dissipation occurs for stretched vorticity, namely when the vorticity vector aligns with the extensional and intermediate strain eigenvectors. More specifically, it appears that the mean scalar dissipation is well represented by the sample defined by alignment with the extensional strain, while the most intense scalar dissipation is promoted by the set of events for which vorticity aligns with the intermediate strain. This difference is explained by rather subtle mechanisms involving the statistics of both the strain intensities and the scalar gradient alignment resulting from these special alignments of vorticity. The analysis allowing for the local flow structure confirms the latter scenario for both the strain- and rotation-dominated events. However, despite the prevailing role of strain in promoting scalar dissipation, the difference in the level of scalar dissipation when vorticity aligns with either the extensional or the intermediate strain mostly arises from rotation-dominated events.

Keywords: mixing; scalar dissipation; scalar gradient; vorticity geometry.
|
Here we show again the trajectories taken by the different algorithms as in Fig. 1 of the main text, but now plotted against time rather than iterative steps. [Figure: the deviation versus time taken for the different algorithms, for the experimental data considered in the main text.]

We discuss various technical details pertaining to the APG and/or CG-APG algorithms described in the main text. As explained there, the APG algorithm relies on a projection to enforce the quantum constraints after each gradient step. The argument of the projection is a Hermitian operator with eigenvalues (in descending order) and corresponding eigenvectors. One projects the vector of eigenvalues onto the probability simplex, so that the projected eigenvalues are nonnegative and sum to one, and then rebuilds the operator from the projected eigenvalues and the original eigenvectors. The projection onto the simplex is done as follows: find the largest index for which the eigenvalue, shifted by the common amount that restores a unit sum over the leading entries, remains positive; shift all eigenvalues by that amount; and set the remaining negative entries to zero.

During the gradient step of APG, one can wind up outside the physical state space, i.e., at each iterative step the iterate need not be a valid state. It can even happen that not all probabilities p_k needed in the iterative step are positive, for which the merit function is ill-defined because of the logarithm. We prevent this by checking whether any p_k is negative after the iterate is computed, and correcting the step if this happens to be the case. Empirically, we observe such cases to occur only very rarely.

We also incorporated a few small adjustments to APG recommended in the cited reference, as well as the Barzilai-Borwein method for computing step sizes, for better step-size estimation and improved performance in the implementation of the CG-APG algorithm used to produce the figures in the main text. We list those adjustments here. First, for iterative step n, rather than fixing the step size, we set it by the Barzilai-Borwein rule if there was no restart in the previous iteration and the denominator of that rule is nonzero; otherwise we reset it to a pre-chosen constant. We used the constants recommended in the cited reference (see main text). We also use a matching update of the momentum variables, to prevent changes in the step size from affecting convergence: the update rules are exactly those stated in the APG algorithm in the main text, but with the momentum weight replaced by its step-size-adjusted counterpart. Sometimes we observe that standard APG as originally prescribed fails to restart early enough for good performance. We hence use a stricter restart criterion: restart when the restart quantity exceeds a small positive threshold, rather than zero (a small positive value was used for the graphs in the main text).

For the CG-APG algorithm, as explained in the main text, one would like to start with CG iterations and switch to APG when the Hessian stabilizes, i.e., when it changes only a little with further APG steps. This happens when the trajectory is sufficiently close to the MLE. Here, we explain the technical details of this switchover. The Hessian characterizes the local quadratic structure of f: it is the "second derivative" of f, obtained by considering the second-order variation of f beyond the first-order variation that defines the gradient, with two independent infinitesimal variations of the state. A little algebra identifies the resulting linear operator on the space of state variations as the Hessian of f at the current state.
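Returning for a moment to the projection described at the start of this section: it is short enough to spell out in code. The following is a minimal NumPy sketch; the names `project_simplex` and `project_to_states` are our own illustrative labels, not names from the paper.

```python
import numpy as np

def project_simplex(lam):
    """Euclidean projection of a real vector onto the probability simplex,
    by the standard sort-and-threshold rule described above."""
    u = np.sort(lam)[::-1]                    # eigenvalues in descending order
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    k = ks[u + (1.0 - css) / ks > 0][-1]      # largest index kept positive
    tau = (1.0 - css[k - 1]) / k              # common shift restoring unit sum
    return np.maximum(lam + tau, 0.0)

def project_to_states(A):
    """Project a Hermitian matrix onto the quantum state space (unit trace,
    positive semidefinite), nearest in Hilbert-Schmidt distance."""
    lam, V = np.linalg.eigh(A)
    mu = project_simplex(lam)
    return (V * mu) @ V.conj().T              # rebuild sum_j mu_j |v_j><v_j|
```

The same routine serves as the projection step of the APG iteration in the main text.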
The eigenvalues of the Hessian give the local quadratic structure of f. Ideally, determining the right time during CG to switch to APG requires computing how much the Hessian changes across successive APG steps from the current iterate. However, this would be very costly: it is as if one were running APG alongside CG, and the Hessian is a large matrix (quadratically larger than the state itself) and hence expensive to compute. Instead, we adopt a compromise that works well in practice: (1) we treat the POM outcomes as if they were all mutually orthogonal, so that the eigenvalues of the Hessian reduce to per-outcome quantities cheaply computable from the f_k and p_k already used in the algorithm, and (2) we look at the change in these quantities between iterations of CG instead of between iterations of APG. The outcomes are never exactly mutually orthogonal for informationally complete measurements, but a good tomographic design seeks to spread out their directions, and for large-dimensional situations their mutual overlaps are small, so this is a good enough proxy for the eigenvalues of the Hessian. While a small change across iterations of CG does not always guarantee a similarly small change for APG, it signals either closeness to the MLE or that CG has stagnated; in either case, one should switch to APG. Thus, in our implementation of CG-APG, we first initialize CG with the maximally mixed state and switch to APG at the first iteration for which the overlap between the proxy vectors of subsequent CG iterations exceeds a chosen threshold; the switchover thus occurs when the angle between these vectors is small enough. We find that a suitably small angle threshold works well in practice.

Figure 1 in the main text shows the trajectories taken by the different algorithms for the experimental data of the 8-qubit experiment cited there, for a noisy W state. There, we saw a long initial slow phase of APG, which is in fact atypical of the behavior seen for generic states. Figure [fig:suppmat2] shows the more representative behavior for a random 8-qubit pure state with 10% added white noise. As in Fig. 1, 100 copies are measured for each of the settings of the 8-qubit product-Pauli POM. Observe the significantly shorter initial slow phase of APG compared with the noisy W state of Fig. 1. [Figure: trajectories taken by the different algorithms as in Fig. 1 of the main text, but for simulated data generated from a random 8-qubit pure state with 10% added white noise.]

Here, we present the counting argument that gives the computational cost of evaluating a full set of Born probabilities after making use of the product structure of the POM. To remind the reader of the notation: the system comprises K registers, each of dimension q; the POM on each register has m outcomes; a K-register POM outcome is a tensor product of single-register outcomes; and the total dimension is d = q^K. We also need the following basic fact: evaluating the product of an a x b matrix and a b x c matrix requires O(abc) operations (elementary additions/multiplications). In each step of the procedure described in the main text, one needs to evaluate the partially traced operator for a given prefix of outcomes. When the current operator acts on n registers, it is a q^n x q^n matrix, viewed as a q^{n-1} x q^{n-1} array of q x q submatrices; one evaluation replaces each submatrix by its trace against a single-register outcome, at a cost of O(q^2) per submatrix, i.e., O(q^{2n}) operations in total. One incurs this cost once for every choice of the outcome on the register being traced out and for every prefix of earlier outcomes, so the total cost of evaluating a full set of Born probabilities adds up to

\[ \sum_{n=1}^{K} m^{\,K-n+1}\, q^{2n} . \]

For q^2 >= m, as is usually the case, this gives the dominant computational cost of O(m d^2); for q^2 < m, one has instead the cost of O(m^K q^2).
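The switchover monitor itself reduces to an angle test between the proxy vectors of two consecutive CG iterations. A sketch follows; the precise definition of the proxy vectors and the numerical threshold are not preserved in the text above, so both are left as arguments here.

```python
import numpy as np

def should_switch_to_apg(v_prev, v_curr, delta):
    """Return True once the angle (in radians) between the Hessian-proxy
    vectors of two consecutive CG iterations falls below `delta`."""
    cos_t = np.vdot(v_prev, v_curr).real / (
        np.linalg.norm(v_prev) * np.linalg.norm(v_curr))
    # Clip guards against round-off pushing |cos| slightly above 1.
    return np.arccos(np.clip(cos_t, -1.0, 1.0)) < delta
```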
M. Grant and S. Boyd, Graph implementations for nonsmooth convex programs, in V. Blondel, S. Boyd, and H. Kimura, eds., Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences (Springer, 2008).
H. Häffner, W. Hänsel, C.F. Roos, J. Benhelm, D. Chek-al-Kar, M. Chwalla, T. Körber, U.D. Rapol, M. Riebe, P.O. Schmidt, C. Becher, O. Gühne, W. Dür, and R. Blatt, Nature (London) 438, 643 (2005).
R.J. Bruck, J. Math. Anal. Appl. 61, 159 (1977).
G.B. Passty, J. Math. Anal. Appl. 72, 383 (1979).
Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course (Kluwer Academic, Dordrecht, 2004).
|
conventional methods for computing maximum - likelihood estimators ( mle ) often converge slowly in practical situations , leading to a search for simplifying methods that rely on additional assumptions for their validity . in this work , we provide a fast and reliable algorithm for maximum likelihood reconstruction that avoids this slow convergence . our method utilizes an accelerated projected gradient scheme that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods . we demonstrate the power of our approach by comparing its performance with other algorithms for -qubit state tomography . in particular , an 8-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data , with far higher accuracy than previously possible . this refutes the common claim that mle reconstruction is slow , and reduces the need for alternative methods that often come with difficult - to - verify assumptions . the same algorithm can be applied to general optimization problems over the quantum state space ; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints . _ introduction. _ efficient and reliable characterization of properties of a quantum system , for example , its state or the process it is undergoing , is needed for the success of any quantum information processing task . such are the goals of quantum tomography , broadly classified into state tomography and process tomography . process tomography can be recast as state tomography via the well - known choi - jamiolkowski isomorphism ; we hence restrict our attention to state tomography . tomography is a two - step process : the first is data gathering via appropriate measurements of the quantum system ; the second is the estimation of the state from the gathered data . this second step is the focus of this article . a popular estimation strategy is that of the maximum - likelihood estimator ( mle ) from standard statistics , a matter of convex optimization . computing the mle for quantum tomography is , however , not straightforward due to the constraints imposed by quantum mechanics . while general - purpose and easy - to - use convex optimization toolboxes ( e.g. , cvx ) are available for small - sized problems , it is clear that specially adapted mle algorithms are needed for tackling useful system sizes . past mle algorithms incorporate the quantum constraints by going to the _ factored space _ ( see definition later ) where the quantum constraints are satisfied by construction via a many - to - one map back to the state space . gradient methods can then be straightforwardly employed in the now - unconstrained factored space . these algorithms can be slow in practice , with an extreme example of an 8-qubit situation purportedly ( see refs . ) requiring _ weeks _ of computation time , to find the mle , together with bootstrapped error bars ( 10 mle reconstruction in all ) , for the measured data . this has triggered a search for alternative approaches to mle reconstruction , specializing to circumstances in which certain assumptions about the system are applicable , permitting simpler and hence , faster , reconstruction . yet , the mle approach provides a principled estimation strategy , and is still one of the most popular methods for experimenters . the mle gives a justifiable point estimate for the state . 
it is the natural starting point for different ways of quantifying the uncertainty in the estimate : one can bootstrap the measured data and quantify the scatter in the mles for simulated data ; confidence regions can be established starting from the mle point estimator ( this is standard in statistics , but a recent discussion can be found in ) ; credible regions for the actual data are such that the mle is the unique state contained in every error region . it is thus worthwhile to pursue better methods for finding the mle . here , we present a fast algorithm to accurately compute the mle from tomographic data . the computation of the mle for a single set of data for the 8-qubit situation mentioned above now takes less than a minute , and returns a far more accurate answer than previous algorithms in the same amount of time . the speedup and accuracy originate from two features introduced here : ( i ) the cg - apg " algorithm that combines an accelerated projected - gradient ( apg ) approach , which overcomes convergence issues of previous methods , with the existing conjugate - gradient ( cg ) algorithm ; ( ii ) the use of the product structure ( if present ) of the tomographic measurements to speed up each iterative step . the cg - apg algorithm gives faster and more accurate reconstruction whether or not the tomographic measurements are of product structure ; the product structure , if present , can also be employed to speed up previous mle algorithms . _ the problem setup_. in a typical quantum tomography scenario , independently and identically prepared copies of the quantum state are measured one - by - one via a set of measurement outcomes , with and . is formally known as a povm ( positive operator - valued measure ) or a pom ( probability - operator measurement ) . the measured data consist of a sequence of detection events , where records the click of the detector for outcome for the copy measured . the likelihood for data given state is }^{f_k}\right\}}^n\,,\ ] ] where is the probability for outcome , is the total number of clicks in detector , and is the relative frequency . the mle strategy views the likelihood as a function of for the obtained , and identifies the quantum state ( the statistical operator or density matrix ) , with and , that maximizes as the best guess the mle . this can be phrased as an optimization problem for the normalized negative log - likelihood , : the domain here is the space of bounded operators on the -dimensional hilbert space . we refer to as the quantum constraints . any satisfying is a valid state ; the convex set of all valid states is the quantum state space . is convex , and hence has a unique minimum value , on the quantum state space . furthermore , is differentiable ( except at isolated points ) with gradient , so that for infinitesimal unconstrained . _ the problem of slow convergence_. previous mle algorithms converge slowly to the mle because of the by - construction " incorporation of the quantum constraints : one writes for , and performs gradient descent in the _ factored space _ of unconstrained operators , for . straightforward algebra yields to linear order in . is negative hence walking downhill for , for a suitably chosen small . this choice of prescribes a -update of the form \ ] ] to linear order in . comprises two terms , each with as a factor . 
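Before examining the convergence issue further, note that the merit function and gradient defined in the problem setup are straightforward to code up. A minimal NumPy sketch follows, with the POM stored as a stack of outcome matrices (an array layout of our choosing; `neg_log_likelihood` and `gradient` are our own labels):

```python
import numpy as np

def neg_log_likelihood(rho, poms, freqs):
    """f(rho) = -sum_k f_k log p_k with Born probabilities p_k = tr(rho Pi_k).
    rho: (d, d) state; poms: stack of outcome operators; freqs: f_k = n_k/N."""
    probs = np.einsum('kij,ji->k', poms, rho).real
    return -np.sum(freqs * np.log(probs))

def gradient(rho, poms, freqs):
    """Euclidean gradient of f at rho, i.e. -R(rho) with
    R(rho) = sum_k (f_k / p_k) Pi_k, so that df = -tr(d rho R(rho))."""
    probs = np.einsum('kij,ji->k', poms, rho).real
    return -np.einsum('k,kij->ij', freqs / probs, poms)
```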
when the mle is close to the boundary of the state space a typical situation when there are limited data ( unavoidable in high dimensions ) for nearly pure true states eventually gets close to a rank - deficient state and has at least one small eigenvalue . yet , has unit trace , so its spectrum must be highly asymmetric . inherits this asymmetry , leading to a locally ill - conditioned problem and slow convergence . the deviation at each step of the iteration for different algorithms , for the experimental data of . is the smallest value attained among the algorithms ( reached by the apg and cg - apg algorithms when run till further progress is hindered by machine precision ) ; is the corresponding likelihood value . here , , for 100 copies for each of the settings of the 8-qubit product - pauli pom . the dash - dotted line indicates the value obtained in with the dg algorithm . , height=238 ] to illustrate , consider the situation of : tomography of a ( target ) -qubit -state via product - pauli measurements . figure [ fig:8qubit ] shows the trajectories taken by different algorithms from the maximally mixed state to the mle the minimum of the experimental data of . the red and blue lines are for commonly used mle methods : the diluted direct - gradient ( dg ) algorithm and the cg algorithm with step - size optimization via line search . both algorithms walk in the factored space , with dg performing straightforward descent according to eq . , while cg walks along the conjugate - gradient direction . the plot shows the dg and cg iterations initially decreasing quickly , but the advances soon stall , with stagnating at values significantly larger than attainable by the apg and cg - apg algorithms ( explained below ) . note that on average the cg - apg and dg algorithms take about the same time per iterative step ; see appendix [ app0 ] for a graph similar to fig . [ fig:8qubit ] but plotted against time rather than steps . _ the cg - apg algorithm._ the slowdown in convergence for dg and cg puts a severe limit on the accuracy of the mle reconstruction : the analysis of stopped after a long wait at a state with likelihood . that was sufficient for the purpose of to show the establishment of entanglement , but can hardly be considered useful for further mle analysis . the ill - conditioning in the factored space , which leads to the slowdown in dg and cg , can be avoided by walking in the -space . there , has gradient which , unlike that of , is not proportional to . walking in the -space , however , does not ensure the quantum constraints are satisfied . the constraints are instead enforced by projecting the unconstrained operator back into the quantum state space after each gradient step . this is an example of the well - studied and often - used projected - gradient " methods in numerical optimization . in steepest - descent methods , the local condition number of the merit function [ or here ] affects convergence . poor conditioning leads to a steepest - descent direction that oscillates back and forth . one smooths out the approach to the minimum by giving each step some momentum " from the previous step . the cg method implements this for quadratic merit functions ; for projected gradients , accelerated gradient schemes are instead the focus . 
coupled with adaptive restart , the apg method can be thought of as indirectly probing the local condition number by gradually increasing the amount of momentum preserved ( controlled by in the algorithm below ) , and resetting ( ) whenever the momentum causes the current step to point too far from the steepest - descent direction . the apg algorithm of refs . , in -space , thus proceeds as follows : given , , and . initialize , . set , , . ( choose step size via backtracking ) set . update , . set ; termination criterion . ( restart ) , , ; ( accelerate ) set , . the operation above projects the hermitian argument to the nearest state [ satisfying constraints ] as measured by the euclidean distance . one can also modify the backtracking portion of the algorithm for better performance ; see appendix [ appa ] for further details . applying the apg algorithm to the 8-qubit example above , one indeed finds fast convergence to the mle ( see fig . [ fig:8qubit ] ) once the walk brings us sufficiently close ; no slowdown of convergence as seen in dg and cg is observed . apg with adaptive restart exhibits linear convergence ( i.e. , the deviation from the optimal value decreases exponentially ) in areas of strong convexity sufficiently close to the minimum point . far from the minimum , apg can descend slowly , as is clearly visible in fig . [ fig:8qubit ] . cg descent in the factored space , on the other hand , is rapid in this initial phase . similar behavior is observed for other states ( see a representative example in appendix [ appb ] ) , although the initial slow apg phase is usually markedly shorter than in the -state example here . thus , a practical strategy is to start with cg in the factored space to capitalize on its initial rapid descent , and switch over to apg in the -space when the fast convergence of apg sets in , _ provided _ one can determine cheaply when the switch should occur . both the apg and cg algorithms use a local quadratic approximation at each step , the accuracy of which relies on the local curvature , measured by the hessian of the merit function . the advance is quick if the hessian changes slowly from step to step so that prior - step information provides good guidance for the next step . empirically , for nearly pure true states , we observe that the hessian of changes a lot initially in the apg algorithm but settles down close to the mle . this is likely a consequence of the fact that the apg trajectory comes very quickly close to the boundary of the state space , so that some values , which occur in the hessian of as , can be very small and unchecked by the values away from the mle . on the other hand , the hessian of relevant for the cg algorithm is initially slowly changing , but starts fluctuating closer to the mle , likely due to the ill - conditioning in the factored - space gradient discussed previously . with this understanding , the proposal is then to start with cg in the factored space , perform a test along the way to detect when the hessian of settles down , at which point one switches over to apg in the -space for rapid convergence to the minimum . the hessian itself is , however , expensive to compute ; one can instead get a good gauge by monitoring the different values , cheaply computable from the already used in the algorithm ; see appendix a. this then is finally our cg - apg algorithm , with a superfast approach to the mle that outperforms all other algorithms ; see fig . [ fig:8qubit ] . 
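The algorithm listing above survives extraction only partially, so the following Python sketch spells out one standard reading of it: projected gradient steps with Nesterov momentum and an adaptive-restart test in the style of O'Donoghue and Candes. It reuses the `project_to_states`, `neg_log_likelihood`, and `gradient` helpers from the earlier snippets; step sizes come from plain backtracking, so the Barzilai-Borwein refinement of the supplementary material is omitted. This is a hedged sketch, not the authors' code.

```python
import numpy as np

def apg_mle(poms, freqs, rho0, alpha0=1.0, shrink=0.5, tol=1e-12, max_iter=5000):
    """Accelerated projected gradient for the MLE, with adaptive restart."""
    x_old, y, t = rho0, rho0, 1.0
    alpha = alpha0
    f_old = neg_log_likelihood(rho0, poms, freqs)
    for _ in range(max_iter):
        # NB: a full implementation guards against nonpositive p_k at y here.
        f_y = neg_log_likelihood(y, poms, freqs)
        g = gradient(y, poms, freqs)
        while True:                                # backtracking line search
            x_new = project_to_states(y - alpha * g)
            step = x_new - y
            f_new = neg_log_likelihood(x_new, poms, freqs)
            if f_new <= f_y + np.vdot(g, step).real \
                    + np.vdot(step, step).real / (2 * alpha):
                break
            alpha *= shrink
        if np.vdot(y - x_new, x_new - x_old).real > 0:   # adaptive restart
            t, y = 1.0, x_new
        else:                                            # accelerate
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x_old)
            t = t_new
        if abs(f_new - f_old) < tol:
            return x_new
        x_old, f_old = x_new, f_new
    return x_old
```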
_ exploiting the product structure._ part of the speed in the computation of the mle in the 8-qubit example above stems from exploiting the product structure of the situation . for the four algorithms compared , the most expensive part of the computation is the evaluation of the probabilities needed in and , for at each iterative step . for a -dimensional system and pom outcomes , the computational cost for obtaining the full set of is [ there are probabilities , each requiring operations for the trace of a product of two matrices ] . for the 8-qubit example , , and the pom has outcomes . the computational cost can be greatly reduced if one has a product structure : the system comprises registers , and the pom is a product of individual poms on each register . for simplicity , we assume the registers each have dimension , and the pom on each register is the same , written as . the -register pom outcome is then , with and . the generalization to non - identical registers and poms is obvious . the total dimension is and . exploiting this product structure reduces the computational cost of evaluating the probabilities from to ( for ) . for qubits with product - pauli measurements ( , ) , this is a huge reduction from to . the computational savings arise because parts of the evaluation of the probabilities can be re - used . let , the partial trace on the register , for a given . this same can be used to evaluate for any . one does this repeatedly , partial - tracing out the last register each time , until one arrives at the probabilities . at each stage , evaluating from involves computing the trace of with submatrices of . specifically, where with ( similarly for ) , is a submatrix , and the full is a array of these submatrices . getting from simply requires replacing each submatrix in by the number , which takes computations . since each need only be computed once for all subsequent , simple counting ( see appendix [ appc ] ) yields a total computational cost of ( for ) to evaluate the full set of probabilities . ( color only . ) time taken , for a convergence criterion of , for different algorithms on a varying number of qubits . for each , 50 states are used , each a haar - random pure state with 10% added white noise to emulate a noisy preparation . for each state , the different algorithms are run for , where are the born probabilities for the state on the -qubit product - pauli pom . the mle is hence the actual state . the lines labeled np " indicate runs _ without _ using the product structure . these stop at six qubits due to the long time taken . the lines are drawn through the average time taken for each algorithm over the 50 states ; the scatter of the timings are shown only for the algorithms using the product structure . for , cg did not converge within the maximum alloted time ( times that taken by cg - apg / apg ) in 3 out of the 50 states ; these points are plotted with that maximum alloted time ( circled points ) , and the average time taken is hence a lower bound on the actual average for cg . cg failed to converge in a reasonable time for all states beyond 8 qubits ; dg failed to converge beyond 7 qubits . , height=245 ] figure [ fig : nqb ] shows the performance of the different algorithms for a varying number of qubits with and without exploiting the product structure , for the product - pauli measurement . a significant speedup is visible when the product structure is incorporated . 
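The partial-trace recursion just described is easy to realize with tensor reshaping. The sketch below computes the full set of Born probabilities for a K-register product POM by repeatedly contracting the last register, sharing each prefix contraction across all later outcomes exactly as in the counting argument; the function name and array conventions are ours.

```python
import numpy as np

def born_probabilities_product(rho, pom, K, q):
    """All Born probabilities p(k_1,..,k_K) = tr[rho (Pi_{k_1} x..x Pi_{k_K})]
    for a K-register product POM with single-register outcomes pom[k] (q x q)."""
    m = pom.shape[0]
    probs = np.empty((m,) * K)

    def contract_last(T, level, k):
        # T carries tensor indices (i_1..i_level, j_1..j_level); trace out
        # register `level` against Pi_k: sum_{a,b} T[.., a, .., b] Pi_k[b, a].
        return np.tensordot(T, pom[k],
                            axes=([level - 1, 2 * level - 1], [1, 0]))

    def recurse(T, level, suffix):
        if level == 0:
            probs[suffix[::-1]] = T.real     # suffix holds (k_K, ..., k_1)
            return
        for k in range(m):
            recurse(contract_last(T, level, k), level - 1, suffix + (k,))

    recurse(rho.reshape((q,) * (2 * K)), K, ())
    return probs
```

For qubits with the product-Pauli POM (q = 2, m = 6), this reproduces the reduced scaling derived in the counting argument of the supplementary material.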
similar behavior is observed for the commonly used alternative pom , the product - tetrahedron measurement . for comparison , we also display the runtime for the general - purpose cvx toolbox for convex optimization ; the clear disadvantage there is the inability to capitalize on the product structure . note that cvx does not allow direct specification of a convergence criterion on the value as we have done in the other algorithms ; the plotted points are instead verified after the fact to have an average value much less than . all computations are conducted with _ matlab _ on a desktop computer ( 3 ghz intel xeon cpu e5 - 1660 ) . it is important to note that incorporating the product structure in the mle reconstruction is very different from putting in assumptions about the state or the noise : in the former , one knows the structure by design of the tomographic experiment ; the latter assumptions require additional checks of compliance , which can not be guaranteed to be easy or even possible to do . one should also note that tomography experiments with systems larger than a couple of qubits typically employ poms with a product structure , because of the comparative ease in design and construction , so this product assumption is very often satisfied in practice . _ conclusion._ we have demonstrated that , with the right algorithm , mle reconstruction can be done quickly and reliably , with no latent restriction on the accuracy of the mle obtained . as the dimension increases , there is no getting around the fact that any tomographic reconstruction will become very expensive , but our algorithm slows the onset of that point beyond the system size currently accessible in experiments . we note here that our method can be immediately applied to the reconstruction of the mle for process tomography . furthermore , it is a general method for optimization in the quantum state space or other types of constraints , and hence can also be used in other such problems . this work is funded by the singapore ministry of education ( partly through the academic research fund tier 3 moe2012-t3 - 1 - 009 ) and the national research foundation of singapore . the research is also supported by the national research foundation ( nrf ) , prime minister s office , singapore , under its create programme , singapore - mit alliance for research and technology ( smart ) biosystems and micromechanics ( biosym ) irg . hkn is partly funded by a yale - nus college start - up grant . the authors thank c. roos and o. ghne for sharing the experimental data of ref . and information about the mle reconstruction used in that work . zz thanks chenglong bao for his discussions regarding apg and george barbastathis for general discussions . j. shang and z. zhang contributed equally to this work .
|
The classical thermoacoustic tomography (TAT) problem formulates as follows. Consider an object contained in an open set which emits an acoustic pressure wave at the initial time, considered as a Dirac pulse. This wave is modeled as the solution of an initial value problem in which an operator models the acoustic wave propagation. The pressure wave is then observed (e.g., thanks to piezoelectric sensors) and a set of observations is obtained from the solution; this can be expressed through an observation operator mapping a solution to observations. The inverse TAT problem consists in developing and studying methods to reconstruct the initial pressure from the observations, and in defining situations in which this reconstruction is possible. In the three past decades many techniques have been developed, offering effectual results (see, among others, the works cited in the bibliography). The new techniques we propose in Section [iswe] rely on the following idea: if the system we consider is reversible in time, then the initial state to reconstruct can be seen, backward in time, as a state to reach, so that usual control and filtering techniques can be used to solve this inverse problem. For this purpose, we first used the back and forth nudging algorithm. With filtering techniques, such as the Kalman filter and one of its reduced-rank formulations, the SEEK filter, we deal here with possible improvements of this method.

Of course, many assumptions are necessary to obtain a favorable observation situation: the way the wave propagates, depending on the medium and on the kind of wave, together with the final time and the number, size and position of the sensors recording the wave, governs the information contained in the data (see the references therein). Moreover, even if the continuous problem is well set, numerical issues still put up some resistance, such as noisy data, algorithmic complexity, or the appearance of spurious high-frequency oscillations during numerical implementations. We deal here with this issue: we introduce an artificial attenuation term in the numerical scheme that not only yields a regularization of the solution but also corrects degenerate observation configurations for which filtering techniques are not helpful.

This section is devoted to the main results about stabilization of the wave equation and the Kalman-Bucy filter; we then define iterative stabilizing methods. We assume that the wave operator is the d'Alembert operator. If necessary, we can consider the stabilization problem for the corresponding initial value problem instead of the inverse TAT problem; similar considerations are also valid for linear variable-speed (reversible) wave equations. With the usual first-order reformulation, the wave equation writes as an abstract evolution equation, and the observations are defined in a Hilbert space. It is convenient, when working with wave equations, to consider the time derivative of the observations, so that one may work equally with the observations or with their time derivative. In the practice of TAT, we only get observations of the pressure and use its time derivative when needed.

Concerning stabilizability and controllability of wave equations, we have the fundamental criterion: the observation inequality is satisfied if there exists a constant such that the initial energy of every solution is bounded by the energy measured through the observations; this constant is called the observability constant. Indeed, one finds the following result: [prop1] The three following propositions are equivalent:

* the observation inequality is satisfied.
* for every positive-definite self-adjoint feedback operator, the closed-loop operator generates an exponentially stable semigroup on the energy space.
* the system is exactly controllable.

Many geometrical interpretations of the observation inequality have been presented, mostly known as the geometric optics condition (GOC). In particular, these results explain the following heuristic situation: when the observation is taken on an open subset of the domain, enough energy of the solution has to pass through that subset for one to get enough information to reconstruct the initial state. This depends on many parameters, such as the position of the sensors, the speed map of the wave equation, the final time, etc. In this context, we introduce a first reconstruction method, the Kalman-Bucy filter.

We recall the main results concerning the (continuous) Kalman-Bucy filter. It yields a way to approximate the real state that minimizes the error variance in the following situation: assume that the true state solves a linear differential equation and the theoretical model is governed by the same dynamics. Given data, we denote the observation error as the mismatch between the data and the observed model state. The model and observation errors are null-mean white Gaussian noise processes with respective covariance matrices Q and R. Given an initial estimate and its error covariance, the Kalman-Bucy filter consists of two coupled differential equations, one estimating the state and one, of differential Riccati type, propagating the covariance matrix; the Kalman gain is then given by the covariance composed with the observation adjoint and the inverse observation-error covariance.

Let us explain some links between Subsections [osw] and [kseek]. In the filter formulations, the feedback is realized thanks to a gain operator, so one can consider that the gain weights the feedback in comparison with the model; the corresponding feedbacks appear in Subsection [osw] as stabilizing terms. Some results similar to Proposition [prop1] hold for such feedbacks. We study different ways to define the stabilizing operator: first with the simple nudging gain, which fits the framework of Proposition [prop1], and then with filters. This leads to the following back and forth reconstruction algorithms.

We benefit from the reversibility of the wave equation: we go back in time and use the data again during a backward evolution, from the final time to the initial one, to deduce an approximation of the initial state. Such an idea led D. Auroux and J. Blum to define the back and forth nudging (BFN) algorithm; D. Auroux and E. Cosme then improved it with the back and forth SEEK (from private communication; see below for a description of the BF-SEEK). These techniques can be formulated for any sequence of positive feedback operators as follows: given a set of observations and a rough initial estimate, an iterative reconstruction method consists in iterating a back and forth process. The forward solution evolves from the current estimate of the initial state with a feedback term pulling the trajectory toward the observations; the backward solution then solves the equation backward in time from the final data produced by the forward sweep, again with a stabilizing feedback, and its value at the initial time provides the next estimate. The process is then iterated. Concerning the feedback, it is left implicit that time or space derivatives of the observations and of the observed state can be considered. The sign preceding the feedback changes between sweeps for convenience of notation; one can notice that, when the equation contains a first-order time derivative, the backward equation can be rewritten as a forward one thanks to the time-reversal substitution, and the correcting term still helps to stabilize the system. Note that, unlike many usual methods, the model is considered here as a weak constraint, which can be useful since it may not be well known (e.g., regarding the simulation of inhomogeneous acoustic speed models and related issues in TAT).
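As an illustration of the back and forth process just described, here is a minimal 1-D sketch: the wave equation on [0, 1] with homogeneous Dirichlet ends, a plain leapfrog discretization (the paper's theta-FDTD with artificial attenuation is introduced in the next section), and a constant nudging gain k0 acting at the sensor nodes. Grid sizes, the gain value, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bfn_wave_1d(y_obs, sensors, T, nx, nt, k0=50.0, n_sweeps=20):
    """Back and forth nudging for u_tt = u_xx on [0, 1], Dirichlet ends.

    y_obs   : (nt + 1, len(sensors)) pressure observations at the sensors
    sensors : indices of the observed grid nodes
    Returns an estimate of the initial state u(x, 0).
    """
    dx, dt = 1.0 / (nx - 1), T / nt
    c2 = (dt / dx) ** 2                   # squared CFL number; requires c2 <= 1

    def sweep(u_init, v_init, obs):
        """March nt leapfrog steps forward, nudging toward obs[s] at step s."""
        u_prev = u_init.copy()
        u_curr = u_init + dt * v_init              # first-order starting step
        for s in range(1, nt):
            u_next = np.zeros_like(u_curr)         # boundaries stay at zero
            u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                            + c2 * (u_curr[2:] - 2 * u_curr[1:-1]
                                    + u_curr[:-2]))
            # Nudging feedback, active only at the sensor nodes:
            u_next[sensors] += dt * dt * k0 * (obs[s] - u_curr[sensors])
            u_prev, u_curr = u_curr, u_next
        return u_curr, (u_curr - u_prev) / dt      # end state and velocity

    u0, v0 = np.zeros(nx), np.zeros(nx)            # rough initial estimate
    for _ in range(n_sweeps):
        uT, vT = sweep(u0, v0, y_obs)              # forward pass, 0 -> T
        u0, v0 = sweep(uT, -vT, y_obs[::-1])       # backward pass via t -> T - t
        v0 = -v0                                   # undo the time reversal
    return u0
```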
Finally, this kind of algorithm yields successive estimates of the initial object to reconstruct. As explained by Proposition [prop1], one knows that, under favorable observation conditions, both forward and backward equations are related to exponentially stable semigroups, which leads to the convergence of the algorithm.

Consider the 1-D domain and the time interval, both uniformly gridded. We deal with the classical finite-difference time-domain theta-scheme (theta-FDTD) to simulate the wave equation. Sensors are located periodically, every fixed number of grid points from the first one. The discussion above about observability, stabilizability and controllability has its counterpart in this situation: a discrete observation condition occurs, similar to the continuous one, which is likewise equivalent to the discrete stabilizability of the system (we omit the details; see the references therein). In order to compensate for possibly damaged observation conditions, due either to the sensor configuration or to noise, we carry out the solution of adding an artificial viscous-heating attenuation term in the theta-FDTD, which supplements the standard update with a discrete Laplacian of the time increment weighted by a small attenuation coefficient; the same scheme expresses both forward and backward implementations. The attenuation term also allows us to consider derivatives of the correcting term in the feedbacks.

Only the main results about the Kalman filter are given, and we then describe how to derive the SEEK filter from it. These algorithms divide into two steps, a forecast step and an analysis step, in which the observations are taken into account to correct the forecast; two different kinds of parameters are considered: first the states (forecast x^f and analysis x^a), then the relative error covariance matrices (P^f and P^a) or their square roots (S^f and S^a). The definitions of the Kalman and SEEK filters are given, respectively, in the left and right columns below. Given the observation operator C, the observation-error covariance R, the model-error covariance Q and the model evolution M, one iterates:

Analysis step (Kalman filter left, in information form; SEEK right):
\[
K_n=\big[(P^f_n)^{-1}+C^{\mathrm T}R^{-1}C\big]^{-1}C^{\mathrm T}R^{-1},
\qquad
K_n=S^f_n\big[I_r+(CS^f_n)^{\mathrm T}R^{-1}(CS^f_n)\big]^{-1}(CS^f_n)^{\mathrm T}R^{-1},
\]
\[
x^a_n=x^f_n-K_n\big[Cx^f_n-y^o_n\big] \quad\text{(both columns)},
\]
\[
P^a_n=\big[I_N-K_nC\big]P^f_n,
\qquad
S^a_n=S^f_n\big[I_r+(CS^f_n)^{\mathrm T}R^{-1}(CS^f_n)\big]^{-1/2}.
\]
Forecast step:
\[
x^f_{n+1}=Mx^a_n,\qquad P^f_{n+1}=MP^a_nM^{\mathrm T}+Q,\qquad S^f_{n+1}=MS^a_n.
\]
Here I denotes the identity matrix (of size N in the discrete state space for the Kalman filter, and of size r in the reduced-rank space for SEEK), and K_n is the Kalman gain, minimizing the trace of the error covariance, or the reduced-rank Kalman gain in SEEK. Theoretically, one has P = S S^T in SEEK, but since the initial covariance is not well known, we define the initial square root from a rough guess, which avoids an additional decomposition in SEEK; it is set similarly in KF. In KF, if the state space has dimension N, the forecast step necessitates N model evolution steps to get the forecast error covariance matrix from the analysis error covariance matrix, so that a less optimal gain may be considered to reduce the calculation cost. This is the purpose of SEEK: to yield such a gain by considering a reduced-rank gain that is simpler to obtain.
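The analysis steps of the two columns above translate directly into code. In the Kalman case the gain is written in its standard form, which is algebraically equivalent to the information form displayed above; the SEEK case follows the reduced-rank formulas verbatim. A sketch with our own function names:

```python
import numpy as np

def kalman_analysis(x_f, P_f, C, R, y_obs):
    """Full-rank Kalman analysis step; gain in the standard form
    P C^T (C P C^T + R)^{-1}, equivalent to the information form above."""
    K = P_f @ C.T @ np.linalg.inv(C @ P_f @ C.T + R)
    x_a = x_f - K @ (C @ x_f - y_obs)
    P_a = (np.eye(P_f.shape[0]) - K @ C) @ P_f
    return x_a, P_a

def seek_analysis(x_f, S_f, C, R, y_obs):
    """Reduced-rank SEEK analysis step; S_f is the N x r square root of the
    forecast error covariance, P_f ~ S_f S_f^T."""
    CS = C @ S_f
    Rinv_CS = np.linalg.solve(R, CS)                 # R^{-1} (C S_f)
    G = np.eye(S_f.shape[1]) + CS.T @ Rinv_CS        # I_r + (CS)^T R^{-1} (CS)
    K = S_f @ np.linalg.solve(G, Rinv_CS.T)          # reduced-rank gain
    x_a = x_f - K @ (C @ x_f - y_obs)
    w, V = np.linalg.eigh(G)                         # G is symmetric pos.-def.
    S_a = S_f @ (V * w ** -0.5) @ V.T                # S_f G^{-1/2}
    return x_a, S_a
```

The forecast steps are then simply x^f = M x^a together with P^f = M P^a M^T + Q (KF) or S^f = M S^a (SEEK).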
Since the error covariance matrices are symmetric and positive-definite, Pham et al. suggested to consider the following decomposition: if P is a symmetric positive-definite matrix whose r largest eigenvalues and corresponding eigenvectors are known, the reduced decomposition of P is defined as the matrix whose columns are the leading eigenvectors scaled by the square roots of their eigenvalues, so that the product of this matrix with its transpose is the best rank-r approximation of P.

An additive white Gaussian noise of a prescribed level is added to the data when indicated. The nudging gain is set to a fixed value, owing to implementation limitations. Table [fig1] and Figs. [fig2] and [fig3] show RMS errors and some of the corresponding reconstructions obtained in various situations. The object to reconstruct is shown in the upper left part of Figure [fig2]; then, to the right, one sees the TR and BFN reconstructions, followed by the BF-SEEK and KF reconstructions. [Figure: object to reconstruct and reconstructions by TR, BFN, BF-SEEK and KF.] While KF reacts quite well to the addition of noise, it shows much more sensitivity to the number of sensors and easily fails. BF-SEEK offers obvious improvements for both calculation cost and reconstruction error (Fig. [fig2]). When 2 sensors are left, one can observe possibly interesting effects of the attenuation term against noise with TR and BF-SEEK; nevertheless, it may damage the reconstruction (with BFN) or have insignificant consequences (with TR). When only 1 sensor is left, only BFN and BF-SEEK are robust enough to yield a good approximation of the object, but BF-SEEK needs to be corrected with the attenuation to remain stable (Fig. [fig3]). [Figure: BF-SEEK reconstructions in the different sensor and attenuation configurations, panels (a)-(d).]

A common formulation for iterative stabilization of reversible evolution systems has been given. It is used to define methods which solve some inverse problems for wave equations. Experiments show that the techniques we introduced may offer an alternative to the usual inverse methods for the TAT problem. In applications, knowledge about the quality of the sensors, the background and the model approximations can be used by the filters, but in this case they need to be precisely tuned. The use of an artificial attenuation is motivated by good results, and allows one to consider lossy media in back and forth implementations when the physical loss does not exceed the numerical attenuation. When the space dimension increases, we first face an excessive calculation cost (one BF-SEEK iteration with rank 60 equals almost one thousand BFN iterations in computation time). So one would get interested in hybrid methods, e.g., getting first estimates with TR and BFN and then reconstructing the solution with filters. A solution is offered by perfectly matched layers to reduce the spatial domain of calculation, for which a first-order discrete scheme formulation is necessary for the filters.

2009, On reconstruction formulas and algorithms for the thermoacoustic and photoacoustic tomography, Ch. 8 in L.H. Wang (ed.), Photoacoustic Imaging and Spectroscopy, CRC Press, pp. 89-101.
D. Auroux and J. Blum, 2005, Back and forth nudging algorithm for data assimilation problems, C. R. Acad. Sci. Paris, Ser. I, 340, pp. 873-878.
C. Bardos, G. Lebeau, and J. Rauch, 1992, Sharp sufficient conditions for the observation, control and stabilization of waves from the boundary, SIAM J. Control Optim. 30, pp. 1024-1065.
2010, Application of a nudging technique for thermoacoustic tomography, preprint.
D. Finch, S.K. Patch, and Rakesh, 2004, Determining a function from its mean values over a family of spheres, SIAM J. Math. Anal. 35 (5), pp. 1213-1240.
2006, Thermoacoustic tomography with correction for acoustic speed variations, Phys. Med. Biol. 51, pp. 6437-6448.
R.E. Kalman, 1960, A new approach to linear filtering and prediction problems, Transactions of the ASME - Journal of Basic Engineering 82, pp. 35-45.
R.E. Kalman and R.S. Bucy, 1961, New results in linear filtering and prediction theory, J. Basic Eng., March, pp. 95-108.
P. Kuchment and L. Kunyansky, 2010, Mathematics of thermoacoustic tomography, Chapter 19 in Vol. 2 of "Handbook of Mathematical Methods in Imaging", pp. 817-866, Springer Verlag; arXiv:0912.2022v1.
1997, Locally distributed control and damping for the conservative systems, SIAM J. Control Optim. 35 (5), pp. 1574-1590.
2007, Photoacoustic tomography using a Mach-Zehnder interferometer as acoustic line detector, Appl. Opt. 46, pp. 3352-3358.
D.T. Pham, J. Verron, and M.C. Roubaud, 1998, A singular evolutive extended Kalman filter for data assimilation in oceanography, J. Marine Syst. 16, pp. 323-340.
2010, A new numerical algorithm for thermoacoustic and photoacoustic tomography with variable sound speed, arxiv.org/abs/1101.3729.
K. Ramdani, T. Takahashi, and M. Tucsnak, 2007, Uniformly exponentially stable approximations for a class of second order evolution equations, application to LQR problems, ESAIM: COCV 13 (3), pp. 503-527.
2007, A reduced-order Kalman filter for data assimilation in physical oceanography, SIAM Rev. 49 (3), pp. 449-465.
M. Xu and L.V. Wang, 2006, Photoacoustic imaging in biomedicine, Rev. Sci. Instrum. 77 (4), 041101.
J. Zabczyk, 1976, Remarks on the algebraic Riccati equation in Hilbert space, Appl. Math. Optim. 2 (3), pp. 251-258.
E. Zuazua, 2005, Propagation, observation, and control of waves approximated by finite difference methods, SIAM Rev. 47 (2), pp. 197-243.
|
Some iterative techniques are defined to solve reversible inverse problems, and a common formulation is explained. Numerical improvements are suggested, and tests validate the methods.
|
A rooted evolutionary tree is a directed weighted tree graph; it represents the evolutionary relationship between groups (also called taxa) of organisms (Figure 1(a)). A leaf or a tip is a node with degree 1; each tip represents a modern-day taxon. The root (node 0) represents the most recent common ancestor (MRCA) of all the taxa. The direction (of evolution) is from the root to the tips. The evolutionary tree, as a vector of parameters, influences the probability distribution of alleles at the tips. A rooted population tree is a rooted evolutionary tree where the taxa are populations from the same species. Two types of parameters are common in any model of the rooted population tree: the tree-topology parameter (a categorical parameter) for the whole tree, and a branch parameter for each branch (also called edge). The tree-topology is the order in which the path from the root separates for the given set of populations; it is represented as a directed tree graph without the weights. (In Figure 1(a) and (b), the two trees have different tree-topologies for the populations 1-4.) A branch parameter is usually a branch length (an edge weight) or a transition probability matrix that influences the change in allele frequency between the two nodes of a branch.

Here we will prove the identifiability of a population tree model that uses Kingman's coalescent process. The model was later modified and expanded by various authors. Coalescent-based models are of significant importance, as they model the underlying allele frequency changes with accuracy and relative ease. Due to the underlying structure in evolutionary tree-based models, their identifiability is never obvious. The identifiability of certain evolutionary tree models has been a recent topic of discussion: the identifiability of a general time reversible (GTR) transition probability matrix-based model has been proved, the non-identifiability of another time reversible model has been established, and the non-identifiability of mixture models has also been discussed; the identifiability of one related model has been proven as well. To our knowledge, the identifiability of the coalescent-based model considered here has never been proven. For estimating evolutionary trees, each independent genetic locus is viewed as a single data point, as opposed to viewing each individual as a data point. Thus, identifiability means that the model parameters can be identified from the distribution of allele types for a set of individuals at a single genetic locus.

In this section we will describe the underlying model. We start by defining our notation (see also Figure 1(c)). We define an s-tip population tree by a triple of parameters: the tree-topology, the vector of branch lengths, and the root-distribution parameters. The tree-topology is an unweighted directed tree graph; it takes finitely many discrete categorical values, and a superscript denotes the number of tips. The branch-length parameter is a vector consisting of the branch lengths for each branch in the topology. A strictly bifurcating s-tip tree-topology has exactly 2s - 2 branches; if the topology is non-bifurcating, then it has fewer branches, and the remaining elements of the branch-length vector are populated by zeros. The last parameter is a vector containing the parameters of the root distribution, which we will define later in this section. We also define the set of tips of the tree.
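To fix ideas, such a tree can be held in a small structure recording the topology, the branch lengths, and the root-distribution parameter. The following sketch encodes a 4-tip tree consistent with the description of Figure 1(a) (root 0; tips 2 and 4 coalescing at internal node 6; tips 3 and 4 only at the root); the representation and the numerical branch lengths are illustrative assumptions, not the authors'.

```python
from dataclasses import dataclass

@dataclass
class PopulationTree:
    """A rooted s-tip population tree: topology as parent -> children lists,
    branch lengths keyed by a branch's lower (tipward) node, and the
    root-distribution parameter (e.g., the beta-binomial theta)."""
    children: dict
    branch_length: dict
    theta: float

    def tips(self):
        # Tips are nodes that have a parent branch but no children.
        return [v for v in self.branch_length if v not in self.children]

# Hypothetical 4-tip example: mrca(2, 4) = node 6, mrca(3, 4) = root 0.
tree = PopulationTree(
    children={0: [5, 6], 5: [1, 3], 6: [2, 4]},
    branch_length={5: 0.3, 6: 0.2, 1: 0.5, 3: 0.5, 2: 0.6, 4: 0.6},
    theta=1.0,
)
```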
at each tip there are lineages , each having allele - type ` 0 ' or ` 1 ' .the allele types among these lineages at each tip are the observable random variables .similarly , at each non - tip node , the random variable is the ( random ) number of lineages that are ancestral to the tips below along the tree .we also define the random variable at each node ( tip or non - tip ) , as the count of allele ` 1 ' among the lineages . from now on we will use the term ` allele - count ' to refer to the count of allele ` 1 ' . for each tip , the allele - count is observable .consider a branch with lower ( towards the tips ) node and upper ( towards the root ) node .let be the number of lineages in that are ancestral to the lineages at ( ) .also , let be the allele - count among these lineages ( ) . if is the upper node of branches with lower nodes , then and also .( for a strictly bifurcating tree . ) from the model parameters one computes the probability of observed vector of allele - counts from samples of sizes at tips ( ) as follows .consider a branch with length , with upper node and lower node .given the probability mass function ( pmf ) of ( the number of lineages at ) , the pmf of is computed as where . then , the pmf of is determined from eq .( [ eq : nsum ] ) . using eqs .( [ eq : tn85 ] ) and ( [ eq : nsum ] ) , starting from and going upward , one computes the pmf of and for any non - tip non - root node , and finally at the root ( node 0 ) .then a ` root distribution ' with parameter gives the pmf of ( allele - count ) given at the root : where is the maximum possible value of ( number of lineages at the root ) .different authors have used different root distributions . in particular used symmetric beta - binomial distribution : where is the beta function ; is a parameter to be estimated .then , from the distribution of and for all non - root nodes , we compute the distribution of ( allele - counts ) at the rest of the nodes as follows . consider a node where branches merge from the bottom with the bottom nodes . recall that we already have the distributions of , and , .the pmf of is computed from the pmf of using the formula then the pmf of is computed from the above pmf using the following ( from an expression in ) : ( ) .thus , starting with at the root , one computes the joint pmf of from the formulae in eqs .( [ eq : hpgeo ] ) and ( [ eq : polya ] ) .note that in eqs .( [ eq : nsum ] ) , ( [ eq : tn85 ] ) , ( [ eq : betabinom ] ) , ( [ eq : hpgeo ] ) and ( [ eq : polya ] ) probability ` flows ' up along s and then flows down along s . now that we have completely described the model , we will proceed to prove the identifiability of this model in the next section .let be a tree with .we define a subtree of as a tree formed by a subset ( cardinality ) of by tracking the tips in along the tree to their most recent common ancestor ( mrca ) node .thus , , where is the tree - topology with tips of . for example , in figure 1(a ) , , , and the subtree is drawn with the dotted lines .consider two distinct trees and with a common set of tips .if , then there must be at least one doubleton subset with the following property : the subtrees and , formed by tracking and to the root in and ( respectively ) , are distinct .that is , if and is the path distance ( total branch length ) between and the mrca of and along the subtree ( ) , then ( note that there is only one possible tree - topology for a two - tip tree , denoted as above . 
) thus , the set of all two tip subtrees , along with , uniquely identifies the tree .we assign the two - tip subtrees into two categories : type - i subtrees are those with the root as the mrca of the two tips .for example in figure 1(a ) , the subtree formed by tips has the root as the mrca of the two tips 3 and 4 .thus , it is of type - i .all other two - tip subtrees are type - ii subtrees . for example , in figure 1(a ) , if a subtree is formed by tips 2 and 4 , it will be a type - ii subtree as their mrca is node 6 , and not the root. we will deal with these two types of subtrees separately .we note that the root distribution of ( eq . ( [ eq : betabinom ] ) ) is identifiable as it is beta - binomial .next , we will prove the identifiability of the whole model by assuming a general identifiable root distribution that has parameter vector .( in particular , our proof would work with beta - binomial as the root distribution . ) * theorem* suppose that we have a tree with the underlying model as described in section [ sec : model ] .also , suppose that we have lineages sampled at each tip and the root distribution is identifiable .then the parameters of are identifiable from the distribution of allele types at the tips . to prove the above theorem, we will show that the parameters of each two - tip subtree can be expressed as a function of the joint pmf this will complete the proof as the set of all two - tip subtrees , along with , uniquely identifies the tree .suppose that is a type - i subtree with the underlying model as described in section [ sec : model ] .let and be its two tips .let the root be denoted as ` 0 ' ( figure 1(d ) ) and let be path distance between and the root * proposition * suppose that we have at least two lineages sampled at each of and and the root distribution is identifiable. then and can be expressed as functions of the joint pmf of allele types in and , and hence they are identifiable .[ prop : type1 ] * proof * suppose that we have samples of and lineages from and respectively , and the allele - counts among these lineages are and respectively .let the joint pmf of be .consider random subsamples ( without replacement ) of size and from and respectively with .rather than working with the allele - counts at the original samples , we will work with allele - counts at the subsamples .one computes the joint pmf of from as we will argue that the joint pmfs for ( 1,1 ) , ( 1,2 ) and ( 2,1 ) are enough to identify the parameters and . as before ,let be the number of lineages ancestral to subsamples at that are present at the top node ( the root ) ( see figure 1(d ) ) and be the allele - count out of these ; ( ) .also , let be the number of lineages at the root ancestral to the subsampled lineages at and , and be the allele - count out of these lineages .first , consider the case . then 0 or 1 for . from eq . ([ eq : tn85 ] ) it follows that ; thus , and hence does not involve and . from eq .( [ eq : polya ] ) it also follows that .also , .note that and ( ) are counts .thus , using a symmetric argument thus , it follows that thus , from eqs .( [ eq : rootf1 ] ) and ( [ eq : rootf2 ] ) can be expressed as functions of .the former is the root distribution for , which is identifiable by the condition of proposition [ prop : type1 ] .thus , can the expressed as a function of the pmf of ( given ) , and thus as a function of joint pmf of .hence , it can also be expressed as a function of .next , we consider .then 0 , 1 or 2 and 0 or 1 . from eq . 
([ eq : tn85 ] ) it follows that ; thus and hence does not involve .moreover , . also , from eq .( [ eq : polya ] ) it follows that thus , note that .also , note that is a function of only ( and no other parameters ) ; hence we call it .thus , from eqs .( [ eq : tn85 ] ) and ( [ eq : hpgeo ] ) . from the above equation it follows that for some function .we have already established that can be expressed as a function of .thus , can be expressed as a function of and hence is identifiable . using a symmetric argument, one can establish that can be expressed as a function of and hence it is identifiable .thus , this proposition is proven .consider a type - ii subtree of with tips and .let the mrca node of and be denoted as .( by definition is not the root . ) also , consider the path from to the root ( node 0 ) and call it branch .there must be at least another branch attached to the root other than branch ( figure 1(e ) ) .consider a tip , such that the path between and the root goes through .let be the path distance between and and let be the path distance between and .also , let be the path distance between the root and and let be the path distance between the root and .* proposition * suppose that we have at least two haploids sampled at each of and and the root distribution is identifiable .then and can be expressed as functions of the joint pmf of the allele types at and , and hence they are identifiable . * proof * suppose that we have samples of , and lineages from , and respectively , and the allele - counts among these lineages are , and respectively .let the joint pmf of be .first we consider the type - i subtree formed by and . from proposition[ prop : type1 ] one can establish that , and can be expressed as a function of the joint pmf of and hence of . a symmetric argument also establishes that can be expressed as functions of .next we will show that each of and can be expressed as function of .consider a random subsample of size one from each of and .let and be the numbers of subsampled haploids at and respectively .( thus , ) .let and , respectively , be the observed allele - counts at these subsamples .( 0 or 1 for . ) as before , let be the number of lineages ancestral to subsamples at that are present at the top node of the branch ( in the subtree ) attached to ( see figure 1(e ) ) and be the allele - count out of these ( ) . from eq .( [ eq : tn85 ] ) it follows that and thus does not involve ( ) . hence , does not involve and . also , thus , the left side of eq .( [ eq : rabd ] ) can be expressed as a function of .it also follows from eq .( [ eq : polya ] ) that .let be the total number of lineages from subsamples of and that are present at node , and let be the allele - counts out of these lineages .also , let be the number of lineages ancestral to those lineages that are present at the top node ( root ) of the branch , and let be the allele - count out of these lineages . as before, let be the total number of lineages at the root ancestral to the subsamples at and ; let be the allele - count out of these lineages .note that . from eq .( [ eq : polya ] ) and the fact that it follows that thus , consider the part of the subtree consisting of the path from and to the root ; it is a type - i subtree with and as the tips , and and , respectively , as the lengths of the attached branches ; it has , respectively , as the numbers of observed lineages at and and , respectively , as the allele - counts in these lineages . 
from eqs. ([eq:tau1]) and ([eq:r_ab]), since we have already established that the quantities entering them, as well as the left side of eq. ([eq:rabd]), can be expressed as functions of the joint pmf of the observed allele counts, it follows that the two remaining branch lengths can also be expressed as such functions. thus they are identifiable, and this proposition is proven.

thus, the parameters of the tree are identifiable, as each two-tip subtree, along with the root distribution parameter, is identifiable.

we have proven that the model parameters are identifiable under the coalescent-based population tree model of nielsen et al. (1998). thus, the problem of estimating a population tree from this model is indeed meaningfully stated. moreover, as identifiability is a necessary condition for consistency of the maximum likelihood estimator (mle), this is a step towards proving the consistency of the mle for this model. we have proven the identifiability of the tree parameters for any identifiable root distribution. as a result, our proof is valid for the different versions of this model, which vary in their choice of root distribution.

nielsen, r., mountain, j. l., huelsenbeck, j. p. & slatkin, m. (1998). maximum likelihood estimation of population divergence times and population phylogeny in models without mutation. _evolution_ *52*, 669-677.

bryant, d., bouckaert, r., felsenstein, j., rosenberg, n. a. & roychoudhury, a. (2012). inferring species trees directly from biallelic genetic markers: bypassing gene trees in a full coalescent analysis. _mol. biol. evol._ *29*, 1917-1932.
|
identifiability of evolutionary tree models has been a recent topic of discussion, and some models have been shown to be non-identifiable. a coalescent-based rooted population tree model, originally proposed by nielsen et al. (1998), has been used by many authors in the last few years and is a simple tool for accurately modeling the changes in allele frequencies along the tree. however, the identifiability of this model has never been proven. here we prove this model to be identifiable by showing that the model parameters can be expressed as functions of the probability distributions of subsamples. this is a step toward proving the consistency of the maximum likelihood estimator of the population tree based on this model.
|
the concept of fractal geometry has proved useful in describing structures and processes in experimental systems .it provides a framework which can quantify the structural complexity of a vast range of physical phenomena .fractals are objects which exhibit similar structures over a range of length scales for which one can define a non - integer dimension .there are different procedures to evaluate the fractal dimension of an empirical fractal , all based on multiple resolution analysis . in this analysis onemeasures a property of the system ( such as mass , volume , etc . ) as a function of the resolution used in measuring it ( given by a yardstick of linear size ) .fractal objects are characterized by where is the fractal dimension and is a prefactor ( related to the lacunarity of the object ) . for such objectsthe graph of vs. exhibits a straight line over a range of length scales where ( ) is the lower ( upper ) cutoff . the fractal dimension is given by the slope of the line within this range .typically , the range of linear behavior terminates on both sides by and either because further data is not accessible or due to crossover bends beyond which the slope changes . for example , in spatial fractals the scaling range is limited from below by the size of the basic building blocks from which the system is composed and from above by the system size .however , the empirically measured scaling range may be further reduced either due to properties of the measured system or limitations of the apparatus .system properties which may further restrict the scaling range may be : ( a ) mechanical strength of the object which is reduced with increasing size ; ( b ) processes which tend to smooth out the structure and compete with the fractal generating processes ; ( c ) noise , impurities and other imperfections in the system and ( d ) depletion of resources such as space available for growth or feed material .the apparatus may limit the observed scaling range due to : ( a ) limited resolution at the smallest scales ; ( b ) limited scanning area , which may be smaller than the system size ; ( c ) limited speed of operation which does not allow to collect enough statistics ; ( d ) constraints in operation conditions such as temperature , pressure , etc . which may impose parameters not ideal for the given experiment. there are different ways to classify empirical fractals .one classification is according to the type of space in which they appear .this can be : ( a ) real space ; ( b ) phase space ; ( c ) parameter space and ( d ) the time domain ( time series ) .spatial fractals appear in both equilibrium and nonequilibrium systems .the theory of critical phenomena predicts that at the critical point of fluids , magnets and percolation systems the correlation length diverges . as a result, fractal domain structures appear over all length - scales up to the system size .experimental evidence for fractal structures at criticality has been obtained for example in the context of percolation , in agreement with the theory and computer simulations .reaching the critical point requires fine tuning of the system parameters , as _these points are a set of measure zero in parameter space_. 
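To make the multiple-resolution procedure described above concrete, here is a minimal box-counting sketch in Python. It is our own illustrative code, not an implementation from any surveyed paper; the function name, the choice of yardsticks, and the random-walk test set are all our assumptions.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate a fractal dimension from the slope of log N(l) vs log l.

    points : (n, d) array of coordinates of the set being measured.
    sizes  : iterable of yardstick (box) sizes l spanning the scaling range.
    Returns the estimated dimension and the box counts N(l).
    """
    points = np.asarray(points, dtype=float)
    origin = points.min(axis=0)
    counts = []
    for l in sizes:
        # Assign each point to a box of linear size l and count occupied boxes.
        boxes = np.floor((points - origin) / l).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    counts = np.array(counts)
    # N(l) ~ A * l^(-D): the dimension is minus the slope of the log-log plot.
    slope, _ = np.polyfit(np.log(list(sizes)), np.log(counts), 1)
    return -slope, counts

# Example: a long 2d lattice random walk, whose measured dimension
# approaches 2 over the chosen 1.5 decades of yardsticks.
steps = np.random.default_rng(0).choice([-1.0, 1.0], size=(200000, 2))
walk = np.cumsum(steps, axis=0)
sizes = np.logspace(0.5, 2.0, 10)
d_est, _ = box_counting_dimension(walk, sizes)
print(f"estimated dimension: {d_est:.2f}")
```

In practice the fit would be restricted to the window between the lower and upper cutoffs, which is exactly the "linear range" whose width the survey below tabulates.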
most empirical fractals have been found in systems far from thermal equilibrium, and thus not only outside the scope of critical phenomena, but outside the domain where equilibrium statistical physics applies at all. a variety of dissipative dynamical systems exhibit strange attractors with fractal structures in phase space. the theory of dynamical systems provides a theoretical framework for the study of fractals in such systems at the transition to chaos and in the chaotic regime. at the transition to chaos, fractals are found also in parameter space, while time series measured in the chaotic regime exhibit fractal behavior in the time domain. fractal dimensions of objects in phase space are not limited by the dimension of the embedding physical space, so arbitrarily high dimensions are possible. effective methods for embedding experimental time series in higher dimensional spaces, used to examine the convergence of fractal dimension calculations, were developed and widely applied. however, these should be used with care, as the number of data points required in order to measure fractal dimensions (fd) from embedded time series increases exponentially with the dimension of the underlying attractor. in this paper we will focus on fractals in real space.

one can classify the spatial fractal structures according to the physical processes and systems in which they appear. we identify the following major classes: (a) aggregation; (b) porous media; (c) surfaces and fronts; (d) fracture; (e) critical phenomena (e.g. in magnets, fluids, percolation). note that some systems may belong to more than one class. for example, classes (a) and (d) describe the dynamical processes which generate the fractal, while classes (b) and (c) describe the structure itself. moreover, there is some overlap between (b) and (c), since studies of porous media often focus on the fractal structure of the internal surfaces of the pores. for case (e) of equilibrium critical phenomena there are solid theoretical predictions of fractal structures at the critical point, most extensively examined for the case of percolation. the cutoffs in such systems may appear due to small deviations of the parameters from their critical-point values and due to the finite system size. spatial fractals in the four other classes typically result from non-equilibrium processes. one should single out the case of surfaces and fronts (c), which are often inherently anisotropic; their fractal nature is characterized by self-affine rather than self-similar structure.
among the other three classes , within the physics literature ,fractals in aggregation phenomena have been most extensively studied .the abundance of fractals in aggregation processes stimulated much theoretical work in recent years .the diffusion limited aggregation ( dla ) model , introduced by witten and sander , provides much useful insight into fractal growth .this model includes a single cluster to which additional particles attach once they reach a site adjacent to the edge of the cluster .the additional particles are launched one at a time from random positions far away from the cluster and move as random walkers until they either attach to the cluster or move out of the finite system .numerical simulations of this model were used to create very large fractal clusters of up to about 30 million particles .these clusters exhibit fractal behavior over many orders of magnitude ( although the lacunarity seems to change as a function of the cluster size ) .the asymptotic behavior of the dla cluster has been studied analytically and numerically for both lattice and continuum models indicating a considerable degree of universal behavior .a universal fractal dimension was observed in two dimensions ( 2d ) and in three dimensions ( 3d ) .morphologies similar to those of the dla model and fractal dimensions around have been observed in a large number of distinct experimental systems .these include electrodeposition and molecular beam epitaxy ( mbe ) .however , unlike the theoretical model , the experimentally observed morphologies are typically somewhat more compact and the scaling range does not exceed two orders of magnitude . this observation has to do with the fact that unlike theoretical models , which may be inherently scale free , in empirically observed fractals the range of length - scales over which scaling behavior is found is limited by upper and lower cutoffs . for finite systems ,the scaling range is limited by lower and upper cutoffs even if the internal structure is scale free . in this casethe lower cutoff is the basic unit ( or atom ) size in the system , while the upper cutoff is of the order of the system size .however , typically the scaling range is much narrower than allowed by the system size , thus limited by other factors .this width is not predicted by theoretical models and in many cases not well understood .there have been some suggestions on how to incorporate the limited range into the analysis procedure .on the one hand , this range may be simply limited by the apparatus used in a given experiment .if this is the case , we would expect to see , at least in some experiments , when the most proper apparatus is chosen , a broad scaling range limited only by the system size . on the other hand ,the scaling range may be limited by properties intrinsic to the system . in this case , using a different apparatus is not expected to dramatically broaden the scaling range . in this paperwe explore the status of experimental measurements of fractals . 
using an extensive survey of experimental fractal measurementswe examine the range of scales in which the fractal behavior is observed and the fractal dimensions obtained .we observe a a broad distribution of measured dimensions in the range , most of which are interpreted as non universal dimensions , that depend on system parameters .this distribution includes a peak around due to structures which resemble 2d dla - like clusters , which account for a significant fraction of the class of aggregation processes .more importantly , we find that the range of fractal behavior in experiments is limited between 0.5 - 2 decades with very few exceptions as discussed above. there may be many different reasons for this , which can be specific to each system or apparatus .however , the fact that the distribution is sharply concentrated around 1.5 decades and the remarkably small number of exceptions , indicate that there may be some general common features which limit this range . trying to identify such features, we focus in this paper on a class of aggregation problems which appear in mbe experiments . in these experiments a finite density of dla - like clusters nucleate and grow on the substrate .the width of the scaling range is limited by the cluster size ( upper cutoff ) and the width of its narrow arms ( lower cutoff ) which can be as small as the single atom .we show that a small increase in the scaling range requires a large increase in the duration of the mbe experiments .moreover , at long times edge diffusion and related processes which tend to smooth out the fractal structures become significant .these processes tend to increase the lower cutoff and in this way limit the possibility of further extending the scaling range .this detailed argument is presented only for mbe - like aggregation problems .however , we believe that related arguments , based on the fact that in empirical systems there is no complete separation of time - scales , may apply to other classes of fractal structures out of equilibrium . the paper is organized as follows . in sectionii we present an extensive survey of experimental measurements of fractals and examine the empirical dimensions and scaling range . in order to obtain better understanding of the limited scaling range , we focus in section iii on the case of nucleation and growth of fractal islands on surfaces .the width of the scaling range is obtained as a function of the parameters of the system and it is shown that under realistic assumptions it does not exceed two decades . these results and their implications to empirical systemsare discussed in section iv , followed by a summary in section v.here we present an extensive survey of experimental papers reporting fractal measurements , and examine the range of length - scales over which fractal properties were observed , as well as the reported dimensions . in our surveywe used the inspec data - base from which we extracted all the _ experimental _ papers in physical review a - e and physical review letters over a period of seven years ( january 1990 - december 1996 ) which include the word _ fractal _ in the title or in the abstract , a total of 165 papers .these papers account for of the 1821 experimental papers on fractals that appeared during that seven year period [ and of all such papers ever published ( 2425 papers since 1978 ) ] in all scientific journals listed by inspec .experimental measurements of fractal dimensions are usually analyzed using the box counting or related methods . 
in these measurements a log - log plot is reported in which the horizontal axis represents the length scale ( such as the linear box size ) and the vertical axis is some feature ( such as the number of boxes which intersect the fractal set ) for the given box size .typically , the reported curves include a range of linear behavior .this range terminates on both sides by upper and lower cutoffs eitherbecause further data is not accessible or due to a knee beyond which the line is curved .the apparent fractal dimension is then obtained from the slope of the line in the linear range . out of the 165 papers mentioned above , 86 papers included such a plot ( and 10 of them included two plots ) .for each one of these 96 log - log plots we extracted both the fractal dimension and the width of the linear range between the cutoffs ( table i ) .table i includes a row for each one of the 96 measurements . the first column briefly describes the context of the experiment .the second column provides a classification of the systems into the following categories : aggregation ( a ) , porous media ( p ) , surfaces and fronts ( s ) , fracture ( f ) , critical phenomena ( c ) , fracton vibrations ( v ) , turbulence ( t ) , random walk ( r ) and high energy physics ( h ) . in cases where more than one class is appropriatewe assign both classes .the next two columns provide the fractal dimension ( fd ) and the width of the scaling range in which fractal behavior was detected ( ) .the next three columns provide the lower cutoff ( ) , the upper cutoff ( ) and the units in which these cutoffs are measured .note that in many of the papers the scales in the log - log plots are provided in a dimensionless form or in arbitrary units . in these caseswe left the units column empty .the last two columns provide the reference number and the figure number in that paper from which the fd , and the cutoffs were obtained .we found that 29 measurements belong to class a , 19 to p , 18 to s , 6 to f , 8 to c , 4 to v , 2 to t , 4 to r and 10 to h. to examine the distribution of widths of the scaling range we present a histogram ( fig .1 ) which shows , as a function of the width ( in decades ) the number of experimental measurements in which a given range of widths was obtained .surprisingly , it is found that the typical range is between 0.5 - 2 decades with very few exceptions . to obtain more insight about the scaling range we present separate histograms for aggregation [ fig .2(a ) ] , porous media [ fig .2(b ) ] and surfaces and fronts [ fig .the distribution for aggregation systems is basically similar to the one of fig .1 , with a peak around 1.5 decades . we note in particular that it does not include measurements over significantly more than two decades .the width distribution for porous media has the same general shape , however , the scaling range is typically narrower and the peak is centered around one decade .the width distribution for surfaces and fronts includes both a flat range between one and two decades , in addition to a few cases with three and four decades .it is interesting to note that the papers in which three or four decades of scaling behavior are reported are in the context of surfaces and fronts , related to self affine , rather than self similar fractals .this observation raises the question whether , for self similar fractals , there are some common features of the empirical systems reviewed here , which tend to limit the width of the scaling range . 
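The extraction of a fractal dimension and a scaling-range width from such a log-log plot can be mimicked programmatically. The following is a rough sketch of one way to do it, not the procedure actually used to build Table I; the 5% slope tolerance and the synthetic test curve are arbitrary choices of ours.

```python
import numpy as np

def linear_range(log_l, log_n, tol=0.05):
    """Find the widest window of a log-log curve with approximately constant
    slope; return (width_in_decades, slope) of that window.

    log_l, log_n : arrays of log10(yardstick) and log10(measured feature).
    tol          : allowed relative deviation of local slopes from their mean.
    """
    best = (0.0, 0.0)
    n = len(log_l)
    for i in range(n - 1):
        for j in range(i + 2, n + 1):
            s = np.diff(log_n[i:j]) / np.diff(log_l[i:j])  # local slopes
            mean = s.mean()
            if np.all(np.abs(s - mean) <= tol * abs(mean)):
                width = abs(log_l[j - 1] - log_l[i])       # in decades
                if width > best[0]:
                    best = (width, mean)
    return best

# Synthetic curve: slope -1.7 over ~1.4 decades, bending (crossover)
# to slope -1.0 outside the cutoffs.
log_l = np.linspace(0.0, 3.0, 31)
slopes = np.where((log_l[:-1] >= 0.8) & (log_l[:-1] < 2.2), -1.7, -1.0)
log_n = 5.0 + np.concatenate([[0.0], np.cumsum(slopes * np.diff(log_l))])
width, slope = linear_range(log_l, log_n)
print(f"scaling range: {width:.1f} decades, apparent dimension {-slope:.2f}")
```

On real data the knees are smooth rather than sharp, so the width found depends on the tolerance chosen, which is one reason the tabulated widths should be read as estimates.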
to obtain the distribution of measured fractal dimensions we constructed a histogram ( fig .3 ) showing the number of experiments which observed fractal dimension in a given range .the fact that most of the experiments deal with spatial fractals is reflected in the observation that in most cases .two peaks are identified in the histogram , around and .in addition to these peaks , there is a broad distribution of observed dimensions in the entire range of . to further examine the observed dimensions we also show separately their distributions for the classes of aggregation [ fig .4(a ) ] , porous media [ fig .4(b ) ] and surfaces [ fig .the statistics available for the other classes is not sufficient to draw significant conclusions .we observe that for aggregation systems there is a huge peak around which corresponds to 2d dla .in addition , there are some systems with higher dimension , a few of them may correspond to 3d dla , for which the dimension is .for porous media we observe a rather flat distribution of fractal dimensions in the range . for surfaces and frontsthere are two peaks , one around which includes topologically one dimensional fronts and the other one around which includes rough two dimensional surfaces .the measured dimensions in table i represent not only empirical measurements of the fractal dimension , but in some cases these are generalized fractal dimensions . in particular , experimentsin which scattering techniques are used tend to provide the correlation dimension . the generalized dimension is a monotonically decreasing function of . due to the broad scope of systems included in our survey ,it is not possible at this stage to provide general arguments .we chose to focus our discussion on the class of aggregation systems in which a finite density of dla - like clusters nucleate on surfaces .these systems are in a way representative , as they exhibit spatial fractal structures which grow out of thermal equilibrium .moreover , dla - like structures account for a significant fraction of the surveyed papers and are thus particularly relevant .we will now examine the scaling properties and cutoffs in a class of systems in which dla - like clusters nucleate and grow on a surface .particularly , in mbe a beam of atoms is deposited on a substrate .these atoms diffuse on the surface and nucleate into islands which keep growing as more atoms are added .mbe experiments on systems such as au on ru(0001 ) , cu on ru(0001 ) , and pt on pt(111 ) give rise to dla like clusters with dimensions close to .we will now consider the growth processes in such experiments . in mbe experiments atomsare randomly deposited on a clean high symmetry surface from a beam of flux [ given in monolayer ( ml ) per second ] .each atom , upon attachment to the surface starts hopping as a random walker on a lattice [ which can be a square lattice for fcc(001 ) substrates and triangular lattice for fcc(111 ) substrates ] until it either nucleates with other atoms to form an immobile cluster or joins an existing cluster .the hopping rate ( in unit of hops per second ) for a given atom to each unoccupied nearest neighbor site is where is the standardly used attempt frequency , is the energy barrier , is the boltzmann factor and is the temperature .the coverage after time is then ( in ml ) .the submonolayer growth is typically divided into three stages : the early stage is dominated by island nucleation , followed by an aggregation dominated stage until coalescence sets in . 
in studying the fractal properties of islands we are interested in the late part of the aggregation stage , where islands are already large , but separated from each other , as coalescence is not yet dominant .the scaling behavior at this stage has been studied using both rate equations and monte carlo ( mc ) simulations .it was found that the density of islands is given by the exponent is determined by the microscopic processes that are activated on the surface during growth .it can be expressed in terms of the critical island size , which is the size for which all islands with a number of atoms are unstable ( namely dissociate after a short time ) while islands of size are stable .it was found , using scaling arguments and mc simulations that for isotropic diffusion , in the asymptotic limit of slow deposition rate , .however , in case that the small islands of size are not unstable but only mobile , the scaling exponent takes the form . for systems in which only the single atom is mobile ( such as the dla model ) , and .the typical distance between the centers of islands , which is given by then scales as the growth potential of each cluster is limited by this distance , beyond which it merges with its nearest neighbors .therefore , is an upper cutoff for the scaling range of the dla - like islands for the given experimental conditions .this cutoff can be pushed up by varying the growth conditions , namely the temperature and the flux .however , eq . ( [ upper_cut ] ) indicates that in order to add one order of magnitude to one needs to increase the ratio by a factor of .this can be done either by reducing the flux , or by raising the temperature , which would increase the hopping rate . to get a broad scaling range one can also choose a substrate with very low hopping barriers , so the required deposition rate would not have to be unreasonably small .however , the slow dependence of on indicates the inherent difficulties in growing fractal islands with a broad scaling range .we will now try obtain a more quantitative understanding of the situation .first , we will consider the case of no significant thickening of the arms of the dla - like clusters . in this casethe lower cutoff remains of the order of the atom size .the maximal width of the scaling range , is then given by , where is given in units of the substrate lattice constant .we thus obtain : to approach this width the clusters need to fill the domains of linear size available to them .the coverage at which this maximal width is obtained is where is the fd of the clusters and the deposition time up to this stage is given by .this together with eq .( [ delta.vs.hf ] ) shows the essential property that a linear increase in the scaling range ( given in decades ) requires an exponential increase in the duration of the experiment .the dependence of on the hopping energy barrier and the temperature can be obtained from eq .( [ delta.vs.hf ] ) by writing explicitly from ( [ hoppingrate ] ) which gives . \label{delta.vs.e0t}\ ] ] it is easy to see that even for a system in which the energy barrier vanishes , and for the extremely slow deposition rate of , the width of the scaling range , assuming , would be decades . under these conditions , and taking the optimal coverage given by eq .( [ maxcov ] ) for fractal measurement would be , which would be obtained after about 35 hours of deposition . 
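The scaling relations above can be evaluated numerically. The sketch below assumes the island separation scales as l_max ~ (h/F)^(gamma/2), the optimal coverage as theta* ~ l_max^(D-2), and uses gamma = 1/3 (the value appropriate when only single adatoms are mobile), nu = 10^12 s^-1 and D = 1.78; all of these numbers are illustrative assumptions consistent with, but not quoted from, the text.

```python
import numpy as np

# Illustrative parameters (our assumptions, not values quoted in the text):
nu = 1e12        # attempt frequency [1/s]
kB = 8.617e-5    # Boltzmann constant [eV/K]
gamma = 1.0 / 3  # island-density exponent, single mobile adatoms
D = 1.78         # fractal dimension of 2d DLA-like islands

def scaling_width(E0, T, F):
    """Width (in decades) of the attainable scaling range and the deposition
    time needed to reach it, from l_max ~ (h/F)^(gamma/2), theta* ~ l_max^(D-2)."""
    h = nu * np.exp(-E0 / (kB * T))         # hopping rate [hops/s]
    delta = 0.5 * gamma * np.log10(h / F)   # decades, Delta = log10(l_max)
    l_max = 10.0 ** delta                   # island separation [lattice units]
    theta = l_max ** (D - 2.0)              # optimal coverage [ML]
    t = theta / F                           # deposition time [s]
    return delta, t

for F in [1e-2, 1e-4, 1e-6]:
    d, t = scaling_width(E0=0.0, T=300.0, F=F)
    print(f"F = {F:.0e} ML/s: Delta = {d:.2f} decades, t = {t:.0f} s ({t/3600:.2f} h)")
```

With these assumed numbers, each additional third of a decade of scaling range costs a factor of 100 in deposition rate and a correspondingly exponential factor in deposition time, which illustrates the trade-off derived above.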
however , the duration of the deposition experiment in typical submonolayer studies is usually limited to no more than a few hours .the experimentally feasible scaling range is further limited by the fact that the diffusion properties of physical substrates differ from the dla model .in particular , the assumption of an infinite separation of time scales , namely that an isolated atom has high mobility while an atom which has one or more nearest neighbors is completely immobile should be weakened . in a real high symmetry substrate onecan identify a variety of hopping rates such as : for an isolated atom ; for an atom moving along a step or island edge ; and for an atom detaching from a step or island edge .we have seen that for a given substrate temperature , the scaling range can be increased by reducing the flux .this can be done as long as . however , once the duration of the experiment ( given by ) becomes of the order of or , diffusion along and away from the edges becomes significant and modifies the morphology of the islands .these processes allow atoms to gradually diffuse into the otherwise screened regions of the dla - like island . as a result ,the arms becomes thicker and shorter and the islands become more compact . for the discussion below we will denote by the highest hopping rate among the edge moves that may affect the island morphology . can be expressed in terms of the hopping energy barrier for this process , , just as in eq .( [ hoppingrate ] ) .the lowest deposition rate that can be used , without having these edge processes affect the morphology is of the order of .using this deposition rate the deposition time up to coverage is .> from eq .( [ delta.vs.e0t ] ) and we obtain that the maximal width of the scaling range , in decades , is then given by using eq .( [ hoppingrate ] ) one can eliminate the temperature and express this width in terms of the activation energy barriers and the flux ( which is chosen equal to ) : to obtain the duration of the deposition experiment , for a given we extract from eq .( [ width.e02 ] ) and use where is given by eq .( [ maxcov ] ) .we obtain where exponential dependence of the experiment duration on clearly limits the feasible scaling range which can be obtained in these experiments .since , it is clear that .this lower bound is obtained for and , while typical values for dla like clusters are .interestingly , the situation expressed by eq .( [ duration ] ) is somewhat reminiscent of that of the theory of algorithmic complexity . in this theory, there is a distinction between algorithms for which the time complexity function depends polynomially on the input length [ typically the number of bits needed to describe the input , i.e. , (input ) ] , and algorithms for which the dependence is exponential .generally , problems for which there is a polynomial time algorithm are considered tractable while ones for which there are only exponential time algorithms are considered intractable .one can make a rough analogy between and the input size , and the experimental duration and computation time . within this analogy , the growth problem considered here , for which the desired large value of is given as input falls into the class of intractable problems .the understanding of the implications of these ideas to general aggregation problems and other classes of fractal systems would require further studies . 
here we will focus on the conclusions drawn from eqs. ([width.e02]) and ([duration]) for specific experimental systems. fcc(111) metal surfaces are the most promising experimental systems for studies of the growth modes considered here. the relevant energy barriers have been calculated for al(111), rh(111) and pt(111); these numbers indicate that al(111) can provide the widest scaling range for an experiment of a given duration. using the equations above, for al(111) we find that a scaling range of about two decades is feasible. a significantly wider range, however, is already highly unfeasible, since it requires an impractically slow deposition rate and a correspondingly long deposition time. these results seem to be consistent with the experimental findings reported in section ii, where for aggregation processes no measurements are reported with significantly more than two decades of scaling range.

to summarize, we have shown that the growth of dla-like clusters is limited by two processes: (1) the nucleation density, and (2) edge mobility and detachment. the resulting clusters can, under realistic conditions, exhibit at most 2-3 decades of scaling range. the mbe systems examined here are representative in the sense that they exhibit spatial fractal structures which form out of thermal equilibrium. the need for a separation of time scales seems to be general for non-equilibrium aggregation and growth processes, although the details and the particular exponents may differ. moreover, dla-like structures account for a significant fraction of the surveyed papers. the analysis presented here is directly relevant to systems in which a finite density of dla-like clusters is nucleated on a substrate. for the growth of a single dla-like cluster, in problems such as electrodeposition, different considerations are required, but we believe that the separation of time scales between the fractal-generating processes and the smoothing processes determines the width of the scaling range there as well.

to explain why our arguments are specific to non-equilibrium systems, we will use 2d percolation as an example of an equilibrium critical system. in a 2d percolation experiment one can use an apparatus similar to the one described above for mbe. it is then assumed that diffusion is negligible and atoms are deposited until the coverage reaches the percolation threshold. in such an experiment there is essentially no dynamics on the surface. the only constraint is that the deposition be completed, and all measurements performed, on a time scale small compared to the hopping time. however, the hopping time can be made as long as needed by reducing the substrate temperature. under these conditions there are no dynamical constraints on the width of the scaling range, which is limited only by the system size, the precision with which the percolation threshold is approached, and the apparatus.

the discussion so far focused on highly correlated systems generated by dynamical processes such as diffusion and aggregation. however, weakly correlated systems may also exhibit fractal behavior over a limited range of length scales. this behavior may appear in porous media in the limit of low volume fraction of the pores, or in surface adsorption systems in the low coverage limit. in this case the fractal behavior does not reflect the structure of the basic objects (such as pores or clusters) but their distribution.
using simple models consisting of randomly distributed spherical or rod-like objects, we performed multiple resolution analysis and obtained an analytical expression for the box-counting function in this case. it was shown that in the uncorrelated case, at sub-percolation coverage, one obtains fractal behavior over 0.5-2 decades. the dimensions are found to be non-universal, and vary continuously as a function of the coverage. the lower cutoff in these systems is determined by the basic object size, while the upper cutoff is given by the average distance between the objects. it is interesting that this independent analysis, which applies to a different class of systems from the ones we focused on in this paper, also gives rise to a fractal range of less than two decades.

in summary, we have performed a comprehensive survey of experimental papers reporting fractal measurements. focusing on spatial fractals, these systems were classified according to the types of systems and processes. it was found that for self-similar fractals, the width of the scaling range is typically limited to less than two decades, with remarkably few exceptions. in an attempt to examine the origin of this behavior, we have focused on a class of mbe experiments in which a finite density of dla-like clusters nucleate and grow. we have derived an expression for the duration of the deposition experiment which is required in order to obtain a given width for the scaling range. this expression shows that the experimental time increases exponentially with the width of the scaling range, given in decades. applying this expression to real experimental systems, such as the mbe growth of al on al(111), it is found that the feasible range is up to about two decades. this result is in agreement with the findings of our survey for aggregation phenomena. understanding the processes which determine the cutoffs in the entire range of fractal systems, e.g. surfaces and fronts, porous media, and other aggregation processes, requires further studies.

we would like to thank i. furman for helpful discussions. this work was supported by a grant from the volkswagen foundation, administered by the niedersachsen science ministry. d.a. acknowledges support by the minerva foundation, munich.

the search was done using the command ``find kw fractal or fractals and date 199n and jo physical review and pt experimental'' for n = 0, ..., 6. the numbers of papers obtained in these seven years sum to a total of 165 papers.

k. sengupta, m. l. cherry, w. v. jones, j. p. wefel, a. dabrowska, r. holyski, a. jurak, a. olszewski, m. szarska, a. trzupek, b. wilczyska, h. wilczyski, w. wolter, b. wosiek, k. woniak, p. s. freier and c. j. waddington, _phys. rev._ *d48*, 3174 (1993).

m. schroeder and d.e. wolf, _phys. rev. lett._ *74*, 2062 (1995); d. e. wolf in _scale invariance, interfaces, and non-equilibrium dynamics_, edited by m. droz, a. j. mckane, j. vannimenus and d. e. wolf, nato-asi series (plenum, new york, 1994).

in the present analysis we focus on the simplest case, in which only the single adatom is mobile. in systems with unstable or mobile islands of larger sizes, mobility along island edges or detachment moves modify the morphology from fractal to more compact even at the time scale of single-atom hopping; such systems are therefore not relevant for our considerations.
in principle, one can vary the deposition rate during the growth process. this is typically used to increase the number of nucleation sites, which is helpful for epitaxial growth, and is achieved by starting with a high deposition rate and gradually reducing it as the coverage increases. the large number of islands nucleated in the early stages are stable and keep aggregating more atoms in spite of the reduced deposition rate. for the purpose of growing larger dla-like islands in a shorter time, one might instead want to use a slow deposition rate in the early stages and increase it gradually. however, in this case new islands will continue to nucleate, and the low island density set by the initial low deposition rate will not be maintained.
|
fractal structures appear in a vast range of physical systems. a literature survey including _all experimental papers on fractals_ which appeared in the six physical review journals (a-e and letters) during the 1990s shows that experimental reports of fractal behavior are typically based on a scaling range which spans only 0.5-2 decades. this range is limited by upper and lower cutoffs, either because further data are not accessible or due to crossover bends. focusing on spatial fractals, a classification is proposed into (a) aggregation; (b) porous media; (c) surfaces and fronts; (d) fracture and (e) critical phenomena. most of these systems [except for class (e)] involve processes far from thermal equilibrium. the fact that for self-similar fractals [in contrast to the self-affine fractals of class (c)] there are hardly any exceptions to the finding of 0.5-2 decades raises the possibility that the cutoffs are due to intrinsic properties of the measured systems rather than the specific experimental conditions and apparatus. to examine the origin of the limited range we focus on a class of aggregation systems. in these systems a molecular beam is deposited on a surface, giving rise to nucleation and growth of diffusion-limited-aggregation-like clusters. scaling arguments are used to show that the required duration of the deposition experiment increases exponentially with the desired width of the scaling range. furthermore, using realistic parameters for surfaces such as al(111), it is shown that these considerations limit the range of fractal behavior to less than two decades, in agreement with the experimental findings. it is conjectured that related kinetic mechanisms that limit the scaling range are common in other nonequilibrium processes which generate spatial fractals.
|
in a previous paper , we have introduced a realistic , covariant , interpretation for the reduction process in relativistic quantum mechanics . the basic problem for a covariant description is the dependence of the states on the frame within which collapse takes place .more specifically , we have extended the tendency interpretation of standard quantum mechanics to the relativistic domain . within this interpretation of standard quantum mechanics ,a quantum state is a real entity that characterizes the disposition of the system , at a given value of the time , to produce certain events with certain probabilities . due to the uniqueness of the non - relativistic time ,once the measurement devices are specified , the set of alternatives among which the system chooses is determined without ambiguities .in fact , they are associated to the properties corresponding to a certain decomposition of the identity .the evolution of the state is also perfectly well defined .for instance , if we adopt the heisenberg picture , the evolution is given by a sequence of states of disposition .the dispositions of the system change during the measurement processes according to the reduction postulate , and remain unchanged until the next measurement .of course , the complete description is covariant under galilean transformations .in ref we proved that a relativistic quantum state may be considered as a multi - local relational object that characterizes the disposition of the system for producing certain events with certain probabilities among a given intrinsic set of alternatives .a covariant , intrinsic order was introduced by making use of the partial order of events induced by the causal structure of the theory . to do that , we have considered an experimental arrangement of measurement devices , each of them associated with the measurement of certain property over a space - like region at a given proper time .no special assumption was made about the state of motion of each device .indeed , different proper times could emerge from this description due to the different local reference systems of each device .thus , we may label each detector in an arbitrary system of coordinates by an open three - dimensional region , and its four - velocity .we now introduce a partial order in the following way : the instrument precedes if the region is contained in the forward light cone of .let us suppose that precedes all the others .then , it is possible to introduce a strict order without any reference to a lorentz time as follows .define as the set of instruments that are preceded only by .define as the set of instruments that are preceded only by the set and .in general , define as the set of instruments that are preceded by the sets with and .the crucial observation is that all the measurements on can be considered as `` simultaneous '' .in fact , they are associated with local measurements performed by each device , and hence represented by a set of commuting operators .as the projectors commute and are self - adjoint on a simultaneous " set , all of them can be diagonalized on a single option .these conditions ensure that the quantum system has a well defined disposition with respect to the different alternatives of the set . in other wordsone can unambiguously assign conditional probabilities after each measurement for the events associated to the set . 
in relativistic quantum mechanics ,this description is only consistent up to lambda compton corrections .in fact , the corresponding local projectors exist and commute , up to compton wavelengths .a fully consistent description of the measurement process in the relativistic domain requires the extension of the interpretation to quantum fields.this extension is far from trivial . besides the obvious difficulty of dealing with the infinite degrees of freedom of the field theory, one has to face some issues related with the lack of a covariant notion of time order of the quantum measurements . in fact , there is not a well defined description for the schroedinger evolution of the states on arbitrary foliations of space time , even for the free scalar quantum field in a minkowski background. although the evolution is well defined in the heisenberg picture , in general the operators associated with global space - time foliations are not self - adjoint .it is not guaranteed that in the particular case of the field operators this problem will appear .however , it is clear that a careful treatment is required in order to insure that they are well defined operators .another issue concerns the causal restrictions on the observable character of certain operators in q.f.t .as it has been shown by many authors , causality imposes further restrictions on the allowed ideal operations on a measurement process .this observation arise when one considers some particular arrangements composed by partial causally connected measurements .it has been shown that while some operators are admissible in the relativistic domain , many others are not allowed by the standard formalism .although this conclusion is correct , it is based on standard bloch s notion for ordering the events in the relativistic domain .remember that bloch s approach consists on taking any lorentzian reference system and hence:_`` ... the right way to predict results obtained at is to use the time order that the three regions have in the lorentz frame that one happens to be using''_ .nevertheless , we have introduced in another covariant notion of partial order . though both orders coincides in many cases ,they imply different predictions for the cases of partial causally connected measurements . herewe shall show that our notion of intrinsic order allows us to extend the allowed causal operators to a wider and natural class . in this paper we will consider the explicit case of a free , real , scalar field in a minkowski space - time .the field operators smeared with local smooth functions are quantum observables associated with ideal measurement devices .they are associated to projectors corresponding to different values of the observed fields .we shall prove that the projectors associated with different regions of the option commute .this allows us to extend the real tendency interpretation to the quantum field theory domain giving a covariant description of the evolution of the states in the heisenberg picture . 
as in relativistic quantum mechanics, the states are multi-local relational objects that characterize the disposition of the system for producing certain events with certain probabilities among a particular, intrinsic set of alternatives. the resulting picture of the multi-local and relational nature of quantum reality is even more intriguing than in the case of the relativistic particle. we shall show that it implies a modification of the standard expression for conditional probabilities in the case of partially causally connected measurements, allowing us to include a wider range of causal operators. our description could be experimentally tested. a verification of our predictions would lend stronger support to the realistic interpretations of the states in quantum mechanics.

the paper is organized as follows. in section 2 we develop our approach for a real free scalar field, showing that it is possible to give a standard description of the measurement process of a quantum field. in section 3 we show that this approach is consistent with causality and provides predictions for conditional probabilities that differ from the standard predictions in the case of partially causally connected measurements. we also discuss the resulting relational interpretation of the quantum world. we present some concluding remarks in section 4. the existence of the projectors as distributional operators acting on the fock space is discussed in the appendix.

we shall study the relational tendency theory of a real free klein-gordon (k-g) field evolving on a flat space-time. we start by considering an experimental arrangement of measurement devices, each of them associated with the measurement of the average field

\[ \hat{\phi}(f) = \int d^{3}x\, f(x^{j})\, \hat{\phi}(x^{0},x^{j}), \]

where f is a smooth smearing function with compact support, non-zero in the region associated with the instrument that measures the field. the decomposition x = (x^{0}, x^{j}) corresponds to the coordinates in the local lorentz rest frame of the measurement device located in that region. the scalar field operators satisfy the field equation \( (\Box + m^{2})\hat{\phi} = 0 \) and the equal-time canonical commutation relations

\[ [\hat{\phi}(x^{0},x^{j}),\hat{\pi}(x^{0},y^{j})] = i\,\delta(x^{j}-y^{j}), \]
\[ [\hat{\phi}(x^{0},x^{j}),\hat{\phi}(x^{0},y^{j})] = 0 . \]

thus, we may write the field operator in terms of its fourier components as follows:

\[ \hat{\phi}(x) = \int \frac{d^{3}k}{\sqrt{(2\pi)^{3}\,2\omega_{k}}} \left[ a(\mathbf{k})\, e^{i(\mathbf{k}\cdot\mathbf{x}-\omega_{k}x^{0})} + a^{\dagger}(\mathbf{k})\, e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega_{k}x^{0})} \right], \]

with \( \omega_{k} = \sqrt{\mathbf{k}^{2}+m^{2}} \) and \( \mathbf{k}\cdot\mathbf{x} = k_{j}x^{j} \). generically, the devices belonging to the same set of alternatives will lie on several spatially separated, non-simultaneous regions. thus, in order to describe the whole set of alternatives in a single covariant hilbert space, we will have to transform these operators to an arbitrary lorentzian coordinate system. we shall exclude accelerated detectors, and consequently we will have a unique decomposition of the fields into positive and negative frequency modes. this procedure allows us to define the hilbert space in the heisenberg picture in any global lorentz coordinate system.

the crucial observation is that all the measurements belonging to the same set of alternatives can be considered as ``simultaneous''. in fact, two arbitrary devices of such a set are separated by space-like intervals, and therefore, as we shall prove, the corresponding operators, represented on a common hilbert space, commute. what remains to be proved is that they are unbounded self-adjoint operators on the fock space of the scalar field, and therefore can be associated with ideal measurements. a measurement will produce events on the devices belonging to the set, and the state of the field will collapse to the projected state associated with the set of outcomes of the measurement.
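The commutation property just invoked can be stated compactly. The following display is a standard result of free-field theory which we add for clarity; the notation (smearing functions f, g and the Pauli-Jordan function \( \Delta \)) is ours:

\[ [\hat{\phi}(f),\hat{\phi}(g)] = i\int d^{4}x\, d^{4}y\, f(x)\,\Delta(x-y)\,g(y), \]

where \( \Delta \) is the Pauli-Jordan commutator function of the free Klein-Gordon field, which vanishes whenever x - y is space-like; the equal-time smearing used above is the special case \( f(x) = f(x^{j})\,\delta(x^{0}-t) \). Hence smeared field operators, and the spectral projectors constructed from them, commute whenever the supports of f and g are space-like separated.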
the determination of the corresponding projectors is a crucial step of our construction .we are also going to prove that the construction is totally covariant and only depends on the quantum system , that is , the scalar field and the set of measurement devices .all the local operators are represented on a generic hilbert space via boosts transformations , and the physical predictions are independent on the particular space - like surface chosen for the definition of the inner product . notice that we are not filling the whole space - time with devices .instead we are considering a set of local measurements covering partial regions of space - time .if we had chosen the first point of view , we would run into troubles .indeed , it was shown that the functional evolution can not be globally and unitarily implemented except for isometric foliations . once the projectors in the local reference frame of each detector has been defined , we need to transform them to a common , generic , lorentz frame where all the projectors will be simultaneously defined . in other words , recalling that hilbert spaces corresponding to two inertial systems of coordinates are unitarily equivalent we will represent all the projectors on the same space .the projectors and the smeared field operators transform in the same way , that is : where is the unitary operator related to the boost connecting the generic lorentz frame with the local frame of each device .since we are dealing with the heisenberg picture , the states do not evolve , and only the operators change with time .one can parameterize the evolution , with the time in the local reference frame of the device located in the region , or what is equivalent with the proper time associated to this device .the projectors corresponding to the observation of a given value of the field at a given proper time may be represented on the hilbert space associated with any lorentz frame .we are now ready to study the spectral decomposition of the operators .we start by solving the eigenvalue problem in the field representation .we shall work in the proper reference system where the measurement device is at rest .we shall proceed as follows , we start by choosing the field polarization and defining the fock space. then we shall determine the eigenvectors of the quantum observables , and show that they are well defined elements of the fock space . on this representationthe field operators are diagonal and the canonical momenta are derivative operators , ,\ ] ] .\ ] ] the inner product is given by : and the eigenvectors of the field operators satisfy the fields transform as scalars under lorentz transformations and the inner product is lorentz invariant .let us now proceed to the construction of the hamiltonian and the vacuum state in this representation .the hamiltonian operator is \label{ham}\ ] ] where the functional equation for the vacuum state , ] , once the infrared regularization is taken into account .for instance , if we compute its matrix elements among two vectors of the hilbert space : where is the volume of the box where we may put the field in order of avoiding the infrared divergences . + in order to prove that is a projector we start by observing that : this property allows us to construct a decomposition of the identity for a set of projectors associated to open portions of the reals such that , and up to a zero measure set . 
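(As an aside, for reference: in this field representation the standard free-field forms of the Hamiltonian and of the vacuum functional, assuming the usual conventions, are

\[ \hat{H} = \frac{1}{2}\int d^{3}x \left[ \hat{\pi}^{2} + (\nabla\hat{\phi})^{2} + m^{2}\hat{\phi}^{2} \right], \qquad \Psi_{0}[\varphi] \propto \exp\left( -\frac{1}{2}\int \frac{d^{3}k}{(2\pi)^{3}}\, \omega_{k}\, |\tilde{\varphi}(\mathbf{k})|^{2} \right), \]

where \( \tilde{\varphi} \) is the spatial Fourier transform of the field configuration; this is a sketch of the standard expressions, not a verbatim restoration of the displayed equations referred to above.)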
therefore: \[ \sum_i p_{\delta_i}=1\,. \] furthermore, if \(\delta_i=\delta_j\), then \(p_{\delta_i}p_{\delta_j}=p_{\delta_i}\); if not, \(p_{\delta_i}p_{\delta_j}=0\). finally, two projectors associated to different spatial regions commute. this is a consequence of the commutation of the local operators. indeed, if the local regions define the same proper lorentz frame, the commutation of the smeared field operators for space-like separation is straightforward, and hence the projectors commute. if the regions are not simultaneous, one needs to transform both operators to a common lorentz frame. let us call \(\lambda\) the lorentz transformation connecting both regions; then the relevant commutator will be \([\,u(\lambda)\,p_1\,u^{\dagger}(\lambda)\,,\,p_2\,]\). as an arbitrary lorentz boost may be written as a product of infinitesimal transformations, it is sufficient to consider \[ u(\lambda)\simeq 1+{\rm i}\,\epsilon\,\hat{k}\,, \] with \(\hat{k}\) the boost generator. the commutator of \(\hat{k}\) with the projector only involves canonical operators evaluated at points of the corresponding region, and these commute with operators associated to a region separated by a space-like interval. thus, also for non-simultaneous regions, space-like separated projectors commute. in the appendix we prove that these projectors have a well defined action on the fock space of the free klein-gordon field. thus the projectors associated to different local measurement devices on a "simultaneous" set commute and are self-adjoint. these properties ensure that they can be diagonalized on a single set of alternatives, and the quantum system has a well defined, dispositional state with respect to the different alternatives of the set. in the heisenberg picture, the evolution is given by a sequence of states of disposition. the dispositions of the system change during the measurement processes according to the reduction postulate, and remain unchanged until the next measurement. as in the case of relativistic quantum mechanics, the system provides, in each measurement, a result in devices that may be located on arbitrary space-like surfaces. notice that, contrary to what happens with the standard lorentz-dependent description of the reduction process, here the conditional probabilities of further measurements are unique. it is in that sense that the dispositions of the state to produce further results have an objective character. as we mentioned before, it has been recently observed by many authors that the standard time order of ideal measurements in a hilbert space may imply causal violations if partially connected regions are taken into account. here we shall show that, although this analysis is correct, it is based on a different notion for the ordering of the events. if one defines the partial order as we did previously, one may extend the causal predictions of the theory, and the reduction process is covariant and consistent with causality for a wide and natural class of operators. let us suppose, following sorkin, that the devices performing the observation are not completely contained in the light cones coming from the previous set. we are, therefore, interested in the case where only a portion of a certain instrument is contained inside the light cone of the previous set. we could generalize the previously introduced notion of order by saying that one instrument follows another if at least a portion of the former lies inside the forward light cone coming from the latter.
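as an aside, this generalized notion of order is easy to state operationally. the sketch below, my own idealization rather than anything in the paper, models each device as a finite set of events in minkowski coordinates (with c = 1) and tests whether at least a portion of one region lies in the closed forward light cone of another:

```python
import numpy as np

def in_forward_cone(p, q):
    """true if event q = (t, x, y, z) lies in the closed forward cone of p (c = 1)."""
    dt = q[0] - p[0]
    dx = np.asarray(q[1:]) - np.asarray(p[1:])
    return dt >= 0 and dt * dt >= dx @ dx

def follows(region_b, region_a):
    """region_b follows region_a if at least a portion of it is inside a's cone."""
    return any(in_forward_cone(pa, qb) for pa in region_a for qb in region_b)

# sorkin-type arrangement: b partially inside a's cone, c partially inside
# b's cone, while a and c stay space-like separated.
a = [(0.0, 0.0, 0.0, 0.0)]
b = [(1.0, 0.5, 0.0, 0.0), (1.0, 6.0, 0.0, 0.0)]
c = [(2.0, 7.0, 0.0, 0.0)]
print(follows(b, a), follows(c, b), follows(c, a))   # -> True True False
```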
with this ordering, let us consider a particular arrangement for a set of instruments which measure a particular observable on a relativistic quantum system. suppose three local regions a, b, c, with their corresponding heisenberg projectors \({p^a}_a\), \({p^b}_b\), \({p^c}_c\), associated to values a, b, c of certain heisenberg observables over each region. we arrange the regions such that some points of b follow a and some points of c follow b, but a and c are spatially separated (see figure 1). it is easy to build such an arrangement, even with local regions. in this context, due to microcausality, the commutation relations between the observables and the projectors will be: \[ [{p^a}_a,{p^b}_b]\neq 0\,, \] \[ [{p^b}_b,{p^c}_c]\neq 0\,, \] \[ [{p^a}_a,{p^c}_c]=0\,. \] let us suppose that one uses this new notion of order to define the sequence of options a, b, c, and the corresponding reduction processes followed by a quantum system. then, since the new order implies the sequence a, b, c, one immediately notices that the a measurement affects the b measurement and also the b measurement affects the c measurement. consequently, one should expect that the a measurement would affect the c measurement, leading to information traveling faster than light between a and c, which are space-like separated regions. one could immediately prove this fact as follows. let us suppose that the state of the field was prepared by an initial measurement, preceding the whole arrangement, whose density operator we denote by \(\rho_0\). now, the probability of having the results a, b, c in the corresponding regions, given the initial state, is, using wigner's formula: \[ p(a,b,c)={\rm tr}[{p^c}_c\,{p^b}_b\,{p^a}_a\,\rho_0\,{p^a}_a\,{p^b}_b]\,. \label{probm} \] this is the standard result that we would have obtained by making use of bloch's notion of order. thus, one notices that an observer located in c could know with certainty if a measurement has been performed in a. in fact, assume that a non-selective measurement has occurred on the region a, and ask for the probability of having b and c under this hypothesis. then one arrives at the probability: \[ p(c,b)=\sum_{a}{\rm tr}[{p^c}_c\,{p^b}_b\,{p^a}_a\,\rho_0\,{p^a}_a\,{p^b}_b]\,. \] one immediately notices that this probability depends on whether the a measurement was carried out or not, independently of its result. this is due to the non-commutativity of the projector \({p^a}_a\) with \({p^b}_b\), and of \({p^b}_b\) with \({p^c}_c\), which prevents us from using the identity \(\sum_a{p^a}_a=1\). notice that we have assumed that the result b of the b measurement is known. however, since the region b is only partially connected with region c, a portion of b will not be causally connected with c and therefore the preservation of causality would require that the measurement carried out in b should be taken as non-selective with respect to an observer localized in c (the result of the b measurement cannot be transmitted causally to an observer in c). if we take this fact into account, one can prove that even with a non-selective measurement on b one arrives at causal problems. in fact, we notice that the probability of having c, no matter the result on b, is: \[ p(c)=\sum_{a,b}{\rm tr}[{p^c}_c\,{p^b}_b\,{p^a}_a\,\rho_0\,{p^a}_a\,{p^b}_b]\,, \label{probm2} \] which depends on whether or not the a and b measurements were carried out. hence, if one starts from a different definition of the partial ordering of the alternatives, in terms of a partial causal connection, one gets faster-than-light signals for a wide class of operators which prevent us from eliminating the a measurement from the probabilities.
in those cases the observer could know with certainty whether the previous two measurements were carried out or not. there is no violation with respect to the b observation, since an observer at c may be causally informed about a measurement carried out at b. however, the above analysis implies faster-than-light communication with respect to the a measurement, since a is space-like separated from c. therefore, the requirement of causality strongly restricts the allowed observable quantities in relativistic quantum mechanics. in what follows we are going to show that our description is consistent with causality for a wider range of operations. the key observation is that our notion of partial ordering requires us to consider the instruments as composed of several parts, each one associated to a different measurement process. that is, in the case where only a portion of the instrument is causally connected, one needs to decompose the devices into parts such that each part is completely inside (or outside) the forward light cone coming from the previous devices. now the alternatives belonging to one option are composed of several parts of different instruments. in fact, a particular device could contain parts belonging to different options. although the measurement performed by any device is seen as simultaneous in its local lorentz reference system, it will be associated to several events. let us reconsider the previous example with our notion of order (see figure 1). let us start with the region a and the preparation of the state \(\rho_0\) there. we will call \(b_1\) the part of b not causally connected with a, and \(b_2\) the part of b not causally connected with c; \(b_2\) is thus the part of b causally connected with a, and \(b_1\) the part causally connected with c. now we can construct the set of options as \(\{a,b_1\}\), followed by \(\{b_2,c\}\). thus, we need to deal with partial observations. let us consider the case where the operator associated to the measurement carried out on b may be taken as composed of two partial operators associated to \(b_1\) and \(b_2\); we shall denote the respective eigenvalues as \(b_1\) and \(b_2\). notice that the individuality of the device still persists, since we do not have access to each partial result but only to the total result obtained on b after the observation. now it is important to consider how one gets the total result b through \(b_1\) and \(b_2\). let us assume that the result is extensive in the sense that \(b=f(b_1,b_2)\). this relation depends on the particular observation we are performing on each alternative. for instance, let us call \(\hat{o}_{b_1}\) and \(\hat{o}_{b_2}\) the local operators associated to the observations on \(b_1\) and \(b_2\), and \(\hat{o}_b\) the operator associated to b. then f is the functional relation between them. for the case of the field measurements we will have \[ \hat{o}_b=\hat{o}_{b_1}+\hat{o}_{b_2}\,, \label{proj} \] which is just the relation \(b=b_1+b_2\). notice, however, that this hypothesis also includes a wide range of observables. indeed, it allows us to measure local operators which involve products of multiple smeared fields. these operators will indeed imply a nonlinear behavior for the functional relation. now we can compute the probability of observing the results a, b, c for selective measurements, given the initial state.
in the first place, we have to deal with the measurement of b occurring on the device b. as we have divided the device into two portions, this result will be composed of two unknown partial results \(b_1\) and \(b_2\), such that \(b=f(b_1,b_2)\). we proceed analogously for the probability of having b, since it results from two independent measurements in \(b_1\) and \(b_2\). thus, we will have: \[ p(c,b,a)=\sum_{b_1}\sum_{b_2}\delta\bigl(b-f(b_1,b_2)\bigr)\,{\rm tr}[{p^c}_c\,{p^{b_2}}_{b_2}\,{p^{b_1}}_{b_1}\,{p^a}_a\,\rho_0\,{p^a}_a\,{p^{b_1}}_{b_1}\,{p^{b_2}}_{b_2}]\,, \label{prob1} \] where we have taken into account that, due to microcausality, \([{p^{b_1}}_{b_1},{p^a}_a]=0\) and \([{p^c}_c,{p^{b_2}}_{b_2}]=0\). the sum on \(b_1\) goes over the complete set of possible results; the same applies for the \(b_2\) measurement. now, in order to study the causal implications, we need to compute the probability of having c for non-selective measurements on a and b. therefore, one gets: \[ p(c)=\sum_{b_1}{\rm tr}[{p^c}_c\,{p^{b_1}}_{b_1}\,\rho_0\,{p^{b_1}}_{b_1}]\,, \label{prob2} \] where we have used that \(\sum_a{p^a}_a=1\), \(\sum_{b_2}{p^{b_2}}_{b_2}=1\) and \([{p^a}_a,{p^c}_c]=0\). thus, this probability does not depend on the a measurement, and our description does not lead to any violation of causality during the measurement process. although there is some kind of correlation introduced by the causally connected part of b with c, we will not have any information about the actual observation made on b, as we noticed before. this correlation is very interesting and could be experimentally tested. notice that only the assignment of probabilities given by equations ([prob1]),([prob2]) is consistent with causality for the general kind of measurements that we have considered. several issues concerning the relational interpretation can be read from the previous analysis. the devices never lose their individuality as instruments of measurement of a certain observable, for instance, the local field on a certain region. however, what is quite surprising is that, while the devices are turned on for a local proper time, the "decision" made by the quantum system with respect to this region is taken by two non-simultaneous processes within the given intrinsic order. the local time of measurement is quite different from the internal order in which the "decisions" were taken. now, the set of "simultaneous" alternatives is composed of portions of several devices. the individuality of each device is preserved, since we do not have access to the results of these partial alternatives. what we observe in each experiment is the total result registered by each device. another consequence of our approach concerns the causal connection among alternatives belonging to different sets. as we have shown, there is correlation among the causally connected portions of different devices; nevertheless, this correlation does not imply any incompatibility with causality. all these features show a global aspect of the relational tendency interpretation which is very interesting, since the decomposition is produced by the global configuration of the measurement devices evolving in a minkowski space-time, without any reference to a particular lorentz foliation. we have considered a measurement arrangement which is reminiscent of the observation of a non-local property.
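to see the difference between the two prescriptions at work, here is a small numerical toy of my own: two qubits stand in for the localized degrees of freedom, reproducing only the commutation pattern of figure 1, not the field-theoretic construction of the paper. the standard single-projector treatment of the b device lets a non-selective a measurement change the statistics at c, while the decomposed prescription of eqs. ([prob1]),([prob2]) does not:

```python
import numpy as np
from numpy.linalg import eigh

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
kron = np.kron

def projectors(op):
    """spectral projectors {eigenvalue: p} of a hermitian operator."""
    vals, vecs = eigh(op)
    projs = {}
    for v, u in zip(vals, vecs.T):
        key = round(float(v), 6)
        projs[key] = projs.get(key, 0) + np.outer(u, u.conj())
    return projs

# qubit 1 carries the a <-> b2 causal link, qubit 2 the b1 <-> c link
A  = projectors(kron(sz, I2))                        # measurement at a
B1 = projectors(kron(I2, sz))                        # part of b not connected with a
B2 = projectors(kron(sx, I2))                        # part of b not connected with c
C  = projectors(kron(I2, (sx + sz) / np.sqrt(2)))    # measurement at c
Bt = projectors(kron(sx, I2) + kron(I2, sz))         # whole device: b = b1 + b2

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())          # correlated state prepared beforehand

def nonsel(rho, projs):                   # non-selective measurement
    return sum(p @ rho @ p for p in projs.values())

def p_c(rho, measure_a, decomposed, c=1.0):
    r = nonsel(rho, A) if measure_a else rho
    r = nonsel(nonsel(r, B1), B2) if decomposed else nonsel(r, Bt)
    return np.trace(C[c] @ r).real

print("standard  :", p_c(rho0, True, False), p_c(rho0, False, False))
print("decomposed:", p_c(rho0, True, True),  p_c(rho0, False, True))
```

running it, the two numbers in the first line generically differ (apparent signaling from a to c), while the two numbers in the second line coincide exactly.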
we can indeed naturally extend our approach to the case of widely separated non-local measurements, or even widely extended observations. now the partial causal connection is simply implemented by taking sorkin's arrangement of figure 1, modified to the case of measurements carried out on disconnected regions, or even on a space-like surface (the space-like surface we may consider will be a portion of a constant-time surface in the lorentz rest frame of the devices involved in the non-local measurement; in these cases it is possible to show that the relational observable is a well defined self-adjoint operator). the conclusion is the same. in the cases of partially causally connected measurements, our description includes a wider range of causal operators than the standard approach (the standard expression ([probm]) is causal in the linear case and indeed coincides with our expression in sorkin's arrangement; this is due to the decomposition ([proj]) of the b measurement, which allows us in the linear case to transform ([probm]) into equation ([prob1]). however, this cannot be done in general, for instance in the nonlinear case, and there are particular experimental setups where both formulae disagree even in the linear case). we have developed the multi-local, covariant, relational description of the measurement process of a quantum free field. we have addressed the criticisms raised by various authors against the standard hilbert space approach and shown that they are naturally avoided by our covariant description of the measurement process. in order to address these issues, we have extended the intrinsic order associated to a sequence of measurements to the case of partially connected measurement devices. this extension has further implications on the relational meaning of the measurement process. a particular measurement process of a given property, performed by a given measurement device on a region of space-time, should be considered as composed of a sequence of _decision_ processes occurring on different regions of the device. this solves the causal problems and implies a global relational aspect of the complete set of alternatives. from an observational point of view, we have proved that causality holds in the canonical approach for a wide and natural class of operators, while the standard formalism is extremely restrictive. our proposal could be experimentally tested through the implementation of the particular configuration proposed in the previous section. furthermore, our predictions for the reduction of the states should be associated to the _decision_ process during the interaction of the quantum system with the measurement devices and may be considered, if confirmed, as experimental evidence of the physical character of the quantum states. if this is experimentally verified, the standard _instrumentalist_ approach introduced by i. bloch concerning the measurement process in relativistic quantum mechanics would not be compatible with experiments. this is mainly due to the fact that bloch's order does not coincide with our intrinsic order in the case of partially connected regions, and bloch's approach would not in general be compatible with causality for the measurements we have considered. it is now clear that the description that we have introduced has a relational nature. firstly because the intrinsic order of the options is defined in relational terms by the measurement devices.
but also because self-adjoint operators may only be defined if they are associated to a set of local devices. recall that self-adjoint global operators that describe the field on arbitrary spatial hyper-surfaces do not exist. the tendency interpretation of non-relativistic quantum mechanics is naturally a relational theory. if one thinks, for instance, of the solution proposed by bohr for the epr paradox, one immediately recognizes that one cannot associate a given reality to a quantum system before measurement. even the unruh effect for accelerated detectors has a very deep relational meaning. as unruh noticed: _"a particle detector will react to states which have positive frequency with respect to the detector's proper time, not with respect to any universal time"_. one of the main challenges of the xxi century is the conclusion of the xx century revolution toward a quantum theory of gravity. the relational point of view is crucial in both theories, the quantum and the relativistic. we have proposed a possible interpretation for any canonical theory in the realm of special relativity. how to extend it to gravity requires further study, mainly because of the nonexistence of a natural intrinsic order without any reference to a space-time background. furthermore, up to now there is no evidence of local observables in pure quantum gravity. this is further evidence of the relational character of the theory. we are now studying these issues. we would like to thank michael reisenberger for very useful discussions and suggestions about the presentation of this paper. here we prove that the projector is a well defined operator in the fock space. the basic object is the probability amplitude for observing a given value of the smeared field, which is just the vacuum state, evaluated on the corresponding field configuration, up to a global phase. this result is a consequence of the poincaré invariance of the vacuum, modulo the zero mode, and of the fact that in the limit of vanishing smearing width it reduces to the usual local expression. several issues may be learned from this expression. first of all, as it should be, one gets a gaussian distribution around the zero value. furthermore, it is divergence free, provided the integral gives a finite result; this is achieved by demanding that the smearing functions do not contain high fourier components. this procedure may be identified with the usual one in quantum field theory: we will call n-point function the n-th functional derivative of the generating function with respect to the smearing function. now, it is not difficult to show that the inner product may be calculated, up to multiplicative factors, in terms of the n-point functions. those factors are functions of the frequencies of the modes involved in the given fock state.
to do that, we start by studying the form of the particle states in fock space. the fock space is constructed by the action of the creation operators. applied to the vacuum in the field representation, each creation operator multiplies by some momentum-dependent component of the field and takes a derivative of the vacuum state with respect to the mode corresponding to the particle state being created. due to the structure of the vacuum, this derivative term also leads to a multiplicative factor: it is the mode itself multiplied by some function of the frequency of that particular mode. furthermore, it gives a finite result, since the fock space is made of states with a finite number of particles. therefore, the fock n-particle states in the functional representation are obtained by multiplying the vacuum by a set of momentum-dependent components of the field, times some functions of the frequencies of each mode in the state. this is exactly the form of the n-point functions obtained from the generating function. the divergent multiplicative factor coming from the normalization of the vacuum disappears when we take the projector as in ([vac]). furthermore, the matrix elements of the projector in the fock space can be computed as follows: since the inner product is a sum of a set of n-point functions times some finite functions of the modes of the particular state under consideration, we can write them as derivatives of the generating function and take the derivatives out of the integral. in the integral part there remains a divergent factor coming from the vacuum state; however, the integral is quadratic in the field and contributes a factor that cancels this infinity, as before. this is a well known fact: as noticed by jackiw, the divergent factor, which is ultraviolet divergent, does not affect matrix elements between states of the fock space, since it is chosen in such a way that it disappears from the final expression. thus, the matrix elements of the projector are well defined in the fock space, and we arrive at a well defined quantum field theory, as required.
r. gambini and r. a. porto, phys. lett. a 294, 129-133 (2002).
r. gambini and r. a. porto, phys. rev. d 63, 105014 (2001).
d. marolf and c. rovelli, e-print gr-qc/0203056.
c. torre and m. varadarajan, class. quantum grav. 16, 2651 (1999).
y. aharonov and d. albert, phys. rev. d 21, 3316 (1980); phys. rev. d 24, 359 (1981); phys. rev. d 29, 228 (1984).
r. sorkin, in "directions in general relativity, vol. ii: a collection of essays in honor of dieter brill's sixtieth birthday", b. l. hu and t. a. jacobson, eds. (cup, 1993).
r. jackiw, "field theoretic results in the schroedinger representation", lecture presented at the 17th int. colloq. on group theoretical methods in physics, st. adele, canada (1988).
j. m. mourao, t. thiemann and j. m. velhinho, j. math. phys. 40, 2337 (1999).
d. beckman, d. gottesman, m. a. nielsen and j. preskill, phys. rev. a 64, 052309 (2001).
d. beckman, d. gottesman, a. kitaev and j. preskill, phys. rev. d 65, 065022 (2002).
i. bloch, phys. rev. 156, 1377 (1967).
a. corichi, m. p. ryan and d. sudarsky, e-print gr-qc/0203072.
n. bohr, phys. rev. 48, 696 (1935).
w. g. unruh, phys. rev. d 14, 870 (1976).
|
we have recently introduced a realistic, covariant interpretation for the reduction process in relativistic quantum mechanics. the basic problem for a covariant description is the dependence of the states on the frame within which collapse takes place. a suitable use of the causal structure of the devices involved in the measurement process allowed us to introduce a covariant notion for the collapse of quantum states. however, a fully consistent description in the relativistic domain requires the extension of the interpretation to quantum fields. the extension is far from straightforward. besides the obvious difficulty of dealing with the infinitely many degrees of freedom of the field theory, one has to analyze the restrictions imposed by causality concerning the allowed operations in a measurement process. in this paper we address these issues. we shall show that, in the case of partially causally connected measurements, our description allows us to include a wider class of causal operations than the one resulting from the standard way of computing conditional probabilities. this alternative description could be experimentally tested. a verification of this proposal would give stronger support to the realistic interpretations of the states in quantum mechanics.
|
experiments at the large hadron collider (lhc) will produce tremendous amounts of data. with design instantaneous luminosities of \(10^{34}\,{\rm cm^{-2}\,s^{-1}}\) and a crossing rate of 40 mhz, the collision rate will be about \(10^9\) hz. but the rate for new physics processes, after accounting for branching fractions and the like, is many orders of magnitude smaller, leading to the need to select the interesting events out of a huge data sample. the compact muon solenoid (cms) experiment developed a distributed computing model from the very early days of the experiment. there are a variety of motivating factors for this: a single data center at cern would be expensive to build and operate, whereas smaller data centers at multiple sites are less expensive and can leverage local resources (both financial and human). but there are also many challenges in making a distributed model work. the cms distributed computing model has different computing centers arranged in a "tiered" hierarchy, as illustrated in figure [fig:tiers], with experimental data typically flowing from clusters at lower-numbered tiers to those at higher-numbered tiers. the different centers are configured to best perform their individual tasks. the tier-0 facility at cern is where prompt reconstruction of data coming directly from the detector takes place; where quick-turnaround calibration and alignment jobs are run; and where an archival copy of the data is made. the facility is typically saturated by just those tasks. there are seven tier-1 centers in seven nations (including at fnal in the united states). these centers keep another archival copy of the data, and are responsible for performing re-reconstruction of older data with improved calibration and algorithms, and making skims of primary datasets that are enriched in particular physics signals. they also provide archival storage of simulated samples produced at tier-2. there are about 40 tier-2 sites around the world (including seven in the u.s.); they are the primary resource for data analysis by physicists, and also where all simulations done for the benefit of the whole collaboration take place. these centers thus host both organized and chaotic computing activities. everything stated so far could just as well have been said two years ago, before the lhc actually began operations. in fact, it was. what is different now is that we have two years of real operational experience under our belts. in this presentation, we discuss the performance of the computing system in 2010, its anticipated and actual performance in 2011, the technical advances that have made that level of performance possible, and some thoughts for the future. 2010 was the first full year of lhc operations, and the distributed computing system performed as expected. in particular, all workflows ran at their designated facilities from the very beginning; there was no need to change figure [fig:tiers] on the fly. the amount of data handled by the system was truly stunning. the tier-0 facility produced 100 different datasets, with a total of 13.9 billion events and 674 tib.
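a quick sanity check on these tier-0 totals (my own back-of-envelope arithmetic, not from the source):

```python
# average event size implied by the quoted 2010 tier-0 output
events = 13.9e9               # events in tier-0 output datasets
volume = 674 * 2**40          # 674 tib, in bytes
print(f"average event size: {volume / events / 1e3:.0f} kB/event")
# -> roughly 53 kB/event, averaged over all output data formats
```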
at the tier-1 sites, the data was re-reconstructed 19 times (far more often than anticipated in the computing model, due to rapidly evolving understanding of the detector), with 17.2 billion events and 2.4 pib output. there were four re-reconstruction passes on the monte carlo (mc) samples, with 8.3 billion events and 2.9 pib output. mc production was done at both tier 1 and tier 2, with upwards of 500 million events/month produced at peak rates. there were also many transfers from tier to tier. the movement of analysis datasets to tier 2 kept up with data-taking, so that data was in the hands of analysts within a day of being reconstructed. the original computing model envisioned peak rates of 600 mb/s from tier 0 to tier 1 and 1200 mb/s from tier 1 to tier 2; these were routinely exceeded. the original model did not envision transfers among tier-2 sites, but in fact tier-2 sites received about as much data from other tier-2s as they did from the tier-1s. this created more flexibility and efficiency in how data could be moved. meanwhile, user analysis was successfully migrated to the tier-2 sites. it was impossible to know for sure that the grid could handle hundreds of users, or even if all of those users would want to use the grid to begin with. but in 2010 there were about 450 unique analysis users per week, submitting 150,000 analysis jobs per day. the true metric of success was that, by the time of this conference, 75 papers on the 2010 data had been submitted, accepted or published, with more in the pipeline, and there was never any evidence that computing was ever the bottleneck. it is remarkable that all of this was achieved in the face of rapidly changing experimental conditions. as wonderful as this is, it must be remembered that the lhc only delivered 45 pb\(^{-1}\) of integrated luminosity. this is less than what the computing system was designed for, and success was mandatory under those conditions. however, it provided an opportunity to shake down the system under a relatively small load, and to gather data that would help plan for 2011. as the 2011 run approached, it was clear that the target instantaneous luminosity for the year would be reached rather quickly, and indeed it was in june. it was also expected that this would be achieved by having large proton bunches with many interactions per event; 16 was the amount anticipated by the september technical stop.
as a result of this, the event size was expected to double (to 0.8 mb/event for the comprehensive reco data format, 0.2 mb/event for the stripped-down aod format) from the 2010 values, and the processing time was expected to quadruple (to 96 hs06/event). while the trigger rate was nominally expected to be 300 hz, it was expected that it would be a challenge to keep it there, given the pressures to try to avoid raising thresholds in the face of higher event rates. the experience of 2010 plus the above parameters were used as the basis for a very thorough modeling effort of the necessary computing resources. the expected available resources had already been established through national pledges to the worldwide lhc computing grid (wlcg). the model could then be used to tailor operational plans to make sure activities could fit into the resources available. what was clear was that cms computing was expected to be resource-limited, even after squeezing a lot of efficiency out of operations. a few highlights of the modeling follow. one fact that emerged from 2010 operations (but in retrospect seems fairly obvious) is that the tier-1 facilities, built at a scale to handle data re-processing when necessary, are not very busy when they are not re-processing data. thus, it makes sense to move as much mc production as possible from tier 2 to tier 1, to make use of available tier-1 resources and to leave more room for user analysis at tier 2. the left panel of figure [fig:t1capacity] shows how different activities are expected to make use of tier-1 processing resources month by month through 2013. one can see that the tier-1 centers are extremely busy when re-processing, but still less so otherwise. the right panel of figure [fig:t1capacity] shows how disk space is expected to be used at tier-1. less space is allocated to aods at tier 1 than in the original model, which imagined that each of the seven centers would keep a complete copy of the aod. but the reliability and speed of data transfers gives confidence that keeping only two copies of the aod events across the entire system will be sufficient. still, to fit within the available resources, physicists must switch from reco to aod format as much as possible, and regular deletion campaigns will be required. figure [fig:t2capacity] shows similar plots for the planned use of tier-2 resources. as long as much mc production activity stays at tier 1, the use of processing resources at tier 2 is reasonable. but during re-processing periods, mc production moves back to tier 2, at which point the resources are overcommitted. in addition, to remain within the available disk resources, 90% of user analysis needs to move from reco to aod samples. the model assumes that there will be four copies of each analysis dataset across all of the tier-2 sites, but this might have to be reduced if there is a disk-space crunch. under any circumstances, tier-2 resources are heavily committed over the next few years. how well does real cms life match up with the plan, which was developed before the lhc started running this year?
here we discuss recent operational experience, and some of the technological changes that have been implemented to make the cms computing system work better. figure [fig:runtime] shows the amount of time that the lhc has been operating for physics data-taking each month, through july, compared to the expected operational time that was an input to the model. the figure also shows the average trigger rate. overall the lhc duty cycle has been lower than expected, but this has been compensated for by a trigger rate that is consistently greater than 300 hz. (the trigger rate includes the overlap in primary datasets, which was planned to be about 25%.) in total, about 1.1 b events have been recorded, compared to 1.3 b expected from the model. a small amount of contingency has been gained as a result. given this rate, the size of the full 2011 dataset after a re-reconstruction pass should be about 1 pb. table [tab:eventsize] compares the sizes of different kinds of events to the expected values, which were based on simulations with realistic running conditions. in general, pileup has been lower than anticipated, due to an optimization of the luminosity that led to adding more proton bunches rather than increasing the number of protons per bunch. event sizes are smaller than expected as a result. the time required to reconstruct events has been about as expected for minimum-bias events, and about 20% larger than planned for other datasets. so far, event sizes and processing times have been roughly constant with increasing luminosity. but the lhc has now reached the limit of how many bunches can be circulated in the current configuration, and luminosity must be increased in ways that increase the pileup at the same time. thus the events are expected to get larger and processing times longer as the year continues. [table [tab:eventsize]: expected and observed event sizes, in kilobytes] as for the operations of facilities at the various tiers, the left panel of figure [fig:t0use] shows the numbers of jobs running and queued during the first weekend in june, when the lhc had 40% livetime. as can be seen, the tier-0 cluster was saturated, leading to nearly a thousand jobs queued at times. once the machine stopped, this backlog was quickly cleared. however, the right panel of the figure shows that the processors were not fully used. this is because the switch to 64-bit executables and a new version of root led to a larger memory footprint that inhibited efficient use of cpu. work is in progress to reduce the size of the executable, and to take advantage of whole-node scheduling, in which multiple similar jobs could run on a single node and share read-only memory.
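the quoted 1 pb estimate is easy to reproduce from the numbers above (again my own arithmetic, using the modeled rather than observed event size):

```python
# projected size of the re-reconstructed 2011 dataset
events_2011 = 1.1e9                 # events recorded so far
reco_bytes  = 0.8e6                 # modeled reco size: 0.8 mb/event
print(f"{events_2011 * reco_bytes / 1e15:.2f} pb")   # -> 0.88 pb, i.e. ~1 pb
```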
despite these challenges , the tier-0 facility is keeping up well enough with reconstructing the data as they arrive .the seven tier-1 facilities completed a full re - reconstruction of the 2010 data ( nearly 1.5 b events ) in april , and all available 2011 data ( about 600 m events ) in may , as was expected in the original planning .it is possible that there will not be another re - reconstruction of the data before the end of 2011 , but this depends on whether the software version at tier 0 is changed because of challenging event environments .meanwhile , 2.8 b mc events were produced through the end of july , as indicated in figure [ fig : mcprod ] .the latest simulation samples include out - of - time pileup .the expected production capacity was 0.22 b events / month , but in fact the system has been capable of much more than that . some of the success of the tier-1 operations can be attributed to new technology that was implemented this year .the workflow management system that had been used for data re - processing was originally designed for mc production . in that use case, it is not fatal to lose some of the events in the course of the processing , as more can always be made .but it is unacceptable to lose any data events .a new workflow management system , wmagent , is much more robust against such problems , and its deployment was a great help to operations .it is a state machine rather than a messaging system , and has 100% accountability for all events processed .one issue that arose was that the current version of the reconstruction software uses more memory than before , and as a result jobs were running longer and more jobs failed . because wmagent can redo failed jobs straightforwardly , this did not become an operational hurdle .obviously , the system has allowed for more efficient mc production too .work has also begun to implement whole - node scheduling at the tier-1 sites for further operational efficiencies .the goal is to have 50% of tier-1 resources used this way by the end of the year .figure [ fig : t2use ] shows the number of running , completed and pending jobs at the approximately 50 cms tier-2 centers during the year so far . 
about 30,000 cores are continually available for use, with an increasing number of them devoted to analysis use, as more mc production moves to tier 1. about 250,000 grid jobs are completed each day, more than was anticipated in the original computing model. still, thousands of jobs are pending on any given day, with longer queues at times of great analysis activity. this suggests that more resources are needed for analysis, and/or that the jobs are not being optimally scheduled across all the sites. meanwhile, the user community continues to grow. figure [fig:anausers] shows the number of unique analysis users in the cms distributed computing system over the past year and a half. it is steadily growing, modulo peaks and valleys that can be correlated with important events on the particle-physics calendar. a significant fraction of the collaboration, about 800 users/month, is making use of grid resources. as noted in the previous section, it was imperative that analysis users start to move towards using the more compact aod format for their work if cms was to stay within the available computing resources. tools have recently been developed to track dataset usage in greater detail. these tools indicate that the migration is indeed happening as planned, as shown in figure [fig:dataset]. another experiment previously had these tracking tools; cms is looking forward to using them to help manage dataset distribution and more. the operation of the grid sites for analysis will improve with some technology developments that will be deployed shortly in cms. the cms remote analysis builder (crab) that analysts use to submit jobs will soon have a significant revision. wmagent will be installed underneath to take advantage of its features. as a result, the user interface will change, requiring some user re-education. also, the greater use of pilot jobs (or "glide-ins") for analysis is anticipated. this could allow for a prioritization of user jobs across the distributed system, not just at individual sites, and could also have potential for balancing usage across sites. one fact that has emerged from the operation of the cms distributed computing infrastructure for data analysis is that a key limitation of the computing model is that cpu and storage must be co-located; a dataset must effectively reside on a disk that is in the same room as the compute node that runs the program that analyzes the dataset. thus, the data must be placed where the processing resources are. this is difficult to optimize, as that relies on having some sense of analyst preferences for datasets and sites. however, we now know that wide-area networking is more reliable than was anticipated when the original monarc model was developed. there, dataset transfer between sites was avoided as much as possible.
at the same time, cms has made much progress in optimizing the reading of data files over the network, so that there is very little additional cost compared to reading a file in the same room. thus, one is inclined to forget co-location and to think big. what if users could analyze data in one place with a cpu that is in another place? in such a scheme, data placement would hardly matter anymore, as any data would be available anytime, anywhere. users could be insulated from storage problems at sites; if a file was corrupt at one site, there could be a straightforward and quiet failover to the network for access of the same file at a different site. participation in data analysis could be broadened by enabling users who do not have large storage systems, as those users could still have access to any data. the dream of a "diskless tier-3" site becomes realistic. also, it would be straightforward to access data with cloud resources, should that become cost-competitive. prototype systems for such a scheme have already been deployed, using the xrootd technology. a key element is redirectors that allow jobs to find data at remote sites without any action from the user. the us cms tier-2 sites have been configured so that a failed file access at a site will fall back to reading the file from another site using the redirector over the wide-area network. there is still much work to be done to test and operate the system at the needed scale, and to develop monitoring, accounting and throttling systems. in related work, cms is also exploring how to migrate jobs between sites to optimize the use of processing resources. the actual use of cms computing in 2011 has been largely in line with the model that was created based on 2010 experience. some of the parameters have ended up being higher or lower, within about 20%, but the variances have tended to compensate each other. the model does predict that cms will be limited by its computing resources during this year. early indications of these limitations were already being seen in the run-up to this summer's conferences. some analyses were slowed by the wait for simulation samples, and there has been significant demand for processing resources at tier 2. if cern chooses to run the lhc at very high luminosity, this could get worse still. at this writing, in mid-september 2011, the situation is still quite fluid. physicists will need to adapt to this new environment. however, all of this should be taken as good news: the resource limitations reflect the fact that the lhc datasets are growing rapidly, and provide the opportunity for the discovery of new physics. we can conclude that 2010 was an extremely good year for cms computing. the distributed system was a strategic asset for producing physics results, and no one has ever complained that their work was limited by the available computing. it is important to keep in perspective that the operational scales of everyday operations were considered "bleeding edge" just a few years ago. this strong performance has continued in 2011, but cms has now entered an era of resource constraints.
fortunately, continuing technology developments have given some operational breathing room, and some of these advances have the potential to change the paradigm of computing at the lhc, and for data-intensive, high-throughput computing in general. i thank ian fisk for his advice in preparing the talk and his general comprehensive knowledge of cms computing. i also thank the organizers of the dpf 2011 conference for an engaging and enjoyable week.
see http://public.web.cern.ch/public/en/lhc/lhc-en.html for many details.
r. adolphi et al. (cms collaboration), "the cms experiment at the cern lhc," journal of instrumentation 3, s08004 (2008).
c. grandi, d. stickland, l. taylor et al., "the cms computing model," cern-lhcc-2004-035 (2004).
k. bloom, "the cms computing system: successes and challenges," proceedings of the dpf-2009 conference, detroit, mi (2009).
m. michelotto, "a comparison of hep code with spec benchmark on multicore worker nodes," proceedings of computing in high energy physics (chep09), prague, czech republic (2009).
j. knobloch, l. robertson et al., "lhc computing grid technical design report," cern-lhcc-2005-024 (2005).
f. van lingen et al., "job life cycle management libraries for cms workflow management projects," proceedings of computing in high energy physics (chep09), prague, czech republic (2009).
d. spiga et al., "automation of user analysis workflow in cms," proceedings of computing in high energy physics (chep09), prague, czech republic (2009).
see http://monarc.web.cern.ch/monarc/ for a variety of documents on this topic.
a. dorigo et al., "xrootd/txnetfile: a highly scalable architecture for data access in the root environment," proceedings of the 4th wseas international conference on telecommunications and informatics, prague, czech republic (2005).
|
after years of development , the cms distributed computing system is now in full operation . the lhc continues to set records for instantaneous luminosity , and cms continues to record data at 300 hz . because of the intensity of the beams , there are multiple proton - proton interactions per beam crossing , leading to larger and larger event sizes and processing times . the cms computing system has responded admirably to these challenges . we present the current status of the system , describe the recent performance , and discuss the challenges ahead and how we intend to meet them .
|
stars with relatively low surface temperatures show distinctive envelope convection zones which affect mode stability. among the first problems of this nature was the modelling of the red edge of the classical instability strip (is) in the hertzsprung-russell (h-r) diagram. the first pulsation calculations of classical pulsators without any pulsation-convection modelling predicted red edges which were much too cool and which were at best only neutrally stable. what followed were several attempts to bring the theoretically predicted location of the red edge into better agreement with the observed location by using time-dependent convection models in the pulsation analyses (dupree 1977; baker & gough 1979; gonzi 1982; stellingwerf 1984). more recently several authors, e.g. bono et al. (1995, 1999), houdek (1997, 2000), xiong & deng (2001, 2007), dupret et al. (2005), were successful in modelling the red edge of the classical is (fig. [fig:1]). these authors report, however, that different physical mechanisms are responsible for the return to stability. for example, bono et al. (1995) and dupret et al. (2005) report that it is mainly the convective heat flux, xiong & deng (2001) the turbulent viscosity, and baker & gough (1979) and houdek (2000) predominantly the momentum flux (turbulent pressure) that stabilizes the pulsation modes at the red edge.
\begin{tabular}{l|l}
balance between buoyancy and turbulent & kinetic theory of accelerating \\
drag (unno 1967, 1977) & eddies (gough 1965, 1977a) \\ \hline
acceleration terms of convective & acceleration terms included: \\
fluctuations neglected & fluctuations evolve with growth rate \(\sigma\) \\
nonlinear terms approximated & nonlinear terms neglected \\
by spatial gradients & during eddy growth \\
fluctuating pressure neglected & fluctuating pressure included \\
in momentum equation & in eq. (1) \\
characteristic eddy lifetime & eddy lifetime determined stochastically \\
prescribed algebraically & from a parametrized shear instability \\
mixing-length variation tied to the & variation of mixing length according to \\
pressure scale height \(h_{\rm p}\) (unno 1967), & rapid distortion theory (townsend 1976), \\
or to an alternative local & i.e. variation also of \\
prescription (unno 1977) & eddy shape \\
turbulent pressure neglected in & turbulent pressure included in mean \\
hydrostatic support equation & equation for hydrostatic support \\
\end{tabular}
[tab:tc_comp]
the authors mentioned in the previous section used different implementations for modelling the interaction of the turbulent velocity field with the pulsation. in the past various time-dependent convection models were proposed, for example, by schatzman (1956), gough (1965, 1977a), unno (1967, 1977), xiong (1977, 1989), stellingwerf (1982), kuhfuß (1986), canuto (1992), gabriel (1996), grigahcène et al. (2005).
here i shall briefly review and compare the basic concepts of two, currently in use, convection models. the first model is that by gough (1977a, b), which has been used, for example, by baker & gough (1979), balmforth (1992) and by houdek (2000). the second model is that by unno (1967, 1977), upon which the generalized models by gabriel (1996) and grigahcène et al. (2005) are based, with applications by dupret et al. (2005). nearly all of the time-dependent convection models assume the boussinesq approximation to the equations of motion. the boussinesq approximation relies on the fact that the height of the fluid layer is small compared with the density scale height. it is based on a careful scaling argument and an expansion in small parameters (spiegel & veronis 1960; gough 1969). the fluctuating convection equations for an inviscid boussinesq fluid in a static plane-parallel atmosphere are \[ \frac{\partial u_i}{\partial t}+u_j\frac{\partial u_i}{\partial x_j}-\overline{u_j\frac{\partial u_i}{\partial x_j}}=-\frac{1}{\rho}\frac{\partial p'}{\partial x_i}+g\,\delta\,\frac{T'}{\overline{T}}\,\delta_{i3}\,, \] \[ \frac{\partial T'}{\partial t}+u_j\frac{\partial T'}{\partial x_j}-\overline{u_j\frac{\partial T'}{\partial x_j}}=\beta\,w-\frac{1}{\rho c_p}\frac{\partial F_j'}{\partial x_j}\,, \] supplemented by the continuity equation for an incompressible gas, \(\partial u_i/\partial x_i=0\), where \(u_i\) is the turbulent velocity field (with vertical component w), ρ is density, p is gas pressure, g is the acceleration due to gravity, T is temperature, \(c_p\) is the specific heat at constant pressure, δ is the coefficient of thermal expansion, \(F_j\) is the radiative heat flux, β is the superadiabatic temperature gradient and \(\delta_{ij}\) is the kronecker delta. primes (′) indicate eulerian fluctuations and overbars horizontal averages. these are the starting equations for the two physical pictures describing the motion of an overturning convective eddy, illustrated in fig. 1. in the first physical picture, adopted by unno (1967), the turbulent element, with a characteristic vertical length ℓ (the mixing length), evolves out of some chaotic state and achieves steady motion very quickly. the fluid element maintains exact balance between buoyancy force and turbulent drag by continuous exchange of momentum with other elements and its surroundings. thus the acceleration terms \(\partial u_i/\partial t\) and \(\partial T'/\partial t\) are neglected, and the nonlinear advection terms provide dissipation (of kinetic energy) that balances the driving terms. the nonlinear advection terms are approximated by spatial gradients across the eddy, of order \(w\,u_i/\ell\) and \(w\,T'/\ell\). this leads to two nonlinear equations which need to be solved numerically together with the mean equations of the stellar structure. the second physical picture, which was generalized by gough (1965, 1977a, b) to the time-dependent case, interprets the turbulent flow by indirect analogy with kinetic gas theory. the motion is not steady, and one imagines the convective element to accelerate from rest, followed by an instantaneous breakup after the element's lifetime. thus the nonlinear advection terms are neglected in the convective fluctuation equations (1)-(2), but are taken to be responsible for the creation and destruction of the convective eddies (gough 1977a, b). by retaining only the acceleration terms, the equations become linear, with analytical solutions of the form \(w,\,T'\propto{\rm e}^{\sigma t}\), subject to proper periodic spatial boundary conditions, where t is time and σ is the linear convective growth rate. the mixing length enters in the calculation of the eddy's survival probability, which is proportional to the eddy's internal shear (rms vorticity), for determining the convective heat and momentum fluxes. although the two physical pictures give the same result in a static envelope, the results for the fluctuating turbulent fluxes in a pulsating star are very different (gough 1977a). the main differences between unno's and gough's convection models are summarized in table [tab:tc_comp].
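the linear stage of gough's picture can be made concrete with a toy calculation (mine, with invented illustrative numbers; radiative losses of the temperature fluctuation are modelled as newtonian cooling, standing in for the flux-divergence term):

```python
import numpy as np

# keep only the acceleration terms of eqs (1)-(2) for a single eddy:
#   dw/dt  = g*delta*T'/Tbar       (buoyancy driving)
#   dT'/dt = beta*w - T'/tau_r     (superadiabatic driving, radiative loss)
# the linear convective growth rate sigma is the largest eigenvalue.
g, delta, Tbar = 274.0, 1.0, 6000.0     # illustrative near-surface values
beta, tau_r    = 2.0e-4, 300.0          # superadiabatic gradient, cooling time

M = np.array([[0.0,  g * delta / Tbar],
              [beta, -1.0 / tau_r    ]])
sigma = np.linalg.eigvals(M).real.max()
print(f"sigma = {sigma:.2e} s^-1, e-folding time {1/sigma:.0f} s")
```

with these numbers the e-folding time comes out of order ten minutes, i.e. comparable to the eddy turnover times one expects in such envelopes.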
figure [fig:2] displays the mode stability of an evolving 1.7 \(m_\odot\) delta scuti star crossing the is. the results were computed with the time-dependent, nonlocal convection model by gough (1977a, b). as demonstrated in the right panel of fig. [fig:2], the dominating damping term in the work integral for a star located near the red edge is the contribution from the turbulent pressure fluctuations. gabriel (1996) and more recently grigahcène et al. (2005) generalized unno's time-dependent convection model for stability computations of nonradial oscillation modes. they included in their mean thermal energy equation the viscous dissipation of turbulent kinetic energy, ε, as an additional heat source. the dissipation of turbulent kinetic energy is introduced in the conservation equation for the turbulent kinetic energy (e.g. tennekes & lumley 1972; canuto 1992): \[ \frac{{\rm D}}{{\rm D}t}\Bigl(\tfrac{1}{2}\overline{u_iu_i}\Bigr)=-\overline{u_iu_j}\,\frac{\partial U_i}{\partial x_j}+g\,\delta\,\frac{\overline{w\,T'}}{\overline{T}}-\epsilon\,,\qquad \epsilon=\nu\,\overline{\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_i}{\partial x_j}}\,, \label{eq:tke} \] where D/Dt is the material derivative, \(U_i\) is the average (oscillation) velocity, i.e. the total velocity is \(U_i+u_i\), and ν is the constant kinematic viscosity (in the limit of high reynolds numbers the molecular transport term can be neglected). the first and second term on the right of eq. ([eq:tke]) are the shear and buoyant productions of turbulent kinetic energy, whereas the last term is the viscous dissipation of turbulent kinetic energy into heat. this term is also present in the mean thermal energy equation, but with opposite sign. the linearized perturbed mean thermal energy equation for a star pulsating radially with complex angular frequency ω can then be written, in the absence of nuclear reactions, as (δ denotes a lagrangian fluctuation and i omit overbars in the mean quantities): \[ {\rm i}\omega\,T\,\delta s=\delta\epsilon-\frac{{\rm d}\,\delta l}{{\rm d}m}\,, \label{eq:peq} \] where m is the radial mass co-ordinate, s is the specific entropy and l is the total (radiative and convective) luminosity. grigahcène et al. (2005) evaluated ε from a turbulent kinetic energy equation which was derived without the assumption of the boussinesq approximation. furthermore, it is not obvious whether the dominant buoyancy production term (see eq. [eq:tke]) was included in their turbulent kinetic energy equation. dupret et al. (2005) applied the convection model of grigahcène et al. (2005) to delta scuti and γ doradus stars and reported well defined red edges. the results of their stability analysis for delta scuti stars are depicted in fig. [fig:3]. the left panel compares the location of the red edge with results reported by houdek (2000, see also fig. [fig:2]) and xiong & deng (2001). the right panel of fig. [fig:3] displays the individual contributions to the accumulated work integral for a star located near the red edge (indicated by the 'star' symbol in the left panel). it demonstrates the near cancellation effect between the contributions of the turbulent kinetic energy dissipation, \(\delta\epsilon\), and turbulent pressure, \(\delta p_{\rm t}\), making the contribution from the fluctuating convective heat flux the dominating damping term. the near cancellation effect between the two terms was demonstrated first by ledoux & walraven (1958) (see also gabriel 1996) by writing the sum of both work integrals as: \[ w_{p_{\rm t}}+w_{\epsilon}\propto-\int_{m_{\rm b}}^{M}\left(\Gamma_3-\tfrac{5}{3}\right)\,{\rm im}\!\left(\delta p_{\rm t}\,\frac{\delta\rho^{*}}{\rho}\right)\frac{{\rm d}m}{\rho}\,, \] where M is the stellar mass, \(m_{\rm b}\) is the enclosed mass at the bottom of the envelope and \(\Gamma_3-1=(\partial\ln T/\partial\ln\rho)_s\) (s is specific entropy) is the third adiabatic exponent. except in ionization zones \(\Gamma_3\simeq5/3\), and consequently \(w_{p_{\rm t}}\simeq-w_{\epsilon}\). the convection model by xiong (1977, 1989) uses transport equations for the second-order moments of the convective fluctuations.
in the transport equation for the turbulent kinetic energy, xiong adopts the approximation by hinze (1975) for the turbulent dissipation rate, expressing it in terms of the heisenberg eddy coupling coefficient and the wavenumber of the energy-containing eddies. however, xiong does not provide a work integral for the dissipation term (neither do unno et al. 1989), but includes the viscous damping effect of the small-scale turbulence in his model. the convection models considered here describe only the largest, most energy-containing eddies and ignore the dynamics of the small-scale eddies lying further down the turbulent cascade. small-scale turbulence does, however, contribute directly to the turbulent fluxes and, under the assumption that the small-scale eddies evolve isotropically, they generate an effective viscosity which is felt by a particular pulsation mode as an additional damping effect. the turbulent viscosity can be estimated as \(\nu_{\rm t}\simeq\lambda\,w\ell\) (e.g. gough 1977b; unno et al. 1989), where λ is a parameter of order unity, w is the rms vertical eddy velocity and ℓ is the mixing length. the associated work integral can be written in cartesian co-ordinates as (ledoux & walraven 1958) \[ w_\nu=-\int_{m_{\rm b}}^{M}\nu_{\rm t}\left[\frac{\partial\xi_i}{\partial x_j}\frac{\partial\xi_i^{*}}{\partial x_j}+\frac{\partial\xi_i}{\partial x_j}\frac{\partial\xi_j^{*}}{\partial x_i}-\frac{2}{3}\left|\frac{\partial\xi_k}{\partial x_k}\right|^{2}\right]{\rm d}m\,, \label{eq:wnu} \] where \(\xi_i\) is the displacement eigenfunction. xiong & deng (2001, 2007) successfully modelled the is of delta scuti and red giant stars and found the dominating damping effect to be the turbulent viscosity (eq. [eq:wnu]). this is illustrated in fig. [fig:4] for two delta scuti stars: one is located inside the is (left panel), the other outside the cool edge of the is (right panel). the contribution from the small-scale turbulence was also the dominant damping effect in the stability calculations by xiong et al. (2000) of radial p modes in the sun, although the authors still found unstable modes at intermediate radial orders. the importance of the turbulent damping was reported first by goldreich & keeley (1977) and later by goldreich & kumar (1991), who found all solar modes to be stable only if turbulent damping was included in their stability computations. in contrast, balmforth (1992), who adopted the convection model of gough (1977a, b), found all solar p modes to be stable due mainly to the damping of the turbulent pressure perturbations, \(\delta p_{\rm t}\), and reported that viscous damping is about one order of magnitude smaller than the turbulent-pressure contribution. turbulent viscosity (eq. [eq:wnu]) always leads to mode damping, whereas the perturbation of the turbulent kinetic energy dissipation, \(\delta\epsilon\) (see eq. [eq:peq]), can contribute to both damping and driving of the pulsations (gabriel 1996). the driving effect of \(\delta\epsilon\) was shown by dupret et al. (2005) for a γ doradus star.
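for a purely radial mode, eq. ([eq:wnu]) reduces to radial derivatives of \(\xi_r\), and the sign of the integrand can be checked numerically. the sketch below is my own toy evaluation with invented profiles for \(\nu_{\rm t}\), ρ and \(\xi_r\) (not a stellar model):

```python
import numpy as np

# evaluate eq. (wnu) for xi = xi_r(r) e_r with toy envelope profiles
r   = np.linspace(0.5, 1.0, 2000)             # radius in stellar radii
rho = 1e3 * np.exp(-8 * (r - 0.5))            # toy density profile
nut = 1e8 * np.exp(-((r - 0.95) / 0.03)**2)   # nu_t peaked near the surface
xir = (r / r[-1])**3                          # toy displacement eigenfunction

dxir  = np.gradient(xir, r)
div   = dxir + 2 * xir / r                    # divergence of a radial field
shear = 2 * (dxir**2 + 2 * (xir / r)**2)      # the two gradient terms
integrand = nut * (shear - (2.0 / 3.0) * div**2)  # = (4/3)*nut*(dxir-xir/r)**2 >= 0

dm   = 4 * np.pi * r**2 * rho * np.gradient(r)
w_nu = -np.sum(integrand * dm)
print(f"w_nu = {w_nu:.3e} (arbitrary units; negative, i.e. damping)")
```

the integrand is non-negative by construction, which is the numerical counterpart of the statement that turbulent viscosity always damps the mode.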
in practice , however , this viscous dissipation term may be important . grigahcène et al . ( 2005 ) included this term but ignored the damping contribution of the small - scale turbulence , which was found by xiong & deng ( 2001 , 2007 ) to be the dominating damping term . the small - scale damping effect was also ignored in the calculations by houdek ( 2000 ) . a more detailed comparison of the convection descriptions has not yet been made , but houdek & dupret have begun to address this problem . * christensen - dalsgaard : * the location of the red edge is predominantly determined by radiative damping which gradually dominates over the driving effect of the so - called convective flux blocking mechanism ( dupret et al . 2005 ) . a change in the mixing length will not only affect the depth of the envelope convection zone but also the characteristic time scale of the convection and consequently the stability of g modes with different pulsation periods . a calibration of the mixing length to match the observed location of the instability strip will also calibrate the depth of the convection zone at a given surface temperature .
|
a review of the current state of mode physics in classical pulsators is presented . two time - dependent convection models currently in use are compared and their application to mode stability is discussed , with particular emphasis on the location of the delta scuti instability strip .
|
market impact , i.e. the interplay between order flow and price dynamics , has increasingly attracted the attention of researchers and of the industry in the last years . despite its importanceboth from a fundamental point of view ( due to its relation with supply - demand ) and from an applied point of view ( due to its relation with transaction cost analysis and optimal execution ) , market impact is not yet fully understood and different models and approaches have been proposed and empirically tested .it is important to note that market impact refers to different aspects of this interplay and that they should be carefully distinguished ( see for a discussion ) .first , there is the impact of an individual trade or of the aggregated signed order flow in a fixed time period .second , especially for transaction cost analysis and optimal execution , it is more interesting to consider the impact of a large trade ( sometimes termed as _ meta - order _ ) executed incrementally by the same investor with many transactions and orders over a given interval of time .both these definitions of market impact are typically investigated by considering one asset at a time , i.e. without considering the effect of a trade ( or of an order ) in one asset on the price dynamics of another asset .this is the third type of impact , that we study in this paper , and that is termed _ cross - impact_. understanding and modeling cross - impact is important for many reasons , since it enters naturally in problems like optimal execution of portfolios , statistical arbitrage of a set of assets , and to study the relation between correlation in prices and correlation in order flows .conceptually , while self - impact , the impact of a trade on the price of the same asset , can qualitatively be understood as the result of a mechanical component ( e.g. a market order with volume larger than the volume at the opposite best ) and an induced component ( resilience of the order book due to liquidity replenishment ) , the source of cross - impact is less clear . on one side if a trader is liquidating simultaneously two assets one can obviously expect a non - vanishing cross - impact . since impact measures are typically averages across many measurements , this mechanism produces cross - impact if simultaneous trades and positively correlated order flow are frequently observed . on the other side ,liquidity providers and arbitrageurs detect local mispricing between correlated assets and bet on a reversion to normality by placing orders .in other words this induced cross - impact relates to the possibility of identifying price changes due to local imbalances of supply - demand in one asset ( rather than to fundamental information ) and of exploiting the possibly short - lived mispricing between correlated assets .even though cross - impact has already been discussed e.g. in as an extension of their optimal execution model and in in a principal component approach , it has only recently been the subject of extensive empirical studies . show that order imbalance has a significant impact on returns across stocks and sectors at the daily scale . present evidence for a structured price cross - response and correlated order flow at the intraday time - scale across stock pairs . link cross - response and order flow in a multivariate extension of the transient impact model ( tim ) of and show that their model can reproduce a significant part of the well - known correlation structure of asset returns . 
perform a scenario analysis in a similar model , finding that cross - response is related both to cross - impact and correlated order flow across assets .it is clear that the cross impact problem talks naturally to dynamic arbitrage and to the possibility of price manipulation , as already discussed in .it is therefore natural to ask which constraints the no - price - manipulation assumption imposes on market impact models .there is a large literature on this problem , often focused on the single asset case ( , , , , ) . in the multi - asset casemany articles are concerned with strategies for optimal portfolio liquidation in the presence of volatility risk by expanding the model of . show that optimal execution strategies for investors with constant absolute risk aversion are deterministic and for a more general absolute risk aversion setting finds that the optimal strategies for investors with different risk preferences vary only in the speed of their execution .the case of cross - impact in a lit market when there is also a dark pool is discussed in .the paper most related to ours from a theoretical point of view is .they model multi - asset price impact by considering a linear version of the model of on a discrete - time grid , extending thus the model already considered in .they show that the absence of no - dynamic - arbitrage corresponds to the decay kernel being described by a positive definite matrix - valued function and formulate further conditions to ensure that resulting optimal strategies are well - behaved showing how they can be constructed .however it is not generally straightforward to establish positive definiteness when a decay kernel is obtained coordinate - wise from estimations and therefore necessary conditions for the absence of dynamic arbitrage that can be verified on estimated decay kernels prove useful . in this paper ,focusing on the tim framework in continuous time , we establish some easily verifiable necessary conditions that must be satisfied by self- and cross - impact , in order to avoid the presence of price manipulation . we do this in the same spirit of by explicitly constructing trading strategies that lead to price manipulation and negative expected cost .some of these relations are simple generalizations to the multi - asset case of the corresponding relations for the single asset case derived in .other relations that we derive here are instead genuinely relative to the multi - asset case .in particular we formalize in lemma [ lemma : linbounded_symm ] that cross - impact must be symmetric , i.e. the return induced in asset by a trade of volume in asset must be equal to the impact of a trade of the same volume in asset on the price of asset .it is natural to ask whether this symmetry condition is empirically verified . in this paperwe study a market whose microstructure , to the best of our knowledge , has not been explored so far .this is the mot market for sovereign bonds , a fully electronic limit order book market for fixed income assets .one of the reasons for our choice is that , due to the nature of the traded assets , we expect cross - impact , especially due to quote revisions , to be very high .in fact , two italian fixed - rate btps differ mostly through the coupon rate and the time - to - maturity - factors which are accounted for in the price , which moves in a very synchronised way since for most purposes both titles are perfectly interchangeable . 
calibrating a multivariate tim in trade time we find that there exist pairings of bonds where the symmetry condition of cross - impact is violated in a statistically significant way . by comparing the potential profit from a simple arbitrage strategy to transaction costs such as the bid - ask spread , which are neglected in the model , we conclude that arbitraging is not profitable . it is also crucial to point out that the empirical part of the paper is important because it is the first application of a tim model to fixed income markets and , to the best of our knowledge , it is the first work to consider cross - impact of single market orders rather than the order sign imbalance aggregated over fixed time intervals ( as done in and ) . the rest of the paper is structured as follows . section [ sec : model ] introduces our model and the links to the no - dynamic - arbitrage principle . section [ sec : generalconstraints ] discusses some general constraints on cross - impact that arise in our framework for bounded decay kernels ; the corresponding proofs are given in appendix [ app : proofs ] . in section [ sec : empirical ] we study cross - impact empirically and compare to the theoretical results of section [ sec : generalconstraints ] . finally , section [ sec : conclusion ] concludes . the presence of dynamic arbitrage depends on the market impact model . in this paper we consider the transient impact model ( tim ) introduced in ( see for a discussion ) . the model was originally formulated in discrete time , and its continuous time version , which we present in the next section , has been proposed in . assumes that the asset price at time follows a random walk with a drift determined by the cumulative effect of previous trades , where represents the ( instantaneous ) impact of trading at a rate at time weighted by a decay kernel with . is a noise process , for example a wiener process , and is the volatility . for consistency with equation ( [ eq : cost_nd ] ) for the cost of trading , the trading rate is given in units of number of shares per unit of time . in our multivariate extension we consider the prices of a set of assets where the drift in asset not only depends on the trading history of asset but also on past trades in assets . thus the price process of asset is given by , with a correlated noise process ( e.g. a multivariate wiener process ) and where , in addition to the _ self - impact _ terms and , we have introduced additive _ cross - impact _ terms and , , that represent the impact of trading in asset on the price of asset . if for any pairing and are non - zero , then there is cross - impact from asset to asset . for a trading strategy , the no - dynamic - arbitrage principle requires a non - negative expected cost , where is a round - trip trading strategy in asset . in this section we assume that the decay kernel is non - increasing , bounded and therefore normalized , as well as right - continuous at in all components . a special case of such a kernel is exponential decay as in . the proofs of the results in this section are given in appendix [ app : proofs ] and are obtained following the approach of . all results can also be obtained following under the slightly more restrictive assumptions of the decay kernel being representable as a suitable series expansion and considering only the first non - zero orders in the limit of . in the following we will often make use of a simple strategy in two assets where trading is split into two phases of trading at constant rates .
a simple in - out strategy .[ eg : strategy_easy ] + at first we build up a position at a constant trading rate from time until time , with , and then liquidate the position in a second phase from until . the velocities , are constrained by our choice of the strategy .since is a round - trip strategy , the trading rates and have opposite signs , i.e. , and the time when the trading direction changes is given as .let us further fix notation with .the cost of this strategy can be decomposed as where in the one - dimensional case the principle of no - dynamic - arbitrage imposes a constraint on the term which is negative due to the change in trading rate and from equation ( [ eq : nodynamicarb_nd ] ) it follows that as in . for the multi - dimensional case further implies a relationship between the strength of cross - impact and self - impact . in the following we will try to exploit cross - impact in order to push down the cost of strategiessupposing that cross - impact is positive for positive trading rates , i.e. for , we can choose , e.g. trading into asset while contemporaneously trading out of asset , in order to get a negative contribution from cross - impact. in the one - dimensional case shows that permanent market impact needs to be an odd function in the rate of trading , i.e. .we show here that the same holds for cross impact for decay kernels that are non - singular around .[ lemma : antisym_nd ] assume a price process as in ( [ eq : price_nd ] ) with a bounded , non - increasing decay kernel that is continuous around .then such a model admits price manipulation if is not an odd function of the trading rate , i.e. unless therefore we will assume for the remainder of this paper that ( [ eq : antisym_nd ] ) holds . as a corollary it follows that [ cor : absence ] absence of dynamic - arbitrage for price process as in ( [ eq : price_nd ] ) with a decay kernel that is bounded , non - increasing and continuous around requires that the cost constraint in equation ( [ eq : nodynamicarb_nd ] ) also imposes a constraint on the relative strength of .let us consider a simple example at first .trading in and out at the same rate [ eg : size_easy ] we consider a strategy as above in equation ( [ eq : trading_perm_2 ] ) where we are trading in and out of positions at the same rate , i.e. and therefore , but in different directions in the two assets , i.e. and thus . for simplicitylet us assume a uniform decay of market impact , i.e. for all pairings .the cost is then \\\nonumber & \left\ { \int_0^{t/2}{\mathrm{d}t \int_0^t{\left [ g(t - s ) - g(t+t/2 - s ) \right ] \mathrm{d}s } } + \int_{t/2}^{t } { \mathrm{d}t \int_{t/2}^{t } { \left [ g(t - s ) - g(t - s ) \right ] \mathrm{d}s } } \right\}\end{aligned}\ ] ] and shows that the term in curly brackets in equation ( [ eq : cost_inoutsame ] ) is greater than zero when further requiring that is strictly decreasing .thus the no - dynamic - arbitrage constraint ( [ eq : nodynamicarb ] ) requires that for any , thus constraining the relative size of the cross - impact terms and with respect to self - impact .note that by setting we recover the one - dimensional case and it follows that . in the general casethe decay is not uniform and we can not factor out the term in curly brackets in equation ( [ eq : cost_inoutsame ] ) , instead we have to weight each of the terms of equation ( [ eq : nodynamicarb_inoutsame ] ) with a factor that depends on the decay . 
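the constraint just derived can be checked numerically . the following python sketch , a minimal illustration rather than the authors code , computes the expected cost of the in - and - out strategy of example [ eg : size_easy ] under linear impact with exponential decay kernels and scans the cross - impact strength ; all parameter values are invented .

import numpy as np

def round_trip_cost(eta, rho, big_t=1.0, v=1.0, n=400):
    # expected cost of the two-asset in-and-out strategy under linear transient
    # impact with exponential kernels g_ij(t) = exp(-rho_ij * t)
    t = (np.arange(n) + 0.5) * big_t / n
    dt = big_t / n
    va = np.where(t < big_t / 2.0, v, -v)     # build up asset a, then liquidate
    vb = -va                                  # trade asset b in the opposite direction
    rates = np.stack([va, vb])
    lag = t[:, None] - t[None, :]
    cost = 0.0
    for i in range(2):
        for j in range(2):
            kern = np.where(lag >= 0.0, np.exp(-rho[i, j] * lag), 0.0)
            cost += eta[i, j] * (rates[i] @ kern @ rates[j]) * dt * dt
    return cost

rho = np.full((2, 2), 1.0)                    # uniform decay for simplicity
for eta_cross in [0.0, 0.5, 0.9, 1.1]:
    eta = np.array([[1.0, eta_cross], [eta_cross, 1.0]])
    print("eta_cross = %.1f -> expected cost = %+.5f" % (eta_cross, round_trip_cost(eta, rho)))

with uniform decay the cost of this strategy changes sign once the cross - impact strength exceeds the average of the two self - impact strengths , in line with the constraint above ; a negative cost signals price manipulation .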
furthermore we are free to choose a strategy with different trading rates as in example [ eg : strategy_easy ] or a more sophisticated strategy . consider this problem in discrete time with linear instantaneous price impact .their proposition 2.6 states that absence of arbitrage in the sense of equation ( [ eq : nodynamicarb_nd ] ) is equivalent to the condition that the product of strength of impact and the decay kernel corresponds to a positive definite matrix - valued function . finds that any single - asset market impact model as in equation ( [ eq : price_1d ] ) is inconsistent when is non - linear and is bounded and non - increasing .we expand this proposition to the multi - asset case with cross - impact , i.e. [ lemma : finite_linear ] assuming a price process as in ( [ eq : price_nd ] ) with a bounded , non - increasing decay kernel that is continuous around and a non - linear market impact function .then such a model admits price manipulation and and are inconsistent . as a corollary of lemma [ lemma : finite_linear ] we can extend lemma 4.1 in for self - impact to the case with cross - impact .[ cor : exp_linear ] a price process as in ( [ eq : price_nd ] ) with self- and cross - impact that decays exponentially at different rates and instantaneous price impact that is non - linear , admits price manipulations .we obtain the same corollary for purely permanent impact in the limit , as already observed in : nonlinear permanent self- and cross - asset market impact is inconsistent with the principle of no - dynamic - arbitrage .again we assume that is bounded , non - increasing and continuous around and therefore that impact is linear , i.e. , for otherwise our model is inconsistent , as shown in the previous section .we will further show that in this case impact needs to be symmetric , i.e. , in order to avoid price manipulations .an asymmetric strategy with purely permanent and linear impact .[ eg : linperm_asymm ] + suppose market impact is linear and permanent , i.e. and . then the cost of trading in the single - asset case only depends on the initial and final positions and .if there is cross - impact between two or more assets , there is also an interaction term between the trading rates in different assets , i.e. and while the first sum disappears for a round - trip strategy since , this is not generally the case for the second sum . 
to see this ,let us consider a different round - trip strategy in two assets , which is now asymmetric and lasts over three phases : while the self - impact cancels out when we calculate , the asymmetry in our strategy makes for a non - trivial total cost that stems from cross - impact : if this gives a negative cost and likewise when by interchanging assets in the strategy ( [ eq : strategy_asymm ] ) .therefore it follows that cross - impact needs to be symmetric with respect to asset pairs in order to exclude arbitrage opportunities , as observed in .in fact we can expand this result to the transient impact case : [ lemma : linbounded_symm ] if decay of market impact is bounded , non - increasing and continuous around and is linear , absence of no - dynamic arbitrage requires that let us reconsider example [ eg : size_easy ] taking into account linearity and symmetry of cross - impact as shown above .in this case equation ( [ eq : nodynamicarb_inoutsame ] ) simplifies to and minimizing the cost constrains the strength of cross - impact as in agreement with proposition 3.7.(b ) of and equivalent to the condition for a symmetric matrix to be positive - semidefinite .the conditions of linearity and symmetry of cross - impact are necessary for absence of arbitrage , but are they also sufficient ?an asymmetric strategy with symmetric , exponentially decaying linear impact .[ eg : linexp_asymm ] let us re - consider the strategy in eq .( [ eq : strategy_asymm ] ) with exponentially decaying impact and a linear instantaneous impact function that is now symmetric with .the cost terms for self - impact are now \nonumber \\ c^{bb } & = \frac{\eta^{bb } v_b^2 } { ( \rho^{bb})^2 } \left [ - e^{-2 \rho^{bb } t / 3 } + 4 e^{- \rho^{bb } t / 3 } - 3 + \frac{2 \rho^{bb } t } { 3 } \right ] \ : .\label{eq : cost_asymm_self_exp}\end{aligned}\ ] ] and likewise for cross - impact \nonumber \\c^{ba } & = \frac{\eta^{\text{cross } } v_a v_b } { ( \rho^{ba})^2 } \left [ 2 -\frac { \rho^{ba } t}{3 } -3 e^{- \rho^{ba } t / 3 } + e^{- 2 \rho^{ba } t/3 } \right ] \ : .\label{eq : cost_asymm_cross_exp}\end{aligned}\ ] ] when we develop the terms in squared brackets in ( [ eq : cost_asymm_self_exp ] ) and ( [ eq : cost_asymm_cross_exp ] ) in terms of all terms of order and cancel out , while terms proportional to sum to thanks to the symmetry of instantaneous cross - impact .the cost to the first non - zero order of is then + \sum_{i , j } \mathcal{o}\left ( ( \rho^{ij})^2 t^4 \right ) \label{eq : cost_asymm_exp}\ ] ] and when is large enough compared to the other terms the cost can still be negative . for absence of price manipulationswe also require constraints on the asymmetry of as a function of the other parameters .this is in agreement with the results of in discrete time .their proposition 3.7 proves that the conditions of symmetry , and a non - increasing decay kernel , i.e. and , are sufficient for absence of arbitrage .we complement this result in lemma [ lemma : linbounded_symm ] by showing that symmetry is indeed necessary for any quasi - permanent decay kernel .for our empirical analysis we consider italian sovereign bonds traded on the retail platform `` mercato telematico delle obbligazioni e dei titoli di stato '' ( mot ) .we choose to estimate cross - impact between bonds instead of equities since we expect the strength of cross - impact among sovereign bonds of the same issuing country , especially of similar maturity , to be bigger than the one between e.g. 
stocks or sectors .sovereign bonds of one country typically have a very similar underlying risk and their prices are implicitly connected via the yield curve , a link that we deem stronger than e.g. a common factor between stocks of the same sector .the secondary market for european sovereign bonds is divided into an opaque over - the - counter market ( otc ) and an observable exchange - traded market .the italian securities and exchange commission consob publishes a bi - annual report listing the share in trading of italian government bonds separated per trading venue . for the year 2014 ( 2015 ) the share of otc trading has been ( ) , while ( ) of trading on platforms took place on the inter - dealer platform mts .mot is the third - largest platform by traded value with ( ) of traded value excluding the otc market in 2014 ( 2015 ) .most of the literature for the italian and european government bonds market focuses on mts , with the exception of who compare the liquidity of dual - listed corporate bonds across mot and the eurotlx platform . review the market microstructure of mts in the context of the market for european sovereign bonds and discuss several liquidity measures based on the limit order book , trades or bond characteristics .they note that mts `` normally has a few trades per bond per day , even for the most liquid government bonds '' . indeed , due to large minimum sizes, for most titles there is on average less than one transaction per day on mts , making studies of market impact difficult . overcome this issue by building impulse response functions from regressions of returns on order flow at 10 second intervals to study permanent market impact . in a different approach a measure of ( virtual ) mechanical price impact along with other liquidity measures calculated from the limit order book to detect illiquidity shocks that can be modeled as a self- and cross - exciting hawkes process in and across italian sovereign bonds .instead in this paper we focus on mot where we observe a sufficient number of ( smaller ) trades as well as an active limit order book .italian government bonds are traded on the domesticmot segment of mot where the trading day is divided into an opening auction from 8:00 to 9:00 followed by a phase of continuous trading until 17:30 . if certain price limits are violated during the continuous trading , a volatility auction phase is initiated for a duration of 10 - 11 minutes .mot is organized as a continuous double auction where besides market and limit orders also partially hidden `` iceberg orders '' , `` committed cross '' orders and `` block trade facilities '' are allowed . while the presence of a specialist or a bid specialist is possible , in practice this is only the case for a subset of financial sector corporate bonds not in our sample .the tick size depends on the residual lifetime and is 1 basis point of nominal size or 0.1 basis points if the residual lifetime is less or equal than two years , corresponding to or euro cents respectively .our dataset contains all trades and limit order book ( lob ) snapshots for a selection of 60 isins from december 1 , 2014 to february 27 , 2015 and april 13 , 2015 to october 16 , 2015 for a total of 194 trading days . for the remainder of this paper we will focus on a set of fixed rate or zero - coupon italian sovereign bonds listed in appendix [ app : isins ] with at least 5,000 trades throughout our sample to ensure sufficient liquidity and statistical significance of our results . 
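the sample construction just described , together with the session filter and trade - sign classification detailed in the next paragraph , can be sketched as follows ; the table layout and column names are hypothetical , and the per - bond tick - rule fallback is only a stand - in for the published classification algorithm referenced below .

import pandas as pd

def prepare_trades(trades):
    # trades: dataframe with columns ['isin', 'time', 'price', 'bid', 'ask'];
    # 'time' is assumed to be a datetime column (hypothetical layout)
    session = trades[(trades['time'].dt.time >= pd.Timestamp('10:00').time()) &
                     (trades['time'].dt.time <= pd.Timestamp('17:00').time())].copy()
    session['sign'] = 0
    session.loc[session['price'] >= session['ask'], 'sign'] = 1    # at the best ask: buy
    session.loc[session['price'] <= session['bid'], 'sign'] = -1   # at the best bid: sell
    # trades away from both quotes would be classified by the published algorithm;
    # a per-bond tick rule is used here purely as a placeholder
    unresolved = session['sign'] == 0
    tick = (session.groupby('isin')['price'].diff() > 0).map({True: 1, False: -1})
    session.loc[unresolved, 'sign'] = tick[unresolved]
    # keep only bonds with at least 5,000 classified trades in the sample
    counts = session.groupby('isin')['sign'].transform('size')
    return session[counts >= 5000]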
to avoid intraday seasonalities we further restrict our data to 10:00 - 17:00 and discard observations when we detect a volatility auction .the average spread is smaller than 10 ticks for most of the bonds with the exception of some very long - term bonds and bonds where the tick size is 0.1 basis points .more than of the orders in our sample are executed at the corresponding best bid or ask quote can either be due to orders that were executed across more than one millisecond ( so that they are recorded as two or more orders ) , missed lob updates or exotic order types . ] and thus identified as sell or buy orders respectively , while all other orders are classified according to the algorithm of .let us fix notation for the estimations in the following sections .we consider the log - price of the mid - price of the best bid and ask quote for asset at time and calculate the return from time to time as for . is the sign of a trade ( market order ) and for a buyer - initiated transaction , for a sell and undefined when there is no trade in the asset at time . is an indicator function that is when there is a trade in asset at time and 0 else and we consider the product of an undefined trade sign with a indicator function to be 0 such that the product is always defined and one of .the size of a trade is given as its nominal value in eur and the price is reported per one asset ( or contract ) with a face value of 100 eur . unlike e.g. we do not de - mean the order sign in order to avoid attributing a price impact to the absence of transactions in a bond in the sense of corollary [ cor : absence ] .however we have verified that our results are qualitatively similar when considering de - meaned order signs or and de - meaned returns .we define the self- and cross - response function as the unconditional -ahead return in asset controlled for the order sign of asset : .\label{eq : response_function_r}\ ] ] for we will speak of _ self - response _ and of _ cross - response _ for . figure [ fig : response_function ] shows the average self- and cross - response function for all bonds in our sample and their pairings respectively . for positive lags find that self - response is on average larger than cross - response by a factor of , consistent with observations of . is zero by definition , whereas for small negative we find that is on average positive , producing a cusp at .we conjecture that such behavior is not observed in because of the rather large time lag of 5 minutes , corresponding to units of transaction time in figure [ fig : response_function ] . in the single asset casethis feature is clearly present for the large - tick stock microsoft in figure 1 of .as shown there , the kink could be related to correlations of market order flow with past returns and indicates a forecasting power of current returns on the future order sign imbalance .interestingly we find that the cross - response measured at negative lags is smaller ( i.e. 
larger in absolute value ) than self - response , contrary to the observations in .the figure also shows the prediction from the model of the negative lag impact ( see for details ) .we observe a clear difference with the empirical data suggesting also for cross - impact a reaction of order flow to past price dynamics of other bonds .we measure the instantaneous market impact function as \label{eq : f_measurement}\ ] ] which is the expected return in asset from just before a trade at time until 2 seconds after , multiplied by the trade sign in asset at time and conditional on a trade in asset at time of size .we have chosen the two second interval as twice the maximum time between two updates of the limit order book , i.e. we can rule out that changes in the book were not reported in our data . for measurement purposes we bin similar trade sizes together , with the bin size chosen as a function of the number of trades in the triggering bond . measured in units of face value .each line corresponds to one pairing , grouped by time - to - maturity into four categories , where impact is from the column on the row .self - impact is present on the diagonal panels and shown as red solid lines , cross - impact is shown as blue dashed lines and present in all panels .price impact is calculated as average price change ( multiplied by the trade sign ) after a lag of 2 seconds , the minimum time that ensures we observe an update of the limit order book .self- and cross - impact is clearly non - linear . for comparisonthe solid black line in the lower left panel illustrates a linear impact function . ]figure [ fig : f_measurement_all ] shows self- and cross - impact between all bonds in our sample as a function of trade size measured in units of face value .cross - impact is universally present across our sample and on average smaller than self - impact by roughly one order of magnitude .the cross - impact curves of different pairings are very close one to the other when both bonds have a time - to - maturity of at least four years left . for bonds with three or less years left until maturity we do not observe an intense trading activity , thus the curves in the leftmost column in the figure are very noisy .likely the price - dynamics of these short - term titles are more decoupled from the medium- and long - term bonds with a lifespan of four or more years .the figure shows that all the estimated functions are non - linear , being concave and well described by a power law behavior with an exponent smaller than 1 .this has been already observed in self - impact ( ) and is extended here to cross - impact .. however we should remember that what is shown in figure [ fig : f_measurement_all ] is the observed impact , which might be different from the virtual impact , since the former does not take into account the selection bias due to the fact that traders condition the market order volume to what is present at the opposite best . for a discussion of this point in the self - impact case , see . ]having established this evidence for cross - impact , we investigate the possible origin of cross - impact . is this due to correlated trades across assets ( e.g. a strategy trading simultaneously several bonds ) or is it mostly due to quote revision following a trade , leading to changes of the mid - price of a bond in the absence of trades ? 
to discriminate between these alternatives , we repeat the analysis in figure [ fig : f_measurement_all ] and distinguish now whether there were any trades beyond the triggering one in any other bond in our sample during a period from 3 seconds before to 2 seconds after the triggering transaction , which we will call _ isolated trades_. , where impact is from the column on the row . self - impact is present on the diagonal panels in red , cross - impact on the off - diagonals in blue .solid lines show the market impact function based on all trades as in figure [ fig : f_measurement_all ] , dotted lines show market impact based on isolated trades only , i.e. when there was no other transaction from 3 seconds before to 2 seconds after the triggering trade . ] for better readability in figure [ fig : f_measurement_sametime ] we focus on the four most recently issued 30 year btps in our sample , which were shown in the lower right panel of figure [ fig : f_measurement_all ] .results are similar for all other pairs of bonds .when we consider market impact of isolated trades only , self impact is lower than unconditionally .this is somewhat expected since order signs are positively autocorrelated and we exclude contributions where other trades have on average a positive contribution to impact .however the decrease in market impact is stronger for the cross - impact components , which are smaller by a factor of on average , whereas self - impact decreases only by a factor on average .we conclude therefore that both an autocorrelation of orders across assets as well as quote revisions play a role in forming cross - impact . in the next sectionwe will take into account the ( cross- ) autocorrelation of the order sign when we estimate the shape of the decay of market impact . to estimate the empirically observed decay function we employ a multivariate version of the transient impact model of and similar to .while the advantage of the model lies in the fully non - parametric estimation of the kernel that we obtain , the tim is typically estimated in event time which is asset - specific .previous approaches avoid potential pitfalls by estimating the propagator in calendar time and binning trades .the estimation then is sensitive to the bin width . a small bin - width as 1 second in introduces problems in the treatment of bins without trading activity , while a large bin width as 5 minutes in is too coarse to observe effects of single transactions .the main difference of our estimation is that we estimate the propagator in a combined market order time .specifically our combined trade time is defined to advance by one unit for any unique timestamp at which there is at least one trade recorded , irrespective of the asset(s ) . of tradeshappen at the same time - stamp as another trade in a different bond . ]our model for the ( log- ) mid - price of asset just before a trade at time reads } + \xi^i_{t ' } \right\ } } + x^i_{-\infty } \label{eq : mvtim_m}\ ] ] where is the order sign and an indicator function for a trade in asset at time as defined in section [ sec : mot ] . 
is a noise term with correlation matrix and the empirically observed correlation structure of returns of is not but the noise component plus the component due to the correlated order flow and cross - impact , as shown in .finally self- and cross - impact is captured by the propagator matrix which gives the price impact of a trade in asset on asset after a positive time lag .note that here we assume that trades of all volumes have the same impact and to avoid confusion with the previous sections we denote the decay kernel .corresponds to the elementwise product of and as defined in equation ( [ eq : price_nd ] ) , given the assumption of indifference to trade size .] in this model returns are then defined as where , and due to the definition of the price process in equation ( [ eq : mvtim_m ] ) a lag of as the argument of in equation ( [ eq : price_nd ] ) corresponds to for . in practice (both due to computational limitations and to avoid dealing with overnight effects ) the sum over is performed up to a cutoff lag . for an estimation of that is more stable with respect to ( )we compute the observable \\ & = \sum_k { \sum_{n \geq 0 } { { \cal h}^{ik}(n ) \underbrace{\mathbb{e}\left [ \epsilon^k_{t+\ell - n } i^k_{t+l - n } \epsilon^j_{t } i^j_{t } \right ] } _ { \tilde{c}^{kj}(\ell - n ) } } } \label{eq : mvtim_s}\end{aligned}\ ] ] where is the cross - correlation matrix of the modified order sign at lag .is not strictly speaking a correlation matrix , as we do not de - mean nor normalize . ] to estimate we re - write equation ( [ eq : mvtim_s ] ) as a matrix equation where with a slight abuse of notation and are row vectors of block matrices , i.e. and , and is a symmetric block - toeplitz matrix of blocks of the correlation matrices at different lags , of dimension , where we use that . to estimate and thus invert and right - multiply equation ( [ eq : mvtim_matrix ] ) with , where both and are constructed from ( weighted ) averages over daily estimations .figure [ fig : g_est ] shows the weighted mean of the decay kernel for self- and cross - impact averaged over all the bonds and pairings .the mean and median values are not shown here but behave similarly .both propagators do not decay immediately but reach their peak after transactions .this indicates a market inefficiency which has been observed for self - impact in other markets , see e.g. figure 1 in . in the absence of slippagethis inefficiency could be exploited by e.g. a simple buy - hold - sell strategy .however here the expected gain is on the order of basis points while spread costs are basis points so that such a strategy would not be profitable .further we observe that self- and cross - impact decay rather slowly with average self - impact reaching its initial level after transactions , corresponding to minutes of physical time and cross - impact taking even longer .we have shown in section [ sec : finite_symmetry ] that for a bounded decay kernel the strength of cross - impact must be symmetric across pairs , i.e. . herewe check whether this is empirically verified . in the estimation of the previous section where we are averaging over the trade volume, effectively regressing returns on trade events , this corresponds to the condition that , i.e. we are assuming that prices are roughly constant so that absolute returns can be approximated by relative returns and that the average value ( trade volume weighted by price ) does not differ across bonds . 
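the kernel estimation just described can be made concrete with a small linear - algebra sketch : build the symmetric block - toeplitz matrix from the sign - correlation blocks and recover the propagator blocks by right - multiplying with its inverse . the toy inputs below replace the empirical correlations and only demonstrate that the recovery is exact in the noiseless case .

import numpy as np

def estimate_propagator(s_blocks, c_blocks, big_l):
    # solve s(l) = sum_{n=0}^{L-1} h(n) c(l-n), l = 0..L-1, for the blocks h(n);
    # s_blocks: list of L response matrices s(l), each k x k
    # c_blocks: dict lag -> k x k sign-correlation matrix, with c(-l) = c(l).T
    k = s_blocks[0].shape[0]
    s_mat = np.hstack(s_blocks)                      # k x (L k)
    c_mat = np.zeros((big_l * k, big_l * k))         # block-toeplitz correlation matrix
    for n in range(big_l):
        for l in range(big_l):
            lag = l - n
            block = c_blocks[lag] if lag >= 0 else c_blocks[-lag].T
            c_mat[n * k:(n + 1) * k, l * k:(l + 1) * k] = block
    h_mat = s_mat @ np.linalg.inv(c_mat)             # right-multiply with the inverse
    return [h_mat[:, n * k:(n + 1) * k] for n in range(big_l)]

# toy check: the solver recovers a known propagator exactly from noiseless inputs
k, big_l = 2, 5
c_blocks = {l: np.exp(-0.7 * l) * (np.eye(k) + 0.2) for l in range(big_l)}
true_h = [np.exp(-0.3 * n) * np.array([[1.0, 0.3], [0.3, 1.0]]) for n in range(big_l)]
s_blocks = [sum(true_h[n] @ (c_blocks[l - n] if l >= n else c_blocks[n - l].T)
                for n in range(big_l)) for l in range(big_l)]
print(np.allclose(estimate_propagator(s_blocks, c_blocks, big_l)[0], true_h[0]))   # True

on real data the blocks are noisy daily averages , so the inversion step is where near singularity of the correlation matrix , and hence sensitivity to the cutoff lag , shows up .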
as a robustness check , we repeat the estimation taking into account trading value , i.e. we modify equation ( [ eq : mvtim_r ] ) so that we are now regressing returns on traded value , and the estimated impact and decay kernel is connected to the and discussed in sections [ sec : model ] and [ sec : generalconstraints ] via an elementwise product relation . clearly , the symmetry of lemma [ lemma : linbounded_symm ] must hold also for the impact and decay kernel estimated at its smallest lag . again we assume a roughly constant bond price process . the added accuracy of this estimation due to including the value is countered by the fact that the empirically observed market impact function is non - linear . while we may easily check for symmetry on the estimated impact matrix , this does not allow for any statement on its statistical significance . therefore we repeat the estimation on a shorter time scale , i.e. we obtain and by averaging over the days of each calendar week instead of over the whole sample period and estimate the decay kernel ( or respectively for the estimation on trade value ) for each week separately . for each of the 41 estimates we compute the asymmetry , and for each of the pairs we perform a student s t - test of the null hypothesis that the asymmetry vanishes . for robustness we repeat this for three different aggregation periods : weekly as described above , bi - weekly , and monthly . [ table : percentage of bond - pairs for which the null of symmetry in cross - impact is rejected according to a t - test ; tests are performed on weekly / bi - weekly / monthly estimations from regressions of returns on signed trades ( value of trades ) . ]
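the weekly symmetry tests can be sketched as follows ; the array of weekly kernel estimates is a hypothetical input , and the statistic is a standard one - sample t - test of the asymmetry at the smallest lag against zero .

import numpy as np
from scipy import stats

def symmetry_tests(weekly_g):
    # weekly_g: array of shape (n_weeks, k, k) holding, for every estimation
    # window, the impact estimated at the smallest lag (hypothetical input)
    n_weeks, k, _ = weekly_g.shape
    pvalues = {}
    for i in range(k):
        for j in range(i + 1, k):
            asymmetry = weekly_g[:, i, j] - weekly_g[:, j, i]
            tstat, p = stats.ttest_1samp(asymmetry, popmean=0.0)
            pvalues[(i, j)] = p
    return pvalues

rng = np.random.default_rng(1)
toy = rng.normal(1.0, 0.1, size=(41, 3, 3))     # 41 weekly estimates for 3 bonds
toy[:, 0, 1] += 0.08                            # plant one asymmetric pair
rejected = [pair for pair, p in symmetry_tests(toy).items() if p < 0.05]
print("pairs rejecting the symmetry null at the 5% level:", rejected)

aggregating the rejection frequencies over all pairs and aggregation periods yields a table of the form summarized above .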
|
we extend the `` no - dynamic - arbitrage and market impact''-framework of jim gatheral [ _ quantitative finance _ , * 10*(7 ) : 749 - 759 ( 2010 ) ] to the multi - dimensional case where trading in one asset has a cross - impact on the price of other assets . from the condition of absence of dynamical arbitrage we derive theoretical limits for the size and form of cross - impact that can be directly verified on data . for bounded decay kernels we find that cross - impact must be an odd and linear function of trading intensity and cross - impact from asset to asset must be equal to the one from to . to test these constraints we estimate cross - impact among sovereign bonds traded on the electronic platform mot . while we find significant violations of the above symmetry condition of cross - impact , we show that these are not arbitrageable with simple strategies because of the presence of the bid - ask spread . * keywords * : market impact , dynamic arbitrage , cross - impact , mot , sovereign bonds
|
the covariance matrix adaptation evolution strategy ( cma - es , ) is a stochastic search algorithm for non - separable and ill - conditioned black - box continuous optimization . in the cma - es , search points are generated from a gaussian distribution and the mean vector and the covariance matrix of the gaussian distribution are adapted by using the sampled points and their objective value ranking .these parameters update rules are designed so as to enhance the probability of generating superior points in the next iteration in a way similar to but slightly different from the ( weighted ) maximum likelihood estimation .adaptive - ess including the cma - es are successfully applied in practice .however , their theoretical analysis even on a simple function is complicated and linear convergence has been proven only for simple algorithms compared to the cma - es .resent studies demonstrate the link between the parameter update rules in the cma - es and the natural gradient method , the latter of which is the steepest ascent / descent method on a riemannian manifold and is often employed in machine learning . the natural gradient view of the cma - es has been developed and extended in and the information - geometric optimization ( igo ) algorithm has been introduced as the unified framework of natural gradient based stochastic search algorithms . given a family of probability distributions parameterized by , the igo transforms the original objective function , , to a fitness function , , defined on .the igo algorithm performs a natural gradient ascent aiming at maximizing . for the family of gaussian distributions, the igo algorithm recovers the pure rank- update cma - es , for the family of bernoulli distributions , pbil is recovered .the igo algorithm can be viewed as the deterministic model of a recovered stochastic algorithm in the limit of the number of sample points going to infinity .the igo offers a mathematical tool for analyzing the behavior of stochastic algorithms . in this paper, we analyze the behavior of the deterministic model of the pure rank- update cma - es , which is slightly different from the igo algorithm .we are interested in knowing what is the target matrix of the covariance matrix update and how fast the covariance matrix learns the target .the cma is designed to solve ill - conditioned objective function efficiently by adapting the metric covariance matrix in the cma - es as well as other variable metric methods such as quasi - newton methods .speed of optimization depends on the precision and the speed of metric adaptation to a great extend .there is a lot of empirical evidence that the covariance matrix tends to be proportional to the inverse of the hessian matrix of the objective function in the cma - es .however , it has not been mathematically proven yet .we are also interested in the speed of convergence of the mean vector and the covariance matrix .convergence of the cma - es has not been reported up to this time .we tackle these issues in this work . 
in this paper , we derive a novel natural gradient algorithm in a similar way to the igo algorithm , where the objective function is transformed to a function in a different way from the igo so that we can derive the explicit form of the natural gradient for composite functions of a strictly increasing function and a convex quadratic function .we call the composite functions _ monotonic convex - quadratic - composite _ functions .the resulting algorithm inherits important properties of the igo and the cma - es , such as invariance under monotone transformation of the objective function and invariance under affine transformation of the search space .we theoretically study this natural gradient method on monotonic convex - quadratic - composite functions .we prove that the covariance matrix adapts to be proportional to the inverse of the hessian matrix of the objective function .we also investigate the speed of the covariance matrix adaptation and the speed of convergence of the parameters .the rest of the paper is organized as follows . in section [ sec : algo ] we propose a novel natural gradient method and present a stochastic algorithm that approximates the natural gradient from finite samples .the basic properties of both algorithms are described . in section[ sec : conv ] we study the convergence properties of the deterministic algorithm on monotonic convex - quadratic - composite functions .the convergence of the condition number of the product of the covariance matrix and the hessian matrix of the objective function to one and its linear convergence are proven .moreover , the rate of convergence of the parameter is shown . in section[ sec : exp ] , we conduct experiments to see how accurately the stochastic algorithm approximates the deterministic algorithm and to see how similarly our algorithm and the cma - es behave on a convex quadratic function .finally , we summarize and conclude this paper in section [ sec : conc ] .we first introduce a generic framework of the natural gradient algorithm that includes the igo algorithm .the original objective is to minimize , where is a metric space .let and be the borel -field and a measure on .hereunder , we assume that is -measurable .let represent any monotonically increasing set function on , i.e. , for any , s.t .we transform to an _ invariant cost _ function defined as ] .then , the infimum of } ] , where be the vectorization of such that the ( , )th element of corresponds to element of ( see ) .then the fisher information matrix has an analytical form where denotes the kronecker product operator . under some regularity conditions for the exchange of integration and differentiation we have },\label{eq : grad}\ ] ] where is the log - likelihood .the gradient of the log - likelihood can be written as then , the natural gradient ^{{\mathrm{t}}} ] , where is an orthogonal matrix and is a diagonal matrix such that . *compute the square root of , .* generate normal random vectors .* compute , for . *evaluate the objective values ; * estimate as * compute the baseline . *compute the weights .* estimate the natural gradient and as * compute the learning rates and .* update the parameters as and .we refer to this algorithm for the stochastic ngd algorithm .this algorithm generates samples from in steps 14 and evaluates their objective values in step 5 . in step 6 ,the invariant costs are evaluated .the estimates are obtained as follows . 
by definitionwe have applying monte - carlo approximation we have since , we have the estimates in step 6 .step 7 computes the baseline that is often introduced to reduce the estimation variance of gradients while adding no bias .we simply choose the mean value of the as the baseline . replacing the expectation in with the sample mean and adding the baseline ( in step 8) we have the monte - carlo estimate of the natural gradient in step 9 .finally in step 11 , we update the parameters along the estimated natural gradient with the learning rates computed in step 10 .the learning rates are chosen in the following so that they are inverse proportional to the largest eigenvalue of the following matrix the difference between the igo algorithm and our deterministic algorithm is that the invariant cost in the igo algorithm is defined by negative of the weighted quantile , ) ] is non - increasing weight function . since the quantile ] by the number of better solutions divided by the number of samples , .therefore , the pure rank- update cma - es simulates the same lines as the stochastic ngd algorithm described in section [ sec : stoc ] with the weights . in section [ sec : exp ] we compare the stochastic ngd algorithm with the pure rank- update cma - es where * invariance . * our algorithms inherit two important invariance properties from the igo and the cma - es : invariance under monotonic transformation of the objective function and invariance under affine transformation of the search space ( with the same transformation of the initial parameters ) .invariance under monotonic transformation of the objective function makes the algorithm perform equally on a function and on any composite function where is any strictly increasing function .for example , the convex sphere function is equivalent to the non - convex function for this algorithm , whereas conventional gradient methods , e.g. newton method , assume the convexity of the objective function and require a fine line search to solve non - convex functions .this invariance property is obtained as a result of the transformation .invariance under affine transformation of the search space is the essence of variable metric methods such as newton s method . by adapting the covariance matrix ,this algorithm attains universal performance on ill - conditioned objective functions .* positivity .* the covariance matrix of the gaussian distribution must be positive definite and symmetric at each iteration .the next proposition gives the condition on the learning rate such that the covariance matrix is always positive definite symmetric .suppose that the learning rate for the covariance update in the deterministic ngd algorithm , where denotes the largest eigenvalue of the argument matrix . if is positive definite symmetric , is positive definite symmetric for each .similarly , if in the stochastic ngd algorithm , where is defined in , and if is positive definite symmetric , the same result holds . consider the deterministic case .suppose that is positive definite and symmetric .then , since by the assumption , all the eigenvalues of is smaller than one .thus , the inside of the brackets is positive definite symmetric and hence is positive definite symmetric . by mathematical induction , we have that is positive definite and symmetric for all . the analogous result for the stochastic case is obtained in the same way . * consistency . 
*the gradient estimator is not necessarily unbiased , yet it is consistent as is shown in the following proposition .therefore , one can expect that the stochastic ngd approximates the deterministic ngd well when the sample size is large .let be the natural gradient operator .[ prop : consistency ] let be independent and identically distributed random vectors following .let and ^{{\mathrm{t}}} ] .note that = \operatorname{tr}({{\mathcal{i}}_{\theta}}^{-1 } ) < \infty ] .therefore , implies < \infty \text { and } \label{eq : propasm1}\\ { \mathbb{e}}[{v_{f}}(x ) ] < \infty \enspace .\label{eq : propasm2}\end{gathered}\ ] ] define and decompose as by and the strong law of large numbers ( lln ) , the first summand converges to = { \tilde \nabla}{j}(\theta) almost everywhere , we have by { lln } } & = { \mu_\mathrm{leb}}^{2/d}[y : f(y ) { \leqslant}f(x ) ] = { { v_{f}}}(x)\end{aligned}\ ] ] almost surely and almost everywhere in .this implies almost everywhere in . for ,we have since almost everywhere in as , we have almost everywhere in as . by the lebesgue s dominated convergence theorem we have \to 0 ] for large enough . then , by applying lln , we have that the right most side of converges to ] . also , by lln we have that as .therefore , the third term of converges to zero almost surely . to show the convergence of the second term of to zero, we apply the cauchy - schwarz inequality to it and we have by lln we have that the second term of the right hand side converges to = \operatorname{tr}({{\mathcal{i}}_{\theta}}^{-1}) ] is equivalent to the volume of the ellipsoid , we have that = \frac{2}{\det(a ) } v_{d}(\sqrt{x^\mathrm{t } a x}),\ ] ] where denotes the volume of the sphere with radius and is proportional to . therefore \propto x^{\mathrm{t}}a x ] , i.e. .the rate of convergence is in ] .we set the initial parameters as and .we design the learning rates as here , is a matrix defined in . [cols="^ " , ] first , we investigate the effect of the sample size and the coefficient of the learning rate .we try the following sample sizes : , , , , , .figure [ fig : lambda ] illustrates the slope of the condition number and the theoretical curve and the slope of the expected objective function value = ( m^{t})^{\mathrm{t}}a m^{t } + \operatorname{tr}(c^{t}a)$ ] , averaged over independent trials .when the sample size is larger , we see the closer performance to the theoretical result . when and , the convergence curve of the condition number approximated well the theoretical curve and the final condition number is . when and , it takes more than times longer to learn the covariance matrix and the final condition number becomes , although the stochastic algorithm still works successfully .we attain a little higher condition numbers when we choose larger learning rates , . for example , the final condition numbers are for and , and for and .this is because smaller learning rates have more effect of averaging the natural gradient estimates over iterations and reducing the estimation variance .note that we observe a slightly slower adaptation of the covariance matrix at the beginning in case that we set , although the adaptation behavior does not change in theory .see figure [ fig : m ] .this attributes to the estimation precision of .if the squared mahalanobis distance between the origin ( the global optimum ) and the current mean with respect to is larger , the function landscape around looks more like linear function. 
then the estimates are far from the exact values , especially in case a small sample size is chosen . [ figure fig : m : runs with different initial mean vectors ; other settings are the same as in figure fig : lambda . ] finally , we study how well this stochastic algorithm simulates the cma - es . we test the pure rank- update cma - es with weight scheme . we set the learning rates following , where . we choose for our algorithm so that the speed of adaptation for each model is almost the same . figure [ fig : cma ] shows the results for each method for and . in both cases , we confirm similar behaviors of the pure rank- update cma - es and our algorithm despite their dissimilar weight - value settings . the similar change of performance illustrated in figure [ fig : m ] is also observed for the pure rank- update cma - es . from this result , we conclude that it is possible to estimate the performance of the pure rank- update cma - es by our natural gradient algorithm , which is theoretically more attractive . however , note that the pure rank- update cma - es is not the standard cma - es , and the standard cma - es performs better than the pure rank- update cma - es . the standard cma - es employs so - called evolution paths to adapt the covariance matrix and the global scale of the covariance matrix , which is called step - size in the cma - es context . moreover , the standard cma - es employs weighted recombination , where different values are assigned to the weights for , which is only slightly better than intermediate recombination and even similar to our setting . furthermore , the similar performance observed is only on a quadratic function . if there are certain functions which distinguish our algorithm from the ( rank- ) cma - es , this may help to understand both the ngd algorithm and the cma - es . further study on these topics is required . [ figure fig : cma : results for the two methods , shown in the left and right panels ; other settings are the same as in figure fig : lambda . ]
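the qualitative behaviour reported in these figures can be reproduced in outline by running the sampling - and - update loop directly . the python sketch below implements a plain rank - mu style natural - gradient update of the mean and covariance on an ill - conditioned convex quadratic and tracks the condition number of the product of the covariance and the hessian , which should approach one as the covariance learns the inverse hessian ; the weights , learning rates and all constants are simplified stand - ins for the settings used in the paper , not the exact scheme .

import numpy as np

rng = np.random.default_rng(0)
d, lam = 10, 200                                  # dimension and sample size (illustrative)
a = np.diag(10.0 ** np.linspace(0, 3, d))         # hessian of f(x) = x^T a x, condition 1e3
m, c = np.ones(d), np.eye(d)
eta_m, eta_c = 1.0, 0.1                           # simplified learning rates

for it in range(301):
    z = rng.standard_normal((lam, d)) @ np.linalg.cholesky(c).T
    x = m + z
    f = np.einsum('nd,dk,nk->n', x, a, x)                 # objective values
    ranks = np.argsort(np.argsort(f))                     # 0 = best sample
    w = np.where(ranks < lam // 2, 2.0 / lam, 0.0)        # equal weight on better half
    m = m + eta_m * (w @ z)                               # natural-gradient mean update
    c = (1.0 - eta_c * w.sum()) * c \
        + eta_c * np.einsum('n,nd,nk->dk', w, z, z)       # rank-mu style covariance update
    if it % 50 == 0:
        print("iter %3d  f(m) = %.3e  cond(c a) = %.2f"
              % (it, m @ a @ m, np.linalg.cond(c @ a)))

without step - size control the covariance both shapes itself and shrinks , so the loop is meant only to visualize the conditioning behaviour , not to serve as a competitive optimizer .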
our theoretical results in section [ sec : conv ] imply that there is at least one weight - value setting in the cma - es such that the covariance matrix learns the inverse of the hessian of the objective function . moreover , since our algorithm not only shares most of the important properties of the rank- update cma - es , but is also confirmed by numerical simulations to perform similarly to the pure rank- update cma - es on a quadratic function , we could study our algorithm to find out the limitations of the pure rank- update cma - es and to discover a way to improve the cma - es .
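to make this concrete , the following is a minimal numpy sketch of one iteration of the stochastic algorithm evaluated above : sample candidates from the current gaussian , rank them by the objective , and move the mean and covariance along an estimated natural gradient . the intermediate - recombination weights , the learning rates and all names are illustrative assumptions , not the exact settings used in the experiments .

import numpy as np

def stochastic_ngd_step(f, m, C, eta_m=1.0, eta_c=0.1, lam=100, rng=None):
    # one hypothetical iteration: sample, rank, and move (m, C) along the
    # estimated natural gradient, as in a pure rank-mu CMA-ES update
    rng = np.random.default_rng() if rng is None else rng
    X = rng.multivariate_normal(m, C, size=lam)          # candidate solutions
    order = np.argsort([f(x) for x in X])                # best candidates first
    mu = lam // 2
    w = np.zeros(lam)
    w[order[:mu]] = 1.0 / mu                             # intermediate recombination (assumed)
    Y = X - m
    m_new = m + eta_m * (w @ Y)                          # mean update
    C_new = C + eta_c * sum(w[i] * (np.outer(Y[i], Y[i]) - C) for i in range(lam))
    return m_new, C_new

# usage on a convex quadratic f(x) = x^T a x, where C should become
# (approximately) proportional to the inverse of a
a = np.diag([1.0, 10.0])
f = lambda x: x @ a @ x
m, C = np.array([3.0, 3.0]), np.eye(2)
for _ in range(300):
    m, C = stochastic_ngd_step(f, m, C)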
|
in this paper we investigate the convergence properties of a variant of the covariance matrix adaptation evolution strategy ( cma - es ) . our study is based on the recent theoretical foundation that the pure rank- update cma - es performs the natural gradient descent on the parameter space of gaussian distributions . we derive a novel variant of the natural gradient method where the parameters of the gaussian distribution are updated along the natural gradient to improve a newly defined function on the parameter space . we study this algorithm on composites of a monotone function with a convex quadratic function . we prove that our algorithm adapts the covariance matrix so that it becomes proportional to the inverse of the hessian of the original objective function . we also show the speed of covariance matrix adaptation and the speed of convergence of the parameters . we introduce a stochastic algorithm that approximates the natural gradient with finite samples and present some simulated results to evaluate how precisely the stochastic algorithm approximates the deterministic , ideal one under finite samples and to see how similarly our algorithm and the cma - es perform . keywords : global optimization , gradient methods , unconstrained optimization
|
it is not uncommon for an astronomical image obtained after a lengthy integration to reveal that all is not well . as a consequence , telescope time is sacrificed identifying the problem . in an effort to shorten this investigation period , we have created a catalog of astronomical images bearing signatures of a range of mishaps encountered during observing runs . included with each image is an explanation of the cause of the problem as well as a suggested solution . since a large number of observatories today are connected to the internet , the world wide web ( www ) was chosen as the ideal medium for presenting this collection of images . initially , the purpose of such a collection was to assist new graduate student observers at michigan - dartmouth - mit ( mdm ) observatory who frequently observe without the benefit of a more experienced observer . the aim was to provide these students with a means of quickly pinpointing the underlying problem affecting the image quality . this idea grew into a www - accessible database complete with explanations of the `` mishaps '' responsible for the deterioration of the images , as well as suggested solutions . every www page in this catalog contains an inverted - colormap gif image of the mishap , a table listing relevant information about the image ( telescope , date , instrument , filter , exposure time ) , a brief description of the problem , and , if available , a suggestion of how to fix it . in a few cases , the cause of the problem could not be determined . these were dubbed `` unsolved mysteries '' , and no explanation of the problem or suggestion for a fix is given . since it is possible for one problem to manifest itself in a variety of ways , multiple images of the same mishap are presented where appropriate , cross - referenced with the help of hypertext links . for example , condensation on the dewar window can appear as a filamentary structure or as a bright extended feature with cusps , depending on the locations of light sources in the field of view . for the more common problems of astigmatism , coma , bad guiding / focusing , and poor seeing , we have provided supporting plots / images where applicable via links on the relevant pages . examples include radial profile plots across a stellar image or multiple images of the same field taken in different seeing conditions . in fig . [ fig1.small ] , we show an example of a typical page in the database , along with the explanation of the problem and a suggestion for the solution . much consideration was given to effectively structuring the image catalog . rather than sorting the images by cause , which is probably unknown to the astronomer accessing the database , we have grouped them by symptom . we provide the following two options for searching the database : 1 . the user may browse the complete list of compiled images . this list features links to the various mishap pages as well as a brief description ( 1 - 2 lines ) of the symptoms in the corresponding image . 2 .
the other option is to first broadly classify the image based on its symptoms and then choose the appropriate web page from a smaller list . this option will likely be more practical with an increasing number of images in the database . apart from the frequently occurring problems of bad seeing / focusing / guiding , fringing , dust rings , and reflections , the current revision of the database lists the following as the top categories : * unusual appearance of objects in the image : familiar objects in the image , such as galaxies , stars , etc . , have an unexpected appearance ( e.g. , guider jumps , deflated airbags , etc . ) . * ccd and electronics features : features seem to be correlated with the ccd rows or columns , or they are otherwise suspiciously electronic in appearance ( e.g. , readout errors , shutter failure , etc . ) . * unexpected objects in the image and other external interference : unexpected features obviously not due to the ccd or the electronics appear in the image ( e.g. , occulting dropout shutter , condensation on the dewar window , etc . ) . * unsolved mysteries : as mentioned above , these are the cases for which we have so far not been able to determine the cause of the problem . each of the above links leads to a list of mishap pages in that category with a brief description of the corresponding image appearance . the observational mishaps database can be accessed at http:.astro.lsa.umich.edu.html . it is also directly accessible from the university of michigan astronomy department home page , whose url is http:.astro.lsa.umich.edu . we have created a database of images which are deteriorated by the effects of various mishaps encountered during astronomical observing runs . its structure was designed to help users quickly identify the cause of the poor image quality , thus saving telescope time . in addition to being widely accessible via the www , the advantage of such an on - line catalog is its versatility . unlike a printed catalog , the on - line version can very easily be updated , corrected , and expanded , so that every time the database is accessed the user will find it in its most up - to - date form . due to the practically infinite number of possible problems during observing runs , this collection is clearly far from complete , and indeed impossible to ever complete . its usefulness , however , is obviously directly related to the number of examples it contains , and therefore we would appreciate any contributions by the astronomical community in the form of examples which might fit into this collection . instructions for the submission of such images are given in the database . furthermore , we realize that some of our interpretations of the mishaps , as well as some of our suggestions on how to improve the images , may be incorrect or incomplete . while it is our intention to regularly update and improve this database , we welcome any input about the database in general , its structure , or even individual examples . we would like to express our gratitude to the following people who contributed to this project by supplying examples and/or providing explanations of some of the mishaps : gary bernstein , mario mateo , eric miller , patricia knezek , kelly holley - bockelmann , lynne allen , michel festou , and doug welch .
|
we present a world - wide - web - accessible database of astronomical images which suffer from a variety of observational problems , ranging from common occurrences , such as dust grains on filters and/or the dewar window , to more exotic phenomena , such as loss of primary mirror support due to the deflation of the support airbags . apart from its educational usefulness , the purpose of this database is to assist astronomers in diagnosing and treating errant images _ at the telescope _ , thus saving valuable telescope time . every observational mishap contained in this on - line catalog is presented in the form of a gif image , a brief explanation of the problem , and , when possible , a suggestion for improving the image quality .
|
time - distance helioseismology is based on measuring and inverting acoustic wave travel times between separate points on the surface of the sun . it is one of the widely used approaches of local helioseismology for reconstructing solar subsurface structures and flows . calculation of the temporal cross - covariance of two oscillation signals , observed at different points on the solar surface , is a key element of this method . showed that the cross - covariance function for waves with the phase speed lying in a narrow interval can be approximately represented by a gabor wavelet . the phase and group travel times of acoustic waves can be obtained by fitting the gabor wavelet to the observed cross - covariance function , using a least - squares method . the measured phase travel times are used for inferring the subphotospheric perturbations of the sound ( wave ) speed and flow velocities in the quiet sun and sunspot regions . the mean travel time of acoustic waves traveling in opposite directions between two points is used for determining the sound speed , and the travel time difference is used for determining the flows . however , an accurate inference of sunspot s subsurface sound - speed structures and flow fields by use of this approach may be affected by a series of physical and unphysical effects , such as strong wave damping in active regions and the presence of strong magnetic fields . recently , by using observations of a quiet sun region and artificially reducing solar acoustic oscillation amplitudes , i.e. masking the solar wavefield to mimic the sunspot s behavior , found that this procedure could shift the measured acoustic travel times systematically by an amount of , although such a shift was not expected . furthermore , they suggested correcting the observed acoustic wavefields inside active regions by artificially increasing the wave amplitude . however , it is evident that the artificially masked wavefield of a solar quiet region can only mimic the acoustic power of the active region , but not the actual physical cause . therefore , the systematic errors estimated by this approach may be inaccurate , and the correction procedure is unjustified . in this paper , we have carried out 3d numerical simulations of solar oscillations based on three different models to mimic the sunspot s wavefield , and investigated the systematic errors caused by the amplitude effects in the time - distance measurements . these models include artificially masking the numerically simulated wavefields , as suggested by , and reducing the strength of oscillation sources to reflect the physical effect of reduced excitation in sunspots , and we compare these with the effects caused by using a sound - speed perturbation deduced from previous sound - speed inversions . the numerical simulation procedure and results are described in section 2 , and the results of the time - distance analysis are given in section 3 , followed by discussions in section 4 . propagation of acoustic waves on the sun is described by the system of the linearized euler equations \frac{\partial \rho'}{\partial t} + \nabla \cdot ( \rho_0 \mathbf{v}' ) = 0 , \qquad \frac{\partial}{\partial t} ( \rho_0 \mathbf{v}' ) + \nabla p' = \mathbf{g}_0 \rho' + \mathbf{f} ( x , y , z , t ) , where \mathbf{v}' is the velocity perturbation , \rho' and p' are the density and pressure perturbations respectively , and \mathbf{f} is the function describing the acoustic sources . the pressure , density , and gravitational acceleration with subscripts 0 correspond to the background model .
to close the system ( [ eq : lineuler ] ) we used the adiabatic relation between lagrangian variations of pressure and density . the adiabatic exponent is calculated from the realistic opal equation of state for the hydrogen and heavy - element abundances of the standard model . the standard solar model s , with a smoothly joined model of the chromosphere of , is used as the background model . the standard solar model is convectively unstable , especially in the superadiabatic subphotospheric layers where convective motions are very intense and turbulent . using this convectively unstable model as a background model leads to the instability of the solution of the linear system . the condition for stability against convection requires that the square of the brunt - väisälä frequency is positive . to make the background model convectively stable we replaced all negative values of by zeros and recalculated the profiles of pressure and density from the condition of hydrostatic equilibrium . this procedure guarantees convective stability of the background model . it has been shown that the profiles of pressure , density , sound speed , and acoustic cut - off frequency of the modified model are very close to the corresponding profiles of the standard solar model . quantity represents the density height scale . to prevent spurious reflections of acoustic waves from the boundaries we established non - reflecting boundary conditions based on the perfectly matched layer ( pml ) method at the top and bottom boundaries . the top boundary was set at the height of 500 km above the photosphere . this simulates a realistic situation when not all waves are reflected from the photosphere . waves with frequencies higher than the acoustic cut - off frequency pass through the photosphere and are absorbed by the top boundary . this naturally introduces a frequency dependence of the reflection coefficient of the top boundary . the lateral boundary conditions are periodic . the details are described by . the waves are generated by spatially localized sources of the z - component of force , f_z \propto [ 1 - ( r / r_{src} )^2 ]^2 ( 1 - 2\tau^2 ) e^{-\tau^2} if r \leq r_{src} and f_z = 0 if r > r_{src} , with and given by , where is the unit vector in the vertical direction , , , and are the coordinates of the center of the source , r_{src} is the source radius , is the central frequency , is the moment of the source ignition , and is the coefficient , measured in dyn , that describes the source strength . to solve the system ( [ eq : lineuler ] ) , a semi - discrete code developed by is used . the high - order dispersion relation preserving ( drp ) finite difference ( fd ) scheme of is used for spatial discretization . the coefficients of this fd scheme are chosen from the requirement that the error in the fourier transform of the spatial derivative is minimal . it can be shown that the 4th - order drp fd scheme describes short waves more accurately than the classic 6th - order fd scheme . a 3rd - order , three - stage strong stability preserving runge - kutta scheme with the courant number , , is used as a time advancing scheme .
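for reference , the three - stage scheme mentioned here is usually written in the shu - osher form ; a minimal sketch follows , with a toy advection right - hand side ( all names and parameter values are our own choices ) :

import numpy as np

def ssp_rk3_step(u, dt, rhs):
    # one step of the 3rd-order, three-stage strong stability preserving
    # runge-kutta scheme (shu-osher form) for du/dt = rhs(u)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# toy usage: linear advection with a periodic central difference in space
def advection(u, c=1.0, dx=0.1):
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

u = np.exp(-np.linspace(-5.0, 5.0, 100) ** 2)
u = ssp_rk3_step(u, dt=0.05, rhs=advection)   # courant number c*dt/dx = 0.5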
the efficiency of the high - order fd schemes can be reached only if they are combined with adequate numerical boundary conditions . we followed and used an implicit padé approximation of the spatial derivatives near the top and bottom boundaries to derive stable 3rd - order numerical boundary conditions consistent with the 4th - order drp numerical scheme for interior points of the computational domain . waves with wavelengths less than are not resolved by the fd scheme . they lead to point - to - point oscillations of the solution that can cause a numerical instability . such waves have to be filtered out , and we use a 6th - order digital filter to eliminate the unresolved short - wave component from the solution at each time step . the simulations are carried out in a rectangular domain of size 90 Mm using a uniform 600 grid . the background model varies sharply in the region above the temperature minimum . thus , to simulate the propagation of acoustic waves into the chromosphere we choose the vertical spatial step km in order to preserve the accuracy and numerical stability . the spatial intervals in the horizontal direction are km . to satisfy the courant stability condition for the explicit scheme , the time step is set to be equal to 0.5 seconds . the sources of the z - component of force with random amplitudes and a uniform frequency distribution in the range of mhz are randomly distributed at the depth of 100 km below the photosphere and are independently excited at arbitrary moments of time . we describe three sets of simulations with different distributions of acoustic sources and different background models . the first reference model ( model i ) represents simulations of the acoustic wave field for a horizontally uniform distribution of the acoustic sources and a horizontally uniform background model . this model corresponds to the quiet sun and will be used as a reference state for the following time - distance analysis . the acoustic travel times for models ii and iii are computed relative to this reference model . the goal of this study is to estimate the contributions to the travel times arising from perturbations of the background model and from the non - uniform distribution of the acoustic sources separately . for this purpose , in model ii the acoustic source strength is gradually decreased ( masked ) in the central region , simulating the reduction of the acoustic sources in sunspots . in this model , the horizontal axially symmetric distribution of the acoustic source strength is given by the formula , where is the horizontal distance from the sunspot axis , and , , and are the x- and y - coordinates of the sunspot center and the sunspot radius , respectively . the background model remains unperturbed and horizontally uniform . so , as long as the background model remains unchanged , all deviations of the simulated wave field properties from model i can be explained as a result of the non - uniform distribution of the acoustic sources . strictly speaking , travel times for the cases of uniform distribution of the acoustic sources ( model i ) and masked source strength ( model ii ) are calculated under different conditions . the amplitude of the wave field is uniform in the first case and non - uniform in the second one .
to take this into account we mask the _ wave field _ of model i by a masking function computed by averaging signals azimuthally around the sunspot center . this mimics the reduced amplitude of active regions , just as was done by . the resultant model is called model ia for convenience of reference in the following descriptions . although the amplitude distributions are now the same for both wave fields , the wave fields themselves are different , because masking of the source strength is not reducible to simple masking of the resulting wave field . model iii combines the source masking of model ii with the sound speed perturbation in sunspots . the 3d sound - speed profile in model iii is approximated by the formula , where is the vertical profile of the sound speed perturbation at the sunspot axis . this profile is shown in figure [ fg1 ] and was calculated from the inversion of helioseismic data for the sunspot observed by soho / mdi on 20 june 1998 . the change of the sound - speed perturbation sign from negative to positive at approximately 4 Mm below the photosphere is a characteristic feature of this profile . the depth of inversion divides the domain in the vertical direction into two regions with the sound speed greater and smaller than in the standard reference model . hence , we expect different behavior of waves propagating through this artificial sunspot if their turning points lie in different regions . the amplitude map of the resulting wave field for model iii is shown in the left panel of figure [ fg2 ] . the solid line in the right panel represents the azimuthally averaged amplitude profile . the dashed line shows the angularly averaged amplitude profile for model ii . the inhomogeneity of the sound speed causes an increase in the ratio of oscillation amplitudes outside and inside the artificial sunspot by about 40% . the acoustic power spectrum ( - diagram ) of the simulated wave field is shown in figure [ fg3 ] . we see a good agreement with the observed power spectrum in terms of the shape and locations of the power ridges , yet the simulated wave field has more power in the high - frequency region . it is not clear whether such a power excess really exists and is instrumentally filtered out during observations , or whether it is an artifact of the numerical modeling of the wave field . the realistic non - linear simulations of solar convection also show a power excess in the - diagram at higher frequencies . for the present study , the high - frequency power excess is unimportant , because for the time - distance helioseismology analysis we only select the frequency band of 3 - 4 mhz , and in this range the simulated acoustic power is similar to the observations . to perform the time - distance helioseismology analysis of the simulated data we followed the procedure described in . for all our models , we select the annuli with radii of Mm , Mm , and Mm to obtain the mean travel times , , and the travel time differences , , ( the index marks the model number , including the reference model i ) , which are respectively the averages and differences of outgoing and ingoing travel times in the time - distance center - annulus measurement scheme . a phase - speed filter is applied in each case to select only waves in a narrow phase speed interval . to study the effects caused by wavefield non - uniformity , we calculate the differences of and for the analysis .
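the azimuthal averaging used throughout this section to build amplitude profiles , and the masking ( or unmasking ) of a wavefield with such a profile , can be sketched as follows ; the grid , the center coordinates and the bin width are illustrative assumptions :

import numpy as np

def azimuthal_average(amplitude, center, dr=1.0):
    # average a 2-d amplitude map in annuli around the sunspot center
    ny, nx = amplitude.shape
    y, x = np.indices((ny, nx))
    idx = (np.hypot(x - center[0], y - center[1]) / dr).astype(int)
    nbins = idx.max() + 1
    sums = np.bincount(idx.ravel(), weights=amplitude.ravel(), minlength=nbins)
    counts = np.bincount(idx.ravel(), minlength=nbins)
    return sums / np.maximum(counts, 1)

def apply_radial_profile(field, center, profile, dr=1.0):
    # multiply a wavefield snapshot by a radial profile; passing the
    # reciprocal profile "unmasks" (renormalizes) the field instead
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    idx = (np.hypot(x - center[0], y - center[1]) / dr).astype(int)
    return field * profile[np.clip(idx, 0, len(profile) - 1)]

# usage: estimate the amplitude profile, then undo the amplitude variation
amp = np.random.default_rng(0).random((200, 200))
profile = azimuthal_average(amp, center=(100, 100))
unmasked = apply_radial_profile(amp, (100, 100), 1.0 / np.maximum(profile, 1e-12))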
in figure [ fg4 ] , we show the maps of the mean travel time perturbations , , and the travel time differences , , for all three models . although the background models of models ia and ii are the same as the reference model i , we can see systematic shifts of the mean travel times inside the masked regions . to better understand the results , it is useful to compare the profiles of the travel time deviations , azimuthally averaged around the sunspot center , for both and , as shown in the middle row of figure [ fg5 ] . obviously , the mean travel time shifts , , are significantly larger in model ii than in model ia , although both have exactly the same background model and exactly the same oscillation amplitude reduction in the wavefields . expectedly , model iii shows mostly positive travel time shifts in contrast with the other two models , and this is obviously due to the negative sound - speed perturbation to the background model in a shallow subsurface region . one would expect this positive time shift to increase significantly if there were no such effect causing the time deficit in model ii ; however , it is not immediately clear whether this is the case , or if it is , how much it would increase . for the travel - time difference , , the shifts for all three models are quite small , within the order of 2 sec , substantially smaller than the measured travel time shifts from real sunspot data . the azimuthally averaged and for the other two annuli measurements are also presented in figure [ fg5 ] . for the shorter travel distances , both models ia and ii show stronger travel time deficits in the mean travel time measurements compared to the intermediate travel distance case , up to approximately 15 sec for model ii . however , model iii still displays mostly a positive sign , although it displays some dips in the central area where one would expect a stronger positive shift because of the larger negative sound - speed perturbation there . again , does not show any significant time shifts for all models . for the larger annulus radius , models ia and ii do not display significant time shifts , but model iii displays a significant negative time shift because for this set of measurements , waves reach the depth of a large positive sound - speed perturbation . for this annulus measurement , models ii and iii display travel time shifts of the order of 5 sec in , larger than those of the shorter annuli measurements , but still significantly smaller than the shifts in the real sunspot measurements . based on their artificial tests with the quiet sun data , in order to remove the measured travel time shifts caused by the oscillation amplitude reductions , have suggested making corrections for these areas by enlarging the observed oscillation amplitude in active regions . this procedure is just the reverse of the artificial masking . it obviously works if the power reduction is caused by artificial masking , as in model ia . however , it is useful to examine whether this works for oscillation power reductions that are not caused by surface masking but by physical mechanisms , such as the reduction in the excitation power ( as in models ii and iii ) .
for each model ( ia , ii , and iii ) we calculate the average amplitude profile and normalize the wavefield by using this profile ( the procedure of unmasking the wavefield ) , making the oscillation power nearly uniform over the whole box . the same time - distance analysis is performed as in section 3.1 , and the azimuthally averaged curves are displayed in figure [ fg6 ] . it can be clearly seen that , as expected , this power correction removes all travel time shifts in both and for all annuli measurements for model ia . for , for the two shorter annuli measurements , the correction slightly lifts both models ii and iii without changing the signs of the profiles , and for the longest annulus the correction does not change the measurements much . for , the correction changes the profiles of models ii and iii for all annuli , but still the travel time shifts are within 5 sec or so . the explanation of the fact that the acoustic travel times depend on the non - uniformness of the wave field amplitude or on the non - uniform distribution of the source strength is related to the definition of travel times in helioseismology , which has to deal with stochastic , randomly excited oscillations rather than isolated point sources . the travel time of a wave packet traveling between two points on the surface is defined not as a local physical quantity , which can be explicitly computed from the background model , but rather in an `` observational '' way , as a parameter obtained from fitting the cross - correlation of the oscillation signals by the gabor wavelet . thus , it is very important to investigate the effect of the non - uniform distribution of acoustic sources , damping , and other causes of the non - uniform wave field distribution on the sun . we have presented the results for some of these effects by using numerical 3d simulations of acoustic wave propagation in various solar models . we have found that the source masking for a horizontally uniform background model ( dashed curves ) may cause a systematic negative shift of about 8 - 13 seconds in the mean travel times for short distances ( annuli with radii smaller than 14.5 Mm ) . such a travel time shift may cause underestimation of the sound speed perturbation in the shallow ( 1 - 2 Mm deep ) subsurface layers . for larger distances , the contribution to the mean travel time shift becomes negligible .
on the contrary ,the shift of the travel time differences ( due to the non - uniform distribution of the acoustic sources ) is negligible for short distances and has a value about sec for the largest distance used in our experiments .this is much smaller than perturbations of the travel - time differences observed in real sunspots .the results of our experiments are different from a similar work by , where authors report significant disbalance between ingoing and outgoing travel times ( about s for distance of 6.2 mm and about s for distance of 24.35 mm ) , and suggested that at the large distances the false travel time difference signal caused by non - uniform distribution of sources may be misinterpreted as a result of subsurface flows .however , it is quite clear that , as shown in the first two annuli measurements of figure [ fg5 ] , the oscillation power deficit due to the source masking may have greatly reduced the travel time shifts measured in model iii , which means that if doing inversions , the inverted sound - speed profile would be greatly underestimated .this suggests that the sound - speed profile under sunspots obtained by got the correct sign but might be underestimated . for the flow fields ,this masking effect might cause some systematic velocity errors , but only of a very small magnitude .in addition , our experiments show that the amplitude reduction is caused by the weaker oscillation sources in sunspots can not be corrected by a simple normalization procedure .this imposes us a difficult task on how to retrieve accurately the sound - speed profiles beneath sunspots , and improve the time - distance helioseismology inferences .kosovichev , a. g. & duvall , t. l. , jr . 1997 in proc .score96 : solar convection and oscillations and their relationship , eds . f. p. pijpers , j. christensen - dalsgaard , and c. s. rosenthal , kluwer academic publishers , dordrecht , holland , 241 .
|
by analyzing numerically simulated solar oscillation data , we study the influence of non - uniform distribution of acoustic wave amplitude , acoustic source strength , and perturbations of the sound speed on the shifts of acoustic travel times measured by the time - distance helioseismology method . it is found that for short distances , the contribution to the mean travel time shift caused by the non - uniform distribution of acoustic sources in sunspots may be comparable to ( but smaller than ) the contribution from the sound speed perturbation in sunspots , and that it has the opposite sign to the sound - speed effect . this effect may cause some underestimation of the negative sound - speed perturbations in sunspots just below the surface , which was found in previous time - distance helioseismology inferences . this effect can not be corrected by artificially increasing the amplitude of oscillations in sunspots . for large time - distance annuli , the non - uniform distribution of wavefields does not have significant effects on the mean travel times , and thus on the sound - speed inversion results . the measured travel time differences , which are used to determine the mass flows beneath sunspots , can also be systematically shifted by this effect , but only by an insignificant magnitude .
|
the instability of deterministic motion is typically characterized by the growing separation of initially nearby trajectories , which , in turn , indicates that the dynamics is highly sensitive to its initial state ; very small differences at one moment in such systems can result in very large differences later on . for a chaotic dynamical system , the separation of initially nearby trajectories , namely , evolves asymptotically as within any infinitesimal neighborhood of . the positive coefficient stands for the largest lyapunov exponent for , and most well - known chaotic systems are ruled by dispersion rates of this type . after the pioneering work of gaspard and wang , there has been in recent years a growing interest in understanding weakly chaotic ( sporadic ) systems for which the separation of initially nearby trajectories is weaker than exponential , i.e. sublinear . for these systems , the conventional lyapunov exponent vanishes and the weakly chaotic behavior results from the intermittent switching between long regular phases ( so - called laminar phases ) and short irregular bursts . we consider here a general class of maps of the interval which are weakly chaotic according to , with being a slowly varying function at infinity so that for and for . then , we tie all of its spatio and temporal properties , including the map equation itself , to a single characteristic function . the main step towards a unifying framework is based on determining the eigenfunctions of feigenbaum s renormalization operator , where is a rescaling factor . the eigenfunction that defines the universality class is a fixed point of , i.e. . we shall see that , where denotes the inverse of and is a constant . since the early 1980s , the idea that the same functional equation employed by feigenbaum for studying the period - doubling cascade can also be used to describe intermittency and dissipative systems has been investigated by many authors , notably . from a viewpoint more connected to the intermittency phenomenon , considerations so far about universality have been based on the scaling properties of the laminar length . our results also shed some light on the scaling hypothesis behind the relationship between the laminar length and feigenbaum s operator ( [ feig ] ) . we provide a general formula for the dispersion rate , one of the main results of this manuscript . but besides an attempt to outline the subexponential instability in weak chaos , our goal is to provide a full description of a nonlinear dynamical system in which a scaling property is present . here we have two key quantities determining the spatio and temporal properties of weakly chaotic systems . the invariant density gives us the measure of concentration of trajectories at each stage of intermittency , whereas the residence times at each stage are ruled by the waiting - time probability density function of the laminar region . by introducing a proper modeling of the intermittency mechanism , together with a renormalization - group approach , we establish a universal criterion for the choice of which enables us to predict the dispersion rate , as well as to determine and . we will also see that these results enable us to predict the anomalous subdiffusion for spatially extended versions of such systems by means of a relationship between and the mean squared displacement . subexponential instability ( [ wcc ] ) is a consequence of the infinite invariant measure over the laminar region , and we will establish this relationship here quantitatively .
on the other hand ,the stagnant motion of laminar region is also related to eq .( [ wcc ] ) , leading to an algebraic behavior of waiting - time probability density as we shall see later on . in hamiltonian systems ,such behavior corresponds to the tendency of nonescaping particles to concentrate around regular regions , such as stable islands and invariant tori of nearly - integrable systems .in particular , a weakly chaotic model has been recently used to model the nekhoroshev stability , see .let us introduce the general class of piecewise expanding maps , from ] , resulting in the divergent invariant density up to a positive multiplicative constant for non - normalizable . the finite - time ( generalized ) lyapunov exponent of the map satisfying eq .( [ wc ] ) is |.\ ] ] in order to obtain the dispersion rate , we expect that the average of lyapunov exponent ( [ laver ] ) over the initial condition ensemble may be estimated by using a density function so that to find we will consider the continuous - time stochastic model proposed in , +c(t).\ ] ] the convective equation ( [ pde ] ) describes the stochastic motion of a particle initially in the laminar phase until it is random reinjected back to a position on the interval after reaching ( crossing ) the point .the term describes the particle s velocity in the laminar phase , and the reinjection source term is chosen to fulfill conservation of measure .equation ( [ pde ] ) can be completely solved using the method of characteristics .assuming uniform initial density and considering the characteristic function as follows we get the general solution for the laplace transform =\tilde{\rho}(x , s) ] . introducing the auxiliary functions and , one has ] . therefore, the divergence of invariant measure occurs provided that the characteristic function obeys which is more restrictive than the boundary conditions ( [ bdg ] ) . from eq .( [ psi ] ) we also have the relation , and therefore eq .( [ cond ] ) implies the divergence of mean waiting - time .thus , the weakly chaotic behavior occurs provided that does not decrease faster than . in such cases , eq .( [ sepwe ] ) can be solved by making use of karamata s abelian and tauberian theorems for the laplace - stieltjes transform . by considering the general form of cumulative distribution function associated to , i.e. , , being a slowly varying function at infinity , one obtains from eq .( [ sepwe ] ) \sim q(t)t^{\gamma},\qquad 0\leq\gamma<1.\ ] ] for we have , and thus eq .( [ sepwe ] ) gives us .furthermore , noting that , one finally gets notice that eq .( [ zetaf ] ) yields the same result ( [ bern ] ) for pomeau - manneville s characteristic function ( [ ph2 ] ) .our result ( [ zetaf ] ) enables us to develop models with weakly chaotic behavior provided that the criterion ( [ cond ] ) is fulfilled .thus , our second example relies on the family of maps for which behaves as for , provided does not go to zero as fast as for .this model should be understood as a limiting case for the rescaling factor .using again the machinery we have developed , one has , irrespective of , resulting in the strong anomaly dispersion rate which , together with eq .( [ bern ] ) , also agrees with formula ( [ zetaf ] ) .the dispersion rates ( [ bern ] ) and ( [ zetsl ] ) are in perfect agreement with their corresponding quantities in the infinite ergodic theory , the so - called return sequences . 
such sequences ensure a suitable time - weighted average of observables that converge in distribution terms towards a mittag - leffler distribution .note also that our eq .( [ deninv ] ) generalizes the invariant densities obtained in for these types of systems .lastly we observe that , under condition ( [ cond ] ) , the dispersion rate ( [ zetaf ] ) shows that the instability scenario for weakly chaotic systems is more general than that originally proposed by gaspard and wang in . in particular , we can also consider weakly chaotic models with dispersion rates that grow faster than logarithms but slower than polynomials such as \ ] ] for and or and .results ( [ bern ] ) and ( [ zetsl ] ) are respectively given by for and .according to eq .( [ zetaf ] ) , the intermediary cases give us \right\}\nonumber\\ & = & o(\exp(\ln^{1/a}x^{-1/b})),\end{aligned}\ ] ] as , being the lambert function .yet another application of the renormalization - group approach : weakly chaotic maps of the type ( [ map ] ) have been extensively used in the literature to model systems that exhibit anomalous transport , see for instance and references therein . the mechanism for generating deterministic subdiffusion is based on the extended version of map ( [ map ] ) , from to the entire real line , according to the rules and , where assumes integer values .this results in a series of lattice cells with marginal points located at .the corresponding transport properties can be understood in terms of a continuous - time random walk picture of this model , with probability density of waiting - times near each marginal point .the mean squared displacement for such model is given by }.\ ] ] since , from eq .( [ sepwe ] ) one has the mean squared displacements for the extended versions of models ( [ ph2 ] ) and ( [ phimap ] ) based on our results are in perfect agreement with those obtained respectively in and .equation ( [ sigz ] ) is particularly interesting because it is a non - trivial extension for weakly chaotic systems of a relationship typically observed in usual chaos , namely .why does the laminar length scale according to the feigenbaum renormalization operator ?the extension of the renormalization operator for the class of systems discussed here can be understood by means of the scaling limit since we have near zero . by using recursion , it is simple to see that eq .( [ lims ] ) leads to the renormalization operator ( [ feig ] ) with and .it is important to emphasize here that we need not find eigenfunctions of eq .( [ feig ] ) covering the whole interval ] , given by =\alpha\{g'_{*}[g_{*}(x)]h_{\lambda}(x)+h_{\lambda}[g_{*}(x)]\}.\ ] ] after considering the boundary conditions ( [ bdg ] ) , eq .( [ pert1 ] ) boils down simply to for , which implies homogeneity of degree for and , therefore , .note also that successive applications of on are such that and , thus , the stability condition imposes .this means the invariance of the class of maps ( [ map ] ) under the symmetry near , which is perfectly consistent with the results developed here .interestingly , the perturbation ( robustness ) analysis does not distinguish weak chaos from usual chaos , i.e. 
, there is no symmetry breaking at , despite there being a phase transition at this value ( see and references therein ) .the invariant density , the waiting - time probability density function and , even more importantly , the dispersion rate of initially nearby trajectories , are all described here by a single characteristic function .we show that this function is closely related to the feigenbaum renormalization - group operator ( [ feig ] ) . by means of an inverse problem approachwe show that , given a choice of satisfying eq .( [ cond ] ) , all of these fundamental quantities automatically become known , including the map equations itself .thus all of these results , namely eqs .( [ psi ] ) , ( [ nn ] ) , ( [ zetaf ] ) , and ( [ sigz ] ) , together with eq .( [ wcc ] ) , unify a paradigmatic class of weakly chaotic systems , the most general hitherto known , in a simple and powerful way .we believe that the main question raised in , i.e. whether intermediate dynamical behaviors of the type ( [ wcc ] ) could exist in the range , was reasonably elucidated here . in particular , we propose a broad class of weakly chaotic models with dispersion rates that grows faster than logarithms but slower than polynomials , also covering these bounding cases for appropriate choices of parameters , see eqs .( [ lnot ] ) and ( [ gsol ] ) .the author gratefully acknowledges the helpful discussions with alberto saa .this work was supported by the brazilian agencies cnpq and fapesp .equation ( [ pde ] ) can be completely solved by using the method of characteristics for the homogenous density , where . for uniform initial density has where together with eq .( [ phi ] ) . from eqs .( [ phi ] ) and ( [ psiv ] ) one has eq .( [ psi ] ) .the nonhomogeneous term is given by the source term is so that is conserved . after introducing this condition in eq .( [ pde ] ) and also considering that we shall have eq .( [ vxrho ] ) , i.e. , we get now , the source term can be solved by applying eqs .( [ rhoh ] ) and ( [ nhom ] ) in eq .( [ cpr ] ) resulting applying the laplace transform and the convolution theorem in eq .( [ stlong ] ) one obtains finally , is obtained by using convolution theorem in eq .( [ nhom ] ) , leading to the solution ( [ solve ] ) .we can check the conservation of measure from general solution ( [ solve ] ) .first , eq .( [ psi ] ) reads .\ ] ] now , from eqs .( [ phi ] ) , ( [ psi ] ) , and ( [ prco ] ) we have \nonumber\\ & = & \int_{0}^{\infty}e^{-st}\left\{\int_{0}^{1}\frac{\partial}{\partial t}[\phi^{-1}(t+\phi(x))]\phi'(x)dx\right\}dt\nonumber\\ & = & \int_{0}^{\infty}e^{-st}\left[\int_{0}^{1}\frac{\partial}{\partial x}\phi^{-1}(t+\phi(x))dx\right]dt\nonumber\\ & = & \mathcal{l}_{s}[\phi^{-1}(t)]=\frac{1}{s}\left[1-\tilde{\psi}(s)\right],\end{aligned}\ ] ] noting that and since . finally , eqs .( [ solve ] ) and ( [ cons ] ) give us =0,\ ] ] recalling that .the scaling is just the ( ) lowest order expansion of near : from eqs .( [ solve ] ) and ( [ sepwe ] ) one has }{\rho(x)v(x)},\qquad s\rightarrow0,\ ] ] while eq .( [ prco ] ) reads =x-\lim_{s\rightarrow0}s\mathcal{l}_{s}[\phi^{-1}(t+\phi(x))].\ ] ] from eqs .( [ wcc ] ) and ( [ zetaf ] ) we have and , by making use of karamata s abelian and tauberian theorems , the dependence of eq .( [ psta ] ) on is as follows \sim o(s^{\gamma}/l(1/s)),\qquad s\rightarrow0,\ ] ] and thus =x$ ] .recalling that from eq .( [ vxrho ] ) , we finally get the scaling relation previously proposed .saa , a. , venegeroles , r. 
: pesin s relation for weakly chaotic one - dimensional systems .proceedings of the european conference on complex systems 2012 , edited by t. gilbert , m. kirkilionis , and gregoire nicolis , p. 949- 953 , springer proceedings in complexity , new york ( 2013 ) .shinkai , s. , aizawa , y. : ergodic properties of the log - weibull map with an infinite measure . in :let s face chaos through nonlinear dynamics , edited by m. robnik and v.g .romanovski , p. 219 - 222 , american institute of physics , new york ( 2008 ) .
|
we consider a general class of intermittent maps designed to be weakly chaotic , i.e. , for which the separation of trajectories of nearby initial conditions is weaker than exponential . we show that all its spatio and temporal properties , hitherto regarded independently in the literature , can be represented by a single characteristic function . a universal criterion for the choice of is obtained within the feigenbaum s renormalization - group approach . we find a general expression for the dispersion rate of initially nearby trajectories and we show that the instability scenario for weakly chaotic systems is more general than that originally proposed by gaspard and wang [ proc . natl . acad . sci . usa * 85 * , 4591 ( 1988 ) ] . we also consider a spatially extended version of such class of maps , which leads to anomalous diffusion , and we show that the mean squared displacement satisfies . to illustrate our results , some examples are discussed in detail .
|
the vision community has been mesmerized by the effectiveness of deep convolutional neural networks ( cnns ) , which have led to a breakthrough in computer vision - related problems . hence , there has been a notable shift towards cnns in many areas of computer vision . convolutional neural networks were popularized through alexnet and its much celebrated victory at the 2012 imagenet competition . after that , there have been several attempts at building deeper and deeper cnns , like the vgg network and googlenet in 2014 , which have 19 and 22 layers respectively . but very deep models introduce problems like vanishing and exploding gradients , which hamper their convergence . the _ vanishing gradient _ problem is especially pronounced in very deep networks . during the backpropagation phase , the gradients are computed by the chain rule . multiplication of small numbers in the chain rule leads to an exponential decrease in the gradient . due to this , very deep networks learn very slowly . sometimes , the gradient in the earlier layers gets larger because the derivatives of some activation functions can take larger values . this leads to the problem of _ exploding gradient _ . these problems have been reduced in practice through normalized initialization and , most recently , batch normalization . the _ exponential linear unit _ ( elu ) also reduces the vanishing gradient problem . elus introduce negative values which push the mean activation towards zero . this reduces the _ bias shift _ and speeds up learning . elus give better accuracy and a learning speed - up compared to the combination of relu and batch normalization . after reducing the vanishing / exploding gradient problem , the networks start converging . however , the accuracy degrades in such very deep models . the most recent contributions towards solving this problem are highway networks and residual networks . these networks introduce _ skip connections _ , which allow information flow into the deeper layers and enable us to have deeper networks with better accuracy . the 152-layer resnet outperforms all other models . in this paper , we propose to use the exponential linear unit instead of the combination of relu and batch normalization . since exponential linear units reduce the vanishing gradient problem and give better accuracy compared to the combination of relu and batch normalization , we use them in our model to further increase the accuracy of residual networks . we also notice that elu speeds up learning in very deep networks as well . we show that our model increases the accuracy on datasets like cifar-10 and cifar-100 , compared to the original model . it is seen that as the depth increases , the difference in accuracy between our model and the original model increases . deeper neural networks are very difficult to train . the vanishing / exploding gradients problem impedes the convergence of deeper networks . this problem has been largely solved by normalized initialization . a notable recent contribution towards reducing the vanishing gradients problem is batch normalization . instead of normalized initialization and keeping a lower learning rate , batch normalization makes normalization a part of the model and performs it for each mini - batch . once the deeper networks start converging , a _ degradation _ problem occurs . due to this , the accuracy degrades rapidly after it saturates . the _ training error _ increases as we add more layers to a deep model , as mentioned in .
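as an aside , the chain - rule decay described above is easy to reproduce numerically : multiplying per - layer derivative factors that are smaller than one shrinks the gradient geometrically with depth . a toy illustration with sigmoid derivatives , whose values are bounded by 0.25 :

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)    # bounded above by 0.25

rng = np.random.default_rng(0)
grad = 1.0
for depth in range(1, 51):
    grad *= dsigmoid(rng.normal())   # one chain-rule factor per layer
    if depth % 10 == 0:
        print(depth, grad)           # shrinks roughly geometrically with depth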
to solve this problem , several authors introduced skip connections to improve the information flow across several layers . highway networks have parameterized skip connections , known as _ information highways _ , which allow information to flow unimpeded into deeper layers . during the training phase , the skip connection parameters are adjusted to control the amount of information allowed on these _ highways _ . [ figure [ fig : resblock ] : residual block in a residual network . ] * residual networks ( resnets ) * utilize shortcut connections with the help of the identity transformation . unlike highway networks , these introduce neither extra parameters nor extra computational complexity . this improves the accuracy of deeper networks . with increasing depth , resnets give better function approximation capabilities as they gain more parameters . the authors hypothesis is that plain deeper networks give worse function approximation because the gradients vanish when they are propagated through many layers . to fix this problem , they introduce skip connections to the network . formally , if the output of layer is and represents multiple convolutional transformations from layer to , we obtain , where represents the identity function and is the default activation function . fig . [ fig : resblock ] illustrates the basic building block of a residual network , which consists of multiple convolutional and batch normalization layers . the identity transformation is used to reduce the dimensions of to match those of . in residual networks , the gradients and features learned in earlier layers are passed back and forth between the layers via the identity transformations . * exponential linear unit ( elu ) * alleviates the vanishing gradient problem and also speeds up learning in deep neural networks , which leads to higher classification accuracies . the _ exponential linear unit _ ( elu ) is defined as elu(x) = x for x > 0 and elu(x) = \alpha ( e^{x} - 1 ) for x \leq 0 , where \alpha > 0 is a hyperparameter . relus are non - negative and thus have mean activations larger than zero , whereas elus have negative values , which push the mean activations towards zero . elus saturate to a negative value when the input gets smaller . this decreases the forward - propagated variation and information , which draws the mean activations to zero . units with non - zero mean activations act as a bias for the next layer . if these units do not cancel each other out , then the learning causes a _ bias shift _ for units in the next layer . therefore , elus decrease the bias shift as the mean activations are closer to zero . less bias shift also speeds up learning by bringing the standard gradient closer towards the unit natural gradient . fig . [ fig : elu ] shows the comparison of relu and elu . [ figure [ fig : elu ] : comparison of relu and elu . ]
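for quick reference , a direct numpy transcription of the elu definition above ( alpha = 1.0 is the common default ; the helper names are ours ) :

import numpy as np

def elu(x, alpha=1.0):
    # identity for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-4.0, 4.0, 9)
print(elu(x))    # saturates towards -alpha for large negative inputs
print(relu(x))   # non-negative, so the mean activation stays above zero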
in comparison with the resnet model ,we use exponential linear unit ( elu ) in place of a combination of relu with batch normalization .[ fig : elublocks ] illustrates our different experiments with elus in resblock .in this model , consists of a sequence of layers : * conv - elu - conv - elu*. fig .[ fig : conv - elu - conv - elu ] represents the basic building block of this experiment .we trained our model using the specification mentioned in [ cifar10analysis ] .but we found that after few iterations , the gradients blew up .when the learning rate is decreased , the 20-layer model starts converging but to very less accuracy .the deeper models like 56 and 110-layer still do not converge after decreasing the learning rate .this model clearly fails as the trivial problem of exploding gradient can not be reduced in very deep models .this is a * full pre - activation unit * resblock with elu .the sequence of layers is * elu - conv - elu - conv*. fig .[ fig : elu - conv - elu - conv ] highlights the basic resblock of this experiment . during the training of this modeltoo , the gradients exploded after few iterations . due to the exponential function, the gradients get larger and lead to exploding gradient problem . even decreasing the learning ratealso does not reduce this problem .we decided to add a batch normalization layer before addition to control this problem ..23 .23 to control the exploding gradient , we added a batch normalization before addition .so , the sequence of layers in this resblock is * conv - elu - conv - bn * and elu after addition .[ fig : conv - elu - conv - bn1 ] represents the resblock used in this experiment .thus in this resblock , the update rule ( [ eq:1 ] ) for the layer is the batch normalization layer reduces the exploding gradient problem found in the previous two models .we found that this model gives better accuracy for 20-layer model .however , as we increased the depth of the network , the accuracy degrades for the deeper models . if the elu activation function is placed after addtion , then the mean activation of the output pushes towards zero .this could be beneficial .however , this forces each skip connection to perturb the output .this has a harmful effect and we found that this leads to degradation of accuracy in very deep resnets .[ fig : elu_after_add ] depicts the effects of including elu after addition in this resblock ..33 .33 .33 .4 .4 fig .[ fig : conv - elu - conv - bn2 ] gives an illustration of the basic building block of our model .thus in our model , represents the following sequence of layers : * conv - elu - conv - bn*. the update rule ( [ eq:1 ] ) for the layer is this is the basic building block for all our experiments on cifar-10 and cifar-100 datasets .we show that not including elu after addition does not degrade the accuracy , unlike the previous model .this resblock improves the learning behavior and the classification performance of the residual network .we empirically demonstrate the effectiveness of our model on a series of benchmark data sets : cifar-10 and cifar-100 . 
in our experiments , we compare the learning behavior and the classification performance of both models on the cifar-10 and cifar-100 datasets . the experiments prove that our model outperforms the original resnet model in terms of learning behavior and classification performance on both datasets . finally , we compare the classification performance of our model with other previously published state - of - the - art models . the first experiment was performed on the cifar-10 dataset , which consists of 50k training images and 10k test images in 10 classes . in our experiments , we performed training on the training set and evaluation on the test set . the inputs to the network are images which are color - normalized . we use a receptive field in the convolution layer . we use a stack of layers with convolution on the feature maps of sizes respectively , with on each feature map . the numbers of filters are respectively . the original resnet model ends with a global average pooling , a 10 - way fully - connected layer and a softmax layer . in our model , we add an elu activation function just before the global average pooling layer . these two models are trained on an aws g2.2xlarge instance ( which has a single gpu ) with a mini - batch size of 128 . we use a weight decay of 0.0001 and a momentum of 0.9 , and adopt the weight initialization in and bn , but with no dropout . we start with a learning rate of 0.1 , divide it by 10 after 81 epochs , and again divide by 10 after 122 epochs . we use the data augmentation mentioned in during the training phase : add 4 pixels on each side and do a random crop from the padded image or its horizontal flip . during the testing phase , we only use a color - normalized image . our experiments are executed on 20 , 32 , 44 , 56 and 110-layer networks . fig . [ fig : cifar10trainloss ] shows the comparison of learning behaviours between our model and the original resnet model on the cifar-10 dataset for 20 , 32 , 44 , 56 and 110 layers . the graphs prove that for all the different numbers of layers , our model possesses a superior learning behavior and converges many epochs before the original model . as the depth of the model increases , our model also learns faster than the original model . the difference between the learning speeds of these two models increases as the depth increases . comparing fig . [ fig : cifar10trainloss20 ] and fig . [ fig : cifar10trainloss110 ] , we can easily notice the huge difference in learning speeds for the 20-layer and 110-layer models . after 125 epochs , both models converge to almost the same value , but our model has a slightly lower training loss compared to the original model . fig . [ fig : cifar10testerror ] illustrates the comparison of classification performance between our model and the original one on the cifar-10 dataset for 20 , 32 , 44 , 56 and 110 layers . we observe that for the 20-layer model , the test error is nearly the same for both models , but as the depth increases , our model significantly outperforms the original model . table [ table : cifar10testerrortable ] shows the test error for both models from the epoch with the lowest validation error . fig . [ fig : cifar10testerror ] shows that the gap between the test errors of the two models increases as the depth increases . [ table [ table : cifar10testerrortable ] : test error ( % ) of our model compared to the original resnet model ; the test error of the original resnet model refers to our reproduction of the experiments by he et al . ]
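the training recipe above translates into a short pytorch setup . the batch size , momentum , weight decay , learning - rate drops and augmentation follow the text , while the channel statistics , the total epoch count and the stand - in model are assumptions :

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# commonly quoted cifar-10 channel statistics (assumed; the text only says color-normalized)
mean, std = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)

train_tf = T.Compose([
    T.RandomCrop(32, padding=4),      # add 4 pixels on each side, then random crop
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean, std),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stand-in for the resblock network
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[81, 122], gamma=0.1)

for epoch in range(164):              # total epoch count is an assumption
    for xb, yb in train_loader:
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(xb), yb)
        loss.backward()
        opt.step()
    sched.step()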
In this paper, we introduce residual networks with exponential linear units, which learn faster than the current residual networks and also give better accuracy than the original ones as the depth increases. On datasets like CIFAR-10 and CIFAR-100, we improve beyond the current state of the art in terms of test error, while also learning faster than these models using ELUs. ELUs push the mean activations towards zero, as they admit small negative values; this reduces the bias shift and increases the learning speed. Our experiments show that not only does our model have superior learning behavior, but it also provides better accuracy than the current model on the CIFAR-10 and CIFAR-100 datasets. This enables researchers to use very deep models while improving their learning behavior and classification performance at the same time.

Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: Computer Vision and Pattern Recognition (CVPR 2009), IEEE (2009) 248-255
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015) 1-9
Wan, L., Zeiler, M., Zhang, S., Cun, Y.L., Fergus, R.: Regularization of neural networks using DropConnect. In Dasgupta, S., McAllester, D., eds.: Proceedings of the 30th International Conference on Machine Learning (ICML-13). Volume 28, JMLR Workshop and Conference Proceedings (May 2013) 1058-1066
Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M., Ali, M., Adams, R.P., et al.: Scalable Bayesian optimization using deep neural networks. arXiv preprint arXiv:1502.05700 (2015)
|
Very deep convolutional neural networks introduced new problems like _vanishing gradients_ and _degradation_. The recent successful contributions towards solving these problems are residual and highway networks. These networks introduce _skip connections_ that allow information (from the input, or learned in earlier layers) to flow more easily into the deeper layers. Such very deep models have led to a considerable decrease in test errors on benchmarks like ImageNet and COCO. In this paper, we propose the use of the _exponential linear unit_ instead of the combination of ReLU and batch normalization in residual networks. We show that this not only speeds up learning in residual networks but also improves the accuracy as the depth increases. It improves the test error on almost all datasets, such as CIFAR-10 and CIFAR-100.
|
Galaxy redshifts are important for a large number of studies of the extragalactic universe, due to their direct correlation with the distances of the sources. Photometric galaxy redshifts (hereafter photo-z's) are crucial in the current era of large surveys based on massive datasets. They are used in a wide variety of tasks: for example, to constrain the dark matter and dark energy contents of the universe through weak gravitational lensing, to understand the cosmic large-scale structure by identifying galaxy clusters and groups, to map the galaxy color-redshift relationships, and to classify astronomical sources. More recently, the attention in this field has focused on techniques able to compute a probability density function (PDF) of the photo-z for each individual astronomical source, with the goal of improving the knowledge of the statistical reliability of photo-z estimates. In the machine-learning context several methods have been proposed to approach this task; see for instance Bonnet (2013), Rau et al. (2015), Sadeh et al. (2015), Carrasco & Brunner (2014).

Here we present a new method, named METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts): a modular workflow including a machine-learning engine to derive photo-z's and a method to produce their PDFs, based on the evaluation of the photometric data uncertainties to derive a perturbation law of the photometry. With this law we perform the perturbation of the features in a controlled way, not biased by systematics. A proper error fitting, accounting for the attribute errors, allows us to constrain the perturbation of the photometry on the biases of the measurements.

The conceptual flow of the METAPHOR pipeline is based on the following sequence of tasks. Given a multi-band data sample containing the spectroscopic galaxy redshifts: (i) for each band involved, a photometry perturbation function is derived; (ii) the data sample is randomly shuffled and split into a training and a test set; (iii) the photometry of the test set is perturbed, thus obtaining an arbitrary number of test-set replicas; (iv) finally, the machine-learning engine is trained, and the test sets (the perturbed ones plus the unperturbed one) are submitted to the trained model to derive the PDF of the photo-z estimates. In the last step, the output values of the trained network are used to calculate, for each bin of redshift, the probability that a given photo-z value belongs to that bin. The binning step, as well as the number of perturbations, are user-defined parameters, to be chosen according to the specific requirements of the experiment. For a given photo-z binning step, we count the photo-z estimates falling in each bin, and the probability that the redshift belongs to a bin is the ratio of that count to the total number of estimates. The resulting PDF is formed by all these probabilities.
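The last step amounts to a simple normalized histogram of the estimates. The following is a minimal sketch of it (our own function and parameter names; the binning step and range are the user-defined parameters mentioned above):

```python
import numpy as np

def photo_z_pdf(z_estimates, z_min=0.0, z_max=1.0, bin_width=0.01):
    """Build a photo-z PDF from the estimates returned for the perturbed
    test sets plus the unperturbed one, as in the METAPHOR workflow."""
    edges = np.arange(z_min, z_max + bin_width, bin_width)
    counts, _ = np.histogram(z_estimates, bins=edges)
    pdf = counts / counts.sum()               # probability per redshift bin
    centers = 0.5 * (edges[:-1] + edges[1:])  # bin centers, for plotting
    return centers, pdf
```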
At the end of the procedure, a post-processing module calculates the final photo-z estimate and the PDF statistics. We evaluate the photo-z's in terms of a standard set of statistical estimators of the residuals for the objects in the blind test set: (a) bias, the mean value of the residuals; (b) sigma, the standard deviation of the residuals; (c) sigma68, the radius of the region that includes 68% of the residuals close to 0; (d) NMAD, the normalized median absolute deviation of the residuals; (e) the fraction of outliers, i.e. of residuals exceeding a fixed threshold; (f) skewness, the asymmetry of the probability distribution of a real-valued random variable around the mean. Furthermore, in order to evaluate the cumulative performance of the PDFs, we compute three further estimators on the _stacked_ residuals of the PDFs: (1) the percentage of residuals within a narrow interval around zero; (2) the percentage of residuals within a wider interval; (3) the weighted average of all the residuals of the _stacked_ PDFs.

The photometry perturbation is based on the expression
$$m_{ij}' = m_{ij} + \alpha_i\, F_{ij}\, u(0,1)$$
applied to the given $j$ magnitudes of each band $i$ as many times as the number of perturbations of the test set. The term $\alpha_i$ is a multiplicative constant, used to customize the photometric error trend on the basis of the photometric quality of the specific band; this can be particularly useful in the case of photometry obtained by merging different surveys. The quantity $F_{ij}$ is the weighting coefficient associated with each specific band, used to weight the Gaussian noise contribution to the magnitude values; finally, $u(0,1)$ is a random value extracted from a normal distribution.

We investigated four different types of weighting coefficient. The first is a heuristically chosen constant, implying the same width of the Gaussian noise for each data point. The second choice weights the Gaussian noise contribution using the individual magnitude error provided for each source. The third is a polynomial fitting: a binning of the photometric bands is performed, in which a polynomial fit of the mean magnitude errors is used to reproduce the intrinsic trend of the distribution. The last option is a slightly more sophisticated version of the polynomial fitting, coupled with a minimum value chosen heuristically, resulting in a bi-modal perturbation function.

As introduced above, one of the most valuable features of METAPHOR is its invariance to the specific empirical model used as the engine to estimate the photo-z's. In order to demonstrate this capability, we tested the METAPHOR workflow using three different machine-learning methods: the MLPQNA neural network (Byrd et al. 1994), already successfully used in several astrophysical contexts (Brescia et al. 2013; Brescia et al. 2014a; Cavuoti et al. 2012; Cavuoti et al. 2014a; Cavuoti et al. 2014b; Cavuoti et al. 2015b); the standard KNN (Cover & Hart 1967); and Random Forest (Breiman 2001). In particular, the experiment with a very basic machine-learning model like KNN demonstrates the general applicability of any empirical engine within METAPHOR. Furthermore, considering that methods based on SED template fitting intrinsically provide the PDF of the estimated photo-z's, we compared METAPHOR with the Le Phare model (Ilbert et al. 2006).
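The perturbation law above can be sketched in a few lines. This is an illustrative implementation with our own names; for simplicity it uses a single scalar `alpha`, whereas the workflow allows a per-band constant, and the choice of `weights` encodes whichever of the four coefficient types is adopted:

```python
import numpy as np

def perturb_magnitudes(mags, weights, alpha=0.9, n_perturbations=100, seed=None):
    """Generate perturbed copies of a test-set photometry table.

    mags, weights: arrays of shape (n_objects, n_bands); `weights` holds the
    band-wise coefficients F_ij (constant, error-based, polynomial fit, or
    bi-modal), and `alpha` plays the role of the multiplicative constant."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_perturbations,) + mags.shape)  # u(0, 1) draws
    return mags + alpha * weights * noise  # one perturbed test set per leading index
```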
The real data used for the tests was a galaxy spectroscopic catalogue sample extracted from the Data Release 9 (DR9) of the Sloan Digital Sky Survey (SDSS; York et al. 2000). Using MLPQNA as the internal engine for photo-z estimation, we reached low values of sigma and NMAD and a small fraction of outliers. These statistical results are slightly worse than those shown in a previous article (Brescia et al. 2014c), where we already used the MLPQNA method to derive photo-z's for the galaxies of SDSS-DR9. However, this discrepancy is only apparent, considering that the spectroscopic knowledge base used in the cited work was much larger than the one used for training here. The decision to use such a limited sample in the present experiment was induced by its different goal.

We performed a large number of experiments with MLPQNA, varying the photometric perturbations in order to find the best perturbation law. The best-performing experiment turned out to be the one based on a bi-modal perturbation law, with a heuristically chosen threshold and multiplicative constant. This experiment leads to a stacked PDF in which most of the objects fall within the peak of the PDF, a further fraction falls within one bin from the peak, and almost all fall within the PDF. After having found the best perturbation law, we executed a larger number of perturbations of the test set. This experiment led to an increase in the statistical performance, with more objects within the peak of the PDF, within one bin from the peak, and inside the PDF as a whole. The same configuration and perturbed data were then used to estimate photo-z's by replacing MLPQNA with, respectively, the KNN and Random Forest models within the METAPHOR workflow. In parallel, we also derived the photo-z PDFs with the Le Phare method. The statistical results for all these methods are summarized in Table [tab:stackedstat]: statistical results of photo-z's and related PDF estimation on the blind test set extracted from SDSS-DR9, obtained by three machine-learning models (MLPQNA, KNN and Random Forest), alternately used as the internal engine of METAPHOR, and by the SED template fitting method Le Phare. The last three estimators are related to the cumulative PDF of the estimated photo-z's; see the text for the explanation of the statistical estimators.

Although there is a great difference in terms of photo-z estimation statistics between Le Phare and MLPQNA (see Table [tab:stackedstat]), the results of the PDFs in the narrow interval around zero are comparable. But the greater efficiency of MLPQNA induces an improvement in the wider interval, where we find a larger fraction of the objects than for Le Phare. Both the individual and the _stacked_ PDFs are more symmetric in the case of the empirical methods. This is particularly evident from the skewness (see Table [tab:stackedstat]), which is several times greater in the case of Le Phare. The presented photo-z estimation results and the statistical performance of the cumulative PDFs, achieved by MLPQNA, Random Forest and KNN through the proposed workflow, demonstrate the validity and reliability of the METAPHOR strategy, despite its simplicity, as well as its general applicability to any other empirical method.

MB and SC acknowledge financial contribution from the agreement ASI/INAF I/023/12/1. MB acknowledges the PRIN-INAF 2014 _Glittering kaleidoscopes in the sky: the multifaceted nature and role of galaxy clusters_.
CT is supported through an NWO-VICI grant (project number ).

Bonnet, C., 2013, MNRAS, 449, 1, 1043-1056
Breiman, L., 2001, Machine Learning, 45, 1, 25-32, Springer
Brescia, M., Cavuoti, S., Longo, G., et al., 2014a, PASP, 126, 942, 783-797
Brescia, M., Cavuoti, S., Longo, G., De Stefano, V., 2014c, VizieR On-line Data Catalog: J/A+A/568/A126
Brescia, M., Cavuoti, S., D'Abrusco, R., Mercurio, A., Longo, G., 2013, ApJ, 772, 140
Byrd, R.H., Nocedal, J., Schnabel, R.B., 1994, Mathematical Programming, 63, 129
Carrasco, K., Brunner, R.J., 2014, MNRAS, 442, 4, 3380-3399
Cavuoti, S., Brescia, M., Longo, G., Mercurio, A., 2012, A&A, 546, 13
Cavuoti, S., Brescia, M., D'Abrusco, R., Longo, G., Paolillo, M., 2014a, MNRAS, 437, 968
Cavuoti, S., Brescia, M., Longo, G., 2014b, Proceedings of the IAU Symposium, Vol. 306, Cambridge University Press
Cavuoti, S., Brescia, M., De Stefano, V., Longo, G., 2015b, Experimental Astronomy, 39, 1, 45-71, Springer
Cover, T.M., Hart, P.E., 1967, IEEE Transactions on Information Theory, 13, 1
Ilbert, O., Arnouts, S., McCracken, H.J., et al., 2006, A&A, 457, 841
Rau, M.M., Seitz, S., Brimioulle, F., et al., 2015, MNRAS, 452, 4, 3710-3725
Sadeh, I., Abdalla, F.B., Lahav, O., 2015, eprint arXiv:1507.00490
York, D.G., Adelman, J., Anderson, J.E., et al., 2000, AJ, 120, 1579
|
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but offering the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with the KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
|
The Heisenberg (or uncertainty) relations (HR) have a large popularity, being frequently regarded as crucial formulae of physics or (Martens 1991) even as the expression of the most important principle of twentieth-century physics. Nevertheless, today one knows (Bunge 1977) that the HR are 'probably the most controverted formulae in the whole of theoretical physics'. The controversies originate in the association of the (supposedly special) characteristics of measurements at atomic scale with the HR, respectively with the foundation and interpretation of quantum theory. The respective association was initiated and especially sophisticated within the Traditional (conventional or orthodox) Interpretation of HR (TIHR). Very often the TIHR is amalgamated with the so-called Copenhagen interpretation of quantum mechanics. Elements of the alluded association were preserved one way or another in almost all investigations of the HR subsequent to the TIHR. It is notable that, in spite of their number and variety, the mentioned investigations have not yet solved in essence the controversies connected with the TIHR. Curiously, today large classes of publications and scientists seem to omit (or even ignore) discussions about the controversies and defects characterizing the TIHR. So, tacitly, in our days the TIHR seems to remain a largely adopted doctrine which dominates the questions regarding the foundation and interpretation of quantum theory.

For all that (Piron 1982), 'the idea that there are defects in the foundations of orthodox quantum theory is unquestionably present in the conscience of many physicists'. No doubt, first of all, the above quoted idea regards questions connected with the TIHR. The respective questions therefore require further studies and, probably, new views. We believe that a promising strategy to satisfy such requirements is to develop an investigation guided by the goals presented under the following Points (P):
- from the vague multitude of sophisticated statements of the TIHR, to identify its main elements (hypotheses, arguments/motivations and assertions);
- to collect the significant defects of the TIHR located in connection with the above mentioned elements;
- to examine the verity of the respective defects as well as their significance with respect to the TIHR;
- to see whether such an examination defends the TIHR or irrefutably pleads against it;
- in the latter case, to admit the failure of the TIHR and to abandon it as an incorrect and useless doctrine;
- to see whether the HR are veritable physical formulae;
- to search for a genuine reinterpretation of the HR;
- to give a (first) evaluation of the direct consequences of the mentioned reinterpretation;
- to note a few remarks on some adjacent questions.

In this paper we wish to develop an investigation of the HR problematic in the spirit of the above mentioned points. For such a purpose we will appeal to some elements (ideas and results) from our works published in the last two decades (Dumitru 1974a, 1974b, 1977, 1980, 1984, 1987, 1988, 1989, 1991, 1993, 1996, 1999; Dumitru and Verriets 1995). But here we strive to incorporate the respective elements into a more argued and elaborated approach. We also try to make our exposition as self-contained as possible, so that the reader should find it sufficiently meaningful and persuasive without any important appeals to other texts.
Through the announced investigation we shall find that all the main elements of the TIHR are affected by insurmountable defects. Therefore we shall reveal the indubitable failure of the TIHR and the necessity of its abandonment. It then follows directly that in fact the HR do not have any significance connected with the (measuring) uncertainties. That is why, in this paper, we do not use for the respective relations the widespread denomination of 'uncertainty relations'. A consequence of the above alluded revelations is the fact that the HR must be deprived of their quality of crucial physical formulae. So we come into consonance with the guess (Dirac 1963) that 'uncertainty relations in their present form will not survive in the physics of the future'.

The failure of the TIHR leaves open a conceptual space which firstly requires justified answers to the questions raised above regarding the verity and the genuine reinterpretation of the HR. The respective answers must be incorporated in a concordant view about the subjects of the following points:
- the genuine description of the measurements;
- the foundation and the interpretation of the actually known quantum theory.

The above mentioned subjects were amalgamated by the TIHR through a lot of assertions/assumptions which now appear as fallacious. That is why we suggest that a useful view should be built on a natural differentiation of the respective subjects. In such a view the actual quantum theory must be considered as regarding only the intrinsic properties of the entities (particles and fields) of the microworld. The aspects of the respective properties included in the theoretical version of the HR refer to the stochastic characteristics of the considered entities. But note that stochastic attributes are specific also to some macroscopic physical systems (e.g. thermodynamical ones), characterized by a class of macroscopic formulae similar to the HR. Also, the Planck constant $\hbar$ (involved in the quantum HR) proves itself to be similar to the Boltzmann constant k (involved in the mentioned macroscopic formulae): both constants appear as generic indicators of stochasticity.

In the spirit of the above suggested view, the description of the measurements remains a question which is extrinsic as regards the properties of the considered physical systems. Also, it must be additional to, and independent from, the actually known branches of theoretical physics (including quantum mechanics); the respective branches refer only to the intrinsic properties of the considered systems. The measurements then appear as processes which supply outcoming (received) information/data about the intrinsic properties of the measured systems. So regarded, the measurements can be described through mathematical models, in which the measuring uncertainties can be described by means of various estimators.

The above announced views about the HR problematic facilitate reconsiderations and (we think) nontrivial comments on some questions regarding the foundations of quantum mechanics. For developing our exposition, in the next sections we will quote directly only a restricted number of references, because our goal is not to give an exhaustive review of the literature dealing with the TIHR. The readers interested in such reviews are invited to consult the known monographical and bibliographical publications (e.g.: Jammer, 1966; De Witt and Graham, 1971; Jammer, 1974; Nilson, 1976; Yanase et al.
, 1978; Primas, 1981; Ballentine, 1982; Cramer, 1986; Dodonov and Man'ko, 1987; Martens, 1991; Braginsky and Khalili, 1992; Omnes, 1992, 1994; Bush et al., 1996).

In spite of its popularity, in its promoting literature the TIHR is reported as a vague multitude of sophisticated statements rather than as a systematized ensemble of clearly defined main elements (hypotheses, arguments/motivations and assertions). However, from the respective publications such an ensemble can be identified and sorted out; in our opinion it can be presented as follows.

On the best authority (Heisenberg, 1977), today it is known that the TIHR story originates in the search for general answers to the following primary questions:
- Are all measurements affected by measuring uncertainties?
- Can the respective uncertainties be represented quantitatively in a mathematical scheme?

In connection with the first question, the TIHR adopted the following hypotheses:
- measuring uncertainties are due to the perturbations of the measured system as a result of its interactions with the measuring instrument;
- in the case of macroscopic systems the mentioned perturbations can be made arbitrarily small and, consequently, the corresponding uncertainties can always be considered negligible;
- in the case of quantum systems (microparticles of atomic size) the alluded perturbations are essentially unavoidable and, consequently, for certain measurements (see below) the corresponding uncertainties are non-negligible.

In the shadow of these hypotheses, the TIHR attention was limited only to the quantum cases. For approaching such cases with respect to the second question, the TIHR resorted to the following motivation resources:
- analysis of some thought (gedanken) measuring experiments;
- appeal to some theoretical formulae from the existing quantum mechanics.

The two resources were used in undisguised association. So, from the starting point, in the TIHR the questions regarding the description of the measurements, respectively the foundation and interpretation of the existing quantum theory, were explicitly amalgamated.

For accuracy of the discussions, in the following we shall use the term _variable_ to denote a physical quantity which describes a specific property/characteristic of a physical system. With adequate delimitations the respective term will be used in both a theoretical and an experimental sense. In the former case it is connected with the theoretical modeling of the system; in the latter case it is related to the data given by measurements on the system.

In connection with the thought experiments, there was considered (Heisenberg, 1927, 1930) the case of simultaneous measurements of two (canonically) conjugated quantum variables A and B (such as coordinate q and momentum p, or time t and energy E). The corresponding thought-experimental (te) uncertainties $\Delta_{te}A$ and $\Delta_{te}B$ were found to be interconnected through the following A-B formula
$$\Delta_{te}A \cdot \Delta_{te}B \ge \hbar \qquad (2.1)$$
where $\hbar$ is the quantum Planck constant.

As regards the theoretical formulae, firstly there was introduced (Heisenberg, 1927, 1930) the following q-p theoretical formula:
$$\Delta_{\psi}q \cdot \Delta_{\psi}p \ge \frac{\hbar}{2} \qquad (2.2)$$
(with equality only for a Gaussian wave function). Afterwards, the TIHR partisans replaced Eq. (2.2) by the more general A-B theoretical formula
$$\Delta_{\psi}A \cdot \Delta_{\psi}B \ge \frac{1}{2}\left|\left\langle\left[\hat{A},\hat{B}\right]_{-}\right\rangle_{\psi}\right| \qquad (2.3)$$
Here $[\hat{A},\hat{B}]_{-} = \hat{A}\hat{B} - \hat{B}\hat{A}$ denotes the commutator of the operators $\hat{A}$ and $\hat{B}$, with $[\hat{A},\hat{B}]_{-} = i\hbar$ in the case of conjugated variables. (For further details about the quantum notations, in their actual usance version, see Sec. V below.)
Equations (2.1)-(2.2)/(2.3) were taken by the TIHR as motivation supports. Based on such supports, the TIHR partisans promoted a whole doctrine (vision). The main (essential) elements of the respective doctrine come down to the following points, grouped in pairs of assertions (A) and motivations (M):
- (A): The quantities $\Delta_{te}A$ and $\Delta_{\psi}A$ from Eqs. (2.1) and (2.2)/(2.3), denoted by a unique symbol $\Delta A$, have the identical significance of measuring _uncertainty_ for the quantum variable A. (M): The above mentioned TIHR presumptions about $\Delta_{te}A$, and the (formal) resemblance between Eqs. (2.1) and (2.2).
- (A): Equations (2.1) and (2.2)/(2.3) admit the same generic interpretation, as uncertainty relations for simultaneous measurements of the variables A and B. (M): The presumed significance of $\Delta_{te}A$ and $\Delta_{te}B$ from Eq. (2.1), and the resemblance between Eqs. (2.1) and (2.2)/(2.3).
- (A): A solitary quantum variable can be measured without any uncertainty (with unlimited accuracy). (M): For such a variable, considered independently from other variables, Eqs. (2.1)-(2.3) do not impose a lower bound on the uncertainty.
- (A): Two commutable variables A and B can be measured simultaneously with arbitrarily small (even null) uncertainties $\Delta A$ and $\Delta B$. (M): For such variables $[\hat{A},\hat{B}]_{-} = 0$, so Eq. (2.3) imposes no non-null lower bound.
- (A): Two non-commutable variables can be measured simultaneously only with non-negligible uncertainties. (M): For such variables, in Eq. (2.3), as well as in Eq. (2.1), the product of the corresponding simultaneous uncertainties has a non-null quantity as a lower bound.
- (A): The HR defined by Eqs. (2.1)-(2.3) (and named uncertainty relations) are typically quantum formulae and have no similar in classical (non-quantum) physics. (M): The presence of the quantum (Planck) constant $\hbar$ in Eqs. (2.1)-(2.3) and its absence from all known formulae of classical physics.

The above mentioned points can be regarded as the main elements of the TIHR, because any piece of the variety of TIHR statements is obtained and advocated by means of some combination of the respective elements. Among the alluded pieces we mention here the ones regarding the mutual relations of quantum variables. The TIHR adopted the idea:
- a variable exists / can be defined only when it is measurable with absolute accuracy (without uncertainty).

Combining this idea with the above assertions, the TIHR literature often promotes the statement:
- two quantum variables are compatible, respectively incompatible, as their operators are, respectively are not, commutable; consequently a complete description of a quantum system must be made in terms of a set of mutually compatible variables.

In the same literature one also finds the opinion that:
- two incompatible variables (especially the canonically conjugated ones) are complementary (i.e. mutually exclusive), similarly to the complementarity relation for the corpuscular and wave characteristics of microparticles of atomic size.

The TIHR was initiated by Heisenberg, but later on it was developed and especially promoted by the Copenhagen school guided by N. Bohr. In a first stage the TIHR had a relatively modest motivation, based only on Eqs. (2.1) and (2.2). However, it was largely accepted in scientific and academic communities, partly due to the authority of its promoters. So the establishment of the TIHR as a doctrine began. In a second stage the TIHR partisans introduced a multitude of thought-experimental or theoretical formulae which resemble more or less Eqs. (2.1)-(2.3).
In spite of their (conceptual and/or mathematical) diversity, the respective formulae were declared to be _uncertainty relations_, and their existence and interpretation were regarded as supports for an extended motivation of the TIHR. So, for its partisans, the TIHR was viewed as a well-established and irrefutable doctrine. Such a view was widely promoted in leading publications (especially in textbooks).

In the meantime, the alluded view was confronted with the notification of some defects of the TIHR. But, as a rule, the respective notifications appeared disparately, sometimes in marginal publications and from non-leading authors. So the mentioned defects were not presented as a systematized ensemble, and the TIHR was criticized on certain points but not in its totality. An appreciation viewing somehow the alluded totality was noted, altogether solitarily (Primas 1981). Referring to the post-Copenhagen interpretation of quantum mechanics, it says: 'Heisenberg's uncertainty relations are no longer at the heart of the interpretation but are simple consequences of the basic mathematical formalism.' Here one should remark that, as far as we know, such an appreciation has never been used in order to elucidate the shortcomings of the TIHR doctrine. Moreover, it seems that even in present-day publications regarding the interpretation of quantum mechanics the respective appreciation is not properly taken into account.

In the presented circumstances the TIHR partisans ignored or even denied the alluded defects. Such an attitude was sustained mainly by putting forward thought experiments and/or the authority of the mentioned partisans. But note that, in this way, in most cases the notifications of the TIHR defects were not really counteracted, and the corresponding controversies were not elucidated. For all that, the TIHR survived over the decades with the appearance of an uncontroversial doctrine and became a veritable myth. Undoubted signs of the respective myth are present even today in publications (especially in textbooks) and in the thinking of people (particularly in academic communities).

Here it is interesting to observe Heisenberg's own attitude towards the TIHR story. It is surprising to see in the afferent literature that, although he was the initiator of the TIHR doctrine, Heisenberg was not involved in the subsequent history of the respective doctrine. He did not develop mathematical generalizations or interpretational extensions/sophistications of Eqs. (2.1)-(2.3); nor did he participate in the controversies regarding the TIHR defects. Probably that is why, in one of his last publications on the HR (Heisenberg, 1977), he did not refer to such developments and controversies but recalled only his thoughts connected with the beginning of the TIHR history. Can the alluded attitude be regarded as evidence for the supposition that in fact Heisenberg was conscious of the insurmountable defects of the TIHR? A pertinent answer to such a question is expected to be (eventually) provided by the publication of all volumes of a planned monograph (Mehra and Rechenberg, 1982) due, in part, to one of Heisenberg's last collaborators.

With the Heisenberg case one discloses a particularity in the attitude of many scientists who promoted the TIHR. As individuals, the respective scientists did not regard the TIHR as a whole doctrine, but each argued for only a few of its elements and ignored almost all of the defects.
Often their considerations were amalgamated with ideas which do not pertain strictly to the TIHR. That is why, probably, by the term _TIHR partisans_ it is more adequate to understand a fuzzy class of people rather than a rigorously delimited group of scientists.

Now, looking back over time, we believe that the verity and true significance of the TIHR defects still remain open questions which require elucidation. Such a requirement implies the necessity of an argued and complete re-evaluation of the TIHR; there then directly appears the need to search for a genuine reinterpretation of the HR. The alluded beliefs will guide our investigations in the following sections.

The TIHR introduced its main elements, presented above, by appealing to some starting considerations about Eqs. (2.1)-(2.3). The appeals viewed the scientific achievements from the first years of quantum mechanics. Here, for a correct (re)evaluation of the TIHR, it is the place to recall briefly the respective considerations.

Firstly, it must be noted that Eqs. (2.1) were introduced by using the wave characteristics of quantum microparticles. Consequently, the quantum measurements were regarded by similitude with the optical ones. But in the mentioned years the performances of optical measurements were restricted by the classical limitative criteria of resolution (due to Abbe and Rayleigh). Then the TIHR promoted as a starting consideration the following point:
- the estimation of performances, respectively of uncertainties, for the quantum measurements must be done by using the alluded limitative criteria, transposed into the quantum framework through the de Broglie formula $\lambda = h/p$ ($\lambda$ = wavelength).

By means of this consideration the TIHR partisans obtained relations similar to Eqs. (2.1) for all the thought experiments promoted by them.

Referring to Eqs. (2.2)-(2.3), the starting considerations promoted by the TIHR can be resumed as follows. The state of a quantum microparticle is described by the wave function $\psi = \psi(q)$, regarded as a vector in a Hilbert space (q denotes the set of specific orbital variables). In the respective vectorial space the scalar product $(\psi_1, \psi_2)$ of two functions $\psi_1$ and $\psi_2$ is given by
$$(\psi_1, \psi_2) = \int_{\Omega} \psi_1^{*}\, \psi_2\, d\Omega \qquad (4.1)$$
where $\psi_1^{*}$ is the complex conjugate of $\psi_1$, whereas $\Omega$ and $d\Omega$ denote the accessible, respectively infinitesimal, domains in q-space. A variable A is described by the operator $\hat{A}$, and its expected (mean) value is defined by
$$\langle A \rangle_{\psi} = \left(\psi, \hat{A}\psi\right) \qquad (4.2)$$
The quantity $\Delta_{\psi}A$ from Eqs. (2.2)-(2.3) is defined as follows:
$$\Delta_{\psi}A = \sqrt{\left\langle\left(\hat{A} - \langle A\rangle_{\psi}\right)^{2}\right\rangle_{\psi}} \qquad (4.3)$$
Then, for two variables A and B, the following evident relation was appealed to,
$$\left\|\left(\lambda\,\delta_{\psi}\hat{A} + i\,\delta_{\psi}\hat{B}\right)\psi\right\|^{2} \ge 0 \qquad (4.4)$$
with $\lambda$ an arbitrary real parameter and $\delta_{\psi}\hat{A} = \hat{A} - \langle A\rangle_{\psi}$. In the TIHR literature this relation is transcribed into the formula (4.5), which is equivalent to the relation
$$\lambda^{2}\,(\Delta_{\psi}A)^{2} + \lambda\left\langle i\left[\hat{A},\hat{B}\right]_{-}\right\rangle_{\psi} + (\Delta_{\psi}B)^{2} \ge 0 \qquad (4.6)$$
where $[\hat{A},\hat{B}]_{-} = \hat{A}\hat{B} - \hat{B}\hat{A}$. More generally, for a set of variables one appeals to the non-negativity of the determinant whose elements are the correlations of the fluctuation operators $\delta_{\psi}\hat{A}_j$ (Eq. (5.6)$_{cr}$). Here, and in the following notations, the index cr added to the number of a formula shows that the respective formula belongs to a general family of similar _correlation relations_ (cr).
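Written out, the chain from Eq. (4.4) to the theoretical HR (2.3) runs as follows. This is our own compact restatement of the standard argument; it tacitly uses the Hermiticity-type conditions isolated later as Eqs. (5.8):

```latex
% Non-negativity of a squared norm, for any real \lambda:
0 \le \left\| \left( \lambda\,\delta_{\psi}\hat{A} + i\,\delta_{\psi}\hat{B} \right)\psi \right\|^{2}
  = \lambda^{2}\,(\Delta_{\psi}A)^{2}
    + \lambda \left\langle i\,[\hat{A},\hat{B}]_{-} \right\rangle_{\psi}
    + (\Delta_{\psi}B)^{2} .
% A real quadratic in \lambda that is non-negative for every \lambda must
% have a non-positive discriminant, hence
\left\langle i\,[\hat{A},\hat{B}]_{-} \right\rangle_{\psi}^{2}
  \le 4\,(\Delta_{\psi}A)^{2}\,(\Delta_{\psi}B)^{2}
\quad\Longrightarrow\quad
\Delta_{\psi}A \cdot \Delta_{\psi}B \ \ge\ \tfrac{1}{2}
  \left| \left\langle [\hat{A},\hat{B}]_{-} \right\rangle_{\psi} \right| .
```

The role of the conditions (5.8) in the expansion step is exactly what the defects discussed below turn on.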
For two operators $\hat{A}$ and $\hat{B}$, Eq. (5.6) gives the two-variable correlation relation
$$\Delta_{\psi}A \cdot \Delta_{\psi}B \ge \left|\left(\delta_{\psi}\hat{A}\,\psi,\ \delta_{\psi}\hat{B}\,\psi\right)\right| \qquad (5.7)_{cr}$$
If the two operators satisfy the conditions
$$\left(\psi, \hat{A}\hat{B}\,\psi\right) = \left(\hat{A}\psi, \hat{B}\psi\right), \qquad \left(\psi, \hat{B}\hat{A}\,\psi\right) = \left(\hat{B}\psi, \hat{A}\psi\right) \qquad (5.8)$$
Eq. (5.6) gives directly
$$\Delta_{\psi}A \cdot \Delta_{\psi}B \ge \left|\left\langle \delta_{\psi}\hat{A}\ \delta_{\psi}\hat{B} \right\rangle_{\psi}\right| \qquad (5.9)_{cr}$$
When Eq. (5.8) is satisfied we also have
$$\left\langle \delta_{\psi}\hat{A}\ \delta_{\psi}\hat{B} \right\rangle_{\psi} = \frac{1}{2}\left\langle\left[\delta_{\psi}\hat{A},\ \delta_{\psi}\hat{B}\right]_{+}\right\rangle_{\psi} - \frac{i}{2}\left\langle i\left[\hat{A},\hat{B}\right]_{-}\right\rangle_{\psi} \qquad (5.10)$$
where $[\hat{A},\hat{B}]_{\pm} = \hat{A}\hat{B} \pm \hat{B}\hat{A}$. From Eqs. (5.9)-(5.10) one obtains, among others, the relation of HR form (5.13)$_{cr}$, together with a whole family of analogous relations, Eqs. (5.11)-(5.30), for quantum variables as well as for classical (statistical) stochastic variables.

A related set of relations follows from Fourier analysis. For a wave function defined on a finite range and taking the same values on the boundaries, it satisfies the corresponding Fourier expansion, with the Fourier coefficients replacing the continuous transform. The squared modulus of the wave function can then be interpreted as the probability density for the continuous variable, while the squared moduli of the Fourier coefficients signify the probabilities associated with the discrete conjugate variable. In such a case, instead of Eqs. (5.33) and (5.34), one must write the corresponding discrete-continuous versions, and instead of Eq. (5.35) one obtains the relation (5.41)$_{cr}$. This latter formula is applicable in some cases to the variables azimuthal angle $\varphi$ and angular momentum $L_z$ (see Sec. VI.F below); in such cases the wave function $\psi(\varphi)$ is periodic, with $\psi(0) = \psi(2\pi)$.

We end this section with a notification regarding the relations expressed by Eqs. (5.6), (5.9), (5.11)-(5.14), (5.16)-(5.18), (5.21)-(5.23), (5.25)-(5.29), (5.30), (5.35) and (5.41). From a mathematical viewpoint, all the respective relations refer to variables with stochastic characteristics. Also, by their mathematical significance, they belong to the same family of similar formulae, which can be called _correlation relations_ (cr); that is why we added the index cr to the numbers of all the respective relations.

With the above mentioned facts we can now proceed to present the defects of the TIHR. Note that the respective defects appear not as a systematized ensemble but rather as a dispersed set of (relatively) distinct cases; that is why our approach does not aim at a precisely motivated order of exposition. We mostly wish to show that, taken together, the set of the alluded defects irrefutably incriminates all the main elements of the TIHR reviewed above. Our presentation then includes the defects revealed in the following subsections.

Through the assertions of the TIHR, the thought-experimental, respectively theoretical, quantities $\Delta_{te}A$ and $\Delta_{\psi}A$ from the HR are termed measuring uncertainties. But the respective term appears groundless if it is regarded comparatively with a number of facts which we present here. Firstly, note that a minute examination of all the thought experiments referred to in connection with Eq. (2.1) does not justify the mentioned term for one of the implied quantities. So $\Delta_{te}p$ (in the coordinate q - momentum p case) and $\Delta_{te}E$ (in the time t - energy E case) represent the jumps of the respective variable from an initial value (before the measurement) to a final value (after the measurement). It then results that these quantities cannot be regarded as uncertainties (i.e.
measuring parameters) with respect to the measured state, which is the initial one. This is because (Albertson, 1963): 'it seems essential to the notion of a measurement that it answer a question about the given situation existing before the measurement. Whether the measurement leaves the measured system unchanged or brings about a new and different state of that system is a second and independent question.' The remaining quantities from Eq. (2.1) are also in the situation of infringing the term 'measuring uncertainties' in the sense attributed by the TIHR; the respective situation is generated by the same facts which will be presented in a later section.

As regards the theoretical quantity $\Delta_{\psi}A$, the following observations must be taken into account. $\Delta_{\psi}A$ depends only on the theoretical model (wave function $\psi$) of the considered microparticle, but not on the characteristics of the measurements on the respective microparticle. Note, in particular, that the value of $\Delta_{\psi}A$ can be modified only by changing the microparticle state (i.e. its wave function $\psi$). Comparatively, the measuring uncertainties can be modified by improving (or worsening) the performances of the experimental techniques, even if the state of the measured microparticle remains unchanged.

In connection with the term 'uncertainty' it is the place to point out also the following remarks. In quantum mechanics a variable is described by an adequate operator which (Gudder, 1979) is a generalized stochastic (or random) quantity. The probabilistic (stochastic) characteristics of the considered microparticle are incorporated in its wave function $\psi$. The expected value $\langle A\rangle_{\psi}$ and the standard deviation $\Delta_{\psi}A$ then appear as quantities of an exclusively intrinsic nature for the respective microparticle. In such a situation a measurement must consist of a _statistical sampling_, not of a _solitary detection_ (determination). The respective sampling gives an output set of data on the recorder of the measuring instrument. From the mentioned data one obtains the out-mean $\langle A\rangle_{out}$ and the out-deviation $\Delta_{out}A$. It then results that in fact the measuring uncertainties must be described by means of the differences $\langle A\rangle_{out} - \langle A\rangle_{\psi}$ and $\Delta_{out}A - \Delta_{\psi}A$, but not through the quantity $\Delta_{\psi}A$ alone. (For other comments about the measurements so regarded, see Sec. IX below.)

The above mentioned facts prove the groundlessness of the term 'uncertainty' in connection with the quantities $\Delta_{te}A$ and $\Delta_{\psi}A$; such a proof must be reported as a defect of the TIHR.

_Observation_: sometimes, particularly in old texts, the quantity $\Delta_{\psi}A$ is termed the 'indeterminacy' of the quantum variable A. If such a term is viewed to denote the non-deterministic or random character of A, it can be accepted for a natural interpretation of the HR. So the HR given by Eqs. (2.3)/(4.8)/(5.13) and their generalizations from Eq. (5.6) appear to be proper candidates for the denomination 'indeterminacy relations'. But then it seems strange for the respective relations to be considered crucial and fundamental formulae (as in the TIHR conception), because in the non-quantum branches of the probabilistic sciences an entirely similar indeterminacy relation is regarded only as a modest, and by no means fundamental, formula. The alluded non-quantum indeterminacy relation expresses simply the fact that the correlation coefficient of two stochastic variables takes values within the range $[-1, +1]$.
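The statistical-sampling view of measurement presented above is easy to make concrete. The following is a small illustrative sketch (our own function and variable names), in which the two measuring uncertainties are exactly the differences between the recorded statistics and the intrinsic theoretical quantities:

```python
import numpy as np

def measurement_uncertainties(outputs, mean_psi, sigma_psi):
    """Characterize a measurement as a statistical sampling.

    outputs:   1-D array of recorded values from repeated detections
    mean_psi:  intrinsic expected value <A>_psi predicted by theory
    sigma_psi: intrinsic standard deviation Delta_psi A predicted by theory
    Returns the two measuring uncertainties discussed in the text."""
    out_mean = outputs.mean()          # out-mean from the recorded data
    out_dev = outputs.std(ddof=1)      # out-deviation (sample standard deviation)
    return out_mean - mean_psi, out_dev - sigma_psi
```

On this view a perfect instrument is one for which both differences vanish, independently of how large the intrinsic fluctuation parameter $\Delta_{\psi}A$ itself happens to be.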
in this respectwe quote the following example ( dumitru , 1988 ) : let be a quantum microparticle moving in a two - dimensional potential well , characterized by the potential energy =0 for and = otherwise .the corresponding wave functions are as two commutable variables and with { + } \right\rangle _ \psi \right| ] ( ii ) the states with and coincide and ( iii ) the states with and are closely adjacent .moreover for the mentioned systems one considers only the states which are nondegenerate in respect with .this means that each of such state correspond to a distinct ( eigen- ) value of the alluded states are described by the wave functions : then , for the demarcated states , one finds , and ( 6.4 ) gives the absurd results such a result drives tihr in a evident deadlock . for avoiding the alluded deadlock the thr partisans advocated the following idea: in the case the theoretical hr must not have the ordinary form of eq .( 6.4 ) but an adjusted version ,concordant with tihr vision .thus the following lot of adjusted hr was invented : \eqnum{6.9}\ ] ] in eq .( 6.9 ) is a real nonnegative parameter . in eq .( 6.10 ) and represents the minimum respectively the maximum values of where . ] and =two arbitrary integer numbers with . in eq .( 6.13 ) e a complicated expression of connected with the eqs .( 6.7)-(6.14 ) and the afferent tihr debates the following facts are easily observed . from a subjective view , in tihr literature , none of the eqs .( 6.7)-(6.14 ) is unanimously accepted as the true version for theoretical hr . in a objective view the eqs .( 6.7)-(6.14 ) appear as a set of completely dissimilar formulae .this because they are not mutually equivalent and each of them is applicable only in particular and different circumstances .moreover it is doubtful that in the cases of eqs .( 6.7)-(6.13 ) the respective circumstances should have in fact real physical significances .another aspect from an objective view is the fact that the eqs .( 6.7)-(6.13 ) have no correct support in the natural ( non - adjusted ) mathematical formalism of quantum mechanics . only eq .( 6.14 ) has such a support through the eq .the alluded observations evince clearly the persistence of tihr deadlock as regards the case .but in spite of the mentioned evidence , in our days almost all of the publications seem to cultivate the belief that the problems of case are solved by the adjusted eqs .( 6.7)-(6.14 ) . in the tihr literature the mentioned beliefis often associated with a more inciting opinion . according to the respective opinion the ordinary theoretical hr expressed by eq .( 6.4 ) is incorrect for any physical system and , consequently , the respective relation must be prohibited .curiously , through the alluded association , tihr partisans seem to ignore the thought experimental eq .( 6.5 ) . butnote that the simple removal of the respective ignorance can not solve the tihr deadlock regarding the case .moreover , such a removal is detrimental for tihr because the eq .( 6.5 ) is only a conversion of eq .( 2.1 ) which is an unjustified relation ( see sec .vi.b ) as regards the tihr attitude towards the eq .( 6.4 ) there is another curious ignorance / omission . 
In the afferent literature, any discussion of the $L_z$-degenerate states of the circular systems is omitted. Such a state is associated with a set of eigenvalues of $\hat{L}_z$ and is described by a linear superposition of eigenfunctions of $\hat{L}_z$. As an example one can take the state of a free rigid rotator with a given energy ($l$ = orbital quantum number, $I$ = moment of inertia). The respective state is described by the wave function
$$\psi = \sum_{m=-l}^{l} c_m\, Y_{lm}(\theta, \varphi) \qquad (6.15)$$
where $Y_{lm}$ are the spherical functions, $l$ and $m$ denote the orbital, respectively magnetic, quantum numbers, while $c_m$ are arbitrary complex constants which satisfy the condition $\sum_m |c_m|^2 = 1$. With respect to the wave function given in Eq. (6.15), for the operators $\hat{L}_z$ and $\hat{\varphi}$ one obtains the expressions (6.16)-(6.17) for the corresponding means and dispersions. With these expressions it is possible for the HR from Eq. (6.4) to be satisfied (for more details see the discussion of Eq. (8.3) in Sec. VIII). But surprisingly such a possibility was not examined by the TIHR partisans, who persevere in opining that Eq. (6.4) must be prohibited as incorrect with respect to any physical situation. We think that such an attitude has to be considered as a defect of the TIHR doctrine.

Contrary to the TIHR partisans' opinion about the HR given by Eq. (6.4), it is easy to see (Dumitru, 1988, 1991) that the respective relation remains rigorously valid at least in the case of one quantum system. The respective system is a quantum torsion pendulum (QTP) oscillating around the z-axis. Such a QTP is completely analogous with the well-known (recti)linear oscillator. The states of the QTP are described by wave functions of the form
$$\psi_n(\varphi) \propto \exp\!\left(-\frac{I\omega\,\varphi^2}{2\hbar}\right) H_n\!\left(\varphi\sqrt{\tfrac{I\omega}{\hbar}}\right) \qquad (6.18)$$
with $\varphi$ = azimuthal angle, $I$ = moment of inertia, $\omega$ = angular frequency, $n = 0, 1, 2, \ldots$ = the oscillation quantum number, and $H_n$ the Hermite polynomials. By its properties the QTP is $\infty$-torsional ($\infty$-circular or azimuthally infinite). This means that for it: (i) $\varphi$ ranges over the whole real axis; (ii) the states with $\varphi$ and $\varphi + 2\pi$ do not coincide; and (iii) the states with $\varphi = 0$ and $\varphi = 2\pi$ are not closely adjacent. In the case of the QTP, for the variables $L_z$ and $\varphi$, described by the operators $\hat{L}_z$ and $\hat{\varphi}$, one obtains the expressions (6.19) for the fluctuation parameters, with which the theoretical HR is satisfied in the ordinary/common form given by Eq. (6.4). So the existence of the QTP example invalidates the above mentioned opinion of the TIHR partisans about the HR from Eq. (6.4), and the alluded invalidation makes the deadlock of the TIHR with respect to the $L_z$-$\varphi$ case even deeper. All the above mentioned deadlocks of the TIHR doctrine in connection with the pair $L_z$-$\varphi$ must be reported as indubitable defects of the respective doctrine.

Another case which drove the TIHR into deadlock is that of the pair N-$\phi$ (number-phase), connected with the study of the quantum oscillator. N represents the quantum oscillation number, described by the operator $\hat{N} = \hat{a}^{\dagger}\hat{a}$ (where $\hat{a}^{\dagger}$ and $\hat{a}$ are the known ladder operators), while $\phi$ is taken as the variable conjugated with N. Often, if the oscillator is considered as radiative, N and $\phi$ are regarded as the number, respectively the phase, of the radiated particles (photons or phonons). In the $\phi$-representation we have
$$\left[\hat{N}, \hat{\phi}\right]_{-} = i \qquad (6.20)$$
Note that in the mentioned representation the states under consideration are periodic in $\phi$ (in a similar way with the circular states discussed above in connection with the $L_z$-$\varphi$ case). The respective states are described by the wave functions (6.21). For the N-$\phi$ case the TIHR doctrine requires that, through Eq. (2.3), one should have the ordinary relation
$$\Delta_{\psi}N \cdot \Delta_{\psi}\phi \ge \frac{1}{2}\left|\left\langle\left[\hat{N},\hat{\phi}\right]_{-}\right\rangle_{\psi}\right| \qquad (6.22)$$
But it is easy to see that this relation is incorrect, because with (6.20) and (6.21) one obtains a vanishing left-hand side together with a non-null right-hand side. The incorrectness of Eq.
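Before proceeding, it is worth making the QTP statement above concrete. The announced analogy with the linear oscillator reduces the check of Eq. (6.4) to the standard oscillator dispersions; the following lines are our restatement under the formal substitutions $x \to \varphi$, $m \to I$, $p \to L_z$:

```latex
% For the energy eigenstates (6.18), by direct analogy with the linear
% oscillator one has, for every n:
(\Delta_{\psi}\varphi)^{2} = \frac{\hbar}{I\omega}\left(n + \tfrac{1}{2}\right),
\qquad
(\Delta_{\psi}L_{z})^{2} = \hbar\, I\omega\left(n + \tfrac{1}{2}\right),
% so that the product satisfies the ordinary relation (6.4):
\Delta_{\psi}\varphi \cdot \Delta_{\psi}L_{z}
  = \hbar\left(n + \tfrac{1}{2}\right) \ \ge\ \frac{\hbar}{2}.
```

Equality holds for the ground state $n = 0$, exactly as for the Gaussian case of Eq. (2.2).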
(6.22) drives the TIHR into another deadlock. With the aim of avoiding the respective deadlock, the TIHR partisans promoted the idea of replacing Eq. (6.22) with adjusted relations concordant with the TIHR doctrine. So in the literature (Fain and Khanin, 1965; Carruthers and Nieto, 1968; Davydov, 1973; Opatrny, 1995; Lindner et al., 1996) a few adjusted relations were promoted, Eqs. (6.23)-(6.25), the last of which bounds the product of the dispersions from below by
$$\frac{\hbar^{2}}{4}\left[1 - \frac{3}{\pi^{2}}\left(\Delta_{\psi}\phi\right)^{2}\right] \qquad (6.25)$$
with the new quantities appearing in Eqs. (6.23)-(6.25) defined through corresponding auxiliary relations. It is the place here to note the following observations. The replacement of the ordinary Eq. (6.22) with the adjusted Eqs. (6.23)-(6.25) is only a redundant mathematical operation without any true utility for physics. This happens because, for the interests of physics, the quantities of real utility are the observables N and $\phi$, and not the above mentioned adjusted quantities. So, if the interests of physics are connected with the particles (photons or phonons) radiated by quantum oscillators, the real measuring instruments are counters, respectively polarizers. Such instruments measure directly N and $\phi$, not the adjusted quantities; so the measuring uncertainties, appealed to by the TIHR as corner-stone pieces, must regard N and $\phi$ and not their adjusted substitutes. The above noted observations show that relations like Eqs. (6.23)-(6.25) (or other adjusted formulae) do not solve the nonconformity of the pair N-$\phi$ with the TIHR doctrine. The respective nonconformity remains an open question which cannot be solved by means of inner elements of the TIHR. This means that the N-$\phi$ case appears as an irrefutable defect of the TIHR.

The pair energy E - time t was, and still is, the subject of many debates in the literature (Aharonov and Bohm, 1961, 1964; Fock, 1962; Alcook, 1969; Bunge, 1970; Fujiwara, 1970; Surdin, 1973; Kijowski, 1974; Bauer and Mello, 1978; Vorontsov, 1981; Kobe and Aguilera-Navarro, 1994). The respective debates originate in the following facts: on the one hand, E and t are considered as (canonically) conjugated variables, whose ordinary operators satisfy the commutation relation $[\hat{E},\hat{t}]_{-} = i\hbar$.

A further deadlock concerns the so-called macroscopic operators of quantum statistical physics. By substituting the common operators with quasi-diagonal 'macroscopic' operators, instead of Eq. (5.29) one obtains the adjusted Eq. (6.44). In this relation the TIHR partisans see the fact that, simultaneously, the corresponding uncertainties can be arbitrarily small; such a view is concordant with the main concept of the TIHR. Today many scientists believe that the adjusted Eq. (6.44) solves all the troubles of the TIHR generated by Eq. (5.29). It is easy to remark that the mentioned belief proves to be unfounded if one takes into account the following observations:

(i) Equation (5.29) cannot be abrogated unless the entire mathematical apparatus of quantum statistical physics is abrogated too. More exactly, the substitution of the common operators with macroscopic operators is a useless invention, because in the practical domain of quantum statistical physics (see for example Tyablikov, 1975) the common operators, and not the macroscopic ones, are used.

(ii) The above mentioned substitution of operators does not automatically metamorphose Eq. (5.29) into Eq. (6.44), because two operators that are quasi-diagonal (in the sense required by the TIHR partisans) can still be non-commutable.
As an example in this sense we refer to a macroscopic system formed by a large number N of independent 1/2-spins (Dumitru, 1988, 1989). The hinted macroscopic variables are the components of the magnetization. The corresponding operators are
$$\hat{M}_{\alpha} = \gamma \sum_{n=1}^{N} \frac{\hbar}{2}\,\hat{\sigma}_{\alpha}^{(n)}, \qquad \alpha = x, y, z \qquad (6.45)$$
where $\gamma$ is the magneto-mechanical factor and $\hat{\sigma}_{\alpha}^{(n)}$ are the Pauli matrices associated with the n-th spin (microparticle). One can see that the operators defined by Eqs. (6.45) are quasi-diagonal in the sense required for macroscopic operators, but they are not commutable among themselves, since for example $[\hat{M}_x, \hat{M}_y]_{-} = i\hbar\gamma\,\hat{M}_z$.

(iii) The alluded substitution of operators does not solve the troubles of the TIHR even if the macroscopic operators are commutable. This is because Eq. (5.29) is only a truncated version of the more general Eq. (5.27). By the mentioned substitution, in fact, one must consider the metamorphosis of Eq. (5.27); in the resulting formula, even if the commutator term vanishes, there remains the correlation term, i.e. a quantity which can have a non-null value. It then results that the product of the macroscopic fluctuation parameters can have a non-null lower bound. But such a result opposes the agreements of the TIHR. So we conclude that in fact the mentioned macroscopic operators cannot solve the TIHR deficiencies connected with Eq. (5.29). This means that the respective deficiencies remain unsolved and must be reported as another insurmountable defect of the TIHR.

A mindful examination of all the details of the facts discussed in the previous section guides us to the following remarks:
- taken together, as an ensemble, the above presented defects incriminate and invalidate each of the main elements of the TIHR;
- the mentioned defects are insurmountable for the TIHR doctrine, because they cannot be avoided or refuted by means of credible arguments from within the framework of the respective doctrine.

The two remarks directly reveal the indubitable failure of the TIHR, which now appears as an unjustified doctrine. The sole reasonable attitude is then to abandon the TIHR and, as a first urgency, to search for the genuine significance (interpretation) of the HR. As a second urgency, probably, a re-evaluation is necessary of those problems in which, by its implications, the TIHR persists as a source of misconceptions and confusions.

A veritable search regarding the genuine significance of the HR must be founded on the true meaning of the elements implied in the introduction of the respective relations. We then have to take into account the following considerations. Firstly, we opine that thought-experimental HR of the type given by Eq. (2.1) must be omitted from the discussions. This is because, as was pointed out (see Sec. VI.B and the comments about Eq. (5.1)), such relations have a circumstantial character, dependent on the performances of the supposed measuring experiment. Also, in the respective relations the involved variables are not regarded as stochastic quantities, as the true quantum variables are. So the equations of (2.1)-type have no noticeable importance for the conceptual foundation of quantum mechanics. Moreover, the usage of such relations in various pseudo-demonstrations (Tarasov, 1980) has no real scientific value. That is why we opine that the thought-experimental HR of the type given by Eq. (2.1) must be completely excluded from physics. We resume the above opinions under the following point:
- the thought-experimental HR like Eq. (2.1) must be disregarded, being fictitious formulae without a true physical significance.
As regards the theoretical HR of the kind illustrated by Eqs. (2.2)/(2.3), the situation is completely different. The respective HR are mathematically justified, for precisely defined conditions, within the theoretical framework of quantum mechanics. This means that the physical significance (interpretation) of the theoretical HR is a question of notifiable importance. Now note that the mentioned HR belong to the large family of correlation relations reviewed in Sec. V. This fact suggests that the genuine significance (interpretation) of the theoretical HR must be completely similar to that of the mentioned correlation relations. We opine that the alluded suggestion must be taken into account with maximum consideration.

Then, firstly, we remark that all of the mentioned correlation relations refer to variables with stochastic characteristics. Such variables are specific both for quantum and for non-quantum (i.e. classical) physical systems. Secondly, let us regard the quantities, standard deviations and correlations, which appear in the corresponding correlation relations from classical statistical physics. In present-day science the respective quantities are unanimously interpreted as _fluctuation parameters_ of the considered variables. It is also clearly accepted that the respective parameters describe intrinsic properties of the viewed systems, and not characteristics (i.e. uncertainties) of the measurements on the respective properties. In this sense, in some scientific domains, such as noise spectroscopy (Weissman, 1981), the evaluation of such quantities is regarded as a tool for the investigation of the intrinsic properties of the physical systems. In the classical conception the description of the intrinsic properties of the physical systems is supposed not to be amalgamated with elements regarding the measuring uncertainties. The alluded description is made, in terms of corresponding physical variables, within the framework of the known chapters of classical physics (i.e. mechanics, electrodynamics, optics, thermodynamics and statistical physics); for the mentioned variables, when it is the case, the description approaches also the fluctuation characteristics. Otherwise, in the classical conception the measuring uncertainties/errors are studied within the framework of error analysis. Note that the respective analysis is a scientific branch which is independent of, and additional to, the mentioned chapters of physics (Worthing and Geffner, 1955).

The above mentioned aspects about the classical quantities must be taken into account for the specification of the genuine significance (interpretation) of the quantum quantities, as well as of the theoretical HR. We think that the respective specification can be structured through the following points:
- the quantum variables (operators) must be regarded as stochastic quantities which admit fluctuations;
- according to the usual quantum mechanics, the time is not a stochastic variable but a deterministic quantity which does not admit fluctuations;
- the theoretical quantities $\Delta_{\psi}A$, $\langle\delta_{\psi}\hat{A}\,\delta_{\psi}\hat{B}\rangle_{\psi}$ and $\langle[\hat{A},\hat{B}]_{-}\rangle_{\psi}$ describe quantum fluctuations, i.e. intrinsic properties of the considered systems, and not measuring uncertainties.

With these specifications, the special cases discussed in Sec. VI come into normality. So, for a pair involving a uni-variable eigenstate, the HR from Eq. (2.3) and the related TIHR assertions are simply not applicable.
It must be reminded that an early, modest notification (Davidson, 1965) of the TIHR shortcomings with respect to the uni-variable eigenstates seems to have been ignored by the TIHR partisans. Now we can see that the alluded inapplicability of eq. (2.3) is generated by the fact that for the mentioned uni-variable eigenstates the eqs. (5.8) are not satisfied. This is because, if for such states eqs. (5.8) were satisfied, then for an eigenstate $\psi$ of $\widehat{A}$ with eigenvalue $a$ we would have to admit the following row of relations:
$$a\left\langle B\right\rangle_\psi = \left(\psi, \widehat{A}\widehat{B}\psi\right) = \left\langle\left[\widehat{A},\widehat{B}\right]_-\right\rangle_\psi + \left(\psi, \widehat{B}\widehat{A}\psi\right) = \left\langle\left[\widehat{A},\widehat{B}\right]_-\right\rangle_\psi + a\left\langle B\right\rangle_\psi \eqnum{8.1}$$
i.e. the absurd result $\left\langle[\widehat{A},\widehat{B}]_-\right\rangle_\psi = 0$ for a pair of observables with a non-null commutator. Add here the fact that for the discussed states, instead of eq. (2.3), the more general eq. (5.7) remains valid (in the trivial form 0 = 0). It is quite evident that the situations of uni-variable eigenstates come back into normality if $\Delta_\psi A$ and $\Delta_\psi B$ are regarded as parameters which describe the quantum fluctuations. Then, in such situations, $A$ respectively $B$ do not have, respectively do have, fluctuations (i.e. stochastic characteristics).

The situation described by the wave functions given by eqs. (6.15) must be discussed separately. Firstly it necessitates a confrontation with the conditions expressed by eqs. (5.8). In this sense we note that for the respective situation one obtains a relation of the form
$$\left(\psi, \widehat{A}\widehat{B}\psi\right) - \left(\widehat{A}\psi, \widehat{B}\psi\right) = \Im\left\{z\right\} \eqnum{8.2}$$
where $\Im\{z\}$ denotes the imaginary part of the complex quantity $z$. Then one observes that in the cases when the right-hand term in eq. (8.2) is null the variables satisfy eqs. (5.8). In such cases eqs. (2.3)/(5.13)/(6.4) are applicable. In the other cases (when the mentioned term from eq. (8.2) is non-null) eqs. (5.8) are infringed, and for the respective pair one must apply only eqs. (5.7) or (5.14). Note that the situations described by the wave functions from eqs. (6.15) can also be approached by using Fourier analysis procedures. So, for the mentioned case and situations, similarly with eq. (5.41), one obtains a corresponding relation of HR type. As regards the case of QTP described by the wave functions given by eqs. (6.18), we note the following observations. In such a case the variables satisfy the conditions expressed by eqs. (5.8). Consequently, for the respective variables, eqs. (2.3)/(5.13)/(6.4) are applicable. But the mentioned equations must be considered as resulting from the more general eqs. (5.7) or (5.14), which refer to the quantum fluctuations.

The problems with the pair energy-time mentioned in Sec. VI.H become senseless if it is accepted, in accordance with point P2 above, that the time is a deterministic and not a stochastic quantity. The relations mentioned in Sections VI.I and VI.J become simple generalizations of the theoretical HR, without interpretational shortcomings. The relations discussed in Secs. VI.K and VI.L are nothing but macroscopic analogues of the quantum theoretical HR; also, the respective relations do not imply any interpretational shortcoming. Moreover, the so-called macroscopic operators discussed in Sec. VI.L appear as pure inventions without any physical utility or significance.

A reply addendum regarding the $L_z$-$\varphi$ pair. Our first opinions about the $L_z$-$\varphi$ pair in connection with TIHR were presented in earlier works (Dumitru, 1977, 1980). Perhaps the respective presentations were more modest and less complete; e.g.
we did not use at all the arguments resulting from the above mentioned examples of $L_z$-degenerate states or of QTP. Nevertheless, we think that the alluded opinions were correct in their essence. However, in a review (Schroeck Jr., 1982) the respective opinions were judged as being erroneous. In this addendum, by using some of the above discussed facts, we wish to reply to the mentioned judgements. The main error reproached to us by Prof. Schroeck is: "most of the results stated concerning angular momentum and angle operators (including the supposed canonical commutation relations) are false, this being a consequence of not using Riemann-Stieltjes integration theory, which is necessitated since the angle function has a jump discontinuity". In order to answer this reproach we resort to the following specifications: (i) One can see that the respective reproach is founded, in fact, on the idea that the variable $\varphi$ has a jump (of magnitude $2\pi$) at $\varphi = 2\pi$ or, equivalently, at $\varphi = 0$, and, consequently, on the commutation relation $[\widehat{L}_z, \widehat{\varphi}]_- = -i\hbar + i\hbar\,2\pi\,\delta(\varphi)$. (iii) Here is the place to add also the cases, presented in Sec. VI.F, of QTP and of the $L_z$-degenerate states (the latter ones for the situations with a non-null term in the right-hand side of eq. (8.2)). For the respective cases we must consider another commutation relation, namely $[\widehat{L}_z, \widehat{\varphi}]_- = -i\hbar$. The implementation of the respective relation in the mentioned cases, for obtaining theoretical formulae of HR type, must be made by taking into account the natural (physical) range of $\varphi$ as well as the fulfillment of eqs. (5.8). The ensemble of the above noted specifications proves as unfounded the reproaches of Professor F. E. Schroeck Jr. regarding our opinions about the $L_z$-$\varphi$ pair.

The facts presented in this section show that all the problems directly connected with the interpretation of HR can be solved by the here-proposed genuine reinterpretation of the respective relations. But, as is known, TIHR also generated disputes about topics which are adjacent or additional with respect to the alluded problems. Several such topics will be briefly approached in the next sections.

As was mentioned in Sec. II, the story of HR started with the primary questions regarding the measuring uncertainties. During the years the respective questions, and more generally the description of measurements, generated a large number of studies (a good list of references in this sense can be obtained from the works: Yanase et al., 1978; Braginsky and Khalili, 1992; Busch et al., 1996; Hay and Peres, 1998; Sturzu, 1999, and surely from the bibliographical publications indicated at the end of Sec. I). It is surprising to see that many of the above alluded studies are contaminated, one way or another, by ideas pertaining to the TIHR doctrine. After the above exposed argumentation against TIHR, here we wish to present a few elements of a somewhat new reconsideration of the problems regarding the description of measurements (including measuring uncertainties). We think that, even if modestly limited, such a reconsideration can be of non-trivial interest for present-day science. This is because we agree with the opinion (Primas and Müller-Herold, 1978) that, in fact, there does not yet exist a fundamental theory of actual measuring instruments. Firstly, we note that, in our opinion, the alluded questions are of real significance for the studies of physical systems. This fact is due to the essential role of measurements (i.e.
of quantitatively evaluated experiments) for the mentioned studies. Moreover, we think that, in principle, the alluded role appears both in quantum and in non-quantum physics. Then, in our announced reconsideration, we must try to search for natural answers to the alluded questions, as well as to some other (more or less) directly connected problems. For such a purpose we note the remarks under the following points:

(i) As a rule, all measurements, both from macroscopic and microscopic physics, are confronted with measuring uncertainties.

(ii) The respective uncertainties are generated by various factors. Among such factors the most important ones seem to be the intrinsic fluctuations within the experimental devices and the measuring perturbations (due to the interactions of the respective devices with the measured systems).

(iii) A quantitative description of the measuring uncertainties must be made within the framework of an authentic theory of measurements. The respective theory must be independent and additional with respect to the traditional chapters of physics (which describe the intrinsic properties of physical systems).

(iv) The measurement of a stochastic variable should not be reduced to a sole detection. It must be regarded and managed as a statistical sampling (i.e. as a statistical ensemble of detections). Therefore, for such a variable, the finding of a single value from a sole detection does not mean the collapse of the corresponding stochastic characteristics (described by a wave function or by a probability density).

(v) In the spirit of the above remark, the measuring uncertainties of stochastic variables must be described in terms of quantities connected with the afferent statistical sampling, not with solitary detections.

(vi) As regards the above alluded theory of measurements, we agree with the idea (Bunge, 1977b) that it must include some specific elements, not only generic-universal aspects. This is because every experimental apparatus used in real measurements has a well-defined level of performance and a restricted class of utilizations; i.e. it is not a generic-universal (all-purpose) device.

(vii) Together with the mentioned agreement, we opine that the measurement theory can also include some elements/ideas with generic-universal characteristics. One such characteristic is connected with the fact that, in essence, every measurement can be regarded as an acquisition of some information about the measured system.

In the spirit of the latter remark we think that, from a generic-universal viewpoint, a measurement can be described as a process of information transmission, from the measured system to the receiver (recorder or observer). In such a view the measuring apparatus can be represented as a channel for information transmission, whereas the measuring uncertainties can be pictured as an alteration of the processed information. Such an informational approach is applicable for measurements on both macroscopic and microscopic (quantum) systems. Also, the mentioned approach does not contradict the idea of specificity as regards the measurement theory. The respective specificity is implemented in the theory through the concrete models/descriptions of the measured system (information source), of the measuring apparatus (transmission channel), and of the recorder/observer (information receiver).
For an illustration of the above ideas let us refer to the description of the measurement of a single stochastic variable $x$ having a continuous spectrum of values within the range $\Omega$. The respective variable can be of classical kind (such are the macroscopic quantities discussed in connection with eqs. (5.19)-(5.23)) or of a quantum nature (e.g. a Cartesian coordinate of a microparticle). The alluded measurement being regarded as a statistical sampling, its usual task is to find certain global probabilistic parameters of $x$, such as: mean/expected value, standard deviation, or even higher order moments. But the respective parameters are evaluated by means of the elementary probability $dP = \rho(x)\,dx$ of finding the value of $x$ within the infinitesimal range $dx$; here $\rho(x)$ denotes the corresponding probability density. Then the mentioned task can be connected directly to $\rho(x)$. Related to the measured system's own properties, $\rho$ has an 'in' (input) expression $\rho_{in}(x)$. So viewed, $\rho_{in}$ is assimilable with: (a) a usual distribution from classical statistical physics, in the case of a macroscopic system, respectively (b) the quantity $|\Psi|^2$ ($\Psi$ = the corresponding wave function), in the case of a quantum microparticle. The fact that the measuring apparatus distorts (alters) the information about the values of $x$ means that the respective apparatus records an 'out' (output) probability density $\rho_{out}(x)$ which generally differs from $\rho_{in}(x)$. So, with respect to the measuring process, $\rho_{in}$ and $\rho_{out}$ describe the input respectively the output information. Then it results that the measuring uncertainties (alterations of the processed information) must be described in terms of quantities depending on both $\rho_{in}$ and $\rho_{out}$. A description of the mentioned kind can be obtained, for instance, if one uses the following mean values:
$$\left\langle f(x)\right\rangle_a = \int_\Omega f(x)\,\rho_a(x)\,dx, \qquad a = in;\ out,$$
with $f(x)$ = an arbitrary function of $x$. Then a possible quantitative evaluation of the measuring disturbances can be made in terms of both of the following parameters:

(i) the mean value uncertainty, given by
$$\delta\langle x\rangle = \left|\left\langle x\right\rangle_{out} - \left\langle x\right\rangle_{in}\right|;$$

(ii) the standard deviation uncertainty, defined as
$$\delta(\Delta x) = \left|\Delta_{out}x - \Delta_{in}x\right|, \qquad \Delta_a x = \left[\left\langle x^2\right\rangle_a - \left\langle x\right\rangle_a^2\right]^{1/2}, \quad a = in;\ out.$$

Passing now to the thermal fluctuations in macroscopic systems, we recall that in classical statistical physics the fluctuations of a set of macroscopic variables $\bar{A}_\mu$ are described by quantities of the form
$$\overline{\delta A_\mu\,\delta A_\nu} = k\left(\bar{S}^{-1}\right)_{\mu\nu} \eqnum{10.1}$$
where $\left(\bar{S}^{-1}\right)_{\mu\nu}$ denotes the inverse of the matrix $\bar{S}_{\mu\nu} = -\partial^2 S/\partial\bar{A}_\mu\,\partial\bar{A}_\nu$, $S$ being the entropy of the system. As an example from classical statistical physics we can consider the system referred to in connection with eqs. (6.41)-(6.42). The corresponding stochastic variables, and the quantities describing their fluctuations, are those whose expressions are given by eqs. (6.42). Now we can proceed to a direct examination of the expressions, from eqs. (6.40), (10.1) and (6.42), of the quantities which describe the thermal fluctuations in macroscopic systems. One can observe that all the respective expressions are structured as products of k with terms which are independent of k. The alluded independence is ensured by the fact that the mentioned terms are expressed only by means of macroscopic non-stochastic quantities. (Note that the mean values from the respective terms must coincide with the deterministic (i.e. non-stochastic) quantities from usual thermodynamics.) Due to the above observed structure, the examined fluctuation quantities are in a direct dependence on k. So they are significant respectively negligible as we take $k \neq 0$ or $k \to 0$. Because k is a constant, the limit $k \to 0$ must be regarded in the sense that the quantities directly proportional to k are negligible compared with other terms of the same dimensionality which do not contain k. However, the fluctuations reveal the stochastic characteristics of the physical systems.
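Returning for a moment to the in/out description of measuring uncertainties given above, the following sketch is our own numerical illustration of it, not a formula from the paper. The apparatus model is a pure assumption: it adds unbiased Gaussian noise, so $\rho_{out}$ is the convolution of $\rho_{in}$ with a noise kernel, and the two uncertainty parameters are then evaluated on a grid:

```python
import numpy as np

# Grid and an assumed 'in' density: a Gaussian with mean 1 and sigma 0.5
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
rho_in = np.exp(-(x - 1.0)**2 / (2 * 0.5**2)) / (0.5 * np.sqrt(2 * np.pi))

# Hypothetical apparatus: additive, unbiased Gaussian noise of width 0.3,
# so rho_out is the convolution of rho_in with the noise kernel.
kernel = np.exp(-x**2 / (2 * 0.3**2)) / (0.3 * np.sqrt(2 * np.pi))
rho_out = np.convolve(rho_in, kernel, mode="same") * dx
rho_out /= np.trapz(rho_out, x)  # renormalize against truncation error

def mean(rho):
    return np.trapz(x * rho, x)

def std(rho):
    m = mean(rho)
    return np.sqrt(np.trapz((x - m)**2 * rho, x))

delta_mean = abs(mean(rho_out) - mean(rho_in))  # mean value uncertainty
delta_std = abs(std(rho_out) - std(rho_in))     # standard deviation uncertainty
print(delta_mean, delta_std)  # ~0 and ~sqrt(0.5**2 + 0.3**2) - 0.5 = 0.083
```

For an unbiased apparatus the mean value uncertainty vanishes while the standard deviation uncertainty does not, which illustrates why a description in terms of a single "error" number is too coarse.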
From the above observed structure we can conclude that the thermal stochasticity, for the systems studied in non-quantum statistical physics, is an important respectively insignificant property as we consider $k \neq 0$ or $k \to 0$. The mentioned features vis-a-vis the values of k are specific for all macroscopic systems (e.g. gases, liquids and solids of various inner compositions) and for all their specific global variables. But such a remark reveals the fact that k has the qualities of an authentic generic indicator for thermal stochasticity (i.e. for the stochasticity evidenced through the thermal fluctuations).

Now let us approach questions connected with the quantum stochasticity, which is specific for the individual, nonrelativistic microparticles of atomic size. Such a kind of stochasticity is revealed by the specific quantum fluctuations of the corresponding variables (of orbital and spin nature). The respective fluctuations are described by means of quantities like the standard deviations and correlations defined in eqs. (5.2) and (5.15). Some expressions, e.g. those given by eqs. (6.19) and (6.38), for the mentioned fluctuation quantities show the direct dependence of the respective quantities on Planck's constant $\hbar$. Then it results that $\hbar$ can play the role of generic indicator for quantum stochasticity. Correspondingly, as $\hbar \neq 0$ or $\hbar \to 0$, the mentioned stochasticity appears as a significant respectively negligible property. The above mentioned connection between the quantum stochasticity and $\hbar$ must be complemented with certain deeper considerations. Such considerations regard (Dumitru and Veriest, 1995) the different behaviour patterns of various physical variables in the limit $\hbar \to 0$, usually called the Quantum Classical Limit (QCL). Firstly, let us refer to the spin variables. We consider an electron whose spin state is described by the function (spinor) given by
$$\chi = \begin{pmatrix}\cos\alpha \\ \sin\alpha\end{pmatrix} \eqnum{10.2}$$
For a specific variable we take the z-component of the spin angular momentum, $\widehat{S}_z = \frac{\hbar}{2}\widehat{\sigma}_z$ ($\widehat{\sigma}_z$ being the corresponding Pauli matrix). For the respective variable in the mentioned state we find
$$\Delta_\chi S_z = \frac{\hbar}{2}\left|\sin 2\alpha\right| \eqnum{10.3}$$
The quantity $\Delta_\chi S_z$ describes the quantum fluctuations of spin kind, i.e. the spin quantum stochasticity. The presence of $\hbar$ in eq. (10.3) shows that the respective stochasticity is significant or not as $\hbar \neq 0$ or $\hbar \to 0$. This means that $\hbar$ plays the role of generic indicator for the respective stochasticity. But in the state described by eq. (10.2) one finds also $\langle S_z\rangle_\chi = \frac{\hbar}{2}\cos 2\alpha \to 0$ as $\hbar \to 0$. This additional result shows that, in fact, for $\hbar \to 0$ the variable $S_z$ disappears completely. Then we can note that for spin variables the behaviour pattern in the quantum limit consists in an annulment of both stochastic characteristics and mean values (i.e. in a complete disappearance). In the case of orbital quantum variables the quantum limit implies not only the condition $\hbar \to 0$ but also the requirement that certain quantum numbers grow unboundedly. The mentioned requirement is due to the fact that certain significant variables connected with the orbital motion (e.g. the energy) must pass from their quantum values to adequate classical values. So, with respect to the mentioned limit, the orbital variables display two kinds of behaviour patterns. As an example of the first kind we refer to the coordinate $x$ of a harmonic rectilinear oscillator considered in its n-th energy level. Then, similarly with eq. (6.19), we have:
$$\Delta_n x = \left[\frac{\hbar}{m\omega}\left(n + \frac{1}{2}\right)\right]^{1/2} \eqnum{10.4}$$
where $m$, $\omega$ and $n$ denote the mass, the angular frequency respectively the oscillation quantum number.
For the mentioned example the quantum limit means not only $\hbar \to 0$ but also $n \to \infty$. This is because the energy must pass from the quantum expression $E_n = \hbar\omega\left(n + \frac{1}{2}\right)$ to the corresponding classical expression $E_{cl} = \frac{m\omega^2 A^2}{2}$, where $A$ is the coordinate amplitude. Then the standard deviation of $x$ passes from the quantum value given by eq. (10.4) to the classical value $\Delta_{cl}x = A/\sqrt{2}$. But $\Delta_n x$ and $\Delta_{cl}x$ are fluctuation parameters which describe the stochastic characteristics of $x$ in quantum respectively classical contexts. Then one can say that in the quantum limit the above considered coordinate preserves both its role of significant variable and its stochasticity. As an example of the second kind of orbital variable we consider the distance $r$ between the electron and the nucleus in a hydrogen atom. We refer to an electron in a state described by the wave function $\psi_{nlm}$ with $l = n - 1$ (where $n$, $l$ and $m$ are respectively the principal, orbital and magnetic quantum numbers). Then for $\Delta_n r$ we can use the expression given by Schwabl (1995), rewritten in the form
$$\Delta_n r = \frac{\hbar^2}{2m_e e^2}\,n\sqrt{2n + 1} \eqnum{10.5}$$
with $m_e$ and $e$ denoting the mass respectively the charge of the electron. The energy of the electron is
$$E_n = -\frac{m_e e^4}{2\hbar^2 n^2} \eqnum{10.6}$$
The quantum limit requires that
$$\hbar \to 0, \qquad n \to \infty, \qquad E_n \to E_{cl} \eqnum{10.7}$$
with $E_{cl}$ denoting the classical value of the energy. Then from eqs. (10.6) and (10.7) it results that in the respective limit we have
$$n\hbar \to \left(\frac{m_e e^4}{2\left|E_{cl}\right|}\right)^{1/2} = \text{const.} \eqnum{10.8}$$
In the same circumstances we obtain
$$\frac{\Delta_n r}{\langle r\rangle_n} = \frac{1}{\sqrt{2n + 1}} \to 0 \eqnum{10.9}$$
So it results that in the quantum limit (when $\hbar \to 0$ and $n \to \infty$) we have $\langle r\rangle_n \to \frac{e^2}{2|E_{cl}|}$ and $\Delta_n r \to 0$. This means that $r$ preserves its role of significant variable but loses its stochasticity. The above considerations can be concluded with the following remark: in the quantum limit the physical variables display the following different behaviour patterns:

(i) the complete disappearance of both stochastic characteristics and mean values, as in the case of spin variables;

(ii) the preservation of both the role of significant variable and of the stochastic characteristics, as in the case of the oscillator coordinate $x$;

(iii) the preservation of the role of significant variable but the loss of the stochastic characteristics, as in the case of the electron-nucleus distance $r$.

It is clear that the above remark corrects the traditional belief in a unique behaviour pattern, compulsorily associated with the disappearance of the uncertainties (i.e. of the standard deviations). Now let us return to the quantum stochasticity, specific for the variables of individual microparticles. We think that, in spite of the peculiarities mentioned above, the Planck constant $\hbar$ can be considered as a generic indicator of such a stochasticity. Moreover, we consider that the respective role of $\hbar$ is completely similar with that of the Boltzmann constant k with respect to the macroscopic thermal stochasticity (see above). Regarding the mentioned roles of $\hbar$ and k another observation must be added. In the discussed cases $\hbar$ and k appear independently and singly. That is why one can say that the stochasticity of the corresponding systems (microparticles and classical macroscopic systems) has a onefold character. But there are also physical systems endowed with a twofold stochasticity, characterized by a simultaneous connection with both $\hbar$ and k. Such systems are those studied in quantum statistical physics, i.e. the bodies of macroscopic size considered as statistical ensembles of quantum microparticles. The stochasticity of the respective systems is revealed by the corresponding fluctuations, described by the quantities given by eqs. (5.24), which depend simultaneously on both $\hbar$ and k. The respective dependence is revealed by the so-called fluctuation-dissipation theorem.
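Returning to behaviour patterns (ii) and (iii) above, a small numerical sketch (our own illustration, in arbitrary units, using the reconstructed eqs. (10.4) and (10.9)) makes the contrast explicit: holding the energy fixed while $\hbar \to 0$, the oscillator coordinate keeps a finite standard deviation, while the relative fluctuation of the hydrogenic distance $r$ vanishes:

```python
import numpy as np

# Oscillator: fixed classical energy E_cl; n grows as hbar shrinks.
m, omega, E_cl = 1.0, 1.0, 1.0          # assumed illustrative values
A = np.sqrt(2 * E_cl / (m * omega**2))  # classical amplitude
classical_std = A / np.sqrt(2)

for hbar in [1.0, 1e-2, 1e-4, 1e-6]:
    n = E_cl / (hbar * omega) - 0.5                        # matches the energy
    quantum_std = np.sqrt(hbar * (n + 0.5) / (m * omega))  # eq. (10.4)
    print(hbar, quantum_std, classical_std)                # equal for all hbar

# Hydrogen-like distance: with E fixed, n scales like 1/hbar (eq. 10.8),
# so the relative fluctuation 1/sqrt(2n+1) (eq. 10.9) tends to zero.
for hbar in [1.0, 1e-2, 1e-4]:
    n = 1.0 / hbar
    print(hbar, 1.0 / np.sqrt(2 * n + 1))
```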
According to this fluctuation-dissipation theorem (Kubo, 1957; Zubarev, 1971; Balescu, 1975) one can write
$$\overline{\delta A_\mu\,\delta A_\nu} = \frac{\hbar}{2\pi}\int_{-\infty}^{+\infty}\coth\left(\frac{\hbar\omega}{2kT}\right)\frac{\chi_{\mu\nu}(\omega) - \chi_{\nu\mu}^*(\omega)}{2i}\,d\omega \eqnum{10.11}$$
with $\chi_{\nu\mu}^*$ as the complex conjugate of $\chi_{\nu\mu}$. In eq. (10.11) the $\chi_{\mu\nu}(\omega)$ represent the generalized susceptibilities, which appear also in the deterministic framework of nonequilibrium thermodynamics (de Groot and Mazur, 1962). But it is a known fact that in the respective framework all the stochastic characteristics of physical variables are neglected and no microscopic (i.e. atomic or molecular) structure of the systems is taken into account. Another fact is that the $\chi_{\mu\nu}$ are directly connected (Landau and Lifschitz, 1984) with the macroscopic non-stochastic expression of the energy dissipated inside the thermodynamic systems acted upon by external deterministic and macroscopic perturbations. The mentioned facts show that the susceptibilities $\chi_{\mu\nu}$ do not depend on the constants $\hbar$ and k. The above mentioned property of $\chi_{\mu\nu}$, combined with eq. (10.11), shows that the sole significant dependence of the fluctuation quantities given by eq. (5.24) on the constants $\hbar$ and k is given by the factor $\hbar\coth\left(\frac{\hbar\omega}{2kT}\right)$. For the respective factor one can write
$$\lim_{\hbar\to 0,\ k\to 0}\left\{\hbar\coth\left(\frac{\hbar\omega}{2kT}\right)\right\} = 0 \eqnum{10.12}$$
This means that when both $\hbar$ and k tend to zero the fluctuation quantities defined by eq. (5.24) become null. So it results that in the mentioned limit the fluctuations in quantum statistical systems cease to manifest themselves. Consequently, for such a limit, the respective systems lose their stochastic characteristics. Then, in the spirit of the above presented opinions, one can state that quantum statistical systems can be considered as endowed with a twofold stochasticity, of thermal and quantum type, revealed respectively by k and $\hbar$ as generic indicators.

In the above considerations the stochasticity appears as a property of exclusively intrinsic type. This means that it is connected only with the internal (inner) characteristics of the considered systems and does not depend on external (outside) factors. Moreover, the mentioned stochasticity is directly and strongly associated with $\hbar$ and k as generic indicators. But the stochasticity can also be of extrinsic type. In such cases it is essentially connected with factors from the outside (surroundings) of the considered physical systems. Also, the extrinsic stochasticity is not (necessarily) associated with $\hbar$ and k. As examples of systems with stochasticity of exclusively extrinsic type one can consider an empty bottle floating on a stormy sea or a die in a game. In practice one also finds systems endowed with stochasticity of both intrinsic and extrinsic types. Such are, for example, the electric and electronic circuits. For a circuit the intrinsic stochasticity is caused by the thermal agitation of the charge carriers and/or of the elementary (electric or magnetic) dipoles inside its constitutive elements (i.e. inside resistors, inductances, condensers, transistors, integrated circuits, etc.).
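As a quick numerical check of eq. (10.12) (our own illustration, arbitrary units): scaling $\hbar$ and k to zero together drives the factor $\hbar\coth(\hbar\omega/2kT)$ to zero, while sending $\hbar$ to zero alone leaves the classical value $2kT/\omega$:

```python
import numpy as np

omega, T = 1.0, 1.0  # assumed arbitrary units

def factor(hbar, k):
    return hbar / np.tanh(hbar * omega / (2 * k * T))

for eps in [1.0, 1e-1, 1e-2, 1e-3]:
    print(eps, factor(eps, eps), factor(eps, 1.0))
# factor(eps, eps) -> 0 linearly (joint limit of eq. 10.12),
# factor(eps, 1.0) -> 2*k*T/omega = 2 (classical equipartition value).
```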
This thermal agitation is responsible for fluctuations of the macroscopic voltages and currents. Such fluctuations are known (Robinson, 1974) as thermal (or Nyquist) noises. Note that in the case of circuits the intrinsic stochasticity is characterized by generic indicators. Such indicators are k alone, if the circuit is considered as a classical (non-quantum) statistical system, and k together with $\hbar$ when the circuit is viewed as a quantum statistical system. Otherwise, the stochasticity of a circuit can also be of extrinsic type, when it is under the influence of a large variety of factors. Such factors can be: thermal fluctuations in the surrounding medium, accidental outside discharges and inductions, atmospheric (or even cosmic) electrical phenomena. The mentioned extrinsic stochasticity is also responsible for noises in the macroscopic currents and voltages of the circuits. But it must be noted that, even for circuits, the extrinsic stochasticity is not connected in principle with generic indicators dependent on fundamental physical constants (such as k and $\hbar$). As an interesting case which implies stochasticity of both intrinsic and extrinsic type one can consider a measuring process viewed as in Sec. IX. In such a case the intrinsic stochasticity regards the inner properties of the measured system, while the extrinsic one is due to the measuring apparatus. The corresponding intrinsic stochasticity is connected with $\hbar$ and k as generic indicators in the above discussed sense. However, the extrinsic stochasticity due to the apparatus seems not to be connected with certain generic indicators. This is because of the large diversity of apparata as regards their own structure and accuracy.

With respect to the problematics of HR proper, in the literature (see the bibliographical publications mentioned at the end of Sec. I) one knows of a large variety of adjacent questions which, one way or another, are allied with the subjects discussed in the previous sections of this paper. Now we wish to note a few remarks on several such questions. Firstly, let us refer to the consequences of the here-proposed reconsideration of HR for both the lucrative procedures and the interpretational frame of quantum mechanics. Note that our reconsideration does not prejudice in any way the authentic version of the mentioned lucrative procedures which, in fact, have met with unquestionable successes in both basic and applicative researches. As regards the alluded interpretational frame, our reconsideration, mainly through the abandonment of TIHR, generates major and important changes. But such changes must be regarded as beneficial, since they can offer a genuine elucidation of the controversial questions introduced into the framework of science by TIHR.
By reinterpreting HR in the sense presented in Sec. VIII the respective relations lose their quality of crucial physical formulae. So one can find a consonance with the prediction (Dirac, 1963): "I think one can make a safe guess that uncertainty relations in their present form will not survive in the physics of the future." Note that the above prediction was founded not on some considerations about the essence of HR but on a supposition about the future role of $\hbar$ in physics. So it was supposed that $\hbar$ will be a derived quantity while c and e (the speed of light and the elementary charge) will remain fundamental constants. That is why we wish to add here that our view about HR does not affect the actual position of $\hbar$ as a physical constant. More precisely, our findings cannot answer the question whether $\hbar$ will remain a fundamental constant or become a derived quantity (e.g. expressed in terms of c and e).

As was pointed out in Sec. X, Planck's constant $\hbar$ has also the significance of estimator for the spin of microparticles (like the electron). So the spin appears as a notable respectively absent property as $\hbar \neq 0$ or $\hbar \to 0$. On the other hand, with reference to the spin there are also some intriguing questions related to its relativistic justification. Usually (Dirac, 1958; Blochintsev, 1981) for electrons the spin is regarded as essentially explicable as a consequence of relativistic theory. But, as is known, the relativistic characteristics of a particle are evidenced by the relative value of its velocity $v$ compared with the light velocity c. In particular, the respective characteristics must be insignificant when $v/c \to 0$ or $c \to \infty$. Then the absence of the factor $v/c$ (or of some other equivalent factors) in the description of the electron spin variables appears at least as an intriguing fact. Is such a fact a sufficient reason to consider $\hbar$ as a derived quantity in the sense guessed by Dirac (1963)? In such a sense one can use the relation $\hbar = \frac{e^2}{4\pi\varepsilon_0\,\alpha\,c}$ ($\varepsilon_0$ = the permittivity of vacuum and $\alpha$ = the fine structure constant), and the situations with $\hbar \to 0$ appear when $c \to \infty$. So the significance of $\hbar$ as spin estimator can apparently be related with some aspects of relativity. But here it must be noted the surprising fact that even in the nonrelativistic limit (i.e. when $v/c \to 0$ or $c \to \infty$) the spin remains a significant variable of the electron. It is known (Ivanov, 1989) that the electron spin plays a decisive role (as a fourth quantum variable/number) in the electronic configuration of many-electron atoms, in spite of the fact that for atomic electrons $v/c \ll 1$. Due to the here mentioned features we think that the relativistic justification of the spin appears as an intriguing question which requires further investigations.

Our findings also facilitate a remark in connection with another supposition about $\hbar$. The respective supposition regards the possible existence of multiple Planck constants associated with various kinds of microparticles (e.g. with electrons, protons, neutrons). Currently (Wichmann, 1971; Fischbach et al., 1991) the tendency is to contest such a possibility and to promote the idea of a unique Planck constant.
For this one appeals either to experimental data or to some connection with the fundamental conservation laws. We think that our view about $\hbar$ pleads somewhat for the alluded idea of uniqueness. So, regarding $\hbar$ as generic indicator of quantum stochasticity, it must have the same value for the various kinds of microparticles. This is because, similarly, the Boltzmann constant k, in its role of generic indicator for thermal stochasticity, has a unique value for the various kinds of macroscopic systems (e.g. hydrogen gas, liquid water or crystalline germanium).

The revealed stochastic similarity between quantum microparticles and macroscopic systems facilitates another remark. In the macroscopic case the stochastic characteristics of an individual system are incorporated in the probability distribution $\rho$ (see Sections V and VI). As we have shown, the quantum analogue of $\rho$ is the wave function $\Psi$ (or the square of its modulus $|\Psi|^2$). Such a similarity motivates us to agree with the idea (van Kampen, 1978) that $\Psi$ refers to a single system (microparticle). Simultaneously, we incline to a circumspect regard of the opinions that $\Psi$ belongs to an ensemble of equally prepared systems (Tschudi, 1987) or to an abstract physical object (Mayants, 1984). Moreover, our agreement and opinion are also motivated by the observation that in practical applications both $\Psi$ and $\rho$ are calculated for individual systems (e.g. for an electron in a hydrogen atom, respectively for an ideal gas).

A distinct group of remarks regards the reduction of stochasticity to subjacent elements of deterministic nature, for both cases of thermodynamic systems and quantum microparticles. In the first case the stochasticity refers to the macroscopic variables which characterize each system as a whole. But according to classical statistical mechanics the respective variables are expressible in terms of subjacent molecular quantities (coordinates and momenta) which are considered as deterministic elements. In the case of quantum microparticles a similar problem was taken into account. So the idea was promoted that the stochastic quantum variables (characterizing each microparticle as a whole) would be expressible in terms of some subjacent elements of deterministic nature, called hidden variables. Viewing comparatively the two mentioned cases we think it is of nontrivial interest to note the following observations:

(i) In the case of thermodynamic systems the subjacent molecular quantities can be justified in essence only by adequate experimental facts.

(ii) The mentioned molecular quantities are deterministic (i.e. dispersion-free) only from a microscopic perspective, connected with the characteristics of the molecules.
From a macroscopic perspective, connected with the thermodynamic system as a whole, they are stochastic variables. That is why, for example, with respect to a thermodynamic system like an ideal gas one speaks about the mean value and the non-null dispersion of the molecular velocity.

(iii) Even by taking into account the existence of the subjacent molecular quantities, the macroscopic variables characterizing a thermodynamic system as a whole keep their stochastic characteristics. In particular, the mentioned existence does not influence the verity or the significance of the macroscopic relations from the family of eqs. (5.21)-(5.23).

(iv) The above observations (i)-(iii) reveal as unfounded the idea (Uffink and van Lith, 1999) that the sole examination of some theoretical formulae from the mentioned family can throw light on the problem of the reduction of thermodynamic stochasticity to subjacent deterministic elements.

(v) By analogy with the fact noted in (i), in the case of quantum microparticles the existence of the hidden variables must be proved firstly by indubitable experimental facts. But, as far as we know, until now such an experimental proof has not been ratified by scientific research.

(vi) The existence of the mentioned hidden variables cannot be asserted only by means of considerations on some theoretical formulae regarding the global stochasticity of quantum microparticles, such as the HR.

(vii) The global description of a quantum microparticle remains equally probabilistic in both cases, with or without hidden variables. More exactly, in both cases, for a variable referring to a quantum microparticle as a whole, the theoretical predictions must be made in probabilistic terms, while the experimental information can be obtained only from measurements consisting of statistical samplings.

The discussions from Sec. X about the stochasticity suggest a remark connected with Boltzmann's constant k. As we have shown, k plays a major role in the evaluation of the level of the thermal stochasticity. But the respective stochasticity must be regarded as an important property of the macroscopic systems. So one finds as unfounded the idea, promoted in some publications (Wichmann, 1971; Landau and Lifschitz, 1984; Storm, 1986), that in physics k has only the minor role of conversion factor between temperature scales (from energetic units into Kelvin degrees).

We started the paper by recalling the fact that even in our days TIHR persists as a source of unelucidated controversies about its defects. Motivated by the respective fact we proposed an investigation into the very core of the alluded controversies and defects. For such a purpose we firstly identified the main elements (assertions and arguments) of TIHR. Then, with reference to the mentioned elements, we localized the most known and critical defects of TIHR. In such a reference frame we analyzed the reality of the respective defects. We found that all of them are veridical. Moreover, for TIHR, they are insurmountable and incriminate each of its main elements. So we can conclude that the sole reasonable attitude is to abandon TIHR as an unjustified doctrine. The mentioned abandonment must be accompanied by a search for a new and genuine reinterpretation of HR.
In this direction we opine that the HR of thought-experimental nature must be disregarded, because they are fictitious formulae without a true physical significance. On the other hand we think that the theoretical HR are authentic physical formulae regarding the quantum fluctuations. So regarded, the theoretical HR belong to a large class of formulae specific for systems, of both quantum and non-quantum nature, endowed with stochastic characteristics. By adopting the mentioned views about HR, all the controversies connected with TIHR are elucidated in a natural way. In the mentioned view HR lose their traditional role of crucial physical formulae connected with the description of measurement characteristics (uncertainties). In the here promoted view the respective description must be done in terms (and formulae) which do not belong to the traditional chapters of physics (including quantum mechanics). We suggested that a promising version for the description of measurements can be given in terms of information theory. So a measurement can be considered as an information transmission, from the measured system (information source) through the measuring device (transmission channel) to the device recorder (information receiver). Then the measuring uncertainties appear as alterations of the processed information. In Sec. IX we illustrated the alluded informational model with some concrete considerations. In our opinion the theoretical HR and their classical (non-quantum) analogues are connected with the stochasticity regarded as an important property of physical systems. We showed that the respective property is characterized by generic indicators which are: (i) the Planck constant $\hbar$ (for quantum microparticles), (ii) the Boltzmann constant k (for classical thermodynamical systems), respectively (iii) both $\hbar$ and k (for quantum statistical systems). In the end, in Sec. XI, we presented remarks on some questions which are adjacent to the subjects discussed in the other parts of the paper.

Several publications studied by me in connection with this and other previous papers of mine were put at my disposal by their authors (often in preprint or amended-reprint form). To all the respective authors I express my sincere thanks. Referring to my own views, during the years I have received many comments which stimulated my work. I remain profoundly grateful to the corresponding commentators (referees and readers of my papers, colleagues). But, of course, for all the shortcomings of my views I assume the entire responsibility. The work involved in the long-standing studies reported here took me away from some family duties. For the evinced agreement as well as for the permanent aid I am deeply indebted to all my family. In the end I mention that the research reported here was finalized with partial support from the Roumanian Ministry of National Education under a grant.

Abbreviations used in the text:
CR = correlation relation(s)
HR = Heisenberg's relation(s)
P = point
P.../a = assertion point
P.../m = motivation point
QCL = quantum classical limit
QTP = quantum torsion pendulum
SRTE = super-resolution thought experiment(s/al)
TE = thought experiment(s/al)
TIHR = traditional interpretation of Heisenberg's relations

References:
Croca, J. R., A.
Rica da Silva and J. S. Ramos, 1996, "Experimental violation of Heisenberg's uncertainty relations by the scanning near-field optical microscope", preprint, University of Lisbon, Portugal. (This work discusses the potential implications of performances attained in optical experiments such as those reported by Pohl et al., 1984, and by Heiselmann and Pohl, 1984.)
Dumitru, S., 1987, in: Recent Advances in Statistical Physics (Proceedings of the International Bose Symposium on Statistical Physics, Calcutta, India, 28-31 Dec. 1984), edited by B. Datta and M. Dutta (World Scientific, Singapore).
Dumitru, S., 1991, in: Quantum Field Theory, Quantum Mechanics and Quantum Optics. Part 1: Symmetries and Algebraic Structures (Proceedings of the 18th International Colloquium on Group Theoretical Methods in Physics, Moscow, June 4-9, 1990), edited by V. V. Dodonov and V. I. Manko (Nova Science, New York).
Abstract: In spite of their popularity the Heisenberg ("uncertainty") relations (HR) still generate controversies. The traditional interpretation of HR (TIHR) dominates present-day science, although over the years a lot of its defects were signaled. These facts justify a reinvestigation of the questions connected with the interpretation/significance of HR. Here such a reinvestigation is developed, starting with a re-evaluation of the main elements of TIHR. So one finds that all the respective elements are troubled by insurmountable defects. Then there results the indubitable failure of TIHR and the necessity of its abandonment. Consequently the HR must be deprived of their quality of crucial physical formulae. Moreover, the HR are shown to be nothing but simple fluctuation formulae, with natural analogues in classical (non-quantum) physics. The description of the measuring uncertainties (traditionally associated with HR) is approached from a new informational perspective. Planck's constant $\hbar$ (also associated with HR) is revealed to have the significance of generic indicator for quantum stochasticity, similarly with the role of Boltzmann's constant k with respect to thermal stochasticity. Some other adjacent questions are also briefly discussed in the end. Motto: "Uncertainty principle: it has to do with the uncertainty in predictions rather than the accuracy of measurement. I think in fact that the word measurement has been so abused in quantum mechanics that it would be good to avoid it altogether." John S. Bell, 1985.
In the nonlinear filtering problem one observes a system whose state is known to follow a given stochastic differential equation. The observations that have been made contain an additional noise term, so one cannot hope to know the true state of the system. However, one can reasonably ask what is the probability density over the possible states. When the observations are made in continuous time, the probability density follows a stochastic partial differential equation known as the Kushner-Stratonovich equation. This can be seen as a generalization of the Fokker-Planck equation that expresses the evolution of the density of a diffusion process. Thus the problem we wish to address boils down to finding approximate solutions to the Kushner-Stratonovich equation. For a quick introduction to the filtering problem see Davis and Marcus (1981). For a more complete treatment from a mathematical point of view see Liptser and Shiryayev (1978). See Jazwinski (1970) for a more applied perspective. For recent results see the relevant collections of papers.

The main idea we will employ is inspired by the differential geometric approach to statistics. One thinks of the probability distribution as evolving in an infinite dimensional space of densities which is in turn contained in some Hilbert space. One can then think of the Kushner-Stratonovich equation as defining a vector field in this space: the integral curves of the vector field should correspond to the solutions of the equation. To find approximate solutions to the Kushner-Stratonovich equation one chooses a finite dimensional submanifold $M$ of the Hilbert space and approximates the probability distributions as points in $M$. At each point of $M$ one can use the Hilbert space structure to project the vector field onto the tangent space of $M$. One can now attempt to find approximate solutions to the Kushner-Stratonovich equation by integrating this vector field on the manifold $M$. This mental image is slightly inaccurate. The Kushner-Stratonovich equation is a stochastic PDE rather than a PDE, so one should imagine some kind of stochastic vector field rather than a smooth vector field. Thus in this approach we hope to approximate the infinite dimensional stochastic PDE by solving a finite dimensional stochastic ODE on the manifold.

Note that our approximation will depend upon two choices: the choice of manifold and the choice of Hilbert space structure. In this paper we will consider two possible choices for the Hilbert space structure: the direct $L^2$ metric on the space of probability distributions, and the Hilbert space structure associated with the Hellinger distance and the Fisher information metric. Our focus will be on the direct metric, since projection using the Hellinger distance has been considered before. As we shall see, the choice of the "best" Hilbert space structure is determined by the manifold one wishes to consider: for manifolds associated with exponential families of distributions the Hellinger metric leads to the simplest equations, whereas the direct metric works well with mixture distributions. We will write down the stochastic ODE determined by this approach when the direct metric is used and the manifold is a family of normal mixtures, and show how it leads to a numerical scheme for finding approximate solutions to the Kushner-Stratonovich equation in terms of a mixture of normal distributions. We will call this scheme the normal mixture projection filter or simply the L2NM projection filter.
The stochastic ODE for the Hellinger metric was considered in earlier work. In particular, a precise numerical scheme was given there for finding solutions by projecting onto an exponential family of distributions. We will call this scheme the Hellinger exponential projection filter or simply the HE projection filter. We will compare the results of a C++ implementation of the L2NM projection filter with a number of other numerical approaches, including the HE projection filter and the optimal filter. We can measure the goodness of our filtering approximations thanks to the geometric structure and, in particular, the precise metrics we are using on the spaces of probability measures. What emerges is that the two projection methods produce excellent results for a variety of filtering problems. The results appear similar for both projection methods; which gives more accurate results depends upon the problem. As we shall see, however, the L2NM projection approach can be implemented more efficiently. In particular, one needs to perform numerical integration as part of the HE projection filter algorithm, whereas all integrals that occur in the L2NM projection can be evaluated analytically. We also compare the L2NM filter to a particle filter with the best possible combination of particles with respect to the Lévy metric. Introducing the Lévy metric is needed because particle densities do not compare well with smooth densities under the metrics induced by the Hilbert space structures. We show that, given the same number of parameters, the L2NM filter may outperform a particle-based system.

The paper is structured as follows. In Section [sec:nonlinfp] we introduce the nonlinear filtering problem and the infinite-dimensional stochastic PDE (SPDE) that solves it. In Section [sec:sman] we introduce the geometric structure we need to project the filtering SPDE onto a finite dimensional manifold of probability densities. In Section [sec:pf] we perform the projection of the filtering SPDE according to the L2NM framework and also recall the HE based framework. In Section [sec:nimp] we briefly discuss the numerical implementation, while in Section [sec:soft] we discuss in detail the software design for the L2NM filter. In Section [numres] we look at numerical results, whereas in Section [sec:part] we compare our outputs with a particle method. Section [sec:conc] concludes the paper.
in these equations the unobserved state process takes values in , the observation takes values in and the noise processes and are two brownian motions .the nonlinear filtering problem consists in finding the conditional probability distribution of the state given the observations up to time and the prior distribution for .let us assume that , and the two brownian motions are independent .let us also assume that the covariance matrix for is invertible .we can then assume without any further loss of generality that its covariance matrix is the identity .we introduce a variable defined by : with these preliminaries , and a number of rather more technical conditions which we will state shortly , one can show that satisfies the a stochastic pde called the kushner stratonovich equation .this states that for any compactly supported test function defined on \ , [ dy_s^k-\pi_s(b_s^k)\,ds]\ , \ ] ] where for all , the backward diffusion operator is defined by equation ( [ fkk ] ) involves the derivatives of the test function because of the expression .we assume now that can be represented by a density with respect to the lebesgue measure on for all time and that we can replace the term involving with a term involving its formal adjoint .thus , proceeding formally , we find that obeys the following it - type stochastic partial differential equation ( spde ) : [ { { \mathrm d}}y_t^k - e_{p_t } \{b_t^k \ } { { \mathrm d}}t ] \ ] ] where denotes the expectation with respect to the probability density ( equivalently the conditional expectation given the observations up to time ) .the forward diffusion operator is defined by : + { { { \textstyle\frac{1}{2}}}}\sum_{i , j=1}^n \frac{\partial^2}{\partial x_i \partial x_j } [ a_t^{ij } \phi ] .\ ] ] this equation is written in it form .when working with stochastic calculus on manifolds it is necessary to use stratonovich sde s rather than it sde s .this is because one does not in general know how to interpret the second order terms that arise in it calculus in terms of manifolds .the interested reader should consult .a straightforward calculation yields the following stratonvich spde : \,dt + \sum_{k=1}^d p_t\ , [ b_t^k - e_{p_t}\{b_t^k\ } ] \circ dy_t^k\ .\ ] ] we have indicated that this is the stratonovich form of the equation by the presence of the symbol ` ' inbetween the diffusion coefficient and the brownian motion of the sde .we shall use this convention throughout the rest of the paper . in order to simplify notation , we introduce the following definitions : \ p , \\ \\\gamma_t^k(p ) & : = & [ b_t^k - e_p\{b_t^k\ } ] p \ , \end{array}\ ] ] for .the str form of the kushner stratonovich equation reads now thus , subject to the assumption that a density exists for all time and assuming the necessary decay condition to ensure that replacing with its formal adjoint is valid , we find that solving the non - linear filtering problem is equivalent to solving this spde . numerically approximatingthe solution of equation ( [ kse : str ] ) is the primary focus of this paper .for completeness we review the technical conditions required in order for equation [ fkk ] to follow from ( [ lanc1 - 1 ] ) .* local lipschitz continuity : for all , there exists such that for all , and for all , the ball of radius . * non explosion : there exists such that for all , and for all . 
For completeness we review the technical conditions required in order for equation [fkk] to follow from [lanc1-1]:

* Local Lipschitz continuity: for all $R > 0$ there exists $K_R > 0$ such that $|f_t(x) - f_t(x')| \le K_R\,|x - x'|$ and $\|a_t(x) - a_t(x')\| \le K_R\,|x - x'|$ for all $t \ge 0$ and for all $x, x' \in B_R$, the ball of radius $R$.
* Non-explosion: there exists $K > 0$ such that $x^T f_t(x) \le K(1 + |x|^2)$ and $\mathrm{trace}\ a_t(x) \le K(1 + |x|^2)$ for all $t \ge 0$ and for all $x \in \mathbb{R}^n$.
* Polynomial growth: there exist $K > 0$ and $r \ge 0$ such that $|b_t(x)| \le K(1 + |x|^r)$ for all $t \ge 0$ and for all $x \in \mathbb{R}^n$.

Under assumptions (A) and (B) there exists a unique solution $\{X_t,\ t \ge 0\}$ to the state equation, and $X_t$ has finite moments of any order. Under the additional assumption (C) the following finite energy condition holds:
$$E\int_0^T \left|b_t(X_t)\right|^2 dt < \infty \qquad \text{for all } T \ge 0.$$
Since the finite energy condition holds, it follows from Fujisaki, Kallianpur and Kunita that $\pi_t$ satisfies the Kushner-Stratonovich equation [fkk].

As discussed in the introduction, the idea of a projection filter is to approximate solutions to the Kushner-Stratonovich equation [fkk] using a finite dimensional family of distributions. A normal mixture family contains distributions given by:
$$p = \sum_{i=1}^m \lambda_i\,\frac{1}{\sigma_i\sqrt{2\pi}}\exp\left(-\frac{(x - \mu_i)^2}{2\sigma_i^2}\right)$$
with $\lambda_i > 0$, $\sum_{i=1}^m \lambda_i = 1$ and $\sigma_i > 0$. It is a $3m - 1$ dimensional family of distributions. A polynomial exponential family contains distributions given by:
$$p = \exp\left(\sum_{i=0}^m a_i\,x^i\right)$$
where $a_0$ is chosen to ensure that the integral of $p$ is equal to 1. To ensure the convergence of the integral we must have that $m$ is even and $a_m < 0$. This is an $m$-dimensional family of distributions. Polynomial exponential families are a special case of the more general notion of an exponential family. A key motivation for considering these families is that one can reproduce many of the qualitative features of distributions that arise in practice using these distributions. For example, consider the qualitative specification: the distribution should be bimodal with peaks near $-1$ and $1$, with the peak at $-1$ twice as high and twice as wide as the peak near $1$. One can easily write down a distribution of approximately this form using a normal mixture (a concrete construction is sketched in the code example at the end of this section). To find a similar exponential family, one seeks a polynomial with: local maxima at $-1$ and $1$; with the maximum values at these points differing by $\log 2$; with second derivative at $1$ equal to twice that at $-1$. These conditions give four linear equations in the polynomial coefficients. Using degree 6 polynomials it is simple to find solutions meeting all these requirements. A specific numerical example of a polynomial meeting these requirements is plotted in Figure [fig:bimodalsextic]. The associated exponential distribution is plotted in Figure [fig:bimodalexponential]. We see that normal mixtures and exponential families have a broadly similar power to describe the qualitative shape of a distribution using only a small number of parameters. Our hope is that by approximating the probability distributions that occur in the Kushner-Stratonovich equation by elements of one of these families we will be able to derive a low dimensional approximation to the full infinite dimensional stochastic partial differential equation.
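Here is the promised sketch of the normal-mixture construction for the bimodal specification above (our own illustration; the width parameter is an arbitrary choice). If the component at $-1$ has twice the width of the component at $1$, then matching peak heights $\lambda_i/(\sqrt{2\pi}\,\sigma_i)$ in ratio 2 forces weights $0.8$ and $0.2$:

```python
import numpy as np

s = 0.2  # width of the narrow peak at +1; assumed illustrative value

def mixture(x):
    # component at -1: twice as wide (sigma = 2s), weight 0.8
    n1 = np.exp(-(x + 1)**2 / (2 * (2 * s)**2)) / (2 * s * np.sqrt(2 * np.pi))
    # component at +1: sigma = s, weight 0.2
    n2 = np.exp(-(x - 1)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return 0.8 * n1 + 0.2 * n2

print(mixture(-1.0) / mixture(1.0))  # ~2: the peak at -1 is twice as high
```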
We have given direct parameterisations of our families of probability distributions and thus we have implicitly represented them as finite dimensional manifolds. In this section we will see how families of probability distributions can be thought of as being embedded in a Hilbert space, and hence how they inherit a manifold structure and metric from this Hilbert space. There are two obvious ways of thinking of embedding a probability density function on $\mathbb{R}^n$ in a Hilbert space. The first is to simply assume that the probability density function $p$ is square integrable and hence lies directly in $L^2(\mathbb{R}^n)$. The second is to use the fact that a probability density function lies in $L^1(\mathbb{R}^n)$ and is non-negative almost everywhere; hence $\sqrt{p}$ will lie in $L^2(\mathbb{R}^n)$. For clarity we will write $L^2_D$ when we think of $L^2(\mathbb{R}^n)$ as containing densities directly. The $D$ stands for direct. We write $\mathcal{D} \subset L^2_D$ where $\mathcal{D}$ is the set of square integrable probability densities (functions with integral 1 which are positive almost everywhere). Similarly we will write $L^2_H$ when we think of $L^2(\mathbb{R}^n)$ as being a space of square roots of densities. The $H$ stands for Hellinger (for reasons we will explain shortly). We will write $\mathcal{H}$ for the subset of $L^2_H$ consisting of square roots of probability densities. We now have two possible ways of formalizing the notion of a family of probability distributions. In the next section we will define a smooth family of distributions to be either a smooth submanifold of $L^2_D$ which also lies in $\mathcal{D}$, or a smooth submanifold of $L^2_H$ which also lies in $\mathcal{H}$. Either way, the families we discussed earlier will give us finite dimensional families in this more formal sense.

The Hilbert space structures of $L^2_D$ and $L^2_H$ allow us to define two notions of distance between probability distributions, which we will denote $d_D$ and $d_H$. Given two probability distributions $p_1$ and $p_2$ we have an injection $\iota$ into $L^2$, so one defines the distance to be the norm of $\iota(p_1) - \iota(p_2)$. So given two probability densities $p_1$ and $p_2$ on $\mathbb{R}^n$ we can define:
$$d_D(p_1, p_2) = \left(\int (p_1 - p_2)^2\,d\lambda\right)^{1/2}, \qquad d_H(p_1, p_2) = \left(\int \left(\sqrt{p_1} - \sqrt{p_2}\right)^2 d\lambda\right)^{1/2}.$$
Here $\lambda$ is the Lebesgue measure. $d_H$ defines the Hellinger distance between the two distributions, which explains our use of $H$ as a subscript. We will write $\langle\cdot,\cdot\rangle_H$ for the inner product associated with $d_H$, and $\langle\cdot,\cdot\rangle_D$ or simply $\langle\cdot,\cdot\rangle$ for the inner product associated with $d_D$. In this paper we will consider the projection of the conditional density of the true state of the system given the observations (which is assumed to lie in $\mathcal{D}$ or $\mathcal{H}$) onto a submanifold. The notion of projection only makes sense with respect to a particular inner product structure. Thus we can consider projection using $d_H$ or projection using $d_D$. Each has advantages and disadvantages. The most notable advantage of the Hellinger metric is that it can be defined independently of the Lebesgue measure, and its definition can be extended to define the distance between measures without density functions (see Jacod and Shiryaev, or Hanzon). In particular the Hellinger distance is independent of the choice of parameterization for $\mathbb{R}^n$. This is a very attractive feature in terms of the differential geometry of our set up. Despite the significant theoretical advantages of the $d_H$ metric, the $d_D$ metric has an obvious advantage when studying mixture families: it comes from an inner product on $L^2_D$ and so commutes with addition on $L^2_D$. So it should be relatively easy to calculate with the $d_D$ metric when adding distributions, as happens in mixture families. As we shall see in practice, when one performs concrete calculations, the $d_H$ metric works well for exponential families and the $d_D$ metric works well for mixture families. While the $d_H$ metric leads to the Fisher information and to an equivalence with assumed density filters when used on exponential families, the $d_D$ metric for simple mixture families is equivalent to a Galerkin method.
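As a small self-contained illustration (our own, with arbitrarily chosen Gaussian parameters), both distances can be evaluated on a grid; for two Gaussians, $d_H$ also has a closed form via the Bhattacharyya coefficient, which provides a check:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)

def gaussian(mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

p1, p2 = gaussian(0.0, 1.0), gaussian(0.5, 1.2)

d_D = np.sqrt(np.trapz((p1 - p2)**2, x))
d_H = np.sqrt(np.trapz((np.sqrt(p1) - np.sqrt(p2))**2, x))
print(d_D, d_H)

# Closed-form check for d_H between two Gaussians:
# d_H^2 = 2 (1 - BC) with BC the Bhattacharyya coefficient.
mu1, s1, mu2, s2 = 0.0, 1.0, 0.5, 1.2
bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) \
     * np.exp(-(mu1 - mu2)**2 / (4 * (s1**2 + s2**2)))
print(np.sqrt(2 * (1 - bc)))  # matches d_H
```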
to make our notion of smooth families precise, we need to explain what we mean by a smooth map into an infinite dimensional space. let and be hilbert spaces and let be a continuous map ( need only be defined on some open subset of ). we say that is fréchet differentiable at if there exists a bounded linear map satisfying: if exists it is unique and we denote it by . this limit is called the fréchet derivative of at . it is the best linear approximation to at in the sense of minimizing the norm on . this allows us to define a smooth map defined on an open subset of to be an infinitely fréchet differentiable map. we define an _immersion_ of an open subset of into to be a map such that is injective at every point where is defined. the latter condition ensures that the best linear approximation to is a genuinely dimensional map. given an immersion defined on a neighbourhood of , we can think of the vector subspace of given by the image of as representing the tangent space at .

to make these ideas more concrete, let us suppose that is a probability distribution depending smoothly on some parameter , where is some open subset of . the map defines a map . at a given point and for a vector , one can compute the fréchet derivative to obtain: so we can identify the tangent space at with the following subspace of : we can formally define a smooth -dimensional family of probability distributions in to be an immersion of an open subset of into . equivalently, it is a smoothly parameterized probability distribution such that the above vectors in are linearly independent. we can define a smooth -dimensional family of probability distributions in in the same way. this time let be a square root of a probability distribution depending smoothly on . the tangent vectors in this case will be the partial derivatives of with respect to . since one normally prefers to work in terms of probability distributions rather than their square roots, we use the chain rule to write the tangent space as:

we have defined a family of distributions in terms of a single immersion into a hilbert space . in other words, we have defined a family of distributions in terms of a specific parameterization of the image of . it is tempting to try and phrase the theory in terms of the image of . to this end, one defines an _embedded submanifold_ of to be a subspace of which is covered by immersions from open subsets of , where each is a homeomorphism onto its image. with this definition, we can state that the tangent space of an embedded submanifold is independent of the choice of parameterization. one might be tempted to talk about submanifolds of the space of probability distributions, but one should be careful: the spaces and are not open subsets of and , and so do not have any obvious hilbert-manifold structure. to see why, consider figure [fig:perturbednormal], where we have perturbed a probability distribution slightly by subtracting a small delta-like function; the result is arbitrarily close to the normal distribution but is not in the space of densities.

given two tangent vectors at a point to a family of probability distributions, we can form their inner product using . this defines a so-called _riemannian metric_ on the family. with respect to a particular parameterization , we can compute the inner product of the and basis vectors given in equation [basisforh]. we call this quantity ; a numerical sketch of this computation for the gaussian family is given below.
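the following sketch (ours, not the paper's code) evaluates the metric matrix for the gaussian family parameterized by mean and standard deviation, approximating the tangent vectors by central finite differences and the integrals by trapezoidal quadrature; grid bounds and step sizes are our choices.

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

double p(double x, double mu, double sigma) {
    double z = (x - mu) / sigma;
    return std::exp(-0.5 * z * z) / (sigma * std::sqrt(2.0 * PI));
}

int main() {
    const double mu = 0.0, sigma = 1.0, h = 1e-5;
    const int n = 40000;
    const double lo = -12.0, hi = 12.0, dx = (hi - lo) / n;
    double g[2][2] = {{0, 0}, {0, 0}};
    for (int k = 0; k <= n; ++k) {
        double x = lo + k * dx;
        double w = (k == 0 || k == n) ? 0.5 : 1.0;
        // finite difference approximations of dp/dmu and dp/dsigma
        double dp[2] = {
            (p(x, mu + h, sigma) - p(x, mu - h, sigma)) / (2.0 * h),
            (p(x, mu, sigma + h) - p(x, mu, sigma - h)) / (2.0 * h) };
        // inner product of the tangent vectors (1/(2 sqrt(p))) dp/dtheta_i
        double den = 4.0 * p(x, mu, sigma);
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                g[i][j] += w * dp[i] * dp[j] / den * dx;
    }
    // for mu = 0, sigma = 1 one expects (1/4) * diag(1, 2),
    // i.e. a quarter of the fisher information matrix discussed next
    std::printf("g = [%f %f; %f %f]\n", g[0][0], g[0][1], g[1][0], g[1][1]);
}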
up to a constant factor, this quantity is the standard definition of the fisher information matrix; so our metric is, up to the factor , the fisher information matrix. we can now interpret this matrix as the fisher information metric and observe that, up to the constant factor, it is the metric induced by the hellinger distance. see , and for a more in-depth study of this differential geometric approach to statistics.

the gaussian family of densities can be parameterized using parameters mean and variance. with this parameterization the fisher metric is given by: \[ g(\mu, \sigma^2) = \begin{bmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{bmatrix}. \] the representation of the metric as a matrix depends heavily upon the choice of parameterization for the family. the gaussian family may be considered as a particular exponential family with parameters and given by: where is chosen to normalize . it follows that: this is related to the familiar parameterization in terms of and by the standard change of variables \( \theta_1 = \mu/\sigma^2 \), \( \theta_2 = -1/(2\sigma^2) \). one can compute the fisher information metric relative to the parameterization to obtain the covariance matrix of the sufficient statistics, which in terms of and reads \[ \begin{bmatrix} \sigma^2 & 2\mu\sigma^2 \\ 2\mu\sigma^2 & 2\sigma^4 + 4\mu^2\sigma^2 \end{bmatrix}. \]

the particular importance of the metric structure for this paper is that it allows us to define the orthogonal projection of onto the tangent space. suppose that one has linearly independent vectors spanning some subspace of a hilbert space. by linearity, one can write the orthogonal projection onto as: \[ \pi(v) = \sum_{i=1}^m \Big[ \sum_{j=1}^m \lambda^{ij} \langle v, w_j \rangle \Big] w_i \] for some appropriately chosen constants \( \lambda^{ij} \). since acts as the identity on , we see that \( \lambda^{ij} \) must be the inverse of the matrix \( \langle w_i, w_j \rangle \). we can apply this to the basis given in equation [basisforh]. defining to be the inverse of the matrix , we obtain the following formula for projection, using the hellinger metric, onto the tangent space of a family of distributions: \[ \pi_H^{\theta}(v) = \sum_{i=1}^m \Big[ \sum_{j=1}^m g^{ij} \Big\langle v, \frac{1}{2\sqrt{p}} \frac{\partial p}{\partial \theta_j} \Big\rangle \Big] \frac{1}{2\sqrt{p}} \frac{\partial p}{\partial \theta_i}. \]

the ideas from the previous section can also be applied to the direct metric. this gives a different riemannian metric on the manifold. we will write to denote the metric when written with respect to a particular parameterization. in coordinates , the metric on the gaussian family works out to be \[ h(\mu, \sigma) = \frac{1}{4\sigma^3\sqrt{\pi}} \begin{bmatrix} 1 & 0 \\ 0 & 3/2 \end{bmatrix}. \] we can obtain a formula for projection in using the direct metric and the basis given in equation [basisford]. we write for the matrix inverse of , and obtain \[ \pi_D^{\theta}(v) = \sum_{i=1}^m \Big[ \sum_{j=1}^m h^{ij} \Big\langle v, \frac{\partial p}{\partial \theta_j} \Big\rangle \Big] \frac{\partial p}{\partial \theta_i}. \]

given a family of probability distributions parameterised by , we wish to approximate an infinite dimensional solution to the non-linear filtering spde using elements of this family. thus we take the kushner stratonovich equation [kse:str], view it as defining a stochastic vector field in , and then project that vector field onto the tangent space of our family. the projected equations can then be viewed as giving a stochastic differential equation for . in this section we will write down these projected equations explicitly. let be the parameterization for our family. a curve in the parameter space corresponds to a curve in . for such a curve, the left hand side of the kushner stratonovich equation [kse:str] can be written: where we write .
is the basis for the tangent space of the manifold at . given the projection formula in equation [l2projectionformula], we can project the terms on the right hand side onto the tangent space of the manifold using the direct metric as follows: \[ \begin{aligned} \pi_D^{\theta}[\mathcal{L}^* p] &= \sum_{i=1}^m \Big[ \sum_{j=1}^m h^{ij} \langle \mathcal{L}^* p, v_j \rangle \Big] v_i = \sum_{i=1}^m \Big[ \sum_{j=1}^m h^{ij} \langle p, \mathcal{L} v_j \rangle \Big] v_i \\ \pi_D^{\theta}[\gamma^k(p)] &= \sum_{i=1}^m \Big[ \sum_{j=1}^m h^{ij} \langle \gamma^k(p), v_j \rangle \Big] v_i \end{aligned} \] thus if we take the projection of each side of equation ([kse:str]), we obtain an equation in which every term is a linear combination of the basis vectors . since the form a basis of the tangent space, we can equate the coefficients of to obtain: this is the promised finite dimensional stochastic differential equation for corresponding to projection. if preferred, one could instead project the kushner stratonovich equation using the hellinger metric. this yields the following stochastic differential equation, derived originally in : note that the inner products in this equation are the direct inner products: we are simply using the inner product notation as a compact notation for integrals.

equations [kse:l2projected] and [kse:hellingerprojected] both give finite dimensional stochastic differential equations that we hope will approximate well the solution to the full kushner stratonovich equation. we wish to solve these finite dimensional equations numerically and thereby obtain a numerical approximation to the non-linear filtering problem. because we are solving a low dimensional system of equations, we hope to end up with a more efficient scheme than a brute-force finite difference approach. a finite difference approach can also be seen as a reduction of the problem to a finite dimensional system. however, in a finite difference approach the finite dimensional system still has a very large dimension, determined by the number of grid points into which one divides the domain. by contrast, the finite dimensional manifolds we shall consider will be defined by only a handful of parameters.

the specific solution algorithm will depend upon numerous choices: whether to use or hellinger projection; which family of probability distributions to choose; how to parameterize that family; the representation of the functions , and ; how to perform the integrations which arise from the calculation of expectations and inner products; and the numerical method selected to solve the finite dimensional equations. to test the effectiveness of the projection idea, we have implemented a c++ engine which performs the numerical solution of the finite dimensional equations and allows one to make various selections from the options above. currently our implementation is restricted to the case of the direct projection for a -dimensional state and -dimensional noise. however, the engine does allow one to experiment with various manifolds, parameterizations and functions , and . we use object oriented programming techniques in order to allow this flexibility. our implementation contains two key classes, the functionring and the manifold.
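a minimal c++ rendering of these two abstractions might look as follows; this is our own sketch, and all names and signatures here are illustrative rather than the engine's actual api.

#include <memory>
#include <vector>

// an element of the function ring: supports the operations the engine needs
// in order to assemble inner products, expectations and tangent vectors.
class RingElement {
public:
    virtual ~RingElement() = default;
    virtual std::shared_ptr<RingElement> add(const RingElement& other) const = 0;
    virtual std::shared_ptr<RingElement> multiply(const RingElement& other) const = 0;
    virtual std::shared_ptr<RingElement> differentiate() const = 0;
    virtual double integrate() const = 0;   // integral over the whole real line
};

// a finite dimensional family of densities: maps the current point theta to a
// density and exposes the tangent vectors needed for the projection.
class Manifold {
public:
    virtual ~Manifold() = default;
    virtual int dimension() const = 0;
    virtual std::shared_ptr<RingElement> density() const = 0;        // p(.; theta)
    virtual std::vector<std::shared_ptr<RingElement>>
        computeTangentVectors() const = 0;                           // dp/dtheta_i
    virtual void updatePoint(const std::vector<double>& dTheta) = 0; // move theta
    virtual void finalizeTimeStep() = 0;  // hook to renormalize / change chart
};

int main() { return 0; }   // interface outline only; no executable behaviour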
to perform the computation, one must choose a data structure to represent elements of the function space. however, the most effective choice of representation depends upon the family of probability distributions one is considering and the functions , and . thus the c++ engine does not manipulate the data structure directly, but instead works with the functions via the functionring interface. a uml (unified modelling language) outline of the interface is given in table [uml:functionring].

the other key abstraction is the manifold. we give a uml representation of this abstraction in table [uml:manifold]. for readers unfamiliar with uml, we remark that the symbol can be read ``list''. for example, the computetangentvectors function returns a list of functions. the manifold uses some convenient internal representation for a point, the most obvious representation being simply the -tuple . on request the manifold is able to provide the density associated with any point, represented as an element of the functionring. in addition the manifold can compute the tangent vectors at any point. the computetangentvectors method returns a list of elements of the functionring corresponding to each of the vectors in turn. if the point is represented as a tuple, the point-update method simply adds the components of the tuple to each of the components of . if a different internal representation is used for the point, the method should make the equivalent change to this internal representation. a dedicated method is called by our algorithm at the end of every time step. at this point the implementation can choose to change its parameterization for the state. thus the manifold abstraction allows us (in principle at least) to use a more sophisticated atlas for the manifold than just a single chart.

one should not draw too close a parallel between these computing abstractions and similarly named mathematical abstractions. for example, the space of objects that can be represented by a given functionring does not need to form a differential ring, despite the differentiation method. this is because the differentiation function will not be called infinitely often by the algorithm below, so the functions in the ring do not need to be infinitely differentiable. similarly, the end-of-step method allows the implementation more flexibility than simply changing chart. from one time step to the next, it could decide to use a completely different family of distributions. the interface even allows the dimension to change from one time step to the next. we do not currently take advantage of this possibility, but adaptively choosing the family of distributions would be an interesting topic for further research.

the c++ engine is initialized with a manifold object, a copy of the initial , and objects representing , and . at each time point the engine asks the manifold to compute the tangent vectors given the current point. using the multiply and integrate functions of the functionring class, the engine can compute the inner products of any two functions, hence it can compute the metric matrix . similarly, the engine can ask the manifold for the density function given the current point and can then compute . proceeding in this way, all the coefficients of and in equation [kse:l2projected] can be computed at any given point in time. were equation [kse:l2projected] an itô sde, one could now numerically estimate , the change in over a given time interval, in terms of and , the change in . one would then use the point-update method to compute the new point, and then one could repeat the calculation for the next time interval.
in other words, were equation [kse:l2projected] an itô sde, we could numerically solve the sde using the euler scheme. however, equation [kse:l2projected] is a stratonovich sde, so the euler scheme is no longer valid. various numerical schemes for solving stochastic differential equations are considered in and . one of the simplest is the stratonovich heun method described in . suppose that one wishes to solve the sde: the stratonovich heun method generates an estimate for the solution at the -th time interval using the formulae: in these formulae, is the size of the time interval and is the change in . one can think of as being a prediction and the value as being a correction. thus this scheme is a direct translation of the standard euler heun scheme for ordinary differential equations.

we can use the stratonovich heun method to numerically solve equation [kse:l2projected]. given the current value for the state, compute an estimate for by replacing with and with in equation [kse:l2projected]. using the point-update method, compute a prediction . now compute a second estimate for using equation [kse:l2projected] in the state . pass the average of the two estimates to the point-update function to obtain the new state . at the end of each time step, the end-of-step method is called. this provides the manifold implementation with the opportunity to perform checks such as validation of the state, to correct the normalization and, if desired, to change the representation it uses for the state.

one small observation worth making is that equation [kse:l2projected] contains the term , the inverse of the matrix . however, it is not necessary to actually calculate the matrix inverse in full. it is better numerically to multiply both sides of equation [kse:l2projected] by the matrix and then compute by solving the resulting linear equations directly. this is the approach taken by our algorithm.

as we have already observed, there is a wealth of choices one could make for the numerical scheme used to solve equation [kse:l2projected]; we have simply selected the most convenient. the existing functionring and manifold implementations could be used directly by many of these schemes, in particular those based on runge kutta schemes. in principle one might also consider schemes that require explicit formulae for higher derivatives, such as . in this case one would need to extend the manifold abstraction to provide this information. similarly, one could use the same concepts in order to solve equation [kse:hellingerprojected], where one uses the hellinger projection. in this case the functionring would need to be extended to allow division. this would in turn complicate the implementation of the integrate function, which is why we have not yet implemented this approach.
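before moving on, here is a compact sketch (our own illustration) of one stratonovich heun step as described above, for a scalar observation increment; solveProjected is a placeholder standing in for the routine that assembles the projected right hand side and solves the linear system with the metric matrix, and is not a function of the actual engine.

#include <cstdio>
#include <vector>

struct Coefficients { std::vector<double> a, b; };   // drift and diffusion in theta

// stub: in the real engine this would assemble the inner products
// <L* p, v_j> and <gamma(p), v_j> and solve h x = rhs for the coefficients
Coefficients solveProjected(const std::vector<double>& theta) {
    return { std::vector<double>(theta.size(), 0.0),
             std::vector<double>(theta.size(), 0.0) };
}

// one stratonovich heun step for d theta = a(theta) dt + b(theta) o dY
std::vector<double> heunStep(std::vector<double> theta, double dt, double dY) {
    Coefficients c0 = solveProjected(theta);
    std::vector<double> pred(theta);
    for (size_t i = 0; i < theta.size(); ++i)        // predictor (euler)
        pred[i] += c0.a[i] * dt + c0.b[i] * dY;
    Coefficients c1 = solveProjected(pred);          // re-evaluate at the prediction
    for (size_t i = 0; i < theta.size(); ++i)        // corrector (average the slopes)
        theta[i] += 0.5 * (c0.a[i] + c1.a[i]) * dt + 0.5 * (c0.b[i] + c1.b[i]) * dY;
    return theta;
}

int main() {
    std::vector<double> theta = {0.0, 1.0};
    theta = heunStep(theta, 0.01, 0.05);
    std::printf("theta = (%f, %f)\n", theta[0], theta[1]);
}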
let denote the space of functions which can be written as finite linear combinations of terms of the form: where is a non-negative integer and , and are constants. this space is closed under addition, multiplication and differentiation, so it forms a differential ring. we have written an implementation of the functionring corresponding to . although the implementation is mostly straightforward, some points are worth noting.

firstly, we store elements of our ring in memory as a collection of tuples . although one can write: for appropriate , the use of such a term in computer memory should be avoided, as it will rapidly lead to significant rounding errors. a small amount of care is required throughout the implementation to avoid such rounding errors.

secondly, let us consider explicitly how to implement integration for this ring. let us define to be the integral of . using integration by parts one has: since and , we can compute recursively. hence we can analytically compute the integral of for any polynomial . by substitution, we can now integrate for any . by completing the square we can analytically compute the integral of so long as . putting all this together, one has an algorithm for analytically integrating the elements of .

let denote the space of probability distributions that can be written as for some real numbers , and with . given a smooth curve in we can write: we can then compute: we deduce that the tangent vectors of any smooth submanifold of must also lie in . in particular this means that our implementation of will be sufficient to represent the tangent vectors of any manifold consisting of finite normal mixtures. combining these ideas we obtain the main theoretical result of the paper.

let be a parameterization for a family of probability distributions all of which can be written as a mixture of at most gaussians. let , and be functions in the ring . in this case one can carry out the direct projection algorithm for the problem given by equation ([lanc1-1]) using analytic formulae for all the required integrations.

although the condition that , and lie in may seem somewhat restrictive, when this condition is not met one could use taylor expansions to find approximate solutions.

although the choice of parameterization does not affect the choice of , it does affect the numerical behaviour of the algorithm. in particular, if one chooses a parameterization with domain a proper subset of , the algorithm will break down the moment the point leaves the domain. with this in mind, in the numerical examples given later in this paper we parameterize normal mixtures of gaussians with a parameterization defined on the whole of . we describe this parameterization below. label the parameters (with ), , (with ) and (with ). this gives a total of parameters, so we can write . given a point , define variables as follows: where the function sends a probability in [0, 1] to its log odds, log(p/(1-p)). we can now write the density associated with as: we do not claim this is the best possible choice of parameterization, but it certainly performs better than some more naive parameterizations with bounded domains of definition. we will call the direct projection algorithm onto the normal mixture family, with this parameterization, the _l2 nm projection filter_. a similar algorithm is described in for projection using the hellinger metric onto an exponential family; we refer to this as the _he projection filter_.
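to make the two preceding ingredients concrete, the following sketch (ours; all names such as momentI are illustrative, not the engine's) combines a simplified two-component version of the unconstrained mixture parameterization, with the weight passed through the logistic map, and the integration-by-parts recursion for the gaussian moments I_n = integral of x^n exp(a x^2 + b x + c), valid for a < 0; exact integrals of this kind are what make the l2 nm filter's integrations analytic.

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// I_0 by completing the square; I_1 from the gaussian mean -b/(2a); then
// 2a I_n = -(n-1) I_{n-2} - b I_{n-1}, obtained by integrating by parts.
double momentI(int n, double a, double b, double c) {
    double I0 = std::sqrt(PI / -a) * std::exp(c - b * b / (4.0 * a));
    if (n == 0) return I0;
    double I1 = -b / (2.0 * a) * I0;
    if (n == 1) return I1;
    double prev = I0, cur = I1;
    for (int k = 2; k <= n; ++k) {
        double next = -((k - 1) * prev + b * cur) / (2.0 * a);
        prev = cur; cur = next;
    }
    return cur;
}

int main() {
    // unconstrained parameters: xi -> weight via logistic, eta -> sigma via exp
    double xi = 0.0, mu1 = -1.0, eta1 = std::log(0.5), mu2 = 1.0, eta2 = 0.0;
    double lambda = 1.0 / (1.0 + std::exp(-xi));
    double lam[2] = { lambda, 1.0 - lambda };
    double mus[2] = { mu1, mu2 };
    double ss[2]  = { std::exp(eta1), std::exp(eta2) };
    // n-th moment of the mixture as a weighted sum of single-gaussian moments,
    // writing each normalized gaussian as exp(a x^2 + b x + c)
    for (int n = 0; n <= 2; ++n) {
        double m = 0.0;
        for (int i = 0; i < 2; ++i) {
            double a = -0.5 / (ss[i] * ss[i]);
            double b = mus[i] / (ss[i] * ss[i]);
            double c = -0.5 * mus[i] * mus[i] / (ss[i] * ss[i])
                       - std::log(ss[i] * std::sqrt(2.0 * PI));
            m += lam[i] * momentI(n, a, b, c);
        }
        std::printf("moment %d = %f\n", n, m);   // moment 0 should print 1
    }
}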
it is worth highlighting the key differences between our algorithm and the exponential projection algorithm described in .

* in , the special case of the cubic sensor was considered. it was clear that one could in principle adapt the algorithm to cope with other problems, but there remained symbolic manipulation that would have to be performed by hand. our algorithm automates this process by using the functionring abstraction.

* when one projects onto an exponential family, the stochastic term in equation ([kse:hellingerprojected]) simplifies to a term with constant coefficients. this means it can be viewed equally well as either an itô or a stratonovich sde. the practical consequence of this is that the he algorithm can use the euler maruyama scheme rather than the stratonovich heun scheme to solve the resulting stochastic odes. moreover, in this case the euler maruyama scheme coincides with the generally more precise milstein scheme.

* in the case of the cubic sensor, the he algorithm requires one to numerically evaluate integrals such as: where the are real numbers. performing such integrals numerically considerably slows the algorithm. in effect one ends up using a rather fine discretization scheme to evaluate the integral, and this somewhat offsets the hoped-for advantage over a finite difference method.

in this section we compare the results of using the direct projection filter onto a mixture of normal distributions with other numerical methods. in particular we compare it with:

1. a finite difference method using a fine grid, which we term the _exact filter_. various convergence results are known ( and ) for this method. in the simulations shown below we use a grid with points on the -axis and time points. in our simulations we could not visually distinguish the resulting graphs when the grid was refined further, justifying us in considering this to be extremely close to the exact result. the precise algorithm used is as described in the section on ``partial differential equations methods'' in chapter 8 of bain and crisan .

2. the _extended kalman filter_ (ek). this is a somewhat heuristic approach to solving the non-linear filtering problem, but one which works well so long as one assumes the system is almost linear. it is implemented essentially by linearising all the functions in the problem and then using the exact kalman filter to solve this linear problem; the details are given in . the ek filter is widely used in applications and so provides a standard benchmark. however, it is well known that it can give wildly inaccurate results for non-linear problems, so it should be unsurprising to see that it performs badly for most of the examples we consider.

3. the he projection filter. in fact we have implemented a generalization of the algorithm given in that can cope with filtering problems where is an arbitrary polynomial, is constant and . thus we have been able to examine the performance of the exponential projection filter over a slightly wider range of problems than have previously been considered.

to compare these methods, we have simulated solutions of the equations [lanc1-1] for various choices of , and . we have also selected a prior probability distribution for and then compared the numerical estimates for the probability distribution at subsequent times given by the different algorithms.
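for reference, here is a minimal scalar extended kalman filter of the kind used as a benchmark above (our own sketch under the assumption of unit observation noise; the benchmark implementation referenced in the text may differ in details).

#include <cmath>
#include <cstdio>

// model: dX = f(X) dt + sigma dW,  dY = b(X) dt + dV  (unit observation noise)
struct EKF {
    double m, P;      // mean and variance of the gaussian approximation
    double sigma;     // state noise intensity
    // one euler step of the ekf equations, linearizing f and b at the mean
    void step(double dt, double dY,
              double (*f)(double), double (*fp)(double),
              double (*b)(double), double (*bp)(double)) {
        double F = fp(m), H = bp(m);
        double innov = dY - b(m) * dt;                       // innovation increment
        m += f(m) * dt + P * H * innov;                      // mean update
        P += (2.0 * F * P + sigma * sigma - P * H * P * H) * dt;  // riccati update
    }
};

// quadratic sensor example: no drift, b(x) = x^2
static double f(double)    { return 0.0; }
static double fp(double)   { return 0.0; }
static double b(double x)  { return x * x; }
static double bp(double x) { return 2.0 * x; }

int main() {
    EKF ekf{0.5, 1.0, 1.0};
    // fabricated observation increments, purely for illustration
    for (int k = 0; k < 10; ++k) ekf.step(0.01, 0.02, f, fp, b, bp);
    std::printf("m = %f, P = %f\n", ekf.m, ekf.P);
}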
in the examples below we have selected a fixed value for the initial state rather than drawing at random from the prior distribution. this should have no more impact upon the results than does the choice of seed for the random number generator. since each of the approximate methods can only represent certain distributions accurately, we have had to use different prior distributions for each algorithm. to compare the two projection filters, we have started with a polynomial exponential distribution for the prior and then found a nearby mixture of normal distributions. this nearby distribution was found using a gradient search algorithm to minimize the numerically estimated norm of the difference of the normal mixture and polynomial exponential distributions. as indicated earlier, polynomial exponential distributions and normal mixtures are qualitatively similar, so the prior distributions we use are close for each algorithm. for the extended kalman filter, one has to approximate the prior distribution with a single gaussian. we have done this by moment matching. inevitably this does not always produce satisfactory results. for the exact filter, we have used the same prior as for the projection filter.

the first test case we have examined is the linear filtering problem. in this case the probability density will be a gaussian at all times, hence if we project onto the two dimensional family consisting of all gaussian distributions there should be no loss of information. thus both projection filters should give exact answers for linear problems. this is indeed the case, and gives some confidence in the correctness of the computer implementations of the various algorithms.

the second test case we have examined is the _quadratic sensor_. this is problem [lanc1-1] with , and for some positive constants and . in this problem the non-injectivity of tends to cause the distribution at any time to be bimodal. to see why, observe that the sensor provides no information about the sign of ; once the state of the system has passed through , we expect the probability density to become approximately symmetrical about the origin.

since we expect the probability density to be bimodal for the quadratic sensor, it makes sense to approximate the distribution with a linear combination of two gaussian distributions. in figure [quadraticsensortimepoints] we show the probability density as computed by three of the algorithms at 10 different time points for a typical quadratic sensor problem. to reduce clutter we have not plotted the results for the exponential filter. the prior exponential distribution used for this simulation was . the initial state was and . as one can see, the probability densities computed using the exact filter and the l2 nm filter become visually indistinguishable when the state moves away from the origin. the extended kalman filter is, as one would expect, completely unable to cope with these bimodal distributions. in this case the extended kalman filter is simply representing the larger of the two modes.

in figure [quadraticsensorresiduals] we have plotted the _residuals_ for the different algorithms when applied to the quadratic sensor problem. we define the residual to be the norm of the difference between the exact filter distribution and the estimated distribution.
as can be seen, the l2 nm projection filter outperforms the he projection filter when applied to the quadratic sensor problem. notice that the residuals are initially small for both the he and the l2 nm filter. the superior performance of the l2 nm projection filter in this case stems from the fact that one can more accurately represent the distributions that occur using the normal mixture family than using the polynomial exponential family. if preferred, one could define a similar notion of residual using the hellinger metric; the results would be qualitatively similar. one interesting feature of figure [quadraticsensorresiduals] is that the error remains bounded in size, when one might expect the error to accumulate over time. this suggests that the arrival of new measurements is gradually correcting for the errors introduced by the approximation.

a third test case we have considered is the _general cubic sensor_. in this problem one has , for some constant , and is some cubic function. the case when is a multiple of is called the _cubic sensor_; it was used as the test case for the exponential projection filter using the hellinger metric considered in . it is of interest because it is the simplest case where is injective but where it is known that the problem cannot be reduced to a finite dimensional stochastic differential equation. it is known from earlier work that the exponential filter gives excellent numerical results for the cubic sensor. our new implementations allow us to examine the general cubic sensor.

in figure [cubicsensortimepoints] we have plotted example probability densities over time for the problem with , and . with two turning points, this problem is very far from linear. as can be seen in figure [cubicsensortimepoints], the l2 nm projection remains close to the exact distribution throughout. a mixture of only two gaussians is enough to approximate quite a variety of differently shaped distributions with perhaps surprising accuracy. as expected, the extended kalman filter gives poor results until the state moves to a region where is injective. the results of the exponential filter have not been plotted in figure [cubicsensortimepoints] to reduce clutter; it gave similar results to the l2 nm filter. the prior polynomial exponential distribution used for this simulation was . the initial state was , which is one of the modes of the prior distribution. the initial value for was taken to be .

one new phenomenon that occurs when considering the cubic sensor is that the algorithm sometimes abruptly fails. this is true for both the l2 nm projection filter and the he projection filter. to show the behaviour over time more clearly, in figure [cubicsensormeansandsds] we have shown a plot of the mean and standard deviation as estimated by the l2 nm projection filter against the actual mean and standard deviation. we have also indicated the true state of the system. the mean for the l2 nm filter drops to at approximately time ; it is at this point that the algorithm has failed.
what has happened is that as the state has moved to a region where the sensor is reasonably close to being linear, the probability distribution has tended to a single normal distribution. such a distribution lies on the boundary of the family consisting of a mixture of two normal distributions. as we approach the boundary, the metric matrix ceases to be invertible, causing the failure of the algorithm. analogous phenomena occur for the exponential filter. the result of running numerous simulations suggests that the he filter is rather less robust than the l2 nm projection filter. the typical behaviour is that the exponential filter maintains a very low residual right up until the point of failure. the l2 nm projection filter, on the other hand, tends to give slightly inaccurate results shortly before failure and can often correct itself without failing. this behaviour can be seen in figure [cubicsensorresiduals]. in this figure, the residual for the exponential projection remains extremely low until the algorithm fails abruptly; this is indicated by the vertical dashed line. the l2 nm filter, on the other hand, deteriorates from time but only fails at time .

the residuals of the l2 nm method are rather large between times and , but note that the accuracy of the estimates for the mean and standard deviation in figure [cubicsensormeansandsds] remains reasonable throughout this time. to understand this, note that for two normal distributions with means a given distance apart, the distance between the distributions increases as the standard deviations of the distributions drop. thus the increase in residuals between times and is to a large extent due to the drop in standard deviation between these times. as a result, one may feel that the residual does not capture precisely what it means for an approximation to be ``good''. in the next section we will show how to measure residuals in a way that corresponds more closely to the intuitive idea of the distributions having visually similar distribution functions. in practice, one's definition of a good approximation will depend upon the application. although one might argue that the filter is in fact behaving reasonably well between times and , it does ultimately fail. there is an obvious fix for failures like this: when the current point is sufficiently close to the boundary of the manifold, simply approximate the distribution with an element of the boundary.
in other words, approximate the distribution using a mixture of fewer gaussians. since this means moving to a lower dimensional family of distributions, the numerical implementation will be more efficient on the boundary. this will provide a temporary fix for the failure of the algorithm, but it raises another problem: as the state moves back into a region where the problem is highly non-linear, how can one decide how to leave the boundary and start adding additional gaussians back into the mixture? we hope to address this question in a future paper.

particle methods approximate the probability density using discrete measures of the form: these measures are generated using a monte carlo method. the measure can be thought of as the empirical distribution associated with randomly located particles at position and of stochastic mass . particle methods are currently some of the most effective numerical methods for solving the filtering problem. see and the references therein for details of specific particle methods and convergence results.

the first issue in comparing projection methods with particle methods is that, as a linear combination of dirac masses, one can only expect a particle method to converge weakly to the exact solution. in particular, the metric and the hellinger metric are both inappropriate measures of the residual between the exact solution and a particle approximation. indeed the distance is not defined and the hellinger distance will always take the value . to combat this issue, we will measure residuals using the lévy metric. if and are two probability measures on and and are the associated cumulative distribution functions, then the lévy metric is defined by \[ d_L(F, G) = \inf\{\epsilon > 0 : F(x - \epsilon) - \epsilon \le G(x) \le F(x + \epsilon) + \epsilon \ \text{for all } x\}. \] this can be interpreted geometrically as the size of the largest square with sides parallel to the coordinate axes that can be inserted between the completed graphs of the cumulative distribution functions (the completed graph of the distribution function is simply the graph of the distribution function with vertical line segments added at discontinuities). the lévy metric can be seen as a special case of the lévy prokhorov metric, which can be used to measure the distance between measures on a general metric space. for polish spaces, the lévy prokhorov metric metrises the weak convergence of probability measures. thus the lévy metric provides a reasonable measure of the residual of a particle approximation. we will call residuals measured in this way lévy residuals.

a second issue in comparing projection methods with particle methods is deciding how many particles to use for the comparison. a natural choice is to compare a projection method onto an -dimensional manifold with a particle method that approximates the distribution using particles. in other words, equate the dimension of the families of distributions used for the approximation.

a third issue is deciding which particle method to choose for the comparison from the many algorithms that can be found in the literature. we can work around this issue by calculating the best possible approximation to the exact distribution that can be made using dirac masses. this approach will substantially underestimate the lévy residual of a particle method: being monte carlo methods, large numbers of particles would be required in practice.
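as an illustration of this metric, the following brute-force sketch (ours, independent of the engine) estimates the lévy distance between two cumulative distribution functions sampled on a common grid; restricting the check to grid points and using bisection on epsilon are our simplifications, so the result is an estimate of the same flavour as the bounds used in the paper.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

double levyMetric(const std::vector<double>& x,   // increasing grid
                  const std::vector<double>& F,   // first cdf on the grid
                  const std::vector<double>& G) { // second cdf on the grid
    // F evaluated at an arbitrary abscissa as a right-continuous step function
    auto cdfAt = [&](double xq) {
        size_t j = std::upper_bound(x.begin(), x.end(), xq) - x.begin();
        return j == 0 ? 0.0 : F[j - 1];
    };
    double d = 0.0;
    for (size_t i = 0; i < x.size(); ++i) {
        // smallest eps with F(x-eps)-eps <= G(x) <= F(x+eps)+eps at this point;
        // the condition is monotone in eps, so bisection applies
        double lo = 0.0, hi = 1.0;                 // eps is at most 1 for cdfs
        for (int it = 0; it < 40; ++it) {
            double eps = 0.5 * (lo + hi);
            bool ok = cdfAt(x[i] - eps) - eps <= G[i] &&
                      G[i] <= cdfAt(x[i] + eps) + eps;
            (ok ? hi : lo) = eps;
        }
        d = std::max(d, hi);
    }
    return d;
}

int main() {
    const int n = 600;
    std::vector<double> x(n), F(n), G(n);
    for (int i = 0; i < n; ++i) {
        x[i] = -4.0 + 8.0 * i / (n - 1);
        F[i] = 0.5 * (1.0 + std::erf(x[i] / std::sqrt(2.0)));          // N(0,1)
        G[i] = 0.5 * (1.0 + std::erf((x[i] - 0.5) / std::sqrt(2.0)));  // N(0.5,1)
    }
    std::printf("levy distance estimate: %f\n", levyMetric(x, F, G));
}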
in figure [levyresiduals] we have plotted bounds on the lévy residuals for the two projection methods for the quadratic sensor. since mixtures of two normal distributions lie in a dimensional family, we have compared these residuals with the best possible lévy residual for a mixture of three dirac masses. to compute the lévy residual between two functions, we have first approximated the cumulative distribution functions using step functions. we have used the same grid for these steps as we used to compute our ``exact'' filter. we have then used a brute force approach to compute a bound on the size of the largest square that can be placed between these step functions. thus if we have used a grid with points to discretize the -axis, we will need to make comparisons to estimate the lévy residual. more efficient algorithms are possible, but this approach is sufficient for our purposes. the maximum accuracy of the computation of the lévy metric is constrained by the grid size used for our ``exact'' filter. since the grid size in the direction for our ``exact'' filter is , our estimates for the projection residuals are bounded below by .

the computation of the minimum residual for a particle filter is a little more complex. let denote the minimum lévy distance between a distribution with cumulative distribution and a distribution of particles. let denote the minimum number of particles required to approximate with a residual of less than . if we can compute , we can use a line search to compute . to compute for an increasing step function with and , one needs to find the minimum number of steps in a similar increasing step function that is never further than away from in the metric. one constructs candidate step functions by starting with and then moving along the -axis, adding in additional steps as required to remain within a distance . an optimal is found by adding in steps as late as possible and, when adding a new step, making it as high as possible. in this way we can compute and for step functions. we can then compute bounds on these values for a given distribution by approximating its cumulative distribution function with a step function.

as can be seen, the exponential and mixture projection filters have similar accuracy as measured by the lévy residual, and it is impossible to match this accuracy using a model containing only particles.

projection onto a family of normal mixtures using the metric allows one to approximate the solutions of the non-linear filtering problem with surprising accuracy using only a small number of component distributions.
in this regard it behaves in a very similar fashion to the projection onto an exponential family using the hellinger metric that has been considered previously. the l2 nm projection filter has one important advantage over the he projection filter: for problems with polynomial coefficients, all required integrals can be calculated analytically. problems with more general coefficients can be addressed using taylor series. one expects this to translate into a better performing algorithm, particularly if the approach is extended to higher dimensional problems.

we tested both filters against the optimal filter in simple but interesting systems, and we provided a metric to compare the performance of each filter with the optimal one. we also tested both filters against a particle method, showing that with the same number of parameters the l2 nm filter outperforms the best possible particle method in the lévy metric.

areas of future research that we hope to address include: the relationship between the projection approach and existing numerical approaches to the filtering problem; the convergence of the algorithm; improving the stability and performance of the algorithm by adaptively changing the parameterization of the manifold; and numerical simulations in higher dimensions.

brigo, d., diffusion processes, manifolds of exponential densities, and nonlinear filtering, in: o. e. barndorff-nielsen and e. b. vedel jensen (eds.), geometry in present day science, world scientific, 1999.

davis, m. h. a., marcus, s. i., an introduction to nonlinear filtering, in: m. hazewinkel, j. c. willems (eds.), stochastic systems: the mathematics of filtering and identification and applications, reidel, dordrecht, 1981, pp. 53-75.

hanzon, b., a differential-geometric approach to approximate nonlinear filtering, in: c. t. j. dodson (ed.), geometrization of statistical theory, ulmd publications, university of lancaster, 1987, pp. 219-223.

kenney, j., stirling, w., nonlinear filtering of convex sets of probability distributions, presented at the 1st international symposium on imprecise probabilities and their applications, ghent, belgium, 29 june - 2 july 1999.
|
we examine some differential geometric approaches to finding approximate solutions to the continuous time nonlinear filtering problem . our primary focus is a projection method using the direct metric onto a family of normal mixtures . we compare this method to earlier projection methods based on the hellinger distance / fisher metric and exponential families , and we compare the mixture projection filter with a particle method with the same number of parameters . we study particular systems that may illustrate the advantages of this filter over other algorithms when comparing outputs with the optimal filter . we finally consider a specific software design that is suited for a numerically efficient implementation of this filter and provide numerical examples . * keywords : * finite dimensional families of probability distributions , exponential families , mixture families , hellinger distance , fisher information metric , direct l2 metric , stochastic filtering * ams classification codes : 53b25 , 53b50 , 60g35 , 62e17 , 62m20 , 93e11 *
|
the dispersion of a solid phase in turbulent wall-bounded flows occurs in many technological processes, such as particle-blade interactions in the turbines of aeronautical engines. inertial particles transported in turbulent wall flows display a characteristic preferential accumulation close to the wall. this phenomenology is denoted turbophoresis and has been the subject of research in the last three decades. the turbophoretic drift towards the wall is essentially controlled by one nondimensional parameter, the viscous stokes number , that is, the ratio between the particle relaxation time and the viscous time scale of the flow, with and the particle density and diameter, and the fluid density and viscosity, and the friction velocity. in particular, the strongest particle accumulation towards the wall is found when the stokes number is about twenty five, . particles of vanishing stokes number behave as passive tracers and are therefore uniformly distributed in a turbulent flow, whereas heavy particles become insensitive to the turbulence fluctuations, the ballistic limit at high stokes numbers. a review of experimental studies and direct numerical simulations (dns) of turbophoresis over the last years can be found in , with the most recent simulations presented in . in most of these previous investigations, the dynamics of the inertial particles has been studied in parallel flows such as channels or pipes. a different approach is necessary in spatially evolving flows, where it is fundamental to understand the dynamics of the near-wall accumulation when the local stokes number of the dispersed phase changes during the particle evolution, i.e. in the streamwise direction, leading to non-trivial effects such as those observed in particle-laden turbulent round jets.

in this context, we present here statistical data from a large-scale direct numerical simulation (dns) of a spatially evolving particle-laden turbulent boundary layer at reynolds number based on the momentum thickness, corresponding to a friction reynolds number based on the friction velocity. this can be seen as a moderate reynolds number in experiments, but it is certainly high for a fully resolved numerical simulation. the study of particle dynamics in spatially developing boundary layers assumes a fundamental role in advancing our understanding of multiphase flows, because it represents the ideal flow where the stokes number changes along the mean stream direction. a nominal stokes number can be defined to measure the amount of inertia of a single particle population by using the free-stream velocity and the displacement thickness of the boundary layer at the inflow of the computational domain (or any reference station such as the seeding location). for a spatially evolving flow, more meaningful local stokes numbers need to be defined, both based on local internal units, as usually defined in parallel wall flows, and using external flow units, where it should be remarked that the displacement thickness of the boundary layer and the friction velocity change along the streamwise direction. indeed, the two stokes numbers decrease moving downstream in the boundary layer and tend to the limit of passive tracers. in figure [fig1] the two stokes numbers are plotted versus , the reynolds number based on the momentum thickness, usually adopted to parametrize the streamwise distance in turbulent boundary layers.
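to make the two definitions concrete, here is a small numerical sketch (ours, with illustrative parameter values rather than the simulation's) of how the stokes relaxation time and the two local stokes numbers can be evaluated.

#include <cstdio>

int main() {
    double rho_ratio  = 1000.0;   // particle-to-fluid density ratio
    double d          = 20e-6;    // particle diameter [m] (illustrative)
    double nu         = 1.5e-5;   // fluid kinematic viscosity [m^2/s]
    double tau_p = rho_ratio * d * d / (18.0 * nu);  // stokes relaxation time
    double u_tau = 0.3, U_inf = 6.0, delta_star = 5e-3;  // local flow scales
    double St_plus = tau_p * u_tau * u_tau / nu;    // stokes number, inner units
    double St_out  = tau_p * U_inf / delta_star;    // stokes number, outer units
    std::printf("St+ = %.1f, St_outer = %.2f\n", St_plus, St_out);
}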
as apparent from the decrease of the stokes numbers, the effects of inertia become less and less relevant in the downstream direction, until the particles behave as fluid tracers at sufficient distance from the leading edge. note that the decay rate is different for the two parameters, something discussed in further detail below.

figure [fig1] caption: development of the viscous stokes number (left) and of the external stokes number (right) in the streamwise direction for two nominal stokes numbers.

both the inner and outer stokes numbers are monotonically decreasing downstream when the boundary layer is fully turbulent; the peaks displayed by around occur in the region where the transition from the laminar to the turbulent state takes place. another peculiar aspect of the turbulent boundary layer is represented by the co-existence of two different zones, the inner turbulent region and the external free stream. these two regions are separated by an intermittent interface, the viscous super-layer, with a fractal nature, where the enstrophy generated at the wall diffuses towards the outer irrotational free stream. in many applications, the particles usually lie in the outer region and then enter the turbulent inner region, crossing the separating interface. this characteristic property, together with the spatial development of the turbulent boundary layer, introduces new features in the particle transport that cannot be investigated in parallel wall flows. in our recent study , we examined the same set of simulations and showed that the concentration and the streamwise velocity profiles are self-similar and depend only on the local value of the outer stokes number and the rescaled wall-normal distance. the aim of the present investigation is to further study the particle dynamics in a spatially evolving boundary layer, with emphasis on the distinction between particles that are seeded inside the turbulent region and particles initially located outside the turbulent region, the latter penetrating the shear layer and accumulating at the wall.
in particular, we show that a minimum in the concentration profiles occurs at around one displacement thickness from the wall, and that this is the result of the competition between two transport mechanisms, both directed towards the wall but of different intensities. particles are first subjected to a slow dispersion process from the outer region to the buffer layer, and then to a fast turbophoretic drift close to the wall.

the numerical solver employed for the simulation is the pseudo-spectral code simson. the dimensions of the computational domain are in the streamwise, wall-normal and spanwise directions, with the displacement thickness at the inlet. the solution is expressed in terms of fourier modes in the streamwise and spanwise directions, with and , while chebyshev modes are used to discretize the wall-normal direction. flow periodicity in the streamwise direction is handled with a fringe region at the end of the computational domain, where the velocity field is forced to the laminar blasius profile at . the flow is tripped just downstream of the inlet by a localized forcing, random in time and in the spanwise direction, to trigger the laminar-turbulent transition. to reach a fully developed turbulent flow, the carrier phase needs to reach a reynolds number of the order . in the present case, varies from at the inflow to at the end of the computational domain. the unladen reference simulation is described in , where the same geometry and flow parameters are employed.

regarding the dispersed phase, we assume that the particle concentration is dilute and neglect the backreaction on the flow, inter-particle collisions and hydrodynamic interactions among particles, so that the one-way coupling approximation can be safely adopted. the particles are assumed to be small, rigid spheres with density one thousand times that of the carrier phase. the only force acting on the individual particle is the stokes drag. the fluid velocity at the particle position is evaluated by a fourth order spatial interpolation, and the particle time integration is performed with a second order adams-bashforth scheme. seven populations, differing only in the nominal stokes number, are evolved inside the computational domain. particles are injected at a constant rate into the already turbulent flow at the streamwise location corresponding to , so that their evolution is not directly influenced by the trip forcing. for each population, particles are injected at a rate of in , randomly in the spanwise direction and at locations equispaced in the wall-normal direction in the range . hence, a part of the total particles is released inside the turbulent region, at a wall-normal distance below the boundary layer thickness, whereas the remaining particles are injected further away from the wall in the irrotational free stream. an example of a corresponding physical case is a flat plate moving with velocity in dusty air, with particle diameters and density ratio . the computational domain describes the evolution of the particle-laden turbulent boundary layer until a momentum thickness . computational time has been provided within the deisa project wallpart. further details about the numerical simulation can be found in .
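a sketch (ours, not the production code) of one particle update of the kind just described, with stokes drag dv/dt = (u_f - v)/tau_p advanced by the second order adams-bashforth scheme; the interpolation routine u_at is a placeholder stub standing in for the fourth order spatial interpolation of the fluid velocity.

#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// placeholder for the interpolated fluid velocity at the particle position;
// a uniform stream stub here, so that the sketch is self-contained
Vec3 u_at(const Vec3&) { return {1.0, 0.0, 0.0}; }

struct Particle {
    Vec3 x{}, v{};    // position and velocity
    Vec3 aPrev{};     // acceleration at the previous step (needed by ab2)
};

// one ab2 step; in practice the very first step is bootstrapped with euler
void ab2Step(Particle& p, double dt, double tau_p) {
    Vec3 uf = u_at(p.x);
    for (int i = 0; i < 3; ++i) {
        double a = (uf[i] - p.v[i]) / tau_p;           // stokes drag acceleration
        p.v[i] += dt * (1.5 * a - 0.5 * p.aPrev[i]);   // ab2 update of velocity
        p.x[i] += dt * p.v[i];                         // advance position
        p.aPrev[i] = a;
    }
}

int main() {
    Particle p;
    for (int k = 0; k < 100; ++k) ab2Step(p, 1e-3, 1.5e-3);
    std::printf("v_x = %f\n", p.v[0]);   // relaxes towards the fluid velocity
}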
figure [fig2] caption: instantaneous particle positions in the region close to the wall, along a wall-parallel plane towards the end of the computational domain; colours represent the values of the wall-shear stress (higher values, lighter zones). left: total particles in the given configuration; right: particles initially released at a wall-normal distance larger than the local boundary layer thickness.

two instantaneous configurations of the particle positions close to the wall are shown in figure [fig2] for the population with nominal stokes number , in the region close to the end of the domain, i.e. with ranging from 2000 to 2500. the background color represents the values of the local wall-shear stress, representative of the sweep/ejection events in the flow. in particular, it is known that high levels of wall-shear stress (lighter zones) correspond to sweep events (fast velocity directed towards the wall), while the lower values (darker zones) are associated with ejection events (slow velocity directed away from the wall). the visualization in the left panel of figure [fig2] displays the location of all particles, initially released both inside and outside the boundary layer. the particles tend to preferentially localize in regions of low wall-shear stress, i.e. slow ejection events. this is consistent with the previous results from simulations in pipe and channel flows for the cases of most accumulating particles , . the viscous stokes number of the population with nominal stokes number decreases from at to at in the region depicted in the figure. as observed for parallel wall-bounded flows, the particles tend to stay in elongated streaky structures also in the spatially developing boundary layer, although this behavior appears to be less accentuated. this difference can be explained by the higher value of the friction reynolds number in the present case, to be compared with of the typical channel flow simulations such as those analyzed in references .

figure [fig3] caption: near-wall particle concentration along the streamwise direction. left: total particles inside the domain; right: particles initially released outside the boundary layer thickness.

the particles displayed in the right panel of figure [fig2] are all initially released outside the geometrical thickness of the boundary layer, . at , few of these particles have reached the wall, although their concentration at the wall increases monotonically when moving downstream along the domain. at the beginning of the accumulation phase, particles do not show a preferential localization in the ejection events; only further downstream, when a clear peak in the near-wall concentration profiles is forming, do particles start to preferentially sample the slow ejection events (low local wall-shear stress regions). this observation is consistent with the dynamics of the spatial evolution of the turbophoresis discussed previously for pipe and channel flows.
at statistical steady state, particles accumulate in regions of vertical fluid motions away from the wall to compensate for the turbophoretic drift towards the wall, yielding in this way a zero net flux. the spatial evolution of the near-wall accumulation is reported in figure [fig3]. the left panel of figure [fig3] shows the wall concentration along the streamwise position identified by . the wall concentration has been defined as the number of particles per unit volume found below a distance of from the wall. the transient accumulation phase is characterized by a strong particle drift towards the wall, i.e. turbophoresis, proportional to the local slope of the concentration profiles. this accumulation phase ends at a different streamwise distance according to the particle inertia, and in general between . the particle populations characterized by a nominal stokes number of and exhibit the highest turbophoretic drift. this phenomenon can be explained considering that the viscous stokes number of these particles, , lies in the range of maximum turbophoresis when . after this phase, we observe a secondary growth, characterized by a less steep slope. all the populations with remain in this second accumulation phase until the end of the computational domain, where particles with intermediate values of the relaxation time assume the highest wall concentration. at the end of this secondary phase, a peak in the concentration is reached only by particles with , at the streamwise location corresponding to ; downstream of the maximum, this population shows a slight decrease of the wall concentration. we expect a peak in the concentration for all the populations once the local stokes number becomes of order 20; however, this would happen at a larger distance from the computational inlet, and an even longer computational domain would be required to capture this effect, something computationally too expensive for this kind of simulation. theoretically, in an infinitely long streamwise domain all the particle populations will reach a concentration peak after the second accumulation phase, with the peak position dependent on particle inertia. this phenomenon can be explained considering that around the turbophoresis is maximum and that in a turbulent boundary layer is always diminishing with the streamwise distance, i.e. . downstream of the location of maximum accumulation, the wall concentration will diminish, since particles tend to the lagrangian limit as their viscous stokes number still decreases.

the near-wall accumulation of particles initially seeded outside the turbulent shear layer is displayed in the right panel of figure [fig3]. the concentration increases monotonically for the most accumulating particle families, and the behavior is similar to the phase of transient accumulation described in the discussion of the previous plot. the accumulation process starts at and continues until the end of the computational domain. also in this case the most accumulating particles are those characterized by the intermediate nominal stokes numbers. note that a clear peak for the near-wall accumulation cannot yet be distinguished in this case: the initial seeding has an effect on the maximum concentration and on the local values of the stokes number where this maximum occurs, as the peak in concentration is reduced and moved further downstream when particles are seeded outside the boundary layer.
figure [fig4] caption: wall-normal particle concentration profiles. left panels: all particles inside the computational domain; right panels: particles initially released outside the boundary layer thickness; from top to bottom, profiles at three increasing streamwise stations.

figure [fig5] caption: wall-normal particle concentration profiles in outer scaling. left panels: all particles inside the domain; right panels: particles initially released outside the boundary layer thickness; from top to bottom, profiles at three increasing streamwise stations.

figure [fig4] shows the wall-normal concentration profiles of different particle populations, , at the three streamwise locations corresponding to . on the left we show the statistics of all particles tracked in the simulation, whereas on the right we present results only for those initially released outside the turbulent boundary layer. focusing on the left panels, the turbophoresis is apparent for the particles with , which exhibit mean wall concentrations more than 100 times that of tracers, i.e.
, with no relevant differences when changing . particles of small inertia , , do not show relevant turbophoresis , though it is still appreciable . unlike the case of a turbulent channel flow , we see a minimum of the concentration for turbophoretic particles in the outer part of the boundary layer , in the figure , before the concentration recovers the unperturbed values further away from the wall . this minimum originates from the combination of the strong turbophoretic drift that moves the particles towards the wall and a gentler particle dispersion from the outer layer towards the bulk of the turbulent boundary layer . indeed , the concentration profiles for the particles released in the external region of the boundary layer , plotted in the right panels of figure [ fig4 ] , show at the location corresponding to that the particles are still mainly concentrated in the outer region and tend to slowly penetrate towards the wall ; the near - wall concentration is , however , still negligible . the smaller the particle inertia , the larger is the wall - normal turbulent diffusion that brings the particles from the external to the inner region . moving downstream , , the turbophoretic drift becomes apparent from the concentration peak close to the wall . particles with start to display values of concentration at the wall higher than those in the outer flow . this process leads to a concentration minimum in the outer part of the turbulent boundary layer that can be explained as follows . the particles reach the buffer layer by turbulent diffusion and are there accelerated towards the wall by the turbophoretic drift . this creates a region partially depleted of particles , where the turbulent diffusion cannot compensate for the increased drift towards the wall provided by the turbophoresis . the location of the minimum concentration scales in outer units , , as apparent from the left panels of figure [ fig5 ] . the concentration profiles show the minimum at for all accumulating particles and , while they become almost flat in the external free stream at . particles initially released in the free stream are shown in the right panels of figure [ fig5 ] . from the sequence in the figure one can recognize the initial uniform diffusion towards the wall , driven by the turbulent fluctuations , and finally the strong acceleration in the thin layer close to the wall , now driven by the turbophoresis , i.e. mainly by the particle inertia and their ability to filter high - frequency events . the scaling of the minimum position in outer units , i.e. , highlights that the process is essentially controlled by the dynamics of the outer part of the boundary layer . as anticipated above , we argue that this minimum originates from the competition between the slower turbulent diffusion of the particles in the outer region of the turbulent boundary layer and the fast turbophoretic drift . from the right panels of figure [ fig5 ] , depicting the mean particle concentration in outer units for the particles initially released in the free stream , we see that the concentration is almost constant when and decreases towards the wall . increasing , i.e. moving downstream , the turbulent diffusion tends to increase the concentration inside the turbulent boundary layer .
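the wall - normal profiles and the position of the concentration minimum can be extracted with equally simple diagnostics ; a minimal sketch of ours , with illustrative names and bin choices :

```python
import numpy as np

def concentration_profile(y_p, y_edges, slab_volumes):
    """Wall-normal concentration profile from particle wall distances y_p,
    binned between y_edges; slab_volumes holds the volume of each bin."""
    counts, _ = np.histogram(y_p, bins=y_edges)
    return counts / slab_volumes

def minimum_location(y_centers, conc, y_low, y_high):
    """Position of the concentration minimum, searched only in the outer
    part of the layer (between y_low and y_high, e.g. around delta*)."""
    mask = (y_centers > y_low) & (y_centers < y_high)
    return y_centers[mask][np.argmin(conc[mask])]
```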
at , although the turbulent diffusion is still not able to evenly mix the particles inside the turbulent region , the turbophoresis is already apparent close to the wall , see the large concentration peak for . these data indicate that the formation of the minimum in the concentration wall - normal profiles is due to an imbalance of the flux towards the wall driven by turbulence : a slow particle turbulent diffusion in the outer part of the turbulent boundary layer and a fast turbophoretic drift towards the wall .
[ figure [ fig6 ] : difference between the mean particle and fluid streamwise velocities ; profiles at three streamwise locations . ]
in order to analyze the differences between the fluid and particle motion , we report in figure [ fig6 ] the difference between the mean particle streamwise velocity and the mean streamwise fluid velocity at three streamwise positions , , at , , at and , at , again for all particles and for those seeded outside the boundary layer . we first consider the global particle behavior , independently of their initial injection point ( left plots ) , and observe that all populations are faster than the mean flow in the outer region between . note that roughly corresponds to the geometrical thickness of the boundary layer , . the position around , in correspondence with the minimum value of the concentration profile , is characterized by a mean particle streamwise velocity very close to that of the fluid . hence this location , one displacement thickness from the wall , can be considered an equilibrium region for the particle dynamics . below this point , in the region close to the wall , particles tend to be slower than the carrier phase . this is linked with the preferential localization of the particles in the low - speed streaks close to the wall , as also shown from the instantaneous configurations in figure [ fig4 ] . inertial particles tend to preferentially stay in the slow ejection regions , characterized by a streamwise velocity smaller than the mean flow in the near - wall region , a typical feature of turbophoresis . in contrast with the results for the particle concentration , the fluid - particle velocity differences do not change significantly moving downstream . larger differences between the particle and fluid streamwise velocity emerge for the particles initially released outside the boundary layer , as shown in the right panels of figure [ fig6 ] . the data in show the profiles at where just a few particles have entered the turbulent boundary layer .
here the particles tend to be faster than the fluid , as observed for the unconditioned case reported in figure [ fig6 ] . further downstream , , panel , the particles coming from the outer stream and diffusing into the boundary layer are much faster than the fluid , displaying values of the velocity difference almost twice as large as those pertaining to the unconditioned case in panel . particles captured by the boundary layer and approaching the wall from the irrotational free stream are characterized by a strong wall - normal velocity directed towards the wall that is associated with a streamwise velocity larger than that of the carrier phase . in other words , particles are seen to enter the boundary - layer wake region via large - scale structures similar to the high - speed streaks of near - wall turbulence : higher streamwise velocity with wall - normal velocity towards the wall . a similar trend is observed at , panel , even though the effect is now weakened , because the memory of the initial seeding position tends to be lost as particles move along with the flow . we report statistics of the dynamics of inertial particles transported in a spatially developing turbulent boundary layer . the reynolds number based on the momentum thickness increases from 200 to 2500 over the computational domain of the present simulation . a boundary layer flow presents two main features that differentiate the behavior of inertial particles from the more studied case of a parallel turbulent flow , e.g. in channels and pipes , and it is therefore a relevant test case worth investigating . the first is the variation along the streamwise direction of the local dimensionless parameters defining the fluid - particle interactions . the second is the coexistence of an irrotational free stream and a rotational near - wall turbulent flow . the first effect has been considered in . two different stokes numbers have been defined , one using inner flow units and the other using outer units . since these two stokes numbers exhibit different decay rates in the streamwise direction , we found a decoupled particle dynamics between the inner and the outer region of the boundary layer . preferential near - wall particle accumulation is similar to that observed in turbulent channel flow , while a different behaviour characterizes the outer region . here the concentration and the streamwise velocity profiles were shown to be self - similar and to depend only on the local value of the outer stokes number and the rescaled wall - normal distance . the scope of this paper is to examine the effect of the simultaneous presence of the external laminar stream and of the turbulent shear layer on the particle transport . in particular , we study how inertial particles released in the outer stream disperse inside the boundary layer and approach the wall . to this aim , we present statistics conditioned by the position of the initial injection and discuss the emergence of a minimum of the wall - normal particle concentration around the boundary layer thickness , . the imbalance between two different diffusion mechanisms , both directed towards the wall for particles coming from the free stream , induces the concentration minimum in the wall - normal direction . on one side , we have a relatively slow particle dispersion in the outer part of the boundary layer , due to the mixing by turbulent fluctuations , bringing the solid phase from the outer region to the buffer layer .
on the other side , we have a fast turbophoretic drift , due to the decrease of turbulent fluctuations closer to the wall , pushing the particles towards the wall . the different magnitude of the two fluxes creates a region of lower concentration at the edge of the zones where each of the two mechanisms dominates . the entrainment of particles in the turbulent regions occurs in regions of higher streamwise velocity and wall - normal velocity towards the wall . note that these structures , though similar to the near - wall high - speed streaks , are present across the laminar - turbulent interface at the boundary - layer edge , effectively leading to a corrugated appearance of the instantaneous laminar - turbulent interface . a similar minimum in concentration is not observed in channel flows , where a laminar region , acting as a reservoir of particles and imposing a reference external concentration , does not exist . hence , there is no imbalance between the two fluxes mentioned above , and the minimum concentration occurs at the channel midplane by symmetry . the present results show that a non - trivial dynamics occurs in intermittent regions , relevant for particle entrainment and mixing in several applications . idealized shear - less configurations have been considered in , where both large fluctuations of the particle distribution and self - similar concentration are observed . we expect that the presence of a mean shear in an intermittently turbulent / non - turbulent flow adds new interesting features , as shown here for the entrainment of particles seeded in the laminar free stream . using the configuration adopted here , we therefore plan to study the behavior of inertial particles in a transitional flow where turbulent spots appear randomly in space and time . the authors acknowledge deisa ( distributed european infrastructure for supercomputing applications ) for the computer time granted within the project wallpart . in particular , we thank epcc ( edinburgh parallel computing centre ) for help in setting up the simulation and for the assistance throughout the project . resources at nsc ( national supercomputer centre ) at linköping university , allocated via snic ( swedish national infrastructure for computing ) , were used for post - processing the data . we would like to acknowledge the support from the cost action mp0806 _ particles in turbulence _ . marchioli , c. , soldati , a. , kuerten , j. , arcen , b. , taniere , a. , goldensoph , g. , squires , k. , cargnelutti , m. , portela , l. : statistics of particle dispersion in direct numerical simulations of wall - bounded turbulence : results of an international collaborative benchmark test . int . j. multiphase flow * 34*(9 ) , 879 - 893 ( 2008 ) nordström , j. , nordin , n. , henningson , d. : the fringe region technique and the fourier method used in the direct numerical simulation of spatially evolving viscous flows . siam j. sci . comput . * 20*(4 ) , 1365 - 1393 ( 1999 ) schlatter , p. , örlü , r. , li , q. , brethouwer , g. , fransson , j.h.m . , johansson , a.v . , alfredsson , p.h . , henningson , d.s . : turbulent boundary layers up to studied through numerical simulation and experiments . phys . fluids * 21 * , 051702 ( 2009 )
|
we present the results of a direct numerical simulation of a particle - laden spatially developing turbulent boundary layer up to . two main features differentiate the behavior of inertial particles in a zero - pressure - gradient turbulent boundary layer from the more commonly studied case of a parallel channel flow . the first is the variation along the streamwise direction of the local dimensionless parameters defining the fluid - particle interactions . the second is the coexistence of an irrotational free stream and a near - wall rotational turbulent flow . as concerns the first issue , an inner and an outer stokes number can be defined using inner and outer flow units . the inner stokes number governs the near - wall behavior similarly to the case of channel flow . to understand the effect of a laminar - turbulent interface , we examine the behavior of particles initially released in the free stream and show that they present a distinct behavior with respect to those directly injected inside the boundary layer . a region of minimum concentration occurs inside the turbulent boundary layer at about one displacement thickness from the wall . its formation is due to the competition between two transport mechanisms : a relatively slow turbulent diffusion towards the buffer layer and a fast turbophoretic drift towards the wall .
|
the general analysis of the evolution of material systems interacting with their own gravitational field is undoubtedly a very important and very difficult problem in general relativity . as usual in any physical theory , one can gain insight into the more difficult issues by considering idealized simplified models that , nevertheless , still preserve some of the relevant features of the general problem . one of these simplified examples was described some time ago by apostolatos and thorne . the material content in this example is given by a cylindrical shell of counter rotating particles , and one is interested in the dynamics that results as a consequence of its interaction with its own gravitational field . as shown in , once some appropriate choice of coordinates is made , it is a straightforward matter to obtain a set of coupled equations for the evolution of the matter and field variables , which is described in some detail in the next section . in their original paper apostolatos and thorne were interested in general features of the evolution , in particular the possibility of naked singularities forming as the result of the collapse of these structures , but the detailed evolution of the shell in a situation close to one of its static configurations was only qualitatively mentioned , with no detailed results . this problem is interesting in particular because in the newtonian approximation the shell is either in a static configuration or performs periodic oscillations about this static configuration , but , once the radiative modes of the gravitational field are introduced , one expects these oscillations to be eventually damped as the result of the transfer of energy to the radiative modes . thus , in the fully relativistic description one expects to find the characteristic `` quasi normal ringing '' ( qnr ) , related to the `` quasi normal modes '' ( qnm ) of the system . a recent analysis of the dynamics of the apostolatos and thorne model was given by hamity et al . . we notice , however , that the approximations introduced in the explicit examples considered in are such that the radiative modes are effectively neglected , and , therefore , the results obtained are possibly relevant only close to the newtonian limit .
a detailed analysis of the full relativistic equations in the linearized approximation was carried out by the present authors in , where we obtained the general solution for the harmonic modes with real frequencies , and considered the solutions obtained by general linear combinations of these modes . although in all the examples considered in we found a stable evolution under perturbations of ( essentially ) compact support , at least two questions remained unanswered . the first was the possible completeness of the mode expansion , and its relation to the initial value problem for the system . the second was the apparent lack of qnr in the example solutions . the main purpose of the present work is to give answers to those questions . we first review in sections 2 and 3 , mostly for completeness , the main features of the model . then , in section 4 , we review the derivation of the periodic solutions in the linearized regime , introducing a modified form , as compared with that given in , that allows us to establish , in section 5 , the completeness of the expansion in the context of the characteristic value problem for the system , and , therefore , the completeness of the mode expansion . this completeness , in turn , as discussed in section 6 , implies the linear stability of static configurations under this type of perturbation , a result that differs from some conclusions reached in , and from recent results obtained by kurita and nakao . next , in section 7 , we discuss the existence of qnms for the system . as shown there , these are related to the zeros of a complex expression that appears in the relation between the incoming and outgoing wave amplitudes as functions of the frequency . this expression has a complicated form involving bessel functions that makes its analysis very difficult . for this reason we have not been able to obtain explicit expressions for the frequencies of the qnm , or even to ascertain whether there is a finite or an infinite number of these frequencies . nevertheless , by resorting to numerical methods we found at least a couple of qnm corresponding to a range of parameters characterizing static configurations . the values obtained are also displayed in section 7 . in section 8 we consider the numerical evaluation of the integrals obtained in section 7 that provide the evolution of the system for arbitrary characteristic data . the examples chosen display clearly the quasi normal ringing associated to one of the qnm . the reason why this qnr was not seen in is discussed in section 12 , using the price - husain toy model as an illustration . in section 9 we consider again the full relativistic equations and , assuming a general linear departure from a static configuration , obtain the general linearized equations for the system . then , in section 10 , we prove the existence of an associated positive definite constant of the motion that puts absolute bounds on the dynamic variables of the system , establishing the stability of the motion of the shell under arbitrary , finite perturbations .
in section 11 we show that the corresponding set of coupled ordinary and partial differential equations for the relevant dynamical variables can be solved numerically as an initial plus boundary value problem , without resorting to expansions in terms of bessel or other functions . we provide a couple of examples , using the same shell parameters as in the mode expansion , but a different form for the incoming wave . the results , which clearly display quasi normal ringing , are in complete agreement with those of the mode expansion , and with the computed qnm frequencies , although the techniques used in both approaches are totally different , confirming their separate correctness . we end the paper with some comments and conclusions , as well as comments on related work by other authors , in particular that of reference . the apostolatos - thorne model describes the dynamics of a self - gravitating cylindrical shell of counter rotating particles . both the inner ( ) and outer ( ) regions of the shell are vacuum space times with a common boundary . the corresponding metrics may be written in the form , where the sign corresponds to the outside and to the inner regions . the functions , and depend only on and satisfy the equations : the shell is located on the hypersurface given by , where is the proper time of an observer at rest on the shell . we may interpret as playing the role of a gravitational field whose static part is the analogue of the newtonian potential . the time dependent solutions of ( [ ateq2 ] ) represent gravitational waves ( einstein - rosen ) . equation ( [ ateq2 ] ) is the integrability condition of eqs . ( [ ateq3 ] ) . the coordinates and the metric function are continuous across the shell , while and the metric function are discontinuous . smoothness of the spacetime geometry on the axis requires that , finite at , and . the junction conditions of and through require the continuity of the metric and specify the jump of the extrinsic curvature compatible with the stress energy tensor on the shell . the induced metric on is given by . here . the evolution of the shell is characterized by , which is the radial coordinate at the shell 's location , and the proper time of an observer at rest on . if we assume that the shell is made up of equal - mass counter rotating particles , the einstein field equations on the shell may be put in the form , where the constants and are , respectively , the proper mass per unit killing length of the cylinder and the angular momentum per unit mass of the particles . the other quantities in ( [ ateq5 ] , [ ateq6 ] ) are given by , where a dot indicates a derivative , and we also have
$$ \cdots + \frac{r^2 \psi^-_{,n} x^-}{r^2 + e^{2 \psi_{\sigma}} j^2} - \frac{\lambda r^2 x^-}{\left( r^2 + e^{2 \psi_{\sigma}} j^2 \right)^{3/2}} + \frac{j^2 e^{2 \psi_{\sigma}} x^- x^+}{r \left( r^2 + e^{2 \psi_{\sigma}} j^2 \right)} $$
these equations , together with ( [ ateq2 ] , [ ateq3 ] ) , determine the evolution of the shell and of the gravitational field to which it is coupled . we briefly review the conditions for static solutions , corresponding to a shell of constant radius . we restrict to the case of a regular interior . in this case the interior must be flat and we may take , , implying . for the exterior field the general static solution ( satisfying ) is of the form , then , since , , and , we find , and , it is important to notice that this relation can be satisfied for real , and only if , with . therefore , the system admits static solutions only if .
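when , as discussed next , the static condition admits two real positive radii , they can be located numerically by simple bracketing ; the following is a minimal sketch of ours , with `static_condition` a generic stand - in for the relation just given ( whose explicit form is not reproduced here ) :

```python
import numpy as np
from scipy.optimize import brentq

def static_radii(static_condition, r_grid):
    """Bracket and refine all sign changes of the scalar static
    condition F(R0) = 0 on a grid of candidate radii.  The callable
    `static_condition` is a placeholder for the explicit relation
    between R0, J and lambda discussed in the text."""
    vals = np.array([static_condition(r) for r in r_grid])
    roots = []
    for a, b, fa, fb in zip(r_grid[:-1], r_grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                      # a sign change brackets a root
            roots.append(brentq(static_condition, a, b))
    return roots
```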
it will be convenient to write . replacing in ( [ eq4a ] ) , we find , which we now consider as an equation for as a function of . it is now easy to check that there are no real solutions for if . for we have a double solution . the interesting region for our analysis is the range where we have _ two _ real and positive solutions for , one larger and one smaller than , which approach respectively and as . this implies that for appropriate and we have _ two _ static solutions . it was concluded in previous analyses that , at least from a perturbative point of view , one of these solutions ( the one with the larger ) would be stable and the other ( with the smaller ) would be unstable . this conclusion , as shown in the following sections , is _ not _ supported by the present analysis of the perturbative evolution that follows from initial data in the neighborhood of either static configuration . in the next section we derive the appropriate equations for a perturbative analysis , and then show how this construction can be used to solve the evolution equations given characteristic data . in this section we consider the problem of finding linearized periodic solutions for our system , imposing the condition of a regular interior . this problem was already considered in , where the main idea was to obtain stationary periodic solutions , but the problem of relating these solutions to the initial value problem was left open . here we will consider a slightly modified approach , where we express the solutions in in terms of incoming and outgoing wave solutions for the radiative part of . we first notice that if we impose regularity on the symmetry axis , and assume that the interior region is empty but may contain gravitational radiation , then admits periodic solutions of the form , where is a bessel function , and we consider as a quantity of first order in perturbation theory . then , restricting again to linearized order we may set , since , in this case , from ( [ ateq3 ] ) , is of second order , and , therefore , also to the appropriate order , we may also set , we assume a periodic perturbation around an equilibrium configuration characterized by and , and therefore take , where we assume that is also of first order . we need to specify now a corresponding solution for . in our previous analysis , , we noticed that the general periodic perturbation for may be written as $\left[ \, \cdots \, \right] e^{i \omega_+ t_+}$ , where is a bessel function and and are first order quantities . to this order we have $\left[ \, \cdots \, \right] e^{i \omega_+ t_+}$ , where and are given by ( [ eq3a ] ) . a straightforward calculation shows that consistency of the equations at first order requires , and , replacing now in ( [ ateq5 ] ) , ( [ ateq6 ] ) , and ( [ ateq9 ] ) , and expanding to first order , we find a set of three linearly independent equations for , , , and , for every choice of , and , and , therefore , three of these quantities can be solved in terms of the fourth . since all the evolution equations are linear , we may consider linear superpositions of these solutions to obtain more general ( non periodic ) solutions . in , for simplicity , we chose as the independent variable , and obtained corresponding expressions for the other three amplitudes .
assuming an arbitrary dependence of on , given by the function , is given by , while for and we obtain expressions in the form of fourier transforms of multiplied by complicated expressions involving bessel functions of and . details are given in . we notice that although in principle ( [ eq12da ] ) is completely general , and one has a definite for any possible , in practice what one would like to solve is the problem of the evolution of the system given some initial condition , and , because of the structure of the equations that resulted in , the best one could do was to assume some form for , and find the corresponding evolution of the system . carrying out this program it was found there that there are rather general evolutions , in the sense that they represent the interaction of the shell with an incoming gravitational pulse of rather arbitrary shape , for which the response of the shell is given precisely by . an important question here is actually how general these solutions of the problem are , namely , whether they include every possible evolution of the system or , on the contrary , only represent a restricted set . since we had no proof of the completeness of the set of modes , this question is highly non trivial . further , the scheme developed in did not appear to show the quasi normal modes and quasi normal ringing expected in a system such as this , that has a simple newtonian limit , where the shell can execute periodic motions . one would expect that the presence of radiative modes in the general relativistic version should lead to damped oscillations , at least in some appropriate limit , and this was not immediately apparent in the discussion given in . as we shall show here , both the completeness problem and the question of the qnr can be answered by a simple change in the formalism presented in . this can be achieved as follows . the expression , where and are constants , is a solution of ( [ ateq2 ] ) for . for large , using the asymptotic expansions for the bessel functions and , we find , and , therefore , ( [ eq5d ] ) represents a purely incoming wave solution of ( [ ateq2 ] ) . similarly , where is a constant , is a solution of ( [ ateq2 ] ) for , which , for large , has the asymptotic behaviour , and , therefore , ( [ eq7d ] ) represents a purely outgoing wave solution of ( [ ateq2 ] ) . we notice now that we have $\left[ \, \cdots \, \right] e^{i \omega_2 t_+}$ , and therefore , instead of ( [ eq4da ] ) , we may write the full solution for in the form , where we have the identifications : in what follows we will consider and as first order quantities . to this order , from ( [ ateq3 ] ) , we have , where and are given by ( [ eq3a ] ) . the important point here is that we now have expressions for and with separate amplitudes for the incoming and the outgoing wave parts . notice that since ( [ eq9d ] ) is just a reordering of terms in ( [ eq4da ] ) , all the previous relations regarding , , and are valid . then , replacing again the expressions for the dynamic variables , but using now ( [ eq9d ] ) and ( [ eq10d ] ) , in ( [ ateq5 ] ) , ( [ ateq6 ] ) , and ( [ ateq9 ] ) , and expanding to first order , we find this time a set of three linearly independent equations for , , , and .
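the incoming / outgoing identification used above rests on the standard large - argument behaviour of the hankel combinations $j_0 \pm i y_0$ ; the following quick numerical check is our own illustration , not code from the paper :

```python
import numpy as np
from scipy.special import hankel1, hankel2

# H0^(1) = J0 + iY0 ~ sqrt(2/(pi x)) e^{+i(x - pi/4)}  -> incoming with an e^{i w t} time factor
# H0^(2) = J0 - iY0 ~ sqrt(2/(pi x)) e^{-i(x - pi/4)}  -> outgoing with an e^{i w t} time factor
x = 50.0
for h, sign in ((hankel1, +1), (hankel2, -1)):
    asym = np.sqrt(2.0 / (np.pi * x)) * np.exp(sign * 1j * (x - np.pi / 4.0))
    print(h(0, x), asym)   # the pairs agree to a few parts in 10^3 already at x = 50
```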
in this paper we will be mainly interested in the evolution of an initially static shell , under the influence of a bounded incoming compact pulse . we choose therefore to solve these equations for , , and in terms of . the resulting explicit expressions are , unfortunately , rather long and difficult to read . nevertheless , they have in general the structure , where are bounded regular functions of and is given by
$$ \begin{aligned} & \cdots \left( \left| \omega \right| j_0\!\left( \left| \omega_2 \right| r_0 \right) - i \omega \, y_0\!\left( \left| \omega_2 \right| r_0 \right) \right) \\ & - \left[ \left( r_0^2 \left( r_0^2 + j^2 \right)^2 \omega^2 - j^2 \left( 2 r_0^2 + j^2 \right) \right) j_0\!\left( \left| \omega \right| r_0 \right) + 2 j^2 \omega r_0 \left( r_0^2 + j^2 \right) j_1\!\left( \left| \omega \right| r_0 \right) \right] \\ & \quad \times \, \omega \left( 2 j^2 + r_0^2 \right)^2 \left( \omega \, j_1\!\left( \left| \omega_2 \right| r_0 \right) - i \left| \omega \right| y_1\!\left( \left| \omega_2 \right| r_0 \right) \right) \end{aligned} $$
where . in spite of its appearance , it is not difficult to show that has no zeros for real . a zero of for real would imply that the imaginary part of vanishes . but this is given by , where we have used properties of the bessel functions to obtain the right hand side , and , therefore , the imaginary part is non vanishing for all . this result is crucial , because it means that if depends regularly on , the solutions obtained using ( [ 13d1 ] ) and ( [ 13d2 ] ) are well defined modes for all in . therefore , because of linearity , arbitrary linear combinations of these modes will also be solutions of the problem . before discussing the possible completeness of this set of modes we will write explicit forms for the expressions we have in mind . these are given by $\left[ \, \cdots \, \right] d\omega$ , where is an arbitrary complex function of which we will assume at least square integrable . then , taking into account ( [ 13d1 ] ) , we have for the remaining dynamical variables
$$ \cdots \; d\omega \ , \qquad \xi(\tau) = \int_{-\infty}^{+\infty} \frac{h_3}{d} \, f(\omega) \, e^{i \omega \tau} \, d\omega $$
we shall analyze the properties of this expansion in the next section , and show that it solves a well defined characteristic value problem for our system , and that , therefore , in this sense , the mode expansion is also complete . we remark , for completeness , that ( [ 16d ] ) , for real , represents the most general solution for such that both are bounded for large . consider again ( [ 16d ] ) . the problem that we have in mind is one where at some given time the shell and a large neighbourhood of the space time that includes the shell and the symmetry axis are in a static state , but there is a finite gravitational pulse incoming from large . in other words , if corresponds to a static solution , then for some large time in the past and some , the solution in coincides with the static solution . for , on the other hand , there is a region where is non vanishing .
to make this more precise , we recall that for large an incoming wave solution for has the asymptotic form , where is an arbitrary function of ( essentially ) compact support . we may compare this with the asymptotic form for large of ( [ 16d ] ) , which can be inverted to give , therefore , a one - to - one relation between and characteristic data given for , . in this sense , we have proved the following result : _ the set of modes given by ( [ 13d1 ] ) and ( [ 13d2 ] ) , used in the construction of ( [ 16d ] ) and ( [ 17d ] ) , is complete , and , in particular , ( [ 16d ] ) and ( [ 17d ] ) provide the evolution of the corresponding dynamical variables for arbitrary characteristic data given at , . _ in the next two sections we shall use these results to analyze the stability of the static solutions and then consider the presence of quasi normal modes ( qnm ) and their associated quasi normal ringing ( qnr ) . conceptually , we may say that a static configuration of the shell is stable if , given data arbitrarily close to that configuration , the system evolves towards that static configuration . otherwise we would say that the static configuration is unstable . but , as we have already indicated , we have explicit forms for the evolution of the system when it is perturbed from an initially static configuration by an incoming gravitational pulse of ( essentially ) compact support . each one of these pulses is uniquely characterized by a corresponding function . concentrating in particular on , we have , and , using ( [ 13d2 ] ) , we can check that for large , plus terms of order . as a consequence , on account of ( [ 17d ] ) and our assumptions on , is the fourier transform of a square integrable function for _ any _ and , but then we must have for , and , therefore , _ all _ static configurations are stable . more explicitly , we have shown that all finite admissible gravitational pulses incoming from large can be described by an appropriate function , and that this function is in one - to - one correspondence with the shape of the pulse at large . we have also shown there is a unique evolution for the system corresponding to each function . this evolution is such that the dynamic variables are essentially given by the fourier transform of a square integrable function and , therefore , they are themselves square integrable and must vanish in the limit . but this shows that all initially static configurations of the shell that are perturbed by an incoming pulse of the type described above will eventually settle back to their original static configuration , and , in this sense , they are all _ stable . _ just for completeness , it should be clear that we have not found a self adjoint extension associated to the coupled partial plus ordinary system of differential equations of this problem , so that we could not state completeness of the mode expansion in the same way as when such an extension is possible . what we have found is that the characteristic value problem can indeed be solved by our mode expansion , since it reduces essentially to a fourier transform , and that , in this sense , the expansion is complete , and proves that the static configurations of the system are stable under a perturbation that has the form of a bounded , ( essentially ) compactly supported incoming pulse .
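in this connection we note that the `` properties of the bessel functions '' invoked above for the non vanishing of the imaginary part of are of the wronskian type ; the basic identity , $j_1(x) y_0(x) - j_0(x) y_1(x) = 2/(\pi x)$ , is standard , and the short check below is our own illustration :

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

# Wronskian of the order-zero Bessel pair: J1(x) Y0(x) - J0(x) Y1(x) = 2/(pi x)
x = np.linspace(0.1, 20.0, 200)
lhs = j1(x) * y0(x) - j0(x) * y1(x)
print(np.max(np.abs(lhs - 2.0 / (np.pi * x))))   # ~1e-16: the identity holds to machine precision
```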
but the completeness result above is essentially all that is required to consider the system physically stable . it is still an open and interesting question to find the relation , if it exists , between the system of equations governing the evolution of the system and an associated self adjoint problem that would be more useful in the analysis of the initial value problem . nevertheless , as shown in section 10 , for the general initial plus boundary value problem we have a conserved , positive definite constant of the motion , and therefore we have stable evolution for any finite initial data . we have already seen that for a shell with given values of and there may be just one , two or no static solutions of the equations of motion . it has been suggested that in the case of two solutions only one is stable and the other is unstable . however , our present analysis does not show a qualitative difference in the behaviour of the two static solutions under perturbations . there appears to be , nevertheless , a more subtle difference between the solutions when we analyze the evolution in more detail . this is related to the presence of quasi - normal modes , and is considered in the next section . in this section we analyze the presence of quasi normal modes and the associated quasi normal ringing in our system . very roughly , qnm and qnr appear in systems coupled to a field that admits a decomposition in periodic incoming and outgoing waves ( see e.g. for more details and further references ) . qnm are generally associated with non trivial solutions where the incoming wave amplitude vanishes . this typically requires a complex value for the period , and , therefore , the corresponding outgoing and other amplitudes are , in general , unbounded in space and time and are not physical . in our problem , we have seen that for the periodic solutions the dynamical variables other than are given by expressions of the form , and therefore qnm , that is , nontrivial solutions with , can exist only for values of such that . we have already seen that is non vanishing for _ real _ , but it may vanish for _ complex _ values of . these zeros of introduce complex poles in the expressions for and the other field variables , whose complex frequency does not depend on , although the amplitude of their possible contributions does depend on . since , and are given by fourier transforms of these expressions containing complex poles , we may get contributions dominated by these poles . these are the qnr amplitudes associated with the qnm . therefore , to find the qnm we need to find the complex zeros of . in principle this is simply a matter of solving the equation for . unfortunately , if we consider the explicit expression for given by ( [ 13d2 ] ) , we notice that it contains bessel functions of different types and arguments depending on , so that only a numerical computation of the zeros appears feasible , but even in this case the accurate computation of these zeros is a complicated task . in this paper we only attempt a rather rough computation of the zeros closest to the real axis in the complex plane , as these would correspond to the most noticeable qnr of the system .
for this explicit computation we go back to ( [ 13d1 ] ) and notice that if we define and , the equation may be written in the form
$$ \begin{aligned} & \cdots \left( - \left| \sigma \right| j_0\!\left( \left| \sigma \right| \left( 2 x^2 + 1 \right)^2 \right) + i \sigma \, y_0\!\left( \left| \sigma \right| \left( 2 x^2 + 1 \right)^2 \right) \right) \\ & - \left( \sigma \, j_1\!\left( \left| \sigma \right| \left( 2 x^2 + 1 \right)^2 \right) - i \left| \sigma \right| y_1\!\left( \left| \sigma \right| \left( 2 x^2 + 1 \right)^2 \right) \right) \\ & \quad \times \left[ \left( \sigma^2 \left( 1 + x^2 \right)^2 - x^2 \left( x^2 + 2 \right) \right) j_0\!\left( \left| \sigma \right| \right) + 2 \sigma x^2 \left( 1 + x^2 \right) j_1\!\left( \left| \sigma \right| \right) \right] \sigma \left( 2 x^2 + 1 \right)^2 \end{aligned} $$
actually , since we are looking for zeroes near the real axis , we should make the replacements to look for zeros in the region , and in the region . in any case , the zeroes of satisfy the scaling law , where is some complex function of its real argument . we searched for zeroes of using two procedures . in the first we noticed that if we look for zeroes in the region we may replace in ( [ eq01f ] ) , and , after canceling some common factor of , we find a maximum power in the coefficients of the bessel functions . we formally solve for this factor and obtain
$$ \begin{aligned} \cdots & = \cdots \; x^2 \left[ \left( \left( i y_1\!\left( \sigma \left( 2 x^2 + 1 \right)^2 \right) - j_1\!\left( \sigma \left( 2 x^2 + 1 \right)^2 \right) \right) j_0(\sigma) \right. \right. \\ & \quad \left. \left. + \left( j_0\!\left( \sigma \left( 2 x^2 + 1 \right)^2 \right) - i y_0\!\left( \sigma \left( 2 x^2 + 1 \right)^2 \right) \right) j_1(\sigma) \right) \left( 2 x^2 + 1 \right)^2 \left( 1 + x^2 \right)^2 \right]^{-1} \end{aligned} $$
we can use this equation in an iterative scheme where we input a value for on the right hand side , and the cubic root of the ( complex ) number obtained is inserted again as an `` improved '' value for . we have numerically checked that this works very well for sufficiently small values of , say , but stops converging for larger values of . a different , in a way more direct , although not very accurate , method that we have used is to consider again ( [ eq01f ] ) , fix a value of , and plot the real and the imaginary parts of ( [ eq01f ] ) as functions of , for fixed . with this method one can easily visualize the possible common zeros of the curves and adjust , by trial and error , the best values of and . we have found by this method that there are complex zeros of at least in the range , and possibly for larger values of . we have found zeros both near and . some of these are shown in table i .
table i : the zeros of .
the first thing to notice is that all these zeroes have positive imaginary part , as one would expect from the stability arguments of the previous section . we also notice that there are zeros both for and , although with different imaginary parts and different .
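both search strategies can be automated with a derivative - free complex root finder ; the sketch below is our own illustration , with `d_sigma` a placeholder for the full bessel expression above ( whose overall prefactor is not reproduced here ) :

```python
import mpmath as mp

def find_qnm(d_sigma, sigma_guess, x):
    """Refine one complex zero of sigma -> D(sigma; x), starting from a
    guess read off plots of Re D and Im D near the real axis.
    `d_sigma` is a placeholder for the Bessel-function combination."""
    return mp.findroot(lambda s: d_sigma(s, x), mp.mpc(*sigma_guess))

# usage sketch: sigma0 = find_qnm(d_sigma, (0.3, 0.05), x=0.5)
```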
in general the imaginary part of increases with , and the modes become very strongly damped for of the order of , or larger than , one . all these results are in good agreement with the `` first class solutions '' of for , although they obtain a much larger set of solutions . regarding this point we remark that , unfortunately , the analytic structure in the complex plane of the integrands in ( [ 17d ] ) is very complicated , and the exact relation between these zeroes and the evolution of the dynamical variables is not easily established . to obtain more information on the subject of qnr for our system we carried out numerical integrations of , as given by ( [ 17d ] ) , for simple incoming , and found examples of qnr with parameters close to those of the zeros of found here . this is detailed in the next section . next , in section ix , we consider again the perturbation problem in general , and obtain a set of coupled equations directly for , and , without resorting to a mode expansion . this allows us to set up an initial value problem that can be solved fully numerically , without the use of the mode expansion or bessel functions . we have carried out explicit numerical integrations of ( [ 17d ] ) to obtain , with the choice . this corresponds to an asymptotically incoming pulse of the form . as a first choice we took and . this corresponds to and , therefore , is within the range considered `` stable '' in previous analyses . the resulting is displayed in fig . 1 . we can see the characteristic shape of a qnr , that is , a damped oscillation . as usual , we can get a better look at this shape by displaying , as in fig . 2 , as a function of ( thick line ) . we also display in fig . 2 an approximate fit using the function , where , , and , although in the actual figure , for clarity , an offset was added to avoid superposition of the two graphs . these parameters are in good agreement with those corresponding to the qnm with for shown in table i . as a second example we took and . this corresponds to and , therefore , is outside the range considered `` stable '' in previous analyses . the resulting is displayed in fig . 3 . we can see again the characteristic shape of a qnr , but , in this case , as a strongly damped oscillation . this result is also in good agreement with the fact that the qnm for shown in table i corresponds to a strongly damped oscillation . we remark once again that the analytic structure of the integrand in ( [ 17d ] ) is far from simple , and that , although several sets of complex qnm frequencies can be established rather accurately , their effect on the resulting amplitudes is not easily ascertained . nevertheless , the integrals in ( [ 17d ] ) are taken along the real axis , and , therefore , they are insensitive to the manner in which the integrand might be extended to , e.g. , the upper complex - plane , where the poles corresponding to the qnm reside . in the next section we consider a different approach , where we linearize the equations of motion and obtain a set of coupled linear equations for the amplitudes , together with a set of boundary and matching conditions that make it possible to establish ( and numerically solve ) a well defined initial data problem . we consider again the set of dynamical equations and matching conditions for our problem , assuming that we are close to a static solution characterized by certain values of and .
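the kind of quadrature used for these mode - sum integrals can be sketched as follows ( our own illustration : `h3_over_d` stands in for the bessel - function ratio in the integrand , and the gaussian profile and its width are illustrative choices , not the pulse used in the paper ) :

```python
import numpy as np

def xi_of_tau(h3_over_d, tau, omega_max=20.0, n=4001, w0=1.0):
    """xi(tau) as a truncated Fourier integral over real frequencies,
    evaluated by simple trapezoidal quadrature."""
    w = np.linspace(-omega_max, omega_max, n)
    f = np.exp(-(w / w0) ** 2)                  # assumed gaussian pulse amplitude f(w)
    integrand = h3_over_d(w) * f * np.exp(1j * w * tau)
    # keep the real part; for physically admissible f the imaginary part cancels
    return np.trapz(integrand, w).real
```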
expanding all quantities to first order around the static configuration , we first obtain , in these equations and are given by ( [ eq3a ] ) and ( [ eq4a ] ) , so that the zeroth order terms coincide with the static solution of section iii . from these , again to first order , we find , the explicit form of the terms of order is not required in what follows . choosing appropriately some integration constants we set , next we replace in the matching conditions and , expanding again to first order , we obtain two equations for and on the boundary and an equation of motion for . for explicit numerical integration it turns out to be convenient to introduce , instead of , a new function defined by , and , therefore , satisfying the equation : this way all the dynamic variables involved evolve directly in terms of . this simplifies the numerical treatment of the problem , as it eliminates the need for separate grid spacings for and . but , most importantly , using we may demonstrate the existence of a crucial constant of the motion , as is shown in the next section . as discussed in the previous section , we have a complete solution for characteristic data , and we can use this fact to prove stability regarding the evolution resulting from that type of data . this , nevertheless , still leaves open the question of the evolution of general initial data . in other words , the possibility of the existence of initial data that somehow is not registered as characteristic data , but such that it renders the motion unstable , in the sense that , i.e. , can acquire arbitrarily large values starting from finite initial data . to answer this question we consider again ( [ eq05h ] ) , replace in terms of , and solve for and . we get , and , replacing now ( [ stab01 ] ) and ( [ stab02 ] ) in ( [ eq06h ] ) , upon multiplication by and some rearrangement , we find , we notice that the first line of ( [ stab03 ] ) has the form of a total - derivative . we may put the whole expression in this form as follows . we first take the derivative of ( [ stab01 ] ) with respect to , solve for , and replace it in the second line of ( [ stab03 ] ) . next , we differentiate ( [ stab02 ] ) with respect to , solve again for , and replace it in the third line of ( [ stab03 ] ) . after a new rearrangement we get , where , again , all derivatives of and are evaluated at . the first two lines in ( [ stab04 ] ) are total - derivatives . to analyze the last line we notice that , on account of ( [ eq01 ha ] ) , we have
comparing ( [ stab05 ] ) and ( [ stab06 ] ) with ( [ stab04 ] ) we find that , if corresponds to a solution of ( [ eq08h ] ) that is regular for , ( and , therefore , for ) , and is such that for , ( which must happen for the integral in ( [ stab06 ] ) to exist ) , then the quantity , dr \\ & & + { \frac { { r_{{0}}}^{8 } } { 2\left ( { r_{{0}}}^{2}+{j}^ { 2 } \right ) \left ( 2\,{j}^{2}+{r_{{0}}}^{2 } \right ) ^{2}{j}^{2 } } } \int_{r_0}^{\infty}\frac{r}{2}\left[\frac{(2 j^2+r_0 ^ 2)^4}{r_0 ^ 8}\left(\frac{\partial \chi_3}{\partial \tau}\right)^2 + \left(\frac{\partial \chi_3}{\partial r}\right)^2 \right ] dr \nonumber\end{aligned}\ ] ] is a _ constant of the motion _ ,i.e. , .notice that is positive definite for any non trivial solution of the equations of motion .therefore , it provides an absolute bound for each term in ( [ stab07 ] ) .thus , if at any time all the terms in ( [ stab07 ] ) are finite , then they will remain finite for any evolution that satisfies the condition of regularity for , demonstrating the absolute stability of the system under any finite perturbation for which all the terms in ( [ stab07 ] ) exist .of course , a simple example is the case of an incoming wave of ( essentially ) compact support , for which we have already demonstrated the stability .we should further notice that the term , in the second line of ( [ stab07 ] ) allows for mild compensating divergences of , and , provided they are compatible with the existence of the integrals in ( [ stab07 ] ) .we notice nevertheless that , taking into account ( [ stab01 ] ) , and ( [ stab02 ] ) , we may also write in the alternative forms , and , since both and are bounded , we find that both and must also remain bounded , and this implies that is also bounded . on account of ( [ eq02h ] ) , this result implies that is also bounded to first order .we remark that is directly of second order and vanishes to first order .therefore , the full metric remains bounded under arbitrary perturbations .one can now check that the system ( [ eq01 ha ] ) , ( [ eq05h ] ) , ( [ eq06h ] ) , and ( [ eq08h ] ) , with appropriate replacements of by , can be used to set up and solve numerically the initial plus boundary value problem where , for some arbitrarily chosen we give arbitrary values to , , to and , in the interval , and to and , in the interval , subject to the constraints , and those resulting from ( [ eq05h ] ) .although this procedure is well defined , and leads to a unique evolution for appropriate initial data , at least in its cauchy domain , we must remark that the well - posedness of this type of problems , as regards sensitivity to small changes in the initial data , is still an open and interesting issue , outside the limits of the present research .the purpose of this exercise is mainly to display the evolution of some appropriately chosen initial data , in particular of , in the region where we may expect the presence ( or absence ) of quasi normal ringing .as we shall see , this behaviour , at least in the region analyzed , is in complete agreement with our previous results .the integration procedure we have chosen uses a simple finite difference method for updating in , and for in , where is some appropriately chosen outer boundary .the values of are updated using a simple leap frog scheme .once these updates are carried out , we use the matching conditions ( [ eq05h ] ) to update , and at . 
we will be mainly interested in the behaviour of as a function of . on this account we choose some appropriate value of and boundary condition for at , but carry out the integration only from to , to ensure , by causality , that no signal coming in from has time to reach and affect that behaviour . in particular , in all the examples below we set , while we chose for the first and second examples , and for the other two examples . the first explicit example of the results of this numerical integration for is given in fig . 4 and fig . 5 . the initial data used was : and we set , , corresponding to , and . in fig . 4 we display the evolution of as a function of . the graph corresponds clearly to an evolution dominated by qnr , although a more detailed analysis indicates the presence of a small non oscillating background at large . in fig . 5 we have a plot of ( thick line ) in the region dominated by the damped oscillation mode . the thin line curve corresponds to an approximate fit with the function , where , , and , although in the actual figure , for clarity , the fit has been displaced upwards to avoid superposition of the two graphs . we can see again the very clear signal of a damped oscillation , with a frequency and fall off in very good agreement with those of the example of fig . 2 , although we have used a different procedure and a different form for the incoming pulse . the agreement would probably improve using a more elaborate , rather than our simple minded , implementation of the numerical procedure . in any case , it was not the purpose of the authors to optimize it , but rather to show that even with a simple implementation we can solve the initial value problem related to the perturbation treatment of the dynamics of our system close to a static configuration , and obtain results in perfect agreement with those extracted from the mode expansion . as a second full numerical example we considered again the initial data given in ( [ eq09h ] ) , but setting and , that is . the results are shown in fig . 6 . in this case , in agreement with our analysis of the qnm , we find a very strongly damped oscillation , very much like that shown in fig . 3 for the same values of the parameters and , although the computational procedures applied in each case were completely different . we can also check that both the frequency and the damping are in good agreement with the values given in table i for and positive real part of . as a final example , we present the numerically computed evolution of as a function of for the cases and , that is , given as the thin line curve in fig . 7 , and for and , that is , also given in fig . 7 as the thick line curve . in these examples the qnr is overdamped and no trace of `` ringing '' is apparent in the time dependence of .
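fits of the kind shown in figs . 2 and 5 can be obtained with a standard nonlinear least - squares routine ; a minimal sketch of ours , with the damped - cosine template and the starting values as illustrative assumptions :

```python
import numpy as np
from scipy.optimize import curve_fit

def qnr_template(tau, a, lam, omega, phi):
    """Damped oscillation A e^{-lambda tau} cos(omega tau + phi)."""
    return a * np.exp(-lam * tau) * np.cos(omega * tau + phi)

# tau, xi = ...                 # time series of the shell displacement
# p0 = (0.1, 0.05, 0.5, 0.0)    # rough starting guesses read off the plot
# popt, pcov = curve_fit(qnr_template, tau, xi, p0=p0)
```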
in particular , the last , overdamped case , , corresponds to in the notation of , where , in what they call the first class solutions , one has a pure imaginary qnm frequency and , therefore , only a purely exponentially decreasing qnr . these plots also confirm the stability of the corresponding static configurations . in this section we consider the problem of the apparent discrepancy between the evolutions obtained using the formalism developed in and those described here . we shall illustrate the reasons for this by considering the qnm toy model given by price and husain in , since , as will be seen , it bears a certain formal resemblance to the shell model discussed here . the price and husain model consists of two torsion strings , the first attached at and extending to , where it is attached to a second string that extends to . the strings have different characteristic constants , so that the respective equations of motion are , and , where is a constant ; and represent the torsion angle in the intervals and , respectively . the boundary and matching conditions are , the general solutions of ( [ eeq01a ] ) and ( [ eeq01b ] ) are , where is , in principle , an arbitrary function of , and we have included the boundary condition at , and , where and are also functions of . one can now check that , imposing the matching conditions , and assuming that all the functions vanish for large values of their arguments , the general solution functions , , and satisfy the relations
$$ \cdots \qquad g(t - kx) \;=\; \frac{1}{2k} \left[ ( k - 1 ) \, f(t - kx + ka + a) - ( k + 1 ) \, f(t - kx + ka - a) \right] $$
and , therefore , an arbitrary determines a complete solution of the problem . in particular , if we choose of compact support , we will also get and of compact support . this type of solution describes then an incoming pulse of compact support that approaches the junction at , excites a non vanishing for a finite time , and gives rise to an outgoing pulse , also of compact support . there is , in general , no `` ringing '' , as no quasi normal mode is excited . this is precisely the situation considered in , where we could fix the function arbitrarily , and , by choosing it of compact support , we also got and of compact support , with no quasi normal ringing . but the price and husain model does display qnr .
to see how this comes about we assume that the functions admit fourier transforms of the form . replacing in the system ( [ eeq08 ] ) we eventually find that we may write the solutions as integrals of the form \[ \int \frac{ \cdots }{ \left( 1 + k \right) e^{ 2 i \omega a } - k + 1 } \, d\omega . \] these expressions provide the solution for the problem of finding the evolution assuming that we are given an incoming pulse as the data for the problem , just as in the present shell problem . if has suitable analytic properties it may be possible to compute the integrals in ( [ eeq11 ] ) by extending the path of integration to the complex plane . we notice here that the denominators in ( [ eeq11 ] ) have simple poles at , where is an integer , including zero . notice that all have the same positive imaginary part , and , therefore , when it is possible to close the path of integration by adding an infinite semicircle in the upper half plane of , each one of these poles will contribute a factor of the form times an oscillating factor depending on and on the explicit form of . these are the characteristic features of the `` ring down '' associated with the quasinormal modes . in fact , if we go back to ( [ eeq08 ] ) , and write it in the form \[ g(t) = \frac{1}{2k} \left[ ( k - 1 ) f( t + ka + a ) - ( k + 1 ) f( t + ka - a ) \right] \] and assume , we get \[ g(t) = \frac{1}{2k} \left[ ( k - 1 ) e^{ i \omega a } - ( k + 1 ) e^{ - i \omega a } \right] e^{ i \omega ( t + ka ) } , \] together with a similar expression proportional to e^{ i \omega ( t - ka ) } for the remaining function . the condition corresponds to a purely outgoing wave for . this is achieved precisely for of the form ( [ eeq12 ] ) . taken literally , this type of solution corresponds to both and decreasing exponentially for , but increasing exponentially as . similarly , we have that grows exponentially for large . these solutions are therefore unphysical . what , then , is the relation between these solutions and the poles in ( [ eeq11 ] ) ? the crucial point is that the poles are effective only if the integration path can be closed in the upper half plane of . in general , for , this requires , and therefore the exponential terms are not present for , and , as a consequence , for less than a certain lower bound . but why do we see a ring down in one case and not in the other ? to understand what is happening here we go back to ( [ eeq13 ] ) , and assume that has a fourier transform . then , replacing in ( [ eeq08 ] ) , we find a transform containing the factor \[ \left[ 1 - e^{ 2 i \omega a } \right] , \] but this implies that for general , the transform will contain a factor that precisely cancels the denominators in ( [ eeq11 ] ) , and therefore there will be no ring down in the solution of the problem , unless , of course , itself contains the appropriate poles , as in ( [ eeq11 ] ) . the general conclusion is then that both ( [ eeq08 ] ) and ( [ eeq11 ] ) provide a complete solution of the problem and are , therefore , completely equivalent . clearly , for the purpose of making the qnr apparent it is simpler and , in a sense , more `` natural '' to use ( [ eeq11 ] ) and specify freely , i.e. , the incoming pulse shape , just as was done for the shell problem in the present work . in this paper we have considered again the perturbative analysis of the static configurations of the apostolatos and thorne shell model by modifying the formalism developed in .
as a result we have been able to show the completeness of the mode expansion as regards the characteristic data problem for the perturbative dynamics of the shell . this , in turn , provides a simple and direct proof of the stability under bounded ( symmetry preserving ) perturbations of the general static configurations of the shell . we have also derived a set of coupled linear ordinary and partial differential equations that describe the general perturbative evolution of the shell . this set of equations can be used to set up an initial value problem for the shell that can be solved numerically , but , more importantly , one can prove the existence of a positive definite constant of the motion that implies the stability of the motion resulting from an arbitrary perturbation . in the several examples considered we find perfect agreement between the full numerical evolution and that obtained through the numerical integration of the mode expansion , although they are completely different in detail . at first sight our results seem to be in contradiction with those of kurita and nakao . we believe that there is no contradiction here , and that our computations and those in are perfectly compatible . the problem appears because kurita and nakao conclude that the existence of certain complex zeros ( or poles ) in the complex plane automatically implies a direct effect on the evolution of the shell , but this is not necessarily the case . the usual arguments for relating these poles to the evolution rely rather heavily on the possibility of extending integrals on the real axis to the complex plane , in such a way that one effectively picks up the residues of those poles , but this may not always be possible , and it might even happen that one can disregard those poles , when they exist , by extending the integration path in the opposite direction . in fact , since the mode expansion involves bessel functions in rather complex combinations , the matter of finding the extensions of the appropriate functions to the complex plane is a highly nontrivial undertaking . going back to our own derivations , we notice that although we found qnm with a negative real part of , we only find evidence of qnr for those with a positive real part of . we remark once again that the mode expansion does not require explicit consideration of the complex plane , as only the real axis is involved . we believe that , although the work is valuable as regards the finding of the qnm of the system , these by themselves are unphysical ( they generally diverge in some space or time direction ) , and , given the very complex nature of the functions and of their possible extensions to the complex plane , conclusions about the stability of the static configurations of the shell can only be reached by explicitly showing the relation between these modes and the general evolution of the shell . in fact , in view of our results concerning the completeness of the mode expansion and its relation to stability , we can only conclude that all in principle unstable qnm must be suppressed , as well as possibly some of the stable ones . of course , explicit confirmation of this conclusion would require performing and analyzing appropriate extensions into the complex plane , but that is completely outside the scope of the present research . this work was supported in part by conicet ( argentina ) . t. a. apostolatos and k. s. thorne , phys . rev . d * 46 * , 2435 ( 1992 ) . see , e.g. ,
nollert , class . quant . grav . * 16 * ( 1999 ) r159 - r216 , for a recent introduction , review and references on this subject . v. h. hamity , m. a. ccere , and d. e. barraco , gen . rel . gravit . * 41 * , 2657 ( 2009 ) . r. j. gleiser and m. a. ramirez , phys . rev . * d85 * ( 2012 ) 044026 . we must warn the reader that the use of in the equations that follow applies only to expressions where is real , and is just a shorthand for , if , and for , if . this appears to be trivial for real , but is crucial for _ complex _ , because , as it should be clear , expressions such as ( [ eq1d ] ) , ( [ eq4da ] ) , or ( [ eq5d ] ) , with in the argument of the bessel functions , are _ not _ solutions of ( [ ateq2 ] ) for complex . we must therefore replace appropriately in any expression related to the solution of the evolution equations before we attempt to compute it for complex . this should also be taken into account in our discussion of quasi normal modes and in any other instance where we consider complex values of . y. kurita and k. i. nakao , _ dynamical instability in a relativistic cylindrical shell composed of counter rotating particles _ , [ arxiv:1112.4252 [ gr - qc ] ] , prog . theor . phys . * 128 * ( 2012 ) 191 - 211 . r. h. price and v. husain , phys . rev . letters * 68 * , 1973 ( 1992 ) .
|
we study the perturbative evolution of the static configurations , quasinormal modes and quasi normal ringing in the apostolatos - thorne cylindrical shell model . we first consider an expansion in harmonic modes and show that it provides a complete solution for the characteristic value problem for the finite perturbations of a static configuration . as a consequence of this completeness we obtain a proof of the stability of static solutions under this type of perturbation . the explicit expressions for the mode expansion are then used to obtain numerical values for some of the quasi normal mode complex frequencies . some examples involving the numerical evaluation of the integral mode expansions are described and analyzed , and the quasi normal ringing displayed by the solutions is found to be in agreement with the quasi normal modes found previously . going back to the full relativistic equations of motion , we find their general linear form by expanding to first order about a static solution . we then show that the resulting set of coupled ordinary and partial differential equations for the dynamical variables of the system can be used to set up an initial - plus - boundary - value problem , and prove that there is an associated positive definite constant of the motion that puts absolute bounds on the dynamical variables of the system , establishing the stability of the motion of the shell under arbitrary , finite perturbations . we also show that the problem can be solved numerically , and provide some explicit examples that display the complete agreement between the purely numerical evolution and that obtained using the mode expansion , in particular regarding the quasi normal ringing that results in the evolution of the system . we also discuss the relation of the present work to some recent results on the same model that have appeared in the literature .
|
current power systems are continuously monitored and controlled by ems / scada ( energy management system and supervisory control and data acquisition ) systems in order to maintain the operating conditions in a normal and secure state . in particular , the scada host at the control center processes the received meter measurements using a state estimator , which filters the incorrect data and derives the optimal estimate of the system states . these state estimates are then passed on to all the ems application functions , such as optimal power flow , etc. , to control the physical aspects of the electrical power grids . however , the integrity of state estimation is under mounting threat as we gradually transform the current electricity infrastructures into future smart power grids , which are more open to outside networks due to the extensive use of internet - based protocols in the communication system . in particular , enterprise networks and even individual users are allowed to connect to the power network information infrastructure to facilitate data sharing . with these entry points introduced to the power system , potentially complex and collaborating malicious attacks are brought in as well . liu _ et al . _ showed that a new false - data injection attack could circumvent bad data detection ( bdd ) in today s scada system and introduce arbitrary errors to state estimates without being detected . such an attack is referred to as an undetectable false - data injection attack . a recent experiment in demonstrates that the attack can cause a state - of - the - art ems / scada state estimator to produce a bias of more than of the nominal value without triggering the bdd alarm . biased estimates could directly lead to serious social and economic consequences . for instance , showed that attackers equipped with data injection can manipulate the electricity price in the power market . worse still , warned that the attack can even cause regional blackouts . being aware of its imminent threats to the power system , a number of studies have been devoted to both understanding its attacking patterns and providing effective countermeasures . a common approach to mitigating false - data injection attacks is to secure meter measurements by , for example , guards , video monitoring , or tamper - proof communication systems , to evade malicious injections . recent studies have proposed a number of methods to select meter measurements for protection . for instance , proved that it is necessary and sufficient to protect a set of _ basic measurements _ so that no undetectable false - data injection attack can be launched . however , the protection scheme in is costly in that the size of a set of _ basic measurements _ is the same as the number of unknown state variables in the state estimation problem , which could be up to several hundred in a large - scale power system . under a limited budget , the system operator should protect a subset of state variables . this is because an ill - advised protection method may leave the attackers the chance to formulate an undetectable attack that compromises a large number of , if not all , the state variables , even if many measurements have been secured .
in this case , the system operator may give priority to protecting the state variables that have greater social / economic impact once compromised , such as those for critical buses / substations connected to heavily loaded or economically important areas , or those serving critical interconnection purposes . on the other hand , even if the system operator has enough budget to defend all the state variables , protecting a set of basic measurements in a random sequence may still open to attackers the possibility of compromising a large number of state variables during the lengthy security installation period . in both cases , it is valuable to devise a method that gives priority to defending a subset of state variables that serves our best interests at the current stage , and leaves open the possibility of expanding the set of protected state variables in the future . in this paper , we focus on using graphical methods to derive efficient strategies that defend any subset of state variables with a minimum number of secure measurements . our detailed contributions are listed as follows : * we derive conditions to select a set of meter measurements , so that no undetectable attack can be launched to compromise a given set of state variables if the selected meters are secured . the conditions are particularly useful in formulating the optimal protection problem that defends the state variables with a minimum cost . * we characterize the optimal protection problem as a variant steiner tree problem in a graph . then , two exact solution methods are proposed , including a steiner vertex enumeration algorithm and a mixed integer linear programming ( milp ) formulation derived from a network flow model . in particular , the proposed milp formulation reduces the computational complexity by exploiting the graphical structure of the optimal solution . * to tackle the intractability of the problem , we also propose a polynomial - time tree - pruning heuristic ( tph ) algorithm . with a proper parameter , simulation results show that it yields a close - to - optimal solution , while significantly reducing the computational complexity . for instance , the tph solves a problem for a -bus testcase in seconds that may take days with the milp formulation . the proposed milp and tph algorithms can also be extended to achieve incremental protection . that is , starting from a set of protected state variables and measurements , the method can gradually expand the set of protected state variables until the entire set of state estimates is protected . the incremental protection method can be used to plan a long - term security upgrade project in a large - scale power system . state estimation protection is closely related to the concept of power network observability . the conventional power network observability analysis studies whether a unique estimate of all unknown state variables can be determined from the measurements . from the attacker s perspective , proved that an undetectable attack can be formulated if removing the measurements it compromises makes the power system unobservable . conversely , showed that no undetectable attack can be formulated if the power system is observable from the protected meter measurements . in this paper , we extend the conventional wisdom of power network observability to a generalized _ state variable observability _ to study protection mechanisms for any set of state variables . graphical methods are commonly used for power system observability analysis . the early work by krumpholz _ et al . _
stated that a power system is observable if and only if it contains a spanning tree that satisfies certain measurement - to - transmission - line mapping rules . a follow - up work presented a max - flow method to find such a mapping to examine the observability of a power network . a few recent papers have also applied graphical methods to study the attack / defending mechanisms of false - data injection . for instance , based on the results in , proposed an algorithm to quantify the minimum - effort undetectable attack , i.e. the non - trivial attack that compromises the least number of meters without being detected . besides , used a min - cut relaxation method to calculate the security indices defined in to quantify the resistance of meter measurements in the presence of injection attacks . a similar min - cut approach was also applied in to identify the critical points in the measurement set , the loss of which would render the power system unobservable . the problem of defending a subset of critical state variables against undetectable attacks was first studied in our earlier work , where we proposed an arithmetic greedy algorithm which finds the minimum set of protected meter measurements by gradually expanding the set of secure state variables . however , the computational complexity of the greedy algorithm can be prohibitively high in large - scale power systems . for instance , it may take years to obtain a solution in a -bus system . in contrast , we study in this paper the optimal protection from a graphical perspective . by exploiting the graphical structures of the optimal solution , the proposed milp formulation obtains the optimal solution with significantly reduced complexity . in addition , we also propose a pruning - based heuristic that yields near - optimal solutions in polynomial time . the rest of this paper is organized as follows . in section ii , we introduce some preliminaries about state estimation and the false - data injection attack . we characterize the optimal protection problem in a graph in section iii and propose solution algorithms in section iv . in section v , we discuss the methods to extend the proposed algorithms to some practical scenarios , including the method to achieve incremental protection . simulation results are presented in section vi . finally , the paper is concluded in section vii . we consider the linearized power network state estimation problem in a steady - state power system with buses . the states of the power system include the bus voltage phase angles and voltage magnitudes . the voltage magnitudes can often be directly measured , while the values of the phase angles need to be obtained from state estimation . in the linearized ( dc ) measurement model , we assume the knowledge of the voltage magnitudes at all buses ( in the per - unit system ) and estimate the phase angles based on the active power measurements , i.e. the active power flows along the power lines and the active power injections at buses . by choosing an arbitrary bus as the reference with zero phase angle , the network state consisting of the unknown voltage phase angles is captured in a vector . in the dc measurement model , the received measurements are related to the network states as here , is the measurement jacobian matrix . is independent measurement noise with covariance . take the -bus power system in fig . as an example .
by setting bus as the reference bus , there are four unknown state variables . suppose that the reactance of all transmission lines equals ; then the measurement jacobian matrix is where the first rows correspond to flow measurements while the last two rows correspond to injection measurements . the four columns correspond to buses to , respectively . notice that the column corresponding to the reference bus is not included . when is of full column rank , i.e. , the maximum likelihood estimate is given by . since , i.e. the number of rows in , at least meters are needed to derive a unique state estimate . meanwhile , the other measurements provide redundancy to improve the resistance against random errors . errors could be introduced for various reasons , such as device misconfiguration and malicious attacks . current power systems use a bdd mechanism to remove the bad data , assuming that the errors are random and unstructured . it calculates the residual and compares its -norm with a prescribed threshold . a measurement is identified as a bad data measurement if where is an identity matrix . otherwise , is considered a normal measurement . suppose that attackers inject malicious data into the measurements . then , the received measurements become in general , is likely to be identified by the bdd if it is unstructured . nevertheless , it is found in that some well - structured injections , such as those with , can bypass the bdd . here is a random vector . this can be verified by calculating the residual in ( [ 5 ] ) , where the same residual is obtained as if no malicious data were injected . therefore , a structured attack will not be detected by the bdd . in this case , the system operator would mistake for a valid estimate , and thus an error vector has been introduced without being detected . the risks of undetectable attacks can be mitigated if the system operator can secure measurements to evade malicious injections . within this context , we assume that the system operator s objective is to ensure that no undetectable attack can be formulated to compromise a given set of state variables , where is the set of all unknown state estimates . that is , for all . this is achieved by securing a set of meter measurements , where is the set of all the meters . in other words , attackers are not able to inject false data into any protected meter measurement , i.e. , . from , securing a set of meters would eliminate the possibility of an undetectable attack to compromise a set of state variables if and only if here , is the submatrix of including the rows that correspond to and is the submatrix of excluding the columns that correspond to . denotes the size of . naturally , we are interested in minimizing the cost to protect the state variables . for simplicity , we assume a fixed cost , e.g. manpower or surveillance installation cost , of securing each meter for the time being . this requires solving the following problem , which is proved to be an _ np - hard _ problem in the next section . interestingly , we show that ( [ 98 ] ) can be characterized as a variant steiner tree problem in a graph . the results will be used in the next section to develop graphical algorithms .
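before moving to the graphical characterization , the mechanics of dc state estimation , the bdd residual test , and a structured injection ( the jacobian times an arbitrary error vector ) that bypasses it can be illustrated numerically . the 4 - bus jacobian , meter placement , true state and noise level in this minimal sketch are hypothetical , not those of the testcase in the figure :

```python
# minimal sketch: least-squares dc state estimation, the residual-based
# bad data test, and an undetectable structured injection.
# hypothetical example: bus 1 is the reference, unknowns are the phase
# angles of buses 2-4, and all line reactances are 1.
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[-1.0,  0.0,  0.0],   # flow meter on line 1-2
              [ 1.0, -1.0,  0.0],   # flow meter on line 2-3
              [ 0.0,  1.0, -1.0],   # flow meter on line 3-4
              [ 0.0,  0.0, -1.0],   # flow meter on line 1-4
              [-1.0,  2.0, -1.0]])  # injection meter at bus 3

x_true = np.array([0.10, 0.05, -0.02])          # phase angles of buses 2-4
z = H @ x_true + 1e-3 * rng.standard_normal(5)  # noisy measurements

def estimate(z):
    # least squares with identity weighting: x = (H^T H)^{-1} H^T z
    return np.linalg.solve(H.T @ H, H.T @ z)

def residual_norm(z):
    # bdd statistic: 2-norm of z - H x_hat, compared against a threshold
    return np.linalg.norm(z - H @ estimate(z))

c = np.array([0.0, 0.03, 0.0])    # error the attacker wants to introduce
attack = H @ c                    # structured injection: jacobian times c
print(residual_norm(z), residual_norm(z + attack))  # identical residuals
print(estimate(z + attack) - estimate(z))           # bias is exactly c
```

the two printed residuals coincide , while the second estimate is biased by exactly the chosen error vector , which is the undetectable attack described above .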
in this subsection , we first introduce some definitions to characterize a power network in a graph . then , we establish the equivalence between power network observability and the state estimate protection criterion . the results will be used in the next subsection to formulate an equivalent graphical characterization of the optimal state protection problem in ( [ 98 ] ) . a power network can be described by an undirected graph , where vertices and edges represent buses and transmission lines , respectively . we use and to denote the two vertices connected by the edge , and to denote the set of edges incident to vertex . the following definition gives the notion of measurability in a power network . * definition : ( measurability ) * the _ measured subnetwork _ of a meter , denoted by , consists of the vertices and edges _ measured _ by the meter . that is , for a flow meter on transmission line , includes the two vertices and the edge . for an injection meter at bus , includes the vertex set and edge set . the _ measured subnetwork _ of a set of meters is defined as in particular , is referred to as the _ measured full network _ . take the -bus testcase in fig . as an example . the measured subnetwork of the flow meter includes edge and vertices and , i.e. the measured subnetwork of the injection meter is besides , the measured subnetwork of is the conventional power network observability analysis studies whether a unique estimate of all unknown state variables can be determined . here , we extend the concept of network observability to a generalized state variable observability in the following definition . with a slight abuse of notation , we use a set of vertices to denote the corresponding state variables . * definition : ( observability ) * a set of state variables is _ observable _ from a set of meters , if and only if a unique estimate of can be obtained from the measurements . that is , for two different vectors , if holds for an arbitrary measurement vector , then likewise , a measured subnetwork is an _ observable subnetwork _ if and only if all the unknown state variables in the subnetwork are observable from , i.e. where , with being the reference bus . * remark : * it holds that if is observable from . we refer to as a _ basic measurement set _ of , if is observable from and . notice that not all s have a basic measurement set . from ( [ 52 ] ) , contains at least a basic measurement set of when is an observable subnetwork . besides , must include the reference bus , i.e. , since otherwise . note that the conventional definition of network observability is a special case with and . now , we are ready to establish the equivalence between state observability and the state estimate protection criterion . * theorem : * protecting a set of meter measurements can defend a set of state variables against undetectable attacks , if and only if is observable from . _ proof : _ we first prove the _ if _ part . when is observable from , there must exist an observable subnetwork that includes , i.e. and . from ( [ 52 ] ) , we have , where . then , the solution of to is , where is an arbitrary vector . that is , no undetectable attack can be formulated to compromise if is well protected . since and , this completes the proof of the _ if _ part . we then show the _ only if _ part . that is , there exists an undetectable attack to compromise if is unobservable from . from the definition , there exists a and two different state vectors and , satisfying and for some .
by letting , we have and . in other words , an attacker can introduce a non - trivial error to state variable without the need to compromise any protected meter in . therefore , an undetectable attack can compromise state without being detected . * remark : * the theorem indeed provides an equivalent condition to ( [ 11 ] ) for protecting a set of state variables from the perspective of network observability . this will help to develop the graphical algorithms in the following subsections . from the theorem , we see that all the unknown state variables to be defended , i.e. , are included in an observable subnetwork constructed from a set of protected meters . in the following subsection , we find that the optimal observable subnetwork has an interesting steiner tree structure . the power network observability analysis in showed a connection between network observability and a spanning tree structure . the idea is briefly covered in the proposition . * proposition : * the measured full network is observable if and only if the graph defined on contains a spanning tree , each edge of which is mapped to a meter according to the following rules : 1 . an edge is mapped to a flow meter placed on it , if any ; 2 . an edge without a flow meter is mapped to an injection meter that measures it ; 3 . different edges are mapped to different meters in . _ proof : _ see the proof in . the proposition states that any basic measurement set of can be mapped to a spanning tree in the measured full graph . on the other hand , a measured subnetwork , where , can also be considered as a closed network whose observability is only related to the components within . therefore , there also exists a measurement - to - edge mapping in an observable subnetwork , specified as follows . * corollary : * a measured subnetwork is observable if and only if the graph defined on contains a tree that connects all vertices in , where each edge of the tree is one - to - one mapped to a unique meter in that takes its measurement . _ proof : _ the proof follows by replacing with in the proposition . from the remark and corollary , we see that the unknown state variables to be defended are indeed contained in a tree constructed from a protected meter measurement set . therefore , we propose the following _ minimum measured steiner tree _ ( mmst ) problem in a graph , which is equivalent to the optimal state protection problem ( [ 98 ] ) . ( figure : the measured full graph of the -bus testcase . ) * mmst problem : * given the measured full graph , to protect a set of state variables with a minimum cost , the mmst problem finds a shortest steiner tree ( with the minimum number of edges ) and a set of meters that satisfy the following conditions : 1 . is the set of all vertices measured by ; 2 . and ; 3 . each edge in is one - to - one mapped to a unique meter in that takes its measurement . then , the set of meters is the optimal solution to ( [ 98 ] ) . we name the problem a steiner tree problem , instead of a spanning tree problem , because in general connects only a subset of vertices in the measured full graph . the three conditions ensure that all the unknown state variables in , including , are observable from . we present an example from fig .
to illustrate the structure of a mmst . we assume that and is the reference bus . the optimal protected meter set is obtained by exhaustive search . the corresponding minimum steiner tree is plotted in fig . we see that conditions ) and ) are clearly satisfied . condition is satisfied by mapping edges and to injection meters and , and the other edges in to the flow measurements placed on them . we show that the mmst problem is _ np - hard _ by considering a special case where flow meters are installed at all edges of . then , any steiner tree that includes and automatically satisfies the three conditions , i.e. by mapping each edge to the corresponding flow meter . in this case , the mmst problem becomes a standard minimum _ steiner tree _ ( mst ) problem , which finds the shortest subtree of the full graph that connects and all the vertices in . mst is a well - known _ np - hard _ problem . the time complexity of known exact algorithms increases exponentially with or . since mst is a special case of the mmst problem , the mmst problem is also _ np - hard _ , following the reduction lemma for computational complexity analysis . a special case of the mmst problem with is solved in and with time complexity . the special case is easy because holds automatically when all the state estimates are to be protected . the general mmst problem is much harder due to the combinatorial nature of possible . in this section , we first introduce two exact solution methods to solve the mmst problem , including the sve method and an milp formulation . then , a tree - pruning heuristic is proposed to obtain an approximate solution in polynomial time . a vertex in the steiner tree solution is a _ terminal _ if , or a _ steiner vertex _ otherwise . the steiner vertex enumeration ( sve ) method enumerates the possible steiner vertices until a minimum observable subnetwork , including and the terminals , is found . then , can be obtained by removing redundant measurements in the subnetwork using gauss - jordan elimination . a pseudo - code of the sve is presented in algorithm ; its output is a basic measurement set of . the time complexity of sve is , which is computationally infeasible in large - scale power networks , e.g. a -bus system . therefore , we mainly use sve as the performance benchmark to evaluate the correctness of the algorithms proposed in the following subsections . in this subsection , we propose an milp formulation to solve the mmst problem , which has much lower complexity than sve by exploiting the optimal solution structure . consider a digraph constructed by replacing each edge in the measured full graph with two arcs in opposite directions . we set the reference bus as the root and allocate one unit of demand to each vertex in . commodities are sent from the root to the vertices in through some arcs . then , the vertices in are connected to via the used arcs if and only if all the demand is satisfied . when we require using the minimum number of arcs to deliver the commodity , the used arcs will form a directed tree , referred to as a _ steiner arborescence _ . evidently , the solution to the mmst problem can be obtained if we solve the following _ minimum measured steiner arborescence _ ( mmsa ) problem and neglect the orientations of the arcs . without causing confusion , we say an arc is measured by a meter if the edge is , and if an injection meter is available at .
is the demand at vertex , where \[ \begin{cases} \cdots \sum_{[k , j] \in \mathcal{e}} \sum_{(k , s) \in \mathcal{a}} z_{ks} , & j \in \mathcal{d} \\ \sum_{(j , k) \in \mathcal{a}} z_{jk} + \sum_{[k , j] \in \mathcal{e}} \sum_{(k , s) \in \mathcal{a}} z_{ks} , & j \notin \mathcal{d} . \end{cases} \] for , is the total pseudo demand . otherwise , one extra unit of actual demand is counted as well . as we can see , there are two terms in ( [ 26 ] ) , each corresponding to one objective . the first term is to minimize the total number of arcs included in the arborescence . the second term is to minimize the number of injection measurements . notice that the first objective is primary , as the second term in ( [ 26 ] ) is always dominated by the first one due to the scaling factor , which makes the second term always less than . as such , ( [ 26 ] ) minimizes the total number of arcs in the arborescence while eliminating redundant injection measurements , such as the case when two injection measurements are assigned to the same arc . the constraint forces arc to be included in if any commodity flow passes through it . constraints ( [ 22 ] ) and ( [ 23 ] ) ensure that each arc included in has at least one measurement assigned to it and that each injection measurement can be assigned to at most one arc . the flow conservation constraint ( [ 24 ] ) , together with , forces the selected arcs to form an arborescence rooted at the reference vertex and spanning all vertices with positive demand . once the optimal solution to ( [ 27 ] ) is obtained , we can restore the optimal solution to the mmst problem by including : 1 . the injection measurement on bus if , ; 2 . the flow measurement on arc , if and , , that is , the arcs in not mapped to any injection measurement . extensive experiments in the simulation section show that the milp formulation always obtains the same optimal solution as the sve algorithm . besides , the milp significantly reduces the computational complexity by exploiting the solution structure . for instance , a problem in a -bus system that is computationally infeasible by the sve algorithm can now be solved by the milp within minutes . nonetheless , the computational complexity of state - of - the - art milp algorithms , such as branch - and - bound and cutting - plane methods , still grows exponentially with the problem size . we observe from simulations that it takes an excessively long time to solve the problem in a -bus power system . to tackle the intractability of the problem , we propose a tree - pruning based heuristic ( tph ) that finds an approximate solution in polynomial time . we refer to a tree , along with a set of measurements , as a _ feasible measured tree _ if and satisfy the conditions of the mmst problem . our observation is that , although it is hard to find a mmst , it is relatively `` easy '' to find a feasible tree that includes all the vertices in the graph using the techniques in . starting from a feasible measured tree that spans all vertices in the measured full graph , our tph method iteratively prunes away redundant vertices and updates the feasible tree , until a shortest possible tree is obtained . a pseudo - code is provided in algorithm . the tph consists of multiple rounds of pruning operations . here , we explain one round of pruning , which corresponds to lines - in the pseudo - code , in the following steps .
here , we explain one round of pruning , which corresponds to line - in the pseudo - code , in the following steps .* initialization : * the remaining measurements corresponding to _ step 1 : feasible tree generation ._ for a set of vertices ( initially set to be ) , we generate feasible edge - measured trees that span all the vertices in , where is a tunable parameter ( lines - ) . in this step ,we first find the meters that measure only the vertices in .this can be easily performed by examining in whether all the non - zero elements in a row lie in the columns corresponding to the state variable set .for instance , for and in fig .[ 61 ] , the selected meters are . among the selected meters , we find basic measurement sets of , denoted by ( ) , using gauss - jordan elimination .then , we construct feasible spanning trees , one for each , using the max - flow method given in the appendix .the feasible spanning trees are denoted by , ._ step 2 : vertex identification ._ for each tree , we identify the child and descendant vertices of each vertex ( included in line - in algorithm ) .this can be achieved by constructing a directed tree from the root to all leaf vertices . if there is an arc , we say is a child of , denoted by . in general ,if there exists a path from to , we refer to as a descendent of , denoted by . in fig . , for instance , and are the child vertices of , while to are all descendent vertices of . in practice , the descendent vertex identification can be achieved using breadth - first - search starting from the root ._ step 3 : tree pruning ._ for each , we start from the root to the leaf vertices to prune away redundant vertices ( line - in algorithm ) . for a vertex , we find the largest prunable subset , such that the residual tree is still a feasible measured tree after all the vertices in are pruned .in particular , can be pruned if : 1 . contains no terminal vertex , 2 .the deletion of will remove all the edges mapped to injection meters that measure any vertex in .this is because the first condition ensures the all the state variables to be protected is still included in the tree .the second condition guarantees that the vertices in the residual tree are only measured by the remaining measurements .the two conditions ensure that the residual tree is feasible to the mmst problem .then , we update by removing all the vertices in and proceed to another vertex until each vertex in is either checked or pruned ._ step 4 : vertex update ._ let be the number of remaining vertices in .then , we select among the trees the one with minimum vertices , denoted by . if , i.e. no vertex is removed for all the trees , we terminate the algorithm and output as the remaining meters in ( line - ) .otherwise , we first update as the remaining vertices in and start another round of pruning from step ) . in fig . , we present an example to illustrate the tph , where a feasible tree contains vertices is presented . starting from the root , among the three child vertices of , only can be pruned , since the descendent vertices of either or contain terminal vertex . after pruning , we proceed to check , whose only child vertex is a terminal .then , we check , where neither of its child vertices and can be pruned separately or together . on one hand , this is because contains terminal as its descendent vertices . 
on the other hand , the removal of does not remove the edge mapped to the injection meter . ( figure : the pruning example , in which the marked edges are mapped to injection meters and the other unmarked edges are mapped to flow meters . ) the parameter is introduced because the final output is closely related to the tree s topology obtained in step . intuitively , with a larger , we have a larger chance to obtain a smaller , but also consume more computation . the proper choice of will be discussed in the simulations . the correctness of the tph is obvious from the following facts : ) the residual trees are always feasible measured trees ; ) the size of the minimum residual tree is non - increasing during the iterations ; ) equals the size of the minimum residual tree . there are at most rounds of pruning . in each round , trees are pruned and each takes time complexity , dominated by the gauss - jordan elimination computation . the overall time complexity is , which is considered efficient even for very large - scale power systems . in this section , we discuss the possibility of extending the proposed algorithms to some interesting application scenarios . the topics we consider include : the integration of phasor measurement units ( pmus ) into state estimation , the applicability to the ac state estimation model , and the extension to achieve incremental state variable protection . interestingly , we find that our proposed algorithms can fit in all the considered scenarios with minor modifications . recently , the introduction of more sophisticated measurement components has largely improved the accuracy and reliability of state estimation . one such device is the phasor measurement unit ( pmu ) . combined with gps technology , pmus can provide direct real - time voltage phasor measurements , i.e. voltage amplitude and phase angle , with high precision and a short measurement period . in other words , any bus with a pmu installed does not need to estimate its voltage phasor if the device has a credible precision . there have been a number of studies on pmu deployment to improve power network observability . however , although the introduction of pmus can be dated back to the , their deployment had proceeded at a slow pace until the past decade , when a series of severe blackouts were experienced all around the world . nowadays , the available pmus alone are still not sufficient to guarantee the observability of the entire power network .
in practice , we need to rely on the mixed measurements provided by both pmus and the conventional scada system to derive the state estimates . interestingly , our proposed algorithms can be easily extended to protect state estimation when pmus are used . note that the state variable of a tagged bus can not be compromised by attacks if a secured pmu is installed at the bus . this is equivalent to installing a secured flow meter between the tagged bus and the reference bus . if there exists no such power line connecting the two buses , a pseudo transmission line can be added to facilitate the calculation of the mmst problem . then , the proposed protection algorithms can be directly applied to solve the mmst problem . the only modification needed is that injection meters can not be mapped to a dashed edge in the steiner tree solution , because they do not measure the dashed edges in the real system . the modification can be easily made in the constraints on in the milp formulation ( [ 27 ] ) by defining if a dashed edge is made up by a pmu . for the tph , the pruning rules need slight modification due to the change of the mapping rule of injection meters in the presence of pmus . the details are omitted here to avoid repetition . we provide an illustrative example in fig . , where a graph is extracted from a -bus power network . bus is the reference bus and pmus are available at buses and . the solid edges are the actual transmission lines in the power network . the dashed edge connecting buses and is made up by the pmu at bus , where a pseudo - flow meter of random direction is placed on the edge . as discussed above , in any steiner tree solution , the injection meter at bus can not be mapped to the dashed edge made up by the pmu . since we have now formulated an equivalent problem with only power flows / injections as the measurements , the proposed tree construction algorithms in section iv can be directly applied . suppose that the state variable of bus is to be protected ; then a steiner tree can be constructed by edges , which are mapped to the pseudo - flow meter on edge ( from the pmu at bus ) and the flow meter on edge , respectively . then , bus can be defended if the pmu at bus and the flow meter on are protected . unlike the linear dc power flow model , the measurement functions of the ac power flow model consist of non - linear and coupled active and reactive power flow measurements . meanwhile , the voltage amplitudes are also considered as state variables in the ac power flow model . specifically , the active and reactive power flows on a power line connecting buses and are \[ \begin{aligned} p_{ij} & = v_i^2 \, g_{ij} - v_i v_j \left[ g_{ij} \cos( \theta_i - \theta_j ) + b_{ij} \sin( \theta_i - \theta_j ) \right] \\ q_{ij} & = - v_i^2 \, b_{ij} + v_i v_j \left[ b_{ij} \cos( \theta_i - \theta_j ) - g_{ij} \sin( \theta_i - \theta_j ) \right] , \end{aligned} \] where is the voltage amplitude at bus , and are the conductance and susceptance of the power line ( neglecting the shunt elements ) . besides , the injection measurements at a bus are merely the sum of the power flows on the incident branches . ac state estimation is commonly performed in an iterative manner using newton s method . a false - data injection attack on ac state estimation is much harder than on its dc counterpart . on one hand , both the active and reactive flow measurements need to be compromised . on the other hand , the attacker also needs to know the estimated values of the state variables to calculate the attack parameters . this basically requires the knowledge of all the real - time measurement readings .
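the key property exploited in the next paragraphs , namely that the branch flows in ( [ ac ] ) depend on the phase angles only through their difference when the voltage amplitudes are held fixed , can be checked in a few lines . the line parameters , voltage amplitudes and common angle shift in this sketch are hypothetical :

```python
# quick check: adding a common shift to both phase angles leaves the
# branch flows p_ij and q_ij unchanged. all values below are assumed.
import numpy as np

g_ij, b_ij = 4.0, -10.0       # assumed line conductance and susceptance
v_i, v_j = 1.02, 0.99         # assumed per-unit voltage amplitudes

def flows(theta_i, theta_j):
    d = theta_i - theta_j
    p = v_i ** 2 * g_ij - v_i * v_j * (g_ij * np.cos(d) + b_ij * np.sin(d))
    q = -(v_i ** 2) * b_ij + v_i * v_j * (b_ij * np.cos(d) - g_ij * np.sin(d))
    return p, q

delta = 0.03                  # common error added to both phase angles
print(flows(0.10, 0.06))
print(flows(0.10 + delta, 0.06 + delta))   # identical to the line above
```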
despite the apparent differences , we find that the proposed algorithms can still be applied to protect ac state estimation if the attackers only compromise the voltage phase angle variables , as in the dc model . in particular , the proposed methods remain both valid and optimal ( for the exact algorithms ) in protecting state variables in ac state estimation . from the attackers perspective , given constant ( assumed untouched by the attackers ) , we notice that the power flow measurements in ( [ ac ] ) are determined only by the phase angle differences , the same as in the dc power flow model . for example , suppose that an attacker wants to perform an undetectable attack to compromise the phase angle variable of bus in fig . [ pmu ] ( we assume the meters measure both active and reactive flows / injections and the pmus are removed ) , by introducing the same error to the phase angle variables of buses to . from ( [ ac ] ) , the attacker does not need to compromise the flow meters on and , or the injection meter at bus . however , it is necessary for the attacker to compromise the readings of the boundary meters , i.e. the flow meters on edges and , and the injection meter at bus . the compromised measurements will result in a biased estimate produced by the system operator s ac state estimator when the system is observable , i.e. the ac state estimation converges to a unique solution under any set of consistent measurements . in general , the attacker needs to find a cut that separates a tagged bus and the reference bus , introducing the same error to the subgraph that contains the bus and zero error to the buses in the other subgraph that contains the reference bus . then , the attacker only needs to compromise the meters , either flow or injection meters , which measure the buses on the boundary . conversely , if a minimum measured steiner tree is constructed by edges mapped to secured meter measurements from the reference to bus , no undetectable attack can be performed . this is because any attack formulated by a cut will require the attacker to compromise at least one secured meter measurement . therefore , our proposed method for the dc state estimation model remains valid and optimal in the ac state estimation model . however , if attackers also compromise voltage amplitude state variables , our methods may still be valid but no longer optimal . this is because the readings of the flow meters are now determined by the absolute values , rather than the differences , of the voltage amplitudes . more detailed analysis of ac state estimation protection will be considered as a future research direction . another interesting extension of the proposed algorithms is to achieve incremental protection . eventually , the system operator may want to protect all the state variables in the power system .
however , due to the temporarily limited budget and lengthy security installation time in a large - scale power network , we may only be able to install security devices on a set of meters to protect a subset of state variables first . later , we can extend the coverage to protect the other state variables given the already protected meters , until all state variables are protected . in fact , our proposed algorithms can be extended to achieve such incremental protection . the intuitive idea is to `` grow '' a new feasible tree on top of the existing feasible tree to reach more vertices to be protected . suppose that a set of state variables has been defended by protecting a set of meters . a feasible tree can therefore be constructed using the maximum - flow technique introduced in the appendix . by doing so , we also obtain the mapping between the measurements and edges . assume that we want to extend the coverage to defend another set of state variables , i.e. , given the protected meters . notice that the choice of can be made arbitrarily by the system operator . intuitively , we need to find the minimum number of edges , as well as the mapped meter measurements , to connect the vertices in to the current feasible tree . for the milp formulation in ( [ 27 ] ) , we can first add the constraints and for those edges and injection meters in the existing feasible tree . that is , if edge is included in ; if the injection meter at bus is mapped to the edge . then , a new minimum measured steiner tree ( mmst ) , as well as the new meter set to be protected , can be calculated using the optimization in ( [ 27 ] ) . this can be achieved by a simple replacement of with , i.e. delivering one unit of demand to each vertex in . similar calculations can be performed to defend , , , until all the state variables are protected . for the tph , we merely need to add several new policies to make sure that the mmst generated in the previous iteration to defend the variable set remains intact in the current iteration to defend another variable set . the detailed pruning policies are omitted here due to the scope of this paper . notice that the number of meters needed to protect all the state variables equals the number of state variables , i.e. the size of a basic measurement set as introduced in , since we always keep a feasible tree whose edges are one - to - one mapped to secured meter measurements . before leaving this section , we want to emphasize that all the proposed algorithms can be built on top of the existing state estimation application in ems / scada . this is because the proposed algorithms merely find a minimum set of meter measurements to be protected , without altering the algorithms of state estimation or bdd . besides , the calculation of the proposed algorithms can be done offline , independent of real - time measurements . in this section , we use simulations to evaluate the proposed defending mechanisms . all the computations are solved in matlab on a computer with an intel core2 duo -ghz cpu and gb of memory . in particular , the matlabbgl package is used to solve some of the graphical problems , such as maximum - flow calculation , etc . besides , gurobi is used to solve the milp problems . the power systems we considered are the ieee -bus , -bus and -bus testcases , whose topologies are obtained from matpower and summarized in table [ stat ] . all the systems are observable with the respective measurement placements . for illustration purposes , a measurement placement of the 14-bus system is plotted in fig .
the measurement placements for the 57-bus and 118-bus systems are omitted for simplicity of exposition . ( table [ stat ] : statistics of different power system testcases . ) ( table [ tphtable ] : average solution sizes of the tph and the milp . ) we also investigate the impact of the parameter on the performance of the tph . by varying the values of and , we show in table [ tphtable ] the average solution sizes of the tph and the milp . each entry of the table is the average performance of independent experiments . from the to the rows , we see that a better solution , i.e. a smaller , is obtained with a larger . compared with the optimal solution obtained by the milp , the tph protects on average only more meters when . the optimality gap is less than for all the cases . for better visualization , we plot the ratio for some selected s in fig . . we notice that the ratio improves notably for small as increases from to . for instance , the ratio improves from to for . the improvement is especially notable when we change to . however , the improvement becomes marginal as we further increase , such as the case with , where the ratio only improves by from to . we also plot in fig . the cpu time normalized against the time consumed when . we observe that the cpu time increases almost linearly with , which matches our analysis in section iv . results in fig . indicate that we should select a proper to achieve a balance between the quality of the approximate solution and the computational complexity . in particular , a large , such as , should be used when is small relative to , i.e. . otherwise , a small , such as , should be used when is relatively large . ( figure : impact of on the performance of the tph in the -bus system . ( a ) the solution size of the tph normalized by the optimal solution size obtained by the milp ; ( b ) the cpu time of the tph normalized by the cpu time when . ) in this paper , we used graphical methods to study defending mechanisms that protect a set of state variables from false - data injection attacks . by characterizing the optimal protection problem as a variant steiner tree problem , we proposed both exact and approximate algorithms to select the minimum number of measurements for system protection . the advantageous performance of the proposed defending mechanisms has been evaluated in ieee standard power system testcases . we use an example in fig . to illustrate the method to obtain a feasible spanning tree . we consider a basic measurement set of , where and . the set of edges measured by is . then , a directed graph is constructed in fig . , where is chosen as the root to construct the spanning tree . we select in advance an edge connected to the root , say , to appear in the final tree solution . this is achieved by setting both the lower and upper capacity bounds of the edge to be . the other edges lower and upper capacity bounds are set to be and , respectively . then , a maximum flow is calculated from to . if the problem is feasible , i.e. the flow solution in edge is , we obtain a measurement - to - edge mapping by observing the saturating flows in the graph . otherwise , we select another edge connected to the root and recalculate the maximum flow problem . since is observable from , there is always a solution . in the above example , the final measurement - to - edge mapping is . then , the edges obtained by the maximum flow calculation will form a tree that spans all vertices in . y. liu , p. ning and m.
reiter , `` false data injection attacks against state estimation in electric power grids , '' in _ proceedings of the acm conference on computer and communications security _ , chicago , illinois , 2009 , pp . 21 - 32 . a. teixeira , g. dan , h. sandberg and k. h. johansson , `` cyber security study of a scada energy management system : stealthy deception attacks on the state estimator , '' in _ ifac world congress _ , milan , italy , 2011 . s. cui , z. han , s. kar , t. t. kim , h. v. poor and a. tajer , `` coordinated data - injection attack and detection in the smart grid : a detailed look at enriching detection solutions , '' _ ieee signal processing magazine _ , vol . 29 , no . 5 , pp . 106 - 115 , sept . 2012 . `` security guidelines for the electricity sector : physical security - substations , '' [ online ] . available : http:.optellios.com/pdf/secguide-s.0 + .pdf , oct . 2004 . g. r. krumpholz , k. a. clements and p. w. davis , `` power system observability : a practical algorithm using network topology , '' _ ieee trans . on power apparatus and systems _ , vol . pas-99 , no . 4 , pp . 1534 - 1542 , july 1980 . a. g. expsito , a. abur , p. rousseaux , a. de la villa jan and c. g. quiles , `` on the use of pmus in power system state estimation , '' in _ proc . 17th power systems computation conference _ , pp . 22 - 26 , 2011 . suzhi bi ( s10-m14 ) received his ph.d . degree in information engineering from the chinese university of hong kong , hong kong , in 2013 . he received the b.eng . degree in communications engineering from zhejiang university , hangzhou , china , in 2009 . he is currently a research fellow in the department of electrical and computer engineering , national university of singapore , singapore . from june to august 2010 , he was a research engineer intern at the institute for infocomm research ( i2r ) , singapore . he was a visiting student in the edge lab of princeton university in the summer of 2012 . his current research interests include mimo signal processing , medium access control in wireless networks and smart power grid communications . he is a co - recipient of the best paper award of ieee smartgridcomm 2013 . ying jun ( angela ) zhang ( s00-m05-sm11 ) received her ph.d . degree in electrical and electronic engineering from the hong kong university of science and technology , hong kong , in 2004 . she received a b.eng . in electronic engineering from fudan university , shanghai , china , in 2000 . since 2005 , she has been with the department of information engineering , the chinese university of hong kong , where she is currently an associate professor . she was with the wireless communications and network science laboratory at massachusetts institute of technology ( mit ) during the summers of 2007 and 2009 . her current research topics include resource allocation , convex and non - convex optimization for wireless systems , stochastic optimization , cognitive networks , mimo systems , etc . zhang is an executive editor of ieee transactions on wireless communications , an associate editor of ieee transactions on communications , and an associate editor of the wiley security and communications networks journal . she was a guest editor of a feature topic in ieee communications magazine . she has served as a workshop chair of ieee iccc 2013 and 2014 , tpc vice - chair of the wireless communications track of ieee ccnc 2013 , tpc co - chair of the wireless communications symposium of ieee globecom 2012 , publication chair of ieee ttm 2011 , tpc co - chair of the communication theory symposium of ieee icc 2009 , track chair of icccn 2007 , and
publicity chair of ieee mass 2007 . she was a co - chair of the ieee comsoc multimedia communications technical committee , an ieee technical activity board gold representative , an ieee communication society gold coordinator , and a member of the ieee communication society member relations council ( mrc ) . she is a co - recipient of the 2011 ieee marconi prize paper award in wireless communications and of the best paper award of ieee smartgridcomm 2013 . as the only winner from engineering science , she won the hong kong young scientist award 2006 , conferred by the hong kong institution of science .
|
the normal operation of a power system relies on accurate state estimation that faithfully reflects the physical aspects of the electrical power grid . however , recent research shows that carefully synthesized false - data injection attacks can bypass the security system and introduce arbitrary errors to the state estimates . in this paper , we use graphical methods to study defending mechanisms against false - data injection attacks on power system state estimation . by securing a carefully selected set of meter measurements , we guarantee that no false - data injection attack can be launched to compromise any set of state variables . we characterize the optimal protection problem , which protects the state variables with the minimum number of measurements , as a variant of the steiner tree problem in a graph . based on this graphical characterization , we propose both exact and reduced - complexity approximation algorithms . in particular , we show that the proposed tree - pruning based approximation algorithm significantly reduces the computational complexity , while yielding negligible performance degradation compared with the optimal algorithms . the advantageous performance of the proposed defending mechanisms is verified on ieee standard power system test cases . false - data injection attack , power system state estimation , smart grid security , graph algorithms .
|
the inverse obstacle scattering problem is to image the scattering object , i.e. to find its shape and location , from the knowledge of the far - field pattern of the scattered wave . the medium is illuminated by light at a given direction and polarization . then , maxwell 's equations are used to model the propagation of the light through the medium ; see for an overview . due to the complexity of the combined system of equations for the electric and the magnetic fields , it is common to impose additional assumptions on the incident illumination and the nature of the scatterer . we consider a time - harmonic incident electromagnetic plane wave , which , due to the linearity of the problem , results in a time - independent system of equations . in addition , the penetrable object is considered to be an infinitely long homogeneous cylinder ; it is then characterized by constant permittivity and permeability . the problem is further simplified if we impose oblique incidence for the incident wave . the three - dimensional scattering problem modeled by maxwell 's equations is then equivalent to a pair of two - dimensional helmholtz equations for two scalar fields ( the third components of the electric and the magnetic fields ) . this approach reduces the difficulty of the problem but results in more complicated boundary conditions : the transmission conditions now also contain the tangential derivatives of the electric and magnetic fields . in , we showed that the corresponding direct problem is well - posed and constructed a unique solution using the direct integral equation method . a similar problem has been considered for an impedance cylinder embedded in a homogeneous , and in an inhomogeneous , medium . a numerical solution of the direct problem has also been proposed using the finite element method , the galerkin method , and the method of auxiliary sources . on the other hand , the inverse problem is non - linear and ill - posed . the non - linearity is due to the dependence of the solution of the scattering problem on the unknown boundary curve . the smoothness of the mapping from the boundary to the far - field pattern reflects the ill - posedness of the inverse problem . the unique solvability of the inverse problem is still an open problem . the first and , to our knowledge , only uniqueness result was presented recently in for the case of an impedance cylinder , using the lax - phillips method .
in this work , we solve the inverse problem by formulating an equivalent system of non - linear integral equations that is solved using a regularized iterative scheme . this method was introduced by kress and rundell and has since been considered in many different problems : in acoustic scattering , in elasticity , and in the electrical impedance problem . we propose an iterative scheme that is based on the idea of johansson and sleeman , originally applied to the inverse problem of recovering a perfectly conducting cylinder ; see for some recent applications . we assume integral representations for the solutions , which results in a system consisting of four integral equations on the unknown boundary ( enforcing the transmission conditions ) and one on the unit circle ( taking into account the asymptotic expansion of the solutions ) . we solve this system in two steps . first , given an initial guess for the boundary curve , we solve the well - posed subsystem ( the equations on the boundary ) to obtain the corresponding densities ; then we solve the linearized ( with respect to the boundary ) ill - posed far - field equation to update the initial approximation of the radial function . we consider tikhonov regularization , and the normal equations are solved by the conjugate gradient method . the paper is organized as follows : in section [ direct ] we present the direct scattering problem , the layer potentials , and the equivalent system of integral equations that provides us with the far - field data . the inverse problem is stated in section [ inverse ] , where we construct an equivalent system of integral equations using the indirect integral equation method . in section [ numerics ] , the two - step method for the parametrized form of the system and the necessary fréchet derivatives of the integral operators are presented . the numerical examples give satisfactory results and justify the applicability of the proposed iterative scheme . we consider the scattering of an electromagnetic wave by a penetrable cylinder in . let us denote the cylinder by , where is a bounded domain in with smooth boundary . the cylinder is oriented parallel to the -axis , and is its horizontal cross section . we assume constant permittivity and permeability for the exterior domain ; the interior domain is also characterized by constant parameters and . we define the exterior magnetic and electric fields for , and the interior fields and for , which satisfy maxwell 's equations and the transmission conditions , where is the outward normal vector , directed into . we illuminate the cylinder with an incident electromagnetic plane wave at oblique incidence , meaning a transverse magnetic ( tm ) polarized wave . we denote by the incident angle with respect to the negative axis and by the polar angle of the incident direction ( in spherical coordinates ) ; see figure [ fig1 ] . then , and the polarization vector is given by , satisfying and , assuming that . in the following , due to the linearity of the problem , we suppress the time - dependence of the fields , and because of the cylindrical symmetry of the medium we express the incident fields as separable functions of and . let be the frequency and the wave number in . we define and , and it follows that the incident fields can be decomposed to , where , after some calculations , we can reformulate maxwell 's equations as a system of equations only for the -component of the electric and magnetic fields .
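for concreteness , a standard form of this dimensional reduction reads as follows ; the notation ( $\beta$ the wavenumber component along the cylinder axis , $e_3 , h_3$ the third components of the fields , $j = 0 , 1$ indexing the exterior and interior media ) is assumed here , since the extracted text elides the original symbols .

```latex
% hedged sketch of the reduction to two 2-d helmholtz equations
E_3(x) = e_3(x_1,x_2)\, e^{i\beta x_3}, \qquad
H_3(x) = h_3(x_1,x_2)\, e^{i\beta x_3},
\qquad
\Delta e_3 + \kappa_j^2\, e_3 = 0, \quad
\Delta h_3 + \kappa_j^2\, h_3 = 0, \quad
\kappa_j^2 = k_j^2 - \beta^2, \quad j = 0, 1 .
```

the assumption referred to in the text ( `` we assume in order to have '' ) then plausibly corresponds to requiring $\kappa_j^2 > 0$ , so that both reduced wavenumbers are real .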
the interior fields and and the exterior fields and satisfy the helmholtz equations where here , we assume in order to have the transmission conditions can also be written only for the -component of the fields .let be a local coordinate system , where is the outward normal vector and the outward tangent vector on we define where and denote the unit vectors in then , we rewrite the boundary conditions as where for the exterior fields are decomposed to and where and denote the scattered electric and magnetic field , respectively . from wesee that to ensure that the scattered fields are outgoing , we impose in addition the radiation conditions in where uniformly over all directions .now we are in position to formulate the direct transmission problem for oblique incident wave : find the fields and that satisfy the helmholtz equations , the transmission conditions and the radiation conditions .[ theo32 ] if is not an interior dirichlet eigenvalue and is not an interior dirichlet and neumann eigenvalue , then the direct transmission problem admits a unique solution .the proof is based on the integral representation of the solution resulting to a fredholm type system of boundary integral equations . for more details see ( * ? ? ?* theorem 3.2 ) . in the following , counts for the exterior ( ) and interior domain ( ) , respectively .we introduce the single- and double - layer potentials defined by where is the fundamental solution of the helmholtz equation in and is the hankel function of the first kind and zero order .we define also the integral operators the following theorem was proven in .let the assumptions of theorem [ theo32 ] still hold .then , the potentials solve the direct transmission problem provided that the densities and satisfy the system of integral equations where and the rest of the densities satisfy and the solutions and of have the asymptotic behavior where the pair is called the far - field pattern corresponding to the scattering problem .its knowledge is essential for the inverse problem and using we can compute it by where is the unit ball .the far - field operators are given by where is the far - field of the green function given by inverse scattering problem , we address here , reads : find the shape and the position of the inclusion meaning reconstruct its boundary , given the far - field patterns for all for one or few incident fields . to solve the inverse problemwe apply the method of nonlinear boundary integral equations , which in our case results to a system of four integral equations on the unknown boundary and one on the unit circle where the far - field data are defined .this method was first introduced in and further considered in various inverse problems , see for instance . 
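as a numerical aside , the fundamental solution and its far - field kernel quoted above are straightforward to evaluate . the sketch below assumes the usual 2-d conventions ( colton – kress normalization ) , which may differ from the paper 's , and uses an illustrative trapezoidal rule that is only valid for evaluation points off the boundary ; on the boundary , the logarithmic singularity requires a dedicated quadrature such as kress 's .

```python
import numpy as np
from scipy.special import hankel1

def phi(k, x, y):
    # 2-d helmholtz fundamental solution (i/4) H_0^(1)(k |x - y|)
    return 0.25j * hankel1(0, k * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def phi_far(k, xhat, y):
    # far-field kernel of phi: e^{i pi/4} / sqrt(8 pi k) * exp(-i k xhat . y)
    return np.exp(1j * np.pi / 4) / np.sqrt(8 * np.pi * k) \
        * np.exp(-1j * k * np.dot(xhat, y))

def single_layer(k, dens, pts, jac, x):
    # naive trapezoidal quadrature of (S dens)(x) for x off the boundary;
    # pts: boundary points z(t_j), jac: |z'(t_j)| on a 2*pi-periodic grid
    h = 2 * np.pi / len(pts)
    return h * sum(phi(k, x, y) * d * j for y, d, j in zip(pts, dens, jac))
```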
since the direct problem was solved with the direct method ( green s formulas ) , in order to obtain our numerical data , here we adopt a different approach based on the indirect integral equation method , using simple representations for the fields .we assume a double - layer representation for the interior fields and a single - layer representation for the exterior fields .thus , we set substituting the above representations in the transmission conditions and considering the well - known jump relations , we get the system of integral equations in addition , given the far - field operators and the representations of the exterior fields we see that the unknown boundary and the densities and satisfy also the far - field equations [ far_inverse ] where the right - hand sides are the known far - field patterns from the direct problem . the equation in matrix form reads where 0 & -\dfrac1{2\tilde\mu_1 } & 0 & 0 \\ 0 & 0 & \dfrac{\omega}2 & -\dfrac{\beta_1}{2\tilde\epsilon_1}\partial_\tau \\[10pt ] 0 & 0 & 0 & -\dfrac1{2\tilde\epsilon_1 } \end{pmatrix } , & \textbf k & = \begin{pmatrix } -\omega ns_0 & - \dfrac{\beta_1}{\tilde\mu_1 } td_1 & \dfrac{\beta_0}{\tilde\mu_0 } ts_0 & \omega nd_1 \\[10pt ] 0 & \dfrac1{\tilde\mu_1 } d_1 & -\dfrac1{\tilde\mu_0 } s_0 & 0 \\ -\dfrac{\beta_0}{\tilde\epsilon_0 } ts_0 & \omega nd_1 & -\omega ns_0 & \dfrac{\beta_1}{\tilde\epsilon_1 } td_1 \\[10pt ] -\dfrac1{\tilde\epsilon_0 } s_0 & 0 & 0 & \dfrac1{\tilde\epsilon_1 } d_1 \end{pmatrix } , \\ \bm\phi & = \begin{pmatrix }\psi_e \\ \phi_h \\ \psi_h \\\phi_e \end{pmatrix } , & \textbf b & = \begin{pmatrix } \tilde\epsilon_0 \omega \partial_n \\ 0 \\ \beta_0 \partial_\tau \\ 1 \end{pmatrix } e^{inc}_3 .\end{aligned}\ ] ] the matrix due to its special form and the boundness of has a bounded inverse given by 0 & -2\tilde\mu_1 & 0 & 0 \\[6pt ] 0 & 0 & \dfrac2{\omega } & -\dfrac{2\beta_1}{\omega}\partial_\tau \\[10pt ] 0 & 0 & 0 & -2\tilde\epsilon_1 \end{pmatrix } .\ ] ] then , equation takes the form where now is the identity matrix and using the mapping properties of the integral operators , we see that the operator is compact .we observe that we have six equations and for the five unknowns : and the four densities .thus , we consider the linear combination + as a replacement for the far - field equations in order to state the following theorem as a formulation of the inverse problem. given the incident field and the far - field patterns for all if the boundary and the densities and satisfy the system of equations [ final_system ] then , solves the inverse problem .the integral operators in are linear with respect to the densities but non - linear with respect to the unknown boundary the smoothness of the kernels in the far - field equation reflects the ill - posedness of the inverse problem . to solve the above system of equations , we consider the method first introduced in and then applied in different problems , see for instance .more precisely , given an initial approximation for the boundary , we solve the subsystem - for the densities and then , keeping the densities and fixed we linearize the far - field equation with respect to the boundary .the linearized equation is solved to obtain the update for the boundary .the linearization is performed using frchet derivatives of the operators and we also regularize the ill - posed last equation . 
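a minimal sketch of this regularized update step — assuming the jacobian of the far - field operator with respect to the radial coefficients has already been assembled as a matrix , and with an unspecified penalty matrix standing in for the paper 's penalty term — could look as follows ; all variable names are illustrative .

```python
import numpy as np
from scipy.sparse.linalg import cg

def tikhonov_update(J, residual, lam, L=None):
    # solve min ||J dq - residual||^2 + lam ||L dq||^2 via the normal
    # equations with conjugate gradients; complex far-field data are
    # handled by stacking real and imaginary parts
    Jr = np.vstack([J.real, J.imag])
    br = np.concatenate([residual.real, residual.imag])
    n = Jr.shape[1]
    L = np.eye(n) if L is None else L
    A = Jr.T @ Jr + lam * (L.T @ L)
    dq, info = cg(A, Jr.T @ br)
    if info != 0:
        raise RuntimeError("cg did not converge")
    return dq  # update of the radial coefficients
```

in practice , the regularization parameter lam would be decreased over the outer iterations , in the spirit of the decreasing initial regularization parameter used in the numerical examples below .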
to present the proposed method in details , we consider the following parametrization for the boundary \},\ ] ] where is a -smooth , -periodic , injective in meaning that for all . ] as the initial regularization parameter .we present reconstructions for different boundary curves , different number of incident directions and initial guesses for exact and perturbed far - field data . in all figuresthe initial guess is a circle with radius a green solid line , the exact curve is represented by a dashed red line and the reconstructed by a solid blue line .the arrows denote the directions of the incoming incident fields . in the first three examples we consider the peanut - shaped boundary . in the first example , the regularized equation is solved with penalty term , meaning and coefficients .we solve equation for different incident directions .the reconstructions for and are presented in figure [ fig1b ] for two incident fields with directions on the left picture , we see the reconstructed curve for exact data and 9 iterations and on the right picture for noisy data with and 14 iterations . in the second example , we consider equation , four incident fields , noisy data and we keep all the parameters as before .the reconstructions for and 14 iterations are shown in the left picture of figure [ fig2 ] , and for and 20 iterations in the right one .we set and ( penalty term ) in the third example .the results for and four incident fields are shown in figure [ fig3 ] . here and we use equation .we need 26 iterations for the exact data and 30 iterations for the noisy data ( ) .in the last two examples we consider the apple - shaped boundary , penalty term , and coefficients . in the fourth example , we consider equation , noise - free data and four incident fields in order to examine the dependence of the iterative scheme on the initial radial guess . on the left picture of figure [ fig4 ] , we see the reconstructed curve for after 13 iterations and on the right picture for after 20 iterations . in the last example we consider noise and figure [ fig5 ] shows the improvement of the reconstruction for more incident fields . on the left picturewe see the results for three incident fields , equation and 7 iterations and the reconstructed curve for 4 incident fields , equation and 15 iterations is shown on the right picture .our examples show the feasibility of the proposed iterative scheme and the stability against noisy data . considering morethan one incident field improves considerably the reconstructions .the choice of the initial guess is also crucial .tsitsas , e.g. alivizatos , h.t . anastassiu and d.i .kaklamani , _ optimization of the method of auxiliary sources ( mas ) for oblique incidence scattering by an infinite dielectric cylinder_ , electrical engineering 89 ( 2007 ) , pp . 353361 .
|
in this work , we consider the method of non - linear boundary integral equations for numerically solving the inverse scattering problem of obliquely incident electromagnetic waves by a penetrable homogeneous cylinder in three dimensions . we consider the indirect method and simple representations for the electric and the magnetic fields in order to derive a system of five integral equations : four on the boundary of the cylinder and one on the unit circle , where we measure the far - field pattern of the scattered wave . we solve the system iteratively by linearizing only the far - field equation . numerical results illustrate the feasibility of the proposed scheme . * keywords * inverse electromagnetic scattering , oblique incidence , integral equation method
|
in the process specification language ( psl ) , the ordering of event ( activity ) occurrences is modelled using occurrence trees , which are restricted forms of partial orders . although partial orders can adequately model the `` earlier than '' relationship , they can not _ explicitly _ model the `` not later than '' relationship . for instance , if an event is performed `` not later than '' an event , then this `` not later than '' relationship can be modelled by the following set of two step sequences , where the _ step _ models the simultaneous performance of and . but the set can not be represented by any partial order . to provide a unified framework for analyzing `` earlier than '' and `` not later than '' relationships , we proposed to interpret the _ generalized stratified order structure _ ( _ gso - structure _ ) theory within psl . the gso - structure theory originated from causal partial order theory and _ stratified order structure _ ( _ so - structure _ ) theory . an so - structure is a triple , where and are binary relations on . so - structures were invented to model both the `` earlier than '' ( the relation ) and the `` not later than '' ( the relation ) relationships , under the assumption that all _ system runs _ ( also called _ observations _ ) are modelled by _ stratified orders _ , i.e. , step sequences . they have been successfully applied to model inhibitor and priority systems , asynchronous races , synthesis problems , etc . ( see for example and others ) . however , so - structures can adequately model concurrent histories only when the paradigm of is satisfied . this paradigm says that if two event occurrences are observed in both orders of execution , then they will also be observed executing simultaneously . without this assumption , we need gso - structures , which were introduced and analyzed in . the comprehensive theory of gso - structures has been developed in . a gso - structure is a triple , where and are binary relations on modelling the `` never simultaneously '' and `` not later than '' relationships , respectively , under the assumption that all system runs are modelled by stratified orders . intuitively , gso - structures can model even the situation where we have a mixture of `` true concurrency '' and interleaving semantics . the only disadvantage is that gso - structures are more complex to conceptualize than so - structures . since the works of janicki et al . focus on the algebraic properties of gso - structures , the number of axioms is kept minimal and some of the assumptions are made implicit . furthermore , the theorems of gso - structure theory frequently involve quantifying over relations , which requires the use of a higher - order language . hence , to apply first - order ontology and model - theoretic techniques in the manner of , we will first define a formal ontology for gso - structures in first - order logic and characterize all possible models of the gso - structure theory up to isomorphism . after that , we can proceed to investigate to what extent the theorems of gso - structure theory hold within the first - order setting of psl by studying possible ontological mappings from gso - structure theory to psl . the organization of this paper is as follows . in section 2 , we give a first - order axiomatization of the gso - structure theory and end the section with a result showing that our theory is consistent . in section 3 , we classify all possible models of the gso - structure theory from section 2 using more natural and intuitive concepts from graph theory .
in section 4 , we study a semantic mapping from our theory to psl - core theory .section 5 contains our concluding remarks .the following table provides a summary of the lexicon of so - structure theory .the relations , and in the papers of janicki et al . correspond to the relations , and respectively in this paper .we rename these relations to make the theory more intuitive and accessible . [ cols="<,<,<",options="header " , ] everything is either an _ event _ , _ event occurrence _ or _ observation _ : the sets of events , event occurrences and observations are pair - wise disjoint . the _ occurrence _ relation only holds between events and event occurrences . event occurrence is an occurrence of some event . event occurrence is an occurrence of a unique event .we now axiomatize the gso - structure , which describes the _ specification level _ of a concurrent system .the relations of gso - structure are , and .the relation can be defined as the intersection of the latter two , yet is added because it helps to make our axioms shorter and more intuitive .+ we have to make sure that the field of the relations , and consists of only event occurrences . the relation is irreflexive and symmetric . the relation is the intersection of the and the relations . relation is irreflexive . the and relations satisfy some weak form of transitivity . + the following propositions are helpful in understanding the relations of a gso - structure .the first proposition basically says that the relation is a partial order . [ prop : gso1 ] the irreflexivity property follows from axioms and .the transitivity property follows from axioms and .the second proposition shows the intuition that if two event occurrences must happen not later than each other , then they must occur simultaneously . [ prop : gso2 ] we assume for a contradiction that there are some observations and such that then since , it follows from axiom that .but since is symmetric ( axiom ) , we also have .thus we have and , which by proposition [ prop : gso1 ] implies . butthis contradicts with proposition [ prop : gso1 ] , which says that the relation is irreflexive .the third proposition shows the intuition that if the first event happens earlier than the second event , then it is not the case that the second event happens not later than the first event . [ prop : gso3 ] we assume for a contradiction that there are some observations and such that then by the axiom , we have .thus , and , which by proposition [ prop : gso1 ] implies .but this contradicts with proposition [ prop : gso1 ] , which says that the relation is irreflexive .assume the set of all possible event occurrences is .the following is an example of a gso - structure , where 1 .the relation is represented by a directed _ acyclic _ graph : \ar@{.>}@/^1pc/[rr]\ar@{.>}@/^/[drrr]\ar@{.>}@/^1pc/[ddrr ] & & o_5 & \\ o_1\ar[dr]\ar[ur]\ar@{.>}[rr]\ar@{.>}@/_5pc/[drrr]\ar@{.>}@/^5pc/[urrr]\ar@{.>}@/^1pc/[rrrr ] & & o_4\ar[dr]\ar[ur]\ar[rr ] & & o_7\\ & o_3\ar[ur]\ar@{.>}@/_1pc/[rr]\ar@{.>}@/_/[urrr]\ar@{.>}@/_1pc/[uurr ] & & o_6 & \\ & & & & } \ ] ] note that in this diagram , we used the solid edges to denote the edges of the _ _ transitive reduction _ _ on a set is a minimal relation on such that the transitive closure of is the same as the transitive closure of .] 
of the relation .the relation is represented as the following directed graph : \ar@{.>}@/^1pc/[rr]\ar@{.>}@/^/[drrr]\ar@{.>}@/^1pc/[ddrr ] & & o_5\ar@/^/@{-->}[dr]\ar@/^/@{-->}[dd ] & \\ o_1\ar[dr]\ar[ur]\ar@{.>}[rr]\ar@{.>}@/_5pc/[drrr]\ar@{.>}@/^5pc/[urrr]\ar@{.>}@/^1pc/[rrrr ] & & o_4\ar[dr]\ar[ur]\ar[rr ] & & o_7\ar@/^2pc/@{-->}[dl]\\ & o_3\ar[ur]\ar@{.>}@/_1pc/[rr]\ar@{.>}@/_/[urrr]\ar@{.>}@/_1pc/[uurr ] & & o_6\ar@/_/@{-->}[ur ] & \\ & & & & } \ ] ] + note that we used the dashed edges to denote the edges of which are not in .the relation is represented by the following ( undirected ) graph ( because is symmetric ) .+ \ar@{-}[dd]\ar@{-}@/^1pc/[rr]\ar@{-}@/^/[drrr]\ar@{-}@/^1pc/[ddrr ] & & o_5 & \\ o_1\ar@{-}[dr]\ar@{-}[ur]\ar@{-}[rr]\ar@{-}@/_5pc/[drrr]\ar@{-}@/^5pc/[urrr]\ar@{-}@/^1pc/[rrrr ] & & o_4\ar@{-}[dr]\ar@{-}[ur]\ar@{-}[rr ] & & o_7\\ & o_3\ar@{-}[ur]\ar@{-}@/_1pc/[rr]\ar@{-}@/_/[urrr]\ar@{-}@/_1pc/[uurr ] & & o_6 & \\ & & & & } \ ] ] + note that except the edge , all other edges of are exactly the edges of the _ comparability graph _ of the relation . because of the quantity of edges the comparability graph has , it is often more practical to draw the _ complement graph _ of the graph induced by the relation .for example , the complement graph of the graph is the following : \ar@{-}[dd ] & \\o_1 & & o_4 & & o_7\\ & o_3 & & o_6\ar@{-}[ur ] & } \ ] ] + [ ex : gso1 ] if the relations of a gso - structure in the previous section describe the specification level ( also called structural semantics ) of a concurrent system , observations characterize _ behavioral level _ of the system . the ( or ) relation relates two event occurrences and an observation . each observation and the relation specify a stratified order on the event occurrences as follows .every event occurrence can not be observed before itself with respect to any observation . the is transitive with respect to any observation . the relation and can be derived from each other . the relation on a fixed observation satisfies the stratified order property . observation and the relation specify a _stratified order extension _ of the gso - structure . axioms and impose the _ observation soundness _ property of our gso - structure theory in the following sense : if is an possible observation of the system , then it must satisfy the constraints specified by the relations of the gso - structure .we next axiomatize the _ observation completeness _ property of our gso - structure theory .if and are simultaneous event occurrences , then there must be some observation , where and are observed simultaneously . if it is not the case that the event occurrence is not later than the event occurrence , then there will be some observation , where is observed earlier than . + the reason why stratified orders are used to encode observations can be explained formally in the next two propositions .+ for any observation , we define : for all event occurrences , and , we have 1 . 2 . 3 . in other words , the relation is an equivalence relation .[ prop : strat1 ] 1. follows from how is defined .2 . follows from axiom and how is defined .3 . 
follows from axiom and how is defined .the intuition of proposition [ prop : strat1 ] is that for any fixed observation , we can extend the relation with the identity relation to construct the equivalence relation .the relation can then be used to partition the set of event occurrences , where we can think of each equivalence class as a `` composite event occurrence '' consisting of only atomic event occurrences that are pairwise observed _ simultaneously _ within .for example , fig .[ fig : observation ] shows a stratified order induced by an observation and the relation . in this case ,the equivalence classes of are the sets , , , and , where the fact that and belong to the same equivalence class means they are observed simultaneously within . } ! { ( 0,0.5)}*+{o_1}="o1 " ! { ( 0,-0.5)}*+{o_2}="o2 " ! { ( 1,0)}*+{o_3}="o3 " ! { ( 2,1)}*+{o_4}="o4 " ! { ( 2,0)}*+{o_5}="o5 " ! { ( 2,-1)}*+{o_6}="o6 " ! { ( 3,0.5)}*+{o_7}="o7 " ! { ( 3,-0.5)}*+{o_8}="o8 " ! { ( 4,0.5)}*+{o_9}="o9 " ! { ( 4,-0.5)}*+{o_{10}}="o10 " " o1":"o3 " " o2":"o3 " " o3":"o4 " " o3":"o5 " " o3":"o6 " " o4":"o7 " " o5":"o7 " " o6":"o7 " " o4":"o8 " " o5":"o8 " " o6":"o8 " " o7":"o9 " " o7":"o10 " " o8":"o9 " " o8":"o10 " } \ ] ] [ fig : observation ] if and are two distinct equivalence classes of , then either or .[ prop : strat2 ] we pick and .clearly , or , otherwise which contradicts that , are elements from two distinct equivalence classes .there are two cases : 1 .if : we want to show .let and , it suffices to show . assume for contradiction that .since , it follows that .there are three different subcases : 1 . if , then and .hence , .this contradicts that .2 . if , then and . hence , .this contradicts that .3 . if and , then and and and . since , either or .* if : since , it follows .this contradicts . *if : since , it follows .this contradicts .+ therefore , we conclude .if : using a symmetric argument , it follows that .proposition [ prop : strat2 ] leads to the following consequence .for any observation , let us define the relation on the set {\simeq_o}:{\mathsf{event\_occurrence}}(a)\}}$ ] as then the relation is a _ strict total order _ on . intuitively , the equivalence classes in can always be totally ordered using , where for any two equivalence classes and in , if , then all event occurrences in are observed before all the event occurrences in within the observation . 
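this peeling of a finite stratified order into its steps is easy to make executable ; a small sketch follows , where the input conventions are assumptions and not the paper 's notation .

```python
def steps(occurrences, earlier):
    # decompose a finite stratified order into its step sequence;
    # earlier(a, b) is the strict "observed before" predicate.
    # for a stratified order, the minimal elements remaining at each
    # stage form exactly one simultaneity class (one step).
    remaining = set(occurrences)
    seq = []
    while remaining:
        step = {x for x in remaining
                if not any(earlier(y, x) for y in remaining)}
        seq.append(step)
        remaining -= step
    return seq

# applied to the order of fig. [fig:observation], this yields
# [{o1, o2}, {o3}, {o4, o5, o6}, {o7, o8}, {o9, o10}]
```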
for examples , the equivalence classes of the stratified order from fig .[ fig : observation ] can be totally ordered by the ordering as follows : when the cardinality of the set of event occurrences is finite as in our example , the stratified order from fig .[ fig : observation ] can be equivalently represented more compactly as where each equivalence class is called a _step _ and the whole sequence is called a _ step sequence_.it might seem counterintuitive that our axioms allow observations whose infinitely many event occurrences are observed simultaneously .however , this is just a limitation of first order theory .since our theory allows models that observe arbitrarily large finite set of simultaneous event occurrences , by the compactness theorem there will be models whose observations will allow us to observe infinite set of simultaneous event occurrences .+ we have just discussed the idea behind why stratified orders are used to formalize the notion of an observation .we next want to show the intuition of how stratified order based observations satisfy the observation soundness properties with respect to a gso - structure .we will do so using a detailed example .given the set of event occurrences and the relations , and from example [ ex : gso1 ] , we want to know possible observations of this gso - structure . by axioms and for observationsoundness , we know that all of the observations must satisfy all the causality constraints specified by these three relations . for each observation ,we let denote the dag representing the stratified order . 1 . the observation satisfies the relation intuitively meaning that must contain , i.e. , .2 . the observation satisfies the relation _ roughly _ which means that might or might not contains the edges of , where denotes the graph difference of and .the exception is when contains both directed edges and , then neither nor is allowed to be included in .3 . finally satisfies the relation is equivalent to saying that if , but neither nor is in the graph , then we have the case that either or must be included in . from these intuitions ,if , and are given an interpretation as in example [ ex : gso1 ] , then we notice the follows . * since and ,if we consider only the set of event occurrences , then the transitive reduction graphs of all of the possible ways they can be observed are : & o_2\ar[r ] & o_3\ar[r ] & o_4 } \ ] ] + & o_3\ar[r ] & o_2\ar[r ] & o_4 } \ ] ] * since , if we consider only the set of event occurrences , then the transitive reduction graphs of all of the possible ways they can be observed are : \ar[ur]\ar[r]&o_7 & & o_4\ar[r ] & o_5\ar[dr]\ar[ur ] & \\ & o_6 & & & & o_7 } \ ] ] note that because , the vertices are disconnected ( incomparable ) in all of the possible observations . combining all of these cases together ,the transitive reduction graphs of all possible observations which satisfy the observation soundness condition with respect to the gso - structure from example [ ex : gso1 ] are depicted in fig .[ fig : stratext1 ] . 1 . & o_2\ar[r ] & o_3\ar[r ] & o_4\ar[dr]\ar[ur]\ar[r]&o_7 & \\ & & & & o_6 & } \ ] ] 2 . & o_3\ar[r ] & o_2\ar[r ] & o_4\ar[dr]\ar[ur]\ar[r]&o_7 & \\ & & & & o_6 & } \ ] ] 3 . & o_2\ar[r ] & o_3\ar[r ] & o_4\ar[r ] & o_5\ar[dr]\ar[ur ] & \\ & & & & & o_7 } \ ] ] 4 . & o_3\ar[r ] & o_2\ar[r ] & o_4\ar[r ] & o_5\ar[dr]\ar[ur ] & \\ & & & & & o_7 } \ ] ] [ ex : stratext1 ] one subtle question one might ask is if the observation completeness condition is too strong for every gso - structure to have . 
in other words , is there any model of our theory , where its gso - structure can not be characterized by any set of stratified order observations ?fortunately , the theorem which we will discuss next will help us answer this question . before stating the theorem ,let us define some notations .for a partial order on a set , let us define the following theorem can be seen as a generalization of szpilrain s theorem . if szpilrajn s theorem ensures that every partial order can be uniquely reconstructed from the set of all of its total order extensions , then the following theorem states that every gso - structure can be uniquely reconstructed from its stratified order extensions .let be a gso - structure , i.e. , satisfies all axioms from to .let be the set of all stratified orders on satisfying the following stratified order extension conditions : 1 . and 2 . . then we have and .[ theo : gsoext ] from this theorem , we know that there is always a subset of , where we can uniquely reconstruct and .note that although the consequence of the theorem does not mention , the axiom implies that thus , hence , observation completeness is a safe assumption for our gso - structure theory .it is worth noticing that , since theorem [ theo : gsoext ] is a generalization of szpilrajn s theorem , the proof of theorem [ theo : gsoext ] requires _ the axiom of choice_. let , , and be the stratified orders whose transitive reduction graphs are depicted in cases ( a ) , ( b ) , ( c ) and ( d ) respectively. then the set of all the stratified order extensions of the gso - structure from example [ ex : gso1 ] is .however , the gso - structure from example [ ex : gso1 ] can be uniquely reconstructed from any subset of , which is a superset of at least one of the following two sets and .for example , let us consider the set .then the relations and can be represented as the following two graphs ( some arcs which can be inferred from transitivity are omitted for simplicity ) : \ar[dr]&\\ o_1\ar[r ] & o_2\ar[r ] & o_3\ar[r ] & o_4\ar[dr]\ar[ur ] & & o_7\ar@/^1pc/[dl]\ar@/_1pc/[ul]\\ & & & & o_6\ar@/_/[uu]\ar[ur ] & } \ ] ] \\ o_1\ar[r ] & o_3\ar[r ] & o_2\ar[r ] & o_4\ar[r ] & o_5\ar[dr]\ar[ur ] & \\ & & & & & o_7\ar@/_/[uu ] } \ ] ] it is easy to check that the graph is exactly the intersection of these two graphs .it is also easy to check that the graph is the intersection of the comparability graphs induced by the relations and . [ ex : stratext2 ] let denote our gso - structure theory , which consists of axioms from to. then we have the following theorem .the theory is consistent .[ theo : consistent ] it suffices to build a model that satisfies all of these axioms .let , and be three _ pairwise disjoint _ sets , where we define the universe of to be the set .we then give the following interpretations 1 . 2 . 3 . 4 . 5 . is exactly the graph from example [ ex : gso1 ] 6 . is exactly the graph from example [ ex : gso1 ] 7 . is exactly the graph from example [ ex : gso1 ] 8 . , where and are relations from example [ ex : stratext2 ] . 9 . 
, where is the following relation \ar@/_/[dr ] & \\ o_6\ar@/_/[rr]\ar@/_/[ur ] & & o_7\ar@/_/[ll]\ar@/_/[ul ] } \ ] ] and is the following relation &&o_7\ar@/_/[ll ] } \ ] ] it is easy to check that axioms to are satisfied by this interpretation .we also see from example [ ex : gso1 ] how the interpretation of , and given by , and respectively satisfies that axioms from to .it is also clear from example [ ex : stratext1 ] and example [ ex : stratext2 ] that our interpretation satisfies axioms from to .by theorem [ theo : consistent ] , we already know that is consistent , and hence the class of all models satisfying is nonempty . in this section , we will attempt to classify all the possible models of our theory . for convenience ,we let denote the theory consisting of axioms from to , and we let denote the specification - level theory consisting of axioms from to. the following definition will give us the classification of all models of .let denote the class of all possible models for .then any model consists of the following sets , , and such that 1 .the universe of is 2 . , , and are pairwise disjoint 3 . is a partitioning of the set . 5 . 6 . 7 . [ def : class1 ] the correctness of our definition follows from the following theorem .if the class is defined as in definition [ def : class1 ] , then for any model , we have .[ theo : sat1 ] the fact that satisfies axioms and follows from the condition that , , and are pairwise disjoint .the fact that satisfies axioms and follows from our construction that is a partitioning of the set and the interpretation of as the membership relation between and .any model of is isomorphic to a structure of .[ theo : axiom1 ] let be a model of .we will show that satisfies the conditions of the structures in from definition [ def : class1 ] . since , we know that any element of the universe of belongs to one of the following sets , and . since , all of these sets , and pairwise disjoint .hence , the conditions ( 1 ) , ( 2 ) , ( 4)(6 ) are satisfied . since satisfied axioms , we know that a function hence , given the set , we can define the set as since is a function , it can be easily checked that defines a partitioning of .thus , the condition ( 3 ) and ( 7 ) are also satisfied .we will classify the relational models of in a more well - understood combinatorial setting .but before that we will recall some definitions . a directed graph is a pair , where is the set of vertices and is the set of edges. * the transitive closure of is a graph such that for all in there is an edge in if and only if there is a nonempty path from to in . *the graph is called a _ transitive graph _ if we have . in other words, is its own transitive - closure taken away all the self - loops .* we let denote the _ comparability graph _ of , i.e. , * we let denote the _ incomparability graph _ of , i.e. , * we let denote the _ complement graph _ of , i.e. , in other words , we exclude the self - loops . *given a directed graph , we write if . we write to denote the graph .and we write to denote the graph . 
in this paper , we will treat undirected graphs ( or graphs ) as a special case of directed graph , where the edge relations are symmetric .this explains why we defined and as direct graphs .also note that whenever we call something a graph or a directed graph , we already mean that it does not contain any self - loop .+ let denote the class of all possible models for .then any model can be uniquely determined from the following three graphs : 1 .the graph is a _acyclic _ transitive graph .the graph is a transitive graph satisfying the following two conditions : 1 . , where .the graph does not contain a triangle that has any of these two forms : & \\ \bullet\ar@{-->}[rr]\ar[ur ] & & \bullet } \mbox{\quad\quad}\xymatrix { & \bullet\ar[dr ] & \\ \bullet\ar@{-->}[rr]\ar@{-->}[ur ] & & \bullet } \ ] ] where the solid edges are edges of and the dashed edges are edges of .the graph is an undirected graph such that there is an undirected graph and .the interpretation for can be defined as : * the universe of is a superset of * * * * . [ def : class2 ] if the class is defined as in definition [ def : class2 ] , then for any model , we have .[ theo : sat2 ] since , and are exactly the edge relations of , and respectively , it follows that satisfies axioms . since and is a graph , it follows that satisfies axioms and .recall that we define and . hence , to show that , it suffices to show the following lemma .* .[ lem : l1 ] + ( ) from definition [ def : class2 ] , we know that and .hence , it follows that .but we know that . + ( ) it suffices to show that and .but we know that since from condition ( 3 ) of definition [ def : class2 ] , we have .this also implies that that .it remains to show that , but this holds since from condition ( 2)(a ) of definition [ def : class2 ] we have .since is a transitive graph , it follows that satisfies axioms and .it remains to show that .then since , there are three cases to consider : * if and , then it follows that since is a transitive graph . *if and , where is the set of edges of , then since is a transitive graph , we know that .suppose for a contradiction that , then we have a triangle & \\ o_1\ar@{-->}[rr]\ar@{-->}[ur ] & & o_3 } \ ] ] this is a contradiction .* the case of and is similar to the previous case .any model of is isomorphic to a structure of .[ theo : axiom2 ] let be a model of .we will show that satisfies the conditions of the structures in from definition [ def : class2 ] . since satisfies axioms and , we know that we can determine the vertex set for the graphs , and . since satisfies all axioms , from proposition [prop : gso1 ] we know that is a strict partial order , so it can be represented by an acyclic transitive graph as from the condition ( 1 ) of definition [ def : class2 ] .since satisfies axioms and , we can represent the relation by a transitive graph as from the condition ( 2 ) of definition [ def : class2 ] . * to show that the condition ( 2)(a ) is satisfied , we must show that .suppose for a contradiction that there is an edge that appears on both and .since , we know that , so .this would mean that and .but this contradicts with proposition [ prop : gso3 ] . 
* to show that the condition ( 2)(b ) is satisfied , we assume for a contradiction that we have at least one of the following two triangles : & \\ u\ar@{-->}[rr]\ar[ur ] & & w } \mbox{\quad\quad}\xymatrix { & v\ar[dr ] & \\ u\ar@{-->}[rr]\ar@{-->}[ur ] & & w } \ ] ] where the solid edges are edges of and the dashed edges are edges of .the left triangle implies that and but .this contradicts with axiom .similarly the case of the right triangle also leads to a contradiction .since satisfies axioms and , we can represent by a graph as from the condition ( 3 ) of definition [ def : class2 ] .let , it remains to show that .suppose for a contradiction that an edge and is shared by both the graph and .without loss of generality , we can assume that .thus , and .but by axiom , we have that .this contradicts with our assumption that .we first introduce a more combinatorial representation of stratified orders .given a set , we call the pair a _ ranking structure _ of if is a partitioning of the set and is a total ordering on the set . intuitively , a ranking structure of is just a partitioning of equipped with a total ordering which orders the partitions in .any stratified order on a set can be uniquely determined by a _ranking structure _ of . similarly to the ideas from proposition[ prop : strat1 ] and proposition [ prop : strat2 ] , we define an equivalence relation from the stratified order as follows : then let be the set of all partitions of with respect to this equivalence relation .next we define the relation as . then , similarly to proposition [ prop : strat2 ] , we can check that is a total ordering .+ to recover the stratified order from the ranking structure , we simply reconstruct for a set , we let denote the complete graph induced by . in other words , and for each ranking structure of a set , we have two kinds of graph associated with it : intuitively , the graph is simply the transitive graph of the stratified order encoded by . andthe graph is exactly the graph , but in this case it is more intuitive to characterize it as the union of complete graphs .+ putting everything together we have the following characterization of the class of all models of .let denote the class of all possible models for .then any model is uniquely determined from * the sets , , and * the graphs , and * a family of ranking structures on indexed by the set , i.e. , , such that 1 .all conditions from definition [ def : class1 ] are satisfied 2 .all conditions from definition [ def : class2 ] are satisfied 3 .the graph is the intersection of all the graphs in the set 4 .the graph is the intersection of all the graphs in the set 5 . 6 . [ def : class3 ] if the class is defined as in definition [ def : class3 ] , then for any model , we have .[ theo : sat3 ] the fact that satisfies axioms and follows from the theorem [ theo : sat1 ] .the fact that satisfies axioms and follows from the theorem [ theo : sat2 ] . 
since each is a ranking structure on , from the way and are defined , we know that satisfies axioms and .since is defined from the graphs and each graph is the incomparability graph of , it follows that .also since we construct the relation from the graphs and each is a stratified order .hence , satisfies axioms , and since these axioms are the conditions saying that is a stratified order for every and we have .recall axioms - together say that but this is equivalent to conditions ( 2 ) and ( 3 ) from definition [ def : class3 ] .any model of is isomorphic to a structure of .[ theo : axiom3 ] let be a model of .we will show that satisfies the conditions of the structures in from definition [ def : class2 ] .since satisfies axioms , from theorem [ theo : axiom1 ] we can determine the sets and the set , which satisfied the condition ( 1 ) of definition [ def : class3 ] .since satisfies axioms , from theorem [ theo : axiom2 ] we can determine the graphs , and such that the condition ( 2 ) of definition [ def : class3 ] is satisfied .let . then since satisfies axioms , we know that for all the induced relation is a stratified order , so we can uniquely construct the family of ranking structure from the set .it is easy to check that the condition ( 5 ) and ( 6 ) of definition [ def : class3 ] are satisfied .but since we already know that axioms - together are equivalent to conditions and from the proof of theorem [ theo : sat3 ] , it follows that satisfies conditions ( 2 ) and ( 3 ) of definition [ def : class3 ] .in this section , we will attempt to map a subset of to the psl - core theory ( ) . we let to denote the theory consisting of axioms from to and the following two axioms . axiom says that everything is either an _ event occurrence _ or an _ observation_. and axiom says that the set of event occurrences and the set of observations are disjoint .the reason for considering the theory is that all of the interesting properties of concern with event occurrences and not with the events themselves .the second reason is that beside weakening the theory , we do not see how we can establish a semantic mapping from to without introducing extra axioms into .+ to shorten our formulas , we need the following notation . for any formula define in other words , we write to say that there exists a unique satisfying .we let denote the relative interpretation of the language of into .then the interpretation is defined as follows : [ def : rel ] intuitively , the interpretation means the following .if in each observation is a `` system run '' , encoded by a stratified order of the event occurrences , which is observed by some implicit observer , then in we explicitly describe this observer as an object . for our interpretation , we are particularly interested in objects that participate in a unique activity occurrence of each activity at a unique time point . 
in other words , observers are objects satisfying the following properties : 1 . the time point at which an object participates in an activity occurrence of an activity is exactly the time when the object observes the activity . 2 . the object observes every activity . 3 . the object observes each activity exactly once . all of the other interpretations , and can be easily determined from the observations that all observers observed . the interpretation defined in definition [ def : rel ] is correct . it is easy to check that , under the interpretation , every axiom of is a theorem of . hence , defined in definition [ def : rel ] is a correct interpretation . in this paper , we proposed , to our knowledge , the first version of a first - order theory for gso - structures in . we avoid the difficulty of not being able to quantify over relations in first - order logic by introducing the relations and , which take an observation as one of their parameters . using the model - theoretic ontological techniques introduced in , we classified all possible models of , where our key results are the satisfiability theorem and the axiomatizability theorem for . in our opinion , the classification of the models of , which decomposes the , and into smaller graphs , is especially insightful for understanding these three relations . although the classification of observations using ranking structures is quite artificial , we could not find any simpler characterization . we also give a very intuitive interpretation of the weaker theory into , which shows that is strong enough to prove most of the theorems in . the main philosophical difference between and is that causality relations are treated as logical relations , without mentioning the concept of time , in , while the causality relations in are directly connected to timepoints of a reference timeline . the fact that can be correctly interpreted inside also suggests that the soundness and completeness conditions might be too restrictive . one way to relax these conditions is to partition the observation set into `` legal '' and `` illegal '' observations , where the legal observations are the ones satisfying the soundness and completeness conditions . this approach would also give us the ability to talk about illegal observations . g. juhás , r. lorenz , s. mauser , `` synchronous + concurrent + sequential = earlier than + not later than , '' in _ proc . of acsd '06 _ ( application of concurrency to system design ) , turku , finland , 2006 , pp . 261 - 272 , ieee press .
|
in this paper , we propose a first - order ontology for generalized stratified order structures . we then classify the models of the theory , up to isomorphism , using model - theoretic techniques . a mapping from this ontology to the core theory of the process specification language is also discussed .
|
energy harvesting communications offer the promise of energy self - sufficient , energy self - sustaining operation for wireless networks with significantly prolonged lifetimes . energy harvesting communications have been considered mostly for energy harvesting _ transmitters _ , e.g. , , with fewer works on energy harvesting _ receivers _ , e.g. , . in this paper , we consider energy harvesting communications with both energy harvesting transmitters and receivers . the energy harvested at the transmitters is used for data transmission according to a rate - power relationship , which is concave and monotone increasing in the powers . the energy harvested at the receivers is used for decoding costs , which we assume to be convex and monotone increasing in the incoming rate . the transmission energy costs and the receiver decoding costs could be comparable , especially in short - distance communications , where high rates can be achieved with relatively low powers and the decoding power could be dominant ; see and the references therein . we model the energy needed for decoding at the receivers via _ decoding causality _ constraints : the energy spent at the receiver for decoding cannot exceed the receiver 's harvested energy . we already have the _ energy causality _ constraints at the transmitter : the energy spent at the transmitter for transmitting data cannot exceed the transmitter 's harvested energy . therefore , for a given transmitter - receiver pair , the transmitter powers now need to adapt to both the energy harvested at the transmitter and that harvested at the receiver ; the transmitter must only use powers , and therefore rates , that can be handled / decoded by the receiver . the most closely related work to ours is , where the authors consider a general network with energy harvesting transmitters and receivers , and maximize a general utility function subject to energy harvesting constraints at all terminals . reference carries the effects of the decoding costs to the objective function . if the objective function is no longer concave after this operation , it uses time - sharing to concavify it , leading to a convex optimization problem , which it then solves using a generalized water - filling algorithm . in this paper , we consider a similar problem with a specific utility function , namely throughput , for specific network structures , with decoding costs informed by network information theory . first , we consider the single - user channel , and observe that the decoding costs at the receiver can be interpreted as a _ gate keeper _ at the front - end of the receiver that lets packets pass only if the receiver has sufficient energy to decode them . we show that we can carry this _ gate _ effect to the transmitter as a _ generalized data arrival constraint _ .
therefore , the setting with decoding costs at the receiver is equivalent to a setting with no decoding costs at the receiver , but with a ( generalized ) data arrival constraint at the transmitter . we also note that the energy harvesting component of the receiver can be separated as a _ virtual relay _ between the transmitter and the receiver ; again , the problem can be viewed as a setting with no decoding costs at the receiver but with a _ virtual relay _ with a ( generalized ) energy arrival constraint . we then consider several multi - user settings . we begin with a decode - and - forward two - hop network , where the relay and the receiver both have decoding costs . this gives rise to _ decode - and - forward causality _ constraints at the relay , in addition to decoding causality constraints at the receiver and energy causality constraints at the transmitter . we decompose the problem into inner and outer problems . in the inner problem , we fix the relay 's decoding power strategy and show that _ separable _ policies are optimal . these are policies that maximize the throughput of the transmitter - relay link independently of maximizing the throughput of the relay - destination link . thereby , we solve the inner problem as two single - user problems with decoding costs . in the outer problem , we find the best relay decoding strategy by a water - filling algorithm . next , we consider a two - user multiple access channel ( mac ) with energy harvesting transmitters and receiver , and maximize the departure region . we consider two different decoding schemes : simultaneous decoding and successive cancellation decoding . each scheme has a different decoding power consumption . for the simultaneous decoding scheme , we show that the boundary of the maximum departure region is achieved by solving a weighted sum rate maximization problem that can be decomposed into an inner and an outer problem . we solve the inner problem using the results of the single - user fading problem . the outer problem is then solved using a water - filling algorithm . in the successive cancellation decoding scheme , our problem formulation is non - convex . we then use a successive convex approximation technique that converges to a locally optimal solution . the maximum departure region with successive cancellation decoding is larger than that with simultaneous decoding . finally , we characterize the maximum departure region of a two - user degraded broadcast channel ( bc ) with energy harvesting transmitter and receivers . with the transmitter employing superposition coding , a corresponding decoding power consumption at the receivers is assumed . we again decompose the weighted sum rate maximization problem into an inner and an outer problem . we show that the inner problem is equivalent to a classical single - user energy harvesting problem with a time - varying _ minimum power constraint _ , for which we present an algorithm . we solve the outer problem using a water - filling algorithm similar to the outer problems of the two - hop network and the mac with simultaneous decoding . as shown in fig . [ fig_p2p_sys ] , we have a transmitter and a receiver , both relying on energy harvested from nature . the time is slotted , and at the beginning of time slot , energies arrive at a given node , ready to be used in the same slot or saved in a battery to be used in future slots . let and denote the energies harvested at each slot for the transmitter and the receiver , respectively , and let denote the transmitter 's powers .
Without loss of generality, we assume that the time slot duration is normalized to one time unit. The physical layer is a Gaussian channel with zero-mean unit-variance noise. The objective is to maximize the total amount of data received _and decoded_ by the receiver by the deadline $N$. Our setting is _offline_ in the sense that all energy amounts are known prior to transmission. The receiver must be able to decode each packet by the end of its slot. A transmitter transmitting with power $p_i$ in the $i$th time slot sends at rate $r_i=\frac{1}{2}\log(1+p_i)$, for which the receiver spends $\phi(r_i)$ amount of power to decode, where $\phi$ is in general an increasing convex function. In the sequel, we will also focus on the specific cases of linear and exponential functions, $\phi(r)=ar$, with $a>0$, and $\phi(r)=c\left(2^{dr}-1\right)$, with $c>0$ and $d>0$. Continuing with a general convex increasing $\phi$, we have the following decoding causality constraints for the receiver (eq_p2p_rx_battery): $\sum_{j=1}^{i}\phi(r_j)\le\sum_{j=1}^{i}\bar{E}_j$, $1\le i\le N$. Therefore, the overall problem is formulated as:
$$\max_{\mathbf{p}\ge 0}\ \sum_{i=1}^{N}\tfrac{1}{2}\log(1+p_i)\quad\text{s.t.}\quad \sum_{j=1}^{i}p_j\le\sum_{j=1}^{i}E_j,\quad \sum_{j=1}^{i}\phi\!\left(\tfrac{1}{2}\log(1+p_j)\right)\le\sum_{j=1}^{i}\bar{E}_j,\quad 1\le i\le N,$$
where $\mathbf{p}$ denotes the vector of powers. Note that the problem above is in general not a convex optimization problem, as (eq_p2p_rx_battery) is in general a non-convex constraint: $\phi$ is a convex function while the rate $\frac{1}{2}\log(1+p)$ is a concave function of the power. Applying the change of variables $r_i=\frac{1}{2}\log(1+p_i)$, and defining $h(r)=2^{2r}-1$ (note that $h$ is a convex function), we have
$$\max_{\mathbf{r}\ge 0}\ \sum_{i=1}^{N}r_i\quad\text{s.t.}\quad \sum_{j=1}^{i}h(r_j)\le\sum_{j=1}^{i}E_j,\quad \sum_{j=1}^{i}\phi(r_j)\le\sum_{j=1}^{i}\bar{E}_j,\quad 1\le i\le N,$$
which is now a convex optimization problem (eq_su_opt). We note that the constraints in (eq_p2p_rx_battery), i.e., $\sum_{j=1}^{i}\phi(r_j)\le\sum_{j=1}^{i}\bar{E}_j$, place upper bounds on the total rate the transmitter can deliver by every slot. This resembles the previously studied problem with data packet arrivals during the communication session. In fact, when $\phi(r)=r$ and $\bar{E}_i=b_i$, where $b_i$ is the amount of data arriving in slot $i$, these are exactly the data arrival constraints of that setting; a general convex $\phi$ generalizes this data arrival constraint. We characterize the solution of (eq_su_opt) in the following three lemmas and the theorem. The proofs rely on the convexity of $h$ and $\phi$, generalizing the proof ideas in the prior literature. Lemma [thm_p2p_inc]: The optimal rate sequence is monotonically increasing. Proof: Assume that there exists a time slot $i$ such that $r_i>r_{i+1}$, consider a new policy obtained by replacing both $r_i$ and $r_{i+1}$ by their average $\tilde{r}=\frac{r_i+r_{i+1}}{2}$, and observe that, from the convexity of $h$ and $\phi$, we have $2h(\tilde{r})\le h(r_i)+h(r_{i+1})$ and $2\phi(\tilde{r})\le\phi(r_i)+\phi(r_{i+1})$. In addition, since both $h$ and $\phi$ are monotonically increasing, we have $h(\tilde{r})\le h(r_i)$ and $\phi(\tilde{r})\le\phi(r_i)$. Therefore, the new policy is feasible, and can only save some energy, either at the transmitter or at the receiver. This saved energy can be used to increase the rates in the upcoming time slots; thus, the original policy cannot be optimal. Lemma [thm_p2p_consume]: In the optimal policy, whenever the rate changes in a time slot, at least one of the following events occurs: 1) the transmitter has consumed all of its harvested energy in transmission, or 2) the receiver has consumed all of its harvested energy in decoding, up to that time slot. Proof: Assume not, i.e., $r_{i+1}>r_i$ but neither the transmitter nor the receiver has consumed all of its energy by the $i$th time slot. Then, we can always increase $r_i$ and decrease $r_{i+1}$ without violating the energy causality or the decoding causality constraints. By the convexity of $h$ and $\phi$, this modification saves some energy that can be used to increase the rates in the upcoming time slots. Therefore, the original policy cannot be optimal.
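Before proceeding to further structural properties, note that the convex reformulation above can be handed directly to an off-the-shelf solver. Below is a minimal sketch; the energy profiles and the exponential decoding-cost parameters are assumptions for illustration, and an exponential-cone-capable solver is assumed to be available to cvxpy.

```python
import cvxpy as cp
import numpy as np

E    = np.array([4.0, 2.0, 6.0, 1.0, 3.0])    # assumed transmitter harvests
Ebar = np.array([3.0, 3.0, 2.0, 4.0, 2.0])    # assumed receiver harvests
N, c, d = len(E), 0.5, 2.0                    # assumed decoding-cost parameters

r = cp.Variable(N, nonneg=True)               # rates, after the change of variables
h   = cp.exp(2 * np.log(2) * r) - 1           # h(r) = 2^{2r} - 1: power for rate r
phi = c * (cp.exp(d * np.log(2) * r) - 1)     # phi(r) = c*(2^{dr} - 1): decoding power

constraints = [cp.cumsum(h)   <= np.cumsum(E),     # energy causality (transmitter)
               cp.cumsum(phi) <= np.cumsum(Ebar)]  # decoding causality (receiver)
cp.Problem(cp.Maximize(cp.sum(r)), constraints).solve()
print(np.round(r.value, 3))
```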
In the optimal policy, by the end of the transmission period, at least one of the following events occurs: 1) the transmitter's total power consumption in transmission equals its total harvested energy, or 2) the receiver's total power consumption in decoding equals its total harvested energy. Proof: Assume that neither condition is met. Then, we can increase the rate in the last time slot until either the transmitter or the receiver consumes all of its energy. This is always feasible and strictly increases the throughput. Theorem [thm_p2p_sol]: Let $i_0=0$. A policy is optimal iff it satisfies the following:
$$r_n=\min\left\{h^{-1}\!\left(\frac{\sum_{j=i_{n-1}+1}^{i_n}E_j}{i_n-i_{n-1}}\right),\ \phi^{-1}\!\left(\frac{\sum_{j=i_{n-1}+1}^{i_n}\bar{E}_j}{i_n-i_{n-1}}\right)\right\},\qquad\text{(eq_p2p_sol1)}$$
$$i_n=\arg\min_{i_{n-1}<i\le N}\ \min\left\{h^{-1}\!\left(\frac{\sum_{j=i_{n-1}+1}^{i}E_j}{i-i_{n-1}}\right),\ \phi^{-1}\!\left(\frac{\sum_{j=i_{n-1}+1}^{i}\bar{E}_j}{i-i_{n-1}}\right)\right\},\qquad\text{(eq_p2p_sol2)}$$
with the rate equal to $r_n$ over the stretch of slots $i_{n-1}+1,\dots,i_n$. Proof: First, we prove by contradiction that the optimal policy satisfies (eq_p2p_sol1) and (eq_p2p_sol2). Assume that the optimal policy, which satisfies the necessary lemmas above, is not given by (eq_p2p_sol1) and (eq_p2p_sol2) and achieves a higher throughput. In particular, assume that it coincides with the policy given by (eq_p2p_sol1) and (eq_p2p_sol2) for all stretches before the $n$th but takes a different value $\tilde{r}_n\ne r_n$ there, and denote the points of rate increase of this policy by $\tilde{i}_1,\tilde{i}_2,\dots$. Let us consider two different cases. Assume first that $\tilde{r}_n>r_n$. If the transmitter's energy is the bottleneck at $i_n$, then $\tilde{r}_n$ cannot be supported by the transmitter; on the other hand, if the receiver's energy is the bottleneck at $i_n$, then $\tilde{r}_n$ cannot be supported by the receiver. Hence, $\tilde{r}_n$ is not feasible in either case. Now, assume that $\tilde{r}_n<r_n$. Then the new policy falls behind over some duration and, by the monotonicity property, all upcoming rates can only be larger than $\tilde{r}_n$, and in fact larger than $r_n$, in order to compensate. This makes the new policy infeasible by the end of the corresponding slot, since $r_n$ already consumes all feasible energy according to (eq_p2p_sol1) and (eq_p2p_sol2). Thus, the original policy is optimal. Theorem [thm_p2p_sol] shows that decoding costs at the receiver are similar in effect to a single-user channel with data arrivals during transmission and no decoding costs. This stems from the fact that the transmitter has to adapt its powers (and rates) in order to meet the decoding requirements at the receiver. Therefore, the receiver's harvested energies and the function $\phi$ control the amount of data the transmitter can send by any given point in time. Alternatively, we can slightly change the single-user problem (eq_su_opt) by adding an extra rate variable $\tilde{r}_i$, interpreted as the rate leaving a relay, together with the constraint that the cumulative relayed data never exceed the cumulative transmitted data. This gives the same solution, as we will always have $\tilde{r}_i=r_i$ satisfied for all $i$. Therefore, as shown in Fig. [fig_p2p_virtual], we can view the single-user setting with an energy harvesting receiver as a two-hop setting with a _virtual relay_ between the transmitter and the receiver, and with a non-energy-harvesting receiver.
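A minimal sketch of the forward recursion suggested by (eq_p2p_sol1)-(eq_p2p_sol2) follows; it reuses the assumed $h$ and $\phi$ of the previous sketch, whose closed-form inverses appear below (the data and function names are illustrative assumptions).

```python
import numpy as np

def hinv(p):                      # inverse of h(r) = 2^{2r} - 1
    return 0.5 * np.log2(1.0 + p)

def phiinv(q, c=0.5, d=2.0):      # inverse of phi(r) = c*(2^{dr} - 1)
    return np.log2(1.0 + q / c) / d

def optimal_rates(E, Ebar):
    """Largest constant-rate stretches, in the spirit of (eq_p2p_sol1)-(eq_p2p_sol2)."""
    N, rates, start = len(E), [], 0
    while start < N:
        best_r, best_j = None, start
        for j in range(start, N):             # candidate stretch ends
            L = j - start + 1
            r = min(hinv(sum(E[start:j + 1]) / L),
                    phiinv(sum(Ebar[start:j + 1]) / L))
            if best_r is None or r <= best_r: # argmin; ties go to the longer stretch
                best_r, best_j = r, j
        rates += [best_r] * (best_j - start + 1)
        start = best_j + 1
    return np.array(rates)

E    = np.array([4.0, 2.0, 6.0, 1.0, 3.0])
Ebar = np.array([3.0, 3.0, 2.0, 4.0, 2.0])
print(np.round(optimal_rates(E, Ebar), 3))    # monotonically increasing rates
```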
To this end, we separate out the decoding costs of the receiver, which are subject to energy harvesting constraints, as a relay that is subject to energy harvesting constraints in its transmissions, and consider the receiver itself as fully powered. The receiver then only receives data if the relay has sufficient energy to forward it. In addition, this energy harvesting virtual relay has no data buffer; thus, its incoming data rate equals its outgoing data rate. The rate through this relay is controlled by the receiver's energies $\bar{E}_i$ and the function $\phi$. Thus, the decoding function puts a _generalized energy arrival effect_ on this virtual relay, in a similar way that it puts a _generalized data arrival effect_ on the transmitter through Theorem [thm_p2p_sol], as shown in Fig. [fig_p2p_sys]. It is worth mentioning that in the special case where the receiver has no battery to store its energy, the decoding causality constraint becomes the per-slot constraint $\phi(r_i)\le\bar{E}_i$, which, in view of the generalized data arrival interpretation, can be modeled as a time-varying upper bound on the transmitter's power in each slot, $p_i\le\bar{p}_i$, where $\bar{r}_i=\phi^{-1}(\bar{E}_i)$ is the maximum transmission rate of a packet that the decoder can handle in slot $i$, and $\bar{p}_i=h(\bar{r}_i)$ denotes its corresponding maximum transmit power. This problem has been considered in a general framework in prior work, and, for the special case of a constant maximum power constraint, in the earlier energy harvesting literature. One solution for this problem is to apply a backward water-filling algorithm that starts from the last slot and moves backwards, where at each slot directional water-filling is applied only over slots whose maximum power constraint is not satisfied with equality. This might cause some wastage of water if the maximum power constraints are tighter than the transmitter's energy causality constraints, which depends primarily on how the function $\phi$ relates the transmitter's and the receiver's energies. We now consider a two-hop network consisting of a single source-destination pair communicating through a relay, as depicted in Fig. [fig_twohop]. The relay is full duplex, and it uses a decode-and-forward protocol. The relay has a data buffer to store its incoming packets from the source. At the beginning of slot $i$, energies in the amounts of $E_i$, $\bar{E}_i$, and $\tilde{E}_i$ arrive at the source, relay, and destination, respectively; unused energies can be saved in their respective batteries. Let $r_i$ and $\tilde{r}_i$ be the rates of the source and the relay, respectively, in slot $i$. Our goal is to maximize the total amount of data received _and decoded_ at the destination by the deadline. We impose decoding costs on both the relay and the destination. The problem is formulated in (eq_2h_opt), where the first constraint is the source transmission energy causality constraint, the second is the relay decode-and-forward causality constraint, the third is the data causality constraint at the relay, and the last is the destination decoding causality constraint. We first note that if the relay did not have a data buffer, the source and the relay rates would need to be equal, i.e., $r_i=\tilde{r}_i$ for all $i$. In this case, the problem reduces to a problem only in terms of the source rates, and can be solved by a straightforward generalization of the single-user result in Theorem [thm_p2p_sol], considering three constraints instead of two (see the sketch following this paragraph).
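For the buffer-less case just mentioned, the stretch rate of the previous sketch simply becomes the minimum over three inverse budgets rather than two. The split of the relay's consumption into decoding plus forwarding below is an assumption made for illustration.

```python
import numpy as np

def hinv(p):   return 0.5 * np.log2(1.0 + p)           # inverse of 2^{2r} - 1
def phiinv(q): return np.log2(1.0 + q / 0.5) / 2.0     # inverse of 0.5*(2^{2r} - 1)

def dfinv(e, lo=0.0, hi=20.0):
    # Numerically invert r -> phi(r) + h(r), the relay's assumed per-slot
    # decode-and-forward consumption, by bisection.
    for _ in range(60):
        mid = (lo + hi) / 2
        cost = 0.5 * (2 ** (2 * mid) - 1) + (2 ** (2 * mid) - 1)
        lo, hi = (mid, hi) if cost < e else (lo, mid)
    return lo

def stretch_rate(Es, Er, Ed):
    L = len(Es)
    return min(hinv(sum(Es) / L),      # source energy causality
               dfinv(sum(Er) / L),     # relay decode-and-forward causality
               phiinv(sum(Ed) / L))    # destination decoding causality

print(round(stretch_rate([4.0, 2.0], [5.0, 3.0], [3.0, 3.0]), 3))
```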
In a sense, this would be equivalent to carrying the effects of decode-and-forward causality at the relay and decoding causality at the receiver back to the source as two different generalized data arrival effects. This can be further extended to multi-hop networks with relays having no data buffers, by carrying their constraint effects all the way back to the source. In our setting, having a data buffer at the relay imposes non-obvious relationships between the source and the relay rates. To tackle this issue, we decompose the problem into inner and outer problems. In the inner problem, we solve for the source and relay rates after fixing a decoding power strategy for the relay node. By that we mean choosing the amounts of power $\delta_i$, $i=1,\dots,N$, that the relay dedicates to decoding its incoming source packets. These amounts need to be feasible in the sense that $\sum_{j=1}^{i}\delta_j\le\sum_{j=1}^{i}\bar{E}_j$ for all $i$. This decomposes the decode-and-forward causality constraint into the following two constraints: $\sum_{j=1}^{i}\phi(r_j)\le\sum_{j=1}^{i}\delta_j$ and $\sum_{j=1}^{i}h(\tilde{r}_j)\le\sum_{j=1}^{i}\left(\bar{E}_j-\delta_j\right)$ for all $i$. In the next lemmas and theorem, we characterize the solution of the inner problem; the proofs of the lemmas extend those in the prior literature to the case of generalized data arrivals. Lemma [thm_2h_inc]: There exists an optimal increasing source rate policy for the inner problem. Proof: Assume that there exists a time slot $i$ where $r_i>r_{i+1}$. We have two cases to consider. First, assume $\tilde{r}_i\ge\tilde{r}_{i+1}$. Let us define a new policy by replacing the $i$th and $(i{+}1)$th source and relay rates by $\frac{r_i+r_{i+1}}{2}$ and $\frac{\tilde{r}_i+\tilde{r}_{i+1}}{2}$, respectively. By the convexity of $h$ and $\phi$, and the linearity of the data causality constraint, the new policy is feasible, and can only save some energy at the source or the relay. This energy can be used in later slots to achieve higher rates. Now, assume $\tilde{r}_i<\tilde{r}_{i+1}$. We argue that the data causality constraint is then satisfied with strict inequality at time slot $i$: if it were satisfied with equality, the relay's cumulative outgoing data would equal the source's cumulative data at slot $i$ while the relay's rate strictly increases into slot $i+1$, requiring the relay to forward in slot $i+1$ data it has not yet received, an obvious contradiction. Now, we can find a small enough $\epsilon>0$ such that, defining a new policy by replacing the $i$th and $(i{+}1)$th source rates by $r_i-\epsilon$ and $r_{i+1}+\epsilon$, respectively, we do not affect the relay rates. By the convexity of $h$ and $\phi$, the new policy is feasible, and can only save some energy at the source. This energy can be used in later slots to send more data to the relay, and hence possibly increase the relay rates, and the end-to-end throughput. Lemma [thm_2h_sep]: The optimal increasing source rate policy for the inner problem is given by the single-user problem solution in (eq_p2p_sol1) and (eq_p2p_sol2), where the transmitter's and the receiver's energies are given by $E_i$ and $\delta_i$, respectively. Proof: Let us denote the single-user solution by $r_i^{*}$, and assume for contradiction that it is not optimal for the inner problem. In particular, let $r_i$ and $r_i^{*}$ be equal for $i<k$ and differ at the $k$th slot. We again have two cases to consider. First, assume $r_k>r_k^{*}$. In this case, since by Lemma [thm_2h_inc] the policy is increasing, by arguments similar to those in the proof of Theorem [thm_p2p_sol], the policy will eventually fail to satisfy the source's energy causality or the relay's decoding causality constraints at some time slot; hence, it cannot be optimal. Now, assume $r_k<r_k^{*}$. We argue that this shrinks the feasible set of the relay's rates, and we show it by induction. By the assumption of this case, it holds at time slot $k$ that $\sum_{j\le k}r_j<\sum_{j\le k}r_j^{*}$. Now, assume that for some time slot $i\ge k$ we have $\sum_{j\le i}r_j\le\sum_{j\le i}r_j^{*}$, and consider the time slot $i+1$. If $r_{i+1}>r_{i+1}^{*}$, then we are back in the previous case, which eventually cannot be feasible.
Therefore, the feasible set of the relay's rates shrinks at time slot $i+1$, and hence shrinks throughout; thus, this case cannot be optimal either. Lemma [thm_2h_sep] states that the optimal source policy is separable in the sense that the source maximizes its throughput to the relay irrespective of how the relay spends its transmission energy. This stems from the fact that the relay has an infinite data buffer to store its incoming source packets. Therefore, once we fix a decoding power strategy at the relay, we get separability. The following theorem, an extended version of Theorem [thm_p2p_sol], gives the optimal relay rates for the inner problem; the proof is similar to that of Theorem [thm_p2p_sol] and is omitted for brevity. Given the optimal source rates, the optimal relay rates for the inner problem are given by the expression in (eq_2h_sol), whose stretch structure is as in (eq_p2p_sol1)-(eq_p2p_sol2), with the relay's transmission energies $\bar{E}_i-\delta_i$ and the data received from the source acting as a data arrival constraint. Denoting the optimal value of the inner problem by $T(\boldsymbol{\delta})$, we now find the optimal relay decoding strategy by solving the following outer problem: maximize $T(\boldsymbol{\delta})$ subject to $\sum_{j=1}^{i}\delta_j\le\sum_{j=1}^{i}\bar{E}_j$ for all $i$, with $\boldsymbol{\delta}\ge 0$. We have the following lemma regarding the outer problem. Lemma: $T$ is a concave function. Proof: Consider two decoding power strategies $\boldsymbol{\delta}^{1}$, $\boldsymbol{\delta}^{2}$, and let $(\mathbf{r}^{1},\tilde{\mathbf{r}}^{1})$, $(\mathbf{r}^{2},\tilde{\mathbf{r}}^{2})$ be their corresponding optimal inner-problem source and relay rates, respectively. Let $\boldsymbol{\delta}=\theta\boldsymbol{\delta}^{1}+(1-\theta)\boldsymbol{\delta}^{2}$ for some $\theta\in[0,1]$, and consider the rate policy defined by $\theta\mathbf{r}^{1}+(1-\theta)\mathbf{r}^{2}$ and $\theta\tilde{\mathbf{r}}^{1}+(1-\theta)\tilde{\mathbf{r}}^{2}$ for the source and the relay, respectively. By the convexity of $h$ and $\phi$, this policy is feasible for the decoding strategy $\boldsymbol{\delta}$. Therefore, we have $T(\boldsymbol{\delta})\ge\theta T(\boldsymbol{\delta}^{1})+(1-\theta)T(\boldsymbol{\delta}^{2})$, proving the concavity of $T$. Therefore, the outer problem is a convex optimization problem. We propose a water-filling algorithm to solve it. We first note that $T$ does not possess any monotonicity properties over the feasible region: for instance, $T$ vanishes when the relay dedicates either none or all of its harvested energy to decoding, while it is strictly positive for some strategies in between. Thus, at the optimal relay decoding power strategy, not all of the relay's energy will be exhausted on decoding. To this end, we add an extra slot where we can possibly discard some energy. We start by filling up each slot with its corresponding energy/water level, and we leave the extra slot initially empty. Meters are put between bins to measure the amount of water passing. We let water flow to the right only if this increases the objective function; after each iteration, water can be called back if this increases the objective function. All of the water in the extra slot is eventually discarded, but it may also be called back during the iterations. Since the objective function monotonically increases with each water flow, problem feasibility is maintained throughout the process, and, due to the convexity of the problem, the algorithm converges to the optimal solution. We now consider a two-user Gaussian MAC, as shown in Fig. [fig_mac_sys]. The two transmitters harvest energy in amounts $E_{1i}$ and $E_{2i}$, respectively, and the receiver harvests energy in amounts $\bar{E}_i$. The receiver noise is zero-mean with unit variance. The capacity region for this channel is given by $r_1\le\frac{1}{2}\log(1+p_1)$, $r_2\le\frac{1}{2}\log(1+p_2)$, and $r_1+r_2\le\frac{1}{2}\log(1+p_1+p_2)$, where $p_1$ and $p_2$ are the powers used by the first and the second transmitter, respectively. In addition to the usual energy harvesting causality constraints on the transmitters, we impose a receiver decoding cost. We note that this constraint can be imposed in different ways depending on how the receiver carries out the decoding procedure. In the next two subsections, we consider two kinds of decoding procedures, namely, simultaneous decoding and successive decoding.
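A minimal sketch of the directional water-filling primitive used by the outer problems of this and the following sections is given below. It only implements the forward-flow equalization step; the meters, call-backs, and the extra discard slot described above are omitted, so it should be read as an illustrative building block rather than the full algorithm.

```python
import numpy as np

def directional_water_fill(levels, iters=500):
    """Equalize water levels subject to causality: water may only flow from a
    bin to later bins (to the right), never back."""
    w = np.array(levels, dtype=float)
    for _ in range(iters):
        moved = False
        for i in range(len(w) - 1):
            if w[i] > w[i + 1] + 1e-12:       # a right flow raises the objective
                avg = (w[i] + w[i + 1]) / 2
                w[i], w[i + 1] = avg, avg
                moved = True
        if not moved:
            break
    return w

print(np.round(directional_water_fill([4.0, 1.0, 3.0, 0.5]), 3))
```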
Changing the decoding model affects the optimal power allocation for both users, so as to adapt to how the receiver spends its power. In the simultaneous decoding case, the two transmitters can only send at rates whose sum can be decoded at the receiver. A power control policy is feasible if the following are satisfied for all $i$ (eq_mac_feasibility): $\sum_{j=1}^{i}p_{1j}\le\sum_{j=1}^{i}E_{1j}$, $\sum_{j=1}^{i}p_{2j}\le\sum_{j=1}^{i}E_{2j}$, and $\sum_{j=1}^{i}\phi(r_{1j}+r_{2j})\le\sum_{j=1}^{i}\bar{E}_j$. From here on, we assume a specific structure for the decoding function, for mathematical tractability and ease of presentation. In particular, we assume that it is exponential with parameters $c$ and $d$, i.e., $\phi(r)=c\left(2^{dr}-1\right)$. Let $B_k$ denote the total bits departed by user $k$ by time slot $N$. Our aim is to characterize the _maximum departure region_ $\mathcal{D}(N)$, which is the region of pairs $(B_1,B_2)$ the transmitters can depart by time slot $N$ through a feasible policy. The following lemmas characterize this region. The maximum departure region $\mathcal{D}(N)$ is the union of all pairs $(B_1,B_2)$ over all feasible power policies, where, for any fixed power policy, $B_1$ and $B_2$ satisfy the capacity region constraints accumulated over the $N$ slots. $\mathcal{D}(N)$ is a convex region. Each point on the boundary of $\mathcal{D}(N)$, see Fig. [fig_mac_region], can be characterized by solving a weighted sum rate maximization problem subject to the feasibility conditions (eq_mac_feasibility). Let $\mu_1$ and $\mu_2$ be the non-negative weights for the first and the second user's rates, respectively. Assuming without loss of generality that $\mu_1\ge\mu_2$, we then need to solve the optimization problem (eq_mac_casec). We note that this problem resembles one formulated previously for a diamond channel with energy cooperation. First, we state a necessary condition of optimality for the above problem. In the optimal solution of (eq_mac_casec), by the end of the transmission period, at least one of the following occurs: 1) both transmitters consume all of their harvested energy in transmission, or 2) the receiver consumes all of its harvested energy in decoding. Proof: Assume, without loss of generality, that transmitter 1 does not consume all of its energy in transmission, and that the receiver also does not consume all of its energy in decoding. Then, we can always increase the first user's rate in the last slot until either transmitter 1 or the receiver exhausts its energy; this strictly increases the objective function. We decompose the optimization problem (eq_mac_casec) into two nested problems: first, we solve for one user's powers in terms of the other's, and then we solve for the remaining powers. Let us define the inner problem (eq_mac_inner_prb), in which the energy levels are modified to account for the fixed variables. Then, we have the following lemma. Lemma [thm_inner_prb_ccv]: The optimal value of the inner problem is a decreasing concave function of the fixed variables. Proof: It is a decreasing function since the feasible set shrinks as the fixed variables grow.
To show concavity, let us choose two points and take their convex combination with weight $\theta\in[0,1]$. Let $\mathbf{p}^{1}$ and $\mathbf{p}^{2}$ denote the solutions of the inner problem (eq_mac_inner_prb) at the two points, respectively. Now, let $\mathbf{p}=\theta\mathbf{p}^{1}+(1-\theta)\mathbf{p}^{2}$, and observe that, from the linearity of the constraint set, $\mathbf{p}$ is feasible with respect to the convex combination of the two points. Therefore, the value at the convex combination is at least the value achieved by $\mathbf{p}$, which, by the concavity of the objective, is at least the corresponding convex combination of the two optimal values; this proves concavity. We observe that the inner problem (eq_mac_inner_prb) is a single-user energy harvesting maximization problem with fading, whose solution is obtained via directional water-filling over the inverse of the fading levels, as presented in prior work. Next, we solve the outer problem, where we define the water levels accordingly; a minimum is included in their definition to ensure the feasibility of the inner problem. Note that, by Lemma [thm_inner_prb_ccv], the outer problem is a convex optimization problem. We first note that, at the optimal policy, the first user's modified energies need not be fully utilized by the end of transmission, because the objective function is not monotonically increasing in them. To this end, we use the iterative water-filling algorithm proposed in Section [sec_2h] to solve this outer problem. Since the problem is convex, the iterations converge to the optimal solution. Note that the above formulation obtains the dotted points in the curved portion of the departure region in Fig. [fig_mac_region]. Specific points of the departure region, e.g., points 1 and 3 in Fig. [fig_mac_region], can be found by specific schemes, by solving the problem for the extreme choices of the weights. We now let the receiver employ successive decoding, where it aims at decoding the corner points, and then uses time sharing, if necessary, to achieve the desired rate pair. For instance, if the system is operating at its lower corner point, then the receiver first decodes the message of the second user, treating the first user's signal as noise, and then decodes the message of the first user after subtracting the second user's signal from its received signal. For $\mu_1\ge\mu_2$, we are always at a lower corner point in every time slot, and therefore the weighted sum rate maximization problem can be formulated as (eq_mac_succ_powers), where the last inequality comes from the fact that the receiver decodes the second user's message first, treating the first user's signal as noise, thereby spending $\phi(r_{2i})$ amount of energy to decode this message, and then spends $\phi(r_{1i})$ amount of energy to decode the first user's message after subtracting the second user's signal. Observe that the last constraint, i.e., the decoding causality constraint, is non-convex; therefore, one might need to invoke the time-sharing principle in order to fully characterize the boundary of the maximum departure region. In terms of the rates, the problem can be written in an equivalent form, which is non-convex due to the second user's energy causality constraint. In fact, the above problem is a signomial program, a generalized form of a geometric program in which posynomials can have negative coefficients. Next, we use the idea of successive convex approximation to provide an algorithm that converges to a locally optimal solution.
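The core of the successive approximation is the standard arithmetic-geometric-mean condensation: a posynomial $\sum_k u_k(\mathbf{x})$ is lower-bounded by the monomial $\prod_k\left(u_k(\mathbf{x})/\alpha_k\right)^{\alpha_k}$ with weights $\alpha_k=u_k(\mathbf{x}_0)/\sum_j u_j(\mathbf{x}_0)$, and the bound is tight at $\mathbf{x}_0$. A small sketch of this weight computation follows; the term values are assumptions for illustration.

```python
import numpy as np

def condense(u_at_x0):
    """AGM weights for approximating a posynomial by a monomial around x0;
    u_at_x0 holds the values of the posynomial's terms u_k evaluated at x0."""
    u = np.asarray(u_at_x0, dtype=float)
    alpha = u / u.sum()                       # alpha_k = u_k(x0) / sum_j u_j(x0)
    # The monomial prod_k (u_k/alpha_k)**alpha_k matches the posynomial at x0:
    approx_at_x0 = np.prod((u / alpha) ** alpha)
    return alpha, approx_at_x0

alpha, val = condense([2.0, 1.0, 0.5])
print(np.round(alpha, 3), round(val, 3))      # val equals 3.5, the posynomial at x0
```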
By applying an exponential change of variables to the rates, and after some algebraic manipulations, the problem takes a form very similar to a geometric program, except for the last two sets of constraints. These constraints have the form of a monomial upper-bounded by a posynomial, which does not allow us to write the problem in convex form via the usual geometric programming transformations. We therefore follow an approach from the geometric programming literature in order to iteratively approximate the posynomials on the right-hand side by monomials, thereby reaching a geometric program that can be solved efficiently. The approximations should be chosen carefully so that the iterations converge to a locally optimal solution of the original problem. Towards that end, we use the arithmetic-geometric mean inequality to write $\sum_k u_k(\mathbf{x})\ge\prod_k\left(u_k(\mathbf{x})/\alpha_k\right)^{\alpha_k}$, which holds for $\alpha_k\ge 0$ with $\sum_k\alpha_k=1$. In particular, equality holds at a point $\mathbf{x}_0$ if we choose $\alpha_k=u_k(\mathbf{x}_0)/\sum_j u_j(\mathbf{x}_0)$. Therefore, the monomial function on the right approximates the posynomial function on the left at $\mathbf{x}_0$. Substituting this approximation, we obtain that at the $l$th iteration we need to solve the geometric program (eq_mac_succ_cvxapprox), where the weights $\alpha_k$ are computed from the solution of the $(l-1)$th iteration. We pick an initial feasible point and run the iterations. The choice of the approximating monomial function satisfies the convergence conditions established in the successive approximation literature, and therefore the iterative solution of problem (eq_mac_succ_cvxapprox) converges to a point that is locally optimal for problem (eq_mac_succ_powers). Finally, we recover the original power allocations by inverting the change of variables. We now consider a two-user Gaussian BC with an energy harvesting transmitter and receivers, as shown in Fig. [fig_bc_sys]. Energies arrive in amounts $E_i$ at the transmitter, and $\bar{E}_{1i}$ and $\bar{E}_{2i}$ at receivers 1 and 2, respectively. By superposition coding, the weaker user is required to decode its message while treating the stronger user's interference as noise, while the stronger user is required to decode both messages successively, by first decoding the weaker user's message and then subtracting it to decode its own. The receiver noises have variances $1$ and $\sigma^2>1$. Under a total transmit power $p$, the capacity region of the Gaussian BC is given by the usual power-splitting expressions; working on the boundary of the capacity region, we have $p=P_{\min}(r_1,r_2)\triangleq\left(2^{2r_1}-1\right)+\left(2^{2r_2}-1\right)\left(2^{2r_1}-1+\sigma^2\right)$, where $P_{\min}(r_1,r_2)$ is the minimum power needed by the transmitter to achieve the rates $r_1$ and $r_2$ of the stronger and weaker user, respectively. Note that $P_{\min}$ is an increasing convex function of both rates. As in the MAC case, the goal here is to characterize the maximum departure region, via the weighted sum rate maximization (eq_opt_bc), where the first constraint is the source transmission energy causality constraint, and the second and third constraints are the decoding causality constraints at the stronger and weaker receivers, respectively. Here also, we take the decoding cost function to be exponential, $\phi(r)=c\left(2^{dr}-1\right)$. By virtue of superposition coding, we see that, in the optimization problem (eq_opt_bc), the decoding causality constraint of the stronger user is a function of both rates intended for the two users, as it is required to decode both messages, while the decoding causality constraint of the weaker user is a function of its own rate only. By the convexity of $P_{\min}$ and $\phi$, the maximum departure region is convex, and thus the weighted sum rate maximization in (eq_opt_bc) is sufficient to characterize its boundary; in addition, the optimization problem (eq_opt_bc) is convex. We note that a related problem has been considered in prior work, where the authors characterized transmission-completion-time-minimizing policies for a BC setting with data arrivals during transmission.
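A small numerical check of the minimum-power function just defined is sketched below; the value of the weaker user's noise variance is an assumption for illustration.

```python
import numpy as np

def p_min(r1, r2, sigma2=2.0):
    """Minimum total power achieving rates (r1, r2) on the degraded BC:
    strong user's share plus weak user's share seen over the stronger noise."""
    p1 = 2.0 ** (2 * r1) - 1.0                        # strong user's power
    p2 = (2.0 ** (2 * r2) - 1.0) * (p1 + sigma2)      # weak user's power
    return p1 + p2

# Increasing in each rate, e.g.:
print(round(p_min(0.5, 0.5), 3), round(p_min(1.0, 0.5), 3), round(p_min(0.5, 1.0), 3))
```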
In that work, the solution is found by sequentially solving an equivalent energy-consumption-minimization problem until convergence, and relies primarily on Newton's method; some structural insights about the optimal solution are also presented. In our setting, we consider the case with receiver-side decoding costs, and generalize the data arrival concept by considering the convex function $\phi$. In addition, our formulation imposes further interactions between the strong and the weak user's data, by allowing a constraint (the strong user's) that is placed on the sum of both rates, instead of on the individual rates. We characterize the solution of the problem according to the relation between the weights $\mu_1$ and $\mu_2$ as follows. If $\mu_1\ge\mu_2$, then, due to the degradedness of the second user, it is optimal to put all power into the first user's message. In this way, the problem reduces to a single-user problem with suitably modified energy levels. On the other hand, if $\mu_1<\mu_2$, then we need to investigate the necessary KKT optimality conditions. We write the Lagrangian for problem (eq_opt_bc), take the derivatives with respect to $r_{1i}$ and $r_{2i}$, and equate them to zero, obtaining the conditions (eq_bc_sumrate) and (eq_bc_weakrate), along with the complementary slackness conditions. From here, we state the following lemmas. Lemma [thm_bc_sumrate]: The sum rate $r_{1i}+r_{2i}$ is monotonically increasing in $i$. Proof: We prove this by contradiction. Assume that there exists some time slot $i$ such that the sum rate decreases from slot $i$ to slot $i+1$. From (eq_bc_sumrate), since the denominator cannot increase, the numerator has to decrease for the sum rate to decrease; by complementary slackness, the associated multipliers must then vanish, which forces the weak user's rate to decrease as well. From (eq_bc_weakrate), for the weak user's rate to decrease, its numerator has to decrease, which, combined with the above, contradicts the non-negativity of the Lagrange multipliers. Lemma [thm_bc_weakrate]: The weak user's rate $r_{2i}$ is monotonically increasing in $i$. Proof: We also prove this by contradiction. Assume that there exists some time slot $i$ such that $r_{2,i+1}<r_{2i}$. From (eq_bc_weakrate), since the denominator cannot increase, the numerator has to decrease for the weak user's rate to decrease. Let us consider two different cases, according to whether the weak user's decoding causality constraint is active. In the first case, complementary slackness forces the corresponding multipliers to vanish, and the numerator cannot decrease further, since it cannot drop below zero. In the second case, complementary slackness together with Lemma [thm_bc_sumrate] yields an increasing sum rate, which again gives a contradiction. With a change of variables from the rate pairs to the total powers and the weak user's rates, (eq_opt_bc) becomes a problem that we decompose into an inner and an outer problem, iterating between them until convergence. First, we fix the weak user's rate vector, and solve the inner problem over the total powers, with the energy levels modified accordingly. We have the following lemma for this inner problem, whose proof is similar to that of Lemma [thm_inner_prb_ccv]. Lemma [thm_bc_inner_ccv]: The optimal value of the inner problem is a decreasing concave function of the weak user's rates. We note that the weak user's rates induce a vector $\mathbf{p}^{\min}$ of per-slot minimum powers, which serves as a _minimum power constraint_ for the inner problem. Let us write the Lagrangian for the inner problem, take the derivative with respect to $p_i$, and equate it to zero. First, let us examine the necessary conditions for the optimal power to increase, i.e.,
$p_{i+1}>p_i$. This occurs iff the water level rises between the two slots. Thus, we must either have a strictly positive energy causality multiplier at slot $i$, which means, by complementary slackness, that all the available energy is consumed by the end of that slot; or the minimum power constraint is active at slot $i+1$, which means $p_{i+1}=p^{\min}_{i+1}$. Next, let us examine the necessary conditions for the optimal power to decrease, i.e., $p_{i+1}<p_i$. This occurs iff the water level drops between the two slots, and therefore we must have the minimum power constraint active at slot $i$, i.e., $p_i=p^{\min}_i$. We note from Lemmas [thm_bc_sumrate] and [thm_bc_weakrate] that both the sum rate and the weak user's rate are monotonically increasing. Therefore, we only focus on fixing an increasing feasible $\mathbf{p}^{\min}$. This, combined with the above conditions, leads to the following lemma. Lemma: For a fixed increasing $\mathbf{p}^{\min}$, the optimal solution of the inner problem is also increasing. Proof: By the KKT conditions stated above, if we had $p_{i+1}<p_i$, then we would need $p_i=p^{\min}_i$. Thus, we would have $p_{i+1}<p^{\min}_i\le p^{\min}_{i+1}$, i.e., the minimum power constraint would not be satisfied at slot $i+1$. Therefore, choosing an increasing $\mathbf{p}^{\min}$ in the outer problem ensures that the inner problem's solution is also increasing, and thereby satisfies the conditions of Lemmas [thm_bc_sumrate] and [thm_bc_weakrate]. We solve the inner problem by Algorithm [alg_bc_inner]. The algorithm's main idea is to equalize the powers as much as possible via directional water-filling while satisfying the minimum power requirements. Algorithm [alg_bc_inner]: 1) initialize the status of each bin, and mark the bins by their minimum power requirements; 2) moving backwards, pour water into each deficient bin from the previous bins until its minimum power requirement holds with equality; 3) do directional water-filling over the current and upcoming bins, and update the status of each bin. Observe that the algorithm produces a feasible power profile: it examines each slot, and does not move backwards unless the minimum power requirement is satisfied. If there is excess energy above the minimum, say at slot $i$, it performs directional water-filling, which will occur if $p_i>p_{i+1}$ (let us consider water-filling over two bins only, for simplicity). Since the minimum power requirement vector is increasing, after equalizing the energies the updated levels still satisfy the minimum power requirements whenever directional water-filling occurs. Also observe that the algorithm cannot produce a decreasing power profile, since $\mathbf{p}^{\min}$ is increasing. According to the KKT conditions, the power increases from slot to slot only if the minimum power constraint binds in the later slot, or the total energy is consumed by the earlier slot; we see that the algorithm satisfies this condition. Power increases only if directional water-filling is not applied at a slot, which means that either some of the water was poured forward in the previous iteration to satisfy a minimum power requirement, or no water was poured, which means that all energy is consumed by that slot. A numerical example for a three-slot system is shown in Fig. [fig_bc_inner]; the minimum power requirements are shown by red dotted lines in each bin. Following the algorithm, we first initialize by pouring all the amounts of water into their corresponding bins. We begin by checking the last bin, and we see that it needs some extra water to satisfy its minimum power requirement.
Thus, we pour water forward from the middle bin until the minimum power requirement of the last bin is satisfied with equality. This causes a deficiency in the middle bin, and therefore we pour water forward from the first bin until the minimum power requirement of the middle bin is satisfied with equality. Since the problem is feasible, the amount of water remaining in the first bin satisfies its minimum power requirement. In fact, in this example, there is an excess amount, which is therefore used to equalize the water levels of the first two bins via directional water-filling. This ends the algorithm and gives the optimal power profile. We now find the optimal weak user's rates by solving the outer problem, in which the modified water levels contain extra terms to ensure the feasibility of the inner problem. By Lemma [thm_bc_inner_ccv], the outer problem is a convex optimization problem. We solve it by an algorithm similar to that of the two-hop network outer problem, except that we only consider increasing power vectors in each iteration. By the convexity of the problem, the iterations converge to the optimal solution. In this section, we present numerical results for the considered system models, focusing on the specific case of an exponential decoding cost function. Starting with the single-user channel, we consider a five-slot system with given energy amounts at the transmitter and the receiver; the optimal rates then follow from Theorem [thm_p2p_sol]. For the MAC, we observe that the simultaneous decoding region lies strictly inside the successive decoding region. The latter, given by the geometric programming framework, is only a locally optimal solution; one can therefore achieve even higher rates if a globally optimal solution is attained. Finally, in Fig. [fig_bc_sim], we provide simulation results to illustrate the difference between the departure regions with and without decoding costs for a BC. We consider a system of three time slots with a fixed energy profile at the transmitter, and progressively lower energy profiles at the receivers, obtaining a sequence of shrinking regions. We note that, as we lower the energy profiles at the receivers, the decoding causality constraints become more binding, and therefore the region progressively shrinks. We considered decoding costs in energy harvesting communication networks. In our settings, we assumed that receivers, in addition to transmitters, rely on energy harvested from nature: receivers need to spend a decoding power that is a function of the incoming rate in order to receive their packets. This gave rise to the _decoding causality_ constraints: receivers cannot spend energy on decoding prior to harvesting it. We first considered a single-user setting and maximized the throughput by a given deadline. Next, we considered two-hop networks and characterized the end-to-end throughput-maximizing policies. Then, we considered two-user MAC and BC settings, with a focus on exponential decoding functions, and characterized the maximum departure regions. In most of the models considered, we were able to move the receivers' decoding-cost effects back to the transmitters as _generalized data arrivals_: transmitters need to adapt their powers (and rates) not only to their own energies, but to their intended receivers' energies as well. Such adaptation is governed by the characteristics of the decoding function.
Throughout this paper, we only considered receiver decoding costs in our models, without considering transmitter processing costs. Other works, on the other hand, have considered processing costs at the transmitter without considering decoding costs at the receiver. In those models, the transmitter spends a constant amount of power per unit time whenever it is communicating, to account for circuitry processing; in our model, the receiver spends a decoding power that is a function of the incoming data rate. As future work, the two approaches can be combined to account for both the processing costs at the transmitter and the decoding costs at the receiver in a single setting.

O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener, "Transmission with energy harvesting nodes in fading wireless channels: optimal policies," _IEEE JSAC_, vol. 29, no. 8, pp. 1732-1743, September 2011.
J. Yang and S. Ulukus, "Optimal packet scheduling in a multiple access channel with energy harvesting transmitters," _Journal of Communications and Networks_, vol. 14, no. 2, pp. 140-150, April 2012.
M. A. Antepli, E. Uysal-Biyikoglu, and H. Erkal, "Optimal packet scheduling on an energy harvesting broadcast link," _IEEE Journal on Selected Areas in Communications_, vol. 29, no. 8, pp. 1721-1731, September 2011.
H. Erkal, F. M. Ozcelik, and E. Uysal-Biyikoglu, "Optimal offline broadcast scheduling with an energy harvesting transmitter," _EURASIP Journal on Wireless Communications and Networking_, vol. 2013, no. 1, pp. 1-20, July 2013.
O. Ozel, J. Yang, and S. Ulukus, "Optimal broadcast scheduling for an energy harvesting rechargeable transmitter with a finite capacity battery," _IEEE Transactions on Wireless Communications_, vol. 11, no. 6, pp. 2193-2203, June 2012.
C. Huang, R. Zhang, and S. Cui, "Throughput maximization for the Gaussian relay channel with energy harvesting constraints," _IEEE Journal on Selected Areas in Communications_, vol. 31, no. 8, pp. 1469-1479, August 2013.
Y. Luo, J. Zhang, and K. B. Letaief, "Optimal scheduling and power allocation for two-hop energy harvesting communication systems," _IEEE Transactions on Wireless Communications_, vol. 12, no. 9, pp. 4729-4741, September 2013.
K. Tutuncuoglu, A. Yener, and S. Ulukus, "Optimum policies for an energy harvesting transmitter under energy storage losses," _IEEE Journal on Selected Areas in Communications_, vol. 33, no. 3, pp. 476-481, March 2015.
D. Gunduz and B. Devillers, "A general framework for the optimization of energy harvesting communication systems with battery imperfections," _Journal of Communications and Networks_, vol. 14, no. 2, pp. 130-139, April 2012.
O. Ozel, K. Shahzad, and S. Ulukus, "Optimal energy allocation for energy harvesting transmitters with hybrid energy storage and processing cost," _IEEE Trans. Signal Proc._, vol. 62, no. 12, pp. 3232-3245, June 2014.
A. Nayyar, T. Basar, D. Teneketzis, and V. V. Veeravalli, "Optimal strategies for communication and remote estimation with an energy harvesting sensor," _IEEE Trans. on Automatic Control_, vol. 58, no. 9, pp. 2246-2260, September 2013.
C. Huang, R. Zhang, and S. Cui, "Optimal power allocation for outage probability minimization in fading channels with energy harvesting constraints," _IEEE Transactions on Wireless Communications_, vol. 13, no. 2, pp. 1074-1087, February 2014.
Z. Wang, V.
Aggarwal, and X. Wang, "Iterative dynamic water-filling for fading multiple-access channels with energy harvesting," _IEEE Journal on Selected Areas in Communications_, vol. 33, no. 3, pp. 382-395, March 2015.
P. Grover, K. Woyach, and A. Sahai, "Towards a communication-theoretic understanding of system-level power consumption," _IEEE Journal on Selected Areas in Communications_, vol. 29, no. 8, pp. 1744-1755, September 2011.
J. Rubio, A. Pascual-Iserte, and M. Payaro, "Energy-efficient resource allocation techniques for battery management with energy harvesting nodes: a practical approach," in _European Wireless Conference_, April 2013.
A. A. D'Amico, L. Sanguinetti, and D. P. Palomar, "Convex separable problems with linear constraints in signal processing and communications," _IEEE Transactions on Signal Processing_, vol. 62, no. 22, pp. 6045-6058, November 2014.
|
We consider the effects of decoding costs in energy harvesting communication systems. In our setting, receivers, in addition to transmitters, rely solely on energy harvested from nature, and need to spend some energy in order to decode their intended packets. We model the decoding energy as an increasing convex function of the rate of the incoming data. In this setting, in addition to the traditional _energy causality_ constraints at the transmitters, we have _decoding causality_ constraints at the receivers, where the energy spent by a receiver for decoding cannot exceed its harvested energy. We first consider the point-to-point single-user problem, where the goal is to maximize the total throughput by a given deadline subject to both energy and decoding causality constraints. We show that decoding costs at the receiver can be represented as _generalized data arrivals_ at the transmitter, thereby moving all system constraints to the transmitter side. Then, we consider several multi-user settings. We start with a two-hop network where the relay and the destination have decoding costs, and show that _separable_ policies, in which the transmitter's throughput is maximized irrespective of the relay's transmission energy profile, are optimal. Next, we consider the multiple access channel (MAC) and the broadcast channel (BC), where the transmitters and the receivers harvest energy from nature, and characterize the maximum departure region. In all multi-user settings considered, we decompose our problems into inner and outer problems. We solve the inner problems by exploiting the structure of the particular model, and solve the outer problems by water-filling algorithms. Keywords: energy harvesting, throughput maximization, energy harvesting transmitters, energy harvesting receivers, decoding costs, energy causality, decoding causality.
|
It is a great honor and pleasure for me to contribute to this celebration of the scientific life work and achievements of Abner Shimony, from whom I have received much inspiration, personal encouragement, and the gift of friendship in a decisive period of my scientific career. When I came to know Abner more closely, I was thrilled to realize the close agreement between our quantum mechanical world views; and ever since, when contemplating foundational issues, I have often found myself wondering: "What would Abner say?". I am proud to share with Abner one piece of work on an important item of "unfinished business", a paper on the insolubility of the quantum measurement problem, which I hope may prove useful as a stepping stone towards resolving this problem. In this contribution I will address another area of concern to Abner, one that remains even when the measurement problem is suspended: quantum limitations of measurements. By way of introducing terminology and notation, I briefly review the basic and most general probabilistic structures of quantum mechanics, encoded in the concepts of states, effects and observables; I then recall how these objects enter the modeling of measurements (Section [sec:mmt]). This general framework of quantum measurement theory will then be used to obtain precise formulations and proofs of some long-disputed limitations of quantum measurements, such as the inevitability of disturbance and entanglement in a measurement, the impossibility of repeatable measurements for continuous quantities, and the incompatibility between conservation laws and the notion of repeatable sharp measurements (Section [sec:qu-lim]). In Section [sec:cu] the "classic" quantum limitations expressed by the complementarity and uncertainty principles are revisited. Appropriate operational measures of inaccuracy and disturbance for the formulation of quantitative trade-off relations for (joint) measurement inaccuracies and disturbances have been introduced in recent years; these will be discussed in Section [sec:inacc-dist]. I conclude with an outlook on open questions (Section [sec:conclusion]). Every quantum system is represented by a finite- or infinite-dimensional, separable Hilbert space $\mathcal{H}$ over the complex field. States are described as positive operators $T$ of trace equal to one. (We denote by $\le$ the usual ordering of self-adjoint operators; thus $A\le B$ if and only if $\langle\psi|A\psi\rangle\le\langle\psi|B\psi\rangle$ for all $\psi$. An operator $A$ is _positive_ if $O\le A$, where $O$ is the null operator. The symbol $T$ was chosen to denote a state since it is the first letter of the Finnish word for "state"; the authors of the monograph in question found this preferable to $W$, which would stand for the German word for "knowledge", or $\rho$, which is reminiscent of the phase space density with its classical connotations. Linguistic balance between the authors was maintained by taking $Z$ to denote the pointer ("Zeiger") observable in a measurement scheme (see below); naturally, $M$ will stand for the English term "measurement".) The set of states is a convex subset of the real vector space of all self-adjoint trace-class operators. The role of a quantum state is to assign a probability to the outcome of any measurement; in other words, associated with every measurement with possible outcomes $x_1,x_2,\dots,x_n$ are mappings $T\mapsto\mathrm{tr}[TE_k]\in[0,1]$, where each $E_k$ is an operator satisfying $O\le E_k\le I$ (here $I$ denotes the identity operator). Such operators are called _effects_.
The set of effects will be denoted $\mathcal{E(H)}$. The normalization of the probability distributions entails the condition $\sum_k E_k=I$ (eqn:normaliz). The mapping $x_k\mapsto E_k$ together with the property (eqn:normaliz) is a (discrete) instance of a normalized positive-operator-valued measure (POVM), the general definition being that of an operator-valued mapping $X\mapsto E(X)$ with the following properties: (i) the domain consists of all elements $X$ of a $\sigma$-algebra $\Sigma$ of subsets of an outcome space $\Omega$; (ii) the operators in the range are effects; (iii) the mapping is $\sigma$-additive (with infinite sums defined as weak limits): $E\left(\bigcup_i X_i\right)=\sum_i E(X_i)$ for any finite or countable family of mutually disjoint sets $X_i$ in $\Sigma$; (iv) $E(\Omega)=I$. POVMs are taken as the most general representation of an _observable_. In this contribution the measurable space of outcomes will be $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ or $(\mathbb{R}^n,\mathcal{B}(\mathbb{R}^n))$, where $\mathcal{B}(\mathbb{R})$ denotes the Borel algebra of subsets of $\mathbb{R}$. The usual notion of an observable is then recovered as the special case of a projection-valued measure (PVM) on $\mathbb{R}$, which is nothing but the spectral measure associated with a self-adjoint operator. Observables represented by PVMs are called _sharp_ observables; all other POVMs are referred to as _unsharp_ observables. The extreme case of a _trivial_ observable arises when all the effects in its range are _trivial_, that is, of the form $E(X)=\lambda(X)\,I$; the statistics associated with trivial effects and observables carry no information about the state. Measurements are physical processes, and as such they are subject to the laws of physics. In quantum mechanics, a measurement performed on an isolated object is described as an interaction between this object system and an apparatus system, both treated as quantum systems. Being a macroscopic system, the apparatus will interact with a wider environment, but it is often convenient and sufficient to subsume the degrees of freedom of this "rest of the world" into the description of the apparatus. The quantum description of a measurement is succinctly summarized in the notion of a _measurement scheme_, i.e.
, a quadruple $\langle\mathcal{H}_{\mathcal{A}},T_{\mathcal{A}},U,Z\rangle$, where $\mathcal{H}_{\mathcal{A}}$ is the Hilbert space of the apparatus (or probe) system, $T_{\mathcal{A}}$ is the initial apparatus state, and $U$ is the unitary operator representing the time evolution and ensuing coupling between the object system and the apparatus during the period of measurement, from time $0$ to $\tau$. Finally, $Z$ is the apparatus pointer observable, usually modeled as a sharp observable. A schematic sketch of a measurement process is given in Figure [mmt-scheme], which is taken from the monograph cited earlier. Here $T$ and $T_{\mathcal{A}}$ denote the initial states of the object and apparatus, and $U(T\otimes T_{\mathcal{A}})U^{*}$ is the final state of the compound system after the measurement coupling has ceased. It is understood that, upon reading an outcome, symbolized in the diagram by a discrete label, the apparatus is considered to be describable in terms of a pointer eigenstate, and this determines uniquely the associated final state of the object, as will be shown below. The observable measured by such a scheme is determined by the pointer statistics for every object input state, and is thus represented by a POVM $E$ that is unambiguously defined by the following _probability reproducibility_ condition (eqn:prob-rep):
$$\mathrm{tr}\left[U(T\otimes T_{\mathcal{A}})U^{*}\,\left(I\otimes Z(X)\right)\right]=:\mathrm{tr}\left[T\,E(X)\right]\equiv\mathsf{p}^{E}_{T}(X).$$
Here $X$ is any element of a $\sigma$-algebra of subsets of an outcome space. The positivity of the operators in the range of the map $X\mapsto E(X)$ and the measure properties of this map follow from the fact that the maps $X\mapsto\mathsf{p}^{E}_{T}(X)$ are probability measures for every state $T$. The state of the object after recording a measurement outcome in the set $X$ is determined by the following _sequential joint_ probability for a value of the pointer to be found in $X$ and an immediately subsequent measurement of an effect $B$ to yield a positive outcome (eqn:mmt-instrument):
$$\mathrm{tr}\left[U(T\otimes T_{\mathcal{A}})U^{*}\,\left(B\otimes Z(X)\right)\right]=:\mathrm{tr}\left[\mathcal{I}_{X}(T)\,B\right]\equiv\mathrm{tr}\left[T_{X}B\right].$$
The maps $\mathcal{I}_{X}$, called (quantum) operations, are affine and trace-norm-nonincreasing:
$$0\le\mathrm{tr}\left[\mathcal{I}_{X}(T)\right]=\mathrm{tr}\left[T_{X}\right]=\mathrm{tr}\left[T\,E(X)\right]\le\mathrm{tr}\left[T\right]=1,$$
and they compose an _instrument_, that is, an operation-valued map $X\mapsto\mathcal{I}_{X}$. Note that these maps extend in a unique way to linear maps on the complex vector space of trace-class operators. The above equation shows that every instrument defines a unique POVM. An important property of the operations deriving from a measurement scheme is their _complete positivity_: for every $n\in\mathbb{N}$, the linear map defined by $T\otimes S\mapsto\mathcal{I}_{X}(T)\otimes S$ (where $T$ is any trace-class operator on $\mathcal{H}$ and $S$ is any trace-class operator on $\mathbb{C}^{n}$) is positive, that is, it sends state operators to (generally non-normalized) state operators. (A standard example of a map that is positive but not completely positive is transposition, $B\mapsto VB^{*}V$, where $V$ is an antilinear operator such as complex conjugation for $\mathcal{H}=\mathbb{C}^{n}$.)
The instrument composed of the completely positive operations $\mathcal{I}_{X}$ is also called completely positive. Starting from ground-breaking mathematical work of Neumark and Stinespring, the converse statement was developed in increasing generality by Ludwig and collaborators, Davies and Lewis, and Ozawa (detailed references can be found in the reviews cited earlier). Theorem [thm:fund-qtm]: Every observable, represented as a POVM, admits infinitely many completely positive instruments from which it arises via Eq. (eqn:prob-rep), and every completely positive instrument admits infinitely many implementations by means of a measurement scheme according to Eq. (eqn:mmt-instrument). Next I recall some model realizations of measurement schemes and completely positive instruments; these will provide valuable case studies in subsequent sections. On the final pages of his famous book of 1932, _Mathematische Grundlagen der Quantenmechanik_, von Neumann introduces a mathematical model of what he describes as a measurement of the position of a particle in one spatial dimension. Both the particle and the measurement probe are represented by the Hilbert space $L^{2}(\mathbb{R})$, and the coupling $U=e^{-\frac{i}{\hbar}\lambda Q\otimes P_{\mathcal{A}}}$ generates a correlation between the observable intended to be measured, the position $Q$, and the pointer observable $Q_{\mathcal{A}}$. ($Q$ and $P$ denote the self-adjoint canonical position and momentum operators, and their spectral measures are denoted $\mathsf{Q}$ and $\mathsf{P}$, respectively.) To simplify the calculations, one assumes that the interaction is _impulsive_: the coupling constant $\lambda$ is large, so that the duration of the interaction can be kept small enough to neglect the free Hamiltonians of the two systems. It is further assumed that the initial state of the probe is a pure state, $T_{\mathcal{A}}=P[\phi_{\mathcal{A}}]$. The measured observable is then a smeared position, obtained by convolving $\mathsf{Q}$ with a confidence distribution determined by the probe state. In the expression for the variance of the measured statistics, the second term, the variance of that confidence distribution, indicates the unsharpness of the observable and is at the same time a measure of the inaccuracy of the measurement, that is, the separation between the measured observable and $\mathsf{Q}$. The instrument induced by von Neumann's measurement scheme takes the form of an integral, over the outcome set, of pure operations whose Kraus operators multiply the object wave function by scaled translates of $\phi_{\mathcal{A}}$. It turned out to be much more intricate to find a measurement scheme realizing a measurement of the sharp position observable. One solution was presented by Ozawa, who introduced the coupling
$$U^{\mathrm{Ozawa}}=\exp\left(-\tfrac{i}{\hbar}\,Q\otimes P_{\mathcal{A}}\right)\,\exp\left(\tfrac{i}{\hbar}\,P\otimes Q_{\mathcal{A}}\right).$$
Taking the pointer as $Z=\mathsf{Q}_{\mathcal{A}}$, the measured observable is $\mathsf{Q}$, the sharp position, independently of the choice of initial probe state. Indeed, the associated instrument is found to be
$$\mathcal{I}^{\mathrm{Ozawa}}_{X}(T)=\int_{X}\mathrm{tr}\left[T\,\mathsf{Q}(dq)\right]\;e^{-\frac{i}{\hbar}q\,P}\,T_{\mathcal{A}}\,e^{\frac{i}{\hbar}q\,P},$$
so that $\mathrm{tr}[\mathcal{I}^{\mathrm{Ozawa}}_{X}(T)]=\mathrm{tr}[T\,\mathsf{Q}(X)]$. A first fundamental limitation arises when the coupling fails to entangle object and probe: if $U$ is of the product form $U_{1}\otimes U_{2}$, then the map $X\mapsto\mathcal{I}_{X}(P[\varphi])$ is, up to normalization, independent of $\varphi$ for all $X$, and so the induced observable is trivial, $E(X)=\lambda(X)I$. The proof is quickly sketched: if $U=U_{1}\otimes U_{2}$, then $\mathcal{I}_{X}(P[\varphi])=\lambda(X)\,U_{1}P[\varphi]U_{1}^{*}$, with $\lambda(X)=\mathrm{tr}[U_{2}T_{\mathcal{A}}U_{2}^{*}\,Z(X)]$, and $\mathcal{I}_{\Omega}(P[\varphi])=\mathcal{I}_{X}(P[\varphi])+\mathcal{I}_{\Omega\setminus X}(P[\varphi])$ is the normalized transformed state. Due to the linearity of the partial trace, the term $\lambda(X)$ is independent of $\varphi$, and the measured observable gives probabilities independent of the input state: $\mathrm{tr}[P[\varphi]\,E(X)]=\lambda(X)$.
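A small numerical illustration of this no-information-without-entanglement statement is sketched below for a pair of qubits: with a product coupling, the pointer statistics are the same for any two object input states. The particular matrices and the probe state are assumptions chosen for illustration.

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))      # fix column phases

rng = np.random.default_rng(0)
U = np.kron(haar_unitary(2, rng), haar_unitary(2, rng))   # product coupling U1 x U2
sigma = np.diag([1.0, 0.0])                               # probe state |0><0|
Z0 = np.diag([1.0, 0.0])                                  # a pointer effect Z(x)

def pointer_prob(rho):
    rho_out = U @ np.kron(rho, sigma) @ U.conj().T
    return np.real(np.trace(rho_out @ np.kron(np.eye(2), Z0)))

rho1 = np.diag([1.0, 0.0])                                # two different inputs
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]])
print(pointer_prob(rho1), pointer_prob(rho2))             # equal: trivial observable
```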
In order to extend this proof to measurement schemes for which the initial apparatus state is not pure, it is necessary to sharpen the no-entanglement condition of the theorem so that it holds for any vector whose projection operator can arise as a convex component of $T_{\mathcal{A}}$; these vectors are known to be given exactly by those in the range of $T_{\mathcal{A}}^{1/2}$. The following theorem, also proven in the work cited above, can then be applied to take a step towards extending the above discussion to mixed apparatus states. Theorem: Let $U$ be a unitary mapping on $\mathcal{H}\otimes\mathcal{H}_{\mathcal{A}}$ such that for all vectors $\varphi\in\mathcal{H}$, $\phi\in\mathcal{H}_{\mathcal{A}}$, the image of $\varphi\otimes\phi$ under $U$ is again of product form. Then $U$ is one of the following: (a) $U=U_{1}\otimes U_{2}$, where $U_{1}$ and $U_{2}$ are unitary; (b) $U(\varphi\otimes\phi)=V_{2}\phi\otimes V_{1}\varphi$, where $V_{1}:\mathcal{H}\to\mathcal{H}_{\mathcal{A}}$ and $V_{2}:\mathcal{H}_{\mathcal{A}}\to\mathcal{H}$ are surjective isometries. The latter case can only occur if $\mathcal{H}$ and $\mathcal{H}_{\mathcal{A}}$ are Hilbert spaces of equal dimensions. It is not hard to construct a measurement scheme with a non-entangling coupling of the form (b) for _any_ object observable. This can be achieved by making the object interact with another system of the same type, onto which the state of the original system is identically copied. Let $\mathcal{H}_{\mathcal{A}}=\mathcal{H}$, let $E$ be a POVM in $\mathcal{H}$, and define $U$ to be the swap map, $U(\varphi\otimes\phi)=\phi\otimes\varphi$, with pointer $Z=E$. Then we have $\mathrm{tr}[U(T\otimes T_{\mathcal{A}})U^{*}(I\otimes E(X))]=\mathrm{tr}[TE(X)]$. A measurement and its associated instrument are called _repeatable_ if the probability for obtaining the same result upon immediate repetition of the measurement is equal to one:
$$\mathrm{tr}\left[\mathcal{I}_{X}\left(\mathcal{I}_{X}(T)\right)\right]=\mathrm{tr}\left[\mathcal{I}_{X}(T)\right]\quad\text{for all }X\in\Sigma,\ T\in\mathcal{S(H)}.$$
A measurement of a discrete observable and its associated instrument are called _ideal_ if they do not change any eigenstate; thus, if the state is such that a particular outcome is certain to occur, then an ideal instrument does not alter the state: if $\mathrm{tr}[TE_{k}]=1$, then $\mathcal{I}_{k}(T)=T$. Examples of repeatable measurements are the von Neumann and Lüders measurements, defined as follows. Let $A$ be an observable with discrete spectrum and associated spectral decomposition $A=\sum_{k}a_{k}P_{k}$. We allow the eigenvalues $a_{k}$ to have multiplicity greater than one, so that the spectral projections $P_{k}$ can be decomposed into sums of orthogonal rank-1 projections, $P_{k}=\sum_{j}P[\varphi_{kj}]$; the von Neumann instrument is then given by $\mathcal{I}^{vN}_{k}(T)=\sum_{j}P[\varphi_{kj}]\,T\,P[\varphi_{kj}]$, while the Lüders instrument is the coarser $\mathcal{I}^{L}_{k}(T)=P_{k}TP_{k}$. For continuous observables, exact repeatability must be replaced by an approximate notion: for $X\in\Sigma$ and $\epsilon>0$, let $X_{\epsilon}$ denote the set of points within distance $\epsilon$ of $X$ (this set is a Borel set). An instrument on $\mathbb{R}$ is _$\epsilon$-repeatable_ if, for all states $T$ and all $X$,
$$\mathrm{tr}\left[\mathcal{I}_{X_{\epsilon}}\left(\mathcal{I}_{X}(T)\right)\right]=\mathrm{tr}\left[\mathcal{I}_{X}(T)\right].$$
An example is given by Ozawa's instrument for a sharp position measurement, Eq. (eqn:ozawa-instr), if the probe state is chosen such that its position distribution is concentrated within an interval of length $\epsilon$. Moreover, for the generalized Lüders instrument of an effect $B$, $T\mapsto B^{1/2}TB^{1/2}$, if an outcome is nearly certain, the (trace norm) difference between the input state and the normalized output state is correspondingly small; this is the sense in which the generalized Lüders instruments are approximately ideal. Approximately ideal measurements enable a weakening of the EPR criterion applicable to unsharp or continuous observables, thus yielding a notion of _unsharp reality_. It is not hard to construct examples of effects (with some eigenvalues small) such that the associated Lüders operation does not increase the small probability represented by such an eigenvalue, since the corresponding eigenstate is left unchanged. This shows that repeatability need not hold even in an approximate sense; thus, unsharp observables sometimes admit measurements that are less invasive than measurements of sharp observables. The notion of a Lüders measurement was introduced by G. Lüders in 1951, who showed that such measurements can be used to test the compatibility of sharp observables. Theorem [thm:luders]: Let $A=\sum_{k}a_{k}P_{k}$ be a discrete sharp observable and $B$ an effect. The following are equivalent: (a) for all states $T$, $\mathrm{tr}\left[\sum_{k}P_{k}TP_{k}\,B\right]=\mathrm{tr}\left[TB\right]$; (b) $[B,P_{k}]=0$ for all $k$.
The assumptions of the generalized version of this theorem are: (i) the measured observable is a simple observable, comprising only two effects; (ii) it has a discrete spectrum of eigenvalues that can be numbered in decreasing or increasing order; (iii) condition (a) is also stipulated for one further effect. That _some_ additional assumptions are necessary has been demonstrated by means of a counterexample: there, a discrete unsharp observable and an effect not commuting with it were found such that the generalized Lüders instrument of the observable does not disturb the statistics of that effect. There is an obvious limitation on measurability due to the fact that the physical realization of a measurement scheme depends on the interactions available in nature. In particular, the Hamiltonian of any physical system has to satisfy the symmetry requirements associated with the fundamental conservation laws. This measurement limitation is reviewed in Abner Shimony's contribution, so that here some complementary points and comments will be sufficient. An early demonstration of the impact of the existence of additive conserved quantities on the measurability of a physical quantity was given by Wigner in 1952. Wigner showed that repeatable measurements of the $x$-component of the spin of a spin-1/2 system are impossible due to the conservation of the $z$-component of the total angular momentum of the system and the apparatus. The conclusion was generalized by other authors to the statement that a repeatable measurement of a discrete quantity is impossible if there is a (bounded) additive conserved quantity of the object-plus-apparatus system that does not commute with the quantity to be measured. Wigner's resolution was to show that a successful measurement can be realized with an angular-momentum-conserving interaction and with an arbitrarily high success probability if the apparatus is sufficiently large. Thus he allowed for an additional measurement outcome that indicated "no information" about the spin. The outcomes associated with "spin up" and "spin down" were shown to be reproduced with probabilities that come arbitrarily close to the ideal quantum mechanical probabilities. It was later shown (Sec. IV.3 of the monograph cited earlier) that this resolution amounts to describing the measurement by means of a POVM with three possible outcomes and associated effects $E_{+}$, $E_{-}$, $E_{?}$, where the effects $E_{\pm}$ are close to the spectral projections of the spin component when a parameter $\epsilon$ is small, and the effect $E_{?}$ is a multiple of the identity $I$.
however, the common description of a sharp spin measurement is found to be an admissible idealization; the error made by breaking (ignoring) the fundamental rotation symmetry of the measurement hamiltonian is negligible due to the fact that the measuring system is very large.

it seems to be a difficult problem to decide whether a limitation of measurability arises also in cases where the observable to be measured and the conserved quantity are unbounded and have continuous spectra. this question was raised by shimony and stein in 1979. the most general result at that time was the following (expressed in the notation of the present paper).

[thm:mmt-cons-law] if a sharp observable $A$ admits a repeatable measurement, and if $N=N_1\otimes I+I\otimes N_2$ is a bounded selfadjoint operator representing an additive conserved quantity for the combined object and apparatus system, then $A$ commutes with the object part $N_1$.

since repeatable measurements exist only for discrete observables (theorem [thm:rep-disc]), the above statement is only applicable to object observables with discrete spectra. hence it does not apply to measurements of position. ozawa presented what seems to be a counterexample, using a coupling that is manifestly translation invariant. however, this model constitutes an unsharp position measurement which becomes a sharp measurement only if the initial state of the apparatus is allowed to be a non-normalizable state (that is, not a hilbert space vector or state operator). a proof that a sharp position measurement (without repeatability, but with some additional physically reasonable assumptions) cannot be reconciled with momentum conservation was given in . a general proof is still outstanding.

here we use another modification of the von neumann model to demonstrate that momentum conservation is compatible with unsharp position measurements where the inaccuracy can be made arbitrarily small (see ). note that the total momentum $\mathsf{P}+\mathsf{P}_A$ commutes with the translation-invariant coupling
$$U=\exp\!\left(-\tfrac{i\lambda}{\hbar}\,(\mathsf{Q}-\mathsf{Q}_A)\,\mathsf{P}_A\right),$$
since both $\mathsf{Q}-\mathsf{Q}_A$ and $\mathsf{P}_A$ commute with $\mathsf{P}+\mathsf{P}_A$. the pointer is again taken to be the probe position. then the measured observable is the smeared position $\mu*\mathsf{Q}$, where the inaccuracy density $\mu$ is determined by the probe state. one can argue that the clash between the conservation law and position measurement has been shifted and reappears when the measurement of the pointer is considered. however, if momentum conservation is taken into account in the measurement of the pointer, it would turn out that the pointer itself is only measured approximately, that is, an unsharp pointer is actually measured, which then yields the measured observable as a further smearing of $\mu*\mathsf{Q}$.

the lesson of the current subsection is this: to the extent that the limitation on measurability due to additive conservation laws holds as a general theorem, it shows that the notion of a sharp measurement of the most important quantum observables is an idealization which can be realized only approximately _as a matter of principle_; yet the quality of the approximation can be extremely good due to the macroscopic nature of the measuring apparatus.
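the statistics of a smeared position observable are just a convolution; a quick sketch (with gaussian choices of state and inaccuracy density, both arbitrary) illustrates this, including the additivity of variances under smearing:

```python
import numpy as np

x = np.linspace(-10, 10, 2001); dx = x[1] - x[0]

def gaussian(x, s):
    return np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

p_sharp = gaussian(x, 1.0)    # |psi(x)|^2 : sharp position distribution
mu = gaussian(x, 0.5)         # inaccuracy density of the unsharp measurement

p_unsharp = np.convolve(p_sharp, mu, mode="same") * dx   # (mu * p)(x)

var = lambda p: np.sum(x**2 * p) * dx - (np.sum(x * p) * dx) ** 2
print("variances add under smearing:",
      np.isclose(var(p_unsharp), var(p_sharp) + var(mu), rtol=1e-3))
```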
to conclude this section, it is worth remarking that the quantum limitations of measurements described here are valid independently of the view that one may take on the measurement problem. this is the case because these limitations follow from consideration of the total state of system and apparatus as it arises in the course of its unitary evolution.

the "classic" expressions of quantum limitations of preparations and measurements are codified in the complementarity and uncertainty principles, formulated by bohr and heisenberg 80 years ago. this section offers a "taster" for two recent extensive reviews on the complementarity principle, ref. , and the uncertainty principle, ref. , which together develop a novel coherent account of these two principles. in a nutshell, complementarity states a strict exclusion of certain pairs of operations, whereas the uncertainty principle shows a way of "softening" complementarity into a graded, quantitative relationship, in the form of a trade-off between the accuracies with which these two options can be realized together approximately. this interpretation is compatible with, if not envisaged in, the following passage of bohr's published text of his famous como lecture of 1927:

"in the language of the relativity theory, the content of the relations (2) [the uncertainty relations] may be summarized in the statement that according to the quantum theory a general reciprocal relation exists between the maximum sharpness of definition of the space-time and energy-momentum vectors associated with the individuals. this circumstance may be regarded as a simple symbolical expression for the complementary nature of the space-time description and claims of causality. at the same time, however, the general character of this relation makes it possible to a certain extent to reconcile the conservation laws with the space-time co-ordination of observations, the idea of a coincidence of well-defined events in a space-time point being replaced by that of _unsharply_ defined individuals within finite space-time regions."

bohr summarizes here his idea of complementarity as the falling-apart in quantum physics of the notions of observation, which leads to _space-time description_, and state definition, linked with _conservation laws_ and _causal description_; he regarded the possibility of combining space-time description and causal description as an idealization that was admissible in classical physics. note also the reference to _unsharpness_ (the emphasis in the quotation is ours), which seems to constitute the first formulation of an intuitive notion of _unsharp reality_ (and the first occurrence of this teutonic addition to the english language).

in a widely accepted formulation, the _complementarity principle_ is the statement that there are pairs of observables which stand in the relationship of complementarity. that relationship comes in two variants, stating the mutual exclusivity of _preparations_ or _measurements_ of certain pairs of observables. in quantum mechanics there are pairs of observables whose eigenvector basis systems are mutually unbiased. this means that if the system is in an eigenstate of one observable, so that the value of that observable can be predicted with certainty, then the values of the other observable are uniformly distributed. this feature is an instance of _preparation complementarity_, and it has been called _value complementarity_.
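value complementarity can be exhibited in a few lines of linear algebra; the sketch below (using the standard pauli matrices as an example pair with mutually unbiased eigenbases) checks that each eigenstate of one observable yields uniform statistics for the other:

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def eigprojs(A):
    w, v = np.linalg.eigh(A)
    return [np.outer(v[:, i], v[:, i].conj()) for i in range(len(w))]

for P in eigprojs(sz):                      # prepare a sigma_z eigenstate
    probs = [np.trace(P @ Q).real for Q in eigprojs(sx)]
    assert np.allclose(probs, [0.5, 0.5])   # sigma_x outcomes uniform
print("mutually unbiased: certainty for one observable,"
      " uniform statistics for the other")
```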
_measurement complementarity_ of observables with mutually unbiased eigenbases can be characterized by the following property: any attempt to obtain simultaneous information about both observables by first measuring one and then the other is bound to fail, since the first measurement completely destroys any information about the other observable; that is to say, the second measurement gives no information about the state prior to the first measurement. this will be illustrated in an example below. we conclude that the "principle" of complementarity, as formalized here, is in fact a consequence of the quantum mechanical formalism. examples of complementary pairs are two orthogonal spin components of a spin-1/2 system, and the canonically conjugate position and momentum observables of a free particle.

a unified formalization of preparation and measurement complementarity can be given in terms of the spectral projections of these observables; for position and momentum and bounded sets $X,Y$,
$$\mathsf{Q}(X)\wedge\mathsf{P}(Y)=0,$$
and an analogous relation holds for the spin components. the symbol $\wedge$ represents the lattice-theoretic infimum of two projections; that is, $\mathsf{Q}(X)\wedge\mathsf{P}(Y)$ is the projection onto the closed subspace which is the intersection of the ranges of $\mathsf{Q}(X)$ and $\mathsf{P}(Y)$. these relations entail, in particular, that complementary pairs of observables do not possess joint probability distributions associated with a state in the usual way: there is no povm $G$ on $\mathbb{R}^2$ such that $G(X\times\mathbb{R})=\mathsf{Q}(X)$ and $G(\mathbb{R}\times Y)=\mathsf{P}(Y)$ for all $X,Y$. in fact, if these marginality relations were satisfied for all bounded intervals, then one must have $G(X\times Y)\le\mathsf{Q}(X)$ and $G(X\times Y)\le\mathsf{P}(Y)$, and this implies that any vector in the range of $G(X\times Y)$ must also be in the ranges of $\mathsf{Q}(X)$ and $\mathsf{P}(Y)$, hence $G(X\times Y)=0$.

let $A$, $B$ be observables in $\mathbb{C}^n$ with mutually unbiased eigenbases $\{\varphi_k\}$ and $\{\psi_\ell\}$, respectively (hence $A,B$ are value complementary). let $\mathcal{I}_k(T)=P[\varphi_k]TP[\varphi_k]$ be the repeatable (von neumann-lüders) instrument associated with $A$, and let $\mathcal{I}(T)=\sum_k\mathcal{I}_k(T)$ be the nonselective measurement operation. then the probability for a $B$ outcome following the $A$ measurement is $\mathrm{tr}[\mathcal{I}(T)P[\psi_\ell]]=1/n$, which is independent of $T$. this can be expressed by saying that the observable effectively measured in this process is not $B$ but the trivial povm whose effects are $\frac1nI$.

consider a measurement of position followed by a measurement of momentum. let $\mathcal{I}$ be the instrument representing the position measurement. then the following defines a joint probability distribution:
$$\mathrm{tr}\!\left[\mathcal{I}_X(T)\,\mathsf{P}(Y)\right]=\mathsf{p}_T(X\times Y)=:\mathrm{tr}\!\left[TG(X\times Y)\right],\qquad X,Y\in\mathcal{B}(\mathbb{R}).$$
the marginal observables are sharp position, $G_1=\mathsf{Q}$, and a distorted "momentum" observable $G_2$. since one of these marginal observables is a sharp observable, it follows that the effects of the other marginal observable commute with the sharp observable. but $\mathsf{Q}$ is a maximal observable, and so the effects of $G_2$ are in fact functions of the position operator. the attempted momentum measurement only defines an effectively measured observable which contains a "shadow" of the information of the first position measurement. hence a sharp measurement of position destroys all prior information about momentum (and vice versa); a finite-dimensional analogue of this is sketched below.
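in the following sketch a nonselective lüders measurement of $\sigma_z$ is followed by a $\sigma_x$ measurement, and the resulting $\sigma_x$ statistics come out state-independent, i.e. the effectively measured observable is trivial (an illustrative toy calculation with random input states):

```python
import numpy as np

sz_projs = [np.diag([1.0, 0.0]).astype(complex),
            np.diag([0.0, 1.0]).astype(complex)]
sx = np.array([[0, 1], [1, 0]], dtype=complex)
w, v = np.linalg.eigh(sx)
sx_projs = [np.outer(v[:, i], v[:, i].conj()) for i in range(2)]

def nonselective_z(T):
    # nonselective lüders operation for sigma_z
    return sum(P @ T @ P for P in sz_projs)

rng = np.random.default_rng(0)
for _ in range(3):                          # a few random input states
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    T = np.outer(psi, psi.conj())
    probs = [np.trace(nonselective_z(T) @ Q).real for Q in sx_projs]
    print([round(p, 6) for p in probs])     # always [0.5, 0.5]
```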
in fact one can go further: the following defines a completely positive instrument which renders the effective observable defined by a subsequent momentum measurement trivial. let $x\mapsto T_x=W_xT_0W_x^*$ be a continuous family of positive operators of trace one, generated from a fixed $T_0$ by unitary operators $W_x$ that commute with momentum, and put
$$\mathcal{I}_X(T):=\int_XT_x\,\mathrm{tr}\!\left[T\,\mathsf{Q}(dx)\right].$$
the associated measured observable is indeed the sharp observable $\mathsf{Q}$, since $\mathrm{tr}[\mathcal{I}_X(T)]=\mathrm{tr}[T\mathsf{Q}(X)]$.

a widely used measure of the accuracy of an approximate measurement with output povm $E$ is the _standard error_. let $E[k]:=\int x^kE(dx)$ denote the $k$-th moment operator of $E$, defined on its natural domain
$$\Big\{\varphi\in\mathcal{H}\,:\,\Big|\int x^k\langle\psi|E(dx)\varphi\rangle\Big|<\infty\ \mathrm{for\ all}\ \psi\in\mathcal{H}\Big\}.$$
if $E[1]$ and $\mathsf{Q}$ commute, they can be jointly measured to determine the expectation of the squared-difference operator $(E[1]-\mathsf{Q})^2$. if $E[1]$ and $\mathsf{Q}$ do not commute, then normally the squared difference operator does not commute with either of them, and a third, quite different measurement is required to find its expectation value. this is to say that the standard error is not _operationally significant_, in general. an interesting but very special subclass of measurements where this deficiency does not arise is the family of _unbiased_ measurements, for which $E[1]=\mathsf{Q}$.

by $W_\varepsilon(\delta)$ i denote the _inaccuracy_, defined as the smallest interval width $w$ such that whenever the value of $\mathsf{Q}$ is certain to lie within an interval $J$ of width $\delta$, the output distribution is concentrated, up to probability $1-\varepsilon$, within the interval of width $w$ concentric with $J$. the inaccuracy describes the range within which the input values can be inferred from the output distributions, with confidence level $1-\varepsilon$, given initial localizations within $\delta$. the inaccuracy is an increasing function of $\delta$, so that one can define the _error bar width_ of $E$ relative to $\mathsf{Q}$ as
$$W_\varepsilon:=\inf_{\delta>0}W_\varepsilon(\delta)=\lim_{\delta\to0}W_\varepsilon(\delta).$$
if $W_\varepsilon$ is finite for all $\varepsilon\in(0,\tfrac12)$, we will say that $E$ approximates $\mathsf{Q}$ in the sense of _finite error bars_. similar definitions apply to approximations of momentum, yielding inaccuracies and error bar widths relative to $\mathsf{P}$. it is interesting to note that the finiteness of either of the other accuracy measures implies the finiteness of the error bar width. therefore, among the three measures of inaccuracy introduced above, the condition of finite error bars gives the most general criterion for selecting "good" approximations of $\mathsf{Q}$ and $\mathsf{P}$. the following uncertainty relation for error bar widths is proven in .

[thm:err-bar-ur] let $G$ be an observable on $\mathbb{R}^2$ whose marginals $G_1,G_2$ have finite error bar widths relative to $\mathsf{Q}$ and $\mathsf{P}$. the marginals obey the trade-off relation (for $\varepsilon_1,\varepsilon_2\in(0,\tfrac12)$)
$$W_{\varepsilon_1}(G_1)\,W_{\varepsilon_2}(G_2)\;\ge\;c(\varepsilon_1,\varepsilon_2)\,\hbar,$$
with a positive constant $c(\varepsilon_1,\varepsilon_2)$ of order unity.

there are various measures of the intrinsic unsharpness of an observable on $\mathbb{R}$. here we briefly review a measure based on the _noise operator_ of $E$, given by the positive operator $N(E):=E[2]-E[1]^2$. if $E[1]$ is a selfadjoint (rather than only symmetric) operator, it is known that $N(E)=0$ if and only if $E$ is a sharp observable. the following trade-off relation for the noise in approximate joint measurements of position and momentum is proven in .

[thm:noise-ur] let $G$ be an approximate joint observable for $\mathsf{Q},\mathsf{P}$ in the sense of finite error bars. then the noise of $G_1$ and the noise of $G_2$ obey the inequality
$$\mathcal{N}(G_1)\,\mathcal{N}(G_2)\;\ge\;\frac{\hbar^2}{4},$$
where $\mathcal{N}(G_i)$ denotes a global (state-independent) measure of the respective noise operator.

an alternative measure of the intrinsic unsharpness of an observable on $\mathbb{R}$ is given by the _resolution width_, introduced in ; this quantity is similar in spirit to the error bar width, and it is again found to yield a universal trade-off relation in joint measurements.
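for gaussian models the inaccuracy trade-off is explicit; the following sketch (an illustration with $\hbar=1$ and a gaussian probe, whose position- and momentum-inaccuracy densities form a fourier pair) computes the product of the inaccuracy widths numerically and finds it at the bound $\hbar/2$:

```python
import numpy as np

hbar = 1.0
N = 2**14
x = np.linspace(-40, 40, N, endpoint=False); dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx); dp = 2 * np.pi * hbar / (N * dx)

def std(grid, dens, dg):
    dens = dens / (dens.sum() * dg)
    m = (grid * dens).sum() * dg
    return np.sqrt(((grid - m) ** 2 * dens).sum() * dg)

for s in (0.5, 1.0, 2.0):                  # probe wave-function widths
    e = np.exp(-x**2 / (4 * s**2))         # gaussian probe amplitude e(x)
    mu = np.abs(e) ** 2                    # position-inaccuracy density
    nu = np.abs(np.fft.fft(e)) ** 2        # momentum-inaccuracy density
    prod = std(x, mu, dx) * std(p, nu, dp)
    print(f"s={s}: inaccuracy product = {prod:.4f} (bound hbar/2 = {hbar/2})")
```

gaussian probes saturate the bound; any other probe shape gives a strictly larger product.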
we have seen that a momentum measurement following a sharp position measurement defines an observable that carries no information about the momentum distributions of the states prior to the position measurement. a sharp measurement of position thus destroys completely the momentum information contained in the initial state. the question arises whether the disturbance of momentum can be diminished if the position is measured approximately rather than sharply. this possibility was already envisaged by heisenberg in his discussion of thought experiments illustrating the uncertainty relations. for example, in the case of a particle passing through a slit he noted that, due to the diffraction at the slit, an initially sharp momentum distribution is distorted into a broader distribution whose width is of the order $h/a$, where $a$ is the width of the slit. this width is a measure of the change, or disturbance, of the momentum distribution, and $a$ can be interpreted as the inaccuracy of the position determination effected by the slit. further, one may also consider the recording of the location at which the particle hits the screen as a geometric determination of (the direction of) its momentum, the inaccuracy of which is given by the width of the distribution obtained after many repetitions of the experiment. in this way the passage through the slit, followed by the recording at the screen, constitutes an approximate joint measurement of the position and momentum of the particle at the moment of its passage through the slit; see figure [fig:slit].

generalizing this idea of making an approximate joint measurement by way of a sequence of approximate measurements, we consider the schemes of figures [fig:sharp-seq] and [fig:unsharp-seq]: the first measurement is that of either sharp position or an unsharp position observable $M_1$, and it is followed by a sharp momentum measurement. the observable $M_2$ effectively measured by this momentum measurement is defined via
$$\mathrm{tr}\!\left[T'\,\mathsf{P}(Y)\right]=\mathrm{tr}\!\left[T\,M_2(Y)\right]\quad\mathrm{for\ all\ initial\ states}\ T,$$
where $T'$ is the state after the position measurement. thus $M_2$ is the "distorted" momentum observable. collecting the probabilities for finding an outcome in a set $X$ for the first measurement and an outcome in $Y$ for the second measurement defines a probability measure for each state $T$; hence there is a unique joint observable $M$ for $M_1$ and $M_2$ determined by the given measurement scheme.
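heisenberg's slit estimate can be reproduced numerically; the sketch below ($\hbar=1$, a plane-wave-like state truncated by slits of various widths $a$, with the momentum width measured as the symmetric interval containing 90% of the diffracted distribution) confirms the $1/a$ scaling:

```python
import numpy as np

hbar = 1.0
N = 2**15
x = np.linspace(-50, 50, N, endpoint=False); dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)

def momentum_width(a, frac=0.9):
    psi = np.where(np.abs(x) < a / 2, 1.0, 0.0)   # state truncated by the slit
    dens = np.abs(np.fft.fft(psi)) ** 2           # diffracted momentum density
    dens /= dens.sum()
    order = np.argsort(np.abs(p))
    csum = np.cumsum(dens[order])
    k = np.searchsorted(csum, frac)
    return 2 * np.abs(p[order][k])                # interval holding 90% of mass

for a in (0.5, 1.0, 2.0, 4.0):
    w = momentum_width(a)
    print(f"slit width a={a}: momentum spread = {w:.3f}, a * spread = {a*w:.3f}")
```

the product $a\cdot\Delta p$ stays (approximately) constant, as the scaling argument predicts.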
in the first case, since the marginal $M_1=\mathsf{Q}$ is sharp, $M_2$ commutes with $\mathsf{Q}$ and is therefore _not_ a good approximation of the momentum observable. in the second case, with $M_1=\mu*\mathsf{Q}$, it is known that the second marginal observable is a smeared momentum observable, $M_2=\nu*\mathsf{P}$, if the first, unsharp position measurement is such that the induced instrument is the von neumann instrument ([eqn:vn-instrum]). the inaccuracy distributions are then related as follows (cf. ([eqn:smeared-pos])): $\mu=|e|^2$ and $\nu=|\hat e|^2$, where $\hat e$ is the fourier transform of the probe wave function $e$, from which it follows that the standard deviations of the two distributions obey the uncertainty relation
$$\Delta(\mu)\,\Delta(\nu)\;\ge\;\frac{\hbar}{2}.$$
note that $\Delta(\mu)$, $\Delta(\nu)$ are measures of how well the sharp observables $\mathsf{Q}$, $\mathsf{P}$ are approximated by $M_1$, $M_2$, respectively. thus they are measures of measurement inaccuracy, and at the same time $\Delta(\nu)$ quantifies the disturbance of the momentum distribution due to the position measurement.

(scheme (1): a sharp position measurement turns the input state $T$ into $T'$, on which a sharp momentum measurement is performed; the recorded marginals are the sharp position distribution $\mathsf{p}_T^{\mathsf{Q}}$ and the distorted momentum distribution $\mathsf{p}_{T'}^{\mathsf{P}}=\mathsf{p}_T^{f(\mathsf{Q})}$.)

(scheme (2): the same sequence with an unsharp position measurement $M_1$ as the first step; the second marginal is now a smeared momentum distribution.)

these considerations show that an operational definition of the disturbance of the momentum distribution due to a position measurement is obtained by considering the sequential joint measurement composed of first measuring position and then momentum. the inaccuracy of the second measurement, that is, any measure of the separation between $M_2$ and $\mathsf{P}$, is also a measure of the momentum disturbance. consequently, all the joint measurement inaccuracy relations discussed above apply to sequential joint measurements of position and momentum, and in this case they constitute rigorous versions of the long-sought-after inaccuracy-vs-disturbance trade-off relations.

using the apparatus of modern quantum measurement theory, i have reviewed rigorous formulations of some well-known quantum limitations of measurements: the inevitability of disturbance and (transient) entanglement; the impossibility of repeatable measurements for continuous quantities; the restrictions on measurements arising from the presence of an additive conserved quantity; and the necessarily approximate and unsharp nature of joint measurements of noncommuting quantities.
in each case, a strict no-go theorem is complemented with a positive result describing conditions for an approximate realization of the impossible goal: repeatability can be approximated arbitrarily well for continuous sharp observables, also in the presence of a conservation law. it was found that ideal measurements of sharp observables are necessarily repeatable, but in the case of unsharp observables, approximate ideality can be achieved without forcing approximate repeatability. thus, unsharp measurements may be less invasive than sharp measurements. the impossibility of joint sharp measurements of complementary pairs of observables can be modulated into the possibility of _approximate_ joint measurements of such observables, _provided_ the inaccuracies are allowed to obey a universal heisenberg uncertainty relation. likewise, the complete destruction of momentum information by a sharp position measurement can be avoided if an _unsharp_ position measurement is performed. the trade-off between the information gain in the approximate measurement of one observable and the disturbance of (the distribution of) its complementary partner observable was found to be an instance of the joint-measurement uncertainty relation. these results, some of which were made precise in very recent investigations, open up a range of interesting new questions and tasks. in particular, it will be important to find operational measures of inaccuracy that are applicable to all types of observables, whether bounded or unbounded, discrete or continuous. this would probably enable a formulation of a universal form of joint measurement uncertainty relation for arbitrary pairs of (noncommuting) observables, thus generalizing the relations presented here for the special case of complementary pairs of continuous observables such as position and momentum.

p. busch. can quantum theoretical reality be considered sharp? in p. mittelstaedt and e.w. stachow (eds.), _recent developments in quantum logic_, bibliographisches institut, mannheim, pp. 81-101, 1985.
in this contribution i review rigorous formulations of a variety of limitations of measurability in quantum mechanics . to this end i begin with a brief presentation of the conceptual tools of modern measurement theory . i will make precise the notion that quantum measurements necessarily alter the system under investigation and elucidate its connection with the complementarity and uncertainty principles .
comparative studies of cities adopt classifications, such as cultural or geographical criteria, and then apply analytical tools to characterize the existing groups in morphological terms. however, in recent space syntax investigations devoted to the comparative classification of urban textures, predefined categories have been avoided and groups have been interpreted as a result of the analysis. methods for automatic classification or grouping, broadly termed _hierarchical clustering_, have been discussed in . the general idea behind hierarchical clustering is that the elements of any set have similarities and differences that can be mapped as distances in a multi-dimensional space in which each characteristic (variable) represents an axis. clusters are then created by grouping isolated elements or subgroups or, alternatively, by splitting the set into smaller groups, according to the distances between them. in the present paper, we propose a new automatic classification method based on the structural statistics of the so-called justified graphs used in urban space syntax theory.

most real-world networks can be considered complex by virtue of features that do not occur in simple networks. the encoding of cities into non-planar dual graphs (sec. [sec:graphrepresentations]) reveals their complex structure . if cities were perfect grids in which all lines have the same length and number of junctions, they would be described by regular graphs exhibiting a high level of similarity no matter which part of the urban texture is examined. this would create a highly accessible system providing multiple routes between any pair of locations. it was believed that pure grid systems are easy to navigate due to this high accessibility and to the existence of multiple paths between any pair of locations. however, although the urban grid minimizes descriptions, in the ideal grid all routes are equally probable: the morphology of the perfect grid does not differentiate main spaces, and movement tends to be dispersed everywhere. alternatively, if cities were purely hierarchical systems (like trees), they would clearly have a main space (a hub, a single route between many pairs of locations) that connects all branches and controls movement between them. this would create a highly segregated, sprawl-like system with severe social consequences . however, real cities are neither trees nor perfect grids, but a combination of these structures emerging from social and constructive processes. they maintain enough differentiation to establish a clear hierarchy, resulting from a process of negotiation between the public processes (like trade and exchange) and the residential process preserving the traditional structure. the emergent urban network usually has a very complex structure, which is therefore naturally subjected to analysis by _complex network theory_. in order to illustrate the applications of complex network theory methods to the structural investigation of dual graphs representing urban environments, we have studied five different compact urban patterns. two of them are situated on islands: manhattan (with an almost regular grid-like city plan) and the network of venice canals (imprinting the joint effect of natural, political, and economical factors acting on the network during many centuries). a toy illustration of the dual-graph encoding itself is given below.
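the sketch applies the street-name identification principle to an invented primary map (the names and junctions are illustrative data, not from any of the studied cities; the networkx package is required): whole streets become nodes of the dual graph, and two streets are linked whenever they share a junction.

```python
import itertools
import networkx as nx

# toy primary map: each junction lists the named streets meeting there
junctions = [
    {"main st", "1st ave"}, {"main st", "2nd ave"}, {"main st", "broadway"},
    {"broadway", "1st ave"}, {"broadway", "canal st"}, {"2nd ave", "canal st"},
]

dual = nx.Graph()
for meeting in junctions:
    # every pair of streets sharing this junction gets an edge in the dual graph
    dual.add_edges_from(itertools.combinations(sorted(meeting), 2))

for street in sorted(dual.nodes):
    print(street, "crosses", dual.degree(street), "other streets")
```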
in the old city center of venice, which stretches across 122 small islands in the marshy venetian lagoon along the adriatic sea in northeast italy, the canals serve the function of roads, and every form of transport is on water or on foot. we have also considered two organic cities founded shortly after the crusades and developed within medieval fortresses: rothenburg ob der tauber, the medieval bavarian city preserving its original structure from the 13th century, and the downtown of bielefeld (altstadt bielefeld), an economic and cultural center of eastern westphalia. to supplement the study of urban canal networks, we have investigated the one in the city of amsterdam. although it is not actually isolated from the national canal network, it is bound to the delta of the amstel river, forming a dense canal web exhibiting a high degree of radial symmetry.

(table 1: some features of the studied dual city graphs)

in tab. 2, we have presented the structural distances between the dual graphs of the five compact urban patterns, calculated in accordance with eq. ([scalar_distance]). it is important to note that the structural distances given in tab. 2 have been calculated independently for each pair of urban textures, with reference to their sizes and distributions of far-away neighbors. it is obvious that they do not belong to the same space and therefore cannot be immediately compared.

the _degree distribution_ has become an important concept in complex network theory describing the topology of complex networks. it originates from the study of random graphs by erdős and rényi . the importance of the implemented street identification principle is worth mentioning for investigations of the degree statistics of dual city graphs. the comparative investigations of different street patterns performed in , implementing the icn principle, reveal scale-free degree distributions for the vertices of dual graphs. however, in it has been reported that under the street-name approach the dual graphs exhibit small-world character, but scale-free degree statistics can hardly be recognized. the results on the degree statistics of the dual graphs of the compact urban patterns analyzed in accordance with the above-described street identification principle are compatible with those reported in . in general, compact city patterns do not provide us with sufficient data to conclude on the universality of degree statistics. the probability degree distributions for the dual graph representations of the five compact urban patterns mentioned in tab. 1 have been studied by us in . to give an example, in fig. [fig1_09] we sketch the log-log plot of the number of streets in manhattan versus the number of their junctions,
$$P(k)=\frac{N(k)}{N},$$
where $N(k)$ is the number of streets (nodes of the dual graph) crossing precisely $k$ other streets and $N$ is the total number of streets. these numbers are displayed by points. the solid line corresponds to the cumulative degree distribution
$$P_{\mathrm{cum}}(k)=\sum_{k'\ge k}P(k'),$$
the probability that the degree is _greater than or equal_ to $k$. the presentation of degree data by the cumulative distribution has an advantage over the degree distribution ([degdistr01]), since it reduces the noise in the distribution tail . it is remarkable that the observed profiles are broad, indicating that a street in a compact city can cross a widely varying number of other streets, in contrast with a regular grid. a minimal computation of such a cumulative distribution is sketched below.
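the sketch uses a random surrogate degree sequence standing in for the manhattan data (the poisson parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
degrees = rng.poisson(5, size=355) + 1          # surrogate degree sequence

ks = np.arange(1, degrees.max() + 1)
P = np.array([(degrees == k).mean() for k in ks])      # P(k) = N(k)/N
P_cum = P[::-1].cumsum()[::-1]                  # P_cum(k) = sum_{k' >= k} P(k')

for k in (1, 5, 10):
    print(f"P(k={k}) = {P[k-1]:.3f},  P(degree >= {k}) = {P_cum[k-1]:.3f}")
```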
at the same time, the distributions usually have a clearly noticeable maximum, corresponding to the most probable number of junctions an average street has in the city. the long right tail of the distribution, which can decay even faster than a power law, is due to just a few "broadways," embankments, and belt roads crossing many more streets than an average street in the city . it has been suggested in that irregular shapes and faster decays in the tails of degree statistics indicate that the connectivity distributions are _scale-dependent_. a possible reason for the non-universal behavior is that, in the mapping and descriptive procedures, inadequate choices in determining the boundaries of the maps, or flaws in the aggregation process, can damage the representation of very long lines. while scale-sensitive in general, the degree statistics of dual city graphs can nevertheless be approximately universal within a particular range of scales. the scale-dependence of degree distributions indicates that the degree statistics alone does not give us enough information to reach a qualified conclusion on the structure of urban spatial networks. thus, the statistics of far-away neighbors is targeted to close this gap, providing us with a new method for the automatic structural classification of cities.

the work has been supported by the volkswagen foundation (germany) in the framework of the project "network formation rules, random set graphs and generalized epidemic processes" (contract no az.: i/82 418). the authors acknowledge the multiple fruitful discussions with the participants of the workshop _madeira math encounters xxxiii_, august 2007, ccm - centro de ciências matemáticas, funchal, madeira (portugal).

m. major, "are american cities different? if so, how do they differ," in _proc. international space syntax symposium_, m. major, l. amorim, f. dufaux (eds), university college london, london, vol. 3, 09.1-09.14 (1997).
k. karimi, "the spatial logic of organic cities in iran and in the united kingdom," in _proc. international space syntax symposium_, m. major, l. amorim, f. dufaux (eds), university college london, london, vol. 1, 05.1-05.17 (1997).
v. medeiros, f. holanda, "urbis brasiliae: investigating topological and geometrical features in brazilian cities," in a. van nes (ed.), _proc. international space syntax symposium_, delft, faculty of architecture, section of urban renewal and management, pp. 331-339 (2005).
l. figueiredo, l. amorim, "continuity lines in the axial system," in a. van nes (ed.), _proc. international space syntax symposium_, delft, faculty of architecture, section of urban renewal and management, pp. 161-174 (2005).
k.j. kansky, _structure of transportation networks: relationships between network geometry and regional characteristics_, research paper 84, department of geography, university of chicago, chicago, il (1963).
h.j. miller, s.l. shaw, _geographic information systems for transportation: principles and applications_, oxford univ. press, oxford (2001).
w.g. hansen, _journal of the american institute of planners_ 25, 73-76 (1959).
a.g. wilson, _entropy in urban and regional modelling_, pion press, london (1970).
m. batty, _a new theory of space syntax_, ucl centre for advanced spatial analysis publications, casa working paper 75 (2004).
b. jiang, "a space syntax approach to spatial cognition in urban environments," position paper for the nsf-funded research workshop _cognitive models of dynamic phenomena and their representations_, october 29-31, 1998, university of pittsburgh, pittsburgh, pa (1998).
a. cardillo, s. scellato, v. latora, and s. porta, _phys. rev. e_ 73, 066107 (2006).
s. porta, p. crucitti, and v. latora, _physica a_ 369, 853 (2006).
b. jiang, c. claramunt, _environ. plan. b: plan. des._ 31, 151 (2004).
degree distributions of graph representations for compact urban patterns are scale-dependent. therefore, the degree statistics alone does not give us enough information to reach a qualified conclusion on the structure of urban spatial networks. we investigate the statistics of far-away neighbors and propose a new method for the automatic structural classification of cities.
in contemporary science and engineering modeling many situations arise in which the physical system consists of a lattice of discrete interacting units .the role of discreteness in modifying the behavior of solutions of continuum nonlinear pdes has recently been increasingly appreciated .the relevant physical contexts can be quite diverse , ranging from the calcium burst waves in living cells to the propagation of action potentials through the tissue of the cardiac cells and from chains of chemical reactions to applications in superconductivity and josephson junctions , nonlinear optics and waveguide arrays , complex electronic materials , the dynamics of neuron chains or lattices or the local denaturation of the dna double strand . whether the phenomenon in question is the propagation of an excitation wave along a neuron lattice , the electric field envelope in an optical waveguide array , or the behavior of a tissue consisting of an array of individual cells , we would often like to model the system through a coarse level " effective continuum evolution equation that retains the essential features of the actual ( discrete ) problem . typically computational modeling of such systems involves two steps : the derivation of effective continuum equations , followed by their analysis through traditional numerical tools . in this paperwe attempt to circumvent the derivation of explicit ( closed ) continuum effective equations , and analyze the effective behavior directly .this is accomplished through short , appropriately initialized simulations of the detailed discrete process , a procedure that we call the coarse time stepper " .these simulations provide estimates of the quantities ( residuals , action of jacobians , time derivatives , frchet derivatives ) that would be directly evaluated from the effective equation , had such an equation been available .the estimated quantities are processed by a higher level numerical procedure ( in this case , the recursive projection method , rpm , of shroff and keller ) which computes the effective , macroscopic behavior ( in this case , traveling waves and their coarse bifurcations ) .a more general discussion of the combination of coarse time stepping with continuum numerical techniques beyond rpm can be found in .we have recently demonstrated such an approach to the computation of the effective behavior ( in some sense , homogenization ) of spatially heterogeneous problems .this paper constitutes an extension of this idea to spatially discrete problems .the paper is organized as follows : we begin with a brief review of the coarse time stepper for spatially discrete problems .we then discuss our illustrative problem ( a front for a discrete reaction - diffusion system ) and its properties . a description of our implementation of the coarse time stepper for the bifurcation analysis of this particular problemis then presented , followed by numerical results .we conclude with a discussion of an alternative approach that involves the derivation of an explicit effective evolution equation ( based on pad approximations ) , and of the scope and applicability of our method .consider a discrete system where each unknown is associated with a point on a lattice in space . 
in the discussion here, we consider a one-dimensional regular lattice for simplicity. higher-dimensional and/or possibly irregular lattices can be treated in a similar way. we denote the unknowns $u_\ell$, with $\ell\in\mathbb{Z}$, and the corresponding points $x_\ell$, such that $x_\ell=\ell\,\Delta x$, where $\Delta x$ is the lattice spacing. we assume that the system is governed by the ordinary differential equations
$$\frac{du_\ell}{dt}=f(t,u_{\ell-n},\ldots,u_{\ell+n}),$$
where $n$ is an integer representing the range of interaction between lattice points. we want to describe this discrete system dynamics through a continuous function that models the "coarse" behavior of the unknowns on the lattice: $v(t,x_\ell)\approx u_\ell(t)$ in some appropriate sense. we call $v$ the _coarse continuous solution_, and we assume that $n$ is not large and that there exists an effective, spatially continuous evolution equation for $v$ of the form
$$v_t=P(t,v,\partial_xv,\ldots,\partial_x^mv),$$
for some $P$ and integer $m$. such an effective equation for $v$ should "average over" the detailed discrete structure of the medium; if there are no macroscopic variations of the discrete medium, this equation should therefore be translationally invariant; for the moment, we will confine ourselves to this case. in terms of $P$, we can express this as: $P$ does not depend on $x$, and if $v$ and $\tilde v$ are two solutions to the effective equation satisfying $\tilde v(0,x)=v(0,x+\sigma)$ for all $x$, then $\tilde v(t,x)=v(t,x+\sigma)$ for all time $t$, all $x$, and all shifts $\sigma$.

it is interesting to consider what the result of integrating such an effective equation with a particular continuum initial condition would physically mean. there clearly exists an uncertainty in how such a continuum initial condition would be imparted to (sampled by) the lattice. one way would be to set $u_\ell(0)=v_0(x_\ell)$ for all $\ell$, but we could equally well set $u_\ell(0)=v_0(x_\ell+\sigma)$ for any $\sigma$. there exists, therefore, a one-parameter uncertainty, parametrized by a continuous shift. simulations resulting from different lattice samplings of the same continuum initial condition could be quite different. this is best illustrated by thinking of a single-peaked function as the continuum initial condition: the peak may lie precisely at a lattice point, or could fall in-between lattice points. it is reasonable to consider as a useful effective continuum equation one which takes into account all possible shifts of the initial condition within a cell; in analogy with our earlier work , we would like to analyze an effective equation describing the expected result taken over all possible shifts of sampling the initial condition by the lattice.

we will use the coarse time stepper approach to simulate an effective equation of this kind. in this setting, we approximate $v$ by the coarse time stepper solution at discrete times $t_n=nT$, where $T$ is the _time horizon_ of the coarse time stepper. using the terminology of this framework, we perform the following steps, starting from a continuous initial condition $v_0$ (a schematic implementation of the three steps is given after this list).

* _lifting._ the initial data is "lifted" to an ensemble of $n_c$ different initial states of the lattice by sampling shifted copies,
$$u^j_\ell(0)=v_0(x_\ell+js),\qquad s=\Delta x/n_c,\qquad j=0,\ldots,n_c-1.$$
setting $u^j(0)=\{u^j_\ell(0)\}$, we write this symbolically as $u^j(0)=\mu_jv_0$, where the $\mu_j$ are called the lifting operators. in this case they simply sample a continuous function.
* _evolve._ each ensemble of initial data is evolved till time $T$ according to the "true dynamics" ,
$$u^j(T)=\mathcal{U}_Tu^j(0),\qquad j=0,\ldots,n_c-1,$$
where $\mathcal{U}_T$ is the solution operator of the lattice system. this step thus generates an ensemble of solutions at time $T$.
* _restrict._ via the restriction operator $\mathcal{M}$, the ensemble of solutions is brought back to a continuous function,
$$v(T,x)=\mathcal{M}\{u^j(T)\},\qquad j=0,\ldots,n_c-1.$$
to ensure consistency we require that $\mathcal{M}\{\mu_jv\}=v$.
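the following skeleton is a minimal sketch under simplifying assumptions of ours: linear interpolation for the lifting, plain averaging of the re-aligned copies for the restriction, and a placeholder lattice integrator standing in for the true dynamics.

```python
import numpy as np

def coarse_step(v0, xg, evolve, T, dx, n_c=4):
    """one coarse time step: lift -> evolve -> restrict.

    v0     : coarse solution sampled on the grid xg
    evolve : black-box lattice integrator, evolve(u, T) -> u(T)
    dx     : lattice spacing; n_c shifted copies, shift s = dx / n_c
    """
    s = dx / n_c
    copies = []
    for j in range(n_c):
        u0 = np.interp(xg + j * s, xg, v0)            # lift: sample copy j
        uT = evolve(u0, T)                            # evolve true dynamics
        copies.append(np.interp(xg, xg + j * s, uT))  # re-align copy j
    return np.mean(copies, axis=0)                    # restrict: average

# usage with a trivial "lattice dynamics" (pointwise decay), just to run it:
xg = np.linspace(0.0, 10.0, 101)
v0 = np.exp(-(xg - 5.0) ** 2)
v1 = coarse_step(v0, xg, evolve=lambda u, T: u * np.exp(-T),
                 T=0.1, dx=xg[1] - xg[0])
print(v1.shape)
```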
the restriction operator is typically defined as follows. the evolved solutions $u^j(T)$ are thought of as sample values of a function $\tilde v$ such that $\tilde v(x_\ell+js)=u^j_\ell(T)$. the function is recovered by interpolating the sample values, and the restriction is finally given as a coarse-scale filtering of the interpolant. for later times we define the coarse solution recursively by applying the same construction; hence,
$$v(nT,x)=\mathcal{M}\{\mathcal{U}_T\mu_j\}\,v((n-1)T,x).$$
the hope is that the coarse time stepper solution, at these discrete points in time, can be obtained from a closed evolution equation of the form above, whose solution (defined for all $t$) agrees, at least approximately, with the coarse solution obtained from the procedure above at the discrete times $t_n=nT$. we will refer to this procedure as the _coarse time stepper_.

in order to approximate the coarse solution numerically, we must use a finite representation of it. we let $v^n=\{v^n_i\}$ be this representation at time $t_n$. the elements could be nodal values, cell averages or, more generally, coefficients for finite elements or other basis functions. let $\mathcal{R}$ be the operator realizing the function from the finite representation, $v(t_n,\cdot)=\mathcal{R}v^n$. we also require that the restriction operator projects on the subspace spanned by the finite representation, and we can redefine it to also convert the projected function to this representation. symbolically, we then write the coarse time stepping
$$v^{n+1}=\mathcal{M}\{\mathcal{U}_T\mu_j\mathcal{R}\}\,v^n=:G(v^n).$$
note that we may not be able to write down the explicit expression for $G$ or the equation for $v$, but our definition of $G$ allows us to realize its time-$T$ map numerically in a straightforward fashion.

applied directly to the simulation, the coarse time-stepper does nothing to reduce the cost of detailed computation with the discrete dynamics. it is only in conjunction with other techniques (like projective integration, or matrix-free fixed point techniques) that the coarse time stepper may provide computational or analytical benefits. here we will make use of the coarse time stepper in conjunction with the recursive projection method (rpm), to perform stability and bifurcation analysis of certain types of solutions of the (unavailable) coarse evolution equation. for a schematic illustration of the coarse time stepper with rpm, see . rpm helps locate fixed points, allows us to trace fixed point branches and locate their local bifurcations; when the bifurcations we are interested in do not involve fixed points, the formulation has to be modified. how this is done depends on the application; for the type of solutions considered here (traveling fronts), the appropriate modification is discussed below.

the effects of discreteness on the propagation of traveling wave solutions have been documented and analyzed in many different settings over the last two decades. from the pinning of traveling waves in discrete arrays of coupled torsion pendula and hamiltonian models, to the trapping of coherent structures in dissipative lattices of coupled cells (see also references therein), the role of spatial discreteness has triggered a large interest in a diverse host of settings. effective equations capable of describing the nature of the solutions of discrete problems should successfully capture the effects of discreteness on the traveling wave shape and speed. more importantly, they should be capable of accurately predicting qualitative transitions (bifurcations) that are _inherently due to the discreteness_.
the most prominent of these transitions is probably the pinning of traveling waves and fronts, often observed when the lattice spacing becomes sufficiently large. to illustrate the performance of our proposed coarse equation in capturing such front pinning, we have chosen what is arguably a prototypical spatially discrete problem capable of exhibiting it: a one-dimensional lattice with scalar bistable on-site kinetics and nearest-neighbor diffusive coupling between lattice sites. our test problem is, therefore, a discrete reaction-diffusion system described by
$$\frac{du_\ell}{dt}=\frac{1}{h^2}\left(u_{\ell-1}-2u_\ell+u_{\ell+1}\right)+f(u_\ell),\qquad f(u)=2u(u-1)(a-u),\quad a=0.45,$$
where $h$ is the lattice spacing. this can serve as a model of, e.g., individual cells in cardiac tissue which are resistively coupled through gap junctions (see e.g. and references therein); in this case the solution $u_\ell$ would correspond to the electrical potential of the cells. for small $h$ the system possesses solutions that can be characterized as _discrete traveling fronts_: these solutions have a near-constant shape and travel in a "lurching" manner. when $h$ becomes sufficiently large, front propagation fails (front pinning); in our example, this happens at a critical spacing $h^*$, see . the front speed for an infinite lattice approaches the asymptotic "pde speed" value as the lattice spacing tends to zero. we will examine how faithful the coarse time stepper is to the properties of the solutions of the full discrete model. our numerical simulations are restricted to a finite domain, using $N$ grid points. at the boundaries, we prescribe neumann-type conditions
$$u_N-u_{N-1}=0,\qquad u_0-u_{-1}=0.$$
this should model the full problem accurately as long as the (relatively narrow) front is positioned sufficiently far from the boundary.

in this section we detail the procedures associated with the coarse time stepper applied to the test problem on the finite interval $[0,L]$, where $L=Nh$ and the cell locations are $x_\ell=\ell h$, with $\ell=0,\ldots,N-1$. our choice of finite representation of the coarse solution are nodal values $v_i$, evaluated at coarse nodes $\tilde x_i$. for many solution shapes fourier interpolation would be a natural interpolation operator realizing the coarse solution from the nodal values; we denote direct fourier interpolation by $I^F$. we could then define the corresponding lifting operators via the _shifting_ operator $S^F_s$, which uses the nodal points as interpolation nodes. in our case, however, the solution is not periodic on $[0,L]$ and we get large errors if we use $I^F$ directly. instead we apply fourier interpolation to the _differences_ of the sequence. we thus use the modified shifting operator $\Sigma_s$ given by
$$\Sigma_s:=C\,S^F_s\,D,\qquad (Cu)_\ell:=1+\sum_{j=0}^{\ell}u_j,\qquad (Du)_\ell:=\begin{cases}u_0-1,&\ell=0,\\ u_\ell-u_{\ell-1},&\ell>0,\end{cases}$$
so that $CD$ is the identity and the differenced sequence of a front is close to periodic. we then define the lifting operators (acting directly on the finite representation) as $\mu_j:=\Sigma_{js}$, with $s=\Delta x/n_c$. the restriction operator is also defined using the shifting operators, but now with negative shifts, using $\{\tilde x_i\}$ as interpolation nodes: each evolved copy is shifted back into alignment, and the aligned copies are averaged and projected onto the finite representation. these choices of $\mu_j$ and $\mathcal{M}$ are consistent in the required sense: by the sampling theorem, sampling the shifted interpolant and shifting it back reproduces the nodal values, and consequently $\mathcal{M}\{\mu_jv\}=v$. we should also remark that, in the special case of periodic data and direct fourier interpolation, this definition of $\mathcal{M}$ amounts to applying a projection on the lowest fourier modes.
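before turning to the time integration, a direct simulation of the lattice model above illustrates the traveling and pinned regimes. this is an added sketch under assumptions of ours: the coupling is $1/h^2$ as in the equation above, a simple rk2 integrator stands in for the crank-nicolson scheme used below, and the front position is tracked by its level crossing; the precise pinning threshold depends on these conventions.

```python
import numpy as np

a = 0.45
f = lambda u: 2 * u * (u - 1) * (a - u)     # bistable on-site kinetics

def front_speed(h, L=200, t_end=400.0, dt=0.01):
    u = (np.arange(L) > L // 2).astype(float)          # step initial condition
    def rhs(u):
        lap = np.zeros_like(u)
        lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
        lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]   # neumann-type ends
        return lap / h**2 + f(u)
    steps = int(t_end / dt)
    pos_mid = None
    for n in range(steps):
        k1 = rhs(u)
        u += dt * rhs(u + 0.5 * dt * k1)               # midpoint (rk2) step
        if n == steps // 2:
            pos_mid = np.argmax(u > 0.5)               # front index at t_end/2
    return abs(h * (np.argmax(u > 0.5) - pos_mid) / (t_end / 2))

for h in (0.5, 1.0, 2.0, 4.0):
    print(f"h = {h}: measured front speed = {front_speed(h):.4f}")
```

for small h the speed approaches a finite "pde" value, while for sufficiently large h it drops to zero: the front is pinned.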
note that if we had used direct fourier interpolation, our definition of $\mathcal{M}$ would be equivalent to lowpass filtering of the lined-up copies described in , top right. when we replace $S^F_s$ by $\Sigma_s$ we do not retain exactly this property, and a definition of $\mathcal{M}$ based on simple lowpass filtering is no longer consistent. however, our procedure still corresponds to a type of lowpass filtering, although a more complicated one.

for the time integration of the lattice equations we use the crank-nicolson method, treating the nonlinear term explicitly. thus, with $w^n_\ell\approx u_\ell(t_n)$ and time step $\Delta t$, the iterates are given by
$$w^{n+1}_\ell-\frac{\Delta t}{2h^2}\left(w^{n+1}_{\ell-1}-2w^{n+1}_\ell+w^{n+1}_{\ell+1}\right)=w^n_\ell+\frac{\Delta t}{2h^2}\left(w^n_{\ell-1}-2w^n_\ell+w^n_{\ell+1}\right)+\Delta t\,f(w^n_\ell),$$
for $\ell=0,\ldots,N-1$, together with the free boundary conditions
$$w^n_{-1}-w^n_0=0,\qquad w^n_N-w^n_{N-1}=0.$$
in our computations we use a fixed time step $\Delta t$.

the coarse solution as we have defined it is a (practically) constant-shape moving front. in order to convert this moving state into a stationary state, we can factor out the movement through a procedure based on _template fitting_ ( , see also ) which pins the traveling front at a fixed $x$-coordinate. this is performed by a "pinning-shift" operator, which we denote $\Pi$. our coarse time stepping is then modified to
$$v^{n+1}=\Pi\,\mathcal{M}\{\mathcal{U}_T\mu_j\mathcal{R}\}\,v^n=:\tilde G(v^n).$$
this formulation has a steady state at the constant-shape moving front. let us start from the basic, fourier-based, pinning-shift operator. after introducing a template function $s(x)$, we define
$$\Pi^Fv:=S^F_cv,\qquad c=\arg\max_{c'}\int_0^L(I^Fv)(x+c')\,s(x)\,dx.$$
hence, $\Pi^Fv$ is the shifted version of $v$ that best fits the template, in the sense that it maximizes the $L^2$-inner product between its fourier interpolant and $s$. upon convergence, the effective front speed can be deduced from the converged value of $c$ and the time reporting horizon simply by taking $c/T$. with a sinusoidal template we can compute the inner product explicitly,
$$\frac1L\int_0^L(I^Fv)(x+c)\,s(x)\,dx=\hat v_0-\Re\!\left(\hat v_1e^{2\pi ic/L}\right),$$
where the $\hat v_k$ are the fourier coefficients of $v$. hence, since $\hat v_0$ is real, $c$ should be chosen such that $\hat v_1e^{2\pi ic/L}$ is real and negative. this is easily implemented numerically together with the fourier shift. for the same reasons as in the implementation of the coarse time stepper, we would like to avoid direct fourier interpolation of the solution, since it is not periodic. therefore, we modify $\Pi^F$ to operate on differences instead; in the same spirit as before, we let $\Pi:=C\,\Pi^FD$, with $C$ and $D$ defined above, and we still deduce the effective propagation speed from the converged shift. an important property of the fourier-based pinning-shift operator is that it maps all shifted copies of a profile to the same pinned profile, which follows from the sampling theorem. for other types of interpolation, such as piecewise polynomial interpolation, the pinning-shift operator will not have this property, and a steadily moving coarse shape may not translate into a fixed point of $\tilde G$. our modification still has this property, since the difference operator intertwines the two kinds of shifts. a compact discrete implementation of this template fitting is sketched below.
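the template-fitting step amounts to a cross-correlation. the following sketch is a simplified version: it maximizes the inner product over discrete shifts only, and applies the fourier shift to periodic data directly rather than to the differenced sequence.

```python
import numpy as np

def pinning_shift(v, template):
    """return v shifted (circularly, via fft) to best match the template."""
    V, S = np.fft.fft(v), np.fft.fft(template)
    # corr[m] = sum_n v[n+m] * template[n], for all discrete shifts m
    corr = np.fft.ifft(V * np.conj(S)).real
    c = int(np.argmax(corr))
    # apply the best shift in fourier space: v[n] -> v[n+c]
    k = np.fft.fftfreq(v.size)
    v_pinned = np.fft.ifft(V * np.exp(2j * np.pi * k * c)).real
    return v_pinned, c

x = np.linspace(0, 20, 256, endpoint=False)
s_template = 0.5 * (1 + np.tanh(x - 10))      # stored front template
v = np.roll(s_template, 7)                    # a front displaced by 7 cells
v_pinned, c = pinning_shift(v, s_template)
print("recovered shift:", c,
      " max error after pinning:", np.abs(v_pinned - s_template).max())
```

dividing the recovered shift by the reporting horizon gives the effective front speed, exactly as in the continuous-shift version described above.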
rpm is an iterative procedure which can accelerate the location of fixed points of processes; under certain conditions it can help locate steady states of dynamic processes (in particular, discretized parabolic pdes). it can be an acceleration technique for the solution of nonlinear equations, and a stabilizer of unstable numerical procedures (as it was first presented ). consider the fixed point problem
$$F(u;\lambda)=u,$$
and let $F_u$ be the jacobian of $F$.

* like the newton method, rpm can converge rapidly to the fixed point solution provided the initial guess is good enough; the convergence occurs even if $F_u$ has a few eigenvalues larger than one in modulus. the computational cost and convergence rate depend on the eigenvalues of $F_u$: optimally, there should be a clear gap in the spectrum between small and large (near the unit circle) eigenvalues, and a limited number of large (in norm) eigenvalues, for rpm to perform well.
* $F_u$ never needs to be evaluated directly, only its action on vectors. we can therefore apply rpm to any "black box" code that defines a function $F$; it is a "matrix-free" method.
* as a by-product, rpm also computes approximations of the largest eigenvalues of $F_u$. this gives approximate stability information about the fixed point.

when rpm is used for the computer-assisted bifurcation analysis of steady states of (usually dissipative evolution) pdes, the function $F$ represents a _time-stepper_: a subroutine that takes initial data and reports the solution of the pde after some fixed time (the reporting horizon). a fixed point then satisfies $F(u;\lambda)=u$. the conventional way of finding the steady state using a time-stepper would be to call it many times in succession, in effect integrating the pde for a long time, corresponding to solving the problem by simple fixed point (picard) iteration. rpm can improve this approach in two important respects. first, the convergence can be significantly accelerated: the nature of many transport pdes usually encountered in engineering modeling (the action of viscosity, heat conduction, diffusion, and the resulting spectra) dictates that there exists a separation of time-scales, which translates into an eigenvalue gap in the spectrum of $F_u$ at the steady state. second, rpm converges even if the steady state is slightly unstable, i.e. when $F_u$ has a few eigenvalues outside the unit circle. it may thus be possible to compute (mildly) unstable branches of the bifurcation diagram using forward integration (but in a non-conventional way, dictated by the rpm protocol). rpm still retains the simplicity of the fixed point iteration, in the sense that no more information is needed than just the time-integration code. this code, which may be a legacy code, and can incorporate the best physics and modeling available for the process, is used by rpm as a black box.

rpm can be seen as a modified version of fixed point iteration. it adaptively identifies the subspace corresponding to large (in norm) eigenvalues of $F_u$, hence the directions of slow or unstable time-evolution in phase space. in these directions the fixed point iteration is replaced by (approximate) newton iteration. more precisely, suppose $u\in\mathbb{R}^N$. let $\mathbb{P}$ be the maximal invariant subspace of $F_u$ corresponding to the $k$ largest eigenvalues, and let $\mathbb{Q}$ be its orthogonal complement in $\mathbb{R}^N$. the solution is decomposed as $u=p+q$, where $p=Pu$, $q=Qu$, and $P$, $Q$ are the projection operators in $\mathbb{R}^N$ on $\mathbb{P}$ and $\mathbb{Q}$. these are constructed from an orthogonal basis $V_p$ of $\mathbb{P}$,
$$P=V_pV_p^T,\qquad Q=I-V_pV_p^T.$$
a toy version of this split iteration is sketched below.
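in this sketch the slow/unstable subspace is taken as known and fixed (rather than adaptively identified, as in the full method), and the small projected jacobian is estimated by finite differences; the test map is an invented example with one expanding direction.

```python
import numpy as np

def rpm_fixed_point(F, u0, Vp, tol=1e-10, maxit=200, eps=1e-6):
    """find u with F(u) = u: newton in span(Vp), picard on the complement."""
    u = u0.copy()
    k = Vp.shape[1]
    for _ in range(maxit):
        Fu = F(u)
        if np.linalg.norm(Fu - u) < tol:
            return u
        p = Vp.T @ u
        # finite-difference jacobian of the projected map, A ~ Vp^T F_u Vp
        A = np.empty((k, k))
        for j in range(k):
            A[:, j] = Vp.T @ (F(u + eps * Vp[:, j]) - Fu) / eps
        dp = np.linalg.solve(np.eye(k) - A, Vp.T @ (Fu - u))  # newton step
        q_new = Fu - Vp @ (Vp.T @ Fu)                         # picard update
        u = Vp @ (p + dp) + q_new
    raise RuntimeError("rpm did not converge")

# usage: a map contracting everywhere except along the first coordinate
def F(u):
    v = 0.2 * u + 0.1 * np.tanh(u)
    v[0] = 1.5 * u[0] - 0.5 * u[0] ** 3 + 0.1   # expanding direction
    return v

Vp = np.zeros((5, 1)); Vp[0, 0] = 1.0           # known slow/unstable basis
u_star = rpm_fixed_point(F, np.full(5, 0.4), Vp)
print("fixed point:", np.round(u_star, 6),
      " residual:", np.linalg.norm(F(u_star) - u_star))
```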
in a pseudo-arclength continuation context the solution is $(u(\sigma),\lambda(\sigma))$, where $\sigma$ parameterizes the bifurcation curve. in addition to $F(u;\lambda)=u$ we then use an algebraic equation to be able to handle turning points,
$$\mathcal{N}(u,\lambda,\Delta\sigma):=(u-u_i)^T\dot u_i+(\lambda-\lambda_i)\dot\lambda_i-\Delta\sigma=0,$$
where $(u_i,\lambda_i)$ refers to the converged solution at the previous point on the continuation curve and $(\dot u_i,\dot\lambda_i)$ to the tangent there. the solution is advanced using a predictor-corrector method; the predictor solution is obtained via extrapolation from previous points. comparing a first order extrapolation,
$$\lambda^*=\lambda_i+\Delta\sigma\,\dot\lambda_i,\qquad u^*=u_i+\Delta\sigma\,\dot u_i,$$
with a second order extrapolation, $\lambda^{**}$ and $u^{**}$, and requiring that
$$\left(\|u^{**}-u^*\|,\;|\lambda^{**}-\lambda^*|\right)<\mathrm{tol},$$
the stepsize $\Delta\sigma$ is determined; here tol is a user-specified tolerance. as the corrector method, we use rpm with pseudo-arclength continuation, see . starting from the predictor values, the iterative scheme is given by the picard update
$$q^{n+1}=QF(u^n,\lambda^n),$$
followed by a newton step for the increments $(\Delta p^n,\Delta\lambda^n)$, and the assembly
$$u^{n+1}=p^n+V_p\,\Delta p^n+q^{n+1},\qquad\lambda^{n+1}=\lambda^n+\Delta\lambda^n,$$
where the left-hand side of the newton system consists of the partial derivatives of $F-u$ and of $\mathcal{N}$ with respect to $p$ and $\lambda$. the iterates will converge to the solution of the coupled system under the assumptions discussed above. if the number of large-norm eigenvalues, $k$, is limited, the dimension of $V_p$ and of the projected jacobian in the newton iteration, $V_p^TF_uV_p$, remains small; only this small matrix needs to be inverted. for a more complete description of rpm we refer to .

in this section we present some numerical results using the coarse time stepper and the procedure described above to simulate an effective equation for the discrete problem. we will start by discussing the "exact" bifurcation diagram of the discrete system, which we attempt to approximate. we will then show results obtained through the coarse time stepper, and discuss the effect of time stepper "construction parameters" like the reporting time horizon $T$ (the time to which the lattice is integrated within the coarse time stepper) and the number of different initial shifted copies $n_c$.

the first figure shows the bifurcation diagram of the discrete problem as a function of the parameter $h$, the lattice spacing, in the regime close to the onset of pinning. for lattice spacings smaller than the critical value the system has, as we discussed, an attracting, front-like solution that travels; its motion is _modulated_ as it "passes over" the lattice points. for an infinite lattice, this modulated traveling solution possesses a discrete translational invariance: the shape of the modulating front is shifted by one (resp. $m$) lattice spacing(s) after one (resp. $m$) modulation period(s); this helps us define its effective speed (see ). as $h$ approaches zero, for an infinite lattice, the discrete front approaches the continuum front of the pde, and its speed (the lattice spacing divided by the period of the modulation) approaches the pde front speed (see ). if we identify shapes shifted by one lattice constant, the attractor appears as a limit cycle. as the lattice spacing approaches the critical value $h^*$, the speed of propagation approaches zero (the period of the "limit cycle" approaches infinity); asymptotically, the period grows like $(h^*-h)^{-1/2}$. as discussed in , what occurs is a saddle-node infinite period (sniper) bifurcation: a saddle-node bifurcation where both new fixed points appear "on" the limit cycle. for larger values of $h$ the "saddle" and the "node" move away from each other, and what used to be the limit cycle is now comprised of the saddle, the node, and both sides of the one-dimensional unstable manifold of the saddle, which asymptotically approach the node.
since the medium has a discrete translational invariance , this makes sense if an initial condition gives rise to a front eventually pinned at some location in the discrete medium , the shift of this initial condition by one lattice spacing will eventually get trapped one lattice spacing further .this saddle - node bifurcation can be seen in a ; linearizing around the saddle front will give a positive eigenvalue , while the corresponding eigenvalue for the node front would be negative .since we look at the problem in discrete time , what is plotted is the _ multiplier _ , where is the reporting horizon .the saddle front has a multiplier larger than 1 , while the corresponding multiplier for the stable node is less than 1 ; both multipliers asymptote to 1 at the sniper ( ) .b shows the bifurcation diagram in terms of the front traveling speed . since both the saddle and the node fronts are pinned ( have zero speed ) they both fall on the zero axis ; we plotted their eigenvalues in a to distinguish between them . the true traveling speed ( broken line )is compared with the effective traveling speed predicted by a coarse time - stepper using copies within each unit cell , and a reporting horizon of .the coarse time stepper speed is a byproduct of fixed point computation and continuation with it ; short bursts of detailed simulation are used in the rpm framework to construct a contraction mapping that converges to a fixed point of the time stepper .the final shift upon convergence ( from the pinning - shift computation ) , divided by the time stepper reporting horizon gives us an estimate of the effective speed " .inspection of b indicates that the coarse time stepper never predicts a speed that is exactly zero ; yet it gives a good approximation of the effective speed , all the way from small to the near neighborhood of the pinning transition , when the effective speed becomes small .we will return to discussing this issue of small residual motion " for the coarse time stepper shortly . to give an indication of when the procedure stops being quantitative , we have included the curve in b : disagreement starts well in the regime where the effective movement is _ less _ than one unit cell per observation period . in the next sectionwe will compare the goodness of approximation " of our coarse time stepper to the effective speed predicted by the pad approach to extracting effective continuum equations .it is interesting that the coarse time stepper sometimes predicts a small hysteresis loop at low speeds , relatively close to true pinning " ; notice in a the unstable ( larger than one ) multipliers for the brief saddle part of this loop .we will discuss a tentative rationalization of this below . illustrates the effects of time stepper construction " parameters on the effective behavior predicted by the time stepper : the reporting time - horizon , for two different sets of shifted copies ( and ) as well as the effect of the number of copies for a fixed time horizon ( ) . augmenting the time stepper reporting horizonis shown in a - b ; clearly , in both cases , extending the time stepper reporting horizon extends the region over which its effective speed agrees with the true problem closer to .larger numbers of copies ( ) also perform slightly better than smaller numbers ( ) . 
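the coarse time stepper construction itself can be written down compactly. the following is a schematic sketch (names and the helper functions are illustrative assumptions, not the authors' code): lift a coarse profile into several copies shifted through the unit cell, evolve each with the detailed time stepper to the reporting horizon, shift each result back, and average.

```python
import numpy as np

def coarse_time_stepper(u_bar, step, shift, m, horizon):
    """One application of the coarse time stepper for the expected
    (translation-averaged) front profile.

    u_bar   : coarse profile sampled on the lattice
    step    : detailed time stepper, step(u, horizon) -> profile at t = horizon
    shift   : shift(u, s) -> profile translated by the fraction s of one
              unit cell (e.g. by Fourier interpolation for smooth profiles)
    m       : number of shifted copies sprinkled through the unit cell
    horizon : reporting time horizon of the detailed simulation
    """
    copies = [shift(u_bar, j / m) for j in range(m)]             # lift
    evolved = [step(u, horizon) for u in copies]                 # detailed runs
    back = [shift(v, -j / m) for j, v in enumerate(evolved)]     # shift back
    return np.mean(back, axis=0)                                 # restrict
```

a map of this form can be handed directly to the rpm sketch given earlier as the black-box `F`, which is how the coarse fixed-point and continuation computations reported here are organized.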
in all casesthe qualitative behavior is the same : ( a ) successful approximation of the effective speed until reasonably close to true pinning ; ( b ) all differences occur when the average front motion is significantly less than one unit cell per reporting horizon ; ( c ) there is always a slight residual motion , which possibly after a small hysteresis loop close to true pinning eventually becomes negligible. we now turn to the discussion of the slight residual motion of the coarse time stepper at large beyond . for an infinite domain , the saddle and node pinned fronts appearing there are invariant to translations by one lattice spacing; for a large enough computational domain we still see two pinned front solutions per cell .when we sprinkle " initial conditions along the cell , depending on their location with respect to the saddle front , the trajectories may either be attracted to the stable node to the right " or to the one to the left " of the saddle .it is instructive to represent these solutions as in a , in a way that identifies the right " node front with the left " one ; here translation along the lattice corresponds roughly to rotation along the circle .the node is denoted by a black circle , and the saddle by a white one .the small squares represent the initial positions of our initial condition copies " .the fate of our distribution of initial conditions is governed by their initial angle " on the circle as our time horizon grows all initial conditions will asymptote to a stable front , either the left one ( moving counterclockwise on the circle ) or the right one ( clockwise movement ) .we now see clearly the physical reason behind the net residual motion for any finite time horizon for the coarse time stepper .an initial condition that is put down at random " in a unit cell deep in the pinned regime , even if it never exits this unit cell , will gradually traverse the part of the circle separating it from the closest node front .when the critical parameter value is approached from the pinned side , the saddle and the node fronts approach each other on the circle , on their way to coalescing at the sniper bifurcation point .b shows how this process becomes manifest in the coarse time stepper computations , using the problem in as our example .deep in the pinning regime ( high , marked ) the relative phase " of the saddle and the node pinned fronts on the circle remains roughly constant .the distance each member of our ensemble of initial conditions has traversed during one time horizon can be deduced from b : the copy with the largest negative movement is the one closest to the saddle but on its left ( copy number two ) .one can similarly rationalize the labelling of the remaining curves in b. when is reduced approaching the onset of pinning , at some point the saddle front starts moving appreciably towards the node front . 
as part of this movement, it ``sweeps'' the circle counterclockwise; at it has its first encounter with one of our initial conditions, the closest one on the left. when the saddle moves ``past'' it into the regime marked , this copy, which was responsible for the largest negative displacement, now approaches asymptotically the node front on the right, performing the largest _positive_ displacement (and so on for the remaining copies). eventually, in the propagating regime, marked , and for long enough reporting horizons, the initial ``phase'' difference (a fraction of a cell) becomes negligible compared to the net displacement of each point (several cells). the real movement in phase space is shown in panel c for two different . in these subfigures, the -axis represents where corresponds to the location of the front, more specifically . the -axis represents . the initial positions of the copies are indicated by small squares and their locations at the time horizon are marked by filled circles. the labels refer to the same copies as in b. as the reporting time horizon of the time stepper goes to infinity, it is clear that one can compute the average residual movement from the asymptotic position of the saddle front, i.e., from the relative extent of the circle to the ``right'' and to the ``left'' of the saddle front. the most reasonable point to ``declare'' as an estimate of the true pinning from coarse time-stepper computations would come from a polynomial extrapolation of the ``successful'' regime (close to the tip of the apparent ``parabola'' in ); alternatively, a value of where the speed is small enough (well below one unit cell per time horizon) and its variation with number of copies and time horizon is below a user-prescribed tolerance would also serve this purpose. while there is no well defined pinning bifurcation for the coarse time stepper (since pinning is an inherently non-translationally invariant bifurcation), the procedure can provide a good approximation of the effective shape and speed of the traveling fronts, as well as ``common sense'' ways of numerically estimating the true pinning.

in this section, we propose an alternative scheme for capturing effects of discreteness, by means of a (now explicit) continuum equation. this pde is obtained by means of padé approximations, which can be used to approximate discreteness in a quasi-continuum way, through the use of pseudo-differential operators. in particular, starting from the taylor expansion for analytic functions (see e.g. ), one can express spatial discreteness through the shift operator,
$$u(x \pm h) = e^{\pm h\, \partial_x}\, u(x), \qquad \frac{u(x+h) - 2u(x) + u(x-h)}{h^{2}} = \frac{4}{h^{2}} \sinh^{2}\!\left( \frac{h\, \partial_x}{2} \right) u(x).$$
expanding, one then obtains
$$\frac{4}{h^{2}} \sinh^{2}\!\left( \frac{h\, \partial_x}{2} \right) = \partial_x^{2} + \frac{h^{2}}{12}\, \partial_x^{4} + \mathcal{O}(h^{4}).$$
finally, regrouping the terms in the manner of padé yields the approximation ([geq5])
$$\frac{4}{h^{2}} \sinh^{2}\!\left( \frac{h\, \partial_x}{2} \right) \approx \frac{\partial_x^{2}}{1 - \frac{h^{2}}{12}\, \partial_x^{2}}.$$
we now use the pseudo-differential operator approximation in ([geq5]) to convert the discrete model in into the pde approximation ([geq6]), in which the discrete laplacian is replaced by this operator. such approaches were introduced and used extensively by rosenau and collaborators to regularize nonlinear wave equations, particularly of the klein-gordon type. ([geq6]) clearly emulates the discrete setting in some key aspects of the relevant spectral operator properties (i.e.
, of the discrete laplacian in comparison with the pseudo - differential operator of ( [ geq6 ] ) ) .for example , considering plane wave solutions of the form , we obtain in the discrete case the linearized dispersion relation ( around a uniform state ) in the case of ( [ geq6 ] ) , the corresponding equation becomes apart from sharing the continuum limit , the two dispersion relations share another qualitative feature which is particularly important ; namely , the presence of a lower bound in the continuous spectrum .notice , however , that the two lower bounds are different ( in the discrete case versus in the pad approximation ) .it would then be of interest to alleviate this spectral discrepancy , as well as to match the discrete operator ( if possible ) to a higher order in the taylor expansion this can be achieved by a natural generalization in the form of a continued fraction such as e.g. , in order to use ( [ geq17 ] ) in practice ( i.e. , for computational purposes ) , we convert the three fractions into one of the form where a simple ( algebraic ) reduction of to has been used .we then use taylor expansion of the denominator to convert the expression of ( [ geq19 ] ) into one resembling ( [ geq14 ] ) . by matching up to the exact taylor expansion ,we obtain three algebraic equations for and . in this way, we obtain a set of solutions for and .we use here the set , , . an additional benefit ( to the matching of the taylor expansion up to correction terms of ) that should be highlighted hereis the value of the lower bound expression for , which is much closer to the theoretical lower bound of than the prediction of the leading order approximation presented previously .the resulting evolution equation will then read : both ( [ geq6 ] ) and ( [ newmodel ] ) can be numerically implemented in a straightforward manner , by means of the spectral techniques described in .we have performed numerical simulations of the front propagation , using modes in the spectral decomposition of ( [ geq6 ] ) and ( [ newmodel ] ) .we will refer to these equations as the ( pad ) models a and b respectively .a fourth order runge kutta algorithm has been used for the time integration . for each value of , we identify the position of the front as the point where the ordinate of the front acquires the value .the linear interpolation scheme suggested in has been implemented and has proved to be an efficient front tracking algorithm in all the examined cases .our results of this quasi - continuum approach to the discrete problem can be summarized in and .shows the speed of the fronts in pad models a and b respectively .we can observe that the critical value of beyond which trapping of the front occurs is significantly displaced from the actual one of , for .in particular , for model a , , while for model b , the corresponding critical value is .we can deduce that the latter model is closer to the actual physical reality , even though the relevant prediction is still considerably higher than its actual value for the discrete model .in part at least , these results ( and the discrepancy from the actual discrete case ) can be justified by observing .the bottom panel of the figure suggests that the _ only _ way in which the front can stop in these quasi - continuum pad approximations is by becoming practically a vertical shock - like structure . in this case , the `` mass '' of the front which is given by ( see e.g. 
, and references therein ) becomes practically infinite .this means that the inertia of the front becomes too big for the front to move and hence `` pinning '' occurs .however , notice that this process of pinning is significantly different than the details of the discrete structure of the problem ( such as e.g. , the saddle - node bifurcation and the transition to pinned solutions ) .the translationally invariant quasi - continuum pad approximations of models a and b do not `` see '' such features .instead , they incorporate the well - known feature of front steepening for stronger discreteness and the criticality of the latter feature eventually leads to pinning . an additional pointer to the fact that such ( pseudo - differential operator ) models are `` eligible '' to pinning is that they are devoid of some of the important symmetries that are inherently related to traveling such as the galilean invariance in the case of continuum bistable equation or the lorentz invariance of its hamiltonian ( nonlinear klein - gordon ) analog .we presented a computer - assisted approach for the _ solution _ of effective , translationally invariant equations for spatially discrete problems without deriving these equations in closed form .assuming that such an equation exists , its time - one map is approximated through the coarse time stepper , constructed through an ensemble of appropriately initialized simulations of the detailed discrete problem . combining the coarse time stepper with matrix - free based numerical analysis techniques , e.g. contraction mappings such as rpm ,can then help analyze the unavailable effective equation .we are currently exploring the use of our coarse time stepper with coarse projective integration .matrix - free eigenanalysis techniques should also be explored , especially since they can help test the fast slaving " hypothesis underlying the existence of a closed effective equation ( see , for example , the discussion in ) .we also presented initial computational results exploring the effect of certain construction parameters " of the approach : the number of shifted copies in the ensemble of initial conditions , as well as the time - horizon used .we included a comparison between our approach and a particular way of obtaining explicit approximate translationally invariant evolution equations for such a problem ( the pad approximation ) .more work is necessary along these lines , exploring the relation of our approach with traditional homogenization methods at small lattice spacings .a discrete problem whose detailed solution can be obtained explicitly ( perhaps a piecewise linear kinetics problem ) or at least approximated very well analytically over short times , would be the ideal context in which to study these issues .several extensions of the approach can be envisioned , and might be interesting to explore . a time stepper based approach can be applied without modification to hybrid discrete - continuum media , e.g. continuum transport with a lattice of sources or sinks , such as cells secreting ligands into and binding them back from a liquid solution , .it is clear that it can be tried in more than one dimensions , and for regular lattices of different geometry . for irregular lattices the averaging over all shifts " we performed here for periodic media can be substituted with a monte carlo sampling over the distribution of possible lattices that takes into account what we know about the statistical geometry of the lattices . 
in this paperwe assumed that an equation existed and closed for the _ expected shape _ of the solution .conceivably one can attempt to develop time steppers not only for the expectation ( the first moment of a distribution of possible results ) , but , say , for the expectation _ and _ the standard deviation of possible results ; the lifting operator would then have to be appropriately modified .finally , our time stepper here was built on short simulations of the _ entire _ detailed discete system in space .hybrid simulations , where a known , explicit effective equation is accurate over _ part _ of the physical domain can be done ; an `` overall hybrid coarse '' time stepper ( explicit equation over part of the domain , and the coarse time stepper in this paper over the rest of the domain ) will then be used . in a multiscale context , we have proposed gaptooth " and patch dynamics " simulations , where the present coarse time stepper integrations are performed not over the entire domain , but over a mesh of small computational boxes " . both hybrid and gaptooth " simulations , if possible , require careful boundary conditions for the handshaking " between the continuum equation and the discrete simulations , or the discrete simulations in distant boxes , effectively implementing smoothness of the solution of the unavailable effective equation ( e.g ) .we close with a discussion of the onset of pinning " , the transition around which our test example of the coarse time stepper was focused .continuum effective equations such as the ones discussed here through the numerical time - stepping procedure do _ not _ , strictly speaking , possess a bifurcation at the critical point of the genuinely discrete problem . in this effective process, the bifurcation is smeared out and rendered a `` continuum transition '' ( see , for example , materials science models of the onset of movement of a front , ) . on the other hand , one might argue that this is an acceptable , and possibly optimal way for a continuum equation to represent the discrete bifurcation to pinning .we can see that other procedures , such as the discreteness - emulating pad type ones , lose a lot of the quantitative structure of the relevant transition . on the other hand , if a continuum _ differential _ ( as opposed to pseudo - differential ) equation was constructed to `` model '' this transition , the latter would possess other artificial features such as a topologically mandated , unstable branch of traveling wave solutions .it is conceivable that the short hysteresis loop sometimes predicted by the coarse time stepper close to pinning conditions is a vestige " of this unstable branch that translationally invariant equations would necessarily predict . in conclusion, it can be appreciated that genuinely discrete problems and continuum ones have inherent differences that can not be fully captured by emulating ( or `` summarizing '' ) the one context through the other .nevertheless , the approach proposed here , combined with a `` common sense '' interpretation of its results with respect to the genuinely discrete problem , performs in a satisfactory way for the modeler , even for the `` most different '' features between discrete and continuum models .part of the research for this paper was carried out while olof runborg held a post - doctoral appointment with the program for applied and computational mathematics at princeton university , supported by nsf kdi grant dms-9872890 .panayotis g. 
kevrekidis gratefully acknowledges support from a umass frg , nsf - dms-0204585 and from the eppley foundation for research .kurt lust is a postdoctoral fellow of the fund for scientific research flanders .this paper presents research results of the belgian programme on interuniversity poles of attraction , initiated by the belgian state , prime minister s office for science , technology and culture .the scientific responsibility rests with its authors .ioannis g. kevrekidis gratefully acknowledges the support of afosr ( dynamics and control ) and an nsf - itr grant .
|
we propose a computer - assisted approach to studying the effective continuum behavior of spatially discrete evolution equations . the advantage of the approach is that the coarse model " ( the continuum , effective equation ) need not be explicitly constructed . the method only uses a time - integration code for the discrete problem and judicious choices of initial data and integration times ; our bifurcation computations are based on the so - called recursive projection method ( rpm ) with arc - length continuation ( shroff and keller , 1993 ) . the technique is used to monitor features of the genuinely discrete problem such as the pinning of coherent structures and its results are compared to quasi - continuum approaches such as the ones based on pad approximations . * mathematical subject classification . * 65p30 , 74q99 , 37l60 , 37l20 , 39a11 .
|
non - stationary signals have a time - dependent spectral content , therefore , an adequate characterization of these signals requires joint time and frequency information . among the many time - frequency ( quasi)distributions that have been proposed ,_ wigner - ville _ s ( wv) for an analytic signal , is considered to be optimal in the sense that it satisfies the marginals , it is time - frequency shift invariant and it possesses the least amount of spread in the time - frequency plane. however , the wv distribution has , in general , positive and negative values and may be non - zero in regions of the time - frequency plane where either the signal or its fourier transform vanish .therefore , despite the fact that the wv distribution is an accurate mathematical characterization of the signal , in the sense that it can be inverted by its interpretation for signal detection and recognition is no easy matter , because of the negative and spurious components .the origin of this problem lies in the fact that and being non - commuting variables , they can not be simultaneously specified with absolute accuracy and , as a result , there can not be a joint probability density in the time - frequency plane . therefore no joint distribution , even if positive , may be interpreted as a probability density .looking back at the original motivation leading to the construction of the time - frequency distributions , namely the characterization of non - stationary signals , we notice that we are asking for more than we really need . to characterize a non - stationary signal what we needis time and frequency - dependent information , not necessarily a joint probability density , a mathematical impossibility for non - commuting variables .the solution is very simple .the time density projects the signal intensity on the time axis and the spectral density projects on the frequency axis . to obtain the required time - frequency information ,all we need is a family of time and frequency functions , depending on a parameter , which interpolates between time and frequency .projecting the signal intensity on this variable , that is , computing the density along the , one obtains a function that has , for each , a probability interpretation .the simplest choice for is a linear combination the parameter being the pair . for definiteness we may choose being a reference time and a reference frequency adapted to the signal to be studied .the function interpolates between and and , as we will prove below , contains a complete description of the signal . for each the function is strictly positive and being a bona - fide probability ( in ) causes no interpretation ambiguities .a similar approach has already been suggested for quantum optics and quantum mechanics , the non - commuting variable pairs being respectively the quadrature phases and the position - momentum .this approach , in which to reconstruct an object , be it a signal in signal processing or a wave function in quantum mechanics , one looks at its probability projections on a family of rotated axis , is similar to the _ computerized axial tomography _ ( cat ) method .the basic difference is that in cat scans one deals with a pair of commuting position variables and here we deal with a plane defined by a pair of non - commuting variables .for this reason we call the present approach _ non - commutative tomography _ ( nct ) .the paper is organized as follows . 
in section 2 we construct the nct signal transform and show its positivity and normalization properties. we also establish the invertibility of the transformation, which shows that it contains a complete description of the signal, and establish its relation to the wv distribution. because the nct transform involves the square of the absolute value of a linear functional of the signal, it is actually easier to compute than bilinear transforms like wv. in section 3 we work out the analytical form of the nct transform for some signals and also display the marginals in some examples. we also deal with the problem of using nct to detect the presence of signals in noise for small _signal to noise ratios_ (snr). here the essential observation is that, for small snr, the signal may be difficult to detect along $t$ or $\omega$; however, it is probable that there are other directions on the plane along which detection might be easier. it is the consistent occurrence of many such directions that supplies the detection signature. finally, in section 4 we point out that the nct approach may also be used for other pairs of non-commuting variables of importance in signal processing. as an example we work out the relevant formulas for the scale-frequency pair.

because the fourier transform of a characteristic function is a probability density, we compute the marginal distribution for the variable $s = \mu t + \nu \omega$ using the characteristic function method. frequency and time are operators acting in the hilbert space of analytic signals and, in the time representation, the frequency operator is $\omega = -i\, d/dt$. the characteristic function is
$$c(k) = \left\langle e^{ik(\mu t + \nu \omega)} \right\rangle = \int f^{*}(t)\, e^{ik(\mu t + \nu \omega)}\, f(t)\, dt,$$
where $f(t)$ is a normalized signal. the fourier transform of the characteristic function is a probability density
$$w(s; \mu, \nu) = \frac{1}{2\pi} \int c(k)\, e^{-iks}\, dk.$$
after some algebra, one obtains the marginal distribution ([2.2]) in terms of the analytical signal
$$w(s; \mu, \nu) = \frac{1}{2\pi |\nu|} \left| \int \exp\!\left( \frac{i\mu t^{2}}{2\nu} - \frac{ist}{\nu} \right) f(t)\, dt \right|^{2},$$
with normalization $\int w(s; \mu, \nu)\, ds = 1$. for the case $(\mu,\nu) = (1,0)$ it gives the distribution of the analytic signal in the time domain and for the case $(\mu,\nu) = (0,1)$ the distribution of the analytic signal in the frequency domain. the family of marginal distributions contains complete information on the analytical signal. this may be shown directly. however, it is more interesting to point out that there is an invertible transformation connecting $w(s;\mu,\nu)$ to the wigner-ville quasidistribution $W(t,\omega)$, namely
$$w(s; \mu, \nu) = \int e^{-ik(s - \mu t - \nu \omega)}\, W(t, \omega)\, \frac{dk\, d\omega\, dt}{(2\pi)^{2}}$$
and
$$W(t, \omega) = \frac{1}{2\pi} \int w(s; \mu, \nu)\, e^{i(s - \mu t - \nu \omega)}\, d\mu\, d\nu\, ds.$$
therefore, because the wv quasidistribution has complete information, in the sense of eq. ([1.2]), so has $w(s;\mu,\nu)$.

we compute the nct transform for some analytic signals:

(i) _a complex gaussian signal_
$$f(t) = n\, e^{-a t^{2} + b t}, \qquad \mathrm{re}\, a > 0,$$
with $n$ a normalization constant. this signal minimizes the robertson-schrödinger uncertainty relation; in quantum mechanics, it corresponds to a correlated coherent state. the nct transform is a gaussian,
$$w(s; \mu, \nu) = \frac{1}{\sqrt{2\pi \sigma^{2}(\mu,\nu)}} \exp\!\left( - \frac{(s - \bar{s}(\mu,\nu))^{2}}{2 \sigma^{2}(\mu,\nu)} \right),$$
with mean $\bar{s}(\mu,\nu)$ and dispersion $\sigma^{2}(\mu,\nu)$ determined by $a$, $b$, $\mu$ and $\nu$. as $(\mu,\nu)$ rotates from the time to the frequency axis, eq. ([3.6]) shows how the initial gaussian evolves, changing its maximum and width; thus, we have squeezing in the quadrature components and their correlation.
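the marginal reconstructed above is straightforward to evaluate numerically. the sketch below is illustrative code (not from the paper; grid sizes are ad hoc): it computes $w(s;\mu,\nu)$ by direct quadrature for a finite-time signal with two successive frequencies, of the kind displayed in the figures described further on.

```python
import numpy as np

def nct_tomogram(f, t, mu, nu, s_grid):
    """w(s; mu, nu) = |int f(t) exp(i mu t^2/(2 nu) - i s t / nu) dt|^2
    / (2 pi |nu|), by direct quadrature (nu != 0). `f` holds samples of the
    analytic signal on the uniform grid `t`."""
    dt = t[1] - t[0]
    chirp = f * np.exp(1j * mu * t**2 / (2 * nu))
    vals = [np.abs(np.sum(chirp * np.exp(-1j * s * t / nu)) * dt)**2
            for s in s_grid]
    return np.array(vals) / (2 * np.pi * np.abs(nu))

# a finite-time signal with two successive frequencies
t = np.linspace(0.0, 30.0, 4000)
f = np.where(t < 15.0, np.exp(2j * t), np.exp(5j * t))
f = f / np.sqrt(np.trapz(np.abs(f)**2, t))       # normalize the signal

theta = np.pi / 3                                 # one rotated direction
s = np.linspace(-5.0, 25.0, 600)
w = nct_tomogram(f, t, np.cos(theta), np.sin(theta), s)
print(np.trapz(w, s))   # each slice is a probability density: ~1
```

scanning theta between 0 and pi/2 produces the family of marginals that interpolates between the time and frequency densities.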
in the case has a purely squeezed state , which minimizes the heisenberg uncertainty relation \(ii ) _ a normalized superposition of two gaussian signals _ where is , \qquad i=1,2\,\ ] ] and ^{1/4}\exp \left [ -\frac 18\,\frac{\left ( b_i+b_i^{*}\right ) ^2}{a_i+a_i^{*}}\right]\ ] ] the superposition coefficients being complex numbers , the normalization constant reads \right ) ^{-1/2}\ ] ] computing the marginal distribution by eq.([2.3 ] ) we arrive at a combination of three gaussian terms \ } \end{array}\ ] ] where we have the contribution of two real gaussian terms , \qquad i=1,2,\ ] ] and the superposition of two complex gaussians \ ] ] the parameters of the real gaussians are the dispersion and mean the parameters of the complex gaussian are and \ ] ] and the complex amplitude of the complex gaussian contribution is \ ] ] \(iii ) _ finite - time signals _ herewe consider signals which vanish for all other times and compute the nct for one signal and for the superposition of two such signals .the parameters and are complex numbers .the normalization constant is \big|\frac { \sqrt \pi}{2}\left[\mbox { erfc}\left ( \sqrt { a_i+a_i^*}\left [ t_{2i}-\frac { b_i+b_i^*}{2\left(a_i+ a_i^*\right)}\right]\right)\right.\nonumber\\ & & -\left.\mbox { erfc}\left ( \sqrt { a_i+a_i^*}\left [ t_{1i}-\frac { b_i+b_i^*}{2\left(a_i+ a_i^*\right)}\right]\right)\right]\big|^{-1/2}\label{3.22}\end{aligned}\ ] ] where erfc is the function using eq.([2.3 ] ) , we arrive at the following marginal distribution \right)\nonumber\\ & & -\mbox { erfc}\left(\sqrt { a_i-\frac { i\mu}{2\nu } } \left [ t_{1i}-\frac { \nu b_i - is}{2\nu a_i - i\mu } \right]\right)\big|^2\label{3.24}\end{aligned}\ ] ] in the limit the marginal distributions ( [ 3.24 ] ) reduce to the gaussian distribution given by ( [ 3.14 ] ) . in the case the distribution ( [ 3.24 ] ) describes a sinusoidal signal of finite duration .the normalization constant takes the limit value for a superposition of two finite - time signals with the signals and as in ( [ 3.21 ] ) , the normalization constant is given by eq.([3.12 ] ) with overlap integral \nonumber\\ & & \left\{\mbox { erfc}\left(\sqrt { a_1+a_2^*}\left [ t_a-\frac { b_1+b_2^ * } { 2\left(a_1+a_2^*\right)}\right]\right)\right.\nonumber\\ & & -\left.\mbox { erfc}\left ( \sqrt { a_1+a_2^*}\left [ t_b-\frac { b_1+b_2^*}{2\left(a_1 + a_2^*\right)}\right]\right)\right\ } \label{3.26}\end{aligned}\ ] ] the marginal distribution for the superposition signal has the same form as eq .( [ 3.13 ] ) but with a changed normalization constant , the distributions and given by eq . ( [ 3.24 ] ) , and an interference term \right)\right.\nonumber\\ & & -\left.\mbox { erfc}\left(\sqrt { a_1-\frac { i\mu}{2\nu } } \left [ t_{11}-\frac { \nu b_1-is}{2\nu a_1-i\mu } \right]\right)\right\}\nonumber\\ & & \times \left\{\mbox { erfc}\left(\sqrt { a_2-\frac { i\mu}{2\nu}}\left [ t_{22 } -\frac { \nu b_2-is}{2\nu a_2-i\mu}\right]\right)\right.\nonumber\\ & & -\left.\mbox { erfc}\left(\sqrt { a_2-\frac { i\mu}{2\nu } } \left [ t_{12}-\frac { \nu b_2-is}{2\nu a_2-i\mu } \right]\right)\right\}^ * \label{3.27}\end{aligned}\ ] ] the case corresponds to the combination of a finite time chirp and a finite time sinusoidal signal shown in one of the figures below .\(iv ) _ graphical illustrations _we have plotted for some signals . in all cases we use and as in eq.([1.5 ] ) with and .all signals are finite time signals and in each case we display a three - dimensional and a contour plot . 
# figs 1a , b .the signal is although the number of periods , during which is signal is on , is relatively small , the two contributing frequencies are clearly seen in the separating ridges . #figs 2a , b .the signal is once again the contributions separate as grows , but notice the intermediate interference region which is a signature of the time - sequence of the frequencies occurrence and of their relative phase .# figs 3a , b .the signal is contrasts the signature shapes of a chirp contribution and a regular sinusoidal pulse .notice that all values have a probability interpretation .therefore all peaks or oscillations have a direct physical meaning and , as opposed to the time - frequency quasidistributions , we need not worry about spurious effects .this is particularly important for the detection of signals in noise , as we will see in the next example .\(v ) _ detection of noisy signals by nct _ in fig.4a and 4b we have plotted a time signal and its spectral density . it is really very hard to decide , from these plots , where this signal might have originated from .now we plot the nct transform ( fig.4c ) and its contour plot ( fig.4d ) with the normalization and .it still looks quite complex but , among all the peaks , one may clearly see a sequence of small peaks connecting a time around to a frequency around .in fact the signal was generated as a superposition of a normally distributed random amplitude and random phase noise with a sinusoidal signal of the same average amplitude but operating only during the time interval .this means that , during the observation time , the signal to noise power ratio is .the signature that the signal leaves on the nct transform is a manifestation of the fact that , despite its low snr , there is a number of particular directions in the plane along which detection happens to be more favorable .the reader may convince himself of the soundness of this interpretation by repeating the experiment with different noise samples and noticing that each time the coherent peaks appear at different locations , but the overall geometry of the ridge is the same . of course , to rely ona ridge of small peaks for detection purposes only makes sense because the rigorous probability interpretation of renders the method immune to spurious effects .the method may also be applied to other pairs of non - commuting variables for which , as in the time - frequency case , there can not be a joint probability density .consider the pair time - scale , where scale is the operator in the plane we consider the linear combination the relevant characteristic function is and the nct transform is , as before , the fourier transform of leading to
|
the characterization of non - stationary signals requires joint time and frequency information . however , time and frequency being non - commuting variables , there can not be a joint probability density in the plane and the time - frequency distributions , that have been proposed , have difficult interpretation problems arising from negative or complex values and spurious components . as an alternative we propose to obtain time - frequency information by looking at the marginal distributions along rotated directions in the plane . the rigorous probability interpretation of the marginal distributions avoids all interpretation ambiguities . applications to signal analysis and signal detection are discussed as well as an extension of the method to other pairs of non - commuting variables .
|
the empirical investigation of the dynamics of stock returns has been an area of intensive research since the beginning of last century ( see thesis of bachelier ) .the understanding of dynamics observed in price fluctuations are of paramount importance to activities such as forecasting for investment decision support , risk modelling and derivative pricing .moreover , the complexity of their structure , as a result of agent - market interactions , is an indicator of the nature of overall market conditions and organization .this complexity may also reflect the level of agent s rationality and risk tolerance .it becomes apparent that the explanation of certain qualities of the structure of market dynamics , provides the opportunity to improve the understanding of their current and future states .clearly such an exercise is of great importance to all market participants that aim to minimize their risks and protect their investments and profits .viewing economies and markets in particular as a dynamical system , we can draw many inferences by examining their observable outputs : sequences of stock prices and the corresponding returns .crack and ledoit have first revealed a _ compass rose _ " pattern discovered in scatter diagrams of returns against their lagged values ( i.e. , phase portraits ) , such as the one depicted in fig .[ fig : fig1](a ) .they attributed the pattern to price clustering and discreteness and especially the tick size and suggested reasons for its appearance .our aim is by using an approach consistent with the tradition of econophysics , to continue their research by revealing yet more interesting patterns and showing that the compass rose is a mask for more subtle dynamics . in this paperwe establish the case of existence of nonstochastic nonlinear dynamics via the calculation of the bds statistic .we use this as a discriminating statistic for a permutation test based framework ( surrogate data analysis " ( sda ) by ) that allows us to support our results at various levels of significance . as a second step , following , we reduce the level of noise in the original returns sequences using wavelet based thresholding ( the waveshrink technique by ) . we then recalculate the bds statistic on the denoised sequences and their surrogates and test again for the absence of linear dynamics . meanwhile we produce the compass rose of the denoised sequences only to reveal an entirely different structure that is strongly reminiscent of a dynamical attractor .our findings are consistent with the hypothesis that the returns sequence dynamics may be characterized by nonlinearities that can be of a complex - deterministic character .the results produced here may bring us closer to establishing that a significant part of the driving force generating financial prices could indeed be chaotic .[ insert figure [ fig : fig1 ] about here . ]crack and ledoit suggested first the use of phase portraits in order to reveal the compass rose .this implied the investigation of some sort of time - dependency among stock return sequences .this could be linear or nonlinear , a result of stochastic ( random ) or nonstochastic ( deterministic ) data generating process ( dgp ) , or even a mixture of the above behind the asset price dynamics .the authors also proposed that the formations revealed could be of use for calibrating tests of the existence of chaos in returns sequences . 
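the mechanism crack and ledoit describe is easy to reproduce synthetically. the sketch below is illustrative (not the data pipeline used in this paper): it puts a random walk on a discrete price grid with a fixed tick size and scatter-plots consecutive returns. the rays of the compass rose emerge because each return is approximately an integer number of ticks divided by a slowly varying price level.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
tick, p0, n = 0.25, 100.0, 5000     # tick size, start price, length: all ad hoc

# prices live on a discrete grid: moves of -2..2 ticks per day
moves = rng.integers(-2, 3, size=n)
prices = p0 + tick * np.cumsum(moves)
returns = np.diff(np.log(prices))

plt.scatter(returns[:-1], returns[1:], s=1, alpha=0.4)
plt.xlabel(r"$r_{t-1}$")
plt.ylabel(r"$r_t$")
plt.title("compass rose from price discreteness")
plt.show()
```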
since then various papers have appeared on this theme ( see ) .we believe that two issues can be addressed further : 1 .as note , observed stock prices are not always the true equilibrium prices and hence the image of market dynamics observed through them could be partial . moreover , in markets where significant fixing takes place , there is a variable amount of error introduced into the price level which is then passed to the returns ( ( see * ? ? ? * for a discussion on this ) ) .2 . generating logarithmic or percentage returns , i.e. , 1st order differencing , is a _ high - pass _ filter . in this respect, all return sequences will contain amplified noise .consequently , any interesting and possibly non - stochastic structures may be concealed and/or distorted .the importance of this becomes even greater if we take into account point ( 1 ) above . in the following pages ,we investigate further the issue of compass rose formations in stocks from the uk market .we analyze the daily closing prices of stocks in the ftse all share and especially the ftse 100 index , spanning the period 01/01/1970 to 5/30/2003 ( a maximum of 8717 observations ) .a total of 53 ftse100 stocks were available with a full ( homogeneous ) range of prices for the above time - span .remarkably , all 53 high - capitalization company prices and corresponding returns revealed the patterns we observe and report in this paper ( some more intensively and clearly than others ) .following ( see also ) , we investigated the possibility of the observed structures of the compass rose being a one - off " situation .the basic purpose of the sda procedure is to provide a framework that will allow us to deny the null hypothesis that the data are generated by a linear stochastic system .it basically comprises of two steps ( see for an extensive overview ) : * the production of data sets from a model which captures deliberately only certain linear " properties of the original sequence .these sets are called surrogate data " . * the rejection of the null hypothesis according to a calculation of a discriminating statistic .this will suggest that the original data is very unlikely to have been generated by a process consistent with the null hypothesis .if the value of the statistic calculated on the original data set is different from the sets of values obtained on the surrogate data , we have a clear indication for the rejection of the null .there are various different nulls , some more composite than others and each null is usually accompanied by its own procedure of surrogate data generation . for the purposes of this paper we followed refs .we thus generated phase - randomized amplitude - adjusted surrogates ( termed aaft " ) to test for the null hypothesis that the return sequences were monotonic nonlinear transformation of linearly filtered noise ( which is also maintained as the most interesting " ) .such surrogates are expected to exhibit the same spectral and distributional characteristics as in the original series , however they are purely linear processes . 
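for reference, the aaft construction used in what follows can be sketched in a few lines (following the theiler et al. recipe; the variable names are ours):

```python
import numpy as np

def phase_randomize(y, rng):
    """Randomize the Fourier phases of a real series, preserving its
    power spectrum."""
    Y = np.fft.rfft(y)
    phases = np.exp(2j * np.pi * rng.random(Y.size))
    phases[0] = 1.0                    # keep the mean
    if y.size % 2 == 0:
        phases[-1] = 1.0               # Nyquist coefficient must stay real
    return np.fft.irfft(Y * phases, n=y.size)

def aaft_surrogate(x, rng=None):
    """Amplitude-adjusted Fourier-transform surrogate: a realization of the
    null 'monotonic nonlinear transformation of linearly filtered noise',
    with the same amplitude distribution as x and a similar spectrum."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x)
    ranks = np.argsort(np.argsort(x))
    gauss = np.sort(rng.standard_normal(x.size))[ranks]   # gaussianize x
    g = phase_randomize(gauss, rng)                       # linear surrogate
    return np.sort(x)[np.argsort(np.argsort(g))]          # restore amplitudes
```

computing the discriminating statistic on a battery of such surrogates gives the empirical null distribution against which the statistic of the original series is compared.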
as a discriminating statistic we chose the bds test .we simulated aaft surrogate data from the original returns sequences , and produced the compass roses for various stocks .an example of an aaft surrogate set compass rose for the bp stock is presented in fig .[ fig : fig3](c ) .we can clearly see there that both the randomly shuffled sequence ( fig .[ fig : fig3](b ) ) and the aaft surrogates loose the compass rose structure whereas the bootstrapped sequence maintains it ( fig .[ fig : fig3](d ) ) .this was an initial indication that the results of clustering and discreteness may not be manifestations of linear - random dynamics .following the results of the sda analysis on the phase portraits , we chose to test for independence under an sda framework for a subset of 53 ftse100 stocks returns .we used the bds test as a discriminating statistic , and generated the aaft surrogate sets for each stock , testing the null at 5% , 2.5% and 1% significance levels . in tables[ tab : table1 ] and [ tab : table2 ] , we present the results of the sda . in table[ tab : table1 ] we quote the results for a bds test neighborhood size of times the standard deviation of each returns sequence , for significance levels , and .the results here refute clearly the null that the sequences are a monotonic nonlinear transformation of linearly filtered white noise .this is a strong indication of absence of linear dynamics and randomness and supports the premise of nonlinear deterministic complexity in the returns .the results of table [ tab : table2 ] are also supporting this finding .there we have provided more detail , checking for neighborhood sizes of , , and , where denotes the standard deviation of each returns sequence .the level of significance for table [ tab : table2 ] is .the results clearly show that the above null is strongly refuted .[ insert table [ tab : table1 ] about here . ] [ insert table [ tab : table2 ] about here . ] [ insert table [ tab : table3 ] about here . ] since the results of sda where pointing towards more complex , nonlinear dynamics ( possibly deterministic ) we tested as a next step , the returns sequences after these have been filtered for noise reduction . for each stock returns sequencewe produced a filtered version , using the waveshrink approach .we then produced aaft surrogates and tested for significance level . in table[ tab : table3 ] we produce the results for the bp stock , where the waveshrink routine has been applied for a daubechies 8 ( d8 ) wavelet .. wavelets here are a justified choice in order to avoid the bleaching " of the returns sequences , and preserve any delicate deterministic structures in the dgps .our approach is also consistent with refs .looking at the values of the bds statistic for the original prefiltered sequence and its aaft surrogates , as well as the p - value of the statistic for sizes of neighborhood ranging from to times the standard deviation , we can safely reject the null at a significance level . 
only for a size of neighborhood of standard deviation ( which is a considerable size ), we can reject the null at a level of significance of almost .searching for qualitative evidence of deterministic dynamics and aperiodic cycles we looked at the phase portraits of the denoised sequences .for example , in fig .[ fig : fig1 ] ( b ) we can clearly see the phase portrait for the bp denoised returns reveals dynamics that are similar to chaotic attractors .a detail of the core of the phase portrait in fig .[ fig : fig1 ] ( c ) exhibits dynamics that are very similar to that of the mackey - glass attractor in fig .[ fig : fig1 ] ( d ) .this appears to be in line with .[ insert figure [ fig : fig2 ] about here . ] [ insert figure [ fig : fig3 ] about here . ] another interesting diagram that reveals the effects of stock price clustering and discreteness is depicted in fig .[ fig : fig2 ] ( a ) .there we have plotted the prices of bp stock against the corresponding logarithmic returns .we can clearly see patterns of correlation and anticorrelation in the same diagram .this is the first time such patterns have been revealed in financial literature and they need to be investigated further . in nonlinear science , the phase portraits ( i.e. , the compass rose ) are usually called delay plots " whereas the plot of a sequence of prices from a function gainst its first derivative are called phase plots " .thus the diagram in fig .[ fig : fig2 ] ( a ) could be loosely termed as a phase plot .if we generate the same kind of display for the denoised sequences ( in this case for the bp stock ) , we see clearly the cyclical but aperiodic behavior observed in the phase portraits also repeated here ( fig . [fig : fig2 ] ( b ) ) . the results lead us to deduce that the presence of chaotic dynamics can not be excluded .such a statement though should also involve the calculation of certain invariant measures that characterize chaos ( such as entropy or dimension based statistics ) .moreover , these results should also be backed by a suitable sda testing exercise .we retain this as a strategy for future research. it would also be interesting to observe if these smoother though irregular cyclical dynamics revealed in this paper are irrespective of the noise reduction technique ( i.e. , robust under different noise reduction techniques ) .we have investigated the dynamics of sequences of daily closing prices and the corresponding returns for stocks traded in the london stock exchange in the last three decades , as these are observed through the compass rose phase portraits .our results suggest that the amount of noise inherent in the examined sequences may be covering more interesting " dynamics . using wavelet based noise reduction techniques we filtered the return sequences only to uncover a strong aperiodic nonlinear behavior , characteristic of many phenomena that are governed by complex deterministic dynamics .the sda hypothesis testing framework employed here also suggests the absence of stochastic randomness and linear dynamics for both original and denoised returns sequences .our results show that the apparently random dynamics and discreteness observed in closing price sequences , may conceal via the generation of noise in the returns , a more delicate structure and aperiodic cyclical dynamics . 
however, further research is needed to maintain the hypothesis of nonlinear determinism in stock price time series dynamics.

c. kyrtsou and m. terraza. is it possible to study chaotic and arch behaviour jointly? application of a noisy mackey-glass equation with heteroskedastic errors to the paris stock exchange returns series. 21(3):257-276, 2003.

table: surrogate data analysis results on actual returns for 53 companies in the ftse100. discriminating statistic: bds test. neighbourhood size , where is the standard deviation of . biases and standard errors (s.e.) reported for significance levels and .
|
we investigate the `` compass rose '' ( crack , t.f . and ledoit , o. ( 1996 ) , journal of finance , 51(2 ) , pg . 751 - 762 ) patterns revealed in phase portraits ( delay plots ) of stock returns . the structures observed in these diagrams have been attributed mainly to price clustering and discreteness . using wavelet based denoising , we examine the noise - free versions of a set of ftse100 stock returns time series . we reveal evidence of non - periodic cyclical dynamics . as a second stage we apply surrogate data analysis on the original and denoised stock returns . our results suggest that there is a strong nonlinear and possibly deterministic signature in the data generating processes of the stock returns sequences .
|
it is often claimed that both internet traffic and capacity undergo multiplicative year-on-year increases. such a rate may not be sustainable over time; even so, it is a pervasive point that the internet's demand and service rate change multiplicatively. given this, in what way should a communication network's resources be allocated in order to scale with such multiplicative effects?

mo and walrand introduced a parametrized family of utility functions called the _weighted $\alpha$-fair utility functions_. a utility function orders the preferences of different network states: a network state is better than a second network state if it has higher utility. when maximized over a network's capacity set, a utility function provides a solution which can be used to allocate network resources. congestion control protocols such as tcp are argued to allocate bandwidth in this way, and congestion protocol models have been designed that provably reach such an operating point.

suppose that a performance improvement is made whereby the network capacity is doubled. although capacity increases, the criteria by which capacity is evaluated and shared should remain proportionate. we note that, in particular, the weighted $\alpha$-fair utility functions obey this scaling property. but are there more? such a utility function should also scale well when the traffic increases. that is, multiplicatively increasing the number of network flows should not alter the preferred allocation of network resources. this is a desirable property: regardless of relative changes, the proportion of network resource each route receives remains the same. given that internet traffic is increasing multiplicatively, it is desirable that each flow has a share of resource that remains proportionate. once again, the weighted $\alpha$-fair utility functions scale in this way.

so are the weighted $\alpha$-fair utility functions the only utility functions that satisfy these scalability properties? we mathematically formulate scaling properties and general settings where this is provably true. we thus provide a theoretical basis to the claim that, if bandwidth is allocated according to a network utility function which scales with relative network changes, then that utility function must be a weighted $\alpha$-fair utility function; hence, a control protocol that is robust to future relative changes in network capacity and usage must allocate bandwidth in order to maximize a weighted $\alpha$-fair utility function.

economists arrow and pratt formulated the utility functions that satisfy a certain _iso-elastic property_. in words, an individual is iso-elastic if his preferences over entering a bet are unaltered by multiplicative changes in the bet's stakes and rewards. arrow and pratt parametrize the set of iso-elastic utility functions on $(0,\infty)$. it is an immediate check that the iso-elastic utility functions are precisely the summands of a weighted $\alpha$-fair utility function. thus the reason a weighted $\alpha$-fair utility function has good scaling properties is because of the iso-elasticity property inherent in its summands. but conversely, if a utility function allocates network resources proportionately then must it be weighted $\alpha$-fair?
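as a quick illustration of why iso-elasticity pins down this family, a standard arrow-pratt style computation (sketched here, not quoted from the paper) treats constant relative elasticity as a first-order ode for the marginal utility:

```latex
-\frac{x\,U''(x)}{U'(x)} = \alpha
\;\Longrightarrow\;
\frac{d}{dx}\log U'(x) = -\frac{\alpha}{x}
\;\Longrightarrow\;
U'(x) = w\,x^{-\alpha}
\;\Longrightarrow\;
U(x) =
\begin{cases}
  w\,\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \neq 1,\\[6pt]
  w\,\log x, & \alpha = 1,
\end{cases}
```

up to an additive constant; these are exactly the summands of a weighted $\alpha$-fair utility function.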
the main results of this paper provide settings where scaling properties are equivalent to a utility function being weighted $\alpha$-fair.

the work of mo and walrand has received a great deal of attention. this is chiefly because a number of known network fairness criteria correspond to maximizing a weighted $\alpha$-fair utility function: known internet equilibria, _tcp fairness_, for $\alpha = 2$; the work of kelly, weighted proportional fairness, for $\alpha = 1$; the work of bertsekas and gallager, max-min fairness, for $\alpha \to \infty$ with unit weights; and maximum throughput for $\alpha \to 0$ with unit weights. but, perhaps, the explicit reason for the attention on weighted $\alpha$-fairness has not been the above iso-elastic property. the iso-elastic property is frequently exploited in order to study the limit and stability behaviour of associated stochastic network models; some authors have begun to explicitly cite the iso-elastic property, and other authors have attempted to derive weighted $\alpha$-fairness and other fairness criteria from axiomatic properties. even so, there appears to be little discussion of this iso-elastic scaling property, its equivalence with weighted $\alpha$-fairness, and what this implies for the performance achieved by a communication network and associated stochastic models. addressing this point is the principal contribution of this paper.

the results and sections of this paper are organized as follows. in section [sec:net], we define the topology and capacity constraints of a communication network. in section [sec:netutil], we define what we mean by a network utility function and network utility maximization, and we define the weighted $\alpha$-fair utility functions. in section [sec:iso], we formally define the network iso-elasticity property and the flow scalability property. in section [sec:isothrm], we prove that a network utility function is network iso-elastic iff it is weighted $\alpha$-fair. in section [sec:scale], we consider a network topology where each link has some dedicated local traffic flow. on this topology, we show that a network utility maximizing allocation satisfies flow scalability iff it is a weighted $\alpha$-fair maximizer. in section [sec:access], in addition to the flow scalability property, we define an access scalability property. we prove that the only utility functions that are flow scalable and access scalable are the weighted $\alpha$-fair utility functions. in proving these results we lay claim to the statement that if bandwidth is allocated according to a network utility function which scales with relative network changes then that utility function must be a weighted $\alpha$-fair utility function. thus a control protocol that is robust to future relative changes in network capacity and usage must allocate bandwidth to maximize a weighted $\alpha$-fair utility function.

we define the topology and capacity constraints of a network. we suppose that there is a set of links indexed by $j \in \mathcal{J}$. the positive vector $C = (C_j : j \in \mathcal{J})$ gives the capacity of each link in the network. a route consists of a set of links. we let $\mathcal{R}$ denote the set of routes. the topology of the network is defined through a matrix $A = (A_{jr} : j \in \mathcal{J}, r \in \mathcal{R})$. we let $A_{jr} = 1$ if route $r$ uses link $j$ and we let $A_{jr} = 0$ otherwise. we let the positive vector $\Lambda = (\Lambda_r : r \in \mathcal{R})$ denote the bandwidth allocated to each route. at each link $j$ these must satisfy the capacity constraint $\sum_{r} A_{jr} \Lambda_r \le C_j$, and thus the set of feasible bandwidth allocations is given by
$$\mathcal{C} = \left\{ \Lambda \in \mathbb{R}_{+}^{\mathcal{R}} : A\Lambda \le C \right\}.$$

in a communication network, we could allocate resources to achieve the highest aggregate throughput. although a high data rate is achieved, some network flows may be starved.
over the last decade, there has been interest in allocating network resources in a _fair_ way. essentially, fairness is achieved by allocating a positive share of the network's resources to each flow. to achieve this it was recommended to allocate resources in order to maximize utility; see kelly. in such a network, each flow expresses its demand via a strictly increasing concave utility function. a utility function will have a higher slope for smaller throughputs. when maximized, this function can be used to express a higher demand for the network's resources. thus, from this, a form of fairness is achieved. the average utility of a network flow is
$$\bar{U}(\Lambda \mid n) = \frac{1}{N} \sum_{r \in \mathcal{R}} n_r\, U_r\!\left( \frac{\Lambda_r}{n_r} \right), \qquad N = \sum_{r \in \mathcal{R}} n_r.$$
here $r$ indexes the routes of a network; $n_r$ gives the number of flows present on route $r$; $\Lambda_r$ gives the flow rate allocated to route $r$, which is then shared equally amongst the flows present on the route; and finally, $U_r$ gives the utility function of each route user. we call a utility function of this form a _network utility function_. we assume throughout this paper that each utility function is increasing, once differentiable and strictly concave. to the average user, a network utility function orders the preferences of different network states, i.e., $\Lambda$ is at least as good as $\Lambda'$ if $\bar{U}(\Lambda \mid n) \ge \bar{U}(\Lambda' \mid n)$. given the flows present in a network, the best network state is one that maximizes utility; that is, the optimum of the _network utility maximization_
$$\max_{\Lambda \in \mathcal{C}}\ \bar{U}(\Lambda \mid n).$$
here $\mathcal{C}$ is the set of feasible bandwidth allocations. in general, the set of bandwidth allocations will depend on the topology and capacity constraints of a communication network.

the _weighted $\alpha$-fair_ utility functions introduced by mo and walrand are given by
$$U_r(x) = w_r\, \frac{x^{1-\alpha}}{1-\alpha}$$
for $\alpha \in (0,\infty)$, $\alpha \ne 1$, and parametrized by $\alpha$ and weights $w_r > 0$. as mentioned, the weighted $\alpha$-fair class has proved popular as it provides a spectrum of fairness criteria which contains proportional fairness ($\alpha = 1$, interpreted as $U_r(x) = w_r \log x$), tcp fairness ($\alpha = 2$, with $w_r$ the reciprocal of the squared round-trip time), and converges to a maximum throughput solution ($\alpha \to 0$, $w_r = 1$) and max-min fairness ($\alpha \to \infty$, $w_r = 1$).

we now define the two main scaling properties which we will use: network iso-elasticity and flow scalability. both encapsulate a simple idea: if we multiplicatively change the available network capacity then we will allocate resources proportionately. a network utility function ranks the set of network states through the ordering induced by $\bar{U}(\cdot \mid n)$. we say a network utility function is network iso-elastic if this ordering is unchanged by a multiplicative increase in the available bandwidth. more formally, we define a utility function to be _network iso-elastic_ if
$$\bar{U}(\Lambda \mid n) \ge \bar{U}(\Lambda' \mid n) \iff \bar{U}(k\Lambda \mid n) \ge \bar{U}(k\Lambda' \mid n)$$
for each $\Lambda$, $\Lambda'$ and $k > 0$. we note that this is equivalent to the same expression where we multiplicatively scale the number of flows, $n \mapsto kn$, for each $k > 0$.

network iso-elasticity requires that the utility ordering of network states scales. what if we only wish the optimal allocation to scale? we say a utility function optimized on capacity set $\mathcal{C}$ is _flow scalable_ if the solution to the optimization problem is such that
$$\Lambda(kn) = \Lambda(n), \qquad k > 0;$$
that is, the bandwidth allocated to each route is unchanged by multiplicative changes in the number of flows on each route. here we make explicit that $\Lambda(n)$ is the solution to the network utility maximization when optimizing over $\mathcal{C}$, and we define $kn = (k n_r : r \in \mathcal{R})$. we note that the flow scalability condition is equivalent to the condition that allocated bandwidth scales proportionately with capacity increases, i.e., $\Lambda(n \mid k\mathcal{C}) = k\, \Lambda(n \mid \mathcal{C})$ for each $k > 0$.
in this section , we prove the first result of this paper . we prove that a network utility function satisfying this iso - elastic property is , up to an additive constant , a weighted $\alpha$-fair utility function . [ iso thrm ] for the network utility function $\bar{U}$ the following are equivalent : + i ) $\bar{U}$ is network iso - elastic . + ii ) up to an additive constant , $\bar{U}$ is weighted $\alpha$-fair , i.e. , there exist $\alpha > 0$ , weights $w = ( w_r : r \in R )$ and constants $( c_r : r \in R )$ such that $$U_r ( x ) = w_r \frac{x^{1-\alpha}}{1-\alpha} + c_r \qquad ( \text{or } U_r ( x ) = w_r \log x + c_r \text{ for } \alpha = 1 ) . $$ it is an immediate calculation that ii ) implies i ) . we now prove conversely that i ) implies ii ) . the key idea is to prove that the map is linear . for simplicity , we suppose ; we also let and for , and we let and be random variables with distribution and , respectively . one can see that the value of the utility function induces an ordering on the set of pairs . network iso - elasticity states that , , and induce the same ordering on elements . thus implies that , for each , there exists an increasing function such that now let us show that is linear . let be the random variable such that let and . note . now observe thus is an increasing linear function and so , for , for some and . statement applies to , for . thus we wish to remove the seemingly arbitrary functions and . differentiating with respect to gives ; as is a decreasing function it is differentiable at at least one point ( theorem 7.2.7 of ref . ) . differentiating again at this point gives thus , as is arbitrary , exists for all . dividing both terms leads to the equation taking and defining , we have the differential equation dividing both sides by and integrating with respect to gives , where is an appropriate additive constant . integrating one more time gives , and letting gives . it remains to show for all . observe that includes the statement iff , which can only hold if . so for some , and thus for all , as required . we note that an almost identical result and proof holds for the aggregate utility function $\sum_{r \in R} m_r U_r ( \Lambda_r / m_r )$ ; the only exception would be that the above result does not hold for the weighted proportionally fair case ( where $\alpha = 1$ ) . in this section , we demonstrate that flow scalability is equivalent to optimizing a weighted $\alpha$-fair utility function for two specific network topologies : a linear network and a network satisfying the local traffic condition . a _ linear network _ consists of routes $r_0 , r_1 , \ldots , r_J$ and links $j = 1 , \ldots , J$ . route $r_0$ uses all the links , $A_{j r_0} = 1$ for all $j$ , whilst for each $j$ route $r_j$ only uses link $j$ , i.e. $A_{j r_j} = 1$ and $A_{j' r_j} = 0$ otherwise . the _ local traffic condition _ insists that each link $j$ has a route $r ( j )$ that uses that link only , such that $A_{j r ( j )} = 1$ and $A_{j' r ( j )} = 0$ for $j' \neq j$ . we will also assume that a network satisfying the local traffic condition has two or more links and is connected , that is , for all links $j , j'$ there exists a sequence of links and routes $j = j_0 , r_1 , j_1 , r_2 , \ldots , r_k , j_k = j'$ such that each route $r_i$ uses both links $j_{i-1}$ and $j_i$ , for $i = 1 , \ldots , k$ . we now prove that flow scalability is equivalent to weighted $\alpha$-fairness for linear networks . [ linear theorem ] assuming , for each $r$ , $U_r'$ has range $( 0 , \infty )$ , a linear network is flow scalable iff it maximizes a weighted $\alpha$-fair utility function . it is immediate that the weighted $\alpha$-fair utility is flow scalable . let us show the converse . the bandwidth a linear network allocates is the solution to the optimization here , we let denote the capacity of the link on the intersection of route and route . assuming , the solution to this optimization satisfies if the solution is flow scalable then must also satisfy we observe that the solutions for a linear network allow a suitably large array of values for and .
in particular , for any and , we can choose a such that , . letting , we can also choose such that . thus , this choice of gives the unique solution to for the given . the remainder of the proof is similar to that of theorem [ iso thrm ] . once again , the key idea is to prove the map is linear . as must be continuous and decreasing , is an increasing continuous function . also , note that because . with as above , and state that taking , and for , we see that taking , and for , and together imply as is increasing it is differentiable at some point ( theorem 7.2.7 of ref . ) , then by it is differentiable at all points and moreover its derivative is constant . in other words , must be linear , for some function . thus by the same argument used to derive from in theorem [ iso thrm ] , we have that , and . finally note , if for some then the equalities and can not hold for all . thus , we now see the optimization ( [ linear fn]-[linear fn2 ] ) must have been a weighted $\alpha$-fair optimization . from this argument for linear networks , we extend our result to any network satisfying the local traffic condition . assuming , for each $r$ , $U_r'$ has range $( 0 , \infty )$ , a network satisfying the local traffic condition is flow scalable iff it maximizes a weighted $\alpha$-fair utility function . it is immediate that the weighted $\alpha$-fair utility is flow scalable . to show the converse , we show a local traffic network can be reduced to a linear network . take any route which uses two or more links . let them be and let be local traffic routes for each respective link . set for all . the resulting network is a linear network , and thus , by theorem [ linear theorem ] , the utility function associated with each route of this subnetwork is of the form , and . here $\alpha$ is the same for all routes in this linear subnetwork , i.e. for all . we show that , since our network is connected , $\alpha$ does not depend on the route chosen . we know for each local traffic route intersecting route . take two routes and . since our network is connected , for a link on route and a link on route , there is a sequence connecting and . for each , the subnetwork formed by , and and their respective local traffic routes forms a linear network . thus $\alpha$ is constant across all routes and associated local traffic routes in the sequence . this , thus , implies . by simply adding a local traffic route to each link , any network topology can be made to satisfy the local traffic condition . thus , given the generality of this set of networks , it is natural to conjecture that flow scalability is equivalent to weighted $\alpha$-fairness for any network topology : a network is flow scalable iff it maximizes a weighted $\alpha$-fair utility function . the network utility maximization problem is : given the flows on each route , this finds an optimal way to allocate bandwidth . given that the bandwidth to be allocated is , what is the optimal number of flows permissible on each route ? this leads to the maximization in this second optimization , we optimize the number of flows accessing routes in order to guarantee the maximum average utility per flow . flow scalability ( [ flow scale 1]-[flow scale 2 ] ) says that , if we multiply the capacity of the network , then the rate allocated to each route will multiply by the same amount .
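the flow scalability property can be checked numerically . the following sketch ( our own construction , using scipy on a small linear network with assumed capacities ) solves the network utility maximization for an $\alpha$-fair objective and verifies that the optimal allocation is unchanged when every flow count is scaled by the same factor .

```python
import numpy as np
from scipy.optimize import minimize

# linear network: route 0 uses links 0 and 1; routes 1, 2 are local traffic
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
C = np.array([5.0, 7.0])
alpha, w = 2.0, np.array([1.0, 1.0, 1.0])

def neg_utility(Lam, m):
    # aggregate weighted alpha-fair utility, sign flipped for minimization
    return -np.sum(m * w * (Lam / m) ** (1 - alpha) / (1 - alpha))

def solve(m):
    cons = [{"type": "ineq", "fun": lambda L: C - A @ L}]  # capacity constraints
    res = minimize(neg_utility, x0=np.full(3, 0.5), args=(m,),
                   bounds=[(1e-6, None)] * 3, constraints=cons)
    return res.x

m = np.array([2.0, 1.0, 3.0])
print(solve(m))            # optimal allocation Lambda(m)
print(solve(4.0 * m))      # should match: flow scalability of the alpha-fair optimum
```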
similarly , we will say a utility function is access scalable if multiplying the bandwidth available to each route means that the number of flows accepted on each route will multiply by the same amount . more formally , the optimal flow numbers must satisfy the analogous scaling condition . conversely , suppose flow scalability and access scalability hold for each and . taking and , by flow scalability and access scalability , respectively , we have that by continuity , there exists an such that also , by , now , ( [ thrm2:2]-[thrm2:4 ] ) imply the following set of equivalences : $$\begin{aligned} & \bar{U} ( y , \lambda ) \leq \bar{U} ( \tilde{y} , \tilde{\lambda} ) & \\ \text{iff} \qquad & \bar{U} ( a y , \lambda ) \leq \bar{U} ( a \tilde{a} \tilde{y} , \lambda ) & \text{[ by ( \ref{thrm2:2} ) ]} \\ \text{iff} \qquad & \bar{U} ( a y , \lambda ) \leq \bar{U} ( a \tilde{y} , \tilde{\lambda} ) . & \text{[ by ( \ref{thrm2:4} ) ]} \end{aligned}$$ thus , $\bar{U}$ is network iso - elastic and so , by theorem [ iso thrm ] , it is a weighted $\alpha$-fair utility function . egorova , r. , borst , s. , and zwart , b. ( 2007 ) . bandwidth - sharing networks in overload . _ performance evaluation _ * 64 * ( 9 - 12 ) , 978 - 993 . performance 2007 , 26th international symposium on computer performance , modeling , measurements , and evaluation . kang , w. n. , kelly , f. p. , lee , n. h. , and williams , r. j. ( 2009 ) . state space collapse and diffusion approximation for a network operating under a fair bandwidth sharing policy . _ ann . appl . probab . _ * 19 * , 1719 - 1780 . kelly , f. p. , maulloo , a. k. , and tan , d. k. h. ( 1998 ) . rate control in communication networks : shadow prices , proportional fairness and stability . _ journal of the operational research society _ * 49 * , 237 - 252 . la , r. and mo , j. ( 2009 ) . interaction of service providers in task delegation under simple payment rules . _ proceedings of the 48th ieee conference on decision and control , held jointly with the 2009 28th chinese control conference ( cdc / ccc 2009 ) _ , 8594 - 8599 .
|
when a communication network 's capacity increases , it is natural to want the bandwidth allocated to increase to exploit this capacity . but , if the same relative capacity increase occurs at each network resource , it is also natural to want each user to see the same relative benefit , so the bandwidth allocated to each route should remain proportional . we will be interested in bandwidth allocations which scale in this _ iso - elastic _ manner and , also , maximize a utility function . utility optimizing bandwidth allocations have been frequently studied , and a popular choice of utility function is the weighted $\alpha$-fair class introduced by mo and walrand . because weighted $\alpha$-fair utility functions possess this iso - elastic property , they are frequently used to form fluid models of bandwidth sharing networks . in this paper , we present results that show , in many settings , the only utility functions which are iso - elastic are weighted $\alpha$-fair utility functions . thus , if bandwidth is allocated according to a network utility function which scales with relative network changes then that utility function must be a weighted $\alpha$-fair utility function , and hence , a control protocol that is robust to future relative changes in network capacity and usage ought to allocate bandwidth in order to maximize a weighted $\alpha$-fair utility function .
|
entanglement is a quantum - mechanical resource that can be used for a number of tasks , including quantum teleportation , quantum cryptography , and quantum dense coding .since real quantum channels are noisy , it is very difficult to create perfect entanglement directly between two distant parties .there is thus a need to purify ( or distill ) partial entanglement .suppose two parties share pairs of qubits such that each pair is in the same entangled , but mixed state , the total state of all pairs thus being the -fold tensor product .there exist protocols , using only local operations and classical communication , which allow the two parties to transform of the pairs into maximally entangled states , for instance singlet states . in the limit , the fidelity of the singlets approaches 1 and the fraction a fixed limit , called the asymptotic yield . in this paper , we consider the more general case in which the initial state is not a tensor - product state .this corresponds to the realistic situation that the state of each individual pair is not perfectly known , for instance because one of the particles has been sent through a channel with only partially known characteristics . in secs .[ sec : hashing ] and [ sec : recurrence ] , we apply the entanglement purification methods known as one - way hashing and recurrence to partially known , including completely unknown , quantum states .it turns out that the generalization of the recurrence method is straightforward , whereas the hashing method as it is described in ref . depends on the initial state being of tensor - product form and therefore requires a more careful analysis . unlike briegel _et al . _ , who have studied entanglement purification with imperfect quantum operations , we assume that all operations are error - free .a paper related to ours is ref . by eisert _et al . _ , who study how distillable entanglement decreases when information about a quantum state is lost .before we turn to the actual entanglement purification protocols , we discuss , in sec .[ sec : assign ] , the problem of what density operator to assign to pairs of qubits if only partial information is available .this is an unsolved problem , and we do not attempt to give a general solution .we show , however , that under the additional assumption of _ exchangeability _ the state must have a certain simple form , which is amenable to entanglement purification .our discussion also provides a resolution of the apparent paradox found by horodecki _et al . _ , who give an example where applying the jaynes principle of maximum entropy leads to a state with more distillable entanglement than seems to be warranted by the available information .we conclude in sec .[ sec : conclude ] .let us consider the example given by horodecki _et al._ .the authors consider a system composed of a single pair of qubits and define an operator where , are projectors onto the bell states , our definition of differs from that of ref . 
by a constant factor to simplify the expressions .if _ all that is known _ about the system state is the expectation value , then jaynes s principle of maximum entropy stipulates that one should assign the state of maximum von neumann entropy compatible with the constraint , which in this case is this state has distillable entanglement .et al._ point out that the state also satisfies the constraint , but is separable and hence unentangled .they conclude that the entanglement in the maximum entropy state is `` fake , '' because it violates the condition that an inference scheme `` should not give us an inseparable estimated state if only theoretically there exists a separable state consistent with the measured data . '' as an alternative to the jaynes principle , they propose first to minimize the entanglement and then to find the state of maximum entropy among those states that have minimal entanglement .for the constraint , this alternative scheme results in the state given above . a simple defense of the jaynes principle would be the following ( see also refs . ) . the alternative procedure proposed by horodecki __ assumes additional information about the two qubits , namely that entanglement is _ a priori _ unlikely. this would be reasonable , e.g. , in a situation where the parties know that the state has been prepared by an adversary whose objective is to let them have as little entanglement as possible .but then more is known about the state than just the given expectation value , and hence the assumptions behind the jaynes procedure are not fulfilled . if there is no specific additional information , however , the maximum entropy state assignment is preferable to the minimum entanglement assignment . indeed ,if a projective measurement in the bell basis is performed , assigning corresponds to assigning zero probability to the measurement outcome , an outcome that is not ruled out by the constraint . in this sense ,the minimum entanglement assignment is inconsistent with the prior information .by contrast , no inconsistency of this kind can arise from the maximum entropy assignment in the absence of prior information beyond the given expectation value . since no measurement of a single system can tellif that system is entangled or not , the prediction of `` fake entanglement '' for can cause no difficulty . in particular , there is no way to turn a single state into a maximally entangled state even probabilistically .we now turn to the case in which the parties share not just one , but qubit pairs .we denote by the total state of the pairs and assume that the pairs are known to satisfy the constraints for , where is the reduced density operator of the -th pair . in this case , the state assignment is not supported by the prior information , even though this is the state of maximum entropy compatible with the given expectation values . for large , this state assignment corresponds to the definite prediction that a nonzero number of perfect singlets can be distilled , which is certainly not implied by the given expectation values .the alternative state assignment would , however , be equally unsupported by the prior information .it corresponds to the definite prediction that _ no _ singlets can be distilled from the pairs , which is the minimum number of distillable singlets compatible with the _ a priori _ knowledge .although this is a very cautious prediction , it is also not implied by the given expectation values . 
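for readers who wish to experiment , the following sketch computes a single - copy maximum - entropy state under one expectation - value constraint . since the coefficients of the operator defined above are not reproduced in this text , the sketch simply takes the operator to be a difference of two bell projectors ; that choice and the target expectation value are our own illustrative assumptions .

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# bell states in the 2-qubit computational basis: (|a> + s|b>)/sqrt(2)
def bell(a, b, s):
    v = np.zeros(4); v[a] = 1.0; v[b] = s
    return v / np.sqrt(2)

phi_p, psi_m = bell(0, 3, +1), bell(1, 2, -1)
B = np.outer(phi_p, phi_p) - np.outer(psi_m, psi_m)    # illustrative choice of operator

def maxent_state(mu):
    """rho(mu) = exp(mu*B)/Z, the maximum-entropy form for a single constraint."""
    R = expm(mu * B)
    return R / np.trace(R)

def gap(mu, target):
    return np.trace(maxent_state(mu) @ B).real - target

target = 0.6                                           # assumed value of the constraint
mu = brentq(gap, -50, 50, args=(target,))
rho = maxent_state(mu)
print(np.trace(rho @ B).real)                          # ~0.6, as required
```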
the fact that a naïve application of the principle of maximum entropy to many copies of a system fails is essentially of classical origin and is not unique to problems involving entanglement . jaynes has given a thorough discussion of this problem , which can be explained by a simple example . consider a possibly loaded die . all that is known about the die is the mean value $\langle k \rangle = \sum_{k=1}^{6} k \, p_k = 3.5$ , where $p_k$ is the probability of the outcome $k$ , $k = 1 , \ldots , 6$ . the probability distribution of maximum entropy compatible with the given mean - value constraint is $p_k = 1/6$ for $k = 1 , \ldots , 6$ . now consider throwing the die $n$ times . a naïve application of the maximum entropy principle would predict that the dice throws were independent and identically distributed ( i.i.d . ) according to the single - trial distribution . this would lead to the prediction that the fraction of throws showing any particular outcome would approximate 1/6 with arbitrary precision as $n$ tended to infinity . this prediction , however , is not implied by the prior knowledge , which is compatible with many possible outcome sequences , including sequences in which only the events 1 and 6 ever occur , which is quite possible if the die is loaded . moreover , with an i.i.d . distribution , the results of earlier throws imply nothing about the probability of later outcomes . even the most gullible gambler might become suspicious if 1 and 6 were the only outcomes after thousands of throws . in ref . , jaynes discusses how to choose the multi - trial distribution in the classical case . the starting point of his discussion is the assumption that the probability distribution of the dice throws is _ exchangeable _ . the same assumption is the starting point for our quantum analysis . if exchangeability is assumed , the task of assigning a state of $n$ qubit pairs compatible with the constraints given above is much simplified . a state $\rho^{( n )}$ of $n$ copies of a system is exchangeable if it is a member of an exchangeable sequence $\rho^{( 1 )} , \rho^{( 2 )} , \ldots$ . an exchangeable sequence is defined by ( i ) $\rho^{( n )} = \mathrm{tr}_{n+1}\, \rho^{( n+1 )}$ for all $n$ , where $\mathrm{tr}_{n+1}$ denotes the partial trace over the $( n+1 )$-th system , and ( ii ) each $\rho^{( n )}$ is invariant under permutations of the $n$ systems on which it is defined . this definition is the quantum generalization of de finetti s definition of exchangeable sequences of classical random variables . a state $\rho^{( n )}$ is exchangeable if and only if it can be written in the form $$\rho^{( n )} = \int f ( \rho )\, \rho^{\otimes n}\, d\rho \;, \label{eq : exch}$$ where $d\rho$ is a measure on density operator space , and $f ( \rho )$ is a normalized generating function , $\int f ( \rho )\, d\rho = 1$ . this is a consequence of the quantum de finetti theorem , the quantum version of the fundamental representation theorem due to de finetti . the quantum theorem was first proved by hudson and moody after pioneering work by størmer ; for an elementary proof see ref . . how , in general , do we pick $f ( \rho )$ ? to our knowledge , there is no universal rule for this task , although there exist a number of proposals for unbiased measures on density operator space . these can be interpreted as proposals for state assignments for systems under the sole assumption of exchangeability , i.e. , using a generating function $f ( \rho )$ .
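the loaded - die example above is easy to reproduce numerically . this sketch solves for the lagrange multiplier of the maximum - entropy distribution given an assumed mean ; a target of 3.5 recovers the uniform 1/6 case , while 4.5 gives a loaded - die distribution .

```python
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 7)                      # die faces 1..6

def maxent_p(lam):
    """Maximum-entropy distribution p_k proportional to exp(-lam*k)."""
    p = np.exp(-lam * k)
    return p / p.sum()

def lam_for_mean(target):
    """Solve for the Lagrange multiplier matching the given mean value."""
    return brentq(lambda L: (k * maxent_p(L)).sum() - target, -10, 10)

for target in (3.5, 4.5):                # 3.5 gives lam ~ 0, i.e. the uniform case
    lam = lam_for_mean(target)
    print(target, lam, np.round(maxent_p(lam), 4))
```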
if , in addition to exchangeability , there is a mean - value constraint , the naïve jaynes maximum entropy state assignment leads to a generating function of the form $f ( \rho ) = \delta ( \rho - \rho_{\mathrm{me}} )$ , where $\rho_{\mathrm{me}}$ is the single - system state of maximum entropy subject to the constraint ; this generating function is unacceptable for the reasons given above . a good choice of $f ( \rho )$ should be nonzero for all $\rho$ that are compatible with the prior information : we should never arbitrarily rule out any possibility . similarly , $f ( \rho )$ ought to vanish for any $\rho$ which is actually ruled out by the prior information . we therefore would expect a multi - system generalization of jaynes s maximum entropy procedure to have the form $$\rho^{( n )} = \frac{1}{N} \int \rho^{\otimes n}\, f ( \rho )\, d\rho \;, \label{multi_maxent}$$ where $N$ is a normalization constant and $f$ is strictly positive . the exact form of the function $f$ and of the measure $d\rho$ is the subject of ongoing research . in the spirit of the single - system jaynes principle , $f$ should favor states with higher von neumann entropy and should give the usual maximum entropy assignment when $n = 1$ . given an initial state assignment of the form ( [ eq : exch ] ) , additional information can be obtained , e.g. , by making measurements on individual subsystems . suppose a measurement outcome $\beta$ is represented by a positive single - system operator $E_\beta$ , with $\sum_\beta E_\beta = \openone$ ; i.e. , the $E_\beta$ form a positive - operator valued measure ( povm ) . given that the subsystem is in state $\rho$ , the probability of getting outcome $\beta$ is $\mathrm{tr} [ \rho E_\beta ]$ . if the total state is given by eq . ( [ eq : exch ] ) , the probability of outcome $\beta$ in a measurement on a single subsystem is then $$p_\beta = \int f ( \rho )\, \mathrm{tr} [ \rho E_\beta ]\, d\rho \;.$$ after the measurement we must update the state of the remaining systems by bayes s rule , $$f ( \rho | \beta ) = \frac{f ( \rho )\, \mathrm{tr} [ \rho E_\beta ]}{p_\beta} \;.$$ by doing different measurements on several subsystems , we acquire more and more data ; if these measurements are chosen well , the resulting posterior becomes more and more peaked and has less and less dependence on the choice of prior . this procedure is a straightforward bayesian version of quantum state tomography . the condition of exchangeability in combination with the quantum de finetti theorem provides only a partial solution of the problem of state assignment in the presence of partial information , but we show in the next two sections that exchangeability alone is sufficient to guarantee that the entanglement purification procedures known as one - way hashing and recurrence can be carried out . the probability of distilling a positive yield of maximally entangled states depends on the exact form of $f ( \rho )$ in eq . ( [ eq : exch ] ) . in this section , we first present a version of the one - way hashing algorithm that proceeds by bayesian updating of the probabilities for products of bell states and that can in principle be applied to general exchangeable states . we then briefly sketch the argument given in ref . that for a product state $\rho^{\otimes n}$ , where $\rho$ is bell - diagonal with von neumann entropy $S$ , the asymptotic yield of pure singlets is given by $1 - S$ . we show how to modify this argument so that it can be applied to general exchangeable states . finally we give a simplified bayesian hashing algorithm for exchangeable states and discuss its asymptotic yield . our analysis is restricted to pairs of qubits , but the method generalizes straightforwardly to arbitrary hilbert space dimensions . we restrict attention to bell - diagonal states , i.e.
, mixtures of the bell states , where we denote the weights by , , for .most existing entanglement purification procedures begin by making this assumption .if it does not hold , it is possible to put any state in this form by `` twirling , '' that is , by randomly rotating both spins of an entangled pair . the final yield of maximally entangled states can not be diminished by omitting this step , however , so it is better to think of twirling as a conceptual , rather than a physical procedure . after twirling, the initial , exchangeable state ( [ eq : exch ] ) of our pairs of qubits becomes where we now define the set of labeled states the first bit in the label tells us whether the pair is in a or a state ; the second bit tells us whether it is in a or state .if we are restricted to local measurements and classical communication on a single pair , the best we can do is to determine one of these two bits , but not both , and the pair will be left in an unentangled state .et al._have shown , however , that if we can manipulate the qubits collectively , much more interesting measurements are possible .the first step is to rewrite the state ( [ rhon ] ) as a probability distribution over strings of bits , with each qubit pair associated with two bits in the string . for this , we define the product distribution where , , , and . using this notation , where we now select a random subset of the bits and list all the qubit pairs which have at least one associated bit in the subset . from this listwe choose one qubit pair to be the _target_. for each of the other qubit pairs in the list , alice and bob both perform one of a set of three unitary transformations on their half of the pair , followed by a bilateral controlled - not onto the target pair .this sequence of operations is equivalent to replacing one of the bits of the target pair with the parity of a subset of all the bits .the choice of unitary transformation corresponds to including the first , second , or both of the bits from a particular pair in the parity calculation .then a measurement is performed on the target pair .( the details of this procedure are given in . 
) by carrying out such a procedure , one bit of joint information is acquired about all the pairs , at the expense of sacrificing one entangled pair ( that is , two bits ) . the unmeasured pairs in general undergo an invertible transformation among the bell states , but they do not become entangled with each other , and this transformation can , if one chooses , be undone , leaving the sequence of bits for the unmeasured pairs unaltered . bennett _ et al . _ have shown that such a procedure can be equivalent to finding the parity of any subset of the bits . this parity bit then allows one to update the probability distribution for the remaining -bit string . let us examine this in a little more detail . let denote a sequence of bits . we can select a subset of these bits by giving another sequence , which includes a 1 for each bit to be included in the subset and a 0 for the rest . the parity of the subset is then for a given the probability of getting a value for the parity is either 0 or 1 , so the probability of getting as a measurement result is for simplicity let us assume that the target pair is the last , so the last two bits are sacrificed ; the new state for the remaining pairs is where and note that while the initial probability distribution is symmetric under interchanges of the pairs , this symmetry is lost after measurement . the purification scheme follows simply from this . one chooses subsets of the bit string at random and measures their parity , sacrificing one pair with each measurement , but updating the probability distribution for the remaining strings . this procedure is repeated until one is left with only a single string , say , with probability for some small . written more formally , the posterior probability at the end of the procedure , conditioned on all measurement results , has the property for some sequence . one then knows with high probability the maximally entangled state of each remaining pair , which can then be transformed into a standard state ( such as ) by local operations . the yield of this procedure is the number of entangled pairs left at the end . it is clear that there are states for which the yield is zero . the obvious example is a state where is unentangled . for states of the form , bennett _ et al . _ have shown that asymptotically the method gives a yield of maximally entangled pairs with fidelity approaching 1 , where is the entropy of . the argument makes use of the theorem of typical sequences ( which is closely related to shannon s noiseless coding theorem ) , according to which , for any and and sufficiently large , there exists a subset of the set of all sequences with the following properties : i.e. , the total probability of the set is arbitrarily close to 1 ; and i.e. , the number of sequences in is not much larger than . the set is called the set of typical sequences . since the parity measurement in each hashing round rules out half the typical sequences on average and since essentially all the probability is concentrated on the typical sequences , it can be expected that after sacrificing approximately pairs , essentially all the probability is concentrated on a single typical sequence . clearly this leads to a positive yield only if . the theorem of typical sequences does not hold in general for sequences corresponding to exchangeable states of the form ( [ rhon ] ) . to apply the hashing method in this case , we rely on a generalization of the theorem of typical sequences due to csiszár and körner ( this theorem has recently been used by jozsa _ et al .
_ to derive a universal quantum information compressing scheme ) . applied to our setting , the theorem is that , given a fixed entropy , for any and and sufficiently large , there exists a subset of the set of all sequences with the following properties : for all such that , which means that the set is typical for all probability distributions with entropy less than ; and i.e. , the number of sequences in is not much larger than . in the following , when we write `` typical sequences , '' we mean sequences in , whereas by `` atypical sequences '' we mean sequences in , the complement of . now assume that we want to perform the hashing protocol on a state of pairs of the form ( [ rhon ] ) with the property for some entropy ; i.e. , there is only a small _ a priori _ probability that the entropy of the unknown state exceeds the given value . ( the case of states that do not have this property will be discussed at the end of this section . ) furthermore , assume that is large enough that there exists a csiszár - körner set with constants in eqs . ( [ eq : ck1 ] ) and ( [ eq : ck2 ] ) . it then follows that where eqs . ( [ rhon_norm ] ) , ( [ eq : ck1 ] ) , and ( [ eq : defeta ] ) have been used . we use this inequality , in combination with eq . ( [ eq : ck2 ] ) , to derive the asymptotic yield of the hashing algorithm applied to exchangeable states . we restrict our analysis to a simplified protocol , in which we choose a number , somewhat larger than , such that we begin with input strings that have probability . let denote a sequence of parity checks on random subsets , and let denote the -bit string of parity checks , or outcomes . note that we denote all strings of bits as vectors , even though they are not all of the same length . the probability distribution on parity - check sequences is weighted uniformly on all sequences . for a given input string and a given parity - check sequence , the outcome is determined ; we denote this deterministic outcome by . we can express this deterministic outcome in terms of a probability for outcome string , given parity - check sequence and input string : since for each parity - check bit obtained , two bits of the input string are discarded , two strings with the same parity check , which differ only on those two bits , become the same after that step . after steps of a parity - check sequence , there will be only entangled pairs , corresponding to a string of bits . if one starts with a string , one will be left with a shorter substring . different initial strings that generate the same outcome and lead to the same final substring are equivalent for practical purposes . let us denote the set of all _ input _ strings which lead to outcome and to _ output _ substring by . for parity - check sequence , we are interested in outcomes such that all _ typical _ input strings that lead to produce the same output string . for outcomes where this is the case , the procedure picks out a unique output string from among all those that could be produced by a typical input string . in this case , we say that we _ accept _ the outcome and the corresponding unique output string , which we denote by .
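the combinatorial heart of a hashing round can be simulated classically : random parity checks eliminate candidate bit strings until , with high probability , only one survives . the toy sketch below is our own illustration and ignores the quantum bookkeeping of the discarded pairs .

```python
import numpy as np
rng = np.random.default_rng(1)

n_bits = 16                                   # 2 bits per pair, 8 pairs
true_string = rng.integers(0, 2, n_bits)      # the unknown bell-state labels

# candidate set standing in for the 'typical' strings (includes the truth)
candidates = {tuple(true_string)} | {
    tuple(rng.integers(0, 2, n_bits)) for _ in range(200)}

rounds = 0
while len(candidates) > 1:
    subset = rng.integers(0, 2, n_bits)       # random parity-check subset
    parity = int(subset @ true_string) % 2    # the measured parity bit
    # bayesian elimination: keep only strings consistent with the outcome
    candidates = {c for c in candidates
                  if int(subset @ np.array(c)) % 2 == parity}
    rounds += 1

# roughly log2(201) ~ 8 rounds suffice to single out the true string
print(rounds, candidates == {tuple(true_string)})
```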
in this way we divide the outcomes for a parity - check sequence into two sets , the set of accepted outcomes , , and its complement .for an outcome that we accept and for a typical input string , we can write the conditional probability ( [ eq : pohi ] ) as though the additional kronecker delta in this expression is redundant , it reminds one that any _ typical _ input string that leads to an _ accepted _ outcome produces output string . notice that this is not true for atypical input strings : an atypical input string can have outcome and produce outcome string or a different output string .the probability that the outcome is accepted , given input string and parity - check sequence , is notice that this conditional acceptance probability can be nonzero for atypical input strings .the complementary probability , that the outcome is not accepted , given and , is given by if the input string is a typical string , the conditional acceptance probability can also be written as [ see eq .( [ eq : condoih ] ) ] .what we are interested in for the present is the probability to have an outcome that is accepted , given a typical input string , but averaged over all parity - check sequences : the complementary probability , is the average probability not to have an outcome that is accepted , given the typical input string .this probability is the probability that for a random parity - check sequence , the typical input string leads to an outcome that does not pick out a unique output string , i.e. , does not lead to the only possible output string that could have been produced by a typical input string .we can bound this probability in the following way .the number of typical sequences satisfies . for parity subsetschosen randomly , the probability that two typical input strings , and , agree on all parity checks i.e ., have the same outcome is ; thus the probability that and agree on all parity checks _ and _ produce _ different _ output strings , and , is .hence the probability of not producing a unique output , given a typical input , is bounded by this implies that the conditional acceptance probability ( [ eq : paccepti ] ) satisfies bayes s rule tells us that the posterior probability for output string , given and , is where is the probability for outcome string , given parity - check sequence . 
given a parity - check sequence and an accepted outcome for that sequence, we judge the `` success '' of the accepted output string by the posterior probability , i.e , the total probability of success , , is obtained by averaging over all parity - check sequences and over all accepted outcomes .this probability can be manipulated in the following ways : the inequality here follows from restricting the sum over input strings to typical strings and reflects the fact that an atypical string might lead to an accepted outcome _ and _ to the accepted output string , thereby contributing to the success probability .the final equality comes from using eq .( [ eq : condoih ] ) for .using eqs .( [ eq : ck3 ] ) , ( [ eq : paccepti ] ) , and ( [ eq : pacceptibound ] ) , we can now bound the probability of success : this is the desired result .assuming we can choose arbitrary positive constants and and have sufficiently large , the probability ( [ bound ] ) can be made arbitrarily close to 1 .except for certain singular distributions , given an exchangeable state of the form ( [ rhon ] ) , it is always possible to make in eq .( [ eq : defeta ] ) arbitrarily small by choosing the entropy sufficiently large ( ) ; if , however , then the number of hashing rounds , which means there is no yield since . to decrease the value of andthereby make the yield positive or increase an already positive yield , one can perform quantum state tomography on some of the pairs to obtain more data about the state , generally producing a narrower posterior distribution ( see sec . [sec : assign ] ) .the width of the posterior distribution depends on the number of pairs sacrificed for the tomographic measurements , but not on the total number of pairs .the number of pairs needed for tomography can therefore be neglected in the asymptotic limit of large .asymptotically , the prior probability of obtaining a posterior concentrated at with an entropy is given by the expression where is the prior distribution ( [ rhon ] ) defining the initial state .putting everything together we see that , for , is the probability of obtaining an asymptotic yield of using a combination of quantum state tomography and one - way hashing . if most of the prior distribution is concentrated on states with an entropy exceeding 1 bit , i.e. , if is small , then it will normally be a better strategy to precede the hashing procedure by a few iterations of the recurrence method .this is the content of the next section .if the generating function has no significant support on weights with , then hashing can not be used for entanglement purification , at least initially . it might still be possible , however , to distill some entanglement by using the more robust ( but far more wasteful ) technique of _ recurrence _ . in the recurrence algorithm ,an initial set of entangled qubit pairs is grouped into sets of 2 pairs each . in each set , one pair is designated the _ target _ pair , and the other the _ control _ pair .alice and bob thus have target qubits and control qubits each .alice now rotates all her qubits by about the axis , while bob rotates all his qubits by about the axis .each of them then performs a controlled - not operation from each control qubit onto the corresponding target qubit and measures his or her target qubit in the basis ( and ) .the target qubits are then discarded .if alice and bob both get the same result for a given target pair ( i.e. 
, both 0 or both 1 ) , the procedure has succeeded , and the control pair can be shown to have increased entanglement . if their results differ , the procedure has failed , and the control qubits must also be discarded . if the state of both target and control pairs is of form ( [ bell_mixtures ] ) , the probability of success is and the new state of the control pair after the measurement has weights if initially the largest weight exceeds 1/2 , then this procedure converges towards a pure bell state . the convergence is slow , however , and since more than half of all the pairs are discarded each time , the yield is generally low . suppose that instead of a product state we have an exchangeable state of the form ( [ rhon ] ) . we can carry out the procedure exactly as before , grouping the pairs into sets of two , with a target and a control pair . if there are initially $2n$ pairs in the state then after performing the measurements , alice and bob will get the same result $\nsucc$ times and different results $n - \nsucc$ times , leaving them with a new state of the form ( [ rho2n ] ) for the $\nsucc$ remaining pairs . for large $n$ , the posterior distribution will generally be sharply peaked about those weights which give a value of $\psucc$ close to $\nsucc / n$ . unlike hashing , the recurrence algorithm produces a posterior state which is exchangeable . we now turn to how we find this state in light of the measurement results . compared with the hashing algorithm , where precisely one bit of information is obtained in each round of the procedure , in the recurrence method much more information is obtained , namely the value of $\nsucc$ . we can therefore deduce the posterior distribution where $$p ( \nsucc | \w ) \propto [ \psucc ( \w ) ]^{\nsucc}\, [ 1 - \psucc ( \w ) ]^{n - \nsucc} \;,$$ and because the remaining states have been transformed according to ( [ neww ] ) , we must also change to the new variables . so the new state is where while this bayesian procedure is very simple compared to the hashing method , it is still a bit too complicated for simple illustration . there is , however , an even simpler variant of this technique that is easy to analyze . suppose that , instead of the general bell - diagonal state ( [ bell_mixtures ] ) , we have an initial werner state $$\rho_f = f\, | \psi^- \rangle \langle \psi^- | + \frac{1 - f}{3} \left( \openone - | \psi^- \rangle \langle \psi^- | \right) . \label{werner}$$ we can carry out the recurrence procedure exactly as above , with the probability of success $$\psucc ( f ) = f^2 + \tfrac{2}{3} f ( 1 - f ) + \tfrac{5}{9} ( 1 - f )^2 = \tfrac{1}{9} ( 8 f^2 - 4 f + 5 ) \;.$$ here $f$ denotes the fidelity of the state with the maximally entangled state , with $f > 1/2$ necessary for distillability . the recurrence procedure does not in general lead to a new state of form ( [ werner ] ) , but by twirling the state can be put in this form , at the cost of some increase in entropy . the new state has a fidelity $$f' = \frac{f^2 + \tfrac{1}{9} ( 1 - f )^2}{\psucc ( f )} = \frac{10 f^2 - 2 f + 1}{8 f^2 - 4 f + 5} \;. \label{new_fidelity}$$ suppose that we have $2n$ entangled pairs , with partial information sufficient to determine that they are all in a state of the form ( [ werner ] ) , but not to determine the exact fidelity . the joint state of the pairs is then $$\rho^{( 2n )} = \int_0^1 p ( f )\, \rho_f^{\otimes 2n}\, df \;.$$ we then group the pairs into sets of two and carry out the recurrence procedure on each set , with $\nsucc$ successful results . we can then deduce a revised generating function where $$p ( f | \nsucc ) \propto p ( f )\, [ \psucc ( f ) ]^{\nsucc}\, [ 1 - \psucc ( f ) ]^{n - \nsucc} \;,$$ and the new density operator for the remaining pairs is where the posterior distribution is expressed in terms of the new variable $f'$ given by ( [ new_fidelity ] ) . working this out explicitly , we get $$p' ( f' ) = p \big( f ( f' ) \big)\, \frac{df}{df'} \;,$$ where $f ( f' )$ is the inverse of ( [ new_fidelity ] ) : $$f ( f' ) = \frac{1 - 2 f' + 3 \sqrt{6 f' - 4 f'^2 - 1}}{10 - 8 f'} \;.$$ we can see how much information is gained by a single round of the recurrence method using this simplified version as an example .
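the simplified werner - state recurrence is easy to simulate . the sketch below implements the success probability and fidelity update given above and performs the bayesian update of a generating function discretized on a grid ; the flat prior and the observed success count are illustrative assumptions .

```python
import numpy as np

def p_succ(f):
    """Success probability of one recurrence round on a Werner state of fidelity f."""
    return (8 * f**2 - 4 * f + 5) / 9

def f_new(f):
    """Updated fidelity after a successful round (improves for f > 1/2)."""
    return (10 * f**2 - 2 * f + 1) / (8 * f**2 - 4 * f + 5)

# bayesian update of a generating function p(f) discretized on a grid
f = np.linspace(0.26, 0.99, 500)
prior = np.ones_like(f)                        # flat prior, an illustrative choice
n, n_succ = 1000, 780                          # assumed measurement record
posterior = prior * p_succ(f)**n_succ * (1 - p_succ(f))**(n - n_succ)
posterior /= posterior.sum() * (f[1] - f[0])   # normalize on the grid

f_map = f[np.argmax(posterior)]                # most probable initial fidelity
print(f_map, f_new(f_map))                     # fidelity before/after the round
```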
if the initial generating function is a uniform distribution , for , then for large the posterior distribution is highly peaked after one round .we see this in figure 1 , where the prior and posterior distributions are shown for different values of and a typical choice of .note that states with move towards under the procedure , producing a peak about the completely mixed state ; for high and the value of used in our example , this peak is suppressed by the bayesian updating .states with move towards .the procedure has fixed points at , , and .it should be noted that because of its extremely small yield , the recurrence method should never be used if hashing is possible . an initial state that can not be distilled by the hashing method , however ,might , after one or more rounds of the recurrence method , satisfy the criterion ( [ eq : defeta ] ) for some value of .if that is so , then a combination of tomography and hashing should be used thereafter , as described in the last section .similarly , if has some support on distillable and some on undistillable states , a few rounds of the recurrence method generally produces convergence on either a distillable or undistillable state , without ambiguity . under certain circumstances ,however , it might be beneficial to supplement this with tomographic measurements on a number of pairs as well .for example , the updating procedure ( [ neww ] ) treats the coefficients and symmetrically .an initially symmetric state thus has this symmetry preserved , and the distribution might become double - peaked . in this case , measuring a small number of pairs would suffice to eliminate one of the two peaks .we have given a bayesian account of the entanglement purification procedures of one - way hashing and recurrence .the bayesian formulation allows us to provide a straightforward discussion of the conditions under which maximally entangled states can be distilled from unknown or partially known quantum states . for one - way hashing, we have given the _ a priori _ probabilities for the possible asymptotic yields of maximally entangled pairs .our results can be used to decide which combination of quantum state tomography , recurrence , and hashing to use to obtain the highest expected yield , both asymptotically and in the case of a fixed number of initially given pairs .although our discussion is entirely in terms of pairs of qubits , the method is general and can be applied to any generalization of hashing or recurrence in hilbert spaces of higher dimension .we would like to thank howard barnum , oliver cohen , chris fuchs , and bob griffiths for helpful conversations .t.a.b . was supported in part by nsf grant no .phy-9900755 and doe grant no .de - fg02 - 90er40542 , r.s . was supported by the uk engineering and physical sciences research council , and c.m.c .was supported in part by onr grant no .n00014 - 00 - 1 - 0578 .some of this work was done at the july 2000 workshop on `` quantum information processing '' at the benasque center for science in benasque , spain .one possibility , whose analogue for classical probabilities was proposed by skilling [ j. skilling , in _ maximum entropy and bayesian methods _ , edited by j. skilling ( kluwer , dordrecht , 1989 ) , pp .4552 ] is to use , along with one of the proposed unbiased measures for . here is a parameter that characterizes one s confidence in the single - copy maximum - entropy assignment : for , the exchangeable state ( [ eq : exch ] ) becomes effectively the product state , but for , eq . 
( [ eq : exch ] ) predicts measurement statistics different from the product state .
|
a concern has been expressed that `` the jaynes principle can produce fake entanglement '' [ r. horodecki _ et al . _ , phys . rev . a * 59 * , 1799 ( 1999 ) ] . in this paper we discuss the general problem of distilling maximally entangled states from $n$ copies of a bipartite quantum system about which only partial information is known , for instance in the form of a given expectation value . we point out that there is indeed a problem with applying the jaynes principle of maximum entropy to more than one copy of a system , but the nature of this problem is classical and was discussed extensively by jaynes . under the additional assumption that the state of the $n$ copies of the quantum system is _ exchangeable _ , one can write down a simple general expression for the joint state . we show how to modify two standard entanglement purification protocols , one - way hashing and recurrence , so that they can be applied to exchangeable states . we thus give an explicit algorithm for distilling entanglement from an unknown or partially known quantum state .
|
hardware security has become an important aspect in modern integrated circuit ( ic ) design industry because of the global supply chain business model .identifying and authenticating each fabricated components of a chip is a challenging task .a physical unclonable function ( puf ) has been a promising security primitive such that its behavior , or challenge response pair ( crp ) , is uniquely defined and is hard to predict or replicate .a puf can enable low overhead hardware identification , tracing , and authentication during the global manufacturing chain .silicon delay based strong pufs have been studied intensively since its first appearance in because of its low implementation cost and large crp space compared with a weak puf . however , there are still design challenges that restrain a strong puf from being put in a widespread practical use .one of the major design challenges for a silicon delay based puf is the strict symmetric delay path layout requirement .the wire delays of the competing paths should be designed and matched carefully to avoid biased responses , otherwise low inter - chip uniqueness would make the puf unusable .in addition to asymmetric routing , another source of the biased responses for silicon based puf is the systematic process variation , which can also degrade the quality of a puf , such as uniqueness or unpredictability .finally , the metastability issue of the arbiter circuit for an arbiter puf can cause unstable puf responses , making a portion of the crp unusable due to their instabilities . for a delay based puf , the randomness should be contributed only by the subtle variations between devices , so having biased delay differences due to asymmetric routing is detrimental to delay based pufs , and such impact should be eliminated . however, a precise control of the routing can be a difficult and time consuming task .an implementation of an arbiter puf on field programmable gate array ( fpga ) is considered much more difficult than a ro puf because the connections to the arbiter circuit must also be symmetric , and performing completely symmetric routing is physically infeasible in most cases , resulting small inter - fractional hamming distance ( fhd ) for an arbiter puf .one of the most common solutions to the asymmetric routing is to use hard - macros in fpga designs , but it is not effective with arbiter puf , and some less commonly - used features of the fpga would be required .other approaches try to extract randomness by xoring the outputs of multiple arbiter pufs at the cost of large hardware overhead and less stability . in ,the authors proposed using ` middle ' bits instead of the most significant bit ( msb ) as the ro puf response measurement .the measurement can effectively eliminate the biased responses , but an efficient way of predicting the inspection bit is not described , and the presented ro puf is not a strong puf . 
an rtl - based puf bit generation unit was proposed in , but to the best of our knowledge , a strong puf that can be implemented efficiently without any layout constraints has not yet been proposed . the existence of systematic process variation can degrade the quality of silicon based pufs because the local randomness should be the only desired entropy source of the delay based puf . the effect of systematic process variation is similar to having biased wire delay between two delay paths , which can also damage the uniqueness of the puf . another possible vulnerability caused by systematic variation is the induced process side channel attack as described in . due to intra - wafer systematic variation , pufs fabricated at the same region on different wafers can have similar systematic behavior , which can be exploited as a process side channel attack . to account for systematic variations , a compensation technique is proposed in , which requires careful design decisions to compare ro pairs that are physically placed close to each other . in , the systematic variation is modeled and subtracted from the puf response to distill true randomness at the cost of model calculation . similarly , in , the averaged ro frequency is subtracted from the original frequency , where the multiple measurements of each ro can lead to large latency overhead . in , a method is proposed to extract local random process variation from total variation ; however , a second order difference calculation is needed , and the hard - macro technique must be applied to construct symmetric delay paths . the idea of an arbiter puf is to introduce a race condition on two paths , and an arbiter circuit is used to decide which one of the two paths is reached first . the arbiter circuit is usually a d flip - flop or an sr latch . if two signals arrive at an arbiter within a short time , the arbiter circuit may enter a metastable state due to setup time violation . once the arbiter circuit is in a metastable state , the response becomes unstable . to eliminate the inconsistency caused by metastability of the arbiter circuit , existing approaches use majority voting or choose the paths that have a delay difference larger than the metastable window , at the cost of crp characterization and discarding the unstable crps . in this paper , we propose the physical implementation bias agnostic ( unbias ) puf that is immune to physical implementation bias . the contributions of this paper include : * we propose the first strong unbias puf that can be implemented purely by rtl without imposing any physical routing constraints . * efficient inspection bit selection strategies based on intra-/inter - fhd prediction models are proposed and verified on the strong unbias puf . the proposed strong unbias puf compares two delay paths to generate puf responses . similar to an arbiter puf , each bit of the challenge of the unbias puf specifies the path configuration of the delay path . as shown in figure [ figure : biaspuf ] , the challenge bits c1 and c2 specify the path configurations , and a one - bit response is extracted from the difference register , which can be several bits long . once a challenge is given , a signal is applied at trigger . each of the clock counters begins to count the number of clock cycles of the system clock ( clk ) whenever the signal from trigger is propagated to the start input of the counter , and stops counting whenever the signal from trigger is propagated to the stop input of the counter .
for each challenge, the difference value of the two clock counters is stored in the difference register for further response extraction , which is described in details in section [ sec : measurement ] .the purpose of the ros inserted between path configurations is to increase the path delay so that it will take multiple clock cycles for the signal to propagate to stop the clock counter .as shown in figure [ figure : ro_delay ] , each ro is associated with a ro counter that counts the number of oscillations of the ro .the ro counter starts counting when the signal from its previous path configuration is arrived , and propagates the signal to the next path configuration only when the count reaches a certain threshold .all the ros are composed of same number of inverters and neither configurations nor any layout constraints are needed . unlike the conventional arbiter puf , the strong unbias puf has no metastability issues caused by a d flip - flop or a latch .the delay difference of the two paths is transformed into counter values of the system clock . by judiciously extracting the response from the differenceregister , the physical implementation bias can be effectively mitigated , therefore the unbias puf can be implemented purely by rtl without any routing or layout constraints .details of the response extraction are described in section [ sec : measurement ] .in this section we describe how different selections of the inspection bit can change the intra- and inter - fhd .figure [ figure : ud ] shows an example of a distribution of values from difference registers of symmetrically routed unbias pufs .the length of the difference register is 22-bit , so the range of the register value is between and as represented in 2s - complement .the large inter - chip measurement curve gives the distribution of the values across all pufs .since the puf is unbiased , roughly half of the difference values would be greater than zero due to random local process variation , therefore the inter - fhd of the unbias pufs would be close to 50% . in this case , the inspection bit is simply the msb , which divides the range of 22-bit difference value into two groups and .all measurements fall into on the left output a 1 ; others output a 0 .the small intra - chip measurement curve gives the distribution of multiple measurements of the puf on a same chip .due to noise , the difference values could be different , so the intra - fhd of the difference register may not be a perfect 0% . even though symmetric unbias puflayout is much preferred , it is difficult and takes much effort and overhead to achieve such requirement as described in section [ section : r_works ] . in practice ,if no layout constraints are imposed , the measurement distribution of the difference register can be as shown in figure [ figure : bd ] , where most of the difference values across chips are greater than zero . in this case , using the msb as the inspection bit would cause low inter - fhd of the pufs because most msbs are 0 s . 
for the same biased distribution shown in figure [ figure : bd ] , if bit $k$ is used as the inspection bit of the difference register as figure [ figure : bd_bin ] shows , the range of the 22-bit difference value is divided into multiple bins of width $2^k$ , where the output of the measurement is decided by the bin in which it resides . note that in this case the response is not an indicator of which delay is longer in the comparison . the smaller the width of the bin is , the closer the inter - fhd is to 50% , because roughly half of the outputs would reside in bins that output a 1 even with biased delay . on the other hand , the width of the bin should be large enough so that multiple measurements of the same puf always fall into the same bin . in other words , the width of the bin should be larger than the variation of the intra - chip measurement distribution . therefore , the choice of inspection bit is a tradeoff between inter - fhd and intra - fhd for a puf with asymmetric routing . ( figure [ figure : bd_bin ] : the range of the difference value is divided into bins of width $2^k$ ; roughly half of the measurements fall in bins that output a 1 , therefore the inter - fhd would be close to 50% . ) the intra - fhd depends on the width of the bins when the inspection bit is bit $k$ . a straightforward way to determine the associated intra - fhd for each inspection location is to gather multiple measurements of the same challenge on the same puf , and simply calculate the intra - fhd for each $k$ . a more efficient approach is to predict the intra - fhd without calculating it for each $k$ . to predict the intra - fhd of a challenge for an inspection bit , we first obtain $n$ measured difference registers of the challenge on the same puf . since the bin width and the range of the difference register are known , the difference values can be divided into two groups ( responses ) according to the bins they reside in . let the number of difference values that fall in bins labelled 1 be $n_1$ , and the number of difference values that fall in bins labelled 0 be $n_0$ . $n_1$ and $n_0$ represent the number of responses of the challenge that are one and zero during the $n$ measurements , respectively . since the intra - fhd is essentially calculated from the response difference between any two measurements , the predicted intra - fhd is calculated as : $$\text{predicted intra - fhd} = \frac{2\, n_1 n_0}{n ( n - 1 )} \;,$$ where the final predicted intra - fhd would be the averaged intra - fhd of all challenges . as shown in figure [ figure : vw ] , the expected intra - fhd of the first challenge is 0% because all measurements fall in the same bin and $n_1 n_0 = 0$ . the expected intra - fhd of the second challenge depends on the portion of measured values that fall in each bin . with larger bin width , it is more likely that all responses would fall into the same bin . ( figure [ figure : vw ] : example with three bins ; $w$ is the bin width and the measurement ranges for the two challenges are specified . the expected intra - fhd of the first challenge is 0% and the expected intra - fhd of the second depends on the portion of measured values that fall in each bin . ) the inter - fhd depends on the bin width $w$ for a given inspection bit . assume the distribution of the inter - chip difference value is a normal distribution .
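before turning to the inter - fhd model , here is a small sketch of the inspection - bit response extraction and the intra - fhd prediction described above ; the register width , the candidate bits and the simulated noisy measurements are illustrative assumptions of ours .

```python
import numpy as np
rng = np.random.default_rng(7)

WIDTH = 22                                   # difference-register width (illustrative)

def response(diff, k):
    """Response bit = bit k of the two's-complement register value (bin width 2**k)."""
    word = int(diff) & ((1 << WIDTH) - 1)    # wrap to a WIDTH-bit word
    return (word >> k) & 1

def predicted_intra_fhd(diffs, k):
    """Predicted intra-FHD of one challenge from n repeated measurements."""
    bits = [response(d, k) for d in diffs]
    n, n1 = len(bits), sum(bits)
    n0 = n - n1
    return 2 * n1 * n0 / (n * (n - 1))       # differing pairs over all pairs

# 10 noisy measurements of one challenge on one (biased) puf
diffs = 150_000 + rng.normal(0, 400, size=10)
for k in (8, 12, 16):                        # candidate inspection bits
    print(k, predicted_intra_fhd(diffs, k))  # small bins flip under noise
```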
define to be the distance between the mean and the closest bin boundary on the left , as figure [ figure : iwc ] shows . we first prove that the worst - case inter - fhd happens when \delta = w/2 , and then give the prediction model of the inter - fhd for the worst - case scenario . given a fixed w , define and to be the total underlying areas in the two response groups as functions of \delta , respectively . for any normal distribution , they are calculated as : \begin{aligned} a_1(\delta ) & = \sum_{n=-\infty}^{\infty } \big [ F(-\delta+2nw+w ) - F(-\delta+2nw ) \big ] & \text{[ eq : a1 ]}\\ a_0(\delta ) & = 1 - a_1(\delta ) & \text{[ eq : a0 ]}\end{aligned} where F is the cumulative distribution function ( cdf ) of the normal distribution , w is the bin width , and n is the index for the bin area summation . the ratio of the two areas is defined as : r(\delta ) = a_1(\delta ) / a_0(\delta ) \quad \text{[ eq : ratio ]} where the range of \delta is from 0 to because of its periodic structure . the closer the ratio is to one , the closer the inter - fhd would be to 50% , because the two areas are closer to each other . we want to show that the largest ( most unbalanced ) ratio happens at \delta = w/2 , as figure [ figure : iwc ] shows . to find the extreme value of the ratio given a fixed w , we take the derivative of equation [ eq : ratio ] with respect to \delta and replace a_0 by 1 - a_1 from equation [ eq : a0 ] : from equation [ eq : ratio_de ] we see that finding the extreme value of the ratio is equivalent to finding the solutions of d a_1 / d\delta = 0 , which is given below : \frac{d a_1}{d\delta } = \sum_{n=-\infty}^{\infty } \big [ f(-\delta+2nw ) - f(-\delta+2nw+w ) \big ] \quad \text{[ eq : a1_de ]} where f is the probability density function ( pdf ) of the normal distribution . equation [ eq : a1_de ] shows that the derivative is a summation of differences between two pdf terms , where one is a shifted version of the other by w . therefore , applying \delta = w/2 to equation [ eq : a1_de ] , we get zero . figure [ figure : iwc ] shows that when \delta = w/2 , each difference term in equation [ eq : a1_de ] has its counterpart at the location mirrored about the center , so that the summation becomes zero . to conclude our derivation , given the bin width w of an inspection bit , the extreme value of the ratio happens at \delta = w/2 , and the inter - chip standard deviation is needed for the calculation . to predict the inter - fhd , we calculate the probability that any pair of chips produces different responses . the inter - fhd prediction given the bin width of the inspection bit is : \text{inter - fhd } = 2\ , a_1(\delta)\ , a_0(\delta ) \quad \text{[ eq : iner ]} with a_1 = a_0 , the two areas are the same , resulting in a predicted 50% inter - fhd . given a selected w , plugging \delta = w/2 into equation [ eq : iner ] gives the predicted inter - fhd lower bound . please note that to predict the inter - fhd , the inter - chip standard deviation is needed because the calculation involves the cdf . however , the mean does not affect the prediction because the extreme value is obtained by finding the worst case . also , since changing the inspection bit results in at least a 2x change of the bin width , the inter - chip standard deviation does not have to be calculated with high accuracy . it can be obtained by pre - layout simulation or by measuring a small number of chips . given the error correction code ( ecc ) specification corresponding to the puf design , the intra - fhd threshold can be defined .
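a small java sketch of the worst - case inter - fhd prediction above ; the erf - based normal cdf and the truncation of the infinite sum are our own choices , and all names are illustrative :

....
// sketch : predicted inter - fhd lower bound for bin width w , given the
// inter - chip standard deviation sigma , evaluated at the worst case delta = w/2 .
public final class InterFhdPrediction {
    // standard normal cdf via the abramowitz - stegun erf approximation 7.1.26
    static double cdf(double x) {
        double z = Math.abs(x) / Math.sqrt(2);
        double t = 1.0 / (1.0 + 0.3275911 * z);
        double poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                - 0.284496736) * t + 0.254829592) * t;
        double erf = 1 - poly * Math.exp(-z * z);
        return x >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
    }

    // a1 : total probability mass in the bins whose response is 1 ( truncated sum )
    static double a1(double delta, double w, double sigma) {
        double sum = 0;
        for (int n = -1000; n <= 1000; n++) {
            sum += cdf((-delta + 2 * n * w + w) / sigma)
                 - cdf((-delta + 2 * n * w) / sigma);
        }
        return sum;
    }

    // probability that two random chips disagree : 2 * a1 * a0
    static double interFhdLowerBound(double w, double sigma) {
        double p1 = a1(w / 2, w, sigma);
        return 2 * p1 * (1 - p1);
    }
}
....

as a sanity check on the model , when w is much smaller than sigma the two areas approach 1/2 each and the bound approaches 50% , matching the discussion above .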
from the intra - fhd prediction model , we choose a set of candidate bits that satisfy the intra - fhd threshold requirement . from the candidate bits , the best inspection bit can then be determined by applying the inter - fhd prediction model , given the standard deviation of the inter - chip delay distribution . please note that only one chip is needed for the inspection bit selection , since the measurement noise is similar for all chips in our experiment and the inter - chip standard deviation is obtained from pre - layout simulation . the location of the final inspection bit , which is public information , is passed to all pufs for the secret response generation . the strong unbias puf structure is implemented on 7 altera de2 - 115 fpga boards . in our implementation , no physical constraints , additional xors , tunable delay units , or any systematic variation compensation techniques are used . the design is purely an rtl design . the ros inserted between path configurations are composed of 19 inverters , and the signal is propagated to the next path configuration when the ro counter associated with the ro reaches a count of 50,000 . the unbias puf has 10 path configurations , therefore the challenge is 10 bits long . the length of the difference register is 19 bits , and the length of the final response for each challenge is one bit . for our experiment , 120 challenges are applied , and 120 bits of responses are obtained for each puf within a second . please note that the ro structure and the count of the ro counter are selected given the 50 mhz system clock of the fpga . the results are similar as long as no overflow occurs in the 19 - bit difference register . the inter - fhd is obtained from 7 fpgas , and the intra - fhd is calculated by measuring each puf 10 times . to show the inter - chip variation and measurement noise of our experimental setup , we measure the frequency of a single ro across the chips 10 times ; the inter - chip variation is 6.1% with 0.2% measurement noise . to validate the intra - fhd prediction model , we follow the procedure described in section [ sec : up ] with measurements . figure [ figure : intra_predict ] shows the results of the intra - fhd prediction for two inspection bits : the intra - fhd of the lower bit is much higher because its bin width is much smaller . [ figure caption : intra - fhd prediction for two inspection bits on 7 fpgas ; the lower bit has much larger intra - fhd because its bin width is smaller . ] to validate the inter - fhd prediction model , for each challenge we obtain an inter - chip standard deviation from the 7 fpgas , and the final value used in the prediction model is the median over the 120 challenges . the results shown in figure [ figure : inter_predict ] indicate that the inter - fhd lower bound prediction matches the measured data well .
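to summarize the selection procedure described at the start of this section , a minimal java sketch combining the two prediction models ; all names are illustrative :

....
// sketch : pick the inspection bit . candidates must meet the ecc - derived
// intra - fhd threshold ; among them , the bit whose predicted inter - fhd is
// closest to 50% is selected .
public final class InspectionBitSelection {
    static int selectBit(double[] predictedIntra,  // indexed by bit position
                         double[] predictedInter,  // from the worst - case model
                         double intraThreshold) {
        int best = -1;
        double bestGap = Double.MAX_VALUE;
        for (int k = 0; k < predictedIntra.length; k++) {
            if (predictedIntra[k] > intraThreshold) continue; // not a candidate
            double gap = Math.abs(0.5 - predictedInter[k]);
            if (gap < bestGap) { bestGap = gap; best = k; }
        }
        return best; // -1 when no bit meets the intra - fhd requirement
    }
}
....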
to demonstrate that the inter - fhd prediction model does not require an accurate inter - chip standard deviation estimate , figure [ figure : inter_predict ] also shows the prediction range under variation of the standard deviation . we can see that the differences between the predictions are limited , which indicates that the standard deviation can be obtained either from pre - layout simulation or from measurements of a small number of chips . the prediction gap is relatively large when the bin width is much larger than the standard deviation . however , as the bin width becomes comparable to the standard deviation , where potential inspection bits begin to occur , the prediction curve rises quickly and matches the measured data well . figure [ figure : inter_predict ] also shows that this location should be a proper inspection bit , because the intra - fhd is low and the inter - fhd is close to 45% . the results of inter - fhd and intra - fhd with different inspection bit selections are shown in figure [ figure : inter_predict ] . as we can see from the figure , using bits closer to the msb gives low intra - fhd but also low inter - fhd . this verifies the fact that the delay paths are biased if no physical implementation constraints are imposed . on the other hand , using bits closer to the lsb gives 50% on both intra - fhd and inter - fhd because of the measurement noise . as predicted , the best inspection location appears at the identified bit , with 45.1% inter - fhd and 5.9% intra - fhd . the results also indicate that the systematic variation is mitigated , because no constraints are imposed at all . [ figure caption : inter - fhd and intra - fhd with different inspection bit selections of the strong unbias puf . ] table [ table : compare ] shows comparison results with previous work . for the conventional arbiter puf ( apuf ) shown in the second column , the results from show that the circuit is essentially a constant number generator with very little inter - fhd . the third column shows the 3 - 1 double arbiter puf with xors , where a symmetric layout is still required , and the hardware overhead is 2x or 3x from the duplicated circuits depending on the uniqueness requirement of the application . the inter - fhd is close to 50% but the intra - fhd is high due to the xors . the fourth column shows the results from the path delay line ( pdl ) puf . a symmetric pdl and delay characterization for each crp are required , which can cause scalability issues . also , the ability to eliminate biased responses is limited because it depends on the number of tuning stages inserted . the last column shows the proposed strong unbias puf . its behavior is unique and stable , and most importantly no symmetric layout is required at all . [ table : compare caption : comparison between previous arbiter pufs and the strong unbias puf . ] for temperature and voltage variations , the reference responses are measured at 20 with a standard voltage of 12v . the reference responses are then compared with responses measured at 20 and 75 with 10% voltage variation . the results indicate the reliability of the puf when it is enrolled under normal conditions but verified in a high - temperature environment with an unstable voltage source . figure [ figure : variations ] shows the intra - fhd using the selected bit as the inspection bit . all intra - fhd values at 20 with 10% voltage variation are below 8% , and all intra - fhd values at 75 with 10% voltage variation are below 14% , which is still within the conventional ecc margin with error reduction techniques for pufs .
compared with the ro puf presented in , one possible explanation of the smaller intra - fhd of our strong unbias puf is that , with multiple ro delay units , the overall delay variation averages out , whereas for the ro puf the variation of each ro is compared directly . we proposed the first strong unbias puf that can be implemented purely in rtl without complex post - layout analysis or hand - crafted physical design effort . the proposed measurement scheme can effectively mitigate the impact of biased delay paths and metastability issues to extract local device randomness . the inspection bit can be determined efficiently from the intra - fhd and inter - fhd prediction models . the strong unbias puf is implemented on 7 fpgas without imposing any physical layout constraints . experimental results show that the intra - fhd of the strong unbias puf is 5.9% and the inter - fhd is 45.1% , and the prediction models fit the measured data closely . the averaged intra - fhd of the strong unbias puf at the worst temperature and voltage variations is about 12% , which is still within the margin of conventional ecc techniques . the fact that the proposed scheme is immune to physical implementation bias allows the strong unbias puf to be designed and integrated with minimal effort in a high - level description of the design , such as during rtl design .
|
the physical unclonable function ( puf ) is a promising hardware security primitive because of its inherent uniqueness and low cost . to extract the device - specific variation from delay - based strong pufs , complex routing constraints are imposed to achieve symmetric path delays ; and systematic variations can severely compromise the uniqueness of the puf . in addition , the metastability of the arbiter circuit of an arbiter puf can also degrade the quality of the puf due to the induced instability . in this paper we propose a novel strong unbias puf that can be implemented purely by register transfer language ( rtl ) , such as verilog , without imposing any physical design constraints or delay characterization effort to solve the aforementioned issues . efficient inspection bit prediction models for unbiased response extraction are proposed and validated . our experimental results of the strong unbias puf show 5.9% intra - fractional hamming distance ( fhd ) and 45.1% inter - fhd on 7 field programmable gate array ( fpga ) boards without applying any physical layout constraints or additional xor gates . the unbias puf is also scalable because no characterization cost is required for each challenge to compensate the implementation bias . the averaged intra - fhd measured at worst temperature and voltage variation conditions is 12% , which is still below the margin of practical error correction code ( ecc ) with error reduction techniques for pufs .
|
it has long been known through empirical studies that in a population of socially interacting individuals where each individual node holds an opinion from a binary set , a small fraction of _ initiators _ holding the opinion opposite to the one held by the majority can trigger large cascades and eventually result in a dominant majority holding the initiators opinion . some recent studies have investigated such phenomena in the context of the adoption of scientific , cultural , and commercial products . one of the simplest models that captures adoption dynamics , irrespective of context , is the threshold model . according to the threshold model , an individual changes its opinion only if a critical fraction of its neighbors have already adopted the new opinion . this required fraction of new adoptees in the neighborhood is designated the _ adoption threshold _ . here , we denote the adoption threshold by . since its introduction , the threshold model has been studied extensively on complex networks to analyze the conditions under which a vanishingly small fraction ( of the total system size ) of initiators is capable of triggering a cascade of opinion change . in particular , these studies considered initial conditions with a single `` active '' node or an active connected clique ( a single node and all of its neighbors ) as initiators . in this scenario , the condition for global cascades in connected sparse random networks is , where is the average degree of the network . however , with a few exceptions , little attention has been paid to the question of how the size and the selection of this initiator fraction affect the spreading of an opinion in the network , in particular in the regime where a single active node or a small clique is insufficient to trigger global cascades . in the case of multiple initiators , how to select these initiators from among the nodes of the network so as to maximize the spread ( cascade size ) remains an open question . to address this issue we compare three different heuristic ways of selecting a set of initiators of predefined size on erdős - rényi ( er ) random networks . specifically , we look at the size of the spread for a varying range of the average degree of the er networks . as found earlier for the case of cascades triggered by single initiators , we find that when the average degree is too low or too high , large cascades are not triggered . however , within an intermediate range of average degrees , large cascades are realized . this range is referred to as the _ cascade window _ . we find that the width of this cascade window is largest when the initiator nodes are selected successively in descending order of degree , starting with the node having the largest degree . we also find that the total time taken for the cascade to terminate is shortest for this selection strategy .
in both er and empirical networks it was observed that for a given average degree there is a critical threshold such that cascades are only triggered if for a single - node or a very small initiator set . here , we systematically study the effect of varying the initiator fraction with the threshold held fixed , for the entire range of values of the adoption threshold . we find that for any given threshold there exists a critical value of the fraction of initiators , above which global cascades can be triggered . we discuss the dependence of this critical fraction on the threshold , which turns out to be a smooth curve separating the two phases : one in which cascades are observed and the other where cascades can not be triggered . this finding constitutes an important insight into how _ local _ neighborhood - level thresholds can constrain the emergence of tipping points for cascades on global scales on sparse graphs . we note that in refs . , the authors went beyond basic heuristic selections of the initiators ( targets ) by employing a systematic greedy selection and a scalable influence maximization algorithm , respectively ; however , they did not explore the region for global cascades ( and the corresponding tipping point of initiators required to trigger them ) , but rather only focused on the regime . in ref . , assuming locally tree - like structures , the authors developed an asymptotic approach to approximate the size of the cascades . this method is expected to work better for random graphs with small average degree ( with negligible presence of triads ) and to gradually break down for graphs with higher average degree . we will comment on its applicability in determining the tipping point in the results section . details of the network structure beyond the average degree also play an important role in the spreading process . the network s degree distribution and the presence of community structure and local clustering can significantly affect the dynamics of spreading and vulnerability to cascades in both social networks ( driven by influencing ) and infrastructure networks ( driven by load - based failures ) . to elucidate the effect of clustering , we study the effect of network rewiring on the cascades triggered by different methods . specifically , starting from an empirical network with a community structure and relatively high clustering , we redistribute the links in the network while preserving the original degree sequence , using a number of different methods . cascades are found to be larger and more likely in the original network , which , in addition to having an inherent community structure , has a much higher clustering coefficient ( essentially capturing the density of triads ) . these results indicate that local clustering , just like in the case of a single - node ( or single - clique ) initiator , facilitates the spreading of global cascades in the case of multiple initiators as well . a recent study also considered cascades in the threshold model in multiplex networks ( a natural framework and terminology for interdependent networks in the social setting ) . in this case , individuals can be connected by multiple types of edges ( representing multiple kinds of social ties , e.g. , colleagues , friends , or family ) . it was shown that multiplex networks facilitate cascades , i.e.
, increase the social network s vulnerability to spreading . in the threshold model , every node in the network can be in one of two possible states , ( inactive ) or ( active ) , which can also be thought of as signifying distinct binary opinions on an issue . the typical initial condition for studying threshold model dynamics is one where all nodes except a minority - the initiators - are in the inactive state . then , the dynamics proceeds as follows . at each time step , a node is selected at random . if the node is inactive , it becomes active if at least a threshold fraction of its neighboring nodes are active , i.e. , in the active state . the active state is assumed to be permanent , i.e. , once a node becomes active it remains active indefinitely . the system evolves according to these rules until no further activations can occur . the threshold , in general , can be different for every node , but for simplicity we consider the case where every node has the same threshold . the size of the cascade , at any point during its evolution or after it has terminated , is quantified by the fraction of active nodes in the network . in the following sections we discuss the simulation of this dynamics for various network topologies . the decision that a node will adopt depends only on the states of its neighbors . if the fraction of its neighborhood in the active state exceeds the threshold , then the node updates its state . as a result of this _ threshold condition _ , a node s degree plays an important role in determining how easily it can be influenced . the threshold condition is more easily satisfied for a low - degree node than for a high - degree node , since the former requires fewer active nodes to be present than the latter , given a fixed adoption threshold for all nodes . similarly , the average degree of the network determines to what extent , if at all , the entire network can be influenced . for a fixed number of initiators , high - degree nodes are less likely to get influenced , because their threshold condition is more difficult to satisfy . a high average degree is therefore not a desirable condition for cascades . on the other hand , for low average degree , the network consists of disconnected clusters of sizes less than , and cascades remain confined to one or a few of these clusters . as a result , global cascades only become possible in an intermediate range of average degrees - the cascade window . in general , cascade window sizes depend on both the threshold and the initiator fraction . the precise choice of initiators also plays an important role in the size of the cascade and consequently the cascade window itself . a strategic selection of initiators can dramatically increase the average size of the spread , which we denote by . here , we compare three heuristic strategies for selecting a set of initiators constituting a fraction of the total network size : ( _ i _ ) random selection , ( _ ii _ ) selecting nodes in descending order of their degrees , and ( _ iii _ ) selection in descending order of -shell index . in ( _ ii _ ) and ( _ iii _ ) , the choice of initiators may not be unique . if there are many sets of initiators that can be selected for the same degree ( or -shell ) , one of these sets is selected at random . the simulation results are shown in fig . [ cwindow](a ) for a fixed fraction of initiators on an er graph with and . we first look at the average spread size as a function of the average degree on an er random graph , as shown in figure [ cwindow](a ) .
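a minimal java sketch of the dynamics and of the highest - degree selection strategy just listed , with illustrative names ; since activation is permanent and monotone , sweeping to a fixed point reaches the same final state as random sequential updates :

....
import java.util.Arrays;
import java.util.Comparator;

public final class ThresholdCascade {
    // adj : neighbor lists ; active : initiators marked true on entry ; phi : threshold
    static double run(int[][] adj, boolean[] active, double phi) {
        boolean changed = true;
        while (changed) {                    // iterate to a fixed point
            changed = false;
            for (int i = 0; i < adj.length; i++) {
                if (active[i] || adj[i].length == 0) continue;
                int on = 0;
                for (int j : adj[i]) if (active[j]) on++;
                if ((double) on / adj[i].length >= phi) {
                    active[i] = true;        // activation is permanent
                    changed = true;
                }
            }
        }
        int count = 0;
        for (boolean a : active) if (a) count++;
        return (double) count / adj.length;  // cascade size
    }

    // strategy ( ii ) : mark a fraction p of nodes as initiators , highest degree first
    static boolean[] highestDegreeInitiators(int[][] adj, double p) {
        int n = adj.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingInt((Integer i) -> -adj[i].length));
        boolean[] active = new boolean[n];
        for (int k = 0; k < (int) Math.round(p * n); k++) active[order[k]] = true;
        return active;
    }
}
....

ties among equal - degree nodes are broken arbitrarily here , consistent with picking one admissible initiator set at random .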
when is small , all three strategies perform equally well because the network consists only of small clusters without a giant component and hence spread is localized to those clusters . as soon as becomes large enough for a giant component to arise , the spread covers a large portion of the network .further increasing makes it harder for the nodes to satisfy the threshold condition and decreases again . to understand the differences in the performance of these heuristics, we first note that there are two distinct aspects determining the efficacy of a node as an initiator .first , it must be capable of influencing a large number of nodes , i.e. it should have a large degree .second , it must be connected to nodes which have an easily satisfiable threshold condition i.e. the degrees of its neighbors must be sufficiently low . additionally , and related to the first point , it also makes sense to choose the highest - degree nodes as initiators , since they are the hardest to influence . in light of these arguments, the highest - degree selection strategy appears to be a natural choice for generating large cascades .it would appear that high -shell nodes are a comparably good choice , since high -shell nodes also possess a high degree .however , by construction , nodes in the highest -shells are a special subset of the high - degree nodes that are predominantly connected to other nodes of high - degree . in other words , nodes selected in descending order of their -shell index have fewer easily influencable neighbors than nodes selected purely on the basis of degree .this qualitatively explains why the -shell method does not perform as well as the high - degree selection .finally , the random selection works the poorest since it largely selects low - degree nodes which trigger a small number of cascades many of which frequently terminate when they encounter a high - degree node .an increase in the initiator fraction makes the cascade window wider by allowing cascades to occur for even higher values as shown in fig .[ cwindow](b ) where is increased to .the selection strategies follow the same ranking in this case as well .results obtained from simulations indicate that highest degree method also works better ( followed by the -shell method ) in terms of the speed of the cascade .the results for and are shown in figs .[ cwindow](c ) and ( d ) , respectively .as discussed in the previous section , for a small ( -size ) seed of initiators , cascades can only occur if is smaller than a critical value ( for sparse random graphs ). however , this does not hold if we introduce a sufficiently large fraction of initiators in the system .we look at the quantity ( average fraction of nodes in state ) as a function of .( we will refer to as cascade size for short . ) gradually increasing shows that in the beginning when , ( global ) cascades are not observed .when reaches a critical value , a discontinuous transition occurs and large cascades are seen immediately as shown in fig .[ sn_p_differentphis](a ) .the need for a minimum critical fraction of committed nodes for consensus has been observed in different models of influence ( see discussion for more details ) . since starting with a finite accounts for a large number of nodes in state , the relevant quantity to look at is the number of nodes that were initially in state and eventually adopted state ( i.e. 
, excluding the initiators ) .thus , we define which measures the fraction of non - initiator nodes that participate in the cascade .transitions in are shown in fig .[ sn_p_differentphis](b ) for different values and several network sizes .it can clearly be seen that the transition only depends upon and is independent of system size .this transition ( the emergence of the tipping point ) is quite generic in the threshold model , and can be observed in networks with different sizes and average degrees , as well as for different selection methods for initiators ( see supplementary information sections s.1 and s.2 for more details ) .the critical point in each case is calculated by numerically computing the derivative of with respect to and finding its maximum .having calculated allows us to explicitly look at the relationship between and as shown in fig .[ pc_phi](a ) for different average degrees . as increases ,all curves appear to converge to the limiting case of the fully - connected network ( complete graph ) for which .therefore , for a given threshold the minimum number of initiators needed to trigger large cascades can be estimated .we also employed a previously developed asymptotic method to estimate analytically ( see supplementary information section s.3 for more details ) .this method uses a tree - approximation for the network structure and calculates the cascade size by assuming a progressive , directed activation of nodes from the surface of the tree to the root .consequently , the method works well only for low and low . for large ,the tree - approximation breaks down , while for large , deviations from the assumed progressive and directed activation of levels , become significant .the comparison of the analytically predicted using this method to values obtained from simulations clearly show regions of approximation validity and breakdown [ fig .[ pc_phi](a ) ] . for a fixed and ,we also studied by simulations how the selection of initiators affect the critical fraction .simulation results in fig .[ pc_phi](b ) show that selection of initiators by their degree works better than the other two methods across the range of threshold .in this section we study how the dynamics of the threshold model is affected by structural changes in the network .we study the dynamics on an empirical high - school friendship network , using one particular network from the add health data set ( also employed in ) and a few degree - sequence preserving randomized versions of it .[ _ add health _ was designed by j. richard udry , peter s. bearman , and kathleen mullan harris , and funded by a grant p01-hd31921 from the national institute of child health and human development , with cooperative funding from 17 other agencies . for data files contact add health , carolina population center , 123 w. franklin street , chapel hill , nc 27516 - 2524 , addhealth.edu , http://www.cpc.unc.edu/projects/addhealth/ ( accessed june 20 , 2013 ) . 
] to simplify things , we extract the giant component from the high - school network which has nodes and .hereafter , we only consider the giant component of this network and refer to it as the high - school network .the initiator fraction is kept fixed at .the network contains two communities which are roughly equal in size .we generate two distinct ensembles of networks from this high - school network by employing the following randomization methods : 1 .the link swap method ( henceforth referred to as _ x - swap _ ) in which two links are selected at random and then one end point of a link is swapped with the end point of the other link .an x - swap step is disallowed if it results in fragmentation of the network .this swapping is done repeatedly so that the network is randomized to an extent that any community structure , local clustering , or degree - degree correlation is eliminated .2 . the exact sampling method by del genio et al .( dktb ) , a connected network is constructed from the degree sequence of the original network .the algorithm takes as input the exact degree sequence of the network and joins the link stubs from different nodes until every stub has been paired with another stub . both methods of randomization leave the degree sequence unchanged .( results for x - swapped and exact sampling are very similar and we only show them in detail for the former . ) we look at the size of spread as a function of time for in the original high - school network fig .[ s_t_xs_ex](a ) and the x - swapped high - school network fig .[ s_t_xs_ex](b ) , while fig .[ s_t_xs_ex](c ) shows the direct comparison between the corresponding ensemble - averaged time series .analogous plots for are shown in figs .[ s_t_xs_ex](d f ) . for the empirical high - school network ,some runs reveal the existence of community structure in the network where spread is faster in one community compared to other .more specifically , in some of these runs , the cascade first sweeps one of the communities ( while the other one resists ) before it becomes global .this can be seen by the step - like evolution in the corresponding time series in fig .[ s_t_xs_ex](a ) [ randomized networks do not exhibit this behavior , see figs .[ s_t_xs_ex](b ) ] .the same phenomena can also be observed in the configuration snapshots in fig .[ visuals](a ) , while their randomized counterparts do not show this behavior [ fig .[ visuals](b , c ) ] . in general , the results show that triggered cascades are larger and more likely for a network with high local clustering than for a randomized network with the same degree sequence [ fig .[ s_t_xs_ex ] ] , although the impact of clustering is diminishing for larger values of .note that the clustering coefficient of the original high - school ( hs ) graph is ; for its randomized versions obtained by x - swaps ( xs ) and exact - degree sequence ( dktb ) construction are ( see supplementary information section s.4 for more details ) . the average cascade size [ fig .[ s_phi](a ) and ( b ) ] and the probability of global cascades [ fig .[ s_phi](c ) and ( d ) ] as a function of threshold also indicate that strong clustering ( present in empirical networks ) facilitates threshold - limited spreading .( we define a global cascade as a cascade that covers at least percent of the network size . ) hence , this important feature of threshold - limited spreading is preserved for the case of multiple initiators studied here . 
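a minimal java sketch of the x - swap rewiring described above ( illustrative names ) ; it rejects self - loops and duplicate edges , while the connectivity check that forbids fragmenting swaps is omitted for brevity :

....
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public final class XSwap {
    // edges : undirected edge list as int pairs ; a swap replacing ( a , b ) and
    // ( c , d ) by ( a , d ) and ( c , b ) keeps every node's degree unchanged .
    static void randomize(int[][] edges, int swaps, Random rng) {
        Set<Long> present = new HashSet<>();
        for (int[] e : edges) present.add(key(e[0], e[1]));
        for (int done = 0; done < swaps; ) {
            int i = rng.nextInt(edges.length), j = rng.nextInt(edges.length);
            int a = edges[i][0], b = edges[i][1], c = edges[j][0], d = edges[j][1];
            if (a == d || c == b) continue;                       // self - loop
            if (present.contains(key(a, d)) || present.contains(key(c, b))) continue;
            present.remove(key(a, b));
            present.remove(key(c, d));
            edges[i][1] = d;
            edges[j][1] = b;
            present.add(key(a, d));
            present.add(key(c, b));
            done++;
        }
    }

    static long key(int u, int v) { // order - independent edge key
        return u < v ? ((long) u << 32) | v : ((long) v << 32) | u;
    }
}
....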
the temporal evolution of the average cascade size in the original hs network , its two randomized versions , and an er network of the same size and with the same average degree is shown in fig . [ s_t_avg ] .the two methods of randomization ( x - swap and exact sampling ) roughly give the same cascade size . in case of randomized networks , for some realizations spread reaches the full network [ fig .[ visuals](c ) ] and for some realizations spread is minuscule [ fig . [ visuals](b ) ] and therefore .finally , analogous to fig .[ sn_p_differentphis ] , we show the emergence of global cascades ( at the tipping point ) in the high - school network , as the density of initiators is varied [ fig . [ s_p_hs ] ] .several recent studies have addressed , for a variety of agent - based opinion spreading models , the impact of a special set of initiators viz .inflexible individuals , also referred to synonymously as committed or stubborn agents , true believers , zealots , or inflexible contrarians .the rules of state updating ( or opinion switching ) in these models is symmetric , and governed purely by the local density of states in the neighborhood of a node . in such a system , the inflexible nodes constitute a special set of nodes which never change their opinion , thereby breaking the symmetry of the system and giving rise to tipping points beyond which the entire network conforms to the state adopted by the committed agents .it has been shown that the emergence of tipping points in some of these models is related to metastable regions and barriers ( saddle points ) in the corresponding opinion landscapes . because these models allow frequent changes of state or opinion at the individual level , these models are more suitable for scenarios where switching an individual s state incurs virtually no cost .in contrast to the above models , the threshold model ( or the qualitatively similar threshold contact process ) is more suited to modeling the diffusion of innovations or adoption of new products where investment in a new idea comes at a cost , and the incentive to switch back after becoming active is low . here , spreading is an asymmetric process and is also inhibited by a local threshold : individuals can only adopt the new product or norm if a sufficient fraction of their neighbors have already done so .( the threshold model or threshold contact process , in spirit , is closer to the family of susceptible - infected - susceptible- or contact - process - like models , in that the spreading of a disease or norm is an inherently asymmetric process by the rules of the local dynamics . )the focus of this work was to identify tipping points for global cascades triggered by multiple initiators and governed by local thresholds .our findings demonstrate that these tipping points emerge in both er and empirical high - school networks , in a qualitatively similar fashion .further , we studied three different heuristic strategies to select a fraction of initiators for the threshold model on er network as well as on an empirical network .our results demonstrate that selecting initiators by their degree ( highest first ) results in the largest ( as well as fastest ) spread . 
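as described in the results above , the critical initiator fraction is located numerically as the maximum of the derivative of the cascade size with respect to the initiator fraction ; a minimal java sketch with illustrative names :

....
// sketch : estimate the tipping point p_c from a sampled curve s(p) .
public final class TippingPoint {
    static double critical(double[] p, double[] s) {
        double bestSlope = Double.NEGATIVE_INFINITY;
        double pc = p[0];
        for (int i = 1; i < p.length; i++) {
            double slope = (s[i] - s[i - 1]) / (p[i] - p[i - 1]); // finite difference
            if (slope > bestSlope) {
                bestSlope = slope;
                pc = 0.5 * (p[i] + p[i - 1]); // midpoint of the steepest interval
            }
        }
        return pc;
    }
}
....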
naturally , for high values of the local threshold ( ) , single initiators or small cliques can not trigger global cascades . we showed by simulations that there exists a critical value of the initiator fraction that is needed to trigger cascades for high values of the threshold . we also studied how structural changes , such as randomizing an empirical network using different randomizing methods , would affect the size of the cascades triggered ( in the cases studied here ) by multiple initiators . our simulation results on the empirical high - school network show that randomizing the network in fact results in narrower cascade windows compared to the original network with strong clustering , implying that clustering facilitates spreading in threshold - limited diffusion with multiple initiators . this work was supported in part by the army research laboratory under cooperative agreement number w911nf-09 - 2 - 0053 , by the army research office grant w911nf-12 - 1 - 0546 , by the office of naval research grant no . n00014 - 09 - 1 - 0607 , and by grant fa9550 - 12 - 1 - 0405 from the u.s . air force office of scientific research ( afosr ) and the defense advanced research projects agency ( darpa ) . the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s . government . p.s . , s.s . , b.k.s . and g.k . designed the research ; p.s . and s.s . implemented and performed numerical experiments and simulations ; p.s . , b.k.s . and g.k . analyzed data and discussed results ; p.s . , b.k.s . and g.k . wrote , reviewed , and revised the manuscript . competing financial interests : the authors declare no competing financial interests . lu , q. , korniss , g. & szymanski , b. k. threshold - controlled global cascading in wireless sensor networks . in _ proceedings of the third international conference on networked sensing systems ( inss 2006 ) _ ( transducer research foundation , san diego , ca , 2006 ) pp . 164 - 171 ; http://arxiv.org/abs/cs.ni/0606054 ( accessed june 20 , 2013 ) . kempe , d. , kleinberg , j. & tardos , e. maximizing the spread of influence through a social network . in _ proceedings of the 9th acm sigkdd international conference on knowledge discovery and data mining _ ( acm , new york , ny , 2003 ) , pp . 137 - 146 . chen , w. , yuan , y. & zhang , l. scalable influence maximization in social networks under the linear threshold model . in _ proceedings of the 2010 ieee international conference on data mining _ ( ieee computer society , washington , dc , 2010 ) , pp . 88 - 97 . sharan , r. , ideker , t. , kelley , b. p. , shamir , r. & karp , r. m. identification of protein complexes by comparative analysis of yeast and bacterial protein interaction data . 835 - 846 ( 2005 ) . del genio , c. i. , kim , h. , toroczkai , z. & bassler , k. e. efficient and exact sampling of simple graphs with given arbitrary degree sequence . e10012 ( 2010 ) . kim , h. , toroczkai , z. , erdős , p. l. , miklós , i. & székely , l. a. degree - based graph construction . 392001 ( 2009 ) . marvel , s. a. , hong , h. , papush , a. & strogatz , s. h. encouraging moderation : clues from a simple model of ideological conflict . 118702 ( 2012 ) . halu , a. , zhao , k. , baronchelli , a. & bianconi , g. connect and win : the role of social networks in political elections . _ europhys . lett . _ * 102 * 16002 ( 2013 ) . turalska , m. , west , b. j. & grigolini , p. role of committed minorities in times of crisis . _ sci . rep .
_ * 3 * 1371 ( 2013 ) . centola , d. , willer , r. & macy , m. the emperor s dilemma : a computational model of self - enforcing norms . 1009 - 1040 ( 2005 ) . mobilia , m. does a single zealot affect an infinite group of voters ? 028701 ( 2003 ) . mobilia , m. , petersen , a. & redner , s. on the role of zealotry in the voter model . p08029 ( 2007 ) . verma , g. , swami , a. & chan , k. the effect of zealotry in the naming game model of opinion dynamics . in proceedings of milcom 2012 , oct . 29 - nov . 1 ( 2012 ) . [ figure captions ( mathematical symbols stripped in extraction ) : fig . 1 : average cascade size as a function of the average degree on er networks for different selection strategies of multiple initiators ( panels a , b ) , and time evolution of the average cascade size for the same strategies ( panels c , d ) . fig . 2 : ( a ) cascade size as a function of the initiator fraction for er networks for different threshold values ; ( b ) scaled cascade size [ eq . ( [ s_scaled ] ) ] for different network sizes and threshold values . fig . 3 : ( a ) critical initiator fraction as a function of the local threshold value for er networks with various average degrees ; the dashed line corresponds to the exact limiting case on large complete graphs ( fully - connected networks ) . ( b ) critical fraction of initiators for three different selection strategies . fig . 4 : time series of the cascade size on the high - school ( hs ) network and its x - swapped randomized version with identical degree sequence , for two different fractions of initiators ; panels show individual runs , ensemble - averaged time series , and conditional averages over runs in which the spread reaches the entire network . fig . 5 : configuration snapshots ; nodes in the active state are colored red . ( a ) original high - school network ; ( b ) randomized network ( by x - swap ) when the eventual spread is local ; ( c ) the same randomized network for a run that reaches the whole network . fig . 6 : cascade size and probability of global cascades as functions of the threshold . fig . 7 : ( a ) cascade size as a function of the initiator fraction for different threshold values ; ( b ) scaled cascade size [ eq . ( [ s_scaled ] ) ] vs. the scaled initiator fraction . ]
|
a classical model for social - influence - driven opinion change is the threshold model . here we study cascades of opinion change driven by threshold model dynamics in the case where multiple _ initiators _ trigger the cascade , and where all nodes possess the same adoption threshold . specifically , using empirical and stylized models of social networks , we study cascade size as a function of the initiator fraction . we find that even for arbitrarily high value of , there exists a critical initiator fraction beyond which the cascade becomes global . network structure , in particular clustering , plays a significant role in this scenario . similarly to the case of single - node or single - clique initiators studied previously , we observe that community structure within the network facilitates opinion spread to a larger extent than a homogeneous random network . finally , we study the efficacy of different initiator selection strategies on the size of the cascade and the cascade window .
|
since bell s paper entanglement has been studied and explored in depth . saying that the quantum information branch emerged from extensive studies of the phenomenon of entanglement would not be an exaggeration . entanglement has been used in many information - processing applications in which it either yields an advantage over the classical setting , e.g. , in communication complexity , or where a classical counterpart simply does nt exist , e.g. , in quantum key distribution ( qkd ) , its device independent variant ( diqkd ) , teleportation , super dense coding , or pseudo - telepathy ( pt ) . although quantum theory allows for violations of bell inequalities ( bi ) , in certain cases the violations can not reach their maximum algebraically possible value . tsirelson was the first to find such upper bounds on bell values for quantum theory and to relate them to grothendieck s inequality . much research has been done to explain why quantum mechanics does not lead to `` algebraic '' violations of bell inequalities . in , wehner and oppenheim argue that the trade - off between steerability and uncertainty determines how non - local a theory is . in , cleve et al . gave an upper bound for the winning probability for xor games in the quantum setting ; their bound depends on the classical winning probability and grothendieck s constant . note that an xor game is a non - local game and that non - local games form a subset of general bell inequalities . the approach to bounding quantum violations via a grothendieck - type constant is now quite common and reasonably well understood . it leads to estimates for bell values that are of the form . in this work we develop a different strategy , where the bell value for a given inequality depends on the difference between the maximal algebraic value ( ) and the maximal deterministic value ( ) of the inequality in question . specifically , we study quantitatively bell inequalities with inputs ( henceforth bi ) and give a _ universal bound _ on quantum bell values of these inequalities . to find this bound for bi , we introduce the notion of the _ fraction of determinism _ ( fod ) and show that it depends only on the number of outcomes alice and bob have at their sites . we claim that the presence of fod prevents the quantum bell value from attaining the maximal algebraic value of a bell - type inequality . our paper is inspired by gisin et al . , which studied certain bell inequalities ( pseudo - telepathy ) for which quantum resources achieve algebraic violation . they show that to achieve such violations for these inequalities a minimum number of inputs is required . in other words , there is no bi for which quantum theory attains algebraic violation . here we uncover the heart of this effect , the fraction of determinism , and are able to give a quantitative bound for it . [ figure [ fig : trian_conj ] caption : the triangle inequality gives an upper bound , whereas _ reverse triangle inequalities _ give lower bounds for general quantum states and for classical ( or commuting ) states . ] while looking for a lower bound for fod , we proved a fundamental property of quantum states which is interesting on its own . namely , if and are far from , then any convex mixture of them is also far from . more precisely , if and for some , then , for all ] , consider = , _ i = , where corresponds to the plus sign and to the minus . one directly checks that tr_i+tr-_i- = 1+r - on the other hand , if , then tr+tr-- = 2r . in our setting , this means that if ( which covers all possible values ] ) , then .
\begin{aligned}
\epsilon_{00}\ge \tfrac{1}{4 } & \rightarrow & \min_{p_0\le q_0 \in [ 0,\frac{1}{2 } ] } \Big [ \max \Big \{ \frac{p_0}{4 } , \ 2q_1\Big(1-\frac{q_0}{p_1}\Big ) \Big \} \Big ] \ge \frac{5-\sqrt{17}}{8}=0.1096\\
\epsilon_{01}\ge \tfrac{1}{4 } & \rightarrow & \min_{p_0\le q_0 \in [ 0,\frac{1}{2 } ] } \Big [ \max \Big \{ 2q_1\Big(1-\frac{q_0}{p_1}\Big ) , \ \frac{p_0}{4 } \Big \} \Big ] \ge \frac{5-\sqrt{17}}{8}=0.1096\\
\epsilon_{10}\ge \tfrac{1}{4 } & \rightarrow & \min_{p_0\le q_0 \in [ 0,\frac{1}{2 } ] } \Big [ \max \Big \{ 2q_1\Big(1-\frac{q_0}{p_1}\Big ) , \ \frac{q_0}{4 } \Big \} \Big ] \ge 0.1123\\
\epsilon_{11}\ge \tfrac{1}{4 } & \rightarrow & \min_{p_0\le q_0 \in [ 0,\frac{1}{2 } ] } \Big [ \max \Big \{ \frac{q_1}{4 } , \ 2q_0\Big(1-\frac{q_1}{p_1}\Big ) \Big \} \Big ] \ge 0.1123
\end{aligned}
|
it is an established fact that entanglement is a resource . sharing an entangled state leads to non - local correlations and to violations of bell inequalities . such non - local correlations illustrate the advantage of quantum resources over classical resources . here , we study quantitatively bell inequalities with inputs . as found in [ n. gisin et al . , int . j. q. inf . 5 , 525 ( 2007 ) ] , quantum mechanical correlations can not reach the algebraic bound for such inequalities . in this paper , we uncover the heart of this effect , which we call the _ fraction of determinism _ . we show that any quantum statistics with two parties and inputs exhibits a nonzero fraction of determinism , and we supply a quantitative bound for it . we then apply it to provide an explicit _ universal upper bound _ for bell inequalities with inputs . as our main mathematical tool we introduce and prove a _ reverse triangle inequality _ , stating in a quantitative way that if some states are far away from a given state , then so is their mixture . the inequality is crucial in deriving the lower bound for the fraction of determinism , but is also of interest on its own .
|
parser generators have been used for decades to create translators and other language processing tools . a parser generator is a tool that reads a specification of a language and generates code which is capable of checking its input text for syntactical correctness and constructing an internal representation from it . this process involves several languages ( see figure [ languages ] ) : the specification is written in a _ specification language _ and describes the _ parsed language _ , while the generated code ( which recognizes the parsed language ) is written in an _ implementation language _ ( e.g. , java or c ) . a specification language usually includes a notation for context - free grammars and some means to specify _ semantic actions _ , which are the computations translating a program into the internal representation . in this paper we consider _ on - line _ parser generators such as yacc , antlr , coco / r and javacc , which are characterized by having the semantic actions defined on the concrete grammar , as opposed to attribute grammar ( ag ) systems such as eli and jastadd which define the computations on abstract syntax trees ( asts ) and usually require the complete input to be parsed before the computations are started . a parser generator translates a grammar specification into a parser and integrates user - defined semantic actions into it . since the user may make mistakes while writing the actions , this frequently leads to generating code that contains errors which are reported only by the implementation language compiler . we will now illustrate this problem with an example of a simple language of arithmetic expressions with variables . the antlr grammar for this language is shown in listing [ arithexp ] ( no semantic actions are presented at this point ) .

....
// lexical rules
fragment LETTER : 'a' .. 'z' | 'A' .. 'Z' | '_' ;
fragment DIGIT : '0' .. '9' ;
VAR : LETTER ( LETTER | DIGIT )* ;
INT : DIGIT+ ;

// syntactic rules
expr : term ( ( '+' | '-' ) term )* ;
term : factor ( '*' factor )* ;
factor : VAR | INT | '(' expr ')' ;
....

let us consider the case when one uses antlr to develop a parser that checks the syntax and evaluates the expressions in a given environment . evaluation of an expression is a special case of translation : the expression is essentially translated into a number . an environment stores values for all variables referenced in the expression . for example , if the parser is run on the input text `` ` x*(3 + 2 ) ` '' in the environment [ x=4 ] , it accepts the input and returns 20 . when run on `` ` ( x+*3 ) ` '' , it does not accept ( raises an exception ) , because the input is not syntactically correct . note that the notation in listing [ arithexp ] does not describe environments : an environment is passed as a separate argument to the parser ( see the examples below ) . to give an example of an error which may appear in the generated code , let us consider the following set of semantic actions for the rule ` factor ` :

....
factor[Environment env] returns [int result]
    : VAR { result = env.getValue($VAR.getText()); }
    | INT { result = $INT; }
    | '(' e=expr[env] ')' { result = e; }
    ;
....

when we run antlr on a specification containing this rule it will successfully generate code . the code generated for the second alternative will contain the following lines :

....
int result = 0;
// ...
Token int2 = null;
// ...
result = int2;
....
the java compiler yields an error message at the last line : a value of type ` token ` can not be assigned to a variable of type ` int ` . now we have to figure out that the cause of this error is that we forgot to extract the contents of the token by calling the ` gettext ( ) ` method on it , and correct the specification :

....
| INT { result = $INT.getText(); }
....

we run antlr again and in the generated code the erroneous line changes to the following :

....
result = int2.getText();
....

the java compiler complains again at the same line : a ` string ` can not be assigned to an ` int ` variable . we have to correct the specification again :

....
| INT { result = Integer.parseInt($INT.getText()); }
....

this time the generated code compiles . when several occurrences of the same symbol appear in a rule , they can be distinguished by labels , and actions can be attached to individual labels , as in the following rule for ` term ` :

....
term : f1=factor ( '*' f2=factor )* ;
....

....
at f1 : result = f ;                 // only for f1
at f2 : result = mul(result , f) ;   // only for f2
....

( a label ` t1 ` can be used in the same way for occurrences of ` term ` . ) the rule for ` expr ` and its translation function , which uses a label ` sgn ` and the positional qualifiers ` before ` and ` after ` , look as follows :

....
expr : term ( sgn=( '+' | '-' ) term )* ;
....

....
expr(Environment env) --> (int result) {
    at term : t = term(env) ;
    before sgn : sign = one() ;
    after '-' : sign = neg(sign) ;
}
....

to motivate again our choice of notation , we provide the same rule in antlr notation ( in listing [ exprantlr ] ) . as can be seen , it is rather hard to understand the structure of the syntactic rule from it , which is never the case for grammatic .

....
expr[Environment env] returns [int result]
    : { result = 0; sign = 1; }
      t=term[env] { result = t; }
      ( { sign = 1; }
        ( '+' | '-' { sign = -1; } )
        t=term[env] { result += t * sign; }
      )* ;
....

all functions used in our example have only one output attribute , but in general a function may return a tuple . for example , a function ` divide(x , y ) ` may return two numbers : a quotient and a remainder . grammatic does not support tuple - typed attributes , and a return value of this function can not be assigned to a single attribute . instead , grammatic supports attribute tuples . if one needs to receive a result of the ` divide ` function , it can be done in the following way :

....
int quot ;
int rem ;
( quot , rem ) = divide(x , y) ;
....

this code assigns the first component of the returned tuple to the attribute ` quot ` and the second to the attribute ` rem ` . before the first assignment , the value of an attribute is not defined and thus can not be read . to ensure that every attribute is initialized before its first usage , grammatic performs conventional data - flow analysis . if the analysis finds a read - access which may not be preceded by a corresponding write - access , the front - end reports an error . this analysis relies on the construction of a control flow graph . grammatic does not support conditional operators and loops as such , and all the branching and repetition happens according to the structure of grammar rules , which is denoted by the common regular operations : concatenation ( sequence ) , alternative ( `` | '' ) and iteration ( `` + '' ) . optional constructs ( `` ?
'' and `` * '' ) are viewed as alternatives with an empty option . a control flow graph is constructed as follows : concatenation corresponds to sequential execution , alternative corresponds to branching , and iteration corresponds to a loop . figure [ cfg ] shows a control flow graph for listing [ exprtf ] ; edges are labeled with the corresponding sequences of attribute reads and writes , indicated as `` [ r ] '' and `` [ w ] '' respectively . grammatic checks translation functions for type safety : when a value is assigned to an attribute ( passing it as an argument can also be interpreted as an assignment to an input attribute of a function ) , the type of the right - hand side must be a subtype of the type of the left - hand side . this prevents grammatic from generating code with typing errors such as those we discussed in section [ introduction ] . in listing [ exprtf ] , two attributes , ` env ` and ` result ` , are declared in the signature of the translation function to have types ` environment ` and ` int ` respectively , but the intermediate attribute ` t ` is not declared anywhere . what type does it have ? this is an example of _ local type inference _ , which makes grammatic feel more like dynamic languages : if some attribute is used but not declared , the type checker assumes that it is a local attribute and tries to figure out an appropriate type for it considering the context in which it is used . in our example , ` t ` is first assigned the value returned by the ` term ` function , which has an output attribute ` result ` of type ` int ` . writing τ for the yet unknown type of ` t ` , this usage gives int <: τ ( teqres ) then , ` t ` is passed to the ` mul ` function as a value for an input parameter ` y ` of type ` int ` : τ <: int ( yeqt ) these two usages facilitate the following reasoning : assuming that there exists a type τ for ` t ` such that the whole translation function is typed correctly , from ( [ teqres ] ) we see that ` int ` must be a subtype of τ , and from ( [ yeqt ] ) we see that τ must be a subtype of ` int ` . hence , τ equals ` int ` , and we have inferred the type for ` t ` successfully . the type checker in grammatic applies this kind of reasoning to every attribute which is used but not declared . in some cases this procedure does not lead to a definitive conclusion . in these cases the type checker reports an error , which can be reconciled by providing an explicit declaration of the attribute in question . not only attribute types but also signatures of external functions can be inferred in this manner . for example , assume the following assignment appears in a specification :

....
a = f(b , c)
....

if ` f ` is not declared , grammatic assumes that there is an external function ` f ` with one output attribute and two input attributes , and applies the above reasoning to these attributes . if it succeeds , a complete type for ` f ` is inferred . we provide a more detailed description of type checking and type inference in grammatic in the next section .
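the data - flow check described above can be sketched as a standard forward must - analysis over the control flow graph ; a minimal java version with illustrative names :

....
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class DefiniteAssignment {
    // succ : successor lists of cfg nodes ; reads / writes : attribute names
    // touched by each node ( reads are assumed to happen before writes ) .
    static List<String> check(List<int[]> succ, List<Set<String>> reads,
                              List<Set<String>> writes, int entry) {
        int n = succ.size();
        List<Set<String>> in = new ArrayList<>();
        for (int i = 0; i < n; i++) in.add(null);     // null = unreachable so far
        in.set(entry, new HashSet<>());
        boolean changed = true;
        while (changed) {                             // iterate to a fixed point
            changed = false;
            for (int i = 0; i < n; i++) {
                if (in.get(i) == null) continue;
                Set<String> out = new HashSet<>(in.get(i));
                out.addAll(writes.get(i));            // definitely written after node i
                for (int s : succ.get(i)) {
                    if (in.get(s) == null) { in.set(s, new HashSet<>(out)); changed = true; }
                    else if (in.get(s).retainAll(out)) changed = true; // must - merge
                }
            }
        }
        List<String> errors = new ArrayList<>();
        for (int i = 0; i < n; i++)
            if (in.get(i) != null)
                for (String r : reads.get(i))
                    if (!in.get(i).contains(r))
                        errors.add("attribute " + r + " might not have been initialized");
        return errors;
    }
}
....

intersecting the written sets over all incoming paths is what makes the analysis conservative : a read is accepted only if every path from the entry writes the attribute first .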
grammatic checks translation functions for type safety: when a value is assigned to an attribute (passing it as an argument can also be interpreted as an assignment to an input attribute of a function), the type of the right-hand side must be a subtype of the type of the left-hand side. this prevents grammatic from generating code with typing errors such as those we discussed in section [introduction]. in listing [exprtf], two attributes, `env` and `result`, are declared in the signature of the translation function to have types `environment` and `int` respectively, but the intermediate attribute `t` is not declared anywhere. what type does it have? this is an example of _local type inference_, which makes grammatic feel more like dynamic languages: if some attribute is used but not declared, the type checker assumes that it is a local attribute and tries to figure out an appropriate type for it, considering the context in which it is used. in our example, `t` is first assigned the value returned by the `term` function, which has an output attribute `result` of type `int`. writing τ for the unknown type of `t`, this usage yields the constraint

  int <: τ.    (teqres)

then, `t` is passed to the `mul` function as a value for an input parameter `y` of type `int`, which yields

  τ <: int.    (yeqt)

these two usages facilitate the following reasoning: assuming that there exists a type τ for `t` such that the whole translation function is typed correctly, from (teqres) we see that `int` must be a subtype of τ, and from (yeqt) we see that τ must be a subtype of `int`. hence, τ equals `int`, and we have inferred the type for `t` successfully. the type checker in grammatic applies this kind of reasoning to every attribute which is used but not declared. in some cases this procedure does not lead to a definitive conclusion; in these cases the type checker reports an error, which can be resolved by providing an explicit declaration of the attribute in question. not only attribute types but also signatures of external functions can be inferred in this manner. for example, assume the following assignment appears in a specification:

....
a = f(b , c )
....

if `f` is not declared, grammatic assumes that there is an external function `f` with one output attribute and two input attributes, and applies the above reasoning to these attributes. if it succeeds, a complete type for `f` is inferred. we provide a more detailed description of type checking and type inference in grammatic in the next section.

this section describes the extensible type system used in grammatic. the typing rules presented below are written under the assumption that all the attributes and external functions are declared explicitly. the purpose of type inference in this case is to reconstruct omitted annotations. if the reconstruction is not possible, the specification is considered to be inconsistent.

let the implementation language be denoted by l. the types of the implementation language will be denoted by g and referred to as _ground types_. let string ∈ g be the ground type which represents character strings. the types used in grammatic are defined by the following productions:

  type  ::=  g  |  tuple  |  tuple -> tuple
  tuple ::=  ( g a , ... , g a )

where g ranges over ground types and a over attribute names. as the names suggest, attributes may only have ground types, tuples are sequences of attributes and have corresponding types, and functions send tuples to tuples. as we explained above, attribute tuples are used to receive return values of functions having more than one output attribute. note that tuples can not be nested. function arguments and individual return attributes can only have ground types: for example, a single argument can not be a tuple.

a _subtyping relation_ on the set of ground types (denoted <:) represents the subtyping rules of the implementation language. we assume that it is reflexive and transitive. we will use g <: g' and g' :> g interchangeably.

each translation function is type-checked separately. a type-checking context comprises the signatures of all functions available in the specification, along with declarations of all input and output attributes of these functions and the local attributes of the function being checked. since attributes in different signatures may have the same names, each attribute is indexed with the name of the function it belongs to. for example, a context may contain the declarations term.env : environment, term.result : int and mul.y : int.

figure [exptypes] provides straightforward typing rules for token values, attributes and tuples, and a rule for function application which says that the type of an argument must be a subtype of the type of the corresponding formal parameter. figure [statypes] gives the rules denoting that all the statements respect the typing rules. the only nontrivial constraint is expressed by the rule assignment: the type of the right-hand side must be a subtype of the type of the left-hand side. as the left-hand side may be a tuple (as well as the right-hand side, in the case of function application), we use an extended subtyping relation ⊑, which is the minimal relation such that g ⊑ g' whenever g <: g', and (g1 a1, ..., gn an) ⊑ (g1' a1', ..., gn' an') whenever gi <: gi' for every i.
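since ⊑ only lifts <: pointwise, its implementation is a few lines; a sketch, with the ground relation passed in as a predicate:

[source, java]
----
import java.util.List;
import java.util.function.BiPredicate;

class ExtendedSubtyping {
    // pointwise lifting of the ground subtyping <: to tuple types;
    // a tuple type is the list of the ground types of its attributes
    static <T> boolean extSubtype(List<T> left, List<T> right, BiPredicate<T, T> ground) {
        if (left.size() != right.size()) return false;   // tuples of equal width only
        for (int i = 0; i < left.size(); i++)
            if (!ground.test(left.get(i), right.get(i))) // g_i <: g_i'
                return false;
        return true;
    }
}
----

a single ground type is just the one-element case, so the assignment and function application rules only need this one predicate.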
the previous subsection formalizes the type system of grammatic under the assumption that every attribute and every function which is used is also declared explicitly. as we illustrated above, such declarations may be redundant in some cases, and grammatic (after many programming languages) provides a local type inference mechanism to enable omission of some of them. type inference works separately in each translation function. it represents all the statements as sequences of attribute assignments (function arguments are treated as "assigned" to input attributes), assigning unknown types to undeclared attributes. to reconstruct the declarations we use a modification of a conventional constraint-based algorithm, which creates constraints (subtyping inequalities) for the unknown types and finds ground types which satisfy these constraints (see section [typeinf] for a simplistic example). our modifications to the classical algorithm are not significant enough to formally present the entire type inference process. instead, we will only illustrate the behaviour of our algorithm in the case of an ambiguity. let us assume that the implementation language has the types integer and object, where integer <: object. consider the following function:

....
f(integer x ) -- > ( object result ) {
  before ... : t = x ; // ...
  after ... : result = t ;
}
....

the local attribute `t` is not declared. let τ be the unknown type for `t`. the type inference algorithm will construct the following set of constraints:

  integer <: τ,    τ <: object.

these constraints have at least two solutions: both integer and object might be assigned as a type for `t`. the very existence of a solution already means that the specification does not have type inconsistencies, but the back-end will need the exact type information to generate code, so we have to decide which type to choose. this may be important when inferring the types for external functions, since they will be visible to the user (we provide details on the back-end below). in such cases grammatic prefers lower bounds to upper bounds, which means that `t` will be assigned the type integer. the general procedure is the following: find a minimal solution satisfying all the lower bounds; if it also satisfies all the upper bounds, choose it as the final solution. if there are several (incomparable) minimal solutions for the lower bounds which satisfy all the upper bounds, the algorithm can not decide between them and reports an error. if no solution for the lower bounds satisfies all the upper bounds, the specification is inconsistent, and we again report an error. if no lower bounds are present, we choose a _maximal_ solution for the upper bounds. this procedure can be summarized as follows: we look for the type which is as close to the constraining ones as possible, preferring more concrete (smaller) types. this approach appears to be rather intuitive: it is very unlikely to infer a type which the developer does not expect. if no constraints are present at all, this means that all the attributes connected to the one at hand are not declared. in this case we ask whether the ground type system has a top type (such as `java.lang.object`), and if it has one, we choose it; otherwise we report an error.
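one way to realize this selection procedure is to enumerate the ground types satisfying all the constraints and keep the minimal (or, without lower bounds, the maximal) ones; a sketch, under the assumption that the set of ground types is finite, as it is in the declarative descriptions below:

[source, java]
----
import java.util.*;
import java.util.function.BiPredicate;

class BoundSolver {
    // picks a type for an undeclared attribute from its subtyping constraints;
    // returns null when the choice is ambiguous or impossible (reported as an error)
    static <T> T choose(Collection<T> allTypes, Set<T> lower, Set<T> upper,
                        BiPredicate<T, T> sub, T top) {
        if (lower.isEmpty() && upper.isEmpty()) return top;   // null top => error
        List<T> solutions = new ArrayList<>();
        for (T t : allTypes)
            if (lower.stream().allMatch(l -> sub.test(l, t))  // above all lower bounds
             && upper.stream().allMatch(u -> sub.test(t, u))) // below all upper bounds
                solutions.add(t);
        boolean preferSmall = !lower.isEmpty();               // lower bounds win
        List<T> best = new ArrayList<>();
        for (T t : solutions) {
            boolean dominated = false;
            for (T o : solutions)
                if (!o.equals(t) && (preferSmall ? sub.test(o, t) : sub.test(t, o)))
                    dominated = true;                         // a closer solution exists
            if (!dominated) best.add(t);
        }
        return best.size() == 1 ? best.get(0) : null;         // must be unique
    }
}
----

for the function above, choose({integer, object}, {integer}, {object}, <:, null) returns integer, as required.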
now we are ready to explain how the example from section [introduction] is handled by grammatic. in that example we had a rule for `factor` analogous to the one in listing [factortf], in which there was an error: the name of the `int` token was used instead of its textual value (`int#`). here is the modification of listing [factortf] containing the same defect:

....
factor : var | int | ' ( ' expr ' ) ' ;
factor(environment env ) -- > ( int result ) {
  after var : result = evaluate(env , var # ) ;
  after int : result = int ;
  at expr : result = expr(env ) ;
}
....

unlike antlr, grammatic reports the following error when we try to generate code: _"the local attribute int might have not been initialized"_. what happened? `int` appears on the right-hand side of an assignment, so grammatic expects it to be an attribute. since such an attribute is not declared, the type checker treats it as a local attribute and infers a type for it: `int`. for the time being, no error has been found. after the type checking, the definite assignment analysis is performed, and it finds that the local attribute `int` is read but never assigned. this leads to the error which is reported. without generating the code and running a compiler, we have got an error message which points precisely to the place in the specification where the defect is situated. following the logic of the example of section [introduction], we correct the specification:

....
after int : result = int # ;
....

now the type checker complains: _"incompatible types: string and int"_. again, we have got a precise error message without generating code and running a compiler. we correct the specification again:

....
after int : result = strtoint(int # ) ;
....

this time all checks are passed successfully; the code is generated and will be compiled with no errors. as can be seen, the development cycle now takes the form shown in figure [cycles] (right side), which was our main goal. now we proceed to a description of the tools grammatic provides to support many implementation languages.

the type checking procedure described above is parameterized by a set of ground types with distinguished string and (optional) top types, and a subtyping relation <:. to incorporate the type system of an implementation language into grammatic, one needs to substitute concrete implementations for these parameters. in general, this is done by adding front-end _extensions_ (plug-ins written in java), which provide support for particular implementation languages. developing such extensions requires some effort and may be undesirable in certain cases. for this reason grammatic provides a default extension which supports declarative descriptions of the type systems of implementation languages. a type system description specifies a set of named types, a subtyping relation on these types, an optional top type and a string type. for example, the type system used above may be described as follows:

....
typesystem simple (
  _ ,      // name of the top type ( nothing in this case )
  string   // name of the string type
) {
  // type declarations
  type int ;
  type environment ;
  type object ;
  // subtyping rules
  environment < : object ;
  string < : object ;
}
....

this denotes a type system named `simple`, with no top type (an underscore is used to denote this; if there were a top type, its name would have been written) and with the string type called `string`. it declares three more types: `int`, `environment` and `object`. the first two were used above, but the third one is added only for demonstration purposes, namely to introduce subtyping rules which state that `environment` and `string` are subtypes of `object`. note that since `object` is not the top type, `int` is not its subtype. in the general case, the subtyping rules stated in a type system description form an incomplete description of a subtyping relation; the final subtyping relation is obtained as its reflexive-transitive closure.
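the closure can be computed once, when the description is loaded; a small warshall-style sketch over the declared rules:

[source, java]
----
import java.util.*;

class DeclaredSubtyping {
    // declared: pairs {sub, super} as written in a type system description;
    // returns a map m with m.get(a).contains(b) iff a <: b in the closure
    static Map<String, Set<String>> closure(Set<String> types, List<String[]> declared) {
        Map<String, Set<String>> sup = new HashMap<>();
        for (String t : types) {
            Set<String> s = new HashSet<>();
            s.add(t);                          // reflexivity
            sup.put(t, s);
        }
        for (String[] d : declared)
            sup.get(d[0]).add(d[1]);           // the declared rules themselves
        for (String k : types)                 // transitivity (warshall iteration)
            for (String i : types)
                if (sup.get(i).contains(k))
                    sup.get(i).addAll(sup.get(k));
        return sup;
    }
}
----

for the description `simple` above, this yields environment <: object and string <: object together with the reflexive pairs, while `int` stays unrelated to `object`.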
type system descriptions such as the one shown above are sufficient to check grammatic specifications for type safety and to infer types, but they are not sufficient to generate code unless the implementation language has types with exactly the same names as used in the description. the latter is unlikely, because normally types are defined inside some kind of namespaces, such as java packages, and can not be referred to by a simple name without import statements. thus, we have to provide another description which _instantiates_ the types with their actual form in the implementation language. we call this a _language description_:

....
language java for simple {
  int = ' int ' ;
  environment = ' java.util.map<string , integer > ' ;
  string = ' string ' ;
  object = ' object ' ;
}
....

the example shows a language description named `java`, which instantiates the type system `simple` and says that `int` is implemented as `int` in java, and `environment` is implemented as a map from strings to integer objects. we might not specify the instantiations for `string` and `object`, since they may be referred to just by their names. now a back-end can use the strings we have provided in generated code, and it will work correctly (if the back-end generates java and not c or some other language). a back-end may need some extra information, such as a package to put the generated code in and a name to give to the generated parser. these options are provided in a _back-end profile_, such as

....
backend ' org.grammatic.pg.backends.antlrjavabackend ' for java {
  package = ' org.example.arithexp ' ;
  parsername = ' expressionevaluator ' ;
}
....

this is a profile for a back-end implemented by a java class `antlrjavabackend`, which applies to the language description `java` defined above. in the profile one simply writes name-value pairs which are processed by the back-end as options. to summarize, the declarative descriptions of type systems in grammatic are organized into three levels (see figure [typesystem]). this makes grammatic rather flexible when it comes to _multi-targeted_ parser specifications, from which parsers in many implementation languages must be generated. in such a case one describes an abstract _type system_ as shown above and provides many _language descriptions_ for it, so that each _back-end profile_ may use its own language. in some situations it is also convenient to have several back-end profiles for the same language, for example, when one needs to compare the performance of different implementations or while migrating from one back-end to another.

to provide tighter integration with a particular implementation language one can use a language-specific extension of the grammatic front-end instead of the default one described in the previous section. technically, a front-end extension consists of a ground type syntax specification and implementations of java interfaces which capture the semantical aspects of ground types: a subtyping relation, a set of predefined types and two distinguished types. let us describe these parts in more detail. the examples below present an extension supporting java types (including generics), which we developed in our prototype. the core specification of the grammatic notation (see listing [short_notation]) has extension points: it uses but does not define two nonterminal symbols, `type` and `declaration` (written in bold font in the listing). the language generated by `type` is the syntactical form of ground types. since the specification parser in grammatic is itself implemented using grammatic, `type` must be defined by grammar rules and translation functions. this is done in a separate specification file which is virtually "appended" to the generic specification when the whole system is built.

....
specification : declarations ? ( externalfunctionsignature | ( grammarrule translationfunction * ) ) * ;
attributedeclaration : type name ;
....

a syntactic function for `type` must return an instance of `java.lang.object` (grammatic is implemented in java); in other words, a type may be represented by an arbitrary object. the context-free rules for types in java 5 are given in listing [java_types]. for the sake of brevity we do not include the corresponding translation functions.
....
type
  : identifier typearguments ? ( ' . ' identifier typearguments ? ) * ( ' [ ' ' ] ' ) *
  | basictype ;
typearguments : ' < ' typeargument ( ' , ' typeargument ) * ' > ' ;
typeargument
  : type
  | ' ? ' ( ( ' extends ' | ' super ' ) type ) ? ;
basictype : ' byte ' | ' short ' | ' char ' | ' int ' | ' long ' | ' float ' | ' double ' | ' boolean ' ;
....

with the rules from listing [java_types] used for defining the syntax of ground types, a signature of the `evaluate` function may be the following:

....
evaluate (
  java.util.map<java.lang.string , java.lang.integer > environment
) -- > ( java.lang.integer result ) ;
....

as can be seen, fully qualified class and interface names and generics can be used as types for attributes. the names in the example above are quite long, which is inconvenient. in java this problem is solved with the help of imports. grammatic does not know about the structure of java types and can not support imports itself. instead, it provides a generic mechanism for adding arbitrary _declarations_ which are specific to an implementation language. the syntax for declarations is defined by the `declarations` nonterminal. a translation function for it does not take or return any attributes: it is supposed to collect the information about the declarations and store it internally, to be available when somebody else (e.g., the translation function for `types`) needs it. to support imports we can define `declarations` as follows:

....
declarations : importdeclaration * ;
importdeclaration : ' import ' identifier ( ' . ' identifier ) * ( ' . ' ' * ' ) ? ;
....

the corresponding translation functions will collect the information about imported types and provide it to the translation function for `types`. now we can use java imports in grammatic specifications, for example

[source, java]
----
import java.util.map ;

evaluate(map < string , integer > environment ) -- > ( integer result ) ;
----

the complete syntax of declarations which we use for java is given in listing [java_decl]. in addition to imports, it supports _options_, which are used by the back-end and specify auxiliary information (this corresponds to the back-end profiles of the default extension).

....
declarations : options ? importdeclaration * ;
options : ' # javaoptions ' ' { ' option+ ' } ' ;
option : name ' = ' string ' ; ' ;
importdeclaration : ' import ' identifier ( ' . ' identifier ) * ( ' . ' ' * ' ) ? ;
....

....
public interface isubtypingrelation < t > {
  boolean issubtypeof(t type , t supertype ) ;
}

public interface itypesystem < t > {
  isubtypingrelation < t > getsubtypingrelation ( ) ;
  set < t > getpredefinedtypes ( ) ;
  t gettoptype ( ) ;
  t getstringtype ( ) ;
}
....

the semantics of ground types is provided by java classes that must implement the interfaces shown in listing [java_intf]. a subtyping relation is represented by a class which implements the `isubtypingrelation<t>` interface, where the type parameter must be substituted by a class which is used by the extension to internally represent types, and the `issubtypeof` method returns true if and only if `type` is a subtype of `supertype` in the ground type system. for example, our implementation represents java types using the `egenerictype` abstraction from the eclipse modeling framework.
in this case the subtyping relation class is declared as follows:

[source, java]
----
public class javasubtypingrelation implements isubtypingrelation < egenerictype >
----

the other interface, `itypesystem`, has methods which return a subtyping relation represented as discussed above, a set of predefined types, a top type (or `null` if no top type exists in the ground type system) and a type for character strings. a back-end must use the `tostring()` method of type objects to obtain their textual representation.

by now we have presented an extensible front-end which can detect typing errors in specifications. here we will explain why detecting these errors is sufficient for the back-end to be able to generate error-free code. the techniques described below apply to virtually any language, and we believe that whenever the front-end can be extended to support a particular implementation language, a corresponding back-end can be developed. generating error-free code from a grammar with no semantic actions is relatively easy. the problems arise when we need to incorporate hand-written code fragments into the generated program. the grammatic front-end guarantees that the actions do not contain errors themselves, and thus errors may be caused only by conflicts between hand-written and generated code. for example, semantic actions may introduce names which are already used in the generated code, or require particular imports. the peculiar property of this sort of errors is that the back-end can always detect and prevent them while reading the internal representation of the specification. this is because the back-end has total control over the generated code: for example, to avoid name clashes, it is sufficient to rename variables, which the back-end can do. in the case when the back-end generates antlr specifications with semantic actions in java, we have to prevent the following types of errors:

* a name used in the specification is a java or antlr keyword.
* a name is used internally by antlr.
* a type requires some classes or interfaces to be imported.

other types of errors are either prevented by the analysis performed by the front-end (e.g., usage of uninitialized variables) or do not appear because of the particular structure of the code generated by antlr. naming problems are easy to prevent, since it is sufficient to use a fresh name, which can be obtained by adding numbers to original names (e.g., `result1`); the code remains readable enough and no errors appear. the import problem is also fixed straightforwardly: we can always import all the classes ever mentioned in the specification, or use fully qualified names if some short names clash. correctness of external function signatures is guaranteed by the following design of the generated parsers: along with an antlr specification, grammatic generates a java interface which has a method for each external function used in the specification. this interface must be implemented in order to provide the functions, which makes the java compiler check whether the functions are declared properly in the implementation. since this interface is read and implemented by a human, we select the most appropriate types during the type inference process (see section [typeinference]).
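the renaming technique is equally small; a sketch of the fresh-name generator, whose used set is seeded with the java and antlr keywords and the identifiers that antlr generates internally (covering the first two error types in the list above):

[source, java]
----
import java.util.Set;

class Names {
    // returns base if it is unused, otherwise base1, base2, ...
    // (e.g., a clashing "result" becomes "result1")
    static String fresh(String base, Set<String> used) {
        String candidate = base;
        for (int i = 1; used.contains(candidate); i++)
            candidate = base + i;
        used.add(candidate);              // reserve the chosen name
        return candidate;
    }
}
----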
to summarize, we have demonstrated how a back-end can prevent all the errors which are imposed by the structure of the generated code (and thus are not present in the specification, and not checked by a front-end). these techniques vary over implementation languages, but the essence stays unchanged: we can always generate code in such a way that these errors do not appear. thus, we have reached our goal: as soon as a specification is successfully type-checked, the generated code is error-free.

we are not aware of any parser generators which support multiple implementation languages and have type checking in their front-ends. the most popular tools supporting multiple implementation languages are antlr and coco/r, and they do not perform any type checking in their front-ends. in most cases these tools can not generate code in different implementation languages from the same specification (the specifications, thus, are not _multi-targeted_ in these systems), because the embedded actions are written in a particular implementation language and will not compile in another one. in sablecc, specifications are multi-targeted, which is achieved by having no semantic actions: a developer has to manually process parse trees using visitors. in contrast, grammatic supports both multi-targeted specifications (using type system descriptions) and semantic actions. the following attribute grammar systems are capable of reducing the development cycle to the one shown in figure [cycles] (right side). eli automatically tracks compiler errors back to the specification; this approach is tied not only to a specific implementation language (eli uses c), but also to a specific implementation of its compiler, since the format of error messages usually varies from one compiler to another. the team behind the jastadd system plans to integrate their own implementation of a java compiler into it, to check semantic actions; this approach is also tied to a specific implementation language. none of these systems provide appropriate means by which they could be extended to other implementation languages. grammatic allows for this via front-end extensions and separate back-ends.

this paper addresses the problem of type checking in the front-ends of parser generators supporting multiple implementation languages. the main goal is to prevent typing errors in the generated code, to avoid the need of manually tracing such errors back to their causes in the specification. we have demonstrated that type checking of the specifications, which we implemented in a prototype tool grammatic, helps to reduce the development cycle compared to the one imposed by the tools currently available (see figure [cycles]). our approach is designed to be extensible for use with multiple implementation languages. the principal contributions of this paper are the following:

* a grammatic specification language supporting
** semantic actions, but having no problem of tangling between grammar rules and action code;
** extensions to support type systems of implementation languages.
* a type checking procedure for this language, supporting local type inference, compatible with the extensions.
* a generic extension for declarative definitions of abstract type systems, their syntactical realizations for particular languages and configuration profiles for different back-ends, which in combination enable multi-targeted specifications.
* another extension providing tight integration with java.

we have also reported on a prototype back-end for generating antlr/java which, we believe, never produces erroneous code from successfully type-checked specifications. one possible direction in which to continue this work is to investigate the possibility of declaratively specifying back-ends for particular implementation languages, to obtain a complete generator from declarative specifications. another direction will be to integrate grammar inspections (such as heuristic ambiguity tests) into the static checking procedure.

this work was partly done while the author was a visiting ph.d. student at the university of tartu, under a scholarship from the european regional development funds through the archimedes foundation.

the author received his master's degree in computer science from st. petersburg state university of it, mechanics and optics in 2007. he is currently a ph.d. student at the mathematics department of the same university and a visiting ph.d. student at the institute of computer science of the university of tartu. his research interests include the problems of declarative language descriptions, type systems and domain-specific languages.
|
_parser generators_ generate translators from language specifications. in many cases, such specifications contain _semantic actions_ written in the same language as the generated code. since these actions are subject to little static checking, they are usually a source of errors which are discovered only when the generated code is compiled. in this paper we propose a parser generator front-end which statically checks semantic actions for typing errors and prevents such errors from appearing in generated code. the type checking procedure is extensible to support many implementation languages. an extension for java is presented, along with an extension for declarative type system descriptions. keywords: parser generator, type checking, type system, generated code, errors.
|
cellular automata are useful in a variety of problems related to statistical mechanics, traffic flow, and so on. the simplest sort of cellular automaton is the _elementary cellular automaton_, which is one-dimensional with two states. these have interesting behaviour such as chaos and turing-completeness, and are described extensively in wolfram's pioneering paper as well as in later works. the biham-middleton-levine (bml) traffic model was first proposed in 1992 to study traffic flow, as a 2d analog of the elementary rule 184. it consists of a rectangular lattice, with periodic boundary conditions, where each site may be empty, contain a red car, or contain a blue car. on each time step, all red cars synchronously attempt to move one step east if that site is empty; then all blue cars synchronously attempt to move one step south if that site is empty. for some parameter p ∈ [0, 1], the model is initialized by assigning to each site either a red car with probability p/2, or a blue car with probability p/2, or empty space otherwise. the model is interesting because, for small values of p, it self-organizes into a free flow where cars move freely without ever stopping, whereas for large p it converges to a global jam. it was initially believed that there is a sharp transition between these two phases, but in 2005 a stable intermediate phase was discovered for lattices of coprime dimensions, and in 2008 it was shown to exist for square lattices as well. the bml model's simplicity and interesting behavior have inspired research tweaking every aspect of the model, such as dimensionality, lattice geometry, boundary conditions, the update rule, initialization, and so on. many other cellular automaton models have been proposed for modelling traffic. although the bml traffic model was described as an "analog" of rule 184, no attempt was made to formulate a rigorous description of the analogy, or a general method of finding such analogs of other elementary rules. we show that other elementary cellular automata can be generalized into two or more dimensions with arbitrarily many states, by sequentially applying the same rule in each dimension. historically, work on two-dimensional cellular automata has mostly focused on the von neumann or moore neighborhood, resulting in a large neighborhood. by applying the same rule in each dimension sequentially, we only have a neighborhood of size 3 regardless of dimensionality.
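to fix ideas, the following is a minimal two-state sketch of this construction for two dimensions (our own illustration, not the paper's gpu implementation; the three-state models studied below only enlarge the lookup table):

[source, java]
----
class SequentialCA {
    // one synchronous sweep of an elementary (2-state) rule along each row,
    // with periodic boundary conditions; 'rule' is the wolfram code
    static int[][] sweepRows(int[][] g, int rule) {
        int h = g.length, w = g[0].length;
        int[][] out = new int[h][w];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++) {
                int l = g[i][(j + w - 1) % w], c = g[i][j], r = g[i][(j + 1) % w];
                out[i][j] = (rule >> ((l << 2) | (c << 1) | r)) & 1; // table lookup
            }
        return out;
    }

    static int[][] transpose(int[][] g) {
        int h = g.length, w = g[0].length;
        int[][] t = new int[w][h];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++) t[j][i] = g[i][j];
        return t;
    }

    // one step of the 2d model: the same 1d rule applied sequentially in each
    // dimension (rows, then columns), so the neighborhood stays of size 3
    static int[][] step(int[][] g, int rule) {
        int[][] afterRows = sweepRows(g, rule);
        return transpose(sweepRows(transpose(afterRows), rule));
    }
}
----

with rule = 184, one sweep per dimension moves the occupied sites east and then south, which is the flavour of behaviour that motivated the bml analogy.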
in this paper, we describe an extension of wolfram's naming scheme for elementary cellular automata to higher dimensions and states; then we describe a new discovery in the bml traffic model; finally, we describe and analyze a variety of different cellular automata with interesting behaviors, resembling various physical systems.

in elementary cellular automata, which have one dimension and two states, the update rule is represented as a single lookup table, which stores the state of the next cell as a function of itself and its two neighbors. the contents of the lookup table, when interpreted as a binary integer, is called the wolfram code. for example, rule 184 is a model of one-dimensional traffic flow where there is one type of car that attempts to move right if an empty site exists:

  neighborhood:  111  110  101  100  011  010  001  000
  new state:      1    0    1    1    1    0    0    0

one of the two-dimensional, three-state rules in this family anneals the lattice. put simply, if a site's neighborhood contains only cells of one colour, the site assumes that colour; otherwise, the site becomes empty. when the model is seeded with some initial distribution of red and blue cells (the same as in the bml model), these seeds aggressively expand to cover sites in their vicinities, and soon the lattice contains some red and blue regions separated by thin borders of empty cells. these meta-stable regions have boundaries closely approximating orthogonal polygons whose sides are approximately at 45 degrees to the horizontal. a side of such a polygon can move in the direction of its normal with speed inversely proportional to its length. as shorter sides move rapidly and join with parallel sides to become longer and stabler, the complexity of each of these regions is reduced over time and the model appears to be annealing (figure [annealfig]). the reason why the polygonal sides move with speed inversely proportional to length is that they are not in fact _exactly_ oriented at 45 degrees to the horizontal; instead, the side's slope is often off by one unit. the one "flaw" in the side propagates along the side back and forth, at a constant speed; at the end of each orbit of the flaw, the side effectively moves one unit. since the period of the flaw's orbit is proportional to the length of the side, the side advances at a speed inversely proportional to the length. ultimately, the model converges to either red or blue, or to some simple intermediate phase (e.g., half red, half blue, separated by parallel boundaries), independent of p. there does not appear to be any surprising phase change behavior for this model.

[figure annealfig: a grid, initialized in the same way as the bml traffic model; the same realization is shown from left to right after an increasing number of timesteps, illustrating the annealing behaviour.]
in the previous section, somewhat nontrivial behavior along the boundaries separating different phases was briefly discussed. more complex boundaries can form between more complex phases. for some rulestrings, when the state is initialized in the same way as the bml traffic model, the cellular automaton rapidly partitions itself into two or more different phases separated by a membrane-like frontier. like biological membranes, such a frontier can move in time, allow particles to cross it, merge with other membranes, and so on. a consequence is that the membranes are often transient or meta-stable. depending on the initial choice of p, the model may ultimately converge to one of the different phases. one such model is rule 152690720768, shown in figure [152690720768]. this model was found by random rulestring generation followed by visual inspection. its microscopic behavior is too complex to be studied in detail here, so instead we perform experiments to determine its sensitivity to p. the model appears to have a sharp transition around a critical density, above which the blue phase dominates and below which the white phase dominates (figure [membraneplot]). intriguingly, there are red cells sparsely interspersed throughout both phases.

[figure 152690720768: rule 152690720768 on a lattice.]
[figure membraneplot: the fraction of the lattice in each phase after a number of iterations; the transition appears to occur around a critical density.]
[figure: eight further realizations on lattices.]

the study of cellular automata such as the bml traffic model is severely restricted by the available computing power. for a moderately sized lattice, each time step requires one update per site, i.e., on the order of the number of lattice sites. since the model may take a large number of time steps to converge, a single simulation already takes several minutes on a typical cpu. past studies have used purpose-built hardware such as mit's cam8 architecture, for which development unfortunately stopped in 2001 due to a disk crash caused by "mindless jerks"; even then, it took a whole month for the simulations to run. later researchers have implemented the bml traffic model on a graphics processing unit (gpu), which is suitable for the task due to the embarrassingly parallel nature of the model. our implementation also uses the gpu to process several rows of the lattice in parallel. like earlier gpu implementations, ours uses the cuda technology; the graphics card used in our experiments is a single nvidia geforce gtx 970, which has 1664 cuda cores.
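to reproduce a measurement like figure [membraneplot] on a cpu, one sweeps the seeding density and records the final composition of the lattice. a sketch follows; `decode` assumes the natural base-3 reading of a rulestring (our reading of the naming scheme above), and the per-step update is passed in, e.g. a three-state version of the sequential sweep shown earlier:

[source, java]
----
import java.util.SplittableRandom;
import java.util.function.UnaryOperator;

class DensitySweep {
    // decodes a rulestring into a 27-entry lookup table, assuming the
    // base-3 generalization of the wolfram code (an assumption of ours)
    static int[] decode(long rule) {
        int[] table = new int[27];
        for (int i = 0; i < 27; i++) { table[i] = (int) (rule % 3); rule /= 3; }
        return table;
    }

    // fraction of sites in state 's' after T steps, for seeding density p
    // (each site: state 1 w.p. p/2, state 2 w.p. p/2, empty otherwise)
    static double fraction(int n, double p, int T, int s,
                           UnaryOperator<int[][]> step, long seed) {
        SplittableRandom rng = new SplittableRandom(seed);
        int[][] g = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double u = rng.nextDouble();
                g[i][j] = u < p / 2 ? 1 : (u < p ? 2 : 0);
            }
        for (int t = 0; t < T; t++) g = step.apply(g);
        int count = 0;
        for (int[] row : g) for (int v : row) if (v == s) count++;
        return count / (double) (n * n);
    }
}
----

plotting fraction(...) for the blue state against p should trace out the transition; averaging over several seeds smooths the curve.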
as a minor note, it is important for the implementation to use a good random number generator for large simulations. many common programming languages have a built-in pseudorandom number generator with a period of around 2^32, far too small for initializing simulations with billions of sites. our implementation uses the mersenne twister, with a period of 2^19937 - 1.

the main contribution of this paper is the proposal of a family of cellular automata operating on a neighborhood of size 3 for arbitrary dimensions and states, which operates by sequentially applying the same update rule in each dimension. this family includes the widely-studied bml traffic model, for which we present new results regarding the mobility of intermediate states. furthermore, we have briefly discussed the properties of some other cellular automata within this family, and their possible application to various problems including percolation and annealing. the advantages of using a cellular automaton for such problems include the simplicity of the model and the ease of implementation. future work includes a more thorough exploration of such models (figure [interesting]), as well as models with different numbers of dimensions and states. in particular, although this paper discusses the three-state case, the simpler two-state case is yet to be explored; for it, there are only 256 rules, the same number as for elementary cellular automata. for the implementation, a possible future optimization is the gigantic lookup table (glut) optimization, which groups adjacent sites into a block that is updated all at once. our implementation is available open source at `https://bitbucket.org/dllu/bml-cuda/`. an interactive visualization for alternative rules is available at `http://www.dllu.net/bml/`.
|
a general family of n-dimensional, k-state cellular automata is proposed, where the update rule is sequentially applied in each dimension. this includes the biham-middleton-levine traffic model, which is a 2d cellular automaton with 3 states. using computer simulations, we discover new properties of intermediate states for the bml model. we present some new 2d, 3-state cellular automata belonging to this family with applications to percolation, annealing, biological membranes, and more. many of these models exhibit sharp phase transitions, self-organization, and interesting patterns. keywords: cellular automata; transport phenomena; phase transitions; self-organization; bml model.
|
the quality of data can not be assessed without contextual knowledge about the production or the use of the data. actually, the notion of data quality is based on the degree to which the data fits or fulfills a form of usage; as a consequence, the quality of data depends on its context of use. it becomes clear that context-based data quality assessment requires a formal model of context, at least for the use of data. in this work we follow and extend a previously proposed approach, according to which the assessment of a database is performed by mapping it into a context that is represented as another database, or as a database schema with partial information, or, more generally, as a virtual data integration system with possibly some materialized data and access to external sources of data. the quality of the data in the database is determined through additional processing of the data within the context. this process leads to one (or possibly several) quality version(s) of the database, whose quality is then measured in terms of how much it departs from its quality version(s). in that approach, dimensions were not considered as contextual elements for data quality analysis. however, in practice dimensions are naturally associated to contexts: in some work they become the basis for building contexts, and in other work they are used for data access at query answering time. in order to capture general dimensional aspects of data for inclusion in contexts, we take advantage of the hurtado-mendelzon (hm) multidimensional data model, whose inception was mainly motivated by data warehouse and olap applications. we extend and formalize it in ontological terms. actually, in previous work an extension of the hm model was proposed with applications to data quality assessment in mind; that work was limited to a representation of this extension in a description logic (actually, an extension of dl-lite), and data quality assessment itself was not developed. in this work we propose an ontological representation of the extended hm model in datalog±, and also mechanisms for data quality assessment based on query answering from the ontology via dimensional navigation.

our extension of the hm model includes _categorical relations_ associated to categories at different levels of the dimensional hierarchies, possibly in more than one dimension. the extension also considers _dimensional constraints_ and _dimensional rules_, which could both be treated as _dimensional integrity constraints_ on categorical relations that involve values from dimension categories. however, dimensional constraints are intended to be used as _denial constraints_ that forbid certain combinations of values, whereas the dimensional rules are intended to be used for data completion, i.e., to generate data through their enforcement. dimensional constraints can be _intra-dimensional_, i.e., putting restrictions on the data values of categorical relations associated to categories in a single dimension, or _inter-dimensional_, i.e., putting restrictions on the data values of categorical relations associated to categories in different dimensions. the next example illustrates the intuition behind categorical relations, dimensional constraints and rules, and how the latter can be used for data quality assessment. in it we assume, according to the hm model, that a dimension consists of a number of categories related to each other by a partial order. later on, we use the example to show how contextual data can be captured as a datalog± ontology.
[exp:intr] consider a relational table _measurements_ with body temperatures of patients in an institution (table [tab:measurements]). a doctor in this institution needs the answer to the query: _"the body temperatures of tom waits for september 5, taken around noon with a thermometer of brand b1"_ (as he expected). it is possible that a nurse, unaware of this requirement, used a thermometer of brand b2, storing the measurements in _measurements_. in this case, not all the measurements in the table are up to the expected quality. however, table _measurements_ alone does not discriminate between the intended values (those taken with brand b1) and the others. now, for assessing the quality of the data in _measurements_ according to the doctor's quality requirement, extra contextual information about the thermometers used may be useful. for instance, there is a table _patientward_, linked to the _ward_ category, that stores the patients of each ward of the institution (fig. [fig:dim]). in addition, the institution has a _guideline_ prescribing that: _"temperature measurements for patients in the standard care unit have to be taken with thermometers of brand b1"_. this guideline, which will become a dimensional rule in the ontology, can be used for data quality assessment when combined with an intermediate virtual relation, _patientunit_, linked to the _unit_ category, that is generated from _patientward_ by upward navigation through the dimension _hospital_ (on the left-hand side of fig. [fig:dim]), from category _ward_ to category _unit_. now it is possible to conclude that on certain days tom waits was in the standard care unit, where his temperature was taken with the right thermometer according to the guideline (patients in wards w1 or w2 had their temperatures taken with a thermometer of brand b1). these clean data appear in the relation shown in table [tab:qualitymeasurements], which can be seen as a quality answer to the doctor's request.

table [tab:measurements]: measurements

  time          patient     value
  sep/5-12:10   tom waits   38.2
  sep/6-11:50   tom waits   37.1
  sep/7-12:15   tom waits   37.7
  sep/9-12:00   tom waits   37.0
  sep/6-11:05   lou reed    37.5
  sep/5-12:05   lou reed    38.0

table [tab:qualitymeasurements]: quality measurements

  time          patient     value
  sep/5-12:10   tom waits   38.2
  sep/6-11:50   tom waits   37.1

elaborating on this example, it could be the case that there is a _constraint_ imposed on the dimensions and the relations linked to their categories, for instance, one capturing that the intensive care unit has been closed since august/2005: _"no patient was in the intensive care unit during the time after august/2005"_. again, through upward navigation to the next category, we should conclude that the third tuple in table _patientward_ has to be discarded. this _inter-dimensional constraint_ involves the dimensions _hospital_ and _time_ (on the left- and right-hand sides of fig. [fig:dim], resp.), to which the ward and day values in _patientward_ are linked.
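the navigation and cleaning steps of this example can already be written as rules of the kind formalized later on; a sketch, in which _timeday_ is an assumed parent-child predicate connecting a time point of _measurements_ to its day, and measurements_q is the quality version of _measurements_:

  patientunit(u, d; p)  ←  patientward(w, d; p) ∧ unitward(u, w),
  measurements_q(t, p, v)  ←  measurements(t, p, v) ∧ timeday(t, d) ∧ patientunit(standard, d; p).

the second rule encodes the guideline: a measurement counts as a quality measurement exactly when its patient was in the standard care unit on the day it was taken; restricted to tom waits, it would yield the two tuples of table [tab:qualitymeasurements].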
the example shows a processing of data that involves changing the level of the data linked to a dimension. this form of _dimensional navigation_ may be required for query answering both in the _downward_ and the _upward_ direction (example [exp:intr] shows the latter). our ontological multidimensional contexts support both.

[exp:downward] two additional categorical relations, _workingschedules_ and _shifts_ (table [tab:ws] and table [tab:shifts]), store the schedules of nurses in units and the shifts of nurses in wards. a query to _shifts_ asks for the dates when _mark_ was working in ward _w2_, which has no answer with the extensional data in table [tab:shifts]. now, an institutional guideline states that if a nurse works in a unit on a specific day, he/she has shifts in every ward of that unit on the same day. consequently, the last tuple in table [tab:ws] implies that _mark_ has shifts in both _w1_ and _w2_ on _sep/9_. this date would be an answer obtained via downward navigation from the _standard_ unit to its wards (including _w2_).

table [tab:ws]: workingschedules

  unit        day     nurse   type
  intensive   sep/5   cathy   cert.
  standard    sep/5   helen   cert.
  standard    sep/6   helen   cert.
  terminal    sep/5   susan   non-c.
  standard    sep/9   mark    non-c.

table [tab:shifts]: shifts

  ward   day     nurse   shift
  w4     sep/5   cathy   night
  w1     sep/6   helen   morning
  w4     sep/5   susan   evening

example [exp:downward] shows that downward navigation is necessary for query answering, in this case for propagating the data in _workingschedules_ (at the _unit_ level) down to _shifts_ (at the lower _ward_ level). in this process a unit may drill down to more than one ward (e.g., the _standard_ unit is connected to wards _w1_ and _w2_), generating more than one tuple in _shifts_.

contexts should be represented as formal theories into which other objects, such as database instances, are mapped for contextual analysis, assessment, interpretation, additional processing, etc. consequently, we show how to represent multidimensional contexts as logic-based ontologies (cf. section [sec:dlrep]). these ontologies represent and extend the hm multidimensional model (cf. section [sec:preliminaries]). our ontological language of choice is datalog±. it allows us to give a clear semantics to our ontologies, to support some forms of logical reasoning, and to apply some query answering algorithms. furthermore, datalog± allows us to generate explicit data by completion where data are missing, which is particularly useful for data generation through dimensional navigation. our ultimate goal is to use multidimensional ontological contexts for data quality assessment, which is achieved by introducing and defining in the context relational predicates standing for the _quality versions of the relations in the original instance_. their definitions use additional conditions on the data, to make them contain quality data.
in this work, going beyond previous approaches, the context also contains an ontology in datalog± that represents all the multidimensional elements shown in the examples above. our ontologies fall in the _weakly-sticky_ (ws) class of the datalog± family of languages (cf. section [sec:dlrep]) with _separable_ equality-generating dependencies (when used as dimensional constraints), which guarantees that conjunctive query answering can be done in polynomial time in the size of the data. we have developed and implemented a deterministic algorithm for boolean conjunctive query answering, which is based on a non-deterministic algorithm for ws datalog±. the algorithm can be used with ontologies containing dimensional rules that support both upward and downward navigation (cf. section [sec:qa]). section [sec:cdqa] shows how to use the ontology to populate the quality versions of the original relations. this paper is an extended abstract: we show the concepts, ideas, ontologies, and mechanisms only by means of an extended example. the general approach and its detailed analysis will be presented in an extended version of this work.

we start from the hm multidimensional (md) data model. in it, dimensions represent the hierarchical data, and facts describe data as points in an md space. a _dimension_ is composed of a schema and an instance. a _dimension schema_ includes a directed acyclic graph (dag) of _categories_, which defines the _levels_ of the category hierarchy. a dimension hierarchy corresponds to a partial-order relation between the categories, a so-called _parent-child relation_. a _dimension instance_ consists of a set of members for each category. the instance hierarchy corresponds to a partial-order relation between the members of the categories that parallels the _parent-child_ relation between the categories. _hospital_ and _time_, on the left- and right-hand sides of fig. [fig:dim], resp., are dimensions.

we extend the hm model with, among other elements, _categorical relations_, which can be seen as a generalization of fact tables, but at different dimension levels and not necessarily containing numerical data. categorical relations represent the entities associated to the factual data. a _categorical relation_ has a schema and an instance. a _categorical relation schema_ is composed of a relation name and a list of attributes. each attribute is either _categorical_ or _non-categorical_. a categorical attribute takes as values the members of a category of a dimension. a non-categorical attribute takes values from an arbitrary domain.

[exp:crelation] in fig. [fig:dim], the categorical relation _patientward_ has its categorical attributes, _ward_ and _day_, connected to the _hospital_ and _time_ dimensions. _patient_ is a non-categorical attribute with patient names as values (there could be a foreign key to another categorical relation that stores the data of patients).

datalog± is a family of languages that extends plain datalog with additional elements: (a) existential quantifiers in the heads of _tuple-generating dependencies_ (tgds); (b) _equality-generating dependencies_ (egds), that use equality in their heads; and (c) _negative constraints_, that use ⊥ (false) in their heads. with these extensions, datalog± captures ontological knowledge that can not be expressed in classical datalog. although the _chase_ with these rules does not necessarily terminate, syntactic restrictions imposed on the set of rules aim to ensure decidability of conjunctive query answering, and in some cases also tractability in data complexity. datalog± has sub-languages, such as _linear_, _guarded_, _weakly-guarded_, _sticky_, and _weakly-sticky_, depending on the kind of predicates and on the syntactic interaction of the tgd rules that appear in the program. in this paper, our md ontologies turn out to be written in _weakly-sticky_ (ws) datalog±. this sublanguage extends _sticky_ datalog±: ws datalog± allows joins in the bodies of tgds, but with a milder restriction on the repeated variables. boolean conjunctive query answering is tractable for ws datalog±.
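to make the stickiness condition concrete, consider the dimensional rule that, in example [exp:intr], generates _patientunit_ from _patientward_ (it reappears as (frm:upward) below):

  patientward(w, d; p) ∧ unitward(u, w)  →  patientunit(u, d; p).

the join variable w occurs twice in the body but is not propagated to the head, so the rule is not sticky. however, w occurs only in positions of categorical attributes, which can hold only the finitely many members of the _ward_ category; roughly speaking, weak stickiness relaxes the stickiness requirement exactly for such repeated variables, whose positions can take only a bounded number of values during the chase.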
with these extensions , datalog captures ontological knowledge that can not be expressed in classical datalog .although the _ chase _ with these rules does not necessarily terminate , syntactic restrictions imposed on the set of rules aim to ensure decidability of conjunctive query answering , and is some cases , also tractability in data complexity .datalog has sub - languages , such as _ linear _ , _ guarded _ ,_ weakly - guarded _ , _ sticky _ , and _ weakly - sticky _ , that depend on the kind of predicates and syntactic interaction of tgd rules that appear in the datalog program .in this paper , our md ontologies turn out to be written in _ weakly - sticky _ ( ws ) datalog .this sublanguage extends _ sticky_ datalog .ws datalog allows joins in the body of tgds , but with a milder restriction on the repeated variables .boolean conjunctive query answering is tractable for ws datalog .we will represent our extended md model as a datalog ontology that contains a schema , an instance , and a set of dimensional rules and constraints . is a finite set of predicates ( relation names ) , where is a set of _ category predicates _ ( unary predicates ), is a set of _ parent - child predicates _, i.e. partial - order relations between elements of adjacent categories , and is a set of _ categorical predicates_. in example [ exp : intr ] , contains , e.g. ; contains , e.g. a predicate for connections from _ ward _ to _ unit _ ; and contains , e.g. _patientward_. an _ instance _ , , is a relational instance that gives ( possibly infinite ) extensions to the predicates in , and satisfies a given set of tgds , egds , and negative constraints ( cf .below ) . the constants for come from an infinite underlying domain .the dimensional rules and constraints in constitute the intentional part of . rules ( [ frm : gf1])-([frm : gf4 ] ) below show the general form of elements of . in what follows ,each is a categorical atom , with a sequence of categorical attributes ( values ) and a sequence of non - categorical attributes ; is a parent - child atom with parent / child elements , resp . ; and is a category atom , with a category element .that is , , .as an instance in ( [ frm : exref ] ) and ( [ frm : exegd ] ) , is a category atom and is a parent - child atom . * to capture the _ referential constraint _ between a categorical attribute of a categorical relation and a category , we use a negative constraint , with : tgds , making it possible to generate elements in categories or categorical relations . ] * a _ dimensional constraint _ is either an egd of the form ( [ frm : gf2 ] ) ( where also appear in the body ) or a negative constraint of the form ( [ frm : gf3 ] ) : * a _ dimensional rule _ is a datalog tgd of the form : + here , , and . furthermore , shared variables in bodies of tgds correspond only to categorical attributes of categorical relations . with rule ( [ frm : gf4 ] )( an example is ( [ frm : upward ] ) below ) , the possibility of doing dimensional navigation is captured by joins between categorical predicates , e.g. in the body , and parent - child predicates , e.g. .rule ( [ frm : gf4 ] ) allows navigation in both upward and downward directions .the _ direction of navigation _ is determined by the level of categorical attributes that participate in the join in the body . assuming the join is between and , upward navigation is enabled when ( i.e. appears in ) and ( i.e appears in the head ) . 
on the other hand , if occurs in and occurs in , then downward navigation is enabled , from to .the existential variables in ( [ frm : gf4 ] ) make up for missing non - categorical attributes due to different schemas ( i.e. the existential variables may appear in positions of non - categorical attributes but not in categorical attributes ) . as a result , when drilling down , for each tuple of a categorical relation linked to a parent member , the rule generates tuples for all the child members of the parent member ( or children specifically indicated in the body ) .[ exp : ont ] the categorical attribute _ unit _ in categorical relation _ patientunit _ takes values from the _ unit _ category .we use a constraint of the form ( [ frm : gf1 ] ) .similar constraints are in the ontology that capture the connection between other categorical relations and their corresponding categories .+ for the constraint in example [ exp : intr ] requiring _ no patient was in intensive care unit during the time after august 2005 " _ , we use a dimensional constraint of the form ( [ frm : gf3 ] ) : + similarly , the following rule , of form ( [ frm : gf2 ] ) , states that _ all the thermometers used in a unit are of the same type " _ : + with a categorical relation with thermometers used by nurses in wards .finally , the following dimensional rules of the form ( [ frm : gf4 ] ) capture how data in _ patientward _ and _ workingschedules _ generate data for _ patientunit _ and _ shifts _ , resp .: in ( [ frm : upward ] ) , dimension navigation is enabled by the join between _ patientward _ and _ unitward_. the rule generates data for _ patientunit _ ( at a the higher level of _ unit _ ) from _ patientward _ ( at the lower level of _ ward _ ) via upward navigation . notice that ( [ frm : upward ] ) is in the general form ( [ frm : gf4 ] ) , but since in this case the schemas of the two involved categorical relations match , no existential quantifiers are necessary .rule ( [ frm : downward1 ] ) captures downward navigation while it generates data for _ shifts _ ( at the level of _ ward _ ) from _ workingschedules _ ( at the level of _ unit _ ) . in this case, the schemas of the two categorical relations do not match .so , the existential variable represents missing data for the _ shift _attribute. it is possible to verify that _ the datalog md ontologies with rules of the forms ( [ frm : gf1])-([frm : gf4 ] ) are weakly - sticky_. this follows from the fact that shared variables in the body of dimensional rules , as defined in ( [ frm : gf4 ] ) , may occur only in positions of categorical attributes , where only limited values may appear , which depends on the assumption that the md ontology has a fixed dimensional structure , in particular , with a fixed number of category members .no new category member is generated when applying the dimensional rules of the form ( [ frm : gf4 ] ) .the _ separability property _ in relation to the interaction of dimensional egds of the form ( [ frm : gf2 ] ) and tgds of the form ( [ frm : gf4 ] ) must be checked independently . however , _ when the egds have only categorical variables in the heads , the separability condition holds _ , which is the case with rule ( [ frm : exegd ] ) . to illustrate query answering via downward navigation , reconsider the query about the dates that _ mark _ works in _ w1 _ : . 
considering ( [ frm : downward1 ] ) and the last tuple in _ workingschedules _ , the chase will generate a new tuple in _ shifts _ for _ mark _ on _ sep/9 _ in _ w2 _ , with a fresh null value for his shift , reflecting incomplete knowledge about this attribute at the lower level .so , the answer to the query via ( [ frm : downward1 ] ) is _sep/9_. the general tgd ( [ frm : gf4 ] ) only captures downward navigation when there is incomplete data about the values of non - categorical attributes , because existential variables are only non - categorical .however , in some cases we may have incomplete data about the categorical attributes , i.e. about parents and children involved in downward navigation .[ tab : discharge ] c|c|c|c| & * inst . * & * day * & * patient * + & h1 & sep/9 & tom waits + & h1 & sep/6 & lou reed + & h2 & oct/5 & elvis costello + there is an additional categorical relation _ dischargepatients _ ( table [ tab : discharge ] ) with data about patients leaving an institution .since each of them was in exactly one of the units , _ dischargepatient _ should generate data for _ patientunit _ through downward navigation from the _ institution _ level to the _ unit _ level .since we do not have knowledge about which unit at the lower level has to be specified , the following rule could be used : + which is not of the form ( [ frm : gf4 ] ) , because it has an existentially quantified categorical variable , , for units .it allows downward navigation while capturing incomplete data about units , and represents disjunctive knowledge at level of units . the general form of ( [ frm : downward2 ] ) , for this type of downward navigation is as follows : + where and and , and the categorical attributes of refer to categories that are at a higher or same level than the categorical attributes of .( in ( [ frm : downward2 ] ) , categories and for are higher and same level , resp . than and for . )_ if the md ontology also includes rules of the form ( [ frm : gf5 ] ) , it still is weakly - sticky_. this is because , despite the fact that these rules may generate new members ( nulls ) , they can only generate a limited number of such members ( because the rule only navigates in downward direction ) , i.e. there is no cyclic behavior . 
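a chase step for this kind of downward rule is easy to visualize . the sketch below , under the assumption that labeled nulls are represented as tagged strings , generates the patientunit tuples that a rule of the form ( [ frm : downward2 ] ) would produce from the dischargepatients table : each discharged patient was in some unit of the institution , so the unit position receives a fresh null .

```python
import itertools

_nulls = itertools.count(1)
def fresh_null():
    """a fresh labeled null standing for an existentially quantified value."""
    return f"null_{next(_nulls)}"

# DischargePatients(Institution, Day, Patient) -- the table in the text
discharge_patients = [("h1", "sep/9", "tom waits"),
                      ("h1", "sep/6", "lou reed"),
                      ("h2", "oct/5", "elvis costello")]

def chase_downward(tuples):
    """one chase step for a rule of the form (frm:gf5): the patient was in
    *some* unit of the institution, so the categorical position 'unit' is
    filled with a fresh null (in the full chase the null would also be
    constrained, via the parent-child predicate, to be a child of inst)."""
    return [(fresh_null(), day, patient) for inst, day, patient in tuples]

print(chase_downward(discharge_patients))
```

only one null is created per source tuple and no rule re - fires on these nulls in the upward direction , which is the intuition behind the weak - stickiness argument above .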
with these new rules ,egds with only categorical attributes in heads do not guarantee separability anymore .so , checking this condition becomes application dependent .weakly - stickyness guarantees that _boolean conjunctive query answering from our md contextual ontologies becomes tractable _ in data complexity .then , answering open conjunctive queries from the md ontology is also tractable .we have developed and implemented a deterministic algorithm , deterministicwsqans , for answering boolean conjunctive queries from datalog md contextual ontologies .the algorithm is based on a non - deterministic algorithm , weaklystickyqans , for ws datalog that runs in polynomial time in the size of extensional database .given a set of ws tgds , a boolean conjunctive query , and an extensional database , weaklystickyqans builds an accepting resolution proof schema " , a tree - like structure which shows how query atoms can be entailed from the extensional instance .the algorithm rejects if there is no resolution proof schema ; otherwise it builds it and accepts .our deterministic algorithm , deterministicwsqans , applies a top - down backtracking search for accepting resolution proof schemas .starting from the query , the algorithm resolves the atoms of the query , from left to right . in each step, an atom is resolved either by finding a substitution that maps the atom to a ground atom in the extensional database ( which makes a leaf node ) or by applying a tgd rule that entails the atom ( building a subtree ) .the decision at each step is stored on a stack to be restored later if the algorithm fails to entail the atoms of the query in the next steps .the algorithm accepts if it resolves all the atoms in the query ( the content of the stack specifies the decisions that lead to the accepting resolution proof schema ) , and rejects if it can not resolve an atom , no matter what decisions have been made before . in this deterministic approach ,possible substitutions of constants for query variables are derived by the ground atoms in the extensional database ( as opposed to the non - deterministic version of the algorithm that guesses applicable substitutions ) .this enables us to extend deterministicwsqans for finding answers to open conjunctive queries , by building resolution proof schemas for all possible substitutions .weaklystickyqans runs in polynomial time in the size of the extensional database .it can be proved that deterministicwsqans also runs in polynomial time .none of these algorithms are first - order ( fo ) query rewriting algorithms , which do exist for the datalog more restrictive syntactic classes , e.g. 
_ linear _ and _ sticky _ .the md ontologies to which the complexity results and algorithms above apply support both upward and downward navigation .however , for simpler md ontologies that support only upward navigation ( which can be syntactically detected from the form of the dimensional rules ) , we developed a methodology for conjunctive query answering based on fo query rewriting .the rewritten query can be posed directly to the extensional database .ontologies of this kind are common and natural in real world applications ( example [ exp : intr ] shows such a case ) .interestingly , these upward - navigating " md ontologies _ do not _ necessarily fall into any of the good " cases of datalog mentioned above .the algorithms mentioned in this section are rather proofs of concept than algorithms meant to be used with massive data .it is ongoing work the development and implementation of scalable polynomial time algorithms for answering open conjunctive queries .in this section , we show how a datalog md ontology can be a part of -and used in- a context for data quality assessment or cleaning . fig .[ fig : frmw ] shows such a context and the way it is used . the central idea in that the original instance ( on the left - hand - side ) is to be assessed or cleaned through the context in the middle .this is done by mapping into the contextual schema / instance .the context may have additional data , predicates ( ) , data quality predicates ( ) specifying single quality requirements , and access to external data sources ( ) for data assessment or cleaning .the clean version of is on the right - hand - side , with schema , which is a copy of s schema .the new element in the context is the md ontology , which interacts with , and represents the dimensional elements of the context .the categorical relations in provide dimensional data for the relations in and for quality predicates in . also gets extensional data from initial database , , and external sources . herewe concentrate on data cleaning , which here amounts to obtaining clean answers to queries , in particular , about clean extensions ( ) for the original database relations ( ) ( a particular case of _ clean query answering _ ) .the quality versions are specified in terms of the relations in and quality predicates , .the data for the latter may be already in the context or come from , the ontology , or external sources .the problems become : ( a ) computing quality versions of the original predicates , and ( b ) computing quality answers to queries expressed in terms of those original predicates .the second problem is solved by rewriting the query as , which is expressed ( and answered ) in terms of predicates .answering it is the part of the query answering process that may invoke dimensional navigation and data generation as illustrated in previous sections .problem ( a ) is a particular case of ( b ) .( ex . [ exp : ont ] cont . ) a query about tom waits temperatures is initially expressed in terms of the initial predicates _measurements _ , but is rewritten into a query expressed and an- swered via its quality extension ( see for more details ) . more specifically , the query is about _ the body temperatures of tom waits on september 5 taken around noon by a certified nurse with a thermometer of brand b1 " _ : _ measurements _ , as initially given , does not contain information about nurses or thermometers . hence the _ expected conditions _ are not expressed in the query . 
according to the general contextual approach in , predicate _measurement _ has to be logically connected to the context , conceiving it as a footprint of a broader " contextual relation that is given or built in the context , in this case one with information about thermometer brands ( ) and nurses certification status ( ) : where is a contextual copy of _ measurement _ , i.e. the latter is mapped into the context . if we want quality measurements data , we impose the quality conditions : with the auxiliary predicates defined by : here , _ daytime _ is parent / child relation in _ time_ dimension ) , and the last definition right above is capturing as a rule the guideline from example [ exp : intr ] , at the level of relation _patientunit_. summarizing , _ takenbynurse _ and _ takenwiththerm _ are contextual predicates ( shown in fig . [fig : frmw ] as ) ._ patientward _ and _ workingschedules _ are categorical relations . to obtain quality answers to the original query, we pose to the ontology the new query : answering it , which requires evaluating _ takenwiththerm _ , triggers upward dimensional navigation from _ ward _ to _ unit _ , when requesting data for categorical relation _patientunit_. more specifically , dimensional rule ( [ frm : upward ] ) is used for data generation , and each tuple in _ patientward _ generates one tuple in _ patientunit _ , with its unit obtained by rolling - up .we have described in general terms how to specify in datalog a multidimensional ontology that extends a multidimensional data model .we have identified some properties of these ontologies in terms of membership to known classes of datalog , the complexity of conjunctive query answering , and the existence of algorithms for the latter task .finally , we showed how to apply the ontologies to multidimensional and contextual data quality , in particular , for obtaining quality answers to queries through dimensional navigation .md contexts are also of interest outside applications to data quality .they can be seen as logical extensions of the md data model .research funded by nserc discovery , and the nserc strategic network on business intelligence ( bin ) .l. bertossi is a faculty fellow of ibm cas .we thank andrea cali and andreas pieris for useful information and conversations on datalog .
|
data quality and data cleaning are context dependent activities . starting from this observation , in previous work a context model for the assessment of the quality of a database instance was proposed . in that framework , the context takes the form of a possibly virtual database or data integration system into which the database instance under quality assessment is mapped , for additional analysis and processing . in this work we extend contexts with dimensions , and by doing so , we make possible a multidimensional assessment of data quality . multidimensional contexts are represented as ontologies written in datalog . we use this language for representing _ dimensional constraints _ and _ dimensional rules _ , and also for doing _ query answering _ based on dimensional navigation , which becomes an important auxiliary activity in the assessment of the data . we show ideas and mechanisms by means of examples .
|
the velocity - density relation is one of the most important characteristics for the transport properties of any traffic system . for pedestrian trafficthere is currently no consensus even about the principle shape of this relation which is reflected e.g. in conflicting recommendations in various handbooks and guidelines .discrepancies occur in particular in the high - density regime which is also the most relevant for applications in safety analysis like evacuations or mass events . at high densitiesstop - and - go waves occur indicating overcrowding and potentially initiating dangerous situations due to stumbling etc .however , the densities where the flow breaks down due to congestion ranges from densities of m to m .this large variation in values for reported in the literature is partly due to insufficient methods of data capturing and data analysis .in previous experimental studies , different kinds of measurement methods are used and often a mixture of time and space averages are realized . especially in the case of spatial and temporal inhomogeneities the choice of the measurement method and the type of averaging have a substantial influence on the results . up to now , congested states in pedestrian dynamicshave not been analyzed in much detail .this is in contrast to vehicular traffic where the congested phase is well - investigated , both empirically and theoretically . in this contributionwe show that even improved classical measurement methods using high precision trajectories but basing on mean values of density and velocity fail to resolve important characteristics of congested states . for a thorough analysis of pedestrian congestionwe apply a new method enabling measurements on the scale of single pedestrians .for our investigation we use data from experiments performed in 2006 in the wardroom of bergische kaserne dsseldorf with a test group of up to soldiers .the length of the circular system was about 26 m , with a m long measurement section .detailed information about the experimental setup and data capturing providing trajectories of high accuracy ( m ) is given in . and ( left to right , top to bottom ) . with increasing density the occurrence of stop - and - go waves accumulate.,title="fig:",scaledwidth=45.0% ] and ( left to right , top to bottom ) . with increasing density the occurrence of stop - and - go waves accumulate.,title="fig:",scaledwidth=45.0% ] and ( left to right , top to bottom ) . with increasing density the occurrence of stop - and - go waves accumulate.,title="fig:",scaledwidth=45.0% ] and ( left to right , top to bottom ) . with increasing density the occurrence of stop - and - go waves accumulate.,title="fig:",scaledwidth=45.0% ] in fig .[ fig : traj ] the -component of trajectories is plotted against time . for the extraction of the trajectories ,the pedestrians heads were marked and tracked .backward movement leading to negative velocities is caused by head movement of the pedestrians during a standstill .inhomogeneities in the trajectories increase with increasing density . as in vehicular traffic ,jam waves propagating opposite to the movement direction ( upstream ) occur at higher densities . stopping is first observed during the runs with pedestrians , at pedestrians they can hardly move forward .macroscopically one observes separation into a stopping area and an area where pedestrians walk slowly . 
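the stop - and - go structure visible in these trajectories can be quantified directly from the ( t , x ) data . the following sketch flags stop phases of a single trajectory ; the 0.1 m / s threshold is an illustrative choice , not a value taken from the paper .

```python
import numpy as np

def stop_phases(t, x, v_stop=0.1):
    """flag frames as 'stopping' when |dx/dt| < v_stop (m/s). head sway
    during a standstill gives small negative velocities, hence the
    absolute value."""
    v = np.gradient(x, t)                 # finite-difference velocity
    return v, np.abs(v) < v_stop

# toy trajectory: walk at 1.2 m/s, stand still for 3 s, walk again
t = np.linspace(0.0, 10.0, 101)
x = np.where(t < 4, 1.2 * t,
             np.where(t < 7, 4.8, 4.8 + 1.2 * (t - 7)))
v, stopped = stop_phases(t, x)
print("fraction of time spent stopped:", stopped.mean())
```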
in the following weanalyze how macroscopic measurements blur this phase separation and apply a technique introduced in enabling a measurement of the fundamental diagram on a ` microscopic ' scale .speed of pedestrian and the associated density are calculated using the entrance and exit times and into and out of a measurement section of length m , ) occur only at large densities where no moving states ( ) are observable.,scaledwidth=46.0% ] the speed is a mean value over the space - time interval and . by integration over the instantaneous density the densityis assigned to the same space - time interval . to reduce the fluctuations of we use the quantity , introduced in , which measures the fraction of space between pedestrians and that is inside the measurement area .results of the macroscopic measurement method are shown in fig .[ fig : fdmacro ] . in comparisonto method b introduced in ( see fig . 6 of which uses data based on the same trajectories as this study ) the scatter of the data is reduced due to an improved density definition and a better assignment of density and velocity .however the resulting velocity - density relation does not allow to identify phase - separated states although these are clearly visible in the trajectories . to identify phase separated states we determine the velocity - density relation on the scale of single pedestrians .this can be achieved by the voronoi density method . in one dimension a voronoi cellis bounded by the midpoints of the pedestrian positions and . with the length of the voronoi cell corresponding to pedestrian and we define the instantaneous velocity and density by m both stopping and moving states are observable . * right : * probability distribution of the velocities for different density intervals .the double peak structure indicates the coexistence of moving and stopping states .the height of the stopping peak increases with increasing density.,title="fig:",scaledwidth=48.0% ] m both stopping and moving states are observable .* right : * probability distribution of the velocities for different density intervals .the double peak structure indicates the coexistence of moving and stopping states .the height of the stopping peak increases with increasing density.,title="fig:",scaledwidth=48.0% ] the fundamental diagram based on the voronoi method is shown on the left side of fig .[ fig : fdmicro ] .regular stops occur at densities higher than 1.5 m . on the right side of fig .[ fig : fdmicro ] the distribution of the velocities for fixed densities from 1.8 m to 2.6 m are shown .there is a continuous change from a single peak near m/s , to two peaks , to a single peak near m/s .the right peak represents the moving phase , whereas the left peak represents the stopping phase . at densities around 2.2 m peaks coexist , indicating phase separation into a flowing and a jammed phase . in highway traffic ,phase separation into moving and stopping phases typically occurs when the outflow from a jam is reduced compared to maximal possible flow in the system .related phenomena are hysteresis and a non - unique fundamental diagram . at intermediate densities two different flow values can be realized .the larger flow , corresponding to a homogeneous state , is metastable and breaks down due to fluctuations or perturbations ( capacity drop ) .the origin of the reduced jam outflow is usually ascribed to the so - called slow - to - start behaviour ( see and references therein ) , i.e. 
an delayed acceleration of stopped vehicles due to the loss of attention of the drivers etc .the structure of the phase - separated states in vehicular traffic is different from the ones observed here . for vehicle trafficthe stopping phase corresponds to a jam of maximal density whereas in the moving phase the flow corresponds to the maximal _ stable _ flow , i.e. all vehicles in the moving phase move at their desired speed .this scenario is density - independent as increasing the global density will only increase the length of the stopping region without reducing the average velocity in the free flow regime .the probability distribution of the velocities ( in a periodic system ) shows a similar behaviour to that observed in fig .[ fig : fdmicro ] .the position of the free flow peak in the case of vehicular traffic is independent of the density .the behaviour observed here for pedestrian dynamics differs slightly from that described above .the main difference concerns the properties of the moving regime . herethe observed average velocities are much smaller than the free walking speeds . therefore the two regimes observed in the phase separated state are better characterized as stopping " and slow moving " regimes .further empirical studies are necessary to clarify the origin of these differences .one possible reason are the different acceleration properties of vehicles and pedestrians as well as anticipation effects. it also remains to be seen whether in pedestrian systems phenomena like hysteresis can be observed .in this section we introduce the adaptive velocity model , which is based on an event driven approach .a pedestrian can be in different states which determine the velocity .a change between these states is called _ event_. the model was derived from force - based models , where the dynamics of pedestrians are given by the following system of coupled differential equations where is the force acting on pedestrian .the mass is denoted by , the velocity by and the current position by . is split into a repulsive force and a driving force .the dynamics is regulated by the interrelation between driving and repulsive forces . in our approachthe role of repulsive forces are replaced by events .the driving force is defined as where is the desired speed of a pedestrian and the relaxation time of the velocity . by solving the differential equation the velocity functionis obtained .this is shown in fig .[ fig : paramodell ] together with the parameters governing the pedestrians movement ., the step length and the safety distance .* right : * the adaptive velocity : acceleration until than deceleration until , again acceleration until and so on.,title="fig:",height=170 ] , the step length and the safety distance . *right : * the adaptive velocity : acceleration until than deceleration until , again acceleration until and so on.,title="fig:",height=170 ] in this model pedestrians are treated as bodies with diameter .the diameter depends linearly on the current velocity and is equal to the step length in addition to the safety distance length and safety distance are introduced to define the rules for the dynamics of the system .we determine the model parameters from empirical data which allows to judge the adequacy of the rules .based on the step length is a linear function of the current velocity with following parameters : \ ; v_i(t).\ ] ] the required quantities for the safety distance can be specified through empirical data of the fundamental diagram , see . 
with these experimental resultsthe previous equations can be summarized to with m and s. no free model parameter remain with these specifications . in the followingwe describe the rules for the movement .a pedestrian accelerates to the desired velocity until the distance to the pedestrian in front is smaller than the safety distance . from this time on , he / she decelerates until the distance is larger than the safety distance . to guarantee a minimal volume exclusion , case `` collision ''is included , in which the pedestrians are too close to each other and have to stop . via , and the velocity function for the states deceleration ( dec . ) , acceleration ( acc . ) and collision ( coll .) can be defined , see eq .[ eq : vel ] : where is the distance between the centers of both pedestrians .the current velocity of an pedestrian depends on his / her state . denominates the point in time where a change from acceleration to deceleration takes place .conversely is the change from deceleration to acceleration . and are defined accordingly . at the beginning s with a change from acceleration to deceleration a new calculation of is necessary : the discreteness of the time step could lead to configurations where overlapping occurs . to ensure good computational performance for high densities ,no events are explicitly calculated . instead in each time step, it is checked whether an event has taken place and , or are set to accordingly . to avoid too large interpenetration of pedestrians and to implement a reaction time in a realistic size we choose s. to guarantee a parallel updatea recursive procedure is necessary : each person is advanced one time step according to eq .[ eq : vel ] .if after this step a pedestrian is in a different state because of the new distance to the pedestrian in front , the velocity is set according to this state .then the state of the next following person is reexamined .if the state is still valid the update is completed .otherwise , the velocity is calculated again . in the following weface model results with experimental data and study how the distribution of individual parameter influences the phase separation .for all simulations the desired velocity is normal distributed with average m / s and variance .[ fig : simohne ] and fig .[ fig : simmit ] show the simulation results for two different choices of parameter distributions ., and for all pedestrians * left : * comparison of fundamental diagrams of modeled and empirical data . * middle : * trajectories for ( model ) . *right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] , and for all pedestrians * left : * comparison of fundamental diagrams of modeled and empirical data .* middle : * trajectories for ( model ) . *right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] , and for all pedestrians * left : * comparison of fundamental diagrams of modeled and empirical data .* middle : * trajectories for ( model ) . *right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] ( model ) . * right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] ( model ) .* right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] ( model ) . * right : * trajectories for ( model).,title="fig:",scaledwidth=32.0% ] the model yields the right macroscopic relation between velocity and density even if and are the same for all pedestrians , see fig .[ fig : simohne ] ( left ) .the trajectories display that phase separation does not appear . 
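the event rules above fit in a few lines of code . the sketch below is a simplified , runnable rendition of the adaptive velocity model on a ring of 26 m ; it collapses the acceleration / deceleration / collision states into one parallel update , the deceleration branch is simplified to a linear ramp , and all numeric parameters are placeholders because the fitted values of the safety distance are elided in this copy of the text .

```python
import numpy as np

V0, TAU = 1.24, 0.5      # desired speed [m/s], relaxation time [s]
D_A, D_B = 0.4, 1.0      # safety distance d(v) = D_A + D_B*v (placeholders)
DT, L = 0.05, 26.0       # time step [s], ring circumference [m]

def step(x, v):
    """one parallel update: accelerate toward V0 while the headway
    exceeds the safety distance, decelerate otherwise, and stop
    completely when pedestrians get too close ('collision' state)."""
    order = np.argsort(x)
    x, v = x[order], v[order]
    gap = np.diff(np.append(x, x[0] + L))           # periodic headways
    safety = D_A + D_B * v
    v_new = np.where(gap > safety,
                     v + (V0 - v) * DT / TAU,        # acceleration state
                     np.maximum(v - V0 * DT / TAU, 0.0))  # deceleration
    v_new = np.where(gap < 0.2, 0.0, v_new)          # collision: stop
    return (x + v_new * DT) % L, v_new

rng = np.random.default_rng(0)
x, v = np.sort(rng.uniform(0.0, L, 45)), np.zeros(45)   # N = 45
for _ in range(2000):
    x, v = step(x, v)
print("mean speed after relaxation:", v.mean())
```

distributing v0 , tau and the safety - distance parameters over the pedestrians , as done in the simulations below , is a one - line change to this sketch .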
even at high densitiesthe movement is ordered and no stops occur .for further simulations we incorporate a certain disorder by choosing the following individual parameter normal distributed : , and .variation of the personal parameters affects the scatter of the fundamental diagram , see fig .[ fig : simmit ] , left .phase separation appears in the modeled trajectories as in the experiment , see fig .[ fig : simmit ] middle and right .it is clearly visible that long stop phases occur by introducing distributed individual parameters .then the pattern as well as the change of the pattern from to are in good agreement with the experimental results , see fig .[ fig : traj ] .even the phase separated regimes match qualitatively .however , the regimes appear more regular in the modeled trajectories . in fig .[ fig : fdmicmod ] microscopic measurements of fundamental diagram and the related velocity distributions are shown .separation of phases is reproduced well .but the position of the peak attributed to the moving phase is not in conformance with the experimental data , compare fig .[ fig : fdmicro ] ( right ) .the experimental data show that the peak position is independent from the density at around m / s . at the model datathe position of the peak changes with increasing density .measurements with different time steps show that the size of the time step influences the length and shape of the stop phase at high densities .but the density where first stops occur seems independent from the size of the time step .further model analysis is necessary to study the role of the reaction time implemented by discrete time steps in this special type of update .furthermore we will study how the change of the peak could be influenced by including a distribution for the step length and other variations of the distribution for the safety distance . at fixed densities.,title="fig:",scaledwidth=48.0% ] at fixed densities.,title="fig:",scaledwidth=48.0% ]we have investigated the congested regime of pedestrian traffic using high - quality empirical data based on individual trajectories .strong evidence for phase separation into standing and slow moving regimes is found .the corresponding velocity distributions show a typical two - peak structure .the structure of the trajectories is well reproduced by an adaptive velocity model which is a variant of force - based models in continuous space .future studies should clarify the origin of the differences to the phase separated states observed in vehicular traffic . here phase separation into a stopping and a moving phase occurs such that the average velocity in the moving regime is independent of the total density .schadschneider , a. , klingsch , w. , klpfel , h. , kretz , t. , rogsch , c. , seyfried , a. : evacuation dynamics : empirical results , modeling and applications , in : meyers r. a. ( ed . ) , encyclopedia of complexity and system science , pp .3142 - 3176 . springer ( 2009 )seyfried , a. , boltes , m. , khler , j. , klingsch , w. , portz , a. , schadschneider a. , steffen , b. , winkens , a. : enhanced empirical data for the fundamental diagram and the flow through bottlenecks . in : klingsch ,, rogsch , c. , schadschneider , a. and m. schreckenberg ( eds . ) , pedestrian and evacuation dynamics 2008 , pp . 145 - 156 , springer ( 2010 ) boltes , m. , seyfried , a. , steffen , b. and schadschneider , a. : automatic extraction of pedestrian trajectories from video recordings . in : klingsch ,, rogsch c. , schadschneider a. and m. schreckenberg ( eds . 
) , pedestrian and evacuation dynamics 2008 , pp . 43 - 54 , springer ( 2010 )
|
experimental results for congested pedestrian traffic are presented . for data analysis we apply a method providing measurements on the scale of individual pedestrians . the resulting velocity - density relation shows a coexistence of moving and stopping states , revealing the complex structure of pedestrian fundamental diagrams and supporting new insights into the characteristics of pedestrian congestion . furthermore we introduce a model based on an event driven approach . the velocity - density relation as well as the phase separation is reproduced . variation of the parameter distributions indicates that the diversity of pedestrians is crucial for phase separation .
|
discriminative learning algorithms are typically trained from large collections of vectorial training examples . in many classical learning problems , however , it is arguably more appropriate to represent training data not as individual data points , but as probability distributions .there are , in fact , multiple reasons why probability distributions may be preferable .firstly , uncertain or missing data naturally arises in many applications .for example , gene expression data obtained from the microarray experiments are known to be very noisy due to various sources of variabilities . in order to reduce uncertainty , and to allow for estimates of confidence levels ,experiments are often replicated .unfortunately , the feasibility of replicating the microarray experiments is often inhibited by cost constraints , as well as the amount of available mrna . to cope with experimental uncertainty given a limited amount of data , it is natural to represent each array as a probability distribution that has been designed to approximate the variability of gene expressions across slides .probability distributions may be equally appropriate given an abundance of training data . in data - rich disciplines such as neuroinformatics , climate informatics , and astronomy, a high throughput experiment can easily generate a huge amount of data , leading to significant computational challenges in both time and space . instead of scaling up one s learning algorithms , one can scale down one s dataset by constructing a smaller collection of distributions which represents groups of similar samples .besides computational efficiency , aggregate statistics can potentially incorporate higher - level information that represents the collective behavior of multiple data points .previous attempts have been made to learn from distributions by creating positive definite ( p.d . )kernels on probability measures . in , the probability product kernel ( ppk ) was proposed as a generalized inner product between two input objects , which is in fact closely related to well - known kernels such as the bhattacharyya kernel and the exponential symmetrized kullback - leibler ( kl ) divergence . in , an extension of a two - parameter family of hilbertian metrics of topsewas used to define hilbertian kernels on probability measures . 
in , the semi - group kernels were designed for objects with additive semi - group structure such as positive measures .recently , introduced nonextensive information theoretic kernels on probability measures based on new jensen - shannon - type divergences .although these kernels have proven successful in many applications , they are designed specifically for certain properties of distributions and application domains .moreover , there has been no attempt in making a connection to the kernels on corresponding input spaces .the contributions of this paper can be summarized as follows .first , we prove the representer theorem for a regularization framework over the space of probability distributions , which is a generalization of regularization over the input space on which the distributions are defined ( section [ sec : regularization ] ) .second , a family of positive definite kernels on distributions is introduced ( section [ sec : distkernel ] ) .based on such kernels , a learning algorithm on probability measures called _ support measure machine _ ( smm ) is proposed .an svm on the input space is provably a special case of the smm .third , the paper presents the relations between sample - based and distribution - based methods ( section [ sec : relation ] ) .if the distributions depend only on the locations in the input space , the smm particularly reduces to a more flexible svm that places different kernels on each data point .given a non - empty set , let denote the set of all probability measures on a measurable space , where is a -algebra of subsets of .the goal of this work is to learn a function given a set of example pairs , where and . in other words ,we consider a supervised setting in which input training examples are probability distributions . in this paper, we focus on the binary classification problem , i.e. , . in order to learn from distributions ,we employ a compact representation that not only preserves necessary information of individual distributions , but also permits efficient computations .that is , we adopt a hilbert space embedding to represent the distribution as a mean function in an rkhs .formally , let denote an rkhs of functions , endowed with a reproducing kernel .the mean map from into is defined as we assume that is bounded for any . it can be shown that , if is characteristic , the map is injective , i.e. , all the information about the distribution is preserved .for any , letting , we have the reproducing property =\langle{\ensuremath{\mu_{{\ensuremath{\mathbb{p}}}}}},f \rangle_{{\ensuremath{\mathcal{h } } } } , \enspace \forall f\in{\ensuremath{\mathcal{h}}}\enspace .\ ] ] that is , we can see the mean embedding as a feature map associated with the kernel , defined as . since , it also follows that , where the second equality follows from the reproducing property of .it is immediate that is a p.d .kernel on .the following theorem shows that optimal solutions of a suitable class of regularization problems involving distributions can be expressed as a finite linear combination of mean embeddings .[ thm : representer ] given training examples , a strictly monotonically increasing function , and a loss function , any minimizing the regularized risk functional ,\dotsc,{\ensuremath{\mathbb{p}}}_m , y_m,\mathbb{e}_{{\ensuremath{\mathbb{p}}}_m}[f]\right ) + \omega\left(\|f\|_{\mathcal{h}}\right)\ ] ] admits a representation of the form for some .theorem [ thm : representer ] clearly indicates how each distribution contributes to the minimizer of . 
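since the reproducing property gives the inner product of two mean embeddings as a double expectation of the base kernel , the kernel on distributions reduces , for sample - based data , to a double average . a minimal numpy sketch ( gaussian rbf embedding kernel ; the sample sizes and bandwidth are arbitrary illustrative choices ) :

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """gaussian rbf embedding kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def K(X, Y, gamma=1.0):
    """empirical estimate of <mu_P, mu_Q>_H = E_P E_Q k(x, y): the
    average of the base kernel over all sample pairs."""
    return rbf(X, Y, gamma).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))   # i.i.d. sample from P
Y = rng.normal(0.5, 1.0, size=(80, 2))    # i.i.d. sample from Q
print(K(X, X), K(X, Y))                    # gram entries for an smm
```

stacking such entries into a gram matrix and feeding it to any kernel machine that accepts a precomputed kernel yields the linear smm described below .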
roughly speaking , the coefficients controls the contribution of the distributions through the mean embeddings . furthermore ,if we restrict to a class of dirac measures on and consider the training set , the functional reduces to the usual regularization functional and the solution reduces to .therefore , the standard representer theorem is recovered as a particular case ( see also for more general results on representer theorem ) .note that , on the one hand , the minimization problem is different from minimizing the functional for the special case of the additive loss .therefore , the solution of our regularization problem is different from what one would get in the limit by training on an infinitely many points sampled from . on the other hand , it is also different from minimizing the functional where ] .this kernel can be computed in closed - form for certain classes of distributions and kernels .examples are given in table [ tab : expected - kernel ] .[ tab : expected - kernel ] alternatively , one can approximate the kernel by the empirical estimate : where and are empirical distributions of and given random samples and , respectively .a finite sample of size from a distribution suffices ( with high probability ) to compute an approximation within an error of . instead, if the sample set is sufficiently large , one may choose to approximate the true distribution by simpler probabilistic models , e.g. , a mixture of gaussians model , and choose a kernel whose expected value admits an analytic form . storing only the parameters of probabilistic models may save some space compared to storing all data points . note that the standard svm feature map is usually nonlinear in , whereas is _ linear _ in .thus , for an smm , the first level kernel is used to obtain a vectorial representation of the measures , and the second level kernel allows for a nonlinear algorithm on distributions . for clarity , we will refer to and as the * embedding kernel * and the * level-2 kernel * , respectivelythis section presents key theoretical aspects of the proposed framework , which reveal important connection between kernel - based learning algorithms on the space of distributions and on the input space on which they are defined .given a training sample drawn i.i.d . from some unknown probability distribution on , a loss function , and a function class , the goal of statistical learning is to find the function that minimizes the expected risk functional .since is unknown , the empirical risk based on the training sample is considered instead .furthermore , the risk functional can be simplified further by considering based on samples drawn from each . our framework ,on the other hand , alleviates the problem by minimizing the risk functional ){\ , \mathrm{d}}\mathcal{p}({\ensuremath{\mathbb{p}}},y) ] ( cf .the discussion at the end of section [ sec : regularization ] ) .it is often easier to optimize as the expectation can be computed exactly for certain choices of and .moreover , for universal , this simplification preserves all information of the distributions .nevertheless , there is still a loss of information due to the loss function . due to the i.i.d .assumption , the analysis of the difference between and can be simplified w.l.o.g . 
to the analysis of the difference between ] for a particular distribution .the theorem below provides a bound on the difference between ] .[ thm : deviation ] given an arbitrary probability distribution with variance , a lipschitz continuous function with constant , an arbitrary loss function that is lipschitz continuous in the second argument with constant , it follows that - \ell(y,\mathbb{e}_{x\sim{\ensuremath{\mathbb{p}}}}[f(x)])\rvert } \leq 2c_{\ell}c_f\sigma ] will be small . as a result , if this holds for any distribution in the training set , the true risk deviation is also expected to be small . it turns out that , for certain choices of distributions , the linear smm trained using is equivalent to an svm trained using some samples with an appropriate choice of kernel function .[ lem : smm - svm ] let be a bounded p.d .kernel on a measure space such that , and be a square integrable function such that for all . given a sample where each is assumed to have a density given by ,the linear smm is equivalent to the svm on the training sample with kernel .note that the important assumption for this equivalence is that the distributions differ only in their location in the parameter space .this need not be the case in all possible applications of smms .furthermore , we have .thus , it is clear that the feature map of depends not only on the kernel , but also on the density .consequently , by virtue of lemma [ lem : smm - svm ] , the kernel allows the svm to place different kernels at each data point .we call this algorithm a _ flexible svm _ ( flex - svm ) .consider for example the linear smm with gaussian distributions and gaussian rbf kernel with bandwidth parameter .the convolution theorem of gaussian distributions implies that this smm is equivalent to a flexible svm that places a data - dependent kernel on training example , i.e. , a gaussian rbf kernel with larger bandwidth .the kernel is in fact a special case of the hilbertian metric , with the associated kernel ] , ,5\cdot\mathbf{i}_2) ] .then , to reduce computational cost , principle component analysis ( pca ) is performed to reduce the dimensionality to 16 .we compare the svm on the initial dataset , the asvm on the virtual datasets , and the smm . for svm and asvm ,the gaussian rbf kernel is used . for smm, we employ the empirical kernel with gaussian rbf kernel as a base kernel .the parameters of the algorithms are fixed by 10-cv over parameters and .the results depicted in figure [ fig : usps - invariant ] clearly demonstrate the benefits of learning directly from the equivalence classes of digits under basic transformations ) , we got similar results using uniform distributions . ] . in most cases ,the smm outperforms both the svm and the asvm as the number of virtual examples increases .moreover , figure [ fig : usps - invariant - time ] shows the benefit of the smm over the asvm in term of computational cost core 2 duo cpu e8400 at 3.00ghz and 4 gb of memory . ] .this section illustrates benefits of the nonlinear kernels between distributions for learning natural scene categories in which the bag - of - word ( bow ) representation is used to represent images in the dataset .each image is represented as a collection of local patches , each being a codeword from a large vocabulary of codewords called codebook .standard bow representations encode each image as a histogram that enumerates the occurrence probability of local patches detected in the image w.r.t .those in the codebook . 
on the other hand ,our setting represents each image as a distribution over these codewords .thus , images of different scenes tends to generate distinct set of patches . based on this representation , both the histogram andthe local patches can be used in our framework .we use the dataset presented in . according to their results ,most errors occurs among the four indoor categories ( 830 images ) , namely , bedroom ( 174 images ) , living room ( 289 images ) , kitchen ( 151 images ) , and office ( 216 images ) .therefore , we will focus on these four categories . for each category, we split the dataset randomly into two separate sets of images , 100 for training and the rest for testing .a codebook is formed from the training images of all categories .firstly , interesting keypoints in the image are randomly detected .local patches are then generated accordingly .after patch detection , each patch is transformed into a 128-dim sift vector . giventhe collection of detected patches , k - means clustering is performed over all local patches .codewords are then defined as the centers of the learned clusters .then , each patch in an image is mapped to a codeword and the image can be represented by the histogram of the codewords . in addition , we also have an matrix of sift vectors where is the number of codewords .we compare the performance of a probabilistic latent semantic analysis ( plsa ) with the standard bow representation , svm , linear smm ( lsmm ) , and nonlinear smm ( nlsmm ) . for smm , we use the empirical embedding kernel with gaussian rbf base kernel : where is the histogram of the image and is the sift vector .a gaussian rbf kernel is also used as the level-2 kernel for nonlinear smm .for the svm , we adopt a gaussian rbf kernel with -distance between the histograms , i.e. , where .the parameters of the algorithms are fixed by 10-cv over parameters and .for nlsmm , we use the best of lsmm in the base kernel and perform 10-cv to choose parameter only for the level-2 kernel . to deal with multiple categories ,we adopt the pairwise approach and voting scheme to categorize test images .the results in figure [ fig : naturalscene - acc ] illustrate the benefit of the distribution - based framework .understanding the context of a complex scene is challenging . 
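the image kernel just described ( whose displayed formula is elided in this copy ) weights the base kernel between codeword centers by the two histograms . a sketch , with an invented codebook size and bandwidth :

```python
import numpy as np

def rbf(Z1, Z2, gamma=1e-3):
    sq = (Z1**2).sum(1)[:, None] + (Z2**2).sum(1)[None, :] - 2.0 * Z1 @ Z2.T
    return np.exp(-gamma * sq)

def bow_embedding_kernel(h1, h2, Z, gamma=1e-3):
    """histogram-weighted empirical embedding kernel between two images:
    h[c] is the occurrence probability of codeword c in the image and
    Z[c] its 128-dim sift center, so
    K = sum over c, c' of h1[c] * h2[c'] * k(Z[c], Z[c'])."""
    return h1 @ rbf(Z, Z, gamma) @ h2

rng = np.random.default_rng(1)
Z = rng.normal(size=(50, 128))             # toy codebook of 50 sift centers
h1, h2 = rng.dirichlet(np.ones(50)), rng.dirichlet(np.ones(50))

k12 = bow_embedding_kernel(h1, h2, Z)
# a nonlinear (level-2) smm puts a gaussian on the induced rkhs distance:
d2 = (bow_embedding_kernel(h1, h1, Z)
      + bow_embedding_kernel(h2, h2, Z) - 2.0 * k12)
print(k12, np.exp(-d2))
```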
employing distribution - based methodsprovides an elegant way of utilizing higher - order statistics in natural images that could not be captured by traditional sample - based methods .this paper proposes a method for kernel - based discriminative learning on probability distributions .the trick is to embed distributions into an rkhs , resulting in a simple and efficient learning algorithm on distributions .a family of linear and nonlinear kernels on distributions allows one to flexibly choose the kernel function that is suitable for the problems at hand .our analyses provide insights into the relations between distribution - based methods and traditional sample - based methods , particularly the flexible svm that allows the svm to place different kernels on each training example .the experimental results illustrate the benefits of learning from a pool of distributions , compared to a pool of examples , both on synthetic and real - world data .km would like to thank zoubin gharamani , arthur gretton , christian walder , and philipp hennig for a fruitful discussion .we also thank all three insightful reviewers for their invaluable comments .[ thm : representer1 ] given training examples , a strictly monotonically increasing function , and a loss function , any minimizing the regularized risk functional ,\dotsc,{\ensuremath{\mathbb{p}}}_m , y_m,\mathbb{e}_{{\ensuremath{\mathbb{p}}}_m}[f]\right ) + \omega\left(\|f\|_{\mathcal{h}}\right)\ ] ] admits a representation of the form for some . by virtue of proposition 2 in ,the linear functional ] .[ thm : deviation2 ] given an arbitrary probability distribution with variance , a lipschitz continuous function with constant , an arbitrary loss function that is lipschitz continuous in the second argument with constant , it follows that - \ell(y,\mathbb{e}_{x\sim{\ensuremath{\mathbb{p}}}}[f(x)])\rvert } \leq 2c_{\ell}c_f\sigma\ ] ] for any .assume that is distributed according to .let be the mean of in .thus , we have - \ell(y,\mathbb{e}_{{\ensuremath{\mathbb{p}}}}[f(x)])\rvert } & \leq & \int { \lvert\ell(y , f(\tilde{x } ) ) - \ell(y,\mathbb{e}_{{\ensuremath{\mathbb{p}}}}[f(x)])\rvert } { \ , \mathrm{d}}{\ensuremath{\mathbb{p}}}(\tilde{x } ) \\ & \leq & c_{\ell } \int { \lvertf(\tilde{x } ) - \mathbb{e}_{{\ensuremath{\mathbb{p}}}}[f(x)]\rvert } { \ , \mathrm{d}}{\ensuremath{\mathbb{p}}}(\tilde{x } ) \\ & \leq & \underbrace{c_{\ell } \int { \lvertf(\tilde{x } ) - f(m_x)\rvert } { \ , \mathrm{d}}{\ensuremath{\mathbb{p}}}(\tilde{x})}_{a } + \underbrace{c_{\ell}{\lvertf(m_x ) - \mathbb{e}_{{\ensuremath{\mathbb{p}}}}[f(x)]\rvert}}_{b } \ ; .\end{aligned}\ ] ] control of ( ) the first term is upper bounded by [ eq : control - a ] where the last inequality is given by \leq \sqrt{\mathbb{e}_{{\ensuremath{\mathbb{p}}}}[{\lvert\tilde{x } - m_x\rvert}^2 ] } = \sigma$ ] . control of ( ) similarly , the second term is upper bounded by combining and yields - \ell(y,\mathbb{e}_{{\ensuremath{\mathbb{p}}}}[f(x)])\rvert } \leq 2c_{\ell}c_f\sigma \enspace , \ ] ] thus completing the proof .[ lem : smm - svm3 ] let be a bounded p.d .kernel on a measure space such that , and be a square integrable function such that for all . given a sample where each is assumed to have a density given by ,the linear smm is equivalent to the svm on the training sample with kernel . 
for a training sample , the svm with kernel minimizes by the representer theorem , with some , hence this is equivalent to next , consider the kernel mean of the probability measure given by and note that for any .the linear smm with loss and kernel minimizes by theorem [ thm : representer ] , each minimizer admits a representation of the form thus , for this we have and , as above .this completes the proof .
|
this paper presents a kernel - based discriminative learning framework on probability measures . rather than relying on large collections of vectorial training examples , our framework learns using a collection of probability distributions that have been constructed to meaningfully represent the training data . by representing these probability distributions as mean embeddings in the reproducing kernel hilbert space ( rkhs ) , we are able to apply many standard kernel - based learning techniques in a straightforward fashion . to accomplish this , we construct a generalization of the support vector machine ( svm ) called a support measure machine ( smm ) . our analyses of smms provide several insights into their relationship to traditional svms . based on such insights , we propose a flexible svm ( flex - svm ) that places different kernel functions on each training example . experimental results on both synthetic and real - world data demonstrate the effectiveness of our proposed framework .
|
on two beam interference , the explicit interrelationship among source s frequency parameter , time functions of the beams between time domains of source and interference point and instantaneous outcome of the interference constructs a necessary foundation for any two - beam interferometry s design and interpretation of measured data . by means of phase transfer function between time domains of source and observer , the accumulated phase or phase function for either of the wave beams at any spatial point in the coherent temporal spatial region can be determined by source s parameter and corresponding instant . upon the beam s phase function in the coherent temporal - spatial region ,the phase difference function , two variables function in interference temporal - spatial region , is established .is the phase difference function a general interrelationship itself . from it ,the unified equations on steady and non - steady interference are inferred directly under the cases respectively . for steady interference , conventional rule on the interference spatial distributionis a particular example of the unified equation as the two beam s wavelengths are same . in additionmichelson - morley experiment result is reinterpreted with the equation . for non - steady interference two sets of the equations are derived for different interferometry outcomes : beat frequency and fringe s instantaneous displacing velocity ; moreover on some of typical dynamical measurement : history of distance , velocity and acceleration as well as source frequency property , the principle formulas are presented for application illustration .in beam s transfer time - space , , there is phase transfer functions ( ref . ) : =\varphi[t'-t_i(r , t ' ) ] \end{split}\ ] ] here =\varphi_i(t_{oi})=\varphi[t_{oi}'-t_i(r , t_{oi } ' ) ] ] ; where coherent spatial volume of wave beam 1 and 2 , coherent time interval of beam 1 and beam 2 ; two beams are split by same source but have different main trajectories to reach interference point .now define the phase difference function in this coherent temporal - spatial region , -\varphi[t'-t_1(r , t')]\\ & = \int_{t'-t_1(r , t')}^{t'-t_2(r , t')}\omega(\dot{t})\,d\dot{t } \end{split}\ ] ] thus there are the derivative property of the function : \frac{\partial t_1(r , t')}{\partial t'}-\omega[t'-t_2(r , t')]\frac{\partial t_2(r , t')}{\partial t'}+\omega[t'-t_2(r , t')]-\omega[t'-t_1(r , t ' ) ] \end{split}\ ] ] -\operatorname{grad}\varphi[t'-t_1(r , t')]\\ & = \operatorname{grad}\int_{t'-t_1(r , t')}^{t'-t_2(r , t')}\omega(\dot{t})\,d \dot { t}\\ & = \left\{-\omega[t'-t_2(r , t')]\frac{\partial t_2(r , t')}{\partial \vec n ( \operatorname{grad}\psi)}+\omega[t'-t_1(r , t')]\frac{\partial t_1(r , t')}{\partial \vecn ( \operatorname{grad}\psi)}\right\}\vec n ( \operatorname{grad}\psi)\\ & = \biggl\{\biggl\{-\omega[t'-t_2(r , t')]\operatorname{grad}t_2(r , t')+\omega[t'-t_1(r , t')]\operatorname{grad}t_1(r , t')\biggr\}\cdot\vec n ( \operatorname{grad}\psi)\biggr\}\vec n ( \operatorname{grad}\psi)\\ & = \biggl\{\biggl\{-\omega[t'-t_2(r , t')]\frac{\vec n ( \operatorname{grad}t_2)}{v_{io2}(r , t')}\left[1-\frac{\partial t_2(r , t')}{\partial t'}\right]\\ & \phantom{=\bigg\{\bigg\{}+\omega[t'-t_1(r , t')]\frac{\vec n ( \operatorname{grad}t_1)}{v_{io1}(r , t')}\left[1-\frac{\partial t_1(r , t')}{\partial t'}\right]\biggr\}\cdot \vec n ( \operatorname{grad}\psi)\biggr\}\vec n ( \operatorname{grad}\psi ) \end{split}\ ] ] where ( see appendix ) : \ ] ] . when or and =constant ] , or , . 
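as a numerical illustration of the non - steady case ( a sketch under invented numbers , not the paper s setup ) : when the two beams reach a fixed point with instantaneous frequencies differing by a constant offset , the phase difference function grows linearly in the observer s time , and every 2 pi of growth is one fringe passing the point , so counting fringes recovers the beat frequency .

```python
import numpy as np

df = 1.0e3                              # frequency offset [Hz] (illustrative)
t = np.linspace(0.0, 5.0e-3, 50001)     # observation window [s]

dpsi = 2.0 * np.pi * df * t             # phase difference psi(r, t') at fixed r
intensity = 2.0 * (1.0 + np.cos(dpsi))  # two equal unit-amplitude beams

# each 2*pi advance of dpsi moves one fringe past the observation point
print("intensity extrema:", intensity.min(), intensity.max())
print("fringes counted:", dpsi[-1] / (2.0 * np.pi))
print("beat frequency [Hz]:", dpsi[-1] / (2.0 * np.pi) / t[-1])
```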
+the can be resolved form .moreover from eq . there , so is also resolved by fringe s displacing speed .4 . to measure the rate of radiant frequency of electron in magnetic dipole , ref .fig . 2 ,insert high dispersion medium with length between radiant source and interferometer .if does not exist , short distance of fringes will be independent to the medium length ; if exists , the fringes distance will be proportional to the length and refractive index of dispersion medium .1 . phase difference function , through phase transfer function and involved time function , reflects the theoretical interrelation between outcome of two - beam interference , frequency characteristics of two autonomic wave sources , for example , as beat equation \frac{\partial t_2(r,\dot t)}{\partial \dot t}-\omega_1[t_1(r,\dot t)]\frac{\partial t_1(r,\dot t)}{\partial \dot t}\right\}\,d\dot t=\pm 2\pi$ ] and subject s kinematic information involved in corresponding time function .the time function applied in phase transfer function must be reversible , in most cases it is positive reversible , that is , ; all inference and property of the phase difference function can be applied in case of two autonomic wave sources , although in most cases two coherent wave sources are derived from one by splitting . since phase difference function is consisted of or can be expressed by spatial frame independent physical quantities , the theoretical relation and consequent inferences do no depend upon spatial - frame s selection .explanation of zero fringe displacing result of michelson - morley experiment .+ according to steady interference equation , the only existing explanation for the experimental result is phase difference at observing interference point remains same or invariable in two cases , that is , light speed with respect to interferometer remains same in both cases .phase difference function reveals the necessary interrelation between outcome of two - beam interference , frequency parameter of two autonomic wave sources , and concrete time function s information affected by subject kinematic movement .unified equation on steady and non - steady two - beam interference can be derived from the phase difference function .phase difference function and related inference are independent to spatial - frame s selection or remain invariable under frame transformation .}{\delta r}\\ & = \lim_{\delta r \to 0}\frac{t(r , t')-t(r+\delta r , t')}{\delta r}=\lim_{\delta r \to 0}\frac{1}{\frac{\delta r}{\delta t'}}\frac{\delta t}{\delta t'}\\ & = \frac{1}{v_{io}(r , t')}\frac{dt}{dt'}=\frac{1}{v_{io}(r , t')}\left[1-\frac{\partial t(r , t')}{\partial t'}\right ] \end{split}\ ] ] or from \rvert}={\lvert\operatorname{grad}t[r , t(r , t')]\rvert}\\ & = \frac{\partial t(r , t')}{\partial r}=\frac{d t[r , t(r , t')]}{dr } \end{split}\ ] ] and }{\delta r}\\&=\lim_{\delta r \to 0}\frac{\delta t'}{\delta r}=\frac{1}{v_io(r , t ) } \end{split}\ ] ] there ( ref . ) \\ & = \frac{1}{v_{io}(r , t)}\left[1-\frac{\partial t(r , t')}{\partial t'}\right ] \end{split}\ ] ]
|
The phase difference function is established by means of the phase transfer function between the time domains of the source and the interference point. The function reveals a necessary interrelation between the outcome of two-beam interference, the source's frequency, and the measured subject's kinematic information. As inferences, unified equations on steady and non-steady interference are derived, and the relevant properties and applications are discussed.
|
this paper provides a rigorous discussion of the concept of spin precession frequency on synchro betatron orbits in storage rings . to set the scene we begin by introducing some key physical ideas via the equations of orbit and spin motion and the notion of spin orbit equilibrium . the spin expectation value ( `` the spin '' ) in the rest frame of , for example , a proton , an electron or a muon moving in electric and magnetic fields under the influence of the lorentz force precesses according to the thomas bargmann michel telegdi ( t bmt ) equation where the precession vector depends on and which are respectively the electric and magnetic fields , and the velocity .particle motion with respect to the synchronous closed , i.e. periodic , orbit is described in terms of three pairs of canonical variables which we combine into a vector with six components .for example , two of the pairs can describe transverse motion and one pair can describe longitudinal ( synchrotron ) motion within a bunch .since we are dealing with storage rings we take the orbital motion to be bounded . in this paper we ignore radiation , interparticle interactions and interactions with the vacuum system . in ( [ eq:1.1 ] )the independent variable is the time .however , since the electric and magnetic guide fields in particle accelerators and storage rings are fixed in space it is more convenient to adopt the standard practice of replacing with the angular position around the ring , the azimuth , where is the distance around the ring and is the circumference .since and depend on and we can now rewrite ( [ eq:1.1 ] ) in the form where is the precession vector obtained from by rescaling with and transforming to machine coordinates .if the beam is in equilibrium , i.e. if the the phase space density is in , we then write it as so that . we normalize it to unity : . the necessary condition for this kind of equilibrium , namely that at a fixed the fields are in , is automatically fulfilled in a storage ring .but of course the boundedness of the motion is also required .conditions necessary for beam equilibrium and a way of calculating using ergodic theory and the concept of `` stroboscopic averaging '' are described in detail in .the statistical properties of the spins are encoded in the quantum mechanical spin density matrix .but for spin 1/2 particles this can be completely parametrized by the polarization vector . for particle beams we need the local polarization at each point in phase space . is the average of the normalized spin vectors , , at , where denotes the euclidean norm .we define the polarization of the whole beam , the `` beam polarization '' , at a given azimuth as .since the t bmt equation ( [ eq:1.2 ] ) is linear in and since the particles at all see the same , also obeys the t bmt equation .furthermore , the length of is constant along a phase space trajectory . for a storage ring at fixed energy, is in at a fixed position in phase space so that .this opens up the possibility of a spin distribution that is the same from turn to turn , i.e. 
in equilibrium .then not only obeys the t bmt equation , but is in for fixed and we then write it as so that .we denote the unit vector along by .this also obeys the t bmt equation along orbits and is in : .the method described in for constructing can be extended as in for constructing and from the treatments in it is clear that the existence of and do not require that the orbital motion be integrable .but , of course , the conclusions of are still valid if the motion _ is _ integrable .moreover , particle motion in storage rings is usually close enough to integrability to allow the motion to be characterized in terms of well defined betatron and synchrotron frequencies .this , in turn , allows predictions to be made about beam stability via the concept of orbital resonance .thus , in the remainder of this paper we will assume that the orbital motion is integrable .then , as we shall see , the stability of spin motion can also be discussed in terms of resonance , namely `` spin orbit resonance '' .of course , integrable orbital motion and spin orbit equilibrium are idealizations .nevertheless , these idealizations often provide useful starting points for calculations . for integrable particle motion the position of a particle in phase spaceis represented by three pairs of action angle variables and is determined by a hamiltonian .thus the orbital phase space is partitioned into disjoint tori , each of which is characterized by a unique set of .we now define .the actions are constants of the motion and for fixed the constant rate of advance of each , , is called an _orbital tune_. these frequencies are the number of oscillations per turn around the ring . in beam physicssuch frequencies are often referred to as tunes and we have adopted that usage .we will only consider storage rings running at fixed nominal energy . for integrable motion ,the in of , and is accompanied by in and .so as well as being a solution to the t bmt equation along orbits , satisfies nontrivial periodicity conditions . in our later discussions on quasiperiodicitywe will require that it also depend sufficiently regularly on the azimuth and the orbital angles , i.e. must be `` smooth '' in the sense defined in the main text .this corresponds to the expectation that also be smooth .the equilibrium density is also in and off orbital resonance it just depends on .since for every the field is invariant from turn to turn , it is now often called the _ invariant spin field _ ( isf ) .the isf is a central object in the theory of polarization in storage rings .for example , for an isf sufficiently regular in , off orbital resonance and away from the spin orbit resonances to be defined below , an upper limit to the equilibrium beam polarization at a particular is and it is reached only when the are parallel .this is easy to see by noting that if were to vary over a torus , the beam polarization would vary from turn to turn .so equilibrium implies that is constant over a torus .the maximum equilibrium polarization on each torus is reached when .note that a zero value for at some does not mean that the beam is depolarized .it could well be that the beam is fully polarized at each point in phase space but that the geometry of causes the integral to vanish . 
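Before continuing, the geometric point just made is easy to see numerically: a field of unit vectors can be fully polarized at every point of a torus while its torus average, and hence the equilibrium beam polarization, vanishes. The field below is a toy stand-in (a single orbital angle, and no claim that it solves the T-BMT equation):

```python
import numpy as np

# Toy unit-vector field over one orbital angle: an equatorial "fan".
phi = np.linspace(0.0, 2*np.pi, 10000, endpoint=False)
tilt = np.pi/2                        # 90-degree opening angle
n = np.stack([np.sin(tilt)*np.cos(phi),
              np.sin(tilt)*np.sin(phi),
              np.cos(tilt)*np.ones_like(phi)])

print(np.allclose(np.linalg.norm(n, axis=0), 1.0))  # polarized at every point
print(np.linalg.norm(n.mean(axis=1)))               # ~0: torus average vanishes
```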
then, if a change of parameters were to change the geometry of so that the integral were to become nonzero , and if the change were carried out adiabatically , the beam polarization would reappear .furthermore , the fact that the integral vanishes at one position does not mean that it vanishes at other positions .we prefer to reserve the term `` depolarization '' for a _ definitely irreversible _ loss of polarization such as occurs in the presence of noise or for an _ effectively irreversible _ loss of the kind that can occur when spin orbit resonances are crossed .although we have introduced via the notion of equilibrium , the integral also contains useful information when the spin distribution is not in equilibrium : gives an upper limit for the time averaged polarization away from spin orbit resonances .the maximum is reached on each torus when the polarization is in equilibrium and with .see ( * ? ? ?* section 2.2.8 ) and ( * ? ? ?* section 4.4 ) .an example of the origin and behavior of nonequilibrium beam polarization is given in figure 9 in where large oscillations are evident .however , polarimeters and particle detectors can not collect data quickly enough to make such oscillations observable .instead , only the time averaged polarization can be observed or exploited .but as we have just seen , we can still estimate its maximum value .that depends only on the geometry of and it is reached for each torus when the spread of is minimized .the isf also provides a perfect tool for estimating the long term effects on the beam polarization of small perturbations such as radiation or electric and magnetic fields which cause nonintegrable orbital motion .in particular , one begins with a spin orbit system which is invariant from turn to turn , i.e. with an equilibrium orbital distribution and with spins set parallel to the .then , since the system is initially in equilibrium , the effects of the perturbations can not be masked by the natural , potentially large , variations of the beam polarization of the kind depicted in figure 9 in . an ability to construct for integrable orbital motion and understand its behavior is then indispensable .for our integrable orbital motion of electrons and protons and up to energies of a few gev , an approximate can be calculated in a first order perturbation theory by an extension of the code slim , and in higher order perturbation theory by the codes , smile , forget me not and spinlie . however , for the high magnetic fields characteristic of proton rings running at energies of hundreds of gev , perturbative methods are inadequate .then the method of stroboscopic averaging as in the code sprint should be used .this is a numerical , nonperturbative algorithm and yields high accuracy for real rings even when all modes of orbit oscillation are included simultaneously .one can also use fourier methods as in the codes sodom2 or miles .sodom2 has been very useful for orbital motion restricted to one plane .miles gives explicit formulae which are applicable to some simple models .so far , the only practical general way to calculate the invariant spin field is to use stroboscopic averaging . as for any dynamical systemwe hope to understand more about spin motion by studying its spectrum of frequencies .various quantities , which seem at first sight to be related to spin frequencies , can be found in the literature and we will mention some in section 10 . 
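A hedged sketch of the stroboscopic-averaging idea used in SPRINT: transport an arbitrary fixed spin from j turns upstream to the chosen phase-space point and average over j; off spin-orbit resonance the average converges (roughly like 1/N) to the ISF direction. The one-turn spin map below is a toy stand-in for a real lattice, and all parameters are assumptions, so this is an illustration of the averaging step, not SPRINT's internals.

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues rotation matrix about a unit 3-vector."""
    a = np.asarray(axis, float); a /= np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle)*K + (1 - np.cos(angle))*(K @ K)

# Assumed toy one-turn spin map on a torus: a vertical rotation nu0 plus a
# small kick about a horizontal axis set by the orbital angle phi.
nu0, Q, eps = 0.41, 0.27, 0.05
def one_turn(phi):
    return rot([0, 0, 1], 2*np.pi*nu0) @ rot([np.cos(phi), np.sin(phi), 0],
                                             2*np.pi*eps)

def isf_strobo(phi, N=4000, f=np.array([0.0, 0.0, 1.0])):
    """Stroboscopic-average estimate of n(phi): transport the fixed vector f
    from j turns upstream to phi, sum over j, normalize."""
    acc, M = f.copy(), np.eye(3)
    for j in range(1, N + 1):
        M = M @ one_turn(phi - 2*np.pi*j*Q)   # transports from j turns back
        acc = acc + M @ f
    return acc / np.linalg.norm(acc)

phi0 = 1.0
n0 = isf_strobo(phi0)
# Invariance check: one-turn transport of n(phi0) should approximately
# reproduce the estimate at phi0 + 2*pi*Q.
print(one_turn(phi0) @ n0)
print(isf_strobo(phi0 + 2*np.pi*Q))
```

The final two prints compare the one-turn transport of the estimated field with the estimate one turn downstream; for an exact ISF these coincide, and the stroboscopic estimates agree up to the averaging error.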
buta true component of a spectrum quantifies long term behavior .thus any definition of spin precession frequency should reflect that stipulation .the choice can be further narrowed by requiring that the spectrum give useful clues about the behavior of _ sets _ of spins , and in particular about the beam polarization .after all , the experimenters using the beams in storage rings are just interested in the beam polarization , not individual spins . experience has shown that the best choice for characterizing spin motion in storage rings is the traditional one , namely the so called _ amplitude dependent spin tune _ ( briefly `` spin tune '' ) , which we usually denote by .assuming exists , the spin tune measures the number of spin precessions around , per turn around the ring , for a particle on the orbit and it provides a way to quantify the correlation between the spin motion and the orbital motion which `` drives '' it , and thereby forecast a qualitative aspect of spin motion , namely the degree of regularity of the spin motion .in particular , the spin motion can in general become very erratic when a spin tune is near a low order resonance condition where is a vector of integers and the quantity is usually called the _ order _ of the resonance . correspondingly , close to spin orbit resonance can become a very sensitive function of .this sensitivity has immediate consequences for work with polarized beams .for example , the maximum attainable equilibrium beam polarization of a stored high energy proton beam can be unacceptably low or the rate of depolarization , due to synchrotron radiation , of a stored electron beam can be unacceptably high .note , however , that the on a torus can sometimes be small away from spin orbit resonance and that proximity to a spin orbit resonance , especially one of high order , does not automatically imply that the on a torus is low .the resonance might be very weak .another feature of our definition of spin frequency is that , as we shall see , it is this quantity whose spectrum one obtains in a straightforward spectral analysis of spin motion during spin orbit tracking simulations . in other words : in an ideal world with technology which could select particles on a torus at a fixed , it could be _measured_. right at the resonance condition ( [ eq:1.5 ] ) , is in general nonunique .however , as we shall see , our spin orbit systems exhibit a tendency to avoid exact spin orbit resonance . since in general depends on and the particle energy on the closed orbit , usually varies with ( hence `` amplitude dependent spin tune '' ) , and the particle energy on the closed orbit .we will call the latter the `` beam energy '' .we emphasize that is a field over the _six _ dimensional phase space so that synchrotron motion is built in from the start .thus although varies with the beam energy and and , it does _ not _ change during a period of synchrotron motion . if were defined on four dimensional transverse phase space and the energy oscillations due to synchrotron motion were added as an afterthought , it would not be useful for describing _ equilibrium _ polarization . instead, we would have to characterize the beam polarization using time averages .we return to this theme in section 10 . on the closed orbit ,i.e. for , an exists which is independent of . we denote it by .the calculation of spin tune on the closed orbit presents no problem : it can , as we shall see , be extracted from an eigenvalue of the 1turn spin map . 
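The two computations just mentioned can be sketched directly: the closed-orbit spin tune follows from the trace of the one-turn spin rotation matrix, and proximity to the resonance condition can be scanned over low-order integer vectors. The sample tunes, and the convention that the order is the l1-norm of the integer vector (with the leading integer absorbing the integer part), are illustrative assumptions.

```python
import numpy as np
from itertools import product

def spin_tune_closed_orbit(R):
    """Fractional spin tune from a one-turn spin rotation matrix R in SO(3):
    trace(R) = 1 + 2*cos(2*pi*nu0). Returns a value in [0, 1/2]; resolving
    nu0 versus 1 - nu0 requires the rotation axis as well."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c) / (2*np.pi)

def resonance_distance(nu, Q, max_order=6):
    """Distance of nu from the nearest resonance nu = m0 + m . Q with
    |m|_1 <= max_order (m0 absorbs the integer part)."""
    best = (np.inf, None)
    for m in product(range(-max_order, max_order + 1), repeat=len(Q)):
        if sum(abs(k) for k in m) > max_order:
            continue
        x = nu - sum(k*q for k, q in zip(m, Q))
        dist = abs(x - round(x))
        if dist < best[0]:
            best = (dist, m)
    return best

a = 2*np.pi*0.2703
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
print(spin_tune_closed_orbit(R))                  # recovers 0.2703
print(resonance_distance(0.2703, (0.31, 0.18, 0.0049)))
```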
but the definition of spin tune for , i.e. on synchro betatron orbits is much more subtle .moreover , it requires precision .notions of spin frequency for synchro betatron orbits appearing in the literature are often not precisely presented and some appear to possess no capacity for predicting the qualitative aspects of spin motion in storage rings .this brings us to the purpose of this paper .this is to provide a rigorous discussion of the concept of spin precession frequency on synchro betatron orbits and thereby consolidate a framework for systematizing and classifying spin motion in storage rings . for thiswe make a careful mathematical study of the consequences for spin motion of the periodicities of in and , using precise definitions and carefully formulated theorems and we make use of the isf and other concepts which we distill from the literature and `` folklore '' on spin dynamics in storage rings . for example , we will show that under the appropriate conditions , the existence of the isf with the above mentioned periodicities implies that the dependence of the general solutions of ( [ eq:1.2 ] ) will contain five frequencies .four of them are the orbital tunes and the circulation tune , i.e. the frequency associated with the in . a fifth tune emerges which , under circumstances to be described , is a spin tune .the general solutions will then be found to be quasiperiodic with the tunes .moreover the results obtained here can be viewed as a generalization of floquet theory .given the confusion surrounding definitions of spin precession frequencies , the treatment of the kind that we provide here seems to be very necessary .our assumptions about are weak enough to cover the situations of most interest for storage rings , namely typical integrable synchro betatron motion .several of our theorems assume the existence of but although we have ways to find approximate , the determination of complete conditions for its existence is an outstanding mathematical issue .this question can , for example , be investigated using ergodic theory and the method of stroboscopic averaging .see .moreover , simulations indicate that approximate isfs _ do _ exist .this means that one obtains objects which , at least approximately , behave like an isf .moreover in some instances approximations even lead to an for which an exact isf can be found , e.g. as in the single resonance model - see section 7 .although we have introduced the vector by studying spin orbit equilibrium , it was first discussed by derbenev and kondratenko as a vehicle for constructing joint action angle variables for spin and orbital motion from their semiclassical spin orbit hamiltonian .this hamiltonian is derived from the dirac hamiltonian by a foldy wouthuysen transformation taken to first order in . in that picturethe spin tune emerges as the rate of advance of a spin phase .the terms at first order in in the derbenev kondratenko hamiltonian are those containing spin and these terms imply a force of the stern gerlach ( s g ) type . 
the s g forces on trajectories appear at first order in and a `` back reaction '' on the spin of the s g perturbation to the orbit would involve an addition to the spin precession rate of order .however , in this paper the effect of s g forces on spin and orbit motion is neglected and we just operate with the lorentz force and the t bmt equation and for the lorentz force and just include , as is usual , the terms of zeroth order in .there are several reasons for this approach .first , it is far from clear what form the s g forces should take .in fact there is considerable ambiguity in the choice of the s this is covered in detail in .see too and the bibliography in .the second ground has to do with the size of the s g forces . since the s g forces are of first order in they are extremely small compared to the lorentz forces which are of zeroth order in . of the lorentz force on a proton . at a fixed radiusthis ratio is essentially independent of the beam energy . the s g energy at that radius is of the order of of the kinetic energy .g energy in a hera dipole magnet is of the order of of the kinetic energy . ]they are also small compared to typical spurious perturbations to trajectories like noise and collective effects .so s g forces would not cause changes of practical significance to the results that we present . in particular , in practical situations in a storage ring there would be no significant change in the phenomenology of spin orbit resonances even if the s g forces were to cause tiny changes in the orbital tunes .the third ground is that it is far from clear that it makes sense to treat an essentially quantum mechanical system with a classical `` over interpretation '' of the influence of the s g force on the spin .an example of an effect which is not taken into account by a naive application of classical s g forces , is given in .it is implied there that long term shifts of an orbit due to s g forces will be nullified when the spin undergoes a quantum flip and the s g force then acts in the reverse direction .see for a classical perspective on this . in summary , we believe that a too literal interpretation of the s g like forces in the semiclassical spin orbit hamiltonian could lead to manipulations and conclusions of little relevance and utility for illuminating the core phenomenology of spin motion in typical storage rings .we believe that the first priority is to begin with just the lorentz force and the t bmt equation .then , as mentioned earlier , once the equilibrium state of the system has been defined , other influences such as nonlinear fields , noise , collective effects , synchrotron radiation and the very small s g like effects can be included as perturbations .the paper is structured as follows . in section 2we begin by discussing some important consequences of ( [ eq:1.2 ] ) . herewe introduce the central concept of a _ uniform precession frame _( upf ) and the associated _ uniform precession rate _ ( upr ) .the upf provides a coordinate system for spin .then in section 3 we give a detailed discussion of spin motion on the closed orbit where is independent of so that the floquet theorem applies .sections 2 and 3 contain standard results but we present them in forms which motivate their extension in later sections .section 4 contains the definition of a quasiperiodic function and collects some properties useful for the discussion following . 
in particularit defines a diophantine condition needed for handling a problem with small divisors .the key ideas are formalized in lemmas 4.3 , 4.7 and 4.8 .section 5 uses the concept of a upf , quasiperiodic with orbital frequencies , to define the proper upr , the spin tune and spin orbit resonance .the main theorem in section 5 is theorem 5.3 which allows us to define equivalence classes of spin tunes .the presentation in sections 2 , 3 , 4 and 5 is deliberately rather general and abstract .then in section 6 we introduce a _ field _ called the _ invariant frame field _ ( iff ) which is used to construct upfs .there we consider the angular phase space as a whole to prove theorems about the concepts introduced in section 5 .we also connect the abstract ideas introduced earlier to a familiar physical idea , namely that if the orbital tune were off orbital resonance ( only with the vector of integers ) , the existence of a nonunique isf would imply that the system were on spin orbit resonance .the main theorems in section 6 are theorems 6.3 - 5 .theorem 6.3 is used in the proof of theorem 6.5 and it is generally our main tool for showing that a torus is `` well tuned '' .the proof of theorem 6.3 relies on theorem 5.3 .some examples of the formalism for model are presented in sections 6 , 7 and 8 .note that , except for some examples , we allow the number of action angle pairs , , to be arbitrary ( but ) although for spin motion in storage rings , the case is the most important . to aid the reader we mark the key equations with a on the left . as a byproduct of the quasiperiodic structure of the solutions we suggest using spectral analysis as a way of `` measuring '' the spin tune during spin orbit tracking simulations and thereby complementing other methods already in use .spectral analysis may also lead to a practical method for deciding whether an invariant spin field exists .these ideas are presented in section 9 and formalized in theorems 9.1 and 9.2 .the paper is summarized in section 10 where our concepts are also related to simulations and used to discuss some popular notions . for the rest of the paper , apart from section 10 , we will now adopt a more efficient notation whereby we use the symbols , and , ( ) to mean respectively the list of orbital actions , orbital tunes and orbital angles . from now on wewill also adopt the frame dependent abbreviations , and . generally ,if appears as an independent variable in a function , the function will be in . in that casewe say for brevity that the function is in .in terms of the new notation , the t bmt equation and the equations of orbital motion are where is a real skew symmetric matrix with nonzero elements and .the dot over a symbol denotes differentiation w.r.t .because the dependence is only parametric , we will often suppress the symbol in and , e.g. as in .clearly , is in and in . for brevity we just say that functions with such periodicity are . on the closed orbit ,i.e. for , is independent of .note that on the torus the angular variables play a largely artificial role because here is independent of .but their inclusion is very convenient as it allows one to treat all tori on the same basis .then all definitions , e.g. that of the isf , apply to all tori .a function is called , if the function together with all of its partial derivatives up to and including those of order are continuous . in this paperwe will assume that , for fixed , is a function of . a function will be called _smooth_. 
the smoothness of corresponds to the fact that in real storage rings , the magnetic and electric fields are smooth functions of space and time . the labels for the definitions , propositions , theorems and lemmas are chosen in a way which indicates their relative positions in the text .we begin by establishing some basic components of our formalism .clearly ( [ eq:1.10 ] ) gives and where and are the actions and phases at .thus an orbit is labeled by .but if we consider a fixed torus we often suppress the symbol . by `` a fixed torus '' we mean that the orbital tune has the value and that the spin motion is characterized by the function of and .equation ( [ eq:1.9 ] ) thus becomes where the real skew symmetric is defined by as will become clear from definition 4.1 in section 4 , is a quasiperiodic function of with the tunes ( frequencies ) .the solutions of ( [ eq:2.1 ] ) can be written as in terms of the _ principal solution matrix _ at which isthe matrix , defined uniquely by the initial value problem thus the principal solution matrix at is the spin transport matrix from the azimuth to the azimuth .occasionally we call a solution of ( [ eq:2.1 ] ) a `` spin trajectory '' at . the choice for the starting azimuth does not imply a loss of generality as can be seen by considering the general initial value problem for ( [ eq:1.9 ] ) and ( [ eq:1.10 ] ) . note that and are in and that by the smoothness of the principal solution matrix is a smooth function of . we will sometimes suppress the symbols and in , and .the key property of is that it belongs to , i.e. as is easily proved using ( [ eq:2.2 ] ) and ( [ eq:2.0 ] ) .let and be two initial conditions for ( [ eq:2.1 ] ) and let be the real inner product .then so that the inner product of any two solutions of ( [ eq:2.1 ] ) is conserved . in particular , the length of a spin vector and the angle between any two spin trajectories at the same is conserved .in addition , it is easy to show that the cross product of two solutions is a solution . in the remainder of this paperwe will , for convenience , allow spins to have arbitrary length .an interesting property of ( [ eq:2.1 ] ) is that knowledge of one solution completely determines by a simple integration .it is a standard result for linear systems that knowledge of one solution can be used to reduce the dimension by one .here it reduces the dimension by two because of the special structure of in ( [ eq:2.3 ] ) as we will now demonstrate .let be a solution of ( [ eq:2.1 ] ) , i.e.a spin trajectory at , and let be of norm 1 .choose and so that \label{eq:2.4}\ ] ] is a matrix .one can for example require and to be solutions of ( k=1,2 ) , whence we can assume that is smooth , i.e. 
a function .next we make a transformation on ( [ eq:2.2 ] ) and ( [ eq:2.0 ] ) defined by this gives where since , and thus by the skew symmetry of .therefore is skew symmetric as expected for rotations .the third column of is since is a solution of ( [ eq:2.1 ] ) , and the skew symmetry of yields where ( [ eq:2.9 ] ) also serves to define .therefore as is easily checked by differentiation .finally , the exponentials in these equations can be evaluated by noting that so we have constructed the complete principal solution matrix by starting from just one solution of ( [ eq:2.1 ] ) and assuming the existence of a smooth .this is the result we were aiming for .note that by ( [ eq:2.7]),([eq:2.9 ] ) we have it follows that from which we deduce .thus we obtain the useful formula in addition , since , it follows from ( [ eq:2.8 ] ) that .* remarks : * * equation ( [ eq:2.5 ] ) is equivalent to a change of basis for spin whereby is expressed as so that is the spin in the rotating frame represented by the matrix .moreover so that .thus is constant and precesses around at a nonconstant rate . from ( [ eq:2.13 ] )the rate is , as one would expect , just a combination of the projection of onto and the rate , , of rotation of and around . in our discussion of spin tune in later sectionsit will be useful to define a frame in which the spin precesses _uniformly_. from ( [ eq:2.11 ] ) we have where is an arbitrary constant .thus with the change of basis we obtain and we have defined a frame whose third column is a spin trajectory and in which spin has a constant precession rate . in the followingwe will be interested in the case where the mean of exists and we will choose where mod and is in [ 0,1 ) .thus we can write where represents the fluctuating part of with zero mean and where the integer is chosen such that . * the ideas in remark 1 lead to some precise definitions . on a given torus ,let be such that the principal solution matrix at can be written as where is constant and in .then is called a _ uniform precession frame _ ( upf ) at and , which is uniquely determined by , is called the _ uniform precession rate _ ( upr ) for and is denoted by .we then call ( [ eq:2.20 ] ) a _ standard form _ of the principal solution matrix . + under certain conditions which will be described in definition 5.5 , will be called a spin tune .note that , due to ( [ eq:2.20 ] ) , is smooth in and satisfies the ordinary differential equation : where . in particular by ( [ eq:2.30 ] ) the vector described by the third column of obeys ( [ eq:2.1 ] ) so that it is a spin trajectory with unit length .moreover , for every constant the initial value problem defined by ( [ eq:2.30 ] ) and the arbitrary initial matrix has the unique solution .thus every solution of ( [ eq:2.30 ] ) with and is a upf at and its upr equals , i.e. because , for every upf one has the useful formula which follows from ( [ eq:2.32 ] ) .note that the interval is just a matter of choice any convenient half open interval of length 1 could be chosen , e.g. . * in this section and in the rest of this paper , the concepts of orthonormal reference frame and matrix are interchangeable . moreover the elements of the columns of such a matrix are just the components of the unit coordinate vectors of the corresponding frame , as for example in ( [ eq:2.4 ] ) .thus we will often identify the columns with such vectors . 
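A small numerical check of the structures above: integrate the solution matrix of the spin equation for three orthonormal initial spins and confirm that the principal solution matrix stays in SO(3), i.e. that lengths of, and angles between, spin trajectories are conserved. The quasiperiodic precession vector below is an arbitrary stand-in.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Omega(theta):
    """Assumed toy precession vector, quasiperiodic in theta."""
    return np.array([0.1*np.cos(0.27*theta), 0.1*np.sin(0.27*theta), 0.41])

def rhs(theta, y):
    S = y.reshape(3, 3)                   # columns are spin trajectories
    dS = np.cross(Omega(theta), S.T).T    # Omega x S, column by column
    return dS.reshape(-1)

sol = solve_ivp(rhs, (0.0, 200*np.pi), np.eye(3).reshape(-1),
                rtol=1e-10, atol=1e-12)
R = sol.y[:, -1].reshape(3, 3)            # principal solution matrix
print(np.allclose(R.T @ R, np.eye(3), atol=1e-6))   # orthogonality preserved
print(np.linalg.det(R))                   # ~ +1: R lies in SO(3)
```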
*it can be shown that if with ] satisfies the relations and .also , by ( [ eq:2.14 ] ) , where the matrix is skew symmetric .\b ) conversely , if is a real skew symmetric matrix and if , then a matrix exists such that .the proof is elementary .see for example .note that the relation simply means that lies along the `` axis of rotation '' for .in this section we consider the case , so that the of ( [ eq:2.17 ] ) has no or dependence and is and smooth in .this case corresponds to the motion of the particle on the closed orbit and the following theorem applies . * floquet theorem : * _ where the matrix is and smooth and where .moreover , . _ _proof : _ since , at , is independent of , is independent of . from lemma 2.1a ,we know that there exist and such that and where . then with , , and .a key property of the principal solution matrix is that which we can see by noting that the l.h.s . of ( [ eq:3.2 ] )is a solution matrix of ( [ eq:2.2 ] ) by the of and the r.h.s . is a solution matrix of ( [ eq:2.2 ] ) since is .they are equal , by the uniqueness of solutions to the initial value problem for ( [ eq:2.2 ] ) and ( [ eq:2.0 ] ) , since they are equal at .define by .clearly and since and is skew symmetric .the periodicity of is clear because where ( [ eq:3.2 ] ) is used at the second equality . the floquet theorem we now make several remarks concerning the principal solution matrix defined by ( [ eq:2.2]),([eq:2.0 ] ) when , i.e. when is independent of and in .the `` '' symbol which was specific to the theorem is not needed in the following remarks .* remarks : * * if , where , where the real skew symmetric matrix has the spectrum and where the matrix is , then will be called _ floquet parameters_. in particular called a _floquet frequency_. thus the floquet theorem states that floquet parameters exist and it implies that the principal solution matrix depends on two frequencies where the floquet frequency emerges in addition to the circulation tune .note that the floquet parameter is a smooth element of with . *if are floquet parameters as defined in remark 1 , then from lemma 2.1b a exists such that , whence .thus at every , is a upf with upr , as defined in section 2 .we conclude that every floquet frequency is a upr of a upf and that ( recall remark 2 of section 2 ) a unit length function of exists , which is a spin trajectory at every .this is the mentioned in the introduction .+ conversely , if is a upf , then by ( [ eq:2.20 ] ) where , defined by fulfill all conditions of floquet parameters .thus the upr of every upf is a floquet frequency , i.e. for upfs the upr emerges as a floquet frequency and thus as an extra frequency of the system .we conclude that the set of floquet frequencies is identical with the set of uprs which correspond to upfs . * to study the set of floquet frequencies in more detail , we first consider two sets and of floquet parameters , i.e. from ( [ eq:3.7 ] ) at , we obtain , so that .thus the set of floquet frequencies has at most two elements and ( due to the floquet theorem ) at least one element .in particular , the set of floquet frequencies has either one element ( which then is equal to ) or it has two elements , both of them positive . note that a floquet frequency which is in ] by numerically integrating ( [ eq:2.1 ] ) with .this is the way that is constructed in slim and other related spin codes . * since is a solution of ( [ eq:2.1 ] ) so is .one can therefore replace ] . 
by ( [ eq:2.13 ] )it follows that with this replacement becomes .thus if ] leads to , so that one can choose in ( [ eq:3.5 ] ) so as to put the upr of in ] where is the set latexmath:[\[\lbrace \omega \in { \mathbb r}^d : | m\cdot(1 , \omega ) | \geq \gamma note that the symbol is also used in the introduction for the precession vector. however its meaning should be clear from the context .it follows from the definition that , where . for fixed and for each `` resonant zone '' is either empty or a thickened dimensional plane centered on the resonant plane , , with thickness proportional to .for example , when and the corresponding zones are intervals centered on the three points in : where , each with thickness . when and the corresponding zones are centered on the twelve lines in : where with thickness either or . more generally , if with and then one can show by using rotations in that can be rotated into the set .thus the thickened dimensional plane has thickness .for such that the resonance condition can not be satisfied and we have .note also that is undefined for .* definition 4.6 * ( orbital resonance ) : we say that the torus at is off orbital resonance if is nonresonant ( otherwise we say that it is on orbital resonance ) and this is certainly the case if .usually , ( [ eq:1.10 ] ) is said to be resonant if is resonant .our usage is different because our basic system is ( [ eq:2.1 ] ) , which includes the circulation tune , 1 . we can now interpret as the closed set in constructed by successively removing the open resonance zones , corresponding to the resonance planes , with increasing . thus its construction is similar to the construction of a cantor set .the resonance planes are dense in and thus is small in the sense that it has an empty interior .however , we will show in the proof of lemma 4.8 that it is large in the sense that for the lebesgue measure of its complement relative to is proportional to ( in the sense of ( [ eq:4.23 ] ) ) which can be arbitrarily small .we could take our diophantine set to be for small as in . herewe take the larger set . nowif then and thus converges uniformly if converges . thus the diophantine condition leads to a simple sufficient condition for the quasiperiodicity of . in this context we now state and prove the following lemma which addresses the differentiability of .let be of class and and let .let where .then , given by ( [ eq:4.5 ] ) with , converges uniformly on to a smooth function which is .moreover , . _proof : _ because a constant exists such that ( see ) , we have , for every , a combinatorial argument gives moreover , if , then and ( [ eq:4.6 ] ) and ( [ eq:4.15 ] ) give since we conclude that the l.h.s. of ( [ eq:4.9 ] ) converges as for every .it follows that converges uniformly to a continuous function which is . from ( [ eq:4.5 ] )we have then if and repeating the above argumentation we find thus , if , converges uniformly and a standard result ( see , ) means that is and it follows that since , with , the in ( [ eq:4.600 ] ) converge . 
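The diophantine condition can be probed numerically by scanning the small divisors over integer vectors of bounded order; tunes deep inside a resonant zone announce themselves by anomalously small values. The sample tune vectors below are illustrative.

```python
import numpy as np
from itertools import product

def smallest_divisor(omega, max_order):
    """min |m . (1, omega)| over integer vectors m with 0 < |m|_1 <= max_order."""
    best = np.inf
    for m in product(range(-max_order, max_order + 1), repeat=len(omega) + 1):
        order = sum(abs(k) for k in m)
        if order == 0 or order > max_order:
            continue
        best = min(best, abs(m[0] + sum(k*w for k, w in zip(m[1:], omega))))
    return best

print(smallest_divisor((0.3819660113, 0.7548776662), 8))  # O(1e-2): benign
print(smallest_divisor((0.2500001, 0.5000002), 8))        # ~4e-7: resonant zone
```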
* remark : * * to prove ( [ eq:4.15 ] ) we define the sets and .it follows that and that contains elements .then contains no more than elements and because we conclude that contains no more than elements , thus proving ( [ eq:4.15 ] ) .lemma 4.7 provides the basic framework that we need for discussing the uniform convergence of the sequence .in particular it shows that as increases beyond the small divisor problem loses much of its potency .this comes as no surprise because the inequality implies that the fourier coefficients decrease with increasing more rapidly as increases . then with growing the small divisor in ( [ eq:4.5 ] ) can come closer to zero without destroying the convergence .however , although lemma 4.7 takes the mystery out of the working of the diophantine condition , it puts the burden on determining which are in the set and it is not so easy to decide , off orbital resonance , if is in . butsome relief comes from the following lemma , which shows that if then the complement , , of is a small set in terms of lebesgue measure .if in addition , the sequence converges uniformly for almost every .for these two conditions to be consistent we thus need .these results will be central to the statement and proof of theorems 6.5c - d . if , then , where denotes the lebesgue measure . _proof : _let and define . then from definition 4.5 , .note that ( hence ) is undefined for .we will show below that where ( note that is the volume of ) . assuming the validity of ( [ eq:4.223 ] ) , as before ( see ( [ eq:4.15 ] ) ) therefore and for the series converges .now decreases monotonically to as the positive integer .therefore by continuity and the finiteness of lebesgue measure restricted to where the second equality follows from ( [ eq:4.23 ] ) .since this is true for all we obtain the required result . to prove ( [ eq:4.223 ] ) we first recall that is empty if .thus can only be nonempty if with and so that we only have to consider this case . using the fact that volumes are invariant under rotations it follows from the remarks after definition 4.5 that where we also used the fact that .if then ( [ eq:4.2230 ] ) and the fact that immediately yield ( [ eq:4.223 ] ) . furthermore , for , we conclude from ( [ eq:4.2230 ] ) that where the r.h.s .is just the volume of the cylinder in with height and radius and where in the first equation we used the fact that volumes are invariant under translations .this completes the proof .we can now continue the study of the principal solution matrix for ( [ eq:2.1 ] ) which is defined by the initial value problem where in the language of section 4 is quasiperiodic in .one of the aims of this paper is to define the spin tune for this system .we are guided by the special case of section 3 and therefore , and as will become clear below , our emphasis in this section is on principal solution matrices , , where is quasiperiodic and can be written in the standard form , where is a quasiperiodic matrix in and and both may depend on . 
in that caseall solutions of ( [ eq:2.1 ] ) are in with a very simple frequency structure in the tune .such principal solution matrices fulfill a generalized floquet theorem .there has been extensive study of the equation , where the matrix is almost periodic and is a vector of parameters , one goal being to find conditions under which almost periodic solutions exist ( see for example ) .however our problem ( [ eq:5.02 ] ) is quite special because of the parameter dependence of induced by in ( [ eq:5.03 ] ) .in fact , for every integer we have which follows from ( [ eq:5.03 ] ) and the of in .we will return to this in section 9 where we will see that condition ( [ eq:5.04 ] ) has useful consequences for the spectrum of the principal solution matrix . the case where is , i.e. in and independent of and ,was discussed in section 3 .there we found a solution of ( [ eq:2.1 ] ) in , namely .all other linearly independent solutions are in .the latter follows directly from the floquet theorem , but more importantly also from the construction of the upf using which led , with in , to the representation ( [ eq:5.01 ] ) .it is this construction that points the way for a generalization of the floquet theorem to .we begin with the following proposition .* proposition 5.1 * _ consider the initial value problem with a real skew symmetric matrix and make the following assumptions : a ) equation ( [ eq:5.0 ] ) has a nonzero solution in .b ) there exists a smooth matrix ] is a proper upf at that with .this motivates the definition of an equivalence relation where and in are said to be equivalent - and we write - iff there exist such that .the equivalence relation partitions the interval into equivalence classes such that iff and iff .we note that if has one irrational component then each is a dense subset of . to see thissuppose that is irrational .then is dense as varies over so that is dense in as and vary . from the above motivation for the definition of equivalence it is clear that if then the equivalence implies , i.e. .now suppose that and are in and that .then so that the l.h.s .is in .thus by lemma 4.3e .it is plausible that . if that is so , then .then since is an arbitrary member of , .in fact this is the case , and the joint conditions and can be embodied in a theorem . if is nonempty with an element , then .although all proper uprs are equivalent at a given on the torus , it is not true in general that all proper uprs on the torus are equivalent .but if they are , we then say that the torus is well tuned and we call any a spin tune .since this situation is central to this paper , we delay the proof of theorem 5.3 in order to formalize this definition .* definition 5.4 * ( well tuned ) : a torus is said to be _ well tuned _ iff the following two conditions hold : a ) has a proper upf for each , i.e. each is nonempty .b ) let and be in , then .thus for every , and , by theorem 5.3 , so that then , .a torus that is not well tuned is called _ ill tuned_. note that by theorem 5.3 a torus is well tuned iff the have an element in common .this criterion is very convenient and we will use it for example in the proof of theorem 6.3a .note also that always contains at most countably many elements .in particular , if a torus is well tuned then contains at most countably many elements . *definition 5.5 * ( spin tune ) : let a torus be well tuned , then each element of is called a _spin tune_. thus for each spin tune , . 
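A sketch of the equivalence test behind definition 5.4 and theorem 5.3, under the standard form of the relation (an assumption here, since the defining formula is elided above): two candidate tunes belong to the same class iff they differ by an integer plus an integer combination of the orbital tunes. Note that representatives are not unique; as remarked above, when the tune vector has an irrational component each class is dense in the unit interval.

```python
import numpy as np
from itertools import product

def equivalent_spin_tunes(nu1, nu2, omega, max_n=4, tol=1e-9):
    """Return integers (n0, n) with nu2 = nu1 + n0 + n . omega, else None."""
    for n in product(range(-max_n, max_n + 1), repeat=len(omega)):
        x = nu2 - nu1 - sum(k*w for k, w in zip(n, omega))
        if abs(x - round(x)) < tol:
            return (int(round(x)),) + tuple(n)
    return None

omega = (0.31, 0.18)
nu = 0.2703
nu2 = (nu + 3 + 2*omega[0] - omega[1]) % 1.0
print(equivalent_spin_tunes(nu, nu2, omega))   # (0, 2, -1): same class
```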
to prove theorem 5.3 we need the following lemma .let , where is a real constant , be in with .then there exists such that .moreover is unique if is nonresonant .first of all we observe that also , by lemma 4.3e we see that .hence by ( [ eq:6.500 ] ) : thus exists such that .clearly is unique if is nonresonant . we can now prove theorem 5.3 ._ proof of theorem 5.3 : _consider ] , both in , and define the smooth functions by it follows from ( [ eq:2.32 ] ) that so that because and are in , ( [ eq:6.401 ] ) ensures that are also in . now ,if is not the zero function then by ( [ eq:6.400 ] ) is in and by lemma 5.6 for some .thus either or exist such that .similarly either or exist such that . by definitioneither or is different from the zero function , since otherwise for ( ) which is obviously false .thus exist such that either or .hence exists such that , i.e. . but by the remarks before theorem 5.3 we also have that . * remarks : * * for the case of and arbitrary each floquet frequency is in ( see remark 2 in section 3 ) .we will see in remark 4 of section 6 that the torus is well tuned so that every floquet frequency is a spin tune .* as we will see in remarks 13 and 14 in section 6 , matrices can be found with which there are tori where for every but which are not well tuned . *consider a well tuned torus .we say that the torus is on a `` spin orbit resonance '' if .thus on spin orbit resonance the set of spin tunes reads as + . for the spin orbit resonance condition amounts to ( [ eq:1.5 ] ) . in general the conditiontakes the form and the order of the resonance is ._ thus on a well tuned torus , if one has resonant spin motion then all have resonant spin motion_. of course the same is true for nonresonant spin motion .this is a key aspect of being well tuned : spin orbit resonance is not defined in terms of a spin frequency that varies with in the sense that ] is an iff . due to remark 3the iff is uniform with .we conclude from theorem 6.3a that the torus is well tuned and that is a spin tune . in particularthe torus is on a spin orbit resonance ( see also remark 3 in section 5 ) . to complete the proof ,we now consider a smooth and function for which .we define via hence is a smooth function -periodic in such that . because is smooth , we have because is -periodic in , we obtain whence by ( [ eq:26 ] ) thus for the fourier coefficients of it follows that where . because is nonresonant vanishes for . thus by lemma 4.3a is constant , i.e. independent of .therefore by ( [ eq:26 ] ) is constant , i.e. independent of and then by ( [ eq:25 ] ) is constant , i.e. independent of . theorem 6.4 addresses the uniqueness of the isf as well as its nonuniqueness .in particular , the contrapositive of theorem 6.4 yields : if off orbital resonance a uniform iff exists and if the system is not on spin orbit resonance , then the isf is unique up to a sign .this behavior was predicted earlier in .we now focus on the case where an iff exists and is not constant . we first define since , its mean , , and zero mean part , , exist .we thus have an important decomposition of , namely , since the l.h.s .is , the r.h.s .is too and in fact and are individually as we check in lemma 6.6a below . 
from ( [ eq:6.6 ] ) the integral in the exponential of ( [ eq:6.0007 ] )is where clearly , which leads to consideration of the partial differential equation then for every solution of ( [ eq:6.0 ] ) the existence of a -periodic will be important below ( note that is always -periodic ) .we now write where the integer is uniquely determined by the condition .the generalized principal solution matrix from ( [ eq:6.0007 ] ) now becomes where then the principal solution matrix becomes where we can now state and prove the next basic result of this section .the proof will depend , in part , on lemma 6.6b which follows later in order not to break the flow .theorems 6.5c - d use the diophantine condition .consider a fixed torus .\a ) if an iff , , exists and ( [ eq:6.0 ] ) has a smooth and solution , then , defined by ( [ eq:6.108 ] ) , is a proper upf at with upr equal to .b ) if the conditions of theorem 6.5a hold and if the torus is off orbital resonance , then , defined by ( [ eq:6.0013 ] ) is a uniform iff and .moreover the torus is well tuned and , defined for every by ( [ eq:6.108 ] ) , is a proper upf at whose upr is a spin tune and .c ) let and let be in .if an iff , , exists in and if then a uniform iff exists ( and thus the torus is well - tuned ) .\d ) let and let be in .if an iff , , exists in for every in a borel subset of then a uniform iff exists ( and thus the torus is well - tuned ) for -almost every in ._ proof of theorem 6.5a : _ it is clear from ( [ eq:6.108 ] ) that is in and the result follows from ( [ eq:6.07 ] ) . _ proof of theorem 6.5b : _ from ( [ eq:6.0013 ] ) , and have the same third column and is smooth and .therefore is an iff . from ( [ eq:6.201 ] )an easy calculation gives .thus by lemma 6.6b , is constant , independent of .hence is a uniform iff .therefore by theorem 6.3a is a proper upf at with upr which is a spin tune and the torus is well tuned . _ proof of theorem 6.5c : _ from ( [ eq:6.201 ] ) , is in since and are in .using the fact that the torus is off orbital resonance ( since ) , we have by lemma 6.6b that is a constant .thus , by ( [ eq:6.6 ] ) , is in .it follows from the condition and lemma 4.7 that is smooth and in and and that it satisfies ( [ eq:6.0 ] ) , where denotes the fourier coefficient of .the claims now follow from theorems 6.5b and 6.3a . _ proof of theorem 6.5d : _ the interval is not empty so that we pick a in that interval .because we have , by theorem 6.5c , for a uniform iff ( and thus a well - tuned torus ) . because we have by lemma 4.8 that every in is in .this proves our claim .we now complete the discussion by stating and proving the lemmas 6.6a and 6.6b mentioned earlier in this section .\a ) if denotes an iff , then and are in and in .\b ) if denotes an iff and if the torus is off orbital resonance , then and are constant ._ proof of lemma 6.6a : _ the periodicities in ( [ eq:6.6 ] ) can be demonstrated as follows : where at the second equality we have used the fact that is and at the last equality we have used the fact that is bounded .this shows that the first term on the r.h.s . of ( [ eq:6.6 ] )is in and thus that all three terms in ( [ eq:6.6 ] ) have this periodicity property . that all three terms in ( [ eq:6.6 ] ) are in is trivial . _ proof of lemma 6.6b : _ off orbital resonance we find , due to ( [ eq:6.8 ] ) and by applying lemma 4.3c , . thus and in ( [ eq:6.109 ] ) are -independent in this case . 
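Before the remarks, the two steps just used, splitting a quasiperiodic function into its mean and zero-mean parts and solving the associated first-order partial differential equation by Fourier division, can be sketched numerically; this is also exactly where the small divisors of section 4 re-enter. One orbital angle and randomly generated, rapidly decaying coefficients are used for brevity, and the PDE form assumed for the elided eq. (6.0) is a transport equation in the azimuth and the orbital angle.

```python
import numpy as np

omega = 0.381966      # orbital tune (illustrative)
M = 12                # truncation order
rng = np.random.default_rng(0)
g = {(m0, m1): rng.normal()*np.exp(-0.8*(abs(m0) + abs(m1)))
     for m0 in range(-M, M + 1) for m1 in range(-M, M + 1)}
g[(0, 0)] = 0.0       # subtract the mean first: g is the zero-mean part

# Fourier division: u_m = g_m / (2*pi*i*(m0 + m1*omega)), the small divisor.
u = {m: c/(2j*np.pi*(m[0] + m[1]*omega)) for m, c in g.items() if m != (0, 0)}

def evaluate(coeffs, theta, phi):
    return sum(c*np.exp(2j*np.pi*(m0*theta + m1*phi))
               for (m0, m1), c in coeffs.items()).real

# Check (d/dtheta + omega d/dphi) u = g at a point by finite differences.
h, th, ph = 1e-6, 0.3, 1.1
lhs = ((evaluate(u, th + h, ph) - evaluate(u, th - h, ph))/(2*h)
       + omega*(evaluate(u, th, ph + h) - evaluate(u, th, ph - h))/(2*h))
print(lhs, evaluate(g, th, ph))   # agree up to finite-difference error
```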
* remarks : * * the in and of and is suggested by examining the formal fourier series of where .then it is easy to show that the resonant module part of this sum , defined by for corresponds to .the remaining part with corresponds to .their formal fourier series display the in and . off orbital resonance the relation that so that , i.e. it is independent of as in lemma 6.6b .* under the conditions of theorem 6.5a , is the upr at associated with the upf defined in ( [ eq:6.108 ] ) .however the upf is not unique since the principal solution matrix in ( [ eq:6.07 ] ) can be written as so that is also a upf for an arbitrary smooth .* under the conditions of theorem 6.5a , and using the notation in ( [ eq:6.3 ] ) , gives a solution of ( [ eq:2.1 ] ) which is in .this is easy to check by ( [ eq:6.07 ] ) : since is the eigenvector of with zero eigenvalue , and has dropped out . by ( [ eq:6.07 ] )all other linearly independent solutions are in . *the iff underlying the definition ( [ eq:6.109 ] ) of is , of course , not unique .for example , if then by changing the signs of and we find .thus in analogy to the case in remark 7 in section 3 we can choose such that ] is smooth and .it is thus an iff . by ( [ eq:6.4]),([eq:7.2]),([eq:7.3 ] )we obtain so that where we use the of .thus is independent of . since and since by ( [ eq:7.20 ] ) and ( [ eq:7.21 ] ) we can make the assignments because does not depend on one finds that ( [ eq:6.0 ] ) has smooth solutions which do not depend on and that one of those solutions reads as since and by the of it follows by ( [ eq:7.6 ] ) that is in .thus theorem 6.5a applies and one concludes that , defined by is a proper upf at with upr . here and the integer are uniquely determined by via ( [ eq:6.109 ] ) and ( [ eq:7.5 ] ) . because is contained in each we conclude ( recall the comment after definition 5.4 ) that the tori are well tuned when and .we now discuss the general case using the machinery of section 6 .* proposition 7.1 * _ the single resonance model has a uniform iff for every value of the orbital action variable .hence the corresponding torus is well tuned ._ _ proof : _ for one obtains where this has eigenvalues .if , then and , due to ( [ eq:6.0005 ] ) and ( [ eq:7.001 ] ) , is a uniform iff and .thus by theorem 6.3a the tori are well tuned and is a spin tune . if , then by lemma 2.1b we can choose such that where the integer is chosen such that .we now define and obtain it follows by ( [ eq:6.0005 ] ) that is a uniform iff and .thus by theorem 6.3a the tori are well tuned and is a spin tune . * remarks : * * the proof of proposition 7.1 and definition 5.5 show that the spin tunes of the single resonance model have the form where .conversely every constant of the form ( [ eq:7.17 ] ) is a spin tune of the single resonance model , if it is in .in particular the set is independent of .note that for the single resonance model , spin tunes exist also on orbital resonance , i.e. for rational .see remark 4 too . * the case represents the absence of betatron motion , i.e. motion on the design orbit . in this case can be chosen in ( [ eq:7.17 ] ) such that the spin tune reduces to . * from the expression for it is clear that during variation of , the spin tune in ( [ eq:7.17 ] ) comes closest to the spin orbit resonance when .however for the case of principle interest , the resonance condition is not reached . 
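Before the remaining remarks, a numerical reading of remarks 1-3: under the standard single-resonance parametrization, with precession vector 2*pi*(eps*cos(Q*theta), eps*sin(Q*theta), nu0) (an assumption here, since the model's precession vector is elided above), the frame rotating with the resonance sees the constant vector 2*pi*(eps, 0, nu0 - Q), so a spin-tune representative follows in closed form.

```python
import numpy as np

def srm_spin_tune(nu0, Q, eps):
    """Spin-tune representative of the single resonance model, assuming
    Omega(theta) = 2*pi*(eps*cos(Q*theta), eps*sin(Q*theta), nu0) and
    delta = nu0 - Q != 0 (at delta = 0 the tune is nonunique)."""
    delta = nu0 - Q
    return (Q + np.sign(delta)*np.hypot(delta, eps)) % 1.0

for eps in (0.0, 0.01, 0.05):
    print(eps, srm_spin_tune(0.47, 0.31, eps))
# eps = 0 recovers the closed-orbit value nu0 = 0.47; as eps grows, the
# tune moves away from the resonance condition nu = Q without reaching it,
# in line with remark 3.
```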
*if is an integer , the matrix is one turn periodic and the isf can be obtained as the eigenvector of length 1 of the one turn principal solution matrix .moreover , every proper upf is one turn periodic and its upr can be obtained from the complex eigenvalues of the one turn principal solution matrix just as in section 3 .of course , this upr is a spin tune .if is rational , the isf can be obtained as the eigenvector of length 1 of the appropriate multi turn principal solution matrix .see , e.g. .every proper upf is then multi turn periodic and its upr is extracted from the corresponding complex eigenvalues and is again a spin tune .note that this circumstance that a spin tune exists even on orbital resonance has its origin in the facts that the single resonance model has only one orbital frequency and that is independent of . *if the orbital tune is rational then with proposition 7.1 we have an example of a torus which is on orbital resonance but is nevertheless well tuned .this is an example for which the torus may be on orbital resonance but still satisfy the conditions of theorem 6.3a .in this section we construct and study an illustrative but unphysical model which , for reasons which will become clear , we will call the `` moser - siegel model '' .this model has , can be solved exactly , and has two real parameters , where . for certain choices of the orbital tune and of moser - siegel model provides an example of spin motion which is nonquasiperiodic .the matrix of the moser - siegel model takes the form : where and by definition is smooth ( in fact ) and .the latter follows from the convergence of for a nonnegative integer , which follows from the ratio test and which implies that the series in ( [ eq:8.3 ] ) and the series of all its derivatives converge uniformly ( see , ) . for , clearly and for an irrational , by lemma 4.3c , so that where because the function has been used in ( * ? ? ?* paragraph 36 ) , we call our model the moser siegel model . *proposition 8.1 * _ for some there exist values of such that is unbounded . for these values , is not quasiperiodic whence is not quasiperiodic ._ _ proof : _ it is shown in ( * ? ?* paragraph 36 ) that values of and of exist such that is unbounded .thus , for these values , is an almost periodic function whose integral , , is unbounded .it then follows ( see ( * ? ? ?* chapter 6 ) ) that is not almost periodic , whence at least one of and is not almost periodic . * remarks : * * proposition 8.1 shows that for certain values of and of the principal solution matrix at is not quasiperiodic , so that .in particular , for those values the torus is ill tuned so that , by theorem 6.3a , it has no uniform iff . *the unit matrix obviously provides an iff for the moser - siegel model .thus proposition 8.1 demonstrates that the existence of an iff is neither sufficient for having a uniform iff nor for having a well tuned torus . 
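As an aside before the remark resumes below: the mechanism behind proposition 8.1, an almost periodic function whose integral is unbounded, can be made concrete with a lacunary trigonometric series in the spirit of, but not identical to, the Siegel-Moser construction. The sketch truncates the series, so its integral is technically bounded; unboundedness holds only in the infinite-series limit, but the growth is already visible numerically.

```python
import numpy as np

# Truncation of f(t) = sum_n 2^-n cos(lam_n t) with lam_n = 4^(-n^2).
# f is (almost) periodic, but its integral has coefficients
# 2^-n / lam_n = 2^(2n^2 - n), which blow up along the series.
N = 8
n = np.arange(1, N + 1)
lam = 4.0**(-n*n)
amp = 2.0**(-n)

def f(t):
    """The truncated almost periodic function itself."""
    return float(np.sum(amp*np.cos(lam*t)))

def F(t):
    """Exact integral of the truncated f from 0 to t."""
    return float(np.sum(amp*np.sin(lam*t)/lam))

for t in (1e2, 1e4, 1e6, 1e8):
    print(t, F(t))   # grows roughly linearly over this range
```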
in the context of theorem 6.5athis means that the existence of an iff does not necessarily admit a smooth solution of ( [ eq:6.0 ] ) , which is -periodic .in the code sprint , isfs are calculated nonperturbatively in three ways , namely by stroboscopic averaging , by the sodom2 algorithm and by adiabatic antidamping .perturbative algorithms for obtaining isfs are listed in the introduction .moreover the spin tune is calculated nonperturbatively in sprint by logging the spin precession around the isf or by the sodom2 algorithm .we will make further comments on these simulations in section 10 .the fact that under appropriate conditions the set of generalized floquet frequencies is the set of spin tunes , suggests a further way to obtain the spin tune , namely by spectral analysis of quasiperiodic functions . the way forward is contained in the following theorem .as we shall see , we can also use spectral analysis to construct the isf . consider a uniform iff on a fixed torus .then the following holds .\a ) let be a spin trajectory at . if , where the equivalence class is defined in section 5 , then is in .\b ) let be a spin trajectory at and let the torus be off orbital resonance .also , let .then , defined by with , satisfies the relation where is the isf .moreover , is a spin trajectory in .\c ) for arbitrary , _ proof of theorem 9.1a : _ for fixed , define so that by theorem 6.3a is a proper upf at and we have thus if , then , and therefore , is in . _ proof of theorem 9.1b : _ because the , defined in the proof of theorem 9.1a , is in and because the torus is off orbital resonance we can use lemma4.3b - c to write with and recalling ( [ eq:2.12 ] ) , we have so that since , then .it follows from lemma 4.3e that .then so that ( [ eq:9.05 ] ) leads to combining ( [ eq:9.02 ] ) , ( [ eq:9.03 ] ) , ( [ eq:9.06 ] ) we obtain hence then , because is a spin trajectory at , where the last equality follows from ( [ eq:9.01 ] ) . from ( [ eq:2.32 ] ) , ( [ eq:9.02 ] ) , ( [ eq:9.00 ] ) , ( [ eq:9.08 ] ) it follows that thus is a spin trajectory at . by ( [ eq:9.08 ] ) and because is proper, is in . if is in , then by lemmas 4.3b and 4.3c so that in this special situation , .thus , for an arbitrary spin trajectory at , the double transform of from the double application of ( [ eq:9.01 ] ) is equal to the single transform so that ( [ eq:9.08 ] ) yields if , ( [ eq:9.08a ] ) becomes an eigenproblem for .the solution is .inserting this into ( [ eq:9.08a ] ) yields where denotes the third column of and where is an arbitrary spin trajectory at . _ proof of theorem 9.1c : _ by ( [ eq:9.02 ] ) and ( [ eq:9.04 ] ) we have if , then because is proper and with lemma 4.3e , .thus with ( [ eq:9.002 ] ) , .then in an analogous way moreover , combining ( [ eq:9.001 ] ) , ( [ eq:9.003 ] ) , ( [ eq:9.004a ] ) , ( [ eq:9.004b ] ) gives + + . * remarks : * * let the conditions of theorem 9.1b hold and let the spin trajectory at be in .in this special situation ( [ eq:9.01a ] ) holds , i.e.([eq:9.01 ] ) becomes the spectral expansion of .however , in general is not in , i.e. 
in general ([eq:9.01]) is _not_ the spectral expansion of, because only the tunes appear in ([eq:9.01]). if the conditions of theorem 9.1b hold, then is parallel to an isf, and with ([eq:9.01]) and ([eq:9.08c]) one could, at least in principle, attempt to compute the isf by doing numerical spectral analysis on an arbitrary spin trajectory. of course, by theorem 6.4, the isf is unique up to a sign and, by theorem 6.3a, the torus is well tuned. * consider a fixed torus and assume that a uniform iff exists, so that, due to theorem 6.3a, the torus is well tuned. then, due to remark 8 in section 5, the set of generalized floquet frequencies is the same at every and is identical with the set of spin tunes. therefore theorem 9.1c implies, for arbitrary, that for every in, the fractional part of is a generalized floquet frequency at, and in particular a spin tune. thus, as conjectured earlier, spin tunes can indeed be obtained by spectral analysis. theorem 9.1c addresses the spectrum, , for the conditions stated but gives no information on its dependence on. however, under certain conditions, the special parameter dependence of given in ([eq:5.04]) guarantees that is independent of, as we show in the next theorem. consider a fixed torus and denote by, where is an integer. we conclude from ([eq:5.02]) and ([eq:5.04]) that satisfies the initial value problem. because is the unique solution of this initial value problem (see also section 2), we obtain, valid for arbitrary and arbitrary integer. thus the basic property of in ([eq:5.04]) leads to the basic property of as manifested in ([eq:9.06n]). it follows that if is known at a fixed, then it is known for all with. then, if in addition is nonresonant, it follows by continuity that it is known for all on the torus. *theorem 9.2* consider a fixed torus off orbital resonance and assume that a exists such that. then for every real and all, exists and is continuous in. moreover, for all, . _proof:_ since is quasiperiodic, it is easy to see from definition 4.1 that, and the basic identity ([eq:9.06n]) gives. therefore for all, where we also used the fact that is in. thus the spectrum of the principal solution matrix on the set is the same as the spectrum of the principal solution matrix at. because is nonresonant, is dense in. now fix and let for and assume, so that. then, where we used ([eq:9.002n]) for the first equality and where for the second inequality we used the fact that, which follows from the nature of. as always, denotes the euclidean norm, i.e., . if is continuous and and defined all over, then, since is dense in, for all and thus for all. conversely, if, for a given, then, by exchanging the roles of and, we obtain, so that for all. thus for all, and it remains to show that is defined and continuous on. the of then follows immediately. we first state the following lemma. *lemma 9.3* for fixed and nonresonant, let converge uniformly for all in the set of ([eq:9.011]) as the positive integer. then converges uniformly on. in particular, exists for all in and is continuous in. _proof of lemma 9.3:_ the space of bounded (w.r.t. the euclidean norm) functions is a complex normed space w.r.t. the norm, and obviously is a sequence in. because is complete, so is (see, for example, section 7.1 of). thus, to show that converges uniformly on, it suffices to show that it is a cauchy sequence in, i.e., for all positive there is a positive integer such that for all integers with we have. we also observe that, which follows from the continuity of in and from being dense in. equation ([eq:9.009]) implies that ([eq:9.008]) is equivalent to the statement. clearly ([eq:9.010]) holds because, by assumption, converges uniformly on. thus converges uniformly on, so that exists for all in and is continuous in. to complete the proof of theorem 9.2, we now show that the conditions of lemma 9.3 are fulfilled, i.e., that converges uniformly on, for every. clearly we have, and to show that the limit in ([eq:9.007]) is uniform on we estimate. by using the first equality of ([eq:9.007]) with, and by noting that is bounded, we have, for all, . moreover, because is almost periodic, it follows (see , chapter 3) that the convergence in ([eq:9.15]) is uniform on the domain of. hence ([eq:9.13]) implies that converges uniformly on. thus we have proved, under the conditions of theorem 9.2, that is independent of. *remarks:* * remark 2 shows that under the conditions of theorem 9.1 the spin tune can indeed be discovered from a spectral analysis of the spin flow for arbitrary. in particular, since the spin motion is quasiperiodic and the torus is well tuned, the spectrum has at most countably many elements and will consist of sharp ``lines'' which can then be ``measured''. moreover, off orbital resonance and under the conditions of theorem 9.2, all are equal. however, if the torus is ill tuned but the spin motion is quasiperiodic (so that the spectrum will consist of sharp lines), we expect that the union of the spectra over the torus will contain uncountably many elements. the models in remarks 13 and 14 in section 6 provide examples. note that in the absence of quasiperiodicity or even almost periodicity, as for example in the moser - siegel model of section 8, there may be difficulties in even computing the spectrum. in practice, the spectrum can be obtained by tracking three mutually orthogonal spins along an orbit and storing their values at each of a very large number of turns before applying a discrete fourier transform to the data (a well - known way of doing numerical fourier analysis can also be found in). but since this spectrum can be very dense, it can be difficult to identify the spin tunes. thus it would be useful to begin with small amplitudes, i.e., tori with small, and look for a close to that for the closed orbit, which can be calculated as in section 3. the spin tune at higher amplitudes could then be identified by continuing away from the closed orbit. we return to this theme in section 10.
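the turn-by-turn recipe just described can be sketched in a few lines of python/numpy. the windowing and normalization choices below are ours, and identifying which sharp line is the preferred spin tune still requires the continuation from the closed orbit described in the text.

```python
import numpy as np

def spin_spectrum(spins):
    """amplitude spectrum of turn-by-turn spin data.  spins has shape
    (n_turns, 3): one spin trajectory sampled once per turn.  returns
    fractional tunes in [0, 0.5] and the summed, normalized amplitude
    spectrum of the three components; the sharp lines are the candidates
    from which the preferred spin tune must be selected."""
    n = spins.shape[0]
    window = np.hanning(n)                    # reduce spectral leakage
    spec = np.zeros(n // 2 + 1)
    for j in range(3):
        s = (spins[:, j] - spins[:, j].mean()) * window
        spec += np.abs(np.fft.rfft(s))
    return np.fft.rfftfreq(n, d=1.0), spec / spec.max()
```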
finally, tracking a spin trajectory which is parallel to an isf would give a spectrum without. * of course, there might be cases where the dependence of on, leading to the ill tuning mentioned in remark 3, is very weak, so that the sharp lines are just broadened. intuition suggests that in such cases spin orbit resonance like phenomena might still be expected. but, of course, the definitive conditions under which such phenomena could occur will only become clear by careful analysis. in the foregoing sections we have presented a thorough step by step account of the circumstances under which spin motion may be quasiperiodic on integrable particle orbits and have thereby put previous studies of the concept of spin tune onto a rigorous basis. in particular, we considered integrable orbits in and, by introducing certain conditions (e.g. diophantine conditions) and assuming the existence of an isf, we obtained conditions under which the spin motion is in, where is a spin tune. we have also shown how, by introducing upfs, the spin motion can be represented in terms of generalized floquet forms. the scenarios covered by our treatment and the relationships between them are summarized by the venn diagram of figure 1. the meanings of the domains in figure 1 are as follows:
* inside the black circle: all tori, i.e., for arbitrary real skew symmetric matrices and arbitrary orbital tunes, which are smooth and in and
* inside the red ellipse: tori which have an isf
* inside the blue ellipse: tori which at every have a proper upf (note that for those tori every spin trajectory is quasiperiodic)
* inside the green ellipse: tori which are well tuned (see section 5)
* inside the yellow ellipse: tori which are off orbital resonance
* inside the pink ellipse: tori with a uniform iff
the numbered circles label specific examples, namely:
* example 1: the single resonance model off orbital resonance (see section 7)
* example 2: the torus defined in remark 13 in section 6, for the case of irrational
* example 3: the single resonance model on orbital resonance (see section 7)
* example 4: the torus defined in remark 13 in section 6, for the case
* example 5: a torus defined in remark 14 in section 6
* example 6: the moser - siegel model (for certain choices of the parameters) - see section 8
* example 7: see below
analytical solutions for spin motion or the isf for the arising in real storage rings cannot be obtained, and it is not even known whether an isf exists in general. nevertheless it seems that it usually exists for storage rings of interest. this is supported by a large amount of numerical work in which, for the cases studied, it was possible to construct at least a very good approximation to an isf. note that in these simulations hard edge and some thin lens representations of fields were used, so that the were not smooth. see remark 16 in section 6. those studies also included calculations of the spin tune using sprint, either beginning with the pseudo or using the sodom2 algorithm, whereby the spin tune is obtained by solving an eigenproblem for fourier coefficients in an su(2) formalism. numerical simulations with both methods suggest that a spin tune normally exists if the torus is off orbital resonance. this is the situation considered by theorem 6.5d, which implies that if we have an isf and if a unit vector exists which is nowhere parallel to, then we almost always have a spin tune (see remark 5 in section 6).
for this and the other results in this paper it has been convenient to prescribe that the are smooth .but the demonstration of isfs and spin tunes in indicates that the smoothness condition can often be relaxed . for convenience ,in the remainder of this section we will use the term isf in this spirit .we hope in the future to be able to present a treatment of the isf and the spin tune in which the requirement of smoothness is relaxed .an example of a situation where an isf might not exist even in crude approximation , namely near 803.5 gev in the hera ring , is given in .if the spin motion is nonquasiperiodic , this case would correspond to example 7 in figure 1 .the spin tune is a crucial quantity for characterizing the stability of spin motion .spins behave somewhat like driven oscillators where the driver is the magnetic and electric fields along particle orbits . near spin orbit resonance , there is potential for marked qualitative changes in the spin motion which may then be quite erratic .the special nature of spin orbit resonance is already clear from the fact that the isf need not be unique at resonance .see theorem 6.4 .moreover , it is clear that it would make little sense to define a spin orbit resonance condition in terms of a upr depending on : such a would in general take different values for different particles on a torus so that it would be impossible for the particles on a torus to be simultaneously on resonance and there would be no enhancement of our ability to systematize spin motion .this was the reason for insisting that a torus be well tuned before considering spin orbit resonance .as mentioned in the introduction and in remark 5 of section 5 , it is clear that can vary with the beam energy and with .this is confirmed by simulations in sprint , hence the name `` amplitude dependent spin tune '' . to study the dependence of on ( recall remark 5 of section 5 ) , it is necessary to choose a `` preferred '' member of . for the calculations with sprintthe choice is made as follows .the spin tune and the corresponding upf are found on the closed orbit using the method outlined in section 3 . normally is set significantly different from zero in order to ensure that the direction of is close to the `` design '' direction in spite of the presence of the usual misalignments of the ring .then as in remark 3 in section 9 , the preferred spin tune at nonzero amplitudes is selected by requiring that it and the corresponding upf vary continuously with amplitude and reduce continuously to the spin tune and upf on the closed orbit . with this procedure in place , simulations in which the isf and the spin tune are calculated over a range of fixed amplitudes or energies indeed show that the spin motion can become erratic near a spin orbit resonance .moreover , in such cases there is a tendency for the isf to become very sensitive to the parameter being varied . with this prescription, one also finds that the strongest variations of the isf occur near low order resonances and that high order resonance effects are usually unimportant . for the details of these calculations and results ,the reader is referred to .these findings are consistent with perturbation theoretical calculations of the isf as in .. then the for the closed orbit appears in the resonance factors , not the for nonzero orbital amplitudes . 
the nonperturbative calculations also show that near spin orbit resonance the spin tune tends to avoid exact fulfillment of the resonance condition. as we have already indicated in remark 3 of section 7, the spin tune for the single resonance model (see ([eq:7.17])) also avoids the spin orbit resonance condition as is varied through the condition. in a ring without so called siberian snakes (see below), the closed orbit spin tune usually varies with the beam energy. it is then sometimes implied in the literature that in a beam with a large energy spread, the synchrotron motion causes the particles to oscillate to and fro across spin orbit resonances as the spin tune oscillates. this crossing of resonances is then supposed to be the source of the low beam polarization that would be seen. however, we have seen that a spin tune on a torus is a constant. as usual, we assign to synchrotron motion. thus does _not_ oscillate and there is no resonance crossing. nevertheless, we do expect that a large energy spread can lead to small beam polarization. this is explained by the fact that for particles of large enough, varies strongly with the. then the maximum permissible equilibrium beam polarization can indeed be small. studies of the effect on polarization of real resonance crossing can be found in. the consequences for the polarization of crossing first order resonances are usually quantified using the froissart - stora formula. but in it is shown that the froissart - stora approach can be generalized to describe the change of the polarization at the crossing of higher order resonances too. this is a further illustration of the value of using a wisely defined spin tune for identifying resonances and for understanding their properties. the spin tune can be obtained in sprint via the pseudo or by using the sodom2 algorithm. however, as we have seen in section 9, the spin tune might also be obtained (``measured'') by a spectral analysis of the spin motion. this offers an attractive alternative. as in the case of the other two methods, the preferred spin tune would have to be identified among the many spectral lines by matching onto the spin tune of the closed orbit. we have also seen in section 9 that the isf might be obtainable, if it exists, by spectral analysis of spin motion. spectral analysis may thus lead to a criterion for deciding whether an invariant spin field exists. other criteria for the existence of the isf are already available for stroboscopic averaging and the sodom2 algorithm. we now complete this discussion by mentioning other quantities that have been used in attempts to define a spin precession frequency. as explained in remark 3 in section 3, for motion on the closed orbit the spin tune can be obtained trivially from the complex eigenvalues of the one turn principal solution matrix. this does not normally work off the closed orbit since and, for this, the are generally not the complex eigenvalues. so the reader will agree that on synchro betatron orbits the spin tune usually _cannot_ be obtained from the complex eigenvalues of the one turn principal solution matrix. in fact, in general, the real eigenvector of the general one turn principal solution matrix starting at is not even a spin trajectory and is not parallel to an isf (see, for example,): the term ``spin closed orbit'', which is sometimes used for the isf, is inappropriate.
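for orientation, the first order froissart - stora result mentioned above usually takes the form below, where the resonance strength and the crossing speed enter; conventions for these two parameters differ between authors, so this is quoted as a reminder of the generic structure rather than as the notation of the present paper:

\[
\frac{p_{\mathrm f}}{p_{\mathrm i}} \;=\; 2\,\exp\!\left(-\,\frac{\pi\,|\epsilon|^{2}}{2\,\alpha}\right)\;-\;1\,,
\]

so that very slow crossing (small crossing speed) gives an adiabatic spin flip, the ratio tending to -1, while very fast crossing leaves the polarization essentially unchanged, the ratio tending to +1.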
moreover, in the simulations with sprint, the sensitivity of the isf to variation of parameters shows no correlation with the spin precession rate extracted from the complex eigenvalues of the one turn principal solution matrix. of course, eigenvalues of one turn principal solution matrices are easy to calculate, but for they usually have no useful function. the model described in (, equation 21) provides an example. it concerns orbits which are said in to be exceptional. it is stated there that exceptional orbits are characterized by the feature that the spin tune depends on orbital phase, and this is discussed in close conjunction with stern gerlach forces. as we point out in the introduction, stern gerlach forces can have no practical relevance for understanding spin resonance. the model in involves the single resonance model (see section 7) and a single thin lens siberian snake. a siberian snake of the kind used here is a magnet system that rotates a spin by the angle around an axis in the plane of the ring. in the notation of section 7, the parameters in are and. at all, the one turn principal solution matrix starting at the snake represents a rotation around the vertical by an angle depending linearly on. then the isf is vertical at the snake at all on this torus, and for this zero measure range of, the isf at the snake _is_ the same as the real unit length eigenvector of the one turn principal solution matrix. the orbital tune is arbitrary. the eigentunes extracted from the complex eigenvalues of vary linearly with. therefore, if a particle is followed along an orbit, these eigentunes normally change abruptly between one turn and the next. other one turn eigentunes would be obtained if other positions around the ring were chosen for. nevertheless, it is claimed in that the eigentune for is a spin tune. obviously this is not a spin tune in the sense of our treatment. we have taken account of the fact that an eigentune in the su(2) formalism used in is one half of a corresponding eigentune in our formalism. although this and its eigentune are discussed in association with stern gerlach forces, the inclusion of stern gerlach forces is not necessary to obtain either. in fact, by exploiting techniques additional to those used in this paper, one can show that these parameters simply provide an example of ill tuning and that this case would be entered next to example 6 in the diagram of figure 1. the inclusion of stern gerlach effects clouds the issue. it is not clear from whether these phase dependent eigentunes serve some useful function such as indicating the stability of spin motion. we give another example of the use of the eigentune of a one turn principal solution matrix below. the situation with regard to the utility of eigenvectors and eigentunes is more subtle for tori on orbital resonance, where are rational. the orbit and are then periodic over an appropriate number, , of whole turns.
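the factor of one half mentioned above is easy to verify numerically. the following toy check is our own, purely illustrative comparison of the eigenphases of corresponding su(2) and so(3) rotations about the vertical.

```python
import numpy as np

# a rotation by phi about the vertical has eigenphases +/- phi/2 in su(2)
# but +/- phi in the so(3) formalism used here
phi = 0.3
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
U = np.cos(phi / 2) * np.eye(2) - 1j * np.sin(phi / 2) * sigma_z   # su(2)
R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])                   # so(3)
half = np.max(np.abs(np.angle(np.linalg.eigvals(U))))   # phi / 2
full = np.max(np.abs(np.angle(np.linalg.eigvals(R))))   # phi
assert np.isclose(2.0 * half, full)
```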
thus in analogy with the method in remark 6 in section 3, there is a possibility of calculating at as the unit length real eigenvector of the principal solution matrix and in general it would be a function of .the imaginary part of the exponent of a complex eigenvalue of this principal solution matrix would provide the advance of the phase of spin rotation around and this could be used to obtain the average 1turn spin phase advance .this would usually depend on and in such cases it could _ not _ be used to define a spin tune .but it could be used to define a ( see ( [ eq:6.109 ] ) ) .so , as is usual at orbital resonance , an isf can exist in general but normally there is no spin tune .we note in passing that the complex eigentunes are in and , just like . moreover , because eigenvalues of matrices are invariant under similarity transformations , the eigentunes are invariant when the starting point for the eigenanalysis is shifted _ along the orbit_. examples of an unwise use of the term spin tune can be found in where the dependence of the multi turn eigentune on is made explicit .see too .again , a `` tune '' depending on can not be used for studying spin orbit resonance .calling such a quantity a spin tune can create confusion .see remark 4 in section 9 also . if , on orbital resonance with rational tunes , the torus is ill tuned , then one can expect that either there are more or fewer proper uprs than one would have on a well tuned torus .thus a spectral analysis along the lines of that in section 9 , applied to every on the torus , could be a useful diagnostic tool to signal these two cases of ill tuning .note that in the examples in remarks 13 and 14 in section 6 there are too many uprs .therefore one might expect that on orbital resonance with rational tunes an ill tuned torus had too many proper uprs .although an eigentune is in general dependent , we might expect that an approximation to the spin tune on a well tuned torus off orbital resonance could be obtained by setting the orbital tunes to rational values near to the actual tunes but such that were very large .indeed this is the essence of a popular perception . 
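a minimal sketch of the multi turn eigen-analysis just outlined, assuming the individual one turn maps at one fixed starting phase are available from tracking (the function name and interface are ours):

```python
import numpy as np

def average_one_turn_phase_advance(turn_maps):
    """compose the one turn maps of an n-turn-periodic system (rational
    orbital tunes) into the n-turn principal solution matrix and read the
    average one turn spin phase advance off its complex eigenvalue pair
    exp(+/- i phi); the third eigenvalue is 1."""
    R = np.eye(3)
    for M in turn_maps:           # maps for turns 1..n at one starting phase
        R = M @ R
    phi = np.max(np.abs(np.angle(np.linalg.eigvals(R))))
    return phi / (2.0 * np.pi * len(turn_maps))
```

as stressed in the text, the number returned generally varies with the starting orbital phase, which is precisely why it defines a phase advance rather than a spin tune.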
in effect , although usually not clearly stated , the underlying hope is that for the smooth guide fields of real rings and for large enough , the average 1turn spin phase advance is only very weakly dependent on and that therefore a good approximation to can be obtained .that would be consistent with the heuristic expectation that for large enough the influence of the initial become diluted .such behavior might also be expected if the high order fourier coefficients in the in ( [ eq:6.6 ] ) are very small owing to the smoothness of the fields .see for a hint of how the fourier coefficients come in here .we hope in the future to be able to present a rigorous treatment of this approximation .this approach has been adopted in to indicate that off orbital resonance the spin tune is a half integer at most in the model to be discussed in the next paragraph .however , any attempt to find an approximate value for a spin tune by using maps for nearby rational tunes should at least be checked for convergence and consistency .see remark 4 in section 9 too .further discussion around the topic of rational tunes can be found in in which the nature of the so called `` snake resonances '' is studied .these refer to a large loss of beam polarization during acceleration in a model in which the spin motion in most of the ring is approximated by the single resonance model and the spin motion is stabilized by pairs of idealized , i.e. thin lens , siberian snakes .the siberian snakes have the effect of fixing at independently of the beam energy .snake `` resonances '' occur at rational orbital tunes for which , in the notation of ( [ eq:5.00 ] ) , with odd . in it is made evident that at these tunes and for most there is no amplitude dependent spin tune so that one is not dealing with spin orbit resonances .moreover at most orbital amplitudes the isf defined there ( i.e. without insistence on smoothness ) is irreducibly discontinuous at some orbital phases . of course , since rational tunes correspond to orbital resonance , it should come as no surprise , given the content of our paper , that there is no amplitude dependent spin tune in this case .we have explicitly chosen our nomenclature to be consistent with earlier usage and thereby contribute clarity to the classification of phenomena .it could be that the pathological behavior in this model , namely the large loss of polarization , is due to the use of the simplified but singular representation for the snake fields .in this connection we note with interest that according to simulations for rhic , the loss of polarization during acceleration is less severe when the simulations are carried out with the magnetic fields of real snakes than with the singular fields of thin lens snakes .this indicates that predictions from simplified , mathematically singular models should be treated with some caution . for recent experimental work around vertical orbital tunes corresponding to `` snake resonance '' , see . 
in any casethe use of the term `` snake resonance '' is a good illustration of the confusion that arises from an imprecise use of the concept of spin orbit resonance .a model involving the single resonance model and two thin snakes provides the second example of the use of the eigentune from the 1turn principal solution matrix .simulations described in show some loss of polarization for all nonzero during acceleration , even away from the rational orbital tunes associated with snake `` resonances '' .it is implied there that this loss stems from the fact that during the acceleration , the eigentune from the 1turn principal solution matrix , which in is called the `` perturbed spin tune '' , oscillates to and fro across a spin orbit resonance as the orbital phase advances from turn to turn .but in this example the 1turn eigenvector of the principal solution matrix is usually not even parallel to the isf .so it is even more difficult to imagine that the `` perturbed spin tune '' can characterize long term spin motion .thus an alternative picture for the loss of polarization must be sought . as mentioned in remarks 1 and 4 in section 7 , for the straightforward single resonance model a spin tune _does _ exist on orbital resonance .this completes our discussion of notions of spin precession frequency .we now conclude by summarizing the main message .this is , that , by just employing the lorentz force and the t - bmt equation and with the help of the concept of quasiperiodicity , we are able to provide a rigorous and broad treatment of the concepts of proper uniform precession rate , spin tune and the invariant spin field by common methods used for ordinary differential equations .this allows us to focus on the main phenomena without the distraction of perturbations such as noise , collective effects and synchrotron radiation . in principlethey can be included by using perturbation theory .we have discussed putative stern gerlach forces in the introduction .the advantages of a clear , universally accepted and _ useful _ definition of spin tune has been made evident by the examples in the foregoing paragraphs .we thank georg hoffstaetter , helmut mais , mathias vogt , kaoru yokoya and the late gerhard ripken for important and fruitful discussions on this and related topics . in addition , jae gratefully acknowledges the support from doe grant de - fg03 - 99er41104 and from desy during a sabbatical in 19971998 when the core elements of this work were first assembled into a manuscript .jackson , `` classical electrodynamics '' , 3rd edition , wiley ( 1998 ) .d.p . barber , k. heinemann and g. ripken , z. f. physik * c64 * , 117 ( 1994 ) .j.a.ellison and k. heinemann , `` periodic spin fields and phase space densities : stroboscopic averaging and the ergodic theorem '' , submitted for publication ( 2004 ) .k. gottfried , `` quantum mechanics : fundamentals '' , addison wesley ( 1989 ) .k. heinemann and d.p .barber , nucl .a463 * , 62 and * a469 * , 294 ( 2001 ) . k. heinemann and g.h .hoffstaetter , phys . rev . * e 54 * ( 4 ) , 4240 ( 1996 ) .d.p . barber et al . , five articles in proceedings of icfa workshop `` quantum aspects of beam physics '' , monterey , u.s.a . , 1998 , edited by p. chen , world scientific ( 1999 ) . also in extended form as desy technical report 98 - 096 ( 1998 ) and e print archive : physics/9901038 , 9901041 , 9901042 , 9901043 , 9901044barber , g.h .hoffstaetter and m. 
vogt , proc .1998 european part .( epac98 ) , stockholm , sweden , june 1998 .available electronically at : http://epac.web.cern.ch/epac/welcome.html .d.p . barber , g.h .hoffstaetter and m. vogt , proc .high energy spin physics , protvino , russia , september 1998 , world scientific ( 1999 ) .s.r . mane , phys .* a36 * , 105 ( 1987 ) .hoffstaetter and m. vogt , phys . rev . *e 70 * , 056501 ( 2004 ) .hoffstaetter , `` a modern view of high energy polarized proton beams '' . to be published as a springer tract in modern physics .m. vogt , ph.d .thesis , university of hamburg , desy technical report desy thesis2000054 ( 2000 ) .a.w . chao , nucl .180 * , 29 ( 1981 ) .modern notation : replace by .d. p. barber and g. ripken , `` radiative polarization , computer algorithms and spin matching in electron storage rings '' , in the handbook of accelerator physics and engineering , edited by a.w .chao and m. tigner , world scientific , 2nd edition ( 2002 ) .v.v . balandin and n.i .golubeva , desy technical report 98 - 16 ( 1998 ) .eidelman and v. yakimenko , particle accelerators * 50 * , 261 ( 1995 ) .k. yokoya , desy technical report 99 - 006 ( 1999 ) and e print archive : physics/9902068 .s.r . mane , nucl .a498 * , 1 ( 2003 ) .ya.s . derbenev and a.m. kondratenko , sov .jetp * 35 * , 230 ( 1972 ) .ya.s . derbenev and a.m. kondratenko , sov .jetp * 37 * , 968 ( 1973 ) .k. yokoya , desy technical report 86 - 57 ( 1986 ) .d.p . barber , g.h .hoffstaetter and m. vogt , proc .spin physics symposium , osaka , japan , october 2000 , aip proceedings 570 , ( 2001 ) .d.p . barber , k. heinemann and g. ripken , desy technical report m-92 - 04 ( 1992 ) , second revised version , september 1999 .hoffstaetter , m. vogt and d.p .barber , phys . rev .beams * 2 * , 114001 ( 1999 ) .sinai ( ed . ) , encycl .of math.sciences v. dynamical systems ii , springer , new york ( 1989 ) .i.p . cornfeld , s.v .fomin and y.g .sinai , `` ergodic theory '' , springer , new york ( 1982 ) .k. heinemann , desy technical report 96 - 229 ( 1996 ) and e print archive : physics/9611001 .pomeransky , r.a .senkov and i.b .khriplovich , phys .* 43 * ( 10 ) , 1055 ( 2000 ) .+ see also + i.b .khriplovich and a.a .pomeransky , surveys high energy phys .* 14 * , 145 ( 1999 ) and e print archive gr - qc/9809069derbenev , university of michigan ann arbor , technical report um he9030 ( 1990 ) .mane , nucl .a498 * , 52 ( 2003 ) .s.r . mane , proc .spin physics symposium , brookhaven national laboratory , long island , u.s.a . , september 2002 .aip proceedings 675 ( 2003 ) .d.p . barber , j.a .ellison and k. heinemann , proc .spin physics symposium , osaka , japan , october 2000 , aip proceedings 570 , ( 2001 ) .h. amann , `` ordinary differential equations : introduction to nonlinear analysis '' , de gruyter , ( 1990 ). j. k. hale , `` ordinary differential equations '' , 2nd ed . , krieger , malabar , florida ( 1980 ) .y. aharonov and j. anandan , phys . rev .* 58 * ( 16 ) , 1593 ( 1987 ) .j. n. franklin , `` matrix theory '' , prentice hall , new jersey ( 1968 ) .h. goldstein , `` classical mechanics '' , addison wesley , 2nd ed.,new york ( 1982 ) .barber et al ., desy technical report 85 - 44 ( 1985 ) .modern notation : replace by .p. lochak and c. meunier , `` multiphase averaging for classical systems '' , springer , new york ( 1988 ) .koerner , `` fourier analysis '' , cambridge university press , cambridge ( 1988 ) .w. maak , `` fastperiodische funktionen '' , 2nd ed . 
,springer , berlin ( 1967 ) .arnold , `` mathematische methoden der klassischen mechanik '' , birkhaeuser , basel ( 1988 ) .a. m. fink , `` almost periodic differential equations '' , lecture notes in math .377 , springer ( 1974 ) .h. s. dumas , j. a. ellison and m. vogt , siam j. app .* 3 * , 409 ( 2004 ) .j. dieudonne , `` foundations of modern analysis '' , academic press , new york ( 1960 ) . s. lang , `` real analysis '' , addison wesley , reading ( 1973 ) . t. yoshizawa , `` stability theory and the existence of periodic solutions and almost periodic solutions '' , springer , new york ( 1975 ) .s.r . mane , nucl .a321 * , 21 ( 1992 ) .i.s . gradstein and i.m .ryshik , `` tables of integrals , series , and products '' , academic press , new york ( 1965 ) .barber et al . , proc .spin physics symposium , brookhaven national laboratory , long island , u.s.a . , september 2002 .aip proceedings 675 ( 2003 ) . c. l. siegel and j. k. moser , `` lectures on celestial mechanics '' , springer , new york ( 1971 ) .j. laskar , physica * d67 * , 257 ( 1993 ) .m. froissart and r. stora , nucl .. meth . * 7 * , 297 ( 1960 ) .b. montague , physics reports * 113 * , 1 ( 1984 ) .s. derbenev and a. kondratenko , soviet physics doklady * 20 * , 562 ( 1976 ) .s. derbenev et al .particle accelerators * 8 * , 115 ( 1978 ) .s.r . mane , nucl .a480 * , 328 ( 2002 ) .lee , `` spin dynamics and snakes in synchrotrons '' , world scientific ( 1997 ) .m.xiao and t. katayama , university of tokyo technical report cns - rep-51 ( 2003 ) .m. bei et al . , proc .spin physics symposium , trieste , italy , october ( 2004 ) . to be published by world scientific .
we present an in depth analysis of the concept of spin precession frequency for integrable orbital motion in storage rings . spin motion on the periodic closed orbit of a storage ring can be analyzed in terms of the floquet theorem for equations of motion with periodic parameters and a spin precession frequency emerges in a floquet exponent as an additional frequency of the system . to define a spin precession frequency on nonperiodic synchro betatron orbits we exploit the important concept of quasiperiodicity . this allows a generalization of the floquet theorem so that a spin precession frequency can be defined in this case too . this frequency appears in a floquet like exponent as an additional frequency in the system in analogy with the case of motion on the closed orbit . these circumstances lead naturally to the definition of the uniform precession rate and a definition of spin tune . a spin tune is a uniform precession rate obtained when certain conditions are fulfilled . having defined spin tune we define spin orbit resonance on synchro betatron orbits and examine its consequences . we give conditions for the existence of uniform precession rates and spin tunes ( e.g. where small divisors are controlled by applying a diophantine condition ) and illustrate the various aspects of our description with several examples . the formalism also suggests the use of spectral analysis to `` measure '' spin tune during computer simulations of spin motion on synchro betatron orbits .
the polarization of light provides a versatile suite of remote sensing diagnostics. in astronomy, polarization is used to study the sun and solar system, stars, dust, supernova remnants, and high - energy extragalactic astrophysics. the astrophysical mechanisms by which polarized light is produced range from scattering phenomena to the interaction between high energy charged particles and magnetized plasmas. beyond astronomy, polarization is used in remote sensing, medical diagnostics, defense, biophysics, microscopy, and fundamental experimental physics, e.g. . accurate, precision polarimetric methods usually require rapidly modulating, often fragile, parts and are inherently monochromatic, e.g. photoelastic modulators (pems), ferroelectric liquid crystals or liquid crystal variable retarders (lcvrs) in tandem with phase locked photomultipliers, or synchronized charge shuffling on a charge - coupled device (ccd) detector for area detection. lower accuracy techniques typically require sequential measurements of the target using rotating waveplates and polarization analyzers. here we describe a method to encode polarimetric information over a wide spectrum in a single data frame, using static optics. this approach alleviates errors introduced by the need to match sequentially acquired data, and eliminates the need for fragile or rapid modulation, yet may be able to accomplish high accuracy, precision measurements. the methods, of course, have their own implicit sensitivities and concerns, as we discuss below. a particular interest of the authors, which serves as a useful illustrative example, is the use of precision circular polarization spectroscopy as a remote sensing biosignature and a potentially valuable tool in searches for biological processes elsewhere in the universe. the circular polarization spectrum is sensitive to the presence of molecular homochirality, a strong biosignature, through the combined optical activity and homochirality of biological molecules. biologically - induced degrees of circular polarization have been found in the range to for a variety of photosynthetic samples, with an important correlation between the intensity spectrum and the polarization spectrum. hence, precision full stokes polarimetry and wide spectral coverage are required. furthermore, the target scene and instrumentation may be in rapid relative motion, compounding the difficulties of acquiring the data using traditional polarimetric techniques. a large number of photons must be accumulated in a short period of time. the techniques presented in this paper may provide a means to make this type of polarization measurement, in addition to providing a robust method for acquiring less precise spectropolarimetry in a straightforward fashion. furthermore, the approach is applicable across a wide wavelength range: as well as in the visible, it can work equally well from the ultraviolet, where for example chiral electronic signatures are generally strongest, to the infrared, where polarimetry goes hand in hand with probes into the geometry and physical characteristics of dusty regions of the universe. a variety of similar concepts are available under the generic title of ``channeled polarimetry''. these typically fall into two classes: channeled imaging polarimetry (cip) and channeled spectropolarimetry (cs), following the terminology of.
to simplify , the cs methods typically encode the polarization information as an amplitude modulation directly on the spectrum , derived from a polarization optic whose retardance is a function of wavelength .as an example , the spectral modulation principle for linear spectropolarimetry can reach a precision of at least .the cip methods , by contrast , use a polarization optic whose retardance is spatially varying , so that the polarization information is encoded as a set of spatial fringes onto an image .these two approaches , as well as a number of technical issues that arise in each case , are described in some detail in .previous authors have used multiorder retarders , birefringent wedges , pairs of birefringent wedges , and savart plates individually or in combination for these two applications . typically , the polarization information is extracted from the data using fourier methods .another approach to single - shot imaging polarimetry and spectropolarimetry is the wedged double wollaston device , which yields multiple images on a detector with polarization axes at different angles and allows retrieval of the stokes parameters through combinations of the images .the approach explored in the current paper is to disperse the spectral and polarimetric information along two orthogonal directions , a `` spectral '' dimension for the spectroscopy and a `` spatial '' dimension for the polarimetry .the amplitude modulation of the encoding of the polarization information is independent of the choice of spectral resolution .the two aspects of the measurement , the spectroscopy and the polarimetry , may be optimized independently .the complete spectropolarimetric information is encoded on a single data frame , and may be derived using straightforward analytical techniques .poisson photon counting statistics play a critical role in astronomical polarimetry . to measure a polarization degree of ,it is necessary to collect ( at least ) photons .for example , to measure , it is necessary to accumulate photons .a typical astronomical ccd has a well - depth electrons per pixel , requiring pixel readouts .if the data are needed in , say , 1 s in one pixel , this multi - readout approach becomes prohibitive .a solution is to spread the illumination across many pixels , as is done for high signal - to - noise - ratio photometry with the hubble space telescope . making a virtue of necessity ,if we use optics which spread the light of a spectrum perpendicular to the spectrum , then we can exploit the width of the broadened spectrum to encode the polarimetric data .sections 2 and 4 discuss a variety of configurations that accomplish this goal .2 starts with linear polarization ( equivalently any two of the stokes polarization parameters ) , followed by a discussion of our analysis methods in sec .3 . configurations that enable full stokes spectropolarimetry are presented in sec .5 describes practical implementation , sensitivities , and an approach based on calibration .6 provides an example application .finally , we make some conclusions in sec . 7 . the different embodiments of the underlying approach described in secs . 
2 and 4 highlight different aspects of the method. in the end, we anticipate that the most useful realizations of the concept will be the double wedge for linear polarimetry, sec. 2b, and the double - double wedge for full stokes spectropolarimetry, sec. . the other subsections introduce new ideas incrementally, while these two sections capture the final products for the two types of polarimetry. we use the conventional stokes vector formalism to quantify the polarization of light, with, where is the total intensity, describe the linear polarization, and the circular polarization. the normalized stokes parameters represent the fractional polarization state. the degree of polarization is given by and the direction of linear polarization by. we envisage a spectrum of light broadened in a direction orthogonal to the dispersion direction and sensed using a two dimensional area array such as a ccd. this broadening can be spread along a segment of a conventional long slit spectrograph, for example, with the length of the entrance slit providing the spatial dimension in the detected two dimensional spectrum. to introduce amplitude modulation along the slit (direction), we introduce a retardance gradient along using a birefringent wedge (or wedges) followed by a polarization analyzer, such as a dichroic polarizer or polarizing prism (see figs. 1 and 2). it would be possible to carry out the polarization analysis immediately in front of the detector array, as in. however, performing the polarization analysis as early as possible in the optical path yields better robustness against polarization introduced by the instrumentation optics. furthermore, the polarization optics can be more compact and easier to characterize, since light from all wavelengths is analyzed using the same optical elements. hence, we prefer to insert the polarization optics immediately adjacent to the spectrograph's entrance slit, and allow a long - slit spatial segment to project through the instrument to become the detector array's spatial dimension. to lay the groundwork, we initially consider just a single birefringent wedge. the wedge thickness gradient is oriented along the slit, while its fast axis is oriented with respect to the slit. the analyzer's transmission axis is parallel to the slit, though it could alternatively be orthogonal to it. if we define the stokes direction as also being parallel to the slit, then we can consider a uniformly illuminated slit with the beam entering the slit orthogonal to the slit plane. if the incoming light is polarized with its electric field along the slit (), at the hypothetical tip of the wedge where the retardance is zero, the polarized light passes through the retarder and analyzer without hindrance. moving along the slit, the retardance increases to the point where it becomes quarter - wave, the light is converted to circularly polarized light after the retarder, and half of the light transmits through the analyzer. as, and the retardance, increase together, it reaches the point where there is half wave retardance. at that point the polarization is rotated after the retarder, and none of the light transmits through the analyzer. at the same distance further along the slit, the retardance is full - wave, the light is completely transmitted, and the cycle is complete. note that for typical birefringent materials, the spatial distance corresponding to one wavelength of retardance will depend on the wavelength.
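the walkthrough above condenses into a short mueller - calculus sketch. the python/numpy fragment below is a toy model under our own sign convention (retarder matrix conventions differ), with the retardance taken to grow linearly along the slit; it reproduces the stated behavior, consistent with the usual normalizations p = (q^2 + u^2 + v^2)^(1/2) and theta = (1/2) arctan(u/q): a pure q input modulates as cos(delta), a pure v input as sin(delta), a quarter wave out of phase, and a pure u input is unmodulated.

```python
import numpy as np

def retarder_45(delta):
    """mueller matrix of a linear retarder with fast axis at 45 deg to the
    slit and retardance delta (one common sign convention)."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c, 0.0,  -s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0,   s, 0.0,   c]])

# analyzer (perfect linear polarizer) with transmission axis along the slit
POL_SLIT = 0.5 * np.array([[1.0, 1.0, 0.0, 0.0],
                           [1.0, 1.0, 0.0, 0.0],
                           [0.0, 0.0, 0.0, 0.0],
                           [0.0, 0.0, 0.0, 0.0]])

y = np.linspace(0.0, 3.0, 601)     # slit position in units of the one-wave
delta = 2.0 * np.pi * y            # distance, with delta(y) growing linearly

for label, S in [("q", [1, 1, 0, 0]), ("u", [1, 0, 1, 0]), ("v", [1, 0, 0, 1])]:
    I = np.array([(POL_SLIT @ retarder_45(d) @ np.array(S, float))[0]
                  for d in delta])
    print(label, "modulation depth:", I.max() - I.min())
# expected: q and v fully modulated (depth 1), 90 deg out of phase; u flat
```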
in the absence of dispersion ,the spatial modulation frequency is .circularly polarized light ( ) is half transmitted at zero retardance ( circularly polarized light passing through the analyzer ) .when the retardance reaches quarter - wave , the light becomes linearly polarized along the slit direction and all of the light passes the analyzer . when the retardance is half - wave , the sign of the circular polarization is flipped ( ) , and again , half of the light is transmitted through the analyzer .hence , the modulation for is similar to the modulation due to , but out of phase by a distance corresponding to one quarter wave of retardance .light polarized linearly at ( ) , i.e. , along the retarder fast axis , is unaffected by the variable retardance along the slit .however , if we precede the birefringent wedge by an achromatic ( or superachromatic ) quarter wave retarder , with fast axis _ along _ the slit , then the circular polarization parameter is interchanged with .now is unaffected by the variable retardance and causes no spatial modulation , while causes spatial amplitude modulation , a quarter wave out of phase from .if the input stokes vector is , the output intensity in the spatial direction is the retardance maps onto the distance along the slit according to , where the distance corresponding to a single wave of retardance change is , is the wedge angle , and and are the refractive indices for the and -beams , respectively . if the circular polarization is desired rather than , then the quarter wave retarder can be omitted .if a beam splitting polarizing prism ( e.g. a wollaston prism ) is used as the polarization analyzer , then two versions of the spectra are obtained on the detector .the intensity of the orthogonally polarized beam is the difference of eqs .( 1 ) and ( 2 ) , divided by their sum , assuming any transmission differences have been removed , gives table 1 summarizes this and other algebraic expressions for the spatial modulation in subsequent configurations discussed below . an alternative way to express eqs .( 1)(3 ) is where the position angle of linear polarization is given by . from eq .( 4 ) it is apparent that the spatially modulated profile has an amplitude of modulation equal to the degree of polarization , and a ( spatial ) phase zero point that reveals the angle of polarization .it is important in using eqs .( 1 ) and ( 2 ) that there not be any signficant intensity variations along the slit on length scales of order . however , in the `` dual beam '' version , eq .( 3 ) , the total intensity along the slit has been eliminated .this potentially offers a means to retain some spatial resolution along the slit .for example , the image of a star can have very large intensity changes along a spectrograph slit , though its polarization is unchanged .provided the extent of the image is sufficient to encode the sine and cosine terms in eq .( 3 ) , we may derive its polarization even in the presence of quite strong intensity changes . in eqs .( 1 ) and ( 2 ) , intrinsic intensity changes would be mixed with amplitude modulation produced by polarization .hence , care needs to be taken in matching the spatial extent of the instrumental point spread function to the projected scale of the retardance variation . use of eq .( 3 ) is more robust against this constraint . in practice ,wedge components available off the shelf are relatively thick .this introduces a multiorder retarder effect , exploited in the spex concept . 
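to illustrate eqs. (3) and (4) and the least squares retrieval used later, here is a self contained toy simulation; all numbers are our own choices, and the quarter wave plate configuration is assumed, so the dual beam signal is modeled as q cos(delta) + u sin(delta) plus noise.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.linspace(0.0, 4.0, 400)            # slit position / one-wave distance
delta = 2.0 * np.pi * y                   # retardance along the slit

# simulated dual-beam difference signal for a 5% polarized input at 30 deg
p_true, theta_true = 0.05, np.radians(30.0)
q = p_true * np.cos(2.0 * theta_true)
u = p_true * np.sin(2.0 * theta_true)
d = q * np.cos(delta) + u * np.sin(delta) + 1e-3 * rng.standard_normal(y.size)

# linear least squares fit of d(y) = q cos(delta) + u sin(delta)
A = np.column_stack([np.cos(delta), np.sin(delta)])
(q_fit, u_fit), *_ = np.linalg.lstsq(A, d, rcond=None)
p_fit = np.hypot(q_fit, u_fit)                    # degree of polarization
theta_fit = 0.5 * np.arctan2(u_fit, q_fit)        # polarization angle
print(p_fit, np.degrees(theta_fit))               # ~0.05 and ~30 deg
```

the modulation amplitude recovers the degree of polarization and the fringe phase recovers the angle, exactly the reading of eq. (4) described above.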
a thick birefringent material, followed by analysis optics such as those employed here, yields, for a single location on the slit, spectral modulation, used to measure the polarization in. hence, using a single wedge, the resulting fringes from polarized light have a relatively pronounced ``slope'', because the retardance, implicitly, is varying as a function of both wavelength and spatial direction. from above, the complete expression for. therefore a constant phase, which defines the appearance of the fringes on the detector, occurs for, following the notation of the appendix. if we incorrectly derive . a formal tolerance analysis can be carried out, and to completely ignore this term, while restricting the cross talk, requires to be within of, if the miscentering is such as to _maximize_ the cross talk. if needed, there are two options to relax this constraint within the context of an analytical approach: (1) do not ignore the term! and (2) increase the spatial scale to ease the requirement on. empirical calibration approaches are also possible, as discussed below. wedge retarders such as those discussed here may be custom manufactured. however, testable quality versions are available off the shelf under the guise of depolarizers or scramblers. quartz or calcite provide plausible birefringent materials. the ordinary and extraordinary refractive indices and birefringences are, for quartz, , , , respectively, and for calcite, , . for example, if the retarders have a wedge gradient of, then at 500 nm the retardance increases by one wavelength over a distance of mm for quartz and mm for calcite. if these scales are projected 1:1 onto a detector with m pixels, then these values correspond to and pixels for quartz and calcite, respectively. in a similar fashion to the analysis described in sec. 4.b.3, the least squares methods lend themselves to formal tolerance analyses, as well as parameter estimation. however, a comprehensive analysis of all plausibly relevant parameters is impractical and premature. instead, we consider an alternative approach, which is purely empirical. for a very general set of optical component characteristics, the generic versions of the amplitude modulated intensity profile, eqs. (7) and (8), are valid, even if the exact forms of the functions, , and are not known. if we present the system in turn with unpolarized light, and then 100% polarized light oriented in the direction, the direction, and finally with 100% circularly polarized light, then the empirical response _gives_ the functions, , and. these empirically derived functions can then be used to numerically derive the curvature matrix and its inverse. as in sec. 4.b.3, the presence of covariance terms in the matrices by itself does not invalidate the approach, because, in principle, with a high quality set of calibration observations (which would appear much like the examples shown in fig. 4), they are implicitly known. it is only when the terms are not correctly accounted for that problems may arise; that is, if the calibration sources are not of high enough quality. it is also likely that the variance will be a function of analyzer angle, as above. hence empirical versions of fig. 7 may be useful to explore trade space. we expect, in general, the empirical calibration method to be a very important approach, potentially offering the best strategy for deriving the stokes parameters accurately. however, it remains to be seen to what degree the required tolerances, or calibration stability, limit the performance of these devices in practice. our intent is to develop additional laboratory experience to understand these issues. component birefringence will depend on temperature. this can be mitigated by (1) stabilizing the temperature, (2) continuous calibration, and (3) compounding carefully chosen materials. to obtain an idea of the order of magnitude of the temperature sensitivity, we use the temperature dependent formula for the birefringence of quartz, given by, which yields for the range to. since the wedge should keep its shape under expansion or contraction, only the birefringence term matters, and it imprints itself as an identical fractional change on the spatial wavelength. it can be shown that the resulting fractional change in for the dual beam single wedge example is one half the fractional change in, for an input beam consisting purely of, and the spurious cross - talk into the other stokes parameter is. hence a temperature change would result in a spurious polarization (cross talk) of for a 10% linearly polarized source. the presence of two refractive indices in a wedge - shaped optic will cause a prismatic separation of the orthogonally polarized beams. the magnitude of this effect will depend on the details of the optical system design. in our laboratory testing, this issue was not significant. if the beam incident on a single wedge has significant convergence, then the retardance seen by light at different angles differs by approximately, where is the angle to the normal. for this retardance difference to be less than, say, , the f - number of the beam needs to be slower than for 1 mm thick quartz and for 1 mm thick calcite. the convergence requirement scales as the inverse square root of the thickness. we carried out a number of monte - carlo simulations and found that the retrieved solutions are close to the theoretical expressions given in the tables, provided that (1) at least one full period is sampled and (2) there are sufficient sampling points within the full cycle to properly sample the highest frequency component. in cases where the fringes have significant slope, as described above, it will be necessary for the spectral resolution to be sufficient to separate the fringes. if we require the retardance change, this is, where is the thickness. hence the required spectral resolution is. for 2 mm thick quartz this is, and for 1 mm calcite, .
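to attach rough numbers to the one-wave length scale discussed above, the following sketch evaluates Lambda = lambda / (delta_n tan(theta_w)); the 2 degree wedge angle and 10 micron pixel size are our illustrative assumptions, and the birefringence values are standard handbook numbers near 500 nm rather than values taken from this paper.

```python
import numpy as np

# one wave of retardance accumulates over Lambda = lam / (dn * tan(wedge))
lam, wedge, pixel = 500e-9, np.radians(2.0), 10e-6
for name, dn in [("quartz", 0.0093), ("calcite", 0.172)]:
    Lam = lam / (dn * np.tan(wedge))
    print(f"{name}: Lambda = {1e3 * Lam:.2f} mm = {Lam / pixel:.0f} pixels")
# quartz: ~1.5 mm (~150 pixels); calcite: ~0.08 mm (~8 pixels)
```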
for the compounded wedge optics which straighten the fringes, this constraint is relaxed to the point of being essentially irrelevant. we established an optical test bench to allow us to provide an empirical proof of concept for the approach presented in this paper, and to demonstrate that the device functions as a polarimeter. the optical test bench is shown and illustrated in fig. . light sources, either halogen continuum white light or a variety of line lamps for wavelength calibration, illuminated an integrating sphere's entrance port. the light emerging from a separate port at right angles to the first is expected to be unpolarized. however, to ensure an unpolarized source and to provide a uniform location for our measurements, the emergent light was directed to fall onto an opal diffusing screen. following the opal screen, and located close to the screen, the light optionally encountered polarizing elements (, , or) for calibration, samples to measure the polarized transmission spectrum, or nothing, to provide an intensity reference spectrum. the spectrograph consisted of a slit, a collimator, a transmission grating, and a camera. the slit was 125 micron by 1 cm and was located at one focus of a 50 mm nikon collimating lens. the transmission diffraction grating was thorlabs part # gt50 - 06v with 600 grooves / mm. a second 50 mm nikon lens imaged the spectrum onto a quantum scientific imaging 683 8 mpix cooled ccd camera with pixels of size 5.4 micron. for the polarization analyzer, we placed a meadowlark precision linear polarizer, dpm100vis, between the spectrograph slit and the collimating lens, using a rotary mount to allow adjustment of the analyzer angle. the extinction ratio across the visible spectrum for this polarizer is , and exceeds 100:1 from approximately 375 nm to 725 nm. the birefringent wedges were placed on a platform next to the spectrograph slit, between the slit and the source (i.e., before the light enters the spectrograph). the optional quarter wave retarder was mounted, when used, between these wedges and the source and was an achromatic meadowlark quarter wave retarder aqm-100-545, which is effective between 450 nm and 630 nm. the birefringent wedges themselves were a mix of customized and off - the - shelf quartz scramblers from karl - lambrecht corporation. off - the - shelf wedges had a pitch angle of , and the customized pieces used a pitch angle of . the distance between the source and the spectrograph slit was approximately 0.6 m, and the entire system was contained within a series of light - tight baffles and boxes to eliminate stray light. locating the birefringent wedges externally to the spectrograph not only allows us to ignore possible polarization due to the slit, but also allowed easy access for switching configurations, and permitted us to focus the spectrograph onto a single slit position. the first exercise was to attempt to reproduce the appearance of the theoretical data frames for the various configurations presented in fig. 4. fig. 9 shows the results.
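for orientation, the expected spectral scale of this bench follows from the grating equation. the small estimate below uses only the quoted components (600 grooves/mm, 50 mm camera lens, 5.4 micron pixels) and assumes first order and near normal diffraction, so it is indicative only.

```python
# reciprocal linear dispersion ~ groove_spacing / f_camera (cos beta ~ 1)
groove_spacing = 1e-3 / 600.0        # m
f_camera = 50e-3                     # m
pixel = 5.4e-6                       # m
nm_per_pixel = (groove_spacing / f_camera) * pixel * 1e9
print(nm_per_pixel)                  # ~0.18 nm per pixel
```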
for this first exercise , we used the white halogen continuum source in combination with linearly and circularly polarizing filters . for the first four configurations ( first four columns in fig . 9 ) , we used a set of 60 mm astronomical polarizing filters that utilize hn38 polaroid mounted in a magnesium fluoride substrate to approximate 100% linearly polarized light ( first two rows of the first four columns in fig . 9 ) . we used a cholesteric liquid crystal technology ( clc ) filter to approximate 100% circularly polarized light ( third row of the first four columns in fig . 9 ) . for the final configuration ( fifth column of fig . 9 ) , we used a polarization state generator that utilized a precision linear polarizer in combination with a fresnel rhomb , which is capable of producing close to 100% polarized light anywhere on the poincaré sphere . we also obtained data frames without any polarizing optics to provide a `` flat - field '' reference . the spectral scale was wavelength calibrated using argon and mercury line lamps . we defined the slit direction to correspond to and perpendicular to the slit to correspond to . by comparing figs . 4 and 9 , it is apparent that the empirical data reproduce the qualitative expectations extremely well . differences in detail can be attributed to different absolute thicknesses of the wedges ( in practice the 3 and 6 degree wedges had very different thicknesses ) relative to one another and to the theoretical model , and to essentially random centering of the crossover points for wedge pairs relative to the slit and to one another . care was taken with the sign convention and parity of the wedges to reproduce the directions for fast axes and wedge gradients used in the models . the empirical wavelength range shown corresponds to 550 nm to 700 nm , and the slit height to 2 mm in fig . 9 . we consider the data obtained in this exercise to be fully consistent with the theoretical expectations , given the practical uncertainties described . the second exercise was to test the linear polarization mode described in sec . we placed a compound wedge pair , with the fast axes crossed and running corner to corner at to the slit , in front of the spectrograph slit . an achromatic quarter wave retarder was placed upstream of the wedge pair , with its fast axis oriented to the slit . calibration measurements were taken using the standard suite of polarizing filters , and the resulting data frames were used to derive empirical `` coefficient '' frames for use in the least - squares retrieval procedure of sec . 3 and sec . 5a , with additional theoretical analysis in the appendix . we placed a schott bg18 colored glass filter ( broad bandpass with peak transmission near nm ) close to the source and orthogonal to the beam and took a single data frame . we then rotated the filter about the vertical axis by approximately and took a second data frame . using these single data frames in conjunction with the procedures described in sec . 3 and sec . 5a , we derived polarization and intensity spectra , shown in fig .
10 . the average linear polarization for the region 500 nm to 550 nm was 0.39% and 6.2% , with standard deviations 0.02% and 0.35% respectively . for a uniform glass sheet of refractive index in air , we expect transmitted light to be polarized 6.3% for an incident angle of . we emphasize that these results were obtained using a single data frame with no moving parts once the calibration data had been acquired . the third exercise was to demonstrate our ability to obtain full stokes polarimetry from a single data frame . for this , we used the compound wedge pair used in the previous exercise , followed by a compound wedge pair with fast axes at and . we removed the quarter wave retarder . as an interesting source of circularly polarized light , we used a pair of plastic 3d cinema glasses . these glasses comprise a quarter wave sheet and a polarizing sheet in combination . used as viewers , they either transmit or extinguish circularly polarized light , depending on its sign . but , in reverse , they produce circularly polarized light of opposite signs for each `` eye . '' we took single data frames through each eye in turn and processed the data frames according to the methodology of sec . 3 and sec . 5a , using the empirical suite of calibration data frames . the results are shown in fig . the method produced excellent results on these sources , yielding the expected extremely high level of polarization , with opposite sign for the two eyes . we also show the derived linear polarization , to show that we are also measuring the full stokes vector in these single data frames . we defer attempts to carry out precision polarimetry , given the rudimentary nature of our optical bench coupled to imperfect calibration source availability . however , we believe that these preliminary results are extremely encouraging and satisfy our desire to provide a proof of concept in the laboratory and to demonstrate that the desired polarimetric information can be retrieved from single data frames . we have described an approach to polarization measurement that uses no moving parts and that relies on simple , robust optical components . either linear polarimetry or full stokes polarimetry can be carried out . the method depends on the use of an area detector , such as a ccd , with the light spread across a region of the detector . if the system can be made photon - limited in sensitivity , this spreading of the light improves the polarimetry , since typical ccd well - depths are only of order . with modest spreading of the light , a single photon limited frame should be able to reach precision of order in polarization . the influence of departures from ideal circumstances still remains largely to be explored . hence , we do not know at this stage whether this approach will be able to achieve extremely high accuracy . however , the robustness and simplicity of the components involved offers cause for optimism . other approaches , such as the spectral modulation method for linear polarimetry , offer alternative methods for static polarimetry in hostile environments . compared to that approach , the methods presented here yield a cleaner separation of the spectroscopy and polarimetry , at the expense of additional detector surface area requirements . the methods may be applied in the uv or ir as well as in the visible wavelength range .
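the photon - noise scaling mentioned above can be made concrete with a small sketch ; the well depth , pixel counts and the dual - beam error formula $\sigma_p \simeq \sqrt{2/n}$ are illustrative assumptions .

```python
import math

# Photon-noise estimate of the polarimetric precision attainable by
# spreading the light over many pixels.  The well depth, pixel counts
# and the sqrt(2/N) error formula are illustrative assumptions.

well_depth = 1e5  # electrons per pixel, a typical CCD full well
for n_pix in (1, 1e2, 1e4):
    n_photons = n_pix * well_depth
    sigma_p = math.sqrt(2.0 / n_photons)
    print(f"{n_pix:8.0f} pixels: N = {n_photons:.1e} e-, sigma_p ~ {sigma_p:.1e}")
```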
since the entire polarization information is contained within a single data frame ,the method is well - suited to measuring the polarization of transient sources and scenes where the polarimeter and target are in rapid relative motion .since the optics are robust , simple and require no moving parts , we anticipate that these methods will prove useful for application in space .* acknowledgments : * we acknowledge support from the stsci jwst director s discretionary research fund jdf grant number d0101.90152 .stsci is operated by the association for universities for research in astronomy , inc ., under nasa contract nas5 - 26555 .patent pending , all rights reserved .we follow bevington and let the general problem to be solved be where measurements , either the intensity in the single beam case or in the dual beam case , are made at points and , with the true underlying value and its error ( assumed random , independent ) at location .the terms , , , and are trigonometric functions that encode the stokes parameters , , , and or , , and , and their coefficients , , , and are the stokes parameters to be derived .the mapping and specific functions depend on the chosen configuration , but all configurations discussed here can be expressed in this way . sometimes the functions are identically zero , implying no sensitivity to that parameter .the function is then ^ 2=\sum^{n}_{i=1}{1\over\sigma_i^2}[y_i - ai_c(x_i)-bq_c(x_i)-cu_c(x_i)-dv_c(x_i)]^2,\ ] ] and to solve , we set the partial derivatives of with respect to each of , , , and equal to zero : we require the curvature matrix and summation vector , , respectively . with this terminology ,the least squares equations become we solve for the vector , following standard procedures , e.g. , , ignoring covariances , the uncertainties on these parameters are where represents , , , or , and {ii} ] , omitting the zero term .hence the solutions from are similarly , we can derive the uncertainties of the stokes parameters . the uncertainty is given by and is assumed to be constant .hence , reading directly from the expression for , we have the uncertainties in the normalized stokes parameters and are ignoring bias terms , it follows that the uncertainty on the degree of polarization is the expressions for the trigonometric functions , , , depend on the configuration and whether a dual beam formalism is adopted or not .we derived the expressions for these functions using mueller matrix algebra for a selection of configurations , as presented in table 1 . in a similar fashion to this example , though with more complex manipulations , we can analytically invert the corresponding curvature matrix to derive both the solution and the uncertainty estimates , taking only the diagonal terms as the uncertainty. there are cases where the off - diagonal terms of are non - zero , as discussed in the text .w. b. sparks , j. h. hough , t. a. germer , f. chen , s. dassarma , p. dassarma , f. robb , n. manset , l. kolokolova , i. reid , f. macchetto , and w. martin , `` detection of circular polarization in light scattered from photosynthetic microbes . ''sci . * 106 * , 78167821 ( 2009 ) .w. b. sparks , j. h. hough , l. kolokolova , t. a. germer , f. chen , s. dassarma , p. dassarma , f. t. robb , n. manset , i. n. reid , f. d. macchetto , and w. martin , `` circular polarization in scattered light as a possible biomarker , '' j. quant .110 * , 17711779 ( 2009 ) .g. van harten , f. snik , j. h. h. rietjens , j. m. smit , j. de boer , r. diamantopoulou , o. p. 
hasekamp , d. m. stam , c. u. keller , e. c. , a. l. verlaan , w. a. vliegenthart , r. ter horst , r. navarro , k. wielinga , s. hannemann , s. g. moon , and r. voors , `` prototyping for the spectropolarimeter for planetary exploration ( spex ) : calibration and sky measurements , '' in `` society of photo - optical instrumentation engineers ( spie ) conference series '' , vol .8160 of _ society of photo - optical instrumentation engineers ( spie ) conference series _ ( 2011 ) .r. w. oka and n. saito , `` snapshot complete imaging polarimeter using savart plates . '' infrared detectors and focal plane arrays viii .dereniak , e.l . ; sampson , r.e .spie * 6295 * , 629508 ( 2006 ) .f. snik , j. h. h. rietjens , g. van harten , d. m. stam , c. u. keller , j. m. smit , e. c. laan , a. l. verlaan , r. ter horst , r. navarro , k. wielinga , s. g. moon , and r. voors , `` spex : the spectropolarimeter for planetary exploration , '' in `` society of photo - optical instrumentation engineers ( spie ) conference series , '' vol .7731 of _ society of photo - optical instrumentation engineers ( spie ) conference series _ ( 2010 ) . c. pernechele , e. giro , and d. fantinel , `` device for optical linear polarization measurements with a single exposure , '' in `` society of photo - optical instrumentation engineers( spie ) conference series '' , vol .4843 of _ society of photo - optical instrumentation engineers ( spie ) conference series _ ,s. fineschi , ed .( 2003 ) , pp .156163 . running horizontally and wavelength vertically , increasing up .parameters correspond to 2 mm in of a quartz wedge set , running from 450 nm to 750 nm .top row shows 100% stokes , middle row 100% stokes and bottom row 100% stokes . left to right , in the notation of tables 13 , the configurations are , , , , and .note that if the quarter wave retarder were omitted in the first two columns , and , which show no sensitivity with the quarter wave retarder , would be interchanged .for the first two columns , the fast axis is set at and for the remaining three at ( see text).,width=377 ] , green is , and red is .the vertical lines indicate the positions of the minima for and , the horizontal lines indicates ( dotted ) , and the analytically - determined minimum higher ( dashed).,width=415 ] the spatial distance of one wavelength of retardance .blue is stokes , green is , and red is .the vertical lines indicate the positions of the analyzer angles that have no formal covariance , which is independent of the miscentering .smooth lines through the simulated data are the analytic solutions ignoring covariance terms , while the plus signs are the results of monte - carlo simulations .the horizontal lines are as in the previous figure.,width=415 ] to orthogonal , green , observed using the configuration . at right angles, we expect no polarization , and inclined at , approximately % , consistent with the least squares retrieval .the black curves show arbitrarily normalized throughputs for the two configurations ( solid , orthogonal and dotted , inclined ) derived from the data , serving to illustrate that we also obtain full stokes spectroscopy using these methods.,width=453 ] left and right circularly polarized light for the left and right eyes , measured using the configuration .the retrieval is consistent with expectations .for completeness , and to illustrate that we obtain full stokes polarimetry from a single data frame , the dashed lines show the retrieved degree of linear polarization.,width=453 ]
|
we present an approach to spectropolarimetry which requires neither moving parts nor time dependent modulation , and which offers the prospect of achieving high sensitivity . the technique applies equally well , in principle , in the optical , uv or ir . the concept , which is one of those generically known as channeled polarimetry , is to encode the polarization information at each wavelength along the spatial dimension of a 2d data array using static , robust optical components . a single two - dimensional data frame contains the full polarization information and can be configured to measure either two or all of the stokes polarization parameters . by acquiring full polarimetric information in a single observation , we simplify polarimetry of transient sources and in situations where the instrument and target are in relative motion . the robustness and simplicity of the approach , coupled to its potential for high sensitivity , and applicability over a wide wavelength range , is likely to prove useful for applications in challenging environments such as space .
|
the development of data analysis algorithms for gravitational wave detectors on the ground and for the proposed space - based gravitational wave detector , the laser interferometer space antenna ( lisa ) , is an active area of current research .lisa data analysis activities are being encouraged by the mock lisa data challenges ( mldcs ) .prior to the most recent round , round 3 , which finished in april 2009 , the mldcs had included data sets containing individual and multiple white - dwarf binaries , a realisation of the whole galaxy of compact binaries , single and multiple non - spinning supermassive black hole ( smbh ) binaries , either isolated or on top of a galactic confusion background and isolated extreme - mass - ratio inspiral ( emri ) sources in purely instrumental noise .round 3 included a realisation of the galactic binaries that included chirping systems , a single data set containing multiple overlapping emris and data sets containing three new types of source inspirals of spinning supermassive black hole binaries , bursts from cosmic string cusps and a stochastic background of gravitational radiation .several of these sources , including the emri signals , the spinning black hole binaries and the cosmic string bursts , have highly multi - modal likelihood surfaces which can cause problems for algorithms such as markov chain monte carlo ( mcmc ) , as these may become stuck in secondary maxima rather than finding the primary mode .recently we applied a new algorithm to the problem of gravitational wave data analysis , multinest , which is a nested sampling algorithm , optimized for problems with highly multi - modal posteriors .we demonstrated that multinest could be used as a search tool and to recover posterior probability distributions for the case of data sets containing multiple signals from non - spinning smbh binary inspirals .we used the algorithm as a search tool in mldc round 3 to analyse two of the data sets the spinning smbh inspirals ( challenge 3.2 ) and the cosmic string bursts ( challenge 3.4 ) .we will discuss the second of these applications in this paper .the nested sampling algorithm was developed as a tool for evaluating the bayesian evidence .it employs a set of live points , each of which represents a particular set of parameters in the multi - dimensional search space .these points move as the algorithm progresses , and climb together through nested contours of increasing likelihood . 
at each step the algorithm works by finding a point of higher likelihood than the lowest likelihood point in the live point set and then replacing the lowest likelihood point with the new point . the difficulty is to sample new points of higher likelihood efficiently from the remaining prior volume . multinest achieves this using an ellipsoidal rejection sampling scheme . it has demonstrated orders of magnitude improvement over standard methods in several applications in cosmology and particle physics . it also proved to be very effective in a gravitational wave context for the test case mentioned above . multinest can be used as a search tool and returns the posterior probability distributions as a by - product . we employed the algorithm successfully to analyse mldc challenge 3.4 , accurately finding all three of the cosmic string burst sources present in the data set . this will be discussed in more detail in section [ sec : mldcres ] . the cosmic string cusp is a special type of burst source : the waveform is known and can be modelled , so a matched filtering search is possible , and this is what multinest does when computing the likelihoods across the parameter space . however , there may be bursts of gravitational waves in the lisa data stream from other sources , and the question arises as to whether we would be able to detect them and whether we can distinguish cosmic string bursts from bursts due to these other sources . the evidence that multinest computes is one tool that can be used to address model selection . evidence has been used for gravitational wave model selection to test the theory of relativity in ground based observations , and , in a lisa context , to distinguish between an empty data set and one containing a signal . calculation of an evidence ratio requires a second model for the burst as an alternative to compare against . we chose to use a sine - gaussian waveform as a generic alternative burst model , as this is one of the burst models commonly used in ligo data analysis ( see for instance ) . we find that the evidence is a powerful tool for characterising bursts : for the majority of detectable bursts , the evidence ratio strongly favours the true model over the alternative . while the sine - gaussian model does not necessarily describe all possible un - modelled bursts well , these results suggest that the evidence can be used to correctly identify any cosmic string bursts that are present in the lisa data stream . this paper is organised as follows . in section [ sec : methods ] we describe the methods that we employ in this analysis : bayesian inference , nested sampling , the multinest algorithm , the two burst waveform models we use and the detector noise spectral density .
in section [ sec : res ] we present our results , including a description of our methods and results for the analysis of mldc challenge 3.4 ( in section [ sec : mldcres ] ) ; an estimate of the signal - to - noise ratio ( snr ) required for detection of cosmic string and sine - gaussian bursts using the multinest algorithm ; and a calculation of the evidence ratio of the two models for data sets containing each type of source .we finish in section [ sec : discuss ] with a summary and discussion of directions for further research .bayesian inference methods provide a consistent approach to the estimation of a set of parameters in a model ( or hypothesis ) for the data .bayes theorem states that where is the posterior probability distribution of the parameters , is the likelihood , is the prior distribution , and is the bayesian evidence .bayesian evidence is the factor required to normalise the posterior over : where is the dimensionality of the parameter space .since the bayesian evidence is independent of the parameter values , it is usually ignored in parameter estimation problems and the posterior inferences are obtained by exploring the un normalized posterior using standard mcmc sampling methods .bayesian parameter estimation has been used quite extensively in a variety of astronomical applications , including gravitational wave astronomy , although standard mcmc methods , such as the basic metropolis hastings algorithm or the hamiltonian sampling technique ( see e.g. ) , can experience problems in sampling efficiently from a multi modal posterior distribution or one with large ( curving ) degeneracies between parameters . moreover, mcmc methods often require careful tuning of the proposal distribution to sample efficiently , and testing for convergence can be problematic . in order to select between two models and one needs to compare their respective posterior probabilities given the observed data set , as follows : where is the prior probability ratio for the two models , which can often be set to unity but occasionally requires further consideration ( see , for example , for some examples where the prior probability ratio should not be set to unity ) .it can be seen from eq .( [ eq : model_select ] ) that the bayesian evidence plays a central role in bayesian model selection . 
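as a toy illustration of eq . ( [ eq : model_select ] ) , the snippet below converts two log - evidences into a posterior odds ratio ; the numerical values are invented for illustration only .

```python
import math

# Toy illustration of the model-selection equation: converting
# log-evidences into posterior odds.  The numbers are invented.

log_Z1 = -4213.7   # hypothetical log-evidence, model 1 (e.g. cosmic string)
log_Z2 = -4221.2   # hypothetical log-evidence, model 2 (e.g. sine-Gaussian)
prior_odds = 1.0   # P(H1)/P(H2), taken as unity

posterior_odds = prior_odds * math.exp(log_Z1 - log_Z2)
print(f"log Bayes factor = {log_Z1 - log_Z2:.1f}, "
      f"posterior odds = {posterior_odds:.3g}")
```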
as the average of likelihood over the prior, the evidence automatically implements occam s razor : a simpler theory which agrees well enough with the empirical evidence is preferred .a more complicated theory will only have a higher evidence if it is significantly better at explaining the data than a simpler theory .unfortunately , evaluation of bayesian evidence involves the multidimensional integral ( eq .( [ eq : z ] ) ) and thus presents a challenging numerical task .standard techniques like thermodynamic integration are extremely computationally expensive which makes evidence evaluation typically at least an order of magnitude more costly than parameter estimation .some fast approximate methods have been used for evidence evaluation , such as treating the posterior as a multivariate gaussian centred at its peak ( see , for example , ) , but this approximation is clearly a poor one for highly non - gaussian and multi modal posteriors .various alternative information criteria for model selection are discussed in , but the evidence remains the preferred method .nested sampling is a monte carlo method targetted at the efficient calculation of the evidence , but also produces posterior inferences as a by - product .it calculates the evidence by transforming the multi dimensional evidence integral into a one dimensional integral that is easy to evaluate numerically .this is accomplished by defining the prior volume as , so that where the integral extends over the region(s ) of parameter space contained within the iso - likelihood contour .the evidence integral , eq . ( [ eq : z ] ) , can then be written as where , the inverse of eq .( [ eq : xdef ] ) , is a monotonically decreasing function of .thus , if one can evaluate the likelihoods , where is a sequence of decreasing values , as shown schematically in fig .[ fig : ns ] , the evidence can be approximated numerically using standard quadrature methods as a weighted sum where the weights for the simple trapezium rule are given by .an example of a posterior in two dimensions and its associated function is shown in fig .[ fig : ns ] .the summation in eq .( [ eq : ns_sum ] ) is performed as follows .the iteration counter is first set to and ` active ' ( or ` live ' ) samples are drawn from the full prior , so the initial prior volume is .the samples are then sorted in order of their likelihood and the smallest ( with likelihood ) is removed from the active set ( hence becoming ` inactive ' ) and replaced by a point drawn from the prior subject to the constraint that the point has a likelihood .the corresponding prior volume contained within the iso - likelihood contour associated with the new live point will be a random variable given by , where follows the distribution ( i.e. , the probability distribution for the largest of samples drawn uniformly from the interval ] , so these results suggest we would be able to detect any source drawn from the mldc prior .the sine - gaussian bursts require a slightly higher snr for detection , of , but this increase in threshold is only of the order of in snr .the sine - gaussian signals are in general much simpler in form than the cosmic strings and therefore it is perhaps unsurprising that it requires a higher snr to distinguish them from noise . in this case , it was the sources with highest frequency , , that were most difficult to detect .however , since the nyquist frequency of the data sets was , the difficulty of detection may have arisen because the signal was partially out of band . 
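to make the evidence computation summarised earlier in this section concrete , here is a minimal nested - sampling loop implementing the weighted sum of eq . ( [ eq : ns_sum ] ) with a simple rectangle rule and $x_i = e^{-i/n}$ . the toy likelihood , prior box and rejection - sampling replacement step are our own choices ; multinest replaces the rejection step with ellipsoidal sampling .

```python
import math
import random

def log_add(a, b):
    """log(exp(a) + exp(b)) without overflow."""
    if a < b:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))

def log_like(x, y):
    # unit 2-D Gaussian likelihood; with a uniform prior over [-5, 5]^2
    # the evidence is analytically ~ 1/100, i.e. log Z ~ -4.61
    return -0.5 * (x * x + y * y) - math.log(2.0 * math.pi)

def prior_draw():
    return random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)

random.seed(1)
N = 300                                 # number of live points
live = [prior_draw() for _ in range(N)]
log_L = [log_like(x, y) for x, y in live]
log_Z = float("-inf")
X_prev = 1.0                            # remaining prior volume

for i in range(1, 2001):
    worst = min(range(N), key=lambda k: log_L[k])
    X_i = math.exp(-i / N)              # expected prior volume after i shrinkages
    log_Z = log_add(log_Z, log_L[worst] + math.log(X_prev - X_i))
    X_prev = X_i
    while True:                         # rejection-sample a replacement point
        x, y = prior_draw()
        if log_like(x, y) > log_L[worst]:
            live[worst] = (x, y)
            log_L[worst] = log_like(x, y)
            break

# contribution of the remaining live points
log_L_mean = math.log(sum(math.exp(l) for l in log_L) / N)
log_Z = log_add(log_Z, log_L_mean + math.log(X_prev))
print(f"log Z = {log_Z:.2f}  (analytic value ~ -4.61)")
```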
[ fig . : detection results for the cosmic string sources ( see table [ tab : challpar ] ) . ] [ fig . : as the previous figure , but now showing results for nine different sine - gaussian signals , with frequency and width as shown . ] to explore model selection , we used multinest to search the same data sets described above , but now using the alternative model , i.e. , we searched the cosmic string data sets using the sine - gaussian likelihood and vice versa . for sufficiently high snr , in both cases the alternative model was able to successfully detect a signal in the data set . typically , the best - fit sine - gaussian signal to a cosmic string source has low and a frequency that matches the two lobes of the cosmic string burst . the parameter sets the number of cycles in the sine - gaussian wave packet , and so a sine - gaussian with most closely resembles a cosmic string event , which typically has two cycles . this is illustrated for a typical case in fig . [ fig : cosstringrec ] . similarly , the best - fit cosmic string source to a sine - gaussian signal matches the central two peaks of the sine - gaussian waveform as well as possible . a typical case is shown in fig . [ fig : sinegaussrec ] . [ fig . [ fig : sinegaussrec ] : as fig . [ fig : cosstringrec ] , but showing a typical confusion example for searches of the sine - gaussian data sets : the injected sine - gaussian signal compared to the best - fit signals recovered by multinest using both the cosmic string and the sine - gaussian models . ] from the results of these searches it is possible to construct the evidence ratio of the two models for each of the data sets . in fig . [ fig : compmodcs ] we show the ratio of the bayesian evidence for the cosmic string model to that of the sine - gaussian model when searching the cosmic string data sets . we see that the evidence ratio starts to significantly favour the true model , i.e. , the cosmic string , at an injected snr of , which is the point at which we first start to be able to detect the cosmic string burst at all . for the two low frequency sources , training source 1 and blind source 2 , the evidence ratio only starts to favour the true model at snr , but again this is the point at which we are first able to detect the source . we conclude that when a cosmic string burst is loud enough to be detected , the evidence clearly favours the interpretation of the event as a cosmic string burst . in fig . [ fig : compmodsg ] we show the ratio of the evidence of the sine - gaussian model to that of the cosmic string model when searching the data sets containing sine - gaussian signals . the conclusions are broadly the same as for the cosmic string case . we require a slightly higher snr in order to correctly choose the sine - gaussian model , but this just reflects the fact that we need a somewhat higher snr to detect the sine - gaussians in the first place . the only case for which the evidence ratio does not begin to favour the sine - gaussian model at the point where the source becomes detectable is the case with and . this is a sine - gaussian signal with only two smooth oscillations , and so it does look rather like a low frequency cosmic string event . even in that case , the evidence begins clearly to favour the correct model for snrs of and higher .
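for reference , the sketch below generates illustrative time - domain versions of the two burst families being compared : a standard sine - gaussian , and a cusp - like burst synthesised from the $|f|^{-4/3}$ power - law amplitude spectrum with a high - frequency cutoff . normalisations and parameter values are placeholders , not those of the mldc sources .

```python
import numpy as np

# Illustrative time-domain sketches of the two burst families.  The
# sine-Gaussian follows the standard LIGO-style form; the cusp is built
# from an |f|^(-4/3) spectrum with an exponential cutoff above f_max.
# All parameter values are placeholders.

t = np.arange(-8.0, 8.0, 1.0 / 64)  # s

def sine_gaussian(t, f0=0.5, Q=6.0, A=1.0):
    tau = Q / (2 * np.pi * f0)
    return A * np.exp(-(t / tau) ** 2) * np.sin(2 * np.pi * f0 * t)

def cusp(t, f_max=0.5, f_low=0.05, A=1.0):
    f = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    amp = np.zeros_like(f)
    band = f > f_low
    amp[band] = f[band] ** (-4.0 / 3.0) * np.exp(
        -np.maximum(f[band] - f_max, 0.0) / f_max)
    h = np.fft.irfft(amp, n=t.size)
    h = np.roll(h, t.size // 2)      # centre the burst in the time window
    return A * h / np.abs(h).max()

print(sine_gaussian(t).max(), cusp(t).max())
```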
[ fig . [ fig : compmodsg ] : as the previous figure , but now showing the ratio of the bayesian evidence for the sine - gaussian model to that of the cosmic string model when searching a data set containing a sine - gaussian burst source . ] we have considered the use of the multi - modal nested sampling algorithm multinest for detection and characterisation of cosmic string burst sources in lisa data . as a search tool , the algorithm was able successfully to find the three cosmic string bursts that were present in the mldc challenge data set . these sources , and the five sources in the mldc training data , were correctly identified in the sense that the full signal - to - noise ratio of the injected source was recovered , and a posterior distribution for the parameters obtained . the maximum likelihood and maximum a - posteriori parameters were not particularly close to the true parameters of the injected signals , but this was a consequence of the intrinsic degeneracies in the cosmic string model parameter space , and in all cases the true parameters were consistent with the recovered posterior distributions . in controlled studies , we found that the snr threshold required for detection of the cosmic string bursts was , depending on the burst parameters . bursts with a low break - frequency require a higher snr to detect than those with high break frequencies . we also explored the detection of sine - gaussian bursts and in that case the snr required for detection was slightly higher , being typically , with sources having frequency close to nyquist being more difficult to detect . multinest is designed to evaluate the evidence of the data under a certain hypothesis , and this can be used to compare possible models for the burst sources . lisa may detect bursts from several different sources , and it is important for scientific interpretation that the nature of the burst be correctly identified . we used the bayesian evidence as a tool to choose between two different models for a lisa burst source : the cosmic string model and the sine - gaussian model , the latter chosen to represent a generic burst . the bayesian evidence works very well as a discriminator between these two models . the evidence ratio begins to clearly favour the correct model over the alternative at the same snr at which the sources become loud enough to detect in the first place . the usefulness of multinest as a search tool in this problem is a further illustration of the potential utility of this algorithm for lisa data analysis , as previously demonstrated in a search for non - spinning smbh binaries . other algorithms based on markov chain monte carlo techniques have also been applied to the search for cosmic strings . both approaches performed equally well as search tools in the last round of the mldc . we are now exploring the application of multinest to searches for other lisa sources , including spinning massive black hole binaries and extreme - mass - ratio inspirals . multinest was not designed primarily as a search algorithm , but as a tool for evidence evaluation , and this work has demonstrated the utility of the bayesian evidence as a tool for model selection in a lisa context . other problems where the evidence ratio approach could be applied include choosing between relativity and alternative theories of gravity as explanations for the gravitational waves observed by lisa , or choosing between different models for a gravitational wave background present in the lisa data set . the bayesian evidence was previously used in a ligo context as a tool to choose between
alternative theories of gravity and in a lisa context to distinguish a data set containing a source from one containing purely instrumental noise . multinest provides a more efficient way to compute the evidence and this should be explored in more detail in the future . in the context of interpretation of lisa burst events , what we have considered here is only part of the picture . we have shown that we are able to correctly choose between two particular models for a burst , and this can easily be extended to include other burst models . however , lisa might also detect bursts from unmodelled sources . in that case , algorithms such as multinest which rely on matched filtering would find the best fit parameters within the model space , but a higher intrinsic snr of the source would be required for detection . in such a situation , we would like to be able to say that the source was probably not from a model of a particular type , e.g. , not a cosmic string burst . there are several clues which would provide an indication that this was the case . the sine - gaussian model is sufficiently generic that we would expect it , in general , to provide a better match to unmodelled bursts than the cosmic string model , which has a very specific form . therefore , we could say that if the evidence ratio favoured the cosmic string model over the sine - gaussian model it was highly likely that the burst was in fact a cosmic string and not something else . similarly , if we found that several of the alternative models had almost equal evidence , but the snr was quite high , it would be indicative that the burst was not described by any of the models . we have seen that at relatively moderate snrs , when the signal is described by one of the models , the evidence clearly favours the true model over an alternative . if we found that two models gave almost equally good descriptions of the source , it would suggest that the burst was not fully described by either of them . a third clue would come from the shape of the posterior for the source parameters . the cosmic string waveform space contains many degeneracies , but these can be characterised theoretically for a given choice of source parameters . if the signal was not from a cosmic string , we might find that the structure of the posterior was modified . finally , some techniques have been developed for the bayesian reconstruction of generic bursts which could also be applied in a lisa context . the usefulness of these various approaches can be explored further by analysing data sets into which numerical supernova burst waveforms have been injected . while the necessary mass of the progenitor is probably unphysically high for a supernova to produce a burst in the lisa frequency band , such waveforms provide examples of unmodelled burst signals on which to test analysis techniques . the final lisa analysis will employ a family of burst models to characterize any detected events . the work described here demonstrates that the bayesian evidence will be a useful tool for choosing between such models , and multinest is a useful tool for computing those evidences . this work was performed using the darwin supercomputer of the university of cambridge high performance computing service ( http://www.hpc.cam.ac.uk/ ) , provided by dell inc .
using strategic research infrastructure funding from the higher education funding council for england . the authors would like to thank dr . stuart rankin for computational assistance . ff is supported by the trinity hall research fellowship . jg s work is supported by the royal society . pg is supported by the gates cambridge trust . danzmann k et al . , _ lisa pre - phase a report _ , max - planck - institut für quantenoptik , report mpq 233 ( 1998 ) . babak s et al . , * 25 * 184026 ( 2008 ) . gair j r , mandel i & wen l , * 25 * 184031 ( 2008 ) . cornish n j , preprint arxiv:0804.3323 ( 2008 ) . gair j r , porter e k , babak s & barack l , * 25 * 184030 ( 2008 ) . babak s , gair j r & porter e k , * 26 * 135004 ( 2009 ) . feroz f , gair j r , hobson m p & porter e k , class . quantum grav . * 26 * 215003 ( 2009 ) . feroz f , hobson m p , * 384 * 449 ( 2008 ) . feroz f , hobson m p & bridges m , * 398 * 1601 ( 2009 ) . skilling j , 2004 , american institute of physics conference series , * 735 * 395 ( 2004 ) . feroz f , marshall p j & hobson m p , preprint arxiv:0810.0781 ( 2008 ) . feroz f , hobson m p , zwart j t l , saunders r d e & grainge k j b , * 398 * 2049 ( 2009 ) . feroz f , allanach b c , hobson m p , abdus salam s s , trotta r , weber a m , journal of high energy physics , 10 , 64 ( 2008 ) . trotta r , feroz f , hobson m p , roszkowski l , ruiz de austri r , 2008 , journal of high energy physics , 12 , 24 . veitch j & vecchio a , * 25 * 184010 ( 2008 ) . littenberg t b & cornish n j , 063007 ( 2009 ) . blackburn l et al . , * 22 * s1293 ( 2005 ) . mackay d j c , 2003 , information theory , inference and learning algorithms , pp . 640 , isbn 0521642981 ( cambridge : university press ) . ó ruanaidh j , fitzgerald w , 1996 , numerical bayesian methods applied to signal processing ( new york : springer verlag ) . hobson m p , bridle s l , lahav o , 2002 , * 335 * , 377 . liddle a r , lett . * 377 * l74 ( 2007 ) . damour t & vilenkin a , 064008 ( 2001 ) . siemens x & olum k d , 085017 ( 2003 ) . key j s & cornish n j , 043014 ( 2009 ) . rubbo l j , cornish n j & poujade o , 082003 ( 2004 ) . tinto m & dhurandhar s v , 2005 , _ living rev . relativity _ 8 , 4 . [ online article ] : cited on 15/10/2009 , http://www.livingreviews.org/lrr-2005-4 . helstrom c w , _ statistical theory of signal detection _ ( london : pergamon ) ( 1968 ) . owen b j 6749 ( 1996 ) . röver c , bizouard m a , christensen n , dimmelmeier h , siong heng i & meyer r , 102004 ( 2009 ) .
|
we consider the problem of characterisation of burst sources detected with the laser interferometer space antenna ( lisa ) using the multi - modal nested sampling algorithm , multinest . we use multinest as a tool to search for modelled bursts from cosmic string cusps , and compute the bayesian evidence associated with the cosmic string model . as an alternative burst model , we consider sine - gaussian burst signals , and show how the evidence ratio can be used to choose between these two alternatives . we present results from an application of multinest to the last round of the mock lisa data challenge , in which we were able to successfully detect and characterise all three of the cosmic string burst sources present in the release data set . we also present results of independent trials and show that multinest can detect cosmic string signals with signal - to - noise ratio ( snr ) as low as and sine - gaussian signals with snr as low as . in both cases , we show that the threshold at which the sources become detectable coincides with the snr at which the evidence ratio begins to favour the correct model over the alternative .
|
genetics is concerned with the physical characteristics of organisms that are passed on from one organism to another through the use of deoxyribonucleic acid ( dna ) , consisting of a sequence of nucleotides . the nucleotides are the chemical bases adenine , thymine , cytosine and guanine , denoted using the alphabet $\{a , t , c , g\}$ . those on one strand are paired in a complementary fashion with those on the other strand , where adenine matches with thymine , and guanine with cytosine . groups of three bases are called codons , and these encode the twenty amino acids that combine to form proteins , the building blocks of life . in a nutshell , the central dogma of molecular biology states that `` dna makes rna makes protein '' . this is encapsulated in figure [ dogmafigure ] . the dna is transcribed into complementary messenger ribonucleic acid ( mrna ) . in rnas , the alphabet is $\{a , u , c , g\}$ , where uracil plays the same role that thymine does in dna , as it pairs with adenine . sections of the mrna that do not code for proteins are removed , and a `` poly - a tail '' ( a sequence composed entirely of adenine bases ) is added to ( chemically ) stabilise the sequence . the mrna then acts as a template for protein synthesis . transfer rnas ( trnas ) bind to an amino acid on one end , and a complementary set of three bases on the mrna template . a 1d sequence of amino acids forms and is then detached from the trnas and folds into a 3d structure . this sometimes occurs by itself and sometimes with the aid of other proteins , either immediately or at a later date in the life of the cell . there are several key areas in which mathematical principles underlie , influence , and can provide information about genetic structures . the key questions that these principles can help answer are :
* why do we have four bases , a triplet coding and twenty amino acids ?
* why do we observe the particular assignment of triplets to amino acids that we do ?
* how do new gene sequences arise , and how do they spread in a population ?
* how can we analyse the sequences that arise ?
some mathematically - based answers are discussed in the remainder of this paper . the following is a summary of the work of soto and toh , who took a mathematical approach to the question of why four bases , a triplet coding , and 20 amino acids are used , based on the assumption that nature will , over evolutionary time , find a solution to the problem that minimises the amount of cell machinery . it also assumes that the machinery is not unlike that used by computer memory chips to decode . this is not a bad assumption , but it leaves out chemical tricks that the trnas can use . i also use the fact that optimal solutions , since they have an advantage in evolutionary terms , spread in a population , as i explore later . the main argument of soto and toh is as follows : firstly , they define the maximum number of amino acids as $m = b^{p}$ , where $b$ is the number of possible bases ( symbols of length 1 ) and $p$ the number of positions .
for example , the amino acid coding used in all living things has $b = 4$ bases and $p = 3$ positions , a triplet code . this gives a total of $4^{3} = 64$ possible amino acids . for the assumptions above , it turns out the amount of `` hardware '' , or cell machinery , is proportional to the number of bases times the number of positions , which can be written as $h \propto b \, p$ , where $b$ and $p$ are as defined above . it also turns out that to minimise the amount of hardware , one can write this number of amino acids as $m = e^{p}$ , where $e \approx 2.718$ is the base of the natural logarithm , which describes many growth and decay processes that occur in the natural world , and $p$ is the number of positions . so we need to have the number of bases close to $e$ , thus optimising the number of positions for a given $m$ by setting $p = \ln m$ . then we can find a semi - optimal integer $p^{\ast}$ by rounding $\ln m$ , where $p^{\ast}$ is the actual number of positions used , resulting in a degeneracy between the $b^{p^{\ast}}$ possible codons and the $m$ amino acids , where $b$ , the number of actual bases used , is the minimum ( integer ) possible . then the actual amount of hardware used is $b \, p^{\ast}$ , and we write the difference between this amount of hardware and the optimal , $e \ln m$ , as $\delta = b \, p^{\ast} - e \ln m$ , where $\delta$ is the difference in `` hardware '' between the actual and optimal solution . this is always greater than zero , as we can approach but never achieve the minimal amount of `` hardware '' ( since this would require a non - integral number of bases ) . if we set the derivative , or rate of change , of $\delta$ to zero , this allows us to find the optimal solution for the number of amino acids for a fixed number of base positions . a graph of $\delta$ is shown in figure [ fig : delta ] , showing the minima for one , two and three positions occurring at three , seven , and 20 . this assumes four bases are used ; the actual minima , for the best possible choice of number of bases , are shown in table [ table : minima ] , again indicating that 20 amino acids is the optimal number . this table shows the optimal number of amino acids for 1 - 4 base positions , and the corresponding ( minimal ) difference between the actual coding and the theoretical minimum .
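the minima quoted above are easy to verify numerically : for each number of positions $p$ , the snippet below minimises $\delta(m) = p \, m^{1/p} - e \ln m$ over integer $m$ , recovering the optima at 3 , 7 and 20 amino acids . the explicit form of $\delta$ used here is our reading of the argument above .

```python
import math

# Brute-force check of the hardware-minimisation argument: for each
# number of positions p, find the number of amino acids M minimising
# delta(M) = p * M^(1/p) - e * ln(M), the gap between the hardware cost
# and the theoretical optimum e * ln(M).

for p in (1, 2, 3):
    best_M = min(range(2, 100),
                 key=lambda M: p * M ** (1.0 / p) - math.e * math.log(M))
    print(f"p = {p}: delta minimised at M = {best_M}")
# expected output: M = 3, 7, 20 -- matching figure [fig:delta]
```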
[cols="^,^,^,^",options="header " , ] [ table : gtscores ]in this section i will first introduce the topic of entropy , and then discuss how it applies to the introns , the parts of genes that are cut out of the transcribed mrna sequence template before the protein is made .entropy is also discussed later on , as it can also be used to analyse the mathematical properties of existing sequences .entropy is a measure of the amount of order or disorder in a sequence , which can be thought of as the information ( ignoring context ) .the mathematical formula is where denotes different symbols from the set of symbols in a sequence , , and the is the probability of finding a symbol , or simply the number of times it occurs divided by the total number of symbols in the sequence .for example , the sequence has , , and thus has entropy bits ( the same bits that computers use ) of information .a related topic to the shannon entropy is chaitin - kolmogorov entropy .this is the `` algorithmic '' entropy , that is defined in terms of the shortest computer program that could reproduce a given sequence .this is related to the shannon entropy ( ideally it should approach , or get close to , the measure of the shannon entropy ) .we can consider the chaitin - kolmogorov entropy as being like a self - extracting zip ( computer ) file : the data is compressed , and a short program is attached which can then decompress the compressed data when the self - extracting file is run .i show below that this is similar to what occurs in introns entropy can enlighten us on two key things : evolutionary advantages for introns , and also on patterns found in specific existing genes .the former is discussed here , and the latter is discussed in the following section . if we write consider each protein as composed of distinct functional modules ( true for many , but not all proteins ) then we often find other proteins containing the same modules .if we can write these alternative proteins as a single gene , with alternative splices , then we can increase the shannon entropy , since there is less redundancy ( and thus the probabilities of finding various bases are more even ) .this also increasess the chaitin - kolomogorov entropy , if we can use this alternative splicing a lot , in comparison to the extra genes we need to encode for this alternative splicing machinery an `` algorithm '' to unpack the alternative splices from a single gene . in general ,if the entropy of a system increases , the complexity increases ( not always true since a true random signal has a very low complexity ) , and this leads to increased adaptability ( but trades off reliability ) . the need to have minimal machinery here again guides us as to the evolutionary solution found .if we have some systematic way of marking where these modules , or exons , start and stop in genes , then we can use the same set of cellular machinery repeatedly .this then allows a greater degree of freedom in terms of the instructions that can be coded for , since we can include non-(protein-)coding instructions in these introns . 
as a very simple example of the evolutionary flexibility that introns provide , it has been shown that increasing the intron length can decrease the probability of including the exon immediately after that intron ( or , in other words , the final amount of protein containing it ) . mathematics not only underpins genetic structures but it can also be used to analyse genetic structures in existing organisms . the following is an excerpt from my paper on using mutual information to analyse dna sequences . mutual information is like the shannon information above , except that it applies to two sequences : it measures the information shared by two sequences , say $x$ and $y$ , the overlap that would be double counted if their individual entropies were simply added . the mathematical formula is $i(x ; y) = h(x) + h(y) - h(x , y)$ , where $h$ is the shannon entropy defined above in eq . [ eqn : shannonentropy ] . a mathematical approach for showing the existence of long - range correlations in dna is to use the mutual information function , as given in eq . [ mutualinfo ] below . this approach has been shown to distinguish between coding and non - coding regions . we explore the use of the mutual information function given in eq . [ mutualinfo ] : $i(k) = \sum_{i , j} p_{ij}(k) \log_{2} \frac{p_{ij}(k)}{p_{i} p_{j}}$ for symbols $i , j$ ( in the case of dna , $i , j \in \{a , c , g , t\}$ ) . $p_{ij}(k)$ is the probability that symbols $i$ and $j$ are found a distance $k$ apart . this is related to the correlation function in eq . [ correlation ] : $\gamma(k) = \sum_{i , j} x_{i} x_{j} \left[ p_{ij}(k) - p_{i} p_{j} \right]$ , where $x_{i}$ and $x_{j}$ are numerical representations of symbols $i$ and $j$ . as discussed by li , the fact that we are working with a finite sequence means that this overestimates the true $i(k)$ by approximately $(q - 1)^{2} / (2 n \ln 2)$ , where $q$ is the number of symbols ( for dna this is always 4 ) and $n$ is the sequence length . an example of applying this method to a real sequence of ( mouse ) dna is shown in figure [ miplots : real ] , clearly showing the existence of long - range correlations . it is not altogether clear why these correlations exist across proteins ; they may be due to variants of functional modules , strung together to make a protein , or to interesting structures in introns . [ figure [ miplots : real ] : the mutual information of eq . [ mutualinfo ] against base distance for the sequence of the map kinase - activated protein kinase 2 gene from _ mus musculus _ ( in plain english : a mouse protein ) , shown in a darker line style , compared with the set of 100 randomized sequences of the same base distribution , the lighter band . the mutual information of the map kinase gene mostly sits above the `` noise floor '' of the randomized sequences , in which the correlations have been destroyed . ] mathematics presents us with powerful tools , such as entropy and game theory , that enlighten us as to what sort of genetic structures exist , how they evolve , and how we can analyse them . in particular , i have shown mathematical arguments for :
* why four bases , a triplet code , and 20 amino acids are used ,
* why the triplets code for the 20 amino acids ( and start and stop codons ) in the way they do ,
* why introns are expected to evolve , and how they can be used to give increased flexibility ,
* how optimal solutions to evolutionary problems spread in a population , and
* how to analyse genetic structures .
wang and j. p. lee . searching algorithms of the optimal radix of exponential bidirectional associative memory . in _ ieee international conference on neural networks _ , volume 2 of _ ieee world congress on computational intelligence _ , pages 1137 - 1142 . ieee , ieee press , jun 1994 .
|
many people are familiar with the physico - chemical properties of gene sequences . in this paper i present a mathematical perspective : how do mathematical principles such as information theory , coding theory , and combinatorics influence the beginnings of life and the formation of the genetic codes we observe today ? what constraints on possible life forms are imposed by information - theoretical concepts ? further , i detail how mathematical principles can help us to analyse the genetic sequences we observe in the world today .
|
recently , there has been a renewed interest in optimal control problems for diffusions of mean - field type , where the performance functionals , drift and diffusion coefficients depend not only on the state and the control but also on the probability distribution of the state - control pair . most formulations of mean - field type control in the literature have been of risk - neutral type , where the performance functionals are the expected values of stage - additive payoff functions . not all behavior , however , can be captured by risk - neutral mean - field type controls . one way of capturing risk - averse and risk - seeking behaviors is by exponentiating the performance functional before expectation ( see ) . a stochastic maximum principle ( smp ) for risk - sensitive optimal control problems for markov diffusion processes with an exponential - of - integral performance functional was elegantly derived in using the relationship between the smp and the dynamic programming principle ( dpp ) , which expresses the first order adjoint process as the gradient of the value - function of the underlying control problem . this relationship holds only when the value - function is smooth ( see assumption ( b4 ) in ) . the approach of was widely used and extended to jump processes in and , but still under this smoothness assumption . however , in many cases of interest , the value function is at best only continuous . moreover , the relationship between the smp and the dpp does not hold for non - markovian dynamics and for mean - field type control problems , where the bellman optimality principle does not hold . this calls for the need to find a risk - sensitive smp for these cases . the only paper that we are aware of which deals with risk - sensitive optimal control in a mean - field context is . therein , the authors derive a verification theorem for a risk - sensitive mean - field game whose underlying dynamics is a markov diffusion , using a matching argument between a system of hamilton - jacobi - bellman ( hjb ) equations and the fokker - planck equation . this matching argument freezes the mean - field coupling in the dynamics , which yields a standard risk - sensitive hjb equation for the value - function . the mean - field coupling is then retrieved through the fokker - planck equation satisfied by the marginal law of the optimal state . our contribution can be summarized as follows . we establish a stochastic maximum principle for a class of risk - sensitive mean - field type control problems where the distribution enters only through _ the mean of the state process _ . this means that the drift , diffusion , running cost and terminal cost functions depend on the state , the control and the mean of the state . our work extends the results of to risk - sensitive control problems for dynamics that are non - markovian and of mean - field type . our derivation of the smp does not require any relationship between the first - order adjoint process and a value - function of an underlying control problem . using the smp derived in , our approach can easily be extended to the case where the mean - field coupling is in terms of the mean of the state and control processes . to the best of our knowledge , the risk - sensitive maximum principle for mean - field type controls has not been established in earlier work ; it is entirely new and fundamentally different from the existing results in the risk - neutral mean - field case . the paper is organized as follows .
in section [ sec : results ] , we present the model and state the main result . in section [ sec : intermediate ] , we establish a risk - sensitive smp , based on the risk - neutral smp of buckdahn _ et al . _ . in section [ sec : mainresult ] , we establish the risk - sensitive smp . in section [ sec : example ] we apply the risk - sensitive smp to the linear - exponential - quadratic setup . section [ sec : conclusion ] concludes the paper . to streamline the presentation , we only consider the one - dimensional case . the extension to the multidimensional case is by now straightforward . let $t > 0$ be a fixed time horizon and $(\omega , \mathcal{f} , \{\mathcal{f}_{t}\}_{t} , p)$ be a given filtered probability space on which a one - dimensional standard brownian motion $b$ is given , and the filtration $\{\mathcal{f}_{t}\}_{t}$ is the natural filtration of $b$ augmented by the $p$ - null sets of $\mathcal{f}$ . we consider the stochastic control system : $$\left\{\begin{array}{l} dx^u(t)=b(t , x^u(t),e[x^u(t ) ] , u(t))dt+\sigma(t , x^u(t),e[x^u(t ) ] , u(t))db_t , \\ x^u(0)=x_0 , \end{array}\right.$$ where $b , \sigma : [0 , t]\times \dbr\times\dbr\times u\longrightarrow \dbr,\ t\in[0 , t],\ x\in\dbr,\ y\in \dbr,\ u \in u .$ an admissible control is an $\mathcal{f}_{t}$ - adapted and square - integrable process with values in a non - empty subset $u$ of $\dbr$ . we denote the set of all admissible controls by $\mathcal{u}$ . given $u(\cdot) \in \mathcal{u}$ , equation ( [ sdeu ] ) is an sde with random coefficients . the risk - sensitive cost functional associated with ( [ sdeu ] ) is given by $$j^{\theta}(u(\cdot)) = e\left[e^{\theta\left(\int_0^t f(t , x^u(t),e[x^u(t ) ] , u(t))\,dt+ h(x^u(t),e[x^u(t)])\right)}\right],$$ where $\theta$ is the risk - sensitivity index , $f : [0 , t]\times \dbr\times\dbr\times u\longrightarrow \dbr,\ \ h(x , y ) : \,\,\dbr\times\dbr\longrightarrow \dbr , \ t\in[0 , t],\ x\in \dbr,\ y\in \dbr,\ u \in u .$ any $\bar u(\cdot) \in \mathcal{u}$ satisfying [ rs - opt - u ] $j^{\theta}(\bar u(\cdot))=\inf_{u(\cdot) \in \mathcal{u}}j^{\theta}(u(\cdot))$ is called a risk - sensitive optimal control . the corresponding state process , solution of ( [ sdeu ] ) , is denoted by $\bar x(\cdot)$ . the optimal control problem we are concerned with is to characterize the pair $(\bar x(\cdot) , \bar u(\cdot))$ , solution of the problem ( [ rs - opt - u ] ) . let $\psi_t := \int_0^t f(t , x(t ) , e[x(t ) ] , u(t ) ) dt+h(x(t ) , e[x(t)])$ ; the risk - neutral functional $e[\psi_t]$ can be seen as a limit of the risk - sensitive functional when $\theta \to 0$ . note that the presence of the expectations ] and surely .
for loss functionals without running cost. then, we transform the intermediate first- and second-order adjoint processes into a simpler form. the mean-field type control problem ([rs-opt-u]) under the dynamics ([sdeu]) is equivalent to
$$\left\{\begin{array}{l}\displaystyle\inf_{u(\cdot)\in\mathcal U}\ E\exp\theta\left[h(x(T),E[x(T)])+\xi(T)\right],\\ \text{subject to}\\ dx(t)=b(t,x(t),E[x(t)],u(t))\,dt+\sigma(t,x(t),E[x(t)],u(t))\,dB_t,\\ d\xi(t)=f(t,x(t),E[x(t)],u(t))\,dt,\\ x(0)=x_0,\quad \xi(0)=0.\end{array}\right.$$
recall that
$$j^\theta(\bar u(\cdot))=E\exp\theta\Big[h(\bar x(T),E[\bar x(T)])+\int_0^T f(t,\bar x(t),E[\bar x(t)],\bar u(t))\,dt\Big].$$
under assumption [cond1], we may apply the smp for risk-neutral mean-field type control from (, theorem 2.1) to the augmented state dynamics $(x,\xi)$ to derive the first-order adjoint equation
$$\left\{\begin{array}{l} d\vec p(t)=-\{\,\cdots\}\,dt+\vec q(t)\,dB_t,\\[4pt] \vec p(T)=-\theta\,\Phi^{\theta}_T\begin{pmatrix} h_{x}(T)\\ 1\end{pmatrix}-\theta\,E\left[\Phi^{\theta}_T\begin{pmatrix} h_{y}(T)\\ 0\end{pmatrix}\right],\end{array}\right.$$
with
$$E\Big[\int_0^T\big(|\vec p(t)|^2+|\vec q(t)|^2\big)\,dt\Big]<\infty.\qquad\text{([rn-pq-ineq])}$$
let $h^\theta$ be the hamiltonian associated with the optimal state dynamics and the pair of adjoint processes:
$$h^\theta(t,\bar x(t),u,\vec p(t),\vec q(t)):=\left\langle\begin{pmatrix} b(t,\bar x(t),E[\bar x(t)],u)\\ f(t,\bar x(t),E[\bar x(t)],u)\end{pmatrix},\vec p(t)\right\rangle+\left\langle\begin{pmatrix}\sigma(t,\bar x(t),E[\bar x(t)],u)\\ 0\end{pmatrix},\vec q(t)\right\rangle,\qquad\text{([inter-ham])}$$
where $\langle\cdot,\cdot\rangle$ denotes the usual scalar product in $\mathbb R^2$. the dependence of the hamiltonian on $\theta$ stems from the dependence of the adjoint processes on $\theta$ through the end-condition in ([rn-firstad]). the second-order adjoint equation ([rn-secondad]) is a matrix-valued backward sde whose solution $(\vec P(t),\vec Q(t))$ satisfies
$$E\Big[\int_0^T\big(\|\vec P(t)\|^2+\|\vec Q(t)\|^2\big)\,dt\Big]<\infty,$$
where $\|\cdot\|$ denotes the norm of the corresponding matrices. we have the following. [rn-msp] let assumption [cond1] hold. if $(\bar x(\cdot),\bar u(\cdot))$ is an optimal solution of the risk-neutral control problem ([smp2]), then there are two pairs of $\mathcal F_t$-adapted processes $(\vec p,\vec q)$ and $(\vec P,\vec Q)$ that satisfy ([rn-firstad])-([rn-pq-ineq]) and ([rn-secondad])-([rn-pq-ineq]) respectively, such that the associated variational inequality holds for all $u\in U$, for almost every $t$ and $P$-almost surely. although the result of proposition [rn-msp] is a good smp for risk-sensitive mean-field type control, augmenting the state process with the second component $\xi$ yields a system of two adjoint equations that appears complicated to solve in concrete situations. in the mean-field free case, lim and zhou () elegantly solve this problem by suggesting a transformation of the adjoint processes in such a way as to get rid of the second component in ([rn-firstad]) and express the smp in terms of only one adjoint process, which solves a backward sde whose driver is quadratic in the second unknown, reminiscent of the risk-sensitive hamilton-jacobi-bellman equation (see and the references therein). the suggested transform uses a relationship between the smp and the dpp (valid only for markovian or feedback controls and in situations where the bellman principle is valid) which expresses the adjoint process as the gradient of the value-function associated with the control problem ([smp2]), provided that the value-function is smooth (see assumption (b4) in ), a condition that is often hard to verify in concrete situations. the value-function is in general _not_ smooth. furthermore, the approach developed in cannot be extended to general situations, such as non-markovian dynamics and mean-field type control problems, where the bellman principle does not hold.
a closer look at the method of lim and zhou () suggests, in fact, that it is enough to use a generic square-integrable martingale to transform the pair $(\vec p,\vec q)$ into the adjoint process, where the transformed process is still a square-integrable martingale, which would mean that its second component is equal to a constant. since the transformed pair is square-integrable, we get that this second component equals $-1$ for almost every $t$ and $P$-almost surely. this finishes the proof of theorem [main result]. the optimal control of a linear stochastic system driven by a brownian motion with a quadratic cost in the state and the control is probably the best-known solvable stochastic control problem in continuous time. to illustrate our approach, we consider the one-dimensional case with linear state dynamics and an exponential quadratic cost functional. it is well known that, in the absence of mean-field coupling, the optimal control is a linear feedback control whose feedback gain is obtained from the solution of a risk-sensitive riccati equation, which has an additional term when compared to the (classical) riccati equation of the quadratic cost problem. in the examples below we show that this feature is still valid in the lq risk-sensitive problem (with and without the mean-field coupling). we consider the linear-quadratic risk-sensitive control problem
$$\left\{\begin{array}{l}\displaystyle\inf_{u(\cdot)\in\mathcal U}\ E\exp\theta\left[\,\cdots\right],\\ \text{subject to}\\ dx(t)=(ax(t)+bu(t))\,dt+\sigma\,dB_t,\\ x(0)=x_0,\end{array}\right.$$
where $a$, $b$ and $\sigma$ are real constants. an admissible pair that satisfies the necessary conditions for optimality of theorem [main result] can be obtained by solving the following system of forward-backward sdes:
$$\left\{\begin{array}{l} d\bar x(t)=(a\bar x(t)+b\bar u(t))\,dt+\sigma\,dB_t,\\ dp(t)=-\{ap(t)+\ell(t)q(t)\}\,dt+q(t)\,dB_t,\\ dv^{\theta}(t)=\ell(t)v^{\theta}(t)\,dB_t,\qquad v^{\theta}(T)=\Phi^{\theta}(T),\\ \bar x(0)=x_0,\qquad p(T)=-\bar x(T),\end{array}\right.\qquad\text{([lq-ad])}$$
where $\ell(t)$ is the integrand in the martingale representation of $v^{\theta}$. in the mean-field version of this example, the first-order adjoint equation remains the same as ([lq-ad]), but the terminal condition becomes
$$p(T)=-\bar x(T)-E\big[\Phi^{\theta}_T\big],\qquad\text{([t-p])}$$
as illustrated in figure [figure1]. as $\theta$ becomes larger (as in figure [figure2]), there is an explosion of the solution. in this paper we established a peng-type stochastic maximum principle for risk-sensitive stochastic control of mean-field type, extending a previous result by lim and zhou. bahlali, k., djehiche, b. and mezerdi, b., on the stochastic maximum principle in optimal control of degenerate diffusions with lipschitz coefficients, _appl. math. optim._, 56(3), pp. 364-378, 2007. chighoub, f. and mezerdi, b., a stochastic maximum principle in mean-field optimal control problems for jump diffusions, _arab journal of mathematical sciences_, 19(2), july 2013, pp. 223-241. hafayed, m., a mean-field maximum principle for optimal control of forward-backward stochastic differential equations with poisson jump processes, _international journal of dynamics and control_, 1(4), december 2013, pp. 300-315. shen, y. and siu, t. k., the maximum principle for a jump-diffusion mean-field model and its application to the mean-variance problem, _nonlinear analysis: theory, methods and applications_, 86, july 2013, pp. 58-73.
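as a numerical companion to the linear-exponential-quadratic example above, here is a minimal python sketch (ours, with arbitrary constants and an illustrative quadratic cost, not the paper's exact functional). it integrates the controlled dynamics under a hypothetical linear feedback u = -k x by euler-maruyama and estimates the risk-sensitive cost by monte carlo; consistent with the discussion of figures [figure1] and [figure2], the estimate is well behaved for small theta and becomes unstable and explodes as theta grows. for the mean-field variant one could, for instance, replace the terminal term by a function of x(T) and the batch mean approximating E[x(T)].

```python
import numpy as np

rng = np.random.default_rng(0)

def lq_risk_sensitive_cost(theta, k=1.0, a=0.5, b=1.0, sig=1.0,
                           x0=1.0, T=1.0, n_steps=200, n_paths=50000):
    """monte carlo estimate of E exp(theta * [int (x^2+u^2)/2 dt + x(T)^2/2])
    under the linear feedback u = -k x (illustrative choices throughout)."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = -k * x
        cost += 0.5 * (x**2 + u**2) * dt
        # euler-maruyama step of dx = (a x + b u) dt + sig dB
        x += (a * x + b * u) * dt + sig * np.sqrt(dt) * rng.normal(size=n_paths)
    cost += 0.5 * x**2                      # terminal cost
    return np.exp(theta * cost).mean()      # J^theta = E exp(theta * [...])

for theta in (0.1, 0.5, 2.0):
    print(f"theta={theta:4.1f}  J^theta ~ {lq_risk_sensitive_cost(theta):.3e}")
```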
|
in this paper we study mean-field type control problems with risk-sensitive performance functionals. we establish a stochastic maximum principle (smp) for optimal control of stochastic differential equations (sdes) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. our result extends the risk-sensitive smp (without mean-field coupling) of lim and zhou (2005), derived for feedback (or markov) type optimal controls, to optimal control problems for non-markovian dynamics, which may be time-inconsistent in the sense that the bellman optimality principle does not hold. in our approach to the risk-sensitive smp, the smoothness assumption on the value-function imposed in lim and zhou (2005) need not be satisfied. for a general action space, a peng-type smp is derived, specifying the necessary conditions for optimality. two examples are carried out to illustrate the proposed risk-sensitive mean-field type smp under linear stochastic dynamics with an exponential quadratic cost function. explicit solutions are given for both the mean-field free and the mean-field models. *index terms.* time-inconsistent stochastic control, maximum principle, mean-field sde, risk-sensitive control, logarithmic transformation. *abbreviated title.* risk-sensitive control of sdes of mean-field type. *ams subject classification.* 93e20, 60h30, 60h10, 91b28.
|
reservoir computing (rc) is a set of methods for designing and training artificial recurrent neural networks that brings a drastic simplification of the system design. a typical reservoir is a randomly connected fixed network with arbitrary coupling coefficients between the input signal and the nodes. these parameters remain fixed and only the readout weights are optimised. this greatly simplifies the training process - that is, computing the coefficients of the readout layer - which often reduces to solving a system of linear equations. despite these simplifications, the rc approach can yield performance equal to, or even better than, other machine learning algorithms. the rc algorithm has been applied to speech and phoneme recognition, equalling other approaches, and won an international competition on financial time series prediction. optical computing has been investigated for decades, as photons propagate faster than electrons, without generating heat or magnetic interference, and thus promise higher bandwidth than conventional computers. the possibility of an optical implementation of reservoir computing was studied using numerical simulations in . a major breakthrough occurred at the end of 2011 and the beginning of 2012, when experimental implementations of reservoir computers with performance comparable to state-of-the-art digital implementations were reported. in quick succession appeared an electronic implementation, and then three opto-electronic implementations. since then, all-optical reservoir computers have been reported using as nonlinearity the saturable gain of a semiconductor optical amplifier, a semiconductor laser with delayed feedback, or the saturation of absorption, integrated on an optical chip, and based on a coherently driven passive optical cavity. the performance of a reservoir computer relies greatly on the training technique used to compute the readout weights. offline learning methods, used up to now in experimental implementations, provide good results, but become detrimental for real-time applications, as they require large amounts of data to be transferred from the experiment to the post-processing computer. this operation may take longer than the time it takes the reservoir to process the input sequence. moreover, offline training is only suited for time-independent tasks, which is not always the case in real-life applications. the alternative (and more biologically plausible) approach is to progressively adjust the readout weights using various online learning algorithms such as gradient descent, recursive least squares or reward-modulated hebbian learning. such procedures require minimal data storage and have the advantage of being able to deal with a variable task: should any parameters of the task be altered during the training phase, the reservoir computer would still be able to produce good results by properly adjusting the readout weights.
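as an illustration of the idea, the simplest such rule, written here in a generic notation of our own choosing rather than that of any specific reference, updates each readout weight in the direction that reduces the instantaneous squared error:
$$w_i(n+1)=w_i(n)+\lambda\,\big(d(n)-y(n)\big)\,x_i(n),\qquad y(n)=\sum_i w_i(n)\,x_i(n),$$
where $x_i(n)$ are the reservoir states, $y(n)$ is the reservoir output, $d(n)$ the target signal and $\lambda$ a (possibly time-varying) learning rate.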
in the present work we apply this online learning approach to an opto-electronic reservoir computer and show that our implementation is well suited for real-time data processing. the system is based on the opto-electronic reservoir introduced in , coupled to an fpga chip that implements the input and output layers. it generates the input sequence in real time, collects the reservoir states and computes optimal readout weights using a simple gradient descent algorithm. real-time generation of reservoir inputs allows the system to be trained and tested on an arbitrarily long input sequence, and the replacement of the personal computer by a dedicated fpga chip significantly reduces the experimental runtime. we apply our system to a specific real-world task: the equalisation of a nonlinear communication channel. wireless communications is by far the fastest-growing segment of the communications industry. the increasing demand for higher bandwidths requires pushing the signal amplifiers close to the saturation point, which, in turn, adds significant nonlinear distortions into the channel. these have to be compensated by a digital equaliser on the receiver side. the main bottleneck lies in the analog-to-digital converters (adcs), which have to follow the high bandwidth of the channel with sufficient resolution to sample the distorted signal correctly. current manufacturing techniques allow producing fast adcs with low resolution, or slow ones with high resolution; obtaining both is very costly. this is where analog equalisers become interesting, as they could equalise the signal before the adc and significantly reduce the required resolution of the converters, thus potentially cutting costs and power consumption. moreover, optical devices may outperform digital devices in terms of processing speed. it can, for instance, be shown that reservoir computing implementations can reach performance comparable to other digital algorithms (namely, the volterra filter) for the equalisation of a nonlinear satellite communication channel. our reservoir computer is used to equalise a simple wireless channel introduced in . this model is described by a simple set of equations (see section [subsec:cheq]) and can easily be implemented on the fpga chip. this task has also been extensively studied in the rc community, both numerically and experimentally. our system performs better than previously reported rc implementations on this task, and we report error rates up to two orders of magnitude lower than previous results. furthermore, we demonstrate the great advantage of online training, namely that it is suitable for solving non-stationary tasks, such as a variable wireless channel. this is particularly interesting for real-life applications, as physical communication channels vary depending on fluctuating environmental conditions. we show that even under such variable conditions, our system performs as well as in the stationary case. in previous work we programmed the simple gradient descent algorithm on an fpga chip to train a digital reservoir computer, and we have reported preliminary results on an online-trained physical reservoir computer. compared to the latter work, the experimental setup has been improved, the fpga design has been further optimised, and a new dedicated clock generation device is used.
as a consequence, the system is more stable and more efficient, and the reservoir size has been increased to 50 neurons (as in ). we also report what are, to the best of our knowledge, the lowest error rates ever obtained with a physical reservoir computer on the channel equalisation task. finally, we present a much more in-depth analysis of the time-dependent case. the paper is structured as follows. section [sec:basic] introduces the basic principles of reservoir computing, the channel equalisation task and the simple gradient descent algorithm. the experimental setup and the fpga design are outlined in sections [sec:expsetup] and [sec:design]. finally, the experimental results and the conclusion are presented in sections [sec:results] and [sec:ccl]. a typical reservoir computer is depicted in figure [fig:rc]. it contains a large number of internal variables evolving in discrete time, as given by
$$x_i(n+1)=f\Big(\sum_{j} a_{ij}\,x_j(n)+b_i\,u(n)\Big),$$
where $f$ is a nonlinear function, $u(n)$ is some external signal that is injected into the system, and $a_{ij}$ and $b_i$ are time-independent coefficients, drawn from some random distribution with zero mean, that determine the dynamics of the reservoir. the variances of these distributions are adjusted to obtain the best performance on the task considered. [figure [fig:rc] caption: the input signal is injected into a dynamical system composed of a large number of internal variables. the dynamics of the system is defined by the nonlinear function and the coefficients. the readout weights are trained to obtain an output signal, given by their linear combination with the reservoir states, as close as possible to the target signal.] the nonlinear function used here is the sine function, as in . to simplify the interconnection matrix, we exploit the ring topology proposed in , so that only first-neighbour nodes are connected. this architecture provides performance comparable to that obtained with complex interconnection matrices, as demonstrated numerically in and experimentally in .
under these circumstances we obtain the ring dynamics
$$x_0(n+1)=\sin\big(\alpha\,x_{N-1}(n)+\beta\,M_0\,u(n+1)\big),\qquad x_i(n+1)=\sin\big(\alpha\,x_{i-1}(n)+\beta\,M_i\,u(n+1)\big),\quad i=1,\dots,N-1,\qquad\text{([eq:rcevo2])}$$
where the parameters $\alpha$ and $\beta$ are used to adjust the feedback and the input signals, respectively, and $M_i$ is the input mask, drawn from a uniform distribution over a fixed interval (for ease of implementation on an fpga chip). noise amplitude values are chosen to produce the same signal-to-noise ratios as in , where gaussian noise was used. the reservoir computer has to restore the clean signal from the distorted noisy signal. the performance is measured in terms of wrongly reconstructed symbols, called the symbol error rate (ser). the results are presented in section [subsec:constch] and compared to a previous implementation based on the same opto-electronic setup. note that although the input signal has a symmetric symbol distribution, the output signal loses this property, with the symbols lying within an asymmetric interval. the decay rate is an integer, typically scanned over a few wide steps. the noise ratios were set to several pre-defined values, in order to compare our results with previous reports. the feedback attenuation was scanned finely within a narrow interval: lower values would allow cavity oscillations to disturb the reservoir states, while higher values would not provide enough feedback to the reservoir. table [tab:gdparams] contains the values of the parameters we used for the gradient descent algorithm (defined in section [subsec:gd]). [table [tab:gdparams]: gradient descent algorithm parameters.] this section presents the results of the different investigations outlined in sections [subsec:cheq] and [subsec:gd]. all results presented here were obtained with the experimental setup described in section [sec:expsetup]. figure [fig:oldvsfpga] presents the performance of our reservoir computer for different signal-to-noise ratios (snrs) of the wireless channel (green squares). we investigated realistic snr values for real-world channels such as lan and wi-fi. for each snr, the experiment was repeated 20 times with different random input masks. average sers are plotted on the graph, with error bars corresponding to the maximal and minimal values obtained with particular masks. we used several noise ratios, and also tested the performance on a noiseless channel, that is, with infinite snr. the rc performance was tested over one million symbols, and in the case of a noiseless channel the equaliser made zero errors over the whole test sequence with most input masks. the experimental parameters, such as the input gain and the feedback attenuation, were optimised independently for each input mask. figure [fig:servsparams] shows the dependence of the ser on these parameters. the plotted ser values are averaged over 10 random input masks. for this figure, we used data from a different experiment run with more scanned values. for each curve, the non-scanned parameter was set to its optimal value. the equaliser shows moderate dependence on both parameters, with a clear optimum for both the input gain and the feedback attenuation. we compare our results to those reported in , obtained with the same optoelectronic reservoir, trained offline (blue dots). for high noise levels, our results are similar to those in .
for low noise levels, the performance of our implementation is significantly better. note that the previously reported results are only rough estimations of the equaliser's performance, as the input sequence was limited by hardware. in our experiment the ser is estimated more precisely over one million input symbols. for the lowest noise level, we obtained an error rate well below the one reported in . one should remember that common error detection schemes, used in real-life applications, require a sufficiently low ser in order to be efficient. to the best of our knowledge, the results presented here are the lowest error rates ever obtained with a physical reservoir computer. comparable sers have been reported in , and a recently reported passive-cavity-based setup achieved a similarly low rate (a value limited by the use of a finite-length test sequence), but no lower error rates have been published so far. however, this is not the main achievement of this experiment. indeed, had it been possible to test on a longer sequence, it is possible that comparable sers would have been obtained. the strength of this setup resides in its adaptability to a changing environment, as will be shown in the following sections. [figure [fig:oldvsfpga] caption: for the noiseless channel, for most choices of input mask, the rc made no errors over the test sequence. blue dots show the results of the optoelectronic setup with offline training. for low noise levels, our system produces significantly lower error rates, and for noisy channels the results are similar. brown diamonds depict the sers obtained with the simplified version of the training algorithm (see section [subsubsec:simptrain]); the equalisation is less efficient than with the full algorithm, but the optimisation of the experimental parameters takes less time.] [figure [fig:servsparams] caption: dependence of the ser on the experimental parameters. average sers (over 10 random input masks) are plotted against the input gain (blue dots) and the feedback attenuation (green squares). the feedback attenuation has a narrow optimum; outside this region the ser deteriorates by roughly one order of magnitude. the input gain also shows a clear minimum.] the performance of the simplified training algorithm is shown in figure [fig:oldvsfpga] (brown dots). the equaliser was tested with 10 random input masks and one million input symbols. only three parameters were scanned during these experiments: the input gain, the feedback attenuation and the signal-to-noise ratio. the overall experimental runtime was significantly shorter: while an experiment with the full training algorithm would last for about 50 hours, these results were obtained in approximately 10 hours (the difference being due to the additional decay-rate values tested in the former case). for high noise levels the results of the two algorithms are close, and for low noise levels the simplified version yields slightly worse error rates. the performance is much worse in the noiseless case and depends strongly on the input mask: we notice a difference of almost two orders of magnitude between the best and the worst result. this performance loss is the price to pay for the simplified algorithm and the shorter experimental runtime.
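before turning to non-stationary channels, here is a minimal end-to-end simulation sketch in python of the pipeline described above. it is entirely our illustration, not the fpga code: the channel coefficients are the ones standard in the rc equalisation literature and are an assumption here, the ring reservoir implements the sine update of ([eq:rcevo2]), the readout is trained with the elementary gradient-descent rule, and all parameter values (reservoir size, alpha, beta, learning rate, snr) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel(d, snr_db=32.0):
    """nonlinear wireless channel: linear symbol mixing over 10 taps,
    a memoryless polynomial nonlinearity, and additive uniform noise."""
    n = len(d)
    dp = np.concatenate([np.zeros(7), d, np.zeros(2)])   # dp[k + 7] = d(k)
    taps = [(2, 0.08), (1, -0.12), (0, 1.0), (-1, 0.18), (-2, -0.1),
            (-3, 0.091), (-4, -0.05), (-5, 0.04), (-6, 0.03), (-7, 0.01)]
    q = np.zeros(n)
    for k in range(n):
        q[k] = sum(c * dp[k + 7 + off] for off, c in taps)
    u = q + 0.036 * q**2 - 0.011 * q**3
    noise = rng.uniform(-1, 1, n)
    noise *= np.sqrt(np.var(u) / (np.var(noise) * 10**(snr_db / 10)))
    return u + noise

def reservoir_states(u, n_nodes=50, alpha=0.9, beta=0.5):
    """ring reservoir x_i(n+1) = sin(alpha x_{i-1}(n) + beta M_i u(n+1));
    np.roll implements the wrap-around from node N-1 back to node 0."""
    mask = rng.uniform(-1, 1, n_nodes)
    x, states = np.zeros(n_nodes), np.empty((len(u), n_nodes))
    for n in range(len(u)):
        x = np.sin(alpha * np.roll(x, 1) + beta * mask * u[n])
        states[n] = x
    return states

def train_online(states, d, lam=0.005):
    """simple gradient descent on the instantaneous squared error."""
    w = np.zeros(states.shape[1])
    for n in range(len(d)):
        w += lam * (d[n] - w @ states[n]) * states[n]
    return w

symbols = np.array([-3.0, -1.0, 1.0, 3.0])
d = rng.choice(symbols, 20000)
X = reservoir_states(channel(d))
w = train_online(X[:15000], d[:15000])
y = X[15000:] @ w
decoded = symbols[np.argmin(np.abs(y[:, None] - symbols[None, :]), axis=1)]
print("ser =", np.mean(decoded != d[15000:]))
```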
besides the environmental conditions, the relative positions of the emitter and the receiver can have a significant impact on the properties of a wireless channel. a simple example is a receiver moving away from the transmitter, causing the channel to drift more or less slowly, depending on the relative speed of the receiver. here we show that our reservoir computer is capable of dealing with drifts on time scales of the order of a second. this time scale is in fact slow compared to those expected in real-life situations, but the setup could be sped up by several orders of magnitude, as will be shown in the next section. a drifting channel is a good example of a situation where training the reservoir online yields better results than offline. we have previously shown in numerical simulations that training a reservoir computer offline on a non-stationary channel results in an error rate ten times worse than with online training. we demonstrate here that an online-trained experimental reservoir computer performs well even on a drifting channel if the learning rate is set to a small non-zero value (see section [subsubsec:gdnonstationary]). at first, we investigated the relationship between the channel model coefficients and the lowest error rate achievable with our setup; that is, whether the equalisation performance would be better or worse if one of the numerical values in the channel equations were slightly changed. given the vast number of possibilities for varying the four channel parameters, we picked those that seemed most interesting and most significant. we thus tested the amplitude of the linear part, the amplitudes of the quadratic and cubic parts, and the memory of the impulse response. for each test, only one aspect of the channel was varied, and the other parameters were set to their default values (as in the channel equations). the results of these investigations are presented in the appendix. we then programmed these parameters to vary during the experiments in two different ways: a monotonic growth (or decay), and a periodic linear oscillation between two defined values. the results of these experiments are depicted in figure [fig:driftchan]. figure [fig:driftchan](a) shows the experimental results for the case of the linear-part amplitude monotonically decreasing. the blue curve presents the resulting ser with the learning rate set to zero, that is, with the training process stopped after the initial training symbols. the green curve depicts the error rate obtained with a small non-zero learning rate, so that the readout weights can be gradually adjusted as the channel drifts. note that while in the first experiment the ser grows very high, it remains much lower in the second case. the increasing error rate in the latter case is due to the decrease of the linear-part amplitude, resulting in a more complex channel. brown curves show the best possible error rate obtained with our setup for different values of this parameter, as presented in the appendix. as the parameter approaches its final value, the obtained error rate is the lowest possible for that value, as demonstrated in figure [subfig:p1ser]. this shows that the non-stationary version of the training algorithm allows a drifting channel to be equalised with the lowest error rate possible. [figure [fig:driftchan]: error rates for drifting channel parameters, panels (a)-(h).] figure [fig:driftchan](b) depicts the error rates obtained with the linear-part amplitude oscillating linearly between two values.
with the training stopped (blue curve), the error rate is low only when the parameter is around its starting value, and grows very high elsewhere. with a small non-zero learning rate, the obtained ser is always at the lowest value possible: even at the extreme point of the oscillation it stays close to the best performance for such a channel, illustrated by the brown curve. we obtained similar results for the other channel parameters, as shown in figures [fig:driftchan](c)-(d). letting the reservoir computer adapt the readout weights by keeping a small non-zero learning rate produces the lowest error rates possible for a given channel, while stopping the training results in quickly growing sers. figure [fig:swch] shows the error rate produced by our experiment in the case of a switching noiseless communication channel. the parameters of the channel are programmed to switch cyclically among several variants of the channel equations after a fixed number of symbols. every switch is followed by a steep increase of the ser, as the reservoir computer is no longer optimised for the channel it is equalising. the performance degradation is detected by the algorithm, causing the learning rate to be reset to its initial value, and the readout weights are re-trained to new optimal values. [figure [fig:swch] caption: ser produced by the fpga in the case of a switching channel. the value of the channel parameter (right axis, green curve) is modified periodically. each change in the channel is followed immediately by a steep increase of the ser. the learning rate (right axis, orange curve) is automatically reset to its initial value every time a performance degradation is detected, and then returns to its minimum value as the equaliser adjusts to the new channel, bringing the ser down to its asymptotic value. after each variation of the channel, the reservoir re-trains. the lowest error rate possible for the given channel is shown by the dashed brown curve.] for each value of the channel parameter, the reservoir computer is trained over a fixed number of symbols, and its performance is then evaluated over the remaining symbols. in the first case, the average ser matches the expected result; for the other two values, we computed average sers that are the best achievable for such parameter values according to our previous investigations (see figure [subfig:p1ser]). this shows that after each switch the readout weights are updated to new optimal values, producing the best error rate for the given channel. note that the current setup is rather slow for practical applications: its bandwidth is limited by the roundtrip time, and training the reservoir takes a correspondingly long time. however, it demonstrates the potential of such systems for the equalisation of non-stationary channels.
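the reset-and-decay behaviour of the learning rate described above can be summarised by a small control routine. the sketch below is a plausible reading of that logic, not the fpga implementation: the geometric decay schedule, the threshold test on a running error estimate and all constants are our assumptions.

```python
def update_learning_rate(lam, ser_estimate, ser_previous,
                         lam0=0.4, lam_min=0.001, k=0.999,
                         degradation_factor=5.0):
    """one step of a hypothetical adaptive learning-rate schedule:
    lam decays geometrically towards lam_min, and is reset to lam0
    whenever the running symbol-error-rate estimate degrades sharply."""
    if ser_estimate > degradation_factor * max(ser_previous, 1e-6):
        return lam0                      # channel switch detected: re-train
    return max(lam_min, k * lam)         # otherwise keep annealing
```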
for real-life applications, such as, for instance, wi-fi 802.11g, a much higher bandwidth would be required. this could be realised with a shorter fibre loop, resulting in a smaller delay; this would also decrease the training time and make the equaliser more suitable for realistic channel drifts. the speed limit of our setup is set by the bandwidth of the different components, and in particular of the adc and dac: with a faster loop and the same number of neurons, the reservoir states would have a shorter duration, and hence the adc and dac would need correspondingly higher bandwidths (such performance is readily available commercially). as an illustration of how a fast system would operate, we refer to the optical experiment in which information was injected into a reservoir at very high rates. in the present work we applied the online learning approach to training an opto-electronic reservoir computer. we programmed the simple gradient descent algorithm on an fpga chip and tested our system on the nonlinear channel equalisation task. we obtained error rates up to two orders of magnitude lower than previously reported rc implementations on the channel equalisation task, while significantly reducing the experimental runtime. we also demonstrated that our system is well suited for non-stationary tasks by equalising a drifting and a switching channel. in both cases, we obtained the lowest error rates possible with our setup. such flexibility is more complex to achieve with offline methods, and would require improving the algorithm by adding several computational steps. the online learning methods, on the other hand, need few modifications to successfully solve this task. moreover, in the case of a slowly drifting channel, the algorithm can be set to fine-tune the readout weights without performing a complete re-training of the reservoir, which would be hard to achieve with offline learning. this shows that the technique presented here is more suitable for real-life tasks with variable parameters. our realisation opens several new research directions. using the fpga to drive the opto-electronic reservoir gives more control over the experiment. such a system could, for instance, implement a full optimisation of the readout weights and the input mask, as suggested in . the real-time training makes it possible to feed the output signal back into the reservoir. this additional feedback would greatly enrich the dynamics of the system, allowing one to tackle new tasks such as pattern generation or chaotic series prediction. the high speed of dedicated electronics offers the opportunity to develop very fast, autonomous reservoir computers with ghz data rates. the present work thus paves the way towards autonomous, very-high-speed, fully analog reservoir computers with a wider range of possible applications. [sec:inflparams] figure [subfig:p1ser] shows the equalisation results for different values of the linear-part amplitude. we tested each value over 10 random input masks, with independent optimisation of the experimental parameters for each run. average values are presented on the plot, with error bars depicting the best and worst results obtained among different masks. the equaliser performance was tested on a sequence of one million inputs, and in several cases we obtained zero misclassified symbols. note that the observed increase of the ser with the reduction of the linear-part amplitude is natural, as the linear part contains the signal to be extracted.
when decreasing , not only the useful signal gets weaker , but the nonlinear distortion also becomes relatively more important .figures [ subfig : p2ser ] and [ subfig : p3ser ] present the dependence of the ser on parameters and , respectively .these parameters define the amplitude of the nonlinear distortion of the signal , and as they grow , the channel becomes more nonlinear and thus more difficult to equalise .the results of equalisations with different values of are shown in figure [ subfig : mmser ] , higher values of increase the temporal symbol mixing of the channel , hence worse results .we acknowledge financial support by interuniversity attraction poles program of the belgian science policy office under grant iap p7 - 35 `` photonics '' , by the fonds de la recherche scientifique frs - fnrs and by the action de la recherche concerte of the acadmie universitaire wallonie - bruxelles under grant auwb-2012 - 12/17-ulb9 .b. hammer , b. schrauwen , and j. j. steil , `` recent advances in efficient learning of recurrent networks , '' in _ proceedings of the european symposium on artificial neural networks _ , bruges ( belgium ) , april 2009 , pp. 213216 .d. verstraeten , b. schrauwen , and d. stroobandt , `` reservoir - based techniques for speech recognition , '' in _ ijcnn06 .international joint conference on neural networks _ , vancouver , bc , july 2006 , pp .10501053 .k. vandoorne , w. dierckx , b. schrauwen , d. verstraeten , r. baets , p. bienstman , and j. van campenhout , `` toward optical signal processing using photonic reservoir computing , '' _ optics express _ , vol .16 , pp . 1118211192 , 2008 .l. appeltant , m. c. soriano , g. van der sande , j. danckaert , s. massar , j. dambre , b. schrauwen , c. r. mirasso , and i. fischer , `` information processing using a single dynamical node as complex system , '' _ nat ._ , vol . 2 , p. 468, 2011 .l. larger , m. soriano , d. brunner , l. appeltant , j. m. gutirrez , l. pesquera , c. r. mirasso , and i. fischer , `` photonic information processing beyond turing : an optoelectronic implementation of reservoir computing , '' _ opt . express _20 , pp . 32413249 , 2012 .a. dejonckheere , f. duport , a. smerieri , l. fang , j .- l .oudar , m. haelterman , and s. massar , `` all - optical reservoir computer based on saturation of absorption , '' _ opt .22 , pp . 1086810881 , 2014 .k. vandoorne , p. mechet , t. van vaerenbergh , m. fiers , g. morthier , d. verstraeten , b. schrauwen , j. dambre , and p. bienstman , `` experimental demonstration of reservoir computing on a silicon photonics chip , '' _ nat ._ , vol . 5 , p. 3541, 2014 .q. vinckier , f. duport , a. smerieri , k. vandoorne , p. bienstman , m. haelterman , and s. massar , `` high - performance photonic reservoir computer based on a coherently driven passive cavity , '' _ optica _ , vol . 2 , no . 5 , pp . 438446 , 2015 .l. bottou , `` online algorithms and stochastic approximations , '' in _ online learning and neural networks_.1em plus 0.5em minus 0.4em cambridge university press , 1998 .[ online ] .available : http://leon.bottou.org/papers/bottou-98x x. feng , g. he , and j. ma , `` a new approach to reduce the resolution requirement of the adc for high data rate wireless receivers , '' in _ signal processing ( icsp ) , 2010 ieee 10th international conference on_.1em plus 0.5em minus 0.4emieee , 2010 , pp .15651568 .k. hassan , t. s. rappaport , and j. g. 
andrews , `` analog equalization for low power 60 ghz receivers in realistic multipath channels , '' _ ieee global telecommunications conference ( globecom 2010 ) , pp. 1 - 5 _ , december 2010 .j. malone and m. a. wickert , `` practical volterra equalizers for wideband satellite communications with twta nonlinearities , '' _ ieee digital signal processing workshop and ieee signal processing education workshop ( dsp / spe ) _ , january 2011 .v. j. mathews and j. lee , `` adaptive algorithms for bilinear filtering , '' in _spie s 1994 international symposium on optics , imaging , and instrumentation_.1em plus 0.5em minus 0.4eminternational society for optics and photonics , 1994 , pp .317327 .p. antonik , a. smerieri , f. duport , m. haelterman , and s. massar , `` fpga implementation of reservoir computing with online learning , '' in _24th belgian - dutch conference on machine learning _, 2015 , http://homepage.tudelft.nl/19j49/benelearn/papers/paper_antonik.pdf .p. antonik , f. duport , a. smerieri , m. hermans , m. haelterman , and s. massar , `` online training of an opto - electronic reservoir computer , '' in _apnna s 22th international conference on neural information processing _ ,lncs , vol . 9490 , 2015 , pp . 233240 .l. boccato , a. lopes , r. attux , and f. j. von zuben , `` an echo state network architecture based on volterra filtering and pca with application to the channel equalization problem , '' in _ neural networks ( ijcnn ) , the 2011 international joint conference on_.1em plus 0.5em minus 0.4em ieee , 2011 , pp . 580587 .r. legenstein , s. m. chase , a. b. schwartz , and w. maass , `` a reward - modulated hebbian learning rule can explain experimentally observed network reorganization in a brain control task , '' _ j. neurosci ._ , vol . 30 , pp .84008410 , 2010 .m. duarte , a. sabharwal , v. aggarwal , r. jana , k. ramakrishnan , c. w. rice , and n. shankaranarayanan , `` design and characterization of a full - duplex multiantenna system for wifi networks , '' _ vehicular technology , ieee transactions on _ , vol .63 , no . 3 , pp .11601177 , 2014 .m. hermans , j. dambre , and p. bienstman , `` optoelectronic systems trained with backpropagation through time , '' _ ieee transactions on neural networks and learning systems _ , vol . 26 , no . 7 , pp . 15451550 , 2015 .antonik , m. hermans , f. duport , m. haelterman , and s. massar , `` towards pattern generation and chaotic series prediction with photonic reservoir computers , '' in _spie s 2016 laser technology and industrial laser conference _ , vol . 9732 , 2016 .
|
reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. the performance of its analogue implementations is comparable to other state-of-the-art algorithms for tasks such as speech recognition or chaotic time series prediction, but these are often constrained by the offline training methods commonly employed. here we investigate the online learning approach by training an opto-electronic reservoir computer using a simple gradient descent algorithm, programmed on an fpga chip. our system was applied to wireless communications, a quickly growing domain with an increasing demand for fast analogue devices to equalise nonlinear distorted channels. we report error rates up to two orders of magnitude lower than previous implementations on this task. we show that our system is particularly well suited for realistic channel equalisation by testing it on a drifting and a switching channel and obtaining good performance. artificial neural networks, channel equalisation, fpga, online learning, opto-electronic systems, reservoir computing
|
the merger of differential games and regime-switching models stems from a wide range of applications in communication networks, complex systems, and financial engineering. many problems arising in, for example, pursuit-evasion games, queueing systems in heavy traffic, risk-sensitive control, and constrained optimization can be formulated as two-player stochastic differential games. in another direction, the need to better describe random environments has recently led to the use of so-called regime-switching models; see and many references therein. for many problems arising in applications, closed-form solutions are difficult to obtain; as a viable alternative, one is contented with numerical approximations. a systematic approach to numerical approximation for stochastic differential games was provided in using markov chain approximation methods. the major difficulty in dealing with such game problems is to prove the existence of the value of the game. to ensure the existence of saddle points, separability of the objective function and of the drift of the diffusion with respect to the controls is required in . it would be desirable to relax this separability condition. markov chain approximations of stochastic differential games are themselves discrete markov games. in this paper, we aim to develop sufficient conditions for the existence of saddle points of discrete markov games. in the proof, we start with the dynamic programming equation together with static game results obtained by sion and von neumann, and discover the relations between static games and dynamic games through a series of inequalities. this approach enables us to treat discrete markov games that are non-separable with respect to the controls. by virtue of the results on discrete markov games, we can easily prove the existence of saddle points of the discrete markov games arising in numerical approximations of stochastic differential games when a discretization parameter is used; passing to the limit as this parameter vanishes, we obtain the existence of saddle points of non-separable stochastic differential games using the weak convergence techniques in and . the rest of the paper is arranged as follows. section ii begins with the formulation of discrete markov games. section iii presents sufficient conditions for the existence of saddle points of discrete markov games, for both ordinary and relaxed control spaces. section iv applies the results on discrete markov games to stochastic differential games. section v concludes the paper with further remarks. consider a two-player discrete markov zero-sum game. let the state space of the markov chain be finite, with a designated collection of absorbing states. the control spaces $U_1$ and $U_2$ for player 1 and player 2 are compact subsets of the real line. [for notational simplicity, we have chosen to treat real-valued controls in this paper.] let the controlled discrete-time markov chain have time-independent transition probabilities $p(x,y|r_1,r_2)$, controlled by a pair of decision sequences $(r_1(n),r_2(n))$, where $r_i(n)$ denotes the decision at time $n$ by player $i$. [adcon] a control policy for the chain is admissible if, conditioned on the past data, the chain evolves according to the transition probabilities $p(x,y|r_1,r_2)$. if there is a function $u_i$ such that $r_i(n)=u_i(x_n)$, then we refer to $u_i$ as a feedback control of player $i$. given the running cost function $c(x,r_1,r_2)$ and the terminal cost function, the cost ([cost1]) for an initial state $x$ and an admissible control policy is defined as the expected total running and terminal cost, where the expectation is taken given the initial state $x$ and the controls.
in the discrete markov game, player 1 wants to minimize the cost, while player 2 wants to maximize it. the two players have different information available depending on who makes the decision first (or who "goes first"). the space of admissible ordinary controls in which player 1 goes first consists of the policies determined by a sequence of measurable functions of the information available before player 2 announces its decision; similarly, the collection of admissible ordinary controls in which player 1 goes last consists of the policies determined by a sequence of measurable functions that may also use the decision just announced by player 2. to proceed, we define the upper and lower values $v^+(x)$ and $v^-(x)$ in ([uval]) and ([lval]) as the values of the game in which the maximizer and the minimizer, respectively, hold the informational advantage. it is obvious that $v^-(x)\le v^+(x)$ for all $x$. if the lower value and the upper value are equal, then we say there exists a saddle point for the game, and its value is $v(x)=v^+(x)=v^-(x)$. the corresponding dynamic programming equations are
$$v^+(x)=\min_{r_1\in U_1}\max_{r_2\in U_2}\Big\{\sum_{y}p(x,y|r_1,r_2)\,v^+(y)+c(x,r_1,r_2)\Big\},\qquad\text{([udpe])}$$
$$v^-(x)=\max_{r_2\in U_2}\min_{r_1\in U_1}\Big\{\sum_{y}p(x,y|r_1,r_2)\,v^-(y)+c(x,r_1,r_2)\Big\}.\qquad\text{([ldpe])}$$
practically, we can find $v^+$ and $v^-$ in ([uval]) and ([lval]) by solving ([udpe]) and ([ldpe]) using iterations. this is possible owing to the following lemma; the proof of this lemma can be found in (lemma 2), and a weaker form in . [sdl] let the markov chain have the finite state space, absorbing states and transition probabilities described above, with $p(x,y|r_1,r_2)$ and $c(x,r_1,r_2)$ continuous in $(r_1,r_2)$, and suppose the contraction condition ([sdl-1]) holds for some real number $\rho<1$; for each admissible control, let the cost be defined by ([cost1]). then the value is finite and the solutions of ([udpe]) and ([ldpe]) are unique. for any initial guess $v_0$, the sequence
$$v_{n+1}(x)=\min_{r_1\in U_1}\max_{r_2\in U_2}\Big\{\sum_{y}p(x,y|r_1,r_2)\,v_n(y)+c(x,r_1,r_2)\Big\}$$
converges to $v^+(x)$, the unique solution of ([udpe]), as $n\to\infty$. analogously, for any initial guess, the sequence
$$v_{n+1}(x)=\max_{r_2\in U_2}\min_{r_1\in U_1}\Big\{\sum_{y}p(x,y|r_1,r_2)\,v_n(y)+c(x,r_1,r_2)\Big\}$$
converges to $v^-(x)$, the unique solution of ([ldpe]), as $n\to\infty$.
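the iteration in lemma [sdl] is straightforward to implement for finite state and action sets. the sketch below is purely illustrative: the state space, transition tensor and cost are randomly generated stand-ins, not the paper's examples. it computes both the upper and the lower value and reports whether they coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA1, nA2 = 6, 4, 4                       # states, actions of players 1 and 2

# random transition tensor p[x, a1, a2, y] with one absorbing state (index 0)
p = rng.random((nS, nA1, nA2, nS))
p /= p.sum(axis=-1, keepdims=True)
p[0] = 0.0
p[0, :, :, 0] = 1.0                          # state 0 is absorbing
c = rng.random((nS, nA1, nA2))
c[0] = 0.0                                   # no cost once absorbed

def value_iteration(upper=True, n_iter=5000):
    v = np.zeros(nS)
    for _ in range(n_iter):
        q = c + np.einsum('xaby,y->xab', p, v)   # q[x, a1, a2]
        if upper:
            v_new = q.max(axis=2).min(axis=1)    # min_{r1} max_{r2}
        else:
            v_new = q.min(axis=1).max(axis=1)    # max_{r2} min_{r1}
        if np.max(np.abs(v_new - v)) < 1e-10:
            break
        v = v_new
    return v

v_up, v_lo = value_iteration(True), value_iteration(False)
print(np.round(v_up, 4), np.round(v_lo, 4))
print("saddle point in ordinary controls:", np.allclose(v_up, v_lo))
```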
is a markov chain as in lemma [ sdl ] .let and be associated upper and lower values defined in ( [ uval ] ) and ( [ lval ] ) .then there exists a saddle points , that is , define two functions and by the dynamic programming equation of ( [ udpe ] ) and ( [ ldpe ] ) can be rewritten as under either assumption ( h1 ) or ( h2 ) , by lemma [ minmax ] , let , then in particular , there exists , so that equal holds in ( [ st-2 ] ) , for given in ( [ st-3 ] ) , a series of inequalities follows , by virtue of ( [ st-3 ] ) , we conclude all inequalities are indeed equal in ( [ st-4 ] ) , and this implies note that for all .hence .the existence of the saddle point is established .the above theorem gives sufficient conditions for the existence of saddle points .we note that there always exist saddle points in _ relaxed control _ space with merely continuity assumed .[ rc ] a control policy for the chain is said to be a relaxed control policy , if is a probability measure on , a -algebra of borel subsets of .more general definition of relaxed control is given by definition [ dfn - rcon ] in the context of stochastic differential games .let and be collection of probability measure on and . slightly abusing notations , we generalize real function on into a function on as following using the notation of relaxed control representation ,the transition probability function is and the cost under the relaxed control policy is .\ ] ] using to denote the space of admissible relaxed controls that player goes first .that is , for , there exists a sequence of measurable function taking values in such that analogously , using to denote the space of admissible relaxed controls that player goes last .that is , , there exists a sequence of measurable function taking values in such that the upper and lower values associated with relaxed control space are defined by respectively . to proceed ,we present another static game result obtained by von neumann .[ vonl ] let and be finite sets .let be a function on , and be probability measure on and , then [ rsad ] is a markov chain as in lemma [ sdl ] with relaxed control used .assume and are continuous on .let and be associated upper and lower values of ( [ uvalr ] ) and ( [ lvalr ] ) .then there always exists a saddle point , that is define two functions and by then dynamic programming equation in relaxed control space can be written by note that is continuous in compact set .hence for , there exists a finite subset , such that forcing to the limit as in ( [ rsad-2 ] ) and ( [ rsad-3 ] ) , as well as using lemma [ vonl ] , we have similarly , we obtain equality for function , equalities in ( [ rsad-4 ] ) and ( [ rsad-5 ] ) implies the rest of this proof is similar to the lines of inequalities ( [ st-4 ] ) .the details are omitted .in this section , we formulate stochastic differential games with regime switching . numerical methods using markov chain approximation leads to a sequence of discrete markov games discussed in the previous section .the use of theorem [ st ] gives sufficient conditions for the existence of saddle points , and facilitates the proof .consider a two - player stochastic game of regime - switching diffusions .for a finite set , , , , the dynamic system is given by where for each , is a control for player , is a standard -valued brownian motion , and is a continuous - time markov chain having state space with generator . 
let be a filtration , which might depend on controls , and which measures at least .we suppose that for each , is -adapted taking values in a compact subset , which are called _admissible controls_. denote , which is symmetric and positive definite .let be a compact set that is the closure of its interior and be the first exit time of from with using a real number to denote the discount factor , let the cost function be ,{\end{array}}\ ] ] where and are functions representing the running cost and terminal cost , respectively , and denotes the expectation taken with the initial data and and given control process .next , we introduce the relaxed control representation ; see .[ dfn - rcon ] let be the -algebra of borel subsets of .an _ admissible relaxed control _ is a measure on such that ) = t ] for .to proceed , we need the following assumptions . * for each , and are continuous functions on the compact set . * for each , the functions and are continuous on .* equation ( [ mod1 ] ) , where the controls are replaced by relaxed controls , has a unique weak sense solution ( i.e. , unique in the sense of in distribution ) for each admissible triple , where .* for any .* let the function is continuous as a mapping from to ] is the interval compactified ( see ) . *the functions and are separable in and for every .that is , and .* the cost is convex - concave with respect to , and there exist -valued continuous functions ( ) such that assumption ( a4 ) is used for construction of transition probabilities of the approximating markov chain .it requires that the diffusion matrix be diagonally dominated . if the given dynamic system does not satisfy ( a4 ) , then we can adjust the coordinate system to satisfy assumption ( a4 ) ; see .( a5 ) is a broad condition that is satisfied in most applications .the main purpose is to avoid the _ tangency _ problem discussed in .later , we will establish the existence of _ saddle points _ using either ( a6 ) or ( a7 ) in addition to ( a1)(a5 ) .condition ( a7 ) allows non - separable differential games with respect to controls .now we are ready to define upper values , lower values , and _ saddle points _ of differential games ; see for the corresponding definitions of systems without regime switching .let be collection of all admissible ordinary control with respect to .for , let such that are piecewise constant on the intervals , and is -measurable .let denote the set of such piecewise constant controls for player that are determined by measurable real - valued functions we can define and the associated rule for player analogous to ( [ defcon1 ] ) .thus we can always suppose that if the control of ( for example ) player is determined by a form such as ( [ defcon1 ] ) .then ( in relaxed control terminology ) the law of for is determined recursively by past information [ defval1 ] for initial condition , define the upper and lower values for the game as if the lower and upper value are equal , then we say there exists a saddle point for the game , and its value is here , we will construct a two - component markov chain .the discretization of differential game leads to a sequence of discrete markov games .the approximation is of finite difference type .the basis of the approximation is a discrete - time , finite - state , controlled markov chain whose properties are _ locally consistent _ with that of ( [ mod1 ] ) . 
for each ,let be a finite subset of such that as , where is a metric defined by let be a controlled discrete - time markov chain on a discrete state space with transition probabilities denoted by , where .we use to denote the actual control action for the chain at discrete time .suppose we have a positive function on such that as , but for each .we take an interpolation of the discrete markov chain by using interpolation interval .now we give the definition of local consistency .[ deflc1 ] let for and in and be a collection of well - defined transition probabilities for the two - component markov chain , approximation to .define the difference .assume .denote by , and the conditional expectation , covariance , and probability given .the sequence is said to be _ locally consistent _ with ( [ mod1 ] ) , for , if to approximate the cost defined in ( [ costfun1 ] ) , we define a cost function using the markov chain above .let the cost for and initial is , { \end{array}}\ ] ] using to denote the space of the ordinary controls that player goes first , and its strategy is defined by measurable functions of the type similar to ( [ defcon1 ] ) .that is , for , is determined by by denote the collection of the ordinary controls that player goes last . for , is determined by the associated upper and lower values is defined as in this section , we present a local consistent discrete markov game of generated by central finite difference scheme for analysis purpose .under assumptions ( a1)(a5 ) together with either ( a6 ) or ( a7 ) , we can apply theorem [ st ] to show the existence of saddle points for each . by forcing the limit , the upper ( lower ) values converge to that of stochastic differential game by lemma [ lem : conv ] , and it results in the existence of saddle points .first , the transition probabilities for are where set the interpolation interval as by ( a4 ) , .also , we have . to ensure that is always nonnegative , we require assume ( a1 ) , ( a2 ) , ( a4 ) , and satisfies ( [ asmp - h ] ) .the markov chain with transition probabilities and interpolation defined above is locally consistent with ( [ mod1 ] ) .the criterion in ( [ deflc2 ] ) can be verified through a series of calculations , thus details are omitted .[ sadptthm1 ] assume ( a1)(a5 ) , either ( a6 ) or ( a7 ) , and is a finite set defined above ( [ met ] ) . for and ,a markov chain is defined by ( [ transpb3 ] ) .let and be the associated upper and lower values defined in ( [ defupval2 ] ) and ( [ deflwval2 ] ) in the control spaces and .then there exists a saddle point provided satisfies ( [ asmp - h ] ) .the contraction condition ( [ sdl-1 ] ) satisfies for the discount factor .let assumptions ( a6 ) and ( a7 ) lead to ( h1 ) and ( h2 ) , respectively .the result holds applying theorem [ st ] .although the proof of next lemma is rather complicated and not trivial , the proof is referred to weak convergence techniques in , , and due to the limit of space .[ lem : conv ] assume that the conditions of theorem [ sadptthm1 ] are satisfied . 
then for the approximating markov chain, the upper and lower values converge to those of the differential game. [sadptthm4] assume the conditions of theorem [sadptthm1] are satisfied. then the differential game has a saddle point, in the sense that the upper and lower values of ([defval1]) coincide. a key part of zero-sum game problems is the existence of a saddle point. this paper has been devoted to sufficient conditions for the existence of saddle points in discrete markov games. using the dynamic programming equation method, we are able to use the static game results of sion and von neumann to discover the sufficient conditions. a direct application is numerical methods for stochastic differential game problems. the transition probabilities used in ([transpb3]) require the restriction ([asmp-h]) on the discretization parameter. in practice, we develop the transition probabilities by an upwind finite difference scheme, so that the generated chain is well defined without this restriction; the local consistency can be verified by routine calculations. this kind of discrete markov game might have different upper and lower values for some discretization parameters. however, both the upper and lower values in this situation converge to the original saddle point of the differential game by lemma [lem:conv] and theorem [sadptthm4]. numerical examples in pursuit-evasion games are omitted due to the space limit, although the numerical results clearly support our findings. for a regime-switching system in which the markov chain has a large state space, we may use the ideas of the two-time-scale approach presented in (see also and references therein) to first reduce the complexity of the underlying system and then construct numerical solutions for the limit systems. optimal strategies of the limit systems can be used for constructing strategies of the original systems, leading to near optimality. h. j. kushner and s. g. chamberlain, on stochastic differential games: sufficient conditions that a given strategy be a saddle point, and numerical procedures for the solution of the game, _journal of mathematical analysis and applications_, *26* (1969), 560-575.
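as a computational companion to the von neumann minimax result invoked above (lemma [vonl]), the following python sketch computes the value of a finite matrix game in mixed strategies by linear programming and checks that the lower and upper values coincide. it is purely illustrative: the payoff matrix is random, and the linear-programming formulation is the textbook one, not anything taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
g = rng.random((5, 5))   # cost matrix g(r1, r2): player 1 minimizes, player 2 maximizes

def mixed_value(g):
    """value and optimal mixed strategy of the maximizer via linear programming."""
    m, n = g.shape
    # variables: q (n mixing weights) and v (the guaranteed value); minimize -v
    c = np.concatenate([np.zeros(n), [-1.0]])
    # constraint v <= sum_j g[i, j] q_j for every pure action i of the minimizer
    A_ub = np.hstack([-g, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]   # q sums to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1], res.x[:n]

v_lower, q = mixed_value(g)        # sup over mixed q of inf over pure rows
v_upper, p = mixed_value(-g.T)     # the minimizer's problem, by symmetry
print("lower value:", round(v_lower, 6), " upper value:", round(-v_upper, 6))
```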
|
this work establishes sufficient conditions for the existence of saddle points in discrete markov games . the result reveals the relation between dynamic games and static games through dynamic programming equations . it also enables us to prove the existence of saddle points of non-separable stochastic differential games of regime-switching diffusions under appropriate conditions .
|
forward error - correcting codes have played an instrumental role in the many successes of digital communications over the past decades .the fact that it is possible to transmit digital information reliably at a positive rate over an unknown noisy channel is now universally acknowledged .the main cost of improving reliability is the use of increasingly long codewords .one situation where the valuable lessons of classical coding theory may not apply directly is the general area of delay - constrained communications .if system specifications dictate that almost all information bits should be made available at the destination shortly after they arrived at the transmitter , it may not be possible to aggregate a large number of them before encoding and transmission . in some cases, stringent delay requirements will force a system designer to resort to short block codes or short constraint - length convolutional codes . from a coding perspective , using short codewords on channels with memory creates two impediments .first , decoders are designed to correct the most - likely error patterns and the probability of seeing atypical error patterns can not be neglected for short block lengths .second , if the coherence time of the channel is longer than a codeword transmission interval , then optimal code rate may depend heavily on the channel state , which is unknown to the transmitter .together , these factors impair the rapid transmission of information .coding performance as a function of block - length and code - rate has been assessed in the information theory literature using the reliability function .this criterion focuses on the exponential rate at which the error probability decays with block length , known as the error exponent , as a function of information rate .the concept of a reliability function can also be extended to variable - length codes in the presence of feedback .more recently , consideration has been given to the reliability function for bits with fixed delay , as opposed to coded blocks , in the presence of feedback . while remarkable , these results remain asymptotic in nature and do not necessarily capture overall system behavior adequately .for delay - sensitive applications and short codewords , three interrelated effects come into play .the probability of decoding failure for every codeword is not negligible .packet retransmissions lead to queue buildups at the source and , thereby , induce longer latencies .channel correlation over time introduces dependencies among successive decoding attempts , which further perturb queueing behavior and end - to - end delay .this is especially true when decoding failures are likely to occur in sequence .thus , a queueing analysis is necessary when considering the behavior of communication systems subject to very stringent delay requirements . for delay - sensitive systems with short codewords ,the natural tradeoff between code - rate and probability of decoding failure is hard to characterize . 
in a non - asymptotic regime where information is queued at the source ,transmitting data at a rate slightly below shannon capacity may lead to poor performance .recent results in the literature hint at the fact that , for delay - constrained communication , optimal code - rate selection depends heavily on block - length and channel correlation .these findings are especially important for real - time traffic and live interactive sessions , as these applications are sensitive to latency and require the use of short codewords .guidelines for code - rate selection in the context of delay - sensitive traffic were previously obtained for an erasure channel with memory .the approach favored therein , which permits a complete characterization of queueing behavior , consists in building a markov model for the evolution of the system .crucial assumptions that facilitate analysis can be summarized as follows : the packet arrival process at the source is bernoulli , the packet lengths are i.i.d .geometric , the error protection uses random codes , and the channel evolution is governed by a markov chain . in this article, we adopt a similar formulation and extend results that were obtained for the correlated erasure case to a more encompassing gilbert - elliot framework .this latter class of erasure channels is common to the literature on channels with memory , and subsumes earlier work based on similar concepts .we also present an in - depth analysis of system performance using different criteria that reflect the needs of various contemporary applications .this research is significant because it offers a new perspective on the selection of code - rate and block - length for delay - sensitive systems and provides a rigorous investigation into the effects of time - correlation on the queued performance of real - time wireless connections .throughout , we assume that coded bits are sent from the transmitter to the destination over a gilbert - elliot erasure channel. this channel can be in one of two states : a _ good _ state in which every bit is erased with probability and a _ bad _ state in which every bit is erased with probability , independently of other bits .our naming scheme implies .transitions between channel states occur according to a markov process .the probability of transitioning to state given that the markov chain is currently in state is denoted by .the likelihood of the reverse transition from to is symbolized by . under alphabetical state ordering, the parameters of this markov chain can be expressed in the form of a probability transition matrix , .\ ] ] a graphical interpretation of the communication channel under consideration appears in fig .[ figure : gilbertelliotchannel ] .[ c] [ c] [ c] [ c] [ l] [ l] [ c] [ c] [ c] the state of the channel at time is a random variable , which we denote by .moreover , the succession of states over time , , forms a markov process . finding the conditional probability amounts to selecting an entry in .likewise , can be obtained by locating the corresponding entry in , the power of .we note that this markov chain converges to its stationary distribution at an exponential rate that depends on the second eigenvalue of ( i.e. ) . 
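a minimal numerical companion to the channel description above, written as a python/numpy sketch; the parameter values of the transition probabilities and the block length are illustrative placeholders, not the ones used later in the paper:

```python
import numpy as np

# Hypothetical Gilbert-Elliot parameters: alpha = Pr(G -> B), beta = Pr(B -> G)
alpha, beta = 0.005, 0.045

# Probability transition matrix under alphabetical state ordering (G, B)
P = np.array([[1 - alpha, alpha],
              [beta,      1 - beta]])

# n-step conditional probabilities Pr(X_{j+n} = d | X_j = c) = (P^n)[c, d]
n = 114  # e.g., one codeword length (illustrative)
Pn = np.linalg.matrix_power(P, n)

# Stationary distribution, known in closed form for a two-state chain
pi = np.array([beta, alpha]) / (alpha + beta)
assert np.allclose(pi @ P, pi)

# The chain forgets its initial state at the geometric rate set by the
# second eigenvalue of P, which equals 1 - alpha - beta here
second_eig = 1 - alpha - beta
print(Pn, pi, second_eig)
```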
in our analysis, a packet of length is sectioned into data segments each containing information bits .packing loss is treated implicitly since the last data segment of each packet is zero padded to bits .every segment is encoded separately into a codeword of length , which is subsequently stored in the queue for eventual transmission over the gilbert - elliot erasure channel .decoding failures are handled through immediate retransmission of the missing data .a quantity that is of fundamental importance in our analysis is the conditional probability of decoding failure at the destination .an intermediary step in identifying this probability is to derive an expression for , the number of erasures within a codeword of length .this , in turn , depends on the number of visits to each state within consecutive realizations of the channel .more specifically , we are interested in conditional probabilities of the form where and .the generating function for these conditional probabilities is based on generalizing the entries of to the vector space of real polynomials in with .\ ] ] let be the operator which maps a polynomial in to the coefficient of .then , the conditional probability is given , in terms of the power of , by {c , d}.\ ] ] it is worth mentioning that one can employ this method or alternative combinatorial means to obtain closed - form expressions for the desired conditional probabilities . during every transmission , a segment of information bitsis encoded using a code defined by a random parity - check matrix of size , where each matrix entry is selected independently and uniformly from .maximum likelihood decoding is used at the destination .random coding has the benefit that the probability of decoding failure depends only on the number of erasures and not on the locations of the erasures .consequently , the decoding failure probability is a function of the number of erasures in the block .once the value of is known , we can derive the desired probability as follows .conditioned on , decoding at the destination will succeed if and only if the submatrix of formed by choosing the erased columns has rank .furthermore , the probability that a random matrix over , where stands for the number of parity bits , has rank is equal to .thus , given erasures within a codeword of length , the probability of decoding failure can be written as the average probability of decoding failure at the destination is therefore equal to ] is the stationary distribution associated with the level . using this compact notation ,we can write the chapman - kolmogorov equations as , where is the probability transition matrix associated with . one possible approach to solve for the stationary distribution of our markov model is to employ spectral representation and ordinary generating functions . 
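the generating-function computation above can also be carried out by a direct dynamic program over (bit slot, channel state, erasure count), which is what the sketch below does before marginalizing over the end state; the channel and code parameters are illustrative, and applying the transition matrix once per bit slot is the assumed discretization. for the random parity-check code, the standard counting argument gives pr(failure | E = e) = 1 − ∏_{i=0}^{e−1}(1 − 2^{i−m}) for e ≤ m parity bits, and 1 otherwise:

```python
import numpy as np

def erasure_count_dist(P, eps, n, init):
    """DP over (channel state, erasure count) across n bit slots.
    P: 2x2 per-bit state transition matrix; eps: per-state erasure
    probabilities (eps_g, eps_b); init: state distribution at the first
    bit. Returns f with f[s, e] = Pr(end state = s, E = e)."""
    f = np.zeros((2, n + 1))
    f[:, 0] = init
    for _ in range(n):
        g = np.zeros_like(f)
        for s in (0, 1):
            for t in (0, 1):
                g[t, :] += P[s, t] * (1.0 - eps[s]) * f[s, :]   # bit kept
                g[t, 1:] += P[s, t] * eps[s] * f[s, :-1]        # bit erased
        f = g
    return f

def p_fail_given_e(e, m):
    """Random parity-check code with m = n - k parity bits: decoding
    succeeds iff the e erased columns are linearly independent."""
    if e > m:
        return 1.0
    return 1.0 - np.prod([1.0 - 2.0 ** (i - m) for i in range(e)])

# illustrative parameters, not the paper's
n, k = 114, 57
P = np.array([[0.995, 0.005], [0.045, 0.955]])
eps = np.array([0.05, 0.95])
init = np.array([0.9, 0.1])
f = erasure_count_dist(P, eps, n, init)
p_fail = sum(f[:, e].sum() * p_fail_given_e(e, n - k) for e in range(n + 1))
```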
in this article , we adopt an alternate means and apply the matrix geometric method . we can represent the probability transition matrix as a semi-infinite matrix of the form where the submatrices , , , , and are real matrices . when the queue is empty , the relevant boundary submatrices are modified accordingly . note that the markov chain associated with belongs to the class of processes with repetitive structure . the following theorem characterizes its stationary distribution . consider a positive recurrent markov chain on a countable state space with transition matrix given by . let the positive matrix be defined as the limit , starting from , of the matrix recursion . then , the -level stationary distribution satisfies for , with and the boundary vector normalized by the factor $\left( \mathbf{I} + \mathbf{Z} ( \mathbf{I} - \mathbf{R} )^{-1} \right)^{-1}$ . [ corollary : decayrate ] the decay rate of the complementary cumulative distribution function of the queue satisfies where is the spectral radius of . this mathematical characterization makes it possible to compute a wide range of advanced performance criteria for the system under consideration , including average packet error rate and outage capacity . herein , we focus on two measures that are most relevant to delay-sensitive communications . first , we look at the probability that the queue exceeds a threshold , , where is relatively small . second , we examine the decay rate of the complementary cumulative distribution function , as discussed in corollary [ corollary : decayrate ] . again , we emphasize that the tail decay in buffer occupancy is given by the dominant eigenvalue of . for illustrative purposes , we select the following parameters . the gilbert-elliot erasure channel is defined by , , , and . this generates an average erasure probability of . the channel memory decays at an exponential rate of . the blocklength is fixed at and the arrival process is defined by the arrival probability and average packet length . if codewords are transmitted every 4.615 ms , then this corresponds to an arrival rate of roughly 10.6 kbits / sec and an ergodic channel capacity of roughly 22.2 kbits / sec . these parameters are selected to loosely match the operation of a wireless gsm relay link . system performance as a function of the number of information bits per codeword , , is shown in fig . [ figure : overflow1 ] ( axes : information bits per block versus tail probability ) . each curve represents the complementary cumulative distribution function evaluated at a different threshold value , . as expected , the probability of the queue exceeding a prescribed threshold decreases as increases . more interestingly , appears uniformly optimal for all values of .
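the matrix recursion in the theorem can be run numerically. the displayed recursion did not survive extraction, so the sketch below assumes the standard quasi-birth-death fixed point R = A0 + R A1 + R² A2 from the matrix-analytic literature cited in the references (latouche and ramaswami), with A0/A1/A2 the level-up/same-level/level-down blocks; the 2x2 block values are placeholders chosen only so that the chain is a valid, positive recurrent transition matrix:

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Iterate the QBD fixed point R = A0 + R A1 + R^2 A2 from R = 0."""
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_new = A0 + R @ A1 + R @ R @ A2
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    return R

# Illustrative blocks; rows of A0 + A1 + A2 must sum to 1
A0 = np.array([[0.10, 0.02], [0.05, 0.08]])   # queue grows by one level
A1 = np.array([[0.60, 0.08], [0.10, 0.47]])   # queue level unchanged
A2 = np.array([[0.15, 0.05], [0.20, 0.10]])   # queue shrinks by one level
assert np.allclose((A0 + A1 + A2).sum(axis=1), 1.0)

R = solve_R(A0, A1, A2)
# Level probabilities obey pi_{k+1} = pi_k R, so the tail of the queue
# distribution decays geometrically at the spectral radius of R
decay = np.max(np.abs(np.linalg.eigvals(R)))
```

with R in hand, Pr[queue > tau] behaves like decay**tau for large tau, which is exactly the quantity plotted in the threshold-exceedance figures discussed next.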
further supporting evidence for this observation is offered by looking at the asymptotic decay rate in tail occupancy , displayed in fig . [ figure : effcap1 ] ( axes : arrival rate versus information bits ) . when the arrival rate is between 47.5 and 60 , one finds that is also optimal in terms of tail decay . this robustness property is very encouraging , as it simplifies system design . an important observation that does not appear on these two figures is the fact that , for short block lengths , the optimal value of depends heavily on the channel parameters , , and . a naive conjecture would place close to the shannon limit , but this is much larger than the optimal value of . a more sophisticated approach is to maximize the throughput of a system with an infinite backlog . after some calculation , one finds that this leads to , which is much closer to the true optimum . but , as the channel memory parameter varies , the optimal value of changes substantially . in fact , as , approaches . this work provides a unified approach that links queueing performance with the operation of a communication system at the physical layer . the methodology and results are developed for the gilbert-elliot erasure channel , but can be generalized to more intricate finite-state channels with memory . for example , the simple performance characterization of random codes over erasure channels extends naturally to hard-decision decoding of bch codes over gilbert-elliot error channels . for fixed parameters , the optimal code rate appears relatively insensitive to the target threshold in the queue . still , channel memory and cross-over probabilities can affect this optimal operating point . more generally , the optimal code rate seems to be linked to the ratio between the codeword time and the coherence time of the channel . l. liu , p. parag , j. tang , w.-y. chen , and j.-f. chamberland , `` resource allocation and quality of service evaluation for wireless communication systems using fluid models , '' _ ieee trans . inf . theory _ , vol . 53 , no . 5 , pp . 1767 - 1777 , may 2007 . p. parag , j.-f. chamberland , h. d. pfister , and k. r. narayanan , `` code rate , queueing behavior and the correlated erasure channel , '' in _ ieee information theory workshop on information theory _ , cairo , egypt , january 2010 . g. latouche and v. ramaswami , _ introduction to matrix analytic methods in stochastic modeling _ , asa - siam series on statistics and applied probability . society for industrial mathematics , 1987 .
|
this paper considers the queueing performance of a system that transmits coded data over a time - varying erasure channel . in our model , the queue length and channel state together form a markov chain that depends on the system parameters . this gives a framework that allows a rigorous analysis of the queue as a function of the code rate . most prior work in this area either ignores block - length ( e.g. , fluid models ) or assumes error - free communication using finite codes . this work enables one to determine when such assumptions provide good , or bad , approximations of true behavior . moreover , it offers a new approach to optimize parameters and evaluate performance . this can be valuable for delay - sensitive systems that employ short block lengths .
|
one of the greatest successes of the big bang theory is that its prediction that the primordial baryonic matter is almost entirely composed of hydrogen and helium with a trace amount of a few other light elements is in detailed agreement with current observations ( e.g. , schramm & turner 1998 ) .the heavier elements , collectively called metals " , are thought to be made at much later times through nucleosynthesis in stars .metals are ubiquitous in the universe in virtually all environments that have been observed , including regions outside of galaxies , the intergalactic medium ( igm " ) , ranging from the metal rich intracluster medium to low metallicity lyman alpha clouds .however , metallicity ( the ratio of the amount of mass in metals to the total baryonic mass for a given region , , divided by for the sun , ) is observed to be very variable .for example , metallicity reaches as high as ten ( in units of the solar value , where value unity corresponds to ) in central regions of active galactic nuclei ( mushotsky , done , & pounds 1993 ; hamann 1997 ; tripp , lu , & savage 1997 ) but is as low as for some halo stars in our own galaxy ( beers 1999 ) .disparity in metallicity values is also seen at high redshift .for instance , metallicity in damped systems is as high as and as low as at redshift ( prochaska & wolfe 1997 ) , whereas it is about 0.01 in moderate column density clouds at ( tytler 1995 ; songaila & cowie 1996 ) .low column density lyman alpha clouds at appear to have still lower metallicity ( lu 1998 ; tytler & fan 1994 ) . the question that naturally rises then is : when were the metals made and why are they distributed as observed ?can we understand the strong dependence of on the gas density ( at redshift zero ) and the comparable dependence of on redshift for regions of a given overdensity ? 
while these are well - posed questions , addressing them directly is a formidable computational problem and requires both a large dynamic range , to ensure a fair piece of the universe to be modeled , and sufficiently realistic physics being modeled including gasdynamics , galaxy formation , galactic winds and metal enrichment .after years of continuous improvement of both numerical techniques and physical modeling , coupled with rapid increase in computer power , we have now reached the point where this important question can at last be addressed in a semi - quantitative fashion using numerical simulations .the results reported on here are based on a new computation of the evolution of the gas in a cold dark matter model with a cosmological constant ; the model is normalized to both the microwave background temperature fluctuations measured by cobe ( smoot 1992 ) on large scales ( bunn & white 1997 ) and the observed abundance of clusters of galaxies in the local universe ( cen 1998 ) , and it is close to both the concordance model of ostriker & steinhardt ( 1995 ) and the model indicated by the recent high redshift supernova results ( reiss 1998 ) .the relevant model parameters are : , , , , / s / mpc , and tensor mode contribution to the cmb fluctuations on large scales .two simulations with box sizes of are made , each having cells and dark matter particles with the mean baryonic mass in a cell being and the dark matter particle mass being , respectively , in the two simulations .output was rescaled to to match latest observations ( burles & tytler 1998 ) .the results shown are mostly based on the large box , while the small box is used to check resolution effects .the description of the numerical methods of the cosmological hydrodynamic code and input physical ingredients can be found elsewhere ( cen & ostriker 1999a , b ) .to briefly recapitulate , we follow three components separately and simultaneously : dark matter , gas and galaxies , where the last component is created continuously from the former two during the simulations in places where real galaxies are thought to form , as dictated mostly by local physical conditions . 
self - consistently , feedback into igm from young stars in the galaxies " is allowed , in three related forms : supernova thermal energy output , uv photon output and mass ejection from the supernova explosions .the model reproduces the observed uv background as a function of redshift and the redshift distribution of star formation ( madau plot " ; nagamine , cen & ostriker 1999 ) , among other diagnostics .metals are followed as a separate variable ( analogous to the total gas density ) with the same hydrocode .we did not fit to the observed distributions and evolution of metals , but assumed a specific efficiency of metal formation , subsequently rescaling the computed results to an adopted yield " ( arnett 1996 ) , the percentage of stellar mass that is ejected back into igm as metals , of ( from an input value 0.06 ) .a word about the resolution of the simulation is appropriate here .the conclusions drawn in this paper are not significantly affected by the finite resolution , as verified by comparing the two simulations .let us give an argument for why this is so .although our numerical resolution is not sufficient to resolve any internal structure of galaxies , the resolution effect should affect different regions with different large - scale overdensities more or less uniformly since our spatial resolution is uniform and our mass resolution is good even for dwarf galaxies . in other words , galaxy formation in our simulationsis not significantly biased against low density regions .thus , the distribution of the identified galaxies as a function of large - scale overdensity in the simulation would be similar , if we had a much better resolution .it is the distribution of the galaxies that determines the distribution of metals , which is the subject of this paper .needless to say , we can not make detailed modeling of the ejection of metals from galaxies into the igm and this ignorance is hidden in the adopted yield " coefficient .however , once the metals get out of galaxies , their dynamics is followed accurately .changing the adopted yield by some factor would change all quoted metallicities by the same factor but not alter any statements about spatial and temporal distribution of metals .figure 1 shows the evolution of metallicity averaged over the entire universe ( dot - dashed curve ) and four regions with four different overdensities , , smoothed by a gaussian window of comoving size , respectively , that approximately correspond to clusters of galaxies , lyman limit and damped systems , moderate column density clouds and very low column density clouds , at .the overdensity of each class of objects is defined using a gaussian smoothing window of radius , which corresponds to a mean mass of .if we assume that the dlas are progenitors of the present day large disk galaxies , their mass may be in the range .therefore a choice of overdensity of seems appropriate . for the moderate column density lyman alpha clouds ,the choice is somewhat less certain but small variations do not drastically change the results .for the very low column density lyman alpha clouds , the choice of the mean density should be adequate since the density fluctuations for these objects are small thus their density should be close to the mean . 
for the clusters of galaxies we can use overdensity of or and it makes no difference to the results .note that a given class of objects is chosen to have a fixed comoving overdensity , not to have a fixed physical density .this choice is made because the decrease of a factor of the observed meta - galactic radiation field from to ( haardt & madau 1996 ) , and the increase of the comoving size of structure with time at a fixed comoving density as ( cen & simcoe 1997 ) approximately compensate for the decrease of physical density so a fixed comoving density approximately corresponds to a fixed column density at different redshifts .this applies for the last three classes of objects .for the first class of objects ( clusters of galaxies ) either choice gives comparable results , due to the fact that metallicity saturates at the highest density ( see below ) .several trends are clear .first , metallicity is a strong function of overdensity in the expected sense : high density regions have higher metallicity .second , and more surprisingly , the evolution of metallicity itself is a strong function of overdensity : high density regions evolve slowly with redshift , whereas the metallicity in low density regions decreases rapidly towards high redshift . finally , the overall metallicity evolution averaged globally differs from that of any of the constituent components .therefore _ any given set of cosmic objects ( including stars or forest ) can not be representative of the universal metallicity at all times _ , although at a given epoch one may be able to identify a set of objects that has metallicity close to the universal mean .for example , at , regions with overdensity ( which roughly correspond to lyman alpha clouds of column density of ) have metallicities very close to the global mean , while at , regions with overdensity of one hundred ( which roughly correspond to lyman limit and damped lyman alpha systems ) has metallicity very close to the global mean .it has been the conventional wisdom to expect that , as all metals " are produced ( but not , on balance , destroyed ) by stars , the metal abundance should increase with time or decrease with increasing redshift .what we see from figure 1 is that there is another trend which is as strong as or stronger than this .since stars are _ relatively _ overabundant in the densest regions , metallicity is a strongly increasing function of density at any epoch . this trend is observed within galaxies ( with central parts being most rich ) but it is also true when one averages over larger regions .the gas in dense clusters of galaxies is far more metal rich than the general igm at .this trend is shown in another way in figure 2 , where metallicity is plotted as a function of overdensity at four redshifts .let us now examine the individual components more closely in figure 3 with panels ( a , b , c , d ) showing the metallicity distributions for regions of overdensity , respectively , at four redshifts .we examine each panel in turn . 
in panel( a ) we see that there is almost no evolution from redshift zero ( thick solid curve ) to redshift one ( dotted curve ) for metallicity of intracluster gas .the narrowness of the computed distributions fits observations very well for clusters locally and at low redshift .but we predict that the metallicity of clusters at redshift will be somewhat lower than their low redshift counterparts by a factor of about three , with the characteristic metallicity declining to .second , examining panel ( b ) for regions with overdensity , which roughly correspond to lyman limit and damped lyman alpha systems , it is seen that the median metallicity increases only slightly from to , but there is a large range of metallicity expected of approximately at any redshift , in very good agreement with observations over the entire redshift range considered .next , panel ( c ) shows the integral distributions for regions with overdensity , that correspond to moderate column density lyman alpha clouds with column density .we see that the median metallicity increases by a factor of about from redshift to , but with a broad tail towards the low metallicity end at all redshift , again in good agreement with observations .dav ( 1998 ) concluded that the metallicity for regions of overdensity of at is from analysis of civ absorption lines , consistent with our results here .finally , panel ( d ) shows regions with overdensity ( i.e , at the mean density ) corresponding to the very low column density lyman alpha clouds .the observations are upper bounds .but it appears that the bulk of the regions with such low density indeed has quite low metallicity , consistent with observations .dav ( 1998 ) derived an upper bound on metallicity for near mean density region at of from analysis of ovi absorption lines , in agreement with our results .in the simulation examined in this paper high density regions reach an approximately solar density first , with lower density regions approaching that level at later epochs , and at all epochs the variations of with density is comparable to or larger than the variations at a given overdensity .this saturation of metallicity has a natural physical explanation .regions where the peaks of long and short waves fortuitously coincide have the highest initial overdensity and the earliest significant star formation ; but , when the long waves break , high temperature shocks form ( as in the centers of clusters of galaxies ) , so that further new galaxy formation and infall onto older systems ceases , star formation declines ( blanton 1999 ) , and the metallicity stops increasing .observationally , we know that , in the highest density , highest metallicity and highest temperature regions of the rich clusters of galaxies , new star formation has essentially ceased by redshift zero . 
as a side note , that fact that metallicity depends as strongly on density as on time implies that stellar metallicity need not necessarily ( anti-)correlate with the stellar age .for example , young stars may be relatively metal poor , as supported by recent observations ( preston , beers & shectman 1994 ; preston 1994 ) , simply because these young stars may have formed out of relatively lower density regions where metallicity was low .the picture presented here is , in principle , quite testable .for example , steidel and co - workers ( steidel 1993 ; steidel 1994 ) and savage ( 1994 ) and others have found that metal line systems observed along a line of sight to a distant quasar are invariably associated with galaxies near the line of sight at the redshift of the metal line system .one would expect , on the basis of the work presented here , that there would be a strong statistical trend associating higher metallicity systems to closer galaxies , since for these the typical overdensity is larger .figure 4 shows surface density contours on a slice of for galaxies ( filled red ; at a surface density of times the mean surface density of galaxies ) , metals ( green ; at a metallicity of ) and warm / hot gas ( cen & ostriker 1999a ) with ( blue ; at a surface density of times the mean surface density of warm / hot gas ) .each respective contour contains 90% of the total mass of the respective component .we see that most of the green contours contain red spots , each within a region of size approximately ; i.e. , one would expect to see a normal galaxy associated with a metal line system within a projected distance of .it is also seen from figure 4 that metal rich gas is generally embedded in the warm / hot gas .this may manifest itself observationally as spectral features that seem to arise from multiple phase gas at a similar redshift along the line of sight .recent observations appear to have already hinted this ; lopez ( 1998 ) , using combined observations of quasar absorption spectra from hubble space telescope and other ground - based telescopes , noted that some c iv clouds are surrounded by large highly ionized low - density clouds .finally , it may be pointed out that most of the metals are in over - dense regions and these regions are generally relatively hot : .therefore , they should be observable in the euv and soft x - ray emitting warm / hot gas ( cen & ostriker 1999a ) .arnaud , k.a . ,mushotzky , r.f . ,ezawa , h. , fukazawa , y. , ohashi , t.,bautz , m.w . , crewe , g.b ., gendreau , k.c . , yamashita , k. , kamata , y. , & akimoto , f. 1994 , apj , 436 , l67 ( a94 ) arnett , d. 1996 , supernovae and nucleosynthesis " bahcall , n.a . , & cen , r. 1992 , apj , 407 , l49 barlow , t.a . , &tytler , d. 1998 , aj , 115 , 1725 ( bt98 ) blanton , m. , cen , r. , ostriker , j.p . , & strauss , m.a . 1999 ,apj , in press bunn , e.f ., & white , m. 1997 , apj , 480 , 6 burles , s. , & tytler , d. 1998 , , 499 , 699 cen , r. 1998 , apj , 509 , 16 cen , r. , & ostriker , j.p .1998a , apj , in press ( preprint , astro - ph/9809370 ) cen , r. , & ostriker , j.p .1999b , in preparation cen , r. , & simcoe , r.a .1997 , apj , 483 , 8 dav , r. , hellsten , u. , hernquist , l. , weinberg , d.h . , &katz , n. 1998 , apj , 487 , 482 hamann , f. 1997 , apjs , 109 , 279 lu , l. , sargent , w.l.w ., barlow , t.a . , churchill , c.w ., & vogt , s.s .1996 , apjs , 107 , 475 ( lu96 ) lu , l. , sargent , w.l.w . ,barlow , t.a ., & rauch , m. 1998 , preprint , astro - ph/9802189 ( lu98 ) haardt , f. 
, & madau , p. 1996, apj , 461 , 20 mushotzky , r.f. , done , c. , & pounds , k.a .1993 , araa , 31 , 717 mushotzky , r.f . , & lowenstein , m. 1997 , apj , 481 , l63 ( ml97 ) mushotzky , r.f . ,lowenstein , m. , arnaud , a.k . , tamura , t. , fukazawa , y. , matsushita , k. , kikuchi , k. , & hatsukade , i. 1996 , apj , 466 , 686 ( m96 ) nagamine , k. , cen , r. , & ostriker , j.p .1999 , in preparation ostriker , j.p . , &steinhardt , p. 1995 , nature , 377 , 600 pettini , m. , ellison , s.l ., steidel , c.c . , & bowen , d.v .1998 , preprint , astro - ph/9808017 ( p98 ) prochaska , j.x ., & wolfe , a.m. 1998 , apj , in press ( pw98 ) prochaska , j.x . , & wolfe , a.m. 1997 , apj , 474 , 140 ( pw97 ) rauch , m. , haehnelt , m.g . , & steinmetz , m. 1997,apj , 481 , 601 ( r97 ) reiss , a.g . , 1998 ,preprint , astro - ph/9805201 savage , b.d . , tripp , t.m ., & lu , l. 1998 , aj , 115 , 436 schramm , d.n . , & turner , m.s . 1998 , rev .70 , 303 shull , j.m . , 1998 , preprint , astro - ph/9807246 ( s98 ) smoot , g.f . , 1992 , , 396 , l1 songaila , a. , & cowie , l.l . 1996 , aj , 112 , 335 ( sc96 ) steidel , c.c . 1993 , in the environment and evolution of galaxies " , ed .shull and h.a .thronson , jr ., p263 steidel , c.c . ,dickinson , m. , & persson , s.e .1994 , apj , 437 , l75 tamura , t. , day , c.s . ,fukazawa , y. , hatsukade , i. , ikebe , y. , makishima , k. , mushotzky , r.f . , ohashi , t. , takenaka , k. , & yamashita , k. 1996 , pasp , 48 , 671 ( t96 ) tripp , t. m. , lu , l. , & savage , b. d. 1997 , aj , 112 , 1 tytler , d. , fan , x .- m . , burles , s. , cottrell , l. , davis , c. , kirkman , d. , & zuo , l. 1995 , in qso absorption lines , ed .g. meylan ( berlin : springer ) , 289 tytler , d. , & fan , x .- m .1994 , apj , 424 , l87 ( tf94 ) white , s.d.m , navarro , j. , evrard , a.e . , & frenk , c.s .1993 , nature , 366 , 429 fig .1. the average metallicities averaged over the whole universe ( dot - dashed curve ) , overdensity ( thick solid curve ) , overdensity ( thin solid curve ) , overdensity ( dotted curve ) and overdensity ( dashed curve ) , respectively , as a function of redshift .3. panel ( a ) shows the differential metallicity distribution for regions with overdensity ( clusters of galaxies ) at four different redshifts , ( thick solid curve ) , ( thin solid curve ) , ( dotted ) and ( dashed curve ) [ the same convention will be used for panels ( b , c , d ) ] .also shown as symbols are observations from various sources .various symbols are observations : the open circle from mushotzky & lowenstein ( 1997 ; ml97 ) showing that there is almost no evolution in the intracluster metallicity from to at around one - third of solar , the open triangles from from mushotzky ( 1996 ; m96 ) showing the metallicities of four individual nearby clusters ( abell 496 , 1060 , 2199 and awm 7 ) , the open square from tamura ( 1996 ; t96 ) showing the metallicity of the intracluster gas of abell 1060 , the filled triangle from arnaud ( 1994 ; a94 ) showing the metallicity of the intracluster gas of the perseus cluster .all metallicities are measured in [ fe / h ] .panel ( b ) shows the differential metallicity distribution for regions with overdensity . 
the open triangle from lu ( 1996 ; lu96 )shows the result from an extensive analysis of a large database of damped lyman alpha systems with .the horizontal range on the open triangle does not indicate the errorbar on the mean rather it shows the range of metallicities of the observed damped lyman alpha systems as given by lu96 .the open circle from pettini ( 1998 ; p98 ) is due to an analysis of ten damped lyman alpha systems at ; here the horizontal range indicates the error on the mean .the open square due to prochaska & wolfe ( 1998 ; pw98 ) is from an analysis of 19 damped lyman alpha systems at ; the horizontal range indicates the error on the mean .finally , the two solid dots are from prochaska & wolfe ( 1997 ; pw97 ) of an analysis of two damped lyman alpha systems at with one having extreme low metallicity and the other having extreme high metallicity .all metallicities are measured in [ zn / h ] .vertical position in panel ( b ) is without significance .panel ( c ) shows the cumulative metallicity distribution for regions with overdensity .the symbols are observations : the open circle from sc96 for lyman alpha clouds at with column density of , the open triangle from rauch ( 1997 ; r97 ) for lyman alpha clouds at with column density of , the solid dot from barlow & tytler ( 1998 ; bt98 ) for lyman alpha clouds at with column density of , the solid triangle from shull ( 1998 ; s98 ) for lyman alpha clouds at with column density of .panel ( d ) shows the cumulative metallicity distribution for regions with overdensity ( i.e. , mean density ) .the open circle is the upper limit for lyman clouds with column density at redshift from lu ( 1998 ; lu98 ) .the open triangle is the upper limit for lyman clouds with column density at redshift from tytler & fan ( 1994 ; tf94 ) .the model seems consistent with observations of low column density lyman alpha clouds at high redshift .4. surface density contours on a slice of for galaxies ( filled red ; at a surface density of times the mean surface density of galaxies ) , metals ( green ; at a metallicity of ) and warm / hot gas with ( blue ; at a surface density of times the mean surface density of warm / hot gas ) .each respective contour contains 90% of the total mass of the respective component .
|
numerical simulations of standard cosmological scenarios have now reached the degree of sophistication required to provide tentative answers to the fundamental question : where and when were the heavy elements formed ? averaging globally , these simulations give a metallicity that increases from of the solar value at to at present . this conclusion is , in fact , misleading , as it masks the very strong dependency of metallicity on local density . at every epoch higher density regions have much higher metallicity than lower density regions . moreover , the highest density regions quickly approach near solar metallicity and then saturate , while more typical regions slowly catch up . these results are much more consistent with observational data than the simpler picture ( adopted by many ) of gradual , quasi-uniform increase of metallicity with time .
|
any big undertaking , such as x-ray astronomy surely is , must set long range goals . with clear long-term science goals in place we can see which developments are essential , and which are mere sidelines : amusing , but dead ends . daniel goldin , the nasa administrator , urged astronomers ( san antonio aas meeting , january 1996 ) to make decade-length plans , even if the plan changes in a few years' time . this is what we do here . the goals we shall describe are deliberately ambitious . we propose that x-ray astronomers should aim to reach sensitivities 100 times beyond axaf , while retaining high angular resolution and achieving high dispersion spectroscopy ( table [ goal ] ) . this does not mean that the very next mission we design should necessarily have all these capabilities . nor does it rule out smaller missions with different goals . it does mean that the next major mission we design should at least be a deliberate and significant step toward these capabilities . to some readers these goals may seem hopelessly idealistic . as a result of this workshop , it seems instead that they are only a few factors of 2 away from reality . in section 2 we outline several areas of wide astrophysical importance to which x-ray astronomy can make crucial measurements , given a 10 sq. meter , 1 arcsec telescope ; in section 3 the current state of x-ray astronomy is reviewed and general ` discovery space ' arguments are introduced ; in section 4 we use these arguments to derive the 10 sq. meter , 1 arcsec , arcmin field of view telescope goals ; in section 5 we discuss instrumentation goals , including the need for sensitivity in the 0.1 - 0.5 kev band ; in section 6 we summarize our assessment of the workshop against these goals ; and in section 7 we open the discussion on how to make this telescope a reality . table [ goal ] : the 10 sq. meter x-ray telescope . a major weakness of the workshop was the lack of consideration of high spectral resolution using grating spectroscopy . since this is the only route to the necessary goal of spectroscopy , we must pursue it . three main missions emerged from the workshop as the next step . however , they only partially address our goals . these missions overlap considerably in their aims and concept : ( 1 ) a european follow-on to xmm that takes replica mirror technology another step ( 1 - 3m ; hpd ) ; ( 2 ) a japanese successor to astro-e ( m , [?] hpd ) using improved foil optics , launched with the new large h-2 ; and ( 3 ) the us ` high throughput x-ray spectroscopy ' mission ( htxs ) ( 1m ; hpd ) . in their collecting area these plans are only a factor of 5 - 10 too small . in angular resolution , however , they are all inadequate . since all three missions stressed calorimeters and 10 - 50 kev response , none reaches the spectral resolution and low energy coverage that will turn x-ray astronomy into x-ray astrophysics . how can x-ray astronomy reach the 10 sq.m / 1 arcsec goal ?
as a community we need to become less ` political ' and more science driven . presently we act tactically , reacting to each announcement of a mission opportunity by putting together good proposals . since a good proposal requires the use of existing , proven technologies , we do not get beyond incremental advances . instead we must get strategic . we must set long term science goals , as we have done here , and then _ begin ! _ start work now on achieving our goals through technology studies . the primary thrust of these technology studies must be innovative x-ray optics : the pursuit of _ several _ new mirror technologies . these studies need substantial funding , of order $10 m / year . a program of this size sounds huge compared with the less than $0.5 m / year now spent by nasa on x-ray optics outside of flight programs . it is small potatoes , though , compared with the astrophysics budget at nasa . it is similar to the nasa investment in astro-e or a smex , yet the pay-off is hugely greater . a second but still very important study must address the spacecraft and space infrastructure needed to support a 10 sq.m . high resolution telescope . areas of importance include : long , low weight structures ; active control of optical benches ; moment-of-inertia balancing for fast slewing ; and servicing from station , shuttle and small launchers . topical technical workshops on each of these areas would be a good way to focus the issues and technical challenges and to identify the most promising areas of work . the us , european and japanese intermediate missions presented at the workshop should not absorb all the energies of the community and so prevent a strong development program . the goal we originally outlined seemed bold . this workshop has shown that in fact it is quite plausible , and only a stretch from plans now being formulated . we wish to thank many of our colleagues for stimulating discussions , especially andrew szentgyorgyi , nancy brickhouse , john raymond , richard willingale , ronald polidan , robert rosner , webster cash , roger angel , andrew fabian , john gibbons , steven murray , harvey tananbaum , salvo sciortino and giorgio palumbo .
|
x-ray astronomy needs to set bold , science-driven goals for the next decade . only with defined science goals can we know what to work on , and can a funding agency appreciate the need for significant technology developments . to be a forefront science , the scale of advance must be 2 decades of sensitivity per decade of time . to be robust against new discoveries , these should be general , ` discovery space ' goals . a detailed consideration of science goals leads us to propose that a mirror collecting area of 10 sq. meters with arcsecond resolution , good field of view ( arcmin ) , and high spectral resolution spectroscopy ( = 1000 - 10,000 ) defines the proper goal . this is about 100 times axaf , or 30 times xmm . this workshop has shown that this goal is only a reasonable stretch from existing concepts , and may be insufficiently bold . an investment of $10 m / year for 5 years in x-ray optics technologies , comparable to nasa's investment in astro-e or a smex , is needed , and would pay off hugely more than any small x-ray mission .
|
in recent years , the increasing demands for signal and information processing in irregular domains have resulted in an emerging field of signal processing on graphs . bringing a new perspective for analyzing data associated with graphs ,graph signal processing has found potential applications in sensor networks , image processing , semi - supervised learning , and recommendation systems .an undirected graph is denoted as , where denotes a set of vertices and denotes the edge set .if one real number is associated with each vertex , these numbers of all the vertices are collectively referred as a graph signal .a graph signal can also be regarded as a mapping .there has been lots of research on graph signal related problems , including graph filtering , graph wavelets , uncertainty principle , multiresolution transforms , graph signal compression , graph signal sampling , parametric dictionary learning , graph topology learning , and graph signal coarsening . smooth signals or approximately smooth signals over graph are common in practical applications , especially for those cases in which the graph topologies are constructed to enforce the smoothness property of signals . exploiting the smoothness of a graph signal, it may be reconstructed through its entries on only a part of the vertices , i.e. samples of the graph signal . in this work ,we develop efficient methods to solve the problem of reconstructing a bandlimited graph signal from known samples .the smooth signal is supposed to be within a low - frequency subspace .two iterative methods are proposed to recover the missing entries from known sampled data .there has been some theoretical analysis on the sampling and reconstruction of bandlimited graph signals .some existing works focus on the theoretical conditions for the exact reconstruction of bandlimited signals .the relationships between the sampling sets of unique reconstruction and the cutoff frequency of bandlimited signal space are established for normalized laplacian and unnormalized laplacian , respectively .recently , a necessary and sufficient condition of exact reconstruction is established in . in order to reconstruct bandlimited graph signals from sampled data ,several methods have been proposed . in least square approach is proposed to solve this problem .furthermore , an iterative reconstruction method is proposed and a tradeoff between smoothness and data - fitting is introduced for real world applications . the problem of signal reconstruction is closely related to the frame theory , which is also involved in other areas of graph signal processing , e.g. , wavelet and vertex - frequency analysis on graphs . based on windowedgraph fourier transform and vertex - frequency analysis , windowed graph fourier frames are studied in . a spectrum - adapted tight vertex - frequency frame is proposed in via translation on the graph .these works focus on vertex - frequency frames whose elements make up over - representation dictionaries , while in the reconstruction problem the frames are always composed by elements centering at the vertices in the sampling sets . in this paper , to improve the convergence rate of bandlimited graph signal reconstruction , iterative weighting reconstruction ( iwr ) and iterative propagating reconstruction ( ipr ) are proposed based a new concept of local set . 
as the foundation of reconstruction methods , several local - set - based frames and contraction operators are introduced .both iwr and ipr are theoretically proved to uniquely reconstruct the original signal under certain conditions .compared with existing methods , the condition of the proposed reconstruction methods is easy to determine by local parameters .the correspondence between graph signal sampling and time - domain irregular sampling is analyzed comprehensively , which will be helpful to future works on graph signals .experiments show that iwr and ipr converge significantly faster than available methods . besides, experiments on several topics including sampling geometry and robustness are conducted .the rest of this paper is organized as follows . in section [ sec2 ] ,some preliminaries are introduced . in section [ sec3 ] ,some important definitions are introduced and some related frames based on local sets are proved . in section [ sec4 ] , two local - set - based reconstruction methods iwr and ipr are proposed and their convergence behavior is analyzed , respectively .section [ discusslocalset ] gives more detailed analysis on local sets .section [ sec6 ] shows the relationship between graph signal sampling and time - domain irregular sampling and section [ sec7 ] presents some numerical experiments .the graph laplacian is extensively exploited in spectral graph theory and signal processing on graphs . for a undirected graph ,its laplacian is where is the adjacency matrix of the graph and is a diagonal degree matrix with the diagonal elements as the degrees of corresponding vertices .the laplacian is a real symmetric matrix , and all the eigenvalues are nonnegative . supposing are the eigenvalues , and are the corresponding eigenvectors , the graph fourier transform is defined as the expansion of a graph signal in terms of , as where denotes the entry of associated with vertex .similar with classical fourier analysis , eigenvalues are regarded as frequencies of the graph , and is regarded as the frequency component corresponding to .the frequency components associated with smaller eigenvalues can be called low - frequency part , and those associated with larger eigenvalues is the high - frequency part . for a graph signal on a graph , is called -bandlimited if the spectral support of is within $ ]. that is , the frequency components corresponding to eigenvalues larger than are all zero .the subspace of -bandlimited signals on graph is a hilbert space called paley - wiener space , denoted as . in this paper, we consider the sampling and reconstruction of bandlimited signals on undirected and unweighted graphs .suppose that for a bandlimited graph signal , only on the sampling set are known , the problem is to obtain the original signal from the sampled data .the problem of signal sampling and reconstruction is closely related to frame theory .a family of elements is a _ frame _ for a hilbert space , if there exist constants such that where and are called _ frame bounds_. 
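to make the spectral objects of the preliminaries concrete, here is a minimal numpy sketch; the small graph, its size, and the cutoff value are illustrative choices, not taken from the paper. it builds the unnormalized laplacian, forms the graph fourier basis from its eigenvectors, and projects a signal onto the bandlimited paley-wiener subspace:

```python
import numpy as np

# Illustrative adjacency matrix of a small undirected, unweighted graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # unnormalized Laplacian L = D - W

# Graph Fourier basis: eigenvectors of L, eigenvalues act as frequencies
lam, U = np.linalg.eigh(L)              # lam sorted in ascending order

def gft(f):
    """Expansion coefficients of f in the Laplacian eigenbasis."""
    return U.T @ f

def project_bandlimited(f, omega):
    """Keep only frequency components with eigenvalue <= omega."""
    keep = lam <= omega
    return U[:, keep] @ (U[:, keep].T @ f)

f = np.random.randn(4)
f_bl = project_bandlimited(f, omega=1.5)   # an omega-bandlimited signal
```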
[ def : frameoperator ] for a frame , _ frame operator _ is defined as one may readily read that for , where denotes the identity operator and means that is positive semidefinite .consequently , is always invertible and its inverse could be expanded into series in some special cases .for instance , one has where is a scalar satisfying .this inspires that could be iteratively reconstructed from any initial point by with the error bound satisfying obviously , recursion can not be entitled _ reconstruction _ because the original signal to be recovered is involved in the iteration .however , it provides a prototype for practical methods , which will be discussed in section [ sec4 ] .the parameter , which could be deemed as a step - size , determines the convergence rate .if one chooses , then , and the error bound of iteration ( [ iteration ] ) will shrink with the exponential of .a better choice is , then , which leads to a faster convergence rate .there are many useful theoretical results on the problem of bandlimited graph signal sampling and reconstruction .a concept of _ uniqueness set _ is firstly introduced in . a set of vertices is a _uniqueness set _ for space if it holds for all that implies .according to this definition , any could be uniquely determined by its entries on a uniqueness set . as a consequence , may be exactly recovered if the sampling set is a uniqueness set .readers are suggested to refer to , , and for more details on uniqueness set .the following theorem demonstrates that a set of graph signals related to a uniqueness set becomes a frame for , which is a quite important foundation of our work .[ thm3 ] if the sampling set is a uniqueness set for , then is a frame for , where is the projection operator onto , and is a -function whose entries satisfying a method called iterative least square reconstruction ( ilsr ) is proposed to reconstruct bandlimited graph signals in as the following theorem .[ thm : ilsr ] if the sampling set is a uniqueness set for , then the original signal can be reconstructed using the sampled data by ilsr method , where denotes the downsampling operator and is the downsampled signal .ilsr is derived from the method of projection onto convex sets ( pocs ) .its convergence is proved using the fixed point theorem of contraction mapping .in this section , the concept of _ local set _ is firstly proposed .based on local sets , we define an operator named _ local propagation _ and prove its contraction .then several local - set - based frames are introduced , as the theoretical foundation of the proposed methods in next section . fora sampling set on graph , assume that is divided into disjoint local sets associated with the sampled vertices .for each , denote the subgraph of restricted to by , which is composed of vertices in and edges between them in . for each ,its local set satisfies , and the subgraph is connected .besides , should satisfy and we further define to denote the maximal size of local sets , where denotes cardinality . for a given sampling set , there may exist various divisions of local sets .we will see in the next section that different divisions may lead to different theoretical bounds in recovering bandlimited signals . to describe the property of local sets ,two measures are proposed in definition [ defi1 ] and definition [ radius ] , which are useful in the following analysis .[ defi1 ] denote as the shortest - path tree of rooted at . 
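a short sketch of ilsr follows. the displayed iteration did not survive extraction, so the code implements the standard pocs form described in the text: reset the sampled entries to the known data, then project back onto the bandlimited space; the function names and the iteration count are assumptions for illustration.

```python
import numpy as np

def ilsr(y, S_mask, project, n_iter=200):
    """Iterative least square reconstruction (POCS-style sketch).

    y       : full-length vector holding the known samples on S, 0 elsewhere
    S_mask  : boolean vector, True on the sampling set S
    project : function projecting onto the bandlimited space PW_omega
    """
    x = project(y)                      # initialize from the samples
    for _ in range(n_iter):
        r = np.zeros_like(x)
        r[S_mask] = y[S_mask] - x[S_mask]   # enforce data fit on S
        x = project(x + r)              # enforce bandlimitedness
    return x
```

for example, one can pass `project = lambda f: project_bandlimited(f, omega)` from the preceding sketch; by the fixed-point argument cited above, the iterates converge to the original signal whenever the sampling set is a uniqueness set.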
for connected to in , is the subtree which belongs to when and its associated edges are removed from .the _ maximal multiple number _ of is defined as where is the edge set of graph . , sampling set , and one of the divisions of local sets .the shortest - path tree of and its subtrees of are highlighted to give more details . in this case , and the subtrees have , , and vertices , respectively . therefore , and it is easy to check that ., width=377 ] by the definition of , it is ready to check that where is the degree of in the subgraph . for simplicity, one may introduce an approximation for easy calculation of by the definitions above are intuitively illustrated in fig .[ tu ] . [ radius ]the _ radius _ of is the maximal distance from to any other vertex in , which is denoted as according to the definitions , one may see that the two measures are local and only determined by the subgraph of local sets .the two local measures are helpful in establishing the conditions of some important results in this paper , which will be shown in the following subsections . utilizing the introduced maximal multiple number and radius ,we further propose an operation to propagate energy to one s local set .[ limitedpropagation ] for a given sampling set and associated local sets on a graph , the local propagation is defined by where denotes the -function of set with entries as its name shows , operation first propagates the energy locally and evenly to the local set that each sampled vertex belongs to , and then projects the new signal to be -bandlimited , please refer to .these two steps could be merged into one , by a bandlimited local propagation of , please refer to .local propagation , which provides a fast solution to adequately fill all unknown entries by sampled data , makes the proposed local - set - based reconstruction feasible . as an important theoretical foundation, the following lemma gives the condition that is a contraction mapping .[ lemma1 ] for a given set and associated local sets on a graph , , the operator is a contraction mapping for , where the proof is postponed to [ proof1 ] . for a given sampling set, there may exist various divisions of local sets .we will see in the next section that different divisions may lead to different theoretical bounds in recovering bandlimited signals .based on the definition of local set , we could prove that the weighted lowpass -function set is a frame for and estimate its bounds .[ lem0 ] for a given sampling set and associated local sets on a graph , , is a frame for with bounds and , where is defined in and the proof is postponed to [ proof3 ] .one may notice that theorem 3.2 of also implies that is a frame for .however , the assumptions and approaches in this work are quite different from those in the above reference .furthermore , base on the proposed local sets , we study the relation between sampling set and whole vertices , and clarify the frame bound exactly . beyond lemma [ lem0 ] , we further explore the weighted lowpass -functions is also a frame for by appropriate weights .[ lemma3 ] for a given sampling set and associated local sets on a graph , , is a frame for with bounds and , where and are defined in and , respectively .the proof is postponed to [ proof4 ] .bandlimited graph signals can be iteratively reconstructed using a frame for , but the frame bounds play critical roles on the convergence rate . 
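one simple way to produce a division into local sets, offered here as an illustration rather than the paper's prescription, is a multi-source breadth-first search: every vertex joins the local set of its nearest sampled vertex, which automatically keeps each local set connected. the sketch also records the bfs depth of each set, a proxy for the radius measured inside the subgraph:

```python
import collections

def build_local_sets(adj, samples):
    """Partition vertices into local sets by multi-source BFS.

    adj     : dict mapping vertex -> iterable of neighbors
    samples : list of sampled vertices
    Returns owner[v] (the sampled vertex whose local set contains v)
    and a per-set radius estimate (maximum BFS depth in the set).
    """
    owner, depth = {}, {}
    radius = {u: 0 for u in samples}
    q = collections.deque()
    for u in samples:
        owner[u], depth[u] = u, 0
        q.append(u)
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in owner:          # first visit wins: nearest sample
                owner[w] = owner[v]
                depth[w] = depth[v] + 1
                radius[owner[w]] = max(radius[owner[w]], depth[w])
                q.append(w)
    return owner, radius

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
owner, radius = build_local_sets(adj, samples=[0, 3])
```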
by given appropriate weights to the elements in a frame , a new frame is obtained with a sharper bounds estimation , which may lead to a faster convergence .the related algorithms will be proposed in section [ sec4 ] . to end up this section , we present a general theoretical result which may inspire further study on frame - theory - based graph signal processing .[ pro3 ] for a given sampling set and associated local sets on a graph , , is a frame for with bounds and , where and are defined in and , respectively .the proof is postponed to [ proof3 ] . in factlocal propagation is not a standard frame operator , because two signal sets and are involved .however , under the same condition with the contraction of operator , both sets can be proved to be frame , and either of them can be used to reconstruct the original signal by the corresponding frame operator .all frames discussed in this section are listed in table [ tab : frames ] ..the frames in space , and their bounds . [ cols="^,^,^",options="header " , ]the minnesota road graph is chosen as the graph , which has vertices and edges , to test the proposed reconstruction algorithms .the bandlimited signal is generated by first generating a random signal and then removing its high - frequency components .the convergence rate of the three algorithms are compared here . a one - hop sampling set satisfying ( [ onehop ] )is chosen as the sampling set .the one - hop sampling set and the corresponding local sets are obtained by the greedy method in table [ algonehop ] .this sampling set has vertices , which is about one third of all .the cutoff frequency is .the convergence curves of ilsr , iwr , and ipr are illustrated in fig .it is obvious that the convergence rate of the proposed algorithms is significantly improved compared with the reference .furthermore , ipr is better than iwr on the convergence rate .both observations are in accordance with the analysis in [ intuitive ] .the choice of sampling set may affect the performance of convergence .two different sampling sets are used to reconstruct the same bandlimited original signal .both of the sampling sets have the same amount of vertices .the first sampling set is the one - hop set satisfying ( [ onehop ] ) , with sampled vertices and .for the second sampling set , vertices are selected uniformly at random among all the vertices .each unsampled vertex belongs to the local set associated with its nearest sampled vertex .then and can be obtained for each local set , and we have with the corresponding and .the convergence curves of the two sampling sets using the three reconstruction methods are illustrated in fig .for all the algorithms , the convergence is faster by using the sampled data of the one - hop sampling set than by using the randomly chosen vertex set .it means that the sampling geometry has influence on the reconstruction .a sampling set and the local sets with smaller may converge faster .the cutoff frequency is a crucial quantity in the reconstruction of the bandlimited signal . for a bandlimited signal the cutoff frequencyis known as a priori knowledge .however , the priori knowledge may be an estimate rather than ground truth . in this experiment , the effect on the imprecise knowledge of the cutoff frequency is investigated . for frequencies and satisfying , the following four cases are considered : 1) ; 2) ; 3) ; 4) , where means the actual cutoff frequency is and the priori known frequency is . 
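before turning to that experiment , here is a minimal sketch of the signal - synthesis step used throughout these experiments ; a random graph stands in for the minnesota road graph , and the graph size , edge probability , and cutoff index are our own illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(1)

# random undirected graph as a stand-in for the road graph
n, p = 200, 0.05
upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
adj = upper + upper.T
lap = np.diag(adj.sum(axis=1)) - adj       # combinatorial laplacian

# graph fourier basis: laplacian eigenvectors; eigenvalues act as frequencies
lam, u = np.linalg.eigh(lap)

# bandlimited signal: random coefficients on the k lowest frequencies only
k = 20
coeffs = np.zeros(n)
coeffs[:k] = rng.standard_normal(k)
f = u @ coeffs                             # lies in the low-frequency subspace

print(np.abs(u.T @ f)[k:].max())           # out-of-band energy ~ 0
```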
in the experiment we set . the convergence curves are illustrated in fig . it is easy to understand that the relative error of case 4 ) is large because only about half of the energy can be preserved . for cases 1 ) and 2 ) , although both original signals are actually -bandlimited , the reconstruction converges faster in case 1 ) than in case 2 ) because of more accurate prior knowledge . comparing cases 2 ) and 3 ) , it can be seen that the convergence curves almost coincide , which means that although the bandwidth of the original signal is reduced , the convergence rate increases little if the reduction is not known a priori . the experimental results show that the convergence rate depends little on the actual cutoff frequency but more on the cutoff frequency the signal is regarded to have . if we have more accurate prior knowledge of the cutoff frequency , the reconstruction will be more efficient . in fact , the of is the a priori known cutoff frequency , rather than the actual one . even though the actual cutoff frequency of the original signal is smaller than , the decay coefficient is determined by , i.e. , the cutoff frequency of the subspace . in other words , convergence is a property of the frame , which is determined by the low - frequency subspace , and not of the signal we are trying to reconstruct . the given sufficient condition for the convergence of iwr and ipr is rather conservative and not very sharp for all graphs . this experiment shows the actual cutoff frequency that the reconstruction algorithms can recover . in this experiment , the sampling set and local sets are the same as those in [ expsg ] with . the experimental result is illustrated in fig . [ exp13 ] . for each cutoff frequency , signals within the subspace are generated randomly . the curves show the fraction of signals that converge to within a relative error in iterations . for this sampling set and local sets , the sufficient condition we provide is . it can be seen that the reconstruction methods work in a larger low - frequency subspace , which means there is still room for improvement to give a better bound . suppose there is noise involved in the observation of the sampled graph signal . this experiment focuses on the robustness of the three algorithms to observation noise . in this experiment the noise is generated as an independent , identically distributed gaussian sequence . as shown in fig . [ exp6 ] , the steady - state error decreases as the snr increases . the three methods have almost the same performance against observation noise . real - world data is never strictly bandlimited . however , most smooth signals over graphs can be regarded as approximately bandlimited . in the experiment in fig . [ exp8 ] , the three methods are used to reconstruct signals with different out - of - band energy . the steady - state error is larger for signals with more energy out of band . besides , the three algorithms perform almost the same for approximately bandlimited signals . in this paper , the problem of graph signal reconstruction in a bandlimited space is studied . we first propose the concept of a local set , whereby all vertices in the graph are divided into disjoint local sets associated with the sampled vertices . based on frame theory , an operator named local propagation is then proposed and proved to be a contraction mapping . consequently , several series of signals are proved to be frames and their frame bounds are estimated .
above theory provides solid foundation for developing efficient reconstruction algorithms .two local - set - based iterative methods called iwr and ipr are proposed to reconstruct the missing data from the observed samples .strict proofs of convergence and error bounds of iwr and ipr are presented .after comprehensive discussion on the proposed algorithms , we explore the correspondence between time - domain irregular sampling and graph signal sampling , which sheds light on the analysis in the graph vertex domain .experiments , which verify the theoretical analysis , show that ipr performs beyond iwr , and both the proposed methods converge significantly faster than the reference algorithm .by the definition of , one has considering that is connected , there is always a shortest path within from any to , which is denoted as .one has which is because any path is not longer than .for each satisfying , the path from any vertex in to contains edge and this edge is counted for times . by the definition of , each edge in is counted for no more than times . then , by the assumption of -bandlimited signal , the following inequality is established . in the above derivation , denotes the degree of vertex , and denotes the graph fourier transform of . the last inequality is because the components of corresponding to the frequencies higher than are zero for . by the definition of local propagation , , one has utilizing in lemma [ lemma1 ] , one gets for all and , we have and combining ( [ lem2 - 1 ] ) , ( [ lem2 - 2 ] ) and ( [ lem2 - 3 ] ) and proposition 2 in , and satisfy that there exist constant and , so that , and for all and . then is a frame with frame bounds and , and is a frame with bounds and . ] is a frame with bounds and , and is a frame with bounds and .proposition [ pro3 ] is proved .according to lemma [ lemma1 ] and proposition [ pro4 ] , we have for when . then is invertible and for . then the left inequality of lemma [ lemma3 ] is proved . from lemma [ lemma3 ], is a frame with bounds and . by the property of frame , the original signal can be reconstructed by where the frame operator is and the parameter is chosen as according to the definition of local propagation and table [ algipr ] , the iteration of ipr can be written as which is initialized by .notice that and for any , then . as a consequence of lemma [ lemma1 ] , proposition [ cor1 ] is proved . 1 d. i. shuman , s. k. narang , p. frossard , a. ortega , and p. vandergheynst , `` the emerging field of signal processing on graphs : extending high - dimensional data analysis to networks and other irregular domains , '' _ ieee signal process . mag ._ , vol . 30 , no . 3 , pp .83 - 98 , 2013 .a. gadde , a. anis , and a. ortega , `` active semi - supervised learning using sampling theory for graph signals , '' in _ proc .20th acm sigkdd int .knowledge discovery and data mining ( kdd14 ) _ , 2014 , pp . 492 - 501 .s. k. narang , a. gadde , and a. ortega , `` signal processing techniques for interpolation in graph structured data , '' in _ proc .38th ieee int .speech , signal process .( icassp ) _ , 2013 , pp .5445 - 5449 .s. chen , a. sandryhaila , j. m. f. moura , and j. kovacevic , `` adaptive graph filtering : multiresolution classification on graphs , '' in _ proc .1st ieee global conf .signal and inform . process .( globalsip ) _ , 2013 , pp .427 - 430 . v. n. ekambaram ,g. c. fanti , b. ayazifar , and k. 
ramchandran , `` multiresolution graph signal processing via circulant structures , '' in _ proc . ieee digital signal process . , signal process . educ . meeting ( dsp / spe ) _ , 2013 , pp . 112 - 117 . s. k. narang , a. gadde , e. sanou , and a. ortega , `` localized iterative methods for interpolation in graph structured data , '' in _ proc . 1st ieee global conf . signal and inform . process . ( globalsip ) _ , 2013 , pp . 491 - 494 . x. wang , m. wang , and y. gu , `` a distributed tracking algorithm for reconstruction of graph signals , '' to appear in _ ieee j. selected topics signal process . _ , june 2015 , available at _ arxiv preprint arxiv:1502.0297_.
signal processing on graphs is attracting increasing attention . for a graph signal in the low - frequency subspace , the missing data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the smoothness of the graph signal . in this paper , the concept of a local set is introduced and two local - set - based iterative methods are proposed to reconstruct bandlimited graph signals from sampled data . in each iteration , one of the proposed methods reweights the sampled residuals for different vertices , while the other propagates the sampled residuals within their respective local sets . these algorithms are built on frame theory and the concept of local sets , based on which several frames and contraction operators are proposed . we then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate that the new methods lead to significantly faster convergence than the baseline method . furthermore , the correspondence between graph signal sampling and time - domain irregular sampling is analyzed comprehensively , which may be helpful to future work on graph signals . computer simulations are conducted . the experimental results demonstrate the effectiveness of the reconstruction methods under various sampling geometries , imprecise prior knowledge of the cutoff frequency , and noisy scenarios . * keywords : * graph signal processing , irregular domain , graph signal sampling and reconstruction , frame theory , local set , bandlimited subspace .
the problem of reducing a partial differential equation ( pde ) model to a system of finite - dimensional ordinary differential equations ( odes ) has significant applications in engineering and physics , where solving such pde models is too time - consuming . reducing the pde model to a simpler representation , without losing the main characteristics of the original model , such as stability and prediction precision , is appealing for any real - time model - based computation . however , this problem remains challenging , since model reduction can introduce stability loss and prediction degradation . to remedy these problems , many methods have been developed aiming at what is known as stable model reduction . in this paper , we focus on additive terms called _ closure models _ and their application in reduced order model ( rom ) stabilization . we develop a learning - based method , applying extremum - seeking ( es ) methods to automatically tune the coefficients of the closure models and obtain an optimal stabilization of the rom . our work extends some of the existing results in the field . for instance , a reduced order modelling method is proposed in for stable model reduction of navier - stokes flow models . the authors propose stabilization by adding a nonlinear viscosity stabilizing term to the reduced order model . the coefficients of this term are identified using a variational data - assimilation approach , based on solving a deterministic optimization . in , a lyapunov - based stable model reduction is proposed for incompressible flows . the approach is based on an iterative search for the projection modes satisfying a local lyapunov stability condition . an example of stable model reduction for the burgers equation using closure models is explored in . these closure models modify some stability - enhancing coefficients of the reduced order ode model using either constant additive terms , such as the constant eddy viscosity model , or time - and space - varying terms , such as smagorinsky models . the amplitudes of the added terms are tuned in such a way as to stabilize the reduced order model . however , such tuning is not always straightforward . our work addresses this issue and achieves optimal tuning using learning - based approaches . this paper is organized as follows : section [ prem ] establishes our notation and some necessary definitions . section [ es - rom - stab ] introduces the problem of pde model reduction and the closure model - based stabilization , and presents the main result of this paper . an example using the coupled burgers equation is treated in section [ burgers - example ] . finally , section [ concl ] provides some discussion on our approach and concludes . throughout the paper we will use to denote the euclidean vector norm ; i.e. , for we have . the kronecker delta function is defined as : and . we will use as short notation for the time derivative of , and for the transpose of a vector . a function is said to be analytic in a given set if it admits a convergent taylor series approximation in some neighborhood of every point of the set . we consider the hilbert space if for each , , and . finally , in the remainder of this paper , by stability we mean stability of dynamical systems in the sense of lagrange , e.g. , . we consider a stable dynamical system modelled by a nonlinear partial differential equation of the form where is an infinite - dimensional hilbert space . solutions to this pde can be obtained through numerical discretization , using , e.g.
, finite elements , finite volumes , finite differences etc .unfortunately , these computations are often very expensive and not suitable for online applications such as analysis , prediction and control .however , solutions of the original pde often exhibit low rank representations in an ` optimal ' basis .these representation can be exploited to reduce the pde to an ode of significantly lower order . in particular, dimensionality reduction follows three steps : the first step is to discretize the pde using a finite number of basis functions , such as piecewise linear or higher order polynomials or splines . in this paperwe use the well - established finite element method ( fem ) , and refer the reader to the large literature , e.g. , for details .we denote the approximation of the pde solution by , where denotes the scalar time variable , and denotes the multidimensional space variable , i.e. , is scalar for a one dimensional space , a vector of two elements in a two dimensional space , etc .we consider the one - dimensional case , where is a scalar in a finite interval , chosen as ] . using the orthonormality of the pod basis ( [ pod_ortho_chap3 ] ) leads to an ode of the form note , that the galerkin projection preserves the structure of the nonlinearities of the original pde .we start with presenting the problem of stable model reduction in its general form , i.e. , without specifying a particular type of pde . to this end, we highlight the dependence of the general pde ( [ general_pde_chap3 ] ) , on a single physical parameter by the parameter is assumed to be critical for the stability and accuracy of the model ; changing the parameter can either make the model unstable , or inaccurate for prediction . as an example , since we are interested in fluid dynamical problems , we use to denote a viscosity coefficient .the corresponding reduced order pod model is takes the form ( [ pod_proj_chap3 ] ) and ( [ rom2_chap3 ] ) : as we explained earlier , the issue with this ` simple ' galerkin pod rom ( denoted pod rom - g ) is that the norm of might become unbounded over a finite time support , despite the fact that the solution of ( [ general_pde2_chap3 ] ) is bounded .one of the main ideas behind the closure models approach is that the viscosity coefficient in ( [ rom3_chap3 ] ) can be substituted by a virtual viscosity coefficient , whose form is chosen to stabilize the solutions of the pod rom ( [ rom3_chap3 ] ) .furthermore , a penalty term is added to the original ( pod ) rom - g , as follows the term is chosen depending on the structure of to stabilize the solutions of ( [ rom41_chap3 ] ) .for instance , one can use the cazemier penalty model described in .the following closure models were introduced in for the case of burgers equations .we present them in a general framework , since similar closure terms could be used on other pde models .these examples illustrate the principles behind closure modelling , and motivate our proposed method . throughout, denotes the total number of modes retained in the rom .we recall below some closure models proposed in the literature , which we will use later to present the main result of this paper . 
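before recalling those closure models , the pod / galerkin step described above admits a compact numerical sketch ; the snapshot data below is synthetic and the mode count is an arbitrary illustrative choice , not the paper's burgers solver .

```python
import numpy as np

rng = np.random.default_rng(2)

# snapshot matrix: column j holds the discretized solution at time t_j
nx, nt = 400, 120
snapshots = rng.standard_normal((nx, 5)) @ rng.standard_normal((5, nt))

# pod modes = left singular vectors of the mean-subtracted snapshots
u_mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - u_mean, full_matrices=False)
r = 3                                  # number of retained modes
phi = U[:, :r]                         # orthonormal pod basis

# galerkin coefficients of one snapshot: a_j = <u - u_mean, phi_j>
a = phi.T @ (snapshots[:, [0]] - u_mean)
u_rom = u_mean + phi @ a               # rank-r reconstruction

err = np.linalg.norm(snapshots[:, [0]] - u_rom) / np.linalg.norm(snapshots[:, [0]])
print(err)                             # nonzero, since r < true rank of the data
```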
here we describe closure models which are based on constant stabilizing eddy viscosity coefficients . -*_rom - h model : _ * the first eddy viscosity model , known as the heisenberg rom ( rom - h ) , is simply given by the constant viscosity coefficient where is the nominal value of the viscosity coefficient in ( [ general_pde2_chap3 ] ) , and is the additional constant term added to compensate for the damping effect of the truncated modes . -*_rom - r model : _ * this model is a variation of the first one , introduced in . in this model , is dependent on the mode index , and the viscosity coefficients for each mode are given by with being the viscosity amplitude , and the mode index . -*_rom - rq model : _ * this model , proposed in , is a quadratic version of the rom - r , which we refer to as rom - rq . it is given by the coefficients where the variables are defined similarly to ( [ podromr_chap3 ] ) . -*_rom - rs model : _ * this model , proposed in , is a square - root version of the rom - r ; we use rom - rs to refer to it . it is given by where the coefficients are defined as in ( [ podromr_chap3 ] ) . -*_rom - t model : _ * this model , known as the spectral vanishing viscosity model , is similar to the rom - r in the sense that the amount of induced damping changes as a function of the mode index . this concept has been introduced by tadmor in , and so these closure models are referred to as rom - t . these models are given by where denotes the mode index , and is the index of modes above which a nonzero damping is introduced . -*_rom - sk model : _ * this model , introduced by sirisup and karniadakis in , falls into the class of vanishing viscosity models . we use rom - sk to refer to it ; it is given by -*_rom - clm model : _ * this model has been introduced in , and is given by where is the mode index , and are positive gains ( see for some insight about their tuning ) . several ( time and/or space ) varying viscosity terms have been proposed in the literature . for instance , describes the smagorinsky nonlinear viscosity model . however , that model requires online computation of some nonlinear closure terms at each time step , which in general makes it computationally expensive . we report here the nonlinear viscosity model presented in , which is nonlinear and a function of the rom state variables . this requires explicitly rewriting the rom model ( [ rom3_chap3 ] ) to separate the linear viscous term as follows where represents a constant viscosity damping matrix , and the term represents the remainder of the rom model , i.e. , the part without damping . + based on equation ( [ rom4_chap3 ] ) , we can write the nonlinear eddy viscosity model , denoted by , as where is the amplitude of the closure model , the are the diagonal elements of the matrix , and are defined as follows where the are the selected pod eigenvalues ( as defined in section [ basic_pod_chap3 ] ) . compared to the previous closure models , the nonlinear term does not just act as a viscosity , but is rather added directly to the right - hand side of the reduced order model ( [ rom4_chap3 ] ) , as an additive stabilizing nonlinear term . the stabilizing effect has been analyzed in based on the decrease over time of an energy function along the trajectories of the rom solutions , i.e. , a lyapunov - type analysis . all these closure models share several characteristics and a common challenge , among others : the selection and tuning of their free parameters , such as the closure model amplitudes .
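for concreteness , the constant mode - indexed profiles above can be sketched as follows ; since the formulas themselves are not reproduced here , the exact exponents below are taken from commonly used eddy - viscosity parameterizations and should be read as illustrative assumptions .

```python
import numpy as np

def modal_viscosity(nu0, nu_e, r, kind="R"):
    """mode-dependent viscosity coefficients nu_k, k = 1..r (assumed shapes).

    "H" : constant amplitude (heisenberg-type, rom-h)
    "R" : linear growth with mode index (rom-r-like)
    "RQ": quadratic growth (rom-rq-like)
    "RS": square-root growth (rom-rs-like)
    """
    k = np.arange(1, r + 1)
    shape = {"H": np.ones(r),
             "R": k / r,
             "RQ": (k / r) ** 2,
             "RS": np.sqrt(k / r)}[kind]
    return nu0 + nu_e * shape

print(modal_viscosity(nu0=0.01, nu_e=0.05, r=5, kind="RQ"))
```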
in the next section ,we show how es can be used to auto - tune the closure models free coefficients and optimize their stabilizing effect .as mentioned in , the tuning of the closure model amplitude is important to achieve an optimal stabilization of the rom . to achieve optimal stabilization, we use model - free es optimization algorithms to tune the coefficients of the closure models presented in section [ closure_models_chap3 ] .the advantage of using es is the auto - tuning capability that such algorithms allow .moreover , in contrast to manual off - line tuning approaches , the use of es allows us to constantly tune the closure model , even in an online operation of the system .indeed , es can be used off - line to tune the closure model , but it can also be connected online to the real system to continuously fine - tune the closure model coefficients , such as the amplitudes of the closure models .thus , the closure model can be valid for a longer time interval compared to the classical closure models with constant coefficients , which are usually tuned off - line over a fixed finite time interval .we start by defining a suitable learning cost function .the goal of the learning ( or tuning ) is to enforce lagrange stability of the rom model ( [ rom3_chap3 ] ) , and to ensure that the solutions of the rom ( [ rom3_chap3 ] ) are close to the ones of the original pde ( [ general_pde2_chap3 ] ) .the later learning goal is important for the accuracy of the solution .model reduction works toward obtaining a simplified ode model which reproduces the solutions of the original pde ( the real system ) with much less computational burden , i.e. , using the lowest possible number of modes. however , for model reduction to be useful , the solution should be accurate .we define the learning cost as a positive definite function of the norm of the error between the approximate solutions of ( [ general_pde2_chap3 ] ) and the rom ( [ rom3_chap3 ] ) , as follows where denotes the learned parameter , and is a positive definite function of .note that the error could be computed off - line using solutions of the rom ( [ rom3_chap3 ] ) , and approximate solutions of the pde ( [ general_pde2_chap3 ] ) . the error could be also computed online where the is obtained from solving the model ( [ rom3_chap3 ] ) , but the is obtained from real measurements of the system at selected space points .a more practical way of implementing the es - based tuning of , is to start with an off - line tuning of the closure model .then , the obtained rom , i.e. , the computed optimal value of , is used in an online operation of the system , e.g. , control and estimation .one can then fine - tune the rom online by continuously learning the best value of at any give time during the operation of the system .to derive formal convergence results , we use some classical assumptions of the solutions of the original pde , and on the learning cost function .[ pdestab_assumption1_chap3 ] the solutions of the original pde model ( [ general_pde2_chap3 ] ) , are assumed to be in , .[ robustmesass1_pdestab_chap3 ] the cost function in ( [ q_pde2_chap3 ] ) has a local minimum at .[ robustmesass2_pdestab_chap3 ] the cost function in ( [ q_pde2_chap3 ] ) is analytic and its variation with respect to is bounded in the neighborhood of , i.e. , , where denotes a compact neighborhood of . 
under these assumptions ,the following lemma follows .[ pdestab_lemma1_chap3 ] consider the pde ( [ general_pde2_chap3 ] ) , under assumption [ pdestab_assumption1_chap3 ] , together with its rom model ( [ rom3_chap3 ] ) , where the viscosity coefficient is substituted by .let take the form of any of the closure models in ( [ podromh_chap3 ] ) to ( [ podromclm_chap3 ] ) , where the closure model amplitude is tuned based on the following es algorithm where , , large enough , and is given by ( [ q_pde2_chap3 ] ) . under assumptions [ robustmesass1_pdestab_chap3 ] , and [ robustmesass2_pdestab_chap3 ] , the norm of the distance w.r.t .the optimal value of , admits the following bound where , and the learning cost function approaches its optimal value within the following upper - bound where .based on assumptions [ robustmesass1_pdestab_chap3 ] , and [ robustmesass2_pdestab_chap3 ] , the extremum seeking nonlinear dynamics ( [ pdestab_mes_1_chap3 ] ) , can be approximated by linear averaged dynamics ( using averaging approximation over time , ( * ? ? ?* , definition 1 ) ) .furthermore , there exist , such that for all , the solution of the averaged model is locally close to the solution of the original es dynamics , and satisfies ( ) with . moreover , since is analytic it can be approximated locally in with a quadratic polynomial , e.g. , taylor series up to second order , which leads to ( ) based on the above , we can write which implies the cost function upper - bound is easily obtained from the previous bound , using the fact that is locally lipschitz , with the lipschitz constant . when the influence of the linear terms of the pde are dominant , e.g. , in short - time scales , closure models based on constant linear eddy viscosity coefficients can be a good solution to stabilize roms and preserve the intrinsic energy properties of the original pde .however , in many cases with nonlinear energy cascade , these closure models are unrealistic ; linear terms can not recover the nonlinear energy terms lost during the rom computation .for this reason , many researchers have tried to come up with nonlinear stabilizing terms for instable roms .an example of such a nonlinear closure model is the one given by equation ( [ smagorinsky_chap3 ] ) , and proposed in based on finite - time thermodynamics ( ftt ) arguments and in based on scaling arguments .based on the above , we introduce here a combination of both linear and nonlinear closure models .the combination of both models can lead to a more efficient closure model .in particular , this combination can efficiently handle linear energy terms , that are typically dominant for small time scales and handle nonlinear energy terms , which are typically more dominant for large time - scales and in some specific pdes / boundary conditions .furthermore , we propose to auto - tune this closure model using es algorithms , which provides an automatic way to select the appropriate term to amplify. it can be either the linear part or the nonlinear part of the closure model , depending on the present behavior of the system , e.g. 
, depending on the test conditions . we summarize this result in the following lemma . [ pdestab_lemma2_chap3 ] consider the pde ( [ general_pde2_chap3 ] ) , under assumption [ pdestab_assumption1_chap3 ] , together with its stabilized rom model where the linear viscosity coefficient is substituted by chosen from any of the constant closure models ( [ podromh_chap3 ] ) to ( [ podromclm_chap3 ] ) . the closure model amplitudes are tuned based on the following es algorithm where , , large enough , and is given by ( [ q_pde2_chap3 ] ) , with . under assumptions [ robustmesass1_pdestab_chap3 ] and [ robustmesass2_pdestab_chap3 ] , the norm of the vector of distances w.r.t . the optimal values of admits the following bound where , and the learning cost function approaches its optimal value within the following upper - bound where . we will skip the proof of this lemma , since it follows the same steps as the proof of lemma [ pdestab_lemma1_chap3 ] . as an example application of our approach , we consider the coupled burgers equation ( e.g. , see ) , of the form where represents the temperature , represents the velocity field , is the coefficient of the thermal expansion , the heat diffusion coefficient , and the viscosity ( inverse of the reynolds number ) . the boundary conditions are imposed as where are positive constants , and and denote the left and right boundary , respectively . the initial conditions are imposed as $t(0,x)=t_{0}(x)\in l^{2}([0,1])$ , and similarly for the velocity field ; and are specified below . following a galerkin projection onto the subspace spanned by the pod basis functions , the coupled burgers equation is reduced to a pod rom with the following structure ( e.g. , see ) where matrix is due to the projection of the forcing term , matrix is due to the projection of the boundary conditions , matrix is due to the projection of the viscosity damping term , matrix is due to the projection of the thermal coupling and the heat diffusion terms , and the matrix is due to the projection of the gradient - based terms , and . the notations ( ) , ( ) stand for the space basis functions and the time projection coordinates , for the velocity and the temperature , respectively . the terms represent the mean values ( over time ) of and , respectively . we first test the stabilization performance of lemma [ pdestab_lemma1_chap3 ] . we test the auto - tuning results of the es learning algorithm when tuning the amplitudes of linear closure models . more specifically , we test the performance of the heisenberg closure model given by ( [ podromh_chap3 ] ) , when applied in the context of lemma [ pdestab_lemma1_chap3 ] . we consider the coupled burgers equation ( [ burgers2_chap3 ] ) , with the parameters , trivial boundary conditions , a simulation time - length , and zero forcing , . we use pod modes for both variables ( temperature and velocity ) . for the choice of the initial conditions , we follow , where the simplified burgers equation has been used in the context of pod rom stabilization . indeed , in the authors propose two types of initial conditions for the velocity variable , which led to instability of the nominal pod rom , i.e. , the basic galerkin pod rom ( pod rom - g ) without any closure model . accordingly , we choose piecewise initial conditions for the velocity and the temperature that vanish for $x\in\,]0.5,\;1]$ . we apply lemma [ pdestab_lemma1_chap3 ] , with the heisenberg linear closure model given by ( [ podromh_chap3 ] ) .
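before recalling the discretized update used in this test , here is a minimal , self - contained sketch of a dither - based es loop of this general type ; the gains , dither frequency , and toy cost below are our own illustrative choices , not the values used in the experiments .

```python
import numpy as np

def es_minimize(cost, theta0, k_gain=1.0, a=0.3, omega=5.0, dt=0.05, n_iter=3000):
    """discrete-time sinusoidal-dither extremum seeking (minimization)."""
    theta_hat = theta0
    for n in range(n_iter):
        t = n * dt
        dither = a * np.sin(omega * t)
        q = cost(theta_hat + dither)              # probe the learning cost
        theta_hat -= k_gain * dt * dither * q     # demodulate and integrate
    return theta_hat

# toy quadratic cost with minimum at theta* = 2; es settles near it
print(es_minimize(lambda th: (th - 2.0) ** 2, theta0=0.0))  # ~2 for these gains
```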
the closure model amplitude tuned using a discretized version of the es algorithm ( [ pdestab_mes_1_chap3 ] ) , given by where , and denotes the learning iterations index .we use the parameters values : ,\;\omega_{1}=15\;[\frac{rad}{sec}] ] , and a similar cost function as in test 1 .we show the profile of the learning cost function over the learning iterations in figure [ burgersstab_q_test1_chap3 ] .we can see a quick decrease of the cost function within the first iterations .this means that the es manages to improve the overall solutions of the pod rom very fast .the associated profiles for the two closure models amplitudes learned values and are reported in figures [ burgersstab_hatnue_test1_chap3 ] , and [ burgersstab_hatnunl_test1_chap3 ] .we can see that even though the cost function value drops quickly , the es algorithm continues to fine - tune the values of the parameters , over the iterations , and they reach eventually reach an optimal values of , and .we also show the effect of the learning on the pod rom solutions in figure [ burgersstab_lromsol_test1_chap3 ] , and figure [ burgersstab_le_test1_chap3 ] , which by comparison with figures [ burgersstab_lfromsol_test1_chap3 ] , and [ burgersstab_nome_test1_chap3 ] , show a clear improvement of the pod rom solutions with the es tuning of the closure models amplitudes .we also notice an improvement of the rom solutions compared to the linear closure model stabilization test results of figure [ burgersstab_le_test1lin_chap3 ] .specifically , we see that the temperature error in the case of the closure model of lemma [ pdestab_lemma2_chap3 ] , is smaller than the one obtained with the linear closure model of lemma [ pdestab_lemma1_chap3 ] .we do not currently have a formal proof .however , we believe that the improvement is due to the fact that in the closure model of lemma [ pdestab_lemma2_chap3 ] , we are using both the stabilizing effect of the linear viscosity closure model term and the stabilizing effect of the nonlinear closure model term .in this work , we explore the problem of stabilization of reduced order models for partial differential equations , focusing on the closure model - based rom stabilization approach . it is well known that tuning the closure models gains is an important part in obtaining good stabilizing performances .thus , we propose a learning es - based auto - tuning method to optimally tune the gains of linear and nonlinear closure models , and achieve an optimal stabilization of the rom .we validate our using the coupled burgers equation as an example , demonstrating significant gains in error performance .the results are encouraging .we defer to future publications verifying our approach on more challenging higher dimensional cases .our results also raise the prospect of developing new nonlinear closure models , together with their auto - tuning algorithms using extremum seeking , as well as other machine learning techniques .
we present results on the stabilization of reduced order models ( roms ) of partial differential equations using learning . stabilization is achieved via closure models for roms , where we use a model - free extremum seeking ( es ) dither - based algorithm to learn the best closure model parameters for optimal rom stabilization . we first propose to auto - tune linear closure models using es , and then extend the results to a closure model combining linear and nonlinear terms , for better stabilization performance . the coupled burgers equation is employed as a test - bed for the proposed tuning method .
in this paper , we consider irregular low - density parity - check ( ldpc ) codes with a degree distribution pair .the bit error probability of ldpc codes over the binary erasure channel ( bec ) under belief propagation ( bp ) decoding is determined by three quantities ; the block length , the erasure probability and the iteration number .let denote the bit error probability of ldpc codes with block length over the bec with erasure probability at iteration number .for infinite block length , can be calculated easily by density evolution and there exists threshold parameter such that for and for . despite the ease of analysis for infinite block length , finite - length analysis is more complex . for finite block length and infinite iteration number , can be calculated exactly by _ stopping sets _analysis . for finite block length and finite iteration number , can also be calculated exactly in a combinatorial way .the exact finite - length analysis becomes computationally challenging as block length increasing .an alternative approach which approximates the bit error probability is therefore employed . for asymptotic analysis of the bit error probability, two regions of can be distinguished in the error probability ; the high error probability region called _ waterfall _ and the low error probability region called _error floor_. in terms of block length , they correspond to the small block length region and the large block length region. this paper deals with the bit error probability for large block length both below and above threshold with finite iteration number . for infinite iteration number, the asymptotic analysis for error floor was shown by amraoui as following : as .this equation means that for ensembles with , is a good approximation of where is sufficiently large .our main result is following .+ _ for regular ldpc codes with finite iteration number _ _ as , where and and are given by theorem [ beta ] and theorem [ gamma ] ._ this analysis is the first asymptotic analysis for finite iteration number . 0 on general channel bms( ), there is a threshold parameter such that for and for .montanari proved that where iteration number is chosen as `` the best iteration number '' then as .the proof assume that is much smaller than but the formula well match at simulation for any .there is more simple but interesting problem that how fast approach to as block length increasing . is described perfectly by density evolution and it is well known that as . our main result is to give exact expression of for regular ensembles and asymptotic expression for irregular ensembles on the bec the error probability of a bit in fixed tanner graph at the -th iteration is determined by neighborhood graph of depth of the bit . since the probability of neighborhood graphs which have cycles is we focus on the neighborhood graphs with no cycle and single cycle for calculating the coefficient of in the bit error probability .let denote the coefficient of in the bit error probability due to cycle - free neighborhood graphs and denote the coefficient of in the bit error probability due to single - cycle neighborhood graphs .then the coefficient of in the bit error probability can be expressed as following : . can be calculated efficiently for irregular ensembles and can be expressed simply for regular ensembles .0 where block length tends to infinity , the neighborhood graph takes no cycle with probability . 
in other words, for any neighborhood graph with some cycles and for any cycle - free neighborhood graph where is degree of the root node the expected probability of erasure message for infinite block length can be calculated by density evolution .let denote erasure probability of messages into check node at the -th iteration and denote erasure probability of messages into variable node at the -th iteration for infinite block length .then 0 the coefficient of in the bit error probability due to single - cycle neighborhood graphs can be calculated using density evolution .[ gamma ] for irregular ldpc ensembles with a degree distribution pair are calculated as following where , and is eq .( [ f12 ] ) , ( [ f34 ] ) and ( [ f56 ] ) , respectively . the complexity of the computation of is in time and in space . can be expressed simply for regular ensembles since of uniqueness of the cycle - free neighborhood graph .[ beta ] for the -regular ldpc ensemble is expressed as following the probability of the unique cycle - free neighborhood graph of depth is where .the coefficient of in the probability is and the error probability of the root node is .then we obtain the statement of the theorem . due to the above theorems , for regular ensembles can be calculated efficiently .0 for irregular ldpc ensemble is bounded as following where assume .then for any , there exists some iteration number such that for any assume and then assume then there exists some and such that assume .then for any , there exists some iteration number such that for any although if then converges to and converges to as , if then and grow exponentially as due to the above proposition .thus convergence of is non - trivial . in practiceit is necessary to use high precision floating point tools for calculating .the bit error probability of an ensemble with iteration number is defined as following : where denotes a set of all neighborhood graphs of depth , denotes the probability of the neighborhood graph and denotes the error probability of the root node in the neighborhood graph .the coefficient of in the bit error probability with iteration number due to single - cycle neighborhood graphs is defined as following : where denotes a set of all single - cycle neighborhood graphs of depth .first we consider the bit error probability of the root node of the neighborhood graph in fig .the variable nodes in depth 1 have degree to .then the coefficient of in is given as the error probability of the message from the channel to the root node is .the error probabilities of the message from the left check node , the right check node and the middle check node to the root node are , and , respectively .then the error probability of the root node is given as the coefficient of term of the bit error probability due to is given as after summing out the left and right subgraphs , after summing out degrees , and , at last , after summing out the root node and the middle check node , the coefficient of in the bit error probability for iteration number due to neighborhood graphs with the right graph type in fig .[ exm ] is given as in the same way .notice that is the coefficient of of the probability of neighborhood graphs with the right graph type in fig .[ exm ] .single - cycle neighborhood graphs can be classified to six types in fig .summing up the bit error probability due to all these types , we obtain .left two types correspond to , middle two types correspond to and right two types correspond to . 
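the quantities above are built on the density evolution recursion referred to earlier ; for reference , here is a minimal sketch of the standard bec recursion for a regular ensemble ( the notation is our own ) .

```python
def density_evolution(eps, dv, dc, n_iter):
    """bit erasure probability of a (dv, dc)-regular ensemble over bec(eps)."""
    x = eps                                   # variable-to-check erasure prob.
    for _ in range(n_iter):
        y = 1.0 - (1.0 - x) ** (dc - 1)       # check-to-variable update
        x = eps * y ** (dv - 1)               # variable-to-check update
    return eps * (1.0 - (1.0 - x) ** (dc - 1)) ** dv

# (3,6)-regular ensemble, threshold ~ 0.4294:
print(density_evolution(0.40, 3, 6, 100))     # below threshold: tends to 0
print(density_evolution(0.45, 3, 6, 100))     # above: bounded away from 0
```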
0 is a question that _ how large block length is necessary for using for a good approximation of . it is therefore interesting to compare with numerical simulations . in the proof ,we count only the error probability due to cycle - free neighborhood graphs and single - cycle neighborhood graphs .thus it is expected that the approximation is accurate only at large block length where the probability of the multicycle neighborhood graphs is sufficiently small .contrary to the expectation , the approximation is accurate already at small block length in fig .[ reg23 ] .although there is a large difference in small block length near the threshold , the approximation is accurate at block length 801 which is not large enough . for the ensembles with ,the approximation is not accurate at far below the threshold in fig .[ reg36 ] .since decreases to as for the ensembles the higher order terms caused by multicycle stopping sets has a large contribution to the bit error probability .it is expected that the approximation is even accurate for the ensembles from which stopping sets with small number of cycles are expurgated .the limiting value of , is also interesting . for , calculate where sufficiently large in fig .[ lim23 ] and fig .[ lim36 ] . for the -regular ensemble below the threshold , and take almost the same value .it implies that below threshold takes the same value at two limits ; then and then . for the ensembles with , is almost where is smaller than threshold . at last , notice that takes non - trivial values slightly below threshold .for the -regular ensemble , is negative at , positive at and has absolute value which is too small to be measured at . at . with numerical simulations for the -regular ensemble with iteration number 20 .the dotted curves are approximation and the solid curve is density evolution .block lengths are 51 , 102 , 201 , 402 and 801 .the threshold is 0.5 . ] with numerical simulations for the -regular ensemble with iteration number 5 .the dotted curves are approximation and the solid curve is density evolution .block lengths are 512 , 2048 and 8192 .the threshold is 0.42944 . ]although the asymptotic analysis of the bit error probability for finite block length and finite iteration number given in this paper is very accurate at -regular , much work remains to be done .first there remains the problem to computing for irregular ensembles .it would also be interesting to generalize this algorithm to other ensembles and other channels . in the binary memoryless symmetric channel ( bms ) parametrized by , we consider instead of since of lack of monotonicity .the asymptotic analysis of the bit error probability with the best iteration number under bp decoding was shown by montanari for small as following : as , where and is a random variable corresponding to the sum of the i.i.d .channel log - likelihood ratio .it implies that if , the asymptotic bit error probability under bp decoding is equal to that of maximum likelihood ( ml ) decoding .although the condition of the proof in implies the convergence of values corresponding to and in this paper , in general if , they do not converge , where is bhattacharyya constant . although the condition of is strong , the approximation is very accurate for all smaller than threshold .we have the problem to prove the convergence of for the bec and the bms for any .a iteration number is also important .the approximation is not accurate for too large iteration number . 
a sufficient ( and necessary ) iteration number for a given block length and a ensemble is very important to improve the analysis in this paper . 0although the asymptotic analysis of the bit error probability for finite block length and finite iteration number given in this paper is very accurate at -regular , much work remains to be done .* in this paper , the approximation has been given only for regular ensembles .we have a problem that how to compute for irregular ensembles . * in this paper , the approximation is not accurate for ensembles with and smaller than threshold .we have a problem that how to give higher order terms of the bit error probability with both finite and infinite iteration number . * in this paper , the approximation has been given only for the bit error probability .let denote the block error probability for block length , erasure probability and iteration number .the block error probability for infinite block length and infinite iteration number is known as following : we have a problem what is . * concerning the previous problem, we have a problem how quickly the block error probability converge to limiting values as block length increasing for both finite and infinite iteration number .* apply the same analysis in this paper to other ensembles .* in this paper , the approximation has been given only for the bec .in the binary memoryless symmetric channel ( bms ) parametrized by , we consider instead of since of lack of monotonicity .the asymptotic analysis of the bit error probability with the best iteration number under bp decoding was shown by montanari for small as following : as , where and is a random variable corresponding to the sum of the i.i.d .channel log - likelihood ratio .it implies that if , the asymptotic bit error probability under bp decoding is equal to that of maximum likelihood ( ml ) decoding .although the condition of the proof in implies the convergence of values corresponding to and in this paper , in general if , they do not converge , where is bhattacharyya constant . although the condition of is strong , the approximation is very accurate for all smaller than threshold .we have problems how about finite iteration number , higher order terms , the block error probability and other ensembles for the bms . * the goal of finite - length analysis is to construct good codes ( e.g. low bit / block error probability , high rate , low block length , low maximum degree , low complexity of the encoding / decoding etc . ) .20 r. g. gallager , _ low - density parity - check codes _ , mit press , 1963 m. luby , m. mitzenmacher , a. shokrollahi , d. a. spielman , and v. stemann , `` practical loss - resilient codes , '' in _ proceedings of the 29th annual acm symposium on theory of computing , _ pages 150 - 159 , 1997 t. richardson and r. urbanke , `` the capacity of low - density parity check codes under message - passing decoding , '' _ ieee trans .inform . theory _2 , pp.599 - 618 , feb . 2001 t. richardson and r. urbanke , `` finite - length density evolution and the distribution of the number of iterations for the binary erasure channel '' a. montanari `` the asymptotic error floor of ldpc ensembles under bp decoding , '' _44th allerton conference on communications , control and computing , monticello _ , october 2006 c. di , d. proietti , t. richardson , e. telatar and r. urbanke , `` finite length analysis of low - density parity - check codes , '' _ ieee trans .inform , theory _ , vol .48 , no . 6 ,pp.1570 - 1579 , jun .2002 a. orlitsky , k. 
viswanathan , and j. zhang , `` stopping set distribution of ldpc code ensembles , '' _ ieee trans . inform . theory _ , vol . 3 , pp . 929 - 953 , mar . 2005 . a. amraoui , `` asymptotic and finite - length optimization of ldpc codes , '' ph.d . thesis , lausanne , 2006 . t. richardson and r. urbanke , _ modern coding theory _ , draft available at http://lthcwww.epfl.ch/index.php
we consider communication over the binary erasure channel ( bec ) using low - density parity - check ( ldpc ) codes and belief propagation ( bp ) decoding . the bit error probability for infinite block length is known from density evolution , and it is well known that the difference between the bit error probability at a finite iteration number for finite block length and for infinite block length is asymptotically , where is a specific constant depending on the degree distribution , the iteration number and the erasure probability . our main result is an efficient algorithm for calculating for regular ensembles . the approximation using is accurate for -regular ensembles even at small block lengths .
conventional optics is a highly developed subject , but has limitations of resolution due to the finite wavelength of light .it has been thought impossible to obtain images with details finer than this limit .recently it has been shown that a ` perfect lens ' is in principle possible and that arbitrarily fine details can be resolved in an image provided that the lens was constructed with sufficient precision .the prescription is simple : take a slab of material , thickness , and with electrical permittivity and magnetic permeability given by , given that these conditions are realised , the slab will produce an image of any object with perfect resolution .the key to this remarkable behaviour is that the refractive index of the slab is , it was veselago in 1968 who first realised that negative values for would result in a negative refractive index and he also pointed out that such a negative refractive material ( nrm ) would act as a lens but it took more than 30 years to realise the concept of negative refractive index at microwave frequencies .it was only in recent times that the lens s remarkable property of perfect resolution was noted .for the first time there is the possibility of manipulating the near field to form an image .the physics of negative refractive index has caught the imagination of the physics community as evidenced by the publications in the past two years although the conditions for a perfect lens are simple enough to specify , realising them is in practice rather difficult .there are two main obstacles .first the condition of negative values for also implies that these quantities depend very sensitively on frequency so that the ideal condition can only be realised at a single carefully selected frequency .second it is very important that absorption , which shows up as a positive imaginary component of or , is kept to a very small value .resolution of the lens degrades rapidly with increasing absorption .it is the objective of this paper to explore how the effects of absorption can be minimised .let us probe a little deeper into the operation of the perfect lens .any object is visible because it emits or scatters electromagnetic radiation .the problem of imaging is concerned with reproducing the electro - magnetic field distribution of objects in a two dimensional ( 2-d ) plane in the 2-d image plane .the electromagnetic field in free space emitted or scattered by a 2-d object ( x - y plane ) can be conveniently decomposed into the fourier components and and polarization defined by : ,\ ] ] where the source is assumed to be monochromatic at frequency , and is the speed of light in free space .obviously when we move out of the object plane the amplitude of each fourier component changes ( note the z - dependence ) and the image becomes blurred .the electromagnetic field consists of a radiative component of propagating modes with real and a near - field component of non - propagating modes with imaginary whose amplitudes decay exponentially with distance from the source . 
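this split between propagating and evanescent components is easy to see numerically ; in the sketch below the wavelength and the propagation distance are arbitrary illustrative choices .

```python
import numpy as np

c = 3.0e8
wavelength = 500e-9                       # illustrative
k0 = 2 * np.pi / wavelength               # free-space wavenumber omega/c

for kt_over_k0 in (0.5, 0.9, 1.5, 5.0):
    kt = kt_over_k0 * k0
    kz = np.sqrt(complex(k0**2 - kt**2))  # becomes imaginary once kt > k0
    amp = np.exp(1j * kz * 20e-9)         # amplitude change over z = 20 nm
    kind = "propagating" if kz.imag == 0 else "evanescent"
    print(f"kt/k0 = {kt_over_k0:4.2f}  {kind:12s}  |amp| = {abs(amp):.3f}")
```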
provided that is real , , it is only the phase that changes with z and a conventional lens is designed to correct for this phase change .the evanescent near - field modes are the high - frequency fourier components describing the finest details in the object and to restore their amplitudes in the image plane requires amplification , which is of course beyond the power of a conventional lens and hence the limitations to resolution .thus the perfect lens performs the dual function of correcting the phase of the radiative components as well as amplifying the near - field components bringing them both together to make a perfect image and thereby eliminating the diffraction limit on the image resolution . in general the conditions under which this perfect imaging occurs are : where and are the dielectric permittivity and magnetic permeability of the nrm slab , and and are the dielectric permittivity and magnetic permeability of the surrounding medium respectively . an important simplification of these conditions can be had in the case that _ all _ length scales are much less than the wavelength of light . under these circumstanceselectric and magnetic fields decouple : the p - polarised component of light becomes mainly electric in nature , and the s - polarised component mainly magnetic . therefore in the case of p - polarised lightwe need only require that , and the value of is almost irrelevant .this is a welcome relaxation of the requirements especially at optical frequencies where many materials have a negative values for , but show no magnetic activity .we shall concentrate our investigations on these extreme near field conditions and confine our attentions to p - polarised light . in section-2, we investigate the properties of a layered structure comprising extremely thin slabs of silver and show that layered structures are less susceptible to the degrading effects of absorption , than are single element lenses . in section-3 , we present some detailed calculations of how the multilayer lens transmits the individual fourier components of the image .reference to figure 2 shows that extremely large amplitudes of the electric field occur within the lens when the near field is being amplified .this is especially true for the high frequency fourier components which give the highest resolution to the image .unless the lens is very close to the ideal lossless structure , these large fields will result in dissipation which will kill the amplifying effect .however there is a way to restructure the lens to ameliorate the effects of dissipation .we observe that in the ideal lossless case we can perfectly well divide the lens into separate layers each one making its contribution to the amplification process ( shamomina et al have made a similar observation and zhang et al . have considered a similar system ) .provided that the total length of vacuum between the object and image is equal to the total length of lens material , the lens will still work and produce a perfect image .however this subdivision of the lens makes a big difference to how the lens performs when it is less than ideal and absorption is present .the point is that by distribution the amplification , the fields never grow to the extreme values that they do when the lens is a single slab and therefore the dissipation will be much less .figure 3 illustrates this point .first let us estimate the resolution of a lens constituted as a single slab . 
according to our original calculations in the near field limit the transmission coefficient through the lens for each fourier component is , where . obviously when , the power of the lens to amplify begins to fall away . fourier components of higher spatial frequency do not contribute and hence the resolution is limited to . the easiest way to investigate the properties of a layered system is to recognise that , provided that the slices are thin enough , it will behave as an effective anisotropic medium whose properties we calculate as follows . applying a uniform displacement field , , perpendicular to the slices gives electric fields of and in the positive dielectric medium and in the negative material of the lens respectively . therefore the average electric field is given by , where is the effective dielectric function for fields acting along the -axis . by considering an electric field along the -axis we arrive at , where is the effective dielectric function for fields acting along the -axis . we have assumed for simplicity that the thickness of each material component is the same , but it is also possible to have unequal thicknesses . now under the perfect lens conditions , , we have . thus the stack of alternating extremely thin layers of negative and positive refractive media in the limiting case of layer thickness going to zero behaves as a highly anisotropic medium . radiation propagates in an anisotropic medium with the following dispersion , and hence for the perfect lens conditions it is always true that . each fourier component of the image passes through this unusual medium without change of phase or attenuation . it is as if the front and back surfaces of the medium were in immediate contact . here we have a close analogy with an optical fibre bundle where each fibre corresponds to a pixel and copies the amplitude of the object pixels to the image pixels without attenuation and with the same phase change for each pixel , preserving optical coherence . our layered system performs exactly the same function with the refinement that in principle the pixels are infinitely small , and the phase change is zero . in figure 5 we illustrate this point with an equivalent system : an array of infinitely conducting wires embedded in a medium where . in the latter case it is more obvious that an image propagates through the system without distortion . indeed in the trivial zero frequency limit the system simply connects object to image point by point . coming back to our point that the layered system reduces the effect of absorption , we estimate the transmission for p - polarized light through such a system in the near field limit as . evidently for small values of the transmission coefficient is unity and these fourier components contribute perfectly to the image , but for large values of transmission is reduced . we estimate the resolution limit to be . therefore the smallest detail resolved by the lens decreases linearly with decreasing absorption ( ) . in contrast the original single slab lens had a much slower improvement of resolution , varying only inversely as . thus it appears to be a case of _ two lenses are better than one but many lenses are the best of all _ . in the previous section we gave some qualitative arguments as to the properties of metal - dielectric multilayer stacks and it is clear that for p - polarized light in the quasi - static limit this structure would behave as a near - perfect ` fibre optic bundle ' .
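the effective - medium limit invoked above can be checked directly . the sketch below ( python ) applies the series and parallel averages derived for fields across and along the layers , and then the p - polarised dispersion relation of a uniaxial medium , to confirm that under the perfect lens condition every fourier component crosses the stack with essentially no phase change or attenuation . the permittivity values and the small loss are illustrative assumptions .

```python
import numpy as np

# equal thicknesses of permittivities eps1 (positive) and eps2 (negative):
# continuity of d_z gives a harmonic mean along z, continuity of e_x gives
# an arithmetic mean along x.
def effective_medium(eps1, eps2):
    eps_z = 2.0 / (1.0 / eps1 + 1.0 / eps2)   # fields along the z-axis
    eps_x = (eps1 + eps2) / 2.0               # fields along the x-axis
    return eps_x, eps_z

eps1 = 1.0 + 0j           # positive medium (vacuum, say)
eps2 = -1.0 + 0.001j      # near-perfect-lens negative slab, small loss
eps_x, eps_z = effective_medium(eps1, eps2)
print("eps_x =", eps_x, "  eps_z =", eps_z)

# p-polarised dispersion in the uniaxial medium:
#   kx^2 / eps_z + kz^2 / eps_x = (omega / c)^2
k0 = 1.0
for kx in (0.5, 2.0, 10.0):
    kz = np.sqrt((k0**2 - kx**2 / eps_z) * eps_x)
    print(f"kx = {kx:5.1f}  ->  kz = {kz:.3e}")
# in the lossless limit eps2 -> -eps1 we get eps_x -> 0 and |eps_z| -> inf,
# so kz -> 0 for every kx: no phase change and no attenuation.
```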
in the electrostatic ( magnetostatic ) limit of large , there is no effect of changing ( ) for the p(s)-polarization . the deviation from the quasi - static limit caused by the non - zero frequency of the electromagnetic wave would , however , not allow this decoupling . when the effects of retardation are included , a mismatch in the and from the perfect - lens conditions would always limit the image resolution and also leads to large transmission resonances associated with the excitation of coupled surface modes that could introduce artifacts into the image . for the negative dielectric ( silver ) lens , the magnetic permeability everywhere , and this is a large deviation from the perfect lens conditions . the dispersion of these coupled slab plasmon polaritons and their effects on the image transfer has been extensively studied in ref . . essentially , for a single slab of negative dielectric material which satisfies the conditions for the existence of a surface plasmon on both the interfaces , the two surface plasmon states hybridise to give an antisymmetric and a symmetric state , whose frequencies are detuned away from that of a single uncoupled surface state . the transmission as a function of the transverse wave - vector remains reasonably close to unity up to the resonant wave - vector for the coupled plasmon state , after which it decays exponentially with larger wavevectors . the secret for better image resolution is to obtain a flat transmission coefficient for as large a range of wave vectors as possible . this is possible by using a thinner slab , in which case the transmission resonance corresponding to a coupled slab mode occurs at a much larger . for the transfer of the image over useful distances , we would then have to resort to a layered system of very thin slabs of alternating positive and negative media . let us now consider a layered system consisting of thin slabs of silver ( negative dielectric constant ) and any other positive dielectric medium ( ) . since the dielectric constant of silver is dispersive , ( in ev ) , and the imaginary part can be taken to be reasonably constant in this frequency range , we can choose the frequency ( ) of the electromagnetic radiation so as to satisfy the perfect lens condition at the interfaces between the media ( ) . we use the transfer matrix method to compute the transmission through the layered medium as a function of the transverse wave - vector at a frequency at which the perfect lens condition is satisfied . we will denote by the number of slabs with negative dielectric constant in the alternating structure , each period consisting of a negative and positive slab as shown in figure 4 . now the total length of the system is , where is the period of the multilayer stack ( the negative and positive slabs being of equal thickness of ) . note that the total thicknesses of positive and negative dielectric media between the object plane and the image plane are also equal . the transmission across the multilayer system is shown in figure 6 , where the thickness of the individual slabs is kept constant , but the number of layers is increased , thereby increasing the total length of the system . we get divergences in the transmission at wave - vectors corresponding to the coupled plasmon resonances . the number of the resonances increases with the number of layers , corresponding to the number of surface modes at the interfaces .
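for readers who want to reproduce curves of this type , the following minimal transfer - matrix sketch ( python ) computes the p - polarised transmission coefficient of an alternating metal / dielectric stack as a function of the transverse wavevector . the characteristic - matrix formalism is the standard one ( e.g. born and wolf ) ; the permittivities , loss and thicknesses below are illustrative assumptions rather than the exact parameters used for the figures .

```python
import numpy as np

def layer_matrix(eps, d, kx, k0):
    kz = np.sqrt(eps * k0**2 - kx**2 + 0j)   # complex kz handles evanescence
    q = kz / eps                             # p-polarisation admittance
    c, s = np.cos(kz * d), np.sin(kz * d)
    return np.array([[c, -1j * s / q],
                     [-1j * q * s, c]])

def transmission(layers, kx, k0, eps_in=1.0, eps_out=1.0):
    m = np.eye(2, dtype=complex)
    for eps, d in layers:
        m = m @ layer_matrix(eps, d, kx, k0)
    q_in = np.sqrt(eps_in * k0**2 - kx**2 + 0j) / eps_in
    q_out = np.sqrt(eps_out * k0**2 - kx**2 + 0j) / eps_out
    denom = (m[0, 0] + m[0, 1] * q_out) * q_in + (m[1, 0] + m[1, 1] * q_out)
    return 2.0 * q_in / denom

k0 = 2 * np.pi / 356e-9          # assumed operating wavelength (m^-1)
eps_metal = -1.0 + 0.4j          # silver-like: re(eps) = -eps of dielectric
eps_diel = 1.0
d = 10e-9                        # slab thickness (assumed)
stack = [(eps_metal, d), (eps_diel, d)] * 4   # 4 bilayers

for kx in np.array([0.5, 2.0, 5.0, 10.0]) * k0:
    print(f"kx/k0 = {kx/k0:5.1f}   |t| = {abs(transmission(stack, kx, k0)):.3e}")
```

note that the characteristic matrix of each layer is even in the branch of the square root , so only the admittances of the bounding media need a consistent sign convention ; this makes the method robust for the evanescent components of interest here .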
for the system with the ( hypothetical ) lossless negative media , one notes that as we increase the number of layers , the transmission coefficient is almost constant and close to unity with increasing , until it passes through the set of resonances and decays exponentially beyond . the range of for which the transfer function is constant is independent of the total number of layers and depends only on the thickness of the individual layers , which sets the coupling strength for the plasmon states at the interfaces . in the presence of absorption in the negative medium , however , the decay is extremely fast for the system with larger , simply as a consequence of the larger amount of absorptive medium present . also note that the absorption removes all the divergences in the transmission . as noted by us in earlier publications , the absorption is actually vital in this system to prevent the resonant divergences which would otherwise create artifacts that dominate the image . next we keep the total length of the stack fixed and change the number of layers . in the lossless case , the range of for which there is effective amplification of the evanescent waves simply increases with reducing layer thickness , as can be seen in figure 7 . of course , the number of transmission resonances , which depends on the number of surface states , increases with the number of layers . with absorptive material , however , the transmission decays faster with for larger in the case of the thicker slabs ( 10 nm ) than in the case of the thinner slabs ( 5 nm ) . this reconfirms our analytical result that the effects of absorption would be less deleterious for the image resolution in the case of thinner layers . note that the total amount of absorptive material is the same in both cases . in any case , the absorption in the negative dielectric ( metal ) appears to set the ultimate limit on the image resolution in this case of the layered medium . we have noted earlier in ref . that the effects of absorption could be minimised by using a large dielectric constant , gaas say ( ) , for the positive medium and tuning to the appropriate frequency where the perfect lens condition is satisfied for the real part of the dielectric constant of the metal . in the case of silver , the imaginary part of the permittivity or the absorption is reasonably constant ( 0.4 ) over the frequency range of interest . hence , it is immediately seen that the fractional deviation from the perfect lens condition in the imaginary part is smaller when the real part of the permittivity is large , and hence the amplification of the evanescent waves becomes more effective . now we show the transmission obtained across a multilayer stack where and , corresponding to alternating slabs of silver and gaas , in figure 8 . we must first note that the wavelength of light at which the perfect lens condition for the permittivity of silver is satisfied is different in the two cases . using the empirical formula for the dispersion of silver , we obtain at 356 nm and at 578 nm . in figure 8 for the lossless system , the transmission resonances appear to occur at higher values of for the high index system , but it must be realised that is smaller in this case and the corresponding image resolution would actually be lower.
however , when we compare the transmission with absorption included , the beneficial effects of using the larger value of the dielectric constant become obvious . the transmission coefficient indeed decays much more slowly with in this case . also note that we have taken the source to be in air and the image to be formed inside the high - index dielectric medium . finally , we show in figure 9 the images of two slits of 15 nm width and a peak - to - peak separation of 45 nm obtained by using a single slab of silver as the lens and a layered medium of alternating layers of silver and a positive dielectric medium as the lens . the total distance from the object plane to the image plane in both cases is 80 nm . the images of the slits in the case of the single slab lens are hardly resolved , whereas the images of the slits are well separated and clearly resolved in the case of the layered lens . the enhancement in the image resolution for the layered lens is obvious from the figure . the bump seen in between the slits is an artifact due to the fact that the transmission function is not exactly a constant for all wave - vectors . we have elaborated the design of the perfect lens by considering a multilayer stack and shown that this has advantages over the original configuration of a single slab of material . in particular the effects of absorption are much reduced by the division into multilayers . the limiting case of infinitesimal multilayers was also considered and shown to be equivalent to an effective medium through which the image propagates without distortion as if it were conveyed by an array of very fine infinitely conducting wires . we went on to make a detailed analysis of how imperfections in the lens affect the image quality . the effects of retardation and the coupled slab plasmon resonances can be minimized by considering very thin layers of 5 to 10 nm thickness . the effects of absorption then dominate the image transfer , but are less deleterious when the individual layer thicknesses are smaller . the effects of absorption can also be minimized by using materials with higher dielectric constants , and tuning the frequency of the radiation to meet the perfect lens conditions . sar would like to acknowledge the support from dod / onr muri grant n00014 - 01 - 1 - 0803 . pendry , j.b . , holden , a.j . , stewart , w.j . , and youngs , i. , 1996 , phys . rev . lett . , * 76 * , 4773 ; pendry , j.b . , holden , a.j . , robbins , d.j . , and stewart , w.j . , 1998 , j. phys . : condens . matter , * 10 * , 4785 .
|
in an earlier paper we introduced the concept of the perfect lens which focuses both near and far electromagnetic fields , hence attaining perfect resolution . here we consider refinements of the original prescription designed to overcome the limitations of imperfect materials . in particular we show that a multi - layer stack of positive and negative refractive media is less sensitive to imperfections . it has the novel property of behaving like a fibre - optic bundle but one that acts on the near field , not just the radiative component . the effects of retardation are included and minimized by making the slabs thinner . absorption then dominates image resolution in the near - field . the deleterious effects of absorption in the metal are reduced for thinner layers .
|
in wireless networks , power control is used for resource allocation and interference management . in multiple - access cdma systems such as the uplink of cdma2000 , the purpose of power control is for each user terminal to transmit enough power so that it can achieve the desired quality of service ( qos ) without causing unnecessary interference for other users in the network . depending on the particular application , qos can be expressed in terms of throughput , delay , battery life , etc . since in many practical situations the users terminals are battery - powered , an efficient power management scheme is required to prolong the battery life of the terminals . hence , power control plays an even more important role in such scenarios . consider a multiple - access ds - cdma network where each user wishes to locally and selfishly choose its transmit power so as to maximize its utility and at the same time satisfy its delay requirements . the strategy chosen by each user affects the performance of other users through multiple - access interference . there are several questions to ask concerning this interaction . first of all , what is a reasonable choice of a utility function that measures energy efficiency and takes into account delay constraints ? secondly , given such a utility function , what strategy should a user choose in order to maximize its utility ? if every user in the network selfishly and locally picks its utility - maximizing strategy , will there be a stable state at which no user can unilaterally improve its utility ( nash equilibrium ) ? if such an equilibrium exists , will it be unique ? what will be the effect of delay constraints on the energy efficiency of the network ? game theory is the natural framework for modeling and studying such a power control problem . recently , there has been a great deal of interest in applying game theory to resource allocation in wireless networks . examples of game - theoretic approaches to power control are found in . in , power control is modeled as a non - cooperative game in which users choose their transmit powers in order to maximize their utilities . in , the authors extend this approach to consider a game in which users can choose their uplink receivers as well as their transmit powers . all the power control games proposed so far assume that the traffic is not delay sensitive . their focus is entirely on the trade - offs between throughput and energy consumption without taking into account any delay constraints . in this work , we propose a non - cooperative power control game that does take into account a transmission delay constraint for each user . our focus here is on energy efficiency . our approach allows us to study networks with both delay tolerant and delay sensitive traffic / users and quantify the loss in energy efficiency due to the presence of users with stringent delay constraints . the organization of the paper is as follows . in section [ system model ] , we present the system model and define the users utility function as well as the model used for incorporating delay constraints . the proposed power control game is described in section [ proposed game ] , and the existence and uniqueness of nash equilibrium for the proposed game is discussed in section [ nash equilibrium ] . in section [ multiclass ] , we extend the analysis to multi - class networks and derive explicit expressions for the utilities achieved at nash equilibrium .
numerical results and conclusions are given in sections [ numerical results ] and [ conclusions ] , respectively . we consider a synchronous ds - cdma network with users and processing gain ( defined as the ratio of symbol duration to chip duration ) . we assume that all user terminals transmit to a receiver at a common concentration point , such as a cellular base station or any other network access point . the signal received by the uplink receiver ( after chip - matched filtering ) sampled at the chip rate over one symbol duration can be expressed as , where , , and are the transmit power , channel gain , transmitted bit and spreading sequence of the user , respectively , and is the noise vector which is assumed to be gaussian with mean and covariance . we assume random spreading sequences for all users . let denote the proposed non - cooperative game , where and . in this interval the utility function is continuous and quasiconcave . this guarantees existence of a nash equilibrium for the proposed power control game . furthermore , for a sigmoidal efficiency function , , which is the ( positive ) solution of , is unique and as a result is unique for . because of this and the one - to - one correspondence between the transmit power and the output sir , the nash equilibrium is unique . the above proposition suggests that at nash equilibrium , the output sir for user is , where depends on the efficiency function through as well as user s delay constraint through . note that this result does not depend on the choice of the receiver and is valid for all linear receivers including the matched filter , the decorrelator and the ( linear ) minimum mean square error ( mmse ) detector . let us now consider a network with classes of users . the assumption is that all the users in the same class have the same delay requirements characterized by the corresponding and . based on proposition [ prop1 ] , at nash equilibrium , all the users in class will have the same output sir , , where . here , depends on the delay requirements of class , namely and , through . the goal is to quantify the effect of delay constraints on the energy efficiency of the network or equivalently on the users utilities . in order to obtain explicit expressions for the utilities achieved at equilibrium , we use a large - system analysis similar to the one presented in and . we consider the asymptotic case in which and . this allows us to write sir expressions that are independent of the spreading sequences of the users . let be the number of users in class , and define . therefore , we have . it can be shown that for the matched filter ( mf ) , the decorrelator ( de ) , and the mmse detector , the minimum power required by user in class to achieve an output sir equal to is given by the following equations : note that we have implicitly assumed that is sufficiently large so that the target sirs ( i.e. , s ) can be achieved by all users . furthermore , since for , we have . therefore , for the matched filter , the decorrelator , and the mmse detector , the utilities achieved at the nash equilibrium are given by . note that , based on the above equations , we have . this means that the mmse receiver achieves the highest utility as compared to the decorrelator and the matched filter . also , the network capacity ( i.e.
, the number of users that can be admitted into the network ) is the highest when the mmse detector is used .for the specific case of no delay constraints , for all and reduce to comparing with , we observe that the presence of users with stringent delay requirements results not only in a reduction in the utilities of those users but also a reduction in the utilities of other users in the network .a stringent delay requirement results in an increase in the user s target sir ( remember ) . since is maximum when , a target sir larger than results in a reduction in the utility of the corresponding user .in addition , because of the higher target sir for this user , other users in the network experience a higher level of interference and hence are forced to transmit at a higher power which in turn results in a reduction in their utilities ( except for the decorrelator , in which case the multiple - access interference is completely removed ) . also , since and , the presence of delay - constrained users causes a reduction in the system capacity ( again , except for the decorrelator ) . through , we have quantified the loss in the utility ( in bits / joule ) and in network capacity due to users delay constraints for the matched filter , the decorrelator and the mmse receiver . the sensitivity of the loss to the delay parameters ( i.e. , and ) depends on the efficiency function , .let us consider the uplink of a ds - cdma system with processing gain 100 .we assume that each packet contains 100 bits of information and no overhead ( i.e. , ) . the transmission rate , , is and the thermal noise power , , is .a useful example for the efficiency function is .this serves as an approximation to the packet success rate that is very reasonable for moderate to large values of .we use this efficiency function for our simulations . using this , with ,the solution to ( [ eq15b ] ) is .[ fig1 ] shows the target sir as a function of for and 3 .it is observed that , as expected , a more stringent delay requirement ( i.e. , a higher and/or a lower ) results in a higher target sir .we now consider a network where the users can be divided into two classes : delay sensitive ( class ) and delay tolerant ( class ) . for users in class , we choose and ( i.e. , delay sensitive ) . for users in class ,we let and ( i.e. , delay tolerant ) .based on these choices , and . without loss of generality and to keep the comparison fair , we also assume that all the users are 100 meters away from the uplink receiver . the system load is assumed to be ( i.e. , ) and we let and represent the load corresponding to class and class , respectively , with . we first consider a lightly loaded network with ( see fig . [ fig2 ] ) . to demonstrate the performance loss due to the presence of users with stringent delay requirements ( i.e. , class ) , we plot and as a function of the fraction of the load corresponding to class users ( i.e. , ) .here , and are the utilities of users in class and class , respectively , and represents the utility of the users if they all had loose delay requirements which means for all .[ fig2 ] shows the loss for the matched filter , the decorrelator , and the mmse detector .we observe from the figure that for the matched filter both classes of users suffer significantly due to the presence of delay sensitive traffic .for example , when half of the users are delay sensitive , the utilities achieved by class and class users are , respectively , 50% and 60% of the utilities for the case of no delay constraints . 
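before turning to the decorrelator and mmse curves , a rough numerical companion to these results is sketched below ( python ) . since the closed - form expressions above were lost in extraction , the sketch assumes the efficiency function f ( gamma ) = ( 1 - e^-gamma )^m used in this section , the usual first - order condition f ( gamma ) = gamma f ' ( gamma ) for the unconstrained target sir ( which for this f reduces to e^gamma = 1 + m gamma ) , and the standard large - system sir expressions for the three receivers ( as in tse and hanly ) ; all of these are assumptions wherever they go beyond the text .

```python
import numpy as np
from scipy.optimize import brentq

M = 100  # bits per packet
# unconstrained target sir: f(g) = g f'(g)  <=>  exp(g) = 1 + M*g
gamma_star = brentq(lambda g: np.exp(g) - 1.0 - M * g, 1e-3, 50.0)
print(f"gamma* = {gamma_star:.3f} ({10*np.log10(gamma_star):.2f} dB)")

sigma2, h2, T = 5e-16, 1.0, 1.0   # noise power, channel gain, rate (assumed)
alpha = 0.1                        # system load K/N

def power(rx, g, a):               # assumed large-system equilibrium powers
    if rx == "mf":
        return sigma2 * g / (h2 * (1.0 - a * g))
    if rx == "de":
        return sigma2 * g / (h2 * (1.0 - a))
    return sigma2 * g / (h2 * (1.0 - a * g / (1.0 + g)))   # mmse

f = (1.0 - np.exp(-gamma_star)) ** M
for rx in ("mf", "de", "mmse"):
    p = power(rx, gamma_star, alpha)
    if p <= 0:
        print(f"{rx}: target sir not feasible at this load")
    else:
        print(f"{rx}: utility = {T * f / p:.3e} bits/joule")
# the ordering u_mf <= u_de <= u_mmse quoted in the text follows because
# p_mf >= p_de >= p_mmse whenever the target is feasible.
```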
for the decorrelator , only class users suffer and the reduction in utility is smaller than that of the matched filter . for the mmse detector , the reduction in utility for class users is similar to that of the decorrelator , and the reduction in utility for class is negligible . we repeat the experiment for a highly loaded network with ( see fig . [ fig3 ] ) . since the matched filter can not handle such a significant load , we have shown the plots for the decorrelator and mmse detector only . we observe from fig . [ fig3 ] that because of the higher system load , the reduction in the utilities is more significant for the mmse detector compared to the case of . it should be noted that for the decorrelator the reduction in utility of class users is independent of the system load . this is because the decorrelator completely removes the multiple - access interference . it should be further noted that in figs . [ fig2 ] and [ fig3 ] we have only plotted the ratio of the utilities ( not the actual values ) . as discussed in section [ multiclass ] , the achieved utilities for the mmse detector are larger than those of the decorrelator and the matched filter . we have proposed a game - theoretic approach for studying power control in multiple - access networks with ( transmission ) delay constraints . we have considered a non - cooperative game where each user seeks to choose a transmit power that maximizes its own utility while satisfying the user s delay requirements . the utility function measures the number of reliable bits transmitted per joule of energy . we have modeled the delay constraint as an upper bound on the delay outage probability . we have derived the nash equilibrium for the proposed game and have shown that it is unique . the results are applicable to all linear receivers . in addition , we have used a large - system analysis to derive explicit expressions for the utilities achieved at equilibrium for the matched filter , decorrelator and mmse detector . the reductions in the users utilities ( in bits / joule ) and network capacity due to the presence of users with stringent delay constraints have been quantified . m. xiao , n. b. shroff , and e. k. p. chong , `` utility - based power control in cellular wireless systems , '' _ proceedings of the annual joint conference of the ieee computer and communications societies ( infocom ) _ , pp . 412 - 421 , ak , usa , april 2001 . c. zhou , m. l. honig , and s. jordan , `` two - cell power allocation for wireless data based on pricing , '' _ proceedings of the annual allerton conference on communication , control , and computing _ , monticello , il , usa , october 2001 . t. alpcan , t. basar , r. srikant , and e. altman , `` cdma uplink power control as a noncooperative game , '' _ proceedings of the ieee conference on decision and control _ , pp . 197 - 202 , orlando , fl , usa , december 2001 . f. meshkati , h. v. poor , s. c. schwartz , and n. b. mandayam , `` a utility - based approach to power control and receiver design in wireless data networks , '' to appear in _ ieee transactions on communications _ . v. rodriguez , `` an analytical foundation for resource management in wireless communication , '' _ proceedings of the ieee global telecommunications conference _ , pp . 898 - 902 , san francisco , ca , usa , december 2003 . d. n. c. tse and s. v. hanly , `` linear multiuser receivers : effective interference , effective bandwidth and user capacity , '' _ ieee transactions on information theory _ , vol . 45 , pp . 641 - 657 , march 1999 . c. comaniciu and h. v.
poor , `` jointly optimal power and admission control for delay sensitive traffic in cdma networks with lmmse receivers , '' _ ieee transactions on signal processing , special issue on signal processing in networking _ , vol . 51 , pp . 2031 - 2042 , august 2003 .
|
a game - theoretic approach for studying power control in multiple - access networks with transmission delay constraints is proposed . a non - cooperative power control game is considered in which each user seeks to choose a transmit power that maximizes its own utility while satisfying the user s delay requirements . the utility function measures the number of reliable bits transmitted per joule of energy and the user s delay constraint is modeled as an upper bound on the delay outage probability . the nash equilibrium for the proposed game is derived , and its existence and uniqueness are proved . using a large - system analysis , explicit expressions for the utilities achieved at equilibrium are obtained for the matched filter , decorrelating and minimum mean square error multiuser detectors . the effects of delay constraints on the users utilities ( in bits / joule ) and network capacity ( i.e. , the maximum number of users that can be supported ) are quantified . [ 1 ]
|
the determination of numerical solutions of the einstein equations is the scope of _ numerical relativity _ . it is a fundamental issue not only for the determination of gravitational wave signals for detector data analysis , but also for the study of the properties of relativistic astrophysical objects . within numerical relativity studies , the most commonly used formulation of the einstein equations is the so - called `` 3 + 1 '' formalism ( also called _ cauchy formalism _ ) in which space - time is foliated by a family of space - like hypersurfaces , which are described by their 3-metric . the 4-metric is then described in terms of , a 3-vector ( called _ shift _ ) and a scalar ( called _ lapse _ ) . in this formalism , the einstein equations can be decomposed into a set of four constraint equations and six second - order dynamical equations . solving the einstein equations then turns out to be a cauchy problem of evolution under constraints , and there remains the freedom to choose the time coordinate ( slicing ) and the spatial gauge . for example , the choice of _ maximal slicing _ for the time coordinate ( see ) converts the constraint equations into a scalar form and a vectorial poisson - like equation , for which a numerical method for solution has been presented in . as far as evolution equations are concerned , they consist of six non - linear scalar wave equations in curved space - time , with the additional choice of the _ dirac _ gauge . the whole system is a mixed initial value - boundary problem , and this paper deals with boundary conditions for the time evolution equations . indeed , a simpler problem is considered : the initial value - boundary problem for a linear and flat scalar wave equation : where is the usual flat scalar d'alembert operator in spherical coordinates and is a source . to solve a more general problem in curved space - time , like for example : one can put non - linear terms into the source and represent at each time - step the metric function by a polynomial ( semi - implicit scheme , see for an example in spherical symmetry ) . the study of the simple wave equation and its properties concerning quadrupolar waves is more than a toy - model for numerical relativity . there are many degrees of freedom in the formulation of the einstein equations and in the gauge choice . it is not clear which of these formulations are well - posed or numerically stable . it is therefore important to have numerical tools that are general in the sense that they can be used within the framework of various formulations and gauges . still , in many cases , the dynamical degrees of freedom of the gravitational field can be described by wave - like propagation equations in curved space - time . on the other hand , since we are mainly interested in the gravitational wave signal , which has a dominant quadrupolar term , we have to make high precision numerical models ( including boundary conditions ) to study this mode , as well as lower multipoles . these statements can be illustrated as follows . one of the main sources we want to study is a binary of two compact objects ( neutron stars or black holes ) orbiting around each other . gravitational waves take away angular momentum and the system coalesces . in some perturbative approaches , the terms corresponding to this ` braking force ' result from a subtle cancellation between terms of much higher amplitude .
in numerical non - perturbative studies , the same phenomenon may happen and , if the dominant modes of the wave are not computed with enough precision , the angular momentum loss may be strongly overestimated . moreover , the time - scale for coalescence is much larger than the orbital period and the system is almost stationary . there have been many interesting developments concerning absorbing boundaries in the last years , with the perfectly matched layers ( pml , see and ) which consist in surrounding the true domain of interest by an absorbing layer where the wave is damped . these methods may not be the best suited for our problems since , as stated above , we might have to change the formulation of the equations we want to solve . moreover , the main problem we want to address is the simulation of quadrupolar waves and , as will be shown later in this paper , with our formulation it is possible to have a clear control on the behavior of these quadrupolar waves . finally , this formulation is straightforward to implement and consumes very little cpu time in the context of spectral methods and spherical coordinates , which we are already using to solve elliptic partial differential equations ( pde ) arising in numerical relativity ( scalar and vectorial ones , see ) . the development and implementation of the pml techniques for our problem would require much more work and computing time , whereas it is not at all guaranteed that it would give better results . for all these reasons we chose to develop a new formulation of the bayliss and turkel boundary conditions , particularly well suited for use with spectral methods and spherical coordinates . the paper deals with this new formulation as well as numerical tests . it is organized as follows . first , sec . [ s : bc ] presents boundary conditions : it briefly recalls main results from bayliss and turkel ( [ ss : mpoles ] ) and we then derive the formulation adapted up to quadrupolar modes of the wave ( [ ss : sphwave ] ) . then , sec . [ s : tests ] briefly describes spectral methods in spherical coordinates that were used ( [ ss : specmeu ] ) and details the numerical results ( [ ss : sorti ] ) . finally , sec . [ s : conc ] gives a summary and some concluding remarks . an important difference between the solution of the wave equation and that of the poisson equation ( as in ) is the fact that boundary conditions can not be imposed at infinity , since one can not use `` compactification '' , i.e. a change of variable of the type . this type of compactification is not compatible with a hyperbolic pde , see . one has to construct an artificial boundary and impose conditions on this surface to simulate an infinite domain . these conditions should therefore give no reflection of the wave that could spuriously act on the evolution of the system studied inside the numerical grid . the boundary conditions have to _ absorb _ all the waves that are coming to the outer limit of the grid . the general _ condition of radiation _ is derived e.g. in , and defined as . at a finite distance the condition , which is then approximate , reads , which will be hereafter referred to as the `` sommerfeld condition '' and is exact only for pure monopolar waves . a completely general and exact boundary condition for the wave equation on an artificial spherical boundary has recently been derived by aladl _ et al . _ and involves an infinite series of inverse fourier transforms of the solution.
this condition may not be suitable for direct numerical implementation for which aladl _ et al ._ derived a truncated approximate condition .a rather general method to impose non - reflecting boundary conditions is to construct a sequence of boundary conditions that , for each new term , are in some sense giving better results .some of the possibilities to define `` better '' are when the reflected wave decreases : * as the incident wave approaches in a direction closer to some preferred direction(s ) ( see e.g. ) , * for shorter wavelengths , * as the position of artificial boundary goes to infinity .this last approach is the most relevant to the problem of solving the einstein equation for isolated systems .it is also a way of expanding condition ( [ e : somer0 ] ) in terms of asymptotic series , which has been studied in , where a sequence of recursive boundary conditions is derived .let us recall here some of their results . a radiating solution of ( [ e : defonde ] ) with the source can be written as the following expansion : the operators acting on a function are recursively defined by : the family of boundary conditions then reads : in , it is shown that , following from ( [ e : expand ] ) , a radiating solution of the wave equation verifies : which in particular means that condition ( [ e : bc+ ] ) is an asymptotic one in powers of .the condition is same as the sommerfeld condition ( [ e : somerf ] ) and the same as the first approximation in terms of the angle between the direction of propagation of the wave and the normal to the boundary , derived in . finally , using expression ( [ e : expand ] ) one can verify that the operator annihilates the first terms of the expansion .thinking in terms of spherical harmonics , this means that condition ( [ e : bc+ ] ) is exact if the wave carries only terms with . in other words ,the reflection coefficients for all modes lower than are zero . since we are interested in the study of gravitational wave emission by isolated systems , it is of great importance to have a very accurate description of the quadrupolar part of the waves , which is dominant .therefore , if the part of the gravitational wave is well described , higher - order terms may not play such an important role in the dynamical evolution of the system .the situation then is not so bad even if only an approximate boundary condition is imposed for those terms with .moreover , the error on the function scales like so , if we impose we have an exact boundary condition for the main contribution to the gravitational wave and an error going to zero as . when developing this expression , one gets : starting from ( [ e : b3devel ] ) and considering that is a solution of the wave equation ( [ e : defonde ] ) , we replace second radial derivatives with : where : is the angular part of the laplace operator .we are making here the assumption that , at the outer boundary of the grid ( ) , the source term of ( [ e : defonde ] ) is negligible .this is a very good approximation for our studies of isolated systems and is also the assumption made when writing a solution to the wave equation in the form ( [ e : expand ] ) .for example , the third order radial derivative is replaced with and the second - order radial derivatives of the last term ( combined with its counterpart term in ( [ e : b3devel ] ) ) is replaced once more using ( [ e : remplace ] ) .the boundary condition is then written as : we use the auxiliary function : which is defined on the sphere at . 
inserting this definition into the boundary condition , with eq .( [ e : b3modif ] ) , one gets : which is a wave - like equation on the outer boundary of the grid , with some source term , equal to zero if the solution is spherically symmetric . the boundary condition ( [ e : b3 ] ) is now equivalent to the system ( [ e : defxi])-([e : ondsph ] ) . written in this way, this formulation can be regarded as a perturbation of the sommerfeld boundary condition ( ) given by ( [ e : defxi ] ) .the main advantages are that it can be very easily implemented using spectral methods and spherical coordinates ( see sec . [ss : specmeu ] ) and that mixed derivatives have almost disappeared : there is only one remaining as a source of ( [ e : ondsph ] ) .spectral methods ( , , for a review see ) are a very powerful approach for the solution of a pde and , in particular , they are able to represent functions and their spatial derivatives with very high accuracy .as presented in , we decompose scalar fields on spherical harmonics , for the angular part : and on even chebyshev polynomials for the radial part of each .time derivatives are evaluated using finite - difference methods .since chebyshev collocation points are spaced by a distance of order , ( where is the highest degree of the chebyshev polynomials used for the radial decomposition ) near grid boundaries , the courant condition on the time step for explicit integration schemes of the wave equation ( [ e : defonde ] ) also varies like .this condition is very restrictive and it is therefore necessary to use an implicit scheme .we use the crank - nicholson scheme , which is unconditionally stable , as shown by various authors ( see e.g. ) .this scheme is second - order in time and the smoothing of the solution due to implicit time - stepping remains lower than the other errors discussed hereafter .this implicit scheme results in a boundary - value problem for at each time - step .the solution to this problem is obtained by inverting the resulting spatial operator acting on using the tau method .its matrix ( in chebyshev coefficient space ) has a condition number that is rapidly increasing with .this can be alleviated by the use of preconditioning matrices , obtained from finite - differences operators ( see ) . at the beginning of time integration, we suppose that satisfies the sommerfeld boundary condition ( [ e : somerf ] ) , that is . is then calculated at next time - step using ( [ e : ondsph ] ) .this is done very easily since the angular parts of and are decomposed on the basis of spherical harmonics ; each component is the solution of a simple ode in time , which is integrated using the same crank - nicholson scheme as for the main wave equation ( [ e : defonde ] ) , with boundary conditions such that is periodic on the sphere .this is already verified by the ( galerkin method ) .we get , with being the time - step , and : this equation in is solved and , for each pair , we impose for which looks like a modification of the condition ( [ e : somerf ] ) .the sommerfeld boundary condition ( [ e : somerf ] ) is an exact condition , even at finite distance from the source , when only considering monopolar waves . 
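the mode - by - mode time integration described above can be illustrated with a minimal sketch ( python ) . the exact equation satisfied by each ( l , m ) component of the auxiliary function was lost in extraction , so the sketch advances a generic linear mode equation d xi / dt = a xi + s ( t ) with the same crank - nicholson discretisation ; this is an assumption standing in for the true right - hand side , which couples the angular laplacian and a source built from the main field .

```python
import numpy as np

def crank_nicholson_step(xi, a, s_old, s_new, dt):
    # (1 - dt*a/2) xi_new = (1 + dt*a/2) xi_old + dt/2 (s_old + s_new)
    return ((1.0 + 0.5 * dt * a) * xi + 0.5 * dt * (s_old + s_new)) / \
           (1.0 - 0.5 * dt * a)

dt, a = 1e-3, -2.0              # illustrative values
xi, t = 0.0, 0.0
source = lambda t: np.sin(t)    # stand-in source for one (l, m) mode
for _ in range(5):
    xi = crank_nicholson_step(xi, a, source(t), source(t + dt), dt)
    t += dt
    print(f"t = {t:.3f}   xi = {xi:.6e}")
# the updated xi then enters the outer boundary condition for the main
# field as a source term, i.e. a perturbed sommerfeld condition per mode.
```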
in order to test our implementation of absorbing boundary condition ( [ e : bc+ ] ) , we compared its efficiency in being transparent to waves carrying only monopolar , dipolar and quadrupolar terms , to the efficiency of the sommerfeld boundary condition for monopolar waves . we started with at and then solved eq . ( [ e : defonde ] ) with the source , which is null for . in all cases , we performed a first calculation with a very large grid ( considered as infinite ; we checked with various values of the radius that the result in the interval would be the same ) , so that in the time interval the wave would not reach the boundary , on which we imposed a homogeneous boundary condition . this gave us the reference solution crossing the sphere without any reflection . we then solved again the same problem , but on a grid of radius , imposing sommerfeld boundary conditions ( [ e : somerf ] ) , or our quadrupolar boundary conditions through the system ( [ e : defxi ] ) - ( [ e : ondsph ] ) . the norm of the relative difference between the functions obtained on the small grid and the reference solution was taken as the error . ( caption of figure [ f : quadrup ] : ( [ e : somerf ] ) and ( [ e : defxi ] ) for modes . the source of the wave equation is defined in eqs . ( [ e : defsig ] ) and ( [ e : defsl2 ] ) . we took , a time - step , 33 polynomials for radial decomposition , 5 for and 4 for . ) first , we took which contains only modes . figure [ f : quadrup ] shows the relative efficiency of the ( [ e : defxi ] ) condition compared to ( [ e : somerf ] ) for all three modes present in the wave generated by ( [ e : defsl2 ] ) . for the monopolar ( ) mode , the evolution of the error would be the same for both types of boundary conditions , within one percent of difference on the error . as far as the discrepancy for dipolar and quadrupolar modes is concerned , one can see that it drops from with the sommerfeld boundary condition , to with ( [ e : defxi ] ) . this lower level is the same as for the monopolar mode with the sommerfeld boundary condition . we have checked that all solutions had converged with respect to the number of spectral coefficients and to the time - step . the error level at is then mainly due to the condition number of the matrix operator we invert ( see sec . [ ss : specmeu ] above ) . we here conclude that our formulation of ( [ e : defxi ] ) is as efficient for waves containing only modes as the sommerfeld boundary condition ( [ e : somerf ] ) for monopolar waves . ( caption of figure [ f : som3d ] : source defined in eqs . ( [ e : defsig ] ) and ( [ e : defs3d ] ) ; using ( [ e : somerf ] ) . we took , a time - step , 33 polynomials for radial decomposition , 17 for and 16 for . ) ( caption of figure [ f : bc33d ] : source defined in eqs . ( [ e : defsig ] ) and ( [ e : defs3d ] ) ; using ( [ e : defxi ] ) as the boundary condition . we took , a time - step , 33 polynomials for radial decomposition , 17 for and 16 for . ) the study has been extended to a more general source which contains _ a priori _ all multipolar terms : of course , in numerical implementation , only a finite number of these terms are represented . the geometry of this source can be related to the distribution of mass in the case of a binary system of gravitating bodies , which is one of the main astrophysical sources of gravitational radiation we try to model . let us make a comparison between the errors obtained , on the one hand with the condition ( figure [ f : som3d ] ) , and on the other hand with ( figure [ f : bc33d ] ) .
as in the case in figure [ f : quadrup ] , the error in the monopolar component remains roughly the same , regardless of whether one uses boundary condition ( [ e : somerf ] ) or ( [ e : defxi ] ) . the errors for the dipolar and quadrupolar components also exhibit similar properties : the use of condition ( [ e : defxi ] ) causes these errors to be of the same magnitude as the error in the monopolar term . in the case of figure [ f : bc33d ] , this level is higher than in figure [ f : quadrup ] because a longer time - step has been used . finally , we have also plotted the discrepancies between the reference and test solutions for the multipole . following , the boundary condition is not exact for this component . nevertheless , one can see a reduction in the error for this component . this can be understood using the result of , which shows that the condition cancels the first 3 terms in the asymptotic development in powers of of the solution ( [ e : asymp ] ) . then , since a given multipolar term is present in terms like with ( see e.g. ) , it is clear that the condition is supposed to cancel all terms decaying slower than in the mode . thus , the error displayed in figure [ f : err3d ] is three orders of magnitude lower with the condition than with . ( caption of figure [ f : err3d ] : source defined in eqs . ( [ e : defsig ] ) and ( [ e : defs3d ] ) ; using ( [ e : somerf ] ) and ( [ e : defxi ] ) . we took , a time - step , 33 polynomials for radial decomposition , 17 for and 16 for . ) we have checked this point , namely that the maximal error over the time interval would decrease like , where is the distance at which the boundary conditions were imposed . we have also checked that the error decreased both exponentially with the number of coefficients used in or , as one would expect for spectral methods , and like ( second - order time integration scheme ) . figure [ f : err3d ] shows the overall error as a function of time for both boundary conditions used . comparing figure [ f : err3d ] with figures [ f : som3d ] and [ f : bc33d ] , one can see that most of the error comes from the term when using boundary condition , and from the term when using . finally , the computational cost of this enhanced boundary condition is very low with this new approach . for the tests presented here , the difference in cpu time would be of about 10% . this is linked with the fact that our formulation ( [ e : defxi ] ) is a perturbation of the sommerfeld boundary condition ( [ e : somerf ] ) , where the quantity is obtained by simple ( ordinary differential equation ) integration . the purpose of this paper has been to provide a boundary condition that is well - adapted for the simulation of astrophysical sources of gravitational radiation , whose dominant modes are quadrupolar . we took the series of boundary conditions derived by bayliss and turkel , truncated at quadrupolar order , and derived a new formulation of that third - order condition in terms of a first - order condition ( resembling the classical radiation one ) , combined with a wave - like equation on the outer boundary of the integration domain . this formulation is simple in the sense that mixed derivatives are ( almost ) absent . the numerical implementation using spectral methods and spherical coordinates is straightforward and this formulation of high - order boundary conditions requires only a little more cpu time ( less than 10% in our tests ) than the simplest first - order condition ( [ e : somerf ] ) . we have verified that our implementation of this boundary condition had the same efficiency with respect to
transparency for dipolar and quadrupolar waves as the sommerfeld condition ( [ e : somerf ] ) for monopolar waves . the precision increases very rapidly ( like ) as one imposes the boundary condition further from the source of radiation . these two points are of great interest for the simulation of gravitational radiation from isolated astrophysical sources . as an alternative , one can note that more accurate results may be obtained using the so - called 2 + 2 formalism in the wave zone and matching it to the results in 3 + 1 formalism near the source . our approach is different , much simpler to implement and should give accurate enough results for the einstein equations . j. novak , review of numerical relativity session in _ proc . of the ninth marcel grossmann meeting on general relativity , rome , italy , july 2000 , _ edited by jantzen , gurzadyan and ruffini ( world scientific , singapore , 2002 ) . p. grandclément , s. bonazzola , e. gourgoulhon and j.-a. marck , a multidomain spectral method for scalar and vectorial poisson equations with noncompact sources , _ j. comput . phys . _ * 170 * , 231 ( 2001 ) , doi:10.1006/jcph.2001.6734 .
|
we present a new formulation of the multipolar expansion of an exact boundary condition for the wave equation , which is truncated at the quadrupolar order . using an auxiliary function , that is the solution of a wave equation on the sphere defining the outer boundary of the numerical grid , the absorbing boundary condition is simply written as a perturbation of the usual sommerfeld radiation boundary condition . it is very easily implemented using spectral methods in spherical coordinates . numerical tests of the method show that very good accuracy can be achieved and that this boundary condition has the same efficiency for dipolar and quadrupolar waves as the usual sommerfeld boundary condition for monopolar ones . this is of particular importance for the simulation of gravitational waves , which have dominant quadrupolar terms , in general relativity . , absorbing boundary conditions ; spectral methods ; wave equation ; general relativity .
|
the principal reason for implanting ions into silicon wafers is to dope regions within the substrate , and hence modify their electrical properties in order to create electronic devices .the quest for ever increasing processor performance demands smaller device sizes .the measurement and modeling of dopant profiles within these ultra shallow junction devices is challenging , as effects that were negligible at high implant energies become increasingly important as the implant energy is lowered .the experimental measurement of dopant profiles by secondary ion mass spectrometry ( sims ) becomes problematic for very low energy ( less than 10 kev ) implants .there is a limited depth resolution of measured profiles due to profile broadening , as the sims ion - beam produces knock - on s , and so leads to effects such as diffusion of dopants and mixing .the roughness and disorder of the sample surface can also convolute the profile , although this can be avoided to a large extent by careful sample preparation .the use of computer simulation as a method for studying the effects of ion bombardment of solids is well established .binary collision approximation ( bca ) , ` event - driven ' codes have traditionally been used to calculate such properties as ranges of implanted species and the damage distributions resulting from the collision cascade . in this model , each ion trajectory is constructed as a series of repulsive two - body encounters with initially stationary target atoms , and with straight line motion between collisions . hence the algorithm consists of finding the next collision partner , and then calculating the asymptotic motion of the ion after the collision .this allows for efficient simulation , but leads to failure of the method at low ion energies .the bca approach breaks down when multiple collisions ( where the ion has simultaneous interactions with more than one target atom ) or collisions between moving atoms become significant , when the crystal binding energy is of the same order as the energy of the ion , or when the time spent within a collision is too long for the calculation of asymptotic trajectories to be valid .such problems are clearly evident when one attempts to use the bca to simulate channeling in semiconductors ; here the interactions between the ion and the target are neither binary nor collisional in nature , rather they occur as many simultaneous soft interactions which steer the ion down the channel .an alternative to the bca is to use molecular dynamics ( md ) simulation , which has long been applied to the investigation of ion bombardment of materials , to calculate the ion trajectories .the usefulness of this approach was once limited by its computational cost and the lack of realistic models to describe materials . with the increase in computational power , the development of efficient algorithms , and the production of accurate empirical potentials , it is now feasible to conduct realistic md simulations . 
in the classical md model ,atoms are represented by point masses that interact via an empirical potential function that is typically a function of bond lengths and angles ; in the case of si a three - body or many - body potential , rather than a pair potential is required to model the stable diamond lattice and to account for the bulk crystal properties .the trajectories of atoms are obtained by numerical integration of newton s laws , where the forces are obtained from the analytical derivative of the potential function .thus , md provides a far more realistic description of the collision processes than the bca , but at the expense of a greater computational requirement . herewe present a highly efficient md scheme that is optimized to calculate the concentration profiles of ions implanted into crystalline silicon .the algorithms are incorporated into our implant modeling molecular dynamics code , reed , which runs on many architectures either as a serial , or as a trivially parallel program .the basis of the molecular dynamics model is a collection of empirical potential functions that describe interactions between atoms and give rise to forces between them .in addition to the classical interactions described by the potential functions , the interaction of the ion with the electrons within the target is required for ion implant simulations , as this is the principle way in which the ion loses energy .this is accomplished via a phenomenological electronic stopping - power model .other ingredients necessary to the computation are a description of the target material structure and thermal vibration within the solid .it is also necessary to define a criterion to decide when the ion has come to rest in the substrate .we terminate a trajectory when the _ total _ energy of the ion falls below 5 ev .this was chosen to be well below the displacement threshold energy of si ( around 20 ev) .interactions between si atoms are modeled by a many - body potential developed by tersoff .this consists of morse - like repulsive and attractive pair functions of interatomic separation , where the attractive component is modified by a many - body function that has the role of an effective pauling bond order .the many - body term incorporates information about the local environment of a bond ; due to this formalism the potential can describe features such as defects and surfaces , which are very different to the tetrahedral diamond structure .zbl ` pair specific ' screened coulomb potentials are used to model the ion - si interactions for as , b , and p ions .where no ` pair specific ' potential was available , the zbl ` universal ' potential has been used .this is smoothly truncated with a cosine cutoff between 107% and 147% of the sum of the covalent radii of the atoms involved ; the cutoff distances were chosen as they give a screening function that approximates the ` pair specific ' potentials for the examples available to us .the zbl ` universal ' potential is also used to describe the close - range repulsive part of the tersoff si - si potential , as the standard form is not sufficiently strong for small atomic separations .the repulsive morse term is splined to a shifted zbl potential , by joining the two functions at the point where they are co - tangent . 
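as an illustration of the repulsive pair interactions just described ( see also the spline numbers in the next paragraph ) , the sketch below ( python ) implements the zbl ` universal ' screened coulomb potential together with the cosine cutoff between 107% and 147% of the sum of the covalent radii . the zbl screening coefficients are the standard published ones ; the covalent radii used in the example are illustrative assumptions .

```python
import numpy as np

A0 = 0.529177          # bohr radius (angstrom)
KE2 = 14.39965         # e^2 / (4 pi eps0) in ev * angstrom

def zbl_universal(r, z1, z2):
    # standard zbl universal screening length and screening function
    a = 0.8854 * A0 / (z1**0.23 + z2**0.23)
    x = r / a
    phi = (0.18175 * np.exp(-3.19980 * x) + 0.50986 * np.exp(-0.94229 * x)
           + 0.28022 * np.exp(-0.40290 * x) + 0.02817 * np.exp(-0.20162 * x))
    return KE2 * z1 * z2 / r * phi        # ev, with r in angstrom

def cosine_cutoff(r, r_cov_sum):
    # smooth truncation between 107% and 147% of the sum of covalent radii
    r1, r2 = 1.07 * r_cov_sum, 1.47 * r_cov_sum
    if r <= r1:
        return 1.0
    if r >= r2:
        return 0.0
    return 0.5 * (1.0 + np.cos(np.pi * (r - r1) / (r2 - r1)))

# example: an as-si pair (z = 33 and 14); covalent radii of roughly
# 1.19 and 1.17 angstrom are illustrative assumptions
for r in (0.5, 1.0, 2.0, 3.0):
    v = zbl_universal(r, 33, 14) * cosine_cutoff(r, 1.19 + 1.17)
    print(f"r = {r:.1f} angstrom   v = {v:10.3f} ev")
```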
in the case of si-si interactions, the join is at an atomic separation of 0.69 å, and requires the zbl function to be shifted by 148.7 ev. the increase in the value of the short-range repulsive potential compensates for the attractive part of the tersoff potential, which is present even at short range. the firsov model is used to describe the loss of kinetic energy from the ion due to inelastic collisions with target atoms. we implement this using a velocity-dependent pair potential, as derived by kishinevskii; this gives the force between atoms i and j as a function of their separation and relative velocity, expressed through an integral over a screening function and depending on the atomic numbers z_i and z_j and the atomic separation. for consistency with the ion-si interactions, we use the zbl `universal' screening function within the integral; there are no fitted parameters in this model. we have found that it is necessary to include energy loss due to inelastic collisions, and energy loss due to electronic stopping (described below), as two distinct mechanisms. it is not possible to assume that one, or other, of these processes is dominant and _fit_ it to model all energy loss for varying energies and directions. a new model that involves both global and local contributions to the electronic stopping is used for the electronic energy loss. this modified brandt-kitagawa model was developed for semiconductors and contains only one fitted parameter per ion species, for all energies and incident directions. we believe that by using a realistic stopping model, with the minimum of fitted parameters, we obtain a greater transferability to the modeling of implants outside the fitting set. this should be contrasted to many bca models, which require completely different models for different ion species or even for different implant angles for the same ion species, and that contain several fitted parameters per species. our model has been successfully used to describe the implant of as, b, p, and al ions with energies in the sub-mev range into crystalline si in both channeling and non-channeling directions, and also into amorphous si. while initially developed for use in bca simulations, the only modification required to the model for its use in md is to allow for the superposition of overlapping charge distributions, due to the fact that the ion is usually interacting with more than one atom at a time. the one fitting parameter is the `average' one-electron radius of the target material, which is adjusted to account for oscillations in the z_1 dependence of the electronic stopping cross-section. for the calculations presented here, the target is crystalline si with a surface amorphous layer. the amorphous structure was obtained from a simulation of repeated radiation damage and annealing of an initially crystalline section of material. thermal vibrations of atoms are modeled by displacing atoms from their lattice sites using a debye model. we use a debye temperature of 519.0 k for si, obtained by recent electron channeling measurements. this gives an rms thermal vibrational amplitude in one dimension of 0.0790 å at 300.0 k. note, we do not use the debye temperature as a fitting parameter in our model, as is often done in bca models. the thermal velocity of the atoms is unimportant, as it is so small compared to the ion velocity, and is set to zero. at present there is no accumulation of damage within our simulations, as we wish to verify the fundamental model with the absolute minimum of parameters that can be fit.
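to make the pair potential concrete, the following is a minimal sketch of the zbl `universal' screened coulomb potential together with the cosine truncation described above; the screening coefficients are the standard published zbl values, while the covalent-radius argument and the choice of python are purely illustrative, and the material-specific morse spline point and energy shift are not reproduced here.

    import math

    # standard zbl 'universal' screening function coefficients
    ZBL_C = (0.18175, 0.50986, 0.28022, 0.02817)
    ZBL_D = (3.19980, 0.94229, 0.40290, 0.20162)
    KE2 = 14.3996    # e^2/(4 pi eps0) in ev*angstrom
    A0 = 0.529177    # bohr radius in angstrom

    def zbl_potential(r, z1, z2):
        """zbl 'universal' screened coulomb potential in ev (r in angstrom)."""
        a_u = 0.8854 * A0 / (z1 ** 0.23 + z2 ** 0.23)
        phi = sum(c * math.exp(-d * r / a_u) for c, d in zip(ZBL_C, ZBL_D))
        return KE2 * z1 * z2 * phi / r

    def cosine_cutoff(r, r_cov_sum):
        """smooth truncation between 107% and 147% of the covalent radii sum."""
        r_lo, r_hi = 1.07 * r_cov_sum, 1.47 * r_cov_sum
        if r <= r_lo:
            return 1.0
        if r >= r_hi:
            return 0.0
        return 0.5 * (1.0 + math.cos(math.pi * (r - r_lo) / (r_hi - r_lo)))

    def ion_si_potential(r, z_ion, r_cov_sum):
        """truncated ion-si pair potential (si: z = 14)."""
        return zbl_potential(r, z_ion, 14) * cosine_cutoff(r, r_cov_sum)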
at a later date we will incorporate a statistical damage model into our simulations, in a manner similar to that used in bca codes. we also intend to include the capability of using amorphous, or polycrystalline, targets in our simulations. during the time that md has been in use, many algorithms have been developed to enhance the efficiency of simulations. here we apply a combination of methods to increase the efficiency of the type of simulation that we are interested in. we incorporate both widely used methods, which are briefly mentioned below, and new or lesser known algorithms for this specific type of simulation, which we describe in greater detail. we employ neighbor lists to make the potential and force calculation o(n), where n is the number of particles. coarse grained cells are used in the construction of the neighbor list; this is combined with a verlet neighbor list algorithm to minimize the size of the list. atoms within 125% of the largest interaction distance are stored in the neighbor list, which is updated only when the relative motion of atoms is sufficient for interacting neighbors to have changed. the paths of the atoms are integrated using verlet's algorithm, with a variable timestep that is dependent upon both the kinetic and the potential energy of atoms. for high energy simulations the potential energy as well as the velocity of atoms is important, as atoms may be moving slowly but have high, and rapidly changing, potential energies during impacts. the timestep is selected using: \delta t = \xi \min_i \sqrt{ m_i / ( 2 [ e^{k}_{i} + |e^{p}_{i}| ] ) } \label{tstep} where e^{k}_{i}, e^{p}_{i} and m_{i} are the kinetic energy, potential energy and mass respectively of atom i, and \xi is a constant with a value of 0.10. away from hard collisions, only the kinetic energy term is important, and the timestep is selected to give the fastest atom a fixed maximum movement in a single timestep. when the timestep is increasing, its rate of growth is limited, to prevent rapid oscillations in the size of the timestep, and the maximum timestep is limited to 2.0 fs. the timestep selection scheme was checked to ensure that the total energy in a full (i.e., without the modifications described below) md simulation was well conserved for any single ion implant with no electronic stopping; e.g. in the case of a non-channeling (10° tilt and 22° rotation) 5 kev as ion into a 21168 atom si{100} target, the energy change was 3.6 ev (0.004%) during the 250 fs it took the ion to come to rest.
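as a rough illustration of the timestep rule of eq. (tstep), a sketch follows; the growth cap between steps is an assumption of ours (the original limiting rule is not reproduced above), and the unit-conversion constant simply expresses ev and amu in angstrom/fs.

    import numpy as np

    XI = 0.10        # timestep constant from the text
    DT_MAX = 2.0     # maximum timestep, fs
    GROWTH = 1.1     # assumed cap on step-to-step growth (illustrative)
    V_UNIT = 0.1389  # angstrom/fs: speed of a 1 amu atom with 1 ev kinetic energy

    def select_timestep(dt_prev, ke, pe, mass):
        """eq. (tstep): the 'hottest' atom limits the step.

        ke, pe in ev and mass in amu, as per-atom numpy arrays; the potential
        energy term keeps the step small during hard collisions, even when
        the atoms involved are momentarily slow.
        """
        v_eff = V_UNIT * np.sqrt((ke + np.abs(pe)) / mass)  # angstrom/fs
        dt = XI / np.max(v_eff)           # fastest atom moves ~XI angstrom
        dt = min(dt, GROWTH * dt_prev)    # damp oscillations while growing
        return min(dt, DT_MAX)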
even with the computational resources available today it is infeasible to calculate dopant profiles by full md simulation. although the method is o(n) in the number of atoms involved, the computational requirements scale extremely quickly with the ion energy. the cost of the simulation can be estimated as the number of atoms in the system multiplied by the number of timesteps required. consider the case of an ion subject to an energy loss proportional to its velocity, whose speed is then given by v(t) = v_0 e^{-\beta t}, where v_0 is its initial velocity and \beta is the loss coefficient. each dimension of the system must scale approximately as the initial ion velocity, v_0, to fully contain an ion path, so the number of atoms scales as v_0^3. if the timestep size is chosen so that the maximum distance moved by any particle in a single step is constant, the number of timesteps is approximately proportional to the ion path length, which also scales as v_0. hence the method is roughly o(v_0^4), i.e., quadratic in the ion energy. although it is possible to compute a few trajectories at ion energies of up to 100s of kev, the calculation of the thousands necessary to produce statistically reliable dopant profiles is out of the question. therefore, we have concentrated on developing a restricted md scheme which is capable of producing accurate dopant profiles with a much smaller computational overhead. as we are only concerned with the path of the implanted ion, we only need to consider the region of silicon immediately surrounding the ion. we continually create and destroy silicon atoms, to follow the domain of the substrate that contains the ion. material is built in slabs one unit cell thick to ensure that the ion is always surrounded by a given number of cells on each side. material is destroyed if it is outside the domain defined by the ion position and the domain thickness. in this scenario, the ion sees the equivalent of a complete crystal, but primary knock-on atoms (pkas) and material in the wake of the ion path behave unphysically, due to the small system dimensions. hence we have reduced the cost of the algorithm to o(v_0), at the expense of losing information on the final state of the si substrate. this algorithm is similar to the `translation' approach used in the mdrange computer code developed by nordlund. the relationship between the full and restricted md approaches is shown in fig. [md_scheme]. fig. [md_gifs] illustrates a single domain following a trajectory. the ion is initially above a semi-infinite volume that is the silicon target. as the ion approaches the surface, atoms begin to be created in front of it, and destroyed in its wake. this process is continued until the ion comes to rest at some depth in the silicon substrate. several thousand such trajectories are combined to produce the depth profile of implanted ions. the moving atom approximation was first introduced by harrison to increase the efficiency of ion sputtering yield simulations. in this scheme atoms are divided into two sets; those that are `on' have their positions integrated, and those that are `off' are stationary. at the start of the simulation, only the ion is turned on, and it is the only atom to have forces calculated and to be integrated.
some of the `off' atoms will be used in the force calculations and will have forces assigned to them. if the resultant force exceeds a certain threshold, the atom is turned on and its motion is integrated. the simulation proceeds in this way, with more and more atoms having their positions integrated as energy becomes dispersed throughout the system. we use two thresholds in our simulation; one for atoms interacting directly with the ion, and one for atom-atom interactions. we are, of course, mostly concerned with generating the correct motion for the ion, so the ion-atom interactions are the most critical and require a lower threshold than the atom-atom interactions. in fact, for any reasonable threshold value, almost any ion-atom interaction will result in the atom being turned on, due to the large ion energy. hence the ion-atom threshold is set to zero in these simulations, as adjusting the value gives no increase in efficiency. in the case of the atom-atom threshold, we estimate a reasonable value by comparison to simulations without the moving atom approximation (maa). smith et al. found that a single force threshold for both atom-atom and ion-atom interactions gave the correct sputtering yield (when compared to simulations without the maa) in the case of a 1 kev ar implant into si. we have found that a larger value gives the correct dopant profile, when compared to simulations without the approximation. our ability to use a larger value is due to two reasons. the motion of atoms not directly interacting with the ion has only a secondary effect on its motion, by influencing the positions of directly interacting atoms, so small errors in the positions of these atoms have little consequence. also, by dividing the interactions into two sets, we do not have to lower the threshold to give the correct ion-atom interactions. while we use a many-body potential to describe a stable silicon lattice for low energy implants, this introduces a significant overhead to our simulations. for higher ion velocities, we do not need to use such a level of detail. a pair potential is sufficient to model the si-si interactions, as only the repulsive interaction is significant. also, as the lattice is built at a metastable point with respect to a pair potential, with atoms initially frozen due to the maa, and the section of material is only simulated for a short period of time, stability is not important. hence, at a certain ion velocity we switch from the complete many-body potential to a pair potential approximation (ppa) for the si-si interactions. this is achieved in our code by setting the many-body parameter within the tersoff potential to its value for undistorted tetrahedral si, and results in a morse potential splined to a screened coulomb potential. we make a further approximation for still higher ion energies, where only the ion-si interactions are significant in determining the ion path.
for ion velocities above a set threshold we calculate only ion-si interactions. this approximation, termed the recoil interaction approximation (ria), brings the md scheme close to many bca implementations. the major difference between the two approaches is that the ion path is obtained by integration, rather than by the calculation of asymptotes, and that multiple interactions are, by the nature of the method, handled in the correct manner. we have determined that thresholds of 90.0 ev/amu and 270.0 ev/amu for the ppa and ria, respectively, are sufficiently high that both low and high energy calculated profiles are unaffected by their use. as the thresholds are based on the ion velocity, a single high energy ion simulation will switch between levels of approximation as the ion slows down, and will produce the correct end of range behavior. a typical dopant concentration profile in crystalline silicon, as illustrated in fig. [splits], has a characteristic shape consisting of a near-surface peak followed by an almost exponential decay over some distance into the material, with a distinct end of range distance. the concentration of dopant in the tail of the profile is several orders of magnitude less than that at the peak. hence, if we wish to calculate a statistically significant concentration at all depths of the profile, we will have to run many ions that are stopped near the peak for every one ion that stops in the tail, and most of the computational effort will not enhance the accuracy of the profile we are generating. in order to remove this redundancy from our calculations, we employ an `atom splitting' scheme to increase the sampling in the deep component of the concentration profile. every actual ion implanted is replaced by several virtual ions, each with an associated weighting. at certain _splitting depths_ in the material, each ion is replaced by two ions, each with a weighting of half that prior to splitting. each split ion trajectory is run separately, and the weighting of the ion is recorded along with its final depth. as the split ions see different environments (material is built in front of the ion, with random thermal displacements), the trajectories rapidly diverge from one another. due to this scheme, we can maintain the same number of virtual ions at any depth, but their weights decrease with depth. each ion could of course be split into more than two at each depth, with the inverse change in the weightings, but for simplicity, and to keep the ion density as constant as possible, we work with two. to maximize the advantages of this scheme, we dynamically update the splitting depths. the correct distribution of splitting depths is obtained from an approximate profile for the dopant concentration. the initial profile is either read in (e.g. from sims data), or estimated from the ion type, energy and incident direction using a crude interpolation scheme based on known depths and concentrations for the peak and tail. once the simulation is running, the profile and the splitting depths are re-evaluated at intervals. the algorithm to determine the splitting depths from a given profile is illustrated in fig. [splits].
at the start of the simulation, we specify the number of orders of magnitude, \nu, of change in the concentration of moving ions over which we wish to reliably calculate the profile. we split ions at depths where the total number of ions (ignoring weighting) becomes half of the number of actual implanted ions. hence we will use n_s splitting depths, where n_s is the largest integer not exceeding \nu \log_2 10. the splitting depths z_j (j = 1, ..., n_s) are then chosen such that: \int_{z_j}^{\infty} c(z) dz / \int_{0}^{\infty} c(z) dz = 2^{-j}, where c(z) is the concentration of stopped ions (i.e., the dopant concentration) at depth z. although we are using an approximate profile from few ions to generate the splitting depths, the integration is a smoothing operation and so gives good estimates of the splitting depths. to minimize the storage requirements due to ion splitting, each real ion is run until it comes to rest, and the state of the domain is recorded at each splitting depth passed. the deepest split ion is then run, and further split ions are stored if it passes any splitting depths. this is repeated until all split ions have been run; then the next real ion is started. hence the maximum we ever need to store is one domain per splitting depth (i.e., 16 domains when splitting over 5 orders of magnitude). all simulations were run with a si{100} target at a temperature of 300 k. a surface amorphous layer of one, or three, unit cells thickness was used. dopant profiles were calculated for as, b, p, and al ions; in each case it was assumed that only the most abundant isotope was present in the ion beam. the direction of the incident ion beam is specified by the angle of tilt, \theta, from normal and the azimuthal angle, \phi, as (\theta, \phi). the incident direction of the ions was either (0,0), i.e. normal to the surface (channeling case), in the range (7-10, 0-30) (non-channeling), or (45,45) (channeling), and a beam divergence of 1.0° was always assumed. simulations were run for 1,000 ions, with the splitting depths updated every 100 ions. a domain thickness of 3 unit cells was used, and the profile was calculated over either 3, or 5, orders of magnitude change in concentration. the simulations were run on pentium pro workstations running the red hat linux operating system with the gnu g77 fortran compiler, or sun ultra-sparc workstations with the sun solaris operating system and sun fortran compiler. the running code typically requires about 750 kb of memory. two sets of results are presented; we first demonstrate the effectiveness and stability of the rare event enhancement scheme, and then give examples of data produced by the simulations and compare to sims data. example timings from simulations are also given. a more extensive set of calculated profiles will be published separately. an example of the evolution of splitting depths during a simulation is shown in fig. [as10t22r5ks], for the case of non-channeling 5 kev as implanted into si{100}. the positions of the splitting depths near the peak stabilize quickly. splitting depths near the tail take far longer to stabilize, as these depend on ions that channel the maximum distance into the material.
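stepping back to the depth-selection rule itself, a minimal sketch of it follows; the trapezoidal integration and the interpolation onto the cumulative distribution are our own implementation choices.

    import numpy as np

    def splitting_depths(z, conc, nu):
        """depths z_j at which the moving-ion population is doubled.

        z, conc sample the (approximate) stopped-ion profile; nu is the
        number of orders of magnitude over which the profile is to be
        reliable. the fraction of stopped ions deeper than z_j is 2**-j.
        """
        n_s = int(nu * np.log2(10.0))      # e.g. nu = 5 -> 16 splitting depths
        cdf = np.cumsum(0.5 * (conc[1:] + conc[:-1]) * np.diff(z))
        cdf = np.concatenate(([0.0], cdf)) / cdf[-1]
        targets = 1.0 - 0.5 ** np.arange(1, n_s + 1)
        return np.interp(targets, cdf, z)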
although atom splitting enhances the number of ( virtual ) ions that penetrate deep into the material , the occurrence of an ion that will split to yield ions at these depths is still a relatively rare event .the fact that all splitting depths do stabilize is also an indication that we have run enough ions to generate good statistics for the entire profile .the paths of 5 kev as ions implanted at normal incidence into si\{100 } are shown in fig .[ psplit ] , with the number of splittings shown by the line shading .the 1,000 implanted real ions were split to yield a total of 19,270 virtual ions .the paths taken by 27 split ions produced from the first real ion of this simulation , and the resulting distribution of the ion positions are shown in fig .[ osplit ] .the final ions are the result of between 3 and 6 splittings , depending upon the range of each trajectory .this is typical of the distribution of splittings for one real ion ; the final depths of ions are not evenly distributed over the entire ion range , but are bunched around some point within this range .this reflects how the impact position of the ion and collisions during its passage through the amorphous layer affect its ability to drop into a channel once in the crystalline material .the weighting of the second 500 of the ions ( after the splitting depths had stabilized ) is plotted against final depth in fig .[ wsplit ] ( note the log scale ) .we have estimated the uncertainty in the calculated dopant profiles in order to judge the increase in efficiency obtained through the use of the rare event enhancement scheme .the uncertainty was estimated by dividing the final ion depths into 10 sets .a depth profile was calculated from each set using a histogram of 100 bins , with constant bin size .a reasonable measure of the uncertainty is the standard deviation of the distribution of the 10 concentrations for each bin .[ var ] shows calculated dopant profiles from 1,000 real ions for the case of 2 kev as at ( 7,0 ) into si\{100 } , obtained with and without atom splitting over five orders of magnitude .the profiles are plotted with the uncertainty represented by the size of error bars ; the length of each error bar corresponds to half the standard deviation of concentrations in that bin .the uncertainty is constant in the case of the profile obtained with the rare event scheme , whereas the profile obtained without the scheme is only reliable over one order of magnitude .timings from these simulations , and a simulation with splitting to three orders of magnitude are given in table [ rare ] . from these timings, we can estimate the efficiency gain due to the rare event algorithm .we decrease the time required by a factor of 89 in the case of calculating a profile to three orders of magnitude , and by a factor of 886 when calculating a profile over 5 orders of magnitude , compared to the estimated time requirements without rare event enhancement .the gain in efficiency increases exponentially with the number of orders of magnitude in concentration over which we wish to calculate the profile .the remaining figures show the calculated concentration profile of b , as , and p ions for various incident energies and directions .profiles were generated from a histogram of 100 bins , using adaptive bin sizes ; the final ion depths were sorted and the same number of virtual ions assigned to each bin . no other processing , or smoothing of the profiles was done . 
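as an illustration of the adaptive-bin profile construction just described, here is a minimal sketch; the unit conventions (depths in cm, dose in ions/cm^2) and the handling of degenerate bins are assumptions of ours.

    import numpy as np

    def depth_profile(depths, weights, nbins=100, dose=1e13):
        """concentration profile with the same number of virtual ions per bin.

        depths, weights: final depth (cm) and splitting weight of each
        virtual ion; dose: implanted dose in ions/cm^2.
        """
        order = np.argsort(depths)
        d, w = np.asarray(depths)[order], np.asarray(weights)[order]
        edges = d[np.linspace(0, d.size - 1, nbins + 1).astype(int)]
        z, conc = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            if hi <= lo:
                continue  # degenerate bin where many ions share one depth
            sel = (d >= lo) & (d < hi)
            z.append(0.5 * (lo + hi))
            # weighted fraction of the dose stopping in the bin, per unit depth
            conc.append(dose * w[sel].sum() / (w.sum() * (hi - lo)))
        return np.array(z), np.array(conc)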
also shown are low dose sims data; for comparison, all profiles were scaled to the same effective dose (in ions/cm^2). we have also examined al ion implants, but were unable to match calculated profiles to the available sims data for a physically reasonable parameter value in our electronic stopping model. this may be due to one or more of the following reasons: al is the only metal that we are implanting; the al-si interaction is the only interaction for which we do not have a pair specific zbl potential; we only have a very limited set of sims data to compare to. in the case of the low energy (10 kev) implants, we compare to sims data obtained with a thin and well controlled surface layer; here we assume one unit cell thickness of surface disorder in our simulations. for the other cases considered here, the surface was less well characterized; we assume three unit cells of disorder at the surface, as this is typical of implanted si. for the low energy implants, we have calculated profiles over a change of five orders of magnitude in concentration; for the higher energy implants we calculate profiles over 3 orders of magnitude. the results of the reed calculations show good agreement with the experimental data. in the case of the low energy implants, the sims profile is only resolved over two orders of magnitude in some cases, while we can calculate the profile over five orders of magnitude. we give timing results from several simulations, as examples of the cpu requirements of our implementation of the model. note, the results presented here are from a functional version of reed, but the code has yet to be fully optimized to take advantage of the small system sizes (around 200 atoms). timing data are given in table [time], for profiles calculated over five orders of magnitude on a single pentium pro. run times are dependent on the ion type and its incident direction, but are most strongly linked to the ion velocity. we estimate a runtime of approximately 30 hours per 1,000-ion profile for this version of our code. in summary, we have developed a restricted molecular dynamics code to simulate the ion implant process and directly calculate `as implanted' dopant profiles. this gives us the accuracy obtained by time-integrating atom paths, whilst obtaining an efficiency far in excess of full md simulation. there is very good agreement between the md results and sims data for b, p, and as implants. we are unable to reproduce published sims data for al implants with our current model. this discrepancy is currently being investigated; our findings will be published separately. we can calculate the dopant profile to concentrations one or two orders of magnitude below that measurable by sims for the channeling tail of low dose implants. the scheme described here gives a viable alternative to the bca approach. although it is still more expensive computationally, it is sufficiently efficient to be used on modern desktop computer workstations. the method has two major advantages over the bca approach: (i) our md model consists only of standard empirical potentials developed for bulk si and for ion-solid interactions. the only fitting is in the electronic stopping model, and this involves _only one_ parameter per ion species. this should be contrasted to the many parameters that have to be fit in bca models. we believe that by using physically based models for all aspects of the program, with the minimum of fitting parameters, we obtain good transferability to the
modeling of implants outside of our fitting set. (ii) the method does not break down at the low ion energies necessary for production of the next generation of computer technology; it gives the correct description of the multiple, soft interactions that occur both in low energy implants and in high energy channeling. we are currently working to fully optimize the code, in order to maximize its efficiency. the program is also being extended to include a model for ion induced damage, amorphous and polycrystalline targets, and to model cluster implants such as bf_2. we also note that the scheme can be easily extended to include other ion species such as ge, in and sb, and substrates such as gaas and sic. we gratefully acknowledge david cai and charles snell for providing us with their insight during many discussions, and al tasch and co-workers for providing preprints of their work and sims data. this work was performed under the auspices of the united states department of energy.

k. b. parab, s.-h. yang, s. j. morris, s. tian, a. f. tasch, d. kamenitsa, r. simonton, and c. magee, j. vac. sci. technol. b 14, 260 (1996).
m. t. robinson and i. m. torrens, phys. rev. b 12, 5008 (1974).
j. b. gibson, a. n. goland, m. milgram, and g. h. vineyard, phys. rev. 120, 1229 (1960).
d. e. harrison, jr., in critical reviews in solid state and materials sciences, edited by j. e. greene (crc, boca raton, 1988), vol. 14, suppl. 1.
k. m. beardmore, d. cai, and n. grønbech-jensen, in proceedings of ion implantation technology, austin, 1996, edited by e. ishida (ieee 96th8182, 1997), p. 535.
d. cai, n. grønbech-jensen, c. m. snell, and k. m. beardmore, phys. rev. b 54, 17147 (1996).
l. a. miller, d. k. brice, a. k. prinja, and s. t. picraux, phys. rev. b 49, 16953 (1994).
j. tersoff, phys. rev. b 38, 9902 (1988).
j. f. ziegler, j. p. biersack, and u. littmark, the stopping and range of ions in solids (pergamon press, new york, 1985).
o. b. firsov, sov. phys. jetp 36, 1076 (1959).
l. m. kishinevskii, izv. akad. nauk sssr, ser. fiz. 26, 1410 (1962); v. a. elteckov, d. s. karpuzov, yu. v. martynenko, and v. e. yurasova, in atomic collision phenomena in solids, edited by d. w. palmer, m. w. thompson, and p. d. townsend (north holland, amsterdam, 1970), p. 657.
d. cai, n. grønbech-jensen, c. m. snell, and k. m. beardmore (in preparation).
d. cai, n. grønbech-jensen, c. m. snell, and k. m. beardmore (in preparation).
w. brandt and m. kitagawa, phys. rev. b 25, 5631 (1982).
g. hobler, a. simionescu, l. palmetshofer, f. jahnel, r. von criegern, c. tian, and g. stingeder, j. vac. sci. technol. b 14, 272 (1996).
m. posselt and j. p. biersack, nucl. instr. and meth. b 46, 706 (1992).
r. smith, atomic and ion collisions in solids and at surfaces (cambridge university press, cambridge, 1997), chap. 4.
k. m. beardmore, n. grønbech-jensen, and m. a. nastasi (in preparation).
g. buschhorn, e. diedrich, w. kufner, m. rzepka, h. genz, p. hoffmann-stascheck, and a. richter, phys. rev. b 55, 6196 (1997).
k. m. beardmore, ph.d. thesis, loughborough university of technology, 1995; k. beardmore, r. smith, and i. chakarov, computer simulation of radiation damage in solids, santa barbara, 1995 (unpublished).
l. verlet, phys. rev. 159, 98 (1967).
r. w. hockney and j. w. eastwood, computer simulation using particles (mcgraw-hill, new york, 1981).
k. nordlund, comput. mater. sci. 3, 448 (1995).
r. smith, d. e. harrison, and b. j. garrison, phys. rev. b 40, 93 (1989).
see, e.g., g. a. huber and s. kim, biophys. j. 70, 97 (1996).
k. m. beardmore and n. grønbech-jensen (in preparation).
r. g. wilson, j. appl. phys. 60, 2797 (1986).
a. bousetta, j. a. van den berg, r. valizadeh, d. g. armour, and p. c. zalm, nucl. instr. and meth. b 55, 565 (1991).
a. f. tasch et al. (private communication).
r. j. schreutelkamp, v. raineri, f. w. saris, r. e. kaim, j. f. m. westendorp, p. f. h. m. van der meulen, and k. t. f. janssen, nucl. instr. and meth. b 55, 615 (1991).

table [rare]: cpu time for 1,000 real ions with splitting over \nu orders of magnitude, with the estimated time to obtain a profile reliable over 3 and over 5 orders of magnitude.

\nu | cpu time (s) | orders resolved | est. time for 3 orders (s) | est. time for 5 orders (s)
0 | 10792 | 1 | 1079200 | 107920000
3 | 12136 | 3 | 12136 | 1213600
5 | 121777 | 5 | - | 121777

table [time]: example timings for profiles calculated over five orders of magnitude; the last column is the time normalized to a single pentium pro.

simulation | cpu time (s) | relative machine speed | normalized time (s)
2 kev as (7,0) | 30650.28 | 0.23 | 133262.09
500 ev b (0,0) | 17447.47 | 0.30 | 58158.23
15 kev p (0,0) | 56544.42 | 0.70 | 81287.73
5 kev b (0,0) | 61464.83 | 0.95 | 64699.82
5 kev b (10,22) | 146762.03 | 0.95 | 154486.35
20 kev al (0,0) | 125437.13 | 1.22 | 102817.32
20 kev al (45,45) | 171572.93 | 1.22 | 140633.55
|
we present a highly efficient molecular dynamics scheme for calculating the concentration depth profile of dopants in ion irradiated materials. the scheme incorporates several methods for reducing the computational overhead, plus a rare event algorithm that allows statistically reliable results to be obtained over a range of several orders of magnitude in the dopant concentration. we give examples of using this scheme for calculating concentration profiles of dopants in crystalline silicon. here we can predict the experimental profile over five orders of magnitude for both channeling and non-channeling implants at energies up to 100s of kev. the scheme has advantages over binary collision approximation (bca) simulations, in that it does not rely on a large set of empirically fitted parameters. although our scheme has a greater computational overhead than the bca, it is far superior in the low ion energy regime, where the bca scheme becomes invalid.
|
the purpose of this paper is to derive formulas for the orbit deflections caused by the fringe fields of non-solenoidal accelerator magnets. the main ingredient is a multipole expansion for fields having arbitrary longitudinal profile and including all field components (and only those) required to be present by maxwell's equations. because terminology describing magnets depends on context, we define some of our terms, if only implicitly, by using them in this section. most magnets in accelerators are ``dipoles'', ``quadrupoles'' or other ``multipoles'' where, in this paper, we distinguish by quotation marks the common names of these magnets from the dipole, quadrupole, multipole, etc., terms appearing in mathematical expansions of their magnetic fields. the particle orbits are _paraxial_, with _small_ transverse displacements x and y, and with slopes x' and y' small compared to 1, because the orbits are more or less parallel to the z-axis, which is the magnet centerline. the dominant magnetic field components (b_x, b_y) are therefore _transverse_ to this axis, and the currents in most accelerator magnets are therefore _longitudinal_. but actual magnet coils must have radial leads to return the currents and, because of practical considerations, they also have azimuthal currents. the standard multipole expansion derives entirely from longitudinal magnet currents (this includes the bound currents in ferromagnets). it is only for a _long_ magnet, whose length is large (for example compared to a typical radial magnetic half-aperture), that a single multipole term provides a good approximation to the field. yet, as concerns the effect of the magnet on a particle orbit, a common idealization is the _short magnet_ or _thin lens_ approximation, in which the entire deflection caused by the magnet occurs at a single longitudinal position. even more extreme than our straight line approximation is to treat the transverse orbit coordinates as constant through the entire magnet, body and ends; the deflection (say horizontal) is proportional to a _field integral_ of the form \int b\,dz, where b stands for either of the transverse magnetic field components b_x or b_y, or for any of their derivatives with respect to x and/or y. commonly then, one defines an _effective magnet length_ l_eff such that \int b\,dz = l_eff\,\overline{b}. this length is specific to the particular multipole the magnet is designed to produce. in spite of the facts that the magnet must be long to validate the multipole approximation, yet short to validate the thin element treatment, and that discontinuous magnetic fields violate maxwell's equations, this approximation is curiously accurate for most accelerator magnets. because of this good start, it promises to be effective to improve upon the approximation by assuming that magnets have ideal multipole fields within the length l_eff, but also to include ``end fields'' applicable in regions of lengths \lambda^- and \lambda^+ at the input and output ends. in this approximation the transverse magnetic fields are continuous, but their derivatives are discontinuous at both ends of the fringe field regions.
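the effective-length bookkeeping can be made concrete with a short numerical sketch; the tanh-shaped fringe profile below is an assumed shape used purely for illustration, not a profile taken from any particular magnet.

    import numpy as np

    def effective_length(z, b_on_axis, b_body):
        """effective length from the on-axis field integral:
        l_eff * b_body = integral of b(z) dz."""
        return np.trapz(b_on_axis, z) / b_body

    # assumed tanh fringes on a magnet of body length 100 (arbitrary units)
    z = np.linspace(-40.0, 140.0, 4001)
    lam, length, b0 = 5.0, 100.0, 1.0
    b = 0.25 * b0 * (1 + np.tanh(z / lam)) * (1 + np.tanh((length - z) / lam))
    print(effective_length(z, b, b0))   # close to 100: the fringes nearly cancel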
in a well-designed magnet, the same multipole that is dominant in the central region is dominant in the end regions. but the fields in the end regions are necessarily more complicated and include longitudinal components. since the fields in these regions are, in principle, constrained only by maxwell's equations, rigorous formulas for the deflections they cause can only be evaluated by solving differential equations appropriate for the detailed magnet end configuration. to obtain analytic formulas we must make some assumptions, the first of which is that the formulation is not intended to apply to ``intentional solenoids'' (because of their large azimuthal currents and longitudinal field components). furthermore, the only longitudinal fields included are those that are required by maxwell's equations to be present in regions of varying longitudinal profile. in other words, the formulas can be expected to be accurate for ``well-designed'' magnets, in which the dominant fringe field multipolarity matches the body multipolarity. this can, in principle, be assured by proper shaping of pole ends and proper conformation of the magnet return currents. in the absence of magnetic field measurements in the end regions, this is the only practical assumption one can make when predicting the fringe field deflections. if the fields _have_ been accurately measured or calculated, to improve on formulas given in this paper, it would be necessary to separate out the (presumably small) extraneous components and include their effects perturbatively. one cannot exclude the possibility of end geometries that introduce multipoles for which the extraneous fringe fields are large compared to the required fringe fields, either intentionally or unintentionally. the present formalism would not be directly applicable for such fields. in this paper, we derive first approximations for the deflections occurring in the end field regions, of the form \delta p_x(x,y,x',y') and \delta p_y(x,y,x',y'). like the thin lens approximation, these formulas assume the transverse orbit displacement is constant through the end intervals \lambda^- and \lambda^+. this is a much more valid assumption than assuming constant displacement through the whole magnet if, as is usually true, the end regions are ``short''. furthermore, terms proportional to the transverse slopes x' and y' can be consistently included in the formulas for the deflections. a criterion for the validity of treating the end region as short can be based on the inequality \lambda\,|\beta'|/\beta \ll 1, where \beta and \beta' are the usual beta functions and their derivatives with respect to the longitudinal position.
when this is true, the (fractional) rate of change of multipole strength is large compared to the (fractional) rate of change of lattice beta functions. there is often a tendency to believe that multipole contributions from opposite ends of a magnet cancel each other. but, since this is not universally valid, in this paper no such assumption will be made. in this section, a multipole expansion is developed that is appropriate for performing the calculation just described. this expansion is applicable to magnetic fields that depend arbitrarily on the longitudinal coordinate z but, being a power series in the transverse coordinates x and y, its accuracy after truncation to a given order deteriorates at large transverse amplitudes. the expansion is intended to describe an arbitrary ``multipole'' magnet along with its fringe field. the formalism presented here generalizes an approach described by steffen and reduces to formulas he gives in the case of ``dipoles'' and ``quadrupoles''. in the current-free regions to which the beams are restricted, the magnetostatic field can be expressed as the gradient of a scalar potential, \mathbf{b} = \nabla\phi, where \phi satisfies \nabla^2\phi = 0. an appropriate expansion is \phi(x,y,z) = \sum_{m,n=0}^{\infty} a_{m,n}(z)\,x^m y^n / (m!\,n!), where the coefficients a_{m,n} depend on the longitudinal position z. (equipotentials of \phi can guide the shaping of the pole pieces of iron magnets; this is discussed by steffen for the case of quadrupoles.) substituting this expansion into the laplace equation, we get a recursion relation for the coefficients: a_{m,n+2} = -\left( a_{m+2,n} + a^{[2]}_{m,n} \right), \label{eq:coef} where in this and subsequent formulas a superscript [k], k>0, denotes the k-th derivative with respect to z. a magnet is then characterized by its dominant multipole profile b_n(z), its effective length l_eff, and its fringe-region lengths \lambda^- and \lambda^+. this representation is appropriate for representing the magnet within a particle tracking computer program. the lengths \lambda^\pm could be determined by best-fitting to measured fringe fields. but, to reduce the number of parameters in the remainder of this paper, and with some reduction in accuracy, a slightly different approach will be taken; the impulses delivered by the fringe fields will be evaluated in a way that is independent of the fringe field lengths: all the integrals involved will be computed by using the ``hard-edge'' approximation, _i.e._ taking the limit for which \lambda^\pm \to 0. in this limit the straight line approximation becomes exact. for the sake of consistency another point must also be made. since the dominant multipole in the magnet body is also dominant in the fringe field, there can be an appreciable contribution to the dominant field integral (due to the magnet as a whole) that comes from the fields in the fringe regions. it is a matter of taste whether this contribution is to be treated as part of the main field or part of the fringe field. in this paper, from here on, to simplify the formulas somewhat, the term ``fringe field'' will refer to components other than the dominant component, but restricted to those components necessarily associated with the dominant multipole. in other words, the contributions from the dominant multipole component in the fringe regions will be counted as part of the ideal magnet field integral. treating the magnet in this way increases its effective length, probably making it more nearly equal to the physical magnet length, _i.e._ l_eff \approx l, and this will be assumed in all subsequent formulas.
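as a consistency check on the recursion ([eq:coef]), the following sympy sketch verifies that the leading fringe correction to a quadrupole potential (the form that appears later, in eq. ([eq:expand2])) satisfies the laplace equation up to the first neglected order; the truncation order kept here is our own choice.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    b1 = sp.Function('b1')  # on-axis quadrupole gradient b_1(z)

    # leading terms of the quadrupole scalar potential with z dependence:
    # the (x**2 + y**2) correction follows from the recursion relation
    phi = x * y * (b1(z) - sp.diff(b1(z), z, 2) * (x**2 + y**2) / 12)

    laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
    print(sp.expand(laplacian))
    # only a fourth-derivative term ~ x*y*(x**2 + y**2)*b1''''(z)/12 survives,
    # i.e. the residual is of the next, neglected, order of the expansion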
for a given magnet with a perfect 2(n+1)-pole geometry, written in cylindrical coordinates (see appendix a), the scalar potential satisfies the following symmetry condition: \phi\left(r, \theta + \frac{\pi}{n+1}, z\right) = -\phi(r, \theta, z), which leads to a relation between the harmonic multipole number m allowed by symmetry and the multipole order n: m = (2k+1)(n+1) - 1, \quad k = 0, 1, 2, \ldots \label{eq:index} thus, for a normal ``dipole'' (n=0) the multipole coefficients allowed by the magnet symmetry are b_0, b_2, b_4, \ldots, for a normal ``quadrupole'' (n=1) they are b_1, b_5, b_9, \ldots, for a normal ``sextupole'' (n=2) they are b_2, b_8, b_{14}, \ldots, etc. consider now a 2(n+1)-pole ``multipole magnet'' with normal symmetry, for example. following the symmetry condition ([eq:index]), we can rewrite the field components ([eq:cartcomp]), keeping terms of the expansion to leading order:
b_x(x,y,z) = {\mathcal im}\left\{ \frac{(x+iy)^n b_n(z)}{n!} - \frac{(x+iy)^{n+1}\left[(n+3)x - i(n+1)y\right] b^{[2]}_{n}(z)}{4(n+2)!} + o(n+4) \right\}
b_y(x,y,z) = {\mathcal re}\left\{ \frac{(x+iy)^n b_n(z)}{n!} - \frac{(x+iy)^{n+1}\left[(n+1)x - i(n+3)y\right] b^{[2]}_{n}(z)}{4(n+2)!} + o(n+4) \right\}
b_z(x,y,z) = {\mathcal im}\left\{ \frac{(x+iy)^{n+1} b^{[1]}_{n}(z)}{(n+1)!} + o(n+3) \right\} \label{eq:leadcomp}
where the functions o(k) represent polynomial terms in the transverse variables of order greater than or equal to k. these expressions apply for n \geq 1; the special case of the ``dipole'' (n=0) will be treated separately. here the terms proportional to b^{[2]}_{n}(z) approximate the fields present due to the longitudinal field profile variation and do not include fields that could be present due to non-ideal magnet design. for a positively charged particle traversing the magnet along the straight line having transverse coordinates (x,y), the impulse (_i.e._ change of transverse momentum) imparted by the nominal field component is \delta p^b_x = -\frac{e\,\overline{b_n}\,l_{\rm eff}}{n!}\,{\mathcal re}\{(x+iy)^n\}, \quad \delta p^b_y = \frac{e\,\overline{b_n}\,l_{\rm eff}}{n!}\,{\mathcal im}\{(x+iy)^n\}, \label{eq:intbody} where l_{\rm eff} is the effective length of the magnet, and \overline{b_n} is the nominal field coefficient in the body of the multipole magnet. the quantities in eq. ([eq:intbody]), the intentional and dominant (``zero order'') deflections caused by the magnet, are only approximate, since they account neither for orbit curvature within the body of the magnet nor for end field deflections. expressions like this will be used only as ``normalizing denominators'' in ratios having (the presumably much smaller) magnet end deflections as numerators. for magnets other than bending magnets, for which the average deflection is zero, it will be necessary to use r.m.s. values for both the normalizing denominator and the numerator. the impulse due to the fringe field at one end of a magnet is defined in this paper as the effect of field deviation from nominal, from well inside (where the nominal multipole coefficient is assumed to be independent of z) to well outside the magnet (where all field components are assumed to vanish). these will be the limits for the integrals used in order to calculate the fringe deflection. to obtain explicit formulas the upper limit of these integrals will be taken to be infinity. exploiting the assumed constancy of x and y along the orbit, these integrals will all be evaluated using integration by parts. suppressing the entire pure multipole contribution, as explained above, we have \int_{-\infty}^{\infty} b_{x,y}(x,y,z)\,dz \approx l_{\rm eff}\,b_{x,y}(x,y).
for x = y = 0 this is an equality _by definition_, and for finite displacements it is approximately true if, as we are assuming, the transverse particle displacements remain approximately constant. this is consistent with our straight line orbit approximation. the individual components of the impulse can themselves be separated into terms due to the longitudinal field (labeled z) and due to the transverse fields (labeled \perp): \delta p^f_{x,y} = \delta p^f_{x,y}(z) + \delta p^f_{x,y}(\perp), where \delta p^f_{x,y}(z) are the momentum increments of the particle caused by the longitudinal component of the magnetic field and \delta p^f_{x,y}(\perp) are the momentum increments of the particle caused by the transverse components of the magnetic field. using the leading order expressions of the magnetic field, we obtain the relations
\delta p^f_{x}(z) \approx \frac{e\,\overline{b_n}}{(n+1)!}\,y'\,{\mathcal im}\left\{(x+iy)^{n+1}\right\}, \quad \delta p^f_{y}(z) \approx -\frac{e\,\overline{b_n}}{(n+1)!}\,x'\,{\mathcal im}\left\{(x+iy)^{n+1}\right\} \label{eq:multlong}
and
\delta p^f_{x}(\perp) \approx \frac{e\,\overline{b_n}}{4(n+1)!}\,{\mathcal re}\left\{(x+iy)^{n}\left[(n+1)xx' + (n-5)yy' + i(n+7)xy' - i(n+1)yx'\right]\right\}
\delta p^f_{y}(\perp) \approx \frac{e\,\overline{b_n}}{4(n+1)!}\,{\mathcal im}\left\{(x+iy)^{n}\left[(n+3)xx' + (n+1)yy' + i(n+1)xy' - i(n-1)yx'\right]\right\} \label{eq:multtrans}
the total impulses caused by the fringe field are therefore
\delta p^f_{x} \approx \frac{e\,\overline{b_n}}{4(n+1)!}\,{\mathcal re}\left\{(x+iy)^n\left[(n+1)(x-iy)(x'+iy') + 2iy'(x+iy)\right]\right\}
\delta p^f_{y} \approx \frac{e\,\overline{b_n}}{4(n+1)!}\,{\mathcal im}\left\{(x+iy)^n\left[(n+1)(x-iy)(x'+iy') - 2x'(x+iy)\right]\right\} \label{eq:fringmult}
even though they occur at a fixed point in the lattice, because these impulses depend on the slopes x' and y' and are truncated taylor series, they are not symplectic. to use them in long term, damping-free tracking, symplecticity would have to be restored by including deviations in the transverse coordinates. the formulas just derived are appropriate to calculate the end field deflection of any single particle. but to assess the importance of these deflections it is appropriate to calculate their impact on the beam as a whole, for example by calculating an r.m.s. deflection, such as (\delta p^f_{\perp})_{\rm rms} = \langle (\delta p^f_x)^2 + (\delta p^f_y)^2 \rangle^{1/2}. here the operator \langle\cdot\rangle denotes an averaging over angle variables. note that here, and from here on, the subscript \perp specifies the transverse impulse, and does not refer to a magnetic field component. formulas for r.m.s. values like these are derived in appendix b. this section contains examples of the use of those formulas, starting with the cases of flat and round beams, then specializing the results further for ``dipole'' and ``quadrupole'' magnets. the derived formulas are finally applied for evaluating the impact of magnet end fields in the case of the large hadron collider (lhc) and the spallation neutron source (sns) accumulator ring. the calculations are based on eqs. ([eq:intbody]) and ([eq:fringmult]).
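for tracking experiments, eq. ([eq:fringmult]) can be coded directly; the sketch below evaluates the entrance-end kick, with the overall sign for the exit end (where b_n falls from \overline{b_n} to zero) left to the caller, and with the prefactor e\,\overline{b_n}/(n+1)! supplied in whatever momentum units the surrounding code uses.

    def fringe_kick(n, prefactor, x, y, xp, yp):
        """entrance fringe impulse of a 2(n+1)-pole, eq. (fringmult).

        prefactor = e * b_n / (n+1)!  (units chosen by the caller); x, y are
        transverse positions and xp, yp the slopes x', y' at the magnet end.
        """
        zn = complex(x, y) ** n
        slope = complex(xp, yp)
        fx = (n + 1) * complex(x, -y) * slope + 2j * yp * complex(x, y)
        fy = (n + 1) * complex(x, -y) * slope - 2 * xp * complex(x, y)
        dpx = 0.25 * prefactor * (zn * fx).real
        dpy = 0.25 * prefactor * (zn * fy).imag
        return dpx, dpy

    # quadrupole check (n = 1): dpy reduces to
    # prefactor/2 * ((x*x + y*y)*yp - 2*x*xp*y), matching the text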
for a flat beam, one of the transverse degrees of freedom (e.g. the vertical) vanishes. thus, the total transverse r.m.s. momentum increment from the magnet body scales as (\delta p^b_{\perp})_{\rm rms} \propto \frac{e\,\overline{b_n}\,l_{\rm eff}}{n!}\,\sqrt{\overline{\beta^n}\,\epsilon_{\perp}^{\,n}}, where \overline{\beta^n} represents the average of \beta^n in the body of the magnet and \epsilon_\perp is the transverse emittance. the total transverse r.m.s. momentum increment from one of the fringes of the magnet scales as (\delta p^f_{\perp})_{\rm rms} \propto \frac{e\,\overline{b_n}}{(n+1)!}\,\sqrt{\frac{\beta^{n}\,[1+\alpha^2]}{2(n+2)}\,\epsilon_{\perp}^{\,n+2}}, \label{eq:rmsfrinflat} where \beta and \alpha represent the beta and alpha functions at the fringe location. the ratio of these quantities is \frac{(\delta p^f_{\perp})_{\rm rms}}{(\delta p^b_{\perp})_{\rm rms}} \sim \frac{\epsilon_\perp}{l_{\rm eff}}\,\sqrt{\frac{\beta^{n}\,[1+\alpha^2]}{(n+1)(n+2)\,\overline{\beta^n}}}. \label{eq:ratrmsflat} assuming that the beta functions are not varying rapidly, if the magnets are in non-critical locations (which is to say most magnets), the square root dependence can be neglected, so an order-of-magnitude estimate (dropping an n-dependent numerical factor not very different from 1) is given by \frac{(\delta p^f_{\perp})_{\rm rms}}{(\delta p^b_{\perp})_{\rm rms}} \approx \frac{\epsilon_\perp}{l_{\rm eff}}. \label{eq:ratrmsflatemlen} the case in which fringe field deflections are likely to be most important is when \alpha is anomalously large, for example in the vicinity of beam waists such as at the location of intersection points in colliding beam lattices. in this case (again dropping a numerical factor) the ratio of deflections is roughly \frac{(\delta p^f_{\perp})_{\rm rms}}{(\delta p^b_{\perp})_{\rm rms}} \approx |\alpha|\,\frac{\epsilon_\perp}{l_{\rm eff}}. \label{eq:ratrmsflatalemlen} the same result is obtained from eqs. ([eq:finkick]). often the relative deflection is so small as to make neglect of the fringe field deflection entirely persuasive. the simplicity of the formula is due to the fact that the fringe contribution is expressed as a fraction of the dominant contribution. note that, as stated before, this formula applies to each end separately, and does not depend on any cancellation of the contributions from two ends. in fact, nonlinear analysis shows that in magnets fringe-field contributions can tend to add up instead of cancelling. for a round beam, the two transverse emittances are equal, \epsilon_x = \epsilon_y \equiv \epsilon. for simplicity, we assume that typical values of horizontal and vertical lattice functions are approximately equal; \beta_x \approx \beta_y \equiv \beta and \alpha_x \approx \alpha_y \equiv \alpha. also assume that \overline{\beta} \approx \beta, _i.e._ the beta functions do not vary significantly in the body of the magnet. taking into account the previous hypotheses, the total transverse r.m.s. momentum increment from the body can be written in closed form, with the sum over angle averages expressed through a generalized hypergeometric function (see appendix b for details). applying the same simplifications, the r.m.s. momentum kick given by the fringe field can be written in the same way; the sum of the coefficients involved depends only on the multipole order n, and the series can also be written as a sum of a few generalized hypergeometric functions. the ratio of the r.m.s. transverse momentum kicks is then \frac{(\delta p^f_{\perp})_{\rm rms}}{(\delta p^b_{\perp})_{\rm rms}} = c_n\,\frac{\epsilon}{l_{\rm eff}}, where the coefficient c_n depends on the multipole order n and on \alpha. let us consider two cases, as before: one where \alpha is small and one where \alpha is large, as near the interaction points of large colliders. for the first case (small \alpha), we may neglect the terms having \alpha as a factor in the coefficient, and in the second case we can pull \alpha out from the square root and neglect terms in the coefficient having \alpha in the denominator. in this way, the coefficients c_n of the ratio depend only on the order n. we plot in figs. [fig:roundcoeff] the behavior of these coefficients as a function of the multipole order, for large and small \alpha. the dominant factor in c_n appears to decay slowly with n, which is reflected in the slow asymptotic decay depicted in the plots. for all practical cases (multipole orders up to 20), c_n lies between 1/2 and 1/10. assuming now that the average beta in the body of the magnet is not so different from the beta in the fringe, one gets for small \alpha a ratio of order \epsilon/l_{\rm eff}, as in eq. ([eq:ratrmsflatemlen]), and for large \alpha a ratio of order |\alpha|\,\epsilon/l_{\rm eff}, as in eq. ([eq:ratrmsflatalemlen]). consider a ``straight'' dipole magnet; the configuration of poles and coils is symmetric about the x=0 and y=0 planes, and the coils are excited with alternating signs and equal strength. by symmetry, b_x is odd in both x and y, b_y is even in both x and y, and b_z is even in x and odd in y.
using the general field expansion of eq .( [ eq : fieldmgen ] ) , we get : }_{2n+2m+2 - 2l } } \\ b_y & = { \displaystyle \sum^{\infty}_{m , n=0}\sum^{m}_{l=0 } { \frac{(-1)^m x^{2n}y^{2m}}{(2n)!(2 m ) ! } } \binom{m}{l } b^{[2l]}_{2n+2m-2l } } \\b_z & = { \displaystyle \sum^{\infty}_{m , n=0}\sum^{m}_{l=0 } { \frac{(-1)^m x^{2n}y^{2m+1}}{(2n)!(2m+1 ) ! } } \binom{m}{l } b^{[2l+1]}_{2n+2m-2l } } \\\end{split}\;\;. \label{eq : fdipole}\ ] ] taking the field expansion up to leading order , we get : } y^2 + \frac{1}{2 } b_2 ( x^2 - y^2 ) + o(4)}\\ b_z \!\ ! & \!\!\;=\;\!\ ! & \!\!{\displaystyle y \ ; b^{[1]}_{0 } \!+\ ! o(3)}\\ \end{array } \;\ ; , \label{eq : dexpand2}\ ] ] where represents a sextupole field component allowed by the symmetry of the `` dipole '' magnet ( for an ideally designed magnet ) and contain all the allowed terms of higher orders .a point has to be made about the application of the integrals evaluating the rms momentum kicks for bending magnets : because of the curved central orbit , these integrals are not exact , as previously mentioned .nevertheless , in most practical cases , the field uniformity in the interior of a `` dipole '' magnet is very high , and thus , on heuristic grounds , this approach can be expected to provide fairly good estimates even in this case .the change of transverse momentum imparted by the dipole field is ( see eq .( [ eq : intbody ] ) ) where as before is the effective length of the `` dipole '' magnet , and is the main dipole field in the body of the `` dipole '' magnet .using eq .( [ eq : transcomp ] ) the deflections in one fringe are and the total r.m.s .fringe kick is using eqs .( [ eq : emitang ] ) and ( [ eq : averaging ] ) , we have and the r.m.s . transverse momentum kick becomes thus , the by - now - standard ratio is except for numerical factors near one this formula yields the same `` ball - park '' estimates as given by eq . ( [ eq : ratrmsbroundemlen ] ) and eq .( [ eq : ratrmsbroundalemlen ] ) for the small and large cases .the configuration of poles and coils in a `` quadrupole '' magnet is symmetric about the four planes and if the coils are excited with alternating signs and equal strength , the magnetic field will satisfy the following symmetry conditions : is even in and odd in ; is odd in and even in ; is odd in both and ; and .as before , we may express the field components as : }_{2n+2m+1 - 2l } } \\b_y = & { \displaystyle \sum^{\infty}_{m , n=0}\sum^{m}_{l=0 } { \frac{(-1)^mx^{2n+1}y^{2m}}{(2n+1)!(2 m ) ! } } \binom{m}{l } b^{[2l]}_{2n+2m+1 - 2l } } \\ b_z = & { \displaystyle \sum^{\infty}_{m , n=0}\sum^{m}_{l=0 } { \frac{(-1)^mx^{2n+1}y^{2m+1}}{(2n+1)!(2m+1 ) ! } } \binom{m}{l } b^{[2l+1]}_{2n+2m+1 - 2l } } \\\end{split}. \label{eq : expand1}\ ] ] the field expansion can be written as } \right ] + o(5)}\vspace{.1 cm } \\ b_y & = & { \displaystyle x \left[b_1-\frac{1}{12}(3y^2+x^2)b_1^{[2 ] } \right ] + o(5)}\\ b_z & = & { \displaystyle xy b_1^{[1 ] } + o(4)}\\ \end{array } \;\ ; , \label{eq : expand2}\ ] ] where is the transverse field gradient at the quadrupole axis , and contain all the higher order terms . 
for a particle traversing the magnet with a horizontal deviation and vertical deviation from the center , the momentum increments produced by the nominal field gradients are where is the effective length of the quadrupole magnet .the momentum increments of the particle contributed from the longitudinal component of the magnetic field are and the momentum increment produced by the transverse component of the fringe fields are \;\ ; , \qquad \delta p^f_{y}(\perp ) \approx\frac{e \overline{b_1}}{4 } \left [ 2 x x'y + ( x^2+y^2)y'\right ] \;\;.\ ] ] combining the contributions , the total momentum increments due to fringe field are \\ \delta p^f_{y } \approx & \frac{e \overline{b_1}}{4 } \left[-2 x x'y + ( x^2+y^2)y'\right ] \end{split}\;\;.\ ] ] again , by averaging the sum of squares of the transverse momenta contribution , we obtain the total rms transverse momentum kick imparted by the fringe field : \epsilon_x^2 \epsilon_y \right . } \\ & { \displaystyle \left . + ( 1 + 5\alpha_y^2)\beta_y\epsilon_y^3 + \frac{3}{\beta_x } \left [ ( 1+\alpha_x^2)\beta_y^2 - 8\alpha_x\alpha_y\beta_x\beta_y + 2(1 + 3\alpha_y^2)\beta_x^2 \right ] \epsilon_x \epsilon_y^2 \right\}^{1/2 } } \end{array}.\ ] ] note that the expected rotation symmetry of the quadrupole is exhibited both in this formula and in the body deflection formula .the standard ratio is \epsilon_x^2 \epsilon_y}{2\beta_x\beta_y ( \overline{\beta_x}\epsilon_x + \overline{\beta_y}\epsilon_y ) } \right . } \\{ \displaystyle \left .+ \frac{(1 +5\alpha_y^2)\beta_x\beta_y^2\epsilon_y^3 + 3\beta_y \left [ ( 1+\alpha_x^2)\beta_y^2 - 8\alpha_x\alpha_y\beta_x\beta_y + 2(1 + 3\alpha_y^2)\beta_x^2 \right ] \epsilon_x \epsilon_y^2}{2\beta_x\beta_y ( \overline{\beta_x}\epsilon_x + \overline{\beta_y}\epsilon_y ) } \right\}^{1/2 } } \end{array } \;\;.\label{quaddifl}\ ] ] again dropping factors near 1 , this leads to the same ball - park estimates of eq .( [ eq : ratrmsbroundemlen ] ) and eq .( [ eq : ratrmsbroundalemlen ] ) .the lhc and the sns accumulator ring are good examples for testing the validity of the derived fringe field figure of merit formulas .indeed , the purpose of these two proton machines and thereby their magnet design differs in great extent : the lhc , a high - energy hadron collider , is filled with long super - conducting magnets of very small aperture ( around 1 cm ) .in contrast , the sns ring , a low - energy high intensity accumulator , contains short normal conducting magnets with wide aperture ( tens of cm ) .in addition , the lattice design , optics functions and physical parameters of the two machines are substantially different , e.g. the emittance of the sns beam is several orders of magnitude bigger , than the one of the lhc . in table[ tab : param ] , we summarize the parameters of the main magnets in the two accelerators entering in the figure of merit formulas and . in fig .[ fig : fringelhc ] , we plot in logarithmic scale the fringe - field figure of merit estimates for the lhc and the sns accumulator ring magnets .the black bars represent evaluation with the exact formulas derived for dipoles and quadrupoles ( see eqs . and ) and the grey bars represent the evaluation with the formula for round beams . 
in both cases, the total effect for each magnet is computed by summing up the fringe-field figures of merit from both ends due to all the magnets of the same type. the fringe field importance in the case of the sns is striking, especially for the quadrupole magnets, whereas in the case of the lhc it can be completely neglected. note that similar results can be derived by careful dynamical analysis and computation of tune-shifts due to fringe fields, or by dynamic aperture analysis, for both the lhc and the sns. it is important to stress that even the approximate formula for round beams is slightly pessimistic and within a factor of 2 of the exact figure of merit. we have derived formulas for the momentum kicks imparted by the fringe fields of general straight (non-solenoidal) multipole magnets. these formulas are based on an expansion having arbitrary dependence on the longitudinal coordinate. this expansion can be used for direct integration of the equations of motion for particle tracking or for other analytical non-linear dynamics estimates. it also permits the fringe part and the body part of individual magnets to be identified and separated. a figure of merit, the ratio of r.m.s. end deflection to r.m.s. body deflection, is introduced and evaluated. its proportionality to the transverse emittance results in an easily-evaluated measure of the importance of fringe fields, both in cases in which the variation of optical functions is not too rapid and in the opposite case of rapid variation. these results are in agreement with previous crude estimations which employed simple physics arguments based on maxwell's laws. finally, the formalism has been applied to the most common cases of multipole magnets, namely normal ``dipoles'' and ``quadrupoles''. since the straight line approximation has been used throughout, these formulas are only precise for magnetic fields that are well-approximated by step functions (the ``hard-edge'' approximation). thus, the formulas contain no parameters associated with the fringe shape. also, as stated previously, only those fringe fields matching, and therefore required by, the nominal body multipolarity are accounted for. numerical evaluation of the end/body figure of merit shows that fringe fields can be neglected in the magnets populating the arcs of large colliders like the lhc. in these rings, the magnets are long enough, and the emittances small enough, that the effect of fringe fields is a tiny perturbation as compared to the dominant multipole errors in the body of the magnets. the effect may be important, however, in small rings, such as the sns accumulator ring or a muon collider ring, where the emittance is orders of magnitude larger and the magnets much shorter. careful consideration should also be taken in the case of the magnets located in the interaction regions of a collider, where the beta variation is quite big. it is perhaps appropriate to call attention to possible ``overly optimistic'' use of the scaling law. often quadrupoles are grouped in doublets or triplets in which the desired focal properties rely on the intentional, highly-tuned, near cancellation of deflections caused by more than one element. in such cases, the fringe deflections are, of course, amplified when evaluated relative to the gross multiplet deflection. this effect is most obvious at focal points.
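the ball-park scaling can be exercised with illustrative numbers; the parameter values below are rough assumptions of ours and are not the entries of table [tab:param].

    def figure_of_merit(emittance, l_eff, alpha=0.0):
        """end/body r.m.s. deflection ratio, ~ sqrt(1 + alpha**2)*eps/l_eff."""
        return (1.0 + alpha ** 2) ** 0.5 * emittance / l_eff

    # assumed values: an lhc arc dipole (geometric emittance ~0.5e-9 m rad,
    # l_eff ~ 14.3 m) versus a short sns-like quadrupole (~1.6e-4 m rad, ~0.5 m)
    print(figure_of_merit(0.5e-9, 14.3))   # ~3.5e-11 -> utterly negligible
    print(figure_of_merit(1.6e-4, 0.5))    # ~3.2e-4  -> potentially significant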
since the early analytical studies of lee - whiting and forest , significant progress has been achieved for the construction of accurate maps which represent the motion of particles through the magnet fringe field , using either direct numerical evaluation with exact integration of the magnetic field or parameter fit of an adequate function ( _ e.g. _ the enge function ) .these maps are essential for the study of non - linearities introduced by fringe - fields through hamiltonian perturbation theory techniques . on the other hand, the scaling law we have emphasized can provide a rough estimate of the impact of these fringe fields in a ring .if the fringe fields are found to be important , a thorough numerical modelling and analysis of their effect has to be undertaken , including computation of the amplitude dependent tune - shift , resonance excitation and dynamic aperture , as non - linear dynamics can be very sensitive to the details of different lattices and magnet designs .furthermore , great care is required to preserve symplecticity and use these maps in particle tracking .the magnetic field representation in cartesian coordinates is not optimal for studying symmetries imposed by the cylindrical geometry of a perfect multipole magnet . for this, it is preferable to rely on expansions in cylindrical coordinates .both expansions are equivalent and the use of the former or the latter depends mostly on taste and the specific problem to be treated .first , consider the magnetic scalar potential written in the following form where now the coefficients are generally complex .the above expansion follows directly from the fact that the laplacian commutes with .this allows the consideration of solutions where the dependence in is an harmonic -pole .this expansion is compatible with the general solution of the laplace equation in cylindrical coordinates , involving bessel functions .using eq .( [ eq : potcyn ] ) and the laplace equation , one gets that .moreover , , should vanish for ( all terms except the dipole ) .finally , we have a recursion relation similar to eq .( [ eq : coef ] ) : }_{n+1,m}(z)}{(n+1)^2-(m+2)^2 } } \quad \text{for } \quad m \ne n-1 \;\ ; , \label{eq : cocyn}\ ] ] where again the superscript in brackets denotes derivatives with respect to . following these relations , one can show that all coefficients with vanish .thus , the first non - zero coefficient is ( for ) . by extending the recursion relation ( [ eq : cocyn ] )so as to express any coefficient as a function of , we get : }_{n+1,n+1}(z)\;\ ; . \label{eq : recu}\ ] ] the summation indexes can be rearranged so as to express the magnetic scalar potential in cylindrical coordinates : }_{n+1}(z ) \ , r^{n+1 + 2k } \right \ } \;\ ; , \label{eq : finpot}\ ] ] and the three - dimensional field components are : }_{n+1}(z ) \ , r^{n+2k } \right \ } \\ & b_{\theta}(r,\theta , z ) = -{\mathcal i m } \left \ { \sum_{n=0}^\infty e^{i ( n+1)\theta } \sum_{k=0}^\infty { \frac{(-1)^k ( n+1)!(n+1)}{2^{2k}(n+1+k)!k ! } } { \cal g}^{[2k]}_{n+1}(z ) \ , r^{n+2k } \right \ } \\ & b_z(r,\theta , z ) = { \mathcal re } \left \ { \sum_{n=0}^\infty e^{i ( n+1)\theta } \sum_{k=0}^\infty { \frac{(-1)^k ( n+1)!}{2^{2k}(n+1+k)!k ! 
} } { \cal g}^{[2k+1]}_{n+1}(z ) \ , r^{n+1 + 2k } \right \ } \end{split } \;\;.\label{eq : cylcomp}\ ] ] the coefficients can be related with the usual multipole coefficients , through eqs .( [ eq : mult ] ) .first , we write the scalar magnetic potential in cartesian coordinates : }_{n+1}(z ) \ , ( x+iy)^{n+1 } ( x^2+y^2)^{2k } \right \ } \;\;. \label{eq : potcart}\ ] ] the magnetic field components are computed by the gradient of the potential ( [ eq : potcart ] ) : { \cal g}^{[2k]}_{n+1}(z ) \biggr\ } \\b_y(x , y , z ) = { \mathcal i m } \biggl\ { \sum_{n , k=0}^\infty { \frac{(-1)^k ( n+1)!}{2^{2k}(n+1+k)!k ! } } & ( x^2+y^2)^{k-1 } ( x+iy)^{n+1 } \times\\ & \left [ -(n+1)x+i(n+1 + 2k)y\right ] { \cal g}^{[2k]}_{n+1}(z ) \biggr\ } \\ b_z(x , y , z ) = { \mathcal re } \biggl\ { \sum_{n , k=0}^\infty { \frac{(-1)^k ( n+1)!}{2^{2k}(n+1+k)!k ! } } & \ , ( x+iy)^{n+1 } ( x^2+y^2)^{2k}{\cal g}^{[2k+1]}_{n+1}(z ) \biggr\ } \end{split } \;\;. \label{eq : cartcomp}\ ] ] using eqs .( [ eq : mult ] ) , we get : }_{n+1 - 2k}(z)\ } \\a_n(z ) = & \quad\ ; ( n+1 ) ! \; { \mathcal re } \{{\cal g}_{n+1}(z)\ } + n ! \sum_{k=1}^{n/2 } \frac{(-1)^k(n+1 - 4k)(n+1 - 2k)!}{2^{2k}(n+1+k)!k ! } { \mathcal re } \{{\cal g}^{[2k]}_{n+1 - 2k}(z)\ } \end{split}\;\ ; , \label{eq : multcyn}\ ] ] where the upper limit of both series is the integer part of .thus , in the absence of longitudinal dependence of the field , the normal and skew multipole coefficients are just scalar multiples of the imaginary and real part of . on the other hand , the situation is more complicated in the case of 3d fields . by inverting the series ( [ eq : multcyn ] ), we have : }_{n-2k}(z ) \\ { \mathcal re } \{{\cal g}_{n+1}(z)\ } = & \quad\ ; \frac{1}{n ! } \sum_{k=0}^{n/2 } { \cal r}^{sk}_{n , k } a^{[2k]}_{n-2k}(z ) \end{split}\;\ ; , \label{eq : cynmult}\ ] ] where the coefficients and can be computed order by order by the relations and runs from 1 to the integer part of . using the last relations , the scalar potential and the magnetic field can be expressed as a function of the usual multipole coefficients . by expanding the complex polynomials in the expression of the magnetic field components ,one recovers the expansions of the magnetic fields ( [ eq : fieldmgen ] ) in cartesian coordinates .in order to evaluate the r.m.s . deflection caused by a magnet end , we start from the expressions by splitting the product inside the brackets : \\ & + & { \mathcal im}\left\ { ( x+iy)^n\right\ } \left[-(n+3)xy'+(n+1)x'y)\right ] \bigr ] \\ & & \\\delta p^f_{y } \approx & { \displaystyle { \frac{e \overline{b_n}}{4(n+1 ) ! } } } \bigl [ & { \mathcal re}\left\ { ( x+iy)^n\right\ } \left[(n+1)xy ' - ( n+3)x'y\right ] \\ & + & { \mathcal im}\left\ { ( x+iy)^n\right\ } \left[(n-1)xx'+ ( n+1)yy')\right ] \bigr ] \end{array } \label{eq : fringmultsplit } \;\;.\ ] ] the total r.m.s .transverse momentum kick imparted by the fringe field is , where the operator denotes the average over the angle variables .an equivalent expression stands for the deflection due to the body part of the field .the operator is linear , we can first compute the sum of squares of the momentum kicks and then proceed to their averaging .thus , we have : ^{1/2 } } \\ \\ ( \delta p^b_{\perp})_{\text rms } \approx { \displaystyle \frac { e \overline{b_n } l_{\text{eff}}}{n ! 
} \left [ \left\langle ( { \mathcal re } \left\ { ( x+iy)^n \right\ } ) ^2 + ( { \mathcal i m } \left\ { ( x+iy)^n \right\ } ) ^2 \right\rangle \right]^{1/2 } } \end{array } \label{eq : totrwo } \;\;,\ ] ] where , and are : - 8 ( n+1 ) xx'yy ' \\ & & \\ f_2 & = & x^2\left[(n-1)^2{x'}^2+(n+3)^2{y'}^2\right]+(n+1)^2y^2({x'}^2+{y'}^2 ) - 8 ( n+1 ) xx'yy ' \\ & & \\ f_3 & = & 4\left [ - ( n+1 ) ( x^2+y^2)x'y ' + xy({x'}^2+{y'}^2)\right ] \end{array } \label{eq : efs } \;\;.\ ] ] we have the following relations for the real and imaginary part of : } ( -1)^l \binom{n}{2l } x^{n-2l } y^{2l } } \\ & \\ { \mathcal i m } \left\{(x+iy)^n \right\}= & { \displaystyle \sum_{l=0}^{[(n-1)/2 ] } ( -1)^l \binom{n}{2l+1 } x^{n-2l-1 } y^{2l+1 } } \end{array } \label{eq : realim } \;\;,\ ] ] and thus : } \\ & = { \displaystyle \frac{1}{2 } \sum_{l=0}^{n } \left[\binom{n}{l } + ( -1)^l\binom{2n}{2l}\right ] x^{2n-2l } y^{2l } } \\ \\\left({\mathcal i m } \left\{(x+iy)^n \right\}\right)^2 & = { \displaystyle \frac{1}{2 } \left[\left(x^2+y^2\right)^n - { \mathcal re}\left\{(x+iy)^{2n}\right\ } \right ] } \\ & = { \displaystyle \frac{1}{2 } \sum_{l=0}^{n } \left[\binom{n}{l } - ( -1)^l\binom{2n}{2l}\right ] x^{2n-2l } y^{2l } } \\ & \\ { \mathcal re } \left\{(x+iy)^n \right\ } { \mathcal i m } \left\{(x+iy)^n \right\ } & = { \displaystyle \frac{1}{2 } { \mathcal im}\left\{(x+iy)^{2n } \right\ } } \\ & = { \displaystyle \frac{1}{2 } \sum_{l=0}^{n } ( -1)^l\binom{2n}{2l+1 } x^{2n-2l-1 } y^{2l+1 } } \end{array } \;,\ ] ] where the upper limit of the last sum is taken to be for uniformity in the equations , instead of the last non - zero term for which . finally , it is straightforward to show that after expanding the products in eq .( [ eq : totrwo ] ) and collecting the terms of equal power in the transverse variables , we have that the transverse kicks can be written in the following form : ^{1/2 } } \\ & & \\ ( \delta p^b_{\perp})_{\text rms } & \approx & { \displaystyle \frac { e \overline{b_n } l_{\text{eff}}}{n ! } \left [ \sum_{l=0}^{n } \binom{n}{l } \left\langle x^{2n-2l } \right\rangle \left\langle y^{2l } \right\rangle \right]^{1/2 } } \end{array } \label{eq : newkick } \;\;,\ ] ] where the s are with the coefficients s : in order to proceed to the averaging of the transverse variables , we write them in the standard form where are the transverse emittance associated with the corresponding phase space dimension , , are the usual beta and alpha functions and stand for , respectively . 
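the uniform - phase averages that underpin the next step are easy to verify numerically . the sketch below checks two standard moments of the courant - snyder variables against their closed forms , which are consistent with the binomial - coefficient structure of eq . ( [ eq : averaging ] ) ; the sign of the cross moment follows the assumed convention q' = -sqrt(eps/beta)(sin phi + alpha cos phi) , and all parameter values are arbitrary .

```python
import numpy as np
from math import comb

def check_averages(beta=7.0, alpha=1.3, eps=2.5e-6, m=2, n=4_000_000, seed=0):
    # draw the betatron phase uniformly and form q, q' in courant-snyder form
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    q = np.sqrt(beta * eps) * np.cos(phi)
    qp = -np.sqrt(eps / beta) * (np.sin(phi) + alpha * np.cos(phi))

    # <q^{2m}> = binom(2m, m) (beta*eps)^m / 2^{2m}
    mc_even = np.mean(q ** (2 * m))
    th_even = comb(2 * m, m) * (beta * eps) ** m / 2 ** (2 * m)

    # <q^{2m+1} q'> = -binom(2(m+1), m+1) alpha beta^m eps^{m+1} / 2^{2m+2}
    mc_cross = np.mean(q ** (2 * m + 1) * qp)
    th_cross = (-comb(2 * (m + 1), m + 1) * alpha * beta**m
                * eps ** (m + 1) / 2 ** (2 * m + 2))

    print(f"<q^{2*m}>       monte carlo {mc_even:.6e}  closed form {th_even:.6e}")
    print(f"<q^{2*m+1} q'>  monte carlo {mc_cross:.6e}  closed form {th_cross:.6e}")

check_averages()
```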
using the above relations and averaging over the angle variables one can show that : \beta_q^{m-1}\epsilon_q^{m+1}}{2^{2m+1}(m+1 ) } } % & = % \langle q^{2 m } \rangle { \displaystyle % \frac{\left[1+(2m+1)\alpha_q\right]\epsilon_q}{2(m+1)\beta_q } } \\ & \\\langle q^{2m+1 } q ' \rangle & = { \displaystyle \binom{2(m+1)}{m+1 } \frac{\alpha_q\beta_q^m\epsilon_q^{m+1}}{2^{2m+2 } } } % & = { \displaystyle % \langle q^{2 m } \rangle \frac{(2m+1)\alpha_q\epsilon_q}{2(m+1 ) } } \\end{array } \label{eq : averaging } \;\;.\ ] ] then , the s become : \beta_x^{n - l}\beta_y^{l } \epsilon_x^{n - l+2}\epsilon_y^l}{2^{2n+2}(n - l+1)(n - l+2 ) } } } \\ & \\ \omega_2 & = { \displaystyle \left(\omega_3(n , l ) + \omega_4(n , l)\right ) \binom{2(n - l)}{n - l } \binom{2l}{l } { \frac{(2l+1)[1+(2n-2l+1)\alpha_x^2 ] \beta_x^{n - l-1}\beta_y^{l+1 } \epsilon_x^{n - l+1}\epsilon_y^{l+1}}{2^{2n+2}(n - l+1)(l+1 ) } } } \\ & \\ \omega_3 & = { \displaystyle \left(\omega_3(n , l ) + \omega_5(n , l)\right ) \binom{2(n - l)}{n - l } \binom{2l}{l } { \frac{(2n-2l+1)[1+(2l+1)\alpha_y^2 ] \beta_x^{n - l+1}\beta_y^{l-1 } \epsilon_x^{n - l+1}\epsilon_y^{l+1}}{2^{2n+2}(n - l+1)(l+1 ) } } } \\& \\ \omega_4 & = { \displaystyle \left(\omega_1(n , l ) + \omega_6(n , l)\right ) \binom{2(n - l)}{n - l } \binom{2l}{l } { \frac{(2l+1)[1+(2l+3)\alpha_y^2 ] \beta_x^{n - l}\beta_y^{l } \epsilon_x^{n - l}\epsilon_y^{l+2}}{2^{2n+2}(l+1)(l+2 ) } } } \\ & \\ \omega_5 & = { \displaystyle \omega_7(n , l ) \binom{2(n - l)}{n - l } \binom{2l}{l } { \frac{(2l+1)(2l+3)\alpha_x\alpha_y \beta_x^{n - l-1}\beta_y^{l+1 } \epsilon_x^{n - l}\epsilon_y^{l+2}}{2^{2n+2}(l+1)(l+2 ) } } } \\ & \\ \omega_6 & = { \displaystyle \left(\omega_7(n , l ) + \omega_8(n , l)\right ) \binom{2(n - l)}{n - l } \binom{2l}{l } { \frac{(2n-2l+1)(2l+1)\alpha_x\alpha_y \beta_x^{n - l}\beta_y^{l } \epsilon_x^{n - l+1}\epsilon_y^{l+1}}{2^{2n+2}(n - l+1)(l+1 ) } } } \end{array}. \label{eq : omegasav}\ ] ] after collecting terms of equal emittances , the r.m.s .transverse momentum kicks can be expressed as : ^{1/2 } } \\ \\( \delta p^b_{\perp})_{\text rms } \approx { \displaystyle \frac { e \overline{b_n } l_{\text{eff}}}{2^nn ! 
} \left [ \sum_{l=0}^{n } \binom{n}{l } \binom{2(n - l)}{n - l } \binom{2l}{l } \overline{\beta_x^{n - l } } \overline{\beta_y^{l } } \epsilon_x^{n - l } \epsilon_y^{l } \right]^{1/2 } } \end{array } \label{eq : finkick } \hskip -15pt , \ ] ] where the bars on the s denote their average values over the body of the magnet .the coefficients , given by [ 1+(2l+3)\alpha_y^2]}{(l+1)(l+2)}}\\ & & \\ & & { \displaystyle -\frac{8 ( n+1)(n - l)(2l+3)(-1)^l\binom{2n}{2l } \alpha_x\alpha_y\beta_y}{\beta_x(l+1)(l+2)}}\\ & & \\ g_{n , l,1}(\alpha_{x , y},\beta_{x , y } ) & = & { \displaystyle \frac{\left[(n^2 + 4n+5)(2l+1)\binom{n}{l}+2(5n+2ln+2)(-1)^l\binom{2n}{2l } \right][1+(2n-2l+1)\alpha_x^2]\beta_y}{\beta_x ( n - l+1)(l+1)}}\\ & & \\ & & { \displaystyle + \frac{\left[(n^2 + 4n+5)\binom{n}{l } -2(n+2)(-1)^l\binom{2n}{2l}\right ] ( 2n-2l+1)[1+(2l+1)\alpha_y^2]\beta_x}{\beta_y(n - l+1)(l+1)}}\\ & & \\ & & { \displaystyle -\frac{8 ( n+1)\left[(2l+1)\binom{n}{l}+(n - l)(-1)^l \binom{2n}{2l } \right ] ( 2n-2l+1 ) \alpha_x\alpha_y}{(n - l+1)(l+1)}}\\ & & \\ g_{n , l,2}(\alpha_{x , y},\beta_{x , y } ) & = & { \displaystyle \frac{\left[\left ( n^2 + 1 \right)\binom{n}{l}+ 2n ( -1)^l \binom{2n}{2l } \right](2n-2l+1 ) [ 1+(2n-2l+3)\alpha_x^2]}{(n - l+1)(n - l+2)}}\\ \end{array } \label{eq : ges } , \ ] ] depend on the twiss functions , and on the multipole order .one may note that r.m.s .transverse momentum kick of the fringe is represented by the square root of a polynomial of order in the transverse emittances and as compared to the square root of a polynomial of order representing the body contribution ( see also ) .thus , their ratio should be proportional to the transverse emittance .this scaling law is indeed exact for the case of the `` dipole '' and `` quadrupole '' . for higher order `` multipoles '' ,it is exact for flat and round beams ( sec .the authors would like to thank a. jain for useful suggestions regarding the magnetic field expansions , e. keil for his criticism in an early version of this work and r. baartman for many useful comments and discussion .this work was performed under the auspices of the u.s .department of energy .steffen , _ high energy beam optics _ ( interscience , new york , 1965 ) .e. forest and j. milutinovic , nucl .instrum .methods * 269 * , 474 ( 1988 ) e. forest , _ beam dynamics - a new attitude and framework _ , ( harwood acad . pub . ,amsterdam , 1998 ) .j. irwin and c. wang , explicit soft fringe maps of a quadrupole , _ proceedings of the particle accelerator conference _ , dallas , 1995 , ( ieee piscataway nj , 1996 ) , p. 2376 .r. baartman , intrinsic third order aberrations in electrostatic and magnetic quadrupoles , in _ proceedings of the 6th european particle accelerator conference _ , stockholm , 1998 , edited by s. myers _ et al ._ , ( institute of physics , london and philadelphia , 1998 ) , p. 1415 .gradshteyn and i.m .ryzhik , _ table of integrals , series , and products _ , corrected and enlarged edition , ( academic press inc . , san diego , ca , 1980 ) . f. mot , particle accelerators , * 55 * , 329 ( 1996 ) .y. papaphilippou and d.t .abell , beam dynamics analysis and correction of magnet field imperfections in the sns accumulator ring , in _ proceedings of the 7th european particle accelerator conference _ ,vienna , 2000 , edited by j .-laclare _ et al ._ , ( austrian academy of science press , vienna , 2000 ) , p. 1453 .j. wei and r. talman , particle accelerators * 55 * , 339 ( 1996 ) .j. wei , y. papaphilippou and r. 
talman , scaling law for the impact of magnet fringe fields , in _ proceedings of the 7th european particle accelerator conference _ , vienna , 2000 , edited by j .-laclare _ et al ._ , ( austrian academy of science press , vienna , 2000 ) , p. 1092 .m. venturini , scaling of third - order quadrupole aberrations with fringe field extension , in _ proceedings of the particle accelerator conference _ , new york , 1999 , edited by a.u .luccio and w.w.mackay , ( ieee piscataway nj , 1999 ) , p. 1590 .m. berz , b. erdlyi and k. makino , prst - ab * 3:124001 * ( 2000 ) .m. berz , b. erdlyi and k. makino , fringe - field effects in muon rings , ( preprint ) .w. wan , c. johnstone , j. holt , m. berz , k. makino , m. lindemann and b. erdlyi , nucl .methods , * 427 * , 74 ( 1999 ) .lee - whiting , nucl .methods * 83 * , 232 ( 1970 ) . m. venturini and a.j .dragt , nucl .instrum .methods * 427 * , 387 ( 1999 ) .a.j . dragt , d.r .douglas , f. neri , e. forest , l.m .healy , p. schtt , j. van zeijts , marylie 3.0 user s manual , university of maryland , physics department report , 1999 ( unpublished ) .hoffsttter and m. berz , phys.rev e * 54 * , 5664 ( 1996 ) .b. erdlyi , m. berz and m. lindemann , differential algebra based magnetic field computations and accurate field maps , ( preprint ) . k. makino and m. berz , nucl. instrum .methods * 427 * , 338 ( 1999 ) .enge , deflecting magnets , in _ focusing of charged particles _ ,volume 2 , edited by a. septier ( academic press , ny and london 1967 ) .e. forest , d. robin , a. zholents , m.donald , r.helm , j. irwin and h. moshammer , sources of amplitude dependent tune shift in the pep - ii design and their compensation with octupoles , _ proceedings of the 4th european particle accelerator conference _ , london , 1994 , edited by v. suller and c. petit - jean - genaz , ( world scientific , river edge nj 1994 ) , p. 1033. f. zimmermann , tune shift with amplitude induced by quadrupole fringe fields , cern - sl-2000 - 009 ap , 2000 ( unpublished ) .f. zimmermann , c. johnstone , m. berz , b. erdlyi and k. makino and w. wan , fringe fields and dynamic aperture in the fnal muon storage ring , cern - sl-2000 - 011 , 2000 ( unpublished ) . f. zimmermann , fringe fields , dynamic aperture and transverse depolarisation in the cern muon storage ring , cern - sl-2000 - 012 ap , 2000 ( unpublished ) .m. berz , b. erdlyi and k. makino , nucl .methods , * 472 * , 533 ( 2001 ) .danby , s.t .lin and j.w .jackson , three - dimensional properties of magnetic beam transport elements , in _ proceedings of the national particle accelerator conference _ , washington d.c ., 1967 , ( ieee transactions on nuclear science 14 , no. 3 , 1967 ) , p. 442 .brown and r.v .servranckx , first- and second - order charged particle optics , in _proceedings on physics of high energy particle optics _ , bnl / suny summer school , 1983 , edited by m. month , p.f .dahl and m. dienes ( american institute of physics conference proceedings no.127 , new york , 1985 ) , p. 62 .m. bassetti and c. biscari , particle accelerators * 52 * , 221 ( 1995 ) .gardner , three - dimensional field expansions in magnets : a primer , bnl / sns technical note no.53 , 1998 ( unpublished ) .jackson , _ classical electrodynamics _ , 3rd edition , ( john wiley and sons , new york , 1999 ) .a. jain , basic theory of magnets , in _ proceedings of cern accelerator school on measurement and alignment of accelerator magnets _ , edited by s. turner , ( cern yellow report 98 - 05 , geneva , 1998 ) , p. 1 . 
a.b . el - kareh and j.c . el - kareh , _ electron beams , lenses and optics _ ( academic press , new york , 1970 ) .

table [ tab : param ] : parameters associated with the lhc and sns magnets , whose fringe - field figure of merit is evaluated in fig . [ fig : fringelhc ] . when two numbers occur , they are associated with the minimum and maximum value .
a transverse multipole expansion is derived , including the longitudinal components necessarily present in regions of varying magnetic field profile . it can be used for exact numerical orbit following through the fringe field regions of magnets whose end designs introduce no extraneous components , _ i.e. _ fields not required to be present by maxwell s equations . analytic evaluations of the deflections are obtained in various approximations . mainly emphasized is a `` straight - line approximation '' , in which particle orbits are treated as straight lines through the fringe field regions . this approximation leads to a readily - evaluated figure of merit , the ratio of r.m.s . end deflection to nominal body deflection , that can be used to determine whether or not a fringe field can be neglected . deflections in `` critical '' cases ( e.g. near intersection regions ) are analysed in the same approximation .
the necessity for greater human mobility brings the need to obtain information on the move , and thus an increase in mobile data traffic . according to the most recent networking index by cisco systems , global mobile data traffic grew 74% in 2015 and is expected to rise at a compound annual growth rate ( cagr ) of 53% from 2015 to 2020 . another important trend is the projected increase in global mobile network connection speeds . whereas the average downstream speed for smartphones grew nearly 26% to 7.5 megabits per second ( mb / s ) in 2015 , the quantity is anticipated to reach 12.5 mb / s by 2020 , following a five - year 11% cagr . minimum technical performance requirements for the modern fourth generation ( 4 g ) mobile telecommunication systems were outlined in 2008 by the international telecommunication union ( itu ) radiocommunication sector 's ( itu - r ) report itu - r m.2134 . in the document , the minimum downlink ( dl ) peak spectral efficiency was defined as 15 b / s / hz and operation in wider bandwidths up to 100 megahertz ( mhz ) was encouraged , thus setting the theoretical dl peak data rate at 1500 mb / s . however , the spectral efficiency was defined assuming a 4 × 4 multiple input multiple output ( mimo ) antenna configuration , whereas current 4 g products operate on 2-stream mimo , and 4-stream mimo is envisioned only for the user equipment ( ue ) of the fifth generation ( 5 g ) wireless communication systems , for which non - coherent detection would also be preferred . as 4 g systems , although they currently represent just 14% of global mobile connections , are already being laid out by mobile network operators , research on 5 g is gathering pace . many traffic forecast and market reports are being published to guide initial specification and standardization activities . two key common conclusions of these studies are the expectation of a thousandfold increase in overall wireless communication traffic volume within a decade , and the forthcoming machine - to - machine ( m2m ) communication boom . a tenfold portion of the traffic rise is attributed to the increase in m2m connections , which is estimated to grow from 0.6 to 3.2 billion by 2020 owing to a five - year 38% cagr , and the rest is caused by the rise in traffic per device . based on these expectancies , the peak data rate of 5 g systems is required to be at least on the order of 10 gigabits per second ( gb / s ) . accommodating the anticipated traffic growth requires the total throughput to rise together with the data rates . the principal methods to accomplish this are evident : increasing the operation bandwidth or spectral efficiency , or reducing the signalling overhead . because the subject is applicable to nearly all areas of wireless communications , spectral efficiency enhancement efforts have traditionally led the research . a direct consequence of these studies is the spectrally very efficient systems in operation presently , one of which is the long term evolution - advanced with coordinated multi - point . whereas orthogonal frequency division multiplexing ( ofdm ) is utilized by the majority of the broadband communication systems , filter bank multicarrier , despite its higher complexity , is investigated for future communication systems too , due to its better spectral efficiency . device densification inherent to the continuously growing number of mobile - connected devices leads to shorter propagation paths , creating a favourable situation for the employment of wider bandwidths over higher carrier frequencies .
taking advantage of this , increasing the operation frequency to the low end of the terahertz ( thz ) band is proposed in this paper as a solution to the data rate and network capacity requirements of the future 5 g wireless communication systems . the remainder of this paper is organized as follows . section [ sec : std ] presents a brief overview of the standardization activities on the low - thz band , namely the 300 gigahertz ( ghz ) spectrum , and section [ sec : model ] outlines its channel characteristics and available models . section [ sec : device ] provides state - of - the - art thz device technologies that operate at room temperature and hold promise for use in commercial 5 g products . key open research issues are identified in section [ sec : iss ] and the article concludes with recapitulating remarks .

the need for new spectral resources has been addressed by choosing the 60 ghz band as the new industrial , scientific and medical ( ism ) radio band to be used for unlicensed wireless communications . in addition to the two wireless personal area network ( wpan ) standards , ecma-387 and ieee 802.15.3c , which have been available since december 2008 and october 2009 , respectively , the only wireless local area network ( wlan ) standard for the 60 ghz band , ieee 802.11ad , was ratified in december 2012 . to provide an initial sense of the potential gains obtainable by the increase in operation frequency , the 802.11ad standard makes use of the 9 ghz of unlicensed bandwidth available in most parts of the world between 57 and 66 ghz , by defining channel bandwidths to be 2160 mhz . this ample channel lets single carrier waveforms reach a maximum data rate of 4620 mb / s with $\pi$/2 16-ary quadrature amplitude modulation ( 16-qam ) , and ofdm waveforms reach 6756.75 mb / s using 64-qam . detailed explanations of all three 60 ghz standards are available in the literature . according to itu 's latest radio regulations , edition of 2015 , frequencies up to 275 ghz are completely allocated to various services and stations , whereas frequency bands in the range of 275 - 1000 ghz are identified only for the passive service applications of radio astronomy , earth exploration - satellite and space research . therefore , the spectrum beyond 275 ghz is nearly uninhabited and at present obtainable by any valuable service , including wireless and mobile communications . in fact , relevant standardization activities began in 2008 with the formation of the ieee 802.15 wpan terahertz interest group ( ig thz ) , whose focus was on the thz frequency bands between 275 and 3000 ghz .
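looking back at the 60 ghz figures quoted above , the two peak rates can be reproduced with simple arithmetic . the phy constants below ( chip rate , guard - interval overhead , data subcarrier count , symbol duration and code rates ) are quoted from memory of the 802.11ad specification and should be treated as assumptions of this sketch rather than as normative values .

```python
# back-of-envelope reproduction of the quoted ieee 802.11ad peak rates.
# assumed phy constants:
#   sc phy  : 1.76 gchip/s, pi/2 16-qam (4 b/chip), ldpc rate 3/4,
#             64 of every 512 chips spent on the guard interval
#   ofdm phy: 336 data subcarriers, 64-qam (6 b/sc), ldpc rate 13/16,
#             symbol duration 0.242424 us (including cyclic prefix)

sc_rate = 1.76e9 * 4 * (3 / 4) * (448 / 512)
ofdm_rate = 336 * 6 * (13 / 16) / 0.242424e-6

print(f"sc   peak rate ~ {sc_rate / 1e6:.0f} mb/s")    # ~4620 mb/s
print(f"ofdm peak rate ~ {ofdm_rate / 1e6:.1f} mb/s")  # ~6756.75 mb/s
```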
by july 2013 , the efforts both within the group and industry reached an adequate level to transform ig thz into a study group , sg 100 g , with the aim of developing project authorization request ( par ) and five criteria documents addressing 40/100 gb / s over beam switchable wireless point - to - point links . the par , which identifies data centers for the beam switched point - to - point applications and adds kiosk and fronthaul and backhaul intra - device communications to the usage models , was approved in march 2014 , resulting in the formation of the task group 3d ( tg3d ) . ig thz is also retained for applications outside the scope of the tg3d .

the answer to electromagnetic ( em ) wave propagation has been available since 1865 through maxwell 's equations . however , by making use of the general properties of typical wireless communication scenarios , the complexity of solving four differential equations for a point in space and time can be greatly reduced via specific channel models which provide satisfactory approximations . the non - line - of - sight ( nlos ) em wave propagation mechanisms are transmission , reflection , scattering and diffraction . when a uniform plane em wave propagating in a near - perfect dielectric medium , such as air , whose relative permittivity $\epsilon_r$ , which is also termed the dielectric constant in the literature , equals 1.0006 , is incident upon a lossy medium , as shown in fig . [ fig : model : geometry ] , part of its wave intensity is transmitted into this medium , and the rest is reflected back . the ratios of the transmitted and reflected electric field components , $e^t$ and $e^r$ , to the incident electric field component , $e^i$ , are termed the transmission and reflection coefficients at the interface , $\tau^b$ and $\gamma^b$ , respectively . the equations for $\tau^b$ and $\gamma^b$ depend on the polarization of the incident wave and are expressed as
\[\tau^b_{\perp} = \frac{2\eta_2\cos\theta_i}{\eta_2\cos\theta_i+\eta_1\cos\theta_t}\;\;, \label{eq:tper}\]
\[\gamma^b_{\perp} = \frac{\eta_2\cos\theta_i-\eta_1\cos\theta_t}{\eta_2\cos\theta_i+\eta_1\cos\theta_t}\;\;, \label{eq:rper}\]
\[\tau^b_{\parallel} = \frac{2\eta_2\cos\theta_i}{\eta_2\cos\theta_t+\eta_1\cos\theta_i}\;\;, \label{eq:tpar}\]
\[\gamma^b_{\parallel} = \frac{\eta_2\cos\theta_t-\eta_1\cos\theta_i}{\eta_2\cos\theta_t+\eta_1\cos\theta_i}\;\;, \label{eq:rpar}\]
for perpendicular , or horizontal , and parallel , or vertical , polarizations , respectively , where $\theta_i$ and $\theta_t$ are the incident and transmission , or refracted , angles , and $\eta_1$ and $\eta_2$ are the intrinsic impedances of the dielectric and conductor media in ohms , respectively . em waves can have any general polarization , and one method to attain the resulting $e^t$ and $e^r$ is separating the $e^i$ into its perpendicular and parallel components , calculating the $e^t_{\perp,\parallel}$ and $e^r_{\perp,\parallel}$ independently , and vector summing the parted components . for any polarization category , $e^t$ in v / m can be written as
\[e^t = \hat{a}\,e_2\,e^{-\gamma_2(\hat{n}^t\cdot r)} = \hat{a}\,e_2\,e^{-\alpha_2(\hat{n}^t\cdot r)}\,e^{-j\beta_2(\hat{n}^t\cdot r)}\;\;, \label{eq:etran}\]
where
\[e_2 = \tau^b e^i\;\;, \label{eq:e2}\]
\[e^i = e^i_0\,e^{-j\beta_1(\hat{n}^i\cdot r)}\;\;, \label{eq:ei}\]
\[r = \hat{a}_x x+\hat{a}_y y+\hat{a}_z z\;\;, \label{eq:posvec}\]
$\gamma_2 = \alpha_2+j\beta_2$ is the propagation constant of the wave , $\hat{n}^t$ is the unit vector in the direction of travel and $r$ is the position vector in rectangular coordinates . em waves attenuate in all media except for the perfect dielectrics , and this effect is represented by the only real exponential in ( [ eq : etran ] ) , which includes the attenuation constant $\alpha$ in np / m that is defined as
\[\alpha = \omega\left\{\frac{\mu\epsilon}{2}\left[\sqrt{1+\left(\frac{\sigma}{\omega\epsilon}\right)^{2}}-1\right]\right\}^{1/2}\;\;, \label{eq:attcnst}\]
where $\omega$ is the angular frequency in rad / s , $\mu$ is the permeability in h / m , $\epsilon$ is the permittivity in f / m and $\sigma$ is the conductivity in s / m . a medium whose constitutive parameters ( $\mu$ , $\epsilon$ and $\sigma$ ) depend on the frequency of the applied field is labelled as dispersive . whereas all materials possess different levels of dispersion , the variation is typically insignificant , except for the permeabilities of ferromagnetic and ferrimagnetic materials , and the permittivities and conductivities of dielectric materials .
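as a quick numerical illustration of the expressions above , the sketch below evaluates the perpendicular - polarization coefficients of eqs . ( [ eq : tper ] ) - ( [ eq : rper ] ) and the attenuation constant of eq . ( [ eq : attcnst ] ) across carrier frequencies . the intrinsic impedance form used here is the standard one for a lossy medium ; the material values ( $\epsilon_r = 5$ , $\tan\delta = 0.1$ , meant to be concrete - like ) and the constant - loss - tangent model for the dispersion of $\sigma$ are illustrative assumptions , and the refraction angle is taken from the lossless - limit form of snell 's law for simplicity .

```python
import numpy as np

EPS0, MU0 = 8.854e-12, 4e-7 * np.pi

def intrinsic_impedance(f, eps_r, sigma):
    # eta = sqrt(j w mu / (sigma + j w eps)); mu_r = 1 assumed
    w = 2 * np.pi * f
    return np.sqrt(1j * w * MU0 / (sigma + 1j * w * eps_r * EPS0))

def attenuation_constant(f, eps_r, sigma):
    # eq. (eq:attcnst)
    w = 2 * np.pi * f
    eps = eps_r * EPS0
    return w * np.sqrt(MU0 * eps / 2 * (np.sqrt(1 + (sigma / (w * eps))**2) - 1))

def fresnel_perp(eta1, eta2, th_i, th_t):
    # eqs. (eq:tper) and (eq:rper)
    d = eta2 * np.cos(th_i) + eta1 * np.cos(th_t)
    return 2 * eta2 * np.cos(th_i) / d, (eta2 * np.cos(th_i) - eta1 * np.cos(th_t)) / d

eps_r, tan_d = 5.0, 0.1                 # assumed concrete-like material values
eta1, th_i = 120 * np.pi, np.deg2rad(30.0)
th_t = np.arcsin(np.sin(th_i) / np.sqrt(eps_r))  # lossless-limit snell estimate
for f in (3e9, 60e9, 300e9):
    sigma = 2 * np.pi * f * eps_r * EPS0 * tan_d  # constant loss tangent
    tau, gam = fresnel_perp(eta1, intrinsic_impedance(f, eps_r, sigma), th_i, th_t)
    print(f"{f/1e9:5.0f} ghz: |tau|={abs(tau):.3f} |gamma|={abs(gam):.3f} "
          f"alpha={attenuation_constant(f, eps_r, sigma):7.1f} np/m")
```

under this dispersion model the attenuation constant grows roughly linearly with frequency while the interface coefficients barely move , in line with the measurements discussed next .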
therefore , for the ordinary lossy media , it can be assumed that only $\omega$ in ( [ eq : attcnst ] ) increases with rising operation frequency , and so does the $\alpha$ . consequently , transmission losses are greater for the low - thz band compared to the conventional spectra , which is also evidenced by respective measurements . similarly , $e^r$ can be expressed as
\[e^r = \hat{a}\,e_1^r\,e^{-j\beta_1(\hat{n}^r\cdot r)}\;\;, \label{eq:erefl}\]
where
\[e_1^r = \gamma^b e^i\;\;, \label{eq:e1}\]
\[\beta = \omega\left\{\frac{\mu\epsilon}{2}\left[\sqrt{1+\left(\frac{\sigma}{\omega\epsilon}\right)^{2}}+1\right]\right\}^{1/2} \label{eq:phscnst}\]
is the phase constant in rad / m , which is also known as the wave number and represented with $k$ in the literature , and $\theta_r$ is the reflected angle . hence , the amplitude of $e^r$ varies with $\gamma^b$ , which depends on the intrinsic impedances and $\theta_t$ , which are calculated using
\[\eta = \sqrt{\frac{j\omega\mu}{\sigma+j\omega\epsilon}}\;\;, \label{eq:intimp}\]
\[\gamma_1\sin\theta_i = \gamma_2\sin\theta_t\;\;. \label{eq:snellrefr}\]
( [ eq : snellrefr ] ) is snell 's law of refraction and it generates a complex $\theta_t$ for incidences comprising a lossy medium . therefore , the true refracted angle , $\psi_t$ , and the direction of wave travel , $\hat{n}^t$ , are written as
\[\psi_t = \tan^{-1}\!\left(\frac{{\mathcal im}\{\gamma_2\sin\theta_t\}}{{\mathcal im}\{\gamma_2\cos\theta_t\}}\right)\;\;, \label{eq:angtra}\]
\[\hat{n}^t = \hat{a}_x\sin\psi_t+\hat{a}_z\cos\psi_t\;\;. \label{eq:wavedir}\]
unlike the $\alpha$ , there are many frequency dependent parameters contained within the non - linear equation of $\gamma^b$ . thus , determining the precise effects of ascending carrier frequency on $\gamma^b$ , and so on $e^r$ , via broad theoretic observations for different material classes is unachievable . although very limited in the number of materials covered , there are existing studies which are based on actual em wave measurements and report the $\gamma^b$ of the evaluated specimens . one of these illustrates the $\gamma^b_{\perp}$ and $\gamma^b_{\parallel}$ of one sample of paperboard with ingrain wallpaper pasted on front and two concrete plaster examples , up to 1 thz and for two different $\theta_i$ . whereas the exact values are not released , the available graphs of $\gamma^b_{\perp}$ and $\gamma^b_{\parallel}$ show that even though there is fluctuation present in the curves , it is negligible for the computed frequency range and both $\theta_i$ . this result is actually expected for customary materials , which are poorly dispersive . however , it can not be concluded that the increasing operation frequency does not affect $e^r$ , since this also invalidates the smoothness assumption for the planar interface between the media under which ( [ eq : erefl ] ) is derived . therefore , a more comprehensive analysis incorporating rough surfaces into the reflection and scattering formulations is needed .

an irregular , or rough , boundary , or interface , is described to have periodic or random variations of height from a pre - set mean plane , or smooth , boundary . the scattering coefficient , $\rho$ , is defined as
\[\rho = \frac{e^s}{e^r}\;\;, \label{eq:rho}\]
where $e^s$ is the scattered electric field component , and $e^r$ is assumed to be the reflection created by an $e^i$ upon a smooth and perfectly conducting interface in the specular direction , whose conditions are
\[\theta_r = \theta_i\;\;, \label{eq:snellrefl}\]
\[\phi_s = 0\;\;. \label{eq:longitudinalsct}\]
( [ eq : snellrefl ] ) is snell 's law of reflection . the mean scattered power density , $s^s$ , in w / m^2^ is then expressed as
\[s^s = y_1\left\langle e^s\bar{e}^s\right\rangle = y_1\left\langle\rho\bar{\rho}\right\rangle\left|e^r\right|^{2}\;\;, \label{eq:powers}\]
where
\[e^r = \gamma^b_{\perp}e^i\;\;, \label{eq:ereflsct}\]
\[e^r = j\,\frac{\beta_1 xy\cos\theta_i}{2\pi r_0}\,e^i\,e^{-j\beta_1 r_0}\;\;, \label{eq:ereflfull}\]
$y_1$ is the admittance of the dielectric medium in s , $r_0$ is the distance of the point of observation from the origin , which lies on the mean plane interface but not necessarily on the interface , and $x$ and $y$ are the dimensions of the rough boundary . the angular brackets indicate mean value , and the bar denotes complex conjugate .
is of polarization because when the reflection is initiated by linearly polarized and upon smooth interfaces , preserve the linear polarization properties .( [ eq : ereflfull ] ) is also calculated exercising the helmholtz integral for the assumptions given in the definition of .let the interface between the media be rough in two dimensions and the surface is given by a random stationary process , whose values designate the level of the surface at points on the plane boundary . if is normally distributed with zero mean value , standard deviation , which represents the roughness of the surface , and correlation distance , signifying the separation between two variable points for which the autocorrelation coefficient reduces to , and so the density of the irregularities , is computed as rcl & = & e^-g(_0 ^ 2+_m=1^e^)[eq : rhorho ] where rcl g&=&^2[eq : g ] + _ 0&=&[eq : rho0 ] + f&=&[eq : factor ] + v_x&=&(_i-_r_s)[eq : vx ] + v_y&=&(_r_s)[eq : vy ] + & = & [ eq : wavelength ] is the rayleigh roughness parameter , is the scattering coefficient of a smooth interface of area , is a factor from the general kirchhoff solution for , and are the and components of the vector , respectively , is the index introduced by an exponential series expansion step during the derivation of ( [ eq : rhorho ] ) , and is the wavelength of the em wave in the dielectric medium in m . to the best of authors knowledge , there are three publications in the literature which propose or include a channel model for the thz band .while the emphasis is on ergodic capacity calculation of a thz band communication system that utilizes hybrid beamforming on its antenna subarrays , a channel model nevertheless is available in .it builds on the famous work of saleh and valenzuela , which primarily proposed that rays arrive in clusters in indoor transmissions . in , arrival time mechanismsare kept the same as poisson processes , whereas gain of the rays are detailed to contain gaseous attenuation .angular characteristics and beamforming related antenna components are also included from related works .however , the model is of unknown use since its accuracy is not validated against correct channel measurements . of the two papers which put forward a thz band channel model ,the more recent one is strictly analytical .formed on the proposal to divide a wideband thz channel into narrow enough subbands that do not exhibit frequency selectivity , the channel response of each such subband is claimed to be the addition of los , reflection , diffraction and scattering components . since the transfer functions and test data sets of all these propagation mechanisms are mostly based on existing work , the channel is yet to be independently analytically modelled and verified .the only low - thz band channel model that is constructed on top of both the theoretical and experimental work of a single research group is .outputs of an internally developed ray tracing software are used for modelling instead of real channel measurements , while omitting diffraction and scattering components .analytical probability density functions approximating the amplitude and polarization parts of the channel transfer function are parametrized in full . 
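the practical weight of the rayleigh roughness parameter $g$ introduced in eq . ( [ eq : g ] ) can be gauged with a few lines of code . the sketch below uses the standard kirchhoff - theory result that the coherent ( specular ) reflection of a gaussian rough surface is reduced by the factor $e^{-g/2}$ relative to a smooth one ; the surface - height standard deviation of 0.15 mm is an assumed , plaster - like value , not a measured one .

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def rayleigh_g(f, sigma_h, th_i_deg, th_r_deg=None):
    # rayleigh roughness parameter of the kirchhoff theory,
    # g = [2*pi*sigma_h/lambda * (cos th_i + cos th_r)]**2 ;
    # th_r = th_i selects the specular direction
    th_i = np.deg2rad(th_i_deg)
    th_r = th_i if th_r_deg is None else np.deg2rad(th_r_deg)
    lam = C / f
    return (2 * np.pi * sigma_h / lam * (np.cos(th_i) + np.cos(th_r))) ** 2

sigma_h = 0.15e-3  # assumed 0.15 mm surface-height standard deviation
for f in (3e9, 60e9, 300e9):
    g = rayleigh_g(f, sigma_h, 45.0)
    loss_db = -20 * np.log10(np.exp(-g / 2))  # coherent specular reduction
    print(f"{f/1e9:5.0f} ghz: g = {g:8.4f}, coherent specular loss = {loss_db:5.2f} db")
```

at 3 ghz such a surface is effectively smooth , at 60 ghz the loss is a fraction of a db , and at 300 ghz several db of the specular reflection are already scattered away , which is why rough - surface scattering can not be ignored in the low - thz band . the discussion now returns to the available channel models .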
however , the formulated model is scenario - specific , hence the provided time and angle of arrival and angle of departure information are of no general use and can be deterministically calculated . altogether , while it is far from final , with many issues that need to be resolved , it still presents the first true low - thz band channel model in the literature .

while the thz band seems to offer an abundant spectrum for every radio service conceivable , it actually presents a very harsh environment for em wave propagation . although the prohibitively high attenuation by atmospheric gases is advantageous for a small number of specific applications , such as intersatellite communication links , because it helps isolate them from any possible interference from the earth , the thz band is yet to be utilized for communication purposes . however , this has not been the case for the whole scientific field , as the thz band contains some unique and valuable information that is within the research interests of different areas . for example , temperatures of the interstellar dust clouds range between 10 and 200 k , which corresponds to about 210 ghz and 4.3 thz , respectively . therefore , energy radiated from interstellar gas , which is used for star formation research , lies entirely within the thz band . gases that make up the earth 's atmosphere also have thermal emission lines in the thz band , creating earth science 's need for measurement instruments working at thz frequencies . several device technologies are available today which jointly cover the transmitter ( tx ) and receiver ( rx ) needs of the entire thz spectrum , ranging from metamorphic high electron mobility transistors to quantum cascade lasers ; however , only a very small percentage of these possess the potential to be used for 5 g communication systems . by 2020 , 11.6 billion mobile - connected devices are expected to be in use . if 5 g systems are to acquire high market penetration rates , the respective ue and network devices must be robust , lightweight , highly integrated and , most importantly , low - cost . taking into account the technologies which are currently used to manufacture the hardware of virtually all mainstream communication devices , and after a review of the currently available thz device technologies , silicon ( si ) and complementary metal - oxide - semiconductor ( cmos ) appear as the only viable candidates for 5 g , despite their shortcomings in practically all electronic performance criteria . when compared to the iii - v semiconductor compounds , si has worse material properties than many . the lower electron mobility , smaller energy band gap and higher resistivity of si result in devices with inferior figures of merit . cmos , likewise , has poorer transistor and passive performances than corresponding components produced using iii - v compound processes . however , there are plenty of solid reasons that have caused si and cmos to dominate the global semiconductor market , and with the current development rate in the corresponding areas , their places look secure . si , first of all , is vastly available all around the world and its purification is simple .
mechanical characteristics of si make it a sturdy material , thus easy to manufacture and handle .si also has high thermal conductivity enabling efficient thermal management of devices .it is simple to form insulators with exceptional dielectric properties like silicon dioxide that are used as cmos transistor gates among many other functions .the doping concentration of si has a very high range and with the already established manufacturing capacity and continued demand , low - cost production is ensured . on the other hand , since cmos technology is essentially the same for all devices regardless of the frequency of operation , cmos s intrinsic advantages like integration of higher frequency circuits with baseband circuitry , digital calibration for better performance , high yield and built - in self test also holds true for the thz range devices . considering the may 2014 start of tg3d for 300 ghz standardization , it will be safe to say we are at least a decade away from commercial low - thz band products . nevertheless , research activities on both circuitry and communication domains are starting to pick up speed . emerging thz band applications which also develop si cmos technology include imaging , sensors and chip interconnection .artificial dielectric layers are proposed to improve performance of on - chip antennas radiating at low - thz band .however , in line with the subject of the paper , in the following subsections a complete survey is presented on the state - of - the - art si cmos thz circuit blocks and modules designed for communication purposes .moreover , devices whose operation frequencies are contained by the first three transmission windows within the low - thz band , which , approximately , range from 275 to 420 ghz , are selected in order to demonstrate the potential for 5 g in the thz region .this limitation imposed the exclusion of a number of notable si cmos studies that are just outside the frequency range .thz signal source fabrication using cmos is probably the most difficult field of the thz electronics research and only recently have implementations with acceptable power , high stability and frequency tuning been published .depending on the power gain cutoff frequency ( ) of the transistors that are used , low - thz band can be reached through either frequency multiplication or direct generation . if the of a device is large enough for the intended thz application , direct generation is commonly preferred since power efficiency is better and smaller chip areais needed compared to frequency multiplication . however , especially for si cmos devices , this is mostly not the case . therefore , signal sources are specifically designed to efficiently generate power at the harmonic frequencies of built - in non - linear diodes , so that appropriate harmonics of the fundamental frequency ( ) can be output .one si cmos source employing triple - push architecture is reported in .an n - push oscillator consists of n coupled oscillators which use a shared resonator and output phase - shifted signals . when these signals are combined , n^th^ harmonic components are constructively added , whereas the rest , in theory , are negated . while this method is useful for higher frequency generation , discontinuous tuning is observed in the event of uneven phase - shift . 
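the cancellation at the heart of the n - push principle is easy to reproduce numerically . the sketch below sums n phase - staggered copies of a generic nonlinear periodic waveform ( $e^{\sin}$ is an arbitrary stand - in , not the waveform of any reported device ) and inspects the spectrum : only harmonics that are multiples of n survive , which is exactly why a triple - push core is read out at its third harmonic .

```python
import numpy as np

def n_push_demo(n_cores=3, n_harm=9, samples=4096):
    # sum n phase-staggered copies of one period of a nonlinear waveform;
    # in the combined output only harmonics divisible by n_cores survive
    t = np.linspace(0.0, 1.0, samples, endpoint=False)
    wave = lambda tt: np.exp(np.sin(2.0 * np.pi * tt))  # arbitrary nonlinear stand-in
    combined = sum(wave(t - k / n_cores) for k in range(n_cores))
    spec = np.abs(np.fft.rfft(combined)) / samples
    for h in range(1, n_harm + 1):
        tag = "kept" if h % n_cores == 0 else "cancelled"
        print(f"harmonic {h}: amplitude {spec[h]:.2e} ({tag})")

n_push_demo()  # harmonics 3, 6, 9 remain; 1, 2, 4, 5, 7, 8 vanish to float noise
```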
in , two of the same triple - push oscillator cores are locked by magnetic coupling , and the power is conveyed to the differential ring antenna through a matching stage . the device is realized in a 65-nm cmos process over a 500 × 570 µm^2^ die area , with the oscillators occupying 120 × 150 µm^2^ of the total . the output frequency is measured to be tunable from 284 to 288 ghz by reducing the supply voltage from 1.4 to 0.7 volts ( v ) , and the source can generate a peak output power of -1.5 dbm at the upper limit of the tuning range by consuming 275 milliwatts ( mw ) of dc power . the circuitry was also packaged with a si lens on an fr-4 board , but since the aim was demonstration and did not involve original design , that part is omitted . another novel cmos source that is implemented in a 65-nm low - power bulk cmos process is presented in . for frequency tuning , placing varactors inside the lc resonator is a common practice that is shown to work satisfactorily up to 0.1 thz . however , at higher frequencies varactor performance degrades . the significance of the design of originates from eliminating varactors from the voltage - controlled oscillator ( vco ) , while still delivering a frequency tunable source with high output power which functions at the beginning of the submm - wave band . by adding phase shifters to the proposed four - core coupled oscillator system , the locking frequency of the vco is made adjustable in accordance with the phase shifts between each core and the respective injected signal . even though the provided simulation result illustrated that the third harmonic generates higher current around 300 ghz , the fourth harmonic is chosen , also for the symmetry it brings . one of the two vcos that are fabricated for the study radiates a peak output power of -1.19 dbm across the 13 ghz tuning range centred at 290 ghz , thereby achieving the highest output power and tunability of all the oscillators available in the literature which operate in and beyond the low - thz band , even including ones using compound semiconductor technologies . the dc - to - rf conversion efficiency stands at 0.23% due to the 325 mw dc power input , and the chip is printed on an area of 600 × 600 µm^2^ .

not just sources but also complete txs are being developed for thz frequencies . the latest such device is a phased array that is expanded over the delay - coupled oscillator method by the authors of . the idea of controlling the oscillator frequency through the phase shift between adjacent cores works on a one - dimensional ring . to extend this effect over two dimensions , in a 2 × 2 central loop is connected to four other similar loops only through one of its vertices , creating a 4 × 4 coupled array . adjacent nodes are situated at a fixed distance that equals half of the radiation wavelength , and the oscillators are linked with phase shifters . patch antennas are used for radiation to prevent substrate coupling . a sample , likewise , is manufactured in a 65-nm bulk cmos process over an area of 1.95 × 2 mm^2^ . the peak total radiated power is measured at 338 ghz as 0.8 mw , or -0.97 dbm , and the equivalent isotropically radiated power ( eirp ) as 51 mw , or 17.1 dbm , using 1.54 w .
12 db of the 18 db antenna directivity is due to the array gain , and the rest to the patch antenna directivity . beam steering is feasible across 45° in azimuth and 50° in elevation . centre frequency tuning measurements are performed between 337 and 339 ghz . however , in the paper 2.1% tuning is claimed to be possible via altering the coupler supply voltage , which results in a 7.1 ghz spectrum around 338 ghz . one other architecture is tunable at second harmonic frequencies that are between 276 and 285 ghz . a distributed active radiator ( dar ) and an inverse design approach lie at the core of the design . typically , power generation and radiation are implemented by different circuit blocks . however , in , surface currents on the si chip metal layers are first calculated for a specific em field , and then synthesized using a dar , which is made of four cross - coupled transistor pairs located symmetrically along two loops that are shaped into a möbius strip . this way , the second harmonic signal is radiated , while the fundamental and other harmonic signals are filtered . the implementation consists of 16 dar cores in a 4 × 4 array , and it is realized in a 45-nm cmos silicon - on - insulator process . the output of the center vco , which is tunable from 91.8 to 96.5 ghz using a 1.1 v supply voltage , is distributed to four separate divide - by - two frequency dividers that generate quadrature in - phase ( i ) and quadrature ( q ) signals . the signal is then transferred through phase rotators and frequency triplers to drive the dars . the resulting circuit , which has a chip area of 2.7 × 2.7 mm^2^ , is capable of beam steering nearly 80° in both planes and provides an eirp of 9.4 dbm at 280.8 ghz with the help of a 16 dbi maximum directivity . the final integrated transceiver ( trx ) model is included to provide a trx example , even though the device is implemented in a 130-nm silicon - germanium bipolar cmos process . moreover , its 367 to 382 ghz working range is around an atmospheric attenuation local maximum , thus making the device unsuitable for communication purposes . the trx design is a homodyne frequency - modulated continuous - wave radar using a triangular modulation signal . a differential colpitts vco generates the fundamental signal at 92.7 ghz with an 8.3% tuning radius , which is followed by drive amplifiers . inside the tx , initially , balanced quadrature i and q signals are coupled through transformers to the frequency quadrupler . two push - push pairs compose the quadrupler , which outputs the fourth harmonic frequency , and separate patch antennas , each containing two patches , radiate and receive the signal .
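the decibel bookkeeping behind the powers , efficiencies and eirp values quoted for the sources surveyed above is compact enough to verify in a few lines . the sketch below reproduces the 17.1 dbm eirp of the 338 ghz array , the 0.23% dc - to - rf efficiency of the 290 ghz vco , and the radiated power implied by the 9.4 dbm eirp and 16 dbi directivity of the dar array ; all input figures are the ones quoted in the text .

```python
import math

dbm_to_mw = lambda p: 10 ** (p / 10)          # power in mw from dbm
mw_to_dbm = lambda p: 10 * math.log10(p)      # dbm from power in mw

# 338 ghz phased array: 0.8 mw radiated + 18 db directivity
eirp_dbm = mw_to_dbm(0.8) + 18
print(f"eirp = {eirp_dbm:.2f} dbm ({dbm_to_mw(eirp_dbm):.1f} mw)")  # ~17.1 dbm, ~51 mw

# 290 ghz vco: -1.19 dbm rf output from 325 mw of dc power
print(f"dc-to-rf efficiency = {dbm_to_mw(-1.19) / 325 * 100:.2f} %")  # ~0.23 %

# 280.8 ghz dar array: 9.4 dbm eirp with 16 dbi maximum directivity
print(f"implied radiated power = {dbm_to_mw(9.4 - 16):.2f} mw")       # ~0.22 mw
```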
on the rx side a subharmonic mixer , driven by second harmonic quadrature i and q signals , down - converts to intermediate frequency ( if ) , before the concluding if amplifier stage .the tx translates 3 dbm vco output power into -14 to -11 dbm eirp , the rx noise figure is assessed to be between 35 and 38 db , and the entire trx consumes 380 mw power above a total space of 2.2 1.9 mm^2^.investigations on low - thz band for wireless communication purposes began nearly a decade ago .consequently , the research area is only emergent with several problems which need to be solved before real world deployments .the foremost of these can be described as follows : * * realistic channel models : * as previously explained , for low - thz band there is only one channel model candidate that incorporates serious inadequacies .multiple new models which target potential wpan and wlan use cases should be devised based on actual channel measurements performed in various environments including outdoors , where the effects of meteorological phenomena also ought to be accounted for .in addition to providing a complete and accurate elucidation of the small- and large - scale space - time - frequency characteristics of the channel , the models need to be adaptable to analytical and empirical analyses for higher communication layer research subjects . * * multiple channel access : * the very high data rate and zero - latency requirements from the low - thz band for iot applications are achievable through multihop communication over dense ad hoc networks , and the multiple access scheme should be compliant .multiple , hybrid and random access methods need to be investigated for varying channel bandwidth , equipment density and usage models . while intersymbol interference is probably insignificant since multipath is confined in the thz band , results of multiple access interference ought to be examined for different access techniques .* * baseband signal processing : * for successful low - thz band system realizations , trx design and communication methods are equally important . however , baseband processing of a 40/100 gb / s wireless link solely in the digital domain is a major challenge in its own right .analogue implementations of various trx components should be investigated as mixed - signal baseband operation can theoretically result in simpler , faster and more energy efficient circuitries .another research area is parallel data transmission using ofdm or parallel sequence spread spectrum , because utilizing multiple streams would lower the performance requirements on the baseband processor in return for increased number of circuit elements . * * multiple antenna systems : * since the effective area of an antenna is inversely proportional to the square of signal frequency , advanced multiple antenna methods are both critical in augmenting the link budget of and practicable for low - thz band iot services .beam forming and steering algorithms for phased array antennas are among the prioritized investigation directions as those will aid the problems of device discovery and propagation in susceptible thz channels .massive mimo also necessitates further research to resolve predicaments such as pilot contamination and reciprocity calibration prior to becoming an enabling technology . 
* * access network architecture : * additional losses intrinsic to the thz band limit the range of individual devices .however , network densification which accompanies iot and expanding universal internet access present new design opportunities for uninterrupted connectivity .last - mile network architectures should be capable of supporting the expected escalating capacity needs of each small cell , where wavelength - division multiplexing passive optical networks can become the bottleneck .moreover , short - range communication techniques for the low - thz band need to be researched as complementary device - to - device links are effective in avoiding brief shadowing events like human blockage .with no end of wireless communication upsurge in sight , neither throughput nor data rate requirements of 5 g systems can be provided by just concentrating on spectral efficiency solutions . due to the constantly improving low - cost device technologies , the first transmission window within the thz band is no longer out of reach of widespread communication systems . as the thz standardization activities for wpans recently progressed to second stage ,research efforts in the area are to intensify even more . while the initially arising difficulties for low - thz indoor access network architecture can be resolved using the already laid out concepts , the proposed frequency leap is not small and the work has just begun .realistic channel models , optimum multiple access algorithms and si cmos devices all separately yet simultaneously need development . however , the question is not the possibility of commonplace low - thz band communications , but the timetable .this work was supported in part by the scientific and technological research council of turkey ( tubitak ) under grant # 113e962 .z. xu , x. dong , and j. bornemann , `` design of a reconfigurable mimo system for thz communications based on graphene antennas , '' _ ieee transactions on terahertz science and technology _ , vol . 4 , no . 5 , pp . 609617 , 2014 .k. m. s. huq , s. mumtaz , j. rodriguez , and r. l. aguiar , `` comparison of energy - efficiency in bits per joule on different downlink comp techniques , '' in _ communications ( icc ) , 2012 ieee international conference on _ , conference proceedings , pp . 57165720 .d. jeremic and j. y. k. aulin , `` compressive sensing aided determination of ofdm achievable rate , '' in _ global telecommunications conference ( globecom 2011 ) , 2011 ieee _ , conference proceedings , pp .p. boronin , v. petrov , d. moltchanov , y. koucheryavy , and j. m. jornet , `` capacity and throughput analysis of nanoscale machine communication through transparency windows in the terahertz band , '' _ nano communication networks _ ,vol . 5 , no . 3 , pp .7282 , 2014 .`` ieee standard for information technology - telecommunications and information exchange between systems - local and metropolitan area networks - specific requirements .part 15.3 : wireless medium access control ( mac ) and physical layer ( phy ) specifications for high rate wireless personal area networks ( wpans ) amendment 2 : millimeter - wave - based alternative physical layer extension , '' _ ieee std 802.15.3c-2009 ( amendment to ieee std 802.15.3 - 2003 ) _ , pp . 
c1187 , 2009 .`` ieee standard for information technology telecommunications and information exchange between systems local and metropolitan area networks specific requirements - part 11 : wireless lan medium access control ( mac ) and physical layer ( phy ) specifications amendment 3 : enhancements for very high throughput in the 60 ghz band , '' _ ieee std 802.11ad-2012 ( amendment to ieee std 802.11 - 2012 , as amended by ieee std 802.11ae-2012 and ieee std 802.11aa-2012 ) _ , pp . 1628 , 2012 .k. guan , b. ai , a. fricke , d. he , z. zhong , d. w. matolak , and t. krner , `` excess propagation loss of semi - closed obstacles for inter / intra - device communications in the millimeter - wave range , '' _ journal of infrared , millimeter , and terahertz waves _ , pp . 115 , 2016 .r. piesiewicz , c. jansen , s. wietzke , d. mittleman , m. koch , and t. kurner , `` properties of building and plastic materials in the thz range , '' _ international journal of infrared and millimeter waves _28 , no . 5 ,pp . 363371 , 2007 .r. piesiewicz , c. jansen , d. mittleman , t. kleine - ostmann , m. koch , and t. kurner , `` scattering analysis for the modeling of thz communication systems , '' _ antennas and propagation , ieee transactions on _ , vol .55 , no . 11 , pp . 30023009 , 2007 .c. han , a. o. bicen , and i. f. akyildiz , `` multi - ray channel modeling and wideband characterization for wireless communications in the terahertz band , '' _ wireless communications , ieee transactions on _ , vol . 14 , no . 5 , pp .24022412 , 2015 . m. uzunkol , o. d. gurbuz , f. golcuk , and g. m. rebeiz , `` a 0.32 thz sige 4x4 imaging array using high - efficiency on - chip antennas , '' _ solid - state circuits , ieee journal of _ , vol . 48 , no . 9 , pp .20562066 , 2013 .w. h. syed , g. fiorentino , d. cavallo , m. spirito , p. m. sarro , and a. neto , `` design , fabrication , and measurements of a 0.3 thz on - chip double slot antenna enhanced by artificial dielectrics , '' _ terahertz science and technology , ieee transactions on _ , vol .5 , no . 2 ,pp . 288298 , 2015 .j. grzyb , z. yan , and u. r. pfeiffer , `` a 288-ghz lens - integrated balanced triple - push source in a 65-nm cmos technology , '' _ solid - state circuits , ieee journal of _ , vol .48 , no . 7 , pp . 17511761 , 2013 .u. l. rohde , a. k. poddar , j. schoepf , r. rebel , and p. patel , `` low noise low cost ultra wideband n - push vco , '' in _ microwave symposium digest , 2005 ieee mtt - s international _ , conference proceedings , p. 4 pp .y. m. tousi , o. momeni , and e. afshari , `` a novel cmos high - power terahertz vco based on coupled oscillators : theory and implementation , '' _ solid - state circuits , ieee journal of _ , vol .47 , no . 12 ,pp . 30323042 , 2012 .k. shinwon , c. jun - chau , and a. m. niknejad , `` a w - band low - noise pll with a fundamental vco in sige for millimeter - wave applications , '' _ microwave theory and techniques , ieee transactions on _ , vol .62 , no .23902404 , 2014 .k. sengupta and a. hajimiri , `` a 0.28 thz power - generation and beam - steering array in cmos based on distributed active radiators , '' _ solid - state circuits , ieee journal of _ , vol .47 , no . 12 , pp .30133031 , 2012 . p. jung - dong , k. shinwon , and a. m. niknejad , `` a 0.38 thz fully integrated transceiver utilizing a quadrature push - push harmonic circuitry in sige bicmos , '' _ solid - state circuits , ieee journal of _ , vol .47 , no .10 , pp . 
23442354 , 2012 .turker yilmaz ( s13 ) received b.s .and msc degrees in electrical and electronics engineering from the bogazici university and university college london in 2008 and 2009 , respectively .he is currently a research assistant at the next - generation and wireless communications laboratory and pursuing his ph.d .degree within the department of electrical and electronics engineering , koc university , istanbul , turkey .his current research interests include terahertz communications and internet of things .ozgur baris akan ( m00-sm07-f16 ) received ph.d .degree in electrical and computer engineering from the broadband and wireless networking laboratory , school of electrical and computer engineering , georgia institute of technology , atlanta , in 2004 .he is currently a full professor with the department of electrical and electronics engineering , koc university , the director of the graduate school of sciences and engineering , koc university , and the director of the next - generation and wireless communications laboratory ( nwcl ) .his current research interests are in nanoscale , molecular communications , next - generation wireless communications , internet of things , 5 g mobile networks , sensor networks , distributed social sensing , satellite and space communications , signal processing , and information theory .he is an associate editor for the ieee transactions on communications , ieee transactions on vehicular technology , and iet communications , and editor for the international journal of communication systems ( wiley ) , european transactions on telecommunications , and nano communication networks journal ( elsevier ) .
|
initiation of fourth generation ( 4 g ) mobile telecommunication system rollouts fires the starting pistol for beyond - 4 g research activities . whereas technologies enhancing spectral efficiency have traditionally been the answer to demands for higher data rates and network capacity , the existing techniques are now so advanced that the returns of ever more complicated algorithms hardly justify the added complexity any longer . in addition , the surging number of connected devices now enables operative use of short - range communication methods . also considering the recently approved standards for the 60 gigahertz ( ghz ) industrial , scientific and medical radio band , in this paper the transmission window around 300 ghz is proposed for use in fifth generation wireless communication systems . motivations for the low end of the terahertz ( thz ) band are provided in accordance with market trends , and standardization activities at higher frequencies are listed . unified mathematical expressions of the electromagnetic wave propagation mechanisms in the low - thz band are presented . thz band device technologies are outlined , and a complete survey of state - of - the - art low - thz band circuit blocks suitable for mass - market production is given . future research directions are specified before the conclusion of the paper .

5 g mobile communication , submillimeter wave communication , submillimeter wave propagation , submillimeter wave technology , submillimeter wave circuits , communication standards .
|
increased interest in the analysis of coherent measures is motivated by their application as mathematical models of risk quantification in finance and other areas .this line of research leads to new mathematical problems in convex analysis , optimization and statistics .the uncertainty in risk assessment is expressed mathematically as a functional of random variable , which may be nonlinear with respect to the probability measure .most frequently , the risk measures of interest in practice arise when we evaluate gains or losses depending on the choice , which represents the control of a decision maker and random quantities , which may be summarized in a random vector .more precisely , we are interested in the functional , which may be optimized under practically relevant restrictions on the decisions . most frequently , some moments of the random variable are evaluated . however , when models of risk are used , the existing theory of statistical estimation is not always applicable .our goal is to address the question of statistical estimation of composite functionals depending on random vectors and their moments .additionally , we analyse the optimal values of such functionals , when they depend on finite - dimensional decisions within a deterministic compact set . the known coherent measures of risk can be cast in the structures considered here and we shall specialize our results to several classes of popular risk measures .we emphasize however , that the results address composite functionals of more general structure with a potentially wider applicability .axiomatic definition of risk measures was first proposed in .the currently accepted definition of a coherent risk measure was introduced in for finite probability spaces and was further extended to more general spaces in .given a probability space , we consider the set of random variables , defined on it , which have finite -th moments and denote it by .a _ coherent measure of risk _ is a convex , monotonically increasing , and positively homogeneous functional , which satisfies the translation equivariant property for all . here and we assume that represent losses , i.e. , smaller realizations are preferred .related concepts are introduced in . a measure of riskis called _ law - invariant _ , if it depends only on the distribution of the random variable , i.e. , if for all random variables having the same distribution . a practically relevant law - invariant coherent measure of risk is the _mean semideviation _ of order ( see , ( * ? ? ?* s. 6.2.2 ) ) , defined in the following way : + \kappa \big\| ( x-\e[x])_+\big\|_p = \e[x ] + \kappa \big[\e\big[\big(\max\{0,x- \e[x]\}\big)^p\big]\big]^{\frac{1}{p}},\ ] ] where ] ( see ) , which is defined as follows : \bigg\}.\ ] ] here , denotes the distribution function of .the reader may consult , for example , ( * ? ? ?* chapter 6 ) and the references therein , for more detailed discussion of these risk measures and their representation .the risk measure plays a fundamental role as a building block in the description of every law - invariant coherent risk measure via the _ kusuoka representation_. 
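for later reference , the two risk measures just introduced can be written compactly ; this is a restatement of the definitions above , under the convention that $x$ represents losses , and the minimization form of avar is one standard equivalent representation :

```latex
% mean semideviation of order p >= 1, with kappa in [0, 1]:
\rho_{\kappa}[X] \;=\; \mathbb{E}[X]
  \;+\; \kappa \,\Big( \mathbb{E}\big[ \big( \max\{ 0 ,\, X - \mathbb{E}[X] \} \big)^{p} \big] \Big)^{1/p} .

% average value at risk at level alpha in (0, 1],
% with F_X the distribution function of X:
\mathrm{AVaR}_{\alpha}(X)
  \;=\; \frac{1}{\alpha} \int_{1-\alpha}^{1} F_X^{-1}(t)\, dt
  \;=\; \min_{\eta \in \mathbb{R}}
        \Big\{ \eta + \tfrac{1}{\alpha}\, \mathbb{E}\big[ ( X - \eta )_{+} \big] \Big\} .
```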
the original result is presented in for risk measures defined on , with an atomless probability space .it states that for every law - invariant coherent risk measure , a convex set ] denotes the set of probability measures on the interval \xrightarrow{\,\raisebox{-0.2em}{}\;}\xrightarrow{\,\raisebox{-0.2em}{}\;} ] can be defined by choosing so that ; for example may be equal to the diameter of the support of raised to power .the space is and we take a direction . following , we calculate ^{p-1 } \big\}d_3 + d_2(\mu_3),\\ \xi_1(d ) & = \bar{f}'_{1}\big(\mu_2;\xi_{2}(d)\big ) + d_{1}\big(\mu_2\big ) = \frac{\kappa}{p}\mu_2^{\frac{1}{p } - 1}\xi_{2}(d ) + d_{1}\big(\mu_2\big).\end{aligned}\ ] ] we obtain the expression \}\big]^p \big\ } \big ) + { \ } \\\frac{\kappa}{p}\big ( \e \big\ { \big[\max\{0,x- \e[x]\}\big]^p \big\ } \big)^{\frac{1-p}{p } } \times \\ \big(w_2\big(\e[x]\big ) - p \e \big\ { \big[\max\{0,x- \e[x]\}\big]^{p-1 } \big\}w_3\big).\end{gathered}\ ] ] the covariance structure of the process can be determined from . the process has the constant covariance function : \\ = \int_{{\mathcal{x } } } \big [ f_1(\eta',x ) - \bar{f}_1(\eta')\big ] \big [ f_1(\eta'',x ) - \bar{f}_1(\eta'')\big ] \;p(dx ) = \text{var}[x].\end{gathered}\ ] ] it follows that has constant paths .the third coordinate , has variance equal to ] .therefore , and are , in fact , one normal random variable , which we denote by .observe that involves only the value of the process at ] and its covariance with can be calculated from in a similar way : = \e\big\{\big ( \big[\max\{0,x- \e[x]\}\big]^p - \e\big(\big[\max\{0,x- \e[x]\}\big]^p\big ) \big)^2 \big\},\\ & \text{cov}[v_2,v_1 ] = { \ } \\ & \qquad \e\big\{\big ( \big[\max\{0,x- \e[x]\}\big]^p - \e\big(\big[\max\{0,x- \e[x]\}\big]^p\big ) \big ) \big ( x-\e[x ] \big)\big\}.\end{aligned}\ ] ] formula becomes \}\big]^p \big\ } \big)^{\frac{1-p}{p } } \times \\ \big(v_2 - p \e \big\ { \big[\max\{0,x- \e[x]\}\big]^{p-1 } \big\}v_1\big).\end{gathered}\ ] ] we conclude that { \mathrel{\raisebox{-0.2ex}{\scriptstyle \mathcal{d}}}}\mathcal{n}(0,\sigma^2),\ ] ] where the variance can be calculated in a routine way as a variance of the right hand side of , by substituting the expressions for variances and covariances of , , and . following example [ e:2 ] , we could derive the limiting distribution of \xrightarrow{\,\raisebox{-0.2em}{}\;}\xrightarrow{\,\raisebox{-0.2em}{}\;}\xrightarrow{\,\raisebox{-0.2em}{}\;}\xrightarrow{\,\raisebox{-0.2em}{}\;}\xrightarrow{\,\raisebox{-0.2em}{}\;} ]. then \}\big]^p \big\} ] .also , ,\ ] ] and thus and have the same normal distribution and are perfectly correlated . the variance function of and its covariance with ( and ) can be calculated in a similar way : ) ] = \e\big\{\big ( \big[\max\{0,\varphi(\hat{u},x)- \e[\varphi(\hat{u},x)]\}\big]^p - \\\e\big(\big[\max\{0,\varphi(\hat{u},x)- \e[\varphi(\hat{u},x)]\}\big]^p\big ) \big ) \big ( \varphi(\hat{u},x)-\e[\varphi(\hat{u},x ) ] \big)\big\}.\end{gathered}\ ] ] we conclude that { \mathrel{\raisebox{-0.2ex}{\scriptstyle \mathcal{d}}}}\mathcal{n}(0,\sigma^2),\ ] ] where the variance can be calculated in a routine way as a variance of the right hand side of , by substituting the expressions for variances and covariances of , , and . this section we illustrate the convergence of some estimators discussed in this paper to the limiting normal distribution .many previously known results for the case have been investigated thoroughly in the literature ( see , e.g. 
, ) and we will not dwell upon these here .we will only illustrate the case about higher - order inverse risk measures as discussed in example 4 for the case more specifically , we take independent identically distributed observations from an independent identically distributed observations .we take and in that case numerical calculation in matlab delivers the theoretical argument minimum and the value of the risk in ( [ inverse ] ) being =15.5163.$ ] the standard deviation of the random variable in the right hand side of ( [ moreillustrate ] ) is 16.032 .the plug - in estimator of this risk can be represented as a solution of a convex optimization problem with convex constraints and hence a unique solution can be found by any package that solves such type of problems .we have used the package that can be operated within . denoting and putting all in a vector can rewrite our optimization problem as follows : the numerical solution to this optimization problem gives us the estimator to get an idea about the speed of convergence to the limiting distribution in ( [ toillustrate ] ) we simulate risk estimators for a given sample size and draw their histogram .the number of bins for the histogram is determined by the rough squared root of the sample size " rule .this histogram is superimposed to the density .as is increased , our theory suggests that the histogram and the normal density graph will look more and more similar in shape .their closeness indicates how quickly the central limit theorem pops up in this case .figure [ fig1 ] shows that the central limit theorem indeed represents a very good approximation which improves significantly with increasing sample size .the small downward bias that appears in figure 1 a ) is getting increasingly irrelevant with growing sample size .we have experimented with different values of such as and and we have also changed the value of ( respectively ) . the tendency shown in figure [ fig1 ] is largely upheld , however , as expected , the standard errors are increased when and/or is increased .also , the limiting normal approximation seems to be more accurate for the same sample sizes when a smaller value of is used .this discussed effect is illustrated on figure [ fig3 ] where ( i.e. , the case of avar ) , ( where a different sample in comparison to the sample in figure [ fig1 ] , ) and was simulated ) .the remaining quantities have been kept fixed to and we stress that increasing the sample size in figure [ fig3 ] d ) makes the histogram look much more like the limiting normal curve so that the discrepancy observed there is indeed just due to the limiting approximation popping up at larger samples when is increased .we also experimented with different distributions for the random variable we took specifically -distributions with degrees of freedom such as 4 , 6 , 8 and 60 , shifted to have the same mean of 10 like in the normal simulated data .the results of this comparison for and are shown in figure 2 .the variances of the -distributed variables , being equal to are finite and even smaller than the variance of the normal random variable in figure [ fig1 ] .however the heavier tails of the distribution adversely affect the quality of the approximation . 
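the constrained program described above can be collapsed into a one - dimensional convex minimization over the scalar $\eta$ ; the following is a minimal runnable sketch of the plug - in estimator of the higher - order inverse risk measure , with python / scipy chosen for illustration ( the distribution parameters , $\alpha$ and $p$ below are assumed values , not those used for the figures , which were computed in matlab ) :

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def higher_order_risk(sample, alpha=0.5, p=3.0):
    """plug-in estimator of min_eta { eta + (1/alpha) * || (X - eta)_+ ||_p },
    a one-dimensional reduction of the constrained program in the text."""
    c = 1.0 / alpha                      # c > 1 guarantees an interior minimizer
    def objective(eta):
        excess = np.maximum(sample - eta, 0.0)
        return eta + c * np.mean(excess ** p) ** (1.0 / p)
    # the objective is convex in eta, and for c > 1 its minimizer lies
    # inside the range of the data, so a bounded scalar search suffices
    res = minimize_scalar(objective, bounds=(sample.min(), sample.max()),
                          method="bounded")
    return res.x, res.fun

# illustration only: losses with mean 10, as in the simulation section;
# the scale, alpha and p are assumptions, not the paper's values
sample = rng.normal(loc=10.0, scale=4.0, size=4000)
eta_hat, rho_hat = higher_order_risk(sample)
print(eta_hat, rho_hat)
```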
despite the fact that the limiting distribution of the risk estimator is still normal when and the heavy tailed data cause the normal approximation to be relatively poor even at the case is closer to normal distribution and hence the approximation works better in this case .note that the limiting distribution when involves the fourth moment of the distribution and this moment is finite for and but is infinite when as a result , it can be seen from figure [ fig2 ] d ) that the normal approximation collapses in this case .also , figure [ fig2 ] shows that for attaining similar quality in kolmogorov metric for the asymptotic approximation like in the case of normally distributed in figure 1 c ) , much bigger samples are needed .for the fixed sample size of 4000 , the quality of the normal approximation worsens as decreases from 60 to 8 and then to 6 .furthermore , and outside of the scope of the present paper , we note that if the distribution of has even heavier tails than the distribution with ( for example , if it is in the class of stable distributions with stability parameter in the range ( 1,2 ) ) then the limiting distribution of the risk may not be normal at all .the infinity dimensional delta method is a standard statistical technique to evaluate the asymptotic distribution of estimators of statistical functionals .the applicability of the procedure hinges on veryfing smoothness conditions of the related functionals .motivated primarily by the need to estimate coherent risk measures we introduce a general composite structure for such functionals in in which all known coherent risk measures can be cast .the potential applicability of our central limit theorems however extends beyond functionals representing coherent risk measures .our short simulation study indicates that the central limit theorem - type approximations are very accurate when the sample size is large , is in reasonable limits between 1 and 3 and the distribution of is with not too heavy tails .we note that for smaller sample sizes , the technique of concentration inequalities may be more powerful and accurate when evaluating the closeness of the approximation .it is possible to derive concentration inequalities for estimators of statistical functionals with the structure that has been introduced in our paper .this is a subject of ongoing research .the first author was partially supported by the nsf grant dms-1311978 .the second author was partially supported by a research grant ps27205 of the university of new south wales .the third author was partially supported by the nsf grant dms-1312016 .99 artzner , p. , delbaen , f. , eber , j .-m . , and heath d. ( 1999 ) coherent measures of risk , _ mathematical finance _ , 9 , 203228 .belomestny , d. and krtschmer , v. ( 2012 ) central limit theorems for law - invariant coherent risk measures , _ journal of applied probabability _ , 49 ( 1 ) , 1-21 .ben - tal , a. , teboulle , m. ( 2007 ) an old - new concept of risk measures : the optimized certainty equivalent ._ mathematical finance _, 17 , 3 , 449 - 476 .beutner , e. and zhle , h. , ( 2010 ) a modified functional delta method and its application to the estimation of risk functionals . _ journal of multivariate analysis _ ,101 ( 10 ) , 24522463 .bonnans , j. f. and shapiro , a. ( 2000 ) _ perturbation analysis of optimization problems _ , springer , new york .brazauskas , v. , jones , b.l . ,puri , m.l . , and zitikis , r. 
( 2008 ) estimating conditional tail expectation with actuarial applications in view ._ journal of statistical planning and inference _ 138 ( 11 ) , 35903604 .cheridito , p. and li , t. h. ( 2009 ) risk measures on orlicz hearts , _ mathematical finance _, 19 , 189214 .dentcheva , d. and penev , s. ( 2010 ) shape - restricted inference for lorenz curves using duality theory , _ statistics & probability letters _ , 80 , 403412 .dentcheva , d. , penev , s. , and a. ruszczyski ( 2010 ) kusuoka representation of higher order dual risk measures ._ annals of operations research_. 181 , 325335 .dentcheva , d , and a. ruszczyski , ( 2014 ) risk preferences on the space of quantile , _ mathematical programming _ , 148 ( 12 ) , 181200 .dentcheva , d. , g. j. stock , g. j. and rekeda , l. , mean - risk tests of stochastic dominance , statistics & decisions 28 ( 2011 ) 97 - 118 .fllmer , h. and schied , a. ( 2002 ) , convex measures of risk and trading constraints , _ finance and stochastics _ , 6 , 429447 .fllmer , h. , and a. schied ( 2011 ) , _ stochastic finance . an introduction in discrete time _ ,third edition , de gruyter , berlin .frittelli , m. and rosazza gianin , e. ( 2005 ) .law invariant convex risk measures . in _ advances in mathematical economics .volume 7 _ , volume 7 of _ adv . math . econ ._ , pages 3346 .springer , tokyo .glten , s. and ruszczyski , a. ( 2014 ) , two - stage portfolio optimization with higher - order conditional measures of risk , _ submitted for publication_. jones , b. l. and zitikis , r. ( 2003 ) empirical estimation of risk measures and related quantities , _ north american actuarial journal _ 7 ( 4 ) , 4454 .jones , b. l. and zitikis , r. ( 2007 ) , risk measures , distortion parameters , and their empirical estimation ._ insurance : mathematics and economics _ 41 ( 2 ) , 279297 .kijima , m. , ohnishi , m. ( 1993 ) mean risk analysis of risk aversion and wealth effects on optimal portfolios with multiple investment possibilities , _ annals of operations research _ , 45 , 147163 .krokhmal , p. ( 2007 ) higher moment coherent risk measures , _ quantitative finance _ 7 373 - 387 .kusuoka , s. ( 2001 ) on law invariant coherent risk measures , adv .econ . , 3 , 8395 .markowitz , h. m. ( 1952 ) portfolio selection , _ journal of finance _ , 7 , 7791 .markowitz , h. m. ( 1987 ) mean variance analysis in portfolio choice and capital markets , blackwell , oxford , 1987 .matmoura , y. and penev , s. ( 2013 ) multistage optimization of option portfolio using higher order coherent risk measures ._ european journal fo operational research _ , 227 , 190198 .ogryczak , w. and ruszczyski , a. ( 1999 ) from stochastic dominance to mean - risk models : semideviations and risk measures , _european journal of operational research _, 116 , 3350 .( 2001 ) on consistency of stochastic dominance and mean semideviation models , _ mathematical programming , _ 89 , 217232 .ogryczak , w. , ruszczyski , a. ( 2002 ) , dual stochastic dominance and related mean - risk models ._ siam j. optim ._ , 13 , 1 , 60 - 78 .pflug , g. and rmisch , w. ( 2007 ) modeling , measuring and managing risk .world scientific .pflug , g. and wozabal , n. , ( 2010 ) asymptotic distribution of law - invariant risk functionals . _ finance and stochastics _, 14 , 397 - 418 .rockafellar , r. t. 
( 1974 ) conjugate duality and optimization , cbms - nsf regional conference series in applied mathematics 16 siam , philadelphia .( 2002 ) conditional value - at - risk for general loss distributions ._ journal of banking and finance _ , 26 , 14431471 .rockafellar , r. t. , uryasev , s. , zabarankin , m. ( 2006 ) generalized deviations in risk analysis , _ finance and stochastics _ , 10 , 5174 .rmisch , w. ( 2005a ) stability of stochastic programming problems , in : _ stochastic programming _ , a. ruszczynski , a. shapiro ( eds . ) , elsevier , amsterdam . rmisch , w. ( 2005 ) delta method , infinite dimensional , _ encyclopedia of statistical sciences _( s. kotz , c.b .read , n. balakrishnan , b. vidakovic eds . ) , second edition , wiley .ruszczyski , a. and shapiro , a. ( 2006 ) optimization of convex risk functions , _ mathematics of operations research _ , 31 , 433452 .stoyanov , s. , racheva - iotova , b. , rachev , s. and fabozzi , f. ( 2010 ) stochastic models for risk estimation in volatile markets : a survey , _ annals of operations research _ , 176 , 293309 .shapiro , a. , dentcheva , d. and ruszczyski , a. ( 2009 ) _ lectures on stochastic programming : modeling and theory _ , siam publications , philadelphia .tsukahara , h. ( 2013 ) estimation of distortion risk measures , _ journal of financial econometrics _, 12 ( 1 ) , 213235 .van der vaart , a. w. ( 1998 ) _ asymptotic statistics _ , cambridge university press , cambridge .
|
we address the statistical estimation of composite functionals which may be nonlinear in the probability measure . our study is motivated by the need to estimate coherent measures of risk , which are increasingly popular in finance , insurance , and other areas associated with optimization under uncertainty and risk . we establish central limit formulae for composite risk functionals . furthermore , we discuss the asymptotic behavior of optimization problems whose objectives are composite risk functionals , and we establish a central limit formula for their optimal values when an estimator of the risk functional is used . while the mathematical structures accommodate commonly used coherent measures of risk , they have a more general character , which may be of independent interest .

_ keywords : _ risk measures , composite functionals , central limit theorem
|
john von neumann ( von neumann 1948 ) proposed a way to deal with the intractable problem of hydrodynamic turbulence : ( 1 ) use numerical simulation on computers to construct turbulent solutions of the hydrodynamic equations , ( 2 ) build intuition from the study of these solutions , and ( 3 ) construct analytic theory to describe them . he proposed that iterating this procedure could lead to a practical understanding of turbulent flow . the computer power available at that time was totally inadequate to compute hydrodynamics on sufficiently refined grids to produce turbulent flow ; numerical viscosity restricts the effective reynolds number . today , computing power is adequate for simulations of truly turbulent , three - dimensional ( 3d ) , time - dependent , compressible flows , so we have begun a program based upon von neumann 's proposal .

turbulent flow in its many guises ( e.g. , convection , overshooting , shear mixing , semi - convection , etc . ) is probably the weakest aspect of our theoretical description of stars ( and accretion disks ) . the full problem that faces us includes rotation , magnetic fields , and multi - fluids ( to account for compositional heterogeneity , diffusion , radiative levitation , and nuclear burning ) . in this paper we describe the progress made toward von neumann 's goal . we plan to replace the venerable mixing - length theory ( mlt ) with a physics - based mathematical theory which can be tested by refined simulations and terrestrial experiment ( e.g. , laboratory fluid experiments , meteorological and oceanographic observations ) . particularly relevant are high - energy density plasma ( hedp ) experiments , which can now access regions of temperature and density that overlap stellar conditions up to helium burning , and which deal with plasma and magnetic fields , just like star matter , not with an un - ionized fluid like air or water .

it is numerically convenient to replace convective mixing in a stellar evolution code by a diffusion algorithm , but this is not physically correct . the correct equation for the change of the composition $ x_i $ is $$ \frac{d x_i}{d t} = - \nabla \cdot ( x_i \mathbf{u} ) + r_i , $$ where the term on the left - hand side is the lagrangian time derivative of the composition in a comoving spherical shell , the first term on the right - hand side is the mixing due to rotation and turbulent velocities $ \mathbf{u} $ across the lagrangian shell boundary , and the last term is the composition change due to nuclear reactions which change species . thus $$ r_i = \sum_j \big ( \text{rate of creation of } i \text{ from } j \big ) - \sum_j \big ( \text{rate of destruction of } i \big ) , $$ where the terms on the right - hand side represent all the ways in which species can be made or destroyed . the advection operator involves a velocity field which is determined non - locally and a first - order spatial gradient . in stellar evolution codes this is replaced by $$ \frac{\partial x_i}{\partial t} = \nabla \cdot \big ( d \, \nabla x_i \big ) + r_i , $$ which has a second - order derivative in space and a phenomenological local diffusion coefficient $ d $ . except for contrived cases , the two operators are the same ( zero ) only in the limit that the composition is homogeneous .

we need a major community effort to base stellar mixing algorithms on physics , comparable to the efforts led by willy fowler for nuclear reaction rates , so that _ both _ the advection and reaction terms are reliable .

we have simulated turbulent flow resulting from shell oxygen burning in a presupernova star . because of the fast thermalization time ( unlike the solar convection zone , for example ) we can simulate the entire convective depth as well as the stable boundaries . this is a `` convection in a box '' approach , implicit large eddy simulation ( iles ) .
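as a toy illustration of the difference between the advection and diffusion operators discussed above ( an illustrative sketch only , not the production code used for the simulations ; grid sizes and coefficients are arbitrary ) , one can evolve the same step - like composition profile under one - dimensional advection and under diffusion :

```python
import numpy as np

n, dx, dt, steps = 200, 1.0, 0.2, 400
u, d = 1.0, 1.0                       # advection speed, diffusion coefficient
x = np.arange(n) * dx
comp = np.where(x < 50, 1.0, 0.0)     # step-like composition profile X_i

adv, dif = comp.copy(), comp.copy()
for _ in range(steps):
    # first-order upwind advection: dX/dt = -u dX/dx  (u > 0, CFL = 0.2)
    adv[1:] -= u * dt / dx * (adv[1:] - adv[:-1])
    # explicit diffusion: dX/dt = d d2X/dx2  (stable: d*dt/dx^2 = 0.2 <= 0.5)
    dif[1:-1] += d * dt / dx**2 * (dif[2:] - 2 * dif[1:-1] + dif[:-2])

print("advected interface near x =", x[np.argmin(np.abs(adv - 0.5))])
print("diffused interface near x =", x[np.argmin(np.abs(dif - 0.5))])
```

the printed interface locations differ : advection translates the front downstream , while diffusion leaves it centered and only smears it ; a homogeneous profile would be left unchanged by both , matching the statement above .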
using a monotonicity - preserving treatment of shocks ( like ppm ) ensures that the turbulent energy moves from large scales to small in a way close to that envisaged by kolmogorov . because the rate of the cascade of turbulent flow from large scales to small is set by the largest scales , there is no need to resolve the smallest scales , which are far below our grid resolution . this would not be the case if the nuclear burning time were shorter than the turnover time , instead of a thousand times longer . a detonation or deflagration , in which the turnover time is much longer than the reaction time , is a more difficult problem . the aspect ratio is chosen to be large enough so that it has little effect on the simulation . the initial state is mapped from a 1d model with sufficient care so that there is very little transient `` jitter '' . the convection develops from numerical roundoff noise or from low - amplitude seed noise . a quasi - steady state , in an average sense , develops in one turnover time , so that memory of initial errors is quickly lost .

the simulations show that this oxygen shell burning is unstable to nuclearly - energized pulsations ( primarily radial ) , which couple to the turbulent convective flow . the convective kinetic energy shows a series of pulses with an amplitude change of order a factor of two . these disappear if the burning is artificially turned off . two - dimensional simulations which include multiple burning shells show interactions between the shells ; 3d simulations of multiple shells are planned . neither the pulsations nor the interaction of burning shells has been included in any 1d progenitor models to date . another novel feature found in these simulations is entrainment at convection boundaries . the physics of the process is interesting ; it involves the erosion of a stably - stratified layer by a turbulent velocity field , mediated by nonlinear g - mode waves . while these simulations do not contain an entire star , and thus limit the accuracy of the description of low - order modes , whole - star simulations are developing enough resolution to exhibit turbulent flows . since we find that even modest resolution will give reliable average quantities ( see below ) , we expect the `` simulation step '' to soon be generalized and extended to include rotation and magnetic fields .

the pressure , density and velocity were subjected to a reynolds decomposition , in which average properties and fluctuating properties are separated . for example , for pressure , $ p = p_0 + p ' $ , so that averages give $ \langle p \rangle = p_0 $ and $ \langle p ' \rangle = 0 $ . note that in general we use two levels of averaging : one over solid angle ( the extent of our grid in the two angular directions ) and one over time ( two turnover times ) . the resulting averaged properties have a robust behavior that is insensitive to grid size , aspect ratio , and the extent of the averaging window ( provided it is large enough ; two turnover times and 60 degrees worked fine ) . the velocity scale is found to be well estimated by equating the increase in kinetic energy due to buoyant acceleration to the decrease due to turbulent damping in the kolmogorov cascade . this implies that it will be possible to make quantitatively correct estimates of wave generation and entrainment at convective boundaries . in the solar case , the velocity scale obtained this way is significantly larger than estimated by mlt , but agrees with both 3d atmospheres and empirical solar surface models . the flow becomes more asymmetric as the depth of a convection zone increases ( i.e. , the upflows are broader and slower ) , so that there is a non - zero flux of turbulent kinetic energy , and for deep convection zones ( depths large compared to the pressure scale height ) the turbulent energy flux is significant relative to the enthalpy flux and oppositely directed .

these insights are being implemented into an algorithm for stellar evolution . the idea is to use fully 3d phenomena , found in simulations and captured by analytic theory , by projecting them onto the 1d geometry used in stellar models . unlike mlt , the algorithm is nonlocal and time - dependent , not static . it should be applicable to deep , nearly adiabatic convection without modification . because it has some time dependence it should be useful for models of pulsating stars . it will replace `` overshooting '' and `` semi - convection '' because it uses the bulk richardson criterion for the extent of convection . because the turnover flow in the convection zone is averaged over , this algorithm is not limited by the corresponding courant condition , and is appropriate for stellar evolution over long time scales . we emphasize that failure is possible now that free parameters are being eliminated , so that inadequacies of the theory will be evident . the `` 321d '' approach merges naturally with work on 3d atmospheres ( see above ) and with work on accretion disks . these approaches all use mean - field equations , starting from the same general equations of mass , momentum and energy conservation for fluids , and use averaging to derive general properties . because of this , physical processes are not introduced in patchwork fashion , but as logical necessities of the conservation laws . insights into mhd in disks can spark insight into angular momentum transport in stars , and insights into stellar turbulence should do the same for accretion disk theory . as bohdan paczynski was fond of saying , `` accretion disks are just flat stars . ''

perhaps the greatest challenge for stellar evolution is the treatment of angular momentum transport . the rigid rotation of the sun 's radiative core , and the differential rotation of the convective envelope , inferred from helioseismology , seem to have been a surprise . if we wish to understand grbs and hypernovae , most workers seem to assume that a key role is played by rotation in the gravitational collapse and explosion ( an idea dating back to fred hoyle , at least ) . we expect to have little success if we extrapolate from the sun using algorithms that give the wrong qualitative behavior . the von neumann proposal , generalized to include rotation and magnetic fields , offers hope . figure 1 shows the results of a first step toward understanding that problem . our convection - in - a - box simulation is continued , but with the box being rotated around the polar axis . the initial rotation is rigid body , so that the specific angular momentum is quadratic in the radius . after a few turnover times , the result shown in figure 1 is obtained , in which the specific angular momentum tends toward a constant in the convection zone , while remaining rigid body outside . further , magnetic instabilities ( mri , etc .
) seem to cause radiative regions to tend toward rigid body rotation , even if they initially have some other rotation law .a perusal of the literature suggests that in stellar evolution , the opposite is often assumed .the von neumann proposal of using computation and theory together seems to work well for stellar turbulence , and promises to be of value for the more complex problem which includes rotation and magnetic fields .perhaps the best aspect of this approach is that it certainly will make new predictions of phenomena which hitherto have been essentially in the realm of observation only .
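the rotation result just described can be stated compactly ( a schematic summary of figure 1 , with $\omega_0$ the initial angular velocity ; the notation here is ours ) :

```latex
% initial state: rigid-body rotation, so the specific angular
% momentum grows quadratically with radius:
j_{\rm init}(r) \;=\; \Omega_0\, r^{2} .

% after a few turnover times, turbulent convection redistributes j:
j(r) \;\approx\; \mathrm{const} \quad \text{in the convection zone,}
\qquad
j(r) \;=\; \Omega_0\, r^{2} \quad \text{(rigid body) outside.}
```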
|
a program is outlined , and first results are described , in which fully three - dimensional , time - dependent simulations of hydrodynamic turbulence are used as a basis for theoretical investigation of the physics of turbulence . the inadequacy of treating turbulent convection as a diffusive process is indicated . a generalization to rotation and magnetohydrodynamics is sketched , as are connections to simulations of 3d stellar atmospheres .
|
rigorous derivations of heat conduction laws for mechanical particle models coupled to heat reservoirs remain a mathematical challenge .a variety of models have been introduced in the past ; nearly all of the proposed derivations of the fourier law are partial solutions based on unproven assumptions .developing proofs of these assumptions would require deep understanding of the properties of systems in non - equilibrium , i.e. , coupled to several unequal heat reservoirs .the standard assumptions include the existence of the unique invariant measure ( steady state ) as well as certain bounds on the rates of convergence of initial distributions to the invariant measure . for systems in equilibrium ,i.e. , when the temperatures of all the reservoirs are the same , the steady states can often be written down explicitly .the question of existence of non - equilibrium steady states has been open for practically all mechanical particle systems , by which we mean hamiltonian - like systems , driven by stochastic heat reservoirs .the main difficulty lies in dealing with the non - compactness of the phase space . for the systems under consideration, however , it is relatively easy to envision scenarios under which particles slow down ( freezing ) or speed up ( heating ) due to stochasticity of the heat reservoirs .this may push initial distributions towards zero or infinite energy levels and ultimately violate existence of physically relevant steady states .an example of freezing has been observed ( numerically ) in one of the proposed models . in that model, a particle acquires very low values of the speed once in a while under the evolution of the dynamics ( due to stochasticity ) .low values of the speed lead to long traveling times between collisions during which the particle has no influence on the evolution of the system .it is observed for the system in that more and more particles get stuck on the low energy states resulting in fewer and fewer collisions per unit time . to rule out such unfortunate scenarios one must be able to control the probabilities of particles acquiring low speeds and the rates at which the speeds recover to normal ranges .the dynamics of the mechanical particle systems driven by heat reservoirs may be viewed as a continuous - time markov process .harris ergodic theorem and its generalizations are common tools for obtaining existence and uniqueness of the steady states as well as ( exponential ) convergence of initial distributions to the steady state .in discrete time , the theorem requires two things : to produce a non - negative potential ( lyapunov function ) on the phase space which , on average , decreases as a power law under the push forwards of the dynamics , and , given such a , to show minorization or doeblin s condition on certain level set of .the first condition guarantees that the dynamics enters the center of the phase space , a certain level sets of , with good control on the rates ; and , once at the center , coupling is guaranteed by the minorization condition .ergodic theorem was applied in for a discretization of the original continuous - time markov process . 
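one common way to formalize the two discrete - time requirements just listed ( a standard schematic form , with symbols chosen here for illustration ) is the following :

```latex
% drift (lyapunov) condition: a potential V >= 1 decreasing on average
% as a power law under the one-step transition kernel P, off a set C:
PV(x) \;\le\; V(x) \;-\; c\, V(x)^{1-\gamma} \;+\; b\,\mathbf{1}_C(x),
\qquad \gamma \in (0,1), \;\; c, b > 0 .

% minorization (doeblin) condition on the level set C of V:
P(x, \cdot\,) \;\ge\; \delta\, \nu(\cdot\,)
\quad \text{for all } x \in C, \text{ some } \delta > 0
\text{ and some probability measure } \nu .
```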
in results were extended to continuous time .the existence of a unique steady state was obtained by constructing a suspension flow over the discrete dynamics ; convergence of initial distributions to the steady state followed from a result in general state space markov process theory once irreducibility of time- discrete process was shown .in addition , demonstrated that this convergence is sub - exponential for a class of initial distributions .the slow rates of convergence are due to the abundance of slow particles in the system which , in turn , do not influence the system for extended periods of time .this slows the rates of mixing .the analysis in relies heavily on the fact that there exists a meaningful discretization of the system that mixes exponentially fast .because particles do not interact , the study of the dynamics of one particle on the collision manifold reveals important dynamical properties that yield implications for the continuous - time system . for an interacting particle system , the rates of mixing for the continuous - time system and its relevant discretizations happen to be comparable due to the slow particle effect .sub - exponential mixing seems to be prevalent for canonical interacting particle systems driven by gibbs heat reservoirs .if a discrete system does not mix exponentially , finding the potential that would still guarantee existence of invariant measures is a very diligent task requiring extremely good understanding of the dynamics of the system .thus , one needs different methods to tackle the question of existence of non - equilibrium steady states .the paper of meyn and tweedie provides a general framework of showing existence of invariant probability measures for general state space markov processes . in this paperwe consider a class of mechanical systems in which particles interact with an energy tank represented by a rotating disk anchored at the center .particles move freely between collisions with the tank or the boundary . when a particle collides with the disk , an energy exchange occurs , in which the particle exchanges the tangential component of its velocity with the angular velocity of the disk and the normal component of the particle s velocity changes sign .a system in this class is coupled to heat reservoirs set at possibly different temperatures that absorb particles when they collide with the boundaries of the reservoirs and emit new particles according to the gibbs distribution corresponding to the temperatures of the reservoirs .the main geometric assumption that makes the analysis feasible is that a particle can hit the disk at most once before returning to the heat reservoir .the existence is shown in section [ sect : existence ] through estimating hitting times of a carefully chosen compact set in continuous time without an aid of a discretization or a potential .a regeneration times idea is employed in the argument . in order to apply a general state space markov process theory developed in ,one also needs to show that the minorization condition holds on , which we do in [ subsect : c petite ] .convergence of initial distributions to the steady state follows by application of a theorem in after a small modification of the minorization condition argument . 
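in suitable units ( unit particle mass and unit moment of inertia for the disk , a normalization assumed here for illustration ) , the particle - disk interaction described above reads :

```latex
% decompose the incoming velocity at the collision point into
% tangential and normal components, v = (v_t, v_n); let omega be
% the disk's angular velocity. the exchange rule is
v_t' \;=\; \omega, \qquad
\omega' \;=\; v_t, \qquad
v_n' \;=\; -\, v_n .
```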
in section [sect : non exp mixing ] we show that mixing is not exponential and for a large class of initial distribution convergence of initial distributions to the steady state occurs at sub - exponential rates .the argument is similar to .however , we can not use the potential for certain upper bound estimates and different methods are required .the key property that leads to sub - exponential mixing is that the ( invariant ) measure of the states for which at least one particle will not collide with a heat reservoir or a disk in time is of the order of .the dynamics thus resembles one of an expanding map with a neutral fixed point and sub - exponential convergence rates for a class of initial distributions can be obtained using arguments similar to .the bounds on the measure of the particles that will not collide with a heat reservoir or a disk in time are obtained using the minorization condition and other properties of the dynamics .let be a circular domain of radius .a disk is anchored at the center of .the disk is allowed to rotate freely with angular velocity ; denote by a position of a marked point on .let and be two vertical walls on the top and on the bottom of , splitting into two halves , left and right .see fig .[ fig : manyparticles ] .our system consists of particles in with positions and velocities .the particles are confined to either of the two halves and move freely between collisions with . the collisions with and specular , i.e. , the angles of incidence are equal to the angles of reflection .when a particle collides with the boundary of the disk , an energy exchange occurs , in which the particle exchanges the tangential component of its velocity with the angular velocity of the disk and the normal component of the particle s velocity changes sign . more precisely , if is the particle s velocity decomposition upon collision with and the disk rotates with angular velocity , then the post - collision velocities are : where is the particles velocity decomposition and is the disk s angular velocity immediately after the collision .this interaction was introduced in .the left and right parts of , and , act as heat baths at possibly different temperatures and respectively .particles get absorbed by the heat baths upon collision with , and , upon collision of a particle with the disk , a new particle is emitted immediately at the collision location with speed and angle distributed according to , where or depending on whether the particle is confined to the left or the right half of .this distribution on the boundary corresponds to particle s velocity distributed as in .we would like to define the associated markov process with the dynamical rules governed as above .a phase space for such a process should consist of quadruples with proper identifications at the collisions .in particular , when , if has positive ( negative ) horizontal component , then the corresponding particle is confined to the right ( left ) half of the domain . to simplify the notation ,we would like to exclude states for some , and has zero horizontal component from the phase space . 
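the emission law referred to above plausibly takes the standard two - dimensional thermal - wall form ; the normalization below is our assumption , chosen to integrate to one and to be consistent with the moment integrals appearing later :

```latex
% density of the outgoing speed s > 0 and angle phi in (-pi/2, pi/2),
% measured from the inward normal, at inverse temperature beta:
f_{\beta}(s, \varphi) \;=\; c_{\beta}\; s^{2}\, e^{-\beta s^{2}} \cos\varphi ,
\qquad
c_{\beta}^{-1} \;=\; 2 \int_0^{\infty} s^{2} e^{-\beta s^{2}}\, ds
\;=\; \frac{\sqrt{\pi}}{2}\, \beta^{-3/2} .
```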
in our future argumentswe will frequently omit mentioning the situation of particles reflecting from : due to symmetry and circular shape of , the distance of flight is the same whether reflection occurs or not .in addition , we would like to exclude all the states with stopped particles ( for some ) and all the states that lead to such with positive probability .a particle stops if it collides tangentially with a stopped disk ( ) .consequently , any state with a particle heading for a tangential collision may lead to positive probability of stopping a particle as time evolves .let = & \ { ( , , , ) : v_i 0 , 1 i k , + & x_i l_u l_d , v_i , + & } / ~ where corresponds to a choice of outgoing velocities upon collision of a particle with .let be the associated markov process on ; denote the transition probability kernel by .note that is locally compact and separable , and has strongly markov with right - continuous sample paths because we chose to keep track of the outgoing velocities are collisions .those assumptions are necessary in order to apply general state markov process theory in section [ sect : existence ] .[ thm : main ] there exists a unique invariant measure for the markov process .it is mixing , but not exponentially mixing .moreover , all initial distributions converge to , but for a large class of initial distributions the convergence rate is at best polynomial .in section [ sect : existence ] we will show existence of an invariant probability measure , which is mixing ; in section [ sect : non exp mixing ] that mixing and the convergence of initial distributions to for a class of initial distributions are not exponential .a non - empty compact set is called -petite if there exist such that here is the uniform probability measure on .the condition is frequently called the minorization or doeblin s condition on . for any and a set define to be the first hitting time on and to be the first hitting time on after waiting time .we will use the following result by meyn and tweedie on continuous - time general state markov chains : [ thm : meyn existence ] assume there exists a petite set such that * for all and * for some , <\infty ] and is a geometric random variable with , then =\mathbb{e}_{\nu}[\tau_1 + \cdots + \tau_{\sigma}]\ ] ] \mathbb{e}_{\nu}[\tau_1 ] + \mathbb{p}[\sigma=2]\mathbb{e}_{\nu}[\tau_1+\tau_2 ] + \cdots\ ] ] + ( 1-\nu(c))\nu(c)2 \mathbb{e}_{\nu}[\tau ] + \cdots = \frac{\mathbb{e}_{\nu}[\tau]}{\nu(c ) } < \infty.\ ] ] theorem [ thm : meyn existence ] asks for the initial distribution of to be a point measure at ; assume there exists a stopping time such that is distributed with . if \ } < \infty ] , then by a similar estimate we conclude that <\infty\} ] for all , then for all follows too . for the actual process , the first part of the argument applies if is the invariant measure. 
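the geometric - trials identity derived above , $\mathbb{e}_{\nu}[\hat\tau] = \mathbb{e}_{\nu}[\tau] / \nu(c)$ , can be sanity - checked numerically ; in the sketch below the cycle lengths are stand - in exponential variables , not the actual return times of the process :

```python
import numpy as np

rng = np.random.default_rng(1)
nu_c, mean_tau, trials = 0.3, 2.0, 200_000   # nu(C), E[tau], sample size

totals = np.empty(trials)
for i in range(trials):
    total = rng.exponential(mean_tau)        # tau_1: first return time
    while rng.random() >= nu_c:              # misses C with prob. 1 - nu(C)
        total += rng.exponential(mean_tau)   # add another cycle tau_j
    totals[i] = total

print("monte carlo:", totals.mean())         # ~ mean_tau / nu_c
print("formula:   ", mean_tau / nu_c)        # = 6.666...
```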
not only we do not know if it exists , showing that there exists a stopping time with distributed as is nontrivial .however , if there exists a stopping time such that the system almost renews at time since enough initial data is forgotten by time due to the randomness of the heat baths , the argument may still carry through .a bit more precisely we would like to find a stopping time such that is independent enough from and is distributed similar to given that is distributed similar to .this will guarantee almost geometric rates of hitting .in addition , we want to be small enough so that \ } < \infty ] .let be a state in .some of the particles in may be heading for a collision with the disk in a sense that each of these particles will collide with the disk before colliding with .let be the time of the last of those collisions with the disk given .note that is finite and deterministic .let be the minimum time at which all particles and the disk randomize .more precisely , starting with , such that both of the following events have occurred : * all particles in have collided with at least once ( ensures that all the initial particles velocities are forgotten ) ; * a particle originated from , hit the disk at some time , and collided with again ( ensures that the angular velocity is forgotten ) } . a priori it is not completely clear whether is almost surely finite; we will show it along with the expected value estimates .though , at time , the initial velocities of the particles and the angular velocity of the disk are forgotten , the positions and may still be strongly correlated since collision times may be .also note that belongs to the collision manifold for some . in the following itis convenient to introduce a change of variables in order to make passing from to easier .we would like to replace with , where the new coordinates are based on the information from the past or the future collision .see fig.[fig : coordinates ] .more precisely , let , , , and be as follows : is the point of the past collision of the particle with if its previous collision was with and not with ( here we do not count collisions with ) ; otherwise is the point of the future collision of the particle with .note that the geometry is chosen in such a way that a particle can experience at most one collision with between collisions with . in the first scenario , is the distance of flight of the particle to in the direction of ( with possible reflection off , and in the second scenario is the distance of flight of the particle to in the direction of . let be the angle with respect to the normal to at the collision point , and is the speed of the particle. we will denote states in by .we will also denote angles of collision with the disk by .note that .then suppose we start with a point measure , , and run until time .once time is reached , the speeds and angles of the particles are distributed independently with + if and is distributed with , where or depending on the side from which the disk experienced its last collision before time ( in case of , i.e. 
after disk collision , is the same if and is a certain mix distribution if ) .the inverse temperature is the only memory kept for the distribution of at time .let .we would like to show that due to enough randomization of speeds and angular velocities , the expected time ] .note that lemma [ lemma : subsequent tau ] does not say anything about the initial waiting time for renewal starting from or an arbitrary initial distribution , only about the subsequent ones .we will prove lemma [ lemma : subsequent tau ] in subsection [ subsect : phi_tau returns ] . in order to apply theorem [ thm : meyn existence ], we need to find a petite set such that probabilities of hitting at regeneration times are uniformly bounded away from and .in addition , we want \ } < \infty ] .the initial position and velocity of the particle uniquely determine whether the particle will collide with the disk or not . in case of no collision ,the time of flight is .if the particle is headed for a collision with the disk , there are three possibilities for the value of the angular velocity of the disk upon collision : 1 .original ; 2 . , if the particle that collided with the disk immediately before the first particle ; and 3 . a random angular velocity acquired by the disk due to collision with a particle emitted randomly from . in the situation, is drawn from the distribution .the expected value for the exit time after collision with the disk is bounded as follows : for the convenience of notation below , set if .then \leq \frac{l}{s_1}+ \max\{\frac{l}{s_1 } , \frac{l}{\sqrt{\omega^2+s_1 ^ 2\cos^2(\varphi_1 ' ) } } , \frac{l}{\sqrt{s_2 ^ 2 \sin^2(\varphi_2 ' ) + s_1 ^ 2\cos^2(\varphi_1')}},\frac{l}{\sqrt{\beta_{\min } \pi}}\}\ ] ] similar estimate holds for ] in the equation ( [ eqn : e tau bound ] ) depends only on and , . in the distribution , each and are distributed according to ( before disk collision ) .then _ [ ] & _ [ _ 1 i k \ { } + + ] d + & _ -^ _ 0^ _ _ j(s,)dr d++ + & = _ 0^ _ 0^ s^2 e^-_j s^2 ( ) dds + & + _ ^ _ 0^ s^2 e^-_j s^2 ( ) dds + + d for some . * proof of lemma [ lemma : tau(delta ) from c ] . * if then [ eqn : e tau bound c ] _ [ ] & _ 1 i k \{}++ + & + + if we start with and wait for some time , then some of the particles may experience collisions with and redistribute their speeds and angles according to . then \leq \frac{2l}{\sqrt{\epsilon}s_{\min}}+d = : d',\ ] ] where is the constant from lemma [ lemma : subsequent tau ] .now suppose we start with any .the stopping time .the time is finite almost surely ; so are ] since their expectations are bounded . for almost every in the support of , is also finite almost surely . therefore , for any , . we would like to show that there exits such that for any , where is the uniform probability measure on . this statement implies , in particular , that for any , there is a sample path which takes precisely time to complete .we will start by showing this implication , with additional restrictions for the path to be regular in a sense that it stays away from tangential collisions ( precise definition to follow ) and for .those restrictions will be important later when we extend our argument to showing that is petite .our proof follows general outline of the proof of the minorization condition , prop . 2 in .the state represents positions and velocities of particles as well as the marked position and the angular velocity of the disk .particles and the disk interact as the dynamics evolves . 
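the exit - time estimates above amount to a finiteness bound of roughly the following shape ( a schematic version under the emission density assumed earlier , with illustrative constants ) : the free - flight time is at most of order $1/s$ , and the factor $s^{2}$ in the emission density removes the singularity at $s = 0$ :

```latex
\mathbb{E}_{h}[\tau] \;\lesssim\;
\int_{-\pi/2}^{\pi/2}\!\!\int_0^{\infty} \frac{2L}{s}\;
c_{\beta}\, s^{2} e^{-\beta s^{2}} \cos\varphi \;\, ds\, d\varphi
\;=\; 4 L\, c_{\beta} \int_0^{\infty} s\, e^{-\beta s^{2}}\, ds
\;=\; \frac{2 L\, c_{\beta}}{\beta} \;<\; \infty .
```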
in order to make the analysis simpler, we would like to choose a path that decouples particles and the disk .we achieve it by imposing a rule that as soon as a particle reaches , it does not collide with the disk anymore ; the exceptions are that some particle needs to reset the disk s angular velocity and the final trip to .note that , with this assumption , if , the times for particle to reach for the first time are deterministic and uniformly bounded by some .we treat the final state similarly by running the dynamics backwards in time .let be the times for the particles in to reach when running the system backwards in time ; note that .in addition , we impose that , at each collision with except setting the disk and the last one for each particle , the emission angles are uniformly bounded away from and .namely , we require that the emission angles satisfy for and some .the decoupling of the particles reduces the problem to a sub - problem concerning only one particle , which can be stated as follows : [ lemma : path subproblem ] for large enough and for any ( or ) , there exists a particle path from to , with outgoing angles and speeds satisfying and , such that takes precisely time to complete .moreover , the paths can be chosen in such a way that the number of collisions is bounded by some monotone function of .* proof of lemma [ lemma : path subproblem ] .* let be the minimal number of collisions required to travel between two diametrically opposite points satisfying the angle assumption ( usually , but if is very small compared to , may be larger ) .then collisions is enough to travel between any and .let and be the minimal and maximal times of travel along such an path with collisions satisfying the speed and angle parameters of lemma [ lemma : path subproblem ] .note that a simple application of the lagrange multipliers method guarantees that , for all intermediate values ] with segments .let .then for any , there exists a path between and taking time to complete with the maximal number of segments , as desired . given lemma [ lemma : path subproblem ] , in order to reach , we just need to send a particle to reset the disk and to reconnect the paths of particles matching them in an arbitrary manner in and in . for particles that do not reset the disk , .the particle that resets the disk has to change the angular velocity from to , where and are the angular velocities of the disk after all the particles in or have collided with in forward and backward times respectively . to change to simply needs to emit a particle with such that .however , for the purposes of obtaining the lower bounds in our subsequent argument , we would do it in two steps , by first setting to some intermediate value and only then to .in most situation we would choose .let denote the time to reset the disk .then , for some .for the particle that resets the disk we choose .if we take , then we are guaranteed to have a path from to with collisions with .call such a path .thus , we obtain : [ lemma : path ] given , there exists and such that for any , there exists a sample path making less than collisions such that between the first and the last collisions of each particle with the , all and . * density bounds along . 
*the next step is to show that if we start with a point measure at , , it acquires density as it evolves with the dynamics in a sense that has a nontrivial absolutely continuous component for large enough .moreover , the density of this absolutely continuous component is uniformly bounded away from zero in a neighborhood of each path and , in particular , at the endpoint .density in a neighborhood of a point is the product of the densities in neighborhoods of coordinates of each particle and the disk . for the majority of the path , particles and the disk do not interact ,so we can deal with their densities separately . in lemma[ lemma : acquiring density ] we show that each particle acquires density with a uniform lower bound in a fixed size neighborhood of the second collision and lemma [ lemma : pushing density forward ] keeps track of lower bounds of the densities at subsequent collisions along the path as we push the measure forward .lemmas [ lemma : acquiring density for the disk ] and [ lemma : reacquiring density ] deal with the particle that collides with the disk , making the disk acquire density and the particle to re - acquire density at the next collision with after loosing some to the disk .since the number of collisions along is bounded , if we combine lemmas [ lemma : acquiring density]-[lemma : reacquiring density ] together , by the last collision , each particle as well as the disk have density in a fixed - size neighborhood of the collision point .after the last collision the dynamics is deterministic and the value of the density is preserved under push forwards .this allows us to conclude that has a uniform lower bound on the density at . to formalize the argument, we need to define more precisely what we mean by a neighborhood of a collision point : in the coordinate system defined in section [ sect : settings ] the dynamics has discontinuities at collision points .let us first replace coordinate by : this coordinate change makes re - distribute uniformly at collisions ; in addition , for -coordinates , the jacobian for the standard billiard flow is equal to ( see for details ) .then , at each collision point of , we extend the coordinates forward or backward in time to accommodate neighborhoods of fixed size at each collision point for some small enough . herewe assume that is taking negative values before collisions and positive values after collisions .this extension is possible due to the bounds on the speeds and angles introduced in lemma [ lemma : path ] .first , we show that by pushing forward , one can acquire density with a uniform lower bound . by the design of our path , as soon as a particle reaches , it is independent from other particles . therefore , we only need to show that the each particle acquires density with a uniform lower bound in a -neighborhood of some point along a particle sub - path in . given and , let and let be the uniform measure with density on .[ lemma : acquiring density ] there exist such that \geq \eta_0 \mu_{h_{\zeta}(r,0,s,\sin(\varphi))} ] .here denotes the projection to the disk coordinates .when a particle collides with the disk that rotates at a set angular velocity , it looses its density to the disk , i.e. , ] and the particle is sent to change the angular velocity of the disk from to or from to , then , upon return to at time , \geq \eta_2 \eta_0 \mu_{h_{\zeta_2 \zeta}(r_2,0,s_2,\sin(\varphi_2))} ] will also have lower and upper bounds . 
by the same reasoning as in lemma [ lemma : acquiring density ], we conclude that there exist , and such that \geq \eta_2 \eta_0 \mu_{h_{\zeta_2 \zeta}(r_2,0,s_2,\sin(\varphi_2))}$ ] . mixing and convergence of initial distributions to the invariant measure follow almost immediately from our existence argument for the invariant probability measure . in the proof of prop .[ prop : petite ] , a lot of effort has been devoted to guarantee lower bounds on the densities of the pushed forward measures . a weaker property of markov processes , irreducibility ,can be shown in a similar manner by dropping the lower bounds on times and densities and allowing the paths we follow to start anywhere in the phase space .a continuous - time markov process is called irreducible if for all , whenever leb , there exists some , possibly dependent on both and , such that .[ lemma : irreducible ] markov process is irreducible . the proof of lemma [ lemma : irreducible ] is a simple modification of the proof of prop .[ prop : petite ] . the markov process is called ergodic if an invariant probability measure exists and the proof of ergodicity of relies on sampling the markov process at integer times , which generates a discrete - time skeleton chain .denote the transition probability kernel for by .the following theorem by meyn and tweedie relates skeleton chains to the ergodicity of the markov processes .6.1 ) [ thm : meyn ergodicity ] suppose the markov process is irreducible and is an invariant probability measure for . then is ergodic if and only if is irreducible .[ prop : ergodic ] the markov process is ergodic .the mixing of the invariant measure for the markov process and the convergence of initial distributions to the invariant measure follow from ergodicity by the dominated convergence theorem . indeed , _t ^t- & = _ t _ a |_(^t ( , a)-(a))d| + & _ t _ p^t(,)-()d= 0 , and to show mixing one may replace with in the second integral .* proof of prop .[ prop : ergodic ] .* ergodicity follows once we show that the time- sampled chain is irreducible .the proof is a modification of the proof of prop .[ prop : petite ] .what we really need to show is that for any , there exists and a sample path from to in time .in addition , some density is acquired and carried through along the path .no lower bounds are required .the existence of a sample path is guaranteed in the proof of lemma [ lemma : path ] noting that allowing and to be not in only changes times it takes for particles to reach for the first time in forward or backward times respectively .time for lemma [ lemma : path subproblem ] is allowed to vary in a range of values .in particular , we can choose such that is integer ( here is does not have to be the same for all states either ) .modifying lemmas [ lemma : acquiring density]-[lemma : reacquiring density ] to apply for all injection parameters and dropping the lower bounds , we conclude that is irreducible .[ prop : non exp mixing ] there exist ( many ) initial probability distributions on that converge to the unique invariant measure with sub - exponential rates , i.e. such that for large enough in particular , the unique invariant measure for the markov process is not exponentially mixing , i.e. 
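for concreteness , the convergence estimate used here can be written out as follows ; this is a reconstruction under the assumption that mu denotes the initial distribution , p^t the transition kernel , and bar-mu the invariant measure :

```latex
\lim_{t\to\infty}\bigl\Vert \mu P^{t}-\bar{\mu}\bigr\Vert
 = \lim_{t\to\infty}\sup_{A}\Bigl\vert \int_{\Omega}
     \bigl(P^{t}(x,A)-\bar{\mu}(A)\bigr)\,d\mu(x)\Bigr\vert
 \le \lim_{t\to\infty}\int_{\Omega}
     \bigl\Vert P^{t}(x,\cdot)-\bar{\mu}\bigr\Vert\,d\mu(x)=0 ,
```

where the last equality uses dominated convergence : the total-variation distance is bounded by 2 and , by ergodicity , tends to zero for mu-almost every x . replacing d mu by d bar-mu in the second integral gives the mixing statement .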
there exist ( many ) borel sets , such that \mu| \geq \mu(a ) \times \frac{\varsigma}{t^2}.\ ] ] the proof is rather similar to the sub - exponential mixing proof for a system driven by thermostats in except the dynamics on is not deterministic and there is no potential to aid the estimates on an upper bound on the measure of .[ prop : petite ] ensures that there exist such that , where and is the uniform probability measure on . by the invariance of , we conclude .let .this guarantees that for any , any particle in that experiences a collision with at time does not experience any disk collisions on time interval .denote by the projection of the phase space into components associated with particle and let be the uniform probability measure on .let the particle will collide with in time .here we use coordinates defined in subsect .[ subsect : tau ] .let .then if is drawn uniformly in , with probability , at least one particle in will hit in time .we are interested in the probability that a randomly emitted particles will not collide with the disk or in time .consider only the situations when a particle will fly at least distance before the next collision , i.e. , which happens with probability .then the probability that a randomly emitted particles will not collide with the disk or in time is greater or equal than then at least -fraction of , and therefore at least fraction of , ends up in in time . by the invariance of , we conclude that for large enough . to estimate an upper bound on the fraction of that will end up in , we observe that the only way to get to not from is to emit a slow enough particle from . therefore, starting from any initial distribution , the probability to end up in is bounded above by note that the dynamics for our system is statistically very similar to the dynamics of an expanding map with a neutral fixed point ( aka the pomeau - manneville map ) .indeed , the mass originally in evolves in for at least time ; extra mass is deposited from the parts of the phase space where particles experience nearly tangential collisions .one can complete the proof of prop .[ prop : non exp mixing ] using an argument very similar to and .then for large enough and \mu(b_{nn\delta } ) % \ ] ] =c-(1+c)\frac{\xi}{\frac{n^3 \zeta ' \delta}{(n+1)^2}+\xi'}]\frac{\xi}{n^2 n^2 \delta^2 } \geq \frac{c}{2}\frac{\xi'}{n^2 n^2 \delta^2}.\ ] ] s. p. meyn and r. l. tweedie : generalized resolvents and harris recurrence of markov processes .doeblin and modern probability ( blaubeuren , 1991 ) , 227250 , contemp . math ., 149 , amer .soc . , providence , ri , 1993 .
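the analogy with the pomeau-manneville map can be made concrete : for x -> x(1+x) near the neutral fixed point at 0 , mass started uniformly in (0, delta] leaks out only polynomially , with the surviving fraction at time t of order 1/(delta t) , mirroring the slow escape from the set of slow particles . a toy check in python ( purely illustrative , not the system of the paper ) :

```python
import numpy as np

rng = np.random.default_rng(1)
delta, n = 0.1, 2 * 10**5

x = rng.uniform(0.0, delta, size=n)        # uniform mass near the fixed point
alive = np.ones(n, dtype=bool)
for t in range(1, 1001):
    x[alive] *= 1.0 + x[alive]             # pomeau-manneville-type step
    alive &= x <= delta                    # points that have not escaped
    if t in (10, 100, 1000):
        print(t, alive.mean())             # survival ~ 1/(delta*t) for large t
```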
|
we consider a class of mechanical particle systems with deterministic particle-disk interactions, coupled to gibbs heat reservoirs kept at possibly different temperatures. we show that there exists a unique (non-equilibrium) steady state. this steady state is mixing, but not exponentially mixing, and all initial distributions converge to it. in addition, for a class of initial distributions, the rates of convergence to the steady state are sub-exponential.
|
many real - world technological , social and biological complex systems have a network structure . due to their importance and influence on our life ( recall ,e.g. , the internet , the www , and genetic networks ) investigations of properties of complex networks are attracting much attention . such properties as robustness against random damages and absence of the epidemic threshold in the so called scale - free networks are nontrivial consequences of their topological structure . despite undoubted advances in uncovering the main important mechanisms , shaping the topology of complex networks , we are still far from complete understanding of all peculiarities of their topological structure .that is why it is so important to look for new approaches which can help us to reveal this structure .the structure of networks may be completely described by the associated adjacency matrices .the adjacency matrices of undirected graphs are symmetric matrices with matrix elements , equal to number of edges between the given vertices .the eigenvalues of an adjacency matrix are related to many basic topological invariants of networks such as , for example , the diameter of a network .recently , in order to characterize networks , it was proposed to study spectra of eigenvalues of the adjacency matrices as a fingerprint of the networks .the rich information about the topological structure and diffusion processes can be extracted from the spectral analysis of the networks .studies of spectral properties of the complex networks may also have a general theoretical interest .the random matrix theory has been successfully used to model statistical properties of complex classical and quantum systems such as complex nucleus , disordered conductors , chaotic quantum systems ( see , for example , reviews ) , the glassy relaxation and so on .as the adjacency matrices are random , in the limit ( is the total number of vertices ) , the density of eigenvalues could be expected to converge to the semicircular distribution in accordance with the wigner theorem .however , rodgers and bray have demonstrated that the density of eigenvalues of a sparse random matrix deviates from the wigner semicircular distribution and has a tail at large eigenvalues , see also .recent numerical calculations of the spectral properties of small - world and scale - free networks , and the spectral analyses of the internet have also revealed that the wigner theorem does not hold .the spectra of the internet and scale - free networks demonstrate an unusual power - law tail in the region of large eigenvalues . at the present time there is a fundamental lack in understanding of these anomalies .in order to carry out a complete spectral analysis of real networks it is necessary to take into account all features of these complex systems described by a degree distribution , degree correlations , the statistics of loops , etc . at this timethere is no regular approach that allows one to handle this problem .our paper fills this gap .our approach is valid for any network which has a _local tree - like structure_. in particular , these are uncorrelated random graphs with a given degree distribution , and their straightforward generalizations allowing pair correlations of the nearest neighbors .these graph ensembles have one common property : almost every finite connected subgraph of the infinite graph is a tree . 
the tree is a graph , which has no loops .a random bethe lattice is an infinite random tree - like graph .all vertices on a bathe lattice are statistically equivalent .these features ( the absence of loops and the statistical equivalence of vertices ) are decisive for our approach .the advantage of bethe lattices is that they frequently allow analytical solutions for a number of problems : random walks , spectral problems , etc .real - world networks , however , often contain numerous loops .in particular , this is reflected in a strong `` clustering '' , which means that the ( relative ) number of loops of length do not vanish even in very large networks .nevertheless , we believe , that the study of graphs with a local tree - like structure may serve as a starting point in the description of more complex network architectures . in the present paperwe will derive exact equations which determine the spectra of infinite random uncorrelated and correlated random tree - like graphs .for this , we use a method of random walks .we propose a method of an approximate solution of the equations .we shall show that the spectra of adjacency matrices of random tree - like graphs have a tail at large eigenvalues . in the case of a scale - free degree distribution ,the density of eigenvalues has a power - law behavior .we will compare spectra of random tree - like graphs and spectra of real complex networks .the role of weakly connected vertices will also be discussed .let be the symmetric adjacency matrix of an -vertex mayer s graph , , ( the mayer graph has either or edges between any pair of vertices , and has no `` tadpoles '' , i.e. , edges attached at a single vertex ) .degree ( the number of connections ) of a vertex is defined as a random graph , which is , in fact , an ensemble of graphs , is characterized by a degree distribution : here , is the averaging over the ensemble . we suppose that each graph in the ensemble has vertices .graph ensembles with a given _ uncorrelated _ vertex degree distribution may be realized , e.g. , as follows .consider all possible graphs with a sequence of the numbers of vertices of degree , , , assuming in the thermodynamic limit [ , .suppose that all these graphs are equiprobable .then , simple statistical arguments lead to the conclusion that almost all finite connected subgraphs of an infinite graph do not contain loops . this approach can be easily generalized to networks with correlations between nearest - neighbor vertices , characterized by the two - vertex degree distribution : here is the total number of edges . in the case of an uncorrelated graphwe have where is the mean degree of a vertex .the spectrum of may be calculated by using the method of random walks on a tree - like graph and generating functions .we define a generating function where is the number of walks of length from to , where is any vertex of : in a tree - like graph the number of steps is an even number . 
in order to return to must go back along all of the edges we have gone .let be the number of walks of length starting at and ending at for the first time .we define one can prove that let be the distance from to and be the the number of paths of length starting at and ending at for the first time .we define one can prove where is the shortest path from to .there is an important relationship : in this sum the vertex is the nearest neighbor of and a second neighbor of the vertex .solving the recurrence equation ( [ 9b ] ) , we can find and .we define .equation ( [ 9b ] ) may be written in a form we can find , from which we get let us define .then the density of the eigenvalues of a random graph is determined as follows : where is positive and tends to zero .note that the equations ( [ 4a])([31 ] ) are valid for both uncorrelated and correlated tree - like graphs . in the case of a connected graphwe have and .( [ 31 ] ) gives solving this equation , we get the well known result : this is a continuous spectrum of extended eigenstates with eigenvalues .the presence of the denominator on the right - hand side of eq .( [ 20 ] ) leads to a difference of the spectrum of this graph from wigner s semi - circular law . in exact terms ,wigner s law is valid for the eigenvalue spectra of real symmetric random matrices whose elements are independent identically distributed gaussian variables .these specific random matrices for wigner s law essentially differ from the adjacency matrices , which we consider in this paper .so , in our case , the semicircular law may be used only as a landmark for a contrasting comparison .in the case of uncorrelated random tree - like graphs , random parameters on the right - hand side of eq .( [ 31 ] ) are equivalent and statistically independent .they are also independent on the degree .we define the distribution function of at in the fourier representation as : \right\rangle \ , , \label{32}\ ] ] where the brackets means the averaging over the ensemble of random uncorrelated graphs associated with a degree distribution . the statistical independence of the random parameters , , , on the right hand side of eq .( [ 31 ] ) allows us to use the following identity : & = & 1-\sqrt{x}\int\limits_{0}^{\infty } \frac{dy}{\sqrt{y}}j_{1}(2\sqrt{xy}% ) \left\langle \exp ( iy[\lambda + i\varepsilon -\sum_{i=1}^{k-1}t_{i}])\right\rangle \nonumber \\[5pt ] & = & 1-\sqrt{x}\int\limits_{0}^{\infty } \frac{dy}{\sqrt{y}}j_{1}(2\sqrt{xy}% ) e^{iy(\lambda + i\varepsilon ) } \times \nonumber \\[5pt ] & & \sum_{k}\frac{kp(k)}{\langle k\rangle } \left\langle \exp ( -iyt)\right\rangle ^{k-1}\ , , \label{identity}\end{aligned}\ ] ] where is the bessel function and .thus , we get the exact self - consistent equation for : where . solving eq .( [ 34 ] ) gives the distribution of , and so we can obtain , from which we get .( [ 5 ] ) , ( [ 7 ] ) , and ( [ 10 ] ) give & = & \frac{1}{\pi } % % tcimacro{\func{re}}% % beginexpansion \mathop{\rm re}% % endexpansion \int_{0}^{\infty } dye^{iy\lambda } \phi ( f_{\lambda } ( y))\ , , \label{ro}\end{aligned}\ ] ] where .( [ 34 ] ) , we find the -th moment of the distribution function , eq .( [ 32 ] ) : a general case it is difficult to solve eq . ( [ 34 ] ) exactly .let us find an approximate solution .we neglect fluctuations of around a mean value .a self - consistent equation for the function may be obtained if we insert into the right - hand side of eq .( [ 44 ] ) for .we get below we will call this approach an `` effective medium '' ( em ) approximation . 
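for reference , the `` well known result '' for the infinite k-regular tree is presumably the kesten-mckay density , quoted here on the assumption that the equation above specializes to a regular tree of degree k :

```latex
\rho(\lambda)=\frac{k\,\sqrt{4(k-1)-\lambda^{2}}}{2\pi\,(k^{2}-\lambda^{2})},
\qquad |\lambda|\le 2\sqrt{k-1},
```

whose square-root edges and rational denominator make the deviation from wigner's semicircle explicit .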
at real , is a complex function , which is to be understood as an analytic continuation from the upper half - plane of , .therefore , . in the framework of the em approach , the density , eq .( [ ro ] ) , takes an approximate form ( [ 23 ] ) may be solved analytically at .we look for a solution in the region . it is convenient to use a continuum approximation in eq .( [ 23 ] ) .the real and imaginary parts of this equation take a form & & \int_{k_{0}}^{k_{cut}}\!\!\!\!\!\!\!\!\frac{dk\,kp(k)}{\displaystyle\left ( \!1-(k-1)\mathop{\rm re}\frac{t(\lambda ) } { \lambda } \right ) ^{\!2}\!+\left ( \!(k-1)\mathop{\rm im}\frac{t(\lambda ) } { \lambda } \right ) ^{\!2}}\ , , \label{1new } \\[0.1 in ] & & 1=\frac{1}{\lambda ^{2}\left\langle k\right\rangle } \times \nonumber \\% [ 5pt ] & & \int_{k_{0}}^{k_{cut}}\!\!\!\!\!\!\!\!\frac{dk\,k(k-1)p(k)}{\displaystyle% \left ( \!1-(k-1)\mathop{\rm re}\frac{t(\lambda ) } { \lambda } \right ) ^{\!2}\!+\left ( \!(k-1)\mathop{\rm im}\frac{t(\lambda ) } { \lambda } \right ) ^{\!2}}\ , , \label{2new}\end{aligned}\ ] ] where and are the smallest and largest degrees , respectively . a region gives a regular contribution into the integrals ( [ 1new ] ) and ( [ 2new ] ) while a region gives a singular contribution . here . as a resultwe obtain 1\cong \frac{1}{\lambda ^{2}\left\langle k\right\rangle } & & % \int_{k_{0}}^{k_{\lambda } } dk\,k(k-1)p(k)+\frac{\pi \lambda k_{\lambda } p(k_{\lambda } ) } { \left\langle k\right\rangle \mathop{\rm im}t(\lambda ) } \ , .\label{4new}\end{aligned}\ ] ] if decreases faster than at , i.e. is finite , then in the leading order of we find within the same approach one can find from eq .( [ 41 ] ) that the density also has two additive contributions inserting eq .( [ 39a ] ) gives the density here .the asymptotic expression ( [ 39b ] ) is our main result .the right - hand side of this expression originates from two equal , additive contributions : the contribution from the real part of and the one from the imaginary part of .one can show that the asymptotic behavior of the real part , , in the leading order of is universal and is valid even for graphs with finite loops .contrastingly , the asymptotics of in the leading order of and the corresponding contribution to the right - hand side of eq .( [ 39b ] ) depend on details of the structure of a network . the analysis of eq .( [ 23 ] ) shows that the main contribution to an eigenstate with a large eigenvalue is given by vertices with a large degree .as we shall show below , in the limit , the result ( [ 39b ] ) is asymptotically exact .the relationship between largest eigenvalues and highest degrees , , for a wide class of graphs was obtained in a mathematical paper , ref .this contribution of highly connected vertices may be compared with a simple spectrum of `` stars '' , which are graphs consisted of a vertex of a degree , connected to dead ends .the spectrum consists of two eigenvalues and a -degenerate zero eigenvalue .note that asymptotically , in the limit of large , eq .( [ 39b ] ) gives if decreases slower than an exponent function at large , that is , if higher moments of the degree distribution diverge .a classical random graph has the poisson degree distribution . the tail of is given by eq .( [ 39b ] ) with where is a number of the order of : . \label{tailp}\ ] ] this equation agrees with the previous results obtained by different analytical methods . 
for a `` scale - free '' graph with at large , at , we get an asymptotically exact power - law behavior : where the eigenvalue exponent . at a finite , there is a finite - size cutoff of the degree distribution .the cutoff determines the upper boundary of eigenvalues : .this result agrees with an estimation of the largest eigenvalue of sparse random graphs obtained in ref . .let us analyze the accuracy of the em approach .one can use the following criterion .we introduce a quantity . here , is the -th moment of the approximate distribution ( [ 35 ] ) . inserting the function ( [ 35 ] ) into eq .( [ 44 ] ) gives .the function would be an exact solution of eq .( [ 34 ] ) if for all .note that at we have , because this equality is the basic equation in the framework of the em approximation . at and , in the leading order of ,( [ 44 ] ) gives & \cong & 1-ank_{0}\lambda ^{-2}\ln \lambda \ \ \ \ \ \ \text { at } % \gamma = 3\ , , \label{48b } \\ [ 5pt ] & \cong & 1-ank_{0}^{\gamma -2}\lambda ^{-2(\gamma -2)}\ \ \text { at } % 2<\gamma <3 \ , , \label{48c}\end{aligned}\ ] ] where is the smallest degree in and is a numerical factor .this estimation allows us to conclude that at the em solution becomes asymptotically exact . at small ,the em approximation are less accurate .for example , at for a scale - free network , we obtain , where . only at large and ,the parameter is close to 1 , i.e. . one can conclude that the em approach gives a reliable result close to the exact one in the range in our derivations we assumed the tree - like local structure of a network , that is , the absence of finite - size loops in an infinite network . loosely speaking , this assumption may fail if the second moment of the degree distribution diverges .this can be seen from the following simple arguments .the length of a typical loop is of the order of the average shortest - path length of a network .since the mean number of the second - nearest neighbors in the infinite uncorrelated net is and diverges if diverges , the average shortest - path length and the length of a typical loop are small and may turn out to be finite even in the limit of an infinite net if diverges . in this situation , the result( whether there are loops of finite length in the infinite network or not ) is determined by the size - dependence of the cut - off of the degree distribution . in its turn , this dependence is determined by the specifics of an ensemble and varies from network to network .many real - world networks are characterized by strong correlations between degrees of vertices .the simplest ones are correlations between degrees of neighboring vertices .let us study the effect of degree correlations on spectra of random tree - like graphs . using the pair degree distribution ( [ 1a ] ), it is convenient to introduce the conditional distribution that a vertex of degree is connected to a vertex of degree : the method used above for the calculation of spectra of uncorrelated graphs may be generalized to correlated graphs . 
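the bookkeeping behind this power law can be summarized as follows ( our paraphrase ; it assumes the correspondence between a large eigenvalue lambda and a degree k_lambda of order lambda^2 established above ) :

```latex
\rho(\lambda)\,|d\lambda|\simeq P(k)\,|dk|,\quad k=\lambda^{2}
\;\Longrightarrow\;
\rho(\lambda)\simeq 2\,|\lambda|\,P(\lambda^{2})\propto
|\lambda|^{-(2\gamma-1)},
\qquad \delta=2\gamma-1 .
```

for instance , a degree exponent gamma = 3 ( the barabasi-albert value ) gives delta = 5 , and gamma = 2.1 gives delta close to 3.2 .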
for this, one should take into account correlations between the degree of a vertex and the generating function in eq .( [ 31 ] ) .we define the distribution function of in the fourier representation as : \right\rangle .\label{51}\ ] ] averaging eq .( [ 31 ] ) and using the identity ( [ identity ] ) , we obtain an exact equation for : the density of eigenvalues is of the following form these equations are a generalization of the equations derived above for uncorrelated graphs .indeed , for an uncorrelated graph , we have and . as a resultwe get eqs .( [ 34 ] ) and ( [ ro ] ) .let us use the em approximation .we neglect fluctuations around a mean value and use an approximation then we get a self - consistent equation for the complex function : at this equation has a solution this solution gives where as before .it agrees with the result presented in eq .( [ 39b ] ) for uncorrelated graphs .one concludes that the short - range correlations between degrees of neighboring vertices in the scale - free networks does not change the eigenvalue exponent .let us consider random walks on a graph with the transition probability of moving from a vertex to any one of its neighbors .the transition matrix then satisfies clearly , for each vertex is related with the laplacian of the graph -a_{v , w}/\sqrt{k_{v}k_{w } } & \text { \ \ \ otherwise } \end{array } \right . ,\label{62}\ ] ] as follows where .therefore , if we know the density of eigenvalues of , we can find the density of eigenvalues of the laplacian : .we denote the eigenvalues of the matrix by .the eigenfunction corresponds to the largest eigenvalue . in order to calculate the spectrum of we use the same method of random walks described in the section ii .the probability of one step is given by eq .( [ 61 ] ) .we define the generating function and and obtain an exact equation which is similar to eq .( [ 31 ] ) : where but . at , we get exact equations for the function and the density of the eigenvalues : the function is an exact solution of eq .( [ 64 ] ) .this solution corresponds to the eigenvalue and gives the delta - peak in the density .the second largest eigenvalue is related to several important graph invariants such as the diameter of the graph , see , for example , : here the _ diameter _ of a graph is the maximum distance between any two vertices of a given graph . in order to find the spectrum at we use the em approach .we assume and get an equation for a complex function : is given by for completeness , we present the spectrum of the transition matrix of a -regular tree : which easily follows from eqs .( [ 66 ] ) and ( [ 67 ] ) .the second eigenvalue is equal to .let us compare available spectra of classical random graphs and scale - free networks , empirical spectra of the internet , and spectra of random tree - like graphs .at first we discuss spectra of adjacency matrices .the spectra were calculated in the framework of the em approach from eqs .( [ 23 ] ) and ( [ 41 ] ) for different degree distributions .our results are represented in figs .[ fig1 ] and [ fig2 ] . _ classical random graphs . _classical random graphs have the poisson degree distribution .the density of eigenvalues of the associated adjacency matrix has been obtained numerically in . 
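a plausible reconstruction of the operators involved , consistent with the matrix entries quoted above ( a_{vw} the adjacency matrix , k_v the degree of vertex v , D the diagonal degree matrix ) :

```latex
P_{vw}=\frac{a_{vw}}{k_{v}},\qquad
L_{vw}=\delta_{vw}-\frac{a_{vw}}{\sqrt{k_{v}k_{w}}},\qquad
L=I-D^{-1/2}AD^{-1/2}.
```

since D^{-1/2} A D^{-1/2} = D^{1/2} P D^{-1/2} is similar to P , the two spectra are related by lambda_L = 1 - lambda_P , which is the announced link between the densities of eigenvalues . for a k-regular tree P = A/k , so the transition spectrum is supported on |lambda| <= 2 sqrt(k-1)/k and the second largest eigenvalue equals 2 sqrt(k-1)/k .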
in fig .[ fig1 ] we display results of the numerical calculations and our results obtained within the em approach .we found a good agreement in the whole range of eigenvalues .there are only some small differences in the region of small eigenvalues which may be explained by an inaccuracy of the em approach in this range . in this region ,the density has an elevated central part that differs noticeably from the semicircular distribution .the spectrum also has a tiny tail given by eq .( [ 39b ] ) which can hardly be seen in fig .[ fig1 ] , see for detail section v and refs . _ scale - free networks ._ spectra of scale - free graphs with the degree distribution differ strongly from the semicircular law .the barabsi - albert model has a tree - like structure , the exponent of the degree distribution , and negligibly weak correlations between degrees of the nearest neighbors .therefore , one can assume that the spectrum of a random tree - like graph can mimic well the spectrum of the model . in fig .[ fig1 ] we compare the spectrum of the random tree - like graph with and the spectrum of the barabsi - albert model obtained from simulations .the density of states has a triangular - like form and demonstrates a power - law tail .there is only a noticeable deviation of the em results from the results of simulations at small eigenvalues . in order to improve the em results, we used , as an ansatz , the distribution function ^{-ixt(\lambda ) } $ ] instead of the function . in this case, there are two unknown complex functions and which were determined self - consistently from eq .( [ 34 ] ) ._ power - law tail . _the power - law behavior of the density of eigenvalues is an important feature of the spectrum of scale - free networks .the simulations of the barabsi - albert model having the degree exponent revealed a power - law tail of the spectrum , with the eigenvalue exponent .our prediction is in agreement with the result of these simulations .the study of the topology of the internet at the autonomous system ( as ) level revealed a power - law behavior of eigenvalues of the associated adjacency matrix .the degree distribution of the network has the exponent .the eigenvalues of the internet graph are proportional to the power of the rank of an eigenvalues ( starting with the largest eigenvalue ) : with some exponent .this leads to .the _ multi _dataset analyzed in gave and , hence , the eigenvalue exponent .the _ oregon _dataset gave , .our results with substituted , give the eigenvalue exponent in agreement with the results obtained from empirical data for this network .there are the following reasons for the agreement between the theory for tree - like graphs and the data for the internet . at first, although the average clustering coefficient of the internet at as level is about 0.2 , the local clustering coefficient rapidly decreases with increasing the degree of a vertex .in other words , the closest neighborhood of vertices with large numbers of connections is `` tree - like '' . recall that vertices with large numbers of connections determine the large - eigenvalue asymptotics of the spectrum .so , we believe that our results for the asymptotics of the spectra of tree - like networks is also valid for the internet and other networks with similar structure of connections . secondly , the internet is characterized by strong correlations between degrees of neighboring vertices . 
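the comparison with the barabasi-albert model is easy to reproduce approximately ; a sketch in python using networkx , where the graph size , attachment parameter and binning are illustrative choices , not those of the cited simulations :

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=3000, m=3, seed=0)     # degree exponent ~3
lam = np.linalg.eigvalsh(nx.to_numpy_array(G))        # ascending eigenvalues

hist, edges = np.histogram(np.abs(lam), bins=np.logspace(0, 1.3, 15),
                           density=True)
print(np.round(hist, 4))          # tail should fall off roughly as lam**-5

k_max = max(k for _, k in G.degree())
print(lam[-1], np.sqrt(k_max))    # largest eigenvalue ~ sqrt(k_max)
```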
however , as we have shown in the section vi , such short - range degree correlations do not affect the power - law behavior of eigenvalues .the study of the internet topology also revealed a correspondence between the large eigenvalues and the degree : .this result is in agreement with our theoretical prediction that it is the highly connected vertices with a degree about that produce the power - law tail .the calculations of the eigenvalues spectrum of the adjacency matrix of a pseudofractal graph with have revealed a power - law behavior with .the effective medium approximation gives lower value .the origin of the difference is not clear .one should note that the pseudofractal is a deterministically growing graph with a very large clustering coefficient and , what is especially important , with long - range correlations between degrees of vertices ._ weakly connected nodes ._ let us study the influence of weakly connected vertices with degrees on the spectra of random tree - like graphs with the degree distribution . in fig .[ fig2] and [ fig2] we represent the evolution of the spectrum of the network with , when the smallest degree decreases from 5 to 1 .the spectra were calculated in the framework of the em approximation .similar results are obtained at different . for ,two peaks at non - zero eigenvalues emerge in the density of states . in order to understand an origin of the peaks one can note that for this degree distribution the average degree is close to .for example , at we have .therefore , in this network , the probability to find a vertex having three links is larger than the probability to find a vertex with a degree .there are large parts of the network which have a local structure . in fig .[ fig2] we show a density of eigenvalues of an infinite bethe lattice [ see eq .( [ 20 ] ) at ] . at small eigenvalues ,the density of the regular tree fits well the density of the random network . at large , the density of eigenvalues demonstrates a power - law behavior with the exponent . in the case we have .this network contains long chains which connect vertices with degrees . in fig .[ fig2] we display the density of eigenvalues of an infinite chain ( see eq .( [ 20 ] ) at ) . at small eigenvaluesthis density of eigenvalues fits well the density of eigenvalues of the random network .therefore , it is the vertices with small degrees that are responsible for the formation of density of networks at small eigenvalues . _dead - end vertices ._ let us investigate the effect of dead - end vertices on the spectra of random tree - like graphs with different degree distributions .[ fig2] shows a spectrum of a scale - free network with and the probability of dead - end vertices . the em approximation is used .the spectrum has a flat part and two peaks at moderate eigenvalues . 
as we have shown above ,this ( intermediate ) part of the spectrum is formed mainly by the vertices with degree and 3 .the emergency of a dip at zero is a new feature of the spectrum .in fact , there is a gap in the spectrum obtained in the in the framework of the em approach .the width of the gap increases with increasing .one can see this in the insert on the fig .[ fig2] .the dead - end vertices also produce a delta peak at .the central peak corresponds to localized eigenstates .note that the appearance of the central peak and a dip is a general phenomena in random networks with dead - end vertices .we also observed this effect in the classical random graphs .spectral analysis of the internet topology on the as level revealed a central peak with a high multiplicity .thus the conjecture that localized and extended states are separated in energy may well hold in complex networks .a similar spectra was observed in many random systems , for example , in a binary alloy . in order to estimate the height of the delta peakit is necessary to take into account all localized states .unfortunately , so far this is an unsolved analytical problem . in fig .[ fig3 ] we show local parts of a network , which produce localized states .one can prove that configurations with two and more dead - end vertices , see fig .[ fig3] , produce eigenstates with .the corresponding eigenvectors have non - zero components only at the dead - end vertices . fig .[ fig3] shows another configuration which produces an eigenstate with the eigenvalue .a corresponding eigenvector is localized at vertices 0 , 1 and 2 ._ finite - size effects ._ in the present paper we studied the spectral properties of infinite random tree - like graphs .numerical studies of large but finite random trees demonstrate that the spectrum of a finite tree consists , speaking in general terms , of a continuous component and an infinity of delta peaks .the components correspond to extended and localized states , respectively .there is a hole around each delta peak in the spectrum .a finite regular tree has a spectral distribution function which looks like a singular cantor function .these results demonstrate that finite size effects in spectra may be very strong .in particular , the finite size of a network determines the largest eigenvalue in its spectrum .as was estimated in the section v , the largest eigenvalue of the adjacency matrix associated with a scale - free graph is of the order of ._ spectrum of the transition matrix .[ fig4 ] we represent a spectrum of the transition matrix defined by eq .( [ 61 ] ) for a tree - like graph with the scale - free degree distribution at large degrees .the spectrum was calculated from eqs .( [ 66 ] ) and ( [ 67 ] ) with the degree exponent and the probabilities and taken from empirical degree distribution of the internet at the as level . the spectrum lies in the range . in fig .[ fig4 ] we compare our results with the spectrum of the transition matrix of the internet obtained in .unfortunately , the data are too scattered to make a detailed comparison with our results .nevertheless , one can see that the spectrum of of the tree - like graph reproduces satisfactory the general peculiarities of the real spectrum .namely , the spectra have a wide dip at zero eigenvalue and a central delta - peak .the multiplicity of the zero eigenvalue have been estimated in . for a detailed comparison between the spectra , correlations in the internetmust also be taken into account . 
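the localized zero modes produced by dead-end vertices can be verified directly on a toy graph ; a minimal numpy check ( the five-vertex graph below is our own illustration , not a configuration from fig . [ fig3 ] ) :

```python
import numpy as np

# a hub (0) carrying two dead ends (1, 2), attached to a short chain (3, 4)
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

v = np.zeros(n)
v[1], v[2] = 1.0, -1.0           # antisymmetric combination on the dead ends
print(np.allclose(A @ v, 0.0))   # True: a zero mode localized on 1 and 2

print(np.round(np.linalg.eigvalsh(A), 3))   # spectrum contains eigenvalue 0
```

any pair of dead ends sharing the same neighbor supports such an eigenvector , which is why the central peak grows with the density of dead-end vertices .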
in order to reveal an effect of dead - end vertices we calculated spectra of on a random tree - like graph with the poisson and the scale - free degree distributions in the case when dead - end vertices are excluded , that is , and .these spectra are displayed in the insert on the fig .[ fig4 ] . in the whole range of eigenvaluesthese spectra are very close to the spectrum of a bethe lattice with the degree these calculations confirm the fact that it is the dead - end vertices that produce the dip in the spectrum of the internet .in this paper we have studied spectra of the adjacency and transition matrices of random uncorrelated and correlated tree - like complex networks . we have derived exact equations which describe the spectrum of random tree - like graphs , and proposed a simple approximate solution in the framework of the effective medium approach .our study confirms that spectra of scale - free networks as well as the spectra of classical random graphs do not satisfy the wigner law .we have demonstrated that the appearance of a tail of the density of the eigenvalues of sparse random matrices is a general phenomenon .the spectra of classical random graphs ( the erds - rnyi model ) have a rapidly decreasing tail .scale - free networks demonstrate a power - law behavior of the density of eigenvalues . we have found a simple relationship between the degree exponent and the eigenvalue exponent : .we have shown that correlations between degrees of neighboring vertices do not affect the power - law behavior of eigenvalues .comparison with the available results of the simulations of the barabsi - albert model and the analysis of the internet at the autonomous system level shows that this relationship is valid for these networks .we found that large eigenvalues are produced by highly connected vertices with a degree .many real - world scale - free networks demonstrate short - range correlations between vertices and a decrease of a local clustering coefficient with increasing degree of a vertex. therefore , the relationship between the degree - distribution exponent and the eigenvalue exponent may also be valid for these networks .we can conclude that the power - law behavior is a general property of real scale - free networks .weakly connected vertices form the spectrum at small eigenvalues .dead - end vertices play a very special role .they produce localized eigenstates with ( the central peak ) .they also produce a dip in the spectrum around the central peak . in conclusion, we believe that our general results for the spectra of tree - like random graphs are also valid for many real - world networks with a tree - like local structure and short - range degree correlations ._ note added_.after we have finished our work we have learned about a recent mathematical paper ref . , where large eigenvalues of spectra of complex random graphs were calculated .the statistical ensemble of graphs , which was considered in that paper , essentially differs from that of our paper and has a different cutoff of the degree distribution , but the asymptotics of spectra agree in many cases .d. cvetkovi , m. domb , and h. sachs , _ spectra of graphs : theory and applications _( johann ambrosius barth , heidelberg , 1995 ) ; d. cvetkovi , p. rowlinson , and s. simi , _ eigenspaces of graphs _( cambridge university press , cambridge , 1997 ) .farkas , i. dernyi , a .-barabsi , and t. vicsek , phys .e * 64 * , 026704 ( 2001 ) ; i. farkas , i. derenyi , h. jeong , z. neda , z.n .oltvai , e. ravasz , a. 
schubert , a .-barabsi , and t. vicsek , physica a * 314 * , 25 ( 2002 ) .a. bekessy , p. bekessy , and j. komlos , stud .* 7 * , 343 ( 1972 ) ; e.a .bender and e.r .canfield , j. combinatorial theory a * 24 * , 296 ( 1978 ) ; b. bollobs , eur . j. comb . * 1 * , 311 ( 1980 ) ; n.c .wormald , j. combinatorial theory b * 31 * , 156,168 ( 1981 ) ; m. molloy and b. reed , random structures and algorithms * 6 * , 161 ( 1995 ) .one should note that the position of the cut - off may depend on details of the ensemble of random graphs .in particular , z. burda and a. krzywicki , cond - mat/0207020 , showed that the exclusion of multiple connections in a network may diminish .m. krivelevich and b. sudakov , combinatorics , probability and computing * 12 * , 61 ( 2003 ) .
|
we propose a general approach to the description of spectra of complex networks. for the spectra of networks with uncorrelated vertices (and a local tree-like structure), exact equations are derived. these equations are generalized to the case of networks with correlations between the degrees of neighboring vertices. the tail of the density of eigenvalues at large eigenvalues is related to the behavior of the vertex degree distribution at large degrees: in particular, for a degree distribution decaying as a power law with exponent gamma, the density of eigenvalues decays as a power law with exponent 2 gamma - 1. we propose a simple approximation, which enables us to calculate spectra of various graphs analytically. we analyse spectra of various complex networks and discuss the role of vertices of low degree. we show that spectra of locally tree-like random graphs may serve as a starting point in the analysis of spectral properties of real-world networks, e.g., of the internet.
|
super - hydrophobic surfaces ( i.e. , water contact angle greater than 150 ) have attracted recently much attention in fundamental research and potential industrial applications , such as waterproof surfaces , anti - sticking , anti - contamination , self - cleaning , anti - fouling , anti - fogging , low - friction coatings , adsorption , lubrication , dispersion , and self - assembly . in general , artificial super - hydrophobic surfacescan be realized governing both the chemical compositions and morphological structure of the solid surfaces . in particular ,surface roughness ( micro- and nano - morphology ) may also be enhanced especially by hierarchical and fractal structures , possibly allowing air pocket formation to further repel water penetration . nevertheless , realizing a permanent super - hydrophobic surface remains quite a challenge . lately , chemical , mechanical , thermal stability , and time durability have been addressed .+ however , the best and most efficient surfaces known so far evolved in 460 million years in plants and animals owe to adaptation to different environments and now they serve as models for the development of artificial biologically inspired or biomimetic materials .recent studies demonstrate that super - hydrophobicity of many natural surfaces principally results from the presence of at least two - fold morphology at both micro- and nano - scales and the low energy materials on the surfaces .for instance , the hierarchical architecture of the _ salvinia _ leaf surface is dominated by complex elastic papillae millimetric in size coated with self - assembly nano - scaled epidermal wax crystals ranging in sizes from 0.2 to 100 m .the terminal cells of each super - hydrophobic papilla lack the wax crystals and form evenly distributed hydrophilic cells that cover only 2% of the surface .these hydrophilic cells stabilize the air layer by pinning the liquid - vapor interface to the tips of the papillae .this prevents the loss of air caused by formation and detachment of gas bubbles due to instabilities , such as pressure fluctuations , especially in a turbulent water flow environment . the unique combination of hydrophilic cells on super - hydrophobic papillae provide a promising concept for the development of a coating with a long - term super - hydrophobic behaviour .in general , the adhesion with the water is so strong that the elastic papillae bend and swing back when the tips snap off the droplets .+ the so - called salvinia or petal effect is therefore referred to super - hydrophobic adhesive surfaces with hydrophilic and hydrophobic hierarchical morphology providing sufficient roughness for exhibiting both a large contact angle and a high contact angle hysteresis , conversely to the lotus effect ( high contact angle value and low contact angle hysteresis ) .consequently , a water droplet on such a surface is nearly spherical in shape and can not roll off even when the leaf is turned upside down .however , larger drops can roll off the surface at the slightest tilting or vibration .+ in general , it is very difficult to fabricate an applicable engineering super - hydrophobic surface on stainless steel , because the textured films easily fall off from the stainless steel substrate . 
lately, some achievements on the realization and characterization of stable super - hydrophobic surfaces on stainless steel have been made and particularly using carbon nanotube coatings .furthermore , stainless steel potential applications include electrodes for super - capacitors , fuel cells , capacitive deionization and capacitive mixing for extracting energy from salinity difference of water resources , field emission probes , sensors , catalyst support for wastewater treatment and tribological applications .therefore , stainless steel may be considered as a valid candidate for direct growth of carbon nanotubes by cvd , also because of its high content of iron as the catalyst element . in particular , direct growth is widely used due to several advantages , such as capability to produce dense and uniform deposits , reproducibility , strong adhesion , adjustable deposition rates , ability to control crystal structure , surface morphology and orientation of the cvd products , reasonable cost and wide scope in selection of chemical precursors .+ recently , we have shown that the direct growth of high quality mwcnts on stainless steel in the absence of any external catalysts is possible . moreover , acid treatments and oxidation - reduction stages on this type of surface are not necessary because of the native nano - scale roughness of the substrate and the iron - rich substrate surface both act as an efficient catalyst or template in the synthesis of mwcnts . particularly , at our working temperaturemostly iron nanoparticles are involved in the growth mechanism .furthermore , after the first growth , the stainless steel substrate may be used again , just carefully removing the synthesized carbon nanotubes in an ultrasonic bath .we remark that ultrasonication is generally needed to detach the mwcnt film from the steel substrate , due to its strong adhesion . + here , we illustrate a simplified recipe to synthesize mwcnts on a sheet of aisi 316 stainless steel by cvd , without any external catalysts .moreover , we will investigate the mwcnt hierarchical morphology from the sem micrographs of the films and their super - hydrophobic properties will be characterized . in particular, we will show that owing to their particular hierarchical architecture , the super - hydrophobic mwcnt coatings for stainless steel exhibit long - term high contact values and also high adhesive force with water ( high contact angle hysteresis ) .therefore , the super - hydrophobic state achieved is stationary .a mm piece of aisi 316 stainless steel sheet ( fe 70% , cr 18% , ni 10% , and mo 2% , goodfellow cambridge , ltd . )was carefully sonicated in deionized water and degreased in isopropyl alcohol for 10 min . then, the steel substrate was placed on a molybdenum sample holder acting also as resistive heater and inserted into an ultra high vacuum chamber and the pressure was brought up to torr by a rotary pump . 
at this stage , argon gas ( 500 sccm )was inserted at 12 torr and then the heater was increased at the working temperature .the sample temperature was controlled with an optical pyrometer , so when the substrate reached the working temperature , acetylene ( c ) was introduced ( 200 sccm ) in the chamber and mwcnts grew in dynamic condition , since the rotary pump was kept in operation during the process .after 10 min of growth , ar gas ( 500 sccm ) was inserted in chamber for 5 min .in figure [ fig : figure1]a , d sem ( zeiss leo supra 35 ) images of our synthesized nanostructures are reported , showing that a high density of randomly oriented mwcnts uniformly grew on the stainless steel sheet with an average film thickness of m . moreover , mwcnts come with a wide distribution of tube diameters , with average value nm . also , we have shown in our past works transmission electron microscopy ( tem ) images confirming the multi - walled nature of the as - grown carbon nanotubes .furthermore , in figure [ fig : figure1]b , d it may be observed that mwcnts are mostly capped and often they present carbonaceous nanostructures ( amorphous and/or graphitic carbon ) around the tips ( figure [ fig : figure1]d ) and close to the stainless steel surface ( figure [ fig : figure1]c ) , with a characteristic dimension of hundred nanometers , as also reported by other authors .moreover , we characterized the wettability of mwcnt films acquiring images of sessile water drops cast on the carbon nanotube films by a custom setup with a ccd camera .static advanced contact angles were measured increasing the volume of the drop by step of 1 .a plugin for the open - source software imagej was exploited to estimate the contact angle values by using cubic b - spline interpolation of the drop contour to reach subpixel resolution , with an accuracy of .the deionized water ( 18.2 m cm ) drop volume used to achieve the contact angles of samples was 10 .moreover , every contact angle was measured s after drop casting to ensure that the droplet reached its equilibrium position . in figure[ fig : figure2]a the image of a water droplet cast on the mwcnt film is shown .the experimental contact angle value is with no observable roll - off angle , even if the substrate is turned upside down .therefore , we infer that the contact angle hysteresis is so high to pin the water droplet on the mwcnt surface .the adhesive force in unit of length of a surface in contact with water is given by where mn m and for our mwcnt samples mn m .therefore , for a water drop with diameter 1 mm ( figure [ fig : figure2]a ) , the adhesive force of the mwcnt film in contact with the drop is .the obtained result is about 25% lesser than the adhesive force of a single gecko foot - hair ( i.e. , seta ) , but 10 times higher than that of _ salvinia _ leaf .interestingly , the contact angle value achieved is among the highest reported in literature for not chemically treated , functionalized , or suitably textured randomly distributed mwcnt films .+ furthermore , figure [ fig : figure2]b reports the variations of contact angle and droplet radius as functions of the elapsed time from drop cast on the mwcnt films . in such suction experimentwe show that although samples are porous , the contact angle trend is constant to demonstrate the stability in time of the super - hydrophobic state of mwcnt coatings . 
on the other hand , the droplet radius linearly decreases of within 10 min owing to the liquid evaporation and not to the suction process , otherwise the contact angle would also linearly decrease .our results are particularly remarkable , since the water contact angle of mwcnt films has been reported to decrease linearly with time , from an initial value of to within 15 min .in addition , the obtained high contact angle value may be attributed to hydrophilic ( ) carbonaceous nanostructures around the tips of the hydrophobic randomly arranged mwcnts ( - ) , constituting a two - fold hierarchical morphology able to stabilize the super - hydrophobicity of the film by the salvinia effect ( figure [ fig : figure3 ] ) . as it occurs in salvinia ,water droplets are pinned by the attractive interaction due to hydrophilic carbonaceous nanostructures , while they exhibit super - hydrophobic contact angles with a large amount of air pockets , owing to the repulsive interaction of hydrophobic mwcnts . in this way ,the film results in a air - retaining super - hydrophobic surface .the super - hydrophobic effect due to the presence of carbonaceous nanostructures on the mwcnt tips has been also reported by han et al . in their work ,vertically aligned mwcnt were processed by plasma immersion ion implantation in order to cap them by hydrophilic amorphous carbon nanoparticles .the authors report a measured water contact angle value of and zero roll - off angle .however , the difference in contact angle values between our and han s results could be attributed to the lower density of carbonaceous nanostructures and to the random distribution of the mwcnts in our samples .evidently , they observed the lotus effect realizing in that way a waterproof surface , while we recognized the salvinia effect , thus fabricating an air - retaining super - hydrophobic surface . + in order to analyze in more detail the super - hydrophobicity of our samples, we used the cassie - baxter equation in the hydrophobic regime with the surface solid fraction , the surface air fraction , the apparent contact angle , and the young s contact angle of the surface defined as the surface tensions of the solid - vapor , the solid - liquid , and the liquid - vapor interfaces are denoted by , , and , respectively . moreover , if we consider the experimental contact angle measured for highly pure mwcnt ( nanocyl , nc7000 , assay 90% , diameter : 5 - 50 nm ) random network films realized as in ref . , as the young s contact angle of the hierarchical mwcnt composite surface with apparent contact angle , we easily obtain an air fraction .this result suggests the formation of a large amount of air pockets .however , we have recently reported that highly hydrophobic mwcnt random network films with a hierarchical surface morphology , owing to their young s contact angle close to are in a metastable wenzel - cassie - baxter state , which is stationary . 
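the air-fraction estimate follows from inverting the cassie-baxter relation cos(theta*) = f_s (cos(theta_Y) + 1) - 1 ; a short sketch in python , where the apparent angle of 165 degrees and the young angle of 140 degrees are assumptions for illustration only , not the measured values :

```python
import numpy as np

def cassie_baxter_fractions(theta_app_deg, theta_young_deg):
    """solid and air fractions from cos(theta*) = f_s (cos(theta_Y) + 1) - 1."""
    ca, cy = np.cos(np.radians([theta_app_deg, theta_young_deg]))
    f_s = (ca + 1.0) / (cy + 1.0)
    return f_s, 1.0 - f_s

f_s, f_air = cassie_baxter_fractions(165.0, 140.0)   # hypothetical angles
print(f"solid fraction = {f_s:.2f}, air fraction = {f_air:.2f}")
```

with these illustrative angles the solid fraction is about 0.15 , i.e. , roughly 85% of the apparent contact area would be air pockets , consistent with the qualitative picture described above .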
indeed ,figure [ fig : figure2]b suggests that air pockets are very stable in time .it is worth noting that the metastability of the mwcnt film is coherent with the salvinia effect , in which although the liquid droplets are pinned on salvinia leaves , they can roll off from the surface by a slight vibration .in summary , we have realized super - hydrophobic mwcnt films on aisi 316 stainless steel by cvd without the addition of any external catalysts or pre - treatments , at low - temperature .furthermore , the investigation at sem reveals that the mwcnt coatings are carbon nanotube random networks with a two - fold hierarchical morphology owing to the presence of hydrophilic carbonaceous nanostructures on the top of the hydrophobic mwcnts .the surface hierarchical architecture of the mwcnt films provides a stationary super - hydrophobic state for the coatings because of the salvinia effect .such mwcnt films may be used for super - hydrophobic stainless steel realizations , such as drag reduction , anti - corrosion , anti - fouling , and anti - contamination .we thank r. de angelis , f. de matteis , and p. prosposito ( universit di roma tor vergata , roma , italy ) for their courtesy of contact angle instrumentation .this project was financial supported by the european office of aerospace research and development ( eoard ) through the air force office of scientific research material command , usaf , under grant no .fa9550 - 14 - 1 - 0047 .+ y. huang , j. zhou , b. su , l. shi , j. wang , s. chen , l. wang , j. zi , y. song , l. jiang , colloidal photonic crystals with narrow stopbands assembled from low - adhesive superhydrophobic substrates , j. am .134 ( 2012 ) 1705317058 .w. barthlott , t. schimmel , s. wiersch , k. koch , m. brede , m. barczewski , s. walheim , a. weis , a. kaltenmaier , a. leder , h. f. bohn , the salvinia paradox : superhydrophobic surfaces with hydrophilic pins for air retention under water , adv .22 ( 2010 ) 23252328 .h. yang , p. pi , z .- q .cai , x. wen , x. wang , j. cheng , z. ru yang , facile preparation of super - hydrophobic and super - oleophilic silica film on stainless steel mesh via sol gel process , appl .256 ( 2010 ) 40954102 .y. chen , z. lv , j. xu , d. peng , y. liu , j. chen , x. suna , c. feng , c. wei , stainless steel mesh coated with mno/carbon nanotube and polymethylphenyl siloxane as low - cost and highperformance microbial fuel cell cathode materials , j. power sources 201 ( 2012 ) 136141 . m. a. anderson , a. l. cudero , j. palma , capacitive deionization as an electrochemical means of saving energy and delivering clean water .comparison to present desalination practices : will it compete ?, electrochim .acta 55 ( 2010 ) 38453856 .n. sano , y. hori , s. yamamoto , h. tamon , a simple oxidation - reduction process for the activation of a stainless steel surface to synthesize multi - walled carbon nanotubes and its application to phenol degradation in water , carbon 50 ( 2012 ) 115122 .m. d. abad , j. c. snchez - lpez , a. berenguer - murcia , v. b. golovko , m. cantoro , a. e. h. wheatley , b. f. g. j. a. fernndez , j. robertsond , catalytic growth of carbon nanotubes on stainless steel : characterization and frictional properties , diam .17 ( 2008 ) 18531857 .m. hashempour , a. vicenzo , f. zhao , m. bestetti , direct growth of mwcnts on 316 stainless steel by chemical vapor deposition : effect of surface nano - features on cnt growth and structure , carbon 63 ( 2013 ) 330347 .l. camilli , m. scarselli , s. d. gobbo , p. castrucci , f. 
nanni, e. gautron, s. lefrant, m. de crescenzi, the synthesis and characterization of carbon nanotubes grown by chemical vapor deposition using a stainless steel catalyst, carbon 49 (2011) 3307-3315. l. camilli, m. scarselli, s. d. gobbo, p. castrucci, f. r. lamastra, f. nanni, e. gautron, s. lefrant, f. d'orazio, f. lucari, m. de crescenzi, high coercivity of iron-filled carbon nanotubes synthesized on austenitic stainless steel, carbon 50 (2012) 718-721. l. camilli, m. scarselli, s. d. gobbo, p. castrucci, e. gautron, m. de crescenzi, structural, electronic and photovoltaic characterization of multiwalled carbon nanotubes grown directly on stainless steel, beilstein j. nanotechnol. 3 (2012) 360-367. l. camilli, p. castrucci, m. scarselli, e. gautron, s. lefrant, m. de crescenzi, probing the structure of fe nanoparticles in multiwall carbon nanotubes grown on a stainless steel substrate, j. nanopart. res. 15 (2013). a. f. stalder, t. melchior, m. müller, d. sage, t. blu, m. unser, low-bond axisymmetric drop shape analysis for surface tension and contact angle measurements of sessile drops, colloids surf. a 364 (2010) 72-81. j. yang, z. zhang, x. men, x. xu, x. zhu, reversible conversion of water-droplet mobility from rollable to pinned on a superhydrophobic functionalized carbon nanotube film, j. colloid interface sci. 346 (2010) 241-247. f. de nicola, p. castrucci, m. scarselli, f. nanni, i. cacciotti, m. de crescenzi, exploiting the hierarchical morphology of single-walled and multi-walled carbon nanotube films for highly hydrophobic coatings, beilstein j. nanotechnol. 6 (2015).
|
we have taken advantage of the native surface roughness and the iron content of aisi 316 stainless steel to directly grow multi-walled carbon nanotube (mwcnt) random networks by chemical vapor deposition (cvd) at low temperature ( ), without the addition of any external catalysts or time-consuming pre-treatments. in this way, super-hydrophobic mwcnt films on stainless steel sheets were obtained, exhibiting high contact angle values ( ) and high adhesion force (high contact angle hysteresis). furthermore, the investigation of the mwcnt films by scanning electron microscopy (sem) reveals a two-fold hierarchical morphology of the mwcnt random networks, made of hydrophilic carbonaceous nanostructures on the tips of hydrophobic mwcnts. owing to the salvinia effect, the hydrophobic and hydrophilic composite surface of the mwcnt films provides a stationary super-hydrophobic coating for conductive stainless steel. this biomimetically inspired surface may not only prevent corrosion and fouling but also provide low friction and drag reduction.
|
recently, we have published a numerical algorithm for the cauchy problem for ordinary differential equations. we showed that it could be much more accurate, even by a few orders of magnitude, than traditional numerical methods based on finite differences. in physical applications, the requirement of one force evaluation per time step means that the most often chosen algorithm is the verlet algorithm, a simple third-order taylor predictor method, or the equivalent _leap-frog_ algorithm. in this context, the possibility of using an algorithm that is much more accurate than the verlet algorithm and yet as fast opens new perspectives for simulating such complex systems as, e.g., tetratic phases or auxetics. apart from the problem of numerical accuracy, there is also the possibility of the loss of time-reversibility in finite-difference methods. in the following, we discuss our algorithm with respect to integrating the motion equations. to this aim we introduce a few examples of forced linear and nonlinear oscillators and the 2d lennard-jones fluid.
[figure 1: the exact solution of the forced harmonic oscillator (thick line) and three pairs of approximating expressions, (1), (2), and (3), each starting at a successive point p; in this example the number of exact digits is equal to 6.]
we present the procedure of finding an approximate solution of the following initial value problem for a second-order differential equation of the form , where and are given functions and and are fixed reals. for the function we assume that it is sufficiently smooth, so that, using the taylor formula, we can write it in some neighborhood of in the form \[ f(x,v) = \sum_{k=0}^{n-1}\frac{1}{k!}\left[(x-x_0)\frac{\partial}{\partial x} + (v-v_0)\frac{\partial}{\partial v}\right]^{k} f(x_0,v_0) + o\!\left(\left[\sqrt{(x-x_0)^2+(v-v_0)^2}\,\right]^{n}\right). \label{r_3} \] we introduce a formal real parameter and, instead of eqs. ([r_1])-([r_2]), we consider the family of problems with the initial data in eq. ([r_2]). next, we seek the approximate solution of eq. ([r_4]) in the form , where are unknown functions of satisfying the condition . putting eq. ([r_5]) into eq. ([r_4]) and then comparing the coefficients of order , we get the system of differential equations for , which, together with the initial conditions of eq. ([r_6]), determine them in a unique way. the differential equations for we solve by simple integration.
[figure 2: the exact solution (thick line), the approximate solution generated by the velocity-verlet algorithm, and the solution generated by our polynomial algorithm.]
to illustrate this procedure we consider the mathematical pendulum problem with an external force. for we get , where we substitute for and the derivatives with respect to for the derivatives with respect to . hence, and, after integrating the above equations in the interval , . practically, for a fixed we look for the interval such that , where is a fixed accuracy. in the above example of the mathematical pendulum the condition states that . next, we repeat our procedure for eq. ([r_1]) with the new initial data, and so on. fig. [fig1] is a visualization of the updating procedure for the initial data. thus, every time the condition in eq. ([r_tol]) fails at some value of , the new initial data are defined at , i.e., . in many examples it is enough to put to get a good approximation of the solution.
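the lowest-order step of this procedure can be carried out symbolically. the sketch below is a minimal illustration, assuming the model problem \(\ddot{x} = \lambda\,(-a\sin x + b\sin t)\) with \(x(t_0)=x_0\), \(\dot{x}(t_0)=v_0\) and with the formal parameter set to 1 at the end; the concrete right-hand side of the pendulum example in the paper may differ in detail.

```python
import sympy as sp

# a minimal symbolic sketch of the formal-parameter expansion, assuming
# the model problem x'' = lam*(-a*sin(x) + b*sin(t)) with x(t0) = x0,
# x'(t0) = v0; this is an illustration, not the paper's exact example.
t, s, u = sp.symbols('t s u')
a, b, t0, x0 = sp.symbols('a b t_0 x_0')
v0 = sp.symbols('v_0', positive=True)   # assume v0 != 0 for clean integrals

# zeroth order in lambda: free motion carrying the initial data
x_zero = x0 + v0 * (t - t0)

# right-hand side evaluated on the zeroth-order solution
f = -a * sp.sin(x_zero) + b * sp.sin(t)

# first-order correction x1: x1'' = f, x1(t0) = x1'(t0) = 0,
# obtained by two successive integrations starting from t0
v_one = sp.integrate(f.subs(t, s), (s, t0, u))   # x1'(u)
x_one = sp.integrate(v_one, (u, t0, t))          # x1(t)

# approximate solution with the formal parameter set to 1
x_approx = sp.simplify(x_zero + x_one)
print(x_approx)
```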
[figure 3: the calculation time of the velocity-verlet algorithm as a function of the step size and of the polynomial method as a function of the given accuracy; time units represent the calculation time of the velocity-verlet algorithm.]
[figure 4: two attractors of the forced duffing oscillator obtained for the same parameters but different initial values, with the parameters of the polynomial algorithm indicated.]
[figure 5: the trajectory leading to one of the attractors.]
[figure 6: the kinetic energy per particle of the 2d lennard-jones fluid versus time in the case of the verlet algorithm and of the polynomial method.]
while performing the numerical integration of motion equations, one is always fighting for numerical accuracy. in classical finite-difference methods like the verlet, leapfrog, or runge-kutta algorithms, this is connected with the chosen size of the time step. however, the smaller the step size, the larger the cumulated round-off error, because more time steps are necessary to cover a given time interval. thus, one should use a numerical method needing a smaller number of steps (a larger value of ) without loss of numerical accuracy. the advantage of our method is already evident in fig. [fig2], where three solutions of the forced oscillator equation have been plotted: the exact one, represented by the equation , and two numerical approximations, represented by the velocity-verlet algorithm with the step size and our polynomial of degree in the formal variable . in the case of the polynomial method, only the dots representing the points where the condition of eq. ([r_tol]) fails for a given accuracy have been plotted in the figure. they are the only points where the numerical round-off errors contribute to the approximate solution. the remaining points (in between), which have not been plotted, do not contribute to the cumulation of round-off errors. one can always recalculate them from the exact expression for the polynomial representation of . a further advantage of our algorithm is a relatively shorter total calculation time than in any numerically stable finite-difference method in the limit of small values of . in fig. [fig3], we have presented the dependence of the calculation time of the velocity-verlet algorithm on the value of and of our polynomial algorithm on the given accuracy . the results in the figure have been obtained from the programs calculating deviations of the approximate solutions from the exact one. the numerical errors arising from the assumed value of can be a few orders of magnitude smaller than in classical finite-difference methods. this feature has already been discussed in our paper , where we compared various numerical algorithms with respect to their numerical accuracy. the next feature of the presented algorithm is that it also applies to strongly nonlinear motion equations. in particular, in fig. [fig4], we have presented two different attractors of the forced duffing oscillator (the parameters have been taken from fig. 2.20 in the book by holden ) with the same values of and but different initial conditions.
in fig. [fig5], the entire trajectory starting from the initial condition and leading to one of the attractors has been presented. one can use our method also for chaotic solutions of the oscillator. however, we do not discuss this possibility in this paper. the polynomial, in the case of the duffing oscillator, is represented by the following formula: \[ \begin{aligned} \cdots &+ \lambda^2\,\big[\,24\,x_0\,\sin(t_0+\tau)\,b\,v_0 + 72\,\sin(t_0+\tau)\,\tau\,b\,v_0^2 + \tfrac{11}{20}\,x_0^3\,\tau^6\,v_0^2 \\ &- 12\,x_0\,\cos(t_0)\,\tau\,b\,v_0 - b\,\sin(t_0)\,a + \tfrac{1}{2}\,x_0^2\,\tau^3\,b\,\sin(t_0) - 108\,\cos(t_0)\,b\,v_0^2 - 3\,x_0^2\,\tau\,b\,\sin(t_0) \\ &- 12\,x_0\,\tau\,\cos(t_0+\tau)\,b\,v_0 - \tfrac{3}{2}\,x_0^2\,\cos(t_0)\,\tau^2\,b + \tfrac{1}{6}\,x_0^3\,\tau^3\,a + 108\,\cos(t_0+\tau)\,b\,v_0^2 \\ &+ 36\,\tau\,b\,v_0^2\,\sin(t_0) - 3\,x_0^2\,\cos(t_0+\tau)\,b + \tfrac{1}{8}\,x_0^5\,\tau^4 - 2\,x_0\,\cos(t_0)\,\tau^3\,b\,v_0 + \tfrac{7}{20}\,\tau^6\,v_0^3\,a \\ &+ \tfrac{9}{40}\,x_0\,\tau^8\,v_0^4 - \tfrac{3}{2}\,\cos(t_0)\,\tau^4\,b\,v_0^2 + \tfrac{1}{2}\,\tau^2\,b\,\sin(t_0)\,a - 24\,x_0\,b\,v_0\,\sin(t_0) + \tfrac{53}{140}\,x_0^2\,\tau^7\,v_0^3 \\ &+ \sin(t_0+\tau)\,b\,a + \tfrac{9}{10}\,\tau^5\,b\,v_0^2\,\sin(t_0) + \tfrac{1}{6}\,\tau^3\,v_0\,a^2 + x_0\,\tau^4\,b\,v_0\,\sin(t_0) + \tfrac{2}{5}\,x_0\,\tau^5\,v_0^2\,a \\ &- 18\,\tau^2\,\cos(t_0+\tau)\,b\,v_0^2 + \tfrac{3}{40}\,\tau^9\,v_0^5 + \tfrac{1}{4}\,x_0^2\,\tau^4\,v_0\,a + 3\,x_0^2\,\cos(t_0)\,b - \cos(t_0)\,\tau\,b\,a + \tfrac{3}{8}\,x_0^4\,\tau^5\,v_0 \,\big] \end{aligned} \] and in this case the accepted approximate solution should satisfy the inequality for a given , where is the coefficient of (we set the formal parameter after all). in our previous paper we have shown that our method can also be used for molecular-dynamics simulations of a large number of particles. to this end, we have simulated the barometric formula in the case of an ideal gas of molecules in a gravitational field, with the gas in contact with a nosé-hoover thermostat , . in all the cases mentioned so far, the series expansion of the force (eq. ([r_3])) consisted of a finite number of terms. the question arises whether the method could be extended to a more general case, where the number of terms is infinite. in order to show this possibility, we have considered a 2d lennard-jones fluid represented by a system of particles interacting with the lennard-jones potential energy \[ u(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]. \] then, the force experienced by a particle from another particle a distance away is represented by the formula \[ f(r) = -\frac{du}{dr} = \frac{24\varepsilon}{r}\left[2\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]. \] in this case the series expansion in the neighborhood of (see eq. [r_3]) leads to an infinite number of terms including the powers of . in the case of the approximating polynomials linear in the formal parameter, the numerical algorithm is equivalent to the velocity-verlet algorithm and is represented by the following set of equations: \[ \vec{r}_i(t_0+\tau) = \vec{r}_i(t_0) + \vec{v}_i(t_0)\,\tau + \frac{\tau^2}{2m}\,\vec{F}_i(t_0), \label{r_poloz} \] \[ \vec{v}_i(t_0+\tau) = \vec{v}_i(t_0) + \frac{\tau}{2m}\left[\vec{F}_i(t_0) + \vec{F}_i(t_0+\tau)\right], \label{r_predkosc} \] where \( m \) is the particle mass and \( \vec{r}_i(t_0) \) and \( \vec{v}_i(t_0) \) are the initial location and velocity of particle \( i \), respectively. the accuracy control parameter should satisfy the condition . the generalization of the algorithm to the case of a polynomial approximation of higher order becomes much more complex and is not presented in this paper. however, already the results obtained in the linear approximation (in the formal parameter) are promising.
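a minimal numerical sketch of this first-order scheme is given below, in reduced units with \(\varepsilon=\sigma=m=1\); the naive step-halving rule is an assumption of this sketch, standing in for the exact accuracy condition of eq. ([r_kondverlet]).

```python
import numpy as np

# lennard-jones pair forces in reduced units (epsilon = sigma = m = 1)
# and one velocity-verlet step with a naive adaptive time step.
def lj_forces(pos):
    n = pos.shape[0]
    f = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos[i + 1:]                 # displacement vectors
        r2 = np.sum(d * d, axis=1)
        s6 = 1.0 / r2**3                         # (sigma/r)**6
        mag = 24.0 * (2.0 * s6 * s6 - s6) / r2   # f(r)/r
        fij = mag[:, None] * d
        f[i] += fij.sum(axis=0)
        f[i + 1:] -= fij                         # newton's third law
    return f

def verlet_step(pos, vel, tau, tol=1e-4):
    f0 = lj_forces(pos)
    while True:
        new_pos = pos + vel * tau + 0.5 * tau**2 * f0
        f1 = lj_forces(new_pos)
        # crude accuracy check: the force change per step must stay small
        if tau**2 * np.abs(f1 - f0).max() < tol:
            break
        tau *= 0.5                               # shrink the step and retry
    new_vel = vel + 0.5 * tau * (f0 + f1)
    return new_pos, new_vel, tau
```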
in fig. [fig6], the kinetic energy (per particle) of 500 particles representing the 2d lennard-jones fluid has been plotted versus time in the case of the velocity-verlet algorithm and of our polynomial approximation (eqs. ([r_poloz])-([r_predkosc])), linear in . in this case, the total time used by the polynomial method was of the same order as the total time of the verlet method ( ). in order to preserve the given numerical accuracy, the polynomial algorithm was running according to the following steps:
1. start with , where ;
2. if eq. ([r_kondverlet]) fails, then change the value of by some factor, e.g. ;
3. calculate the values of the polynomials and ;
4. go to step 1.
in the considered example, the time intervals used during the entire simulation run were distributed as follows: . the total calculation time strongly depends on the value of , and a higher order of the approximating polynomial (in the formal variable ) makes larger values of possible. we have discussed possible numerical advantages of integrating motion equations with the help of the recently published algorithm for solving the initial-value problem for ordinary differential equations. contrary to traditional finite-difference methods, which represent a truncated series expansion of the solution of the equation of motion under consideration, the method is not discrete in time. this makes it possible that, for a large class of problems in physics, the algorithm could be faster and more accurate than traditional finite-difference schemes. the particular example of the 2d lennard-jones fluid, which has been discussed above, suggests that the method could be applied to many-body problems.
*acknowledgments*
we are indebted for discussions with prof. k. wojciechowski and prof. . we also thank prof. w.g. hoover for his comments on the algorithm and the suggestion of some numerical tests.
m.r. dudek and t. nadzieja, c 16, 413 (2005). l. verlet, phys. rev. 159, 98 (1967). r.w. hockney and j.w. eastwood, _computer simulation using particles_ (new york: mcgraw-hill, 1981). h. j. c. berendsen and w. f. van gunsteren, _molecular dynamics simulation of statistical mechanical systems_ (proceedings of the enrico fermi summer school, pp. 43-65, soc. italiana di fisica, bologna, 1985). k.w. wojciechowski and d. frenkel, cmst 10(2), 235 (2004). r. lakes, advanced materials 5, 393 (1993). k.e. evans and a. alderson, advanced materials 12, 617 (2000). d.a. konyok, k.w. wojciechowski, yu.m. pleskachevskii, s.v. shilko, mech. 10, 35 (2004), in russian. william graham hoover, _time reversibility, computer simulation, and chaos_, advanced series in nonlinear dynamics, vol. 13, world scientific (1999). s. toxvaerd, molec. phys. 72, 159 (1991). _chaos_, ed. arun v. holden, manchester university press (1986). s. nosé, molec. phys. 52, 255 (1984). w.g. hoover, phys. rev. a 31, 1695 (1985).
|
we have presented some practical consequences for molecular-dynamics simulations arising from the numerical algorithm recently published in . the algorithm is not a finite-difference method and therefore it could be complementary to the traditional numerical integration of the motion equations. it consists of two steps. first, an analytic form of polynomials in some formal parameter (we put after all) is derived; these polynomials approximate the solution of the system of differential equations under consideration. next, the numerical values of the derived polynomials in the interval in which the difference between them and their truncated part of smaller degree does not exceed a given accuracy become the numerical solution. the particular examples which we have considered represent the forced linear and nonlinear oscillator and the 2d lennard-jones fluid. in the latter case we have restricted ourselves to polynomials of the first degree in the formal parameter. computer simulations play a very important role in modeling materials with unusual properties that contradict our intuition. a particular example is auxetic materials. in this case, the accuracy of the applied numerical algorithms, as well as various side-effects which might change the physical reality, could become important for the properties of the simulated material.
pacs: 31.15.qg, 02.60.cb, 02.60.-x
|
the stochastic nature of oscillation excitation due to turbulent convection is one major source of noise (i.e., realisation noise) and systematics in helioseismology. another source of systematics is the spatial modulation of the waves by active regions and large-scale convection. accounting for and reducing such noise and systematics is important in helioseismology, global as well as local. global helioseismic power spectral analyses use long uninterrupted observations towards this end. in contrast, determining localized non-axisymmetric perturbations inside the sun, the main goal of local helioseismic techniques, necessarily involves using observations of more limited extent in both space and time. such a task may appear more difficult and susceptible to larger uncertainties and systematics. the last two decades have witnessed the development and refinement of a new class of local techniques, which include helioseismic holography, far-side imaging, and time-distance helioseismology, and which have been fairly successful in achieving such tasks. these new techniques are based on studying quantifiable properties of the causal connections that acoustic waves establish between points on the solar surface during their travel inside the sun. these techniques are being fine-tuned, achieving increased sensitivity to local changes in the structure and dynamics of the sun, e.g., sunspots. at the same time, the necessity to accurately estimate the errors and systematics in the measurements is also being increasingly felt. time-distance helioseismology uses temporal cross-correlations of the oscillation signals from separated points on the solar surface. the wave packet-like structure of such temporal cross-correlation signals is understood to be due to the propagation of wave packets formed by acoustic waves. the waves constituting a single wave packet travel with approximately the same horizontal phase speed and in the high-frequency limit follow the same path inside the sun. connections between the time-distance and modal frequency-wavenumber analyses have been studied by and . kosovichev & duvall provided a useful formula to fit the time-distance correlation signals. bogdan showed, with an explicit calculation, that a group of acoustic waves with approximately the same horizontal phase velocity indeed interfere constructively to form a wave packet, thereby leading to the observed structure of temporal cross-correlation signals. this understanding led to further refinement of time-distance measurement procedures that include phase-speed filtering: three-dimensional fourier spectra of data cubes are filtered to select waves that travel with approximately the same horizontal phase speed and then inverted back to the time domain to perform the cross-correlations. such phase-speed filtering not only improves the signal-to-noise in travel-time measurements, but also makes it possible to measure travel times at very short travel distances (shallow depths, and hence of high-degree modes). these improvements are crucial in measuring travel times at each location keeping the original spatial resolution of the data, thereby allowing tomographic study of localized structures such as sunspots.
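as an illustration of the filtering step, a schematic phase-speed filter applied to a data cube in the fourier domain might look as follows; the gaussian filter profile, its width, and the function name are illustrative assumptions rather than the exact filter used in the mdi pipeline.

```python
import numpy as np

# a schematic phase-speed filter, assuming a data cube v(t, y, x) sampled
# with steps dt [s] and dx [mm]; v0 and dv [mm/s] set the target
# horizontal phase speed and the (assumed gaussian) filter width.
def phase_speed_filter(cube, dt, dx, v0, dv):
    nt, ny, nx = cube.shape
    spec = np.fft.fftn(cube)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)          # rad/s
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)             # rad/mm
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    W, KY, KX = np.meshgrid(omega, ky, kx, indexing='ij')
    kh = np.sqrt(KX**2 + KY**2)                           # horizontal wavenumber
    vph = np.abs(W) / np.maximum(kh, 1e-12)               # horizontal phase speed
    filt = np.exp(-0.5 * ((vph - v0) / dv)**2)            # gaussian in phase speed
    return np.real(np.fft.ifftn(spec * filt))
```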
in this paper we report on the identification of a significant source of systematics in travel-time measurements that arises due to an adverse coupling of the localized strong spatial modulation of oscillation amplitudes in sunspots with the phase-speed filtering procedure. further, we present a numerical model that describes this source of systematics. the largely reduced p-mode acoustic power observed in sunspots is thought to have contributions from several causes of two major physical origins: (i) the interaction between the sunspot magnetic field and the quiet-sun p modes and convection, and (ii) radiative transfer effects induced by altered thermal conditions within the sunspot. the former physical process is thought to be responsible for (a) absorption of p modes, as known from a number of studies following the work of , (b) alteration of the p-mode eigenfunctions, and (c) reduced excitation of p modes within the sunspot. the latter radiative transfer effects cause (a) changes in the formation height of spectral lines used to measure the velocities within spots, and (b) imperfect measurements through changes in the spectral line profile due to zeeman splitting and the darkness of the spot. the spatial variation of acoustic power can be determined from doppler images by forming pixel-wise temporal power spectra and summing the power in the p-mode band of frequencies. here, we calculate the p-mode power within a band of frequencies between 1.7 and 5.3 mhz over three active regions containing a small, a medium, and a large sized spot, using mdi doppler velocity data (high-resolution data for the small and medium size spots, and full-disk resolution data for the large spot). the noaa ar numbers for these three sunspots are, respectively, ar8555, ar8243, and ar10488. we find that quiet-sun regions devoid of any significant magnetic field show p-mode power that is more or less homogeneous over the solar surface. when a sufficient number of individual pixel values (or realizations) are averaged over, the p-mode power is nearly constant over space in the quiet sun. to determine the relative deviations that active regions introduce in the p-mode power, we normalize the power distribution within an active region with respect to a quiet-sun spatial average. the quiet-sun regions chosen for the normalization are from within the larger regions covering the sunspots; they are of the same latitudinal extent as the active regions but are outside of any significant magnetic field. we call the square root of such a spatial power map the oscillation ``amplitude modulation function'', , where is the horizontal position on the solar surface.
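the construction of such an amplitude modulation function from a doppler data cube can be sketched as follows; the array layout and the quiet-sun mask passed in are assumptions of this illustration.

```python
import numpy as np

# a minimal sketch of the pixel-wise p-mode power map and the amplitude
# modulation function; cube is v(t, y, x), dt in seconds, quiet_mask is
# an assumed boolean map of quiet-sun pixels used for the normalization.
def amplitude_modulation(cube, dt, quiet_mask, fmin=1.7e-3, fmax=5.3e-3):
    spec = np.fft.rfft(cube, axis=0)
    freq = np.fft.rfftfreq(cube.shape[0], d=dt)        # hz
    band = (freq >= fmin) & (freq <= fmax)
    power = np.sum(np.abs(spec[band])**2, axis=0)      # p-mode band power map
    power /= power[quiet_mask].mean()                  # quiet-sun normalization
    return np.sqrt(power)                              # amplitude map
```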
note that is derived by averaging over the p-mode band (1.7-5.3 mhz), but, in general, amplitude modulations are frequency dependent. figure 1 displays derived for the three active regions chosen, with their mdi magnetograms shown as well. we note here that detailed studies of local magnetic modulations of oscillation power and their relation to the local magnetic field strengths have been reported by and . the latter authors have also constructed simple models which allow a comparison with the changes in modal power distribution determined from ring-diagram (normal mode) analyses. the measured spectrum of oscillations in the presence of such long-lived spatial modulation of oscillation amplitudes is the convolution, in wavenumber space, of the frequency-wavenumber spectrum of oscillations with the wavenumber spectrum of the modulating function. the possible errors that such convolutions would introduce in the modal parameters could be reduced by using an observational time series sufficiently longer than the life span of the amplitude modulators. such a way of reducing the systematics is not available in local helioseismology, where the objective is to probe perturbations localized in space and time. however, a purely _time-space analysis_ of the oscillation field, in contrast to a _frequency-wavenumber analysis_, would not be subject to the kind of errors from which a modal power-spectral analysis suffers. for example, in time-distance analysis, a temporal cross-correlation of oscillation signals from two locations is not affected by a stationary scaling of oscillation amplitudes, and hence the (phase) travel times are not affected. however, the intermediate step of phase-speed filtering, with recourse to fourier space to select waves of certain modal relations as explained in the previous section, couples the scales or wavenumbers of the modulating function to the oscillation spectra. this causes perturbations in the wavenumbers of the oscillation spectra, which in turn manifest as perturbations in the travel times measured over regions where the oscillation amplitudes are modulated. before examining this effect by way of a numerical model of the measurement procedures in the next section, we first demonstrate the changes in travel times as measured using a standard time-distance analysis procedure applied to mdi velocity data cubes. we perform experiments using artificial amplitude modulation functions, which bring out the essential features of the coupling between the spatial variation of oscillation amplitudes and the frequency-wavenumber spectrum of the phase-speed filter. we choose two forms for for this purpose; horizontal one-dimensional cuts across these modulation functions are shown in the top row of figure 2a. we have chosen a peak suppression of 80% (which is typical of umbrae of medium sized spots, see figure 1) for both functions. the gaussian form, denoted as (top left panel in figure 2a), has a fwhm of about 16 mm, while the disc-like function, denoted as (top right panel in figure 2a), has a sharp spatial gradient connecting zero suppression to the peak suppression, which is spread over a disc of diameter 16 mm.
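the two artificial modulation functions can be generated with a few lines; the map size and pixel spacing below are illustrative assumptions.

```python
import numpy as np

# gaussian and disc-like amplitude modulation functions with 80% peak
# suppression; dx is an assumed pixel size in mm, size the map width.
def modulation_maps(size=128, dx=1.4, fwhm=16.0, diameter=16.0, depth=0.8):
    y, x = np.indices((size, size))
    r = dx * np.hypot(x - size / 2, y - size / 2)      # radius in mm
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    a_gauss = 1.0 - depth * np.exp(-r**2 / (2.0 * sigma**2))
    a_disc = np.where(r <= diameter / 2.0, 1.0 - depth, 1.0)
    return a_gauss, a_disc
```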
each velocity image of a very quiet region data cube is multiplied by before running the data through a standard time-distance analysis procedure that includes phase-speed filtering and uses a center-annulus geometry for computing the cross-correlations. travel-time maps are calculated for the range of travel distances that is normally used in tomographic inversions, and are compared with those obtained for the original quiet-sun data (without introducing any amplitude variations). the shifts in mean phase travel times, i.e., the mean of the ingoing and outgoing wave phase travel times, , as a function of are shown as maps in figure 2. hereafter, by travel times we always refer to mean phase travel times and remove the subscript mean in the notation, i.e., . the results in figures 2 and 3a show the following main features of the effect of amplitude suppression on travel times. firstly, steeper spatial gradients in the amplitude suppression cause larger shifts in travel times (compare the left and right columns in figure 2), in addition to the proportional changes caused by the amount of suppression. secondly, smaller show positive shifts in travel times (longer travel times), while larger show the opposite change (shorter travel times), with the change-over occurring at larger for suppression with a larger spatial gradient. thirdly, the magnitude of decreases as increases. to estimate the changes in travel times that sunspots could introduce purely due to the spatial variation that they cause in the oscillation amplitudes, we then apply determined from the pixel-wise power map, as explained earlier and shown in figure 1, to the same quiet-sun patch data cube and compare the travel-time maps obtained with and without the application of . the results for are shown in figure 4, similarly to figure 2. figure 3b compares the dependence of , averaged over the surface area of the spots, for the small, medium, and large sized spots. how do the changes we have measured and shown in figures 2-4, which are purely due to the combined action of spatial amplitude variation and phase-speed filtering, compare with the actual travel times measured in sunspot regions? for this purpose, we have measured the mean travel-time shifts over the three sunspot regions shown in figure 1 with exactly the same measurement procedure as for the results shown in figures 2-4. the dependence of averaged over the area of the spots is shown in figure 5a. the fractional values, with respect to the measured for these spots, of the similar changes measured over the amplitude modulated areas shown in figure 3 are shown in panels and of figure 5. in summary, the results in figures 2-5 show that spatially localized amplitude variations in the oscillation field caused by sunspots (fig. 1), in combination with the phase-speed filtering in the analysis procedure, can account for mean travel-time shifts in the range of 5-40% of the observed travel-time anomalies in sunspots.
a simple experiment of applying the amplitude modulation after the phase-speed filtering leads to negligible changes in travel time. this proved to us that the effect is caused by the interaction of the phase-speed filter with the amplitude modulation. a clear understanding of the origin of such changes, and a method of accounting for them in the travel times measured in sunspots, are important because these changes can be a source of systematic errors in the subsurface inferences derived using differential inversion methods such as those described in and references therein. in the next section we build a numerical model of the action of the phase-speed filter and its interaction with an amplitude function. in this section we derive a simple model showing the effect of the amplitude suppression function on the center-to-annulus cross-covariance and hence the center-to-annulus travel time. the three generic steps in computing center-to-annulus cross-covariances are to filter the data, to average the data over the annulus and over a small region around the center point to obtain the ``annulus'' and ``center'' signals, and then to compute the cross-covariance of the ``center'' and ``annulus'' signals. we write the observed oscillation signal , e.g., the line-of-sight component of the velocity at the solar surface, as , where is the underlying oscillation signal, i.e., the signal that would be seen if the sunspot had no effect on the oscillation amplitude. horizontal position is given by and time by . the first step is to filter the observed signal. this is performed in the fourier domain ( ) by multiplying the fourier transform of the observed signal, , by a filter function: . this filtering can be transformed back from wavenumber to the space domain as a convolution over space: , where is the inverse fourier transform over wavenumber of the filter function and denotes a convolution over space. using the convention for discrete fourier transforms given in appendix 1 of gizon & birch (2004), eq. [eq:filter_action0] can be rewritten as , where the sum over surface position is taken over all points where is not zero, and is the grid spacing in the horizontal directions (near disk center, mm for full-disk mdi data and mm for two-by-two binned high-resolution mdi data). the second step is to average the filtered signal over the annulus and then, separately, over a small region around the center point to obtain the ``annulus'' and ``center'' signals, respectively (see for a detailed description of the exact procedure employed in this paper). in general we can write , where and are the weight functions for obtaining the ``annulus'' and ``center'' signals from the filtered data. combining the above equations (eqs. [eq:phi_ann]-[eq:phi_center]) with equation ([eq:filter_action]) we obtain , with the weight functions given by . we have now expressed (eqs. [eq:w_ann]-[eq:w_center]) the ``center'' and ``annulus'' signals as averages of the unfiltered data. the weight functions and express the weights with which the raw data are averaged, at each temporal frequency, to obtain these average signals. figure [fig:6] shows an example of these weight functions.
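as a rough numerical counterpart, a stripped-down center-to-annulus cross-covariance could be computed as follows; the simple top-hat annulus average is a simplified stand-in for the exact annulus and center weight functions derived above, and the geometry parameters are illustrative.

```python
import numpy as np

# a minimal sketch of the center-to-annulus cross-covariance, assuming a
# filtered cube phi(t, y, x); the annulus averaging is simplified relative
# to the measurement procedure described in the text.
def center_annulus_cov(phi, cy, cx, radius, dx, width=1.0):
    nt, ny, nx = phi.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.sqrt(((y - cy) * dx)**2 + ((x - cx) * dx)**2)
    ann = np.abs(r - radius) < width / 2.0            # annulus mask
    phi_cen = phi[:, cy, cx]                          # "center" signal
    phi_ann = phi[:, ann].mean(axis=1)                # "annulus" signal
    # temporal cross-covariance via fft (circular in time)
    f1, f2 = np.fft.fft(phi_cen), np.fft.fft(phi_ann)
    cov = np.real(np.fft.ifft(np.conj(f1) * f2)) / nt
    return np.fft.fftshift(cov)                       # lags centered at zero
```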
in general, the weight functions are not well localized, as a result of the strong horizontal wavenumber dependence of the phase-speed filtering. the final step is to compute the cross-covariance of the ``center'' and ``annulus'' signals. this cross-covariance, for center position and annulus radius , is given by . employing equations ([eq:w_ann]) and ([eq:w_center]) we can write equation ([eq:cov_filtered_phi_orig]) as , where the point-to-point cross-covariance of is . we can express equation ([eq:cov_filtered_phi]) in terms of the covariance of the underlying wavefield by noticing that the point-to-point cross-covariance of can be written in terms of the point-to-point cross-covariance , , of the underlying wavefield , , and the amplitude suppression function as . as a result, the center-to-annulus cross-covariance of is . this is the desired result; we have the center-to-annulus cross-covariance of the filtered wavefield in terms of the point-to-point cross-covariance of the underlying wavefield. as described in gizon & birch (2004), we can compute in terms of only the power spectrum of . the weight functions depend only on the filter function and on the averaging done to obtain the ``center'' and ``annulus'' signals. thus, for a given measurement scheme (i.e., filter and spatial averaging scheme) and for a particular power spectrum of the underlying wavefield, we can compute how the center-to-annulus cross-correlation of the modified wavefield depends on the amplitude function. from equation ([eq:cov_filtered_phi_a]) we can see that the effect of an amplitude function is to alter the weights with which different two-point cross-covariances contribute to the full center-to-annulus cross-covariance. in order to demonstrate the validity of equation ([eq:cov_filtered_phi_a]) we computed the cross-covariances for the case shown in the top left panel of figure 4: this case corresponds to applying the amplitude variation measured over the small sunspot (spot 1) to a quiet-sun patch and measuring the travel-time shifts for a distance of 4.96 mm. in figure [fig:7] we show a comparison of the travel times shown in figure 4 (top left panel) with the travel times measured from the cross-covariances predicted by equation ([eq:cov_filtered_phi_a]). we see that the model presented in this section predicts the effect of the amplitude function reasonably well. the results in the previous two sections show that systematic shifts in travel times are caused by the interaction of the spatial variation in oscillation amplitudes, whatever its origin (as demonstrated by our experiments with artificial modulation functions), with the phase-speed filtering in the analysis procedure. this understanding shows that if we could remove the strong spatial variation in the oscillation amplitudes caused by the objects of our main concern, i.e., sunspots, without affecting the temporal phase evolution of the oscillation signals, then this particular effect could be reversed, thereby removing the systematic shifts. this suggests that the oscillation signal at each pixel could be boosted by a constant factor obtained from the amplitude function (fig. 1) derived from the pixel-wise power maps: a simple way of estimating the pixel-wise scale-up factor is just taking the inverse of .
a natural way to carry out this remedy is to boost the amplitude of the oscillation signal in each pixel over the sunspot region so that the functions look smoother and have values similar to those of the more or less homogeneous quiet-sun regions. we choose the case of the small sunspot (spot 1) shown in the top row of figure 1. the pixel-wise scale-up factor is given by the inverse of . here, we concern ourselves with correcting the power deficit only within the sunspot, and so we determine the scale-up factor in a small area in and around the sunspot. the sunspot (spot 1) is found to be about 14 mm in diameter, as seen in (top left panel in figure 1), and we choose an area of about 28 mm square centered around this spot and calculate in this region. to minimize the effects of pixel-scale variations we smooth by a four-pixel boxcar. the resulting map of the scale-up factor is shown in figure 8a. we note that was determined for p modes within a frequency band of 1.7-5.3 mhz. hence we use , calculated as above, to boost the amplitudes of p modes in the same band of frequencies, i.e., we apply in fourier space to boost the amplitudes of p modes in this band. these are then inverted back to the time-space domain to get the corrected doppler velocity data cube, which is subjected to the same time-distance analysis procedure as before (section 2). figure 8b shows the frequency distribution of power averaged over the sunspot pixels (an area of 14 mm square around the spot center) before and after correction, i.e., before and after applying , and also the quiet-sun power averaged over an area of the same size. we note here that such artificial enhancement of oscillation amplitudes will not undo the real physical changes in travel times that the spots have caused, but it is expected to undo the changes arising from the amplitude modulations demonstrated in the previous section.
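a schematic version of this amplitude-boosting correction is sketched below; the clipping floor and the exact smoothing call are illustrative assumptions, not the values used for the mdi data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# a minimal sketch of the band-limited amplitude boost, assuming a
# doppler cube v(t, y, x) and the amplitude map a_map from the power
# analysis; the clipping floor guards against division by tiny values.
def boost_pmode_amplitudes(cube, a_map, dt, fmin=1.7e-3, fmax=5.3e-3):
    scale = 1.0 / np.clip(uniform_filter(a_map, size=4), 0.05, None)
    spec = np.fft.rfft(cube, axis=0)
    freq = np.fft.rfftfreq(cube.shape[0], d=dt)        # hz
    band = (freq >= fmin) & (freq <= fmax)
    spec[band] *= scale[None, :, :]                    # boost p-mode band only
    return np.fft.irfft(spec, n=cube.shape[0], axis=0)
```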
we calculated maps of the changes in mean phase travel times, , before and after the amplitude or power corrections described above. the results for two representative travel distances of 4.96 and 16.5 mm are shown in the two columns of figure 9. the top row shows the original or uncorrected travel times, the middle row shows the corrected travel times, and in the bottom row the differences are shown. it is instructive to compare with the corresponding panels (i.e., for the same ) in figures 2 and 4: the corrections are of the same sign and of similar magnitude as the in figures 2 and 4. this suggests that the simple amplitude-boosting correction scheme presented here reduces, to some extent, the systematic shift in the travel times caused by the reduction of oscillation amplitudes in the sunspot. possible complications for the correction scheme include the spatial averaging that was used to create the scale-up function, noise in the estimate of the amplitude suppression function, and the frequency dependence of the real solar suppression of oscillation amplitudes. a more detailed analysis of this correction scheme, and also of some variants of it, including a study of how the corrections in travel times affect the subsurface inferences through inversions, is left for a separate paper (paper ii). the change in travel times in response to changes in surface oscillation amplitudes depends on the spatial gradient of the amplitude modulation, the amount of reduction in the amplitudes, the travel distances, and the details of the phase-speed filter. the principal finding is that the largest and most significant changes occur only for waves with short travel distances (up to about 16 mm). values in the range of 5-40% of the travel-time anomalies that sunspots cause could be a result of oscillation amplitude reduction (figure 5b). this might indicate that the subsurface inferences from inversions would, correspondingly, undergo significant changes for the near-surface layers. however, the exact amount of the changes, and how the particular dependence of that we have shown here would influence the inferences regarding deeper layers, can only be assessed by doing detailed inversion analyses. we have shown that a simple correction, which involves boosting the p-mode amplitudes, is able to reverse the interaction of amplitude suppression and phase-speed filtering, thereby substantially removing the systematic changes or artifacts in travel times. we have shown this for the case of a small sunspot, where there is a measurable amount of p-mode power within the umbra. in large and very dark sunspots, the signal-to-noise ratio for the p-mode signal in the umbrae is too low to carry out this correction successfully. we have demonstrated that the effects of oscillation amplitude variations on travel times are caused by the phase-speed filtering procedure, which is, however, crucial to achieving high signal-to-noise as well as high spatial resolution in the measurements of travel times. spatial amplitude modulations (convolutions in fourier space) and the phase-speed filtering are non-commuting operations in the frequency-wavenumber (fourier) domain. the travel-distance dependence of the systematic changes in travel times is seen to be of the same form as the actual travel-time anomalies measured over sunspots (compare figures 3b and 5a). in spite of such a similarity between the systematic errors and the real changes in for sunspots, it is important that the other known signatures that sunspots leave in local helioseismic measurements are differentiated from the above.
in particular, sunspots show large asymmetries between the out- and in-going wave correlations, measured both in the amplitudes of cross-covariances and in travel times, as well as in the control-correlation ingression and egression measurements of helioseismic holography. these asymmetries possibly relate to irreversible changes in the acoustic waves impinging on real sunspots, and hence their origin is independent of the travel-time shifts that we have shown here. the contributions due to the effect that we have studied here are also likely to be present in helioseismic holography studies, because these studies do involve selecting, in fourier space, modes in a certain frequency-wavenumber range, and hence an influence similar to that of a phase-speed filter is possible. this work utilizes data from the solar oscillations investigation / michelson doppler imager (soi/mdi) on the solar and heliospheric observatory (soho). the mdi project is supported by nasa grant nag5-13261 to stanford university. soho is a project of international cooperation between esa and nasa. the work was supported in part by the uk particle physics and astronomy research council (pparc) through grants ppa/g/s/2000/00502, ppa/v/s/2000/00512 and pp/x501812/1. the work of acb was supported by nasa contract nnh04cc05c. spr thanks dr. kosovichev for discussions and critical comments. we thank dr. douglas braun for useful comments.
alamanni, n., cavallini, f., ceppatelli, g., & righini, a. 1990, , 228, 517. balthasar, h., & schmidt, w. 1993, , 279, 243. birch, a.c., kosovichev, a.g., & duvall, t.l., jr. 2004, , 608, 580. bogdan, t.j. 1997, , 477, 475. bogdan, t.j. 2000, , 192, 373. bogdan, t.j., hindman, b.w., cally, p.s., & charbonneau, p. 1996, , 465, 406. braun, d.c., la bonte, b.j., & duvall, t.l., jr. 1987, , 319, l27. braun, d.c., la bonte, b.j., & duvall, t.l., jr. 1988, , 335, 1015. braun, d.c., & lindsey, c. 1999, , 513, l79. braun, d.c., lindsey, c., & birch, a.c. 2004, baas, 204, 530. christensen-dalsgaard, j. 2002, _rev. mod. phys._, 74, 1073. couvidat, s., birch, a.c., kosovichev, a.g., & zhao, j. 2004, , 607, 554. duvall, t.l., jr., & harvey, j.w. 1986, in seismology of the sun and the distant stars, ed. gough (dordrecht: reidel), 105. duvall, t.l., jr., jefferies, s.m., harvey, j.w., & pomerantz, m.a. 1993, , 362, 430. duvall, t.l., jr., d'silva, s., jefferies, s.m., harvey, j.w., & schou, j. 1996, , 379, 235. duvall, t.l., jr. 1997, sol. phys., 170, 63. gizon, l., & birch, a.c. 2004, , 614, 472. gizon, l., & birch, a.c. 2005, living rev. solar phys., 2, 6. url (cited on 1/1/06): http://www.livingreviews.org/lrsp-2005-6. hindman, b.w., & brown, t.m. 1998, , 504, 1029. hindman, b.w., jain, r., & zweibel, e.g. 1997, , 476, 392. hughes, s.j., rajaguru, s.p., & thompson, m.j. 2005, , 627, 1040. jain, r., hindman, b.w., & zweibel, e.g. 1996, , 464, 476. kosovichev, a.g., & duvall, t.l., jr., in score-96: solar convection and oscillations and their relationship, ed. pijpers et al., 1997 (kluwer). kosovichev, a.g., & duvall, t.l., jr. 1999, current science, 77, 1467. kosovichev, a.g., duvall, t.l., jr., & scherrer, p.h. 2000, , 192, 159. libbrecht, k.g. 1992, , 387, 712. lindsey, c., & braun, d.c. 1997, , 485, 895. lindsey, c., & braun, d.c. 2000, science, 287, 1799. lindsey, c.
, & braun, d.c. 2005, , 620, l1107. nicholas, c.j., thompson, m.j., & rajaguru, s.p. 2004, , 225, 213. rajaguru, s.p., basu, s., & antia, h.m. 2001, , 563, 410. rajaguru, s.p., hughes, s.j., & thompson, m.j. 2004, , 220, 381. scherrer, p.h., et al. 1995, , 162, 219. schou, j. 1992, ph.d. thesis, aarhus univ. thomas, j.h., & stanchfield ii, d.c.h. 2000, , 537, 1086. venkatakrishnan, p., kumar, b., & tripathy, s.c. 2001, 202, 229. wachter, r., et al. 2006, in preparation. werne, j., birch, a.c., & julien, k. 2004, in proceedings of soho 14 / gong 2004, ``helio- and asteroseismology: towards a golden future'', ed. danesy, p., et al., p. 172. woodard, m.f. 1984, ph.d. thesis, univ. california, san diego. zhao, j., kosovichev, a.g., & duvall, t.l., jr. 2001, , 557, 384. zhao, j., et al. 2006, in preparation (paper ii).
|
it is well known that the observed amplitude of solar oscillations is lower in sunspots than in quiet regions of the sun. we show that this local reduction in oscillation amplitudes, combined with the phase-speed filtering procedure in time-distance helioseismic analyses, could be a source of systematic errors in the range of 5-40% in the measured travel-time anomalies of acoustic waves around sunspots. removing these travel-time artifacts is important for correctly inferring the subsurface structure of sunspots. we suggest an empirical correction procedure and illustrate its usage for a small sunspot. this work utilizes data from mdi/_soho_.
|
group testing is a combinatorial scheme developed for the purpose of efficient identification of infected individuals in a given pool of subjects. the main idea behind the approach is that if a small number of individuals are infected, one can test the population in groups, rather than individually, thereby saving in terms of the number of tests conducted. a tested group is said to be positive if at least one of its members tests positive; otherwise, the tested group is said to be negative. each individual from a population of size is represented by a binary test vector (or signature) of length , indicating in which of the tests the individual participates. the test outcomes are represented by a vector of size that equals the entry-wise boolean or function of the signatures of the infected individuals. the reconstruction task consists of identifying the infected individuals based on their test signatures, using the smallest signature length. the work in was extended in , where the authors proposed two coding schemes for use in information retrieval systems and for channel assignments aimed at relieving congestion in crowded communications bands. the coding schemes are now known as superimposed codes, including _disjunct/zero-false-drop_ (d/zfd) and _separable/uniquely decipherable_ (s/ud) codes. for compactness, we henceforth only use the terms disjunct (d) and separable (s) to describe such codes. these two classes of superimposed codes were extensively studied; see and references therein. superimposed codes are inherently asymmetric with respect to the elements of the input alphabet: if all tested subjects are negative (i.e., zero), the output is negative; otherwise, for any positive number of infected test subjects, the output is positive. consequently, a zero output carries significantly more information than an output equal to one. for many applications, including dna pooling and other genomic and biological testing methods with low sensitivity, a test may be positive only if a sufficiently large number of subjects test positive. in particular, a test outcome may be positive if and only if all subjects are infected. a probabilistic scheme that makes the group test symmetric with respect to the all-positive and all-negative tests was first described in . to the best of our knowledge, these are the only two papers dealing with symmetric group tests, although only within a probabilistic setting where the inputs are assumed to follow a binomial distribution. in addition, the method was only studied for two extremal parameter choices, using a game-theoretic framework in which the players' strategies (i.e.
, reconstruction methods) are fixed and involve some form of oracle information. a more recent approach to the problem of nearly-symmetric group testing was described in . threshold group testing assumes that a test is positive (or negative) if more (or fewer) than (or ) tested individuals are positive, where . a test produces an _arbitrary_ outcome (zero or one) otherwise. the latter feature makes the testing problem highly nontrivial and substantially different from symmetric group testing (sgt). we are concerned with describing symmetric group testing in a combinatorial setting, with extending the concept of symmetry using information-theoretic methods, and with developing analogues of and codes, termed and codes. our results include bounds on the size of the test set of symmetric group testing, construction methods for symmetric and codes, and efficient reconstruction algorithms in the absence and presence of errors. bounds on the sizes of and codes are based on the lovász local lemma and on constructive coding-theoretic arguments. the paper is organized as follows. section [sec:info-theory] contains a short exposition regarding symmetric group testing, and information-theoretic bounds on the number of required tests are derived in noisy and noise-free scenarios. in section [sec:ggt], a generalization of sgt (termed _generalized group testing_) is introduced by employing a lower and an upper threshold, and information-theoretic bounds on the number of required tests are derived. section [sec:superimposed] introduces symmetric superimposed codes and contains derivations of the bounds on the sizes of and codes. finally, section [sec:construction] presents some techniques for constructing symmetric superimposed codes. the use of sgt was originally motivated by applications in circuit testing and chemical component analysis. as an illustrative example, consider the situation where one is to test identically designed circuits using only serial and parallel component concatenation. in the serial testing mode, one can detect if all circuits are operational. in the parallel mode, one can detect if all circuits are non-operational. if at least one circuit is operational and one is non-operational, neither of the two concatenation schemes will be operational. efficiently detecting which of the circuits are non-operational would require a ternary-output group testing scheme. the reasons for introducing symmetric group testing with ternary outputs are twofold. the first motivation is to provide symmetry in the information content of the output symbols zero and one. note that in standard group testing, a zero output automatically eliminates all tested subjects from further consideration, which is not the case with the symbol one. in symmetric group testing, the symbols zero and one play a symmetric role. the second motivation comes from biological applications in which the sensitivity of the measurement devices is such that they can only provide a range for the number of infected individuals in a pool: for example, the output may be ``0'' if fewer than individuals in the test set are infected, ``1'' if more than individuals are infected, and ``2'' in all other cases. throughout the paper we use the word ``positive'' (``negative'') to indicate that a tested subject has (does not have) a given property.
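before formalizing these notions, the ternary outcomes described above are easy to simulate; the sketch below, with hypothetical pool and threshold values, covers both the symmetric rule and the range-based rule with a lower threshold l and an upper threshold u.

```python
import numpy as np

# outcome of one symmetric group test on a binary status vector
# (1 = positive subject, 0 = negative subject): "1" if all tested
# subjects are positive, "0" if all are negative, "2" otherwise.
def symmetric_test(status):
    s = int(np.sum(status))
    if s == len(status):
        return 1
    if s == 0:
        return 0
    return 2

# range-based (generalized) rule with lower threshold l and upper
# threshold u: "0" if at most l positives, "1" if more than u, "2" otherwise.
def generalized_test(status, l, u):
    s = int(np.sum(status))
    if s <= l:
        return 0
    if s > u:
        return 1
    return 2

print(symmetric_test([1, 1, 1]), symmetric_test([0, 1, 0]))   # -> 1 2
print(generalized_test([1, 0, 1, 1, 0], l=1, u=3))            # -> 2
```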
for the asymmetric testing strategy, the outcome of a test is negative if all tested subjects are negative, and positive otherwise. the outcome of a symmetric test is said to be positive (denoted by ``1''), inconclusive (denoted by ``2''), or negative (denoted by ``0''), if all the subjects tested are positive, if at least one subject is positive and another one is negative, or if all subjects are negative, respectively. let denote the total number of test subjects, and let denote the defective set with cardinality . furthermore, let denote the collection of codewords (or signatures) corresponding to ; note that the length of each codeword is equal to . also, let denote the noise-free observation vector (test outcome), equal to the _ternary addition_ of the codewords of the defective set, where ternary addition is defined as follows. [def1] for a ternary alphabet, we define _ternary addition_, $\oplus$, via the rules $0 \oplus 0 = 0$, $1 \oplus 1 = 1$, and $a \oplus b = 2$ otherwise. clearly, ternary addition is commutative and associative. note that in general the ternary addition operator is more informative than its binary counterpart; consequently, one expects the number of required tests in sgt to be smaller than the number of required tests in a similar asymmetric group testing (agt) scheme. we hence focus on finding upper and lower bounds on the minimum number of tests in an sgt scheme that ensures detection of the defective set with a probability of error asymptotically converging to zero. in the noise-free case, the observation vector is equal to the superposition of the signatures of the defectives, i.e., , where ``$\oplus$'' denotes ternary addition and stands for the signature of the defective subject. let the tests be designed independently, and let denote the probability that a subject is part of a given test, independently of all other subjects. it was shown in that for any , a sufficient number of tests for asymptotically achieving a probability of error equal to zero in the agt setting is lower bounded as , where denotes all ordered pairs of sets that partition the defective set such that and . in the above equation, stands for the mutual information between and ; for a single test, (where ) is a vector of size , with its entry equal to 1 if the defective subject in is in the test and 0 otherwise, while is the outcome of the test. also, a necessary condition for zero error probability in agt was shown to be of the form . by following the same steps as in the proofs of the above two bounds, it can be easily shown that the same sufficient and necessary conditions hold for the sgt scheme, except for the fact that the mutual information evaluates to a different form, given the change in the output alphabet. furthermore, it can be easily shown that for a fixed value of , and , as ; consequently, these bounds are asymptotically tight. in the following proposition, we evaluate the expressions for the mutual information in and for sgt. in the noise-free sgt, for , , where , and where for any , and . let (where ) denote the number of ones in , or alternatively, the hamming weight of .
from the definition of mutual information, one has . if , then ; otherwise, if , then . similarly, it can be shown that for agt, . in order to compare sgt and agt, fix and let . using the above definition, the bounds in ([sufficient]) and ([necessary]) asymptotically simplify to and , respectively. also, let and denote the values of for the choice of that minimizes the lower bounds in the agt and sgt expressions, respectively (the value of that maximizes may differ for agt and sgt). fig. 1 shows the behavior of these parameters with respect to . as can be seen, for , , but as grows, this ratio converges to one.
[figure 1: and versus in the noise-free and noisy scenarios.]
this type of noise accounts for false alarms in the outcomes of the tests. in this case, the noise vector , , is modeled as a vector of independent identically distributed (i.i.d.) bernoulli random variables with parameter ; the vector of noisy observations , , equals the ternary sum of the vector of noise-free observations and the noise vector, i.e., . note that in this model we assume that both 0s and 1s may change to the value 2, while 2s remain unaltered in the presence of noise. this model applies whenever dilution effects may occur, since adding one positive (negative) subject may change the outcome of a negative (positive) group. in the presence of binary additive noise, for any , \[ \left\{ \begin{array}{ll} \cdots\,h(q) & \textnormal{if}\ \ 1 \leq i \leq m-1 , \\[4pt] h\!\left(p^m q,\,(1-p)^m (1-q)\right) - \left[(1-p)^m + p^m\right] h(q) & \textnormal{if}\ \ i = m . \end{array} \right. \] it can be easily shown that if , where denotes the hamming weight of . similarly, if , then . also one has \( \cdots\,h(q) \), where denotes the hamming weight of . using these two expressions in the definition of the mutual information completes the proof. similarly, it can be shown that for agt, . fig. 1 also shows the behavior of and with respect to , for the dilution noise model with . as can be seen, sgt outperforms the agt scheme: for , the ratio is approximately equal to , and as grows it decreases. note that for small values of in this model, sgt does not offer any significant advantage when compared to agt: for , eqs. ([i_sban]) and ([i_aban]) are identical. in many applications, the experiment cannot be modeled by agt or sgt, and a more general model is required. we hence consider the generalized group testing (ggt) problem, in which the outcome of a test equals 0 if the number of defectives in the corresponding pool is less than or equal to (where ), equals 1 if the number of defectives is larger than (where ), and equals 2 if the number of defectives is larger than and less than or equal to . note that when , ggt reduces to agt, and when and , ggt reduces to sgt. in ggt, and for any , , where , with as before, denotes the probability that a subject is part of a given random test. similarly to the cases of agt and sgt, one can define as , where is defined in the same manner as in ([alpha_p]). fig. 2 shows versus for agt, sgt, and ggt. as can be seen, when , ggt outperforms sgt and agt. for example, when , one has and , and at , one has . the arguments that maximize , denoted by , , and , are tabulated in table 1 for different values of .
[ fig . 2 : $\alpha_a$ , $\alpha_s$ , and $\alpha_g$ versus $m$ in the absence of noise . ] [ table 1 : the maximizing arguments $p^*$ and the corresponding thresholds $l^*$ and $u^*$ for agt , sgt , and ggt , tabulated for different values of $m$ . ]

in this section , we introduce disjunct and separable symmetric superimposed codes for the sgt scheme . we postpone the discussion of generalized disjunct and separable codes to the full version of the paper . given two ternary vectors $u$ and $v$ , and a ternary addition operator $\oplus$ , we say that $u$ is _ included _ in $v$ if $u \oplus v = v$ . let the addition operator be the ternary addition described in def . [ def1 ] . a code is a symmetric $m$ - disjunct code if , for any sets of binary codewords , the ternary sum of one set being included in the ternary sum of a set of $m$ codewords implies that every codeword of the former set also belongs to the latter . a symmetric code is an $m$ - separable code if equality of the ternary sums of two sets of at most $m$ codewords implies that the two sets coincide . henceforth , we refer to $m$ as the strength of the code and use $N$ and $n$ to denote the number of codewords ( the signatures of the subjects ) and their length , respectively . the rate of a superimposed code of strength $m$ is defined as $\log N / n$ , where $\log$ stands for the logarithm base two . whenever apparent from the context , we use $N$ and $n$ instead of the strength - dependent notation . in this subsection , we derive an upper bound on the size of such codes using probabilistic methods . [ thm_upper ] let $m$ be a fixed number and let $n \rightarrow \infty$ ; if $N$ is asymptotically upper bounded by an exponential function of $n$ whose base involves a term of the form $[ \cdot ]^{1/m}$ , then there exists a symmetric disjunct superimposed code . the rate of this code is asymptotically upper bounded accordingly . let $\mathcal{c}$ be a set of $N$ codewords with length $n$ , and let $\mathcal{s}$ be a set of $m + 1$ codewords chosen from $\mathcal{c}$ . there are $\binom{N}{m+1}$ different possibilities for $\mathcal{s}$ . for each possible choice of $\mathcal{s}$ , define $e_{\mathcal{s}}$ as the event that at least one of the codewords in $\mathcal{s}$ is included in the ternary sum of the other codewords . from this definition , each $e_{\mathcal{s}}$ is mutually independent of all other such events except those whose defining sets share codewords with $\mathcal{s}$ . by an analogous argument , under a corresponding size condition there exists an asymmetric disjunct superimposed code , whose rate is asymptotically upper bounded by a corresponding constant . consequently , the ratio of the two rate bounds is approximately equal to 2 for small strengths and tends to 1 as the strength grows . in this subsection , we find an upper bound on the size of symmetric separable codes when $m = 2$ . [ thm_upper2 ] let $m = 2$ and let $n \rightarrow \infty$ ; if $N$ is asymptotically smaller than an explicit exponential function of $n$ , then there exists a symmetric separable superimposed code . no two distinct pairs of codewords of a separable code have an identical ternary sum . let $\mathcal{c}$ be a code of $N$ codewords with length $n$ and let $\mathcal{f}$ be a set of four codewords of $\mathcal{c}$ . there are $\binom{N}{4}$ different choices for $\mathcal{f}$ . for each choice of $\mathcal{f}$ , we define $e_{\mathcal{f}}$ as the event that at least two distinct pairs of codewords in $\mathcal{f}$ have an identical ternary sum . using lovász 's local lemma , if $$ e \ , \bar{p} \ , ( d + 1 ) < 1 , $$ where $d$ is the number of other events on which each $e_{\mathcal{f}}$ depends and $\bar{p}$ is a uniform upper bound on the event probabilities , then with positive probability none of the events occurs . one has an explicit estimate for $\bar{p}$ , and $d$ grows polynomially in $N$ as $n \rightarrow \infty$ .
substituting these estimates into ( [ lovasz2 ] ) and using their asymptotic behavior as $n \rightarrow \infty$ completes the proof . due to space limitations , we only consider symmetric separable codes with $m = 2$ , for which a construction based on coding - theoretic methods is particularly simple . let $h_{i_1}$ , $h_{i_2}$ , $h_{i_3}$ , $h_{i_4}$ denote four different columns of the parity - check matrix $h$ of a binary linear code ; furthermore , assume that $h_{i_1} \oplus h_{i_2} = h_{i_3} \oplus h_{i_4}$ . at the positions where this common ternary sum is 0 or 1 , all four codewords have a 0 or a 1 , respectively . if at some position the sum equals 2 , then necessarily exactly one codeword of each pair has a 1 at that position . if we denote the binary sum by $+$ , the former claims are equivalent to $h_{i_1} + h_{i_2} + h_{i_3} + h_{i_4} = 0$ . this contradicts the assumption that $h$ is a parity - check matrix of a linear code of distance at least five . hence , the columns of $h$ must form a symmetric 2 - separable code . from the gilbert - varshamov bound , one would conclude a lower bound relating the achievable number of codewords to the length of the code . for the code parameters large enough , the achievable number of codewords grows essentially as $\left ( 2^{1/3} \right )^{n - k}$ up to a constant factor , where $n$ is the length and $k$ the dimension of the underlying linear code .
r. dorfman , " the detection of defective members of large populations , " _ ann . math . statist . _ , pp . 436 - 440 , 1943 .
w. kautz and r. singleton , " nonrandom binary superimposed codes , " _ ieee trans . inf . theory _ , pp . 363 - 377 , oct . 1964 .
a. de bonis and u. vaccaro , " efficient constructions of generalized superimposed codes with applications to group testing and conflict resolution in multiple access channels , " _ theoretical computer science _ , pp . 223 - 243 , sept . 2003 .
y. cheng and d .- z . du , " efficient constructions of disjunct matrices with applications to dna library screening , " _ j. comput . _ , pp . 1208 - 1216 , 2007 .
a. g. dyachkov , a. j. macula , and v. rykov , " new construction of superimposed codes , " _ ieee trans . inf . theory _ , pp . 284 - 290 , jan . 2000 .
t. huang and c. weng , " a note on decoding of superimposed codes , " _ j. comb . optim . _ , vol . 7 , pp . 381 - 384 , nov . 2003 .
a. j. macula , " simple construction of d - disjunct matrices with certain constant weights , " _ discrete math . _ , pp . 311 - 312 , 1996 .
s. blumenthal , s. kumar , and m. sobel , " a symmetric binomial group - testing with three outcomes , " _ purdue symp . decision procedures _ , 1971 .
f. hwang , " three versions of a group testing game , " _ siam j. alg . discrete methods _ , vol . 5 , pp . 145 - 153 , june 1984 .
p. damaschke , " threshold group testing , " in _ general theory of information transfer and combinatorics _ , lncs , vol . 4123 , pp . 707 - 718 , 2006 .
n. alon and j. spencer , _ the probabilistic method _ , wiley - interscience series in discrete mathematics and optimization , 1998 .
g. atia and v. saligrama , " boolean compressed sensing and noisy group testing , " arxiv:0907.1061v4 , 2010 .
|
we describe a generalization of the group testing problem termed _ symmetric group testing_. unlike in classical binary group testing , the roles played by the input symbols zero and one are `` symmetric '' while the outputs are drawn from a ternary alphabet . using an information - theoretic approach , we derive sufficient and necessary conditions for the number of tests required for noise - free and noisy reconstructions . furthermore , we extend the notion of disjunct ( zero - false - drop ) and separable ( uniquely decipherable ) codes to the case of symmetric group testing . for the new family of codes , we derive bounds on their size based on probabilistic methods , and provide construction methods based on coding theoretic ideas .
|
unlike gan films , single - crystalline gan nanowires ( nws ) can be grown on a wide variety of crystalline as well as amorphous substrates by several epitaxial growth techniques . in plasma - assisted molecular beam epitaxy ( pa - mbe ) ,dense ensembles of gan nws form spontaneously under n - excess at elevated temperatures .regardless of the substrate , spontaneously formed single gan nws are virtually free of both extended defects and homogeneous strain .nevertheless , as for any other self - organized growth process , the degree of control on the properties of gan nw ensembles is rather limited . in pa - mbe, the diameter of gan nws can be regulated by the ga / n ratio and the nw number density is expected to be determined by the diffusion length of ga adatoms . for low ga / n ratios, the diameter of single nws can be as small as nm .however , in most cases , the effective average diameter becomes larger due to the coalescence of adjacent nws .this undesired phenomenon , caused by nw mutual misorientation as well as nw radial growth , is favored by the high nucleation density of gan nws ( typically , in the range of ) .unfortunately , nw coalescence not only results in a poor degree of control over the nw morphology but also introduces extended defects as well as inhomogeneous strain .therefore , in order to take advantage of spontaneously formed nws for the fabrication of gan - based devices on dissimilar substrates , it is highly desirable to develop growth approaches designed to gain control over the nucleation and growth of gan nws .carnevale et al . proposed a two - step growth method to separate the nucleation and growth processes .they showed that , by increasing the substrate temperature during the nucleation stage , it is possible to disentangle and vary the number density , the height , and the length of spontaneously formed gan nws .however , even though they were able to vary the nw number density in a wide range , the resulting nw ensembles were rather short ( nm ) and inhomogeneous in height. motivated by the experiments performed by carnevale et al . , we systematically investigate in the present work the impact of modifying the growth conditions at different stages during the spontaneous formation of homogeneous gan nw ensembles on si . in the first part of this work, we analyze the impact of increasing the substrate temperature at different times during the nucleation of gan nws .the latter is monitored in situ by reflection high - energy electron diffraction ( rheed ) as well as line - of - sight quadrupole mass spectrometry ( qms ) . while we were not able to reduce significantly the number density for ensembles of long and homogeneous nws , we gained control over area coverage , average nw diameter and coalescence degree . 
in the second part of this work , we investigate the influence of the growth conditions employed during the incubation stage that precedes nw nucleation . the results demonstrate that the properties of the nw ensemble do not depend on the incubation stage . therefore , a two - step growth approach , where more favorable growth conditions for nw nucleation are employed during the incubation stage , can be used to reduce the total growth time without affecting the properties of the final nw ensemble . this growth approach paves the way for growing gan nws under more extreme growth conditions ( lower ga / n ratios and higher substrate temperatures ) , which typically result in smaller nw diameters , lower degrees of coalescence , and improved optical properties . all samples were grown on si substrates in an mbe system equipped with a solid - source effusion cell for ga as well as a radio - frequency n plasma source for active n. the impinging fluxes were calibrated in gan - equivalent growth rate units of nm / min , as described in ref . ; a growth rate of 1 nm / min corresponds to a well - defined atomic flux per unit area and second . the desorbing ga flux was monitored in situ during the experiments by qms . the qms response to the ga signal was also calibrated in gan - equivalent growth rate units , as explained in detail elsewhere . since there is no ga accumulation on the surface , the ga incorporation rate per unit area ( i.e. , the deposition rate ) is given by the difference between the impinging and the desorbing ga fluxes . the substrate temperature was measured with an optical pyrometer calibrated to the surface reconstruction transition temperature of si . the as - received si substrates were etched using diluted hf . in order to remove any residual si oxide from the surface , the substrates were outgassed at elevated temperature for several minutes prior to growth . afterward , the substrates were exposed to the active nitrogen flux at the growth temperature for several minutes . then , the growth was initiated by opening the ga shutter . for all experiments , the active n flux was kept constant . for the two - step growth samples , the substrate temperature was first kept at its initial value . then , the substrate temperature was increased to its final value at the specified times during the nw nucleation stage , at a constant ramp rate , keeping the ga and n shutters open during the entire process . for the two - step sample in section [ sec2 ] , the substrate temperature was instead increased first , and subsequently the ga flux was raised as well . at the end , the growth was stopped by closing all shutters and cooling down the samples . the morphology of the samples was investigated by cross - sectional and plan - view scanning electron microscopy following the method established in ref . . for each sample , we analyzed several plan - view micrographs containing a few hundred nws . using the open - source software imagej , we derived the area , perimeter , and circularity of the nw top facets . the circularity is defined by $c = 4 \pi a / p^2$ , where $a$ is the cross - sectional area and $p$ the perimeter , and was used to estimate the coalescence degree as explained below . while uncoalesced nws typically exhibit high values of $c$ , anisotropic radial growth as well as nw coalescence usually lead to shapes exhibiting lower values of $c$ . as proposed in ref .
, we used a threshold value of the circularity to distinguish between single nws and coalesced aggregates . furthermore , we introduced an additional criterion , considering as uncoalesced only those nws that also exhibit a sufficiently small equivalent - disk diameter . this is due to the fact that , for highly coalesced nw ensembles , nw aggregates may cluster as bundles with roundish cross - sectional shapes exhibiting circularity values higher than the threshold . the coalescence degree was then assessed as $\sigma_c = a_{\mathrm{coal}} / a_{\mathrm{total}}$ , where $a_{\mathrm{coal}}$ is the total cross - sectional area of coalesced nws and $a_{\mathrm{total}}$ the total cross - sectional area of all nws considered in the analysis . corresponding to the volume fraction of coalesced nws , a coalescence degree defined in this way is the relevant quantity when examining data from experimental techniques probing the material volume . accordingly , the average equivalent - disk diameter of uncoalesced nws was determined from a normal distribution fitted to the diameter distribution of all uncoalesced nws . the total nw number density was calculated taking into account that coalesced aggregates are composed of several nws . the number of nws contained in a coalesced aggregate was estimated by dividing the cross - sectional area of the aggregate by the average cross - sectional area of uncoalesced nws . continuous - wave micro - photoluminescence ( -pl ) experiments were carried out using a hecd laser ( 325 nm ) with a low excitation density . the luminescence was dispersed by a monochromator and detected by a charge - coupled device detector .

[ fig . 1 : temporal evolution of the ga incorporation rate per unit area and fit of eq . ( [ equation3 ] ) to the experimental data . the blue dashed and the green dashed - dotted lines indicate the contributions of nw nucleation and collective effects , respectively ; the average delay time for nw formation is marked . for samples a2 - a4 , the substrate temperature was increased at the indicated times . ]

[ tab : growthconditions_abcd ] growth conditions of samples a1 - a4 :

                                       sample a1   sample a2   sample a3   sample a4
    step 1   substrate temperature ( °c )   815        815         815         815
             growth time ( min )            270        80          65          50
    step 2   substrate temperature ( °c )   -          845         845         845
             growth time ( min )            -          190         205         220

here , we investigate to what extent the morphology and distribution of gan nws can be controlled using a two - step growth process where the growth conditions are modified during the nucleation stage . figure [ fig : figure1 ] shows the temporal evolution of the ga incorporation rate per unit area for a reference gan nw ensemble prepared in a conventional fashion , namely , keeping all growth parameters constant throughout the entire process . this reference sample , referred to as sample a1 , was grown at 815 °c using a constant ga flux . the total growth time was 270 min ( 4.5 hours ) . after a certain incubation time , the appearance of gan - related spots in the rheed pattern ( not shown here ) as well as the initial increase of the ga incorporation rate ( fig . [ fig : figure1 ] ) reveal the formation of the first gan nws . afterward , the incorporation rate rapidly increases due to the continuous nucleation of gan nws . as discussed in ref . , the variation in the slope of this transient indicates the end of the nucleation stage . the final increase before reaching steady - state growth conditions is due to the onset of collective effects , i.e. , the shadowing of the impinging fluxes by long nws and the exchange of ga atoms between adjacent nws . as reported in ref . , the temporal evolution of the ga incorporation rate can be described by the sum of two logistic functions , eq . ( [ equation3 ] ) , that describe the respective contributions of nw nucleation and collective effects .
in equation ( [ equation3 ] ) , the first logistic term is characterized by the average delay time for nw formation , a rate constant related to the nw formation rate after the incubation time , and the final value of the ga incorporation rate in the absence of collective effects . analogously , the second term is characterized by the average delay time for the onset of collective effects , a rate constant , and the contribution of collective effects to the final value of the incorporation rate . as shown in fig . [ fig : figure1 ] , eq . ( [ equation3 ] ) yields an excellent fit of the experimental data . in the figure , we also depict the individual contributions of the two logistic functions . the average delay time for nw formation derived from the fit is marked in fig . [ fig : figure1 ] . in view of the fact that each nw experiences a different delay time before it is formed , we expect to gain control over both nw number density and coalescence degree by modifying the substrate temperature during the nucleation stage ( stage ii in fig . [ fig : figure1 ] ) , as reported by carnevale et al . since the average delay time for nw formation increases exponentially with substrate temperature , raising the temperature before completion of the nw nucleation should suppress further nucleation and thus decrease the final nw number density and coalescence degree . to investigate this possibility , we prepared a series of samples where the substrate temperature was increased by 30 °c at different times during the nucleation stage ( samples a2 - a4 ) . for all samples , we used the same impinging fluxes , initial substrate temperature , and total growth time as for sample a1 . as indicated in fig . [ fig : figure1 ] , for samples a2 - a4 the substrate temperature was increased after 80 , 65 , and 50 min , respectively . therefore , before increasing the temperature , the nucleation was close to its end for sample a2 , well advanced for sample a3 , and at an early stage for sample a4 . the growth conditions of samples a1 - a4 are summarized in table [ tab : growthconditions_abcd ] . figures [ fig : figure2 ] ( a ) - ( d ) show plan - view scanning electron micrographs of samples a1 - a4 , respectively . the introduction of a two - step growth procedure leads to a clear reduction in the area fraction covered by gan nws . also , the earlier the substrate temperature is raised during nucleation , the stronger the influence on the final nw ensemble . figures [ fig : figure2 ] ( e ) - ( h ) show the corresponding cross - sectional scanning electron micrographs . the nws in sample a1 are the longest of the series . when taking into account the average delay time for nw formation , we find that the axial growth rate is close to the impinging active n flux , in good agreement with the experiments reported in refs . . in contrast , the average nw length for samples a2 - a4 is clearly smaller . the reduction in the axial growth rate , which becomes ga - limited , is due to the enhanced ga desorption during the second step of the growth . figure [ fig : figure3 ] depicts the circularity histograms derived from the analysis of the cross - sectional shapes of the gan nws of samples a1 - a4 . the histogram of the reference sample [ fig . [ fig : figure3](a ) ] is rather broad , reflecting a wide variety of cross - sectional shapes as a result of nw coalescence . interestingly , the variation in the substrate temperature during the nucleation stage has a strong impact on the distribution of cross - sectional shapes . as shown in figs . [ fig : figure3 ] ( b ) - ( d ) , the earlier the temperature is raised during the nucleation stage , the narrower the circularity histogram .
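as a rough illustration of the fitting procedure behind eq . ( [ equation3 ] ) , the following sketch ( python with scipy ; the synthetic data and all parameter values are ours , chosen for illustration only ) fits the sum of two logistic functions to a sampled incorporation - rate transient and reads off the two delay times :

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, a, k, t0):
        return a / (1.0 + np.exp(-k * (t - t0)))

    def two_logistics(t, a1, k1, t1, a2, k2, t2):
        # nucleation contribution plus the contribution of collective effects
        return logistic(t, a1, k1, t1) + logistic(t, a2, k2, t2)

    # synthetic qms-like transient (arbitrary units), for illustration only
    t = np.linspace(0, 270, 271)                   # growth time in min
    truth = two_logistics(t, 0.7, 0.12, 40, 0.3, 0.08, 120)
    data = truth + 0.01 * np.random.default_rng(1).normal(size=t.size)

    p0 = (0.5, 0.1, 30, 0.5, 0.1, 100)             # initial guesses
    popt, _ = curve_fit(two_logistics, t, data, p0=p0, maxfev=20000)
    a1, k1, t1, a2, k2, t2 = popt
    print("average delay time for nw formation (min):", t1)
    print("onset of collective effects (min):", t2)

in this parameterization , the two midpoint parameters play the roles of the average delay time for nw formation and the average delay time for the onset of collective effects , respectively .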
figure [ fig : figure4](a ) shows the coalescence degrees for samples a1 - a4 derived from their circularity histograms as well as the area fraction covered by gan nws . the coalescence degree steadily decreases from the value of the reference sample to that of sample a4 when decreasing the time at which the substrate temperature is increased . the figure also evidences a clear decrease in the area fraction . in principle , the reduction of both the coalescence degree and the area fraction can be caused by the suppression of further nucleation and/or a decrease in radial growth during the second growth step . next , we extract the average diameter of uncoalesced nws and the total nw number density . figure [ fig : figure4](b ) presents the values of these parameters for samples a1 - a4 . the average nw diameter steadily decreases from sample a1 to a4 . regarding the total nw number density , the effect of modifying the substrate temperature during growth is not as clear . increasing the temperature during the second half of the nucleation stage does not seem to influence the total nw number density , which remains almost constant . only when the temperature is increased at the beginning of the nucleation stage do we observe a clear reduction , for sample a4 . therefore , we conclude that the continuous decrease in the coalescence degree observed in fig . [ fig : figure4](a ) is mainly caused by a reduction in radial growth during the second step of the growth . this effect is the result of a decrease in the effective ga / n ratio due to the enhanced ga desorption . the high coalescence degree of sample a4 , despite its low total nw number density and small average nw diameter , is surprising . a close inspection of the scanning electron micrograph shown in fig . [ fig : figure2](h ) reveals that a significant number of nws is bent . in addition , the distance between coalesced aggregates is much larger than the average nw diameter [ see fig . [ fig : figure2](d ) ] . we suggest that nw coalescence is not only due to nw mutual misorientation and nw radial growth but is also induced by electrostatic attraction during growth . the latter effect is most likely caused by the exposure of the nw ensemble to electrons originating from the n plasma source or the rheed measurements . electrostatic attraction between adjacent nws has already been reported and systematically investigated for si nws . this phenomenon is expected to be more pronounced for thin and long nws , such as those of sample a4 . the intrinsic coalescence degree of sample a4 is therefore expected to be below the measured value . figure [ fig : figure4](c ) shows normalized photoluminescence spectra at 10 k for samples a1 - a4 . the intensities of all samples are comparable despite the reduction in area coverage [ fig . [ fig : figure4](a ) ] . for all samples , the spectra are dominated by the recombination of a excitons bound to neutral oxygen , which exhibits a narrow full width at half maximum . these findings indicate that the given set of growth conditions affects neither the inhomogeneous strain in the nanowires nor the density of nonradiative recombination centers .
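the shape analysis used throughout this section can be reproduced from per - object areas and perimeters ; a minimal sketch ( python ; the circularity is the standard imagej descriptor $4 \pi a / p^2$ noted earlier , while the two threshold values below are placeholders of ours rather than the values used in this work ) :

    import numpy as np

    def circularity(area, perimeter):
        # standard imagej shape descriptor: 4*pi*A / P**2 (1 for a perfect disk)
        return 4.0 * np.pi * area / perimeter**2

    def coalescence_degree(areas, perimeters, c_thresh, d_thresh_nm):
        # an object counts as uncoalesced if its circularity exceeds c_thresh
        # and its equivalent-disk diameter stays below d_thresh_nm; the
        # coalescence degree is the area fraction of all remaining objects
        areas = np.asarray(areas, dtype=float)
        perimeters = np.asarray(perimeters, dtype=float)
        circ = circularity(areas, perimeters)
        diam = 2.0 * np.sqrt(areas / np.pi)   # equivalent-disk diameter
        uncoalesced = (circ >= c_thresh) & (diam <= d_thresh_nm)
        sigma_c = areas[~uncoalesced].sum() / areas.sum()
        return sigma_c, uncoalesced

    # illustrative numbers only (areas in nm^2, perimeters in nm)
    areas = [700.0, 800.0, 5200.0, 650.0]
    perims = [95.0, 102.0, 420.0, 92.0]
    sigma_c, mask = coalescence_degree(areas, perims, c_thresh=0.75, d_thresh_nm=60.0)
    print("coalescence degree:", sigma_c)

because the coalescence degree is an area ( and hence volume ) fraction , a single large aggregate dominates the value even when most objects are single nws , which is exactly the behavior intended for volume - sensitive experimental probes .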
the results of these experiments demonstrate that , independent of the impinging fluxes , it is possible to obtain a certain degree of control over the morphology and distribution of gan nws by increasing the substrate temperature during the nucleation stage . this growth approach is found to be indeed an efficient method to decrease the average nw diameter as well as the area coverage . however , it seems that reduced nw diameters make the nws more susceptible to electrostatic attraction , leading to nw bending and non - intrinsic nw coalescence . therefore , despite the reduction in nw diameter , the presented nw ensembles still suffer from a non - negligible degree of coalescence . we anticipate that additional measures aimed at preventing electrostatic attraction of nws , such as negatively biasing the substrate to deflect electrons originating from the n plasma , may enable a further reduction of the nw coalescence . the total nw number density , however , cannot be significantly decreased when growing long and homogeneous nw ensembles . this fact indicates that during the second step of the growth further nucleation is not completely suppressed . our results are in striking contrast to those obtained by carnevale et al . , namely , a high degree of control over nw density by using a similar two - step growth approach . the underlying reason for this discrepancy could be the much shorter growth times used by carnevale et al . the shorter times resulted not only in a lower density but also in much shorter and inhomogeneous nw ensembles . these apparently contradictory results can be reconciled by assuming that , despite the presumably unfavorable nucleation conditions , the nw number density increases slowly but steadily during the second step of the growth until a homogeneous nw ensemble ( as those shown in fig . [ fig : figure2 ] ) is formed due to the onset of collective effects .

[ fig . [ fig : figure6 ] : circularity histograms of samples b1 and b2 ; the coalescence degrees are derived from eq . ( [ equation2 ] ) . ]

[ tab : growthconditions_ef ] growth conditions of samples b1 and b2 :

                                       sample b1   sample b2
    step 1   substrate temperature ( °c )   855        815
             growth time ( min )            420        25
             ga flux ( nm / min )           16.5       5.5
    step 2   substrate temperature ( °c )   -          855
             growth time ( min )            -          335
             ga flux ( nm / min )           -          16.5

we have recently reported that a two - step growth approach , as the one described above , can also be used to achieve higher substrate temperatures . this possibility arises from the fact that the long incubation time that hinders direct growth at elevated temperatures can be arbitrarily reduced by using a lower substrate temperature during the first step of the growth . next , we investigate whether the total growth time can be reduced , without modifying the morphological and optical properties of the nw ensemble , using a two - step growth approach . to this end , we prepared a second reference nw sample , referred to as sample b1 , at a substrate temperature of 855 °c . due to the high substrate temperature , nominally ga - rich growth conditions ( a ga flux of 16.5 nm / min ) were required to compensate for the high ga desorption rate . the total growth time was 7 h , and the delay time before detecting the formation of the first gan nws by rheed ( i.e. , the incubation time ) was as long as 90 min . the corresponding average delay time for nw formation was longer still . we then prepared another sample using a two - step growth approach ( sample b2 ) .
during the first step of the growth , we used the growth conditions of sample a1 , namely , a ga flux of 5.5 nm / min and a substrate temperature of 815 °c . after 25 min , we observed the onset of nw nucleation by rheed . at that point , we changed to the growth conditions of sample b1 by first increasing the substrate temperature and subsequently the ga flux . during the entire process , the ga and n shutters were kept open . therefore , during the nw nucleation and elongation stages , the growth conditions of samples b1 and b2 were the same . consequently , the incubation time was reduced to 25 min and the growth was finished after a total growth time of 6 h , i.e. , 1 h less than for sample b1 . the growth conditions of both samples are summarized in table [ tab : growthconditions_ef ] . figure [ fig : figure5 ] shows plan - view [ ( a ) and ( c ) ] and cross - sectional [ ( b ) and ( d ) ] scanning electron micrographs of samples b1 and b2 . despite the different growth conditions used during the incubation stage and the shorter total growth time employed for growing sample b2 , the sample morphology is indistinguishable . for both samples , the nws exhibit a similar density as well as comparable lengths and cross - sectional shapes . interestingly , the cross - sectional scanning electron micrographs reveal that the diameter of these gan nws increases during growth . such a temporal evolution in the nw diameter , not observed before in nw ensembles grown at lower temperatures , suggests that the effective ga / n ratio is not constant during growth at 855 °c for the impinging fluxes used in these experiments . figure [ fig : figure6 ] depicts the circularity histograms of the cross - sectional shapes of nws from samples b1 and b2 . due to the increase in nw diameter during growth , the coalescence degrees were derived from the circularity histograms without introducing a diameter limit for uncoalesced nws . the coalescence degrees are indicated in fig . [ fig : figure6 ] and listed in table [ tab : properties_ef ] , where we also show the values of the average length , diameter , area coverage , and total number density . as expected from the visual inspection of fig . [ fig : figure5 ] , the quantitative analysis of the scanning electron micrographs reveals that , within the experimental error , the morphological properties of samples b1 and b2 agree fairly well .

[ tab : properties_ef : length ( µm ) , area coverage ( % ) , coalescence degree ( % ) , average uncoalesced diameter ( nm ) , and nw number density of samples b1 and b2 . ]

[ fig . 7 : low - temperature ( 10 k ) pl spectra of samples b1 ( black , top , multiplied by a factor of 10 for clarity ) and b2 ( blue , bottom ) . ]

finally , the low - temperature ( 10 k ) near band - edge pl spectra of samples b1 and b2 are shown in fig . [ fig : figure7 ] . in both cases , the spectrum is dominated by the recombination of a excitons bound to neutral o and si donors , ( o0 , xa ) and ( si0 , xa ) , at their characteristic energies . due to the high substrate temperature , these transitions are remarkably narrow . besides these lines , we also observe the recombination of b excitons bound to neutral donors , ( d0 , xb ) , free a excitons , xa , a excitons bound to neutral acceptors , ( a0 , xa ) , and the so - called ux band . for both samples , all these transitions are centered at the same energy and exhibit comparable linewidths and intensities .
therefore , the pl spectra and intensities of samples b1 and b2 are quite similar . the strong similarities between the morphological and optical properties of samples b1 and b2 reveal that the growth conditions employed during the incubation stage , i.e. , prior to nw nucleation , do not influence the properties of the final nw ensemble . by choosing appropriate growth conditions , the incubation time can thus be reduced to arbitrary values . thereby , the present two - step growth approach enables the growth of nw ensembles in shorter times without affecting their final properties . we have investigated how a variation in the growth parameters during either the incubation or the nucleation stage influences the final properties of homogeneous gan nw ensembles prepared by pa - mbe . as a result , we gained both valuable insight into the nucleation mechanisms of spontaneously formed gan nws and developed growth methods to improve the control over the spontaneously formed nws . we demonstrated that , in contrast to what would be expected , the growth conditions used during the incubation stage influence neither the morphological properties nor the low - temperature pl spectra of nw ensembles . therefore , it is possible to obtain nw ensembles with similar properties but in shorter growth times by using more favorable growth conditions for nw nucleation ( lower substrate temperature and/or higher impinging fluxes ) during the incubation stage . this finding is important for nw growth at higher substrate temperatures , where the incubation time becomes the limiting factor . in contrast , a variation in the growth parameters during the nucleation stage has a strong influence on the properties of the final nw ensemble . the impact on the final morphology depends on the time at which the growth conditions are modified during the nucleation stage . this growth approach does not result in a significant reduction in the nw number density because further nucleation is not completely suppressed after modifying the growth parameters . however , a two - step growth approach is found to be an efficient method to gain control over other important parameters such as area coverage , coalescence degree , and average nw diameter . in order to further reduce the coalescence degree of spontaneously formed gan nws , additional measures have to be taken to prevent the electrostatic attraction of thin nws . we thank anne - kathrin bluhm for providing the scanning electron micrographs presented in this work , vladimir m. kaganer for fruitful discussions on the attraction and coalescence of nws , hans - peter schönherr for his dedicated maintenance of the mbe system , and christian hauswald for a critical reading of the manuscript . financial support of this work by the deutsche forschungsgemeinschaft within sfb 951 is gratefully acknowledged .
|
we investigate the influence of modified growth conditions during the spontaneous formation of gan nanowires on si in plasma - assisted molecular beam epitaxy . we find that a two - step growth approach , where the substrate temperature is increased during the nucleation stage , is an efficient method to gain control over the area coverage , average diameter , and coalescence degree of gan nanowire ensembles . furthermore , we also demonstrate that the growth conditions employed during the incubation time that precedes nanowire nucleation do not influence the properties of the final nanowire ensemble . therefore , when growing gan nanowires at elevated temperatures or with low ga / n ratios , the total growth time can be reduced significantly by using more favorable growth conditions for nanowire nucleation during the incubation time .
|
understanding the interaction of coupled individual systems continues to receive interest in the engineering research community . recently , more attention has been paid to cluster synchronization problems which have much wide applications , such as segregation into small subgroups for a robotic team or physical particles , predicting opinion dynamics in social networks , and cluster phase synchronization of coupled oscillators . in the models reported in most of the literature ,the clustering pattern is predefined and fixed ; research focuses are on deriving conditions that can enforce cluster synchronization for various system models .preliminary studies in reported algebraic conditions on the interaction graph for coupled agents with simple integrator dynamics . subsequently , a cluster - spanning tree condition is used to achieve intra - cluster synchronization for first - order integrators ( discrete time or continuous time ) , while inter - cluster separations are realized by using nonidentical feed - forward input terms . for more complicated system models , e.g. , nonlinear systems ( ) and generic linear systems ( ) , both control designs and inter - agent coupling conditionsare responsible for the occurrence of cluster synchronization . for coupled nonlinear systems , e.g. , chaotic oscillators ,algebraic and graph topological clustering conditions are derived for either identical models ( ) or nonidentical models ( ) under the key assumption that the input matrix of all systems is identical and it can stabilize the system dynamics of all individual agents via linear state feedback ( i.e. , the so - called quad condition ) . for identical generic linear systems which are partial - state coupled , a stabilizing control gain matrix solved from a ricatti inequality is utilized by all agents , and agents are pinned with some additional agents so that the interaction subgraph of each cluster contains a directed spanning tree .the system models introduced above can describe a rich class of applications for multi - agent systems .a common characteristic is that the uncoupled system dynamics of all the agents can be stabilized by linear state feedback attenuated by a unique matrix ( i.e. 
, static state feedback ) .this simplification allows the derivation of coupling conditions to be independent of the control design of any agent , and thus offers scalability to a static coupling strategy .this kind of benefit still exists for nonidentical nonlinear systems which are full - state coupled , as all the system dynamics can be constrained by a common lipchitz constant ( lipchitz can imply the quad condition ) .however , for the class of partial - state coupled nonidentical linear systems , the stabilizing matrices for distinct linear system models are usually different .then the coupling conditions under static couplings will be correlated with the control designs of all individual systems .this correlation not only harms the scalability of a coupling strategy but also increases the difficulty in specifying a graph topological condition on the interaction graph .the goal of this paper is to achieve state cluster synchronization for partial - state coupled nonidentical linear systems , where agents with the same uncoupled dynamics are supposed to synchronize together .this is a problem of practical interest , for instance , maintaining different formation clusters for different types of interconnected vehicles , providing different synchronization frequencies for different groups of clocks using coupled nonidentical harmonic oscillators , reaching different consensus values for people with different opinion dynamics , and so on . in order to relieve the difficulties in using the conventionalcouplings , couplings with a _ dynamic _ structure is proposed by introducing a vanishing auxiliary variable which facilitates interactions among agents . with the proposed dynamic couplings , an algebraic necessary and sufficient condition , which is independent of the control design ,is derived .this newly derived algebraic condition subsumes those published for integrator systems in as special cases . due to the entanglement between nonidentical system matrices and the parameters from the interaction graph , the algebraic condition is not straightforward to check .thus , a graph topological interpretation of the algebraic condition is provided under the assumption that the interaction subgraph associated with each cluster contain a directed spanning tree .we also derive lower bounds for the local coupling strengths in different clusters , which are independent of the control design due to the dynamic coupling structure .this spanning tree condition is further shown to be a necessary condition when the clusters and the inter - cluster links form an acyclic structure .this conclusion reveals the indispensability of direct links among agents belonging to the same cluster , and further strengthens the sufficiency statement presented initially in .another contribution of the proposed dynamic couplings in comparison to those static couplings in is that the lower bound of a global factor which weights the whole interaction graph is also independent of the control design . for this reason ,the least exponential convergence rate of cluster synchronization is characterized more explicitly than that in .the derived results in this paper are illustrated by simulation examples for two applications : cluster heading alignment of nonidentical ships and cluster phase synchronization of nonidentical harmonic oscillators .the organization of this paper is as follows : following this section , the problem formulation is presented in section [ sec : problemstatement ] . 
in section [ sec : syn_leaderless ] , both algebraic and graph topological conditions for cluster synchronization are developed . simulation examples are provided in section [ sec : simulation ] . concluding remarks and discussions of potential future investigations follow in section [ sec : conclusion ] . consider a multi - agent system consisting of $L$ agents , indexed by $\{ 1 , \ldots , L \}$ , and $N$ clusters . let $\{ \mathcal{C}_1 , \ldots , \mathcal{C}_N \}$ be a nontrivial partition of the index set , that is , the clusters are nonempty , pairwise disjoint , and their union covers all indices . we call each $\mathcal{C}_j$ a cluster . two agents belong to the same cluster if their indices lie in the same cell of the partition . agents in the same cluster are described by the same linear dynamic equation : $\dot{x}_i ( t ) = a_j x_i ( t ) + b_j u_i ( t )$ for $i \in \mathcal{C}_j$ , where $x_i ( t )$ , with initial value $x_i ( 0 )$ , is the state of agent $i$ and $u_i ( t )$ is the control input ; $a_j$ and $b_j$ are constant system matrices which are distinct for different clusters . a directed interaction graph $\mathcal{G}$ is associated with system ( [ sys : linearmodel ] ) such that each agent is regarded as a node , and a link from agent $k$ to agent $i$ corresponds to a directed edge $( k , i )$ . an agent $k$ is said to be a neighbor of $i$ if and only if $( k , i )$ is an edge of $\mathcal{G}$ . the adjacency matrix $[ a_{ik} ] \in \mathbb{R}^{L \times L}$ collects the edge weights , and the laplacian of $\mathcal{G}$ is defined as $\mathcal{L} = [ l_{ik} ]$ , where $l_{ii} = \sum_{k \neq i} a_{ik}$ and $l_{ik} = - a_{ik}$ for any $k \neq i$ . corresponding to the partition , a subgraph $\mathcal{G}_j$ of $\mathcal{G}$ contains all the nodes with indices in $\mathcal{C}_j$ and the edges connecting these nodes ; see fig . [ fig : topology_illu_leaderless ] for an illustration . without loss of generality , we assume that each cluster $\mathcal{C}_j$ consists of $l_j$ agents , such that the indices of consecutive clusters are consecutive , with $\sigma_j$ denoting the cumulative cluster sizes and $\sigma_N = L$ . then , the laplacian of the graph can be partitioned into the following form : $\mathcal{L} = [ \mathcal{L}_{jk} ]_{j , k = 1}^{N}$ , where each diagonal block $\mathcal{L}_{jj}$ specifies intra - cluster couplings and each off - diagonal block $\mathcal{L}_{jk}$ with $j \neq k$ specifies inter - cluster influences from cluster $k$ to cluster $j$ . note that $\mathcal{L}_{jj}$ is not the laplacian of $\mathcal{G}_j$ in general . construct a new graph by collapsing any subgraph $\mathcal{G}_j$ of $\mathcal{G}$ into a single node and define a directed edge from node $j$ to node $k$ if and only if there exists a directed edge in $\mathcal{G}$ from a node in $\mathcal{C}_j$ to a node in $\mathcal{C}_k$ . we say $\mathcal{G}$ admits an acyclic partition with respect to the given clusters if the newly constructed graph does not contain any cyclic components . if the latter holds , by relabeling the clusters and the nodes in $\mathcal{G}$ , we can represent the laplacian in a lower block - triangular form so that each cluster $j$ receives no input from clusters $k$ with $k > j$ . in fig . [ fig : topology_illu_leaderless ] , the two subgraphs illustrate an acyclic partition of the whole graph . the main task in this paper is to achieve cluster synchronization for the states of systems in ( [ sys : linearmodel ] ) via distributed couplings through the control inputs , which are defined as follows : for $i \in \mathcal{C}_j$ , the control input [ sys : controllaws_leaderless ] takes a dynamic coupling form , where $k_j$ is the control gain matrix to be specified ; the vector $\eta_i ( t )$ is an auxiliary control variable with initial value $\eta_i ( 0 )$ ; $c$ is the global weighting factor for the whole interaction graph ; and each $c_j$ is a local weighting factor used to adjust the intra - cluster coupling strength of cluster $j$ . note that the couplings in ( [ sys : controllaws_leaderless ] ) take a dynamic structure . the reasons why conventional static couplings ( e.g. , those studied in the literature ) are not preferred will be explained in detail in the main part . the cluster synchronization problem is defined below . [ def : cluster_syn ] a linear multi - agent system in ( [ sys : linearmodel ] ) with couplings in ( [ sys : controllaws_leaderless ] ) is said to achieve $N$-cluster synchronization with respect to the partition if the following holds : the states of agents in the same cluster converge to each other , all auxiliary variables $\eta_i ( t )$ converge to zero , and , for any set of initial states , there exists a set of initial values $\eta_i ( 0 )$ such that the states of agents in different clusters remain separated . in the definition , all auxiliary variables $\eta_i ( t )$ are required to decay to zero so as to guarantee that the control effort of every agent is essentially of finite duration .
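returning to the graph - theoretic setup above , the acyclic - partition test can be implemented by collapsing clusters and checking the condensed graph for cycles ; a minimal sketch ( python , with all names and the example graph ours ) :

    def condensed_graph(edges, clusters):
        # collapse each cluster into one node; keep a directed edge between
        # cluster cs and cluster cd whenever some agent edge crosses them
        node_to_cluster = {a: c for c, members in enumerate(clusters) for a in members}
        cg = {c: set() for c in range(len(clusters))}
        for src, dst in edges:          # edges are (source, destination) agent pairs
            cs, cd = node_to_cluster[src], node_to_cluster[dst]
            if cs != cd:
                cg[cs].add(cd)
        return cg

    def is_acyclic(cg):
        # depth-first search for a back edge in the condensed graph
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in cg}
        def dfs(v):
            color[v] = GRAY
            for w in cg[v]:
                if color[w] == GRAY:
                    return False        # back edge: a cycle among clusters
                if color[w] == WHITE and not dfs(w):
                    return False
            color[v] = BLACK
            return True
        return all(color[v] != WHITE or dfs(v) for v in list(cg))

    clusters = [(0, 1, 2), (3, 4)]
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 3), (1, 3)]
    print("acyclic partition:", is_acyclic(condensed_graph(edges, clusters)))

for this example , the condensed graph has a single edge from cluster 0 to cluster 1 , so the partition is acyclic even though each cluster is internally cyclic .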
for state separations among distinct clusters , one should not expect them to happen for any set of initial states $x_i ( 0 )$ and auxiliary initial values $\eta_i ( 0 )$ ; an obvious counterexample is that all system states will stay at zero when all initial conditions are zero . some assumptions used throughout the paper are in order . [ assump : stabilizable ] each of the pairs $( a_j , b_j )$ is stabilizable . [ assump : a_unstable ] each $a_j$ has at least one eigenvalue on the closed right half plane . this assumption excludes trivial scenarios where all system states synchronize to zero . to deal with stable $a_j$ 's , one may introduce distinct feed - forward terms as studied in the literature . in order to segregate the system states according to the uncoupled system dynamics , an additional mild assumption is made on the system matrices $a_j$ , namely , that they can produce distinct trajectories . rigorously , for any $j \neq k$ , the solutions to the linear differential equations $\dot{x} = a_j x$ and $\dot{z} = a_k z$ , respectively , satisfy $x ( t ) \neq z ( t )$ for almost all initial states $x ( 0 )$ and $z ( 0 )$ in the euclidean space . [ assump : zero_row_sums ] every off - diagonal block $\mathcal{L}_{jk}$ , $j \neq k$ , of the laplacian defined in ( [ laplacianpartition ] ) has zero row sums , i.e. , $\mathcal{L}_{jk} \mathbf{1} = 0$ . this assumption guarantees the invariance of the clustering manifold $$ \mathcal{M} = \{ [ x_1^T , \ldots , x_L^T ]^T : x_1 ( t ) = \cdots = x_{l_1} ( t ) , \ \ldots , \ x_{\sigma_{N-1}+1} ( t ) = \cdots = x_{L} ( t ) \} . $$ it is imposed frequently in the literature to result in cluster synchronization for various multi - agent systems . to fulfill it , one can let positive and negative weights be balanced for all of the links directing from one cluster to any agent in another cluster . the negative weights of inter - cluster links are supposed to provide desynchronizing influences . note that with assumption [ assump : zero_row_sums ] each $\mathcal{L}_{jj}$ is the laplacian of a subgraph $\mathcal{G}_j$ . _ notation _ : $\mathbf{1}$ denotes the all - ones vector $[ 1 , \ldots , 1 ]^T \in \mathbb{R}^{n}$ of appropriate dimension . [ thm : clustersyn_leaderless2 ] under assumptions [ assump : stabilizable ] to [ assump : zero_row_sums ] , a multi - agent system with couplings in ( [ sys : controllaws_leaderless ] ) achieves $N$-cluster synchronization if each subgraph $\mathcal{G}_j$ contains only cooperative edges and has a directed spanning tree , and the weighting factors satisfy $c > \lambda_{\max} ( \hat{\mathbf{a}} + \hat{\mathbf{a}}^T )$ together with , for each $j$ , a lower bound on $c_j$ determined by $\mathcal{L}_{jj}$ . following the proof of the sufficiency part of theorem [ thm : clustersyn_leaderless ] , we need to show that the error system is exponentially stable under the conditions in theorem [ thm : clustersyn_leaderless2 ] . first , these conditions guarantee the existence of positive definite matrices $\hat{w}_j$ satisfying the required lyapunov inequalities ; hence , the weighted laplacian terms can be bounded from below accordingly . these inequalities , together with weyl 's eigenvalue theorem , yield bounds on the eigenvalues of the relevant matrices , which further imply the desired definiteness properties . now , consider the lyapunov function candidate $v ( t ) = \zeta^T ( t ) ( \hat{\mathcal{w}} \otimes i_n ) \zeta ( t )$ for the error system . taking the time derivative on both sides , one gets $$ \dot{v} ( t ) \le \zeta^T ( t ) \left [ ( \hat{\mathcal{w}} \otimes i_n ) ( \hat{\mathbf{a}} + \hat{\mathbf{a}}^T ) - c \hat{\mathcal{w}} \otimes i_n \right ] \zeta ( t ) \le - \left [ c - \lambda_{\max} ( \hat{\mathbf{a}} + \hat{\mathbf{a}}^T ) \right ] v ( t ) , $$ where the last inequality follows from the eigenvalue bounds above . this confirms the exponential stability of the error system , and therefore cluster synchronization can be achieved exponentially fast with a least rate proportional to $c - \lambda_{\max} ( \hat{\mathbf{a}} + \hat{\mathbf{a}}^T )$ . a remark on static couplings is in order . the quad condition involves a prescribed positive scalar . for generic linear systems with static couplings , this quad condition requires a common stabilizing matrix to render coupled lyapunov - like inequalities feasible for all clusters simultaneously ; given such a scalar , for the existence of the control gains one needs all system matrices to satisfy these coupled constraints , and in general no common solution can be found . in contrast , the dynamic couplings in ( [ sys : controllaws_leaderless ] ) do not impose such constraints on the system models .
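the spanning - tree condition in the theorem above can be checked numerically through the spectrum of the intra - cluster laplacians : for a cooperatively weighted subgraph , a directed spanning tree exists if and only if the laplacian has a simple zero eigenvalue with all remaining eigenvalues in the open right half plane . a minimal sketch ( python ; the example graphs and names are ours ) :

    import numpy as np

    def laplacian(adj):
        # graph laplacian l = d - a, with the row sums of the adjacency
        # on the diagonal, matching the definition used in the text
        adj = np.asarray(adj, dtype=float)
        return np.diag(adj.sum(axis=1)) - adj

    def has_spanning_tree_spectrum(l_jj, tol=1e-9):
        ev = np.linalg.eigvals(l_jj)
        near_zero = int(np.sum(np.abs(ev) < tol))
        positive = int(np.sum(ev.real > tol))
        return near_zero == 1 and near_zero + positive == ev.size

    # three-agent cluster coupled as a chain rooted at agent 0
    # (entry a[i][k] = 1 means agent i receives information from agent k)
    a_chain = [[0, 0, 0],
               [1, 0, 0],
               [0, 1, 0]]
    print(has_spanning_tree_spectrum(laplacian(a_chain)))   # True

    # two disconnected agents: zero eigenvalue with multiplicity two
    a_split = [[0, 0], [0, 0]]
    print(has_spanning_tree_spectrum(laplacian(a_split)))   # False

this test only covers the sufficiency side discussed above ; the tolerance and the adjacency direction convention are implementation choices of ours .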
generally , it is not always necessary to let every subgraph contain a directed spanning tree . in fact , agents belonging to a common cluster may not need to have direct connections at all as long as the algebraic condition in theorem [ thm : clustersyn_leaderless ] is satisfied . this point is illustrated by a simulation example in the next section . nevertheless , the spanning tree condition turns out to be necessary under some particular graph topologies , as stated by the corollary below . [ thm : clustersyn_leaderless_acyclic ] let $\mathcal{G}$ be an interaction graph with an acyclic partition as in ( [ acyclicform ] ) , and let the edge weights of every subgraph $\mathcal{G}_j$ be nonnegative . under assumptions [ assump : stabilizable ] to [ assump : zero_row_sums ] , a multi - agent system with couplings in ( [ sys : controllaws_leaderless ] ) achieves $N$-cluster synchronization if and only if every $\mathcal{G}_j$ contains a directed spanning tree and the weighting factors satisfy the lower - bound conditions defined in the corresponding inequalities . by theorem [ thm : clustersyn_leaderless ] , we can examine the stability of the error system . let $t_j$ be a set of nonsingular matrices such that each $t_j^{-1} \mathcal{L}_{jj} t_j$ equals the jordan form of $\mathcal{L}_{jj}$ . denote the corresponding block - diagonal transformation by $t$ . then , the transformed block triangular matrix has diagonal blocks determined by the eigenvalues of the $\mathcal{L}_{jj}$ 's . hence , the matrix is hurwitz if and only if the weighted nonzero eigenvalues of every $\mathcal{L}_{jj}$ satisfy the corresponding inequality . this claim is equivalent to the conclusion of this corollary due to lemma [ thm : eign_subgraphs_leaderless ] , the first claim of lemma [ thm : lapalacian_reduce ] , and assumption [ assump : a_unstable ] , which requires instability of the uncoupled dynamics . this corollary reveals the indispensability of _ direct _ links among agents in the same cluster under an acyclicly partitioned interaction graph . note that such direct communication requirements for intra - cluster agents are not necessary under a nonnegatively weighted interaction graph ( see the literature cited above ) . it is worth mentioning for the condition in the corollary that one can set $c_j = 1$ for all $j$ and adjust the global factor $c$ only to result in cluster synchronization . in contrast , without the acyclic partitioning structure , the local weighting factors $c_j$ need to satisfy the lower - bound conditions . note that these conditions specify the tightest lower bound for the local factors , while a lower bound reported in the literature for identical linear systems via lyapunov stability analysis can be quite loose . in this section , we provide application examples for cluster synchronization of nonidentical linear systems . we also conduct numerical simulations using these models to illustrate the derived theoretical results .

[ fig . : interaction graphs of four agents : ( a ) cyclic partition , ( b ) acyclic partition . ]

consider a group of four ships with the interaction graph described by fig . [ fig : topology_4nodes](a ) , where ships 1 and 2 ( respectively , ships 3 and 4 ) are of the same type . the purpose is to synchronize the heading angles for ships of the same type . the steering dynamics of a ship is described by the well - known nomoto model $t \ddot{\psi} + \dot{\psi} = k \delta$ , where $\psi$ is the heading angle ( in degrees ) of a ship , $\dot{\psi}$ ( deg / s ) is the yaw rate , and $\delta$ is the output of the actuator ( e.g.
, the rudder angle ) . the parameter $t$ is a time constant , and $k$ is the actuator gain , both of which are related to the type of a ship . defining the state as the heading angle and the yaw rate yields the system matrices of the two ship types , whose parameter values are assumed to differ between the types . the solutions to the riccati equations in ( [ riccati ] ) lead to the control gain matrices for the two clusters . since the partition in fig . [ fig : topology_4nodes](a ) is cyclic , we set the local weighting factors according to the lower - bound conditions . the weighted graph laplacian then yields , using the corresponding definition , the lower bounds for the local factors , and the required inequalities hold for the chosen values ; the global factor $c$ is subsequently chosen large enough so that the remaining inequalities are satisfied . simulation result in fig . [ fig : clustering_ship1 ] shows that cluster synchronization is achieved for the heading angles ( the velocity of every agent converges to zero , as shown in fig . [ fig : clustering_ship1_velocity ] ) . now , let the intra - cluster weight be set to zero so that agents 1 and 2 in the first cluster have no direct connection . cluster synchronization is still achieved , as shown in fig . [ fig : clustering_ship2 ] . this example illustrates that intra - cluster connections are not necessary for cluster synchronization under a cyclicly partitioned interaction graph . however , under an acyclic partition as in fig . [ fig : topology_4nodes](b ) , the first cluster of agents , having no direct connections , cannot achieve state synchronization , as shown in fig . [ fig : notclustering_shipacyclic ] . the studied cluster synchronization problem for nonidentical linear systems may find applications in the coexistence of oscillators with different frequencies . to see this , let us consider two clusters of coupled harmonic oscillators with the graph topology in fig . [ fig : topology_6nodes ] , where the first cluster contains a sender and two receivers , the second cluster contains another sender and two receivers , and the four receivers are coupled by some directed links . assume the angular frequencies of the two clusters of oscillators are distinct . the dynamic equation of each oscillator is that of a harmonic oscillator , which corresponds to the system matrices $a_j = \begin{bmatrix} 0 & 1 \\ -\omega_j^2 & 0 \end{bmatrix}$ with the respective angular frequency $\omega_j$ . the objective is to let the receivers of each cluster follow the state of the sender . by a similar design procedure as in the previous example , we can set the control gains and weighting factors accordingly . simulation result in fig . [ fig : clustering_oscillator ] shows the synchronous oscillations of the harmonic oscillators with two distinct angular frequencies . this paper investigates the state cluster synchronization problem for multi - agent systems with nonidentical generic linear dynamics . by using a dynamic structure for the coupling strategies , this paper derives both algebraic and graph topological clustering conditions which are independent of the control designs . for future studies , cluster synchronization which can only be achieved for the system _ outputs _ is a promising topic , especially for linear systems with parameter uncertainties or for heterogeneous nonlinear systems . for completely heterogeneous linear systems , research works following this line are conducted by the authors and others . for nonlinear heterogeneous systems , the new theory being established for _ complete output _ synchronization problems may be further extended . another interesting challenge existing in cluster synchronization problems is to discover other graph topologies that meet the algebraic conditions . let $d$ be a diagonal matrix with nonzero diagonal entries . clearly , $d$ has an inverse matrix . by direct computation one can show a similarity relation that implies the first claim .
for the second claim , consider the matrix obtained by rearranging the columns and rows of the laplacian by permutation and similarity transformations , which yields a block upper - triangular matrix whose leading block is defined in ( [ laplacianpartition ] ) . then , the second claim of this lemma follows immediately . the closed - loop system equations obtained by using the couplings ( [ sys : controllaws_leaderless ] ) can be written in stacked form for all agents . clearly , the synchronization errors vanish if and only if the corresponding transformed variables vanish . from the closed - loop structure , one can obtain the dynamic equations of these transformed variables for each cluster . since $k_j$ stabilizes the pair $( a_j , b_j )$ , the variable $\eta_i ( t )$ tends to zero as $t \rightarrow \infty$ if and only if its forcing term tends to zero . denote the stacked vector of auxiliary variables by $\eta ( t ) = [ \eta_1^T ( t ) , \ldots , \eta_L^T ( t ) ]^T$ , which evolves according to a linear differential equation . clearly , $\eta ( t )$ and every synchronization error ( hence every intra - cluster deviation ) all converge to zero if and only if the corresponding system matrix is hurwitz . that is , we have shown the claimed equivalence . next , we prove that , for any $i$ , $\eta_i ( t )$ vanishes as $t \rightarrow \infty$ . to this end , for each cluster $j$ , let $\bar{x}_j ( t )$ be the solution of $\dot{\bar{x}}_j = a_j \bar{x}_j$ with an arbitrary initial value . since $\mathcal{L}_{jk} \mathbf{1} = 0$ by assumption [ assump : zero_row_sums ] , we have , for any $i \in \mathcal{C}_j$ , a forced linear error equation . subtracting the free dynamics from the closed - loop dynamics yields a system that is exponentially stable and driven by inputs which all converge to zero exponentially fast . therefore , for any $i \in \mathcal{C}_j$ , we have $x_i ( t ) - \bar{x}_j ( t ) \rightarrow 0$ as $t \rightarrow \infty$ . lastly , we show that inter - cluster state separations can be achieved for any initial states $x_i ( 0 )$ by selecting the $\eta_i ( 0 )$ 's properly . given any set of initial states , choose the $\eta_i ( 0 )$ 's such that , for all pairs of agents $i$ and $l$ in different clusters $j$ and $k$ , $$ \left\| e^{a_j t} [ x_i ( 0 ) - \eta_i ( 0 ) ] - e^{a_k t} [ x_l ( 0 ) - \eta_l ( 0 ) ] \right\| \neq 0 . $$ collect the states of all agents to form the stacked state vector . it follows , after a series of manipulations , that $$ x_i ( t ) \rightarrow e^{a_j t} \sum_{k=1}^{L} \nu_{ik} \ , [ x_k ( 0 ) - \eta_k ( 0 ) ] , \quad \textrm{as} \ t \rightarrow \infty , $$ where each row vector $[ \nu_{i1} , \ldots , \nu_{iL} ]^T \in \mathbb{R}^L$ collects the entries of the corresponding projection . since $a_j$ is non - hurwitz , this limit trajectory is nonzero as $t \rightarrow \infty$ . then , for any set of initial states , one can always find a set of $\eta_i ( 0 )$ 's such that , for any two agents $i$ and $l$ in different clusters , $x_i ( t ) \neq x_l ( t )$ . this completes the proof .
y. feng , s. xu , and b. zhang , " group consensus control for double - integrator dynamic multiagent systems with fixed communication topology , " _ int . j. robust nonlinear control _ , no . 3 , pp . 532 - 547 , 2014 .
y. han , w. lu , and t. chen , " cluster consensus in discrete - time networks of multiagents with inter - cluster nonidentical inputs , " _ ieee trans . neural netw . and learning systems _ , vol . 24 , no . 4 , pp . 566 - 578 , 2013 .
p. delellis , m. di bernardo , and g. russo , " on quad , lipschitz , and contracting vector fields for consensus and synchronization of networks , " _ ieee trans . circuits syst . i : reg . papers _ , vol . 58 , no . 3 , pp . 576 - 583 , 2011 .
h. liu , c. de persis , and m. cao , " robust decentralized output regulation with single or multiple reference signals for uncertain heterogeneous systems , " _ int . j. robust nonlinear control _ , vol . 25 , no . 9 , pp . 1399 - 1422 , 2015 .
|
this paper considers the cluster synchronization problem of generic linear dynamical systems whose system models are distinct in different clusters . these nonidentical linear models render control design and coupling conditions highly correlated if static couplings are used for all individual systems . in this paper , a dynamic coupling structure , which incorporates a global weighting factor and a vanishing auxiliary control variable , is proposed for each agent and is shown to be a feasible solution . lower bounds on the global and local weighting factors are derived under the condition that every interaction subgraph associated with each cluster admits a directed spanning tree . the spanning tree requirement is further shown to be a necessary condition when the clusters connect acyclicly with each other . simulations for two applications , cluster heading alignment of nonidentical ships and cluster phase synchronization of nonidentical harmonic oscillators , illustrate essential parts of the derived theoretical results . cluster synchronization ; coupled linear systems ; nonidentical systems ; graph topology
|
recent research efforts have examined the protocol parameters of popular swarming peer - to - peer content distribution systems , in order to identify their impact on system performance and robustness . such efforts have mainly focused on the protocol algorithms that are believed to be the major factors affecting system behavior , such as bittorrent 's piece and peer selection strategies . however , the actual manner in which peers form connections and the overlay is constructed has been largely overlooked . as shown by urvoy et al . , the time needed to distribute content in bittorrent is directly affected by the overlay topology . moreover , ganesh et al . evaluated the impact of the overlay structure on the spread of epidemics , which can be viewed as a special case of robustness in peer - to - peer file replication . to the best of our knowledge , there has been no study that specifically investigated optimal overlay construction strategies for content replication . in this paper , we evaluate two such strategies in the bittorrent protocol . we first present and evaluate the _ tracker strategy _ , which most bittorrent implementations use by default to guide new connection establishment . we identify a concrete shortcoming , namely the strategy 's tendency to cause peer clustering and potential network partitions , which might have an adverse impact on system robustness . to address this , we introduce an alternative , the _ preemption strategy _ , which dictates giving preference to certain new peer connection requests . we evaluate the properties of overlays generated by both strategies using extensive simulations , focusing on flash crowd scenarios , when the system is under high load and more vulnerable to churn . indeed , the flash crowd phase is the most critical phase for a torrent , as there is a single seed ; in case some peers become disconnected from this initial seed , they will experience a much higher download completion time . moreover , a poorly structured overlay may result in a slower propagation of the pieces , and thus a lower overall performance . however , in this study , we focus on the overlay properties rather than on their impact on performance . based on our results , we identify the _ maximum number of outgoing connections _ as a parameter that significantly affects the structure and properties of the generated overlay . this parameter is currently used in bittorrent to enforce a hard upper limit on the number of connections a peer can initiate . we define metrics that characterize the overlay structure , and compute these metrics for various values of the maximum number of outgoing connections per peer . the contributions of this work include the following . 1 . we show that , for the default bittorrent overlay construction strategy , there is no single value of the maximum number of outgoing connections that optimizes all considered metrics . in addition , an intermediate value clearly offers a better choice than the usual default . 2 . we also show that our proposed preemption strategy outperforms the default one for all metrics . for this strategy , a maximum number of outgoing connections that is simply equal to the maximum peer set size ( number of neighbors ) presents the best choice . as a result , our proposed strategy , while simple and easy to implement , removes the need to set the maximum number of outgoing connections in an ad - hoc manner , thereby simplifying the protocol . the rest of this paper is organized as follows .
in section [ sec : terminology ] we define the terms we use . we present the tracker and preemption strategies in section [ sec : strategies ] . section [ sec : methodology ] then describes our experimental methodology , while our results on the properties of overlays generated with both strategies are presented in section [ sec : results ] . section [ sec : related ] describes related work , and we conclude and outline future work in section [ sec : conc ] . in this section , we present the terms used to describe the bittorrent overlays . * peer set : * each peer maintains a list of other peers to which it has open tcp connections . this list is called the peer set , also known as the neighbor set . a _ neighbor _ of peer is a peer that belongs to s peer set . * maximum peer set size : * the upper limit on the number of peers that can be in the peer set . it is a configuration parameter of the protocol . * average peer set size : * a torrent - wide metric calculated by summing up the peer set size for each peer in the torrent , and dividing by the total number of peers . * incoming and outgoing connections : * when a peer initiates a tcp connection to peer , we say that has an _ outgoing _ connection to , and that has accepted an _ incoming _ connection from . note that all connections are really bidirectional ; they are just flagged as incoming or outgoing . this flag has no impact on the actual data transfer ; however , it is used to decide whether a new outgoing connection can be established , as explained in section [ sec : bittorrent - overview ] . * maximum number of outgoing connections : * the upper limit on the number of outgoing connections a peer can establish . this is a configuration parameter of the protocol . we first present the overlay construction strategy bittorrent follows , then propose an alternative based on preempting existing connections . the content to be distributed with bittorrent is first divided into multiple pieces . a _ metainfo file _ is then created by the content provider , which contains all the information necessary for the download , including the number of pieces , hashes that are used to verify the integrity of received data , and the ip address and port number of the tracker . to join a torrent , a peer retrieves the metainfo file out of band , usually from a well - known web site . it then contacts the tracker , which returns a random subset of other peers already participating in the download ; we call this subset the _ initial peer set _ . a typical number returned by many tracker implementations is , which is also what we use for our simulations . after receiving this initial peer set , the new peer attempts to initiate new connections , under the following two constraints : 1 ) a peer is not allowed to establish more than a fixed number of outgoing connections , typically , and 2 ) a peer can not maintain in total more than a fixed number of open connections , typically ( the maximum peer set size ) . the latter limit is imposed to avoid performance degradation due to competition among tcp flows , while the former serves to ensure that some connection slots are kept open for new peers that will join later . in this manner , the initial peer set can be augmented later by connections initiated by remote peers .
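to make the two constraints concrete , the following minimal python sketch implements the connection establishment logic described above ( class , function , and parameter names are ours , and the numeric defaults are placeholders rather than the elided values in the text ) :

class Peer:
    def __init__(self, pid, max_peer_set=80, max_outgoing=40):
        self.pid = pid
        self.max_peer_set = max_peer_set   # hard cap on open connections
        self.max_outgoing = max_outgoing   # cap on connections we initiate
        self.neighbors = {}                # pid -> 'outgoing' or 'incoming'

    def peer_set_size(self):
        return len(self.neighbors)

    def outgoing_count(self):
        return sum(1 for d in self.neighbors.values() if d == 'outgoing')

def try_connect(a, b):
    # attempt a connection initiated by peer a towards peer b, enforcing
    # both constraints: a's outgoing limit and each side's peer set cap
    if a.pid == b.pid or b.pid in a.neighbors:
        return False
    if a.outgoing_count() >= a.max_outgoing:
        return False                       # constraint 1: outgoing limit
    if a.peer_set_size() >= a.max_peer_set or b.peer_set_size() >= b.max_peer_set:
        return False                       # constraint 2: peer set size
    a.neighbors[b.pid] = 'outgoing'        # connections are bidirectional,
    b.neighbors[a.pid] = 'incoming'        # only the flag differs
    return True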
whenever the peer set size falls below a given threshold ( typically ) , a peer contacts the tracker again and asks for more peers . to avoid overwhelming the tracker with such requests , there is usually a minimum interval between two such consecutive messages . finally , each peer contacts the tracker periodically ( typically once every minutes ) to indicate that it is still present in the network . if no heartbeat is received for more than minutes , the tracker assumes the peer has left the system , and does not include it in future initial peer sets . the potential shortcoming of the default strategy can be seen by considering the effect of the maximum number of outgoing connections . a small number for this parameter will allow peers who have recently joined the system to connect to older ones , whereas a large number will cause peers to be more connected to others that joined around the same time . thus , we expect that , when increasing this value , we will observe the formation of clusters of peers that joined close together in time . for very large values , close to the maximum peer set size , this could even cause the creation of mostly disjoint cliques that share data within themselves , thereby compromising the robustness of the system to churn . if the connecting peer between two cliques were to disconnect , partitions would form in the system . our results bear out this hypothesis . to address this issue , we propose an alternative strategy based on preempting existing connections . the only difference from the default strategy manifests itself when a peer wants to establish a connection to a peer that has already reached its maximum peer set size . in the default strategy such a connection attempt would simply be rejected . with preemption , however , peer will accept the new connection after dropping an existing one , if and only if has discovered from the tracker ( as opposed to through other means , e.g. , peer exchange ) . thus , an implementation of the preemption strategy would be exactly the same as the default one , with the following modification . when peer joins a torrent , it receives the ip addresses of several existing peers including peer . let us assume that attempts to initiate a connection to peer . if has not reached its maximum peer set size , the connection is accepted with no further action . however , if has already reached its maximum peer set size , it will either 1 ) accept the connection from , after tearing down an existing connection , if discovered s ip address from the tracker , or 2 ) refuse the connection in any other case . the rationale behind this strategy is to introduce some randomness in the connection establishment process , to help the peer set converge to its maximum size as fast as possible and to prevent cliques . the default strategy gives preference to connections from peers who joined close in time , especially at the beginning of the download . the preemption strategy attempts to spread connections uniformly over the peers delivered in the peer lists by the tracker , without being affected by external peer connection mechanisms ( e.g. , peer exchange ) . note that , if decides to accept the new connection , it selects the connection to close at random among all the connections that were initiated by remote peers ( the incoming connections ) .
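the decision a full peer makes under preemption can then be sketched as follows ( a python sketch reusing the peer class from the earlier sketch ; the name `from_tracker` corresponds to the handshake bit discussed later , and the fallback case is motivated in the next paragraph ) :

import random

def handle_connection_request(b, a, from_tracker, peers):
    # peer a initiates a connection to peer b; peers maps pid -> Peer
    if b.peer_set_size() < b.max_peer_set:
        a.neighbors[b.pid] = 'outgoing'    # accept normally
        b.neighbors[a.pid] = 'incoming'
        return True
    if not from_tracker:
        return False                       # default strategy: refuse when full
    # preemption: prefer dropping a random incoming connection
    incoming = [p for p, d in b.neighbors.items() if d == 'incoming']
    victim = random.choice(incoming if incoming else list(b.neighbors))
    del b.neighbors[victim]
    del peers[victim].neighbors[b.pid]     # tear down on both ends
    a.neighbors[b.pid] = 'outgoing'
    b.neighbors[a.pid] = 'incoming'
    return True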
in case there is no such connection , it selects any connection at random . the rationale behind this is to maximize the probability that the remote peer can quickly recover from such an unexpected connection drop . indeed , in case the remote peer has reached its maximum number of outgoing connections , closing an incoming connection from peer ( an outgoing connection for ) will allow to quickly establish a new outgoing connection . if were to close an outgoing connection , then would only be able to wait for a new incoming connection request . an additional useful heuristic ( which we do not currently employ in our simulations ) when selecting connections to close would be to never close a connection to a peer that is currently unchoked or is actively sending data . our preemption strategy assumes that somehow knows whether has received its address from the tracker . the easiest way to implement this functionality is to set a specific bit in the bittorrent handshake message sent from to . there are unused reserved bits in the handshake message that can be used for this purpose . as is untrusted , will never accept more than a few percent of preempted peers , typically of the peer set ( yet in our evaluation , we put no limit on the number of accepted preempted peers ) . this way , a misbehaving or malicious peer will not be able to harm a regular peer by making it drop all its connections using preemption with fake peers . consequently , to implement the preemption strategy , one only needs to modify clients , but not the tracker . moreover , as this new strategy is based on a specific bit set in the handshake message , it is backward compatible with existing bittorrent clients . indeed , the default behavior of a bittorrent client that receives a handshake with an unknown bit set is to ignore this bit . before presenting our results , we first outline our experimental setup and describe the simulation parameters . we then characterize the peer arrival and departure distributions we consider in this study , and present the metrics used to evaluate the properties of the overlay . in order to investigate the properties of the overlays generated by the two strategies , we developed a simulator that captures the evolution of the overlay structure over time as peers join and leave the torrent . the simulator source code is publicly available , and it follows the protocol as it is implemented in the official bittorrent client version 4.0.2 . we do not model the peer and piece selection strategies used in data exchange , since we focus on the construction and robustness properties of the overlay instead . we believe that simulations , rather than physical experiments , are a more appropriate vehicle for evaluating these properties , for three main reasons . first , the bittorrent overlay can not be explored using a crawler , as is the case for other peer - to - peer systems , such as gnutella . this is because the protocol itself does not offer a generic distributed mechanism for peer discovery , i.e. , there is no way to make a bittorrent peer ( that does not support the peer exchange extension ) provide any information about the peers in its peer set .
as several bittorrent clients do not support this extension , the information we would get from public torrents would be largely incomplete . second , we can not analyze existing traces collected at various trackers , since a peer never shares with the tracker its connectivity with other peers in the swarm . lastly , we could instead set up our own controlled testbed , e.g. , on planetlab , running real experiments and collecting statistics . however , running such experiments is harder and more time - consuming than running simulations , and it would not bring significantly more insight . indeed , a frequent argument against bittorrent simulations , namely the fact that it is challenging to accurately model the system dynamics , is arguably not applicable in our case , since we focus exclusively on the overlay construction , which is far easier to model than bittorrent s data exchange . in any case , we have validated our simulation results by comparing them against results from real experiments on a controlled testbed . these experiments are not presented here due to space limitations , but can be found in our technical report . following observations by guo _ et al . _ , we model peer arrivals and departures with an exponential distribution . we split simulated time into _ slots _ . slot , where , is defined as the simulated time elapsed between time minutes and time minutes . we focus on a flash crowd scenario , where most peers arrive soon after the beginning of the torrent s lifetime . thus , within each time slot , the number of new peers that join the torrent is and . each peer stays connected to the torrent for a random period of time uniformly distributed between and simulated minutes . under these assumptions , peers will arrive during the first minutes , peers during the next minutes , peers between the and minute , and the remaining peers during the fourth 10-minute period . no peers will arrive after the first minutes of the simulation . as a result , there will be more peer arrivals than departures during the first two time slots , and vice versa starting from the third time slot . the evolution of the torrent size that results from this model corresponds to a typical real torrent , based on previous studies . the typical lifetime of a bittorrent peer is in the order of several hours , while the torrent lifetime ranges from several hours to several months .
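a minimal python sketch of this arrival model is given below ; the constants are our placeholders for the elided values , with successive slots receiving geometrically fewer arrivals , consistent with the exponential decay described above :

import random

TOTAL_PEERS = 1000      # hypothetical torrent size
SLOT_MINUTES = 10       # slot length in simulated minutes
N_SLOTS = 4             # no arrivals after the last slot

def arrival_schedule(total=TOTAL_PEERS, n_slots=N_SLOTS):
    # each successive slot receives half as many new peers as the
    # previous one; the final slot absorbs the remainder
    times, remaining = [], total
    for slot in range(n_slots):
        n = remaining // 2 if slot < n_slots - 1 else remaining
        times += [slot * SLOT_MINUTES + random.uniform(0, SLOT_MINUTES)
                  for _ in range(n)]
        remaining -= n
    return sorted(times)

def session_length():
    # uniformly distributed residence time (bounds are ours)
    return random.uniform(60, 120)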
in our simulations , the average peer and torrent lifetimes are around and simulated minutes respectively . as we only focus on the overlay construction during the flash crowd , considering longer lifetimes would not give any new insights . indeed , as our results show , the peer arrival and departure order have a significant impact on the overlay , unlike the duration of their presence in the torrent . we use three simple metrics to evaluate the structure of an overlay , which we believe capture different important overlay properties well . first , the _ bottleneck index _ is defined as the ratio of the number of connections between the first peers ( equal to the maximum peer set size ) to join the torrent ( including the initial seed ) and the rest of the peers , over the maximum possible number of such connections ( ) . this index provides an indication of the presence of a bottleneck between the first set of participating peers and the rest of the torrent . the existence of such a bottleneck would arguably adversely impact both the content distribution speed and robustness of the overlay . note that a lower bottleneck index implies a worse bottleneck . the second metric we use is the _ average peer set size _ . a larger average peer set size implies a larger number of neighbors , which should lead to more opportunities of finding a peer that is willing to exchange data and higher resilience to churn . lastly , we measure the _ overlay diameter _ as the maximum number of hops between any two peers in the torrent . a small diameter indicates that a piece can reach any peer within a few hops . therefore , this metric also serves to evaluate the diversity of pieces in the system , which has been shown to lead to efficient piece replication . in our simulations , we use the official bittorrent client s default parameter values and set the maximum peer set size to and the minimum number of neighbors to . we then vary the maximum number of outgoing connections from to with a step of . we evaluate the properties of overlays generated using both the default and the preemption strategies . figure [ fig : all - metrics ] plots the three metrics we consider over the maximum number of outgoing connections . we observe that , for the tracker strategy ( solid line ) , there is no value of that optimizes all three metrics . the highest bottleneck index , which would result in a more robust overlay , as well as a relatively small overlay diameter are both achieved for around . however , the optimal average peer set size occurs for equal to . as a result , the common practice to set to half of the maximum peer set size ( thus making it ) is by no means the best choice . rather , since average peer set size approaches its maximum for equal to , we propose setting the maximum number of outgoing connections between and , which achieves a better trade - off for the three metrics we consider . in addition , we see in figure [ fig : all - metrics - diameter ] that the overlay diameter for the tracker strategy is when is set to . this means that the peer graph is partitioned into two separate subgraphs . we also observe that the bottleneck index increases for values up to and decreases for larger values . to explain these results we focus now on the actual connections among peers in the overlay . figure [ fig : connectivity - nopreempt ] plots these connections for the tracker strategy , captured after simulated minutes , i.e. , after the arrival of peers ( see section [ sec : simulation - parameters ] ) , for four distinct values of : , , , and .
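before examining these connectivity plots , we note that the three metrics defined above could be computed from an overlay graph along the following lines ( a python sketch using the networkx library ; the normalizer of the bottleneck index is our assumption , since the exact expression is elided above ) :

import networkx as nx

def overlay_metrics(g, first_peers):
    # g: networkx graph of the overlay; first_peers: the earliest
    # joiners (as many as the maximum peer set size, including the seed)
    first = set(first_peers)
    rest = set(g.nodes()) - first
    cut = sum(1 for u, v in g.edges() if (u in first) != (v in first))
    bottleneck_index = cut / (len(first) * len(rest)) if rest else 0.0
    avg_peer_set = 2.0 * g.number_of_edges() / g.number_of_nodes()
    # an overlay split into partitions has infinite diameter
    diameter = nx.diameter(g) if nx.is_connected(g) else float('inf')
    return bottleneck_index, avg_peer_set, diameter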
the results are shown in the form of a _ connectivity matrix _ , where a dot at means that peers and are neighbors . we observe that , for lower values of , there exists good connectivity among peers , with some clustering being observed for those who joined the torrent first . however , when increasing further , we see the formation of a small cluster that consists of the first peers ( same as the maximum peer set size ) . this clustering becomes clearer for increasingly larger values of , to the point where , for equal to the maximum peer set size , those first peers form a completely separate partition from the rest of the overlay . the creation of two separate partitions will definitely be harmful to system robustness , as the seed and the first peers , who already have most of the pieces , will be unable to share them with the rest of the system . we attempt to explain the reason behind this clustering with an example . in the following , peer is the peer to join the torrent . for to ( see fig . [ fig : connectivity - nopreempt-40 ] ) , all of peer s neighbors belong to the first peers who joined the torrent . the reason is that when peer arrives , it establishes outgoing connections to all other peers already in the system . it then waits for new arrivals in order to establish the remaining connections it still needs to reach its maximum peer set limit . those missing connections are satisfied after the arrival of another peers on average , . similarly , when peer arrives , it establishes up to outgoing connections . however , now needs to wait for the arrival of a larger number of peers in order to establish its remaining incoming connections , because the probability that its ip address is returned by the tracker to new peers decreases as the number of peers in the torrent increases . this explains why , as compared to , the neighbors of belong to a larger set of peers ( ) . this leads us to believe that an alternative strategy that introduces some randomness into the connection establishment process would exhibit better behavior . with that in mind , let us now look at the properties of an overlay built using our proposed preemption strategy . as shown in figure [ fig : all - metrics ] ( dashed line ) , such an overlay exhibits better characteristics than the one generated with the tracker strategy , for the three metrics we consider . moreover , a value equal to the maximum peer set size clearly gives the best results for all three metrics . in addition , looking at the connectivity matrices of the overlay built with the preemption strategy captured after simulated minutes ( shown in figure [ fig : connectivity - preempt ] ) , we observe good connectivity among peers for all values of , without the clustering effects observed with the tracker strategy .
thus , the preemption strategy obviates the need to heuristically select the maximum number of outgoing connections allowed at each peer , as the best overlay structure is always attained when that parameter is equal to the maximum peer set size . in addition , the proposed strategy outperforms the default one for all considered metrics . therefore , the preemption strategy , while simple and easy to implement , offers a strong alternative to the one used by most bittorrent clients . there has been a fair amount of work on the performance and robustness of bittorrent systems , most of which is complementary to ours . bram cohen , the protocol s creator , first described its main mechanisms and their design rationale . several measurement studies attempted to characterize the protocol s properties by examining real bittorrent traffic . et al . _ measured several peer characteristics derived from the tracker log of the red hat linux 9 iso image , including the proportion of seeds and leechers and the number and geographical spread of active peers . they observed that , while there is a correlation between upload and download rates , the majority of content is contributed by only a few leechers and the seeds . pouwelse _ et al . _ studied the content availability , integrity , and download performance of torrents on a once - popular tracker website . et al . _ additionally examined bittorrent sharing communities and found that sharing - ratio enforcement and the use of rss feeds to advertise new content may improve peer contributions . at the same time , guo _ et al . _ demonstrated that the rate of peer arrival and departure from typical torrents follows an exponential distribution and that performance fluctuates widely in small torrents . they also proposed inter - torrent collaboration as an incentive for leechers to stay connected as seeds after the completion of their download . a more recent study by legout _ et al . _ examined peer behavior by running extensive experiments on real torrents . they showed that the rarest - first and choking algorithms play a critical role in bittorrent s performance , and claimed that the use of a volume - based tit - for - tat algorithm , as proposed by other researchers , is not appropriate . there have also been some simulation studies attempting to better understand bittorrent s system properties . et al . _ performed an initial investigation of the impact of different peer arrival rates , peer capacities , and peer and piece selection strategies . et al . _ utilized a discrete event simulator to evaluate the impact of bittorrent s core mechanisms and observed that rate - based tit - for - tat incentives can not guarantee fairness . they also showed that the rarest - first algorithm outperforms alternative piece selection strategies . lastly , tian _ et al .
_ studied peer performance toward the end of the download and proposed a new peer selection strategy that enables more peers to complete their download , even after the departure of all the seeds . our work differs from all previous studies in its approach and results . we performed extensive simulations to examine the impact of the overlay construction strategy on system properties and robustness . our results showcase the importance of the maximum number of outgoing connections , and we propose a concrete improvement to the protocol . in this paper , we introduce a new preemptive overlay construction strategy for bittorrent . we evaluate it along with the default bittorrent tracker strategy for a flash crowd scenario , based on three different metrics . our results show that the tracker strategy is quite sensitive to the maximum number of outgoing connections , which does not seem to have a single value that optimizes all metrics . in addition , a value between and offers a better choice than the current bittorrent default of . on the other hand , the proposed preemption strategy outperforms the default one for all three metrics considered . furthermore , there is a clear optimal choice for the maximum number of outgoing connections ( equal to the maximum peer set size ) , a fact that removes the need to set this parameter in an ad - hoc manner . these results already provide some initial insights into how the default tracker strategy behaves and how to improve it using preemption . however , many questions remain open for future work . first , while we have introduced specific metrics for evaluating the properties of the overlay structure , and we have discussed how these metrics are linked to the system s robustness , we have not formally quantified their impact . this is a necessary step in understanding how the overlay structure actually affects system properties . in addition , it would be interesting to investigate whether the preemption strategy can be exploited by an attacker in order to disconnect peers from torrents . finally , while we have examined the tracker strategy and its preemption - based alternative , there exist other strategies based on gossiping , e.g. , peer exchange , which are also promising . some preliminary results in that direction show that such strategies produce an overlay with large diameter and low bottleneck index , but they achieve the best average peer set size . it would be interesting to better understand the trade - offs involved in such gossiping techniques , and incorporate some of their features into our preemption strategy . we want to thank matthieu latapy for his suggestions on the use of preemption , and christos gkantsidis for his helpful comments .
|
swarming peer - to - peer systems play an increasingly instrumental role in internet content distribution . it is therefore important to better understand how these systems behave in practice . recent research efforts have looked at various protocol parameters and have measured how they affect system performance and robustness . however , the importance of the strategy by which peers establish connections has been largely overlooked . this work utilizes extensive simulations to examine the default overlay construction strategy in bittorrent systems . based on the results , we identify a critical parameter , the maximum allowable number of outgoing connections at each peer , and evaluate its impact on the robustness of the generated overlay . we find that there is no single optimal value for this parameter using the default strategy . we then propose an alternative strategy that allows certain new peer connection requests to replace existing connections . further experiments with the new strategy demonstrate that it outperforms the default one for all considered metrics by creating an overlay more robust to churn . additionally , our proposed strategy exhibits optimal behavior for a well - defined value of the maximum number of outgoing connections , thereby removing the need to set this parameter in an ad - hoc manner . bittorrent , overlay construction , preemption , robustness , outgoing connections
|
neutral diffusion or coalescent models predict that genetic diversity at unconstrained sites is proportional to the ( effective ) population size for a simple reason : two randomly chosen individuals have a common parent with a probability of order and the first common ancestor of two individuals lived of order generations ago . forward in time , this neutral coalescence corresponds to _ genetic drift _ . however , the observed correlation between genetic diversity and population size is rather weak , implying that processes other than genetic drift dominate coalescence in large populations . this notion is reinforced by the observation that pesticide resistance in insects can evolve independently on multiple genetic backgrounds and can involve several adaptive steps in rapid succession . this high mutational input suggests that the short - term effective population size of _ d. melanogaster _ is greater than and conventional genetic drift should be negligible . possible forces that accelerate coalescence and reduce diversity are _ purifying _ and _ positive _ selection . historically , the effects of purifying selection have received most attention ( reviewed by ) and my focus here will be on the role of positive selection . a selective sweep reduces nearby polymorphisms through _ hitch - hiking _ . polymorphisms linked to the sweeping allele are brought to higher frequency , while others are driven out . linked selection not only reduces diversity , but also slows down adaptation in other regions of the genome , an effect known as hill - robertson interference . hill - robertson interference has been intensively studied in two locus models where the effect is quite intuitive : two linked beneficial mutations arising in different individuals compete and the probability that both mutations fix increases with the recombination rate between the loci . pervasive selection , however , requires many - locus models . here , i will review recent progress in understanding how selection at many loci limits adaptation and shapes genetic diversity . linked selection is most pronounced in asexual organisms . the theory of asexual evolution is partly motivated by evolution experiments with microbes , which have provided us with detailed information about the spectrum of adaptive molecular changes and their dynamics . i will then turn to facultatively sexual organisms , which include many important human pathogens such as hiv and influenza as well as some plants and nematodes . finally , i will discuss obligately sexual organisms , where the effect of linked selection is dominated by nearby loci on the chromosome . the common aspect of all these models is the source of stochastic fluctuations : random associations with backgrounds of different fitness . in contrast to genetic drift , such associations persist for many generations , which amplifies their effect . in analogy to genetic drift , the fluctuations in allele frequencies through linked selection have been termed _ genetic draft _ .
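the qualitative difference between drift and draft can be illustrated with a deliberately extreme toy model in python : under drift , per - generation fluctuations of a neutral allele frequency shrink with population size , while under recurrent hitchhiking they are set by the rate of sweeps ( a caricature with parameter values of our choosing , not a quantitative model ) :

import random

def drift_step(freq, n):
    # one wright-fisher generation: binomial resampling (genetic drift);
    # fluctuations scale as sqrt(freq * (1 - freq) / n)
    return sum(random.random() < freq for _ in range(n)) / n

def draft_step(freq, sweep_rate=0.01):
    # caricature of draft: with probability sweep_rate a fully linked
    # sweep occurs and the neutral allele hitchhikes to fixation or loss
    # depending on the background it sits on; fluctuations are set by
    # sweep_rate, independent of population size
    if random.random() < sweep_rate:
        return 1.0 if random.random() < freq else 0.0
    return freq

f_drift = f_draft = 0.5
for _ in range(100):
    f_drift = drift_step(f_drift, n=10000)
    f_draft = draft_step(f_draft)
print(f_drift, f_draft)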
the ( census ) population size determines how readily adaptive mutations and combinations thereof are discovered but has little influence on coalescent properties and genetic diversity . instead , selection determines genetic diversity and sets the time scale of coalescence . the latter should not be rebranded as an effective population size , as this suggests that a rescaled neutral model is an accurate description of reality . in fact , many features are qualitatively different . negligible drift does not imply that selection is efficient and only beneficial mutations matter . on the contrary , deleterious mutations can reach high frequency through linkage to favorable backgrounds and the dynamics of genotype frequencies in the population remains very stochastic . genealogies of samples from populations governed by draft do not follow the standard binary coalescent process . instead , coalescent processes allowing for multiple mergers seem to be appropriate approximations which capture the large and anomalous fluctuations associated with selection . those coalescent models thus form the basis for a _ population genetics of rapid adaptation _ and serve as null - models to analyze data when kingman s coalescent is inappropriate . to illustrate clonal interference , draft , and genealogies in the presence of selection , this review is accompanied by a collection of scripts based on ffpopsim at http://webdav.tuebingen.mpg.de/interference[webdav.tuebingen.mpg.de/interference ] . evolution experiments ( reviewed in ) have demonstrated that adaptive evolution is ubiquitous among microbes . experiments with rna viruses have shown that the rate of adaptation increases only slowly with the population size , suggesting that adaptation is limited by competition between different mutations and not by the availability of beneficial mutations . the competition between clones , also known as _ clonal interference _ , was directly observed in _ e. coli _ populations using fluorescent markers . similar observations have been made in rich lenski s experiments in which _ e. coli _ populations were followed for more than 50000 generations . a different experiment selecting _ e. coli _ populations for heat tolerance has shown that there are 1000s of sites available for adaptive substitutions , that there is extensive parallelism among lines in the genes and pathways bearing mutations , and that mutations frequently interact epistatically . by following the frequencies of microsatellite markers in populations of _ e. coli _ , estimated the beneficial mutation rate to be per genome and generation with average effects of about . similarly , it has been shown that beneficial mutations are readily available in yeast and compete with each other in the population for fixation . at any given instant , the population is thus characterized by a large number of segregating clones giving rise to a broad fitness distribution . the fate of a novel mutation is mainly determined by the genetic background it arises on . similar rapid adaptation and competition are observed in the global populations of influenza , which experience several adaptive substitutions per year , mainly driven by immune responses of the host . in summary , evolution of asexual microbes does not seem to be limited by finding the necessary single point mutations , but rather by overcoming clonal interference and combining multiple mutations .
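the competing - clones picture can be reproduced with a toy wright - fisher simulation of an asexual population ( a python sketch with parameter values of our choosing ) :

import numpy as np

def asexual_wf(N=5000, U=2e-4, s=0.02, T=1500, kmax=100):
    # a genotype is summarized by its number k of beneficial mutations
    # (all of effect s); n[k] counts individuals in class k. with N*U of
    # order one, several mutant clones segregate and compete
    rng = np.random.default_rng(0)
    n = np.zeros(kmax, dtype=np.int64)
    n[0] = N
    fitness = (1.0 + s) ** np.arange(kmax)
    for _ in range(T):
        w = n * fitness
        n = rng.multinomial(N, w / w.sum())   # selection plus drift
        mutants = rng.binomial(n[:-1], U)     # new beneficial mutations
        n[:-1] -= mutants
        n[1:] += mutants
    return n

n = asexual_wf()
print("mean beneficial mutations per individual:",
      (np.arange(len(n)) * n).sum() / n.sum())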
in models of adaptive evolution ,the high fitness tail of , shown into in the inset , is the most important part .if it falls off faster than exponentially , the fitness distribution tends to be smooth . otherwise , the distribution is often dominated by a few large effect mutations . ]these observations have triggered intense theoretical research on clonal interference and adaptation in asexuals . in the models studied , rare events , e.g. the fittest individual acquiring additional mutations , dramatically affect the future dynamics .intuition is a poor guide in such situations and careful mathematical treatment is warranted .nevertheless , it is often possible to rationalize the results in a simple and intuitive way with hindsight , and i will try to present the important aspects in accessible form .our discussion assumes that fitness is a unique function of the genotype .thereby , we ignore the possibility of frequency - dependent selection . a diverse population with many different genotypes can then be summarized by its distribution along this fitness - axis ; see fig . [fig : pop_and_distribution_sketch]a&b .fitness distributions are shaped by a balance between injection of variation via mutation and the removal of poorly adapted variants .most mutations have detrimental effects on fitness , while only a small minority of mutations is beneficial .the distribution of mutational effects in rna virus has been estimated by mutagenesis .roughly half of random mutations are effectively lethal , while were found to be beneficial in this experiment . a distribution of mutational effects , , is sketched in fig .[ fig : pop_and_distribution_sketch]c . general properties of are largely unknown and will depend on the environment .deleterious mutations rarely reach high frequencies but are numerous , while beneficial mutations are rare but amplified by selection . but in order to spread and fix , a beneficial mutation has to arise on an already fit genetic background or have a sufficiently large effect on fitness to get ahead of everybody else .two lines of theoretical works have put emphasis either on the large effect mutations ( clonal interference theory ) or `` coalitions '' of multiple mutations of similar effect . both approaches , sketched in fig .[ fig : ci_vs_mm ] are good approximations depending on the distribution of fitness effects . , has to match the speed of the nose , , in a quasi - steady state .the fixation probability of a mutation with effect increases with increasing background fitness as sketched in panel ( c ) .a mutant in the bulk of the fitness distribution has essentially zero chance of taking over the population since many fitter individuals exist . in the opposite case when the mutant is the fittest in the population , is proportional to as we would expect in the absence of interference .since there are very few individuals with very high fitness , most mutations that fix come from a narrow region ( light grey ) where the product of and , sketched in blue , peaks .note that is malthusian or log - fitness . scripts to illustrate interference and fixation can be found in the http://webdav.tuebingen.mpg.de/interference/fixation_asex.html[online supplement ] . ]consider a homogeneous population in which mutations with effect on fitness between and arise with rate as sketched in fig .[ fig : pop_and_distribution_sketch]c . in a large populationmany beneficial mutations arise every generation . 
in order to fix , a beneficial mutation has to outcompete all others ; see fig . [ fig : ci_vs_mm]a . in other words , a mutation fixes only if no mutation with a larger effect arises before it has reached high frequencies in the population . this is the essence of clonal interference theory by . the gerrish - lenski theory of clonal interference is an approximation since it ignores the possibility that two or more mutations with moderate effects combine to outcompete a large effect mutation , a process i will discuss below . its accuracy depends on the functional form of and the population size . one central prediction of clonal interference is that the rate of adaptation increases only slowly with the population size and the beneficial mutation rate . this is a consequence of the fact that the probability that a particular mutation is successful decreases with since there are more mutations competing . this basic prediction has been confirmed in evolution experiments with virus . how the rate of adaptation depends on and is sensitive to the distribution of fitness effects . generically , one finds that the rate of adaptation is , where depends on the properties of . clonal interference theory places all the emphasis on the mutation with the largest effect and ignores variation in genetic background or equivalently the possibility that multiple mutations accumulate in one lineage . it is therefore expected to work if the distribution of effect sizes has a long tail allowing for mutations of widely different sizes . it fails if most mutations have similar effects on fitness . a careful discussion of the theory of clonal interference and its limitations can be found in . if most beneficial mutations have similar effects , a lineage can not fix by acquiring a mutation with very large effect but has to accumulate more beneficial mutations than the competing lineages . if population sizes and mutation rates are large enough that many mutations segregate , the distribution of fitness in the population is roughly gaussian , see fig . [ fig : ci_vs_mm]b , and the problem becomes tractable . more precisely , the fitness distribution is governed by the deterministic equation $$\partial_t P(x,t) = \left[x - \bar{x}(t)\right]P(x,t) + \int \mu(s)\left[P(x-s,t) - P(x,t)\right]\, ds$$ where the first term accounts for amplification by selection of individuals fitter than the fitness mean and elimination of the less fit ones . the second term accounts for mutations that move individuals from fitness $x-s$ to $x$ at rate $\mu(s)$ . integrating this equation over the fitness yields fisher s `` fundamental theorem of natural selection '' , which states that the rate of increase in mean fitness is $\dot{\bar{x}} = \sigma^2 - L$ , where $\sigma^2$ is the variance in fitness and $L$ is the average mutation load a genome accumulates in one generation . a steadily moving mean fitness suggests a traveling wave solution of the form $P(x,t) = \phi(x - \bar{x}(t))$ , where $x - \bar{x}(t)$ is the fitness relative to the mean . eq . ( [ eq : ft ] ) is analogous to the breeder s equation that links the response to selection to additive variances and co - variances . in quantitative genetics , the trait variances are determined empirically and often assumed constant , while we will try to understand how $\sigma^2$ is determined by a balance between selection and mutation . to determine the average , we need an additional relation between and the mutational input . to this end , it is important to realize that the population is thinning out at higher and higher fitness and only very few individuals are expected to be present above some as sketched in fig . [ fig : ci_vs_mm]b . the dynamics of this high fitness `` nose '' is very stochastic and not accurately described by eq . ( [ eq : popdis ] ) .
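for concreteness , eq . ( [ eq : popdis ] ) can be integrated numerically ; the python sketch below assumes beneficial mutations of a single effect size ( our simplification of the mutation kernel ) and , being deterministic , misses exactly the stochastic nose dynamics discussed next :

import numpy as np

L, dx, dt = 400, 0.005, 0.01
x = np.arange(L) * dx                    # fitness axis
P = np.exp(-((x - 0.5) / 0.02) ** 2)     # arbitrary initial population
P /= P.sum()
u, shift = 1e-3, 4                       # mutation rate; effect s0 = shift * dx

for _ in range(5000):
    xbar = (x * P).sum()
    P = P + dt * (x - xbar) * P          # selection term
    gain = np.zeros_like(P)
    gain[shift:] = P[:-shift]            # individuals arriving from x - s0
    P = P + dt * u * (gain - P)          # mutation term
    P = np.clip(P, 0.0, None)
    P /= P.sum()                         # keep the distribution normalized

print("mean fitness:", (x * P).sum())
print("fitness variance:", ((x - (x * P).sum()) ** 2 * P).sum())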
however , the nose is the most important part where most successful mutations arise . there have been two strategies to account for the stochastic effects and derive an additional relation for the velocity . ( i ) the average velocity , , of the nose is determined by a detailed study of the stochastic dynamics of the nose . at steady state , this velocity has to equal the average velocity of the mean fitness given by eq . ( [ eq : ft ] ) , which produces the additional relation required to determine . ( ii ) alternatively , assuming additivity of mutations , has to equal the average rate at which fitness increases due to fixed mutations ( see for a related idea ) . i will largely focus on this latter approach , as it generalizes to sexual populations below . in essence , we need to calculate the probability of fixation $P_{\rm fix}(s)$ of mutations with effect size $s$ that arise in random individuals in the population . $P_{\rm fix}(s)$ depends on $s$ and implicitly on the traveling fitness distribution . using this notation , we can express the rate of adaptation $v$ as the sum of effects of mutations that fix per unit time : $$v = N \int \mu(s)\, s\, P_{\rm fix}(s)\, ds$$ note that the mutational input is proportional to the census population size . to solve eq . ( [ eq : selfconsist ] ) , we first have to calculate the fixation probability $P_{\rm fix}(s)$ , which in turn is a weighted average of the fixation probability , $p(s,x)$ , given the mutation appears on a genetic background with relative fitness $x$ . the latter can be approximated by branching processes . a detailed derivation of $p(s,x)$ is given in the supplement of , while the subtleties associated with approximations are discussed in . the qualitative features of $p(s,x)$ are sketched in fig . [ fig : ci_vs_mm]c . the product of the background fitness distribution and $p(s,x)$ describes the distribution of backgrounds on which successful mutations arise . this distribution is often narrowly peaked right below the high fitness nose ( see fig . [ fig : ci_vs_mm]c ) . mutations on backgrounds with lower fitness are doomed , while there are very few individuals with even higher background fitness . the larger , the broader this region is . to determine the rate of adaptation , one has to substitute the results for $p(s,x)$ into eq . ( [ eq : selfconsist ] ) and solve for $v$ . a general consequence of the form of the self - consistency condition eq . ( [ eq : selfconsist ] ) is that if $P_{\rm fix}(s)$ is weakly dependent on , we will find $v$ proportional to . in this case the speed of evolution is proportional to the mutational input . with increasing fitness variance , , the genetic background fitness starts to influence fixation probabilities , such that eventually $v$ increases only slowly with . for models in which beneficial mutations of fixed effect arise at rate , the rate of adaptation in large populations is given by . the above has assumed that is constant , but these expressions hold for more general models with a short - tailed distribution with suitably defined effective and . _ synthesis : _ clonal interference and multiple mutation models both predict diminishing returns as the population increases , but the underlying dynamics are rather different . in the clonal interference picture , population take - overs are driven by single mutations and the genetic background on which they occur is largely irrelevant ( $p(s,x)$ depends little on $x$ ) . the mutations that are successful , however , have the very largest effects . in the multiple mutation regime , the effect of the mutations is not that crucial , but they have to occur in very fit individuals to be successful ( $p(s,x)$ increases rapidly with $x$ ) .
in both models , the speed of adaptation continues to increase slowly with the population size and there is no hard `` speed limit '' . distinguishing a speed limit from diminishing returns in experiments is hard . whether one or the other picture is more appropriate depends on the distribution of available mutations . if falls off faster than exponential , adaptation occurs via many small steps ; if the distribution is broader , the clonal interference picture is a reasonable approximation . the borderline case of an exponential fitness distribution has been investigated more closely , finding that large effect mutations on a pretty good background make the dominant contributions , i.e. , a little bit of both . empirical observations favor this intermediate situation . influenza evolution has been analyzed in great detail and it was found that a few rather than a single mutation drive the fixation of a particular strain . similarly , evolution experiments suggest that the genetic background is important , but a moderate number of large effect mutations account for most of the observed adaptation . note the somewhat unintuitive dependence of on parameters in eq . ( [ eq : asex_speed ] ) . instead of the mutational input and , depends on and for . in large populations , the dominant time scale of population turnover is governed by selection and is of order . and measure the strength of reproduction noise ( drift ) and mutations relative to , respectively ( see for a discussion of this issue in the context of deleterious mutations ) . in large populations , the infinite sites model starts to break down and the same mutations can occur independently in several lineages limiting interference . competition between beneficial mutations in asexuals results in a slow ( logarithmic ) growth of the speed of adaptation with the population size ( eq . ( [ eq : asex_speed ] ) ) . how does gradually increasing the outcrossing rate alleviate this competition ? the associated advantages of sex and recombination have been studied extensively . it is instructive to consider facultatively sexual organisms that outcross at rate , and in the event of outcrossing have many independently segregating loci . facultatively sexual species are common among rna viruses , yeasts , nematodes , and plants . and sampled from the parental distribution with variance , offspring fitness is symmetrically distributed around the parental mean with variance . a mutation , indicated as a red dot in the sketch , can thereby hop from an individual with one background fitness to a very different one . ( b ) if the outcrossing rate is lower than the fitness of some individuals , clones , indicated in red , can grow at rate . as the population adapts , the growth rate of the clones is reduced , eventually goes negative and the clone disappears . the beneficial mutation , however , persists on other backgrounds . in small populations , the rate of adaptation increases linearly with the population size as sketched in panel ( c ) . for each outcrossing rate , there is a point beyond which interference starts to be important . ( d ) epistasis causes condensation of the population into a small number of very fit genotypes . crosses between these genotypes result in unfit individuals . in the absence of forces that stabilize different clones , one clone will rapidly take over if . scripts illustrating evolution of facultatively sexual populations can be found in the http://webdav.tuebingen.mpg.de/interference/fixation_sex.html[online supplement ] .
] most of our theoretical understanding of evolution in large facultatively mating populations comes from models similar to those introduced above for asexual populations . in addition to mutation , we have to introduce a term that describes how an allele can move from one genetic background to another by recombination ; see fig . [ fig : fac_sex]a . given the fitness values of the two parents and and assuming many independently segregating loci , the offspring fitness is symmetrically distributed around the mid - parent value with half the population variance ; see illustration in fig . [ fig : fac_sex]a and . to understand the process of fixation in such a population , the following is a useful intuition : an outcrossing event places a beneficial mutation onto a novel genotype , which is amplified by selection into a clone whose size grows rapidly with the fitness of the founder ; see fig . [ fig : fac_sex]b . these clones are transient , since even an initially fit clone falls behind the increasing mean fitness . however , large clones produce many recombinant offspring ( daughter clones ) , which greatly enhances the chance of fixation of mutations they carry . since clone size increases rapidly with founder fitness , the fixation probability is still a very steep function of the background fitness and qualitatively similar to the asexual case ( fig . [ fig : ci_vs_mm]c ) . with increasing outcrossing rate , the fitness window from which successful clones originate becomes broader and broader . if outcrossing rates are large enough that genotypes are disassembled by recombination faster than selection can amplify them , is essentially flat and the genetic background does not matter much . this transition was examined by : the essence of this result is that adaptation is limited by recombination whenever is smaller than the standard deviation in fitness in the absence of interference . in this regime , depends weakly on , but increases rapidly with . this behavior is sketched in fig . [ fig : fac_sex]c . similar results can be found in . the above analysis assumed that recombination is rare , but still frequent enough to ensure that mutations that rise to high frequencies are essentially in linkage equilibrium . this requires . studied the selection on standing variation at intermediate and low recombination rates . adaptation in the presence of horizontal gene transfer was investigated by , , and . in contrast to asexual evolution , epistasis can dramatically affect the evolutionary dynamics in sexual populations . epistasis implies that the effect of mutations depends on the state at other loci in the genome . in the absence of sex , the only quantity that matters is the distribution of available mutations , . the precise nature of epistasis is not crucial .
in sexual populations , however , epistasis can affect the evolutionary dynamics dramatically : when different individuals mix their genomes , it matters whether mutations acquired in different lineages are compatible .since selection favors well adapted combinations of alleles , recombination is expected to be on average disruptive and recombinant offspring have on average lower fitness than their parents ( the so - called `` recombination load '' ) .this competition between selection for good genotypes and recombination can result in a condensation of the population into fit clones ; see fig .[ fig : fac_sex]d , and .selective interference has historically received most attention in obligately sexual organisms most relevant to crop and animal breeding .artificial selection has been performed by farmers and breeders for thousands of years with remarkable success .evolution experiments with diverse species , including chicken , mice and drosophila , have shown that standing variation at a large number of loci responds to diverse selection pressures ; see for a recent review . in obligately sexual populations , distant loci can respond independently to selection and remain in approximate linkage equilibrium .the frequencies of different alleles change according to their effect on fitness averaged over all possible fitness backgrounds in the population .small deviations from linkage equilibrium can be accounted for perturbatively using the so - called quasi - linkage equilibrium ( qle ) approximation .interferes with other mutation in a region of width over a time , where is the crossover rate per base .the extent of interference is sketched by grey bulges , each of which corresponds to a mutation that fixed .interference starts to be important when the bulges overlap .since the area of the bulges , roughly `` height '' , is approximately independent of , interference depends on and the rate of sweeps rather than the effect size .the rate of adaptation is therefore primarily a function of the maplength .( b ) a selective sweep reduces neutral genetic variation in a region of width .the effect of sweeps on neutral diversity is explored in http://webdav.tuebingen.mpg.de/interference/draft.html[online supplement ] ] this approximate independence , however , does not hold for loci that are tightly linked . observed that interference between linked competing loci can slow down the response to selection an effect now termed _ hill - robertson interference _ .felsenstein realized that interference is not restricted to competing beneficial mutations but that linked deleterious mutations also impede fixation of beneficial mutations ( see background selection below ) .the term hill - robertson interference is now used for any reduction in the efficacy of selection caused by linked fitness variation . a deeper understanding of selective interference was gained in the 1990ies . the key insight of barton was to calculate the fate of a novel mutation considering all possible genetic backgrounds on which it can arise and summing over all possible trajectories it can take through the population . for a small number of loci , the equations describing the probability of fixation can be integrated explicitly .weakly - linked sweeps cause a cumulative reduction of the fixation probability at a focal site that is roughly given by the ratio of additive variance in fitness and the squared degree of linkage . 
further identified a critical rate of strong selective sweeps that effectively prevents the fixation of mutations with an advantage smaller than . if sweeps are too frequent , the weakly selected mutation has little chance of spreading before its frequency is reduced again by the next strong sweep . at short distances , selective sweeps impede each other s fixation more strongly . this interference is limited to a time interval of order generations where one of the sweeping mutations is at intermediate frequencies . during this time , a new beneficial mutation will often fall onto the wildtype background and is lost again if it is not rapidly recombined onto the competing sweep . the latter is likely only if it is further than nucleotides away from the competing sweep , where is the crossover rate per basepair . in other words , a sweeping mutation with effect prevents other sweeps in a region of width , and occupies this chromosomal `` real estate '' for a time ; see fig . [ fig : obligate_sex]a . hence strong sweeps briefly interfere with other sweeps in a large region , while weak sweeps affect a narrow region for a longer time . the amount of interference is therefore roughly independent of the strength of the sweeps , and the total number of sweeps per unit time is limited by the map length , where the integral is over the entire genome and is the local crossover rate . larger populations can squeeze slightly more sweeps into . in most obligately sexual organisms , sweeps rarely cover more than a few percent of the total map length such that recombination is not limiting adaptation unless sweeps cluster in certain regions . however , as i will discuss below , even rare selective sweeps have dramatic effects on neutral diversity . interference between selected mutations reduces the fixation probability of beneficial mutations , slows adaptation , and weakens purifying selection . these effects are very important , but hard to observe since significant adaptation often takes longer than our window of observation . typically , data consists of a sample of sequences from a population . these sequences differ by single nucleotide polymorphisms , insertions , or deletions , and we rarely know the effect of these differences on the organism s fitness . from a sequence sample of this sort , the genealogy of the population is reconstructed and compared to models of evolution , in most cases a neutral model governed by kingman s coalescent . from this comparison we hope to learn about evolutionary processes . however , linked selection , be it in asexual organisms , facultatively sexuals , or obligately sexuals , has dramatic effects on the genealogies . substantial effects on neutral diversity are observed at rates of sweeps that do not yet cause strong interference between selected loci , for the simple reason that neutral alleles segregate for longer times . selective sweeps have strong effects on linked neutral diversity and genealogies . a sweeping mutation takes about generations to rise to high frequency . linked neutral variation is preserved only when substantial recombination happens during this time . given a crossover rate per base , recombination will separate the sweep from a locus at distance with probability per generation ( assuming ) . hence a sweep leaves a dip of width in the neutral diversity ( see fig . [ fig : obligate_sex]b ) .
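the shape of this dip can be sketched with a back - of - the - envelope calculation in python ( the exponential escape approximation and all parameter values below are ours , intended only to reproduce the qualitative shape of fig . [ fig : obligate_sex]b ) :

import math

def diversity_after_sweep(d, s=0.05, N=1e6, r_bp=1e-8, pi0=0.01):
    # the sweep lasts roughly log(2 N s) / s generations; a linked
    # neutral lineage escapes by recombining off the sweeping background
    # with rate r_bp * d per generation
    tau = math.log(2 * N * s) / s
    p_escape = 1.0 - math.exp(-r_bp * d * tau)
    return pi0 * p_escape                 # diversity retained at distance d

for d in (1e3, 1e4, 1e5, 1e6):
    print(int(d), diversity_after_sweep(d))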
within this region , selection causes massive and rapid coalescence and only a fraction of the lineages continue into the ancestral population ( see fig . [ fig : coalescence]a ) . this effect has been further investigated by , who showed that the effect of recurrent selective sweeps is well approximated by a coalescent process that allows for multiple mergers : each sweep forces the almost simultaneous coalescence of a large number of lineages ( a fraction ) . similar arguments had been made previously by , who called the stochastic force responsible for coalescence _ genetic draft _ . extended the analysis of durrett and schweinsberg to partial sweeps that could be common in structured populations , with over - dominance , or frequency dependent selection . and the per site mutation rate exceeds one and can result in multiple bursts of coalescence almost at the same time . ( c ) a genealogical tree drawn from a simulation of a model of rapidly adapting asexual organisms . coalescence often occurs in bursts . furthermore , branching is often uneven . at many branchings in this `` ladderized '' tree , most individuals descend from the left branch . those are well known features of multiple merger coalescence processes such as the bolthausen - sznitman coalescent . ( d ) coalescence and fitness classes . most population samples consist of individuals from the center of the fitness distribution , while their distant ancestors were among the fittest . in large populations , most coalescence happens in the high fitness nose and the time until ancestral lineages `` arrive '' in the nose corresponds to long terminal branches ( compare panel c ) . how genealogies depend on selection can be studied using simulations , see http://webdav.tuebingen.mpg.de/interference/coalescence.html[online supplement ] . ] the rapid coalescence of multiple lineages is unexpected in the standard neutral coalescent ( a merger of lineages occurs with probability ) . in coalescence induced by a selective sweep , however , multiple mergers are common and dramatically change the statistical properties of genealogies . a burst of coalescence corresponds to a portion of the tree with almost star - like shape . alleles that arose before the burst are common , those after the burst rare . this causes a relative increase of rare alleles , as well as alleles very close to fixation . the degree to which linked selective sweeps reduce genetic diversity depends primarily on the rate of sweeps per map length . in accord with this expectation , it is found that diversity increases with recombination rate and decreases with the density of functional sites . in addition to occasional selective sweeps , genetic diversity and the degree of adaptation can be strongly affected by a large number of weakly selected sites , e.g. weakly deleterious mutations , that generate a broad fitness distribution . soft sweeps refer to events when a selective sweep originates from multiple genomic backgrounds , either because the favored allele arose independently multiple times or because it has been segregating for a long time prior to an environmental change . soft sweeps have recently been observed in pesticide resistance of drosophila and are a common phenomenon in viruses with high mutation rates .
a genealogy of individuals sampled after a soft sweep is illustrated in fig . [ fig : coalescence]b . the majority of the individuals trace back to one of two or more ancestral haplotypes on which the selected mutation arose . hence coalescence is again dominated by multiple merger events , except that several of those events happen almost simultaneously . this type of coalescent process has been described previously . despite their dramatic effects on genealogies , soft sweeps can be difficult to detect by standard methods that scan for selective sweeps . those methods look for local reductions in genetic diversity , which can be modest if the population traces back to several ancestral haplotypes . the number of ancestral haplotypes in a sample after a soft sweep depends on the product of the population size and the per-site mutation rate , and on selection against the allele before the sweep . to detect soft sweeps , methods are required that explicitly search for signatures of rapid coalescence into several lineages in linkage disequilibrium or haplotype patterns . individual selective sweeps have an intuitive effect on genetic diversity , but what do genealogies look like when many mutations are competing in asexual or facultatively sexual populations ? it has recently been argued that the genealogies of populations in many models of rapid adaptation are well described by coalescent processes with multiple mergers . this was first discovered in a model where a population expands its range : the genealogies of individuals at the front are described by the bolthausen - sznitman coalescent , a special case of coalescent processes with multiple mergers . more recently , it has been shown that a similar coalescent process emerges in models of adaptation in panmictic populations . fig . [ fig : coalescence]c shows a tree sampled from a model of a rapidly adapting population . a typical sample from a rapidly adapting population will consist of individuals from the center of the fitness distribution . their ancestors tend to be among the fittest in the population . substantial coalescence happens only once the ancestral lineages have reached the high-fitness tip , resulting in long terminal branches of the trees . once in the tip , coalescence is driven by the competition of lineages against each other and happens in bursts whenever one lineage gets ahead of everybody else . these bursts correspond to the event that a large fraction of the population descends from one particular individual . these coalescent events have approximately the same statistics as neutral coalescent processes with very broad but non-heritable offspring distributions .
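for readers who want to experiment , here is a minimal python sampler for the bolthausen - sznitman coalescent mentioned above ( my own sketch ; it uses the standard merger rates of this coalescent , under which any particular group of k out of b lineages merges at rate (k-2)!(b-k)!/(b-1)! , and the sample size is an arbitrary choice ) :

from math import comb, factorial
import random
random.seed(0)

def k_merger_rates(b):
    # total rate at which *some* k-fold merger occurs among b lineages
    return {k: comb(b, k) * factorial(k - 2) * factorial(b - k) / factorial(b - 1)
            for k in range(2, b + 1)}

def simulate_bsc(n=20):
    b, t, events = n, 0.0, []
    while b > 1:
        rates = k_merger_rates(b)
        total = sum(rates.values())
        t += random.expovariate(total)       # waiting time to the next merger
        r = random.uniform(0.0, total)       # choose merger size k proportionally
        for k, rk in rates.items():
            r -= rk
            if r <= 0.0:
                break
        events.append((t, k))                # k lineages merge into one at time t
        b -= k - 1
    return events

for t, k in simulate_bsc():
    print(f"t = {t:.3f}: {k}-fold merger")

typical runs show occasional large k , the multiple mergers responsible for the star-like subtrees and asymmetric branching discussed in the text .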
in the case of rapidly adapting asexual populations , the effective distribution of the number of offspring has a broad tail , $p(n)\sim n^{-2}$ , which gives rise to the bolthausen - sznitman coalescent . this type of distribution seems to be universal to populations in which individual lineages are amplified while they diversify , and is found in facultatively sexual populations , asexual populations adapting by small steps , as well as populations in a dynamic balance between deleterious and beneficial mutations . asymptotic features of the site frequency spectrum can be derived analytically . one finds that the frequency spectrum diverges as $x^{-2}$ at low frequencies $x$ , corresponding to many singletons . furthermore , neutral alleles close to fixation are common , with the spectrum diverging again as $x\to 1$ . this relative excess of rare and very common alleles is a consequence of multiple mergers , which produce star-like sub-trees , and of the very asymmetric branching at nodes deep in the tree ( compare fig . [ fig : coalescence]c ) . the time scale of coalescence , and with it the level of genetic diversity , is mostly determined by the strength of selection and increases only weakly with population size . essentially , the average time to a common ancestor of two randomly chosen individuals is given by the time it takes until the fittest individuals dominate the population . in most models , this time depends only logarithmically on the population size . background selection refers to the effect of purifying selection on linked loci , which is particularly important if linked regions are long . if deleterious mutations incur a fitness decrement $s$ and arise with genome-wide rate $u$ , a sufficiently large population settles in a state where the number of mutations carried by individuals follows a poisson distribution with mean $u/s$ . individuals loaded with many mutations are selected against , but are continually produced by de novo mutations . all individuals in the population ultimately descend from individuals carrying the fewest deleterious mutations . within this model , the least loaded class has size $n e^{-u/s}$ and coalescence in this class is accelerated by a factor $e^{u/s}$ compared to a neutrally evolving population of size $n$ . for large ratios $u/s$ , the poisson distribution of background fitness spans a large number of fitness classes and this heterogeneity substantially reduces the efficacy of selection . the effect of background selection is best appreciated in a genealogical picture . genetic backgrounds sampled from the population tend to come from the center of the distribution . since the deleterious mutations they carry were accumulated in the recent past , lineages `` shed '' mutations as we trace them back in time until they arrive in the mutation-free class , akin to fig . [ fig : coalescence]d . the resulting genealogical process , a fitness-class coalescent , has been described in the literature , and methods for the analysis of sequence samples under purifying selection are now available . a recent study on the genetic diversity of whale lice suggests that purifying selection and frequent deleterious mutations can severely distort genealogies . the fitness-class coalescent is appropriate as long as muller's ratchet does not yet click . more generally , fixation of deleterious mutations , adaptation , and environmental change will approximately balance . it has been shown that a small fraction of beneficial mutations can be sufficient to halt muller's ratchet .
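the equilibrium just described is easy to evaluate numerically . the following python sketch ( my own , with assumed parameter values ) computes the poisson fitness-class occupancies and the size of the least-loaded class :

from math import exp, factorial

# illustrative parameters (assumptions, not values from the article)
N = 1e6       # population size
U = 0.02      # genome-wide deleterious mutation rate per generation
s = 0.002     # fitness cost per deleterious mutation

lam = U / s                        # mean of the poisson fitness-class distribution
N0 = N * exp(-lam)                 # size of the least-loaded (mutation-free) class
print(f"mean load U/s = {lam:.1f}")
print(f"least-loaded class: N*exp(-U/s) = {N0:.1f} individuals")
print(f"coalescence accelerated by exp(U/s) = {exp(lam):.0f}x")
for k in range(4):
    print(f"class with {k} mutations: ~{N * exp(-lam) * lam**k / factorial(k):.0f} individuals")

with these numbers , only a few dozen of a million individuals occupy the class from which the whole population ultimately descends .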
in this dynamic balance between frequent deleterious and rare beneficial mutations , the genealogies tend to be similar to genealogies under the rapid adaptation discussed above . contradicting neutral theory , genetic diversity correlates only weakly with population size , suggesting that linked selection , or genetic draft , is more important than conventional genetic drift . draft is most severe in asexual populations , for which models predict that fitness differences , rather than the population size , determine the level of neutral diversity . as outcrossing becomes more frequent , the strength of draft decreases and diversity increases . with increasing coalescence times , selection becomes more efficient , as there is more time to differentiate deleterious from beneficial alleles . in obligately sexual populations , most interference is restricted to tightly linked loci , and the number of sweeps per map length and generation determines genetic diversity . since interference slows adaptation , one expects that adaptation can select for higher recombination rates . indeed , positive selection results in indirect selection on recombination modifiers . changing frequencies of outcrossing have been observed in evolution experiments . however , the evolution of recombination and outcrossing rates in rapidly adapting populations remains poorly understood , both theoretically and empirically . the traveling wave models discussed above assume a large number of polymorphisms with similar effects on fitness and a smooth fitness distribution , which are drastic idealizations . more typically , one finds a handful of polymorphisms with a distribution of effects . simulations indicate , however , that the statistical properties of genealogies are rather robust to model assumptions as long as draft dominates over drift . appropriate genealogical models are a prerequisite for demographic inference . if , for example , a neutral coalescent model is used to infer the population size history of a rapidly adapting population , one would conclude that the population has been expanding ; incidentally , expansion is what is inferred in most cases . some progress towards incorporating the effect of purifying selection into estimates from reconstructed genealogies has been made recently . alternative genealogical models accounting for selection should be included in popular analysis programs such as beast . it is still common to assign an `` effective '' size , $n_e$ , to various populations . in most cases , $n_e$ is a proxy for genetic diversity , which depends on the time to the most recent common ancestor .
with the realization that coalescence times depend on linked selection and genetic draft , rather than on the population size and genetic drift , the term $n_e$ should be avoided and replaced by $t_c$ , the time scale of coalescence . defining an effective size via $t_c$ would suggest that the neutral model remains valid as long as $t_c$ is used in place of the census size ; however , we have seen multiple times that drift and draft are of rather different natures and that this difference cannot be captured by a simple rescaling . each quantity then requires its own private $n_e$ , rendering the concept essentially useless . some quantities , like site frequency spectra , are qualitatively different , and no choice of $n_e$ maps them to a neutral model . the ( census ) population size is nevertheless important in discovering beneficial mutations . for this reason , large populations are expected to respond more quickly to environmental change , as we are painfully aware in the case of antibiotic resistance of pathogens . large populations might therefore track phenotypic optima more closely , resulting in beneficial mutations with smaller effects , which in turn might explain their greater diversity . the majority of the models discussed assume a time-invariant fitness landscape . this assumption reflects our ignorance regarding the degree and timescale of environmental fluctuations , although selection in time-dependent fitness landscapes has received some attention . time-variable selection pressures , combined with spatial variation , could potentially have strong effects . similarly , frequency-dependent selection and , more generally , the interaction of evolution with ecology are important avenues for future work . the challenge consists of choosing useful models that are tractable , appropriate , and predictive .
* genetic drift : stochastic changes in allele frequencies due to non-heritable variation in offspring number .
* purifying selection : selection against deleterious mutations .
* positive selection : selection for novel beneficial mutations .
* genetic draft : changes in allele frequencies due to ( partly ) heritable random associations with genetic backgrounds .
* hitchhiking : rapid rise in frequency through an association with a very fit background .
* selective interference : reduction of fixation probability through competition with other beneficial alleles .
* clonal interference : competition between well-adapted asexual subpopulations from which only one subpopulation emerges as winner .
* branching process : stochastic model of reproducing and dying individuals without a constraint on the overall population size .
* epistasis : background dependence of the effect of mutations . epistasis can result in rugged fitness landscapes .
* kingman coalescent : basic coalescence process where random pairs of individuals merge .
* multiple merger coalescent : coalescent process with simultaneous merging of more than 2 lineages .
* bolthausen - sznitman coalescent ( bsc ) : special multiple merger coalescent which approximates genealogies in many models of adaptation .
|
to learn about the past from a sample of genomic sequences , one needs to understand how evolutionary processes shape genetic diversity . most population genetic inference is based on frameworks that assume adaptive evolution is rare . but if positive selection operates on many loci simultaneously , as has recently been suggested for many species including animals such as flies , a different approach is necessary . in this review , i discuss recent progress in characterizing and understanding evolution in rapidly adapting populations , where random associations of mutations with genetic backgrounds of different fitness , i.e. , genetic draft , dominate over genetic drift . as a result , neutral genetic diversity depends weakly on population size but strongly on the rate of adaptation or , more generally , the variance in fitness . coalescent processes with multiple mergers , rather than kingman's coalescent , are appropriate genealogical models for rapidly adapting populations , with important implications for population genetic inference .
|
many algorithms are limited by bandwidth , meaning that the memory subsystem cannot provide the data as fast as the arithmetic core could process it . one solution to this problem is to introduce multi-level memory hierarchies with low-latency and high-bandwidth caches which exploit temporal locality in an application's data access pattern . in many scientific algorithms the bandwidth bottleneck is still severe , however . while there exist many models predicting the influence of main memory bandwidth on the performance , less is known about bandwidth-limited in-cache performance . caches are often assumed to be infinitely fast in comparison to main memory . our proposed model explains what parts contribute to the runtime of bandwidth-limited algorithms on all memory levels . we will show that meaningful predictions can only be drawn if the execution of the instruction code is taken into account . to introduce and evaluate the model , basic building blocks of streaming algorithms ( load , store and copy operations ) are analyzed and benchmarked on three x86-type test machines . in addition , as a prototype for many streaming algorithms we use the stream triad , which matches the performance characteristics of many real algorithms . the main routine and utility modules are implemented in c while the actual loop code uses assembly language . the runtime is measured in clock cycles using the ` rdtsc ` instruction . section [ sec : machines ] presents the microarchitectures and technical specifications of the test machines . in section [ sec : model ] the model approach is briefly described . the application of the model and the corresponding measurements can be found in sections [ sec : theory ] and [ sec : results ] . an overview of the test machines can be found in table [ tab : arch ] . as representatives of current x86 architectures we have chosen intel `` core 2 quad '' and `` core i7 '' processors , and an amd `` shanghai '' chip . the cache group structure , i.e. , which cores share caches of what size , is illustrated in figure [ fig : cache_arch ] . for detailed information about microarchitecture and cache organization , see the intel and amd optimization handbooks . although the shanghai processor used for the tests sits in a dual-socket motherboard , we restrict our analysis to a single core .

[ table [ tab : arch ] : test machine specifications . the cache line size is 64 bytes for all processors and cache levels . ]

the proposed model introduces a systematic approach to understand the performance of bandwidth-limited loop kernels , especially for in-cache situations . using elementary data transfer operations we have demonstrated the basic application of the model on three modern quad-core architectures . the model explains the bandwidth results for different cache levels and shows that performance for bandwidth-limited kernels depends crucially on the runtime behavior of the instruction code in the l1 cache . this work proposes a systematic approach to understand the performance of bandwidth-limited algorithms . it does not claim to give a comprehensive explanation for every aspect of the behavior of the three covered architectures . work in progress involves application of the model to more relevant algorithms , e.g. , the jacobi or gauss - seidel smoothers .
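as a rough illustration of the kernel under study , the following python / numpy sketch ( my own ; the original benchmarks use c with assembly loop code and rdtsc timing , so the numbers produced here are only indicative ) measures an effective bandwidth for the vector triad :

import time
import numpy as np

# minimal numpy analogue of the triad a(i) = b(i) + c(i) * d(i); note that
# numpy materializes a temporary for c * d, adding traffic the model ignores
N = 4_000_000                      # array length (assumed); working set ~128 MB
a = np.empty(N)
b, c, d = (np.random.rand(N) for _ in range(3))

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    np.add(b, c * d, out=a)        # the triad kernel
t1 = time.perf_counter()

bytes_moved = reps * 4 * 8 * N     # 3 loads + 1 store, 8 bytes per double (ignoring write-allocate)
print(f"effective bandwidth: {bytes_moved / (t1 - t0) / 1e9:.2f} GB/s")

shrinking N so that the working set fits into a given cache level exposes the in-cache bandwidths that the model addresses .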
future work will include verification and refinements of the model for the architectures under consideration . an important component is to fully extend it to multi-threaded applications . another possible application of the model is to quantitatively measure the influence of hardware prefetchers by selectively disabling them .

w. jalby , c. lemuet and x. le pasteur : wbtk : a new set of microbenchmarks to explore memory system performance for scientific computing . international journal of high performance computing applications , vol . 18 , 211 - 224 ( 2004 ) .
|
we present a performance model for bandwidth-limited loop kernels which is founded on the analysis of modern cache-based microarchitectures . this model allows an accurate performance prediction and evaluation for existing instruction codes . it provides an in-depth understanding of how the performance for different memory hierarchy levels is made up . the performance of raw memory load , store and copy operations and a stream vector triad are analyzed and benchmarked on three modern x86-type quad-core architectures in order to demonstrate the capabilities of the model .
|
signals of a new nature were first detected in a run of experiments of 1994 - 1996 , carried out with the aid of a modernized high-precision quartz gravimeter `` sodin '' and a magnet attached to it [ 1 - 4 ] . the signals appeared as permanent smooth peak-shape pulses with durations from 2 to 10 min . and with amplitudes up to 15 l ( l is the amplitude of the moon tide ) . the experiments were carried out within a special box of the gravimetric laboratory of ssai msu , in a basement room at a level of 2 meters underground . the duration of the uninterrupted experiment was no more than 30 days . apart from the last peak ( 19.04.96 ) , all results were obtained on the basis of measurements with one gravimeter . in the last experiment two gravimeters were used . they were positioned on the same base in the box : one of them was with a magnet , the other ( the reference one ) was without a magnet . both gravimeters were connected to the same power network but had separate stabilized power sources . in the experiment with two gravimeters , the one with the magnet detected an anomalous event which was not fixed by the gravimeter without the magnet . since the procedure of investigating the signals of new nature with the aid of two gravimeters was of short duration ( and the main results were obtained by one gravimeter ) , the run of experiments in 1994 - 1996 could be considered only as a preliminary one . nonetheless , it should be noted that the signals ( 12 peaks ) were concentrated in space around a particular stellar direction , specified by its right ascension and declination in the second equatorial system . this was nearly coincident with a chosen spatial direction found in the course of investigating the assumed new anisotropic interaction with the aid of a torsion balance arranged inside high-current magnets [ 3 - 8 ] , as well as when observing time changes in the intensity of the beta-decay of radioactive elements [ 3,4,9 - 10 ] in the process of their rotation together with the earth . the new run of experiments was aimed at more precise measurements owing to the elimination of technical errors of the preceding investigations . in the run of experiments of 1999 - 2000 , the method of two gravimeters , with a magnet attached to one of them , was used . the gravimeters were positioned in an underground gravimetric laboratory of ssai msu , at a depth of 10 m , on a special base separated from the foundation of the building . a recently made stabilized power system could feed the gravimeters for up to 2 hours in the case of an accidental mains failure . the information from the gravimeters was transmitted to personal computers ( pcs ) .
a schematic diagram of the quartz sensitive system is shown in fig.1 . its main component is a quartz lever 1 ( 2 cm in length with a platinum mass of _ m = 0.05 g _ ) suspended on torsion quartz fibres 2 and additionally off-loaded by a vertical quartz spring 3 . such a construction gives the device a high sensitivity to changes in gravity ( measured in small fractions of the free-fall acceleration ) as well as sufficient protection against microseismic disturbances . the optical recording system comprises a halogen lamp 4 fed from a highly stabilized power source , whose light enters the instrument through the objective lens 5 with the aid of optical fibers . the quartz rod 6 welded to the lever is a cylindrical lens ; it forms an image of the point light source , which falls further on a photosensor rule 7 with micrometer-scale sensitivity . the digital signal from the photosensor rule output enters further , through a special interface , into a computer ( pc ) which executes preliminary processing and averaging of the data over an interval of 1 min . the accuracy of the minute data depends on the microseismic noise level , which varies considerably in the course of the day . the sensitive system of the instrument includes some additional units ( not shown in fig.1 ) protecting it from thermal , electrostatic , and atmospheric pressure disturbances . calibration of the instrument is carried out by the inclination procedure with an accuracy quite sufficient for the experiment being considered . the amplitude of the changes in gravity constantly recorded by the device , due to moon - sun tides and the corresponding deformation of the earth , allows one to evaluate the magnitudes of possible anomalous effects against the background of the tides . to measure the new interaction by the sodin gravimeter , a constant magnet ( 60 mm in diameter ) was attached to it in such a way that the vector potential lines of the magnet in the vicinity of a test platinum weight ( see fig.1 ) were directed perpendicular to the earth's surface . the magnet played the role of a peculiar amplifier of the new force ( see refs . [ 3,4 ] for details ) . to illustrate the experiment , figs.2 - 6 show some temporal fragments of the recordings of the gravimeters . they demonstrate smooth peak-form overshoots of the `` sodin '' gravimeter s209 with the attached constant magnet . at the first stage of the experiments , the gravimeter with the magnet and the reference gravimeter worked on different time scales ( moscow ( msk ) and greenwich ( utc ) time , respectively ) . therefore a shift of the curves is seen in figs.2,3 , which corresponds to that of the time zones ( i.e. , 3 hours ) . it is seen from the figures that the amplitudes of the overshoots are in some cases ( figs.4,5 ) much greater than the amplitude of the moon tide . in february - march 2000 , the overshoots corresponded to an increase in the gravitational effect ; in may - june , on the contrary , to its decrease . as opposed to the experiments of 1994 - 1996 , in which the duration of the peaks varied ( from 2 to 10 min . and more ) , the peak duration in the present run was practically constant and equaled 2 - 3 min .
table 1 presents all the anomalous deflections recorded by the gravimeter with the magnet and not fixed by the reference gravimeter without the magnet . consider in detail the experimental fragment shown in fig.6 . as is seen , the events 1 , 3 , 4 , corresponding to local earthquakes , are neatly coincident in the recordings of both gravimeters , but the event 2 was fixed only by the gravimeter with the magnet . as was said above , the instruments in the course of the experiment were arranged on a base separated from the foundation of the building , at a depth of 10 m . therefore , as is clearly seen from fig.6 , various oscillations of the earth acted identically on both gravimeters . their design gives them good protection against thermal , electrostatic , and barometric disturbances . the gravimeters were powered from the same network ; therefore , if some disturbance of `` unknown nature '' could pass through it , that would manifest itself in both readings . the possible `` jumps '' in quartz , occurring sometimes when using the `` sodin '' gravimeters , have quite a different form and cannot explain the nature of the smooth minute-scale peaks fixed in the experiment . the result obtained also cannot be explained by noise in the computers because , during their long use before the run of experiments of 1994 - 1996 , as well as in other experiments carried out in 2001 , such peaks were not observed . the external electromagnetic noise was practically minimal and , most importantly , would have told uniformly on both gravimeters . one also cannot explain the obtained results by the influence on the gravimeter of various hypothetical particles predicted in modern gauge theories ( like axions , photinos , gravitinos , etc . ) [ 11,12 ] , since both gravimeters are equal in their properties . table 1 shows the times of detection of the peaks by the gravimeter with the magnet during the first half-year of 2000 , as well as the times of the corresponding events on the sun , with an indication of the character of the event and the observatory from which the information was obtained . more complete information about the solar events accompanying the peaks can be found in refs . [ 13 ] . according to recent investigations of the anisotropic properties of the new force presented in ref . [ 14 ] , this force is directed at each point of space over a cone formed around the vector * a * ( the opening angle of the cone and the equatorial coordinates of * a * are given in ref . [ 14 ] ) ; the results of [ 14 ] are in good agreement with [ 15 ] . the force can act also in the direction of * a * if the object being acted upon has a closed circular current . this was observed in experiments investigating changes in the intensity of the beta-decay of radioactive elements [ 15 ] . an analysis of the experimental material presented here has shown that all the events in table 1 ( except event ( 2 ) in fig.6 ) corresponded to observation times when the direction of the normal to the surface of the earth ( i.e. , the direction of maximum sensitivity of the gravimeters ) was coincident , to within the experimental precision , with a generator of the above-said cone . the time of the event ( 2 ) in fig.6 indicates a direction of the normal to the earth's surface coincident with the coordinates of the vector * a * itself ( i.e. , with the axis of the cone ) .
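as an illustration of this geometric check ( my own sketch , not part of the original analysis ) , the following python fragment computes the angle between the local zenith , i.e. , the direction of maximum sensitivity of the gravimeters , and a fixed celestial vector * a * as the earth rotates ; the coordinates of * a * , the cone half-angle and the site latitude are placeholder assumptions , since the paper's numerical values are not reproduced here :

from math import radians, degrees, sin, cos, acos

# placeholder assumptions (not values from the paper):
ra_A, dec_A = radians(293.0), radians(36.0)   # hypothetical equatorial coordinates of A
half_angle = 50.0                              # hypothetical cone half-angle, degrees
lat = radians(55.7)                            # approximate latitude of a Moscow laboratory

def zenith_angle_to_A(lst_hours):
    # the zenith has declination = latitude and right ascension = local sidereal time
    ra_z = radians(lst_hours * 15.0)
    cos_ang = sin(lat) * sin(dec_A) + cos(lat) * cos(dec_A) * cos(ra_z - ra_A)
    return degrees(acos(cos_ang))

for lst in range(0, 24, 3):
    ang = zenith_angle_to_A(lst)
    on_cone = abs(ang - half_angle) < 5.0      # within 5 degrees of a cone generator
    print(f"LST {lst:2d} h: zenith-A angle = {ang:5.1f} deg, near cone generator: {on_cone}")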
thus , the experiments carried out with the use of two gravimeters have confirmed the earlier results obtained with the aid of only one gravimeter with an attached magnet . their results ( the direction of the new force ) are coincident with those of the experiments investigating the anisotropic properties of space with the aid of plasma generators , as well as the changes in the intensity of the beta-decay of radioactive elements . the authors are grateful to the participants of the seminar at the astrocosmic center of the physical institute of ras , and personally to prof . v.v.burdyuzsa , for a fruitful discussion of the experimental results , as well as to e.p.morozov , l.i.kazinova , and a.yu.baurov for help in the preparation of the text of the paper .

fig.1 . sensitive system of the sodin quartz gravimeter : 1 - beam ; 2 - horizontal quartz-wire suspension system ; 3 - main spring ; 4 - lamp ; 5 - object lens ; 6 - beam ; 7 - ccd scale ; 8 - prism ; 9 , 10 - thermocompensator ; 11 , 12 , 13 , 14 - micrometric compensation mechanism ; 15 - constant magnet .
|
as a consequence of long-term ( 2000 ) observations with a system of two high-precision quartz gravimeters ( one of them with an attached magnet ) placed in a special gravimetric laboratory ( at a depth of 10 m ) on a common base separated from the foundation of the building , signals of a new nature were detected . they had a smooth peak-type shape , several minutes in duration , and amplitudes often greater than that of the moon tide . the nature of the signals cannot be explained in the framework of traditional physical views , but can be qualitatively described with the aid of a supposed new interaction connected with the hypothesis of the existence of the cosmological vectorial potential * a * , a presumed new fundamental vectorial constant .
|
the dynamic properties and conditional independence structure of stochastic kinetic models are analyzed using a marked point process framework . a stochastic kinetic model , or skm , is a highly multivariate jump process used to describe chemical reaction networks . skms have become particularly important as models of the network of interacting biomolecules in a cellular system . the necessity of a stochastic process approach to the dynamics of such biochemical reaction systems is now clear , with skms providing continuous-time , mechanistic descriptions firmly grounded in chemical kinetic theory and the underlying statistical physics . the gillespie algorithm for simulation of skms is now an important tool in the science of systems biology . however , there are few analytical tools for the study of the dynamic properties of skms ( with some notable exceptions ) , especially when the skm is of modest or high dimension . this paper develops what appear to be the first methods for analyzing the local and global dynamic independence structure implied by a given skm , and shows how these may be used to uncover the modular architecture of the network at coarser or finer levels of resolution . the required information about the parameters of the skm is modest , and consistent with the partial information currently available for many biochemical reaction networks . skms are often thought of as continuous-time , homogeneous markov chains having nonfinite state space . however , the fact that there is a finite number of possible types of jump of the process , corresponding to the different types of possible biochemical reaction in the system , allows formulation of both the skm and its subprocesses as multivariate counting processes . this turns out to be a fruitful approach for the problems addressed here . in fact , the markov property is not needed for the results and methods of the paper . the main contributions may be summarized as follows . graphical models for skms and dynamic molecular networks are introduced . these kinetic independence graphs ( kigs ) are directed , possibly cyclic graphs whose vertices are the different types ( or species ) of biomolecule in the system . the kig encodes local independences that result from a lack of dependence of the conditional intensity of a subprocess on the internal history of some of the species . given a partition of the species set into three groups , new results are derived establishing the conditional independence , over a time interval , of the internal histories of the first two groups , conditional on a history of the jumps in the third . conditions under which this conditioning history corresponds to the internal history of the third group are derived and are easily checked computationally . such a conditional independence is termed a global ( as opposed to local ) dynamic independence here . the new results enable a mathematical definition of a modularization of an skm using its implied dynamics . graphical decomposition methods are developed for the identification of nested modularizations that allow the extent of coarse-graining to be varied , and provide computationally efficient algorithms for large skms . junction tree representations are shown to provide a useful tool for visualizing , summarizing and manipulating the modularizations .
applying the techniques of the paper to an skm that represents detailed empirical knowledge of the metabolic network of the human red blood cell yields new insight into the biological organization and dynamics of this cellular system . graphical models and their associated analytical and computational methods allow the modularization of large , complex models into smaller components and provide a particularly effective means of representing and analyzing conditional independence relationships . certain graphical approaches are now used quite extensively in computational biology and have also been readily assimilated by the wider biological scientific community , which has long found diagrammatic representations of reaction schemes useful . however , rigorous graphical representations of biochemical networks as dynamic processes , that is , graphical models in the statistical sense , do not appear to have been considered previously . indeed , graphical models for continuous-time stochastic processes in general are at an early stage of development . didelez introduced graphs based on the local independence structure of conditional intensities for finite-state , composable markov processes and for multivariate point processes ; an earlier contribution treats finite-state markov processes . skms require new methods since interest is in dynamic independences between groups of species rather than in the counting processes for the different types of reaction per se . furthermore , the markov process for species concentrations implied by the skm neither has finite state space , nor is it composable for most skms of interest ( see section [ sec3 ] ) . in practice , the skm is constructed from a large list of the biochemical reactions that comprise the network under study . this list , or `` network reconstruction , '' is usually compiled using extensive experimental evidence in the literature on the component parts of the system and their molecular interactions . indeed , the approaches of molecular biology and genetics , including genome sequencing , have already proved remarkably successful in providing life scientists with a very extensive `` parts list '' for biology . systems biology is an increasingly influential , interdisciplinary approach that aims to describe mathematically the stochastic dynamic behavior of the whole system as an emergent property of the network of interacting biomolecules . a principal challenge is thus to map from fine-level descriptions such as reaction lists and their implied skms to higher-level , coarse-grained descriptions of the dynamic properties . related is the increasingly held view that biochemical reaction networks are modular , that is , their architecture can be decomposed into units that perform `` nearly independently , '' and that identifying such modules is a crucial step in the endeavor to understand and , ultimately , to selectively control cellular systems . however , it is recognized that rigorous , mathematical definition and identification of modularizations for biochemical networks is difficult , especially from a dynamic perspective .
as a result , such modularization techniques have been slow to develop , and there seems to be no prior work allowing for stochastic and non-steady-state dynamics . the dynamic independence results and associated graphical methods developed here provide an effective means of addressing these problems . broadly speaking , the paper also illustrates the utility of a statistical and probabilistic approach to the dynamics of biological systems which , despite their stochastic nature , have hitherto more often received the attention of physical scientists . the structure of the paper is as follows . section [ sec21 ] introduces skms and reaction networks in a manner requiring no previous background in systems biology or biochemistry . section [ sec22 ] defines an skm as a marked point process and provides a formal construction using the well-known gillespie algorithm as a point of departure . section [ sec23 ] then shows how to accommodate subprocesses of the skm in a counting process framework and discusses their conditional intensities and internal histories ( natural filtrations ) . section [ sec3 ] introduces the kinetic independence graphs , or kigs , and examines local independence and graphical separation in the undirected kig . section [ sec4 ] then relates these to global conditional independence of species histories in theorems [ main1 ] and [ main2 ] , which are central to the paper . rigorous proofs of these theorems are quite involved and are given as appendix [ appa ] . section [ sec5 ] develops graphical decomposition methods and associated theory for the identification of modularizations of skms , while section [ sec6 ] applies the techniques of the paper to the skm of the human red blood cell . section [ sec7 ] highlights some directions for future research . a stochastic kinetic model is a continuous-time jump process modeling the state of a chemical system , $x_t = [x_t(1),\ldots,x_t(v)]^{\prime}$ , where $x_t(j)$ is the number of molecules of species $j$ present at time $t$ . the process jumps whenever one of the $m$ types of reaction occurs , and the $v \times m$ matrix $s$ , whose $m$th column gives the change in $x$ caused by a single occurrence of reaction $m$ , is usually known as the stoichiometric matrix . any two columns of $s$ are taken to be nonequal ; hence , there is a bijection between the mark space and $\{1,\ldots,m\}$ . a formal construction of an skm is given below in section [ sec22 ] , but it is helpful at this stage to note the following linear equation determining the dynamic evolution of $x_t$ : $x_t = x_0 + s n_t$ , where $n_t = [n_t(1),\ldots,n_t(m)]^{\prime}$ and $n_t(m)$ counts the occurrences of reaction $m$ during $(0,t]$ . denote by $\mathcal{f}_t$ the internal history of the entire process and by $\mathcal{f}_t^m$ the internal history of the counting process for reaction $m$ . the probability law of $n$ , and hence that of $x$ , is determined by what are known as the $\mathcal{f}_t$-conditional intensities of the reactions . each intensity is a local rate of reaction in exactly the chemical sense : heuristically , the intensity of reaction $m$ at time $t$ , multiplied by $dt$ , gives the conditional probability that reaction $m$ occurs in $[t , t+dt)$ , given the history of the whole system . a chemical reaction of type $m$ consumes $\alpha_j$ molecules of each reactant species $j \in r[m]$ and produces $\beta_j$ molecules of each species $j$ in the subset $p[m] := \{j \in \mathcal{v} \mid \beta_j > 0\} \subset \mathcal{v}$ . the species in $r[m]$ are called the _ reactants _ ( or inputs ) of the reaction , the species in $p[m]$ are the _ products _ , and the integers $\alpha_j , \beta_j$ are known as the stoichiometries of the reaction . if a species $j$ is a reactant but not a product , then its corresponding entry in the stoichiometric matrix ( i.e. , the change in the level of $j$ caused by reaction $m$ ) is given by $s_{jm} = -\alpha_j$ . alternatively , if species $j$ is a product but not a reactant , then $s_{jm} = \beta_j$ . there is no assumption that $r[m] \cap p[m] = \varnothing$ . a reaction is usually ( but need not be ) `` elementary , '' in the sense that the total number of reactant molecules , $\sum_j \alpha_j$ , is small . certain reactions are `` coupled '' in that a product of one reaction is also a reactant of another reaction . from a stochastic process perspective , the specification of the list of component reactions for all $m$ implies dependences between the levels ( or concentrations ) of the different biomolecules .
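to fix ideas , here is a minimal python sketch ( an illustration of the notation above , not code from the paper ; the species names and reactions are assumptions ) that encodes a tiny hypothetical reaction list as a stoichiometric matrix $s$ and checks the evolution equation $x_t = x_0 + s n_t$ for a given vector of reaction counts :

import numpy as np

# hypothetical 3-species, 2-reaction system (assumed for illustration):
#   reaction 1:  A + B -> C     (reactants A, B; product C)
#   reaction 2:  C -> A + B     (the reverse reaction)
species = ["A", "B", "C"]
S = np.array([[-1,  1],    # change in A per occurrence of reactions 1, 2
              [-1,  1],    # change in B
              [ 1, -1]])   # change in C

x0 = np.array([100, 80, 0])        # initial molecule numbers (assumed)
N_t = np.array([30, 5])            # counts of each reaction type during (0, t]

x_t = x0 + S @ N_t                 # the linear evolution equation
print(dict(zip(species, x_t)))     # {'A': 75, 'B': 55, 'C': 25}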
as a simple but nonetheless biochemically meaningful illustration , consider the following example of an skm . [ example1_wellbeh ] consider an skm with 5 different species and 6 reactions , in which a gene is responsible for the production of molecules of a protein via an intermediate ( mrna ) species . in this simplified representation , the gene and the mrna act as simple catalysts in the first two reactions ( `` transcription '' and `` translation , '' respectively ) . the third reaction consists of the binding of 2 molecules of the protein ( the sole reactant ) to form a new , dimeric molecule ( the sole product ) . the fourth reaction is the reverse of the third . the fifth reaction sets up a `` negative feedback cycle '' whereby the production of the protein is negatively self-regulated by the binding of the dimer to the gene to form a distinct , bound species ; the sixth reaction is the reverse unbinding . genes bound in this way are not then available to participate in transcription , thus preventing over-production of the protein . we shall return later to the same example . the gillespie stochastic simulation algorithm has become an important tool in biological science for studying biochemical and cellular systems . given its familiarity in mathematical and computational biology , the following construction of an skm as a marked point process takes as its point of departure the conditional distributions employed in the gillespie algorithm . for our purposes , the algorithm is usefully viewed as outputting a realization of the mpp , from which the resultant process $x$ is easily constructed as in ( [ xt ] ) below . readers less concerned with formal constructions and already familiar with stochastic kinetics may proceed safely to section [ sec23 ] after noting definition [ skm ] of an skm and ( [ cmgm ] ) for the conditional reaction intensities ( or `` hazards '' ) . denote the numbers of molecules of all species at time $t$ by $x_t$ . let $x_0$ be the initial , deterministic state of the system , and define $t_0 = 0$ . we write the $\sigma$-field generated by the first $s$ points and marks as $\mathcal{g}_s$ ; also let $\mathcal{g}_s^-$ denote the corresponding $\sigma$-field in which the mark of the $s$th point is excluded from the generating collection of random variables . now introduce the important propensity ( or reaction rate ) function $a_m(x)$ for the $m$th reaction , where $a_m(\cdot)$ is continuous . the conditional distributions ( [ wait ] ) and ( [ jump ] ) implied by stochastic kinetic theory and employed in the gillespie algorithm are as follows : the waiting time to the next occurrence of a jump ( reaction ) is exponentially distributed with parameter equal to the sum of the propensities evaluated at the current state , and the mark ( or jump ) distribution selects reaction $m$ with probability proportional to $a_m$ evaluated at the current state . note that both the waiting time and mark distributions depend only on $x_{t_s}$ , the levels of the species present following the $s$th reaction . the pure jump process is then given straightforwardly by ( [ xt ] ) , and it is well known that $x$ is a time-homogeneous markov chain under this law . it turns out to offer significant advantages and simplification to adopt an mpp framework for the problems addressed in the paper . an skm is thus defined here directly in terms of the mpp and its corresponding counting processes . it is implicit in our definition of an mpp that the jump times are strictly increasing . thus , reactions occur instantaneously and no two reactions ever have identical occurrence times in continuous time . the physical interpretation is that reaction durations are negligible and may be ignored . an skm is then a marked point process on $[0,\infty)$ with mark space given by the columns of $s$ , where no 2 columns of $s$ are equal , and where the probability measure is such that ( [ wait ] ) and ( [ jump ] ) hold almost surely .
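the construction just described is exactly what the gillespie algorithm simulates . the following minimal python sketch ( my own , with assumed mass-action propensities and rate constants ) draws the exponential waiting times and the mark distribution for a hypothetical reversible dimerization :

import numpy as np
rng = np.random.default_rng(42)

S = np.array([[-2, 2],        # protein P: 2P -> P2 consumes two P
              [ 1, -1]])      # dimer P2
x = np.array([50, 0])         # initial state (assumed)
k = np.array([0.002, 0.1])    # rate constants (assumed)

def propensities(x):
    # mass-action hazards: a_1 = k1 * P(P-1)/2 for 2P -> P2, a_2 = k2 * P2
    return np.array([k[0] * x[0] * (x[0] - 1) / 2.0, k[1] * x[1]])

t, t_end = 0.0, 10.0
while True:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)          # waiting time ~ exponential(a0)
    if t > t_end:
        break
    m = rng.choice(len(a), p=a / a0)        # mark distribution: reaction m w.p. a_m / a0
    x = x + S[:, m]
print("state at t_end:", dict(zip(["P", "P2"], x)))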
equivalently , the skm may be denoted by the corresponding multivariate counting process ( mvcp ) $n = [n(1),\ldots,n(m)]^{\prime}$ , where $n_t(m)$ counts the number of reactions of type $m$ that occur during $(0,t]$ . when $n_t(m)$ has finite expectation , its expected value equals the expected integral of the corresponding intensity over $(0,t]$ . for any subset $a$ of molecular species , let the vector process $x^a$ denote the corresponding subprocess of $x$ . we identify $x^a$ with its mvcp , analogously to the treatment of $x$ above . for $a \subseteq \mathcal{v}$ , consider the subset of reactions $\delta(a)$ that change $a$ ( the level of some species in $a$ ) , that is , those reactions $m$ for which the subvector of the $m$th column of $s$ corresponding to the elements of $a$ is nonzero . one can identify $x^a$ with the mpp whose jump times correspond to the occurrence of some reaction in $\delta(a)$ and whose marks give the resultant jumps in the elements of $a$ . this results in the following definition of an skm subprocess . [ nadeltaa ] the subprocess $n^a$ is the mvcp counting the occurrences of the reactions in $\delta(a)$ , distinguished according to the change in $a$ that they produce ; note that if $[a , b]$ is a partition of $\mathcal{v}$ , then $\delta(a) \cup \delta(b)$ is the whole reaction set . a proof of the measurability properties used here is given in appendix [ prooffs ] . heuristically , the probability , conditional on the history of the whole system , that during $[t , t+dt)$ there is no change in $a$ is determined by the intensities of the reactions in $\delta(a)$ , and similarly for the probability of each possible jump of $x^a$ . interest centers on dynamic independences between groups of species rather than on the counting processes for the different types of reaction per se . thus , the vertex set of the graph will be the species set $\mathcal{v}$ rather than the reaction set . it is worth noting that existing graphical models for continuous-time markov chains are not applicable to skms because the markov process $x$ neither has finite state space , nor is it composable for most skms of interest . roughly speaking , composability implies that any change of state in $x$ can be represented as a change in only one of several components . consider the use of $x^a$ and $x^b$ as components [ if $x$ is composable with more than 2 subsets of species as components , it must also be composable with just 2 components ] : either the paths of $x^a$ and $x^b$ have common jump times , contradicting that $x$ is composable , or they constitute 2 separate skms , which then require a new method for their individual analysis . the kinetic independence graph of an skm is defined as follows . [ lignew ] the directed graph with vertex set $\mathcal{v}$ is the _ kinetic independence graph _ ( _ kig _ ) of the skm if the parents of each species $j$ are given by $\bigcup_{m \in \delta(j)} r[m]$ , the set of reactants of all reactions that change species $j$ . since only partial information about the skm is required for construction of the kig , the necessary information is currently available for many biochemical reaction networks : for each species $j$ , it is required to know only the reactants $r[m]$ of the reactions that change $j$ . measurability of a random variable with respect to the internal history of a subprocess implies that its realized value may be `` computed '' from the sample path of that subprocess . the motivation for definition [ lignew ] of the kig of an skm is that the local evolution of species $j$ depends only on the stochastic rates of the reactions that change the number of molecules ( the level ) of $j$ , which in turn depend only on the levels of their reactants . to make this exact , the concept of local independence is needed . let $a , b \subseteq \mathcal{v}$ . we will say that $b$ is _ locally independent _ of $a$ ( given the remaining species ) if and only if the $\mathcal{f}_t$-intensity of $n^b$ is , for all $t$ , measurable with respect to the internal history of the process excluding $a$ ; that is , the internal history of $a$ is irrelevant for the $\mathcal{f}_t$-intensity of the species in $b$ . only intensities of subprocesses conditional on the history of the whole system , $\mathcal{f}_t$ , are considered here ( as opposed to intensities conditional on coarser histories ) .
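the kig is cheap to build from a reaction list . the following python sketch ( my own concretization of the gene-expression example , with assumed species names g , r , p , p2 , gb ) constructs the parent sets of definition [ lignew ] , the closure cl(b) used in the next proposition , and checks a graphical separation ( anticipating the notation introduced below ) :

from collections import defaultdict, deque

# hypothetical reaction list; reactant / change sets are my own assumptions
reactions = [
    {"reactants": {"g"},       "changes": {"r"}},              # transcription
    {"reactants": {"r"},       "changes": {"p"}},              # translation
    {"reactants": {"p"},       "changes": {"p", "p2"}},        # dimerization
    {"reactants": {"p2"},      "changes": {"p", "p2"}},        # dissociation
    {"reactants": {"g", "p2"}, "changes": {"g", "p2", "gb"}},  # repressor binding
    {"reactants": {"gb"},      "changes": {"g", "p2", "gb"}},  # unbinding
]

# directed kig: edge j -> k whenever j is a reactant of a reaction changing k
parents = defaultdict(set)
for rx in reactions:
    for k in rx["changes"]:
        parents[k] |= rx["reactants"]

def closure(B):
    # cl(B): B together with its parents in the kig
    return set(B).union(*(parents[j] for j in B))

def separated(A, B, D):
    # graphical separation of A from B by D in the undirected kig, via bfs
    und = defaultdict(set)
    for k, ps in parents.items():
        for j in ps:
            und[j].add(k); und[k].add(j)
    seen = set(A) - set(D); queue = deque(seen)
    while queue:
        u = queue.popleft()
        for w in und[u] - set(D) - seen:
            seen.add(w); queue.append(w)
    return not (seen & set(B))

print("cl({'r'}) =", closure({"r"}))                                 # {'r', 'g'}
print("{'r'} _|_ {'gb'} | {'g','p'} ?", separated({"r"}, {"gb"}, {"g", "p"}))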
as a consequence of definition [ lignew ] , one can read off from the kig , for any collection of vertices $b$ , those subprocesses with respect to which $b$ is locally independent , that is , which are irrelevant for the instantaneous evolution of $b$ . denote the closure of $b$ , that is , the union of $b$ with its parents in the kig , by $\mathrm{cl}(b)$ . [ local ] let $g$ be the kig of the skm ; then $b$ is locally independent of $\mathcal{v} \setminus \mathrm{cl}(b)$ . this holds because the $\mathcal{f}_t$-intensity of $n^b$ is determined by the intensities of the reactions in $\delta(b)$ , which depend only on the levels , just prior to $t$ , of their reactants ; recalling that the parents of $b$ are $\bigcup_{m\in\delta(b)} r[m]$ , it follows that these reactants are contained in $\mathrm{cl}(b)$ , and the required measurability follows by lemma [ filts ] . thus , each intensity is measurable with respect to the internal history of $\mathrm{cl}(b)$ , and the remainder of the proposition follows immediately . proposition [ local ] accords with chemical intuition : given the internal history of $\mathrm{cl}(b)$ at time $t$ , the levels of the species in $\mathrm{cl}(b)$ , and hence the rates of the reactions that change $b$ , are determined . for this reason , one cannot assert in proposition [ local ] that the intensity is measurable with respect to the internal history of $b$ alone , but rather with respect to that of $\mathrm{cl}(b)$ . more generally , a particular skm may imply further local independences than those encoded by the kig , for example , due to a deterministic relationship between two subsets of species arising from a chemical conservation relation , but this level of knowledge about the skm is not assumed in constructing the kig . graphical separations in the undirected version of the kig , written $g^u$ , are central in what follows . diagrammatically , $g^u$ is the undirected graph obtained from $g$ by substituting lines for arrows . let $a , b , d \subseteq \mathcal{v}$ . the notation $a \perp b \mid d$ stands for the _ graphical separation _ of $a$ from $b$ by $d$ , that is , the property that every sequence of edges ( or path ) in $g^u$ that begins with some vertex in $a$ and ( without any repetition of vertices ) ends with some vertex in $b$ includes a vertex in $d$ . with a separating partition $[a , b , d]$ of $\mathcal{v}$ , no species in $a$ participates as a reactant in any reaction that changes $b$ , and vice versa ; therefore , for example , the reactants of the reactions changing $b$ are contained in $b \cup d$ . the 2 module `` residuals '' are given by $a$ and $b$ . each module residual is locally independent of the other given that module's internal history . furthermore , the 2 modules are conditionally independent given the history of their intersection , $d$ . in fact , these 2 modules correspond to the maximal prime subgraphs of $g^u$ for this example ( see definition [ mpd ] ) . the graphical methods for identifying skm modularizations in section [ sec5 ] are , broadly speaking , also based around the maximal prime decomposition of the undirected kig . this section will present the theorems establishing that , for a partition $[a , b , d]$ of $\mathcal{v}$ , the conditional intensities of $n^a$ ( resp . , $n^b$ ) , conditional on the history of $a \cup d$ ( resp . , $b \cup d$ ) , are unchanged when the conditioning $\sigma$-field also includes the internal history of $b$ ( resp . , $a$ ) . roughly speaking , $a$ and $b$ then evolve independently , given $d$ , over any time interval . let the skm be as in definition [ skm ] , and let $[a , b , d]$ be a partition of $\mathcal{v}$ . under the reference measure , the reaction counting processes $n(1),\ldots,n(m)$ are independent ( a standard property of the multivariate poisson process ) , and hence the $\sigma$-fields they generate are independent . of course , two or more of the subprocesses may count some of the same reactions . however , denoting by $\delta^*(a)$ the reactions that change $a$ alone , the reaction set can be partitioned accordingly . [ deltas ] let $[a , b , d]$ be a partition of $\mathcal{v}$ , the species set of an skm . define $\delta(ad)$ , the set of reactions that change $a$ and $d$ , but not $b$ ; similarly define $\delta(bd)$ and $\delta(abd)$ , and denote the reactions that change $a$ alone by $\delta^*(a)$ . then the reaction set is the disjoint union of these sets . [ stdskm ] an skm is a _ standard skm _ if it satisfies all of the following : ( i ) every reaction changes at least 1 species ; ( ii ) every species in $\mathcal{v}$ is changed by at least one reaction , that is , no row of $s$ is identically zero ; ( iii ) if a zeroth order reaction is included ( i.e. , a reaction with $r[m] = \varnothing$ ) , then it has a single product species ; and ( iv ) for each reaction $m$ with reactants , $|r^{\delta}[m] \setminus \{r^{\ast}[m]\}| \leq 1$ , where $r^{\delta}[m]$ denotes the reactants changed by $m$ and $r^{\ast}[m]$ is a designated such reactant .
the first condition of definition [ stdskm ] is obvious . the second does not preclude an effect of the concentration of species that are constant over time ( via the propensity functions or the rate constants ) . the third is just a convention . the fourth ensures that , if a reaction changes several of its reactants , all but at most one of them can be identified from the changes they undergo . [ identgamma ] let the skm be a standard skm . a subset of reactions , $\gamma$ , is said to be identified by consumption of reactants if and only if : ( i ) for all $m \in \gamma$ , every reactant is strictly consumed , provided that $r[m] \neq \varnothing$ ; and ( ii ) no 2 reactions in $\gamma$ consume reactants identically , where consumption refers to the vector formed by setting all positive elements of the corresponding column of $s$ to zero . [ onidentgam ] condition [ identgamma ] implies that no 2 reactions in $\gamma$ change reactants identically ; hence , the reactions in $\gamma$ are identified uniquely by their consumption of reactants . condition [ identgamma ] will be satisfied by most skms of interest , possibly after explicit inclusion of enzymes in reaction mechanisms . although autocatalytic reactions , in which a reactant is also produced by the same reaction , violate condition ( i ) , these could be accommodated by instead including a more detailed mechanism in which the intermediate steps are written out explicitly . an alternative approach would be to work with the graph obtained from the undirected version of the kig by adding an edge between any two reactants of the same reaction . therefore , if the kig is replaced by this augmented graph , condition [ identgamma ] can be dropped from the statements of theorems [ main1 ] and [ main2 ] , and from that of corollary [ main3 ] . we are now in a position to state the main results of section [ sec4 ] of the paper . theorem [ main1 ] is concerned with global dynamic independence under $p_0$ , the law of the $m$-variate poisson process ( see lemma [ likl ] ) . [ main1 ] let $g$ be the kig of a standard skm , and let $[a , b , d]$ be a partition of $\mathcal{v}$ . suppose also that condition [ identgamma ] holds for $\delta(abd)$ ( where this set is possibly empty , in which case the condition is trivial ) . then the separation $a \perp b \mid d$ implies the conditional independence , under $p_0$ , of the internal histories of $a$ and $b$ given $\mathcal{d}_t^*$ , where $\mathcal{d}_t^*$ is given by definition [ ndstar ] and $p_0$ is the law of the $m$-variate poisson process in lemma [ likl ] . we provide here a somewhat heuristic discussion of this result , a rigorous treatment being given in appendix [ appa1 ] . the argument can be broken down into four steps . first , the reaction counting processes are grouped according to the partition of the reaction set in definition [ deltas ] , and the three mvcps associated with the elements of this partition are ( unconditionally ) independent under $p_0$ . second , consider again definition [ ndstar ] : the conditioning history records the jumps in $d$ , and the separation implies that each such jump can be attributed to its causal reaction group . the third and fourth steps combine the independence of the groups with the measurability of the relevant intensities to obtain the conditional independence . theorem [ main2 ] states , roughly speaking , that for each reaction the sample path of its counting process may be computed over the same time interval from the paths of the relevant subprocesses ( so that the jump times are known ) . there are two main elements involved in the argument . first , the graphical separation again has an important implication for reactants : for any reaction $m$ , either $r[m] \subseteq a \cup d$ or $r[m] \subseteq b \cup d$ . recalling ( [ cmgm ] ) , only the sample path of the subprocess for the reactants of $m$ is needed to compute its intensity . second , the jumps of the shared species can be attributed to the individual reactions that cause them . consider first the reactions that change $a$ alone : by definition , the path of $n^a$ allows identification of the jump times corresponding to all reactions that change $a$ identically to a given reaction $m$ ; but since such reactions change $a$ alone , they must do so uniquely ( among reactions in that group ) , since no 2 columns of $s$ are equal ( definition [ skm ] ) . therefore , the path of $n^a$ suffices in this case to compute the reaction path . the argument for the other groups in the partition is similar . for the reactions changing $d$ alone , it has already been noted that they change $d$ uniquely ( among themselves ) . the argument for the last 2 groups is essentially the same : the third group is further partitioned according to the change each reaction produces in $d$ ; the path of $n^d$ allows identification of the jump times corresponding to all reactions producing a given change ,
the argument for the last 2 groups is essentially the same .the third group is further partitioned as ] corresponding to the jump times so identified allows one to `` isolate '' just those caused by reaction ( since , again , reactions in change uniquely among themselves ) . the argument for is similar , after noting that the jump times of all reactions in can be identified by eliminating all those of and of .the preceding two theorems allow the use of lemma [ cidom ] to obtain the following corollary , which summarizes the main results of section [ sec4 ] .[ main3]let be the kig of a standard skm , ] be a partition of .suppose also that condition [ identgamma ] holds for ( where is possibly empty ) .then the separation in the undirected kig implies that the global conditional independence holds , where is the natural filtration of .apply lemma [ cidom ] to the 3 -fields , recalling from lemma [ likl ] that . since , theorem [ main1 ] implies that .now , whence , which is given by ( [ rnderiv ] ) .again since , theorem [ main2 ] implies that , where is a nonnegative , -measurable random variable for .lemma [ cidom ] then implies that , as required . under the conditions of corollary [ main3 ] , the separation does _ not _ imply in general that , where the conditioning is now on rather than .similarly , the separation in the moral graph , , does not imply that . the following theorem and proof establishes both points .the procedure for constructing is the usual one edges are inserted in the kig whenever 2 parent nodes of a common child are `` unmarried '' ( i.e. , have no edge between them ) and then the undirected version of the resulting graph is formed .[ countereg]let be the kig of a standard skm , ] be a partition of .suppose also that condition [ identgamma ] holds for and .then it is possible that neither nor holds , where is as usual the internal history of the subprocess .the proof is by example .consider the standard skm with and reactions which has the kig , .note that .clearly , and .note also that ^{\prime} ] and .it suffices to show that , under both and , ] .first , show that .clearly , , and since -n_{r}(s) ] since is measurable .however , is clearly not measurable and so can not be a version of ] .it is of interest in applications to understand , for a given partition ] that result in the same change in is , there do not exist 2 reactions in that change identically but do not have the same membership of both of the sets ] be a partition of , the species set of an skm . then , if and only if the following condition holds : for any 2 reactions with , the reaction has the same membership of the two sets ] can often be altered slightly to make the processes and identical . examples of this are given in section [ sec6 ] in connection with the red blood cell skm .rigorous mathematical definition and identification of modularizations for biochemical reaction networks is recognized as being a difficult problem , especially from a dynamic perspective .a prominent approach has been to construct a graph representing `` interactions '' between species and to consider different _ partitions _ of the species between modules , maximizing an objective function based on the fraction of edges that are intra - modular relative to the expected fraction in an `` equivalent , '' randomized graph when the same partition of species is used . 
from a stochastic process perspective, the graphs used often do not encode properly the dependence structure of the molecular network for example , in contrast to a kig , metabolic network graphs typically omit the local dependence between reactants in the same reaction , only capturing that between reactant and product .the approach is intended to operationalize the concept that modules function `` near - independently . ''however , the measure of modularity adopted for the objective function is rather distant from well - defined notions of dynamic ( in)dependence between species .the local and global conditional independence results developed in sections [ sec3 ] and [ sec4 ] make it possible to add content to and make rigorous what is meant by near - independence of modules , and to accommodate `` overlapping '' modules with nonempty intersection .the term modularization is derived from the biological literature where `` modularity '' has been much discussed .a modularization here is a hypergraph of the vertex set of the kig ( i.e. , a collection of subsets of species ) with the following property the internal history at time of each subset ( or module ) is conditionally independent of the internal history of all the other modules , given the history of its intersection with those modules .[ modsatn]let be the species set of an skm ] .note that since , ( [ mod ] ) is equivalent to the statement .roughly speaking , the global evolution on ] of , , forms a _ decomposition _ of into the subgraphs and if the separation holds and the subgraph is complete .the subgraph is _ prime _ if there does not exist a decomposition of .[ mpd ] let be an undirected graph with vertex set , and .the induced subgraph is a maximal prime subgraph of if is prime and there exists a decomposition of for all satisfying .maximal prime subgraph decomposition _( _ mpd _ ) of is given by , the unique collection of maximal prime subgraphs of , and satisfies that . a _ junction tree _representation of the mpd , , always exists and has the subsets as its clusters ( i.e. , as the vertices of the junction tree ) .a junction tree is a connected , undirected graph without cycles in which the intersection of any 2 clusters of the tree , , is contained in every cluster on the unique path in between and .such trees will prove very useful in visualizing , representing and manipulating modularizations of skms .we say , for reasons that will become apparent , that any 2 clusters adjacent in the tree are separated by their intersection , and call that intersection a _ separator _ _ of _ .the skm modularization algorithm presented below contains as a special case the method due to for computation of , applied to the undirected version of the kig , .the advantage of this version of algorithm [ tmod ] is that it can be fully automated to identify the mpd modularization of the skm in a manner that is computationally feasible even for very large skms .however , it will often be informative to consider a range of nested modularizations in order to explore the different levels of organization of the reaction network . 
to this end , the general version of algorithm [ tmod ]first obtains a junction tree of the clique decomposition for ( a minimal triangulation of provides the finest , most detailed modularization that is identified .the clique decomposition of is unique ( since it corresponds to the mpd of ) .coarser - grained modularizations , including the mpd one , are obtained by successively aggregating adjacent clusters in the junction tree .[ tmod]let be the kig of an skm . 1 .construct , the undirected version of ; 2 .construct , a minimal triangulation of ; 3 . obtain the clique decomposition of with the cliques , say , ordered to satisfy the running intersection property ( i.e. , for s.t . ) ; 4 . organize the clique decomposition as a ( rooted ) junction tree in which , for , the parent of is set ; 5 .either go to step 7 or , select a pair of adjacent clusters in and _ aggregate _ them by updating as follows : set and , replace cluster by ( retaining its numbering , ) , set , and set 6 .go to step 5 ; 7 .return .the property that is triangulated is equivalent to saying that can be decomposed recursively until all the resulting subgraphs are complete .such a recursive decomposition produces a collection of subgraphs containing the cliques , that is , the maximally complete subgraphs of .triangulation refers to the operation of adding edges to so that it becomes triangulated .the triangulation in step 2 must be minimal that is , one for which removal of any edge added during triangulation results in an untriangulated graph otherwise , remark [ rem51 ] below does not hold , in general .efficient algorithms have been developed in the graphical literature for both minimal triangulation and clique decomposition ( see ) which can be exploited here to compute the skm modularizations and associated junction trees .the following special case of algorithm [ tmod ] returns the junction tree representation of the maximal prime decomposition ( mpd ) of the undirected kig , .[ rem51 ] algorithm [ tmod ] returns for the undirected kig , , when step 5 is replaced by : 5 .while [ there exists a separator of such that is incomplete ] , aggregate within the 2 clusters separated by ; then go to step 7 .it is worth noting the time complexity of steps 2 and 4 .the general problem of finding an optimal triangulation of an undirected graph ( i.e. , one that adds least edges among all triangulations ) is _ np - hard_. 
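steps 1 - 4 of algorithm [ tmod ] map onto standard routines from the graphical - models literature . the sketch below is ours and uses networkx ; note that complete_to_chordal_graph returns _ a _ chordal completion obtained from an elimination ordering , which need not be the minimal triangulation the algorithm requires , so a dedicated minimal - triangulation routine would be substituted in practice :

```python
import itertools
import networkx as nx

def clique_junction_tree(G):
    """Sketch of steps 1-4: triangulate the undirected graph, list the
    maximal cliques of the triangulation, and join them via a maximum-weight
    spanning tree on the clique-intersection graph (a classical construction
    yielding the running intersection property)."""
    H, _ = nx.complete_to_chordal_graph(G)       # a chordal completion of G
    cliques = list(nx.chordal_graph_cliques(H))  # maximal cliques of H
    cg = nx.Graph()
    cg.add_nodes_from(range(len(cliques)))
    for i, j in itertools.combinations(range(len(cliques)), 2):
        cg.add_edge(i, j, weight=len(cliques[i] & cliques[j]))
    jt = nx.maximum_spanning_tree(cg)
    return cliques, jt

# toy KIG skeleton: a 4-cycle with a chord, which is already chordal
G = nx.Graph([("s1", "s2"), ("s2", "s3"), ("s3", "s4"),
              ("s4", "s1"), ("s1", "s3")])
cliques, jt = clique_junction_tree(G)
print([set(c) for c in cliques])          # two overlapping cliques
print(list(jt.edges(data="weight")))      # one separator of size 2
```

the maximum - weight spanning tree of the clique - intersection graph is the standard way to realize the running intersection property required in step 3 .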
the complexity of minimal triangulation ( step 2 ) is where is the number of edges in , .the complexity of constructing the clique junction tree ( steps 3 and 4 combined ) is , .a concise proof that the clusters of the tree returned by algorithm [ tmod ] constitute a modularization of the skm with any choice of aggregation scheme in stage 5is made possible by establishing that , like , is a junction tree , and that the intersections of adjacent clusters of continue to correspond to separators in , and hence in .the following proposition does just that .[ jtprop]let be the undirected graph returned by applying algorithm [ tmod ] to the kig , , of an skm .denote the clusters ( modules ) of by .then is a junction tree .suppose that are any 2 adjacent clusters in with separator , and that ( as is conventional ) the edges are labeled by the corresponding separator .then and the graphical separation holds in , and hence in , where is the union of the clusters in ( ) , the are the 2 subtrees obtained by cutting the edge in , and ( ) .proof of proposition [ jtprop ] is given in appendix [ prooffs ] .we can now state and prove the result that establishes the validity of our modularization identification methods .[ jtmod ] let be the kig of a standard skm , ] . by proposition [ check ] , this ensures that and are the same subprocess , whence for .clearly , the second step need only be performed if it is desired to be able to replace by in the defining conditional independencies of the modularization ( definition [ modsatn ] ) .the species involved in this case are and these therefore now appear in the edge labels ( separators ) of rather than in the residuals .the proposition below establishes that the validity of the modularization remains unchanged by such an operation .[ copied]suppose that , is a modularization according to definition [ modsatn ] of a standard skm ] . clearly , . by corollary [ main3 ], it suffices to show that .let , the species copied to , and , the species copied from .the separation implies that , which yields the required result since =s_{d}\cup t_{d}\cup f_{d}\cup\varnothing ] [ suppose not then in the kig , either or which contradicts the separation ] . for any , \neq\varnothing ] by definition [ stdskm](iii ) and( iv ) ; clearly \subseteq d ] ( similarly for ) , hence and .hence and , which implies immediately that together with ( which is proved below ) it follows that since and .it then follows that as required since it is clear from the definition of and that and .( the reader unfamiliar with conditional independence of -fields and its properties is referred to see , in the context of this proof , theorem 2.2.1 , corollary 2.2.4 , theorem 2.2.10 and corollary 2.2.11 there . )it remains to establish ( [ rem ] ) . under , and hence also under , are independent -fields ( see lemma [ likl ] ) .it follows that since ] was constructed .recall definition [ ndstar ] for ; its history is given by ] in which case is adapted to , or \subseteqb\cup d ] or \subseteq b\cup d ] and \neq\varnothing ] and \neq \varnothing ] and hence , by ( iv ) of definition [ stdskm ] , either \neq\varnothing ] but not both .therefore , if \subseteq a\cup d ] ) then both and are measurable with respect to }\subseteq\mathcal{f}_{t}^{a\cup d}\subseteq ] is measurable } ] ( resp . 
, \subseteq b\cup d ] ( resp ., \subseteq b\cup d ] or \subseteq b\cup d ] ( resp ., \subseteq b\cup d ] and the partition consists of singletons .hence , is adapted to and are -stopping times .\(iii ) : we have that \subseteq a\cup d ] is -measurable ( by , e.g. , theorem 2.1.10 of ) .hence the summand is -measurable , is adapted to and are -stopping times .( iiib ) then .define for any -variate counting process , the `` ground process '' .we may then write where is the number of reactions on ] is -measurable .hence the summand is -measurable , and are -stopping times .\(iv ) : we have that \subseteq b\cup d ] ( resp ., \subseteq b\cup d ] and ] and ] by definition [ ci ] and hence ] .furthermore , \mathsf{\tilde{e}}[\psi_{23}|\mathcal{f}^{3}] ] , whence by theorem 2.2.14 of .proof of proposition [ jtprop ] the proof is in 3 steps , according to the number of pairs of clusters aggregated under step 5 of algorithm [ tmod ] : ( i ) for the case where no pair of clusters is aggregated , and hence ( ii ) for the case where exactly 1 pair of clusters is aggregated ; ( iii ) for the case where more than 1 pair of clusters is aggregated . is a junction tree representation of the clique decomposition of .for the proof of this case see the proof of theorem 4.6 of . is connected ( as a consequence of being connected ) , and has nodes and edges ( one less edge than ) ; is therefore a tree , whence there is a unique path in between any pair of its clusters .it is straightforward ( but somewhat tedious ) to show that every cluster on this path must contain since the corresponding path in possesses this junction property [ by ( i ) above ] .hence is a junction tree .it remains to prove that for any 2 adjacent clusters in , we have and . we will show ( iia ) that edges `` in common '' between and edges not removed by the cluster aggregation carry the same label , that is , the intersection of the clusters joined by each such edge is unchanged ; and ( iib ) that cutting any such edge in both and results in pairs of subtrees whose clusters have identical unions in the two cases .the result then follows from ( i ) above . 
if both clusters , , joined by such an edge , are in and the claim is obviously true .consider then the case where , say , is the result of the aggregation of the cluster pair .suppose , without loss of generality , that in .now .the edge joining to in was formerly , in , the edge , whence since is on the path between and in and .thus , the intersection of the clusters joined by the edge is always the same in and , as claimed .let the edge that is cut in both cases be [ where it is understood that , say , may be equal to in and hence equal to in ] .it is required to show , using an obvious notation , that and .it is well known that cutting an edge in any tree results in 2 disconnected subtrees .one of the 2 pairs of subtrees generated here must contain 2 identical subtrees .suppose then , without loss of generality , that , whence .the subtrees and have the same clusters , except for the aggregation of the cluster pair to form in .it is straightforward ( but tedious ) to show that , for and a cluster in , {\mathcal{t}_{c}^{\mathit{de}}}\setminus\{m_{\alpha } , m_{\beta } \}=[m_{\gamma}]_{\mathcal{t}_{\mathrm{mod}}^{\mathit{de}}}\setminus\{m_{\alpha\beta } \},\ ] ] where {\mathcal{t}} ] , where is any one of its clusters .it follows that {\mathcal{t}_{c}^{\mathit{de}}}\setminus \{m_{\alpha},m_{\beta}\}\bigr\}\cup m_{\alpha}\cup m_{\beta } \\ & = & \bigl\{\bigcup[m_{\gamma}]_{\mathcal{t}_{\mathrm{mod}}^{\mathit{de}}}\setminus\{m_{\alpha \beta } \}\bigr\}\cup m_{\alpha\beta}=v_{\mathit{de}}^{\mathrm{mod}}\end{aligned}\ ] ] as required .the proof is by induction on the number of cluster pairs , say , that are aggregated . parts ( i ) and ( ii ) above establish the proposition for and .exactly the same mode of argument as the one used in ( ii ) above also establishes that if the proposition holds for , it must hold for .this completes the proof ._ _ ampf__ ( unbound ) ; _ _ adpf__ ; _ _ atpf__ ; _ _ dhap__ phosphate ; _ _ e4p__ 4-phosphate ; _ _ fru6p__ 6-phosphate ; _ _ fru16p2__ 1,6-phosphate ; _ _ glca6p__ - d - glucono-1,5-lactone ; _ _ glcin__ ( cytoplasmic ) ; _ _ glcout__ glucose ; _ _ glc6p__ 6-phosphate ; _ _ grap__ 3-phosphate ; _ _ gri13p2__,3-bisphospho - d - glycerate ; _ _ gri3p__-phospho - d - glycerate ; _ _ gri23p2__,3-bisphospho - d - glycerate ; _ _ gri2p__-phospho - d - glycerate ; _ _ gsh__ glutathione ; _ _ gssg__ glutathione ; _ _ lac__ ; _ _ lacex__ external lactate ; _ mgatp _ ; _ mgadp _ ; _ mgamp _ ; _ mg _ ; _ mggri23p2 _ ; _ nadh _ ; _ _ nadpf__ ( unbound ) ; _ _ nadphf__ ; _ _ p1f__ ; _ _ p2__ ; _ _ p1nadp__ bound nadp;__p1nadph__ bound nadph ; _ _ p2nadp__ bound nadp ; _ _ p2nadph__ bound nadph ; _ _ pep__ ; _ _ phi__ ; _ nad _ ; _ _ prpp__ phosphoribosylpyrophosphate ; _ _ pyr__ ; _ _ pyrex__ pyruvate ; _ _ rib5p__ 5-phosphate ; _ _ rul5p__ 5-phosphate ; _ _ sed7p__ sedoheptulose 7-phosphate ; _ _ xul5p__-phosphate .the author is grateful for the research environment provided by the statistical laboratory and the cambridge statistics initiative , and to a. p. dawid , v. didelez , c. holmes , m. jacobsen and d. j. wilkinson for helpful discussions .the comments of the associate editor and referees were valuable in improving the paper .computations were performed using r version 2.8.1 and r packages grbase and rgraphviz .kauffman , k. , pajerowski , j. , jamshidi , n. , palsson , b. . and edwards , j. ( 2002 ) .description and analysis of metabolic connectivity and dynamics in the human red blood cell ._ biophysical journal _ * 83 * 646662 .
|
the dynamic properties and independence structure of stochastic kinetic models ( skms ) are analyzed . an skm is a highly multivariate jump process used to model chemical reaction networks , particularly those in biochemical and cellular systems . we identify skm subprocesses with the corresponding counting processes and propose a directed , cyclic graph ( the kinetic independence graph or kig ) that encodes the local independence structure of their conditional intensities . given a partition of the vertices , the graphical separation in the undirected kig has an intuitive chemical interpretation and implies that is locally independent of given . it is proved that this separation also results in global independence of the internal histories of and conditional on a history of the jumps in which , under conditions we derive , corresponds to the internal history of . the results enable mathematical definition of a modularization of an skm using its implied dynamics . graphical decomposition methods are developed for the identification and efficient computation of nested modularizations . application to an skm of the red blood cell advances understanding of this biochemical system .
|
the determination of constructible numbers with origami is a problem with an interesting development , one that was completely solved only after the axiomatization proposed by huzita - justin ( cf . , ) . the work by alperin - lang ( ) proved that the list of possible one - fold axioms was complete and settled a new scenario , in which the role of new axioms was still emphasized . the axiomatic viewpoint seems a natural perspective for the study of other geometrical instruments . in this work we present a general purpose formal language for the axiomatization of geometrical instruments . our formalization takes into account both the geometric and arithmetic properties of the instruments : the concept of _ tool _ formalizes an instrument as a set of axioms , while the concept of _ map _ formalizes the constructible points and curves of the instrument . our formalization provides a natural frame to express well - known results , but also leads to new relations between instruments . the key concepts of our language are introduced in section 2 . in section , we define _ geometric equivalence _ and _ virtual equivalence of tools _ and prove relations between the tools described in this work . in section we present an equivalence relation between maps and define an _ arithmetical equivalence of tools _ . we conclude with an arithmetic classification of the tools described . a more extended version of this work can be found in , as an evolution of a previous work ( cf . ) . we collect in the annex a list of basic axioms which formalize the common tasks performed by the geometric instruments considered in this work ( mainly ruler , compass , origami ) . a more comprehensive list of axioms can be found in .

* ruler

* compass

* ruler and compass $\mathcal{RC} := \langle \{ \text{line} , \text{circle} , \text{radiuscircle} \} , \{ \text{lineintersect} , \text{circleintersect} , \text{linecircleintersect} \} \rangle$ .

* euclidean compass + \{*circle * } , \{*circleintersect*} .

* ruler and euclidean compass $\mathcal{REC} := \langle \{ \text{line} , \text{circle} \} , \{ \text{lineintersect} , \text{circleintersect} , \text{linecircleintersect} \} \rangle$ .

* origami + \{*line , perpendicularbisector , bisector , perpendicular , tangent , commontangent , perpendiculartangent * } , \{*lineintersect*} .

* thalian origami ( ) + \{*line , perpendicularbisector * } , \{*lineintersect*} .

* pythagorean origami ( ) + \{*line , perpendicularbisector , bisector * } , \{*lineintersect*} .

* conics ( ) $\mathcal{CO} := \langle \{ \text{line} , \text{circle} , \text{radiuscircle} , \text{conic} \} , \{ \text{lineintersect} , \text{circleintersect} , \text{linecircleintersect} , \text{coniclineintersect} , \text{coniccircleintersect} , \text{conicintersect} \} \rangle$ .

given a line and a point not on it , the construction generates the parallel to the line passing through the point . table [ cataleg_conjuntsfinals ] describes the set of constructible points of the tools introduced in the previous section . as usual , denotes the pythagorean closure of ( i.e. , the smallest extension of where every sum of two squares is a square ) ; the field of euclidean numbers ( i.e. , the smallest subfield of closed under square roots ) is denoted by , and is the field of origami numbers ( i.e. , the smallest subfield of which is closed under the operations of taking square roots , cubic roots and complex conjugation ) .
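each one - fold axiom listed above reduces to elementary analytic geometry . as an illustration , the following sketch ( the function names and the $ax + by = c$ line encoding are ours ) computes the crease produced by the perpendicularbisector axiom , i.e. the fold placing one point onto another , together with the lineintersect axiom :

```python
import numpy as np

def perpendicular_bisector(p, q):
    """Fold line mapping point p onto point q (the 'perpendicularbisector'
    axiom): returns (a, b, c) encoding the crease a*x + b*y = c."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = q - p                      # normal direction of the crease
    mid = (p + q) / 2.0            # the crease passes through the midpoint
    return n[0], n[1], n @ mid

def line_intersect(l1, l2):
    """'lineintersect' axiom: the common point of two non-parallel lines."""
    a = np.array([l1[:2], l2[:2]], float)
    c = np.array([l1[2], l2[2]], float)
    return np.linalg.solve(a, c)   # raises if the lines are parallel

# crease folding (0,0) onto (2,0), intersected with the line y = x:
crease = perpendicular_bisector((0, 0), (2, 0))   # the line x = 1
print(line_intersect(crease, (1, -1, 0)))          # -> [1. 1.]
```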
for the axioms generating several objects , one has to specify an ordering to properly identify each of them . when there is no natural ordering , we use a _ radial sweep _ , a common technique in computational geometry . it consists in sweeping the plane counterclockwise with a given half - line ; the points are ordered in the order in which they are met . for the axioms described in this table we propose the following ordering :

: let and be the centers of circles and respectively . the order of points and is given by a radial sweep with center and half - line . we use the same criterion to order the output of axioms and , taking as the center of a conic the midpoint of the segment defined by its foci .

: if is not a diameter of circle , we order and to ensure that the angle is positive . if the line is a diameter , we order the points as points on the line , using a radial sweep with center and half - line . we use the same criterion to order the output of the axiom .

r. c. alperin . a mathematical theory of origami constructions and numbers . _ new york journal of mathematics _ , * 6 * , 119 - 133 ( 2000 ) .

r. c. alperin and r. lang . one- , two- , and multi - fold origami axioms . _ origami _ , 371 - 393 ( 2009 ) .

a. baragar . constructions using a compass and twice - notched straightedge . _ the american mathematical monthly _ , * 109 * , 151 - 164 ( 2002 ) .

r. geretschlager . euclidean constructions and the geometry of origami . _ mathematics magazine _ , * 68 * , 357 - 371 ( 1995 ) .

d. a. cox . galois theory . wiley , hoboken , new jersey ( 2004 ) .

r. hartshorne . geometry : euclid and beyond . springer - verlag , new york ( 2000 ) .

h. huzita . axiomatic development of origami geometry . _ proceedings of the first international meeting of origami science and technology _ , 143 - 158 ( 1989 ) .

j. justin . _ british origami _ , * 107 * , 14 - 15 ( 1984 ) .

george e. martin . geometric constructions . springer - verlag , new york ( 1998 ) .

e. tramuns . _ a formalization of geometric constructions _ , ph.d. thesis , universitat politècnica de catalunya ( 2012 ) .

e. tramuns . the speed of origami versus other construction tools . _ origami _ , 531 - 542 ( 2010 ) .

c. videla . on points constructible from conics . _ the mathematical intelligencer _ , * 19 * , 53 - 57 ( 1997 ) .
|
we present a formalization of geometric instruments that considers their geometric and arithmetic aspects separately . we introduce the concept of _ tool _ , which formalizes a physical instrument as a set of _ axioms _ representing its geometric capabilities . we also define a _ map _ as a tool together with a set of points and curves taken as an initial reference . we rewrite known results using this new approach and give new relations between origami and other instruments , some obtained by considering them as tools and others by considering them as maps .
|
in many applications , such as nuclear magnetic resonance ( nmr ) spectrometry or mass spectrometry , measurements are often made of mixtures of physical components which can be identified by their specific spectra . discriminating between these elementary components or sources can be done by acquiring several measurements at different times or locations in order to observe different mixtures , yielding multispectral data . in this context , blind source separation ( bss ) aims at recovering the spectra from measurements in which the component sources are mixed up together in an unknown way . the instantaneous linear mixture model assumes that the measurements are linear mixtures of sources with spectra . in other words , there exist mixture coefficients such that : where the vectors are added in order to account for noise and model imperfections . this mixing model can be conveniently rewritten in matrix form : where :

* is the number of measurements .

* is the number of samples of the sources .

* is the number of sources .

* is the measurements matrix in which each row is a measurement .

* is the unknown source matrix in which each row is a spectrum / source .

* is the unknown mixing matrix which defines the contribution of each source to the measurements .

* is an unknown noise matrix accounting for instrumental noise and/or model imperfections .

with the notation $\| x \|_{p} = ( \sum_{ij} | x_{ij} |^{p} )^{1/p}$ . this algorithm is also widely used due to its easy implementation and its efficiency in decreasing the cost function . however , it does not necessarily converge . in hierarchical als ( hals , ) , the columns of and rows of are processed one by one . this yields a simple and fast optimization process to solve the constrained sub - problems . it is however possible to solve exactly the constrained sub - problems of type at each iteration . in , for instance , lin uses a projected gradient descent subroutine to solve the sub - problems . guan et al . later provided a faster first - order method . it has to be mentioned that other approaches , based on geometrical methods , have also been investigated to solve nmf problems ; these approaches are however generally very sensitive to noise . as was previously stated , non - negativity is not always sufficient to recover the actual sources and mixing matrix . in non - negative ica , one additionally enforces the independence of the sources . however , this approach is also sensitive to noise . sparsity , on the other hand , has been shown to provide robustness to noise . we give a short introduction to this prior in the next section , before presenting sparse nmf algorithms . + [ sec : sparsity ] sparsity constraints have already proved their efficiency on a very wide range of inverse problems ( see and references therein ) . in the context of bss , sparsity has been shown to increase the diversity between the sources , which greatly helps their separation . in a wide sense , a sparse signal is such that its information content is concentrated in only a few large non - zero coefficients , or can be well approximated in such a way . the sparsity of a signal however depends on the basis or dictionary in which it is expressed . for instance , a sine wave will be sparse in the fourier domain , since it can be encoded with one coefficient in this domain , while in the direct domain most of its coefficients are non - zero : the more a basis captures the structures of a signal , the sparser the signal will be in that basis .
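to fix ideas , the instantaneous linear mixing model of this section , with sources that are sparse in the direct domain , can be simulated in a few lines ( all sizes , the activation rate and the noise level below are arbitrary choices of ours ) :

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 500, 5            # measurements, samples, sources (toy sizes)

X = rng.random((r, n)) * (rng.random((r, n)) < 0.1)   # sparse non-negative spectra
A = rng.random((m, r))                                 # non-negative mixing matrix
N = 0.01 * rng.standard_normal((m, n))                 # instrumental noise

Y = A @ X + N        # each row of Y is one measured mixture of the r spectra
```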
in this article we will however only focus on sparsity in the direct domain .+ [ sec : sparsealgorithms ] in many applications the non - negativity and sparsity of the sources arise naturally as for instance in ms or nmr spectroscopy .recent works have emphasized the fact that this knowledge can indeed help perform more relevant factorizations .in , kim & park have proposed to formulate the nmf problem as : in this equation , the sparsity - enforcing regularizer favors solutions where a single source dominates at each sample .however , it does not enforce the intrinsic sparsity of each of the sources .the authors made use of an active - set method to solve this problem .this technique can solve exactly each constrained sub - problem in a similar way than in problem .however , it is not clear how the parameters and must be set .the authors provide an implementation on their website .in , zdunek & cichocki used a similar regularization term for but with decreasing during the algorithms in order to be more robust to local minima , and without exactly solving the sub - problems . in ,hoyer used a sparse regularization of the form that uniformly enforce the sparsity of .the author later used a different type of sparse prior in defined for some vector as follows : this sparseness function goes from 1 when is perfectly sparse only 1 active coefficient to 0 when it is not at all all coefficients active , with the same value .the idea is therefore to optionally impose a chosen level of sparsity for the sources and/or the mixtures : this problem is solved by using projected gradient descent steps followed by a projection on the sparseness constraint if the constraint is active and with a multiplicative update otherwise .however , the constraint is directly related to the expected sparseness level of the sources which is not necessarily known beforehand . furthermore , hard - constraining the sparseness level may make the solution very dependent on the sparseness parameters and .an implementation of this algorithm is available online .sparse hals aims at solving problem with a sparsity penalization of the form . in the case of an data fidelity term, this problem can be rewritten as : in this algorithm , the columns of and lines of are updated one by one . as each sub - problem admits a straightforward analytic solution and no matrix inversionis required , the hals algorithm turns to be a simple and fast nmf solver . in a recent implementation of the hals , the parameter ofis automatically managed in order to obtain a required sparsity rate ( defined as the ratio of coefficients smaller than times the largest coefficient ) .it is important to note that none of these algorithms makes use of the sparse prior explicitly to deal with additive noise ; therefore they may not be robust in case of noise contamination .in the last decade the use of sparsity in the field of bss has been widely explored . 
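for reference , the sparseness function of hoyer described above is commonly given as $( \sqrt{n} - \| x \|_{1} / \| x \|_{2} ) / ( \sqrt{n} - 1 )$ for a vector of length $n$ ; a direct implementation :

```python
import numpy as np

def hoyer_sparseness(x):
    """Hoyer's measure: 1 for a vector with a single active coefficient,
    0 when all coefficients are active with the same value (x must be
    nonzero, otherwise the ratio below is undefined)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    l1, l2 = np.abs(x).sum(), np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1.0)

print(hoyer_sparseness([0, 0, 0, 5]))   # 1.0  (perfectly sparse)
print(hoyer_sparseness([1, 1, 1, 1]))   # 0.0  (not sparse at all)
```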
in , the authors have introduced a sparsity - enforcing bss technique coined generalized morphological component analysis ( gmca ) , which has been shown to be effective at separating out sparse signals from noisy data . morphological diversity has been defined in as a means to characterize separable sources based on their geometrical structures or morphologies : separable sources with different morphologies do not share the same significant coefficients in a given sparse representation . when sparsity holds in the direct domain , this means that the entries of each source with the most significant amplitudes should be different . this does not mean that their supports are disjoint , but rather that their most significant elements should be disjoint . the objective of this paper is to extend this sparsity - enforcing bss algorithm to deal with non - negative mixtures . following , the gmca with an additional non - negativity constraint estimates the mixing matrix and the sources by minimizing the following optimization problem : the pseudo - norm counts the non - null coefficients in and therefore limits their number , thus enforcing the sparsity of . in the vein of alternating least squares , gmca alternately and iteratively estimates the unconstrained least - squares solution and projects it on the non - negativity constraint , with an additional thresholding step for the sources in order to keep only the most significant coefficients . these updates are provided in lines 6 and 7 of * algorithm [ alg : ngmcanaive ] * , where the hard - thresholding operator is defined as follows : it has been emphasized in that one crucial feature of gmca is the use of a decreasing threshold . at the beginning , this parameter is set to a high value , and it then decreases throughout the iterations down to a final value that depends on the noise level . simulated annealing has already inspired decreasing and regularizations in nmf . the motivation behind a decreasing threshold is however different :

1 . first , estimating the mixing matrix from the entries of the sources that have the highest amplitudes and are thus likely to belong to only one source .

2 . then , helping remove the smallest coefficients , which are more sensitive to noise contamination .

in the same way as in the original gmca , for a source with index , the threshold is set to , where is an empirical estimator ( the median absolute deviation ) of the source noise variation . is chosen at each iteration in order to obtain a linear increase of the number of active coefficients in , so as to refine the estimation while maintaining some continuity . the final value , , is usually taken in the range ] .
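the two ingredients of the naive scheme , hard - thresholding and the mad - based noise estimate , are one - liners . in the sketch below , the $3\sigma$ threshold and the gaussian consistency constant 0.6745 follow common practice and are not prescribed by the text :

```python
import numpy as np

def hard_threshold(x, lam):
    """Keep only the entries whose magnitude exceeds the threshold."""
    x = np.asarray(x, dtype=float)
    return x * (np.abs(x) > lam)

def mad_sigma(x):
    """Median-absolute-deviation estimate of the noise standard deviation
    (the 1/0.6745 factor makes it consistent for Gaussian noise)."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x))) / 0.6745

noisy = np.random.default_rng(1).standard_normal(1000)
noisy[:5] += 20.0                      # a few significant coefficients
lam = 3.0 * mad_sigma(noisy)           # e.g. a final threshold of 3*sigma
# the 5 spikes survive (plus the occasional 3-sigma noise excursion):
print(np.count_nonzero(hard_threshold(noisy, lam)))
```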
in the same vein as the naive ngmca introduced in the previous section , and alternately updated with the exception that , now , each sub - problem is solved exactly , which guarantees that the solution is stable and has the sought structure .the main steps of the algorithm are given in * algorithm [ alg : ngmcas]*. one must notice that exactly solving each sub - problem is costly since it requires sub - iterations at each step .fortunately , it has been recently showed that the speed of convergence of the fbs algorithm can be greatly improved by using the multi - step techniques introduced by nesterov .both steps have the algorithm make use of an accelerated version of the fbs algorithm .a detailed description of how this accelerated algorithm is used to update is given in appendix [ app : fistasubproblem ] .a special care has to be given to renormalizations since the regularization tends to reduce the norm of .more specifically , since the norm of can keep increasing to compensate the reduction of the norm of , the algorithm can converge to the degenerate solution and . in the algorithm ,the columns of are therefore renormalized to unity before updating .this also assigns to the coefficients of their overall importance in the estimation of .following the general thresholding strategy used in gmca and its extensions , the threshold decreases from step to step .however , the strategy used in this version ngmca differs from the one used in the naive approach . in the former ,the threshold is applied to the sources as defined by their least - square estimate . on the contrary the threshold in ngmca applies at each gradient descent step .the update rule of in the sub - iterations of ngmca ( without the acceleration ) can be written as follows : +,\ ] ] with containing only ones .iterative soft - thresholding therefore operates on the gradient and not directly on the source values like in the naive ngmca .also , unlike with hard - thresholding , a variation of affects all active coefficients .our strategy consists in starting with a large parameter which forces the coefficients of to be non - increasing in the first iteration .the threshold is then linearly decreased in order to refine the solution while preserving continuity , down to where is this time an estimate of the noise level in the row of the gradient .we also implemented a version of ngmca using hard - thresholding aiming at solving problem with an pseudo - norm instead of the norm for the regularization .the superscript and are specified to differentiate between respectively the hard- and the soft - thresholding versions . , * initialize * , and normalize the columns of select * return * this section we first compare the introduced algorithms and classical algorithms on noiseless data , in order to better understand their behaviors , and then we benchmark the gmca - based algorithms with state - of - the - art sparse algorithms on noisy data .the settings of the simulations and the evaluation methodology are described in the following sections .reference matrices and coefficients are uniformly generated respectively from the distribution of and , where : * is a bernoulli random variable with activation parameter , i.e. it equals 1 with probability and 0 otherwise .* is a centered and reduced generalized gaussian random variable with shape parameter . in practice and control 2 kinds of sparsity .the bernoulli parameter affects the number of actual zeros in and and therefore exact sparsity . 
on the other hand , selects the sharpness of the distribution of , whose pdf is proportional to ( with and dependent on the standard deviation , which is fixed to 1 here ) . as special cases , for , is a gaussian random variable , and with it is a laplacian random variable . with , is considered approximately sparse ; the generated signals become sparser as decreases . in the experiments , and unless stated otherwise , the standard settings will be , , with measurements of samples . in order to evaluate and compare the algorithms , a scale- and permutation - invariant criterion is needed . this criterion has to be well adapted to measuring the reconstruction performance . in many applications , the signals of interest are the sources . more precisely , it has to be noticed that the noise - reducing priors are applied on the sources only . this implies that a good estimate of the sources should be the least contaminated by noise and by interferences from the other sources . moreover , in a noisy setting , a perfectly estimated mixing matrix does not necessarily yield a good estimate of the sources : indeed , a slightly degraded mixing matrix may be preferred if it leads to less noisy sources . these points make criteria based on this variable inadequate for measuring a good separation . in what follows , we will therefore focus on estimating the separation performance using a criterion on the sources . in , vincent et al . have proposed different criteria to evaluate the performance of blind source separation techniques . in the noisy setting , they propose separating each estimated source into the sum of several components : with the projection of on the reference source it estimates ; and , in its orthogonal space , respectively standing for interferences with other sources , contamination with noise and contamination with algorithm artifacts . they design an snr - type energy ratio criterion named source distortion ratio ( sdr ) : as stated in , this criterion is a global performance measure taking into account all the elements of the reconstruction , i.e. a correct separation ( low ) , efficient denoising ( low ) and few artifacts left by the algorithm ( low ) . this criterion also has the advantage of being scale - invariant . in what follows , the sdr will be used to evaluate the separation performance of the proposed technique with respect to state - of - the - art methods . because of the permutation invariance , one cannot know _ a priori _ which estimated source stands for which reference source . reference and estimated sources are therefore paired one - to - one in order to obtain the best mean sdr . the sdr on ( coined sdr ) has been used in the experiments below in order to assess the performances of the algorithms .
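a back - of - the - envelope version of the sdr computation may help ; the sketch below follows the decomposition of vincent et al . using plain orthogonal projections ( the reference bss - eval toolbox additionally handles permutations and filtered versions of the sources ) :

```python
import numpy as np

def proj(basis, v):
    """Orthogonal projection of v onto the row space of `basis`."""
    B = np.atleast_2d(basis)
    coef, *_ = np.linalg.lstsq(B.T, v, rcond=None)
    return B.T @ coef

def sdr(s_est, s_ref, all_refs, noise):
    """Simplified BSS-eval decomposition of one estimated source:
    target = projection on the true source, interferences = extra component
    explained by the other sources, noise = extra component explained by the
    noise, artifacts = whatever remains. SDR is the energy ratio in dB."""
    s_target = proj(s_ref, s_est)
    p_refs   = proj(all_refs, s_est)
    p_full   = proj(np.vstack([all_refs, noise]), s_est)
    e_interf = p_refs - s_target
    e_noise  = p_full - p_refs
    e_artif  = s_est - p_full
    dist = e_interf + e_noise + e_artif
    return 10.0 * np.log10(np.sum(s_target ** 2) / np.sum(dist ** 2))
```

the least - squares projections are enough here because the decomposition is defined on one time - aligned estimated source ; this keeps the sketch short while preserving the interpretation of the three distortion terms .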
in the following , the sdr has been evaluated from several monte - carlo simulations ; the number of simulations will be given in each figure s caption .in this first experimental section , ngmca , ngmca and ngmca are tested on data with a large activation rate for ( ) and therefore not the very sparse sources for which they are designed .these settings are not favorable for the ngmca algorithms , which allows to emphasize the differences of behavior between them .since the problem reduces to an exact factorization when there is no noise , is set to 0 , hence the final threshold in this case is identical for all the algorithms .als , the multiplicative update and ( non - sparse ) accelerated hals are also performed as standard algorithms to play the role of references .the maximum number of iterations were set to large enough numbers in order to assure the convergence of all the algorithms .precisely , the number of iterations is set to 5000 for hals , 40,000 for the multiplicative update , 500 for als and the gmca - based algorithms , with a maximum of 80 sub - iterations in ngmca and ngmca ) .* figure [ fig : r_a80n00 ] : this benchmark shows the influence of the number of sources on the estimation of .this parameter is of course important since the more numerous they are , the more difficult they are to separate .ngmca outperforms als , while with , the final iterations are identical for both algorithms .the thresholding strategy of the first stage of the iterations therefore proves its efficiency for the separation . though ngmca is not performing as well as ngmca and ngmca with few sources, it is much more robust than all the algorithms with large numbers of sources .this is further detailed in paragraph [ sec : l1vsl0 ] .+ ) with respect to the number of sources ( , noiseless , average of 48 simulations),width=321 ] * figure [ fig : oscillations ] : this figure exhibits the evolution of the cost function throughout the iterations , for 40 sources and a large activation rate ( ) . in the refinement phase, the sparsity parameter is left constant at its final value in order to observe the convergence of the algorithms and the possibility to enhance the reconstruction .ngmca converges to a lower value than ngmca while ngmca does not converge at all .an explanation is provided in paragraph [ sec : constrvsunconstr ] . + during the iterations for a representative example ( , , noiseless).,width=321 ] * figure [ fig : aalpha_r35a80 ] : this benchmark shows the influence of coefficients distribution on the reconstruction . modifying the parameter is a way to make its distribution more or less sparse .a sparser yields less correlated columns and hence a better conditioning of the problem ( table [ tab : conditioning ] ) which simplifies the separation .ngmca tends to be more robust than the other algorithms to ill - conditioned mixtures .+ ) with respect to the distribution parameter ( , , noiseless , average of 48 simulations),width=321 ] + [ tab : conditioning ] + [ cols="<,^,^,^,^,^,^,^,^",options="header " , ] + [ sec : constrvsunconstr ] the differences of performance between ngmca and ngmca for large numbers of sources and large activation rates in figure [ fig : r_a80n00 ] can be understood by observing the evolution of the cost function during the iterations ( figure [ fig : oscillations ] ) . properly applied non - negativity and sparsity constraints can help refining the reconstruction of and once the sources have been sufficiently disambiguated . 
indeed , since ngmca not exactly solve the constrained cost function , it does not necessarily neither converge to a minimum nor lead to a stable solution , while ngmca does .+ [ sec : l1vsl0 ] the explanation of the differences between ngmca and ngmca lies in the properties of hard- and soft - thresholding .this is summarized in figure [ fig : soft - hard ] which shows thresholding applied to three two - dimensional points .when a point a column of has two large coefficients , i.e. when the point is in the quadrant , it suffers a bias with soft - thresholding while it remains untouched with hard - thresholding . on the other hand ,the shift induced by soft - thresholding increases the ratio of its larger coefficient over its smaller one , which helps reinforce the affectation of a point to the direction of its larger coefficient ( in this case : ) . with the thresholded point, this means here that .while the lesser bias created by hard - thresholding can lead to better accuracy , reinforcing the affectation of each point to a direction with soft - thresholding leads to better separation of the sources . ) , width=288 ] with few sources , both ngmca and ngmca separate correctly as displays figure [ fig : r_a80n00 ] .the bias is therefore costly for ngmca since it leads to some compensation behaviors : with 80% activation rate , nearly every coefficient suffers the bias and one source tends to compensate for all the others with a positive offset .this can be seen on source 5 in figure [ fig : biasedsources ] .the offset correlates with all the ground truth sources as can be observed from the correlation matrix in figure [ fig : correlationbias ] : estimated source gathers all the thresholded coefficients of the other estimated sources and is therefore affected by the interferences . on the other hand, the effect of soft - thresholding on the coefficients amplitude helps giving more weight to larger coefficients which is essential for the separation of sources from ill - conditioned mixtures as shown in figure [ fig : r_a80n00 ] with a large number of sources ; and in figure [ fig : aalpha_r35a80 ] with a large for instance .after normalization , ngmca , , , noiseless),width=321 ] , , , noiseless , samples 1 to 80),width=321 ] in this section , noise is added to the data and the input of the algorithms is therefore , with a gaussian matrix with independent and uniformly distributed coefficients . in the experiments ,the amount of noise is given in term of snr on the data , snr .ngmca , ngmca and ngmca are compared with hoyer s , kim & park s algorithms and sparse accelerated hals , which are competitive , publicly available algorithms and take sparsity into account in different ways ( paragraph [ sec : sparsealgorithms ] ) . there is no straightforward way to set kim & park s algorithm parameters and we therefore used the default parameters of the implementation .it is then left running until convergence . for hoyer s algorithm , no prioris applied on and the sparsity ratio of is optimally tuned using the ground truth sources , since there is no automatic way to set the parameters .the sparsity level for the sparse accelerated hals is also provided from the ground truth sources and both algorithms are left running for 5000 iterations in order to assure convergence . 
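the geometric point made in figure [ fig : soft - hard ] is easy to reproduce numerically : soft - thresholding increases the ratio of the dominant coefficient over the small one , which reinforces the affectation of a point to a mixture direction , while hard - thresholding leaves a point with two large coefficients untouched ( the toy values below are ours ) :

```python
import numpy as np

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard(x, lam):
    return x * (np.abs(x) > lam)

p = np.array([4.0, 1.5])      # one dominant and one small coefficient
lam = 1.0
s, h = soft(p, lam), hard(p, lam)
print(s, s[0] / s[1])   # [3.  0.5], ratio 6.0 -- affectation reinforced
print(h, h[0] / h[1])   # [4.  1.5], ratio ~2.7 -- unchanged, no bias
```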
in all this section , in the ngmca algorithms , as an effective trade - off between noise removal and good separation of the sources .the comparisons also include an oracle which solves the non - negatively constrained inversion problem in using the ground - truth sources : the sparsity parameter is set to such as in ngmca .this oracle stands for the optimal which could be recovered by ngmca if the uncontaminated mixtures were known .of course , since the mixture are not known in practice in bss , the oracle yields unachievable results , but it provides a reference line for the comparisons and a limit for the progression margin of the reconstructions . * figures [ fig : noise_a10r15 ] , [ fig : noise_a10r15_oracle ] and [ fig : noise_a30r15 ] : these benchmarks show the reconstruction results for 15 sources with activation rate of 10% ( figures [ fig : noise_a10r15 ] and [ fig : noise_a10r15_oracle ] ) and 30% ( figure [ fig : noise_a30r15 ] ) the lower the activation rate , the better the sparse prior with a varying level of noise contamination in the data .figure [ fig : noise_a10r15_oracle ] and [ fig : noise_a30r15 ] display the loss in sdr compared to the oracle in order to facilitate the visualization . in both cases , ngmca is less sensitive to noise and outperforms the other algorithms .+ ) with respect to the data snr ( snr ) + ( , , average of 192 simulations),width=321 ] + ) minus the oracle snr , with respect to the data snr ( snr ) ( , , average of 192 simulations),width=321 ] + ) minus the oracle snr , with respect to the data snr ( snr ) ( , , average of 192 simulations),width=321 ] * figures [ fig : noise_a50r15 ] : this benchmark shows the same experience than the previous ones but with a larger activation rate ( 50% ) which is less favorable to the gmca - based algorithms .ngmca remains better in most settings .hoyer s algorithm performs similarly to ngmca for very noisy data but it is important to remember that in our experiment , hoyer s algorithm and sparse accelerated hals are provided with the ground truth sparsity ratios , which would not be available with such precision in practice . for cleaner data , ngmca and ngmca begin to overtake ngmca , which corroborates the results of the previous section for noiseless data with large activation rates and few sources ( figure [ fig : r_a80n00 ] ) .+ ) minus the oracle snr , with respect to the data snr ( snr ) ( , , average of 192 simulations),width=321 ] * figure [ fig : m_a30n03r15 ] : this benchmarks provides the reconstruction results for noisy data ( 15db ) , 15 sources , a low activation rate ( 30% ) and a varying number of measurements .the lower the number of measurements , the more difficult the reconstruction is , since the redundancy can help denoising and discriminating between the sources . 
while we have exhibited results for a large number of measurements so far, this shows that ngmca also compares favorably with other algorithms when the number of measurements is more restrained .+ ) with respect to the number of measurements + ( snr=15db , , , average of 72 simulations),width=321 ] * figure [ fig : r_15dba30 ] : this benchmarks provides the reconstruction results for sparse ( 30% activation rate ) and noisy ( 15db ) data , and a varying number of sources .the complexity of the separation rises with the number of sources hence the reconstruction results decrease with it for all algorithms , but in any case , ngmca performs best for all the values .+ ) with respect to the number of sources ( snr=15db , , average of 48 simulations),width=321 ] + figure [ fig : modelandinit ] provides the same results as figure [ fig : noise_a30r15 ] but compares ngmca and ngmca with version of them which are initialized with and and hence , with a perfect separation from the start .the initialized ngmca is also provided with the exact noise standard deviation .the difference in term of reconstruction quality between the regular algorithms and their optimally initialized version is extremely small .this shows that the automatic estimation of the noise level within ngmca is appropriate , and that the initialization of ngmca and ngmca is robust . ) with respect to the data snr ( snr ) + ( , , average of 192 simulations),width=321 ]remember that the estimated sources ( see ) can be decomposed as follows : any bss algorithm must minimize at the same time interferences , noise and artifacts in order to achieve good performance .these three terms are strongly affected by the sparsity prior : * _ interferences _ : they intervene when the sources are not correctly , or not completely , separated .the term is computed as the projection on all the sources but the target .sparsity , as a measure of diversity , can greatly help getting a correct separation of the sources , hence keeping this term relatively small .however , it can still create interferences when the sparse model for the sources departs from their actual structure , such as in figure [ fig : biasedsources ] .interferences then originate from an imperfect source prior and/or badly separated sources .+ * _ noise _ : is the part of the reconstruction that projects on the noise but not the sources .since the gaussian noise studied in this article spreads uniformly on all the coefficients , while sparse sources concentrate their energy on few coefficients , the thresholding effect implied by and regularizations significantly denoises the estimates .this reduces the importance of the noise term and therefore helps obtaining better reconstructions .+ * _ artifacts _ : for a given source , the artifacts gathers the residues which are neither explained by the other sources nor the noise .we observed that the soft - thresholding operator introduces a bias which is the main contributor to the artifacts .again , this term will increase when the sparsity level of the sources decreases : in such a case , the sparsity prior is not as well suited to constrain the morphology of the sources .it is also important to notice that even when the sources are very sparse , improperly constrained solutions are more akin to be contaminated with a higher level of artifacts .+ all these aspects interact with each other .as shown in this section , ngmca provides an effective trade - off between noise , interferences and bias .indeed , through the experiments , we show 
that ngmca outperforms other algorithms in most scenarios , according to the sdr criterion which takes into account these three origins of reconstruction deterioration . + for low activation rate ( high sparsity ), ngmca performs definitely better than the other algorithms for a large range of noise levels ( figures [ fig : noise_a10r15 ] and [ fig : noise_a30r15 ] ) while in the extreme noiseless case it performs quite reasonably . in this setting ,the sparsity - enforcing prior plays its role at : i ) getting a good separation process with respect to other priors ( such as the pseudo - norm ) ; this helps reducing the interferences , ii ) correctly denoising the sources ; this tends to lower the noise contribution and artifacts .ngmca is noticeably quite robust to departures from the sparsity assumptions : it performs reasonably well with large activation rates ( figures [ fig : r_a80n00 ] and [ fig : noise_a50r15 ] ) but at the cost of a slight bias of the estimated sources ( figures [ fig : correlationbias ] and [ fig : biasedsources ] ) which tends to increase the contribution of the artifacts .additionally , the ngmca algorithm provides good separation performance for a large range of numbers of sources ( figure [ fig : r_15dba30 ] ) as well as for ill - conditioned problems arising from a lack of observations in figure [ fig : m_a30n03r15 ] , or from correlated mixing directions in figure [ fig : aalpha_r35a80 ] .these results can be explained by the good separation power of the regularizer with an appropriate tuning of the regularization parameter , in order to disentangle sparse sources , together with the appropriate implementation of the non - negativity constraints .db),width=321 ] ) with respect to the data snr ( snr ) + ( , , average of 96 synthetic nmr data simulations),width=321 ] in physical applications , molecules can be identified by their specific nuclear magnetic resonance ( nmr ) spectra . in this section ,we simulate more realistic data , using nmr spectra of real molecules .these spectra are well adapted to the current settings since they are very sparse .the information about the peaks can be found in the spectral database for organic compounds , sdbs .the spectra were convoluted with a laplacian with width at half maximum of 3 samples , in order to account for the acquisition imperfections .the number of samples is set to . is made of real spectra such as the ones displayed in figure [ fig : nmrspectra ] .some sources can exhibit strong normalized scalar product , such as cholesterol and menthone spectra for instance ( 0.67 ) .the mixing coefficients of are simulated in the same way as in the previous section ( , ) .the observed data is where is an i.i.d .gaussian noise matrix .an example of measurement where the lactose spectrum is particularly strong is provided in figure [ fig : appli_mixture ] . in figure[ fig : noise_application ] , the number of measurements is limited to the number of sources , i.e. , which occurs in some applications ; and the curves show the influence of noise in the data .with so few measurements , denoising becomes more important , while at the same time the noise is underestimated by the algorithm since the problem is less constrained . to compensate this behavior , is this time set to 2 .ngmca fails to obtain suitable results .indeed , in this setting the conditioning of the problem is extremely poor : and ngmca is not able to converge . 
on the other hand, ngmca performs from 3 to 5db better than all the other algorithms .this shows once again that ngmca is particularly robust for a large variety of settings .an example of reconstruction is given in figure [ fig : appli_reconstruction_example ] , where ngmca is able to identify more peaks that sparse accelerated hals .its reconstruction is however not completely noiseless , since there is always a trade - off to find between denoising , separation and bias .db),width=321 ] in figure [ fig : m_application ] , the number of measurements varies from 15 to 90 . since the conditioning greatly improves for larger numbers of measurements , ngmca results increase very quickly .but in any case , although ngmca and sparse accelerated hals obtain similar results to ngmca when there are enough measurements , ngmca still performs better than all the other tested algorithms in most of the settings . ) with respect to the number of measurements ( snr , , average of 36 synthetic nmr data simulations),width=321 ]following the philosophy of reproducible research , the algorithms introduced in this article will be available at _ http://www.cosmostat.org / gmcalab_.in this paper we have introduced a new algorithm , ngmca , to tackle the problem of sparse non - negative bss from noisy mixtures . inspired by a recent sparse bss algorithm coined gmca ,several extensions have been explored which imply that a rigorous handling of both sparse and non - negative constraints are essential to avoid instabilities and sub - optimal solutions . in particular ,one extension estimates both a mixing and a source matrix by exactly solving the non - negatively constrained and penalized sub - problems , using proximal techniques .extensive comparisons have been carried out with state - of - the - art algorithms on synthetic data ; these experiments show that this ngmca extension is robust to noise contamination thanks to a dedicated thresholding strategy , with negligible parameter tuning .the experiments also show that it performs well for a wide variety of settings , including problems with highly correlated mixture directions , few observations or a large number of sources . finally , the ngmca algorithm yields highly competitive results on synthetic mixtures of real nmr spectra . in this article however ,the sparsity of the sources only held in the direct or sample domain . future work will focus on extending ngmca to deal with the more general setting where the sources are still non - negative in the direct domain , but are sparse in a different signal representation .* algorithm [ alg : fistasubs ] * solves the sub - problem in with using fista . * initialize * , , , +$ ] * return * .b . was supported by the french national agency for research ( anr ) 11-astr-034 - 02-multid .we thank the associate editor and the anonymous reviewers for their help in improving the clarity and quality of the paper .we also thank daniel machado for his help in writing the paper . c. fvotte , n. bertin , and j .- l .durrieu , `` nonnegative matrix factorization with the itakura - saito divergence : with application to music analysis , '' _ neural computation _ , vol .21 , no . 3 , pp .793830 , 2009 .a. cichocki , r. zdunek , and s .-amari , `` csiszrs divergences for non - negative matrix factorization : family of new algorithms , '' _ independent component analysis and blind signal separation _ , vol .3889 , no . 1 , pp .3239 , 2006 .m. berry , m. browne , a. langville , v. pauca , and r. 
|
non-negative blind source separation (bss) has raised interest in various fields of research, as evidenced by the wide literature on the topic of non-negative matrix factorization (nmf). in this context, it is fundamental that the sources to be estimated present some diversity in order to be efficiently retrieved. sparsity is known to enhance such contrast between the sources while producing very robust approaches, especially to noise. in this paper we introduce a new algorithm to tackle the blind separation of non-negative sparse sources from noisy measurements. we first show that sparsity and non-negativity constraints have to be carefully applied to the sought-after solution; in fact, improperly constrained solutions are unlikely to be stable and are therefore sub-optimal. the proposed algorithm, named ngmca (non-negative generalized morphological component analysis), makes use of proximal calculus techniques to provide properly constrained solutions. the performance of ngmca compared to other state-of-the-art algorithms is demonstrated by numerical experiments encompassing a wide variety of settings, with negligible parameter tuning. in particular, ngmca is shown to provide robustness to noise and to perform well on synthetic mixtures of real nmr spectra. bss, nmf, sparsity, morphological diversity
|
let $G = (V, E)$, $|V| = n$, $|E| = m$, be an undirected graph and let $\mathcal{T}$ be the set of all spanning trees of $G$. in the _minimum spanning tree problem_, a cost $c_e$ is specified for each edge $e \in E$, and we seek a spanning tree in $\mathcal{T}$ of minimum total cost. this problem is well known and can be solved efficiently by several polynomial time algorithms.

in this paper, we first study the _recoverable spanning tree problem_ (rec st for short). namely, for each edge $e \in E$, we are given a _first stage cost_ $C_e$ and a _second stage cost_ $c_e$ (the recovery stage cost). given a spanning tree $T$, let $\Phi(T)$ be the set of all spanning trees $T'$ such that $|T \setminus T'| \le k$ (the recovery set), where $k$ is a fixed integer in $[0, n-1]$. in the robust recoverable version of the problem (rob rec st), the second stage costs are uncertain: the cost of each edge $e$ is only known to belong to the closed interval $[\bar{c}_e, \bar{c}_e + d_e]$, where $\bar{c}_e$ is a _nominal cost_ of $e$ and $d_e$ is the maximum deviation of the cost of $e$ from its nominal value. in the traditional case, the scenario set $\mathcal{U}$ is the cartesian product of all these intervals, i.e.,
$$\mathcal{U} = \{ (c_e)_{e \in E} : c_e \in [\bar{c}_e, \bar{c}_e + d_e],\ e \in E \}.$$
a polynomial algorithm for the _recoverable robust matroid basis_ problem under scenario set $\mathcal{U}$ has been constructed in earlier work, provided that the recovery parameter $k$ is constant. in consequence, rob rec st under $\mathcal{U}$ is also polynomially solvable for constant $k$. unfortunately, that algorithm is exponential in $k$. interestingly, the corresponding recoverable robust version of the shortest path problem (where $\mathcal{T}$ is replaced with the set of all paths in $G$) has been proven to be strongly np-hard and not at all approximable, even for a small fixed recovery parameter. it has recently been shown that rob rec st under $\mathcal{U}$ is polynomially solvable when $k$ is a part of the input. in order to prove this result, a technique called the _iterative relaxation_ of a linear programming formulation has been applied. this technique, however, does not directly imply a strongly polynomial algorithm for rob rec st, since it requires the solution of a linear program. a popular and commonly used modification of the scenario set has also been proposed by bertsimas and sim: the new scenario set is a subset of $\mathcal{U}$ such that, under each scenario in it, the costs of at most $\Gamma$ edges are greater than their nominal values, where $\Gamma$ is assumed to be a fixed integer in $[0, m]$.

assume that all the considered edges are distinct. if the constructed edge set is not a spanning tree, then it contains a subgraph of the form depicted in figure [fcycle]. suppose that, at some step, a cycle appears, formed by some of the added edges together with the remaining edges of the tree (those not yet removed). such a cycle must appear, since otherwise the constructed edge set would be a spanning tree. let us relabel the edges so that the first moves, each consisting in adding one edge and removing another, create this cycle; for each such move we add two arcs to the auxiliary graph, one for the added edge and one for the removed edge. an easy verification shows that all the edges on the cycle must have the same costs with respect to the modified cost function. indeed, if some of these costs were different, there would exist an edge exchange decreasing the cost of the tree, which contradicts our assumption that it is a minimum spanning tree with respect to these costs. finally, there must be an arc in this subgraph whose endpoints satisfy the admissibility condition; since the costs coincide, this arc is present in the admissible graph. this leads to a contradiction with our assumption that the considered path is an augmenting path. now suppose that the other edge set is not a spanning tree. we consider only one of the two cases, since the proof of the other is entirely analogous.
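as a brief aside from the proof, the following python sketch makes the rec st setting concrete for the special case $k = 1$: a first-stage tree is built with kruskal's algorithm under the first stage costs, and the recovery set $\Phi(T)$ is then searched by brute-force single-edge exchanges under the second stage costs. choosing the first-stage tree greedily like this is only an illustrative heuristic, not the paper's combinatorial algorithm, which optimizes both stages jointly.

```python
import itertools

class DSU:
    # union-find over vertices 0..n-1
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def kruskal(n, edges, cost):
    # edges: list of (u, v) tuples; cost: dict edge -> cost
    dsu, tree = DSU(n), []
    for e in sorted(edges, key=lambda e: cost[e]):
        if dsu.union(*e):
            tree.append(e)
    return tree

def is_spanning_tree(n, edge_list):
    if len(edge_list) != n - 1:
        return False
    dsu = DSU(n)
    return all(dsu.union(u, v) for u, v in edge_list)

def rec_st_k1(n, edges, C, c):
    """first-stage tree under C, then the cheapest T' with |T \\ T'| <= 1 under c."""
    T = kruskal(n, edges, C)
    best = sum(c[e] for e in T)  # T' = T is always in the recovery set
    for f, e in itertools.product(T, edges):
        if e in T:
            continue
        cand = [x for x in T if x != f] + [e]
        if is_spanning_tree(n, cand):
            best = min(best, sum(c[x] for x in cand))
    return sum(C[e] for e in T) + best
```

this brute-force search over $\Phi(T)$ already costs $O(nm)$ spanning-tree checks for $k = 1$, which is one way to see why a dedicated polynomial algorithm for general $k$ is of interest.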
returning to the proof: for convenience, let us number the nodes on the cycle consecutively. the arcs corresponding to the exchange moves belong to the cycle graph. hence, by claim [ccycleg], it must contain a subgraph of the form depicted in figure [fcycle].

we now turn to approximation results for rob rec st. assume that $0 < \bar{c}_e$ and $d_e \le \sigma \bar{c}_e$ for each $e \in E$, where $\sigma > 0$ is a given constant. this inequality means that, for each edge, the nominal cost is positive and the maximum deviation is at most $\sigma$ times greater than the nominal cost. it is reasonable to assume that this condition holds in many practical applications for a not very large value of $\sigma$. lemma [lemappr1] shows that, under this assumption, the optimal value of the robust problem can be bounded in terms of $\sigma$ and the optimal value of the underlying recoverable problem under the nominal scenario; lemma [lemappr2] establishes two analogous implications, (i) and (ii), one for each of the two scenario sets. to prove implication (i), inequality ([ee0]) and the definition of the bounding constant yield the claimed estimate; the rest of the proof is the same as in the proof of lemma [lemappr1]. to prove implication (ii), ([ee0]) and the definition of the constant again give the required bound. note that the optimal value under the nominal scenario can be computed in polynomial time. in consequence, the bounding constants can be efficiently determined for every particular instance of the problem. clearly, we can assume that the condition $d_e \le \sigma \bar{c}_e$ holds for each $e \in E$ for a suitable $\sigma$, and hence for every instance of the problem. we thus get from lemma [lemappr2] (implication (i)) that the problem is approximable. with the constants from lemmas [lemappr1] and [lemappr2], the following theorem summarizes the approximation results: rob rec st is approximable within constant factors, given by these constants, under both scenario sets. observe that lemma [lemappr1] and lemma [lemappr2] hold for arbitrary scenario sets of this form (their particular structure is not exploited). hence, the approximation algorithms can be applied to any problem for which the recoverable version ([incst1]) is polynomially solvable.

in this paper we have studied the recoverable robust spanning tree problem (rob rec st) under various interval uncertainty representations. the main result is a strongly polynomial time combinatorial algorithm for the recoverable spanning tree problem. we have applied this algorithm to solve rob rec st under the traditional interval uncertainty representation in polynomial time. moreover, we have used the algorithm to provide several approximation results for rob rec st with the scenario set introduced by bertsimas and sim and with the scenario set with a budget constraint. there are a number of open questions concerning the considered problem. perhaps the most interesting one is to resolve the complexity of the robust problem under the interval uncertainty representation with a budget constraint. it is possible that this problem may be solved in polynomial time by some extension of the algorithm constructed in this paper. one can also try to extend the algorithm to the more general recoverable matroid basis problem, which has also been shown to be polynomially solvable.
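to illustrate the scenario sets used in the approximation results, the following sketch evaluates the worst-case cost of a single fixed spanning tree under the budgeted interval uncertainty described above, in which the adversary may raise the costs of at most $\Gamma$ edges to their upper bounds. for a fixed tree this reduces to adding the $\Gamma$ largest deviations among its edges; note that in rob rec st the adversary moves before the recovery stage, so this is only a building block, not the full objective. the names and data layout are illustrative assumptions.

```python
def worst_case_cost(tree_edges, cbar, d, gamma):
    """worst-case cost of a fixed tree when at most `gamma` edge costs
    may rise from their nominal value cbar[e] to cbar[e] + d[e]."""
    nominal = sum(cbar[e] for e in tree_edges)
    worst_devs = sorted((d[e] for e in tree_edges), reverse=True)[:gamma]
    return nominal + sum(worst_devs)
```

for the traditional scenario set (no budget), the same evaluation is obtained with `gamma = len(tree_edges)`, since every edge then sits at its upper bound in the worst case.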
|
this paper deals with the recoverable robust spanning tree problem under interval uncertainty representations. a strongly polynomial time, combinatorial algorithm for the recoverable spanning tree problem is first constructed. this problem generalizes the incremental spanning tree problem, previously discussed in the literature. the constructed algorithm is then applied to solve the recoverable robust spanning tree problem, under the traditional interval uncertainty representation, in polynomial time. moreover, the algorithm allows us to obtain several approximation results for the recoverable robust spanning tree problem under the bertsimas and sim interval uncertainty representation and under the interval uncertainty representation with a budget constraint. robust optimization; interval data; recovery; spanning tree
|