* s * * nmr setup * + for the experiments described in this work we employed the nuclear magnetic resonance ( nmr ) technique . the equipment used was a varian 500 mhz nmr spectrometer .this probe has two channels which can be used to manipulate two nuclear spins simultaneously .the sample used was comprised of about molecules of enriched chloroform ( chcl ) with c , diluted in 97% of deutered acetone .this sample was placed in a static magnetic field of approximately t , aligned along the -axis direction .the chloroform molecule has two nuclei ( hydrogen and carbon ) with nuclear spins equal to 1/2 , which interact with each other via exchange interaction . for liquid , isotropic samples with weak coupling , the dynamics of the systemis well described by the hamiltonian under the static field the nuclei larmor frequencies are and , while their coupling constant is .we resort to rotating frames of reference , such that the effect of the local hamiltonians ( due to the static field and spurious frequency displacements ) are canceled , and focus can be set on the interaction part .+ s1.1 radio - frequency pulses to manipulate the spins during an nmr experiment we apply radio - frequency pulses of an oscillating magnetic field .this field is applied in the plane and has a much smaller intensity than the field . to the hamiltonianwe thus add the time - dependent radio - frequency pulse hamiltonian , namely : where controls the pulse shape .tuning close to or allows one to select the nuclei to act upon .as the nuclear larmor frequencies are far from each other , a quasi - squared short pulse could be used .the pulse amplitude , , in our experiment was set such that a rotation in the hydrogen nuclear spin is performed in . in the rotating frame of each nuclei, this quasi - squared pulse translates into a spin rotation ] . to do that we parametrized as where is the natural nmr hamiltonian in the rotating frame .normalized hilbert - schmidt distance between the exact and the approximated evolution operator as a function of simulated time .each one of the four stages has its specific number of steps determined in order to minimize the distance . ]the errors incurred due to that procedure can be minimized by adjusting the number of steps around the time intervals where the hamiltonian changes the most .we thus divided the whole evolution in four stages , each one being divided by a specific uniform time step .see fig .[ fig : hs ] .this minimization generated some very small angles , whose experimental implementation could compromise the overall fidelity expected , besides of being below the experimental precision . because of that , upon the top of the trotterization procedure , we neglected any rotation with angles below .the hilbert - schmidt distance between the exact and the approximated evolution operator under such an approximation is shown in fig.[fig : hs ] .the values used in the experiments can be read from the plots in fig.[fig : angles ] .lastly , as we can not perform pulses in the -axis , we used the mathematical identity .therefore , each pulse along the direction was turned into a sequence of three pulses in the plane . all in all , about 2000 pulses , in a real - time evolution , were necessary to simulate the whole annealing protocol .+ s2.3 results for positive in the main text we concentrated in the results for negative . 
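The Trotterization and distance-minimization procedure described above can be prototyped in a few lines. The sketch below is only a minimal illustration of the idea, assuming a toy two-qubit schedule with placeholder envelope functions and couplings, a uniform (rather than stage-wise optimized) time grid, and one common choice of normalized, phase-insensitive Hilbert-Schmidt distance; it is not the pulse sequence used in the experiment.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Placeholder annealing Hamiltonian H(s) = (1-s)*H_easy + s*H_problem;
# the schedule and couplings actually used in the experiment are not reproduced here.
H_easy = -(np.kron(X, I2) + np.kron(I2, X))
H_prob = 2.0 * np.kron(Z, I2) - 1.0 * np.kron(I2, Z) - 1.0 * np.kron(Z, Z)

def H(s):
    return (1.0 - s) * H_easy + s * H_prob

def propagator(n_steps, T=1.0):
    """Trotterized propagator: ordered product of exponentials on a uniform grid."""
    U = np.eye(4, dtype=complex)
    dt = T / n_steps
    for k in range(n_steps):
        s = (k + 0.5) * dt / T          # midpoint of the k-th step
        U = expm(-1j * H(s) * dt) @ U
    return U

def hs_distance(U, V):
    """A normalized, global-phase-insensitive Hilbert-Schmidt distance."""
    d = U.shape[0]
    return np.sqrt(max(0.0, 1.0 - abs(np.trace(U.conj().T @ V)) / d))

U_ref = propagator(5000)   # very fine grid as a stand-in for the exact evolution
for n in (10, 50, 235):
    print(n, hs_distance(U_ref, propagator(n)))
```

In the same spirit, one would then scan the number of steps per stage (and a small-angle cutoff) to trade pulse count against the residual distance, as done in the text.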
Here, in Fig. [fig:jpos], we show the results for the other instance, that is, for the positive spin-spin coupling, which present the same features as observed previously. All the other parameters remained the same.

S2.4 Experimental errors

We assessed the errors present in the combined experimental procedure of preparing and measuring the system state by repeating it several times. By doing this, we found that the reconstructed density-matrix elements were Gaussian distributed with a relative standard deviation of . It is worth mentioning that this procedure is performed just once during the evolution, and hence its associated error does not scale as the number of time steps increases. As for the errors due to faulty pulses, which clearly do scale up as the number of time steps increases, one should expect in an NMR experiment a typical error of a few degrees in their angles. Indeed, our estimations revealed that standard deviations of for the rotations and of for the free-evolution duration could correctly describe our findings. The error bars shown in Figs. [fig:results] and [fig:jpos] are evaluated accordingly for these sources of error.
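Error bars of this kind can be estimated by propagating Gaussian pulse imperfections through a simulated sequence with Monte Carlo sampling. The sketch below uses a hypothetical three-pulse single-qubit sequence and an assumed two-degree angle spread purely for illustration; the actual ~2000-pulse sequence and the fitted standard deviations of the experiment are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(axis, angle):
    """Single-qubit rotation exp(-i * angle * axis / 2)."""
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * axis

# Hypothetical nominal pulse list (axis, angle); NOT the experimental sequence
pulses = [(X, np.pi / 2), (Y, np.pi / 4), (X, np.pi / 2)]
sigma_angle = np.deg2rad(2.0)        # assumed few-degree pulse-angle spread

def final_state(noise):
    psi = np.array([1.0, 0.0], dtype=complex)
    for axis, angle in pulses:
        psi = rot(axis, angle + rng.normal(0.0, noise)) @ psi
    return psi

ideal = final_state(0.0)
samples = [abs(np.vdot(ideal, final_state(sigma_angle))) ** 2 for _ in range(2000)]
print("mean fidelity:", np.mean(samples), "+/-", np.std(samples))
```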
we performed a banged - digital - analog simulation of a quantum annealing protocol in a two - qubit nuclear magnetic resonance ( nmr ) quantum computer . our experimental simulation employed up to 235 trotter steps , with more than 2000 gates ( pulses ) , and we obtained a protocol success above 80% . given the exquisite control of the nmr quantum computer , we performed the simulation with different noise levels . we thus analyzed the reliability of the quantum annealing process , and related it to the level of entanglement produced during the protocol . although the presence of entanglement is not a sufficient signature for a better - than - classical simulation , the level of entanglement achieved relates to the fidelity of the protocol . _ introduction . _ among the models for quantum computation , quantum annealing arises as one of the front runners that may first establish the quantum supremacy the stage at which implementations of quantum computers will start solving problems deemed intractable for their classical counterparts . such a model is inspired on the adiabatic quantum computation ( aqc ) scheme , originally proposed by farhi et al . , in which the answer of an abstract problem can be encoded in the ground state of a physical system . similarly to aqc , quantum annealing exploits the gradual modification of the system state character , in order to find the solution of a hard problem starting from an easy one . however , regarding its physical implementation , quantum annealing presents a key advantage , since it is tailored for scenarios in which the system is in contact with a thermal environment due to this interaction , under appropriated conditions , relaxation processes to the ground state may enhance the protocol success . because of that , and due to its simplicity , quantum annealing has attracted great attention . for instance , it was adopted as the quantum computation model by d - wave the first company commercially producing and selling devices advertised as quantum computers . this first private venture was recently followed by an initiative from google / ucsb . all that led to an increased scrutiny of the model . indeed , soon after the announcement of d - wave s first machine , took place an important and intense debate whether their computer would be actually a quantum computer . even though the first evidences indicated that quantum annealing would be the right model for the machine s behavior , they were taken as disputable , as semi - classical approaches could also reasonably describe the experimental results , and no evidence of speed - up was found . in such a debate , naturally , the `` holy grail '' became whether the machine could generate entanglement during the computation . recently d - wave conducted an experiment that unequivocally showed the presence of entanglement among the qubits composing one of their first processors . having settled that issue , other questions became natural and pertinent . the aim of this contribution is twofold : _ i ) to assess the reliability of a quantum annealing simulation under a massive `` banged - digital - analog '' quantum computation . _ recently , a only - digital simulation of the quantum annealing process was performed in a system composed of nine superconducting qubits . due to their system size and the noise acting on it , they were able to perform only few ( five ) trotter steps , and , for the ferromagnetic chain problem with 4 spins , the fidelity obtained was . 
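For the two-qubit instances considered here, the classical baseline of exhaustively scoring every computational basis state is trivial; the point of the annealer is that this brute-force search does not scale with the number of spins. A minimal sketch, with placeholder fields and coupling rather than the paper's values, is:

```python
import itertools

def ising_energy(config, h, J):
    """Energy of a +/-1 configuration for H = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    e = sum(h[i] * s for i, s in enumerate(config))
    e += sum(Jij * config[i] * config[j] for (i, j), Jij in J.items())
    return e

# Placeholder fields and coupling for a two-spin instance
h = {0: 2.0, 1: -1.0}
J = {(0, 1): -1.0}

states = list(itertools.product([+1, -1], repeat=len(h)))
ground = min(states, key=lambda c: ising_energy(c, h, J))
print("ground state:", ground, "energy:", ising_energy(ground, h, J))
```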
to overcome some of the issues encountered in this implementation , here we combine the digital simulation with an analog part . digital - analog quantum simulations have been proposed to different architectures . such a scheme might potentially lead quantum annealing to inherit features , like designing interactions on demand and error correction protocols , from digital quantum computing . _ ii ) to relate the amount of entanglement generated during a quantum annealing protocol with its success . _ the role of entanglement is one of the most unclear questions concerning aqc . our results suggest that , once fixed an annealing schedule , its fidelity shall be related to the amount of entanglement created during the protocol . therefore , only high levels of entanglement , i.e. , extreme experimental control of noise sources , might guarantee a better - than - classical result . in order to tackle these issues , here we employed a two - qubit nmr quantum processor to perform the banged - digital - analog quantum computation . in nmr the interaction among the nuclear spins , which are the qubits of the processor , is always on , performing the analog part of the computation . besides that , one can shine the system with radio frequency ( rf- ) pulses to perform banged - digital single qubit gates . moreover , the amount of entanglement generated in the protocol can be changed by a tunable source of decoherence . _ the computational problem . _ the task we analyse here is that of finding the ground state of an ising spin glass model , defined by the hamiltonian : in this expression , is the usual component of a spin-1/2 operator at site , and the parameters and represent local fields and spin - spin couplings , respectively . besides being a paradigm for many - body quantum systems , the problem of finding its ground state is known to be representative of several optimization problems ( np - hard ) . note that if we define the eigenvectors of by with , then the ground state of the ising hamiltonian is certainly a product state of the form . the challenge is to determine which of the possible product states of this form is the actual ground state . a task for which brute force search clearly will not be efficient . the quantum annealing strategy to approach this problem relies on the quantum adiabatic theorem . first , one initializes the system in the ground state of a simple hamiltonian , . here we choose , with , which ground state is , with . thus , the initial state is an equiprobable superposition of all possible product states that can be the ground state of . second , a `` schedule '' is chosen such that the system hamiltonian is adiabatically changed onto the ising hamiltonian . specifically , the system is governed by the time - dependent hamiltonian : h(t)=(t)h_easy+(t ) _ i=1^n h_i_i^z + ( t)_i < j=1^nj_ij_i^z_j^z , [ eq : timeh ] where the envelope functions , , and are changed smoothly during the protocol , ] , thus it is expected that some entanglement will be generated during this process . naturally , any physical implementation of the quantum annealing protocol must run in a finite time , and is under the influence of a thermal environment . the finitude of the protocol duration implies that the adiabaticity condition is somewhat broken , and the state at each time is a superposition of a large component of the ground state and small parts of the first excited states . 
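The adiabaticity condition is governed by the instantaneous spectral gap of the interpolating Hamiltonian. The sketch below builds a two-qubit H(s) with placeholder envelope functions, fields and coupling (the experiment's optimized schedule is not reproduced), checks that the equal superposition is the ground state of the easy Hamiltonian, and scans the gap along the protocol.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H_easy  = -(np.kron(X, I2) + np.kron(I2, X))
H_local =  2.0 * np.kron(Z, I2) - 1.0 * np.kron(I2, Z)   # assumed local fields
H_coup  = -1.0 * np.kron(Z, Z)                           # assumed spin-spin coupling

# Placeholder smooth envelopes on the rescaled time s in [0, 1]
alpha = lambda s: 1.0 - s
beta  = lambda s: s
gamma = lambda s: s

def H(s):
    return alpha(s) * H_easy + beta(s) * H_local + gamma(s) * H_coup

# The equal superposition |++> is the ground state of H_easy (eigenvalue -2)
psi0 = np.ones(4, dtype=complex) / 2.0
print("|++> is ground state of H_easy:", np.allclose(H_easy @ psi0, -2.0 * psi0))

# Minimum instantaneous gap along the schedule controls the adiabatic condition
gaps = [np.diff(np.linalg.eigvalsh(H(s)))[0] for s in np.linspace(0, 1, 201)]
print("minimum spectral gap:", min(gaps))
```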
furthermore , if the protocol is slow enough and the system and the environment are weakly coupled , the thermal environment turns this superposition into a mixture of such eigenstates . these two unavoidable facts , in general , result in errors for the quantum annealer . however , since we are searching for the ground state of the problem hamiltonian , for low temperatures the thermalization process might help transferring population from the excited states to the ground state . this possible improvement of the quantum annealer comes at the cost of keeping the temperature low , and of waiting for the relaxation process to happen . in the following , we address the robustness of the quantum annealer in such realistic conditions fixing the running time for different noise strengths and we compare the quality of its results with the ones obtained by a classical simulation . in the classical simulation here employed , we model each spin as a magnet compass , with magnetization pointing at the direction . the magnetization components of each qubit evolve accordingly to the noise - free bloch s equations : this set of coupled differential equations can be easily solved by a classical computer . we say that the state of the system at time is given by , with the state of each qubit defined by magnetization components through for . in this classical model there is clearly no entanglement during the whole evolution . as such we expect that this simulation will fail during the time intervals where the quantum annealing process produces some entanglement . as a last remark , since this is a simulation performed in a classical computer , we do not include any noise effects . the state assigned to the system remains always pure . _ nmr experiment . _ in order to address the reliability of a quantum annealing process in controlled laboratory conditions , we conducted an experiment of a small - scale quantum annealer within the framework of nuclear - magnetic resonance ( nmr ) . nmr is a well established test bed for quantum information processing , allowing for an exquisite control of the nuclear spins of molecules . our experiment was performed using a sample of carbon - enriched chloroform ( chcl ) molecules , where the nuclear spins of the hydrogen and the carbon were taken as physical implementations of qubits . for that , a static magnetic field of was applied to the sample along the direction , yielding and as the hydrogen and carbon larmor frequencies , respectively . the sample contains around identical chloroform molecules highly diluted in deutered acetone , thus intermolecular interactions can be safely ignored . such conditions lead to the natural nmr system hamiltonian : h_nmr = -_h -_c + 2j , [ eq : hnmr ] with the interaction strength between the two spins given by the coupling constant . single qubit operations are performed in a straightforward manner by applying radio frequency pulses with specific field polarizations , tuned in resonance with each one of the spins . however , since pulses and read - out act collectively , it is not possible to address each molecule individually , meaning that only average properties of the sample are measured . another consequence of such a lack of spatial resolution is that a magnetic field gradient along the direction acts as a dephasing channel for the computational basis . given that the strength of this process is determined by the gradient intensity , one has an effective _ knob _ to adjust the noise level in this setup . 
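The classical compass simulation described earlier can be sketched as each magnetization vector precessing about an effective field built from the instantaneous schedule. Treating the spin-spin coupling as a mean field, as well as the envelope functions and parameter values below, are illustrative assumptions and not the exact equations integrated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder fields, coupling and schedule for two classical "compass" spins
h = np.array([2.0, -1.0])
J = np.array([[0.0, -1.0], [-1.0, 0.0]])
T = 1.0
alpha = lambda t: 1.0 - t / T
beta  = lambda t: t / T

def rhs(t, m_flat):
    m = m_flat.reshape(2, 3)
    dm = np.zeros_like(m)
    for i in range(2):
        # effective field: transverse part from the easy term, longitudinal from
        # the local fields plus a mean-field version of the coupling
        bz = beta(t) * (h[i] + J[i] @ m[:, 2])
        B = np.array([alpha(t), 0.0, bz])
        dm[i] = np.cross(m[i], B)   # precession dm/dt = m x B (units/signs absorbed)
    return dm.ravel()

m0 = np.tile([1.0, 0.0, 0.0], 2)    # all spins initially along +x
sol = solve_ivp(rhs, (0.0, T), m0, rtol=1e-8)
print("final z-magnetizations:", sol.y[2::3, -1])
```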
furthermore , as the system s hamiltonian is not diagonal in that basis , eigenenergy transitions may also be induced by the presence of such magnetic field . we exploit this tool to assess the connection between different levels of entanglement and the fidelity of the noisy computation . our quantum annealer is then comprised of two qubits and its time evolution is simulated experimentally using the natural ( analog ) nmr hamiltonian and an appropriate ( digital ) sequence of pulses . for that , we first divided ( trotterization ) the total evolution generated by the time - dependent hamiltonian into 235 time steps , with , such that . in this way we can write the evolution operator as , with each evolution block determined numerically . experimentally , each is translated into a sequence of pulses and free system evolution ( see figure [ fig : simulation ] ) . the time intervals are not necessarily equal , and their choice take into account how fast the hamiltonian changes during each block . such an approach is necessary because there exists a trade - off between the minimization of the number of pulses and the fidelity obtained for the time evolution . pulses angles and the free evolution time interval were chosen as to minimize the ( hilbert - schmidt ) distance between and the implemented transformation . more details can be found in the supplementary material . . each evolution block is implemented using free evolution and an appropriate and polarized rf - pulses applied to each qubit . the respective pulses angles and free evolution duration are chosen as to maximize the fidelity of the operation with . [ fig : simulation ] ] here , we choose to experimentally search for the ground state of two instances of the ising hamiltonian with parameters for the qubit encoded in the hydrogen nuclear spin , and for the qubit encoded in the carbon nuclear spin . the instances differ from each other only by the sign of spin - spin coupling , which absolute value is chosen to be . taking makes the ground state of unique and separable , as any possible degeneracy is lifted . for we take and . hence , any entanglement created during the time evolution is due to the adiabatic algorithm . with these choices , the total time for the simulated protocol was fixed in with scheduling [ see fig . [ fig : setup](a ) ] selected in such a way that no level - crossing involving the ground state is present . in an ideal , noiseless and error - free realization of this protocol , fidelities between the time - evolved state and the instantaneous ground state would remain above 0.997 during the whole process , reaching at the end of the protocol . moreover , in this scenario , the quantum annealer would always be better than the classical simulation described above , as may be seen in fig . [ fig : setup](b ) . also note , fig . [ fig : setup](c ) , that when and the spin - spin coupling hamiltonian are comparable , the instantaneous time - evolved state would exhibit a fair amount of entanglement . . * ( b ) protocol fidelity . * quantum ( solid line ) and classical ( dashed line ) fidelities between the time - evolved state and the instantaneous ground state ( ) . the fidelity for the quantum protocol is degraded when the scheduling imposes fast hamiltonian changes . for the classical protocol errors appear when some entanglement is expected in the ground state . * ( c ) entanglement evolution . * the evolution of entanglement , as measured by its negativity , in the time - evolved state . 
high amounts of entanglement are obtained when the contributions of and are comparable . ] with the experiment design fixed ( scheduling and optimized s ) , our nmr implementation obeys the following structure : _ a ) _ we initialize the system in the ( pseudo - pure ) state ; _ b ) _ switch on the field gradient ; _ c ) _ apply a sequence of pulses leading to the evolution , up to a sequence with ; _ d ) _ switch off the field gradient ; _ e ) _ perform full - state tomography . this is repeated up to for a fixed field gradient . with the state snapshots we evaluate various quantities that characterize the quality of the computation and the entanglement generated . afterwards we change the field gradient and the whole process is repeated . . several figures of merit were evaluated for increasing values of applied magnetic field gradient : open squares show the fidelity between the experimentally measured state and the theoretical instantaneous ground state ; the success , solid disks , gives the fidelity as for the open squares , but only taking into account the diagonal part of the density matrices in the computational basis ; open diamonds show the evolution of entanglement in in the experiment ; and , finally , blue crosses give information about the purity of the system . ] _ results and discussion . _ part of the results are shown in the fig . [ fig : results ] . as expected , the fidelity ( open squares ) between the experimentally produced state and the ground state is near unity at the beginning of the experiment for all values of the gradient . as the protocol continues , fidelity decreases due to intrinsic errors and also due to the induced noise by the field gradient . clearly , the greater the gradient , the worse the fidelity gets . such a behavior is also observed when one looks at the fidelity of states considering only the population occupation of the computational basis ( diagonal part of the state density matrix in the computational basis ) , what we called `` success '' . this figure of merit is pertinent for experiments where measurements can only be performed in the computational basis . notice that towards the end of the protocol , both success and fidelity reach the same value . this happens because the off - diagonal terms of the experimental state dye out due to the decoherence , and the ground state is diagonal in the computational basis . ] we can also observe how the amount of entanglement evolves with time , and its resilience to noise . the top panel of fig . [ fig : results ] , where no gradient is present , is to be compared with fig.[fig : setup](c ) . as discussed before , when both the transversal and longitudinal components of the time - dependent hamiltonian are present , a higher amount of entanglement is generated . as we increase the noise strength , i.e. , the field gradient , the amount of entanglement created during the protocol greatly reduces . these two facts above suggest a correlation between the amount of entanglement generated during the quantum annealing protocol and the overall quality of the process . to make this correlation clearer , in fig . [ fig : avg ] we plot the time - average fidelity and success for the quantum annealer as a function of the field gradient , and also the time - averaged entanglement and the maximum achieved entanglement as a function of the field gradient . the curves are monotonically related to each other . 
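The entanglement quoted in these figures is quantified by the negativity, which for two qubits follows directly from the partial transpose of the reconstructed density matrix. A minimal implementation:

```python
import numpy as np

def negativity(rho, dims=(2, 2)):
    """Negativity N = (||rho^(T_B)||_1 - 1) / 2 from the partial transpose."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # transpose subsystem B
    return (np.abs(np.linalg.eigvalsh(rho_pt)).sum() - 1.0) / 2.0

# Sanity checks: a Bell state gives 1/2, a product state gives 0
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(negativity(np.outer(psi, psi.conj())))          # -> 0.5
print(negativity(np.diag([1.0, 0.0, 0.0, 0.0])))      # -> 0.0
```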
naturally , the inference of such a correlation between the entanglement generated and the quality of the process poses perhaps a more important question regarding quantum computation : would the presence of entanglement be a signature of a better than classical computation ? to address this question , we also plot in fig . [ fig : avg ] the time - average fidelity and success for the classical simulation . as the ground state of is entangled at some times , and the states produced by the classical algorithm are always separable , the time averaged fidelity and success can never be one . nevertheless , as the classical simulation does not suffer from noise , its quality is the same for all values of the gradient . it is thus expected that for a given level of noise the quantum fidelity and success should be below the classical counterparts . surprisingly , however , this crossing happens despite the fact that considerable amounts of entanglement are generated by the quantum annealer . this shows that producing entanglement in a quantum annealing process does not necessarily mean that the quantum computation is more reliable than any classical simulation . _ conclusions . _ our results , obtained from an nmr quantum annealer , provide clear evidences that entanglement should not be considered the figure of merit to assert that a quantum annealing computation would be more reliable than any classical computation , even though a correlation between high amounts of entanglement and better quantum computation seems to exist . in addition , as for the context of digital - analog adiabatic quantum computing , our results also reinforce that such an approach should be indeed considered as a viable and promising implementation of continuous time evolutions . the combination of a long analog part with a fault - tolerant sequence of digital gates might pave the way to bring the promise of a quantum computer to a more tangible reality . * acknowledgements * + we would like to thank enrique solano for various important remarks to our results . the authors are supported by the instituto nacional de cincia e tecnologia - informao quntica ( inct - iq ) , and by the brazilian funding agencies faperj , and cnpq . + * author contributions * + f. b. and f. d - m . equally contributed to this project .
the stabilisation of unstable periodic orbits ( upos ) using feedback control has attracted the attention of many authors over a number of years .the time - delayed feedback method of pyragas , has been of particular interest . here , the feedback is proportional to the difference between the current and a past state of the system .specifically , where is some state vector , is the period of the targeted upo and is a feedback gain matrix .advantages of this method include the following .first , since the feedback vanishes on any orbit with period , the targeted upo is still a solution of the system with feedback .control is therefore achieved in a non - invasive manner .second , the only information required a priori is the period of the target upo , rather than a detailed knowledge of the profile of the orbit , or even any knowledge of the form of the original odes , which may be useful in experimental setups .the method has been implemented successfully in a variety of laboratory situations , as well as analytically and numerically in spatially extended pattern - forming systems ; more examples can be found in a recent review by pyragas . until now, there has been little or no study on whether there are limitations to pyragas feedback control as the period of the targeted orbit , and hence the delay time , becomes large . in this paper , we investigate the use of pyragas feedback on unstable periodic orbits with arbitrarily large period .one mechanism for the generation of long - period periodic orbits is at bifurcations from homoclinic orbits or heteroclinic cycles . in this paperwe focus on a subcritical bifurcation from a symmetric heteroclinic cycle , specifically the heteroclinic cycle of guckenheimer and holmes .the bifurcation produces a branch of unstable long - period periodic orbits and we investigate using a time - delayed feedback control similar to the pyragas feedback as a stabilisation mechanism .the addition of pyragas feedback to the odes considered by guckenheimer and holmes results in an infinite - dimensional delay equation . in order to analyse trajectories near the periodic orbit of interest, we make a number of assumptions about the form of solutions to the delay - differential equation and reduce the flow to a three - dimensional map .this method , after the assumptions have been made , is a modified version of the standard ` small box and poincar map ' analysis used by many authors to study the dynamics of trajectories close to heteroclinic cycles .this reduction of an infinite - dimensional delay equation to a finite dimensional map has not appeared before in the literature .although our assumptions are not fully rigorously justified , we test the validity of our arguments by comparing our results with a numerical example .we find excellent agreement between the analytical and numerical results .a surprising result of the analysis for the particular example we use is that as the period of the orbit increases , the amplitude of the gain parameter required to stabilise the unstable orbits decreases .this paper is organised as follows . in section [ sec : rev ] we give a review of heteroclinic cycles and their bifurcations .we describe the guckenheimer holmes heteroclinic cycle , and summarise the standard approach to analysing trajectories close to heteroclinic cycles . 
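Numerically, the Pyragas term K[x(t - tau) - x(t)] only requires keeping a buffer of past states. The sketch below wires it into a fixed-step Euler integrator and applies it to a toy subcritical Hopf normal form; the system, the gain matrix and all parameter values are placeholder choices rather than the heteroclinic-cycle problem analysed in this paper, and whether the targeted orbit is actually stabilised depends on the choice of gain.

```python
from collections import deque
import numpy as np

def simulate_pyragas(f, x0, K, tau, dt=1e-3, t_end=150.0):
    """Fixed-step Euler integration of dx/dt = f(x) + K @ (x(t - tau) - x(t))."""
    n_delay = int(round(tau / dt))
    # constant initial history: x(t) = x0 for t <= 0
    history = deque([np.asarray(x0, float)] * (n_delay + 1), maxlen=n_delay + 1)
    x = np.asarray(x0, float)
    for _ in range(int(t_end / dt)):
        x_delayed = history[0]                       # state roughly tau in the past
        x = x + dt * (f(x) + K @ (x_delayed - x))    # feedback vanishes on tau-periodic orbits
        history.append(x)
    return x

# Toy target: the unstable periodic orbit (radius sqrt(-lam), period 2*pi/omega)
# of a subcritical Hopf normal form; parameters are illustrative only.
lam, omega = -0.05, 1.0

def hopf(x):
    r2 = x[0] ** 2 + x[1] ** 2
    return np.array([lam * x[0] - omega * x[1] + r2 * x[0],
                     omega * x[0] + lam * x[1] + r2 * x[1]])

tau = 2 * np.pi / omega          # delay = period of the targeted orbit
K = -0.3 * np.eye(2)             # placeholder gain; success depends on this choice
x_final = simulate_pyragas(hopf, [0.95 * np.sqrt(-lam), 0.0], K, tau)
print("final radius:", np.hypot(*x_final), "target:", np.sqrt(-lam))
```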
in section [ sec : feed ] we describe how we choose the feedback control terms which are added to the equations .we then perform the reduction of the equations described above , which gives us a method of computing the stability of the periodic orbits . section [ sec : num ] contains numerical examples and section [ sec : conc ] concludes .a heteroclinic cycle is a topological circle of connecting orbits between at least two saddle - type equilibria . in generic ( non - symmetric ) dynamical systems ,heteroclinic cycles are of high codimension and their existence for open sets of parameter values is unexpected .if a dynamical system contains flow - invariant subspaces , the connecting orbits can be contained within these subspaces , and then the heteroclinic cycle is robust to perturbations of the system that preserve the invariance of these subspaces .flow invariant subspaces can arise due to symmetry , or due to other restrictions on the flow ( such as extinctions in population dynamics models ) .the review of krupa contains many examples of robust heteroclinic cycles . in this paperwe consider robust heteroclinic cycles in symmetric systems .consider a continuous - time dynamical system defined by an ode : where is a -equivariant vector field , that is , and is a finite lie group .an equilibrium of satisfies .we consider only hyperbolic equilibria , and assume that is smoothly linearisable about each equilibrium .[ def : het_con ] is a _ heteroclinic connection _ between two equilibria and of if is a solution of which is backward asymptotic to and forward asymptotic to .a heteroclinic cycle is an invariant set consisting of the union of a set of equilibria and orbits , where is a heteroclinic connection between and ; and .we require that .if , then is a homoclinic orbit .a heteroclinic cycle is a _ homoclinic cycle _ if there exists such that for all . for define the _ isotropy subgroup _ , for an isotropy subgroup of , we define the _ fixed - point subspace _ [ def : het_cyc ] a heteroclinic cycle is _ robust _ if for each , , there exists a fixed - point subspace , where and 1 . is a saddle and is a sink for the flow restricted to , 2 . there is a heteroclinic connection from to contained in .robust heteroclinic cycles occur as codimension - zero phenomena in systems with symmetry .that is , they can exist for open sets of parameter values .bifurcations of heteroclinic cycles therefore occur as codimension - one phenomena .we now consider the computation of the stability of heteroclinic cycles and the associated bifurcations .the stability of a heteroclinic cycle is usually computed by constructing poincar maps on a poincar section of the flow .the flow near the cycle is divided into two parts ; the ` local ' part , near the equilibria , where the flow can be well approximated by the linearised flow about the equilibria , and the ` global ' part of the flow , where the trajectory is away from the equilibria .the global part of the flow occurs on a much faster timescale than the local part and can be approximated by a linearisation of the flow around the heteroclinic connections .the construction of such poincar maps is a standard procedure , details can be found in , for example .heteroclinic cycles generically lose stability in two ways : resonant bifurcations and transverse bifurcations .transverse bifurcations occur when one of the eigenvalues at an equilibrium passes through zero ; the equilibrium undergoes a local bifurcation . 
we do not consider transverse bifurcations here ,see for details . throughout this paper ,when we refer to ` the eigenvalues at an equilibrium ' , we of course mean the eigenvalues of the jacobian matrix of the flow linearised about that equilibrium . at a resonant bifurcationthe eigenvalues at the equilibria are generically non - zero , but satisfy an algebraic condition that determines a global change in the stability properties of the cycle .resonant bifurcations were first studied in the non - symmetric case by chow et al . in the context of a bifurcation from a homoclinic orbit .a more recent study considers a codimension - two resonant bifurcation from a robust heteroclinic cycle with complex eigenvalues .resonant bifurcations are generically accompanied by the birth or death of a long - period periodic orbit .if is the bifurcation parameter controlling the resonant bifurcation ( that is , at the bifurcation point ) , then the period of the bifurcating periodic orbit generically scales as resonant bifurcations can occur in a supercritical or subcritical manner .we consider the subcritical case , when the branching periodic orbits are unstable , and in the following show that pyragas - type time - delayed feedback can stabilise the periodic orbits .our analysis focuses on the guckenheimer holmes cycle in .the guckenheimer holmes cycle is a prototypical example of a robust heteroclinic cycle .we use this cycle as an example on which to base our analysis .first we review the original case with no feedback .the equations considered by guckenheimer and holmes can be written : where , and and are real parameters .the equations are equivariant under the symmetry group , generated by a reflection and a rotation : we label the equilibrium on the positive -axis as . here , and throughout the remainder of the paper , subscripts on equilibria , coordinates and similar objects should be taken mod .each two - dimensional coordinate plane is a fixed point subspace . if , then the only equilibria in each coordinate plane are those lying on the coordinate axes .we consider the case and then it can be shown that in the plane , is a saddle and is a sink . it can additionally be shown that in forward time trajectories are bounded away from infinity and therefore by the poincar bendixson theorem there exists a heteroclinic connection from to .similarly , connections also exist from to , and to .these connections lie in two - dimensional fixed - point subspaces ( the two - dimensional coordinate planes ) , so the cycle is robust .the resulting heteroclinic cycle is shown schematically in figure [ fig : ghcyc ] .also note that , so the cycle is homoclinic .the stability of the cycle can be calculated using the methods described above .it is a standard procedure , but we outline the method here , as we use similar ideas later when considering the stability of periodic orbits in the system with added time - delayed feedback . consider a trajectory which passes close to the equilibrium .the linearised flow near is : the direction is the ` radial ' direction , and as shown in , for heteroclinic cycles of this type , the radial direction does not affect the stability of the cycle .all trajectories move away from the origin , and also away from infinity , and in this case are attracted to an ` invariant sphere ' which contains the heteroclinic cycle . therefore ,for simplicity , we henceforth ignore this component .we define poincar sections close to : where , and construct a poincar return map on . 
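The Guckenheimer-Holmes system itself is easy to explore numerically. A commonly used parameterisation of this vector field, with illustrative coefficient values chosen so that the cycle attracts (the coefficients used later in the paper are not reproduced here), is integrated below; the growing dwell times near the axis equilibria are the signature of the approach to the heteroclinic cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coefficients: equilibria on the axes at distance 1, contracting
# eigenvalue (lam + b) = -1, expanding eigenvalue (lam + c) = 0.5, so the cycle attracts.
lam, a, b, c = 1.0, -1.0, -2.0, -0.5

def gh(t, x):
    x1, x2, x3 = x
    return [x1 * (lam + a * x1**2 + b * x2**2 + c * x3**2),
            x2 * (lam + a * x2**2 + b * x3**2 + c * x1**2),
            x3 * (lam + a * x3**2 + b * x1**2 + c * x2**2)]

sol = solve_ivp(gh, (0.0, 120.0), [0.8, 0.1, 0.05],
                max_step=0.05, rtol=1e-8, atol=1e-12)

# Times at which the dominant coordinate switches; the gaps between them grow
# as the trajectory spirals into the heteroclinic cycle.
dominant = np.argmax(np.abs(sol.y), axis=0)
switches = sol.t[np.where(np.diff(dominant) != 0)[0]]
print("switching times:", np.round(switches, 1))
```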
consider a trajectory which passes through at time with .the trajectory will hit at with we thus write down a local map , which describes the flow near the equilibria : the flow near the heteroclinic connection from to a plane near is approximated by the global map : where is the coordinate of the trajectory when it next hits , and is a positive constant . note that the constant term in this expansion of is zero because the plane is invariant .we write .since , the map is a return map on .write and then the return map is where .the map has fixed points at and at .the fixed point at corresponds to the heteroclinic cycle in the flow and is stable if . the heteroclinic cycle loses stability in a resonant bifurcation at .the second fixed point at corresponds to a branch of periodic orbits , as long as is small and positive .the symmetry acts as a spatio - temporal symmetry on the periodic orbits .that is , if we write the periodic solution as a trajectory , with minimal period , then the stability of the orbits can be determined by finding the stability of the fixed point in the map .it is simple to see that if , is small and positive ( and hence corresponds to a periodic orbit in the flow ) when , so the resonant bifurcation is subcritical .we find that and so is unstable .conversely , if , then corresponds to a branch of stable periodic orbits if , and the bifurcation is supercritical .the period of the orbit is approximately where and is the time spent by the trajectory each time it passes close to an equilibrium .we are ignoring the time spent away from the equilibria ( that is , close to the heteroclinic connections in the invariant planes ) since it is much less than when we are close to the resonant bifurcation , that is , . in equations ,the resonant heteroclinic bifurcation at is degenerate .that is , the branch of periodic orbits exists only at .this corresponds to the case in the map .we add additional higher order terms to break this degeneracy , specifically we consider the additional terms preserve the equilibria and the symmetries of the system , and also the invariant planes and the heteroclinic cycle .the heteroclinic cycle still loses stability in a resonant bifurcation at , but now a branch of periodic orbits is created in either or .the sign of determines the branching direction and whether , in the map , is greater or less than .if , we see a branch of unstable periodic orbits in ( and the resonant bifurcation is subcritical ) .if , we see a branch of stable periodic orbits in ( and the bifurcation is supercritical ) .a complete study of the effect of fifth order terms on the dynamics near the gh cycle has not been performed .however , the above assertion can be seen by considering the effect of the new term on the component when the trajectory is close to the plane but away from either coordinate axis . in the following ,we consider the subcritical case , where the periodic orbits are unstable , and add non - invasive time - delayed feedback to stabilise the orbits near the heteroclinic cycle .to ease analysis and improve the accuracy in the numerical computations in section [ sec : num ] , we introduce new coordinates . along with a change in timescale , this transforms equations to where .note that in these coordinates , the equilibria are at , e.g. 
, .the invariant planes in the coordinates are transformed to .however , we are not interested in trajectories which lie in the coordinate planes , only those which are close to them .pyragas feedback is additive and has the form where is a ( real ) gain matrix and is the period of the targeted periodic orbit .our choice of coordinates suggests the following slightly altered functional form for the feedback : for trajectories close to the periodic orbit , and so the feedback terms are approximately of pyragas form . for this choice of feedback ,the equilibria and the invariance of the coordinate planes ( in the original coordinates ) are preserved .however , we additionally choose to use the symmetries of the system to make a further change in the form of the feedback which simplifies the subsequent analysis .the feedback we use is : where is one - third of the period of the orbit . due to the spatiotemporal symmetry of the periodic orbit under the action of , the feedback vanishes at the periodic orbit , and so the periodic orbit is still a solution of the system .however , this feedback does not preserve the equilibria or invariant planes ( in the original coordinates ) .we choose the matrix in a similar manner to that in , as follows .we write where the matrix has the form of the feedback matrix used by fiedler _ et al . _ in a two - dimensional example ; stabilising periodic orbits emanating from a subcritical hopf bifurcation . recall that the orbit has two unstable directions , and one stable direction the radial direction .the matrix is chosen so the feedback is rotated to align with the unstable directions , and there is no feedback in the stable direction .the resulting equations with feedback are where we analyse the stability of the periodic orbits close to the heteroclinic cycle in a similar manner to the methods used without feedback .we assume we are close to the resonant bifurcation , that is , , so that the periodic orbit lies close to the heteroclinic cycle , and consider the flow close to the periodic orbit .the linearised equations close to the equilibrium are given by : where the are the components of the feedback gain matrix .as before we neglect the equation since the feedback only acts in directions tangent to the plane containing the periodic orbit , we assume that when trajectories are close enough to the periodic orbit the dynamics in the radial direction are unaffected .that is , near , the direction will be contracting and so not affect the stability of the orbit . in section [ sec :just ] we show numerical results which support this assumption .recall that the periodic orbits we are attempting to stabilise are spatiotemporally symmetric under the action of .we make use of this in the following . at each equilibrium , we define a contracting direction , and an expanding direction . at , the contracting direction is the direction , and the expanding direction is the direction . unlike in the case without feedback , we can not solve the linear equations explicitly , and so we make the following approximations .let be the periodic orbit for the original system ( in the logarithmic coordinates ) .then is still a solution of the system with feedback .consider solving the delay differential equation for the system with feedback for a trajectory which starts close to .that is , for , is close to .then for , the feedback terms in the delay differential equation will be small , that is , the equations will only be a small perturbation from the original system . 
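As a point of reference for the feedback analysis, the scaling of the no-feedback return map can be checked in a few lines. The sketch below assumes the standard one-dimensional form x_{i+1} = A x_i^rho with a placeholder global-map coefficient A and identifies the distance from resonance with rho - 1; both are illustrative assumptions rather than the exact expressions derived in the paper.

```python
import numpy as np

# Return map x_{i+1} = A * x_i**rho in the logarithmic coordinate u = -ln(x) > 0:
#   u_{i+1} = rho * u_i - ln(A),   fixed point u* = ln(A) / (rho - 1)  (A > 1 here).
# rho = (contraction rate)/(expansion rate); rho = 1 is the resonance.
A = 2.0   # placeholder global-map coefficient

for mu in (0.1, 0.05, 0.01, 0.005):
    rho = 1.0 + mu                       # just on the side where the cycle is stable
    u_star = np.log(A) / (rho - 1.0)     # fixed point = the long-period orbit
    # The orbit period is ~ u_star / (expansion rate), so it grows like 1/mu,
    # in contrast with the ln(1/mu) growth at a generic homoclinic bifurcation.
    # Its multiplier in the map is rho > 1, i.e. the branch is unstable
    # (subcritical), which is what the delayed feedback is meant to cure.
    print(f"mu = {mu:6.3f}   u* = {u_star:8.2f}   multiplier = {rho:5.3f}")
```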
by continuity , the solution for will also be close to .set as the trajectory intersects the plane ( , for , ) on the time the trajectory passes close to an equilibrium , . with no feedback, the local part of the trajectory can be written down exactly .the expanding and contracting components , and satisfy : for , where is the length of time spent near the equilibrium ( i.e. in the small box ) and is the expanding coordinate of the trajectory as it intersects the plane . for the system with feedback, we can not explicitly solve the linearised equations . given the argument above, we assume that we start sufficiently close to the periodic orbit that solutions are only a small perturbation away from those for the case with no feedback .that is , we write , for , where and are functions which satisfy and if the trajectory is exactly the periodic orbit , .we will use this assumed form of the local flow together with equations and and the global flow as before to derive a new return map .this gives recurrence relations for , and the functions and .figure [ fig : lm ] shows a schematic of the local flow past an equilibrium , in the original coordinates .we again ignore the time the trajectory spends near the heteroclinic connections but away from the equilibria , so for , the flow is given by using the symmetry , we can rewrite the linear equations and as where the delayed terms are the corresponding coordinates near the previous equilibrium , that is the time of flight of the trajectory between the planes and , , will not be equal to the delay time except when the trajectory is exactly on the periodic orbit ( see figure [ fig : lm ] ) . in order to find the coordinates at , we assume the flow given by and is also valid for $ ] . the coordinates of the trajectory at therefore : writing , where ( since we are close to the periodic orbit ) and expanding and about zero gives since .substituting into and gives : we also have that for , substituting , , and into equations and we find : + \\\gamma_{23}[\delta_{i-1}(-\mu+g_{i-1}'(0))+g_{i-1}(t)-g_i(t ) ] , \end{gathered}\ ] ] + \\ \gamma_{33}[\delta_{i-1}(-\mu+g_{i-1}'(0))+g_{i-1}(t)-g_i(t ) ] .\end{gathered}\ ] ] these expressions are true for all , so we set to simplify and find : + \gamma_{23}[\delta_{i-1}(-\mu+g_{i-1}'(0 ) ) ] , \\ g_i'(0)=&\gamma_{32}[y_e^{i-1}-y_e^i+\delta_{i-1}(\lambda+f_{i-1}'(0 ) ) ] + \gamma_{33}[\delta_{i-1}(-\mu+g_{i-1}'(0 ) ) ] , \end{aligned}\ ] ] that is , a recurrence relation for and if the and are known .we write , and to further simplify : we next find an expression for which we use to find a recurrence relation for the .recall that , so from we have in order to be able to get tractable results in what follows , we need to invert the above equation for . motivated by numerical results , which we give in section [ sec : just ] , we make the following assumption : that is , that is approximately a linear function of . 
using this gives us so where .we make a similar assumption on the , that is , , and then use to find : which is an expression for the local map .we assume that the global map is of the same form as the case without feedback when we are close enough to the periodic orbit , and hence find a return map for the : substituting equation into equations and results in a third order recurrence system : note that when , the recurrence relation reduces to that for the system with no feedback , as expected .this system of three recurrence relations has a fixed point at which corresponds to the periodic orbit in the flow .the stability of the fixed point in the recurrence relation will correspond to the stability of the periodic orbit in the flow .the jacobian matrix of at this fixed point is : & \gamma_{22}\hat{x } \\-\delta(\gamma_{32}-\gamma_{33 } ) & \hat{x}[\gamma_{32}+\delta(\gamma_{32}-\gamma_{33 } ) ] & \gamma_{32}\hat{x } \end{pmatrix}\ ] ] where . the characteristic equation of is where the fixed point will be unstable if has any solutions with , so curves with define stability boundaries of the periodic orbit .recall that and is a function of just two parameters , and .we consider the stability of the periodic orbit as the parameters and are varied .we split our investigation of the stability boundaries into three cases .we introduce the bifurcation parameter . without feedback , the heteroclinic cycle is stable in and the periodic orbits exist and are unstable in .we consider analytically the limits of the stability boundary curves as .the boundaries can actually be computed exactly ( although the algebra is rather nasty ) , since the eigenvalues are the roots of a cubic .we plot the boundaries for specific parameter values in figure [ fig : stab_anal ] . in section [ sec : num ] we compute the stability of the periodic orbit in the original system , numerically using the continuation package dde - biftool .a stability boundary with corresponds to a steady state bifurcation of the periodic orbit .this occurs when , that is it can easily be computed that in the limit , using , we find it is also simple to calculate that in the limit the eigenvalue which goes through as this curve is crossed is greater than if and less than if .a stability boundary with will correspond to a period - doubling bifurcation of the periodic orbit .these curves will have , that is , again , we can compute the coefficients and in the same limit as above , we find so there are two solutions for some function of , and .the direction of the bifurcation as these lines are crossed in this case depends on . for , , it can easily be computed that we must have .the computations in this case are messier , so we omit them , and give the resulting curve in the limit , the direction of the bifurcation again will depend on .the curves , and describe the limiting cases of the stability boundaries of the periodic orbit as the point is approached .since the characteristic polynomial is cubic , it can be solved for any values of and . in figure[ fig : stab_anal ] we plot the solutions of for a specific set of parameter values . 
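Once the characteristic cubic of the Jacobian is known, the stability boundaries can be traced numerically by scanning the parameter plane and checking where the largest root crosses the unit circle. Because the actual coefficient expressions are not reproduced in the extracted text above, the coefficient function below is a placeholder standing in for the true parameter dependence; only the scanning procedure itself is the point of the sketch.

```python
import numpy as np

def char_poly_coeffs(mu, beta):
    # p(z) = z**3 + c2*z**2 + c1*z + c0 ; placeholder dependence on (mu, beta)
    return [1.0, -(1.0 + mu) + beta, beta * (1.0 + mu), -0.5 * beta * mu]

def spectral_radius(mu, beta):
    return max(abs(np.roots(char_poly_coeffs(mu, beta))))

mus   = np.linspace(0.01, 0.2, 40)
betas = np.linspace(-1.0, 1.0, 80)
stable = np.array([[spectral_radius(m, b) < 1.0 for b in betas] for m in mus])
print("fraction of scanned (mu, beta) plane with a stable orbit:", stable.mean())
```

Boundaries where a real root crosses +1 correspond to steady-state bifurcations, and crossings at -1 to period doublings, exactly as classified in the text.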
in this caseonly the curves corresponding to and are stability boundaries .the lower boundary is the quadratic curve for and the left hand boundary is a straight line corresponding to .the remaining curves with do not form stability boundaries in this case because the periodic orbit is already unstable in the regions in which they exist .we can see that for these parameter values , the periodic orbit is stable for a wide range of parameters , and specifically , can be stabilised arbitrarily close to the heteroclinic cycle , that is , for arbitrarily large period .that is , for any we can find a for which the periodic orbit is stable .in fact , for this particular case , we see that as gets smaller , in order for the orbit to be stable , we have to choose , the gain parameter , to be increasingly small .this seems a rather surprising result - that as the period of the targeted orbit increases , the amplitude of the gain parameter tends towards zero .we note that the recurrence relations have a second solution , , , , which corresponds to the heteroclinic cycle .we can consider the stability of this solution by considering the solutions to in the limit . in this limit , , and .the cubic equation therefore has one solution with and two solutions with .therefore this fixed point is always unstable in the recurrence relation , and so the heteroclinic cycle is always unstable in the flow .we use the matlab package dde - biftool to numerically analyse the stability of periodic orbits in the system .the delay time was set equal to the period of the bifurcating periodic orbits ( and so is a function of ) , and was calculated numerically from the system with no feedback .parameters used were the same as those used to produce figure [ fig : stab_anal ] .figure [ fig : stab1 ] shows a contour plot of the amplitude of the largest floquet multiplier as the parameters and are varied .the periodic orbit is stable when all floquet multipliers have amplitude less than , and this region is indicated by the shading in figure [ fig : stab1 ] . comparison with figure [ fig : stab_anal ] , showing the stability as calculated analytically , shows a very good agreement between the location of the stability boundaries .the shapes of the boundaries also agrees , that is , the left hand boundary is a straight line , whereas the lower boundary is part of a parabola .the nature of the bifurcations that occur as the boundaries are crossed also agrees with the analytical result .that is , the left hand boundary is a period - doubling bifurcation , with a critical floquet multiplier equal to , and the lower boundary is a steady state bifurcation with a critical floquet multiplier of .forward integration of the equations also confirms the stability results . in figure [fig : int ] we show results from such an integration .we also show the derivative of the coordinates and the feedback terms . it can be seen that as the periodic orbit is approached , the derivative of the expanding coordinate tends to ( in this case , ) , and the feedback terms tend towards zero . in section [ sec : anal ]we make a number of assumptions regarding the form of solutions to the delay differential equations .firstly , we assume that the ` radial ' direction does not affect the stability of the periodic orbits , and so we neglect this coordinate in our construction of a poincar map . 
secondly , that trajectories starting near the periodic orbits will be only small perturbations from the form of solutions to the original equations without feedback .thirdly , we make the assumption that , and . here, we address each assumption in turn and show that our numerical results support these assumptions .figure [ fig : intdiff1 ] shows the feedback terms in a forward integration of equations as the periodic orbit is approached .it can be seen here that the feedback terms corresponding to the radial direction are much smaller than the other feedback terms on this scale they can not be distinguished from zero .hence the affect of the feedback on the radial direction is negligible and this assumption is justified . regarding the second assumption ,it can be clearly seen in figure [ fig : intdiff1 ] that the feedback terms decay to zero as the periodic orbit is approached . however , this is to be expected in the case that the periodic orbit is stable . in figure[ fig : int_un ] we show the results of an integration in which the periodic orbit is unstable .it can be seen from the time series in [ fig : unstable1 ] that the trajectories still remain approximately of the form of the periodic orbit even though the trajectory is moving away .figure [ fig : unstable2 ] shows the feedback terms , a measure of how close the trajectory is to the periodic orbit .although they are increasing in magnitude , they do so in the same manner one would expect for an unstable periodic orbit in ordinary differential equations .that is , by starting trajectories close enough to the periodic orbit , the feedback magnitude can be bounded above for arbitrarily long time .the third assumption is that that , and .note that in the recurrence relations , the terms in only appear in the combination ( similarly with in the combination ) .therefore , we only need to show that the difference between and is much smaller than to justify our assumption ( and similar for the ) . for the integration we perform for figure [ fig : int1 ]we compute the values of , and ( and the corresponding values for ) on each pass the trajectory makes past an equilibrium .we plot these values in figure [ fig : fip ] .it can be seen that the difference between and ( and between and ) is clearly very small , and is much less than ( ) for this example .we have shown that a time - delayed feedback control mechanism similar to that first introduced by pyragas can be used to stabilise periodic orbits of arbitrarily large period , specifically those resulting from a resonant bifurcation from a heteroclinic cycle .our analytical results are based on a analysis of the stabilisation of orbits near the guckenheimer holmes cycle .these results are asymptotic , that is , they are correct in the limit of the periodic orbit being close to the heteroclinic cycle . however , in comparison with numerical results ( which conversely , are much harder to obtain when the orbit is close to the cycle due to the long period of the orbit ) , the results actually agree for some large(ish ) range of parameters away from the bifurcation point. 
it should also be possible to extend this analysis so that it applies to resonant bifurcations from higher dimensional heteroclinic cycles .however , care may need to be taken with the transverse eigenvalues .as the resonant bifurcation is approached , the period of the bifurcating periodic orbit grows like , where is the bifurcation parameter .this is in contrast to the homoclinic bifurcation , in which case the bifurcating periodic orbit has a period which grows like .this difference in scaling between the growth rate of the periods of the orbits indicates that the results of adding similar time - delayed feedback near a subcritical homoclinic bifurcation may be quite different to the results given here .work on this problem is ongoing .the author would like to thank mary silber for many useful discussions regarding this work , and david barton for assistance using dde - biftool .two anonymous referees also provided some helpful comments .this work was supported in part by grant nsf - dms-0709232 .d. j. gauthier , d. w. sukow , h. m. concannon and j. e. s. socolar , stabilizing unstable periodic orbits in a fast diode resonator using continuous time - delay autosynchronization , _ phys .e _ , * 50 * ( 1994 ) , 2343 .t. fukuyama , h. shirahama and y. kawai , dynamical control of the chaotic state of the current - driven ion acoustic instability in a laboratory plasma using delayed feedback , _ physics of plasmas _ , * 9 * ( 2002 ) , 4525 . k. engelborghs , t. luzyanina , g. samaey , dde - biftool v. 2.00 : a matlab package for bifurcation analysis of delay differential equations , technical report tw-330 , department of computer science , k. u. leuven , leuven , belgium , 2001 .
The Pyragas method of feedback control has attracted much interest as a method of stabilising unstable periodic orbits in a number of situations. We show that a time-delayed feedback control similar to the Pyragas method can be used to stabilise periodic orbits with arbitrarily large period, specifically those resulting from a resonant bifurcation of a heteroclinic cycle. Our analysis reduces the infinite-dimensional delay equation governing the system with feedback to a three-dimensional map, by making certain assumptions about the form of the solutions. The stability of a fixed point in this map corresponds to the stability of the periodic orbit in the flow, and can be computed analytically. We compare the analytic results to a numerical example and find very good agreement.

Keywords: feedback control, heteroclinic cycle, delay equation. MSC: 37C27, 37C29.
the resourcesync collaboration between the national information standards organization ( niso ) and the open archives initiative ( oai ) focuses on designing an approach for the synchronization of web resources . asresources constantly change ( being created , updated , deleted , and moved ) applications that leverage them would benefit from a standardized synchronization framework aligned with the web architecture . the resourcesync specification ( in beta phase at the time of writing ) fulfills the needs of different communities and is , amongst others , targeted at cultural heritage portals such as europeana , repositories of scientific articles such as arxiv , and linked data applications such as dbpedia live . in the frameworkwe refer to a * source * as a server that hosts resources subject to synchronization and to a * destination * as a system that retrieves those resources to synchronize itself with the source .different use cases imply distinct characteristics . from the source sperspective the resource volume and the resource change frequency are most relevant , whereas synchronization latency and accuracy requirements are essential considerations for destinations .the resourcesync framework therefore offers multiple modular capabilities that a source can selectively implement to address specific synchronization needs . for the purpose of this paperwe discuss only two of the capabilities .an extensive discussion about the theoretical background of the framework can be found in van de sompel et al . and we refer to the specification document for a detailed description of all capabilities .the here described capabilities are the * resource list * and the * change list*. the resource list , as the name implies , is a list of resources and their descriptions that the source makes available for synchronization . the resource list presents a snapshot of the source s resources at one particular point in time and a source can publish a resource list recurrently e.g. , once a week or once a month .the change list is a list that provides information about changes to the source s resources .depending on the publication and update frequency of the change list , this capability can help decrease synchronization latency and reduce communication overhead .it is up to the source to determine the publication frequency as well as the temporal interval that is covered by a change list .it may , for example , describe all resource changes of one day , or one hour , or may simply contain a fixed number of changes , regardless of how long it takes to accumulate them . both resource list and change list serve the following purposes : * synchronization :allow destinations to obtain current resources ; requires the resources uri .* audit : allow destinations to verify the accuracy of their synchronized content ; requires the resources last modification date and fixity information . *link : allow sources to express alternate ways for destinations to retrieve content ; requires inclusion of links .one example for the inclusion of such links is a source providing a pointer to a mirror location . in this casethe source prefers destinations to obtain the resource from that specified location and not from the original uri , with the intention to reduce the load on the source .another example is a source providing a pointer to partial content , meaning only the part of the resource that has actually changed .a destination can obtain this information and use it to patch its local copy of the resource . 
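To make the destination side of these two capabilities concrete, here is a minimal sketch of a client that applies a change list to a local store. The namespace URI and element names follow the listings given later in this paper, and the example URI is hypothetical; a real destination would also process resource lists, handle deletions and the link relations discussed below, and add error recovery.
....
import urllib.request
import xml.etree.ElementTree as ET

# Namespaces as used in the listings below; treat the rs URI as an assumption.
NS = {
    "sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
    "rs": "http://www.openarchives.org/rs/terms/",
}

def fetch_xml(uri):
    """Retrieve and parse an XML capability document from the source."""
    with urllib.request.urlopen(uri) as resp:
        return ET.fromstring(resp.read())

def sync_from_change_list(change_list_uri, store):
    """Apply a change list to a local store (a dict mapping uri -> bytes).

    Only 'created'/'updated' changes are handled in this sketch; deletions
    and links to mirrors or patches are ignored.
    """
    root = fetch_xml(change_list_uri)
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        md = url.find("rs:md", NS)
        change = md.get("change") if md is not None else "updated"
        if change in ("created", "updated"):
            with urllib.request.urlopen(loc) as resp:
                store[loc] = resp.read()
    return store

# Hypothetical usage against an example source:
# store = sync_from_change_list("http://example.com/changelist.xml", {})
....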
for consistency and to minimize the barrier of adoption , it is desirable to implement all capabilities based on a single document format instead of using multiple formats within one framework . sitemaps serve the purpose of advertising a server s resources and support search engines in their discovery . in this sensea sitemap is fairly similar to a resource list , which motivated us to investigate the use of the sitemap format for the resourcesync framework .a sitemap is an xml document , which must contain three xml elements : the root element _< urlset > _ ,one _ < url > _ child element per resource , and exactly one _ < loc > _ element as a child of each _< url > _ element that conveys the uri of the resource .the sitemap schema , allows only the _< url > _ element to be a child of the root element and none of the mandatory elements can have attributes .listing [ ls : sm_simple ] shows a simple sitemap . ....xml version="1.0 " encoding="utf-8 " ?> < urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " > < url > < loc > http://example.com / res1</loc > < lastmod>2013 - 02 - 01t13:00:00z</lastmod > </url > < /urlset > .... the _< loc > _ element can be utilized in the resourcesync framework to convey the uri of the resource that is subject to synchronization .its last modification time can be provided with the sitemap - native optional _< lastmod > _ child element of the _< url > _ element .in order to use sitemaps for all resourcesync capabilities , extensions to the sitemap format are required on two levels : at the _ resource level _ , where child elements of an _< url > _ element need to be included that provide : * audit : metadata about the resource such as the nature of the change , the resource s size , its content - based hash value , mime - type etc . , and * link : links to related resources such as mirror locations or partial content . at the _ root level _ , where child elements of the _ < urlset > _element need to be included that provide : * audit : the document s last modification time , * link : links for navigational support for destinations and to convey information about the source , and * an indication of the capability implemented by a specific sitemap document since all capability documents have the same format . in general , we wanted to avoid google consuming resourcesync documents in an unintended way because it does not understand the extensions , yet keep them valid from the perspective of the sitemap xml schema and keep the door wide open for when google understands and eventually consumes them .these intentions led to the following concerns : + * concern 1 : * it was unclear how google would act upon the inclusion of additional child elements to the _< url > _ element .since the sitemap schema allows for external child elements as long as they are properly declared in their namespace , we did not anticipate major issues but still had to convince ourselves that our extensions were compliant . + * concern 2 : * how google responds to the inclusion of links , as children of the _ < url > _ element was unclear . if google , for example , indexes both the uri provided in the _< loc > _ element and the uri provided in the link to a mirror location , it would go against the source s intention to reduce its load . 
also , it is meaningless if google indexes the uri of the partial content as it is not helpful without the full resource pointed to in the _< loc > _ element .these would be unintended consequences of the inclusion of links .+ * concern 3 : * since the schema does not allow for child elements to the _< urlset > _ root element other than _ < url > _ , the concern was that google would reject the resourcesync capability documents .the behavior towards included link elements on this level was unclear too .we conducted a series of informal experiments to determine how google , as a major search engine , responds to resourcesync enhanced sitemaps .we submitted sitemaps with varying degrees of modification to google s webmaster tool , analyzed its immediately returned parsing report , and observed the effects of our sitemaps to their index .the goal of this experiment was to test the addition of metadata elements to each _< url > _ child element of the _< urlset > _ root element . to convey the type of change the resource underwent , we tested the addition of an _ rs : change _ attribute to the _< lastmod > _ element . for additional metadata , we tested new elements in the resourcesync namespace such as _ < rs : size > _ , _ < rs : fixity > _ , and _< rs : mimetype > _ to convey the resource s size , content - based hash value , and mime - type , respectively .listing [ ls : change_list ] shows a sitemap - based change list that we tested against google . ....xml version="1.0 " encoding="utf-8 " ?> < urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " xmlns : rs="http://www.openarchives.org / rs / terms/ " > < url > < loc > http://example.com / res1</loc > < lastmod rs : change="updated " > 2013 - 01 - 02t13:00:00z < /lastmod> < rs : size>6230</rs : size > < rs : fixity type="md5 " > a2f94c567f9b370c43fb1188f1f46330 < /rs : fixity > < rs : mimetype > text / html</rs : mimetype > < /url> < /urlset > .... all tested child elements from the resourcesync namespace were tolerated and the _ rs : change _ attribute , even though in violation to the sitemap schema , was ignored . however , even though this approach proved feasible , we decided against the addition of multiple child elements and in favor of just one additional child element with multiple attributes .we named the child element _< rs : md > _ and the possible attributes to describe a resource in a change list are _ change _ , _ length _ , _ hash _ , and _ type _ conveying the same metadata as above .this approach has two main advantages .first , there is only one added child element that needs to be defined in the resourcesync namespace and secondly , its attributes are defined in the atom syndication format and the atom link extension internet draft .their semantics are inherited in the resourcesync framework . to provide links to related resources we tested the _< rs : link > _ element from the resourcesync namespace with the uri being conveyed in its _ href _ attribute as seen in listing [ ls : change_list_link ] . to provide a mirror location, the link has the relation type _ duplicate _( defined in rfc6249 ) and for partial content ( for example json patch ) a patch - specific relation type . 
....xml version="1.0 " encoding="utf-8 " ?> < urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " xmlns : rs="http://www.openarchives.org / rs / terms/ " > < url > < loc rel="nofollow" > http://example.com / res1</loc > < lastmod>2013 - 01 - 02t13:00:00z</lastmod > < rs : link rel="duplicate " href="http://mirror.example.com / res1"/ > < rs : link rel="http://www.openarchives.org / rs / terms / patch " href="http://example.com / res1-json - patch " type="application / json - patch"/ > < /url/urlset > .... google did not return an error but we did observe unintended consequences ( concern 2 ) with this approach as we found both linked resources ( http://mirror.example.com/res1 and http://example.com/res1-json-patch ) indexed .our informal tests indicate that google parses sitemaps aggressively and indexes uris it discovers . for a resource synchronization frameworkthis can be a real detriment because resources in a resource list or change list are subject to synchronization but they may not be meant for indexing by search engines . to address this concern , we tested the _rel=``nofollow '' _ attribute in the _< loc > _ child element as well as in the _< rs : link > _ child elements with the goal of preventing google from indexing the referenced resource . however , the attribute was ignored in either child element .it did not cause any warnings or errors but it also did not prevent google from indexing the resource . we were able to improve on this situation by renaming the child element to something different than _< link > _ and include the uri as its content rather then the value of its _ href _ attribute .however , we adopted the former approach because expressing a link without using the _ href _ attribute is counterintuitive .the resulting approach provides no guarantees that google will not index the uris provided in links .therefore , we additionally introduced an approach that separates discovery of resourcesync capability documents from discovery of regular sitemaps .we define a * capability list * as a document that lists links to all capability documents offered by a source . unlike a sitemap , which is usually discovered via the _ robots.txt _file , the capability list is discovered via the well - known uri _./well - known / resourcesync _ , as defined in the resourcesync specification .this distinct discovery is a best effort approach to implement a separation of concerns but there is no guarantee that google does not discover the well - known uri and follow the links to the resourcesync capability documents .we would be happy to see search engines such as google adopting the resourcesync format but as long as they do not understand how to interpret the content of the capability documents , the source might be better off not to advertise them in the robots.txt .an interesting aspect of the parsing of the change list shown in listing [ ls : change_list_link ] was that google returns a warning that it expects link elements ( as well as `` meta '' elements ) to be in the xhtml namespace .none of the above results changed when using the _ < xhtml : link > _ child element from the xhtml namespace and so , to remain within the resourcesync namespace , we renamed the link element to _rs : ln>_. 
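On the source side, the resulting per-resource markup is easy to emit with a standard XML library. The sketch below builds one url block with the rs:md metadata child and optional rs:ln links, mirroring the listings above; the element and attribute names are taken from those listings, while the sample values are the illustrative ones used throughout this paper.
....
import xml.etree.ElementTree as ET

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"
RS = "http://www.openarchives.org/rs/terms/"
ET.register_namespace("", SM)
ET.register_namespace("rs", RS)

def url_entry(loc, lastmod, change, length, hash_, mime, links=()):
    """Build one <url> block with an <rs:md> metadata child and optional
    <rs:ln> link children, following the listings in this paper."""
    url = ET.Element(f"{{{SM}}}url")
    ET.SubElement(url, f"{{{SM}}}loc").text = loc
    ET.SubElement(url, f"{{{SM}}}lastmod").text = lastmod
    ET.SubElement(url, f"{{{RS}}}md", {
        "change": change, "length": str(length),
        "type": mime, "hash": hash_,
    })
    for rel, href in links:
        ET.SubElement(url, f"{{{RS}}}ln", {"rel": rel, "href": href})
    return url

urlset = ET.Element(f"{{{SM}}}urlset")
urlset.append(url_entry(
    "http://example.com/res1", "2013-01-02T13:00:00Z",
    change="updated", length=6230, mime="text/html",
    hash_="md5:a2f94c567f9b370c43fb1188f1f46330",
    links=[("duplicate", "http://mirror.example.com/res1")],
))
print(ET.tostring(urlset, encoding="unicode"))
....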
to help destinations distinguish between capability documents and to convey the document s last modification time we tested the insertion of the _ < rs : md > _ child element to the _< urlset > _ root element with two attributes .the attribute identifying the capability document is called _ capability _ ( as defined in the resourcesync specification ) and the document s last modification time is conveyed with the attribute _ modified_. we included the child element into a resource list ( listing [ ls : cap_id ] ) and submitted the document to google .xml version="1.0 " encoding="utf-8 " ?> < urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " xmlns : rs="http://www.openarchives.org / rs / terms/ " > < rs : md capability="resourcelist " modified="2013 - 02 - 03t09:00:00z"/ > < url > < loc > http://example.com / res1</loc > < lastmod>2013 - 02 - 01t13:00:00z</lastmod > </url > < /urlset > .... google did not reject the sitemap , even tough it violated the xml schema .it merely returned a warning that the child element is not recognized .this supports our suspicion that the google does not validate submitted sitemaps against the schema but rather uses a different logic , which we can only speculate about , to evaluate its correctness .two kinds of links at the root level of a resourcesync document are featured in the framework .a navigational link pointing to the capability list to support destinations in discovering all offered capabilities and a link to a document that provides information about the source . ....< ? xml version="1.0 " encoding="utf-8 " ? >< urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " xmlns : rs="http://www.openarchives.org / rs / terms/ " > < rs : ln rel="resourcesync " href="http://example.com / capabilitylist.xml"/ > < rs : ln rel="describedby " href="http://example.com / info - about - source.xml"/ > < url > < loc > http://example.com / res1</loc > < lastmod>2013 - 02 - 01t13:00:00z</lastmod > < /url> < /urlset > .... we tested this idea and included two _< rs : ln > _ child elements from the resourcesync namespace into the submitted sitemap .the link to the capability list has the relation type_ resourcesync _( defined in ) and the informational link has the relation type _ decribedby _ ( as defined in powder ) .listing [ ls : nav_link ] shows the structure of the sitemap used for this experiment .google did not reject the sitemap , even though it contains child elements of the _ < urlset > _ element different than _< url>_. it did return a warning though that the child elements are not recognized .unlike in our experiment with links in a _ < url > _ block , the uris of these links were not indexed .the purpose of this series of experiments was to test our sitemap format extensions and to see how google would respond to them when submitted to their webmaster tool . +* concern 1 * did not materialize .the sitemap schema allows for external elements within the _< url > _ block and hence these extensions are perfectly compliant . 
+ * concern 2 * did materialize as we saw unintended consequences in terms of indexed uris that were provided with link elements .our tests indicate that google is rather aggressive in indexing uris from link elements as they occur in _< url > _ blocks .we approach this situation by isolating the discovery of resourcesync capabilities ( via the resourcesync specific well - known uri ) from regular sitemaps ( via robots.txt ) .the well - known uri refers to a capability list containing pointers to all offered capability documents .+ * concern 3 * did not materialize .even though the schema did not allow for child elements of the _ < urlset > _ root element , google did not reject our syntax .we suspect that google does not validate a submitted sitemaps against the schema but rather uses some unknown logic to evaluate the correctness of the sitemaps .in addition , our conversations with microsoft and google resulted in their adjustment of the sitemap schema to allow for child elements to the root element .this means that the resourcesync enhancements to sitemaps are now fully compliant .uris provided in link elements on this level were not subject to be indexed .listing [ ls : change_list_complete ] shows a change list based on the sitemap format as adopted in the specification . ....xml version="1.0 " encoding="utf-8 " ?> < urlset xmlns="http://www.sitemaps.org / schemas / sitemap/0.9 " xmlns : rs="http://www.openarchives.org / rs / terms/ " > < rs : ln rel="resourcesync " href="http://example.com / capabilitylist.xml"/ > < rs : ln rel="describedby " href="http://example.com / info - about - source.xml"/ > < rs : md capability="changelist " modified="2013 - 02 - 03t09:00:00z"/ > < url > < loc > http://example.com / res1</loc >< lastmod>2013 - 01 - 02t13:00:00z</lastmod > < rs : md change="updated " length="6230 " type="text / html " hash="md5:a2f94c567f9b370c43fb1188f1f46330"/ > < rs : ln rel="duplicate " href="http://mirror.example.com / res1"/ > < rs : ln rel="http://www.openarchives.org / rs / terms / patch " href="http://example.com / res1-json - patch " type="application / json - patch"/ > < /url> < /urlset > .... we also tested the atom syndication format and even introducing a resourcesync - specific document format as alternatives to the sitemap format .our reasoning for the decision in favor of the sitemap format is detailed in our previous work klein et al .we did not run extensive tests with other search engines . while this is subject to future work ,initial tests indicate that microsoft s bing , for example , is even more liberal in accepting our sitemap extensions .the resourcesync specification is the collaborative work of niso and oai .funding is provided by the alfred p. sloan foundation and uk participation is supported by jisc .
the documents used in the resourcesync synchronization framework are based on the widely adopted document format defined by the sitemap protocol. in order to address the requirements of the framework, extensions to the sitemap format were necessary. this short paper describes the concerns we had about introducing such extensions, the tests we did to evaluate their validity, and the aspects of the framework introduced to address them.
one issue with the application of block designs in agricultural field trials is that a treatment assigned to a particular plot typically has effects on the neighboring plots besides the effect on its own plot .see , , , , , , goldringer , brabant and kempton ( ) , clarke , baker and depauw ( ) , and for examples in various backgrounds .interference models have been suggested for the analysis of data in order to avoid systematic bias caused by these neighbor effects .various designs have been proposed by , , filipiak and markiewicz ( ) , , ai , ge and chan ( ) , ai , yu and he ( ) , druilhet and tinssonb ( ) and among others . all of them considered circular designs , where each block has a guard plot at each end so that each plot within the block has two neighbors . to study noncircular designs , investigated the case when the block size , say , is or , which is extended by to , where is the number of treatments .both of them restricted to the subclass of pseudo symmetric designs and the assumption that the within - block covariance matrix is proportional to the identity matrix .this paper provides a unified framework for deriving optimal pseudo symmetric designs for an arbitrary covariance matrix as well as the general setup of and .most importantly , the kushner s type linear equations system is developed as a necessary and sufficient condition for any design to be universally optimal , which is a powerful device for deriving asymmetric designs .moreover , a new approach of finding the optimal sequences are proposed .these results are novel for models with at least two sets of treatment - related nuisance parameters , which are left and right neighbor effects here .they shed light on other similar or more complicated models such as the one in afsarinejad and hedayat ( ) and kunert and stufken ( ) for the study of crossover designs . here, parallel results are also provided for the undirectional interference model where the left and right neighbor effects are equal .it is further established that the efficiency of any given design under the latter model is not less than the one under the former model , for the purpose of estimating the direct treatment effects . throughout the paper ,we consider designs in , the set of all possible block designs with blocks of size and treatments .the response , denoted as , observed from the plot of block is modeled as where .the subscript denotes the treatment assigned in the plot of block by the design .furthermore , is the general mean , is the block effect , is the direct treatment effect of treatment , is the neighbor effect of treatment from the left neighbor , and is the neighbor effect of treatment from the right neighbor . one major objective of design theorists is to find optimal or efficient designs for estimating the direct treatment effects in the model .if is the vector of responses organized block by block , model ( [ eqn:729 ] ) is written in a matrix form of where , , and .the notation means the transpose of a vector or a matrix . here, we have with as the kronecker product , and represents a vector of ones with length .also , , and represent the design matrices for the direct , left neighbor and right neighbor effects , respectively .we assume there is no guard plots , that is , . then we have and , where with the indicator function . here, we merely assume , with being an arbitrary positive definite symmetric matrix . 
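Since the displayed equations are not reproduced here, the following is a hedged reconstruction of the interference model in the notation that is standard in this literature; the subscript conventions are an assumption and should be checked against the original display.
\begin{equation}
  y_{ij} \;=\; \mu + \beta_j + \tau_{d(i,j)} + \lambda_{d(i-1,j)}
             + \rho_{d(i+1,j)} + \varepsilon_{ij},
  \qquad 1 \le i \le k,\ 1 \le j \le b,
\end{equation}
where $\lambda_{d(0,j)}$ and $\rho_{d(k+1,j)}$ are taken to be zero because there are no guard plots, and the within-block errors $(\varepsilon_{1j},\dots,\varepsilon_{kj})^{\top}$ have an arbitrary positive definite covariance matrix $\Sigma$.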
given a matrix ,say , we define the projection .the information matrix for the direct treatment effect is where is the matrix such that . by direct calculations, we have where with , , and with .it is obvious that .for the special case of , we have the simplification of , and the latter is denoted by . pointed out that when is of type- , that is , with and , we have hence , the choices of designs agree with that for .this special case will be particularly dealt with in section [ sec5 ] .we allow to be an arbitrary covariance matrix throughout the rest of the paper .note that a design in could be considered as a result of selecting elements from the set , , of all possible block sequences with replacement . for sequence , define the sequence proportion , where is the number of replications of in the design .a design is determined by , which is in turn determined by the _ measure _ for any fixed . for ,define to be when the design consists of the single sequence , and let .then we have .similarly , . note that is a schur s complement of , for which we also have .it is obvious that , where .in approximate design theory , we try to find the optimal measure among the set to maximize for a given function satisfying the following three conditions [ ] : is concave . for any permutation matrix . is nondecreasing in the scalar . a measure which achieves the maximum of among for any satisfying ( c.1)(c.3 )is said to be _ universally optimal_. such measure is optimal under criteria of , , , , etc .the rest of the paper is organized as follows .section [ sec2 ] provides some preliminary results as well as a necessary and sufficient condition for a pseudo symmetric measure to be universally optimal among .the latter is critical for deriving the optimal sequence proportions through an algorithm .section [ sec3 ] provides a linear equations system of , as a necessary and sufficient condition for a measure to be universally optimal .section [ sec4 ] provides similar results for the model with .further , it is shown that the efficiency of any design under the latter model would be at least its efficiency under model ( [ eqn:728 ] ) .also , an alternative approach is given to derive the optimal sequence proportions .section [ sec5 ] derives theoretical results regarding feasible sequences when is of type- .section [ sec6 ] provides some examples of optimal or efficient designs for various combinations of and .let be the set of all permutations on symbols . for permutation and sequence with and , we define . for measure , we define .a measure is said to be _ symmetric _ if for all . for sequence ,denote by the _ symmetric block _ generated by .such symmetric blocks are also called equivalent classes by , due to the fact that symmetric blocks generated by two different sequences are either identical or mutually disjoint .now let be the total number of distinct symmetric blocks which partition .without loss of generality , suppose these symmetric blocks are generated by sequences , .then we have . 
for a symmetric measure, we have where and is the cardinality of .the linearity of , conditions ( c.1)(c.3 ) and properties of schur s complement together yield the following lemma .[ lemma:0223 ] for any measure , say , there exists a symmetric measure , say , such that for any satisfying .define a measure to be _ pseudo symmetric _ if are all completely symmetric .it is easy to verify that a symmetric measure is also pseudo symmetric .the difference is that ( [ eqn:0325 ] ) does not has to hold for a general pseudo symmetric measure .lemma [ lemma:0223 ] indicates that an optimal measure in the subclass of ( pseudo ) symmetric measures is automatically optimal among . for a pseudo symmetric measure, we have , , where . hence , where and .now we show that both and are positive definite for any measure , and hence is positive definite for any pseudo symmetric measure .the latter is the key to prove theorem [ thm:0325 ] .[ lemma323 ] is positive definite for any measure .it is sufficient to show the nonsingularity of for all .suppose is singular , there exists a nonzero vector such that since is a nonnegative definite matrix , we have which in turn yields equation ( [ eqn:323 ] ) is only possible when each column of consists of identical entries , that is , the rows of are identical . in the sequel , we investigate the possibility of ( [ eqn:323 ] ) for sequence .define to be a zero one vector of length with only its entry as one , then the first , second and last rows of are given by , and , respectively .now we continue the discussion in the following four cases .( i ) if , the equality of the first two rows of indicates , which is impossible since the left - hand side is a vector of integers and the right - hand side is a vector of fractional numbers .( ii ) if and , the first and the last rows of can not be the same . (iii ) if , and , the equality of the first and the last rows of necessities , which together with the equality of the first two rows of indicates , which is again impossible .( iv ) if , and , by looking at the and entries of the first and last rows of , ( [ eqn:323 ] ) necessities and which is impossible by simple algebra .[ lemma325 ] is positive definite for any measure . since has column and row sums as zero .we have where means the entry in . for vector ,define .for any nonzero , we have in view of the fact that , and the rank of is .hence , the lemma is concluded .[ lemma:0205 ] for a pseudo symmetric measure , say , we have , where with . in proving lemma [ lemma:0205 ] , we used the equations , .note that is the as defined in .lemma [ lemma323 ] shows that only case ( i ) of the four cases proposed by them is possible .hence the generalized inverse in is now replaced by in ( [ eqn:1109 ] ) . by applying lemmas [ lemma:0223 ] and [ lemma:0205 ] , we derive the following proposition .[ prop:323 ] let .a measure is universally optimal if it is a pseudo symmetric measure with , if and only if .let and .by lemma [ lemma323 ] we have , where means the determinant of a square matrix . for measure , we call the set the _ support _ of . one can identify universally optimal pseudo symmetric measures based on the following theorem .see for an algorithm based on a similar theorem .[ thm:323 ] a pseudo symmetric measure , say , is universally optimal if and only if and =1.\ ] ] moreover , each sequence in reaches the maximum in ( [ eqn:3232 ] ) .if , we have , which means that such design has no information regarding , and hence can be readily excluded from the consideration . 
in the sequel, we restrict the discussion to the case of . by lemmas[ lemma:0223 ] , [ lemma323 ] and [ lemma:0205 ] , a pseudo symmetric measure , say , is universally optimal if and only if it achieves the maximum of , which is equivalent to -\varphi(\xi)}{\delta}\leq0,\ ] ] for any measure .it is well known that the same result holds for except that should be replaced by . by applying ( [ eqn:8013 ] ) to ( [ eqn:8014 ] ), we have in ( [ eqn:8015 ] ) , by setting to be a degenerated measure which puts all its mass on a single sequence , we derive by taking , we have the equal sign for ( [ eqn:8015 ] ) . also observe that conditioning on fixed , the left - hand side of ( [ eqn:8015 ] ) is a linear function of the proportions in .thus , we have hence , the theorem follows .for sequence and vector , define the quadratic function . for measure , define .one can verify that . since is strictly convex for all in view of lemma [ lemma323 ] ,thus is also strictly convex .let be the unique point in which achieves minimum of and define .recall and , now we derive theorem [ thm:325 ] below which is important for proving theorem [ thm:0325 ] and results in section [ sec4 ] .[ thm:325 ] . implies . implies .first , we have then ( i ) is proved if we can show . to see the latter ,define . if contains a single sequence , say , let be the measure with , then we have .hence , . if contains more than one sequences , let be the gradient of evaluated at point and define to be the convex hull of .we claim , since otherwise we could find a vector so that for all , which would indicate that is not the minimum point of , and hence the contradiction is reached .note that indicates there exists a measure , say , such that and , which yields and hence .( i ) is thus proved .observe that the minimum of is achieved at the unique point .if , we have and hence the contradiction is reached .( ii ) is thus concluded . for ( iii ) ,if there is a sequence , say , with and , we have , and hence the contradiction is reached .[ thm:0325 ] a measure is universally optimal among if and only if &=&y^*b_t/(t-1),\label{eqn:03252 } \\ \sum_{s\in{\cal t}}p_s\bigl[e_{s 10}+e_{s 11 } \bigl(x^*\otimes b_t\bigr)\bigr]&=&0,\label { eqn:03253 } \\\sum_{s\notin{\cal t}}p_s&=&0.\label{eqn:03254}\end{aligned}\ ] ] note that ( [ eqn:03252])([eqn:03254 ] ) is equivalent to necessity . by proposition [ prop:323 ], there exists a symmetric measure , say , which is universally optimal .further , we have .. then we have , which indicates .the latter combined with proposition [ prop:323 ] yields .hence , by similar arguments as in , we have where means the moore penrose generalized inverse .since is a symmetric measure , we have . by lemmas [ lemma323 ] , [ lemma325 ] and the orthogonality between and , we obtain >0 ] , hence we have .now ( [ eqn:04134 ] ) follows in view of ( [ eqn:912 ] ) and .[ cor:0418 ] a measure with , and being completely symmetric is universally optimal under model ( [ eqn:0413 ] ) if and only if when is persymmetric , a pseudo symmetric dual measure is universally optimal under model ( [ eqn:728 ] ) if and only if ( [ eqn:0424 ] ) and ( [ eqn:04242 ] ) holds .since is a univariate function , one can use the kushner s ( ) method to find and with the computational complexity of , where is the total number of symmetric blocks . if we have to deal with multivariate functions such as ( e.g. 
, when is not persymmetric and the side effects are directional ) , the computation of and is more involved but manageable .see for an example where is -dimensional .alternatively , one can build an efficient algorithm ( see the ) based on ( [ eqn:3232 ] ) to derive the optimal measure , which further induces and .by restricting to the type- covariance matrix , we derive theoretical results regarding for .note that the cases of and have been studied by and .two special cases of type- covariance matrix are the identity matrix and a completely symmetric matrix .[ thm:0426 ] assume to be of type- .if , we have where and are the integers satisfying and .if , we have },\label{eqn:04245 } \\ y^*&=&k-1-\frac{2}{k}-\frac{1}{2k[k(k-3)+1/t ] } , \\ { \cal t}&=&\langle s_0\rangle\cup\bigl\langle s_0 ' \bigr\rangle,\label{eqn:04246}\end{aligned}\ ] ] where and is its dual sequence .moreover , a measure maximizes if and only if .due to ( [ eqn:05233 ] ) , here we assume throughout the proof without loss of generality . for sequence ,define the quantities , , , . by direct calculations , we have \\ & & { } -2(2\chi_s-2f_{s , t_1}-2f_{s , t_k}+ \i_{t_1=t_k})/k .\nonumber\end{aligned}\ ] ] \(i ) follows by the same approach as in theorem 1.a of kushner ( ) with only more tedious arguments based on ( [ eqn:0523])([eqn:05232 ] ) .now we focus on .first , we have , and , and hence , and .it can be verified that reaches its minimum at . since , it is sufficient to show for the purpose of proving ( ii ) .we first restrict the consideration to the subset .if we only exchange the treatments in locations , the values of , and remain invariant .note that is increasing in the quantity . if for a certain location , say , we have .at least one of and would be in the set .after switching this location with location , will be increased by , and at the same time the amount of decrease for will be at most .note that for all and , and hence a sequence , say , which maximizes should be of the format is the number of distinct treatments in sequence and . among sequences of this particular format ,the sequence which maximizes should satisfy , where we take the maximization over the empty set to be . without loss of generality , we assume .now we shall show for maximizing sequences as follows .suppose , this indicates . by decreasing by one and changing from to ,the quantity is increased by the amount of .\ ] ] if , we have in view of and .suppose , we have , hence we have at this point , we have shown . by similar arguments , one can show that the sequence maximizes among . by direct calculations, we have hence , ( [ eqn:04245])([eqn:04246 ] ) are proved . for the rest of ( ii ), the sufficiency of is indicated by the proof of theorem [ thm:0424 ] . for the necessity ,it is enough to note that the two components of will not be identical if .hence , the lemma is concluded .this section tries to illustrate the theorems of this paper through several examples for various combinations of and . 
by theorem [ thm:0424](iii ) , the efficiency of a design is higher under model ( [ eqn:0413 ] ) than under model ( [ eqn:728 ] ) for any criterion function satisfying ( c.1)(c.3 ) under a mild condition , that is , is persymmetric .hence , it is sufficient to propose optimal or efficient designs under model ( [ eqn:728 ] ) .the existence of the universally optimal measure in is obvious in view of lemmas [ lemma:0223 ] and [ lemma:0205 ] .however , to derive an exact design , one has to restrict the consideration to the subset .universally , optimal measure does not necessarily exist in except for certain combinations of . in this case, one can convert in the equations of theorem [ thm:0325 ] into by multiplying both sides of the equations by .then one can define a distance between two sides of the equations and find the solution , say , to minimize this distance .if there is universally optimal measure in , such approach automatically locates the universally optimal exact design ; otherwise , the exact designs thus found are typically highly efficient under the different criteria .see and figure [ fig1 ] for evidence .let be the eigenvalues of for an exact design .if is universally optimal , we have . here , we define - , - , - and -efficiencies of design as follows : it is well known that a universally optimal measure has unity efficiency under these four criteria .we begin with the discussion on the case when is of type- . for the latter, studied the conditions on for a pseudo symmetric design to be universally optimal for and , which was further extended by to .we would comment on these cases and then explore the case of and .finally , irregular form of will be briefly discussed .for , corollary [ cor:0418 ] indicates that the necessary and sufficient condition for a pseudo symmetric design to be universally optimal is .theorem 2 of proposed , which is sufficient but not necessary for universal optimality . for and , corollary [ cor:0418 ] indicates that sufficient conditions regarding given by theorems 1 and 3 of are also necessary . for , showed that the optimal values of are given by irrational numbers , and hence an exact universally optimal design does not exist .in fact , based on theorem [ thm:0325 ] here , one can derive efficient exact designs for the majority values of and .for example , below with and yields the efficiencies of , , and .note that the -efficiency is relatively lower than other efficiencies due to the asymmetry of the design ..\ ] ] for , showed that the set should include sequences , , and its dual sequence as defined in theorem [ thm:0426 ] .the optimal proportion for them are again irrational numbers .further , they proposed the use of type i orthogonal array ( ) , that is , , and proved that the -efficiencies of such designs are at least .note that is pseudo symmetric , hence its efficiencies are identical under criteria , , and .when , theorem [ thm:0426](ii ) indicates that a pseudo symmetric design with will be universally optimal .for example , when and , below with is universally optimal . here , the first sequences are equivalent to while the rest are equivalent to ..\end{aligned}\ ] ] when , there is a large variety of symmetric blocks in and there will be infinity many solutions for optimal sequence proportions . even for and , we shall have .let be the proportions of these symmetric blocks . 
a pseudo symmetric design with , , , , , and will be universally optimal .one simple solution is .hence a design which assigns of its blocks to sequences , , and is universally optimal . at last, we would like to convey the message that the deviation of from type- has large impact on the choice of designs .for simplicity of illustration , we consider the form .when and , the efficiency of reduces to .in fact , corollary [ cor:0418 ] indicates that , instead of for , becomes the dominating symmetric block among the four . to be more specific , a pseudo symmetric design with sequences solely from yields the efficiency of for all four criteria .when we tune to , the efficiency of further reduces to , while the symmetric design based on becomes even more efficient .one the other hand , when takes negative values , the efficiency of becomes even higher than .similar phenomena are observed for other cases of . for , we also observe that the value of influences the choice of design substantially .the details are omitted due to the limit of space .we end this section by figure [ fig1 ] .it shows that the linear equations system in theorem [ thm:0325 ] is powerful in deriving efficient exact designs for arbitrary values of . when , and .the -efficiency is plotted by the dashed line , while - , - and -efficiencies are all plotted by the same solid line . ]recall that is be total number of distinct symmetric blocks and are the representatives for each of the symmetric blocks .note that two pseudo symmetric measures with the same vector of have the same information matrix and hence the same performance under all optimality criteria . for a measure and a sequence , we define we also define and to be vector of length with the entry as 1 and other entries as 0 ._ step _ 0 : choose tuning parameters and such that is in a small neighborhood of zero and is in a neighborhood of one ._ step _ 1 : choose initial measure . put and ,then let . _ step _ 2 : check optimality . if , go to step 3otherwise , output the optimal measure as ._ step _ 3 : update the measure .let and the updated measure is .increase by 1 and go back to step 2 .there is a possibility of tie in choosing in step 1 and in step 3 .the strategy in such case is quite arbitrary .let . if , one can either choose an arbitrary and let or replace in step 3 by .the same strategy applies to the choice of .note that the update algorithm in step 3 is essentially a steepest descent algorithm .the parameter is to adjust for the length of step for the _ best _ direction . by the concavity of the optimality criteria , the global optimumis guaranteed to be found . in the examples of this paper, works well enough .the parameter is used to adjust for time of convergence .when the sequential algorithm converges very slow , one can increase to save time . in most examples of this paper, setting enable us to obtain the optimal design within seconds .we are grateful to the associate editor of this paper and two referees for their constructive comments on earlier versions of this manuscript .
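The update scheme in the steps listed above is essentially a vertex-direction (steepest-ascent) method on the simplex of sequence proportions. The sketch below shows the generic form of such an iteration; the paper's criterion function, directional-derivative test and tuning parameters are given by equations not reproduced here, so the stopping rule and the numerical gradient shown are placeholders only.
....
import numpy as np

def optimise_measure(phi, n_seq, alpha=0.05, tol=1e-6, max_iter=10_000):
    """Generic vertex-direction ascent over the simplex of sequence
    proportions p, sketching Steps 0-3 above.

    `phi` is a concave criterion p -> float (for example a surrogate built
    from the information matrix); its exact form in the paper is given by
    equations not reproduced here, so treat this as an illustration of the
    update scheme only.
    """
    p = np.full(n_seq, 1.0 / n_seq)          # Step 1: uniform start
    for _ in range(max_iter):
        grad = numerical_gradient(phi, p)
        s_star = int(np.argmax(grad))        # best single sequence
        # Step 2: optimality check via the directional-derivative gap
        if grad[s_star] - grad @ p < tol:
            return p
        # Step 3: move towards the vertex e_{s*} with step length alpha
        e = np.zeros(n_seq)
        e[s_star] = 1.0
        p = (1.0 - alpha) * p + alpha * e
    return p

def numerical_gradient(phi, p, h=1e-6):
    """Forward-difference gradient; adequate for a small illustration."""
    g = np.empty_like(p)
    base = phi(p)
    for i in range(p.size):
        q = p.copy()
        q[i] += h
        g[i] = (phi(q) - base) / h
    return g
....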
a systematic study is carried out regarding universally optimal designs under the interference model, previously investigated by kunert and martin [_ann. statist._ *28* (2000) 1728-1742] and kunert and mersmann [_j. statist. plann. inference_ *141* (2011) 1623-1632]. parallel results are also provided for the undirectional interference model, where the left and right neighbor effects are equal. it is further shown that the efficiency of any design under the latter model is at least its efficiency under the former model. designs universally optimal for both models are also identified. most importantly, this paper provides a kushner-type linear equations system as a necessary and sufficient condition for a design to be universally optimal. this result is novel for models with at least two sets of treatment-related nuisance parameters, which here are the left and right neighbor effects. it sheds light on deriving asymmetric optimal or efficient designs for other similar models.
complex network theory has successfully accounted for structural and dynamical problems of complex systems in terms of their connectivity patterns .most studies on complex networks , so far , have dealt with isolated network layers . however , many real - world complex systems such as physical , social , biological , and infrastructural systems consist of multiple layers of networks interacting each other . recently ,several studies on multiplex networks in which a node belongs to multiple network layers of distinct types of links have contributed to the progress of research on multi - layer complex systems along with other approaches like interdependent and interconnected networks .these studies have shown that the coupling structure and the interactions among different layers can significantly affect percolation , diffusion , cascade of failures , and network evolution in such networks .for many real - world multiplex networks , network layers are correlated one another rather than combined randomly .although there exist various forms of correlations between network layers , the interlayer degree correlation would be one of the simplest types as observed in multiplex online game social network data . in this case , a positive correlation represents that the degree of a node in one layer tends to be correlated with that in other layers , such that the hub in one layer also has many neighbors in the other layers . on the contrary, the hub in one layer would have few neighbors in the other layers for negatively correlated multiplex networks .recently , the effect of such interlayer degree correlation was addressed for connectivity of multiplex networks .furthermore , a few studies demonstrated that interdependent networks with higher interlayer degree correlation or more assortative layers are more robust under random damage .however , there is still lack of unified understanding of various robustness properties of multiplex networks due to the role of interlayer degree correlations .network robustness refers to the structural resilience of a network to external perturbations , which has been one of the most active topics in complex networks theory .the study on the network robustness aims not only for theoretical interests but also for practical applications to design more resilient structures against random breakdowns or intentional attacks .backup pathway between a pair of nodes is a meaningful concept of the network robustness , captured by the connection between a pair through at least two paths , termed biconnectivity .since a biconnected pair in networks can communicate under removal of one route , the biconnectivity can play a significant role in the network robustness .another widely - used measure of the network robustness is the the size of remaining giant component after removing a fraction of nodes or links , either chosen randomly or targeted with respect to their degrees .previous studies found that the network robustness under removal of nodes ( or links ) depends on the connectivity patterns of networks . in multiplex networks, different types of connectivity can be meaningful depending on the context with which the multiple network layers are coupled .in addition to the usual connectivity , for example , the so - called mutual connectivity can be significant in multiplex networks with cooperative or interdependent layers , in which case a node requires simultaneous connectivities through each and every layer for proper functioning . 
here , we study the impact of the interlayer degree correlation on various robustness properties of multiplex networks in terms of the biconnectivity , the connectivity , and the mutual connectivity . to take account of interlayer degree correlations, we mainly consider two layers of multiplex ( duplex ) networks with comparing three representative correlated structure ; maximally - positive ( mp ) , maximally - negative ( mn ) , and uncorrelated ( uc ) multiplex following ref . . in the mp case ,node s degrees in different layers are maximally correlated in their degree order , whereas they are maximally anti - correlated in the mn case .therefore , a node that is the hub in one layer is also the hub in the other layer for the mp case , but it has the smallest degree in the other layer for the mn case .real - world multiplex networks , of course , would be neither the mp nor the mn case , but the understanding based on these limiting structures with theoretical simplicity can be of illustrative and instructive for building insight towards more realistic situations .first , we examine the biconnectivity . subset of nodes in a network connected by at least two disjoint paths is said to form a biconnected component , or bicomponent for short .existence of the giant bicomponent spanning finite fraction of the entire system is important for stable connectivity of the network . by definition ,all nodes in a bicomponent have at least one alternative way preserving the connection in networks .if a typical time scale of the restoration of a broken node is much shorter than that of successive failures , every node in the bicomponent can completely endure its connectivity . generalizing the generating function method from to obtain the size of the giant bicomponent for multiplex networks with layers , we first define the generating function for the joint degree distribution of distinct types of links ( layers ) , , where is used to designate the degrees of a node in each layer , as where is used to denote the auxiliary variables coupled to . we also define the generating function for the remaining degree distribution by following a randomly chosen -type link , given by where is the mean degree of layer . then, on locally tree - like networks , the probability that a node reached upon following an -type edge does not belong to the giant component is given by the coupled self - consistency equations with .the size of the giant bicomponent , , is equal to the complementary probability that a randomly chosen node has none or one of its links leading to a node in the giant component , therefore , where the first two terms give the size of the giant unicomponent , , and the last term gives the difference between and .( filled symbols ) , and the unicomponent , ( open symbols ) , for the mp ( ) , the uc ( ) , and the mn ( ) couplings of duplex er networks .( b ) the gap between and as a function of .note that is the same with . for the mn coupling, the entire network is connected into a single bicomponent when .theoretical curves ( lines ) and numerical results ( points ) obtained with nodes , averaged over runs are shown together .( c ) data collapse of the scaled bicomponent size for the mn coupling , , vs. the finite - size scaling variable , , with and . ]the condition of existence of the giant bicomponent is that the largest eigenvalue of the jacobian matrix , , of eq .( [ ui ] ) at to be larger than . 
for duplex networks, can be expressed as where and .the largest eigenvalue of is given in terms of and as , .\end{aligned}\ ] ] the analytic predictions based on the above generating function method as well as numerical simulation results are obtained for the duplex erds - rnyi ( er ) networks .the main results from comparisons of the three correlation types are as follows .first , the more correlated - coupling there is in multiplex networks , the lower does the percolation threshold become . furthermore , the size of the giant bicomponent for the mp case , , is the same as that of the giant unicomponent , ( figs .2a , b ) , meaning that all pairs of nodes in the giant unicomponent have at least two independently connected paths .in addition , the giant bicomponent always exists for any non - zero link density , so that the mp coupling offers a well - connected structure even with sparse link density . on the contrary ,the emergence of the giant bicomponent for the mn coupling is much delayed . after passing the percolation threshold , , the size of the bicomponent increases slower than ( fig .2a , b ) . near the critical point , , where ( fig .2c ) , which is twice the mean - field critical exponent for in agreement with general critical behavior of bicomponent .therefore increases from zero in a convex manner near , in contrast to the behavior of displaying a concave increase above with for all three cases . when , the entire network is connected into a single component for the mn coupling and the disparity between and disappears , too .the maximum value of for the mn coupling is located at , which is larger than that for the uc coupling .the mn coupling hinders the emergence of the giant bicomponent for low density , yet it can establish the biconnected structure over the whole network with a finite link density .the error and attack tolerance of a network under structural disturbance has been one of the major problems in network theory , which has also been addressed in the context of interdependent networks in recent years . in this section, we consider this problem for multiplex networks with interlayer degree correlations . for the analytic calculation of the giant component size after removing a fraction of nodes , we extend the generating function method for single networks to multiplex networks .first , let be the probability that a node with degrees is removed from the initial network , which encodes the node removal strategy .for example , when fraction of nodes are removed uniformly by chance , .for the intentional attack in which one removes targeted nodes in order of the total degree , one has , where is the heaviside step function and is the cutoff total degree for the attack . with , we can define the joint degree generating function after the node removal as \prod_{i=1}^n x_i^{k_i}.\end{aligned}\ ] ] similarly , the generating function for the remaining degrees upon following a randomly chosen -type link is given by then , on locally tree - like networks , the probability that a node reached by following an -type link does not belong to the giant component , , is given by the coupled self - consistency equations , we finally obtain the giant component size after the node removal as with the appropriately chosen for , e.g. , the random breakdown or the intentional attack based on the total degree . 
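For the uncorrelated coupling the calculation above collapses to a single scalar fixed point, which gives a quick way to sanity-check the analytic curves. The sketch below treats only that special case: two independent Poisson (ER) layers, ordinary connectivity, and random node removal with retained fraction q, for which the giant-component fraction satisfies the familiar relation S = q(1 - exp(-(c1+c2)S)). The correlated MP and MN cases require the full joint-degree equations given above.
....
import math

def giant_component_uc_er(c1, c2, q, tol=1e-12, max_iter=100_000):
    """Fixed-point solution of S = q * (1 - exp(-(c1 + c2) * S)),
    the uncorrelated duplex-ER special case of the self-consistency
    equations above, with q the fraction of nodes retained after
    random failure. Returns the giant-component fraction S."""
    c = c1 + c2
    s = q  # start from the trivial upper bound
    for _ in range(max_iter):
        s_new = q * (1.0 - math.exp(-c * s))
        if abs(s_new - s) < tol:
            break
        s = s_new
    return s_new

# Illustration: in this uncorrelated case a giant component requires a
# retained fraction of roughly q_c = 1 / (c1 + c2).
for q in (0.2, 0.3, 0.5, 0.8):
    print(q, round(giant_component_uc_er(2.0, 2.0, q), 4))
....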
in what follows we present the main results from the analytic calculations together with the numerical simulations on various node removal scenarios and multiplex network couplings . in the first two following subsections , we will demonstrate our analyses on duplex er networks with layers of equal link density ( denoted as ) , after which the results on other graph ensembles and coupling types are briefly outlined .( b ) and ( c ) under random damage .the mp coupling produces more robust structure than the others against random failure .theoretical curves ( lines ) and numerical results ( points ) obtained with nodes , averaged over runs are shown together . ] for the random deletion of nodes , that is with for duplex networks , the mp ( mn ) coupling is more resilient ( vulnerable ) than the others.the percolation threshold for the mp , , is always larger than that for the uc and the mn couplings , so that more removal of nodes is needed to destroy connection at a given ( figs .the curve for mn coupling exhibits several kinks , which were found to occur when the minimum total degree of the network changes .rescaled size of the giant component , where is the size of the giant component with , for the mp coupling is also larger than those for the other cases for any .main reason of the high robustness of the mp coupling might be the skewness of its total degree distribution . by the opposite reason , the mn coupling is more vulnerable under random breakdowns of nodes compared to the uc and the mp cases .generically the interlayer degree correlation increases the network robustness to random damage , but the effect of correlated multiplexity becomes less significant as the network becomes dense ( fig .( b ) and ( c ) under the intentional attack based on total degrees .the mn case is more robust for the dense networks but vulnerable for the sparse networks .theoretical curves ( lines ) and numerical results ( points ) obtained with nodes , averaged over runs are shown together . ] for the intentional attack on nodes in the descending order of total degrees , _i.e. _ , $ ] for duplex networks , the structural robustness of correlated multiplex networks depends on both the coupling types and link densities , as illustrated by the behaviors of the critical attack fraction ( fig .when the network is sparse , _i.e. _ , the mp case is more robust against the attack than the uc case ( fig .4b ) . on the contrary , when , the percolation threshold for the mp coupling is larger than the uc case meaning that the mp is more vulnerable to the attack in this regime ( fig .the mn coupling results in the opposite effect to the mp coupling against the attack .the mn case is more robust for dense networks but vulnerable for sparse networks than the uc case . besides these general trends, the critical attack fraction versus the mean degree in duplex er networks exhibits much more complicated pattern compared to that of random failures , including the anomalous decrease of with respect to , albeit in some narrow windows .more detailed investigation would be necessary to examine the structural origin of such anomalies. meanwhile , it is well known from single network studies that networks with more skewed degree distribution are more vulnerable under degree - based attacks in general . in this perspective , it is interesting to note the mp coupling can produce more robust multiplex network system against the attack for sufficiently sparse link density despite skewness . 
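A direct Monte Carlo check of these trends can be run with a few lines of code. The sketch below builds two ER layers, realises the MP, MN or UC coupling by matching degree ranks as described earlier, removes a random fraction of nodes and measures the giant component of the aggregated network. The system size, mean degree and removed fraction are illustrative and far smaller than those used for the reported figures; the targeted-attack variant would simply replace the random survival mask with one based on total degree.
....
import random
from collections import defaultdict, deque

def er_layer(n, mean_deg, rng):
    """Erdos-Renyi layer with a fixed edge count, stored as adjacency sets."""
    target = int(mean_deg * n / 2)
    adj = defaultdict(set)
    edges = 0
    while edges < target:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and j not in adj[i]:
            adj[i].add(j)
            adj[j].add(i)
            edges += 1
    return adj

def relabel_for_coupling(adj1, adj2, n, kind, rng):
    """Relabel layer-2 nodes so that degree ranks realise the MP, MN or UC
    interlayer correlation by rank matching, as described in the text."""
    rank1 = sorted(range(n), key=lambda v: len(adj1[v]))
    rank2 = sorted(range(n), key=lambda v: len(adj2[v]))
    if kind == "MP":
        mapping = dict(zip(rank2, rank1))
    elif kind == "MN":
        mapping = dict(zip(rank2, reversed(rank1)))
    else:  # UC: random matching
        perm = list(range(n))
        rng.shuffle(perm)
        mapping = dict(zip(range(n), perm))
    out = defaultdict(set)
    for v in range(n):
        for w in adj2[v]:
            out[mapping[v]].add(mapping[w])
    return out

def giant_fraction(n, alive, layers):
    """Largest component of the union of the layers among surviving nodes."""
    seen, best = set(), 0
    for s in range(n):
        if not alive[s] or s in seen:
            continue
        seen.add(s)
        queue, size = deque([s]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for adj in layers:
                for w in adj[v]:
                    if alive[w] and w not in seen:
                        seen.add(w)
                        queue.append(w)
        best = max(best, size)
    return best / n

rng = random.Random(1)
n, c, keep = 5000, 2.5, 0.5
layer1 = er_layer(n, c, rng)
layer2 = er_layer(n, c, rng)
for kind in ("MP", "UC", "MN"):
    coupled = relabel_for_coupling(layer1, layer2, n, kind, rng)
    alive = [rng.random() < keep for _ in range(n)]  # random node failure
    print(kind, round(giant_fraction(n, alive, (layer1, coupled)), 3))
....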
and ( a ) and the attacks with and ( b ) , on partially - correlated duplex er networks for the failure with ( c ) and the attack with ( d ) , and on triplex er networks under the failure with ( e ) and the attack with ( f ) . ]to take a more comprehensive overview of the effect of various multiplex coupling factors , we consider additional layer coupling scenarios : i ) duplex er networks of layers with different mean degrees , ii ) duplex er networks with non - maximal correlated couplings , and iii ) triplex er networks .first , we examine the duplex er networks with layers of different mean degree , . as a specific example , we study the case for against the random failure ( fig .5a ) and the attack ( fig .the results are qualitatively the same as the equal mean degree case with the same total mean degree : for random failure , the mp coupling is most robust and the mn coupling is least robust ( fig . 5a , to be compared with fig .the opposite behaviors are obtained for the targeted attack with ( fig . 5b , to be compared with fig .4c ) . for ,the mp case becomes more vulnerable than the uc case against the attack when the total mean degree exceeds , which is less than that for the identical mean degree case , suggesting that the layer degree disparity can shrink the regime where the mp coupling is most robust to the attack .second , the duplex er networks with non - maximal correlated coupling are considered .we construct a non - maximal correlated coupling in the following way .a fraction of nodes are maximally correlated - coupled ( either mp or mn ) while the other fraction is randomly coupled ( uc ) .the parameter sets the strength of correlated coupling between multiplex layers . in this scheme ,the joint degree distribution of the duplex network is obtained by , where is either mp or mn , which can be readily adopted for theoretical calculation .the results for show that the non - maximal correlation can still affect the robustness of networks but the magnitude of the effect is smaller than that of the maximally correlated couplings ( fig .5c , d ; to be compared with fig .3d , 4c , respectively ) .finally , we briefly address the robustness of the correlated triplex er networks with equal layer - densities . as there can be two independent interlayer couplings for triplex networks ,there exist a total of six different combinations of layer couplings . herewe show the results for three representative coupling combinations : mp - mp , uc - uc , and mn - mp couplings .for example , the mn - mp coupling may represent the case where the first layer is coupled with the second layer by the mn coupling whereas it is coupled with the third layer by the mp coupling .we found that among these three cases the mp - mp coupling is most robust to random node failure but can be fragile to targeted attack , whereas the mn - mp coupling exhibits the opposite behaviors ( fig .the mn - mn coupling gives same results with the mn - mp coupling in this case .we also observed that the mp - uc ( mn - uc ) coupling yields intermediate behaviors between mp - mp ( mn - mp ) and uc - uc couplings : as well as for mp - uc ( mn - uc ) lies between those of mp - mp ( mn - mp ) and uc - uc couplings .we also study the same problem for multiplex scale - free ( sf ) networks numerically . to build the sf network layers with tunable degree exponent and mean degree, we use the static model , where each node has an endogenous weight given by , with being a constant , . 
for each step to construct a network , a pair of nodes , say and , are chosen independently following the probability and , respectively , and connected unless they are already linked .one repeats this step until the layer has the desired mean degree . for typical cases with ,the degree distribution of the resulting layer is asymptotically scale - free , decaying as with the degree exponent . to the random failure for (a ) and the intentional attack based on the total degree for and ( b ) , obtained with nodes , averaged over runs . ]we use sf layers with identical degree exponent , which is in the range . in this regime , each layer itself is extremely resilient against the random failures due to high degree heterogeneity , as is well - known from the single - network studies .therefore , all three coupling types show high robustness , with only a small difference among them that the mp coupling is most robust and the mn coupling is least robust , similarly with duplex er cases ( fig .6a ) . for the attack ,the mn case is more resilient for dense networks but more vulnerable for sparse networks , again in qualitative similarity to duplex er cases , as illustrated by the comparisons of duplex sf networks of equal mean degrees and , respectively ( fig .in multiplex network systems , layers may be interdependent , in the sense that nodes in one layer may require supports from corresponding nodes in the other layers and vice versa , demanding simultaneous connectivities in each and every layers of the network for proper function . for such systems , one can address the network robustness in terms of mutually - connected component , also called mutual component for short , whose size can be obtained by the generating function method due to ref . , as follows . on locally tree - like networks ,the probability that a node reached by following an -type link does not belong to the giant mutual component , , is given by the following coupled self - consistency equations , then the size of the giant mutual component , , for multiplex networks is obtained by ( b ) , and on the correlated triplex er networks ( c ) .lines represent analytical calculations and the symbols in ( a ) are numerical results obtained with nodes , averaged over runs . ] the main results of analytic predictions from the above theory as well as the numerical simulations for the duplex er networks are as follows ( fig .as is well - known , the giant mutual component emerges discontinuously , in contrast with the ordinary percolation transition that exhibits a continuous phase transition .similarly to the ordinary connectivity , the percolation threshold of the mutual percolation for the mp coupling is lower , whereas the mn coupling requires denser network for the emergence of the giant mutual component than the other cases .we performed additional analyses on multiplex er networks , shown in fig . 7 , for the cases of non - maximal correlated couplings ( fig .7a ) , unequal layer - densities ( fig .7b ) , and triplex layers ( fig .( a ) , and for the attacks based on the degree with ( b ) , ( c ) , and ( d ) .theoretical curves ( lines ) and numerical results ( points ) obtained with nodes , averaged over runs , are shown together . 
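the coupled self-consistency equations for the mutual component can likewise be solved by fixed-point iteration. the sketch below specializes the standard mutual-percolation formulation to duplex networks and reuses joint_degrees from the earlier sketch (not the authors' code; here y_i denotes the probability that an i-type link does not lead into the mutual giant component). the node-removal generalization of the next section amounts to multiplying each expectation by the survival factor 1 - r(k1, k2), exactly as in the ordinary-percolation sketch:

```python
def mutual_giant_component(pairs, tol=1e-12, max_iter=20_000):
    """size of the giant mutual component of a duplex network
    (locally tree-like approximation, no node removal)."""
    k1 = pairs[:, 0].astype(float)
    k2 = pairs[:, 1].astype(float)
    m1, m2 = k1.mean(), k2.mean()
    y1 = y2 = 0.0   # start from the fully viable network and iterate to the fixed point
    for _ in range(max_iter):
        ny1 = 1 - np.mean(k1 / m1 * (1 - y1 ** np.maximum(k1 - 1, 0)) * (1 - y2 ** k2))
        ny2 = 1 - np.mean(k2 / m2 * (1 - y1 ** k1) * (1 - y2 ** np.maximum(k2 - 1, 0)))
        if abs(ny1 - y1) + abs(ny2 - y2) < tol:
            break
        y1, y2 = ny1, ny2
    # a node is in the mutual component only if it has at least one viable link in every layer
    return np.mean((1 - y1 ** k1) * (1 - y2 ** k2))

for c in ("MP", "UC", "MN"):
    print(c, mutual_giant_component(joint_degrees(2.5, 2.5, 200_000, c)))
```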
] following a similar procedure to the preceding section , one can calculate the giant mutual component size under removal of randomly chosen nodes or targeted nodes with the highest degrees on locally tree - like networks .combining the theory for the mutual percolation and the node removals ( see also for an alternative approach ) , the probability that a node reached by following an -type link does not belong to the giant mutual component after deletion of nodes can be obtained by the following coupled self - consistency equations ( 1-y_i^{k_i-1})\prod_{j=1,j\ne i}^{n } ( 1-y_j^{k_j}).\end{aligned}\ ] ] then the size of the giant mutual component after node removals can be computed as \prod_{j=1}^{n } ( 1-y_j^{k_j}).\end{aligned}\ ] ] for the duplex er networks with equal layer - densities , we found that the mp ( mn ) coupling is more robust ( vulnerable ) than the other cases against the random node removals . the result for the mp casewas also obtained earlier in refs .the rescaled size of the giant mutual component , where is the size of the giant mutual component with , for the mp ( mn ) coupling is larger ( smaller ) than those for the others for any removal fraction ( fig .8a ) . for the targeted attack based on the total degree ,however , the effect of correlated multiplexity is more complicated . for sufficiently low density , e.g. , ( fig .8b ) , the mp ( mn ) coupling is more robust ( vulnerable ) than the others against the attack . with intermediate density ,say ( fig .8c ) , the mn coupling is most robust and the uc is most vulnerable . for high enough density , e.g. , ( fig .8d ) , the mn ( mp ) case is most robust ( vulnerable ) against the attack , opposite to the low density case .this shows that the effect of correlated multiplexity on the robustness of mutual connectivity is not monotonic and could depend strongly on the details of interdependency .finally , we examine the robustness property of a real - world multiplex network under node removals .the real - world network data we consider consists of two layers , the internet backbone network and the high - voltage electrical transmission network in italy .these two network layers can be regarded as interdependent in such a way that a failure in one layer ( say , a power station in the power grid ) would lead to that on the other layer ( say , a power control station communicating through the internet ) , and vice versa . thus this system can be modeled as a multiplex network . following the rationale of , we have established the interdependency between two layers based on the geographical distance so that each node in the internet network is interdependent on the closest node in the power transmission network .nodes with no interdependent partner are thought to be functional autonomously . we first calculate numerically the fraction of functional interdependent nodes , , of the internet - power transmission multiplex network following the interdependent cascade model of ref . 
upon the random failure and the degree - based targeted attack on the interdependent nodes ( fig .the numerical results show that the rescaled fraction of functional nodes , where is the fraction of functional nodes with , is relatively robust against the random failure as it can endure up to around 80% interdependent - node removals , whereas it rapidly disintegrates upon the targeted attack on as small as 20% of highest - degree interdependent nodes .we also examine the effect of correlated couplings in this system to the attack vulnerability by using artificial multiplex networks with rewired interdependency into the mp or the mn types ( fig .the results for the rewired multiplex networks show that the mn coupling is more robust to the targeted attack on high - degree nodes than the mp coupling .it is interesting to note that the behavior of the real - world network data lies close to that of the mn coupling despite significant difference in actual interdependency patterns .in this paper , we have studied various network robustness properties of multiplex networks focusing on the role of the correlation between degrees of a node across different layers .we have analyzed specifically the biconnectivity and the error and attack tolerance of the ordinary as well as the mutual connectivity , covering a wide spectrum of network robustness relevant to multiplex networks .we found that the correlated coupling of multiplex layers can significantly alter the robustness properties of multiplex networks in diverse ways .for example , positively - correlated multiplex networks are more robust , whereas anti - correlated multiplex are less robust , in the context of the biconnectivity and the ordinary as well as mutual connectivity upon random node failure . to the targeted attack based on nodes degrees , on the contrary ,positively - correlated multiplex networks with sufficiently high link - density can be highly vulnerable , whereas the anti - correlated ones can become more resilient .we also examined the effect of various additional multiplex - coupling factors and a real - world example of the italian internet - power transmission multiplex system .our analyses reveal that the notion of network robustness can exhibit more diversified aspects in multiplex networks compared to single - network situation , dependent on specific context and interplay between the network layers .we expect our initial analyses could prompt attention and provide a basic insight for further research endeavors on understanding the robustness of correlated multiplex systems .interesting topics of future work in this regard would include the extension to account for higer - order correlation properties beyond the interlayer degree correlation considered in this work , such as clustering in multiplex networks .we thank v. rosato for providing the italian internet backbone and the high - voltage electrical transmission network data .we also thank the anonymous referees for useful comments .this work was supported by basic science research program through the nrf grant funded by msip ( no .2011 - 0014191 ) .b. m. is also supported by a korea university grant .l. is also supported by global ph.d .fellowship program ( no . 2011 - 0007174 ) through nrf , msip .99 m. e. j. newman , _ networks : an introduction _ ( oxford university press , oxford , 2010 ) .r. cohen and s. havlin , _ complex networks : structure , robustness and function _ ( cambridge university press , cambridge , 2010 ). l. m. 
verbrugge , social forces * 57 * , 1286 ( 1979 ) .j. f. padgett and c. k ansell , am . j. sociol . *98 * , 1259 ( 1993 ) .r. g. little , j. urban technol .* 9 * , 109 ( 2002 ) .v. rosato , l. issacharoff , f. tiriticco , s. meloni , s. porcellinis , and r. setola , int . j. crit. infrastruct . * 4 * , 63 ( 2008 ) .m. szell , r. lambiotte , and s. thurner , proc .107 * , 13636 ( 2010 ) .p. j. mucha , t. richardson , k. macon , m. a. porter , and j .-onnela , science * 328 * , 876 ( 2010 ) .a. cardillo , j. gmez - gardenes , m. zanin , m. romance , d. papo , f. del pozo , and s. boccaletti , sci . rep . * 3 * , 1344 ( 2013 ) .s. v. buldyrev , r. parshani , g. paul , h. e. stanley , and s. havlin , nature ( london ) * 464 * , 1025 ( 2010 ) .e. a. leicht and r. m. dsouza , arxiv:0907.0894 .lee , j. y. kim , w .- k .goh , and i .-kim , new j. phys .* 14 * , 033027 ( 2012 ) .j. gmez - gardenes , i. reinares , a. arenas , and l. m. floria , sci .* 2 * , 620 ( 2012 ) .s. gmez , a. daz - guilera , j. gmez - gardenes , c. j. prez - vicente , y. moreno , and a. arenas , phys .110 * , 028701 ( 2013 ) .m. kivel , a. arenas , m. barthelemy , j. p. gleeson , y. moreno , and m. a. porter , arxiv:1309.7233 .r. parshani , s. v. buldyrev , and s. havlin , phys .lett . * 105 * , 048701 ( 2010 ) .j. gao , s. v. buldyrev , h. e. stanley , and s. havlin , nat .* 8 * , 40 ( 2012 ) .w . son , g. bizhani ,c. christensen , p. grassberger , and m. paczuski , epl * 97 * , 16006 ( 2012 ) . v. h. p. louzada , n. a. m. araujo , j. s. andrade jr , h. j. herrmann , sci . rep . * 3 * , 3289 ( 2013 ) . c. d. brummitt , k .-lee , and k .-goh , phys .e * 85 * , 045102(r ) ( 2012 ) . c. m. schneider , n. yazdani , n. a. m. araujo , s. havlin , and h. j. herrmann , sci .rep . * 3 * , 1969 ( 2013 ) .v. nicosia , g. bianconi , v. latora , and m. barthelemy , phys .lett . * 111 * , 058701 ( 2013 ) ; j. y. kim and k .-goh , phys .rev . lett . * 111 * , 058702 ( 2013 ) .r. parshani , c. rozenblat , d. ietri , c. ducruet , and s. havlin , epl * 92 * , 68002 ( 2010 ) . s. v. buldyrev , n. w. shere , and g. a. cwilich , phys .e * 83 * , 016112 ( 2011 ) .d. zhou , h. e. stanley , g. dagostino , and a. scala , phys .e * 86 * , 066103 ( 2012 ) .r. albert , h. jeong , and a .-barabsi , nature * 406 * , 378 ( 2000 ) .d. s. callaway , m. e. j. newman , s. h. strogatz , and d. j. watts , phy .lett . * 85 * , 5468 ( 2000 ) .r. cohen , k. erez , d. ben - avraham , and s. havlin , phys .. lett . * 85 * , 4626 ( 2000 ) .r. cohen , k. erez , d. ben - avraham , and s. havlin , phys .lett . * 86 * , 3682 ( 2001 ) .p. holme , b. j. kim , c. n. yoon , and s. k. han , phys .e * 65 * , 056109 ( 2002 ) .a. x. c. n. valente , a. sarkar , and h. a. stone , phys .lett . * 92 * , 118702 ( 2004 ) .t. tanizawa , g. paul , r. cohen , s. havlin , and h. e. stanley , phys . rev .e * 71 * , 047101 ( 2005 ) . c. m. schneider , a. a. moreira , j. s. andrade jr . ,s. havlin , and h. j. herrmann , proc . natl .108 * , 3838 ( 2011 ) .m. e. j. newman and g. ghoshal , phys .* 100 * , 138701 ( 2008 ) .p. kim , d .- s .lee , and b. kahng , phys . rev .e * 87 * , 022804 ( 2013 ) .r. parshani , s. v. buldyrev , and s. havlin , proc .108 * , 1007 ( 2011 ) .x. huang , j. gao , s. v. buldyrev , s. havlin , and h. e. stanley , phys .e * 83 * , 065101(r ) ( 2011 ) .goh , b. kahng , and d. kim , phys .lett . * 87 * , 278701 ( 2001 ) .e. cozzo , m. kivel , m. d. domenico , a. sol , a. arenas , s. gmez , m. a. porter , and y. moreno , arxiv:1307.6780 .
we study the robustness properties of multiplex networks consisting of multiple layers of distinct types of links, focusing on the role of correlations between the degrees of a node in different layers. we use the generating function formalism to address various notions of network robustness relevant to multiplex networks, such as the resilience of ordinary and mutual connectivity under random or targeted node removals, as well as the biconnectivity. we find that correlated coupling can affect the structural robustness of multiplex networks in diverse ways. for example, in maximally positively correlated duplex networks all pairs of nodes in the giant component are connected via at least two independent paths and the network structure is highly resilient to random failure. in contrast, anti-correlated duplex networks are robust against targeted attacks on high-degree nodes, yet can be vulnerable to random failure.
quantum cryptography is a technique for generating and distributing cryptographic keys in which the secrecy of the keys is guaranteed by quantum mechanics .the first such scheme was proposed by bennett and brassard in 1984 ( bb84 protocol ) . sender and receiver ( conventionally called alice and bob ) use a quantum channel , which is governed by the laws of quantum mechanics , and a classical channel which is postulated to have the property that any classical message sent will be faithfully received .the classical channel will also transmit faithfully a copy of the message to any eavesdropper , eve . along the quantum channel a sequence of signalsis sent chosen at random from two pairs of orthogonal quantum states .each such pair spans the same hilbert space .for example , the signals can be realized as polarized photons : one pair uses horizontal and vertical linear polarization ( ) while the other uses linear polarization rotated by degrees ( ) .bob at random one of two measurements each performing projection measurements on the basis or . the _ sifted key_ consists of the subset of signals where the bases of signal and measurement coincide leading to deterministic results. this subset can be found by exchange of classical information without revealing the signals themselves .any attempt of an eavesdropper to obtain information about the signals leads to a non - zero expected error rate in the sifted key and makes it likely that alice and bob can detect the presence of the eavesdropper by comparing a subset of the sifted key over the public channel . if alice and bob find no errors they conclude ( within the statistical bounds of error detection ) that no eavesdropper was active .they then translate the sifted key into a sequence of zeros and ones which can be used , for example , as a one - time pad in secure communication .several quantum cryptography experiments have been performed . in the experimental set - up noiseis always present leading to a bit error rate of , typically , 1 to 5 percent errors in the sifted key .alice and bob can not even in principle distinguish between a noisy quantum channel and the signature of an eavesdropper activity .the protocol of the key distribution has therefore to be amended by two steps .the first is the _ reconciliation _ ( or error correction ) step leading to a key shared by alice and bob .the second step deals with the situation that the eavesdropper now has to be assumed to be in the possession of at least some knowledge about the reconciled string .for example , if one collects some parity bits of randomly chosen subsets of the reconciled string as a new key then the shannon information of an eavesdropper on that new , shorter key can be brought arbitrarily close to zero by control of the number of parity bits contributing towards it .this technique is the generalized privacy amplification procedure by bennett , brassard , crpeau , and maurer .the final measure of knowledge about the key used in this article is that of change of shannon entropy . 
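before turning to the entropy measure defined next, here is a toy numpy illustration of the sifting step described above; it assumes an ideal, loss- and noise-free single-photon channel with no eavesdropper, and the variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
alice_bits  = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)          # 0: rectilinear (+), 1: diagonal (x)
bob_bases   = rng.integers(0, 2, n)
# ideal channel: matching bases reproduce alice's bit, mismatched bases give a random outcome
bob_bits = np.where(bob_bases == alice_bases, alice_bits, rng.integers(0, 2, n))
sifted = bob_bases == alice_bases            # basis choices announced over the public channel
sifted_key_alice = alice_bits[sifted]
sifted_key_bob   = bob_bits[sifted]
assert np.array_equal(sifted_key_alice, sifted_key_bob)
```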
if we assign to each potential key an a - priori probability then the shannon entropy of this distribution is defined as = - \sum_x p(x ) \log p(x ) \ ; .\ ] ] note that all logarithms in this article refer to basis .the knowledge eve obtains on the key may be denoted by and leads to an a - posteriori probability distribution .the difference between the shannon entropy of the a - priori and the a - posteriori probability distribution is a good measure of eve s knowledge : -s\left [ p(x|k)\right ] \ ; .\ ] ] for short , we will call the _ entropy change_. we recover the shannon information as the expected value of that difference as where eve s knowledge occurs with probability .if we are able to give a bound on for a specific run of the quantum key distribution experiment then this is a stronger statement than a bound a the shannon information : we guarantee not only security on average but make a statement on a specific key , as required for secure communication .the challenge for the theory of quantum cryptography is to provide a statement like the following one : if one finds errors in a sifted key of length then , after error correction under an exchange of bits of redundant information , a new key of length can be distilled on which , with probability , a potential eavesdropper achieves an entropy change of less than . here has to be chosen in view of the application for which the secret key is used for .it is not necessary that each realization of a sifted key leads to a secret key ; the realization may be rejected with some probability . in that casealice and bob abort the attempt and start anew .the final goal is to provide the security statement taking into account the real experimental situation .for example , no real channel exist which fulfill the axiom of faithfulness .there is the danger that an eavesdropper can separate alice and bob and replace the public channel by two channels : one from alice to eve and another one from eve to bob . in this separate world scenarioeve could learn to know the full key without causing errors .she could establish different keys with alice and bob and then transfer effectively the messages from alice to bob .this problem can be overcome by _ authentication _ .this technique makes it possible for a receiver of a message to verify that the message was indeed send by the presumed sender .it requires that sender and receiver share some secret knowledge beforehand .it should be noted that it is not necessary to authenticate all individual messages sent along the public channel .it is sufficient to authenticate some essential steps , including the final key , as indicated below . in the presented protocol ,successful authentication verifies at the same time that no errors remained after the key reconciliation .the need to share a secret key beforehand to accomplish authentication reduces this scheme from a quantum key distribution system to a quantum key growing system : from a short secret key we grow a longer secret key . on the other hand ,since one needs to share a secret key beforehand anyway , one can use part of it to control the flow of side - information to eve during the stage of key reconciliation in a new way . 
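as a small numerical illustration of these definitions (the example scenario is mine, not from the text): for a uniformly distributed 3-bit key, learning one parity bit reduces the shannon entropy from 3 bits to 2 bits, i.e. an entropy change of 1 bit:

```python
import numpy as np

def shannon_entropy(p):
    """s[p] = - sum_x p(x) log2 p(x), with the base-2 logarithm used in the text."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.full(8, 1 / 8)                       # uniform prior over all 3-bit keys
# suppose eve learns that the key has even parity; four keys remain equally likely
posterior = np.array([1 / 4 if bin(x).count("1") % 2 == 0 else 0.0 for x in range(8)])
entropy_change = shannon_entropy(prior) - shannon_entropy(posterior)   # = 1 bit
print(entropy_change)
```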
with side - informationwe mean any classical information about the reconciled key leaking to the eavesdropper during the reconciliation .another problem is that in a real application we can not effectively create single photon states .recent developments by law and kimble promise such sources , but present day experiments use dim coherent states , that is coherent pulses with an expected photon number of typically per signal .the component of the signal containing two or more photon states , however , poses problems .it is known that an eavesdropper can , by the use of a quantum non - demolition measurement of the total photon number and splitting of signals , learn with certainty all signals containing more than one photon without causing any errors in the sifted key .if eve can get hold of an ideal quantum channel this will lead to the existence of a maximum value of loss in the channel which can be tolerated .it is not known at present whether this qnd attack , possibly combined with attacks on the remaining single photons , is the optimal attack but it is certainly pretty strong .the eavesdropper is restricted in her power to interfere with the quantum signals only by quantum mechanics . in the most general scenario , she can entangle the signals with a probe of arbitrary dimensions , wait until all classical information is transmitted over the public channel , and then make a measurement on the auxiliary system to extract as much information as possible about the key .many papers , so far , deal only with single photon signals .at present there exists an important claim of a security proof in this scenario by mayers .however , the protocol proposed there is , up to now , far less efficient than the here proposed one .other security proofs extend to a fairly wide class of eavesdropping attacks , the coherent attacks . in this paper i will give a solution to a restricted problem .the restriction consists of four points : * the eavesdropper attacks each signal individually , no _ coherent or collective attacks _ take place . *the signal states consist , indeed , of two pairs of orthogonal single photon states so that two states drawn from different pairs have overlap probability .* bob uses detectors of identical detection efficiencies . *the initial key shared by alice and bob is secret , that is the eavesdropper has negligible information about it .using the part of the key grown in a previous quantum key growing session is assumed to be safe in this sense . within these assumptionsi give a procedure that leads with some a - priori probability to a key shared by alice and bob . 
if successful , the key is secure in the sense that with probability any potential eavesdropper achieved an entropy change less than .in contrast to all other work on this subject , this procedure takes into account that the eavesdropper does not necessarily transmit single photons to the receiver ; she might use multi - photon signals to manipulate bob s detectors .the procedure presented here might not be optimal , but it is certifiable safe within the four restrictions mentioned before .it should be pointed out that coherent eavesdropping attacks are at present beyond our experimental capability .alice and bob can increase the difficulty of the task of coherent or collective eavesdropping attacks by using random timing for their signals ( although here one has to be weary about the error rate of the key ) or by delaying their classical communication thereby forcing eve to store her auxiliary probe system coherently for longer time .there is an important difference between the threat of growing computer power against classical encryption techniques and the growing power of experimental skills in the attack on quantum key distribution : while it is possible to decode today s message with tomorrow s computer in classical cryptography , you can not use tomorrow s experimental skills in eavesdropping on a photon sent and detected today .it is seems therefore perfectly legal to put some technological restrictions on the eavesdropper .this might be , for example , the restriction to attacks on individual system , or even the restriction to un - delayed measurements .for the use of dim coherent states one might be tempted to disallow eve to use perfect quantum channels and to give her a minimum amount of damping of her quantum channel .the ultimate goal , however , should be to be _able _ to cope without those restrictions .the structure of the paper is as follows . in section [ howto ]i present the complete protocol on which the security analysis is based .then , in section [ elements ] i discuss in more detail the various elements contributing to the protocol .the heart of the security analysis is presented in section [ expected ] before i summarize in section [ analysis ] the efficiency and security of the protocol .the protocol presented here is a suitable combination of the bennett - brassard protocol , reconciliation techniques and authentication methods .i make use of the fact that alice and bob have to share some secret key beforehand . instead of seeing thatas a draw - back , i make use of it to simplify the control of the side - information flow during the classical data exchange .side - information might leak to eve in the form of parity bits , exchanged between alice and bob during reconciliation , or in the form of knowledge that a specific bit was received correctly or incorrectly by bob .the side - information could be taken care of this during the privacy amplification step using the results of . 
herei present for clarity a new method to avoid any such side - information which correlates eve s information about different bits ( as parity bits do which are typically used in reconciliation ) by using secret bits to encode some of the classical communication .the notation of the variables is guided by the idea that denotes numbers of bits , especially key length at various stage , denotes numbers of secure bits used in different steps of the protocol , denote probabilities of failing to establish a shared key , denote failure probabilities critical to the safety of an established key , while denotes the probability that alice and bob , unknown to themselves , do not even share a key .quantities or denote expected values of the quantity .the protocol steps and their achievements are : 1 . alice sends a sufficient number of signals to bob to generate a sifted key of length .2 . bob notifies alice in which time slot he received a signal .alice and bob make a `` time stamp '' allowing them to make sure that the previous step has been completed before they begin the next step .this can be done , for example , by taking the time of synchronized clocks after step and to include this time into the authentication procedure .alice sends the bases used for the signals marked in the second step to bob .5 . bob compares this information with his measurements and announces to alice the elements of the generalized sifted key of length .the generalized sifted key is formed by two groups of signals .the first is the sifted key of the bb84 protocol formed by all those signals which bob can unambiguously interpret as a deterministic measurement result of a single photon signal state .the second group consists of those signals which are ambiguous as they can not be thought of as triggered by single photon signals .if two of bob detectors ( for example monitoring orthogonal modes ) are triggered , then this is an example of an ambiguous signal . the number of these ambiguous signals is denoted by . + the announcement of this step has to be included into the authentication .reconciliation : alice sends , in total , encoded parity - check bits over the classical channel to bob as a key reconciliation .bob uses these bits to correct or to discard the errors . during this stephe will learn the actual number of errors .the probability that an error remains in the sifted key is given by .depending on the reconciliation scheme , eve learns nothing in this step , or knows the position of the errors , or knows that bob received all the remaining bits correctly .7 . from the observed number of errors and of ambiguous non - vacuum results can conclude , using a theorem by hoeffding , that the expected disturbance measure is , with probability , below a suitable chosen upper bound . with probability find a value for which allows them to continue this protocol successfully . here is a weight factor fixed later on .given the upper bound on the disturbance rate , alice and bob shorten the key by a fraction during privacy amplification such that the shannon information on that final key is below .the shortening is accomplished using a hash function chosen at random . to make a statement about the entropy change eve achieved for this particular transmission they observe that this change is with probability less than .the probability can be estimated by .9 . in the last step alice chooses at random a suitable hash function which she transmits encrypted to bob using secret bits. 
then she hashes with that function her new key , the time from step , and the string of bases from step into a short sequence , called the _ authentication tag _ , the tag is sent to bob who compares it with the hashed version of his key . if no error was left after the error correction the tags coincide.this step is repeated with the roles of alice and bob interchanged .if bob detects an error rate too high to allow to proceed with the protocol , he does not forward the correct authentication to alice .the probability eve could have guessed the secret bits used by alice or by bob to encode their hashed message is given by .the probability that a discrepancy between the two versions of the key remains undetected is denoted by .the _ probability of detected failure _ is with and this failure does not compromise the security . in the case of success alice and bob can now say that , at worst , with a _ probability of undetected failure _ ( failure of security ) of ( with ) the eavesdropper can achieve an entropy change for the final key which is bigger than .the remaining probability describes the probability that alice and bob do not detect that they do not even share a key .note that the final authentication is made symmetric so that no exchange of information over the success of that step is necessary .otherwise a party not comparing the authentication tags could regard the key as safe in a separate - world scenario .more explanation about the authentication procedure can be found in section [ authentication ] .the classical information becoming available to eve during the creation of the sifted key will be taken care of in the calculations of section [ expected ] .the public channel is now used for the following tasks : * creation of the sifted key , where eve learns which signals reached bob and from which signal set each signal was chosen from , * transmission of encrypted parity check bits , on which eve learns nothing , * for bi - directional reconciliation methods : feedback concerning the success of parity bit comparisons ( see following section ) , * for reconciliation methods which discard errors : the location of bits discarded from the key , * announcement of the hash function chosen in this particular realization , * transmission of the encrypted hash function for authentication and of the unencrypted authentication tags .the main subject of this paper is to give the fraction by which the key has to be shortened to match the security target as a function of the upper bound on the disturbance .the estimation has to take care of all information available to eve by a combination of measurements on the quantum channel and classical information overheard on the public channel .this classical information depends on the reconciliation procedure used .the nature of this information might allow eve to separate the signals into subsets of signals , for example those being formed by the signals which are correctly ( incorrectly ) received by bob , and to treat them differently .the knowledge of the specific hash function is of no use to eve in construction of her measurement on the signals .this is a result of the assumption that eve attacks each signal individually and that the knowledge of the hash functions tells eve only whether a specific bit will count towards the parity bit of a signal subset or not .she only will learn how important each individual bit is to her .if the bit is not used then it is too late to change the interaction with that bit to avoid unnecessary errors , since 
the damage by interaction has been done long before .if it is used , then eve intends to get the best possible knowledge about it anyway. this situation might be different for scenarios which allow coherent attacks .in this section i explain in more detail the steps of the quantum key growing protocol .special attention is given to the security failure probabilities , limiting the security confidence of an established shared key , and to the failure probabilities , limiting the capability to establish a shared key . elements of the generalized sifted key are signals which either can be unambiguously interpreted as being deterministicly detected , given the knowledge of the polarization basis , or which trigger more than one detector .we think of detection set - ups where detectors monitor one relevant mode each . due to loss it is possible to find no photon in any mode .since eve might use multi - photon signals we may find photons in different monitored modes simultaneously , leading to ambiguous signals since more than one detector gives a click .detection of several photons in _one _ mode , however , is deemed to be an unambiguous result .( see further discussion in section [ evesinteraction ] . ) in practice we will not be able to distinguish between one or several photons triggering the detector .the length of the sifted key accumulated in that way is kept fix to be of length . for the reconciliationwe have to distinguish two main classes of procedures : one class corrects the errors using redundant information and the other class discards errors by locating error - free subsections of the sifted key . the class of error - correcting reconciliation can be divided in two further subclasses : one subclass uses only uni - directional information flow from alice to bob while the second subclass uses an interactive protocol with bi - directional information flow . the difference between the three approaches with respect to our protocol shows up in the number of secret bits they need to reconcile the string , the length of the reconciled string , and the probability of success of reconciliation . for experimental realization one should think as well of the practical implementation .for example , interactive protocols are very efficient to implement . to illustrate the difference i give examples for the error correction protocols .the benchmark for efficiency of error correction is the shannon limit .it gives the minimum number of bits which have to be revealed about the correct version of a key to reconcile a version which is subjected to an error rate .this limit is achieved for large keys and the error correction probability approaches then unity .the shannon limit is given in terms of the amount of shannon information contained in the version of the key affected by the error rate .for a binary channel , as relevant in our case , this is given by the minimum number of bits needed , on average , to correct a key of length affected by the error rate is then given by as mentioned before , perfect error correction is achievable only for .linear codes are a well - established technique which can be viewed in a standard - approach as attaching to each -bit signal a number of bits of linearly independent parity - check bits making it in total a -bit signal .the receiver gets a noisy version of this n - bit signal and can now in a well - defined procedure find the most - likely -bit signal . 
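the shannon benchmark referred to above is easy to evaluate numerically. the short sketch below is illustrative only, with an arbitrary key length and error rate, and assumes the standard closed form in which the minimum expected number of disclosed bits equals the key length times the binary entropy of the error rate:

```python
import numpy as np

def binary_entropy(e):
    """h(e) = -e log2(e) - (1 - e) log2(1 - e)."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * np.log2(e) - (1 - e) * np.log2(1 - e)

def shannon_limit_bits(n, e):
    """minimum expected number of bits that must be disclosed to reconcile an n-bit key
    whose copy is affected by bit error rate e (asymptotic benchmark)."""
    return n * binary_entropy(e)

print(shannon_limit_bits(10_000, 0.03))   # about 1.9e3 disclosed bits for a 3% error rate
```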
linear codes which will safely return the correct -bit signal if up to of the bits were flipped by the noisy channel are denoted by ] codes are designed to cope with the situation that even the parity bits might be affected by noise .one can partly take advantage of the situation that these bits are transmitted correctly . however , non - optimal performance is not a security hazard .the search for an optimal linear code is beyond the scope of this paper . to illustrate the problem i present as specific example the code ] of secret bits gained in that instance and the average number of input secret bits .then the condition for an overall gain on average is to have a positive value of resulting in -n_s \right\ } \\ & & -n_{\rm aut}-n_{\rm rec}\ ; . \nonumber\end{aligned}\ ] ] to explore the implications of this condition we go to the limit of large sample sizes .then we can neglect the number of secret bits used for authentication and and the safety parameter .the remaining contribution of now comes from the error correction part . for ideal error correction we can set and can use the shannon limit which gives with the shannon information shared between alice and bob given by with these preparations we find - n_{\rm sif } ( 1- i_{ab}(\epsilon_{\rm meas}))\ ; .\ ] ] in the limit of we can assume that still satisfies any confidence limits put on . therefore the condition is now equivalent to as we see from figure [ limitplot ] this means that the protocol in the presented form will be able to grow secret keys only for set - ups operating at an error rate of less than for error correction .however , making use of the concept of spoiling information and of improved estimates of might result in lower estimates for . a lower bound is , however , the shannon information shared by alice and eve in this scenario .fuchs et al . give in a sharp bound for , which is shown in figure [ limitplot ] as dotted line .the difference between and represent the average gain in a run of the key growing protocol in the limit of ideal error correction and infinite sample sizes .the gain gives the length of the final key as a fraction of the generalized sifted key .in this paper i have given estimates needed in quantum cryptography which are closely oriented towards practical experiments .i do not deal with security against all possible attacks in quantum mechanics , but i deal with all attacks on individual signals .this allows me to include issues related to practical implementation of quantum cryptography which still can not be treated in the general scenario .one of these issues is the question of signals which , for example , triggered simultaneously two detectors monitoring orthogonal polarization modes .( this is the question of multi - photon signals resent by eve , leading to ambiguous signals . ) the other important question is that of an efficient key reconciliation prior to privacy amplification .as seen in this paper it is possible to use the efficient bi - lateral error correction scheme of brassard and salvail without compromising security . 
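as a generic illustration of the linear-code picture described above (parity-check bits appended to a block, syndrome decoding to locate a flipped bit), here is the [7,4] hamming code; whether this matches the specific example code of the text is not clear from this excerpt:

```python
import numpy as np

# parity-check matrix of the [7,4] hamming code; column j is the binary expansion of j+1,
# so positions 1, 2 and 4 carry the three parity bits and the other four positions carry data
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct_single_error(word):
    """syndrome decoding: the syndrome read as a binary number is the 1-based position
    of a single flipped bit (0 means no error detected)."""
    syndrome = (H @ word) % 2
    pos = syndrome @ np.array([1, 2, 4])
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies H @ c = 0 (mod 2)
noisy = codeword.copy()
noisy[4] ^= 1                                # a single bit flip
assert np.array_equal(correct_single_error(noisy), codeword)
```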
in the statistical analysisi showed that it is possible to limit in this scenario the knowledge of the eavesdropper on the final key in a individual realization from _ measured quantities _ for parameters which seem to be reachable in experiments .as measure of the eavesdropper s knowledge i used the change between a - priori and a - posteriori shannon entropy associated with the corresponding probability distributions over all possible keys from eve s point of view .one has to take into account that single photon signals states are not used in today s experiments .however , this theory can be extended to signal states containing multi - photon components .a first approach for that is to estimate for each bit of the reconciled key on which eve could have performed successfully a splitting operation with subsequent delayed measurement .denote by the total number of these bits , then we need to reduce the key during privacy amplification by the statistics , however , becomes more complicated this way and it seems to be better to include the dim coherent states directly as signal states and to solve the problem in a clean way .work in that direction is currently under progress .the estimates for are not necessarily sharp in the case of error correction , and even in the case of discarding errors this limit could be lowered using spoiling information .however , the possible improvement of efficiency of the key growing process is limited and this fine - tuning might be postponed until the experimental relevant situation for dim coherent signal states is solved .i would like to thank miloslav duek , richard hughes , paul townsend and the participants of the 1997 workshop on quantum information at the institute for scientific interchange ( italy ) for discussions and steven van enk for helpful critical comments on the manuscript .for fincancial support i would like to thank elsag bailey and the academy of finland .the foundations to this article were laid while i did research for my phd thesis under supervision and support of steve barnett .in this appendix we prove the inequality ( [ cauchyapplied ] ) starting from the expression we rewrite the first sum as and use the cauchy inequality , given as or we set and to obtain the inequality this can be used to estimate the first part in ( [ intermediate2 ] ) while the second part can be estimated similarly so that , with the help of eqn .( [ n1vector ] ) , we find the result optimize the expression ( [ cauchyapplied ] ) we first note that we can assume that .if eve starts with a strategy defined by operators not satisfying this condition , then she could use the a - operators without a change in the obtained collision probability or disturbance .when we combine the two strategies we find that the resulting vectors satisfy and .this then gives the estimate .another observation is that we can always choose which means that there are less or equal errors in the sifted key coming from the use of the polarization basis + than from the basis . this can be always satisfied , since both polarization basis could be interchanged . using and the definition of and this results in with the angle between and .the three relevant relations now become after elimination of according to ( [ en1 ] ) and the use of the relations ( [ vectorrelation ] ) our next step is to show that we can estimate the optimal value of by replacing by . to see that we observe that this would allow to decrease by eqn ( [ middleeq ] ) , meaning a lower error rate . 
at the same time grows indirectly from the falling value of and directly , since with . to prove the last pointwe calculate this is positive , if is positive .this is , indeed , the case since allows us to evaluate at the maximal value of where it gives zero .this proves that and with that .therefore , three relevant equations become we solve ( [ e3 ] ) and ( [ p3 ] ) for and and insert these into ( [ pc3 ] ) . the maximum over then taken and we find the strategy resulting in this collision probability is described by in the derivation we have chosen and find the optimal solution respects this choice for . for find so that we conclude that start from equation ( [ pccorrected ] ) and use the cauchy inequality in a similar way as in appendix [ optimisation ] .we obtain the bound next we introduce the angles between the corresponding vectors , make use of the relations ( [ vectorrelation ] ) and ( [ dotproductrelation ] ) , use the symmetry argument as in appendix [ optimisation ] and find after some transformation the set of equations the first observation is that it is optimal to choose since this choice optimizes while it leaves unchanged .the second observation is that the choice of within the subspace defined by and fixed values of and is optimal if this choice is possible . in this casewe are left with the equations at the end of a short maximization calculation we find a solution consistent with symmetry condition ( [ symmcond ] ) for .it is given by this maximum is obtained by choosing the values and .the symmetry condition ( [ symmcond ] ) then gives which limits the range of validity to .for we find the optimal solution by selecting .a short maximization calculation then gives the bound for the choice of parameters and .we apply cauchy inequalities to equation ( [ pccorrpos ] ) and use the vector notations , , , and to find it becomes clear immediately that we can replace by and by because of relations similar to ( [ dotproductrelation ] ) .similar to the calculations in appendices [ optimisation ] and [ corrected ] we introduce the angles and use the relations ( [ vectorrelation ] ) and ( [ dotproductrelation ] ) and the symmetry argument introduced in appendix [ optimisation ] to find the new form of ( [ pcpc ] ) as \nonumber \end{aligned}\ ] ] while we take from appendix [ corrected ] the expression for as we next perform a variation along the path defined by and find that is optimized for the choice .an optimization calculation for the remaining parameters leads to the estimate for a disturbance .this optimum is obtained by choosing and .n. ltkenhaus and s. m. barnett , in _ proceedings of an international workshop on quantum communication , computing , and measurement , held september 25 - 30 in shizuoka , japan _ , o. hirota , a. s. holevo , and c. m. caves , eds . , ( plenum press , new york , 1997 ) .
in this article i present a protocol for quantum cryptography which is secure against attacks on individual signals. it is based on the bennett-brassard protocol of 1984 (bb84). the security proof is complete as far as the use of single photons as signal states is concerned. emphasis is given to the practicability of the resulting protocol. for each run of the quantum key distribution, the security statement gives the probability of successful key generation and the probability that an eavesdropper's knowledge, measured as the change in shannon entropy, stays below a specified maximal value.
in designing wearable products , such as garments or head gears , one of the most important objectives is to make the products to fit the humans comfortably . in order to accommodate the human shape variabilities ,different sizes are usually created .traditionally , this is achieved by considering a few key dimensions of the human body . to design a sizing system ,anthropometric measurement data of these dimensions are collected and tabulated .then , a griding of the dimensions is formed to create the sizes .although traditional anthropometry has a long history and has accumulated a vast amount of data , it is limited by its tools ( mainly tape measure and caliper ) ; the sparse dimensional measurements do not provide sufficient shape information . in many applications , designers need full 3d models as representative shapes for the sizes . these models are sometimes called manikins when the full body is concerned , or head forms in case of the head and face are concerned . for the purpose of generality , we call them _ design models _ in this paper .these 3d models provide the overall shape of the human body .they need to be chosen carefully to ensure that the manufactured items fit the target population . without 3d information ,design models are often created by artists who sculpt out the 3d forms , interpolating the measurement dimensions using their experience in creating human shapes .these approaches are labour - intensive and do not create accurate human models . 3d anthropometric data , obtained using 3d imaging technologies , provide detailed shape information .in addition , traditional measurements can also be extracted from the 3d models .therefore , 3d anthropometric data offers an opportunity to improve the quality of the design models and , at the same time , maintain the simplicity of the traditional design schemes where key body dimensions are used .despite the fact that 3d anthropometric databases have been available for two decades , surprisingly little research has been reported concerning using the 3d data to improve the design models .robinette , ball ( * ? ? ?* chapter 7 ) , and meunier et al . are recent attempts in this direction .however , all of these methods rely heavily on manual interaction and do not give a systematic way to optimize the design models such that the sizing system provide maximal accommodation . in this paper, we present a method for automatically creating a set of design models based on a 3d anthropometric database and optimally computing the design models that represent the underlying target population .starting with a 3d anthropometric database that represents the target population for the product to be designed , the method requires as input the ordered set of measurements on the human body that need to be considered during the design phase .for instance , when designing glasses , the set to be considered are the width of the face and the width of the bridge of the nose .furthermore , the method requires as input an ordered set of tolerances ; one tolerance for each measurement in . in the example with the glasses, we may know that the glasses to be designed can be adjusted by to fit faces with different widths and by to fit noses with different widths of the bridge . in this case, contains the tolerances and .if the product is non - adjustable , the tolerances represent the measurement intervals that the sizes cover . 
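as a purely illustrative way to fix notation, the inputs of the method can be bundled as an ordered list of measurement names and matching tolerances; the names and numerical values below are hypothetical and not taken from the text:

```python
from dataclasses import dataclass

@dataclass
class DesignSpec:
    """ordered measurements and matching tolerances for one product (hypothetical example)."""
    measurements: tuple   # body dimensions the design depends on, in a fixed order
    tolerances: tuple     # admissible adjustment range along each dimension, same order

glasses = DesignSpec(measurements=("face_width", "nose_bridge_width"),
                     tolerances=(2.0, 0.5))   # made-up adjustability values, e.g. in cm
```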
with this information ,the method finds a set of design models that optimally represent the population acquired in the database for the specific purpose of designing a product that depends on the measurements and that allows adjustments along the tolerances .the method proceeds by mapping each of the subjects in the database to a point in and by solving a discrete covering problem in this space .we use the subjects in the database directly instead of learning their underlying probability distribution since the type of distribution is usually unknown .the design models created using this approach are expected to have higher data accuracy and completeness than traditionally produced design models because we compute the best design models from a database that represents the target population .sections [ conversion ] , [ cover_box ] , and [ fixed_box ] discuss how to convert the problem to a discrete covering problem and how to solve the covering problem . to obtain the design models, the method employs the full procrustes mean as outlined in section [ gpa ] .the design models represent the given database .if the database is large , we expect good design models .if the database contains few subjects , it is possible to assume and fit a probability distribution satisfied by the population and to extrapolate subjects from the database based on that distribution . in case of a small database , this approach may provide more accurate results .section [ extrapolate ] discusses this option .section [ experiments ] gives design examples .some prior work has focused on operating in a parameter space based on traditional sparse anthropometric measurements . if only sparse anthropometric measurements are known , it is not straight forward to find 3d design models because computing a 3d model based on a small set of measurements is an under constrained problem . in this work ,the shape information is provided by a 3d anthropometric database .this data allows to compute design models by constraining them to lie within the shape space spanned by the database . in this work ,we use the civilian american and european surface anthropometry resource ( caesar ) database to compute a sizing system .this database contains human shapes in a similar posture .furthermore , each shape contains a set of 73 anthropometric landmarks . we exploit this information to parameterize the modelsmcculloch et al . used the traditional sparse anthropometric measurements to automatically create an apparel sizing system that has good fit . for a fixed set of measurements on the body , a fixed number of sizes , and a fixed percentage of the population that needs to be fitted by the sizing system , they optimize the fit of the sizing systemthe fit for a human body is defined by a weighted distance function between the measurements on the body and the measurements used for the sizing system . 
optimizing the fit amounts to solving a non - linear optimization system .this system is hard to solve and suboptimal solutions may be obtained .this approach can operate in multi - dimensional spaces .it is not straight forward to obtain design models from the resulting sizing system because computing a design model from a sparse set of measurements is an under constrained problem .mochimaru and kouchi represented each model in a database of human shapes using a set of manually placed landmark positions .they proposed an approach to find representative three - dimensional body shapes .the approach first reduces the dimensionality of the data using multi - dimensional scaling , and then uses principal component analysis ( pca ) to find representative shapes .mochimaru and kouchi showed that this approach is suitable to find representative shapes of a human foot .while this approach is fully automatic , it assumes that the distortion introduced by multi - dimensional scaling is small .while this may be true for low - dimensional data , it is not in general true when three - dimensional measurements of high resolution are considered .hence , there is no guarantee that the design models optimally represent the population .recently , three - dimensional measurements of high resolution have been used to aid in the design process .robinette studied different ways to align a database of 3d laser scans of heads for helmet design .the goal is to find alignments with minimum variability of shape .this way , one can design helmets that offer maximum protection .meunier et al . used a database of 3d laser scans of heads to objectively assess the fit of a given helmet .this assessment strategy is useful to analyze existing designs , but it does not produce design models .kouchi and mochimaru proposed an approach to design spectacle frames based on database of human face shapes .the approach proceeds by analyzing the morphological similarities of the faces in the database and by dividing the faces into four groups . for each group, a representative form is found automatically . in this work ,a spectacle frame was designed for each representative shape , and it was shown that a good fit was achieved .guo et al . proposed a similar approach to construct design models for helmet design using a database of magnetic resonance images of heads .the approach proceeds by automatically dividing the head shapes into groups and by constructing one design model per group .although the methods by kouchi and mochimaru and guo et al . require little manual work , the methods are specific to spectacle and helmet design , respectively , and can not easily be extended to the design of other gear or garments .furthermore , the division of heads into groups is not guaranteed to produce design models that optimally represent a target population .meunier et al . propose to parameterize a set of head scans and to perform pca on the parameterized scans .they then use the first two principal components to manually design a set of three design models . 
while this approach aims to exploit the information provided by a parameterized database of head shapes ,the approach is heuristic and assumes that the data follows a gaussian distribution .furthermore , manually picking the design models based on a learned distribution is hard in a high - dimensional space without reducing the dimensionality of the data because humans can not easily visualize high - dimensional spaces .hence , if the aim is to consider four or more principal components , the approach by meunier et al . becomes very difficult to use .we propose a fully - automatic method to compute design models that represent a given 3d anthropometric database well .since the approach computes the design models automatically , it can operate in high - dimensional spaces . unlike the method by mcculloch et al . , our method does not rely on solving a non - linear optimization system . instead , we model the fit explicitly using a set of tolerances ( one tolerance along each dimension ) that explains by how much the garment or gear can be adjusted along each dimension . these tolerances depend directly on the design and the materials that are used in a specific application . in this way, our method finds optimal design models for a specific task such as helmet design . to our knowledge , this is the first method that simultaneously considers the optimal fit accommodation and design models .the proposed approach proceeds by first parameterizing all of the subjects present in the database . in general, finding accurate point - to - point correspondences between a set of shapes is a hard problem .however , in our application , we know that the shapes are human body shapes , which allows the use of template - based approaches to parameterize the database .first , consider the case where the scans are assumed to be in similar posture .this assumption is common in our application because in a typical 3d anthropometry survey , the human subjects are asked to maintain a standard posture . in this case, we can proceed by first using ben azouz et al.s approach to automatically predict landmarks on the scans followed by xi et al.s approach to parameterize the models .ben azouz et al.s approach proceeds by learning the locations of a set of anthropometric landmarks from a database of human models using a markov random field ( mrf ) , and by using probabilistic inference on the learned mrf to automatically predict the landmarks on a newly available scan .xi et al.s approach exploits the anthropometric landmarks to fit a template model to the scans .the method proceeds in two steps .first , it computes a radial basis function that maps the anthropometric landmarks on the template mesh to the corresponding landmarks on the scan .this function is used to deform the template mesh .second , the method further deforms the template to fit the scan using a fine fitting approach as in allen et al . .second , consider the general case where the postures of the scans vary . in this case , we can again proceed by first predicting landmarks on the scans automatically and by using a template - based deformation to parameterize the models . when using this method , the locations of the landmarks are learned and predicted in an isometry - invariant space and the template fittingapproach uses a skeleton - based deformation to allow for posture variation . 
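as an aside, the coarse landmark-driven deformation in the first fitting step can be illustrated with a small sketch. the snippet below is a generic gaussian radial basis function warp written only for illustration (it is not xi et al.'s implementation; all names and toy data are hypothetical): a displacement field is fitted so that the template landmarks land on the scan landmarks, and that field is then applied to every template vertex. a thin-plate-spline kernel and the fine-fitting stage described above would follow in a real pipeline.

import numpy as np

def rbf_warp(src_landmarks, dst_landmarks, points, eps=1.0):
    # fit a gaussian rbf displacement field mapping source landmarks onto
    # target landmarks, then evaluate that field at every template vertex
    def phi(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * eps ** 2))
    K = phi(src_landmarks, src_landmarks)
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(K)),
                              dst_landmarks - src_landmarks)
    return points + phi(points, src_landmarks) @ weights

rng = np.random.default_rng(0)
src = rng.normal(size=(73, 3))                 # e.g. 73 template landmarks
dst = src + 0.1 * rng.normal(size=src.shape)   # corresponding scan landmarks
template = rng.normal(size=(1000, 3))          # template mesh vertices
print(rbf_warp(src, dst, template).shape)      # (1000, 3)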
in this work ,we use the caesar database .the models of this database are in similar posture and for each model , we know a set of 73 manually placed anthropometric landmarks .hence , we use the anthropometric landmarks as input to xi et al.s approach to parameterize the models .once the database is parameterized , each subject of the database is represented by a triangular mesh .we can now measure corresponding distances on all of the bodies .this paper considers euclidean distances .however , the techniques outlined in the following extend to arbitrary distances , such as geodesics .the approach computes the ordered set of distances that are meaningful for the design of a specific garment or gear for each of the bodies in the database . for each body ,the measurements in order can be viewed as a point in the _ parameter space _ .the entire database is then represented as a set of points in .the designer can specify a range of fit along each of the dimensions that were measured on the bodies .this range specifies how much a garment stretches or by how much gear can be adjusted in the given direction .the ordered set of ranges in order defines the side lengths of a -dimensional box in .let denote the side length in dimension and let denote the -dimensional box with side lengths centered at the origin .the problem of computing a sizing system that fits the given population can now be expressed as a covering problem in .that is , we aim to cover all of the points in with translated copies of .sections [ cover_box ] and [ fixed_box ] discuss how to solve this problem .once the points in are covered by a set of translated copies of , we aim to convert each box back into a body shape that represents the points covered by .this is achieved by computing the full procrustes mean shape of the body shapes corresponding to the points covered by .section [ gpa ] discusses this step in detail .this section discusses the problem of finding the minimum number of translated copies of that cover all of the points in .we first analyze how many translated copies of need to be considered by the algorithm .when considering only one distance dimension , the box becomes a line segment of fixed length .note that two line segments that are combinatorially equivalent ( i.e. they cover the same points of ) do not add anything to the space of solutions .therefore , the algorithm only needs to consider translated line segments that are combinatorially different from each other .since the set covered by a translated copy of only changes if a point of either enters or leaves the set , the algorithm only needs to consider the translated line segments with endpoints . since a line segment has two endpoints and since there are points in , we need to consider boxes . extending this argument to distance dimensionsyields that at most translated copies of need to be considered .the problem of finding the minimum number of boxes among the set of at most translated copies of that cover all of the points in can now be viewed as a set cover problem .a history of this problem can be found in vazirani ( * ? ? ?* chapter 2 ) .this problem is np - hard and it is therefore impractical to find an optimal solution .hence , we aim to find an approximation to the solution that has bounded error .a solution is a -approximation of the optimal solution if it uses boxes with for any , where is the number of boxes used by the optimal solution . 
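to make the covering formulation concrete, the sketch below (hypothetical names, random stand-in data in place of measured distances) builds the coverage relation between candidate boxes and measurement points: a box whose side lengths are the designer-specified tolerances covers a point if the point lies within half a tolerance of the box center along every dimension.

import numpy as np

def covers(center, points, tol):
    # boolean mask of points covered by the axis-aligned box with side
    # lengths tol centered at center
    return np.all(np.abs(points - center) <= tol / 2.0, axis=1)

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 2))   # one row per subject in parameter space
tol = np.array([0.5, 0.4])           # fit ranges chosen by the designer

# candidate boxes centered at the data points (the reduced candidate set
# discussed below); coverage[i, j] is True iff box i covers point j
coverage = np.array([covers(c, points, tol) for c in points])
print(coverage.shape)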
in the following ,we discuss how to compute a -approximation .we then discuss a more efficient greedy algorithm to find a set of boxes that is a -approximation of the optimal solution .this section summarizes the approach by hochbaum and maass to find a -approximation to the covering problem .the approach proceeds by dividing the parameter space into a regular grid with width along each dimension . for a given input parameter related to , the approach partitions the parameter space into slabs of grid cells along each dimension .note that partitions are possible because the locations of the slabs can be shifted times along each dimension .for each of the partitions , the algorithm computes the union of optimal solutions in the sets of grid cells created by the partition .the algorithm takes time . for a detailed analysis ,refer to .this is only practical for problems with few points in low dimensions .hence , we only implemented this approach for .this section summarizes an efficient algorithm to find a solution to the covering problem .unfortunately , the solution is not guaranteed to be a -approximation of the optimal solution .the approach proceeds by considering translated copies of .we denote these boxes . for each box ,we compute the set of points of covered by .this yields a collection of sets .our goal is to find a small subset of the sets that covers all of the points in .we solve the problem using a greedy approach by repeatedly selecting the set covering the maximum number of uncovered points until all points are covered .finding the set of points covered by one box takes time .hence , finding all of the sets takes time . we store the following information . for each point ,we store which boxes contain , for each box , we store which points are contained in , and we store for each box the number of uncovered points in .the set that covers the maximum number of uncovered points can now be found in time .furthermore , using the stored information , we can remove one point from all of the boxes in time .since there are a total of points , the algorithm takes time .hence , this approach takes time . as discussed above ,if we consider all of the combinatorially different boxes , is at most .when all combinatorially different boxes are considered , it can be proven that this greedy approach is a -approximation of the optimal solution ( * ? ? ?* chapter 2 ) .hence , we can find a -approximation of the optimal solution in time .denote the set of all combinatorially different boxes by . in practice , we reduce the number of boxes further by considering only the translated copies of with centers in . denote the set of these boxes by .this way , we obtain and reduce the running time to at the cost of not considering all of the combinatorially different boxes . to analyze the approximation ratio of this solution , consider the case , where the boxes become line segments .assume that the optimal solution picks a line segment not included in .recall that is in the set .all of the elements covered by can be covered by two line segments in ; namely , the line segment centered at the leftmost point in and the line segment centered at the rightmost point in .this argument can be generalized to -dimensional space as follows .a box picked by the optimal solution that is not included in can be covered by boxes in .hence , the optimal solution using needs at most times the number of boxes picked by the optimal solution using . 
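the greedy rule itself is compact; the sketch below assumes the boolean coverage relation between candidate boxes and points has already been built (with the candidate boxes centered at the data points, as in the reduced set above) and is an illustration of the generic greedy set-cover step rather than the authors' code.

import numpy as np

def greedy_cover(coverage):
    # coverage[i, j] says whether candidate box i covers point j; repeatedly
    # pick the box covering the most still-uncovered points
    n_boxes, n_points = coverage.shape
    uncovered = np.ones(n_points, dtype=bool)
    chosen = []
    while uncovered.any():
        gains = (coverage & uncovered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:   # nothing left that any candidate can cover
            break
        chosen.append(best)
        uncovered &= ~coverage[best]
    return chosen

rng = np.random.default_rng(1)
points = rng.normal(size=(200, 2))
tol = np.array([0.6, 0.6])
coverage = np.array([np.all(np.abs(points - c) <= tol / 2, axis=1) for c in points])
print("boxes used:", len(greedy_cover(coverage)))

stopping after a fixed number of boxes has been selected turns the same loop into the maximum-coverage variant discussed below.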
since we solve the problem using the greedy algorithm on and since the greedy algorithm is known to compute a -approximation of the optimal solution ( * ? ? ?* chapter 2 ) , our approach is guaranteed to compute a -approximation of the optimal solution .in some applications such as the design of garments , it is not desirable to produce sizes for an entire population . instead , one wishes to produce a fixed number of sizes that fits the largest portion of the population .for instance , a company that manufactures t - shirts may wish to manufacture three sizes ( small , medium , and large ) in a way that the sizes fit the largest possible portion of the population . in parameter space, this means that we do not wish to cover all of the points in .instead , the goal is to cover the maximum number of points in with a fixed number of boxes .this problem is also np - hard since a polynomial - time algorithm to solve this problem would give a polynomial - time algorithm to solve problem considered in section [ cover_box ] .hence , we solve this problem using a greedy approach .unfortunately , the solution is not guaranteed to be a -approximation of the optimal solution . however , by a similar argument to the one in the previous section , we can show that the approach computes a -approximation of the optimal solution .the greedy approach proceeds as in section [ greedy_cover ] .the only difference is that we stop after the first boxes are selected . using an analysis similar to the one in section [ greedy_cover ] ,it can be shown that this algorithm takes time , where is the number of boxes considered by the algorithm .recall that in theory and that we set in practice .once the algorithm selected a set of boxes to represent the given measurements , we aim to convert each box back into a body shape that represents the points covered by .this is achieved by finding the parameterized body shapes corresponding to points in covered by and by computing the full procrustes mean of these shapes . to compute the full procrustes mean of shapes, we repeatedly compute the average of the shapes and align each of the shapes to the average shape using a rigid transformation .if the box contains a sufficient number of points and if the mean of these points is close to the center of , the procrustes mean is a good representative of .otherwise , the procrustes mean of these shapes will not yield a good representation of the shapes covered by .if this situation occurs , the approach can be modified by sampling a set of points in and by finding the shapes corresponding to these points as outlined in section [ extrapolate ] .we can then find a design model by computing the full procrustes mean of the shapes .this approach yields a set of body shapes corresponding to the boxes . the shapes can now be used for the design of garments or gear .note that the shapes are not simply scaled versions of each other since each shape is derived from a different set of true body scans .we use each shape as a design model .the design models computed in the previous sections represent the given database .if the database contains few subjects , the coverage of the target population by the computed design models may be small . in this case , extrapolating subjects from the database may provide more accurate results . in order to extrapolate subjects from the database , we need to assume that the target population obeys a specific probability density . 
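stepping back for a moment to the procrustes-mean step above, a minimal sketch is given below; it assumes the shapes are already in correspondence (the same number of vertices in the same order), removes translation by centering, and uses only rigid alignment, as in the text. the toy shapes are random stand-ins for parameterized body meshes.

import numpy as np

def procrustes_align(shape, target):
    # optimal rotation of one centred vertex matrix onto another
    # (the orthogonal procrustes solution; may contain a reflection)
    u, _, vt = np.linalg.svd(shape.T @ target)
    return shape @ (u @ vt)

def full_procrustes_mean(shapes, n_iter=10):
    # iteratively average the shapes, re-aligning each one to the current
    # mean at every pass
    shapes = [s - s.mean(axis=0) for s in shapes]
    mean = shapes[0]
    for _ in range(n_iter):
        aligned = [procrustes_align(s, mean) for s in shapes]
        mean = np.mean(aligned, axis=0)
    return mean

rng = np.random.default_rng(2)
base = rng.normal(size=(50, 3))   # toy template with 50 vertices
shapes = [base + 0.05 * rng.normal(size=base.shape) for _ in range(5)]
print(full_procrustes_mean(shapes).shape)   # (50, 3)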
in this section ,we assume that the data follows a gaussian distribution .we discuss how to extrapolate subjects from the database to improve the coverage of our method .we use the approach by allen et al . to compute a new shape based on a new point in .since the database is parameterized and follows a gaussian distribution , we can perform pca of the data . in pca space ,each shape is represented by a vector of pca weights .pca yields a mean shape and a matrix that can be used to compute a new shape based on a new vector of pca weights as . recall that we aim to create a new shape based on a new point in . to achieve this goal, we use the database to learn a linear mapping from to .this mapping is called _ feature analysis _ and described in detail by allen et al .feature analysis yields a matrix that can be used to compute a new vector of pca weights based on a new point as .this model allows to compute a shape based on a new point as .it remains to outline how to find new points .we assume that the points can be modeled by a gaussian distribution .we learn the distribution from the given database using maximum likelihood estimation .we then sample a set of points from this distribution and find the corresponding shapes using feature analysis .we use these new subjects along with the given database to find the design models .another way to find new points is to sample the surface of a fixed equal probability density of the learned gaussian distribution ( this surface is an ellipsoid ) .this approach is useful to increase boundary coverage .we use as database 50 faces containing 6957 triangles each from the caesar database .figure [ extra_faces ] shows three faces that were obtained using feature analysis .we demonstrate the proposed approach for the design of glasses and for the design of helmets .the first example shows the concept of the presented approaches using a small database of faces .the second example gives an evaluation of the greedy approaches using a large database of heads .when designing glasses , one may wish to measure the width of the face and the width of the bridge of the nose .using these two measurements yields .figure [ measurements ] shows the measurements on one face in the database .the first dimension measures the euclidean distance between the blue points and the second dimension measures the euclidean distance between the red points .for this example , we use 50 faces from the caesar database .we choose a small database since this allows to illustrate the result of the -approximation .in our example , we aim to design glasses that can be adjusted by in the first dimension and by in the second dimension .this defines the box .figure [ approx ] shows the result when covering using a -approximation with .the figure shows the points as grey triangles and the centers of the boxes as black squares .furthermore , for each box , the figure shows a screen shot of the corresponding procrustes mean shape .these face shapes can be used to create the sizes for the glasses .figure [ greedy ] shows the result of a greedy covering of .recall that this is a -approximation of the optimal solution .the symbols used in the figure are identical to the ones in figure [ approx ] .figure [ greedy](a ) shows the result when we aim to cover the entire population .we can see that in this example , we only require one extra shape when covering with the greedy algorithm than when covering using a -approximation with .figure [ greedy](b ) shows the result of greedily covering a large 
subset of with three boxes .we can see that the selected boxes cover the parameter space well . [cols="^,^ " , ] when designing a helmet , the three most crucial measurements are the head width , the head depth , and the face hight . using these three measurements yields .figure [ measurements_helmet ] shows the measurements on one head in the database .the first dimension measures the euclidean distance between the red points , the second dimension measures the euclidean distance between the green points , and the third dimension measures the euclidean distance between the blue points .for this example , we conduct an evaluation .we use 1500 heads from the caesar database to compute the design models and we then test the quality of fit using 500 different heads from the caesar database .in our example , we aim to design helmets that can be adjusted by in the first dimension , by in the second dimension , and by in the third dimension .this defines the box .figures [ greedy_head_complete ] and [ greedy_head ] show the results of a greedy covering of .the figures show the points as black points and the centers of the boxes as red points .furthermore , for each box , the figures show a screen shot of the corresponding procrustes mean shape .the coordinate axes are shown in the colour of the corresponding dimension ( see figure [ measurements_helmet ] ) .figure [ greedy_head_complete ] shows the result when we aim to cover the entire population .this covering requires eight design models .figure [ greedy_head ] shows the result of greedily covering a large subset of with three boxes .the boxes corresponding to the three design models cover a large subset of .note that for the data used in this example , it is not easy to manually find the best locations of the boxes since we use three measurements and since it is hard to optimally place points manually in three - dimensional space .+ we use 500 different head shapes to compute the quality of fit of the computed design models as follows .we compute the points in corresponding to the 500 head shapes and we compute how many of these points are covered by at least one of the computed boxes . the eight design models computed using the greedy covering algorithm shown in figure [ greedy_head_complete ] cover of all the shapes .the three design models computed by greedily covering the largeset subset of using three boxes shown in figure [ greedy_head ] cover of all the shapes .this shows that when using the three design models shown in figure [ greedy_head ] to design three sizes of a helmet , we expect that of all adults find that at least one of these three helmets fits them .this paper presented a novel approach to generate design models for the design of gear or garments .the approach makes use of the widely available anthropometric databases to find design models that represent a large portion of the population .we find the design models by solving a covering problem in a low - dimensional parameter space .note that in this paper , we use translated boxes of the same size to cover the parameter space .this seems the most intuitive shape with which a designer may wish to cover the parameter space . 
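the fit-accommodation figures quoted above amount to counting how many held-out points fall inside at least one selected box; a small sketch with stand-in data (hypothetical names, random points in place of the validation heads):

import numpy as np

def fit_accommodation(test_points, centers, tol):
    # fraction of held-out points covered by at least one design-model box
    covered = np.zeros(len(test_points), dtype=bool)
    for c in centers:
        covered |= np.all(np.abs(test_points - c) <= tol / 2.0, axis=1)
    return covered.mean()

rng = np.random.default_rng(3)
test_points = rng.normal(size=(500, 3))   # stand-in for the validation heads
centers = rng.normal(size=(3, 3))         # stand-in for three selected boxes
tol = np.array([1.0, 1.0, 1.0])
print(f"accommodated fraction: {fit_accommodation(test_points, centers, tol):.1%}")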
to solve this problem, we presented a slow -approximation and a fast -approximation of the optimal solution .if the aim is to cover the parameter space with translated balls or ellipsoids of the same size , all of the algorithms in this paper can be adapted to this scenario .if the aim is to cover the parameter space with boxes , balls , or ellipsoids of different size or orientation , the algorithms presented in sections [ greedy_cover ] and [ fixed_box ] can be adapted to this scenario .the resulting design models are only as good as the given anthropometric database .this paper discussed the option of improving the coverage of the target population by extrapolating models from the database before computing the design models .we thank pengcheng xi for providing us with the data .this work has partially been funded by the cluster of excellence _ multimodal computing and interaction _ within the excellence initiative of the german federal government .
when designing a product that needs to fit the human shape , designers often use a small set of 3d models , called _ design models _ , either in physical or digital form , as representative shapes to cover the shape variabilities of the population for which the products are designed . until recently , the process of creating these models has been an art involving manual interaction and empirical guesswork . the availability of the 3d anthropometric databases provides an opportunity to create design models optimally . in this paper , we propose a novel way to use 3d anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear . we generate the representative shapes by solving a covering problem in a parameter space . well - known techniques in computational geometry are used to solve this problem . we demonstrate the method using examples in designing glasses and helmets .
the relationship between traffic and air pollutants such as has been examined using many different approaches [ e.g. , ] .proximity to traffic has frequently been used as a proxy for traffic related air pollution exposure in environmental health [ ] . in such studies ,the goal is to determine whether there is a relationship between air pollution and health outcomes . when direct measurements of specific pollutant levels are not available , proximity to roadways and traffic levels are sometimes used as proxies . in general, levels decline with distance from a highway [ ] .while data on proximity to major roads have proven to be a cost - effective approach in epidemiological studies of traffic exposure , they do not necessarily account for traffic volume .inclusion of volume further improves the quality of traffic exposure measurement [ ] .for instance , found that including an index of traffic intensity and proximity in a model , along with an indicator of gas cooker use in the home , improved the correlation between model estimates and levels of nitrogen dioxide measured from a monitor located close to a child s home or school .other studies [ e.g. , ] also used traffic volume to improve the quality of exposure information . one way to include traffic volume information in a model is to introduce vehicular counts within a buffer zone , which call weighted - road - density .the idea is to calculate the total ( road length traffic volume ) for a given circle and divide it by the area , that is , , where is the length of a segment , the traffic volume and the radius of the circle .either actual traffic counts or a road classification system can be used for .the authors found that actual traffic counts were better at predicting than a simple hierarchical classification of roads .in addition , weighted road density was found to be a better predictor than proximity to a major road. rose et al.s ( ) method assumed that all roads within a circle had the same effect regardless of distance to the point of interest . proposed a method that made use of road density , traffic volume and distance to roads from points of interest .they were able to estimate a dispersion function for a pollutant , which improved estimates of over those obtained using only average daily traffic ( adt : number of vehicles / day ) on the closest highway , adt on the busiest highway within a buffer and the sum for all road segments within a buffer .the underlying framework for the methods reviewed above is land use regression which uses traffic - related variables as predictors for [ e.g. , ] . added a time - varying component to a model using multiple linear regression to forecast levels up to 8 hours in advance by using current and past 15 hours meteorology along with traffic information .further methods for assessing intraurban exposure were reviewed by : ( i ) statistical interpolation [ ] , ( ii ) line dispersion models [ ] , ( iii ) integrated emission - meteorological models [ ] , and ( iv ) hybrid models combining personal or household exposure monitoring with one of the preceding methods [ ] , or combining two or more of the preceding methods with regional monitoring [ ] . 
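for illustration, the weighted-road-density idea described above can be sketched as follows; this is a simplified version rather than the original implementation: segments are attributed to the buffer by their midpoints instead of being clipped to the circle, and all inputs are synthetic.

import numpy as np

def weighted_road_density(site, seg_mid, seg_len, seg_adt, radius):
    # sum of (segment length x traffic volume) over segments whose midpoint
    # lies within the buffer, divided by the buffer area
    d = np.linalg.norm(seg_mid - site, axis=1)
    inside = d <= radius
    return np.sum(seg_len[inside] * seg_adt[inside]) / (np.pi * radius ** 2)

rng = np.random.default_rng(4)
seg_mid = rng.uniform(0, 5000, size=(1000, 2))   # segment midpoints (metres)
seg_len = rng.uniform(16, 2000, size=1000)       # segment lengths (metres)
seg_adt = rng.uniform(0, 50000, size=1000)       # average daily traffic counts
site = np.array([2500.0, 2500.0])
print(weighted_road_density(site, seg_mid, seg_len, seg_adt, radius=500.0))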
broke down the alternatives into just two categories : dispersion - based models and empirical models .as pointed out by , a disadvantage of geostatistical interpolation is the limited availability of monitoring data .this approach requires a reasonably dense network of sampling sites .government monitoring data generally come from a sparse network of stations , giving rise to systematic errors in estimates at sites far from the monitoring stations .increasing the number of monitoring sites can be helpful but costly , so it has not been used extensively .researchers often have to use pollution measurements over relatively short time periods as a substitute for the comparatively long periods covered by health histories .this poses a choice between relying on a government network that provides temporal detail for a limited number of sites or on their own more detailed spatial network , which usually covers a short period of time . to address the limitations inherent in each source of available data , applied a longitudinal model that established a relationship between data from us environmental protection agency ( epa ) monitoring sites with daily or finer temporal resolution and those from the study of traffic , air quality and respiratory health in children ( star ) with monthly resolution .it was assumed that the relationship at the monthly level held at the daily level , using a model in which data from epa sites were used to estimate pollution information at study sites .this model performed well as measured by in a simple linear model that used star site observations as the response variable and the predictions based on epa measurements as the predictor variable .the model showed that about 73% of the variability at the star sites can be explained by the predictions .this article extends and seeks to improve zhang s ( ) method by including traffic as predictors in the model .a traffic - related variable can then be used to explain the spatial variation observed in the random intercept of the longitudinal model , thus providing a practical way for estimating the temporal / spatial distribution of in a region .star is an epidemiological study of childhood asthma designed to investigate whether common air contaminants are related to disease severity .four monthly outdoor measurements were taken for each subject , with three months separating each consecutive measurement .observations used in this analysis were taken between april 25 , 2006 and march 21 , 2008 .in contrast to the star study , the epa monitoring sites provide hourly measurements .average daily was calculated from these hourly measurements .figure [ locations ] shows the locations of four epa sites in connecticut and 316 star study sites used in this analysis .we selected randomly 266 star learning sites for model development and the remaining 50 sites were used for model validation .measurements and 316 star sites which have monthly measurements . 
]inverse distance weighting ( idw ) was used to interpolate daily values at star sites based on daily averages at the four epa sites .let denote the measurement at star site ( between days to , say ) , and let denote the idw interpolated value at site on day , for , and .a new variable can be created by taking the average of for site over the same period as .figure [ starepamsr ] plots against for the 316 sites in figure [ locations ] , where weights are the reciprocal of distance .values at 316 star sites ( ) vs average of idw interpolated values from epa measurements over the same period as . ] the connecticut department of transportation reports adt for all state roads on a three - year cycle .the data for 2006 were used in this analysis .figure [ roadsadt ] shows these road segments which have reported adt . there are 5196 road segments , with lengths ranging from 16 meters to 12,295 meters , median of 740 meters and mean of 1207 meters . the range for adt was 0 to 184,000 ( mean of 22,323 and median of 11,400 ) .three models were compared in this study .first , we considered a linear model : where denotes the measurement on the natural log scale , is the natural log of the average idw interpolated for that site over the corresponding period , is the traffic information ( adt ) , and is some random error , for .second , we specified a longitudinal model with random effects for sites : where denotes the measurement at star site on the natural log scale , is the corresponding average of idw interpolated on the natural log scale , is the traffic information , is a random intercept for site , and is some random error , for and .the random effects and are mutually independent .a scatter plot showing this relationship for these data is shown in figure [ star10 ] , which shows ( the measurement at site ) against ( average of idw interpolated daily values at site over the period corresponding to ) for six randomly selected sites , with lines connecting values for a site in temporal order .levels vs averages of idw interpolated levels for six randomly selected star sites , with lines connecting values in temporal order . ] finally , we specified a modified longitudinal model which allowed for spatial correlation among site effects for the model in equation ( [ longequ ] ) , that is , . elements in the covariance matrix are given by , where denotes spatial distance .the random effects and s are mutually independent .we adjusted for traffic effects using the integrated exposure model proposed by which introduced covariates into the linear predictor in a regression model .the contribution of traffic was expressed as where denotes adt for point on a line representing a highway and is a dispersion function for the pollutant generated at . we can achieve computational efficiency with little loss in accuracy by representing this contribution numerically taking the sum of the product of adt , the segment length and the unknown dispersion function which depends on distance . discussed alternative forms of linear dispersion functions , for example , stepped , polynomial or spline . 
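a minimal sketch of the idw step described at the beginning of this section is given below; as in the text, the weights are reciprocal distances, and the coordinates and concentrations are random placeholders rather than the epa and star data (a small offset would be needed if a study site coincided exactly with a monitor).

import numpy as np

def idw(site, monitor_xy, monitor_values, power=1):
    # inverse-distance-weighted estimate at one site from the monitors
    d = np.linalg.norm(monitor_xy - site, axis=1)
    w = 1.0 / d ** power
    return np.sum(w * monitor_values) / np.sum(w)

rng = np.random.default_rng(5)
epa_xy = rng.uniform(0, 100, size=(4, 2))      # four monitor locations (km)
star_xy = rng.uniform(0, 100, size=(316, 2))   # study-site locations (km)
daily_no2 = rng.uniform(5, 40, size=(30, 4))   # daily means at the monitors

# daily idw estimates at every study site, then an average over the period
est = np.array([[idw(s, epa_xy, daily_no2[t]) for s in star_xy]
                for t in range(30)])
print(est.mean(axis=0).shape)   # one averaged value per site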
in this examplewe used a step function , in which we estimated a value for the level of dispersion between specified distance intervals , and : , where is the pollution effect from a unit intensity source within the interval , is adt , and is length of the segment .the linear predictor related to traffic effects can now be written as where .adt is reported in highly variable lengths , and while this approach might work well for short segments , it can become problematic for long segments , for example , if the center of one road is close to a site but most of the remaining segments are relatively far away . to mitigate this problem , we divided the segments into smaller subsegments and found that 50-meter segments provided an adequate accuracy . to show this , we tested lengths such as 10-meter , 50-meter , 100-meter and up to 5000-meter and found little difference in the resulting estimates between 10 and 50 meters .for this example , we used 50-meter .segments were divided into subsegments using a python ( http://www.python.org/ ) script which calls relevant arcgis [ ] functions .values of s were predetermined by our experience with earlier analysis . setting the values of leaves the values of s to be estimated as regression parameters .two possible approaches for incorporating traffic effects were examined : a single - step model which sets the contribution of highway segments within 2000 meters as equal and for distances farther than 2000 meters as 0 ; and a multi - step model with steps at 400 meters , 800 meters , 1200 meters , 1600 meters and 2000 meters .while models ( [ linearequ ] ) and ( [ longequ ] ) were fitted using a frequentist approach , we obtained parameter estimates for the third model under the bayesian framework .the three models were fitted to levels at the 266 learning sites and the results were used to estimate levels not only at these sites but at the 50 validation sites as well . by assuming that the relationship at the monthly level also holds at the daily level , we also obtained daily estimates .one predictor variable was based on daily pollution levels obtained by interpolating with idw measurements from the four epa sites .we also included the remaining predictors representing traffic - related effects .once daily predictions at the sites were obtained , they were averaged over the same periods as the star observations .systematic departures for site estimates were evaluated using simple linear regression : where is the observation at star site , is the average of the estimated daily values at site over the same period as , and .in addition , we calculated the root mean square error ( rmse ) : [ simplelinear ] shows results from fitting the model in equation ( [ linearequ ] ) using the single - step and multi - step dispersion models for the traffic effect .table [ longtraffic ] shows results from fitting the corresponding longitudinal model in equation ( [ longequ ] ) . in table[ simplelinear ] , the results from the multi - step dispersion model reveal that the effects of the first two steps ( 0400 m and 400800 m ) are not significantly different from zero at the 0.05 significance level . 
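before turning to the parameter estimates, the construction of the step-function traffic covariates from the 50-meter subsegments can be sketched as follows; the band edges follow the multi-step model above, and the segment data are synthetic stand-ins.

import numpy as np

def step_traffic_covariates(site, seg_mid, seg_len, seg_adt,
                            edges=(0, 400, 800, 1200, 1600, 2000)):
    # for one site, sum adt x length of the subsegments falling into each
    # distance band; returns one covariate per band
    d = np.linalg.norm(seg_mid - site, axis=1)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (d >= lo) & (d < hi)
        out.append(np.sum(seg_adt[band] * seg_len[band]))
    return np.array(out)

rng = np.random.default_rng(6)
seg_mid = rng.uniform(0, 10000, size=(5000, 2))   # 50 m subsegment midpoints
seg_len = np.full(5000, 50.0)
seg_adt = rng.uniform(0, 80000, size=5000)
site = np.array([5000.0, 5000.0])
print(step_traffic_covariates(site, seg_mid, seg_len, seg_adt))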
while parameter .4cd2.4d2.4c@ * traffic * & & & & & & + single - step & & -0.3728 & 0.1181 & -3.1570 & 0.0016 & 0.3857 + & & 0.9428 & 0.0447 & 21.0930 & 0.0001 & + & & 0.1524 & 0.0098 & 15.5110 & 0.0001 & + multi - step & & -0.3963 & 0.1184 & -3.3470 & 0.0008 & 0.3911 + & & 0.9341 & 0.0446 & 20.9230 & 0.0001 & + & & -0.0133 & 0.0283 & -0.4710 & 0.6378 & + & & 0.0062 & 0.0236 & 0.2630 & 0.7926 & + & & 0.0622 & 0.0233 & 2.6660 & 0.0078 & + & & 0.0675 & 0.0151 & 4.4810 & 0.0001 & + & & 0.0495 & 0.0099 & 4.9900 & 0.0001 & + estimates of the next three steps ( 8001200 m , 12001600 m and 16002000 m ) are significantly different from zero , their values are nearly the same ( 0.0622 , 0.0675 and 0.0495 ) .similar observations can be made on the results from the longitudinal model in table [ longtraffic ] . while one might expect values to decline with distance, this could be due to the high correlation among traffic covariates for the five steps .the variance inflation factor ( vif ) for each traffic variable in model ( [ linearequ ] ) was above one and the vifs for two of them were above three . while multi - collinearity does not greatly affect prediction severely in general , it can be difficult to diagnose the potential issue of extrapolation with multiple predictors when making a prediction at a new site .moreover , note from table [ simplelinear ] that the adjusted only improved marginally with the use of multi - step variables . for these reasons we focused on the model using the single - step traffic variable ..4ccd2.4d2.4@ * traffic * & & & & & & + single - step & & -0.5974 & 0.1033 & 797 & -5.7826 & 0.0001 + & & 1.0281 & 0.0389 & 797 & 26.4628 & 0.0001 + & & 0.1529 & 0.0146 & 264 & 10.5075 & 0.0001 + & & 0.0402 & & & & + & & 0.0619 & & & & + multi - step & & -0.6344 & 0.1053 & 797 & -6.0222 & 0.0001 + & & 1.0250 & 0.0389 & 797 & 26.3591 & 0.0001 + & & -0.0117 & 0.0419 & 260 & -0.2797 & 0.7800 + & & 0.0070 & 0.0350 & 260 & 0.1985 & 0.8428 + & & 0.0627 & 0.0346 & 260 & 1.8149 & 0.0707 + & & 0.0653 & 0.0223 & 260 & 2.9300 & 0.0037 + & & 0.0503 & 0.0147 & 260 & 3.4294 & 0.0007 + & & 0.0398 & & & & + & & 0.0619 & & & & + .4cd2.4d2.4d2.4@ & & & & & + & -0.8524 & 0.0896 & -0.9838 & -0.8748 & -0.6251 + & 1.0828 & 0.0312 & 1.0068 & 1.0828 & 1.1365 + & 0.1023 & 0.0153 & 0.0725 & 0.1023 & 0.1333 + & 0.0748 & 0.0203 & 0.0419 & 0.0722 & 0.1207 + & 0.0648 & 0.0033 & 0.0588 & 0.0647 & 0.0716 + & 12.3184 & 3.6682 & 6.5307 & 12.2449 & 19.5918 + the single - step dispersion function was also used for the modified longitudinal model and the results are shown in table [ spatialmodel ] . table [ predspatialmodel ] summarizes results from a comparison of the fitted and the observed levels at the 50 validation sites using the model in equation ( [ valmodel ] ) .also included are a comparison of results for models with and without the traffic variable . including the traffic variable improved performance of both the linear and the longitudinal models .for instance , the predictive for model ( [ valmodel ] ) changed from 0.2617 to 0.4375 and rmse from 2.9527 to 2.5763 after including traffic variable in the longitudinal model .the additive bias in the longitudinal model changed from 1.0821 ( -value 0.283 ) to 1.2584 ( -value 0.0637 ) . 
c c c d2.3 d2.4 c c c@ & & & & & & & & + linear & & 0.1163 & 1.1220 & 0.104 & 0.9180 & 0.2605 & 2.9687 & n + model & & 1.0526 & 0.1260 & 8.352 & 0.0001 & & + & & 0.8978 & 0.7073 & 1.269 & 0.206 & 0.4342 & 2.5843 & y + & & 0.9468 & 0.0768 & 12.327 & 0.0001 & & + [ 4pt ] longi- & & 1.0821 & 1.0057 & 1.076 & 0.2830 & 0.2617 & 2.9527 & n + tudinal & & 0.9333 & 0.1114 & 8.377 & 0.0001 & & + model & & 1.2584 & 0.6748 & 1.865 & 0.0637 & 0.4375 & 2.5763 & y + & & 0.8998 & 0.0725 & 12.409 & 0.0001 & & + [ 4pt ] modified & & 0.5247 & 0.5539 & 0.947 & 0.3450 & 0.5807 & 2.2081 & n + longi- & & 0.9703 & 0.0586 & 16.560 & 0.0001 & & + tudinal & & 0.6802 & 0.5131 & 1.326 & 0.1860 & 0.6106 & 2.1311 & y + model & & 0.9527 & 0.0541 & 17.622 & 0.0001 & & + for the modified longitudinal model that included spatial correlation , the estimated was not significantly different from zero , thus being similar to the estimates from the model without the traffic variable . however , when the traffic variable was included in this model , the predictive was 0.6106 , which was slightly higher than 0.5807 for the model without traffic . comparing rmses led to similar conclusions , that is , the model that included traffic had a lower rmse compared with the model without traffic .figure [ predest50 ] shows a scatter plot of observed vs predicted from the modified longitudinal model with traffic effects .values at 50 validation star sites . ] to see whether traffic effects explain the spatial correlation in the random intercepts of the longitudinal model , we compared the sample semivariograms for two versions of the longitudinal model ( [ longequ ] ) , one with traffic and the other without ( figure [ semivariogram ] ) .we can see that the semivariogram after accounting for traffic is almost flat compared with the one without traffic .this suggests that the spatial correlation in the random intercept has been partially explained by the inclusion of traffic in the model .based on the estimated , predictive and rmse for the 50 validation sites , we concluded that inclusion of traffic effects improved the linear , the longitudinal and the modified longitudinal models .in addition , the modified longitudinal model worked reasonably well for making predictions at random sites . in the modified longitudinal model , no temporal correlation structurewas assumed for the residual .an area for future research would be to develop a model that allows for both spatial and temporal correlation . and demonstrated how such models could be estimated . from an application perspective , however , assuming only spatial correlation has the advantage of being less computationally demanding .one would need to weigh the benefits and costs of using a more complex model that includes a spatiotemporal correlation structure .another area for further research is to allow for additional predictors such as land use , population density and elevation similar to that used by .in addition , one needs to explore whether these models can be applied to different temporal resolutions .the epa sites record levels on an hourly basis , so if the level of pollutant varies with time of day as a subject moves from place to place , this could have relevant health consequences . it would also be useful to determine whether the proposed model can be applied to other pollutants generated by traffic . 
the us epa monitors a variety of relevant pollutants , including carbon monoxide , ozone , particulate matter 2.5 and sulfur dioxide .epidemiological studies have been carried out to explore the relationship between exposure to these pollutants and health [ e.g. , ] .if this approach also performs well for these pollutants , one would be able to study the effect of daily pollution levels on health .finally , it would be interesting to develop alternative models for estimating the daily pollution levels at multiple sites , for example , similar to the latent spatial process used by . as a result, it would be no longer necessary to assume that the relationship between monthly epa measures and star sites would hold at the daily level .however , implementation of such models would be computationally expensive , which could pose a significant challenge for potential users .the authors thank an associate editor for the many constructive comments that greatly enhanced our paper .the authors appreciate the computing services provided by yale university biomedical high performance computing center , which was funded by nih grant rr19895 .the authors also thank dr .janneane gent for her valuable input about the study of traffic , air quality and respiratory health in children .
data used to assess acute health effects from air pollution typically have good temporal but poor spatial resolution or the opposite . a modified longitudinal model was developed that sought to improve resolution in both domains by bringing together data from three sources to estimate daily levels of nitrogen dioxide ( ) at a geographic location . monthly measurements at 316 sites were made available by the study of traffic , air quality and respiratory health ( star ) . four us environmental protection agency monitoring stations have hourly measurements of . finally , the connecticut department of transportation provides data on traffic density on major roadways , a primary contributor to pollution . inclusion of a traffic variable improved performance of the model , and it provides a method for estimating exposure at points that do not have direct measurements of the outcome . this approach can be used to estimate daily variation in levels of over a region . , ,
determinantal point processes ( dpps ) are discrete probability models over the subsets of a ground set of items .they provide an elegant model to assign probabilities to an exponentially large sample , while permitting tractable ( polynomial time ) sampling and marginalization .they are often used to provide models that balance `` diversity '' and quality , characteristics valuable to numerous problems in machine learning and related areas .the antecedents of dpps lie in statistical mechanics , but since the seminal work of they have made inroads into machine learning . by now they have been applied to a variety of problems such as document and video summarization , sensor placement , recommender systems , and object retrieval .more recently , they have been used to compress fully - connected layers in neural networks and to provide optimal sampling procedures for the nystrm method .the more general study of dpp properties has also garnered a significant amount of interest , see e.g. , . however , despite their elegance and tractability , widespread adoption of dpps is impeded by the cost of basic tasks such as ( exact ) sampling and learning .this cost has motivated a string of recent works on approximate sampling methods such as mcmc samplers or core - set based samplers .the task of learning a dpp from data has received less attention ; the methods of cost per iteration , which is clearly unacceptable for realistic settings .this burden is partially ameliorated in , who restrict to learning low - rank dpps , though at the expense of being unable to sample subsets larger than the chosen rank .these considerations motivate us to introduce krondpp , a dpp model that uses kronecker ( tensor ) product kernels . as a result ,krondppenables us to learn large sized dpp kernels , while also permitting efficient ( exact and approximate ) sampling .the use of kronecker products to scale matrix models is a popular and effective idea in several machine - learning settings .but as we will see , its efficient execution for dpps turns out to be surprisingly challenging . to make our discussion more concrete , we recall some basic facts now .suppose we have a ground set of items .a discrete dpp over is a probability measure on parametrized by a positive definite matrix ( the _ marginal kernel _ ) such that , so that for any drawn from , the measure satisfies where is the submatrix of indexed by elements in ( i.e. , {i , j \in a} ] .we denote the block in by for any valid pair , and extend the notation to non - kronecker product matrices to indicate the submatrix of size at position .[ prop : basic ] let be matrices of sizes so that and are well - defined . then, a. if , then , ; b. if and are invertible then so is , with ; c. = . an important consequence of prop . [ prop : basic] is the following corollary . [ corr : eigendecompose ]let and be the eigenvector decompositions of and . then , diagonalizes as . we will also need the notion of partial trace operators , which are perhaps less well - known : let .the _ partial traces _ and are defined as follows : {1 \leq i , j \leq n_1 } \in \mathbb r^{n_1\times n_1 } , \qquad \operatorname{tr}_2(a ) : = \sum\nolimits_{i=1}^{n_1 } a_{(ii ) } \in \mathbb r^{n_2 \times n_2}.\ ] ] the action of partial traces is easy to visualize : indeed , and . for us ,the most important property of partial trace operators is their positivity .[ prop : posdef - operator ] and are positive operators , i.e. , for , and . please refer to ( * ? ? 
?in this section , we consider the key difficult task for krondpps : learning a kronecker product kernel matrix from observed subsets . using the definition ( [ eq:2 ] ) of , maximum - likelihood learning of a dpp with kernel results in the optimization problem : this problem is nonconvex and conjectured to be np - hard ( * ? ? ? * conjecture 4.1 ) .moreover the constraint is nontrivial to handle .writing as the indicator matrix for of size so that , the gradient of is easily seen to be in , the authors derived an iterative method ( `` the picard iteration '' ) for computing an that solves by running the simple iteration moreover , iteration is guaranteed to monotonically increase the log - likelihood .but these benefits accrue at a cost of per iteration , and furthermore a direct application of can not guarantee the kronecker structure required by krondpp . our aim is to obtain an efficient algorithm to ( locally ) optimize . beyond its nonconvexity , the kronecker structure imposes another constraint . as in first rewrite as a function of , and re - arrange terms to write it as it is easy to see that is concave , while a short argument shows that is convex .an appeal to the convex - concave procedure then shows that updating by solving , which is what does ( * ? ? ?2.2 ) , is guaranteed to monotonically increase .but for krondppthis idea does not apply so easily : due the constraint the function fails to be convex , precluding an easy generalization . nevertheless , for fixed or the functions are once again concave or convex .indeed , the map is linear and is concave , and is also concave ; similarly , is seen to be concave and and are convex .hence , by generalizing the arguments of ( * ? ? ?2 ) to our `` block - coordinate '' setting , updating via should increase the log - likelihood at each iteration .we prove below that this is indeed the case , and that updating as per ensure positive definiteness of the iterates as well as monotonic ascent . in order to show the positive definiteness of the solutions to, we first derive their closed form .[ prop : differenciation ] for , , the solutions to are given by the following expressions : moreover , these solutions are positive definite .the details are somewhat technical , and are hence given in appendix [ app : cccp - psd ] . we know that , because . since the partial trace operators are positive ( prop .[ prop : posdef - operator ] ) , it follows that the solutions to are also positive definite .we are now ready to establish that these updates ensure monotonic ascent in the log - likelihood .[ thm : cccp ] starting with , , updating according to generates positive definite iterates and , and the sequence is non - decreasing . updating according to generates positive definite matrices , and hence positive definite subkernels .moreover , due to the convexity of and concavity of , for matrices hence , .thus , if verify , by setting and we have the same reasoning holds for , which proves the theorem . 
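the closed-form solutions above are expressed through partial traces, which are cheap to compute directly from the block structure; the sketch below (hypothetical helper names) also checks the identities tr_1(l_1 \otimes l_2) = tr(l_2) l_1 and tr_2(l_1 \otimes l_2) = tr(l_1) l_2 on a small example.

import numpy as np

def tr1(A, n1, n2):
    # partial trace over the second factor: the (i, j) entry is the trace
    # of the (i, j) block of size n2 x n2
    blocks = A.reshape(n1, n2, n1, n2)
    return np.trace(blocks, axis1=1, axis2=3)

def tr2(A, n1, n2):
    # partial trace over the first factor: the sum of the n1 diagonal blocks
    blocks = A.reshape(n1, n2, n1, n2)
    return sum(blocks[i, :, i, :] for i in range(n1))

rng = np.random.default_rng(7)
n1, n2 = 3, 4
L1 = rng.normal(size=(n1, n1)); L1 = L1 @ L1.T + n1 * np.eye(n1)
L2 = rng.normal(size=(n2, n2)); L2 = L2 @ L2.T + n2 * np.eye(n2)
L = np.kron(L1, L2)

print(np.allclose(tr1(L, n1, n2), np.trace(L2) * L1))   # True
print(np.allclose(tr2(L, n1, n2), np.trace(L1) * L2))   # True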
as ( and similarly for ) , updating as in is equivalent to updating * genearlization .* we can generalize the updates to take an additional step - size parameter : experimentally , ( as long as the updates remain positive definite ) can provide faster convergence , although the monotonicity of the log - likelihood is no longer guaranteed .we found experimentally that the range of admissible is larger than for picard , but decreases as grows larger .the arguments above easily generalize to the multiblock case .thus , when learning , by writing the matrix with a 1 in position and zeros elsewhere , we update as .\ ] ] from the above updates it is not transparent whether the kronecker product saves us any computation .in particular , it is not clear whether the updates can be implemented to run faster than .we show below in the next section how to implement these updates efficiently . from theorem [ thm : cccp ] , we obtain algorithm [ algo : cccp ] ( which is different from the picard iteration in , because it operates alternatingly on each subkernel ) .it is important to note that a further speedup to algorithm [ algo : cccp ] can be obtained by performing stochastic updates , i.e. , instead of computing the full gradient of the log - likelihood , we perform our updates using only one ( or a small minibatch ) subset at each step instead of iterating over the entire training set ; this uses the stochastic gradient .matrices , training set , parameter . // or update stochastically // orupdate stochastically * return* the crucial strength of algorithm [ algo : cccp ] lies in the following result : [ thm : complexity ] for , the updates in algorithm [ algo : cccp ] can be computed in time and space , where is the size of the largest training subset .furthermore , stochastic updates can be computed in time and space .indeed , by leveraging the properties of the kronecker product , the updates can be obtained without computing .this result is non - trivial : the components of , and , must be considered separately for computational efficiency . the proof is provided in app .[ app : cccp - complexity ]. however , it seems that considering more than 2 subkernels does not lead to further speed - ups .if , these complexities become : * for non - stochastic updates : time , space , * for stochastic updates : time , space .this is a marked improvement over , which runs in space and time ( non - stochastic ) or time ( stochastic ) ; algorithm [ algo : cccp ] also provides faster stochastic updates than .however , one may wonder if by learning the sub - kernels by alternating updates the log - likelihood converges to a sub - optimal limit .the next section discusses how to jointly update and .we also analyzed the possibility of updating and jointly : we update and then recover the kronecker structure of the kernel by defining the updates and such that : we show in appendix [ app : joint - updates ] that such solutions exist and can be computed by from the first singular value and vectors of the matrix {i , j=1}^{n_1} ] ; for picard , was initialized with . ; the thin dotted lines indicated the standard deviation from the mean . 
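as an aside before the numerical comparisons, the kronecker structure also makes the objective itself cheap to evaluate: by corollary [corr:eigendecompose] the eigenvalues of l_1 \otimes l_2 are the pairwise products of the subkernel eigenvalues, and any principal submatrix can be assembled entry-wise without forming the full kernel. the sketch below (hypothetical names; the log-likelihood is written as \sum_i \log \det(L_{A_i}) - n \log \det(I + L), without a 1/n normalization) checks this shortcut against a dense computation on a toy example.

import numpy as np

def kron_submatrix(L1, L2, idx):
    # entries of (L1 kron L2) restricted to idx, without forming the full kernel
    n2 = L2.shape[0]
    i, j = np.meshgrid(idx, idx, indexing="ij")
    return L1[i // n2, j // n2] * L2[i % n2, j % n2]

def kron_dpp_loglik(L1, L2, samples):
    # sum_i log det(L_{A_i}) - len(samples) * log det(I + L1 kron L2),
    # using the fact that the eigenvalues of L1 kron L2 are pairwise products
    lam, mu = np.linalg.eigvalsh(L1), np.linalg.eigvalsh(L2)
    ll = -len(samples) * np.sum(np.log1p(np.outer(lam, mu)))
    for A in samples:
        ll += np.linalg.slogdet(kron_submatrix(L1, L2, np.asarray(A)))[1]
    return ll

rng = np.random.default_rng(8)
n1, n2 = 4, 5
L1 = rng.normal(size=(n1, n1)); L1 = L1 @ L1.T + np.eye(n1)
L2 = rng.normal(size=(n2, n2)); L2 = L2 @ L2.T + np.eye(n2)
samples = [rng.choice(n1 * n2, size=6, replace=False) for _ in range(3)]

L = np.kron(L1, L2)   # dense check, feasible only on this toy example
dense = sum(np.linalg.slogdet(L[np.ix_(A, A)])[1] for A in samples) \
        - len(samples) * np.linalg.slogdet(np.eye(n1 * n2) + L)[1]
print(np.isclose(kron_dpp_loglik(L1, L2, samples), dense))   # True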
] 1.28 ; the thin dotted lines indicated the standard deviation from the mean.,title="fig : " ] 1.15 ; the thin dotted lines indicated the standard deviation from the mean.,title="fig : " ] 1.15 ; the thin dotted lines indicated the standard deviation from the mean.,title="fig : " ] for figures [ fig : synth - small ] and [ fig : synth - large ] , training data was generated by sampling 100 subsets from the true kernel with sizes uniformly distributed between 10 and 190 . to evaluate krk - picardon matrices too large to fit in memory and with large ,we drew samples from a kernel of rank ( on average ) , and learned the kernel stochastically ( only krk - picardcould be run due to the memory requirements of other methods ) ; the likelihood drastically improves in only two steps ( fig.[fig : synth - very - large ] ) . as shown in figures [ fig : synth - small ] and [ fig : synth - large ] , krk - picardconverges significantly faster than picard , especially for large values of .however , although joint - picardalso increases the log - likelihood at each iteration , it converges much slower and has a high standard deviation , whereas the standard deviations for picardand krk - picardare barely noticeable . for these reasons ,we drop the comparison to joint - picardin the subsequent experiments .we compared krk - picardto picardand em on the baby registry dataset ( described in - depth in ) , which has also been used to evaluate other dpp learning algorithms .the dataset contains 17 categories of baby - related products obtained from amazon .we learned kernels for the 6 largest categories ( ) ; in this case , picardis sufficiently efficient to be prefered to krk - picard ; this comparison serves only to evaluate the quality of the final kernel estimates .the initial marginal kernel for em was sampled from a wishart distribution with degrees of freedom and an identity covariance matrix , then scaled by ; for picard , was set to ; for krk - picard , and were chosen ( as in joint - picard ) by minimizing .convergence was determined when the objective change dipped below a threshold .as one em iteration takes longer than one picard iteration but increases the likelihood more , we set and .the final log - likelihoods are shown in table [ tab : babies ] ; we set the step - sizes to their largest possible values , i.e. and .table [ tab : babies ] shows that krk - picardobtains comparable , albeit slightly worse log - likelihoods than picardand em , which confirms that for tractable , the better modeling capability of full kernels make them preferable to krondpps ..5 .test set [ cols="<,^,^,^",options="header " , ] [ tab : runtimes ] we construct a ground truth gaussian dpp kernel on the genes dataset and use it to obtain 100 training samples with sizes uniformly distributed between 50 and 200 items . similarly to the synthetic experiments , we initialized krk - picard s kernel by setting where is a random matrix of size ; for picard , we set the initial kernel . , .] 1.3 , .,title="fig : " ] 1.3 , .,title="fig : " ] figure [ fig : genetic ] shows the performance of both algorithms . as with the synthetic experiments , krk - picardconverges much faster ;stochastic updates increase its performance even more , as shown in fig .[ fig : genetic - stochastic ] . 
average runtimes and speed - up are given in table [ tab : runtimes ] : krk - picardruns almost an order of magnitude faster than picard , and stochastic updates are more than two orders of magnitude faster , while providing slightly larger initial increases to the log - likelihood .we introduced krondpps , a variant of dpps with kernels structured as the kronecker product of smaller matrices , and showed that typical operations over dpps such as sampling and learning the kernel from data can be made efficient for krondppson previously untractable ground set sizes . by carefully leveraging the properties of the kronecker product, we derived for a low - complexity algorithm to learn the kernel from data which guarantees positive iterates and a monotonic increase of the log - likelihood , and runs in time .this algorithm provides even more significant speed - ups and memory gains in the stochastic case , requiring only time and space .experiments on synthetic and real data showed that krondppscan be learned efficiently on sets large enough that does not fit in memory .while discussing learning the kernel , we showed that and can not be updated simultaneously in a cccp - style iteration since is not convex over .however , it can be shown that is geodesically convex over the riemannian manifold of positive definite matrices , which suggests that deriving an iteration which would take advantage of the intrinsic geometry of the problem may be a viable line of future work .krondppsalso enable fast sampling , in operations when using two sub - kernels and in when using three sub - kernels ; this allows for exact sampling at comparable or even better costs than previous algorithms for approximate sampling .however , as we improve computational efficiency , the subset size becomes limiting , due to the cost of sampling and learning .a necessary line of future work to allow for truly scalable dpps is thus to overcome this computational bottleneck . * appendix : kronecker determinantal point processes *we use ` ' to denote the operator that stacks columns of a matrix to form a vector ; conversely , ` ' takes a vector with coefficients and returns a matrix .let , and .we note the matrix with all zeros except for a 1 at position , its size being clear from context .we wish to solve it follows from the fact that that and .moreover , we know that the jacobian of is given by . hence , the last equivalence is simply the result of indices manipulation .thus , we have similarly , by setting , we have that hence , which proves prop . [ prop : differenciation ] .the updates to and are obtained efficiently through different methods ; hence , the proof to thm .[ thm : complexity ] is split into two sections .we write so that .recall that .we wish to compute efficiently .we have \\ & = \operatorname{tr}\left[l_2^{-1 } ( l\delta l)_{(ij)}\right ] \\ & = \operatorname{tr}\left[l_2^{-1}\sum\nolimits_{k,\ell=1}^{n_1 } l_{(ik ) } \delta_{(k\ell)}l_{(\ell j)}\right ] \\ & = \sum\nolimits_{k,\ell=1}^{n_1 } { l_1}_{ik } { l_1}_{\ell j } \operatorname{tr}(l_2^{-1 } l_2 \delta_{(k\ell ) } l_2 ) \\ & = \sum\nolimits_{k,\ell=1}^{n_1 } { l_1}_{ik } { l_1}_{\ell j } \underbrace{\operatorname{tr}(\theta_{(k\ell ) } l_2)}_{a_{k\ell } } - \underbrace{\operatorname{tr}((i+l)^{-1}_{(k\ell ) } l_2)}_{b_{k\ell } } \\ & = ( l_1 a l_1 - l_1 b l_1)_{ij}. 
\end{aligned}\ ] ] the matrix can be computed in simply by pre - computing in and then computing all traces in time .when doing stochastic updates for which is sparse with only non - zero coefficients , computing can be done in . by diagonalizing and ,we have with and . and can all be obtained in as a consequence of prop .[ prop : basic ] . then let , which can be computed in .then is computable in .overall , the update to can be computed in , or in if the updates are stochastic . moreover ,if is sparse with only non - zero coefficients ( for stochastic updates ) , can be computed in space , leading to an overall memory cost .we wish to compute $ ] efficiently . can be computed in .as before , when doing stochastic updates can be computed in time and space due to the sparsity of .regarding , as all matrices commute , we can write where is diagonal and is obtained in . moreover , which allows us to compute in total . overall , we can obtain in or in for stochastic updates , in which case only space is necessary .in order to minimize the number of matrix multiplications , we equivalently ( due to the properties of the frobenius norm ) minimize the equation and set .suppose that has an eigengap between its largest singular value and the next , and let be the first singular vectors and value of .let and . then and are either both positive definite or negative definite .the proof is a consequence of ( * ? ? ?this shows that if is initially positive definite , setting the sign of based on whether and are positive or negative definite , which will be positive if and only if .] , and updating maintains positive definite iterates . given that if and , , a simple induction then shows that by choosing an initial kernel estimate , subsequent values of will remain positive definite .matrices , training set , step - size . power_method to obtain the first singular value and vectors of matrix . * return *
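the power_method routine invoked in the listing above can be a plain power iteration; the sketch below (generic, illustrative names) returns the leading singular value and vectors, from which the two kronecker factors of the joint update are then read off as described in the appendix:

    import numpy as np

    # power iteration for the leading singular triplet of a (nonzero) matrix M;
    # at convergence, sigma * np.outer(u, v) is the best rank-one approximation of M.
    def leading_singular_triplet(M, iters=500, tol=1e-12, seed=0):
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(M.shape[1])
        v /= np.linalg.norm(v)
        sigma, u = 0.0, None
        for _ in range(iters):
            u = M @ v
            u /= np.linalg.norm(u)
            w = M.T @ u
            sigma = np.linalg.norm(w)
            w /= sigma
            if np.linalg.norm(w - v) < tol:
                v = w
                break
            v = w
        return sigma, u, v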
determinantal point processes ( dpps ) are probabilistic models over all subsets of a ground set of items . they have recently gained prominence in several applications that rely on `` diverse '' subsets . however , their applicability to large problems is still limited due to the complexity of core tasks such as sampling and learning . we enable efficient sampling and learning for dpps by introducing krondpp , a dpp model whose kernel matrix decomposes as a tensor product of multiple smaller kernel matrices . this decomposition immediately enables fast _ exact _ sampling . but contrary to what one may expect , leveraging the kronecker product structure for speeding up dpp learning turns out to be more difficult . we overcome this challenge , and derive batch and stochastic optimization algorithms for efficiently learning the parameters of a krondpp .
a central concept in welfare economics are social welfare functions ( swfs ) in the tradition of arrow , i.e. , functions that map a collection of individual preference relations over some set of alternatives to a social preference relation over the alternatives .arrow s seminal theorem states that every swf that satisfies pareto optimality and independence of irrelevant alternatives is dictatorial .this sweeping impossibility significantly strengthened an observation by and sent shockwaves throughout economics as well as political philosophy and political theory ( see , e.g. , * ? ? ?a large body of subsequent work has studied whether more positive results can be obtained by modifying implicit assumptions on the domain of admissible preferences , both individually and collectively .these approaches can be roughly divided into two categories .for one , as pioneered by , the assumption of _ collective _ transitivity has been weakened to quasi - transitivity , acyclicity , path independence or similar conditions .although this does allow for non - dictatorial aggregation functions that meet arrow s criteria , these functions turned out to be highly objectionable , usually on grounds of involving a weak kind of dictatorship or violating other conditions deemed to be indispensable for reasonable preference aggregation ( for an overview of the extensive literature , see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?particularly noteworthy are results about acyclic collective preference relations ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) because acyclicity is necessary and sufficient for the existence of maximal elements when there is a finite number of alternatives . concludes that `` the arbitrariness of power of which arrow s case of dictatorship is an extreme example , lingers in one form or another even when transitivity is dropped , so long as _some _ regularity is demanded ( such as the absence of cycles ) . ''another stream of research has analyzed the implications of imposing structure on the _ individual _ preferences .this has resulted in a number of positive results for restricted domains , such as dichotomous or single - peaked preferences , which allow for attractive swfs ( e.g. , * ? ? ? * ; * ? ? ? * ; * ? ? ?* ; * ? ? ?many economic domains are concerned with an infinite set of outcomes , which satisfies structural restrictions such as compactness and convexity .preferences over these outcomes are typically assumed to satisfy some form of continuity and convexity , which roughly imply that preferences are robust with respect to minimal changes in outcomes and with respect to convex combinations of outcomes .various results have shown that arrow s impossibility remains intact under these assumptions ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ? provide an overview and conclude that `` economic domain restrictions do not provide a satisfactory way of avoiding arrovian social choice impossibilities , except when the set of alternatives is one - dimensional and preferences are single - peaked . ''the point of departure for the present approach is the observation that , to the best of our knowledge , all impossibilities require some form of transitivity ( e.g. , acyclicity ) , even though no such assumption is necessary to guarantee the existence of maximal elements in continuous and convex domains . 
has shown that all continuous and convex preference relations admit a maximal element in every non - empty , compact , and convex set of outcomes . moreover, returning maximal elements under the given conditions satisfies standard properties of choice consistency introduced by .hence , there is little justification for demanding transitivity , which has come under independent attack in normative and descriptive decision theory ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?is preferred to lottery but the certainty equivalent of is less preferred than the certainty equivalent of , shows experimental failures of transitivity ( see , e.g. , * ? ? ?] as writes , `` once considered a cornerstone of rational choice theory , the status of transitivity has been dramatically reevaluated by economists and philosophers in recent years . ''we show that , not only does arrow s theorem cease to hold on convex domains when dispensing with transitivity , but , moreover , arrow s axioms and some weak technical assumptions narrow down the choice of a suitable swf to an intriguing combination of pairwiseness and utilitarianism .more precisely , we consider a convex set of outcomes consisting of all probability measures with finite support on some abstract set of alternatives .examples of outcome sets are shares of divisible public goods , lotteries , time shares , monetary shares , etc .individual and collective preference relations over these outcomes are assumed to satisfy continuity , convexity , and symmetry .we then show that there is a unique inclusion - maximal cartesian domain of preference profiles that allows for arrovian aggregation while satisfying minimal richness conditions .this domain allows for arbitrary preferences over pure outcomes , which in turn completely determine an agent s preferences over all remaining outcomes . when interpreting outcomes as lotteries , this preference extension has a particularly simple and intuitive explanation : an agent prefers one lottery to another if and only if the former is more likely to return a more preferred pure outcome .incidentally , this preference extension , which constitutes a central special case of skew - symmetric bilinear ( ssb ) utility functions as introduced by , has been backed by recent experimental evidence ( see * ? ? ?we then prove that the only arrovian swfs on this domain are affine welfare maximizers for the underlying ssb utility functions . as a consequence, there is a unique anonymous arrovian swf , which compares outcomes by the sign of the bilinear form given by the pairwise majority margins .the resulting collective preference relation over _ pure _ outcomes coincides with majority rule and the corresponding choice function is therefore consistent with condorcet s principle of always returning a pure outcome that is majority - preferred to every other pure outcome .this relation is naturally extended to mixed outcomes such that , by the minimax theorem , every compact and convex set of outcomes admits a collectively most preferred outcome .our results challenge the traditional transitive way of thinking about preferences , which has been largely influenced by the pervasiveness of scores and grades .a compelling opinion on transitivity is expressed in the following apt quote by decision theorist peter c. 
fishburn : `` transitivity is obviously a great practical convenience and a nice thing to have for mathematical purposes , but long ago this author ceased to understand why it should be a cornerstone of normative decision theory .[ ] the presence of intransitive preferences complicates matters [ ] however , it is not cause enough to reject intransitivity .an analogous rejection of non - euclidean geometry in physics would have kept the familiar and simpler newtonian mechanics in place , but that was not to be .indeed , intransitivity challenges us to consider more flexible models that retain as much simplicity and elegance as circumstances allow .it challenges old ways of analyzing decisions and suggests new possibilities '' .a special case of our setting , which has been particularly well studied , concerns sets of outcomes that consist of all lotteries over some finite set of alternatives and individual preferences over lotteries that satisfy the _von neumann - morgenstern axioms _ , i.e. , preferences over lotteries that can be represented by assigning cardinal utilities to alternatives such that lotteries are compared based on expected utility . conjectured that arrow s impossibility still holds under these assumptions and showed that this is indeed the case when there are at least four alternatives .there are various versions of this statement which differ in modeling assumptions and whether swfs aggregate cardinal utilities or the preference relations represented by these utilities .the one closest to the framework of this paper is theorem 4.3 by .our results apply to arrovian aggregation of preferences over lotteries under much loosened assumptions about preferences over lotteries . in particular, the axioms we presume entail that preferences over lotteries can be represented by _ skew - symmetric bilinear ( ssb ) utility functions _ , which assign a utility value to each pair of lotteries .one lottery is preferred to another lottery if the ssb utility for this pair is positive .ssb utility theory is a generalization of linear expected utility theory due to , which does not require the controversial independence axiom and transitivity ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?independence requires that if lottery is preferred to lottery , then a coin toss between and a third lottery is preferred to a coin toss between and ( with the same coin used in both cases ) .there is experimental evidence that independence is systematically violated by human decision makers .the allais paradox is perhaps the most famous example .detailed reviews of such violations , including those reported by , have been provided by and .our characterization of arrovian swfs is related to harsanyi s _ social aggregation theorem _ , which shows that , for von neumann - morgenstern preferences over lotteries , weighted welfare maximization already follows from pareto indifference . 
however , harsanyi s theorem is a statement about a single preference profile considered in isolation .the weights given to the agents may depend on their preferences .this can be prevented by adding axioms that connect the collective preferences across different profiles .the swf that derives the collective preferences by adding up the normalized utility representations is known as _ relative utilitarianism _it was characterized by using essentially independence of redundant alternatives ( a weakening of independence of irrelevant alternatives ) and monotonicity ( a weakening of a pareto - type axiom ) .as shown by and further explored by , aggregating ssb utility functions is fundamentally different from aggregating von neumann - morgenstern utility functions in that harsanyi s pareto axiom does not imply weighted welfare maximization . as we show in this paper ,this can be rectified by considering a multi - profile framework and independence of irrelevant alternatives .the probabilistic voting rule that returns the maximal elements of the unique anonymous arrovian swf is known as _maximal lotteries _ and was recently axiomatized using two consistency conditions .interestingly , this function and some variations of it were repeatedly reinvented ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?let be a non - empty universal set of alternatives .by we denote the set of all probability measures with finite support on .we require all one - element sets to be measurable .hence , for , for all but finitely many . for , let be the set of probability measures in with support in , i.e. , .we will refer to elements of as outcomes and one - point measures in as pure outcomes .furthermore , let be an asymmetric binary relation over , which is interpreted as the preference relation of an agent .given two outcomes , we write when neither nor , and if or .for , let and be the strict upper and strict lower contour set of ; denotes the indifference set of . for , is the preference relation restricted to outcomes in .we will consider preference relations that are continuous , i.e. , small changes in probabilities do not result in a reversal of preference .one of several possibilities to define continuity is the archimedean axiom , which requires that , for any given outcome , the convex hull of a more preferred outcome and a less preferred outcome also contains an equally preferred outcome .a preference relation is continuous if , for all , another standard assumption is that preferences are convex .we will use convexity as defined by . is convex if , for all and , equivalently , one could require that the indifference set for an outcome is a hyperplane through ; the upper and lower contour sets are the corresponding half spaces .note that convexity implies that upper contour sets , lower contour sets , and indifference sets are convex .moreover , upper contour and lower contour sets are either open or empty and indifference sets are closed .the existence of maximal elements is usually quoted as the main reason for insisting on transitivity of preference relations .it was shown by that continuity and convexity are already sufficient for the existence of maximal elements , even when preferences are intransitive ( see also * ? ? ?* ; * ? ? ?[ thm : sonnenschein] if is a continuous and convex preference relation , then for all non - empty , compact , and convex sets . 
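for concreteness, the archimedean continuity condition described in words above is usually written as follows (a standard formulation, shown here only as a reminder of the notation; we write \Delta(X) for the set of outcomes):

    % continuity (archimedean axiom): between a strictly better and a strictly
    % worse outcome there is a convex mixture that is exactly as good.
    p \succ q \succ r \;\Longrightarrow\; \exists\,\lambda\in(0,1)\colon\ \lambda p + (1-\lambda) r \sim q
    \qquad\text{for all } p, q, r \in \Delta(X).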
has shown that two intuitive choice consistency conditions are equivalent to choosing maximal elements according to an acyclic relation .these conditions are known as _sen s _ ( or _ contraction _ ) and _ sen s _ ( or _ expansion _ ) .we will show that choosing maximal elements according to convex relation implies contraction and expansion , even in the absence of acyclicity .for some set of outcomes , the convex hull is the smallest convex set that contains .a choice function maps every feasible ( i.e. , compact and convex ) set out outcomes to a subset thereof .contraction requires that if some outcome is chosen from some feasible set , then it is also chosen from any feasible subset that it is contained in . a choice function satisfies contraction if for all compact and convex , expansion prescribes that an outcome that is chosen from two feasible sets , should also be chosen for the convex hull of their union . satisfies expansion if for all compact and convex , our goal is to link the above choice consistency conditions to choosing the maximal elements of some relation .we show that if a choice function chooses the maximal elements of a convex relation , then it satisfies contraction and expansion .the converse holds if convexity is weakened to only require that the weak upper and lower contour sets are convex .convexity is only needed for the expansion part .[ thm : rationalizable ] let be a convex preference relation .then satisfies contraction and expansion for compact and convex subsets of .first we show contraction .let be compact and convex with and .this implies that .second we show expansion .let be compact and convex and .then , for all .since satisfies convexity , we have for all .thus , .convexity implies that indifference curves are straight lines .the symmetry condition introduced by prescribes that either all indifference curves are parallel or meet at one point ( which may be outside of ) . for all and , \text . \end{aligned } \tag{symmetry}\ ] ] justifies this axiom by stating that `` the degree to which is preferred to is equal in absolute magnitude but opposite in sign to the degree to which is preferred . ''he continues by writing that he is `` a bit uncertain as to whether this should be regarded more as a convention than a testable hypothesis much like the asymmetry axiom [ ] , which can almost be thought of as a definitional characteristic of strict preference . '' by we denote the set of all continuous , convex , and symmetric preference relations . despite the richness of , preference relations therein admit a particularly nice representation .it was shown by that if , then there is a skew - symmetric and bilinear ( ssb ) utility function such that , for all , is skew - symmetric if for all . is bilinear if it is linear in both arguments . ] moreover , is unique up to scalar multiplication .we denote by the set of all ssb functions on . for outcomes with finite support, can be written as a convex combination of the values of for one - point measures .for this purpose , we identify every alternative with the one - point measure that puts probability on .then , for all , we will oftentimes represent ssb functions restricted to for finite x as skew - symmetric matrices in .when requiring transitivity on top of continuity , convexity , and symmetry , the four axioms characterize _ weighted linear ( wl ) _ utility functions as introduced by .when additionally requiring independence , then is separable , i.e. 
, , where is a linear von neumann - morgenstern utility function representing . for independently distributed outcomes ( as considered in this paper ), ssb utility theory coincides with regret theory as introduced by ( see also * ? ? ?* ; * ? ? ?* ) . through the representation of restricted to a finite by a skew - symmetric matrix, it becomes apparent that the minimax theorem implies the existence of maximal elements of on .this was noted by ( * ? ? ?* theorem 4 ) and already follows from . goes on to show that choosing maximal elements of from feasible sets satisfies contraction and expansion , which follows from because relations in satisfy convexity by definition .for the remainder of the paper we deal with the problem of aggregating the preferences of multiple agents into a collective preference relation .the set of agents is for some .the preference relations of agents belong to some _ domain _ .a function from the set of agents to the domain is a preference profile .we will write preference profiles as tuples with indices in .a _ social welfare function ( swf ) _ maps a preference profile to a collective preference relation .arrow s impossibility theorem shows that the only swfs that satisfy two desirable properties , pareto optimality and independence of irrelevant alternatives , are dictatorial functions .pareto optimality prescribes that a unanimous preference of one outcome over another in the individual preferences should be reflected in the collective preference .an swf is _ pareto optimal _ if , for all , , and , independence of irrelevant alternatives requires that collective preferences over some feasible set of outcomes should only depend on the individual preferences over this set ( and not on the preferences over outcomes outside this set ) .together with transitivity , it is the driving force in arrow s theorem and has much stronger implications than apparent at first sight . in our framework, we will assume that feasible sets are based on the availability of alternatives and are therefore of the form for .formally , we say that an swf satisfies _ independence of irrelevant alternatives ( iia ) _ if , for all and , any swf that satisfies pareto optimality and iia will be called an _ arrovian _arrow has shown that , when no structure is imposed on preference relations and feasible sets , every arrovian swf is dictatorial , i.e. , the preference relation of one fixed agent is a sub - relation of the collective preference relation ( formally , there is such that for all , ) .dictatorships are examples of swfs that are extremely biased towards one agent . in many applications ,_ any _ differentiation between agents is unacceptable and all agents should be treated equally . this property is known as anonymity .we denote by the set of all permutations on . for and a preference profile , is the preference profile where agents are renamed according to .then , an swf is _ anonymous _ if for all and , anonymity is clearly a stronger requirement than non - dictatorship . in order to prove our characterization , we need to assume that any domain satisfies certain richness conditions . we denote the completely indifferent preference relation on for some by and require that .this allows agents to express complete indifference .second , we require that the domain is neutral in the sense that it is not biased towards certain alternatives . 
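to make the skew-symmetric matrix representation and the existence of maximal elements concrete, here is a small illustrative sketch (python with scipy; the example matrices and helper names are ours, not the paper's). preferences based on pairwise comparisons are encoded by a skew-symmetric matrix, one lottery is preferred to another exactly when the bilinear form is positive, and a maximal lottery for a majority-margin matrix can be found by linear programming, with feasibility guaranteed by the minimax theorem:

    import numpy as np
    from scipy.optimize import linprog

    # skew-symmetric SSB matrix for three alternatives a > b > c under the
    # pairwise-comparison extension: phi(p, q) = p^T Phi q.
    Phi = np.array([[ 0.,  1.,  1.],
                    [-1.,  0.,  1.],
                    [-1., -1.,  0.]])

    def prefers(p, q, Phi):
        # p is preferred to q iff p is more likely to yield a better pure outcome.
        return float(p @ Phi @ q) > 0.0

    p = np.array([0.6, 0.3, 0.1])
    q = np.array([0.1, 0.3, 0.6])
    assert prefers(p, q, Phi)

    def maximal_lottery(M):
        # find p on the simplex with p^T M q >= 0 for every lottery q; by
        # skew-symmetry of M this is the feasibility LP: M p <= 0, p >= 0, sum(p) = 1.
        m = M.shape[0]
        res = linprog(c=np.zeros(m), A_ub=M, b_ub=np.zeros(m),
                      A_eq=np.ones((1, m)), b_eq=[1.0], bounds=[(0, 1)] * m)
        return res.x

    # majority-margin matrix with a Condorcet cycle; the unique maximal lottery
    # mixes uniformly over the three alternatives.
    M = np.array([[ 0.,  1., -1.],
                  [-1.,  0.,  1.],
                  [ 1., -1.,  0.]])
    print(np.round(maximal_lottery(M), 3))   # approximately [0.333, 0.333, 0.333]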
for and , let such that for all .then , for , we define such that if and only if for all .it is assumed that if and only if for all and .it should also be possible for agents to declare completely opposed preferences .for , is the inverse of , i.e. , if and only if for all . then implies for all . note that this condition is not implied by the previous neutrality condition because not only the preferences over alternatives , but also the preferences over outcomes are inverted .finally , we demand that contains a preference relation with ( at least ) four linearly ordered alternatives . with and for some , then there is some with and for some . ]we characterize the largest domain for which an anonymous arrovian swf exists .it turns out that preferences within this domain have been studied before and have a natural interpretation . is based on _ pairwise comparisons _ if for all . by denote the set of ssb functions that are based on pairwise comparisons .hence , if is based on pairwise comparisons and outcomes are interpreted as lotteries , one outcome is preferred to another if and only if the former is more likely to return a more preferred pure outcome . these preferences over outcomesare quite natural and can be seen as the canonical ssb representation consistent with a given ordinal preference relation over alternatives ( see for an axiomatic characterization and for an investigation of efficiency , strategyproofness , and related properties with respect to such preference relations ) .illustrates preferences based on pairwise comparisons for three transitively ordered alternatives .\(a ) at ( -150 : ) ; ( b ) at ( 90 : ) ; ( c ) at ( -30 : ) ; \(f ) at ( ) ; ( a.center ) ( b.center ) ( c.center ) ( a.center ) ; \(a ) ( ) ; in 90,95, ... ,119 \(o ) at ( ) ; ( p ) at ( intersection of f o and a b ) ; ( q ) at ( intersection of a c and p f ) ; ; in 85,80, ... ,65 \(o ) at ( ) ; ( p ) at ( intersection of f o and b c ) ; ( q ) at ( intersection of a c and p f ) ; in .1,.2, ... ,.9 ( ) ( ) ; the proof starts by showing that continuous and convex preference relations are completely determined by their symmetric part up to orientation .[ lem : indiff ] let be continuous and convex preference relations such that . then , .we first show an auxiliary statement : if is continuous and convex and such that contains an open set , then .assume for contradiction that or , equivalently , and let such that a neighborhood of is contained in . consider the case when and let .then convexity implies that for all .this contradicts the fact that a neighborhood of is contained in .the case is symmetric .now let be continuous and convex such that .let . by assumption , we have .moreover , is the disjoint union of , , and , , , respectively .this implies that .assume for contradiction that and .let and .continuity implies that .convexity of implies that .hence , , which contradicts . hence , or .similarly , or .if or , then contains either or .if and in non - empty or and in non - empty , then contains an open set .similarly , if .hence , .thus , for all with and , either and or and . in the first case , we say that is oriented positively at , in the latter case oriented negatively .let and .note that both and are convex .moreover , for all , and , for all , . since for all , neither of and contains an open set. then also does not contain an open set .let .note that contains an open set .let such that and is oriented positively at .assume for contradiction that is oriented negatively at , i.e. , and . 
implies , which is equivalent to .also is equivalent to and then .since and are disjoint , this is a contradiction . hence and , i.e., is oriented positively at .similarly , if such that and is oriented negatively at , then is oriented negatively at .if , the statement of the lemma holds trivially .if , there is such that is oriented positively or negatively at ( but not both ) , i.e. , .assume that is oriented positively at .let . if there is , then two applications of what we have proven above yield that is oriented positively at .if , then either or contains an open set . hence either or . since by assumption , we have that . then is trivially oriented positively ( and negatively ) at .together we get that is oriented positively at all . by denote the restriction of a relation to those comparisons involving , i.e. , . by continuity and convexity of , is the hyperplane separating and .similarly , is the hyperplane separating and . since , it follows that .since is oriented positively at , we get that .now let be arbitrary .if , we have that , as this would imply that either or contains an open set in which case or , respectively .hence , there is . since , it follows that . also , since , i.e. , , it follows that .now consider the case when .if , then follows trivially . in case , let .if contains an open set , let such that . exists , since .since , contains an open set , which contradicts .hence , does not contain an open set . since , it does not contain an open set either , which implies that .let .since it follows from a previous case that . then implies contradicting , which means that the current case can not occur .lastly , consider .if , then and from before we know that . if , then is open and hence intersects with .for we know that .this means that and , which is a contradiction .hence , which means that is oriented positively at . frombefore it follows that .together , we have that for all , i.e , .if is oriented negatively at , we get by an analogous argument .is a generalization of theorem 2 by .the proof only requires continuity and convexity , but not symmetry , of . , , and need to be convex for all . to see this ,consider the following preference relations on the closed interval ] and and otherwise .both , and are continuous and convex according to the weaker convexity assumption defined above . for is clear . to see this for ,observe that for all ] and ] or and ] .let and . then .if , the strict part of pareto optimality of implies that .this contradicts .hence , . _ case 2 ( ) _ : assume without loss of generality that . by [ item : domain1 ] , we get .our richness assumptions imply that there is with , and for some .implies that .hence , if suffices to show that . by [ item : domain1 ] , we get that and .hence , . __ ad [ item : domain3 ] : the proof is analogous to the proof of [ item : domain2 ] .[ thm : pcdomain ] let be an anonymous arrovian swf on some domain . then .let and such that and .we have to show that .first assume there are and such that or .then , implies that .otherwise , and .implies that .hence , and .has established that arrovian aggregation is only possible if individual preferences are based on pairwise comparisons . in the remainder of this paper , we will with slight abuse of notation treat arrovian swfs as functions from to with .the following lemmas show that for every preference profile and all alternatives and , only depends on the number of agents who prefer to , whenever is from the domain of -preferences and represents . 
given preference profile ,let be the set of agents who strictly prefer over and .also , let be the set of agents who are indifferent between and .we only show case [ item : pareto1 ] .case [ item : pareto2 ] can be proved by a symmetric argument .let and consider a preference profile such that and the values of are irrelevant for all .let .since for all , the indifference part of pareto optimality implies that .hence , .similarly , we get . takes the following form for some . the indifference part of pareto optimality and imply that for some .this implies that , i.e. , .similarly , we get . in particular , .since , iia implies that .hence , .we first prove the case when all of are distinct .let and consider a preference profile such that and for all and .then , by , we can assume without loss of generality that for all .now consider the preference profile note that and because by assumption .let and .since , we have by iia .moreover , and iia yield . implies that for all without loss of generality .thus , for some , takes the form now consider the preference profile and let . the indifference part of pareto optimality of and imply that for some . since , we get that .let .let such that , , , and for all and . and denote the corresponding collective ssb functions .since satisfies iia , we have and without loss of generality .implies that we can assume without loss of generality that and take the following form for some . that and by the way they were constructed from and .since satisfies iia , we get that and .in particular , this means that and .the scalar disappears by choosing suitable ssb functions representing the collective preferences without loss of generality .implies that only depends on and and not on and .hence , there is a function such that for all and .we now leverage the indifference part of pareto optimality to show that is a linear combination of the s .hence , is affine welfare maximizing . for all , let . for convenience ,we write for . since for all , it suffices to show that for all . to this end, we will first show that for all with .let as above , , and consider the following preference profile with . we have that , for and , for all .the indifference part of pareto optimality implies that .let , , and . since , by definition of , we have to show that . by definition of , we get that takes the following form . from , it follows that .this proves the desired relationship .now we can rewrite to by definition of , this is equivalent to to show this , let and consider the following preference profile with . that , for and , for all .the indifference part of pareto optimality implies that . with the same definitions as before and , takes the following form . from , we get that .hence , .this is equivalent to where the last equality follows from skew - symmetry of and the definition of .this proves . fromwe know that there are such that , for all , .assume for contradiction that for some .let be the set of agents such that and consider a preference profile with such that then , for , we have that for all and for all .pareto optimality of implies that .however , we have which is a contradiction .p. anand .rationality and intransitive preference : foundations for the modern view . in p.anand , p. k. pattanaik , and c. puppe , editors , _ the handbook of rational and social choice _ , chapter 6 .oxford university press , 2009 .t. c. bergstrom . when non - transitive relations take maxima and competitive equilibrium ca nt be beat . in w.neuefeind and r. g. 
riezmann, editors, _economic theory and international trade (essays in memoriam of j. trout rader)_, pages 29-52. springer-verlag, 1992.
d. e. campbell and j. s. kelly. impossibility theorems in the arrovian framework. in k. j. arrow, a. k. sen, and k. suzumura, editors, _handbook of social choice and welfare_, volume 1, chapter 1. elsevier, 2002.
m. condorcet. _essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix_. imprimerie royale, 1785. facsimile published in 1972 by chelsea publishing company, new york.
c. d'aspremont and l. gevers. social welfare functionals and interpersonal comparability. in k. j. arrow, a. k. sen, and k. suzumura, editors, _handbook of social choice and welfare_, volume 1, chapter 10, pages 459-541. elsevier, 2002.
g. kreweras. aggregation of preference orderings. in _mathematics and social sciences i: proceedings of the seminars of menthon-saint-bernard, france (1-27 july 1960) and of gösing, austria (3-27 july 1962)_, pages 73-79, 1965.
m. le breton and j. a. weymark. arrovian social choice theory on economic domains. in k. j. arrow, a. k. sen, and k. suzumura, editors, _handbook of social choice and welfare_, volume 2, chapter 17. north-holland, 2011.
m. j. machina. generalized expected utility analysis and the nature of observed violations of the independence axiom. in b. stigum and f. wenstop, editors, _foundations of utility and risk theory with applications_, chapter 5. springer, 1983.
j. redekop. arrow theorems in economic environments. in w. a. barnett, h. moulin, m. salles, and n. j. schofield, editors, _social choice, welfare, and ethics_, pages 163-185. cambridge university press, 1995.
r. l. rivest and e. shen. an optimal single-winner preferential voting system based on game theory. in _proceedings of the 3rd international workshop on computational social choice (comsoc)_, pages 399-410, 2010.
h. sonnenschein. demand theory without transitive preference with applications to the theory of competitive equilibrium. in j. chipman, l. hurwicz, m. richter, and h. sonnenschein, editors, _preferences, utility and demand_. houghton mifflin harcourt, 1971.
we consider social welfare functions that satisfy arrow s classic axioms of _ independence of irrelevant alternatives _ and _ pareto optimality _ when individual and collective preferences are continuous and convex . these assumptions are sufficient for the existence of maximal elements and the choice consistency of functions that return these elements . we provide characterizations of both the domains of preferences and the social welfare functions that allow for arrovian aggregation . the domains allow for arbitrary preferences over pure outcomes , which in turn completely determine an agent s preferences over all remaining outcomes . the only arrovian social welfare functions on these domains constitute an intriguing combination of utilitarianism and pairwiseness . when also assuming anonymity , arrow s impossibility turns into a complete characterization of a unique desirable social welfare function .
several rich strands of work in the behavioral sciences have been concerned with characterizing the nature and sources of human error .these include the broad of notion of _ bounded rationality _ and the subsequent research beginning with kahneman and tversky on heuristics and biases . with the growing availability of large datasets containing millions of human decisions on fixed , well - defined , real - world tasks, there is an interesting opportunity to add a new style of inquiry to this research given a large stream of decisions , with rich information about the context of each decision , can we algorithmically characterize and predict the instances on which people are likely to make errors ?this genre of question analyzing human errors from large traces of decisions on a fixed task also has an interesting relation to the canonical set - up in machine learning applications .typically , using instances of decision problems together with `` ground truth '' showing the correct decision , an algorithm is trained to produce the correct decisions in a large fraction of instances .the analysis of human error , on the other hand , represents a twist on this formulation : given instances of a task in which we have both the correct decision _ and _ a human s decision , the algorithm is trained to recognize future instances on which the human is likely to make a mistake .predicting human error from this type of trace data has a history in human factors research , and a nascent line of work has begun to apply current machine - learning methods to the question .[ [ model - systems - for - studying - human - error ] ] * model systems for studying human error * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as the investigation of human error using large datasets grows increasingly feasible , it becomes useful to understand which styles of analysis will be most effective .for this purpose , as in other settings , there is enormous value in focusing on model systems where one has exactly the data necessary to ask the basic questions in their most natural formulations .what might we want from such a model system ?* it should consist of a task for which the context of the human decisions has been measured as thoroughly as possible , and in a very large number of instances , to provide the training data for an algorithm to analyze errors .* so that the task is non - trivial , it should be challenging even for highly skilled human decision - makers . * notwithstanding the previous point ( ii ) , the `` ground truth '' the correctness of each candidate decision should be feasibly computable by an algorithm .guided by these desiderata , we focus in this paper on chess as a model system for our analysis . in doing so, we are proceeding by analogy with a long line of work in behavioral science using chess as a model for human decision - making .chess is a natural domain for such investigations , since it presents a human player with a sequence of concrete decisions which move to play next with the property that some choices are better than others . indeed , because chess provides data on hard decision problems in such a pure fashion , it has been described as the `` drosophila of psychology '' .( it is worth noting our focus here on _ human _ decisions in chess , rather than on designing algorithms to play chess .this latter problem has also , of course , generated a rich literature , along with a closely related tag - line as the `` drosophila of artificial intelligence '' . 
)[ [ chess - as - a - model - system - for - human - error ] ] * chess as a model system for human error * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + despite the clean formulation of the decisions made by human chess players , we still must resolve a set of conceptual challenges if our goal is to assemble a large corpus of chess moves with ground - truth labels that classify certain moves as errors . let us consider three initial ideas for how we might go about this , each of which is lacking in some crucial respect for our purposes .first , for most of the history of human decision - making research on chess , the emphasis has been on focused laboratory studies at small scales in which the correct decision could be controlled by design . in our list of desiderata , this means that point ( iii ) , the availability of ground truth , is well under control , but a significant aspect of point ( i ) the availability of a vast number of instances is problematic due to the necessarily small scales of the studies . a second alternative would be to make use of two important computational developments in chess the availability of databases with millions of recorded chess games by strong players ; and the fact that the strongest chess programs generally referred to as _ chess engines _ now greatly outperform even the best human players in the world .this makes it possible to analyze the moves of strong human players , in a large - scale fashion , comparing their choices to those of an engine .this has been pursued very effectively in the last several years by biswas and regan ; they have used the approach to derive interesting insights including proposals for how to estimate the depth at which human players are analyzing a position . for the current purpose of assembling a corpus with ground - truth error labels , however , engines present a set of challenges .the basic difficulty is that even current chess engines are far from being able to provide guarantees regarding the best move(s ) in a given position . in particular ,an engine may prefer move to in a given position , supplementing this preference with a heuristic numerical evaluation , but may ultimately lead to the same result in the game , both under best play and under typical play . in these cases , it is hard to say that choosing should be labeled an error .more broadly , it is difficult to find a clear - cut rule mapping an engine s evaluations to a determination of human error , and efforts to label errors this way would represent a complex mixture of the human player s mistakes and the nature of the engine s evaluations . finally , a third possibility is to go back to the definition of chess as a deterministic game with two players ( white and black ) who engage in alternating moves , and with a game outcome that is either ( a ) a win for white ,( b ) a win for black , or ( c ) a draw .this means that from any position , there is a well - defined notion of the outcome with respect to optimal play by both sides in game - theoretic terms , this is the _ minimax value _ of the position . in each position , it is the case that white wins with best play , or black wins with best play , or it is a draw with best play , and these are the three possible minimax values for the position. 
this perspective provide us with a clean option for formulating the notion of an error , namely the direct game - theoretic definition : a player has committed an error if their move worsens the minimax value from their perspective .that is , the player had a forced win before making their move but now they do nt ; or the player had a forced draw before making their move but now they do nt .but there s an obvious difficulty with this route , and it s a computational one : for most chess positions , determining the minimax value is hopelessly beyond the power of both human players and chess engines alike .we now discuss the approach we take here .[ [ assessing - errors - using - tablebases ] ] * assessing errors using tablebases * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in our work , we use minimax values by leveraging a further development in computer chess the fact that chess has been solved for all positions with at most pieces on the board , for small values of .( we will refer to such positions as _-piece positions_. ) solving these positions has been accomplished not by forward construction of the chess game tree , but instead by simply working backward from terminal positions with a concrete outcome present on the board and filling in all other minimax values by dynamic programming until all possible -piece positions have been enumerated .the resulting solution for all -piece positions is compiled into an object called a _-piece tablebase _ , which lists the game outcome with best play for each of these positions .the construction of tablebases has been a topic of interest since the early days of computer chess , but only with recent developments in computing and storage have truly large tablebases been feasible .proprietary tablebases with have been built , requiring in excess of a hundred terabytes of storage ; tablebases for are much more manageable , though still very large , and we focus on the case of in what follows . tablebases and traditional chess engines are thus very different objects .chess engines produce strong moves for arbitrary positions , but with no absolute guarantees on move quality in most cases ; tablebases , on the other hand , play perfectly with respect to the game tree indeed , effortlessly , via table lookup for the subset of chess containing at most pieces on the board .thus , for arbitrary -piece positions , we can determine minimax values , and so we can obtain a large corpus of chess moves with ground - truth error labels : starting with a large database of recorded chess games , we first restrict to the subset of -piece positions , and then we label a move as an error if and only it worsens the minimax value from the perspective of the player making the move . adapting chess terminology to the current setting , we will refer to such an instance as a _blunder_. this is our model system for analyzing human error ; let us now check how it lines up with desiderata ( i)-(iii ) for a model system listed above .chess positions with at most pieces arise relatively frequently in real games , so we are left with many instances even after filtering a database of games to restrict to only these positions ( point ( i ) ) . 
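a minimal sketch of the labelling pipeline described above is given below (hypothetical helper names; the tablebase lookup itself is assumed rather than implemented): keep only positions with at most k pieces, then mark a move as a blunder when it strictly worsens the minimax value for the player who made it.

    # count pieces from the board field of a FEN string (letters denote pieces).
    def piece_count(fen):
        return sum(ch.isalpha() for ch in fen.split()[0])

    # minimax values from the side to move's perspective: 1 win, 0 draw, -1 loss.
    # `tablebase_value` is an assumed lookup into a k-piece tablebase.
    def is_blunder(position_before, position_after, tablebase_value):
        value_before = tablebase_value(position_before)
        value_after = -tablebase_value(position_after)  # it is the opponent's move now
        return value_after < value_before

    # example: a K+P vs K position has 3 pieces and would pass a k = 6 filter
    print(piece_count("4k3/8/4K3/4P3/8/8/8/8 w - - 0 1"))   # 3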
crucially , despite their simple structure, they can induce high error rates by amateurs and non - trivial error rates even by the best players in the world ; in recognition of the inherent challenge they contain , textbook - level treatments of chess devote a significant fraction of their attention to these positions ( point ( ii ) ) . andthey can be evaluated perfectly by tablebases ( point ( iii ) ) .focusing on -piece positions has an additional benefit , made possible by a combination of tablebases and the recent availability of databases with millions of recorded chess games .the most frequently - occurring of these positions arise in our data thousands of times .as we will see , this means that for some of our analyses , we can control for the exact position on the board and still have enough instances to observe meaningful variation .controlling for the exact position is not generally feasible with arbitrary positions arising in the middle of a chess game , but it becomes possible with the scale of data we now have , and we will see that in this case it yields interesting and in some cases surprising insights . finally , we note that our definition of blunders , while concrete and precisely aligned with the minimax value of the game tree , is not the only definition that could be considered even using tablebase evaluations. in particular , it would also be possible to consider `` softer '' notions of blunders .suppose for example that a player is choosing between moves and , each leading to a position whose minimax value is a draw , but suppose that the position arising after is more difficult for the opponent , and produces a much higher empirical probability that the opponent will make a mistake at some future point and lose .then it can be viewed as a kind of blunder , given these empirical probabilities , to play rather than the more challenging .this is sometimes termed _ speculative play _ , and it can be thought of primarily as a refinement of the coarser minimax value .this is an interesting extension , but for our work here we focus on the purer notion of blunders based on the minimax value .in formulating our analysis , we begin from the premise that for analyzing error in human decisions , three crucial types of features are the following : * the skill of the decision - maker ; * the time available to make the decision ; and * the inherent difficulty of the decision . any instance of the problem will implicitly or explicitly contain features of all three types : an individual of a particular level of skill is confronting a decision of a particular difficulty , with a given amount of time available to make the decision . in our current domain , as in any other setting where the question of human error is relevant , there are a number of basic genres of question that we would like to ask .these include the following .* for predicting whether an error will be committed in a given instance , which types of features ( skill , time , or difficulty ) yield the most predictive power ? * in which kinds of instances does greater skill confer the largest relative benefit ? is it for more difficult decisions ( where skill is perhaps most essential ) or for easier ones ( where there is the greatest room to realize the benefit ) ?are there particular kinds of instances where skill does not in fact confer an appreciable benefit ? * an analogous set of questions for time in place of skill : in which kinds of instances does greater time for the decision confer the largest benefit ? 
is additional time more beneficial for hard decisions or easy ones ? and are there instances where additional time does not reduce the error rate ? * finally , there are natural questions about the interaction of skill and time : is it higher - skill or lower - skill decision - makers who benefit more from additional time ?these questions motivate our analyses in the subsequent sections .we begin by discussing how features of all three types ( skill , time , and difficulty ) are well - represented in our domain .our data comes from two large databases of recorded chess games .the first is a corpus of approximately 200 million games from the free internet chess server ( fics ) , where amateurs play each other on - line .the second is a corpus of approximately 1 million games played in international tournaments by the strongest players in the world .we will refer to the first of these as the fics dataset , and the second as the gm dataset .( gm for `` grandmaster , '' the highest title a chess player can hold . ) for each corpus , we extract all occurrences of -piece positions from all of the games ; we record the move made in the game from each occurrence of each position , and use a tablebase to evaluate all possible moves from the position ( including the move that was made ) .this forms a single instance for our analysis .since we are interested in studying errors , we exclude all instances in which the player to move is in a theoretically losing position where the opponent has a direct path to checkmate because there are no blunders in losing positions ( the minimax value of the position is already as bad as possible for the player to move ) .there are 24.6 million ( non - losing ) instances in the fics dataset , and 880,000 in the gm dataset . we now consider how feature types ( a ) , ( b ) , and ( c ) are associated with each instance .first , for skill , each chess player in the data has a numerical rating , termed the _ elo rating _ , based on their performance in the games they ve played .higher numbers indicate stronger players , and to get a rough sense of the range : most amateurs have ratings in the range 1000 - 2000 , with extremely strong amateurs getting up to 2200 - 2400 ; players above 2500 - 2600 belong to a rarefied group of the world s best ; and at any time there are generally about fewer than five people in the world above 2800 .if we think of a game outcome in terms of points , with 1 point for a win and 0.5 points for a draw , then the elo rating system has the property that when a player is paired with someone 400 elo points lower , their expected game outcome is approximately points an enormous advantage . , the expected score for the higher - ranked player under the elo system is . ] for our purposes , an important feature of elo ratings is the fact that a single number has empirically proven so powerful at predicting performance in chess games .while ratings clearly can not contain all the information about players strengths and weaknesses , their effectiveness in practice argues that we can reasonably use a player s rating as a single numerical feature that approximately represents their skill . with respect to temporal information ,chess games are generally played under time limits of the form , `` play moves in minutes '' or `` play the whole game in minutes . ''players can choose how they use this time , so on each move they face a genuine decision about how much of their remaining allotted time to spend . 
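for reference, the expected score under the elo system has the standard logistic closed form below; this is the usual elo formula, not something estimated from our data:

    # expected score of a player rated r_a against a player rated r_b under the
    # standard Elo model; a 400-point rating advantage gives roughly a 10:1 edge.
    def elo_expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    print(round(elo_expected_score(2000, 1600), 3))   # 0.909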
the fics dataset contains the amount of time remaining in the game when each move was played ( and hence the amount of time spent on each move as well ) ; most of the games in the fics dataset are played under extremely rapid time limits , with a large fraction of them requiring that the whole game be played in 3 minutes for each player . to avoid variation arising from the game duration ,we focus on this large subset of the fics data consisting exclusively of games with 3 minutes allocated to each side .our final set of features will be designed to quantify the difficulty of the position on the board i.e. the extent to which it is hard to avoid selecting a move that constitutes a blunder .there are many ways in which one could do this , and we are guided in part by the goal of developing features that are less domain - specific and more applicable to decision tasks in general .we begin with perhaps the two most basic parameters , analogues of which would be present in any setting with discrete choices and a discrete notion of error these are the number of legal moves in the position , and the number of these moves that constitute blunders .later , we will also consider a general family of parameters that involve looking more deeply into the search tree , at moves beyond the immediate move the player is facing . to summarize , in a single instance in our data , a player of a given rating , with a given amount of time remaining in the game , faces a specific position on the board , and we ask whether the move they select is a blunder .we now explore how our different types of features provide information about this question , before turning to the general problem of prediction .we begin by considering a set of basic features that help quantify the difficulty inherent in a position .there are many features we could imagine employing that are highly domain - specific to chess , but our primary interest is in whether a set of relatively generic features can provide non - trivial predictive value .above we noted that in any setting with discrete choices , one can always consider the total number of available choices , and partition these into the number that constitute blunders and the number that do not constitute blunders .in particular , let s say that in a given chess position , there are legal moves available these are the possible choices and of these , are blunders , in that they lead to a position with a strictly worse minimax value .note that it is possible to have , but we exclude these positions because it is impossible to blunder .also , by the definition of the minimax value , we must have ; that is , there is always at least one move that preserves the minimax value . , for the fics dataset.,scaledwidth=40.0% ]a global check of the data reveals an interesting bimodality in both the fics and gm datasets : positions with and positions with are both heavily represented .the former correspond to positions in which there is a unique blunder , and the latter correspond to positions in which there is a unique correct move to preserve the minimax value .our results will cover the full range of values , but it is useful to know that both of these extremes are well - represented .now , let us ask what the empirical blunder rate looks like as a bivariate function of this pair of variables . 
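As a concrete illustration of how the pair (n, b) could be extracted for a single instance, the sketch below uses the python-chess library together with a Syzygy tablebase. The paper does not specify its tooling, so the library choice, the tablebase path, and the example position are assumptions for illustration only.

```python
import chess
import chess.syzygy

def wdl_sign(wdl):
    # collapse Syzygy WDL scores (-2..2) onto loss / draw / win = -1 / 0 / +1
    return (wdl > 0) - (wdl < 0)

def count_moves_and_blunders(fen, tablebase):
    """Return (n, b): the number of legal moves in the position and the
    number of them that strictly worsen the minimax value for the mover."""
    board = chess.Board(fen)
    value_now = wdl_sign(tablebase.probe_wdl(board))   # mover's perspective
    # (instances with value_now == -1, i.e. theoretically lost positions,
    #  are excluded from the analysis, as described above)
    n = b = 0
    for move in board.legal_moves:
        n += 1
        board.push(move)
        value_after = -wdl_sign(tablebase.probe_wdl(board))  # negate: opponent moves next
        board.pop()
        if value_after < value_now:
            b += 1
    return n, b

# Usage sketch; the tablebase directory and the position are placeholders.
tb = chess.syzygy.open_tablebase("/path/to/syzygy")
n, b = count_moves_and_blunders("8/8/4k3/8/8/4P3/4K3/8 w - - 0 1", tb)
tb.close()
```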
Over all instances in which the underlying position has n legal moves, b of which are blunders, we define the empirical blunder rate to be the fraction of those instances in which the player blunders. How does the empirical blunder rate vary in n and b? It seems natural to suppose that for fixed n it should generally increase in b, since there are more possible blunders to make. On the other hand, instances with b = n - 1 often correspond to chess positions in which the only non-blunder is "obvious" (for example, if there is only one way to recapture a piece), and so one might conjecture that the empirical blunder rate will be lower in this case. In fact, the empirical blunder rate is generally monotone in b, as shown by the heatmap representation in figure [fig:blunder-rate-heat-map]. (We show the function for the FICS data; the function for the GM data is similar.) Moreover, if we look at the heavily populated line b = n - 1, the blunder rate is increasing in n: as there are more blunders to compete with the unique non-blunder, it becomes correspondingly harder to make the right choice.

(Figure [fig:blunder-rate-blunder-potential]: empirical blunder rate as a function of blunder potential, together with the fitted γ-value curve defined in section [subsec:difficulty]; panel (a) GM data, panel (b) FICS data.)

*Blunder potential.* Given the monotonicity we observe, there is an informative way to combine n and b: simply take their ratio. This quantity, which we term the _blunder potential_ of a position and denote β = b/n, answers the question: "if the player selects a move uniformly at random, what is the probability that they will blunder?" This definition will prove useful in many of the analyses to follow. Intuitively, we can think of it as a direct measure of the _danger_ inherent in a position, since it captures the relative abundance of ways to go wrong. In figure [fig:blunder-rate-blunder-potential] we plot the proportion of blunders among instances with blunder potential β, for both our GM and FICS datasets, on linear as well as logarithmic y-axes. The striking regularity of the curves shows how strongly the availability of potential mistakes translates into actual errors. One natural starting point for interpreting this relationship is to note that if players were truly selecting their moves uniformly at random, these curves would lie along the diagonal, with the blunder rate equal to the blunder potential. The fact that they lie below this line indicates that in aggregate players preferentially select non-blunders, as one would expect. And the fact that the curve for the GM data lies much further below it reflects the much greater skill of the players in this dataset, a point that we will return to shortly.

*The γ-value.* We find that a surprisingly simple model captures the qualitative shapes of the curves in figure [fig:blunder-rate-blunder-potential] quite well. Suppose that instead of selecting a move uniformly at random, a player selected from a biased distribution in which they were preferentially γ times more likely to select a non-blunder than a blunder, for a parameter γ.
If this were the true process for move selection, then the empirical blunder rate of a position with n legal moves and b blunders would be
\[
P_\gamma(n,b) \;=\; \frac{b}{b + \gamma\,(n-b)} \, .
\]
We will refer to this as the _γ-value_ of the position, with parameter γ. Using the definition of the blunder potential to write β = b/n, we can express the γ-value directly as a function of the blunder potential:
\[
P_\gamma \;=\; \frac{\beta}{\beta + \gamma\,(1-\beta)} \, .
\]
We can now find the value of γ for which this expression best approximates the empirical curves in figure [fig:blunder-rate-blunder-potential]. The best-fit value of γ is larger for the GM data than for the FICS data, again reflecting the skill difference between the two domains. These fitted curves are shown superimposed on the empirical plot in the figure (on the right, with logarithmic y-axes). We note that in game-theoretic terms the γ-value can be viewed as a kind of _quantal response_, in which players in a game select among alternatives with a probability that decreases according to a particular function of the alternative's payoff. Since the minimax value of the position corresponds to the game-theoretic payoff in our case, a selection rule that probabilistically favors non-blunders over blunders can be viewed as following this principle. (We note that our functional form cannot be directly mapped onto standard quantal response formulations. The standard formulations are strictly monotonically decreasing in payoff, whereas we have cases where two different blunders move the minimax value by different amounts, in particular when a win changes to a draw versus when a win changes to a loss, and we treat these the same in our simple formulation of the γ-value.)

A key focus in the previous subsection was to understand how the empirical blunder rate varies as a function of parameters of the instance. Here we continue this line of inquiry, now with respect to the skill of the player in addition to the difficulty of the position. Recall that a player's _Elo rating_ is a function of the outcomes of the games they have played, and is effective in practice for predicting the outcome of a game between two rated players. It is for this reason that we use a player's rating as a proxy for their skill. However, given that ratings are determined by which games a player wins, draws, or loses, rather than by the extent to which they blunder in the tablebase positions we study, a first question is whether the empirical blunder rate in our data shows a clean dependence on rating. In fact it does. Figure [fig:blunder-rate-rating] shows the empirical blunder rate averaged over all instances in which the player has a given rating. The blunder rate declines smoothly with rating for both the GM and FICS data, with a flattening of the curve at higher ratings.

*The skill gradient.* We can think of the downward slope in figure [fig:blunder-rate-rating] as a kind of _skill gradient_, showing the reduction in blunder rate as skill increases. The steeper this reduction is in a given setting, the greater the empirical benefit of skill in reducing error. It is therefore natural to ask how the skill gradient varies across different conditions in our data. As a first way to address this, we take each possible value of the blunder potential (rounded to a fixed discretization step) and consider the empirical blunder rate as a function of the player's rating at that value of β. Figure [fig:blunder-rate-potential-rating] shows these curves, one for each discretized value of β, for both the GM and FICS datasets.

We observe two properties of these curves. First, there is remarkably little variation among them: when viewed on a logarithmic y-axis the curves are almost completely parallel, indicating the same rate of proportional decrease across all blunder potentials. A second, arguably more striking, property is how little the curves overlap in their ranges of y-values. In effect, the curves form a kind of "ladder" based on blunder potential: for every value of the discretized blunder potential, every rating in the 1200-1800 range on FICS has a lower empirical blunder rate at that blunder potential than the best of these ratings has at the next higher discretized value. In effect, each additional increment in blunder potential contributes more, averaging over all instances, to the aggregate empirical blunder rate than an additional 600 rating points, despite the fact that 600 rating points represent a vast difference in chess performance. We see a similar effect for the GM data, where small increases in blunder potential have a greater effect on blunder rate than the enormous difference between a rating of 2300 and a rating of 2700. (Indeed, players rated 2700 are making errors at a greater rate in positions of blunder potential 0.9 than players rated 1200 are making in positions of blunder potential 0.3.) And we see the same effects when we separately fix the numerator b and the denominator n that constitute the blunder potential, as shown in figure [fig:skill-gradient].

To the extent that this finding runs counter to our intuition, it bears an interesting relation to the _fundamental attribution error_: the tendency to attribute differences in people's performance to differences in their individual attributes, rather than to differences in the situations they face. What we are uncovering here is that a basic measure of the situation (the blunder potential, which as we noted above is a measure of the _danger_ inherent in the underlying chess position) is arguably playing a larger role than the players' skill. This finding also relates to work of Abelson on quantitative measures in a different competitive domain, baseball, where he found that a player's batting average accounts for very little of the variance in their performance in any single at-bat. We should emphasize, however, that despite the strong effect of blunder potential, skill does play a fundamental role in our domain, as the analysis of this section has shown. And in general it is important to take multiple types of features into account in any analysis of decision-making, since only certain features may be under our control in a given application. For example, we may be able to control the quality of the people we recruit to a decision, even if we cannot control the difficulty of the decision itself.

*The skill gradient for fixed positions.* Grouping positions together by common (n, b) values gives us a rough sense of how the skill gradient behaves in positions of varying difficulty. But this analysis still aggregates a large number of different positions, each with its own particular properties, and so it becomes interesting to ask: how does the empirical blunder rate vary with Elo rating _when we fix the exact position on the board?
_ the fact that we are able to meaningfully ask this question is based on a fact noted in section [ sec : intro ] , that many non - trivial -piece positions recur in the fics data , exactly , several thousand times . for each such position , we have enough instances to plot the function , the rate of blunders committed by players of rating in position .let us say that the function is _ skill - monotone _ if it is decreasing in that is , if players of higher rating have a lower blunder rate in position .a natural conjecture would be that every position is skill - monotone , but in fact this is not the case . among the most frequent positions , we find several that we term _ skill - neutral _ , with remaining approximately constant in , as well as several that we term _ skill - anomalous _ , with increasing in .figure [ fig : skill - gradient - position ] shows a subset of the most frequently occurring positions in the fics data that contains examples of each of these three types : skill - monotone , skill - neutral , and skill - anomalous . is described in forsyth - edwards notation ( fen ) above the panel in which its plot appears . ]the existence of skill - anomalous positions is surprising , since there is a no _ a priori _ reason to believe that chess as a domain should contain common situations in which stronger players make more errors than weaker players .moreover , the behavior of players in these particular positions does not seem explainable by a strategy in which they are deliberately making a one - move blunder for the sake of the overall game outcome . in each of the skill - anomalous examples in figure [ fig : skill - gradient - position ] , the player to move has a forced win , and the positionis reduced enough that the worst possible game outcome for them is a draw under any sequence of moves , so there is no long - term value in blundering away the win on their present move .finally , we consider our third category of features , the time that players have available to make their moves .recall that players have to make their own decisions about how to allocate a fixed budget of time across a given number of moves or the rest of the game .the fics data has information about the time remaining associated with each move in each game , so we focus our analysis on fics in this subsection . specifically , as noted in section [ sec : features ] , fics games are generally played under extremely rapid conditions , and for uniformity in the analysis we focus on the most commonly - occurring fics time constraint the large subset of games in which each player is allocated 3 minutes for the whole game . as a first object of study ,let s define the function to be the empirical blunder rate in positions where the player begins considering their move with seconds left in the game .figure [ fig : blunder - rate - time - left ] shows a plot of ; it is natural that the blunder rate increases sharply as approaches , though it is notable how flat the value of becomes once exceeds roughly 10 seconds .[ [ the - time - gradient ] ] * the time gradient * + + + + + + + + + + + + + + + + + + + this plot in figure [ fig : blunder - rate - time - left ] can be viewed as a basic kind of _ time gradient _ , analogous to the skill gradient , showing the overall improvement in empirical blunder rate that arises from having extra time available . 
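A minimal sketch of how the time-gradient curve just described could be tabulated, assuming the instances sit in a pandas DataFrame. The column names ('time_left', 'is_blunder', 'blunder_potential') are hypothetical stand-ins, not taken from the paper.

```python
import pandas as pd

def blunder_rate_by_time(df, max_seconds=180):
    """E(t): empirical blunder rate as a function of the number of seconds
    remaining when the player began considering the move."""
    t = df['time_left'].clip(upper=max_seconds).round().astype(int)
    grouped = df.groupby(t)['is_blunder']
    return pd.DataFrame({'blunder_rate': grouped.mean(),
                         'n_instances': grouped.size()})

# The finer-grained curves discussed next condition additionally on blunder
# potential (and then also on rating), e.g.:
# df.groupby([t, df['blunder_potential'].round(1)])['is_blunder'].mean()
```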
heretoo we can look at how the time gradient restricted to positions with fixed blunder potential , or fixed blunder potential and player rating .we start with figure [ fig : blunder - rate - potential - time - left ] , which shows , the blunder rate for players within a narrow skill range ( 1500 - 1599 elo ) with seconds remaining in positions with blunder potential . in this sense , it is a close analogue of figure [ fig : blunder - rate - potential - rating ] , which plotted , and for values of above seconds , it shows a very similar `` ladder '' structure in which the role of blunder potential is dominant .specifically , for every , players are blundering at a lower rate with to seconds remaining at blunder potential than they are with over a minute remaining at blunder potential . a small increase in blunder potentialhas a more extensive effect on blunder rate than a large increase in available time .we can separate the instances further both by blunder potential and by the rating of the player , via the function which gives the empirical blunder rate with seconds remaining when restricted to players of rating in positions of blunder potential .figure [ fig : time - gradient ] plots these functions , with a fixed value of in each panel .we can compare curves for players of different rating , observing that for higher ratings the curves are steeper : extra time confers a greater relative empirical benefit on higher - rated players . across panels, we see that for higher blunder potential the curves become somewhat shallower : more time provides less relative improvement as the density of possible blunders proliferates .but equally or more striking is the fact that all curves retain a roughly constant shape , even as the empirical blunder rate climbs by an order of magnitude from the low ranges of blunder potential to the highest .comparing across points in different panels helps drive home the role of blunder potential even when considering skill and time simultaneously .consider for example ( a ) instances in which players rated 1200 ( at the low end of the fics data ) with 5 - 8 seconds remaining face a position of blunder potential 0.4 , contrasted with ( b ) instances in which players rated 1800 ( at the high end of the fics data ) with 42 - 58 seconds remaining face a position of blunder potential 0.8 . 
as the figure shows , the empirical blunder rate is lower in instances of type ( a ) a weak player in extreme time pressure is making blunders at a lower rate because they re dealing with positions that contain less danger .[ [ time - spent - on - a - move ] ] * time spent on a move * + + + + + + + + + + + + + + + + + + + + + + thus far we ve looked at how the empirical blunder rate depends on the amount of time remaining in the game .however , we can also ask how the probability of a blunder varies with the amount of time the player actually spends considering their move before playing it .when a player spends more time on a move , should we predict they re less likely to blunder ( because they gave the move more consideration ) or more likely to blunder ( because the extra time suggests they did nt know what do ) ?the data turns out to be strongly consistent with the latter view : the empirical blunder rate is higher in aggregate for players who spend more time playing a move .we find that this property holds across the range of possible values for the time remaining and the blunder potential , as well as when we fix the specific position .we ve now seen how the empirical blunder rate depends on our three fundamental dimensions : difficulty , the skill of the player , and the time available to them .we now turn to a set of tasks that allow us to further study the predictive power of these dimensions . in order to formulate our prediction methods for blunders, we first extend the set of features available for studying the difficulty of a position .once we have these additional features , we will be prepared to develop the predictions themselves . thus far , when we ve considered a position s difficulty , we ve used information about the player s immediate moves , and then invoked a tablebase to determine the outcome after these immediate moves .we now ask whether it is useful for our task to consider longer sequences of moves beginning at the current position . specifically ,if we consider all -move sequences beginning at the current position , we can organize these into a _ game tree _ of depth with the current position as the root , and nodes representing the states of the game after each possible sequence of moves .chess engines use this type of tree as their central structure in determining which moves to make ; it is less obvious , however , how to make use of these trees in analyzing blunders by human players , given players imperfect selection of moves even at depth 1 .let us introduce some notation to describe how we use this information .suppose our instance consists of position , with legal moves , of which are blunders. we will denote the moves by , leading to positions respectively , and we ll suppose they are indexed so that are the non - blunders , and are the blunders .we write for the indices of the non - blunders and for the indices of the blunders . finally ,from each position , there are legal moves , of which are blunders .the set of all pairs for constitutes a potentially useful source of information in the depth-2 game tree from the current position .what might it tell us ? 
first, suppose that position , for , is a position reachable via a blunder .then if the blunder potential is large , this means that it may be challenging for the opposing player to select a move that capitalizes on the blunder made at the root position ; there is a reasonable chance that the opposing will instead blunder , restoring the minimax value to something larger .this , in turn , means that it may be harder for the player in the root position of our instance to see that move , leading to position , is in fact a blunder .the conclusion from this reasoning is that when the blunder potentials of positions for are large , it suggests a larger empirical blunder rate at .it is less clear what to conclude when there are large blunder potentials at positions for positions reachable by non - blunders .again , it suggests that player at the root may have a harder time correctly evaluating the positions for ; if they appear better than they are , it could lead the player to favor these non - blunders . on the other hand , the fact that these positions are hard to evaluate could also suggest a general level of difficulty in evaluating , which could elevate the empirical blunder rate .there is also a useful aggregation of this information , as follows .if we define and , and analogously for and , then the ratio is a kind of aggregate blunder potential for all positions reachable by blunders , and analogously for with respect to positions reachable by non - blunders . in the next subsection, we will see that the four quantities , , , indeed contain useful information for prediction , particularly when looking at families of instances that have the same blunder potential at the root position .we note that one can construct analogous information at greater depths in the game tree , by similar means , but we find in the next subsection that these do not currently provide improvements in prediction performance , so we do not discuss greater depths further here . we develop three nested prediction tasks : in the first task we make predictions about an unconstrained set of instances; in the second we fix the blunder potential at the root position ; and in the third we control for the exact position .[ [ task-1 ] ] * task 1 * + + + + + + + + in our first task we formulate the basic error - prediction problem : we have a large collection of human decisions for which we know the correct answer , and we want to predict whether the decision - maker will err or not . in our context , we predict whether the player to move will blunder , given the position they face and the various features of it we have derived , how much time they have to think , and their skill level . in the process , we seek to understand the relative value of these features for prediction in our domain . we restrict our attention to the 6.6 million instances that occurred in the 320,000 empirically most frequent positions in the fics dataset .since the rate of blundering is low in general , we down - sample the non - blunders so that half of our remaining instances are blunders and the other half are non - blunders .this results in a balanced dataset with 600,000 instances , and we evaluate model performance with accuracy . for ease of interpretation , we use both logistic regression and decision trees . since the relative performance of these two classifiers is virtually identical , but decision trees perform slightly better , we only report the results using decision trees here . 
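The sketch below illustrates this balanced-classification setup with scikit-learn. The DataFrame `df` and all column names are hypothetical stand-ins for the feature groups defined in the following paragraph, and the tree depth is an illustrative choice rather than the paper's setting.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical column names for the feature groups described in the text.
SKILL = ['player_rating', 'opponent_rating']
DEPTH1 = ['n_moves', 'n_blunders', 'n_nonblunders']
DEPTH2 = ['agg_bp_after_blunders', 'agg_bp_after_nonblunders']
TIME = ['time_left']

def balanced_accuracy(df, features, seed=0):
    """Down-sample non-blunders to match blunders, then fit and score a tree."""
    blunders = df[df['is_blunder'] == 1]
    non_blunders = df[df['is_blunder'] == 0].sample(len(blunders), random_state=seed)
    data = pd.concat([blunders, non_blunders])
    X_train, X_test, y_train, y_test = train_test_split(
        data[features], data['is_blunder'], test_size=0.2, random_state=seed)
    tree = DecisionTreeClassifier(max_depth=6, random_state=seed).fit(X_train, y_train)
    return accuracy_score(y_test, tree.predict(X_test))

feature_sets = {'difficulty': DEPTH1 + DEPTH2, 'skill': SKILL,
                'time': TIME, 'all': DEPTH1 + DEPTH2 + SKILL + TIME}
# accuracies = {name: balanced_accuracy(df, cols) for name, cols in feature_sets.items()}
```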
Table [tab:features] defines the features we use for prediction. In addition to the notation defined thus far, we use: the skill features, consisting of the rating of the player and of the opponent; the number of non-blunders in the position; the difficulty features at depth 1; the difficulty features at depth 2, defined in the previous subsection; and the time remaining.

(Table [tab:features]: features for blunder prediction, grouped as described above.)

In table [tab:task1results] we show the performance of various combinations of our features. The most striking result is how dominant the difficulty features are. Using all of them together gives 0.75 accuracy on this balanced dataset, halfway between random guessing and perfect performance. In comparison, skill and time are much less informative on this task. The skill features alone give 55% accuracy, time left yields 53% correct predictions, and neither adds predictive value once the position difficulty features are in the model. The weakness of the skill and time features is consistent with our findings in section [sec:basic], but is still striking given the large ranges over which the Elo ratings and time remaining can extend. In particular, a player rated 1800 will almost always defeat a player rated 1200, yet knowledge of rating is not providing much predictive power in determining blunders on any individual move. Similarly, a player with 10 seconds remaining in the entire game is at an enormous disadvantage compared to a player with two minutes remaining, but this too is not providing much leverage for blunder prediction at the move level. While these results only apply to our particular domain, they suggest a genre of question that can be asked by analogy in many domains. (To take one of many possible examples, one could similarly ask about the error rate of highly skilled drivers in difficult conditions versus bad drivers in safe conditions.) Another important result is that most of the predictive power comes from depth-1 features of the tree. This tells us that the immediate situation facing the player is by far the most informative ingredient. Finally, we note that the prediction results for the GM data (where we do not have time information available) are closely analogous; we get a slightly higher accuracy, and again it comes entirely from our basic set of difficulty features for the position.

Table [tab:task1results]. Accuracy of blunder prediction on the balanced Task 1 dataset.

  model                          accuracy
  random guessing                0.50
  difficulty features (subset)   0.73
  difficulty features (subset)   0.73
  difficulty features (subset)   0.72
  all difficulty features        0.75
  skill features                 0.55
  skill + difficulty features    0.75
  time left                      0.53
  time + difficulty features     0.75
  all features                   0.75

*Human performance on a version of Task 1.* Given the accuracy of algorithms for Task 1, it is natural to ask how this compares to the performance of human chess players on such a task. To investigate this question, we developed a version of Task 1 as a web-app quiz and promoted it on two popular internet chess forums. Each quiz question provided a pair of instances with White to move, each showing the exact position on the board, the ratings of the two players, and the time remaining for each. The two instances were chosen from the FICS data with the property that White blundered in one of them and not the other, and the quiz question was to determine in which instance White blundered.
in this sense , the quiz is a different type of chess problem from the typical style , reflecting the focus of our work here : rather than `` white to play and win , '' it asked `` did white blunder in this position ? '' . averaging over approximately 6000 responses to the quiz from 720 participants , we find an accuracy of , non - trivially better than random guessing but also non - trivially below our model s performance of .the relative performance of the prediction algorithm and the human forum participants forms an interesting contrast , given that the human participants were able to use domain knowledge about properties of the exact chess position while the algorithm is achieving almost its full performance from a single number the blunder potential that draws on a tablebase for its computation .we also investigated the extent to which the guesses made by human participants could be predicted by an algorithm ; our accuracy on this was in fact lower than for the blunder - prediction task itself , with the blunder potential again serving as the most important feature for predicting human guesses on the task .[ [ task-2 ] ] * task 2 * + + + + + + + + given how powerful the depth 1 features are , we now control for and and investigate the predictive performance of our features once blunder potential has been fixed .our strategy on this task is very similar to before : we compare different groups of features on a binary classification task and use accuracy as our measure .these groups of features are : , , , , , and the full set . for each of these models, we have an accuracy score for every pair .the relative performances of the models are qualitatively similar across all pairs : again , position difficulty dominates time and rating , this time at depth 2 instead of depth 1 . in all cases ,the performance of the full feature set is best ( the mean accuracy is 0.71 ) , but alone achieves 0.70 accuracy on average .this further underscores the importance of position difficulty . additionally , inspecting the decision tree models reveals a very interesting dependence of the blunder rate on the depth 1 structure of the game tree .first , recall that the most frequently occurring positions in our datasets have either or . in so - called `` only - move '' situations , where there is only one move that is not a blunder , the dependence of blunder rate on is as one would expect : the higher the ratio , the more likely the player is to blunder .but for positions with only one blunder , the dependence reverses : blunders are _ less _ likely with higher ratios .understanding this latter effect is an interesting open question .[ [ task-3 ] ] * task 3 * + + + + + + + + our final prediction question is about the degree to which time and skill are informative once the position has been fully controlled for . in other words, once we understand everything we can about a position s difficulty , what can we learn from the other dimensions ?to answer this question , we set up a final task where we fix the position completely , create a balanced dataset of blunders and non - blunders , and consider how well time and skill predict whether a player will blunder in the position or not .we do this for all 25 instances of positions for which there are over 500 blunders in our data . 
on average , knowing the rating of the player alone results in an accuracy of 0.62 , knowing the times available to the player and his opponent yields 0.54 , and together they give 0.63 .thus once difficulty has been completely controlled for , there is still substantive predictive power in skill and time , consistent with the notion that all three dimensions are important .we have used chess as a model system to investigate the types of features that help in analyzing and predicting error in human decision - making .chess provides us with a highly instrumented domain in which the time available to and skill of a decision - maker are often recorded , and , for positions with few pieces , the set of optimal decisions can be determined computationally . through our analysiswe have seen that the inherent difficulty of the decision , even approximated simply by the proportion of available blunders in the underlying position , can be a more powerful source of information than the skill or time available .we have also identified a number of other phenomena , including the ways in which players of different skill levels benefit differently , in aggregate , from easier instances or more time .and we have found , surprisingly , that there exist _ skill - anomalous _ positions in which weaker players commit fewer errors than stronger players .we believe there are natural opportunities to apply the paper s framework of skill , time , and difficulty to a range of settings in which human experts make a sequence of decisions , some of which turn out to be in error . in doing so, we may be able to differentiate between domains in which skill , time , or difficulty emerge as the dominant source of predictive information .many questions in this style can be asked . for a setting such as medicine ,is the experience of the physician or the difficulty of the case a more important feature for predicting errors in diagnosis ? or to recall an analogy raised in the previous section , for micro - level mistakes in a human task such as driving , we think of inexperienced and distracted drivers as a major source of risk , but how do these effects compare to the presence of dangerous road conditions ? finally , there are a number of interesting further avenues for exploring our current model domain of chess positions via tablebases .one is to more fully treat the domain as a competitive activity between two parties .for example , is there evidence in the kinds of positions we study that stronger players are not only avoiding blunders , but also steering the game toward positions that have higher blunder potential for their opponent ?more generally , the interaction of competitive effects with principles of error - prone decision - making can lead to a rich collection of further questions .we thank tommy ashmore for valuable discussions on chess engines and human chess performance , the ficsgames.org team for providing the fics data , bob west for help with web development , and ken rogoff , dan goldstein , and sbastien lahaie for their very helpful feedback .this work has been supported in part by a simons investigator award , an aro muri grant , a google research grant , and a facebook faculty research grant .p. jansen .problematic positions and speculative play . in _ computers , chess , and cognition _ , springer , 1990 .e. jones , v. harris .the attribution of attitudes. _ j. experimental social psych ._ , 3(1967 ) .d. kopec .advances in man - machine play . 
In _Computers, Chess, and Cognition_, Springer, 1990.
H. Lakkaraju, J. Kleinberg, J. Leskovec, J. Ludwig, and S. Mullainathan. Human decisions and machine predictions, 2016. Working paper.
K. W. Regan and T. Biswas. Psychometric modeling of decision making via game play. In _IEEE Conference on Computational Intelligence in Games (CIG)_, 2013.
G. Salvendy and J. Sharit. Human error. In _Handbook of Human Factors and Ergonomics_. John Wiley & Sons, 2006.
An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using _chess tablebases_ to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.
the global economy is a prototypic example of complex self - organizing system , whose collective properties emerge spontaneously through many local interactions . in particular , international trade between countries defines a complex network which arises as the combination of many independent choices of firms .it was shown that the topology of the world trade network ( wtn ) strongly depends on the gross domestic product ( gdp ) of world countries .on the other hand , the gdp depends on international trade by definition , which implies that the wtn is a remarkably well documented example of adaptive network , where dynamics and topology coevolve in a continuous feedback . in general, understanding self - organizing networks is a major challenge for science , as only few models of such networks are analytically solvable .however , in the particular case of the wtn , the binary topology of the network is found to be extremely well reproduced by a null model which incorporates the degree sequence .these results , which have been obtained using a fast network randomization method that we have recently proposed , make the wtn particularly interesting . in this paper , after briefly reviewing our randomization method , we apply it to study the occurrence of triadic ` motifs ' , i.e. directed patterns involving three vertices ( see fig.[mot_img ] ) .we show that , unlike other properties which have been studied elsewhere , the occurrence of motifs is not explained by only the in- and out - degrees of vertices .however , if also the numbers of reciprocal links of each vertex ( the _ reciprocal degree sequence _ ) are taken into account , the occurrences of triadic motifs are almost completely reproduced .this implies that , if local information is enhanced in order to take into account the reciprocity structure , motifs display no significant deviations from random expectations . therefore the ( in principle complicated ) self - organization process underlying the evolution of the wtn turns out to be relatively simply encoded into the local dyadic structure , which separately specifies the number of reciprocated and non - reciprocated links per vertex .thus the dyadic structure appears to carry a large amount of information about the system .in this section we briefly summarize our recently proposed randomization method and how it can be used to detect patterns when local constraints are considered .our method , which is based on the maximum - likelihood estimation of maximum - entropy models of graphs , introduces a family of null models of a real network and uses it to detect topological patterns analytically . defining a null model means setting up a method to assign probabilites . in our approach , a real network with verticesis given ( either a binary or a weighted graph , and either directed or undirected , whose generic entry is ) and a way to generate a family of randomized variants of is provided , by assigning each graph a probability . 
in the method ,the probabilities are such that a maximally random ensemble of networks is generated , under the constraint that , on average , a set of desired topological properties is set equal to the values observed in the real network .this is achieved as the result of a constrained shannon - gibbs entropy maximization where the linear combination of the contraints is called _ graph hamiltonian _( the coefficients are free parameters , acting as lagrange multipliers controlling the expected values ) and the denominator is called _partition function_. the next step is the maximization of the probability to obtain the observed graph , i.e. the real - world network to randomize .this step fixes the values of the lagrange multipliers as they are found by maximizing the log - likelihood that is , that the ensemble average of each constraint , , equals the observed value on the real network , .once the numerical values of the lagrange multipliers are found , they can be used to find the ensemble average of any topological property of interest : the exact computation of the expected values can be very difficult .for this reason it is often necessary to rest on the _ linear approximation method _ .however , in the present study we will consider particular topological properties ( i.e. motif counts , see below ) whose expected value can be evaluated _exactly_. our method also allows to obtain the variance of by applying the usual definition : =\langle[x(\mathbf{g})-\langle x\rangle)]^2\rangle=\sum_{i , j}\sum_{t , s}\sigma[g_{ij},g_{ts}]\left(\frac{\partial x}{\partial g_{ij}}\frac{\partial x}{\partial g_{ts}}\right)_{\mathbf{g}=\langle\mathbf{g}\rangle } \label{eq_generalpropagation}\ ] ] where ] allows to detect deviations from randomness in the observed topology .in particular , as we show later , it is possible to calculate by how many standard deviations the observed value differs from the expected value .quantities which are consistent with their expected value are explained by the enforced constraints .on the other hand , significantly deviating properies can not be traced back to the constraints and therefore signal the incompleteness of the information encoded in the constraints .other approaches achieve this result by explicitly generating many randomized variants of the real network , measuring on each such variant , and finally computing the sample average and standard deviation of .this is extremely time consuming , especially for complicated topological properties .by contrast , our method is entirely analytical .it yields any expected quantity in a time as short as that required in order to measure on the single network .if the network is a binary graph ( i.e. if each graph in the ensemble is uniquely specified by its adjacency matrix ) , then the simplest ( i.e. local ) choice of the constraints is the _ degree sequence _ ,i.e. the vector of degrees ( numbers of incident links ) of all vertices . for directed networks , which are our interest here, there are actually two degree sequences : the observed in - degree sequence ( with ) and the observed out - degree sequence ( with ) .this null model , which is known as the _ directed configuration model _( dcm ) , can be completely dealt with analytically using our method ( see appendix ) . 
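As an illustration of how the DCM can be fitted in practice, the sketch below solves the likelihood equations by a simple fixed-point iteration on the standard maximum-entropy form of the connection probability, p_ij = x_i y_j / (1 + x_i y_j). The iteration scheme is an assumption made for illustration: it is one common way of solving these equations numerically, not necessarily the authors' implementation.

```python
import numpy as np

def fit_dcm(A, n_iter=2000):
    """Fit the directed configuration model (DCM) to a binary adjacency
    matrix A (zero diagonal).  Connection probabilities take the form
    p_ij = x_i*y_j / (1 + x_i*y_j); the x_i and y_j are fixed by requiring
    that the expected out- and in-degrees equal the observed ones."""
    n = A.shape[0]
    k_out = A.sum(axis=1).astype(float)
    k_in = A.sum(axis=0).astype(float)
    total = max(A.sum(), 1.0)
    x = k_out / np.sqrt(total)            # simple starting guess
    y = k_in / np.sqrt(total)
    off_diag = ~np.eye(n, dtype=bool)
    for _ in range(n_iter):
        xy = np.outer(x, y)
        # likelihood equations: k_out_i = sum_{j != i} x_i*y_j / (1 + x_i*y_j)
        denom_out = np.where(off_diag, y[None, :] / (1.0 + xy), 0.0).sum(axis=1)
        denom_in = np.where(off_diag, x[:, None] / (1.0 + xy), 0.0).sum(axis=0)
        x = k_out / np.maximum(denom_out, 1e-12)
        y = k_in / np.maximum(denom_in, 1e-12)
    P = np.outer(x, y) / (1.0 + np.outer(x, y))
    np.fill_diagonal(P, 0.0)
    return P                              # matrix of <a_ij> under the DCM
```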
when applied to the wtn, the dcm shows that many topological properties ( such as the degree - degree correlations and the directed clustering coefficients ) are in complete accordance with the expectations .this shows that the degree sequences and are extremely informative , as their ( partial ) knowledge allows to reconstruct many aspects of the ( complete ) topology . on the other hand, it was also shown that the _ reciprocity _ of the wtn is highly non - trivial .this means that the occurrence of reciprocal links is much higher than expected under any model which , as the dcm , treats two reciprocal links ( e.g. and ) as statistically independent .a direct consequence is that the reciprocity , as well as any higher - order directional pattern , should not be reproduced by the dcm .these seemingly conflicting results can only be reconciled if , for some reason , the topological properties that have been studied under the dcm mask the effects of reciprocity . in particular , the directed clustering coefficients , which are based on ratios of realized triangles over the maximum number for each vertex , may show no overall deviation from the dcm , even if the numerator and denominator separately deviate from it . inwhat follows , we investigate this possibility by considering all the observed subgraphs of three vertices ( which include both open and closed triangles ) separately .also , we will use an additional null model which also takes the number of reciprocal links of each vertex into account .this second null model is the _ reciprocal configuration model _ ( rcm ) .the local constraints defining it are the three , observed directed - degree sequences , with ( non - reciprocated out degree ) , , with ( non - reciprocated in - degree ) and , with ( reciprocated degree ) to be imposed across the ensemble of networks having the same number of vertices of the observed configuration and , on average , the above - mentioned directed - degree sequences . in the appendixwe describe both the dcm and the rcm in more detail , and derive their expectations explicitly . in the following analyses ,we use yearly bilateral data on exports and imports from the gleditsch database to analyse the six years 1950 , 1960 , 1970 , 1980 , 1990 , 2000 .this database contains aggregated trade data between countries , i.e. data as they result by summing the single commodity - specific trade exchanges .so we end up with six different , real , asymmetric matrices with entries ( ) .these adjacency matrix elements are the fundamental data allowing us to obtain all the possible representations of the wtn : to build the binary , directed representation we are interested in here , we restrict ourselves to consider two different vertices as linked , whenever the corresponding element is strictly positive .this implies that the adjacency matrix of the binary , directed representation of the wtn in year is simply obtained by applying the heaviside step function to the database entries , i.e. ] , the observed and the expected occurrences of motif differ .large absolute values of $ ] indicate motifs that are either over- or under - represented under the particular null model considered and therefore not explained by the constraints defining it , as shown in fig.[motcm ] and fig.[motrcm ] and discussed in the next section .
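In practice the comparison just described reduces to one z-score per motif. A minimal sketch, assuming the observed counts and the analytical expectations and standard deviations are already available as arrays (and, optionally, that the yearly WTN is held as a networkx directed graph G):

```python
import numpy as np
import networkx as nx

def motif_zscores(observed, expected, std):
    """z_m = (N_m - <N_m>) / sigma[N_m] for each triadic motif; 'expected'
    and 'std' come from the analytical null-model expressions (DCM or RCM)."""
    observed, expected, std = map(np.asarray, (observed, expected, std))
    return (observed - expected) / np.where(std > 0, std, np.nan)

# Observed counts: networkx's triadic census returns the 16 triad classes;
# the 13 connected classes correspond to the triadic motifs considered here.
# census = nx.triadic_census(G)   # G: directed WTN graph for a given year
```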
in self - organizing networks , topology and dynamics coevolve in a continuous feedback , without exogenous driving . the world trade network ( wtn ) is one of the few empirically well documented examples of self - organizing networks : its topology strongly depends on the gdp of world countries , which in turn depends on the structure of trade . therefore , understanding which are the key topological properties of the wtn that deviate from randomness provides direct empirical information about the structural effects of self - organization . here , using an analytical pattern - detection method that we have recently proposed , we study the occurrence of triadic ` motifs ' ( subgraphs of three vertices ) in the wtn between 1950 and 2000 . we find that , unlike other properties , motifs are not explained by only the in- and out - degree sequences . by contrast , they are completely explained if also the numbers of reciprocal edges are taken into account . this implies that the self - organization process underlying the evolution of the wtn is almost completely encoded into the dyadic structure , which strongly depends on reciprocity .
populations of biological cells frequently show stochastic switching between alternative phenotypic states .this phenomenon is particularly well - studied in bacteria and bacteriophages , where it is known as phase variation .phase variation often affects cell surface features , and its evolutionary advantages are believed to involve evading attack from host defense systems ( e.g. the immune system ) and/or `` bet - hedging '' against sudden catastrophes which may wipe out a particular phenotypic type .switching between different phenotypic states is controlled by an underlying genetic regulatory network , which randomly flips between alternative patterns of gene expression .several different types of genetic network are known to control phase variation these include dna inversion switches , dna methylation switches and slipped strand mispairing mechanisms . in this paper , we study a simple model for a genetic network that allows switching between two alternative states of gene expression .its key feature is that it includes a linear feedback mechanism between the switch state and the flipping rate .when the switch is active , an enzyme is produced and the rate of switching is linearly proportional to the copy number of this enzyme .the statistical properties of this model are made non - trivial by this feedback , leading , among other things , to non - poissonian behaviour that may be of advantage to cells in surviving in certain dynamical environments .our model is very generic and does not aim to describe any specific molecular mechanism in detail , but rather to determine in a general way the consequences of the linear feedback for the switching statistics .motivated by the fact that cells often contain multiple copies of a particular genetic regulatory element , due to dna replication or dna duplication events during evolution , we also consider the case of two identical switches in the same cell .we find that the two copies of the switch are coupled and may exhibit interesting and potentially important correlations or anti - correlations .our model switch is fundamentally different from bistable gene networks that have been the subject of previous theoretical interest .in fact , as we shall show , our switch is not bistable but is intrinsically unstable in each of its two states . before discussing our model in detail , we provide a brief overview of the basic biology of genetic networks and summarise some previously considered models for genetic switches .genetic networks are interacting , many - component systems of genes , rna and proteins , that control the functions of living cells .genes are stretches of dna ( base pairs long in bacteria ) , whose sequences encode particular protein molecules . 
to produce a protein molecule , the enzyme complex rna polymerase copies the gene sequence into a messenger rna ( mrna ) molecule .this is known as transcription .the mrna is then translated ( by a ribosome enzyme complex ) into an amino acid chain which folds to form the functional protein molecule .the production of a specific set of proteins from their genes ultimately determines the phenotypic behaviour of the cell .phenotypic behaviour can thus be controlled by turning genes on and off .regulation of transcription ( production of mrna ) is one important way of achieving this .transcription is controlled by the binding of proteins known as transcription factors to specific dna sequences , known as operators , usually situated at the beginning of the gene sequence .these transcription factors may be activators ( which enhance the transcription of the gene they regulate ) or repressors ( which repress transcription , often by preventing rna polymerase binding ) .a given gene may encode a transcription factor that regulates itself or other genes , leading to complex networks of transcriptional interactions between genes .there has been much recent interest among both physical scientists and biologists in deconstructing complex genetic networks into modular units , and in seeking to understand their statistical properties using theory and simulation .of particular interest is the fact that genetic networks are intrinsically stochastic , due to the small numbers of molecules involved in gene expression . this can give rise to heterogeneity in populations of genetically and environmentally identical cells .for some genetic networks , this heterogeneity is `` all - or - nothing '' : the population splits into two distinct sub - populations , with different states of gene expression .such networks are known as bistable genetic switches : they have two possible long - time states , corresponding to alternative phenotypic states .well - known examples are the switch controlling the transition from the lysogenic to lytic states in bacteriophage , and the lactose utilisation network of the bacterium _ escherichia coli _ . 
several simple mechanisms for achieving bistability have been studied , including pairs of mutually repressing genes , positive feedback loops and mixed feedback loops .such bistable genetic networks can allow long - lived and binary responses to short - lived signals for example , when a cell is triggered by a transient signal to commit to a particular developmental pathway .theoretical treatments of bistable genetic networks usually consider the dynamics of the copy number ( or concentration ) of the regulatory proteins involved .this affects the activation state of the genes , which in turn influences the rate of protein production .the macroscopic rate equation approach provides a deterministic ( mean - field ) description of the dynamics that ignores fluctuations in protein copy number or gene expression state .this approach , applied to a switch with two mutually repressing genes , has shown that co - operative binding of regulatory proteins is an important factor in generating bistability .other studies have shown , however , that bistability can be achieved even when the deterministic equations have only one solution , due to stochasticity and fluctuations in protein numbers .an alternative approach is to study the dynamics of stochastic flipping between two stable states using stochastic simulations , by numerically integrating the master equation , or by path integral - type approaches .this dynamical problem bears some resemblance to the kramers problem of escape from a free energy minimum , and one expects on general grounds that the typical time spent in one of the bistable states should be exponentially large in the typical number of proteins present in the state .this has been confirmed , at least for cooperative toggle switches formed of mutually repressing genes . from the perspective of statistical physics ,interesting questions arise concerning the distribution of escape times and the connection to first passage properties of stochastic processes . in this paper , however , we are concerned with an intrinsically different situation from these bistable genetic networks .the molecular mechanisms controlling microbial phase variation typically involve a binary element that can be in either of two states .for example , this may be a short fragment of dna that can be inserted into the chromosome in either of two orientations , a repeated dna sequence that can be altered in its number of repeats , or a dna sequence that can have two alternative patterns of methylation .the flipping of this element between its two states is stochastic , with a flipping rate that is controlled by various regulatory proteins , the activity of which may be influenced by environmental factors .we shall consider the case where a feedback exists between the switch state and the flipping rate .this is particularly interesting from a statistical physics point of view because it leads to non - poissonian switching behaviour , as we shall show .our work has been motivated by several examples .the _ fim _system in uropathogenic strains of the bacterium _e. coli _ controls the production of type 1 fimbriae ( or pili ) , which are `` hairs '' on the surface of the bacterium .individual cells switch stochastically between `` on '' and `` off '' states of fimbrial production . 
the key feature of the _ fim _ switch is a short piece of dna that can be inserted into the bacterial dna in two possible orientations .because this piece of dna contains the operator sequence for the proteins that make up the fimbriae , in one orientation , the fimbrial genes are transcribed and fimbriae are produced ( the `` on '' state ) and in the other orientation , the fimbrial genes are not active and no fimbriae are produced ( the `` off '' state ) .the inversion of this dna element is mediated by recombinase enzymes .feedback between the switch state and the switch flipping rate arises because the fime recombinase ( which flips the switch in the on to off direction ) , is produced more strongly in the on switch state than in the off state .this phenomenon is known as orientational control .the production of a second type of fimbriae in uropathogenic _e. coli _ , pap pili , also phase varies , and is controlled by a dna methylation switch . here , the operator region for the genes encoding the pap pili can be in two states , in which the dna is chemically modified ( methylated ) at different sites , and different binding sites are occupied by the regulatory protein lrp . switching in this systemis facilitated by the papi protein , which helps lrp to bind .feedback between the switch state and the flipping rate arises because the production of papi itself is activated by the protein papb , which is only produced in the `` on '' state .a common feature of the above examples is the existence of a feedback mechanism : in the _ fim _ system this occurs through orientational control , and in the _ pap _ system , through activation of the _ papi _ gene by papb . in this paper, we aim to study the role of such feedback within a simple , generic model of a binary genetic switch .we shall assume that the feedback is linear , and we thus term our model a `` linear feedback switch '' . in a recent publication , we introduced a simple mathematical model of a dna inversion genetic switch with orientational control , which was inspired by the _fim _ system .our model reduces to the dynamics of the number of molecules of a `` flipping enzyme '' , which mediates switch flipping , along with a binary switch state .enzyme is produced only in the on switch state .as the copy number of increases , the on to off flipping rate of the switch increases and this results in a non - poissonian flipping process with a peak in the lifetime of the on state . the model is linear in the sense that the rate at which the switch is turned off is a linear function of the number of enzymes which it produces . in our previous work , we imagined enzyme to be a dna recombinase , and the two switch states to correspond to different dna orientations , in analogy with the _ fim _ system .however , the same model could be used to describe a range of molecular mechanisms for binary switch flipping with feedback between the switch state and flipping rate , and can thus be considered a generic model of a genetic switch with linear feedback . in our recent work , we obtained exact analytical expressions for the steady state enzyme copy number for our model switch with linear feedback , in the particular case where the flipping enzyme switches only in the on to off direction ( this being the relevant case for _ fim _ ). 
we also calculated the flip time distribution for this model analytically. conceptually, such a calculation is reminiscent of the study of persistence in statistical physics where, for example, one asks about the probability that a spin in an ising system has not flipped up to some time. for the flip time distribution, we introduced different measurement ensembles according to whether one starts the time measurement from a flip event (the switch change ensemble) or from a randomly selected time (the steady state ensemble). in the present paper, we extend this work to present the full solution of the general case of the model and extend our study of its persistence properties. the introduction of a rate for the enzyme-mediated off to on flipping has its most significant effects on the flip time distributions, as illustrated in figs. [fig:diagram] and [fig:diagramk3off0], where we show the parameter range over which a peak is found in the on-state flip time distribution for zero and non-zero values of this rate. we also prove an important relation between the two measurement ensembles defined above, and use it to show that a peak in the flip time distribution only occurs in the switch change ensemble and not in the steady state ensemble. we find that the non-poissonian behaviour of this model switch leads to interesting two-time autocorrelation functions. we also study the case where we have two copies of the switch in the same cell and find that these two copies may be correlated or anticorrelated, depending on the parameters of the model, with potentially interesting biological implications. the paper is structured as follows. in section ii we define the model, describe its phenomenology, and show that a "mean-field", deterministic version of the model has only one steady state solution. in section iii we present the general solution for the steady state statistics, and in section iv we study first passage-time properties of the switch; technical calculations are left to the appendices. in section v we consider two coupled model switches, and we present our conclusions in section vi. we consider a model system with a flipping enzyme r and a binary switch, which can be either on or off (denoted respectively as s_on and s_off). enzyme r is produced (at a fixed rate) only when the switch is in the on state, and is degraded at a constant rate, regardless of the switch state. this represents protein removal from the cell by dilution on cell growth and division, as well as specific degradation pathways. switch flipping is assumed to be a single-step process, which can either be catalysed by enzyme r, with rate constants k^on_3 and k^off_3 and a linear dependence on the number of molecules of r, or can happen "spontaneously", with rates k^on_4 and k^off_4. we imagine that the "spontaneous" switching process may in fact be catalysed by some other enzyme whose concentration remains constant and which is therefore not modelled explicitly here. our model, which is shown schematically in fig. [fig:sketch], is defined by the following set of biochemical reactions ([eq:react]):

$$\begin{aligned}
s_{\textrm{on}} &\;\longrightarrow\; s_{\textrm{on}} + r\,, &\qquad r &\;\longrightarrow\; \emptyset\,,\\
s_{\textrm{on}} + r &\;\xrightleftharpoons[\,k^{\textrm{off}}_3\,]{\,k^{\textrm{on}}_3\,}\; s_{\textrm{off}} + r\,, &\qquad s_{\textrm{on}} &\;\xrightleftharpoons[\,k^{\textrm{off}}_4\,]{\,k^{\textrm{on}}_4\,}\; s_{\textrm{off}}\,,
\end{aligned}$$

where the first two reactions describe the production and decay of r at the rates given above. (colour online) a schematic illustration of the model dna inversion switch.
]we notice that there are two physically relevant and coupled timescales for our model switch : the timescale associated with changes in the number of molecules ( dictated by the production and decay rates and ) , and that associated with the flipping of the switch ( dictated by , and the concentration ) .we first consider the case where the timescale for production / decay is much faster than the switch flipping timescale .the top left panel of fig .[ fig : sampletraj_k3 ] shows a typical dynamical trajectory for parameters in this regime . here, we plot the number of molecules , together with the switch state , against time .this result was obtained by stochastic simulation of reaction set ( [ eq : react ] ) using the gillespie algorithm .this algorithm generates a continuous time markov process which is exactly described by the master equation ( [ eq : master ] ) . for a given switch state ,the number of molecules of varies according to reactions ( [ eq : reacta ] ) .when the switch is in the on state , grows towards a plateau value , and when the switch is in the off state , decreases exponentially towards .the time evolution of can thus be seen as a sequence of relaxations towards two different asymptotic steady states , which depend on the switch position . to better understand this limiting case, we can make the assumption that the number of molecules evolves deterministically for a given switch state .we can then write down deterministic rate equations corresponding to the reaction scheme ( [ eq : react ] ) .these equations are first order differential equations for , the mean concentration of the enzyme .when the switch is on , the rate equation reads with solution \;.\ ] ] thus the plateau density in the on state is given by the ratio and the timescale for relaxation to this density is given by , the rate of degradation of .when the switch is in the off state , the rate equation for reads instead and one simply has exponential decay to with decay time . in this parameter regime, switch flipping typically happens when the number of molecules of has already reached the steady state ( as in the top left panel of fig . [fig : sampletraj_k3 ] ) .thus , the on to off switching timescale is given by , where is the plateau concentration of flipping enzyme when the switch is in the on state , given by eq.([rhoon ] ) .since the corresponding plateau concentration in the off switch state is zero , the off to on switch flipping timescale is simply given by .we now consider the opposite scenario , in which switching occurs on a much shorter timescale than relaxation of the enzyme copy number .a typical trajectory for this case is shown in the bottom left panel of fig .[ fig : sampletraj_k3 ] . here ,switching reactions dominate the dynamics of the model , and the dynamics of the enzyme copy number follows a standard birth - death process , with an effective birth rate given by the enzyme production rate in the on state multiplied by the fraction of time spent in the on state . a more quantitative account for these behavioursis provided later on , in [ sec : pss ] . for parameter values between these two extremes , where the timescales for switch flipping and enzyme number relaxation are similar , it is more difficult to provide intuitive insights into the behaviour of the model .a typical trajectory for this case is given in the middle left panel of fig .[ fig : sampletraj_k3 ] . 
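trajectories of this kind are straightforward to reproduce numerically. the sketch below implements the gillespie algorithm for the reaction set ([eq:react]); the parameter names and numerical values here are illustrative assumptions of ours and are not the symbols or values used in the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameter names and values (assumptions, not the symbols of the text):
k_prod  = 20.0   # production rate of r in the on state
k_dec   = 1.0    # decay rate per molecule of r
k_r_off = 0.05   # r-mediated on -> off flipping rate, per molecule of r
k_r_on  = 0.05   # r-mediated off -> on flipping rate, per molecule of r
k_0_off = 0.01   # spontaneous on -> off flipping rate
k_0_on  = 0.01   # spontaneous off -> on flipping rate

def gillespie(t_max, n=0, on=False):
    """stochastic simulation of the binary switch with linear feedback."""
    t = 0.0
    times, ns, states = [t], [n], [on]
    while t < t_max:
        rates = np.array([
            k_prod if on else 0.0,                       # r production
            k_dec * n,                                   # r decay
            (k_r_off * n + k_0_off) if on else 0.0,      # switch flips on -> off
            (k_r_on  * n + k_0_on ) if not on else 0.0,  # switch flips off -> on
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            n += 1
        elif event == 1:
            n -= 1
        else:
            on = not on
        times.append(t); ns.append(n); states.append(on)
    return np.array(times), np.array(ns), np.array(states, dtype=bool)

t, n, s = gillespie(t_max=500.0)
dt = np.diff(t)
print("fraction of time in the on state: %.3f" % (np.sum(s[:-1] * dt) / t[-1]))
print("time-averaged copy number of r:   %.2f" % (np.sum(n[:-1] * dt) / t[-1]))
```

the event loop simply draws the waiting time to the next reaction from an exponential with the total rate and then picks one of the four reactions in proportion to its rate, which is all that is needed for this scheme; varying the flipping rates relative to the production and decay rates moves the simulation between the regimes discussed in this section.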
here , we have set the on to off and off to on switching rates to be identical : and .we notice that typically , less time is spent in the on state than in the off state .as soon as the switch flips into the on state , the number of molecules starts increasing and the on to off flip rate begins to increase .consequently , the number of molecules rarely reaches its plateau value before the switch flips back into the off state . to illustrate the effects of including the parameter , we also show trajectories for different values of the ratio in fig .[ fig : sampletraj_k3off ] , for fixed . for small , the amount of enzyme decays to zero in the off state before the next off - to - on flipping event resulting in bursts of enzyme production .in contrast , when is , flipping is rapid in both directions so that is peaked at intermediate . to explore how the switching behaviour of our model arises , we can write down mean - field , deterministic rate equations corresponding to the full reaction scheme ( [ eq : react ] ) .these equations describe the time evolution of the mean concentration of molecules and the probabilities and of the switch being in the on and off states .these equations implicitly assume that the mean enzyme concentration is completely decoupled from the state of the switch .thus correlations between the concentration and the switch state are ignored and the equations furnish a mean - field approximation for the switch . as we now show , this crude type of mean - field description is insufficient to describe the stochastic dynamics of the switch , except in the limit of high flipping rate . noting that , the mean - field equations read : the above equations have two sets of possible solutions for the steady state values of and , but only one has a positive value of , and is therefore physically meaningful .the result is : where and the most interesting conclusion to be drawn from this mean - field analysis is that there is only one physically meaningful solution . in this solution ,the enzyme concentration is less than the plateau value in the on state [ of eq.([rhoon ] ) ] .thus reaction scheme ( [ eq : react ] ) does not have an underlying bistability .the two states of our stochastic switch evident in figures [ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] for low values of and are not bistable states but are rather intrinsically unstable and transient states , each of which will inevitably give rise to the other after a certain ( stochastically determined ) period of time . in this sense ,our model is fundamentally different from the bistable reaction networks which have previously been discussed . on the other hand , in the limit of rapid switch flipping , where or is large, the mean - field description holds and the protein number distribution does show a single peak whose position is well approximated by eq .( [ rho ] ) , as shown in figures [ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] for the case .returning to the fully stochastic version of the reaction scheme ( [ eq : react ] ) , we now present an exact solution for the steady state statistics of this model . a solution for the case where was sketched in ref . . 
herewe present a complete solution for the general case where , and we discuss the properties of the steady state as a function of all the parameters of the system .we first define the probability that the system has exactly enzyme molecules at time and the switch is in the state ( where ) .the time evolution of is described by the following master equation : where we use the shorthand notations , and . in the steady state ,the time derivative in eq.([eq : master ] ) vanishes , and the problem reduces to a pair of coupled equations for and : [ eq : masterp ] to solve the above equations we introduce the generating functions the steady - state equations ( [ eq : masterp ] ) can be now written as a set of linear coupled differential equations for : [ eq : gz ] where are linear differential operators : l_1(z ) = & k_1 ( z-1 ) _ z - k_2 ( z-1 ) + k^_3 z _ z + k^_4 , + l_2(z)=&k^_3 z _ z + k^_4 , + l_3(z)= & k_1 ( z-1 ) _ z + k^_3 z _ z + k^_4 , + l_4(z)= & k^_3 z _ z + k^_4 . in order to solve the two coupled eqs .( [ eq : gz ] ) it is first useful to take their difference .after simplification this yields the relation : next , we take the first derivative of ( [ eq : goff ] ) and then replace the derivatives of with the relation ( [ eq : relgon ] ) . after some algebra, one finds that verifies the following second order differential equation : where the greek letters are combinations of the parameters of the model : we now introduce the new variable and the new parameter combinations : we can now write ( and ) in terms of the variable ( [ udef ] ) by defining the functions the differential equation ( [ eq : diff ] ) then reads : looking for a regular power series solution of the form one obtains the following solution : where denotes the confluent hypergeometric function of the first kind , and denotes the pochhammer symbol .the constant will be determined using the boundary conditions , which we discuss later .we first note that the above result for can be translated into by replacing with the expression of in ( [ gonu ] ) and expanding in powers of : ^{n - m } \binom{n}{m}\\ = \sum_{n=0}^{\infty } z^n \sum_{m = n}^{\infty } a_m u_0^{m - n } [ ( u_1-u_0)]^{n } \binom{m}{n } \label{gonsol}\end{gathered}\ ] ] where we have relabelled the indices and in the last line .we can identify from ( [ eq : genfun ] ) as the coefficient of in the above expression : from ( [ gonu ] ) and ( [ eq : gonfull ] ) we read off substituting ( [ an ] ) in ( [ eq : ponsol ] ) we deduce , using the definition of the hypergeometric function ( [ hgdef ] ) and noting , that in deriving this expression we have , in fact , established the following identity which will prove useful again later : to compute , we integrate eq.([eq : relgon ] ) , which yields , using the form of ( [ eq : gonfull ] ) : where is our second integration constant .we then have two constants , and , which still need to be determined .the constant can be found using the normalisation condition , which is equivalent to . using this condition ,we obtain in order to compute the remaining constant , we consider the boundary condition at . 
from the definition ( [ eq : genfun ] ) of the generating function we see that .our boundary condition thus reads : setting in the master equation eq.([eq : masterpon ] ) [ noting that the term in vanishes ] gives in terms of and : combining eqs.([eq : fastest ] ) [ with ] and ( [ eq : kappa ] ) , substituting in eq.([eq : boundary ] ) , using eq.([eq : ponpoff ] ) to eliminate , and finally substituting in expressions for and from eq.([eq : ponsol ] ) , we determine : \,\,.\end{gathered}\ ] ] the final step in obtaining our exact solution is to provide an explicit expression for . from ( [ eq : fastest ] ) we have and using the identity ( [ fident ] )we obtain : \,\,,\end{gathered}\ ] ] where is the kronecker delta .our exact analytical solution ( [ eq : ponsol ] ) , ( [ eq : a0sol ] ) and ( [ eq : poffsol ] ) is verified by comparison to computer simulation results in the right panels of figs .[ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] . here , we plot the probability distribution function for the total number of enzyme molecules : computer simulations of the reaction set ( [ eq : react ] ) were carried out using gillespie s stochastic simulation algorithm .perfect agreement is obtained between the numerical and analytical solutions , as shown in figs .[ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] . having derived the steady state solution for , we now analyse its properties as a function of the parameters of the model .we choose to fix our units of time by setting , the decay rate of enzyme , to be equal to unity ( so our time units are ) . with these units ,the plateau value for the number of enzyme molecules in the on switch state is given by . in this section, we will only analyse the case where . to further simplify our analysis, we set and ( a discussion of the case where and is provided in ref .we then analyse the probability distribution as a function of the -dependent switching rate and the -independent switching rate .the results are shown in the right - hand panels of fig .[ fig : sampletraj_k3 ] and fig .[ fig : sampletraj_k4 ] .we consider the three regimes discussed in section [ sec : phen ] : that in which enzyme number fluctuations are much faster than switch flipping , that where the opposite is true , and finally the regime where the two timescales are similar . in the regime where switch flipping is much slower than enzyme production / decay [ ,the probability distribution is bimodal .this is easily understandable in the context of the typical trajectories shown in the left top panels in figs .[ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] : in this regime , the number of molecules of always reaches its steady - state value before the next switch flip occurs .it follows then that is a bell - shaped distribution peaked around , while is highly peaked around zero , so that the total distribution is bimodal . in contrast , when switching occurs much faster than enzyme number fluctuations the probability distribution is unimodal and bell shaped , as might be expected from the trajectories in the bottom left panels of figs .[ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] . 
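these steady-state distributions can also be checked without the generating-function machinery: truncating the copy number of r at some maximum value and solving the stationary master equation as a finite linear system gives the same distributions to high accuracy. a minimal sketch, with parameter names and values that are again our own illustrative choices, is the following.

```python
import numpy as np

n_max = 120                         # truncation of the copy number of r
k_prod, k_dec = 50.0, 1.0           # production / decay rates of r (illustrative)
k_r_off, k_0_off = 0.002, 0.01      # on -> off: r-mediated and spontaneous rates
k_r_on,  k_0_on  = 0.002, 0.01      # off -> on: r-mediated and spontaneous rates

def idx(n, on):                     # flat index of the state (n, switch position)
    return 2 * n + (1 if on else 0)

size = 2 * (n_max + 1)
W = np.zeros((size, size))          # W[j, i] = rate of the transition i -> j

for n in range(n_max + 1):
    for on in (True, False):
        i = idx(n, on)
        if on and n < n_max:
            W[idx(n + 1, on), i] += k_prod            # production, on state only
        if n > 0:
            W[idx(n - 1, on), i] += k_dec * n         # decay, either state
        if on:
            W[idx(n, False), i] += k_r_off * n + k_0_off   # flip on -> off
        else:
            W[idx(n, True), i] += k_r_on * n + k_0_on      # flip off -> on

Q = W - np.diag(W.sum(axis=0))      # generator matrix: dP/dt = Q P
eigvals, eigvecs = np.linalg.eig(Q)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p = np.abs(p) / np.abs(p).sum()     # stationary distribution, normalised

p_off, p_on = p[0::2], p[1::2]      # P_off(n) and P_on(n)
p_tot = p_on + p_off
print("probability of being on: %.3f" % p_on.sum())
print("most probable n:  off state %d,  on state %d" % (p_off.argmax(), p_on.argmax()))
print("mean copy number of r: %.2f" % np.dot(np.arange(n_max + 1), p_tot))
```

with the slow-flipping parameters chosen here the two printed modes should sit at zero and near the plateau value, which is the bimodal regime discussed above; speeding up the flipping rates pushes the numerical distribution towards the unimodal shape.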
as discussed in section [ sec : phen ] , in this regime the number of molecules behaves as a standard birth - death process with effective birth rate given by multiplied by the average time the switch spends in the on state , and death rate .for such a birth - death process the steady state probability is a poisson distribution with mean given by the ratio of the birth rate to the death rate . to show that our analytical result reduces to this poisson distribution, we consider the case where enzyme - mediated switching dominates ( as in fig .[ fig : sampletraj_k3 ] ) , so that both and are much greater than .the fraction of time spent in the on state is , thus the effective birth rate is . in the limit and with constant, one finds that , , and ] . plugging this result into eq.([eq : fastest ] ) and taking again the limit [ and using that finally yields the result that is indeed a poisson distribution .the same approach can be taken for the case of fig .[ fig : sampletraj_k4 ] , where is constant , and and become very large .the probability distribution then becomes a poisson distribution with mean $ ] .the above result is only valid when .in fact , as shown in fig . [fig : sampletraj_k3off ] , when the distribution of is peaked at 0 and does not have a poisson - like shape . finally , when there is no clear separation of timescales between enzyme number fluctuations and switch flipping , the distribution function for the number of enzyme molecules has a highly non - trivial shape , as shown in the middle panels of figs .[ fig : sampletraj_k3 ] and [ fig : sampletraj_k4 ] .we now calculate the first passage time distribution for our model switch .we define this to be the distribution function for the amount of time that the switch spends in the on or off states before switching .this distribution is biologically relevant , since it may be advantageous for a cell to spend enough time in the on state to synthesise and assemble the components of the `` on '' phenotype ( for example , fimbriae ) , but not long enough to activate the host immune system , which recognises these components .the calculation for the case was sketched in . herewe provide a detailed calculation of the flip time distribution in the more general case .we find that this dramatically reduces the parameter range over which the flip time distribution has a peak .we demonstrate an important relation between the flip time distributions for the two relevant choices of initial conditions ( switch change ensemble and steady state ensemble ) . the first passage time distribution is important and interesting from a statistical physics point of view as it is related to `` persistence '' .generally , persistence is expressed as the probability that the local value of a fluctuating field does not change sign up to time .for the particular case of an ising model , persistence is the probability that a given spin does not flip up to time . in our model, the switch state plays the role of the ising spin . for other problems ,there has been much interest in the long - time behaviour of the persistence probability , which can often exhibit a power - law tail . 
in our case , however , we expect an exponential tail for the distribution of time spent in the on state , because linear feedback will cause the switch to flip back to the off state after some characteristic time .we are therefore interested not only in the tail of the first passage time distribution , but in its shape over the whole time range .we consider the probability that if we begin monitoring the switch at time when there are molecules of the flipping enzyme , it remains from time in state , and subsequently flips in the time interval .this probability is averaged over a given ensemble of initial conditions , determined by the experimental protocol for monitoring the switch .mathematically , the initial condition for switch state is selected according to some probability and we define as the flip time distribution for the ensemble of initial conditions given by . the most obvious protocol would be to measure the interval from the moment of switch flipping , so that the times correspond to switch flips and the are the durations of the on or off switch states .we call this the _ switch change ensemble _ ( ) . in this ensemble ,the probability of having molecules of at the time when the switch flips into the state is : where for notational simplicity , represents .the numerator of the r.h.s of eq.([eq : w1 ] ) gives the steady state probability that there are molecules present in state , multiplied by the flip rate into state .the denominator normalises .we also consider a second choice of initial condition , which we denote the _ steady state ensemble _( ) . here , the initial time is chosen at random for a cell that is in the state .this choice is motivated by practical considerations : experimentally , it is much easier to pick a cell which is in the state and to measure the time until it flips out of the state , than to measure the entire length of time a single cell spends in the state .the probability of having molecules of at time is then the ( normalised ) steady - state distribution for the state : to compute the distribution , we first consider the survival probability , that , given that at time ( chosen according to ensemble ) , the switch was in state , at time it is still in state and has molecules of enzyme . as the ensemble only enters through the initial condition , we may drop the superscript in what follows .the evolution equation for is the same as for , but without the terms denoting switch flipping into the state .this removes the coupling between and that was present in the evolution equations ( [ eq : masterp ] ) ) : [ eq : survn ] introducing the generating function the above equations reduce to : [ eq : surv ] we can relate to by noting that is the total probability that the switch has not flipped up to time . 
hence , equations ( [ eq : surv ] ) can be solved using the method of characteristics .the result , detailed in appendix [ app : char ] , is : where and .the function is the generating function for the distribution of enzyme numbers at the starting time for the measurement : where refers to or .the function can be obtained in an analogous way : this produces the same expression as for , but with set to zero and with all `` '' superscripts replaced by `` '' : so that .we can then obtain the distributions and by differentiating the above expressions , according to eq.([eq : link ] ) : { { \widetilde w}}\left ( k_1 \tau_{\textrm{on}}+ e^{-t/\tau_{\textrm{on } } } ( 1-k_1 \tau_{\textrm{on } } ) \right)\\ + \left(\frac{1}{\tau_{\textrm{on } } } - k_1 \right ) { { \widetilde w } } ' \left ( k_1 \tau_{\textrm{on}}+ e^{-t/\tau_{\textrm{on } } } ( 1-k_1 \tau_{\textrm{on } } ) \right ) \bigg\}\,\,,\end{gathered}\ ] ] in the above expressions , the function is given for the steady state ensemble ( ) by and for the switch change ensemble ( ) by we now show that a useful and simple relation can be derived between and .let us imagine that we pick a random time , chosen uniformly from the total time that the system spends in state .the time will fall into an interval of duration , as illustrated in fig .[ fig : illustration ] .we can then split the interval into the time before and the time after , such that .schematic illustration of a possible time trajectory for the switch ; is a random time falling in an interval of total length and splitting it into two other intervals denoted and , as discussed in section [ sec : relation ] . ]we first note that the probability that our randomly chosen time falls into an interval of length is : eq.([eq : plength ] ) expresses the fact that the probability distribution for a randomly chosen flip time is , but the probability that our random time falls into a given segment is proportional to the length of that segment .since the time is chosen uniformly , the probability distribution for , for a given , will also be uniform ( but must be less than ) : one can now obtain from by integrating eq.([eq : cond ] ) over all possible values of , weighted by the relation ( [ eq : plength ] ) .this leads to the following relation between and : taking the derivative with respect to this can be recast as where is simply the mean duration of a period in the on state .we have verified numerically that the expressions ( [ eq : font ] ) and ( [ eq : fofft ] ) for and derived above do indeed obey the relation ( [ eq : linkbis2 ] ) .this relation can also be understood in terms of backward evolution equations as we discuss in appendix [ sec : bev ] .we now focus on the shape of the flip time distribution , in particular , whether it has a peak .a peak in could be biologically advantageous for two complementary reasons .firstly , after the switch enters the on state there may be some start - up period before the phenotypic characteristics of the on state are established , so it would be wasteful for flipping to occur before the on state of the switch has become effective .secondly , the on state of the switch may elicit a negative environmental response , such as activation of the host immune system , so that it might be advantageous to avoid spending too long a time in the on state .for example , in the case of the _ fim _ switch , a certain amount of time and energy is required to synthesise fimbriae , and this effort will be wasted if the switch flips back into the off state before 
fimbrial synthesis is complete .on the other hand , too large a population of fimbriated cells would trigger an immune response from the host , therefore the length of time each cell is in the fimbriated state needs to be tightly controlled .we note that for bistable genetic switches and many other rare event processes , waiting time distributions are exponential ( on a suitably coarse - grained timescale ) .this arises from the fact that the alternative stable states are time invariant in such systems .the presence of a peak in for our model switch would indicate fundamentally different behaviour , which occurs because the two switch states in our model are time - dependent . the presence of a peak in the distribution requires the slope of at the origin to be positive . applying this condition to the function ( [ eq : font ] )we get : eq.([eq : ww ] ) allows us to expressing the derivatives of as functions of the moments of , so that we finally get our condition as a relation between the mean and the variance of the initial ensemble : where denotes an average taken using the weight of eq .( [ eq : w1 ] ) or ( [ eq : w2 ] ) .analogous conditions can be found for a peak in the off to on waiting time distribution .the moments involved in the above inequality can be computed using the exact results of the previous section .the l.h.s . of ( [ eq : inequality ] )can then be computed numerically for different values of the parameters , to determine whether or not a peak is present in . for the sse ,there is never a peak in the flip time distribution .this follows directly from the relation ( [ eq : linkbis2 ] ) between the sse and sce , which shows that the slope of at the origin is always negative : thus a peak in the waiting time distribution can not occur when the initial condition is sampled in the steady state ensemble . for the sce, we tested inequality ( [ eq : inequality ] ) numerically and found that a peak in the distribution is possible for the time spent in the on state ( ) , but not for the off to on waiting time distribution ( ) .this is as expected and can be explained by noting that to produce a peak in , the flipping rate must increase with time in state . in the on state the flipping rate typically does increase with time as the enzyme is produced , while in the off state the flipping rate decreases in time as decays .we now discuss the general conditions for the occurrence of a peak in .we first recall from section [ sec : pss ] that in the regime where the copy number of the enzyme relaxes much faster than the switch flips [ , the plateau level of is reached rapidly after entering the on state , so that the flipping rate out of the on state is essentially constant .this leads to effectively exponentially distributed flip times from the on state , so that no peak is expected .in the opposite regime , where switch flipping is much faster than number relaxation [ , we again expect poissonian statistics and therefore exponentially distributed flip times .thus it will be in the intermediate range of that a peak in the flip time distribution may occur .the exact condition for this ( [ eq : inequality ] ) is not particularly transparent as the dependence on the parameters is implicit in the values of the and . 
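the flip time distributions themselves, and hence the presence or absence of a peak, can also be estimated directly from long stochastic trajectories, which provides a useful sanity check on the inequality above. the sketch below collects the durations of the on periods (which sample the switch change ensemble) and then resamples them to mimic the steady state ensemble, in which the clock starts at a random instant during an on period; the rate names and values are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

k_prod, k_dec = 20.0, 1.0          # production / decay of r (illustrative values)
k_r_off, k_0_off = 0.05, 0.01      # on -> off: r-mediated and spontaneous
k_r_on,  k_0_on  = 0.0,  0.05      # off -> on: here purely spontaneous

def on_state_durations(t_max):
    """gillespie run returning the durations of all completed on periods."""
    t, n, on, t_enter = 0.0, 0, False, 0.0
    durations = []
    while t < t_max:
        rates = np.array([k_prod if on else 0.0,
                          k_dec * n,
                          (k_r_off * n + k_0_off) if on else 0.0,
                          (k_r_on * n + k_0_on) if not on else 0.0])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            n += 1
        elif event == 1:
            n -= 1
        else:
            if on:
                durations.append(t - t_enter)
            on, t_enter = not on, t
    return np.array(durations)

on_times = on_state_durations(t_max=5.0e4)

# switch change ensemble (sce): the clock starts at the flip into the on state,
# so the measured on-period durations themselves are the flip-time sample
sce = on_times

# steady state ensemble (sse): pick a random instant while the switch is on
# (an on period is chosen with probability proportional to its length) and
# record the remaining time until the switch flips off
picked = rng.choice(len(on_times), size=len(on_times), p=on_times / on_times.sum())
sse = on_times[picked] * rng.uniform(size=len(on_times))

hist, edges = np.histogram(sce, bins=40, density=True)
print("number of on periods sampled: %d" % len(on_times))
print("sce histogram peaks away from zero: %s" % (hist.argmax() > 0))
print("mean on-state flip time  sce: %.2f   sse: %.2f" % (sce.mean(), sse.mean()))
```

because the sse times are constructed as uniform fractions of length-biased sce intervals, the same samples can also be used to check the relation between the two ensembles derived above.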
in particular , the effects of the parameters and are coupled , since the effective -mediated switching rate depends on the copy number of .however we can make a broadbrush description of what is required .first the switch should enter the on state with typical values of so that there is an initial rise in the value of and therefore the flipping rate .second , we expect that the flipping should be predominantly effected by the enzyme rather than spontaneously flipping _ should govern the flipping rather than .occurrence of a peak in the waiting time distribution sampled in the switch change ensemble .the shaded area delimits the region where there is a peak ( here the parameters are : , and and ) .the dashed line delimits the same region for .the insets show an instance of the distribution both in the sce ( solid red line ) and in the sse ( blue dashed line ) : ( a ) there is a peak ( , , ) ; ( b ) on the transition line , where the slope at the origin vanishes ( , , ) ; ( c ) there is no peak ( , , ) . ][ fig : diagram ] shows the region in the plane where has a peak , for the case where and .these results are obtained numerically , using the inequality ( [ eq : inequality ] ) .the distribution is peaked for parameter values inside the shaded region .the insets show examples of the distributions and for various parameter values . at the boundary in parameter space between peaked andmonotonic distributions ( solid line in fig .[ fig : diagram ] ) , has zero gradient at ( inset ( b ) ) . the dashed line in fig .[ fig : diagram ] ) shows the position of the boundary for a larger value of the enzyme production rate . as increases , the range of values of for which there is a peak decreases . increasing increases the number of enzyme present , which will increase both the off to on and on to off switching frequency , since here .thus it appears that approximately the same qualitative behaviour can be obtained for smaller values of when is increased .same plot as fig .[ fig : diagram ] but for .the shaded area delimits the values of and ( with ) for which there is a peak in the flip time distribution .the dashed line is the separation line for .the examples in the insets have as parameters and : ( a ) , ; ( b ) , ; ( c ) , . ] in our previous paper , we analysed the case where : _ i.e. _ the flipping enzyme switches only in the on to off direction .this case applies to the _ fim _ system .[ fig : diagramk3off0 ] shows the analogous plot , as a function of and , when .the region of parameter space where a peak occurs in is much wider than for nonzero . in this casean increase of produces a _larger _ range of parameter values for which there is a peak ( dotted line in fig . [fig : diagramk3off0 ] ) . here , the off to on switching process is -independent , and is mediated by only ( since ) .the typical initial amount of present on entering the on state is thus not much affected by , although the plateau level of increases with .therefore , as increases , the enzyme copy number in the on state becomes more time - dependent , increasing the likelihood of finding a peak .diagram showing the occurrence of a peak when the ratio is varied . here and .the inset shows a zoom of the plot in the vicinity of . 
]the comparison between figs .[ fig : diagram ] and [ fig : diagramk3off0 ] suggests that the relative magnitudes of the -mediated switching rates in the on to off and off to on directions , and , play a major role in determining the parameter range over which is peaked .this observation is confirmed in fig .[ fig : diagramk_vary_3off ] , where the boundary between peaked and unpeaked distributions is plotted in the plane for various ratios .the larger the ratio , the smaller the region in parameter space where there is a peak .an intuitive explanation for this might be that as increases , the the typical initial number of molecules in the on state increases , so that less time is needed for the level to reach a steady state , resulting in a weaker time - dependence of the on to off flipping rate and less likelihood of a peak occurring in .if the presence of a peak in is indeed an important requirement for such a switch in a biological context , then we would expect that a low value of , as is in fact observed for the _ fim _ system , would be advantageous .a peaked distribution of waiting times is by no means the only potentially useful characteristic of this type of switch . in this section ,we investigate two other types of behaviour that may have important biological consequences : correlations between successive flips of a single switch , and correlated flips of multiple switches in the same cell .we analyse these novel phenomena using numerical methods .we introduce a new correlation measure which enables us to quantify the extent of the correlation as a function of the parameter space .our main findings are that a single switch shows time correlations which appear to decay exponentially , and that two switches in the same cell can show correlated or anti correlated flipping behaviour depending on the values of and .biological cells often experience sequences of environmental changes : for example , as a bacterium passes through the human digestive system it will experience a series of changes in acidity and temperature .it is easy to imagine that evolution might select for gene regulatory networks with the potential to `` remember '' sequences of events .the simple model switch presented here can perform this task , in a very primitive way , because it produces correlated sequences of switch flips : the amount of enzyme present at the start of a particular period in state depends on the recent history of the system .in contrast , for bistable gene regulatory networks , or other bistable systems , successive flipping events are uncorrelated , as long as the system has enough time to relax to its steady state between flips . in our recent work , we demonstrated that successive switch flips can be correlated for our model switch , and that this correlation depends on the parameter : correlation increases as increases . here, we extend our study and introduce a new measure of these correlations : the two time probability that the switch is in position at time and in position at time . in the steady statethe two - time probability depends only on the time difference . 
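such two-time statistics are straightforward to estimate from a simulated trajectory of the switch state. the sketch below samples a piecewise-constant trajectory on a regular grid and computes a normalised autocorrelation of the binary state; the normalisation used here is a standard choice for a two-state signal and is not necessarily identical to the one defined in eq. ([ctau]) below.

```python
import numpy as np

def switch_autocorrelation(event_times, states, dt, max_lag):
    """normalised two-time autocorrelation of a binary switch state given a
    piecewise-constant trajectory (states[i] holds from event_times[i] to
    event_times[i+1]).  the normalisation (variance of the on/off indicator)
    is one convenient choice: it gives 1 at zero lag and 0 if uncorrelated;
    the normalisation used in the text may differ."""
    grid = np.arange(event_times[0], event_times[-1], dt)
    s = np.asarray(states, dtype=float)[
        np.searchsorted(event_times, grid, side="right") - 1]
    p_on = s.mean()
    c = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        c[k] = np.mean(s[: len(s) - k] * s[k:]) - p_on ** 2
    return np.arange(max_lag + 1) * dt, c / (p_on * (1.0 - p_on))

# quick demonstration on a simple random telegraph signal (no feedback), just
# to exercise the estimator; a trajectory of the model switch can be fed in instead
rng = np.random.default_rng(3)
t, state, times, states = 0.0, 0, [0.0], [0]
for _ in range(5000):
    t += rng.exponential(1.0 if state else 2.0)   # mean dwell: 1 (on), 2 (off)
    state = 1 - state
    times.append(t); states.append(state)
lags, corr = switch_autocorrelation(np.array(times), np.array(states),
                                    dt=0.05, max_lag=200)
print("c(0) = %.3f   c(tau = %.1f) = %.3f" % (corr[0], lags[-1], corr[-1]))
```

applied to trajectories of the model switch, for example those generated with the gillespie sketch earlier, the same estimator can be used to produce curves like those in fig. [fig:corrone].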
in order to compare different simulations results ,we define the auto - correlation function : where , , and ( ) is the probability of being in the ( off ) state .the correlation function ( [ ctau ] ) takes values between and , in such a way that it is positive for positive correlations , negative for negative correlations and vanishes if the system is uncorrelated .this function allows us to understand whether , given that the switch is in a given position at time , it will be in the same state at a later time . fig . [fig : corrone ] shows simulation results for different values of and .as expected , the correlation function vanishes in the limit of large , meaning that in this limit there are no correlations .furthermore , we can see that the strength of the correlations decreases when either or are increased .this is consistent with the previous remark that in the limit of large switching rate ( _ i.e. _ either or ) the distribution of enzyme numbers tends to a poisson distribution .it is thus not surprising that in this same limit the correlations vanish . in the insets of fig .[ fig : corrone ] we plot the same correlation function on a semi - logarithmic scale .the data for the highest values of or ( the dotted green curves ) is not shown since the decrease is too sharp , and does not allow for a clear interpretation .for the smallest values of and ( blue curves ) , the decay seems to be exponential .however , for intermediate values of or ( dashed red curves ) the evidence for an exponential decay is less clear and the issue deserves a more extensive numerical investigation . for the sake of completenesswe also show in figure [ fig : corronek3off ] similar data for the case where .we find that qualitatively the data has a very similar behaviour to the case where .( colour online ) the two - time auto - correlation function for , .the insets shows the same data in a semi - log scale .top : is varied with constant .bottom : is varied with constant . ]( colour online ) the correlation function when .as previously , and .the data labelled as corresponds to while corresponds to . foreach and the superscripts , and refer to different values of , and respectively .the inset shows the same plot on a semi - log scale . ]many bacterial genomes contain multiple phase - varying genetic switches , which may demonstrate correlated flipping .for example , in uropathogenic _e. coli _ , the _ fim _ and _ pap _ switches , which control the production of different types of fimbriae , have been shown to be coupled .although these two switches operate by different mechanisms , it is also likely that multiple copies of the same switch are often present in a single cell .this may be a consequence of dna replication before cell division ( in fast - growing _e. coli _ cells , division may proceed faster than dna replication , resulting in up to copies per cell ) . 
randomly occurring gene duplication events , which are believed to be an important evolutionary mechanism , might also result in multiple copies of a given switch on the chromosome .it is therefore important to understand how multiple copies of the same switch would be likely to affect each other s function .let us suppose that there are two copies of our model switch in the same cell .each copy contributes to and is influenced by a common pool of molecules of enzyme .our model is still described by the set of reactions ( [ eq : react ] ) , but now the copy number of and can vary between 0 and 2 ( with the constraint that the total number of switches is 2 ) . to measure correlations between the states of the two switches ( denoted and ) we define the _ two switch _ joint probability as the probability that switch 1 is in state at time and switch is in state at time .this function is the natural extension of the previously defined two - time probability for a single switch .thus , in analogy to ( [ ctau ] ) , we can define a two - time correlation function : where ( ) is again the steady state probability for a single switch to be ( ) .if the two switches are completely uncorrelated , we expect that and , so that ( given that ) .in contrast , if the switches are completely correlated , , and . for completely anti - correlated switches , we expect that , and . in fig .[ fig : twoswitches ] we plot the function for two identical coupled switches , for several parameter sets .our results show that for small values of , there is correlation between the two switches , over a time period , which is of the same order as the typical time spent in the on state for these parameter values .our results also show that the nature of these correlations depends strongly on . in the case where ( top panel of fig .[ fig : twoswitches ] ) , one can see that the correlation is positive , meaning that the two switches are more likely to be in the same state .in contrast , when is set to zero ( bottom panel of fig . [fig : twoswitches ] ) , the correlation is negative , meaning that the two switches are more likely to be in different states . to understand these correlations , consider the extreme situation where both the two switches are off , and the number molecules of has dropped to zero . in this case , the only possible event is a mediated switching which could take place , for instance , for the first switch .then , once the first switch is on , it will start producing more enzyme , and , if , this will enhance the probability for the second switch to flip on too .this might explain why , when we see a positive correlation between the two switches . on the other hand , if we consider the opposite situation where both the two switches are on , and the number of molecules of is around its plateau value , then the on to off switching probability for the two switches will be at its maximum .however , after one of the switches has flipped ( e.g. 
the first ) , the switching probability will start decreasing , this reducing the flipping rate for the second switch .this suggests that may have the effect of inducing negative correlations , while induces positive correlations .we also point out the presence of a small peak in in fig .[ fig : twoswitches ] ( indicated by the arrow ) which suggests the presence of a time delay : when one switch flips , the other tends to follow a short time later .we leave the detailed properties of these correlations and their parameter dependence to future work .( colour online ) normalised two - time correlation function for two identical switches .the parameter values are : , , . in the top panel while in the bottom panel .the parameter is varied from to in each case . ]in this paper we have made a detailed study of a generic model of a binary genetic switch with linear feedback .the model system was defined in section ii by the system of chemical reactions ( [ eq : react ] ) .linear feedback arises in this switch because the flipping enzyme is produced only when the switch is in the on state , and the rate of flipping to the off state increases linearly with the amount of .thus , when the switch is in the on state the system dynamics inexorably leads to a flip to the off state .we have shown that this effect can produce a peaked flip time distribution and a bimodal probability distribution for the copy number of .a mean field description does not reproduce this phenomenology and so a stochastic analysis is required .we have studied this model analytically , obtaining exact solutions for the steady state distribution of the number of molecules , as well as for the flip time distributions in the two different measurement ensembles defined in section [ sec : firstpassage ] , the switch change ensemble and the steady state ensemble .we have shown how these ensembles are related and demonstrated that the flip time distribution in the switch change ensemble may exhibit a peak but the flip time distribution in the steady state ensemble can never do so .we also provide a generic relationship between the flip time distribution sampled in the two different ensembles . given that in single - cell experiments , measuring the flip time distribution in the sce is much more demanding than in the sse , our result provides a way to access the sce flip time distribution by making measurements only in the sse .our flip time calculations are reminiscent of persistence problems in non - equilibrium statistical physics where , for example , one is interested in the time an ising spin stays in one state before flipping .however , because of the linear feedback of our model switch , the flip time distribution is not expected to have a long tail as in usual persistence problems , rather it is the shape of the peak of the distribution which is of interest . by studying numerically the time correlations of a single switch , using the two time autocorrelator ( [ ctau ] ) , we have shown that our model switch can play the role of a primitive `` memory module '' .the two time autocorrelator displays nontrivial behaviour including rather slow decay , which would be worthy of further study . we have also investigated the behaviour of two coupled switches within the same cell , and showed that both positive and negative correlations could be produced by choosing the parameters appropriately . 
in particular for , as is the case for the _ fim _ switch , anti - correlations were observed , implying that if one switch were on at time , the other would tend to be off at that time and for a subsequent time of about one switch period .many open questions and problems remain . at a technical level one would like to compute correlations of a single switch analytically and be able to treat the multiple switch system .the model itself could be refined in several ways , for example , by introducing nonlinear feedback .it has been shown that such feedback allows nontrivial behaviour even at the level of a piecewise deterministic markov process approximation , where one assumes a deterministic evolution for the enzyme concentration , but a stochastic description for the switching . at presentour model includes no explicit coupling to the environment , but such coupling could be included in a simple way by adding into the model environmental control of parameters or . to make a closer connection to real biological switches , such as _ fim_ , one could extend the model to include , for example , multiple and cooperative binding of the enzymes .one particularly exciting direction , which we plan to pursue in future work , is to develop models for growing populations of switching cells , in which cell growth is coupled to the switch state .such models could lead to a better understanding of the role of phase variation in allowing cells to survive and proliferate in fluctuating environments .+ the authors are grateful to aileen adiciptaningrum , david gally and sander tans for useful discussions .r. j. a. was funded by the royal society of edinburgh .this work was supported by epsrc under grant ep / e030173 .we show here how to solve eq.([eq : survon ] ) using the method of characteristics ( see e.g. ) . introducing the new variable , we set \frac{{\partial}}{{\partial}z } { { \tilde h}_{\textrm{on}}}(z , t ) \,\,.\end{gathered}\ ] ] we can then identify the derivatives of and with respect to as : next , we solve these equations for and using initial conditions and : where . the reduced ordinary differential equation ( ode ) for is : { { \tilde h}_{\textrm{on}}}(r)\,\,,\ ] ] substituting in the above relation with its expression given in ( [ eq : charsol ] ) , we get an ordinary differential equation for , which can be solved by separation of variables : solving the above equation using the initial condition , we arrive at { { \widetilde w}}(z_0)\,\,,\end{gathered}\ ] ] where . substituting then from ( [ eq : charsol ] ) and one finally recovers ( [ eq : hton ] ) .in this appendix we show how the result ( [ eq : linkbis ] ) can be obtained by considering the _ backward survival probability _ : which is the probability that the system has survived in the state without flipping and with enzymes at time 0 knowing that it had enzyme molecules at a past time .the probability will verify the backward master equation in section [ sec : firstpassage ] we used the forward master equation to compute the flip time distribution in two steps .first , we computed the forward survival probability with two possible initial conditions , to distinguish the two possible scenarios of measurement .second , we summed this survival probability over all possible final configurations , and took the time derivative in order to enforce a flipping at the end of the sampling . 
an analogous calculation ( which we do not detail ) can be carried out considering the backward master equation ( [ eq : backwardme ] ) , and the final result has to be the same .in fact , we can consider the r.h.s .of ( [ eq : backwardme ] ) as a generator of the backward dynamics .thus the solution of the backward evolution equation will have as boundary condition the statistics of the final configuration at time 0 , and will yield the statistics of the possible corresponding initial configurations at ( with the additional constraint that the switch never flipped ) .since for both sce and sse we condition that on switch flips at , the boundary condition of ( [ eq : backwardme ] ) has to be taken when the switch is flipping from state to state , and thus corresponds to : where is defined in ( [ eq : w1 ] ) .this is the analogue of the first step described above .the advantage is that now our boundary condition is the same for both the sce and the sse. we can relate to by noting that is the probability that the switch has not flipped going backward for a time .we now have to made a distinction between the sce and the sse , since what happens at time is precisely the initial ensemble . for the case of the sce ,we want the switch to flip at time , therefore the flip time distribution is given by : on the other hand , for the case of the sse , there is no flipping at to enforce and the flip time distribution is simply proportional to the survival probability : the denominator in ( [ eq : ssebis ] ) is chosen to ensure normalisation .furthermore , we can compute the average flip time in the sce using ( [ eq : scebis ] ) : where an integration by parts has been performed .we can see then that the denominator in eq.([eq : ssebis ] ) is exactly the average flip time . finally ,integrating eq.([eq : scebis ] ) from to infinity and replacing the result in ( [ eq : furthermore ] ) , we obtain and the result ( [ eq : linkbis ] ) is recovered .
we study the statistical properties of a simple genetic regulatory network that provides heterogeneity within a population of cells. this network consists of a binary genetic switch in which stochastic flipping between the two switch states is mediated by a "flipping" enzyme. feedback between the switch state and the flipping rate is provided by a linear feedback mechanism: the flipping enzyme is only produced in the on switch state and the switching rate depends linearly on the copy number of the enzyme. this work generalises the model of [phys. rev. lett. 101, 118104] to a broader class of linear feedback systems. we present a complete analytical solution for the steady-state statistics of the number of enzyme molecules in the on and off states, for the general case where the enzyme can mediate flipping in either direction. for this general case we also solve for the flip time distribution, making a connection to first passage and persistence problems in statistical physics. we show that the statistics of the model are non-poissonian, leading to a peak in the flip time distribution. the occurrence of such a peak is analysed across the parameter space. we present a new relation between the flip time distributions measured for two relevant choices of initial condition. we also introduce a new correlation measure to show that this model can exhibit long-lived temporal correlations, thus providing a primitive form of cellular memory. motivated by dna replication as well as by evolutionary mechanisms involving gene duplication, we study the case of two switches in the same cell. this results in correlations between the two switches; these can be either positive or negative depending on the parameter regime.
difference image analysis ( dia ) has rapidly moved to the forefront of modern techniques for making time - series photometric measurements on digital images .the method attempts to match one image to another by deriving a convolution kernel describing the changes in the point spread function ( psf ) between images . when applied to a time - series of images using a high signal - to - noise reference image , the differential photometry that can be performed on the differenceimages regularly provides superior accuracy to more traditional profile - fitting photometry , achieving errors close to the theoretical poisson limits . moreover , dia is the only reliable way to analyse the most crowded stellar fields. one will find dia in use in many projects studying object variability .for example , microlensing searches ( e.g. ; ) have been revolutionised by the ability of dia to deal with exceptionally crowded fields , and surveys for transiting planets ( e.g. ; ) looking for small % photometric eclipses have benefited substantially from the extra accuracy obtained with this method . also, dia is not limited to stellar photometry as illustrated by the discovery of light echoes from three ancient supernovae in the large magellanic cloud ( ) .the first attempts at image subtraction are summarised in the introduction of ( from now on al98 ) and are based on trying to determine the convolution kernel by taking the ratio of the fourier transforms of matching bright isolated stars on each image ( ) .development of dia reached an important landmark in al98 with their algorithm to determine the convolution kernel directly in image space ( rather than fourier space ) from all pixels in the images by decomposing the kernel onto a set of basis functions .the algorithm is very successful and efficient , and with the extension to a space - varying kernel solution described in ( from now on al00 ) , the method has become the current standard in dia .in fact , all dia packages use the associated software package isis2.2 ( e.g. ; ) , or are implementations of the alard algorithm ( e.g. ) .we refer to the method described in al98 and al00 as the alard algorithm .in this letter we suggest a change to the main algorithm to determine the convolution kernel that retains the linearity of the least - squares problem and yet is simpler to implement , has fewer input parameters and is in general more robust ( section 2 ) .we compare our algorithm directly to the alard algorithm ( section 3 ) , and suggest more techniques that increase the quality of the subtracted images . we conclude in section 4 .consider a pair of registered images of the same dimensions , one being the reference image with pixels , and the other the current image to be analysed with pixels , where and are pixel indices refering to the column and row of the image .ideally the reference image will be the better seeing image of the two and have a very high signal - to - noise ratio .this can be achieved in practice by stacking a set of best - seeing images . as with the method of al98, we use the model {ij } + b_{ij } \label{eqn : model}\ ] ] to represent the current image , where we wish to find a suitable convolution kernel and differential background . formulating this as a least - squares problem, we want to minimise the chi - squared where the represent the pixel uncertainties . 
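to make the model and the objective function concrete, the following numpy sketch builds the model image as the reference convolved with a kernel plus a (here constant) differential background, and evaluates the corresponding chi-squared; the function names, the toy data and the crude per-pixel error estimate are our own illustrative choices and do not reproduce the full noise treatment discussed later.

```python
import numpy as np
from scipy.signal import convolve2d

def model_image(ref, kernel, background):
    """model of the current image: reference convolved with the kernel plus a
    differential background (taken as a single constant here, for simplicity)."""
    return convolve2d(ref, kernel, mode="same", boundary="symm") + background

def chi_squared(image, ref, kernel, background, sigma):
    """chi-squared of the model against the current image; bad pixels can be
    excluded simply by setting their sigma to infinity."""
    resid = (image - model_image(ref, kernel, background)) / sigma
    return np.sum(resid ** 2)

# toy demonstration on synthetic data (all numbers purely illustrative)
rng = np.random.default_rng(0)
ref = rng.poisson(100.0, size=(64, 64)).astype(float)   # stands in for a high-s/n reference
k1d = np.array([0.2, 0.6, 0.2])
kern = np.outer(k1d, k1d)                                # a small blurring kernel
noiseless = model_image(ref, kern, background=5.0)
image = rng.poisson(np.clip(noiseless, 0.0, None)).astype(float)   # add photon noise
sigma = np.sqrt(np.clip(image, 1.0, None))               # crude per-pixel errors

print("chi2 per pixel with the generating kernel: %.2f"
      % (chi_squared(image, ref, kern, 5.0, sigma) / image.size))
```

a spatially varying differential background and a proper ccd noise model are discussed in what follows.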
at this point in the alard algorithm ,the problem is converted to standard linear least - squares by decomposing the kernel onto a set of gaussian basis functions , each multiplied by polynomials of the kernel coordinates and , and by assuming that the differential background is represented by a polynomial function of the image coordinates and .spatial variation of the convolution kernel is modelled by further multiplying the kernel basis functions by polynomials in and .this method has a major drawback in that it assumes that the chosen kernel decomposition is sufficiently complex so as to model in detail the convolution kernel. how do we know that we are making the correct choice of basis functions ?different situations may require different combinations of basis functions of varying complexity .in fact , a feature of all current dia packages ( which are all based on the al98 prescription for kernel basis functions ) is the requirement that the user defines the number of gaussian basis functions used , their associated sigma values and the degrees of the modifying polynomials .this sort of parameterisation can end up being confusing for the user , and require a large amount of experimentation to obtain the optimal result for a specific data set . with this motivation, we have developed a new dia algorithm in which we make no assumptions about the functional form of the basis functions representing the kernel .considering a spatially invariant kernel , we represent the kernel as a pixel array with pixels where and are pixel indices corresponding to the column and row of the kernel .we also define the differential background as some unknown constant .hence we may rewrite equation ( [ eqn : model ] ) as : this equation has unkowns for which we require a solution .note that the kernel may be of any shape that includes the pixel , and so to preserve symmetry in all directions , we adopt a circular kernel ( instead of the standard square shape ) . in order to solve for and in the least - squares sense , we note that the in equation ( [ eqn : chisq ] ) is at a minimum when the gradient of with respect to each of the parameters and is equal to zero . performing the differentiations and rewriting the set of linear equations in matrix form, we obtain the matrix equation with : where and are generalised indices for the vector of unknown quantities , with associated kernel indices and respectively . finding the solutions and the solution depends on the pixel variances which in turn depend on the image model values . 
see section [ uncertainties ] ] for and requires the construction of the matrix and vector , inverting and calculating .every pixel on both the reference image and current image has the potential to be included in the calculation of and .however , we ignore bad / saturated pixels on both images , and also any pixels on the current image for which the calculation of the corresponding model pixel value includes a bad / saturated pixel on the reference image .this implies that a single bad / saturated pixel on the reference image can discount a set of pixels equal to the kernel area on the current image .hence bad / saturated pixels on the reference image should be kept to a minimum , and excessively large kernels should be avoided .the kernel sum is a measure of the mean scale factor between the reference image and the current image , and consequently it includes the effects of relative exposure time and atmospheric extinction .we refer to as the photometric scale factor .although it is not essential , we suggest that a constant background estimate is subtracted from the reference image before solving for the kernel and differential background since this will minimse any correlation between and .finally , we mention that a difference image is defined as . assuming that most objects in the reference image are constant sources ,then a difference image will consist of random noise ( mainly poisson noise from photon counting ) except where a source has varied in brightness or the background pattern has varied .sources that are brighter or dimmer at the epoch of the current image relative to the epoch of the reference image will show up as positive or negative flux residuals , respectively , on the difference image .these areas may be measured to yield a difference flux for each object of interest .we take the following standard ccd noise model for the pixel variances : where is the ccd readout noise ( adu ) , is the ccd gain ( e/adu ) and is the master flat field image . note that the depend on the image model and consequently , fitting becomes an iterative process .note also that we assume that the reference image and master flat field image are noiseless since these are high s / n ratio images .finally , if the current image was registered with the reference image via a geometric transformation , then the flat field that is actually used in the noise model must be the result of the same transformation applied to the original master flat field . in order to calculate an initial kernel and differential background solution, we set the to the image values . in subsequent iterations , we use the current image model to set the as per equation [ eqn : noise_model ] .we also employ a 3 clip algorithm during the iterative model fitting process in order to prevent outlier image pixel values from entering the solution . after each iteration, we calculate the absolute normalised residuals for all pixels . any pixels with ignored in subsequent iterations .the iterations are stopped when no more image pixels are rejected and at least two iterations have been performed . in extending our new method to solving for a spatially variant kernel solution , we preserve flexibility by splitting the image area into an by grid of sub - regions and solving for the kernel and differential background in each sub - region .the coarse grid of kernel and differential background solutions may be interpolated to yield the solution corresponding to any given image pixel . 
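To make the least-squares kernel solution concrete, here is a minimal sketch that solves for a discrete pixel kernel plus a constant differential background by weighted linear least squares. For brevity it uses a square kernel and `numpy.linalg.lstsq`, whereas the text adopts a circular kernel and builds the normal equations explicitly; the CCD gain and readout-noise defaults and the exact noise formula are placeholders for the quantities defined above.

```python
import numpy as np

def solve_kernel(ref, cur, sigma, half=3):
    """Solve for a (2*half+1)^2 pixel kernel and a constant differential
    background by weighted linear least squares.  In practice this is run
    on image sub-regions; a full 1k x 1k frame makes the design matrix
    large but the algebra is identical."""
    ny, nx = ref.shape
    ys, xs = np.mgrid[half:ny - half, half:nx - half]   # skip the borders
    offsets = [(dy, dx) for dy in range(-half, half + 1)
                        for dx in range(-half, half + 1)]
    # One column per kernel pixel (a shifted copy of the reference image)
    # plus one constant column for the differential background.
    A = np.empty((ys.size, len(offsets) + 1))
    for k, (dy, dx) in enumerate(offsets):
        A[:, k] = ref[ys + dy, xs + dx].ravel()
    A[:, -1] = 1.0
    w = 1.0 / sigma[ys, xs].ravel()
    b = cur[ys, xs].ravel()
    coef, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    kernel = coef[:-1].reshape(2 * half + 1, 2 * half + 1)
    return kernel, coef[-1]

def ccd_sigma(model, flat, gain=1.48, ron=4.64):
    """One common form of the CCD noise model: readout noise plus Poisson
    noise derived from the model counts, in ADU.  Constants are example
    values, not fixed properties of any particular detector."""
    return np.sqrt(ron ** 2 + np.maximum(model, 0.0) / (gain * flat))
```

In practice the fit is iterated as described above: the pixel variances are recomputed from the current model image and any pixel whose absolute normalised residual exceeds 3 is rejected before refitting, until no further pixels are discarded.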
in this waywe make no assumptions about how the kernel and differential background vary across the image area .this is in contrast to al00 , whose method employs an extension of the kernel basis functions by further multiplication by polynomials in and , and therefore requires two more input parameters from the user , namely the degrees of the polynomials describing the spatial variation of the kernel and the differential background .to illustrate the potential advantages of our new kernel solution method over that of al98 , we carry out a set of simple tests on a 1024 pixel ccd image of the globular cluster ngc1904 . in each testwe use the original image as the reference image and a transformed version of the original image as the current image , where the transformations employed are simple , spatially invariant and typical of astronomical imaging .we attempt to solve for the kernel using our new method , which is implemented in a software package called dandia ( bramich in prep . ) , and we compare the solution to that obtained using the isis2.2 software from al00 .we use the isis2.2 default parameters specifying 3 gaussian basis functions of pix with modifying polynomials of degree 6 , 4 and 3 , respectively . for both software packages ,we choose to solve for a spatially invariant kernel of size 27 pixels , and a constant differential background .the better the match between the convolved reference image and the current image , the smaller the value of the quantity .we guage the relative quality of the kernel solutions by calculating the noise ratio where and are values of calculated for a small 80x80 pixel sub - region using isis2.2 and dandia , respectively .the results of the tests described below are shown in figure [ fig : test1 ] : 1 . in test a, the current image has been created by shifting the reference image by one pixel in each of the positive and spatial directions , without resampling .the corresponding kernel should be the identity kernel ( central pixel value of 1 and 0 elsewhere ) shifted by one pixel in each of the negative and kernel coordinates .dandia recovers this kernel to within numerical rounding errors whereas isis2.2 recovers a peak pixel value of 0.995 with other absolute pixel values of up to 0.004 .consequently the residuals in the isis2.2 difference image are considerably worse than those for dandia , and the noise ratio is .2 . in test b ,the current image has been created by convolving the reference image with a gaussian of fwhm 4.0 pix . both dandia and isis2.2recover the kernel successfully , but dandia out - performs isis2.2 with .3 . in test c , we shifted the reference image by half a pixel in each of the positive and spatial directions to create the current image , an operation that required the resampling of the reference image .we used the cubic o - moms resampling method ( see section [ resample ] ) .isis2.2 fails to reproduce the highly complicated kernel matching the two images , whereas dandia does a nearly perfect job .the noise ratio is .4 . in test d, we simulate a telescope jump by setting where is a resampled version of the reference image shifted by 3.5 pixels in each of the positive and spatial directions .the corresponding kernel is a combination of the identity kernel and a shifted version of the kernel from test c. 
dandia accurately reproduces this kernel with a central pixel value of 0.60015 whereas isis2.2 produces a poor approximation of the kernel with a central pixel value of 0.631 .the noise ratio is .it is evident that the gaussian basis functions used in isis2.2 limit the flexibility of the kernel solution to modelling kernels that are centred near the kernel centre and that have scale sizes similar to the sigmas of the gaussians employed .it is only in test b that isis2.2 can closely model the kernel , simply because the kernel itself is a gaussian .tests a , c & d show how isis2.2 is unable to model sharp , complicated and off - centred kernels .dandia does not suffer from any of these limitations since it makes no assumption about the kernel shape , and hence it performs superbly in all of the above tests . in section 2 ,we make the assumption that the reference image and current image are registered , which implies that one of the images has been transformed to the pixel coordinate system of the other image , usually via image resampling .ideally one should transform the reference image to the current image since the reference image forms part of the model . in this way, the pixel variances in the current image are left uncorrelated from pixel to pixel . however, most implementations of dia transform the current image to the coordinate system of the reference image using image resampling .we suggest two improvements to this methodology .firstly , if resampling is to be employed , one should use an optimal resampling method .we employ the cubic o - moms ( optimal maximal - order - minimal - support ) basis function for resampling , which is constructed from a linear combination of the cubic b - spline function and its derivatives .the o - moms class of functions have the highest approximation order and smallest approximation error constant for a given support ( ) .secondly , our kernel model does not use basis functions that are functions of the kernel pixel coordinates .consequently , for two images that require only a translation to be registered , the image resampling is incorporated in the kernel solution , avoiding the problem of correlated pixel noise .dia is used extensively for extracting lightcurves of objects in time - series images , which usually only have a small pixel shift between images . by translating the current image to the reference image by an integer pixel shift , avoiding image resampling , the kernel solution process can do the rest of the job of matching the reference image to the current image .we now test our new algorithm on a pair of 1024 pixel images of ngc1904 from the same camera with fwhms of .2 pix and .9 pix . using matching star pairs ,we derive a linear transformation between the images that consists of a translation with negligible rotation , shear and scaling . from the calibration images , we measure a gain of 1.48 e/adu and a readout noise of 4.64 adu , and we construct a master flat field for use in the noise model . on the left of figure [ fig : test2 ] , we present 100 pixel cutouts of the reference image ( the better seeing image ) and the current image . 
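The two registration options discussed above (integer-pixel alignment without resampling versus sub-pixel resampling) might be sketched as follows. `scipy.ndimage.shift` performs cubic B-spline interpolation and is used here only as a readily available stand-in for the cubic O-MOMS resampling recommended in the text; the shift amounts `dy` and `dx` are assumed to come from a prior astrometric solution.

```python
import numpy as np
from scipy import ndimage

def register_integer(image, dy, dx):
    """Align to the nearest whole pixel without resampling, so the kernel
    solution absorbs the remaining sub-pixel shift."""
    return np.roll(np.roll(image, int(round(dy)), axis=0),
                   int(round(dx)), axis=1)

def register_resample(image, dy, dx):
    """Sub-pixel registration by interpolation; cubic B-spline shift used
    as a stand-in for cubic O-MOMS, which scipy does not provide."""
    return ndimage.shift(image, (dy, dx), order=3, mode="nearest")
```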
when calculating the of the difference images , we use a modified version of equation [ eqn : noise_model ] to account for the noise contribution from the single - exposure reference ] where is the space variant kernel and is a factor correcting for the noise distortion from resampling the reference image .the value of depends on the resampling method used and the coordinate transformation applied .we calculate by generating a 1024 pixel image of values drawn from a normal distribution with zero mean and unit sigma , resampling the image using the same method and transformation as that applied to the reference image , and then fitting a gaussian to the histogram of transformed pixel values , the sigma of which indicates the value of . for cubic o - moms resampling and the transformation between our two test images , we obtain 0.884 .our first pair of tests involves registering the images by resampling the reference image via cubic o - moms and then using dandia ( test e ) and isis2.2 ( test f ) to generate difference images . for dandia , we solve for an array of circular kernels corresponding to a 10 grid of image sub - regions , where each kernel contains 317 pixels . the kernel used to convolve each pixel on the reference imageis calculated via bilinear interpolation of the array of kernels .the results of test e are displayed in the upper middle panel of figure [ fig : test2 ] where we show the difference image normalised by the pixel noise from equation [ eqn : noise_model2 ] with a linear scale from -2 to 2 .two variable stars are visible ( rr lyraes ) and the cosmic ray from the reference image has created a negative flux on the difference image . in the same panelwe plot the histogram of normalised pixel values overlaid with a gaussian fit , and calculate a , ignoring the small pixel areas including the variable stars and the cosmic ray ( 250 pix ) .the 100 pixel cutout corresponds to one image sub - region used to determine a kernel solution and hence we may calculate a reduced chi - squared by assuming . for isis2.2we solve for a spatially variant kernel of degree 2 with a spatially variant differential background of degree 3 in addition to the other default kernel basis functions ( see section [ initialtests ] ; 328 free parameters ) .the results of test f are shown in the upper right panel of figure [ fig : test2 ] .we obtain , and assuming free parameters per image sub - region , we obtain .tests g & h involve registering the images to within 1 pixel by translating the reference image via an integer pixel shift. then we apply dandia ( test g ) and isis2.2 ( test h ) to obtain kernel solutions , avoiding the use of resampling . for dandiawe obtain , and for isis2.2 we obtain , with corresponding of 0.99 and 1.00 , respectively ( see figure [ fig : test2 ] ) . 
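The Monte-Carlo estimate of the noise-distortion factor described at the start of this section might look like the sketch below. The text fits a Gaussian to the histogram of the resampled pixel values; the sample standard deviation is used here as a simpler proxy, and the B-spline shift again stands in for O-MOMS resampling.

```python
import numpy as np
from scipy import ndimage

def noise_distortion_factor(shape, dy, dx, order=3, seed=42):
    """Estimate how resampling rescales the pixel noise: resample a
    unit-variance Gaussian noise image with the same transformation applied
    to the reference image and measure the spread of the result."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    shifted = ndimage.shift(noise, (dy, dx), order=order, mode="wrap")
    return shifted.std()
```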
visually , the normalised difference image cutouts in figure [ fig : test2 ] are very similar , and differences are only noticeable after detailed scrutiny .however , the analysis reveals that our algorithm performs considerably better than the alard algorithm ( test e performs 0.60 better than test f , and test g performs 0.38 better than test h ) , and that image resampling degrades the difference images ( test g performs 0.48 better than test e , and test h performs 0.70 better than test f ) .the highest quality difference image was produced by using dandia on the two images aligned to within 1 pixel but without resampling ( test g , which performs 1.08 better than test f ) .we have presented a new method for determining the convolution kernel matching a best - seeing reference image to another image of the same field .the method involves modelling the kernel as a pixel array , avoiding the use of possibly inappropriate basis functions , and eliminating the need for the user to specify which basis functions to use via numerous parameters . for images that require a translation to be registered ,the kernel pixel array incorporates the resampling process in the kernel solution , avoiding the need to resample images , which degrades their quality and creates correlated pixel noise .kernels modelled by basis functions may only partly compensate for sub - pixel translations since the basis functions are centred at the origin of the kernel coordinates .we have shown that our new method can produce higher quality difference images than isis2.2 .ideally the reference image should be aligned with the current image , preferably without resampling , but using o - moms resampling when necessary .the flexibility of our kernel model allows the construction of difference images for telescope jumps , or trailed images , which is where isis2.2 fails .these improvements have important implications for time - series photometric surveys .better quality difference images implies more accurate lightcurves , and the increased kernel flexibility will lead to less data loss due to telescope tracking and/or focus errors .d.m . bramich would like to thank k. horne and m. irwin for their useful advice , and a. arellano ferro for supplying the test images .this work is dedicated to phoebe and chloe bramich muiz .
in the context of difference image analysis ( dia ) , we present a new method for determining the convolution kernel matching a pair of images of the same field . unlike the standard dia technique which involves modelling the kernel as a linear combination of basis functions , we consider the kernel as a discrete pixel array and solve for the kernel pixel values directly using linear least - squares . the removal of basis functions from the kernel model is advantageous for a number of compelling reasons . firstly , it removes the need for the user to specify such functions , which makes for a much simpler user application and avoids the risk of an inappropriate choice . secondly , basis functions are constructed around the origin of the kernel coordinate system , which requires that the two images are perfectly aligned for an optimal result . the pixel kernel model is sufficiently flexible to correct for image misalignments , and in the case of a simple translation between images , image resampling becomes unnecessary . our new algorithm can be extended to spatially varying kernels by solving for individual pixel kernels in a grid of image sub - regions and interpolating the solutions to obtain the kernel at any one pixel . [ firstpage ] techniques : image processing , techniques : photometric , methods : statistical
we consider the rayleigh fading relay channel shown in fig .[ fig : relay channel ] , consisting of the source node , the relay node and the destination node .it is assumed that r can operate only in the half - duplex mode , i.e. , it can not receive and transmit simultaneously .it is assumed that r has perfect knowledge about the instantaneous value of the fade coefficient associated with the s - r link and d has perfect knowledge about the instantaneous values of the fade coefficients associated with the s - r , r - d and s - d links . throughout ,the phase during which the relay is in reception mode is referred to as phase 1 and the phase during which the relay is in transmission mode is referred to as phase 2 . in the non -orthogonal decode and forward ( nodf ) scheme , s transmits , r and d receive during phase 1 ( fig .[ fig : nodf_phase1 ] ) .both s and r transmit during phase 2 ( fig .[ fig : nodf_phase2 ] ) . in the orthogonaldecode and forward ( nodf ) scheme , s transmits , r and d receive during phase 1 ( fig .[ fig : odf_phase1 ] ) .only r transmits during phase 2 ( fig .[ fig : odf_phase2 ] ) .different decoder architectures for the odf scheme have been proposed in ,, and . as noted in , the implementation as well asthe performance analysis of the optimal maximal likelihood ( ml ) decoder for the odf scheme is very complicated .sub - optimal decoders called -mrc and co - operative mrc ( c - mrc ) were proposed in and respectively .a near ml decoder for the odf scheme was presented in for the single relay channel with multiple antennas .non - orthogonal relay protocols offer higher spectral efficiency when compared with orthogonal relay protocols , , .power allocation strategies for the nodf scheme were discussed in . in this paper , the near ml decoder presented in is extended for the nodf scheme . the performance of the extended near ml decoder for the nodf scheme is analyzed . throughout , we consider uncoded communication using signal sets such as m - psk , m - qam etc . by a _ labelling scheme _ , we refer to the way in which the bits are mapped on to the signal points at the source and the relay .labelling schemes at the source and the relay which result in significant performance improvement are obtained .let denote the complex signal set used at s and r , with .a collection of bits constitutes a message .let denote this message set .let denote the labelling scheme used at s during phase 1 , i.e. , it specifies how messages are mapped onto complex symbols from the signal set at the source .we assume that during phase 1 ( fig . [fig : nodf_phase1 ] ) , s transmits l complex symbols , corresponding to l messages , where and , for . the received signal at r and d during phase 1 are given by , where and are the zero mean circularly symmetric complex gaussian fading coefficients associated with the s - r and s - d links respectively with the corresponding variances given by and .the additive noises at r and d , and are circularly symmetric complex gaussian random variables with mean 0 and variance 1/2 per dimension , denoted by .let and denote the labelling schemes used at s and r respectively during phase 2 . 
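As a rough numerical illustration of the phase-1 signal model just described, the sketch below generates L M-PSK symbols, passes them through fixed Rayleigh fade coefficients to the relay and the destination, and performs ML detection at the relay. The natural labelling, the fading variances and the variable names are illustrative assumptions, not the labelling schemes optimised later in the paper.

```python
import numpy as np

def simulate_phase1(M=8, L=1000, var_sr=1.0, var_sd=0.1, seed=0):
    """Phase-1 sketch of the half-duplex relay channel: the source sends L
    M-PSK symbols through block Rayleigh fades to R and D, with unit-variance
    circularly symmetric complex Gaussian noise at each receiver."""
    rng = np.random.default_rng(seed)
    cgauss = lambda var, n=1: np.sqrt(var / 2) * (rng.standard_normal(n)
                                                  + 1j * rng.standard_normal(n))
    msgs = rng.integers(0, M, size=L)
    x = np.exp(2j * np.pi * msgs / M)                 # natural PSK labelling
    h_sr, h_sd = cgauss(var_sr)[0], cgauss(var_sd)[0] # fades for this phase
    y_r = h_sr * x + cgauss(1.0, L)                   # received at the relay
    y_d = h_sd * x + cgauss(1.0, L)                   # received at the destination
    # ML decoding at the relay: nearest faded constellation point
    constellation = h_sr * np.exp(2j * np.pi * np.arange(M) / M)
    decoded = np.argmin(np.abs(y_r[:, None] - constellation[None, :]), axis=1)
    return msgs, decoded, y_d, h_sd
```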
during phase 2 ( fig .[ fig : nodf_phase2 ] ) , s transmits the l complex symbols , corresponding to the same messages transmitted during phase 1 and r transmits the complex symbols corresponding to the decoded messages .the received signal at d during phase 2 is given by , where and are the zero mean circularly symmetric complex gaussian fading coefficients associated with the s - d and r - d links respectively with the corresponding variances given by and .the additive noise at d , is .l is assumed to be large enough such that the fading coefficient associated with the s - d link during phase 2 is independent of .let denote the labelling scheme used at s during phase 1 .during phase 1 ( fig .[ fig : odf_phase1 ] ) , s transmits l complex symbols , corresponding to l messages , where and , for .the received signal at r and d during phase 1 are given by , where and are the zero mean circularly symmetric complex gaussian fading coefficients associated with the s - r and s - d links respectively with the corresponding variances given by and .the additive noises at r and d , and are .let denote the labelling schemes used at s and r respectively during phase 2 . during phase 2 ( fig .[ fig : odf_phase2 ] ) , r transmits the complex symbols corresponding to the decoded messages .the received signal at d during phase 2 is given by , where is the zero mean circularly symmetric complex gaussian fading coefficients associated with the r - d link with the corresponding variance given by .the additive noise at d , is .the contributions of the paper are as follows .* the expressions for the pairwise error probability ( pep ) , for the near ml decoder proposed , are derived for the nodf scheme .it is shown that the near ml decoder offers full diversity for the nodf scheme . * even though the s - d link is in general much weaker than the r - d link , the error performance of the nodf scheme is much better than that of the odf scheme .* it is shown that the high snr performance of the nodf scheme with a non - ideal s - r link , is exactly same as that of the nodf scheme in which the s - r link is ideal .in other words , the effect of the strength of the source - relay link completely vanishes at high snr for the nodf scheme . *it is shown that proper choice of the different labelling schemes for the source and the relay , results in a significant improvement in performance over the case where the source and the relay use identical labelling schemes . *furthermore , it is shown that the performance improvement obtained by proper choice of the labelling scheme is more pronounced in the case of the nodf scheme than the odf scheme .* we give an algorithm to obtain good labelling schemes for the source and the relay .the organization of the rest of the paper is as follows .the description of the near ml decoder for the odf and nodf schemes constitutes section ii . in section iii ,the pep expressions for the nodf and odf schemes are derived . in section iv , the effect of the choice of the labelling scheme on the performance is studied .section v compares the nodf and odf schemes with non - ideal source - relay link with the case where the source - relay link is ideal . in sections iii ,iv and v , conclusions derived based on the pep expressions are validated by simulations , with 8-psk as the signal set used at the source and the relay . _* notations : * _ denotes the standard circularly symmetric complex gaussian random vector of length . 
the scalar real valued gaussian random variable with mean zero and variance . for simplicity , distinction is not made between the random variable and a particular realization of the random variable , in expressions involving probabilities of random variables .for example , is simply written as . is the shorthand notation for . in some probability expressions involving conditioning of the fading coefficients ,the fact that the probability is conditioned on the values taken by the fading coefficients is not explicitly written , as it can be understood from the context .for a set , denotes the cardinality of . denotes the real part of the complex number . throughout, denotes the average energy in db of the signal set used at the source and the relay .by assumption , r has perfect knowledge about the instantaneous value of the fade coefficient associated with the s - r link and d has perfect knowledge about the instantaneous values of the fade coefficients associated with the s - r , r - d and s - d links . at r, it is assumed that the decoder performs ml decoding , i.e. , the output of the decoder at r for . for ml decoding at d, the decoder has to maximize the probability , for , over all possible choices of .the form of ( [ eqn1 ] ) is the same for all .hence we leave out in the following discussion .the ml decoder decides in favour of , if let .then we have , where equals the probability of the event that r decides in favour of message , given that was the message transmitted by the source . as in , we upper bound the probability that a message transmitted by s is decoded as another message by the corresponding pep .hence , for , is upper - bounded by the pep , \\ \label{eqn3 } & \leq \dfrac{1}{2}\exp \left\lbrace -\dfrac{1}{4}\left\vert c_{rs}\left(x_{s_1}\left(a\right)-x_{s_1}\left(j\right)\right)\right\vert ^2\right\rbrace , % leq q\left[\dfrac { \vert c_{rs}\left(x_s\left(a\right)- c_{rs}\left(x_s\left(i\right)\vert}{\sqrt{2}}\right)\right)\right] ] db . it is important to note that the labelling gain is calculated based on the upper bound on the pep , taking into consideration only those pair of messages and which contribute dominantly to the metric .the actual high snr gain provided by the labelling scheme over the scheme need not equal .throughout , the phrase _ with our labelling _ means that s and r use the labelling scheme which is to be described in this section and _ without our labelling _ means that s and r use the labelling scheme .an algorithm to obtain a good labelling scheme is as follows .* choice of : the mapping used by s during phase 1 can be chosen arbitrarily . in particular , we can choose the mapping in which message is mapped on to . *choice of : can be chosen arbitrarily .in particular we can choose . by the choice of , occurs for a set of values of , denoted as .assign symbols for , in the increasing order of , such that is maximum , i.e , choose , where is chosen to be the one which has the maximum euclidean distance from , among all symbols of which are not previously assigned .if more than one option is available while making a choice , choose any one .: consider the sets , where belongs the set of messages for which symbols have been assigned .for each one of the sets , assign symbols for , such that , where is chosen to be the one which has the maximum euclidean distance from , among all symbols of which are not previously assigned . 
:repeat step 3 for those messages for which symbols have not been assigned .: if the procedure described above results in a value of , where is the minimum of the squared euclidean distance between all pairs of points in the signal set , change the choice which was made recently and repeat steps 3 and 4 to ensure that is greater than .* choice of can be chosen arbitrarily .in particular we can choose . by the choice of and , occurs for a set of values of , denoted as .assign symbols for , in the increasing order of , such that is maximum , i.e , choose , where is chosen to be the one which has the maximum euclidean distance from , among all symbols of which are not previously assigned .if more than one option is available while making a choice , choose any one .: consider the sets , where belongs the set of messages for which symbols have been already assigned .for each one of the sets , assign symbols for every , such that , where is chosen to be the one which has the maximum euclidean distance from , among all symbols of which are not previously assigned .: repeat step 3 for those messages for which symbols have not been assigned .+ the algorithm to find a good labelling strategy for the odf scheme , i.e. , choosing the maps and are exactly same as the choice of the maps and for the nodf scheme .[ cols="^,^,^,^,^,^,^,^,^",options="header " , ] we consider the case where 4-psk is the signal set used at s and r whose points are labelled as shown in fig .[ fig:4psk ] .the value of is assumed to be 0.1 .the choice of the labelling scheme is described below .* choice of : the map can be chosen arbitrarily .we can choose , .the sets , , can be found and are shown in table [ table : table1 ] .* choice of : 1 . can be chosen to be .2 . the set .we choose , since the euclidean distance between and is maximum . can take only two possible symbols and , both of which are at a squared euclidean distance from . hence the value of can not be made greater than . and are chosen to be and respectively .+ the sets , , can be found and are shown in table [ table : table1 ] .* choice of : 1 . can be chosen to be .2 . the set .hence choose .the set , for which symbol has already been assigned .4 . since the sets and , both do not contain , the choice of can be made arbitrarily .we choose . as a result .the choice of , and thus made is tabulated in table [ table : table1 ] .table [ table : table1 ] also contains and ( defined in the beginning of this section ) , for . for the labelling scheme , the maps and taken to be same as . from table [table : table1 ] , we see that for the nodf scheme and .hence the labelling gain , db . similarly , from table [ table : table2 ] , we see that for the odf scheme and . hence the labelling gain , db .consider the case where 8-psk is the signal set used at s and r. the points are assigned labels as shown in fig .[ fig:8psk ] .the value of is taken to be 0.1 .* choice of : the map can be chosen arbitrarily .we can choose , .the sets , , can be found and are shown in table [ table : table3 ] .* choice of : 1 . can be chosen to be .2 . the set .we choose , since the euclidean distance between and is maximum .we choose , since the euclidean distance between and is maximum , among all possible symbols which are not assigned .3 . the set . is chosen to be , since its euclidean distance from is maximum , among all symbols which are not yet assigned .4 . the set and hence is chosen to be . 5 . and hence is chosen to be . 6 . and hence is chosen to be . 
finally we are left with .+ since the steps described above results in a value of , the process of assigning the map is complete .the sets , , can be found and are shown in table [ table : table3 ] .* choice of : 1 . can be chosen to be .the set .hence choose .3 . the set .hence choose , since its euclidean distance from is maximum .4 . the set .hence choose .the set .hence choose .the set .hence choose .the set , for which symbol has been already assigned . 8 .we are left with messages and . and .choose and .vs performance of nodf and odf schemes , with and without our labelling for 8-psk , with db , db and db.,width=360 ] the choice of , and thus made is tabulated in table [ table : table3 ] . from table [table : table3 ] , we see that for the nodf scheme and .hence the labelling gain , similarly , from table [ table : table2 ] , we see that for the odf scheme and .hence the labelling gain , simulation results showing the vs performance of the nodf and odf schemes , with our labelling and without our labelling , with 8-psk as the constellation used at s and r , is shown in fig .[ fig : fig3 ] . from fig[ fig : fig3 ] , it can be seen that for both the odf and the nodf schemes , the labelling strategy suggested in this section provides advantage . for the odf scheme , at high snr ,the gain provided by the labelling scheme described is about 0.5 db and for the nodf scheme , it is about 2 db .consistent with the observations made in the beginning of this section based on the pep expressions for the odf and nodf schemes , from fig .[ fig : fig3 ] , it can be seen that the gain provided by the choice of the labelling is more in the case of the nodf scheme than the odf scheme .\\ \label{eqn16 } & \leq \exp \left\lbrace -\dfrac{\vert c_{ds_1}\left(x_{s_1}\left(a\right)-x_{s_1}\left(\bar{a}\right)\right)\vert^2 + \vert c_{ds_2}\left(x_{s_2}(a)-x_{s_2}\left(\bar{a}\right)\right)+c_{dr}\left(x_{r}(a)-x_{r}(\bar{a})\right)\vert^2}{4}\right\rbrace\\ \label{eqn17 } & \hspace{-3 cm}pr \left(a \longrightarrow \bar{a } \right ) \leq \hspace{0 cm } \left [ \dfrac{1}{1+\dfrac{1}{4}\vert \sigma_{ds}\vert ^2\vert x_{s_1}({a})-x_{s_1}(\bar{a})\vert ^2}\right ] \left[\dfrac{1}{1+\dfrac{1}{4}\vert \sigma_{ds}\vert ^2\vert x_{s_2}({a})-x_{s_2}(\bar{a})\vert ^2+\dfrac{1}{4}\vert \sigma_{dr}\vert ^2\vert x_{r}({a})-x_{r}(\bar{a})\vert ^2}\right]\end{aligned}\ ] ] ' '' '' we consider the case where the s - r relay link is ideal , i.e it is assumed that r decodes the message it receives with zero probability of error .the optimal ml decoder for this case is vs performance of the nodf scheme with our labelling , with ideal and non - ideal s - r links for 8-psk , width=360 ] vs performance of the nodf scheme without our labelling , with ideal and non - ideal s - r links for 8-psk , width=360 ] vs performance of the odf scheme with our labelling , with ideal and non - ideal s - r links for 8-psk , width=360 ] vs performance of the nodf scheme without our labelling , with ideal and non - ideal s - r links for 8-psk , width=360 ] the pep that message transmitted by s is decoded as message by d is given by .taking expectation of with respect to , and , we get .we note that at high snr , the bound on the pep given by and theorem 1 ( neglecting the higher order terms ) are the same .hence at high snr , the performance of the nodf scheme with a non - ideal s - r link is expected to be same as that of the nodf scheme with an ideal s - r link . 
in otherswords , at high snr , the vs performance does not depend on the strength of the s - r link . on the other hand ,the pep bound for the odf scheme given in corollary 1 contains additional second order terms and is not the same as the one obtained by substituting and in .hence at high snr , the vs performance of the odf scheme with a non - ideal s - r link is not expected to be be the same as that of the odf scheme with an ideal s - r link .a comparison of vs performance of the nodf scheme with our labelling , for the cases where s - r link is ideal and non - ideal is shown in fig .[ fig : fig4 ] . a similar comparison for the nodf scheme without our labelling is presented in fig . [ fig : fig5 ] . from fig .[ fig : fig4 ] and fig [ fig : fig5 ] , it is seen clearly that at high snr the performance of the nodf schemes with a non - ideal s - r link and ideal s - r link exactly coincide . in fig .[ fig : fig6 ] and fig [ fig : fig7 ] , similar comparisons are made for the odf scheme with and without our labelling . from , fig .[ fig : fig6 ] and fig [ fig : fig7 ] , it can be seen that at high snr , the vs curves for the case where the s - r link is ideal and non - ideal do not coincide , unlike the nodf scheme . in other words , to study the high snr performance of the nodf scheme , we can assume the s - r link to be ideal , whereas the same is not true for the odf scheme .a near ml decoder which gives maximum possible diversity ( diversity order 2 ) was studied .it was shown that the nodf scheme provides advantage over the odf scheme . a proper choice of the labelling scheme used at the source and the relay results in a significant improvement in performance. it will be interesting to study the performance of the near ml decoder and the effect of the choice of labelling , when the source and the relay use coded communication techniques .this work was supported partly by the drdo - iisc program on advanced research in mathematical engineering through a research grant as well as the inae chair professorship grant to b. s. rajan .160 andrew sendonaris , elza erkip and behnaam aazhang , `` user cooperation diversity part ii : implementation aspects and performance analysis '' , ieee transactions on communications , vol .11 , november 2003 .tairan wang , alfonso cano , georgios b. giannakis and j. nicholas laneman , `` high - performance cooperative demodulation with decode - and - forward relays '' , ieee transactions on communications , vol .7 , july 2007 .
We consider uncoded transmission over the half-duplex single relay channel, with a single antenna at the source, relay and destination nodes, in a Rayleigh fading environment. The phase during which the relay is in reception mode is referred to as phase 1, and the phase during which the relay is in transmission mode is referred to as phase 2. The following two cases are considered: the non-orthogonal decode-and-forward (NODF) scheme, in which both the source and the relay transmit during phase 2, and the orthogonal decode-and-forward (ODF) scheme, in which the relay alone transmits during phase 2. A near-ML decoder which gives full diversity (diversity order 2) for the NODF scheme is proposed. Due to the proximity of the relay to the destination, the source-destination link is, in general, expected to be much weaker than the relay-destination link. Hence it is not clear whether the transmission made by the source during phase 2 in the NODF scheme provides any performance improvement over the ODF scheme. In this regard, it is shown that the NODF scheme provides a significant performance improvement over the ODF scheme. In fact, at high SNR, the performance of the NODF scheme with a non-ideal source-relay link is the same as that of the NODF scheme with an ideal source-relay link. In other words, to study the high-SNR performance of the NODF scheme, one can assume that the source-relay link is ideal, whereas the same is not true for the ODF scheme. Further, it is shown that a proper choice of the mapping of the bits onto the signal points at the source and the relay provides a significant improvement in performance for both the NODF and the ODF schemes.
the problem of comparing two large population covariance matrices has important applications in modern genomics , where growing attentions have been devoted to understanding how the relationship ( e.g. dependencies or co - regulations ) among genes vary between different biological states .our interest in this problem is motivated by a microarray study on human asthma .this study consists of 88 asthma patients and 20 controls .it is known that genes tend to work collectively in groups to achieve certain biological tasks .our analysis focuses on such groups of genes ( gene sets ) defined with the gene ontology ( go ) framework , which are referred to as go terms .identifying go terms with altered dependence structures between disease and control groups provides critical information on differential gene pathways associated with asthma .many of the go terms contain a large number of ( in the asthma data , as many as 8,070 ) genes .the large dimension of microarray data and the complex dependence structure among genes make the problem of comparing two population matrices extremely challenging . in conventional multivariate analysis where the dimension is fixed , testing the equality of two unknown covariance matrices and based on the samples with sample sizes and has been extensively studied ,see for example and the references therein . in the high - dimensional setting where , recently several authors have developed new tests other than the traditional likelihood ratio test .considering multivariate normal data , and constructed tests using different distances based on traces of the covariance matrices ; proposed a -statistic based test for a more general multivariate model .these tests are effective for dense alternatives , but often suffer from low power when is sparse .we are more interested in this latter situation , as in genomics the difference in the dependence structures between populations typically involves only a small number of genes . for sparse alternatives , investigated an -type test .they proved that the distribution of the test statistic converges to a type i extreme value distribution under the null hypothesis and the test enjoys certain optimality property .motivated by this work , we propose in this paper a perturbed variation of the -type test statistic .we verify that the conditional distribution of the perturbed -statistic provides a high - quality approximation to the distribution of the original -type test , which has important implications in achieving accurate performance in finite sample size .in contrast , the convergence rate to the extreme - value distribution of type i is of order . the asymptotic validity of our proposed new procedure does not require any structural assumptions on the unknown covariances .it is valid under weak moment conditions . 
on the other hand ,the aforementioned work all require certain parametric distributional assumptions or structural assumptions on the population covariances in order to derive an asymptotically pivotal distribution .assumptions of this kind are not only difficult to be verified but also often violated in real data .it is known that expression levels of the genes regulated by the same pathway or associated with the same functionality are often highly correlated .also , in the microarray and sequencing experiments , most genes are expressed at very low levels while few are expressed at high levels .this implies that the distribution of gene expressions is most likely heavy - tailed regardless of the normalization and transformations . for testing in high dimensions ,the new procedure is computationally fast and adaptive to the unknown dependence structures .section [ method.sec ] introduces the new testing procedure and investigates its theoretical properties . in section [ simulation.sec ], we compare its finite sample performance with several competitive procedures .a gene clustering algorithm is derived in section [ sec51 ] , which aims to group hundreds or thousands of genes based on the expression patterns without imposing restrictive structural assumptions .we apply the proposed procedures to the human asthma dataset in section [ real ] .section [ discuss ] discusses our results and other related work .proofs of the theoretical results and additional numerical results are provided in the supplementary material .the proposed methods have been implemented in the ` r ` package ` hdtest ` and is currently available on cran ( http://cran.r-project.org ) .let and be two -dimensional random vectors with means and , and covariance matrices and , respectively .we are interested in testing h_0 : _ 1 = _ 2 h_1 : _ 1 _ 2 [ covariance.test ] based on independent random samples and drawn from the distributions of and , respectively . for each and , we write and .let and be the sample analogues of and , where and . for each , a straightforward extension of the two - sample -statistic for the marginal hypothesis versus given by _k= , [ eq2.3 ] where and are estimators of and , respectively .since the null hypothesis in ( [ covariance.test ] ) is equivalent to , a natural test statistic that is powerful against sparse alternatives in ( [ covariance.test ] ) is the -statistic one way to base a testing procedure on the -statistic is to reject the null hypothesis ( [ covariance.test ] ) when , where corresponds to the -quantile of the type i extreme value distribution . proved that this leads to a test that maintains level asymptotically and enjoys certain optimality . 
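The max-type statistic described above can be sketched as follows; the entry-wise variance estimates used here are the usual fourth-moment plug-ins, standing in for the estimators appearing in the definition of the marginal t-statistics in the text.

```python
import numpy as np

def max_covariance_statistic(X, Y):
    """L-infinity-type statistic: standardised differences of the two
    sample covariances, maximised over all (k, l) entries.  X and Y hold
    one observation per row and one variable per column."""
    n1, _ = X.shape
    n2, _ = Y.shape
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    S1, S2 = Xc.T @ Xc / n1, Yc.T @ Yc / n2
    # plug-in variance estimates of the sample covariance entries
    th1 = np.einsum('ik,il->kl', Xc**2, Xc**2) / n1 - S1**2
    th2 = np.einsum('ik,il->kl', Yc**2, Yc**2) / n2 - S2**2
    t = (S1 - S2) / np.sqrt(th1 / n1 + th2 / n2)
    return np.max(np.abs(t)), t
```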
in this section, we propose a new test that rejects ( [ covariance.test ] ) when , where is obtained using a fast - computing data perturbation procedure .the new procedure resolves two issues at once .first , it achieves better finite sample performance by avoiding the slow convergence of to the type i extreme value distribution .second and more importantly , our procedure relaxes the conditions on the covariance matrices required in ( particularly , their conditions ( c1 ) and ( c3 ) ) .note that their condition ( c1 ) essentially requires that the number of variables that have non - degenerate correlations with others should grow no faster than the rate of .although this condition is reasonable in some applications , it is hard to be justified for data from the microarray or transcriptome experiments , where the genes can be divided into gene sets with varying sizes according to functionalities , and usually genes from the same set have relatively high ( sometimes very high ) intergene correlations compared to those from different sets .this corresponds to an approximate block structure .many sets can contain several thousand genes , a polynomial order of .this kind of block structure with growing block size may violate condition ( c1 ) in .the crux of the derivation of the asymptotic type i extreme value distribution in is that the s are weakly dependent under under certain regularity conditions .in contrast , the new procedure we present below automatically takes into account correlations among the s . specifically , we propose the following procedure to compute with the dependence among s incorporated .independent of and , we generate a sequence of independent random variables , where is the total sample size .( ii ) . using the s as multipliers , we calculate the perturbed version of the test statistic t^_ = _ 1kp |_k^ | , [ eq2.5 ] where with and ( iii ) . the critical value is defined as the upper -quantile of conditional on ; that is , where denotes the probability measure induced by the gaussian random variables with and being fixed .this algorithm combines the ideas of multiplier bootstrap and parametric bootstrap .the principle of parametric bootstrap allows s constructed in step ( ii ) to retain the covariance structure of s .the validity of multiplier bootstrap is guaranteed by the multiplier central limit theorem , see for traditional fixed- and low - dimensional settings and for more recent development in high dimensions . for implementation, it is natural to compute the critical value via monte carlo simulation by , where and are independent realizations of in ( [ eq2.5 ] ) by repeating steps ( i ) and ( ii ) . for any prespecified , the null hypothesis ( [ covariance.test ] )is rejected whenever .the main computational cost of our procedure for computing the critical value only involves generating independent and identically distributed variables .it took only 0.0115 seconds to generate one million such realizations based on a computer equipped with intel(r ) core(mt ) i7 - 4770 cpu 3.40ghz .hence even taking to be in the order of thousands , our procedure can be easily accomplished efficiently when is large .the difference between and its monte carlo counterpart is usually negligible for a large value of . in this section ,we study the asymptotic properties of the proposed test under both the null hypothesis and a sequence of local alternatives . 
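A sketch of the perturbation-based critical value of steps (i)-(iii) is given below. Because the exact form of the perturbed statistic is specified in step (ii) of the text, the code uses a standard multiplier-bootstrap form in which each centred cross-product is multiplied by an independent standard normal; this simplified form is an assumption made purely for illustration.

```python
import numpy as np

def bootstrap_critical_value(X, Y, B=1000, alpha=0.05, seed=1):
    """Multiplier-bootstrap critical value for the max-type statistic:
    N(0,1) multipliers perturb the centred cross-products of each sample
    and the perturbed max-statistic is recomputed B times.  Storing the
    n x p x p cross-product arrays is fine for moderate p; very large p
    would need blocking."""
    rng = np.random.default_rng(seed)
    n1, _ = X.shape
    n2, _ = Y.shape
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    cross1 = Xc[:, :, None] * Xc[:, None, :]
    cross2 = Yc[:, :, None] * Yc[:, None, :]
    S1, S2 = cross1.mean(0), cross2.mean(0)
    th1 = (cross1**2).mean(0) - S1**2
    th2 = (cross2**2).mean(0) - S2**2
    scale = np.sqrt(th1 / n1 + th2 / n2)
    stats = np.empty(B)
    for b in range(B):
        g1, g2 = rng.standard_normal(n1), rng.standard_normal(n2)
        pert = (np.tensordot(g1, cross1 - S1, axes=1) / n1
                - np.tensordot(g2, cross2 - S2, axes=1) / n2)
        stats[b] = np.max(np.abs(pert / scale))
    return np.quantile(stats, 1 - alpha)
```

The null hypothesis would then be rejected whenever the observed max-statistic exceeds the returned quantile.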
for the asymptotic properties, we only require the following relaxed regularity conditions .let be a finite constant independent of and .* , uniformly in , for some .* and for some . * and for some . * and are comparable , i.e. is uniformly bounded away from zero and infinity . assumptions ( c1 ) and ( c2 ) specify the polynomial - type and exponential - type tails conditions on the underlying distributions of and , respectively .assumption ( c3 ) ensures that the random variables and are non - degenerate .the moment assumptions , ( c1)(c3 ) , for the proposed procedure are similar to conditions ( c2 ) and ( c2 ) in .assumption ( c4 ) is a standard condition in two - sample hypothesis testing problems .as discussed before , no structural assumptions on the unknown covariances are imposed for the proposed procedure .theorem [ asymptotic.size ] below shows that , under these mild moment and regularity conditions , the proposed test with defined in section [ sec22a ] has an asymptotically .[ asymptotic.size ] suppose that assumptions ( c3 ) and ( c4 ) hold .if either assumption ( c1 ) holds with for some constant or assumption ( c2 ) holds with , then as , uniformly over .the asymptotic validity of the proposed test is obtained without imposing structural assumptions on and , nor do we specify any a priori parametric shape constraints of the data distributions , such as condition a3 in or conditions ( c1 ) and ( c3 ) in .next , we investigate the asymptotic power of .it is known that the -type test statistics are preferred to the -type statistics , including those proposed by and , when sparse alternatives are under consideration . as discussed in section [ intro ] , the scenario in which the difference between and occurs only at a small number of locations is of great interest in a variety of scientific studies .therefore , we focus on the local sparse alternatives characterized by the following class of matrices theorem [ power.consistency ] below shows that , with probability tending to 1 , the proposed test is able to distinguish from the alternative whenever for some .[ power.consistency ] suppose that assumptions ( c3 ) and ( c4 ) hold .if either assumption 1 holds with for some constant or assumption 2 holds with , then as , for any .theorem 2 of requires to guarantee the consistency of their procedure .moreover , they showed that the rate for the lower bound of the maximum magnitude of the entries of is minimax optimal , that is , for any satisfying , there exists a constant such that for all sufficiently large and , where is the set of -level tests over the collection of distributions satisfying assumptions ( c1 ) and ( c2 ) . hence , our proposed test also enjoys the optimal rate and is powerful against sparse alternatives .in this section , we compare the finite - sample performance of the proposed new test with that of several alternative testing procedures , including ( sc hereafter ) , ( lc hereafter ) and ( clx hereafter ) .we generated two independent random samples and such that and with and , where and are two sets of independent and identically distributed ( i.i.d . 
) random variables with variances and , such that and .we assess the performance of the aforementioned tests under the null hypothesis .let and consider the following four different covariance structures for .* m1 ( block diagonals ) : set , where is a diagonal matrix whose diagonals are i.i.d .random variables drawn from .let , where , for for , and otherwise .* m2 ( slow exponential decay ) : set , where .* m3 ( long range dependence ) : let with i.i.d . , and , where with .* m4 ( non - sparsity ) : define matrices with , , the uniform distribution on the stiefel manifold ( i.e. and , the -dimensional identity matrix ) , and diagonal matrix with diagonal entries being i.i.d . random variables .we took and . in practice ,non - gaussian measurements are particularly common for high throughput data , such as data with heavy tails in microarray experiments and data of count type with zero - inflation in image processing . to mimic these practical scenarios , we considered the following three models of innovations and to generate data . * ( d1 ) let and be gamma random variables : . *( d2 ) let and be zero - inflated poisson random variables : with probability and equals to zero with probability . *( d3 ) let and be student s random variables : and with non - central parameter drawn from . for the numerical experiments , was taken to be and , and the dimension took value in . to compute the critical value for the proposed test , taken to be .[ size12 ] [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] [ goa1 ] in addition , we compared the study on changing intergene relationships across biological states with the traditional differential analysis based on mean expression levels .the proposed test on intergene relationships discovered 268 significant go terms that were missed by the traditional differential analysis .this reflects the lately growing demands on analyzing gene dependence structures .more details on this comparison are retained in the supplement . revealed a novel pathway involving epithelial inos , dual oxidases , tpo and the cytokine inf- to understand the mechanism of human asthma .multiple transcripts , together with their variants , are related , while their co - regulation mechanisms are less clear .the proposed gene clustering algorithm provides a way to study gene interactions .for illustration , we focus on the go terms that were declared significant via testing and are related to ifn- or tpo , and apply our clustering procedure to the sample from the health and disease groups separately to study how the gene clustering alters across two populations . for ifn- , we consider the go terms 0032689 ( negative regulation of ifn- production ) , 0060333 ( ifn--mediated signaling pathway ) and 0071346 ( cellular response to ifn- ) . 
for tpo ,the go terms have been considered include 0004601 ( peroxidase activity ) , 0042446 ( hormone biosynthetic process ) , 0035162 ( embryonic hemopoiesis ) , 0006979 ( response to oxidative stress ) , and 0009986 ( cell surface ) .their sizes vary from 17 to 439 .we take , and use hierarchical clustering algorithm with average linkage .the is estimated using the censored beta - uniform mixture model by for selecting block size .figures [ go1][go5 ] display comparisons of gene clustering between the health and disease groups ( more comparisons are included in the supplementary material ) .each vertex in the figures represents a gene or its variant and is labelled by the corresponding i d .vertexes connected by edges in gray are clustered into one group , and vertexes in red and yellow belong respectively to the maximum clique in the health and disease groups .vertexes in both colors belong to the maximum cliques for both groups . from figure [ go1 ]we see that for go:0071346 , regarding the cellular response to inf- , genes tend to function more in clusters in the asthma group than those in the health group .gene tlr3 actively appears in the largest gene clusters for both the health and asthma groups , while gene il18 is isolated in the asthma group .gene nos2 is involved in asthma by co - regulating with arg2 .these suggest that these four genes are important signatures for understanding the effect of inf- on the asthma progression .regarding the inf--mediated signaling pathway , figure [ go1 ] also shows that compared to the health group , genes seem to preferentially function separately in the asthma group .the original dominating gene clusters are broken into small groups in the presence of the disease . the different configurations in primary gene clusters between the health and asthma groups for go:0060333 provide further information on how inf- influences the inos pathway .for the critical enzyme tpo , figure [ go5 ] shows that genes also tend to function in clusters in the disease group . in the presence of asthma , the gene cluster hbb - hba2.1-hba2is preserved and the gene ipcef1 is isolated from the original largest gene cluster for go:0004601 .it is interesting to notice that the duox2 genes are isolated in the health group but do interact with many genes , particularly with tpo , in the presence of asthma as documented in .the identified duox2 gene cluster provides a candidate pathway to understand how tpo catalyzes the inos - duox2-thyroid peroxidase pathway discovered by .last but not least , it can be seen from figure [ go5 ] that the overall co - regulation patterns remain similar across populations , while those of tpo alters in the presence of asthma . 
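A minimal version of the per-group hierarchical clustering step used above might look like the following; the correlation-based dissimilarity and the cut-off threshold are illustrative choices, and the censored beta-uniform-mixture selection of the block size described in the text is not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_genes(expr, threshold=0.5):
    """Average-linkage hierarchical clustering of genes within one group,
    using one minus the absolute sample correlation as the dissimilarity.
    `expr` has samples in rows and genes in columns."""
    corr = np.corrcoef(expr, rowvar=False)
    dist = 1.0 - np.abs(corr)
    iu = np.triu_indices_from(dist, k=1)        # condensed distances for linkage
    Z = linkage(dist[iu], method="average")
    return fcluster(Z, t=threshold, criterion="distance")

# labels_health = cluster_genes(expr_health)    # apply separately to each group
# labels_asthma = cluster_genes(expr_asthma)    # and compare the two partitions
```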
in summary , based on the proposed procedure , not onlycan we test the difference in gene dependence , we can also discover the disparity in gene clustering , which reflects the difference in gene clustering patterns between the health and disease groups .in this paper , we proposed a computationally fast and effective procedure for testing the equality of two large covariance matrices .the proposed procedure is powerful against sparse alternatives corresponding to the situation where the two covariance matrices differ only in a small fraction of entries .compared to existing tests , the proposed procedure requires no structural assumptions on the unknown covariance matrices and remains valid under mild conditions .these appealing features grant the proposed test a vast applicability , particularly for real problems arising in genomics . as an important application, we introduced a gene clustering algorithm that enjoys the same nice feature of avoiding imposing structural assumptions on the unknown covariance matrices .another interesting and related problem is testing the equality of two precision matrices , which was recently studied by . in the literature of graphical models , it is common to impose the gaussian assumption on data so that the conditional dependency can be inferred based on the precision matrix . when the discrepancy between two precision matrices is believed to be sparse , the data - dependent procedure considered in this paper can be extended to comparing them by utilizing the similar -type statistic discussed in .it is interesting to investigate whether our method can be applied to testing precision matrices in the presence of heavy - tailed data , which is often modeled by the elliptical distribution family .we leave this to future work .web appendices , which include proofs of the main theorems and additional numerical results referenced in sections [ method.sec ] , [ simulation.sec ] and [ real ] are available with this paper on the biometrics website on wiley online library .the authors thank the ae and two anonymous referees for constructive comments and suggestions which have improved the presentation of the paper .jinyuan chang was supported in part by the fundamental research funds for the central universities ( grant no .jbk160159 , jbk150501 , jbk140507 , jbk120509 ) , nsfc ( grant no .11501462 ) , the center of statistical research at swufe and the australian research council .wen zhou was supported in part by nsf grant iis-1545994 .lan wang was supported in part by nsf grant nsf dms-1512267 .katsani , k. r. , irimia , m. , karapiperis , c. , scouras , z. g. , blencowe , b. j. , promponas , v. j. , and ouzounis , c. a. ( 2014 ) .functional genomics evidence unearths new moonlighting roles of outer ring coat nucleoporins ._ scientific reports _ * 4 * , 4655 .the asymptotic distribution and berry - esseen bound of a new test for independence in high dimension with an application to stochastic optimization . _the annals of applied probability _ * 18 * , 23372366 .
comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. we propose a computationally fast procedure for testing the equality of two large covariance matrices when their dimensions are much larger than the sample sizes. a distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices; the test is therefore robust with respect to the various complex dependence structures that frequently arise in genomics. we prove that the proposed procedure is asymptotically valid under weak moment conditions. as an interesting application, we derive a new gene clustering algorithm that shares the same appealing property of avoiding restrictive structural assumptions for high-dimensional genomic data. using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights into the way gene clustering patterns differ between the two groups. the proposed methods have been implemented in the `r` package `hdtest`, which is available on cran. keywords: differential expression analysis; gene clustering; high dimension; hypothesis testing; parametric bootstrap; sparsity.
complex networks and complex systems describe the physical , biological , and social structures that connect our world and host the dynamical processes vital to our lives .the failure of such large - scale systems to operate in the desired way can thus lead to catastrophic events such as power outages , extinctions , and economic collapses .thus , the development and design of efficient and effective control mechanisms for such systems is not only a question of theoretical interest to mathematicians , but has a wide range of important applications in physics , chemistry , biology , engineering , and the social sciences .the roots of modern linear and nonlinear control reach back several decades , but recently research in this direction has seen a revival in physics and engineering communities .for instance , the concept of _ structural controllability _ , which is based on the paradigm of linear homogenous dynamical systems , was first introduced by lin in and more recently investigated in .these advances have enabled further progress related to structural controllability such as centrality , energy , effect of correlations , emergence of bimodality , transtion and nonlocality , the specific role of individual nodes , target control , and control of edges in switchboard dynamics .significant advances have also been made in the control of nonlinear systems , for instance the control of chaotic systems using unstable periodic orbits , control via pinning , control and rescue of networks using compensatory perturbations , and control via structural adaptation .implicit in all such network control problems are the questions of ( i ) what form(s ) of control should one choose ? and( ii ) how much effort is needed to attain a desired state ? motivated by ongoing studies on the stability and function of power grids , we study here the control of heterogeneous coupled oscillator networks .recent research into smart grid technologies has shown that certain power grid networks called _ microgrids _ evolve and can be treated as networks of kuramoto phase oscillators .a microgrid consists of a a relatively small number of localized sources and loads that , while typically operating in connection to a larger central power grid , can disconnect itself and operate autonomously as may be necessitated by physical or economical constraints .in particular , by means of a method known as _ frequency - drooping _ , the dynamics of microgrids become equivalent to kuramoto oscillator networks - a class of system for which a large body of literature detailing various dynamical phenomena exists . 
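as a concrete point of reference for what follows, the sketch below integrates the kuramoto dynamics just mentioned, in the form typically used for such networks, dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i), with a simple euler scheme. the network, frequency distribution, coupling strength, and step size are illustrative assumptions, not the exact setups studied in the paper.

import numpy as np

def kuramoto_step(theta, omega, K, A, dt=0.01):
    """one euler step of dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)."""
    phase_diff = theta[None, :] - theta[:, None]          # entry (i, j) holds theta_j - theta_i
    coupling = K * np.sum(A * np.sin(phase_diff), axis=1)
    return theta + dt * (omega + coupling)

rng = np.random.default_rng(0)
N = 50
A = (rng.random((N, N)) < 0.1).astype(float)              # erdos-renyi-like topology
A = np.triu(A, 1); A = A + A.T                            # symmetric, no self-loops
omega = rng.normal(0.0, 1.0, N)                           # heterogeneous natural frequencies
theta = rng.uniform(-np.pi, np.pi, N)
for _ in range(10000):
    theta = kuramoto_step(theta, omega, K=0.5, A=A)
r = np.abs(np.mean(np.exp(1j * theta)))                   # order parameter, r = 1 at consensus

for weak coupling and heterogeneous frequencies the order parameter r typically stays well below one, which is exactly the regime in which added control becomes necessary.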
herewe develop a control mechanism for such coupled oscillator networks , thus providing a solution with potentially direct application to the control of certain power grids .our goal is to induce a synchronized state , a.k.a ._ consensus _ , in a given coupled oscillator network and guarantee asymptotic stability by applying as few control gains to the network as possible .our method is based on calculating the jacobian of the desired synchronized state and studying its spectrum , by which we identify the oscillators in the network that contribute to unstable eigenvalues and thus destabilize the synchronized state .importantly , our method not only identifies which oscillators require control , but also the required strength of each control gain .interestingly , we find that the control required to stabilize a network is dictated by the coupling strength , dynamical heterogeneity , and mean degree of the network , and depends little on the structural heterogeneity of the network . in other words , the number of nodes requiring control depends surprisingly little on the network topology and degree distribution and is more sensitive to the average connectivity of the network and the dynamical parameters .while kuramoto oscillator networks serve as our primary system of interest due both to its specific correlation with mircogrids as well as its rich body of literature , we note that our method can be applied to a much wider set of oscillator networks , provided that their linearized dynamics take a certain form .moreover , since kuramoto and other oscillator network models have served as a paradigmatic example for modeling and studying synchronization in various contexts , we hypothesize that our results may shed light more generally on the control of synchronization processes and could potentially give insight into other important applications such as the termination of cardiacarrhythmias and treatments for pathological brain dynamics .we consider the famous kuramoto model for the entrainment of many coupled dissipative oscillators .the kuramoto model consists of phase oscillators for that , when placed on a network dictating their pair - wise interactions , evolve according to each oscillator has a unique nature frequency that describes its preferred angular velocity in the absence of interactions , which is typically drawn randomly from a distribution . furthermore , the global coupling strength describes the influence that oscillators have on one another via the network connectivity , which is encoded in the adjacency matrix ] , and is stable if all the eigenvalues of are non - positive . in our case we have that we note that each row ( and column ) of sums to zero , i.e. , satisfies .this is a particularly convenient property for using the gershgorin circle theorem , which implies that eigenvalues of lie within the union of closed discs for , which are each centered at and have radius , where .( the full theorem is given in materials and methods . 
) in particular , if all the off - diagonal entries of are non - negative , then it follows that each gershgorin disc is contained in the left - half plane , implying that all eigenvalues are non - positive and the solution is stable .an illustration of this case is presented in figure [ fig1](b ) .if , however , one or more non - diagonal entries of are negative , then each gershgorin disc corresponding to a row with a negative off - diagonal entry enters the right - half plane , admitting the possibility for one or more positive eigenvalues and thus destabilization .thus , the oscillators that require control can be easily identified as those whose corresponding rows have one or more negative off - diagonal entries .we aim to stabilize the synchronized solution by adding one or more control gains to the system , as illustrated in figure [ fig1](c ) . in following recent literature, we will refer to oscillators to which we apply control as _ driver nodes _ , and to oscillators to which we do not apply control as _free nodes_. we choose the control gains to take the form , where is the strength of the control gain and is a target phase that can in principle depend on either local or global information , and vary in time . herewe focus on the choice of target phase , and discuss other possibilities below . since the control gain depends on the current state of the system , this can be though of as a form of feedback control .the new dynamics are then given by where we take for free nodes .while the off - diagonal entries of remain unaltered , the new diagonal entries are given by .thus , we set coupling gain strength of each driver node such that it satisfies ] tend to be driver ( free ) nodes .furthermore , we find that these values scale approximately linearly with the ratio of the natural frequencies to degrees , i.e. , {\mathrel{\vcenter { \offinterlineskip\halign{\hfil\cr \propto\cr\noalign{\kern2pt}\sim\cr\noalign{\kern-2pt}}}}}\omega_i / k_i ] vs for the example er and sf networks presented above [ panels ( a ) and ( b ) , respectively ] , denoting driver nodes with red crosses and free nodes with blue circles .these results show the important role that dynamics , in addition to network structure , plays in dictating controlling the system . in particular , driver nodes of the system tend to balance a large ratio ( in absolute value ) of natural frequencies to degrees . finally , we quantify the overall effort required for consensus by studying how the fraction of driver nodes , denoted , where is the total number of driver nodes , depends on both the system s dynamical and structural parameters . presenting our results in figure [ fig4 ] ,we first explore how the fraction of driver nodes depends on the coupling strength by plotting in panel ( a ) vs for both er and sf networks with mean degrees , , and ( blue circles , red triangles , and green squares , respectively ) .results for er and sf networks are plotted with unfilled and filled symbols , respectively , and each curve represents an average over network realizations , each averaged over random natural frequency realizations . while it is expected that decreases monotonically with , the curves dependence on network topology and mean degree is nontrivial . in particular , the the shape of vs depends more sensitively on the mean degree than the topology , suggesting that network heterogeneity has little effect on overall control in comparison to average connectivity . 
in light of the significant dependence of overall control on the coupling strength , we investigate the coupling strength required to synchronize a network if limited mount of control is available . to this endwe calculate for each family of networks the required coupling strengths , , and for which , on average , a fraction , , and will achieve synchronization as a function of the average degree .we plot the results in figure [ fig4 ] ( b ) .we point out again that er and sf networks behave very similarly on average , and that with a larger mean degree , a smaller coupling strength is required to achieve synchronization .theoretical and practical aspects of the control of dynamical processes remains an important and ongoing area of interdisciplinary research at the intersection between mathematics , physics , biology , chemistry , engineering , and the social sciences .control of complex networks and complex systems is particularly important since together they comprise most of the world we live in , however the nonlinear nature of realistic dynamical processes and the complex network topologies of real networks represent challenges for the scientific community .building on concepts from classical linear control theory , recent work has made significant advances in understanding structural controllability , and significant progress has been made in the development of control mechanisms for networks of nonlinear systems .nonetheless , due to the problem - sensitive nature of most real - world problems and applications requiring control techniques , further progress in designing and implementing efficient and effective control mechanisms for a wide range of problems with practical constraints remains an important avenue of research . in this articlewe have focused on the control of synchronization , i.e. , consensus , in coupled oscillator networks .our primary inspiration has been advances in the research of power grid networks . in particular , recent studies have shown that certain power grids known as microgrids can be treated as kuramoto oscillator networks . herewe have presented a control method that can easily be applied to kuramoto networks and other phase oscillator networks , thus providing a control framework with potentially direct application to these new technologies .our method is based on identifying and stabilizing a synchronized state for a given network via spectral properties of the jacobian matrix and we have demonstrated its effectiveness on both erds - rnyi and scale - free networks . we have observed that driver nodes , i.e. 
, oscillators that require control , tend to balance ( in absolute value ) large natural frequencies with small degrees .furthermore , the overall amount of control required to achieve synchronization decreases with both coupling strength and mean degree , while the total effort required to attain a synchronized state depends sensitively on the average connectivity of the network and the dynamical parameters , but surprisingly little on the network topology and degree distribution .these results enhance our understanding of and ability to understand , optimize , and ultimately control synchronization in power - grid networks ( see in particular ) , and more generally complement important work on the control of network - coupled nonlinear dynamical systems .while our central inspiration and target application is in the area of power grid technology , synchronization phenomena plays a vital roll in a variety of complex processes that occur in both natural and man - made systems , including healthy cardiac behavior , functionality of cell circuits , stability of pedestrian bridges , and communications security . given this broad range of applications , we hypothesize that our findings here may potentially shed some light on the control of synchronization in other contexts , for instance cardiac physiology and neuroscience . for instance , a large amount of research has recently been devoted to the development of cardiac arrhythmia treatments that require minimal shock to knock out fatal asynchronous behavior such as cardiac fibrillation and the promotion of normal brain oscillations while repressing disorders such as parkinson s disease which are associated with abnormal oscillations .to derive the steady state solution , we begin with equation ( [ eq : linear ] ) , which represents the linearized dynamics of equation ( [ eq : kuramoto ] ) . recall that this linearization requires that we are searching for a synchronized state where all oscillators are tightly packed in a single cluster so we expect that .we also note that the mean frequency of all oscillators is given by the mean natural frequency .for simplicity we enter the rotating frame , effectively setting the mean frequency to zero .it is then convenient to write equation ( [ eq : linear ] ) in vector form , i.e. , where is the network laplacian whose entries are defined . while has a zero eigenvalue , denoted , rendering it non - invertible , it does have a pseudo - inverse defined using its other eigenvalues ( which are non - zero provided that the network is connect ) and corresponding eigenvectors , .each eigenvector is normalized such that forms an orthonormal basis for the space of vectors in with zero mean .thus , both and share a nullspace which is spanned by the eigenvector , and therefore map vectors onto the space of zero - mean vectors in . with the pseudoinverse in hand , we can finally obtain the desired steady - state solution by setting and solving for , which yields the solution , as desired . 
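the steady-state calculation just described reduces to a few lines of linear algebra: build the graph laplacian, apply its moore-penrose pseudoinverse to the zero-mean natural frequencies, and divide by the coupling strength. the sketch below assumes the linearized fixed point takes the form theta* = pinv(L) omega / K, which is what setting the linearized dynamics to zero suggests; the example network and constants are placeholders.

import numpy as np

def target_state(omega, A, K):
    """approximate phase-locked state from the linearization: theta* = pinv(L) @ omega / K."""
    L = np.diag(A.sum(axis=1)) - A                 # graph laplacian
    omega0 = omega - omega.mean()                  # zero-mean frequencies (rotating frame)
    return np.linalg.pinv(L) @ omega0 / K

rng = np.random.default_rng(2)
N = 20
A = (rng.random((N, N)) < 0.2).astype(float); A = np.triu(A, 1); A = A + A.T
omega = rng.normal(size=N)
theta_star = target_state(omega, A, K=0.5)

the resulting theta_star can then be passed to the jacobian and gain computation sketched earlier.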
herewe present an example of a more general oscillator network than that in equation ( [ eq : kuramoto ] ) that can be controlled using the same method detailed above .in particular , we generalize to account for an arbitrary coupling function , yielding we assume that is -periodic and at least once continuously differentiable .importantly , need not satisfy , and thus coupling between neighboring oscillators can be _ frustrated _ , denoting that even when two oscillators are exactly equal , their interaction term does not vanish .provided that the coupling frustration is not too large , e.g. , , a tightly clustered synchronized state is attainable , and linearizing equation ( [ eq : general01 ] ) yields importantly , by defining the quantities and , it is easy to see that the linearized dynamics of equation ( [ eq : general02 ] ) are of the same form as equation ( [ eq : linear ] ) , and therefore the control method we present above can be readily applied .( gershgorin discs ) let be an complex matrix . for be the sum of absolute values of non - diagonal elements of row , and define closed disc of radius centered at . is the gershgorin disc .( gershgorin ) all eigenvalues of the matrix lie within the union of gershgorin discs .99 s. h. strogatz , exploring complex networks ._ nature _ * 410 * , 268276 ( 2001 ) .m. e. j. newman , the structure and function of complex networks ._ siam rev . _ * 45 * , 167256 ( 2003 ) .s. boccaletti , v. latora , y. moreno , m. chavez , d.u .hwang , complex networks : structure and dynamics . _ phys ._ * 424 * , 175308 ( 2006 ) .s. v. buldyrev , r. parshani , g. paul , h. e. stanley , s. havlin , catastrophic cascade failures in interdependent networks ._ nature _ * 464 * , 10251028 ( 2010 ) .a. e. motter , s. a. myers , m. anghel , t. nishikawa , spontaneous synchrony in power grid networks ._ nature phys ._ * 9 * , 191197 ( 2013 ) .m. l. pace , j. j. cole , s. r. carpenter , j. f. kitchell , trophic cascades revealed in diverse ecosystems ._ trens ecol .* 14 * , 483488 ( 1999 ) .m. scheffer , s. carpenter , j. a. foley , c. folke , b. walker , catastrophic shifts in ecosystems ._ nature _ * 413 * , 591596 ( 2001 ) .r. m. may , s. a. levin , g. sugihara , complex systems : ecology for bankers ._ nature _ * 451 * , 893895 ( 2008 ) .a. g. haldane , r. m. may , systemic risk in banking ecosystems ._ nature _ * 469 * , 351355 ( 2011 ) .d. d. iljak , _ decentralized control of complex systems _ ( academic press , boston , 1991 ) .slotine , w. li , _ applied nonlinear control _( prentice - hall,1991 ) .lin , structural controllability ._ ieee trans .control _ * 19 * , 201208 ( 1974 ) .liu , j.j .slotine , a.l .barabsi , controllability of complex networks ._ nature _ * 473 * , 167176 ( 2011 ) .z. yuan , c. zhao , z. di , w.x .wang , y.c .lai , exact controllability of complex networks ._ * 4 * , 2447 ( 2013 ) .liu , j.j .slotine , a.l .barabsi , control centrality and hierarchical structure in complex networks .plos one _ * 7 * , e444459 ( 2012 ) . g. yan , y.c .lai , c.h .lai , b. li , controlling complex networks : how much energy is needed ? _ phys .lett . _ * 108 * , 218703 ( 2012 ) .m. psfai , y. y. liu , j.j .slotine , a.l .barabsi , effect of correlations on network controllability .rep . * 3 * , 1067 ( 2013 ) .t. jia , y.y .liu , e. cska , m. psfai , j.j .slotine , a.l .barabsi , emergence of bimodality in controlling complex networks ._ * 4 * , 2002 ( 2013 ) .j. sun , a. e. motter , controllability transition and nonlocality in network control .lett . 
_ * 110 * , 208701 ( 2013 ) .g. menichetti , l. dallasta , g. bianconi , network controllability is determined by the density of low in - degree and out - degree nodes .lett . _ * 113 * , 078701 ( 2014 ) .j. gao , y.y .liu , r. m. dsouza , a.l .barabsi , target control of complex networks ._ * 5 * , 5415 ( 2014 ) .t. nepusz , t. vicsek , controlling edge dynamics in complex networks ._ nature phys . _* 8 * , 568573 ( 2012 ) .e. ott , c. grebogi , j. a. yorke , controlling chaos .lett . _ * 64 * , 11961199 ( 1990 ) .r. o. grigoriev , m. c. cross , h. g. schuster , pinning control of spatiotemporal chaos . _ phys .lett . _ * 79 * , 27952798 ( 1997 ) .x. f. wang , g. chen , pinning control of scale - free dynamical networks ._ phys . a _ * 310 * , 521531 ( 2002 ) .x. li , x. wang , g. chen , pinning a complex dynamical network to its equilibrium ._ ieee trans .circuits syst ., i : fundam .theory appl . _* 51 * , 20742087 ( 2004 ) .s. sahasrabudhe , a. e. motter , rescuing ecosystems from extinction cascades through compensatory perturbations .* 2 * , 170 ( 2011 ) .s. p. cornelius , w. l. kath , a. e. motter , realistic control of network dynamics .commun . _ * 4 * , 1942 ( 2013 ) .p. delellis , m. di bernardo , t. e. gorochowski , g. russo , synchronization and control of complex networks via contraction , adaption and evolution ._ circuits syst . mag ._ * 10 * , 6482 ( 2010 ) .f. pasqualetti , s. zampieri , f. bullo , controllability metrics , limitations and algorithms for complex networks ._ ieee trans .control of netw ._ * 1 * , 4052 ( 2014 ). m. rohden , a. sorge , m. timme , d. witthaut , self - organized synchronization in decentralized power grids .lett . _ * 109 * , 064101 ( 2012 ) .f. drfler , m. chertkov , f. bullo , synchronization in complex oscillator networks and smart grids ._ * 110 * , 20052010 ( 2013 ) .f. drfler , m. r. jovanovi , m. chertkov , f. bullo , sparsity - promoting optimal wide - area control of power networks ._ ieee trans .power syst . _* 29 * , 22812291 ( 2014 ) .m. fardad , f. lin , m. r. jovanovi , design of optimal sparse interconnection graphs for synchronization of oscillator network ._ ieee trans .* 59 * , 24572462 ( 2014 ). j. w. simpson - porco , f. drfler , f. bullo , synchronization and power sharing for droop - controlled inverters in islanded microgrids ._ automatica _ , * 49 * , 26032611 ( 2013 ) .a. arenas , a. daz - guilera , j. kurths , y. moreno , c. zhou , synchronization in complex networks .rep . _ * 469 * , 93153 ( 2008 ) .a. karma , physics of cardiac arhythmogenesis .matter phys . _* 4 * , 313337 ( 2013 ) .a. schnitzler , j. gross , normal and pathological oscillatory communication in the brain .neurosci . _ * 6 * , 285296 ( 2005 ). y. kuramoto , _ chemical oscillations , waves , and turbulence _( springer , new york , 1984 ) .f. drfler , f. bullo , synchronization in complex networks of phase oscillators : a survey ._ automatica _ * 50 * , 15391564 ( 2014 ) .s. h. strogatz , _ sync : the emerging science of spontaneous order _ ( hypernion , 2003 ) .e. ott , t. m. antonsen , low dimensional behavior of large systems of globally coupled oscillators ._ chaos _ * 18 * , 037113 ( 2008 ) .w. s. lee , e. ott , t. m. antonsen , large coupled oscillator systems with heterogeneous interaction delays .lett . _ * 103 * , 044101 ( 2009 ) .p. s. skardal , j. g. restrepo , hierarchical synchrony of phase oscillators in modular networks .e _ * 85 * , 016208 ( 2012 ) .p. s. skardal , d. taylor , j. 
sun , optimal synchronization of complex networks ._ * 113 * , 144101 ( 2014 ) .ben - israel , t. n. e. grenville , _ generalized inverses _ ( springer , new york , 1974 ). p. s. skardal , d. taylor , j. sun , a. arenas , erosion of synchronization in networks of coupled oscillators ._ phys . rev .* 91 * , 010802(r ) ( 2015 ) .g. h. golub , c. f. van loan , _ matrix computations _( johns hopkins university press , baltimore , 1996 ) .p. erds , a. rnyi , on the evolution of random graphs .inst . hung ._ * 5 * , 1761 ( 1960 ) .m. molloy , b. reed , critical point for random graphs with a given degree sequence ._ random struct .algor . _ * 6 * , 161180 ( 1995 ) .s. p. meyn , _ control techniques for complex networks _ ( cambridge univ . press , 2008 ) .c. w. gellings , k. e. yeagee , transforming the electric infrastructure ._ phys . today _ * 57 * , 4551 ( 2004 ) .f. drfler , f. bullo , synchronization and transient stability in power networks and nonuniform kuramoto oscillators ._ siam j. control optim .* 50 * , 16161642 ( 2012 ) .karma , a. & gilmour , r. f. nonlinear dynamics of heart rhythm disorders ._ phys . today _ * 60 * , 5157 ( 2007 ) .a. prindle , p. samayoa , i. razinkov , t. danino , l. s. tsimring , j. hasty , a sensing array of radically coupled genertic ` biopixels ' ._ nature _ * 481 * , 3944 ( 2011 ) .s. h. strogatz , d. m. abrams , a. mcrobie , b. eckhardt , e. ott , theoretical mechanics : crowd synchrony on the millenium bridge ._ nature _ * 438 * , 4344 ( 2005 ) .k. m. cuomo , a. v. oppenheim , circuit implementation of synchronization chaos with applications to communications .lett . _ * 71 * , 6568 ( 1993 ) . s. luther et al ., low - energy control of electrical turbulence in the heart ._ nature _ * 475 * , 235239 ( 2011 ) .j. gross , f. schmitz , i. schnitzler , k. kessler , k. shapiro , b. hommel , a. schnitzler , modulation of long - range neural synchrony reflects temporal limitations of visual attention in humans . _ proc ._ * 101 * , 1305013055 ( 2004 ) . c. hammond , h. bergman , p.brown , pathological synchronization in parkinson s disease : networks , models and treatments ._ trends neurosci ._ * 30 * , 357364 ( 2007 ) .this work was supported by the james s. mcdonnell foundation ( pss and aa ) , spanish dgicyt grant no .fis2012 - 38266 ( aa ) , and fwet project no .multiplex ( 317532 ) ( aa ) .
the control of complex systems and network-coupled dynamical systems is a topic of vital theoretical importance in mathematics and physics, with a wide range of applications in engineering and various other sciences. motivated by recent research into smart grid technologies, we study here the control of synchronization and consider the important case of networks of coupled phase oscillators with nonlinear interactions, a paradigmatic example that has guided our understanding of self-organization for decades. we develop a method for control based on identifying and stabilizing problematic oscillators, resulting in a stable spectrum of eigenvalues and, in turn, a linearly stable synchronized state. interestingly, the amount of control, i.e., the number of oscillators required to stabilize the network, is primarily dictated by the coupling strength, dynamical heterogeneity, and mean degree of the network, and depends little on the structural heterogeneity of the network itself.
continuous time stochastic processes based on drift and diffusion between two absorbing boundaries have been used in a wide variety of applications including statistical physics , finance , and health science . in this article , we focus on applications to decision making tasks , where such models have successfully accounted for behavior and neural activity in a wide array of two alternative forced choice tasks , including phenomena such as the speed - accuracy tradeoff and the dynamics of neural activity during decision making in such tasks . in particular , we will discuss extensions of a specific class of diffusion model referred to as the pure drift diffusion model ( ddm ; eq . below ) , which can be shown to be statistically optimal .varieties of diffusion models have been applied within and outside of psychology and neuroscience to elucidate mechanisms for perception , decision - making , memory , attention , and cognitive control ( see reviews in ) . in the pure ddm ( and related models ) , the state variable is thought to represent the amount of accumulated noisy evidence at time for decisions represented by the two absorbing boundaries , which we refer to as the upper ( + ) and lower ( - ) boundaries .the evidence evolves in time according to a biased random walk with gaussian increments , which may be written as , and a decision is made at time , the smallest time for which hits either the top boundary ( ) or bottom boundary ( ) . hittingeither of the two boundaries corresponds to making one of two possible decisions at time .the resulting decision dynamics are thus described by the first passage times of the underlying drift diffusion process . in studying these processesone is often interested in relating performance metrics such as the mean decision time and error rate ( i.e. the probability of hitting the boundary opposite to the direction of drift ) to empirical data .for example , one may be interested in studying how actions and cognitive processes might seek to maximize reward rate , which is a simple function of error rate and mean decision time .however , not all decisions can be well described by a pure ddm with time - invariant decision parameters .many decisions may require time - varying drift rates , diffusion rates , and/or thresholds in order to model `` bottom - up '' signals , e.g. sensory processing , or to model `` top - down '' effects , for example , when there are changes in attentional focus or cognitive control .in this article , we study performance metrics for such extensions of the ddm , in which model parameters are time - varying . in doing so, we build on recent work that is focused on similar time - varying random walk models .we offer an alternative approach to deriving performance metrics in this setting , including first passage time distributions as well as expected error rates and decision times , and describe how our approach can be applied to the more general class of ornstein - uhlenbeck ( o - u ) processes .an in - depth discussion of how the present article interfaces with other studies of time - varying ddms is given in section [ sec : conclusion ] .in this section we recall the pure ddm and introduce the _ multistage drift diffusion model _ ( msddm ) . the single stage pure ddm models human decision making in two alternative forced choice tasks . 
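before turning to the formal definitions, it may help to have a simulation-level picture of the process. the python sketch below is a plain euler-maruyama rendering of a multistage process in which the drift, diffusion rate, and threshold are looked up for the current stage at every time step; the single-stage pure ddm is recovered with one stage. the stage times, parameter values, time step, and censoring at t_max are illustrative assumptions, and the sketch is a monte carlo check rather than the analytical machinery developed in this paper.

import numpy as np

def simulate_msddm(stage_times, drifts, sigmas, thresholds, x0=0.0,
                   dt=1e-3, t_max=10.0, n_trials=20000, seed=0):
    """monte carlo error rate and mean decision time for a multistage ddm.
    stage_times[i] is the start of stage i; stage_times[0] must be 0."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, x0, dtype=float)
    active = np.ones(n_trials, dtype=bool)           # trials that have not crossed a threshold
    choice = np.zeros(n_trials)                      # +1 upper boundary, -1 lower boundary
    dec_time = np.full(n_trials, np.nan)
    t = 0.0
    while t < t_max and active.any():
        i = np.searchsorted(stage_times, t, side="right") - 1
        up = active & (x >= thresholds[i])
        lo = active & (x <= -thresholds[i])
        choice[up], choice[lo] = 1.0, -1.0
        dec_time[up | lo] = t
        active &= ~(up | lo)
        noise = rng.standard_normal(active.sum())
        x[active] += drifts[i] * dt + sigmas[i] * np.sqrt(dt) * noise
        t += dt
    decided = ~np.isnan(dec_time)                    # trials censored at t_max are dropped
    error_rate = np.mean(choice[decided] < 0.0)      # assumes positive drift toward +threshold
    return error_rate, np.nanmean(dec_time)

# two-stage example: weak early evidence, stronger evidence after t = 0.5
er, mdt = simulate_msddm([0.0, 0.5], [0.1, 0.8], [1.0, 1.0], [1.0, 1.0])

wrapping such a routine in a one-dimensional search over the threshold gives a brute-force way to trace out quantities like the reward rate discussed later, at the cost of simulation noise.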
the ddm models the evolution of the evidence for decision making using the following stochastic differential equation ( sde ) : where parameters and are constants referred to as the drift and diffusion rate , respectively ; is the initial condition ( the starting point of the decision process ) , and are independent wiener increments with variance .the pure ddm models a decision process in which an agent is integrating noisy evidence until sufficient evidence is gathered in favor of one of the two alternatives .a decision is made when the evidence crosses one of the two symmetric decision thresholds for the first time ( also referred to as its _ first passage time _ ) .in other words , the decision time is the first passage time of the drift diffusion process with respect to the set of points within boundaries .we will use the terms _ correct _ and _ incorrect _ to refer to responses that cross the threshold of equivalent or opposite sign of the drift rate for the decision process .for instance , if , a correct decision is one for which threshold is crossed first ; conversely , an incorrect response refers to the case in which threshold was crossed first .accordingly , we refer to the probability of crossing the negative threshold in this case as the _ error rate _ and the time at which the decision variable crosses one of the thresholds as the _ decision time_. the pure " ddm contrasts with ratcliff s ( 1978 ) extended " ddm , in which the drift rates and the initial conditions across trials in an experimental session are assumed to be random variables drawn from stationary distributions. this paper will focus entirely on analyses of the former model , ( as well as its applications to ornstein - uhlenbeck processes ) .note also that the parameterization we use for the pure ddm is different from those in some other cases , e.g. , , although the underlying model is equivalent .the ddm has been extremely successful in explaining behavioral and neural data from two alternative forced choice tasks . however , its basic assumption that model parameters such as drift rate and threshold remain constant throughout the decision process is unlikely to hold in a number of cases .for instance , in several experimental ( and real - world ) contexts the quality of evidence is not stationary ( i.e. , the drift rate and possibly even the diffusion rate are not a constant function of time ) or decision urgency leads thresholds to decay with time . for purposes of modeling situations described above , we introduce an extension of the ddm , the msddm , which is a generalization of the two stage process considered in . in an msddm , the drift rate , the diffusion rate , and the thresholds are piecewise constant functions of time .to implement an msddm , we partition the set of non - negative real numbers ( time axis ) into sets , \ ;i\in { \{1,\dots , n\}} ] and is the indicator function .+ note that corresponds to the average decision time computed by dividing the sum of decision times associated with the correct decision by the instances of such decisions ; while corresponds to the average decision time computed by dividing the sum of decision times associated with the correct decision by the instances of all ( correct and incorrect ) decisions . and are computed analogously .e. the first passage time density conditioned on a particular decision is given by where , i.e. , is the probability of the event and .note that defined in is the sum of and .f. 
the joint density of the evidence and the event is where is the indicator function .+ the joint density can also be used to determine the fpt distribution by integrating it over the range of .more importantly , dividing by yields the conditional density on the evidence conditioned on no decision until time .this is critical for the analysis of the msddm . in particular, we use a slight modification of at the -th stage to determine the density of the initial condition for the -th stage .various derivations for the error rate , decision time , and fpt densities may be found in the decision making literature . in the probability literature, the expressions for the error rate and the expected decision time may be derived using a differential equation approach or a martingale - based approach . under the latter ,the fpt densities are found by first determining the laplace transform by constructing an appropriate martingale . taking the inverse laplacetransforms yields the conditional fpt densities .the negative of the derivative of the laplace transform with respect to the frequency yields the conditional mean decision times .the final statement is derived by repeatedly applying the reflection principle , followed by the cameron - martin formula .formulas for the laplace transform are given in [ app : ddm ] .it is worth noting that the infinite series solutions for the fpt given in are equivalent to the small - time representations for the fpt analyzed in .for completeness we also state the large - time representation that can be obtained by solving the fokker - planck equation : the small - time and the large - time representations mean that the associated series has nice convergence properties ( e.g. , monotonicity ) for small and large values of decision times , respectively . in this sectionwe analyze the -th stage ddm .recall from above that is the random variable conditioned on the event , and that is the fpt conditioned on the event . for the -th ddm ,the initial condition is and only decisions made before the deadline are relevant . for the analysis of such a system , the two key ingredients are the density of and the density of the fpt .conditioned on a realization of , the density of can be computed using .if the density of is known , then the unconditional density of can be obtained by computing the expected value of the conditional density of with respect to .since the density of is known , this procedure can be recursively applied to obtain densities of , for each .formally , the joint density of the evidence and the event is , \end{aligned}\ ] ] where ] , is calculated using and , is calculated using , and is calculated using and .we now use the fpt properties of the -th ddm derived in [ subsec : i - ddm ] to derive fpt properties of the msddm . in particular, we view the msddm as a cascade of modified ddms in which the initial condition is a random variable and only the decisions made before a deadline are considered .given an initial condition of , we sequentially compute all of the distributions of the initial conditions for each using .then , we compute the properties of the fpt associated with the -th stage ddm .finally , we use the total probability formula to aggregate properties of these ddms to compute fpt properties of the msddm .we now describe the fpt properties of the msddm .the derivations of these expressions are contained in [ app : mult - ddm ] .a. for ] and .e. 
for ] .let the drift rate in the -th stage be .the unconditional and conditional fpt distributions for such a -stage ddm obtained using the analytic expressions and using monte - carlo simulations are shown in figure [ fig : gradual - time - varying ] .-stage ddm with gradually increasing drift rate .the drift rate for the -th stage is , diffusion rate at each stage is unity , the threshold , and stage initiation times are chosen uniformly in the interval ] .let the threshold in the -th stage be .the unconditional and conditional fpt distributions for such a -stage ddm obtained using the analytic expressions and using monte - carlo simulations are shown in figure [ fig : collapsing - threshold ] .-stage ddm with collapsing thresholds .the drift rate and diffusion rate at each stage are and , respectively .the stage initiation times are chosen uniformly in the interval ] are computed using the expressions derived in [ sec : performance - metrics ] .we now investigate the reward rate as a function of the threshold for a two - stage ddm .we assume that the threshold is the same for the two stages .the reward rate for the pure ddm as shown in figure [ fig : rr - pure - ddm ] is a unimodal function .in contrast , the reward rate for the two - stage ddm can be a bimodal function , as shown in figure [ fig : rr-2-ddm ] .the bimodality of the reward rate leads to peculiar behavior of the optimal threshold as and are varied , as shown in figure [ fig : opt - thresh - rr ] . in figure[ fig : opt - thresh - rr ] , we fix , , , and and study the effect of the drift rate and the switching time on the optimal threshold that maximizes the reward rate . for a given , the optimal threshold first increases as increased and then jumps down at a critical .the jump is attributed to the fact that one of the peaks in the bimodal function increases with , while the other decreases . at the critical , the global maximum switches from one peak to the other .the reward rate for the multistage ddm is a univariate function of the threshold and the globally optimal threshold can be efficiently determined using algorithms in . however , compared to the algorithms for the maximization of unimodal functions , these algorithms require additional information . in particular , an upper bound on the slope of the reward rate over the domain of interest is needed to implement these algorithms .-stage ddm obtained by maximizing reward rate .the left panel shows the variation of the optimal threshold as a function of and .the other parameters are , , and . the right panel shows the asscoiated contour plot .the regions of the contour plot associated with and correspond to the pure ddm .[ fig : opt - thresh - rr],title="fig:",scaledwidth=49.5% ] -stage ddm obtained by maximizing reward rate .the left panel shows the variation of the optimal threshold as a function of and .the other parameters are , , and .the right panel shows the asscoiated contour plot .the regions of the contour plot associated with and correspond to the pure ddm .[ fig : opt - thresh - rr],title="fig:",scaledwidth=49.5% ][ sec : conclusion ] we have analyzed in detail the fpt properties of the multistage drift diffusion model , which is a wiener diffusion model with piecewise time - varying drift rate , noise parameter , and decision thresholds . studied the two stage version of the msddm with constant thresholds and described a procedure for how to compute the fpt density . 
here, we have extended this result to -stages and time - varying thresholds , which required relaxing the assumption that the initial condition is a point mass .indeed , the initial condition of the -th ddm is not a deterministic quantity but is a random variable .furthermore , rather than requiring integration over fpt density to obtain conditional and unconditional expected decision times and error rates , our martingale - based approach allows for direct computation of these quantities .another major contribution of the paper is to show how various other performance metrics , such as the error rate during each stage , evolve as the underlying dynamics change . using these, one may compute a variety of behavioral performance metrics , without resorting to first computing the fpt densities .we also independently derived the fpt density for the msddm .the calculations in [ sec : performance - metrics ] are relatively straightforward to implement , and code for doing so is available online , along with code that reproduces the figures in this article .it is important to note , however , the other highly optimized software packages for computing fpt statistics for time - varying diffusion models .one such package in this domain is that of , which solves an integral equation ( code for this is available online ) . introduced a matrix based approach , similar to markov chain monte - carlo methods , to efficiently implement and analyze performance metrics for a variety of extensions of the ddm .the matrix approach has been used to analyze multistage processes associated with multiattribute choice . also relevantis the paper of , which develops an efficient numerical algorithm for estimating parameters of the time - varying diffusion model from reaction time data ( i.e. , first passage times ) .more recently , very fast codes for a broad class of diffusion models have been developed by , with implementations on both cpus and gpus for considerable performance increases .compared to these previous efforts , our work is not immediately focused on developing a rapid numerical tool for simulation , but rather introducing martingale theory as a useful approach for understanding and analyzing ddms with multiple stages .thus , the codes released with this report are not intended to compete with the efficiency of the aforementioned codes , which have been highly optimized and tuned for throughput , but instead demonstrate the simplicity and effectiveness of our analysis .this said , our results do suggest promising avenues for future numerical work . particularly relevantis work by who develop efficient numerical schemes for evaluating the relevant infinite sums involved in fpt calculations .similar methods could be applied to results in [ subsec : i - ddm ] and [ subsec : multistage - ddm ] to develop efficient and accurate msddm codes , which could in turn contribute to the growing collection of numerical tools available for practitioners using diffusion models to study decision making .our results may also serve as a starting point for further analysis of more complicated stochastic decision models . as an important step in this direction ,we have also shown how the equations for the msddm can be applied to ornstein - uhlenbeck processes , which can approximate leaky integration over the course of evidence accumulation , e.g. 
, the leaky competing accumulator model [ lca ] .given that the lca itself can in certain cases approximate a reduced form of more complex and biologically plausible models of interactions across neuronal populations ( e.g. , ) , our analyses offer an important step toward better understanding time - varying dynamics within and across neural networks , and how these might explain complex cognitive phenomena .more broadly , we believe our analysis furthers the theory surrounding diffusion models with time - varying drift rates , and that the tools and formulae introduced will contribute to the ongoing effort to develop and understand psychologically and neurally plausible models of decision making .we thank ryan webb for the reference to smith 2000 and associated code .we thank phil holmes and patrick simen for the helpful discussions and comments .this work was funded by the c.v .starr foundation ( a.s . ) .and n.e.l . have been supported in part by onr grants n00014 - 14 - 1 - 0635 , aro grant w911nf-14 - 1 - 0431 and the insley - blair pyne fund .45 natexlab#1#1url # 1`#1`urlprefix blurton , s. p. , kesselmeier , m. , gondan , m. , 2012 .fast and accurate calculations for cumulative first - passage time distributions in wiener diffusion models .journal of mathematical psychology 56 ( 6 ) , 470475 .bogacz , r. , 2007 .optimal decision - making theories : linking neurobiology with behaviour .trends in cognitive sciences 11 ( 3 ) , 118125 . bogacz , r. , brown , e. , moehlis , j. , holmes , p. j. , cohen , j. d. , 2006 .the physics of optimal decision making : a formal analysis of models of performance in two - alternative forced - choice tasks .psychological review 113 ( 4 ) , 700765 . borodin , a. n. , salminen , p. , 2002 .handbook of brownian motion : facts and formulae .springer .brunton , b. w. , botvinick , m. m. , brody , c. d. , 2013 .rats and humans can optimally accumulate evidence for decision - making .science 340 ( 6128 ) , 9598 .cox , d. r. , miller , h. d. , 1965 .the theory of stochastic processes .methuen & co. ltd .diederich , a. , busemeyer , j. r. , 2003 .simple matrix methods for analyzing diffusion models of choice probability , choice response time , and simple response time .journal of mathematical psychology 47 ( 3 ) , 304 322 . diederich , a. , oswald , p. , 2014 .sequential sampling model for multiattribute choice alternatives with random attention time and processing order .frontiers in human neuroscience 8 ( 697 ) , 113 .douady , r. , 1999 .closed form formulas for exotic options and their lifetime distribution .international journal of theoretical and applied finance 2 ( 1 ) , 1742 .drugowitsch , j. , 2014 .c++ diffusion model toolset with python and matlab interfaces .github repository : https://github.com/jdrugo/dm , commit : 5729cd891b6ab37981ffacc02d04016870f0a998 .drugowitsch , j. , moreno - bote , r. , churchland , a. k. , shadlen , m. n. , pouget , a. , 2012 .the cost of accumulating evidence in perceptual decision making .the journal of neuroscience 32 ( 11 ) , 36123628 .durrett , r. , 2010 .probability : theory and examples .cambridge university press .farkas , z. , fulop , t. , 2001 .one - dimensional drift - diffusion between two absorbing boundaries : application to granular segregation .journal of physics a : mathematical and general 34 ( 15 ) , 31913198 .feller , w. , 1968 .an introduction to probability theory and its applications .vol . 1 .john wiley & sons .feng , s. , holmes , p. , rorie , a. , newsome , w. t. 
, 2009 .can monkeys choose optimally when faced with noisy stimuli and unequal rewards .plos computational biology 5 ( 2 ) , e1000284 .frazier , p. , yu , a. j. , 2008 .sequential hypothesis testing under stochastic deadlines . in : platt , j. , koller , d. , singer , y. , roweis , s. ( eds . ) ,advances in neural information processing systems 20 .curran associates , inc . , pp . 465472 .gardiner , c. , 2009 .stochastic methods : a handbook for the natural and social sciences , 4th edition .springer .gold , j. i. , shadlen , m. n. , 2001 .neural computations that underlie decisions about sensory stimuli .trends in cognitive sciences 5 ( 1 ) , 1016 .gold , j. i. , shadlen , m. n. , 2007 .the neural basis of decision making . annual review of neuroscience 30 ( 1 ) , 535574 .gondan , m. , blurton , s. p. , kesselmeier , m. , jun .even faster and even more accurate first - passage time densities and distributions for the wiener diffusion model .journal of mathematical psychology 60 , 2022 .hansen , p. , jaumard , b. , lu , s. h. , 1992 .global optimization of univariate lipschitz functions : i. survey and properties .mathematical programming 55 ( 1 ) , 251272 .horrocks , j. , thompson , m. e. , 2004 . modeling event times with multiple outcomes using the wiener process with drift .lifetime data analysis 10 ( 1 ) , 2949 .hubner , r. , steinhauser , m. , lehle , c. , 2010 .a dual - stage two - phase model of selective attention .psychological review 117 ( 3 ) , 759784 . krajbich , i. , armel , c. , rangel , a. , 2010 . visual fixations and the computation and comparison of value in simple choice .nature neuroscience 13 ( 10 ) , 12921298 .lin , x. s. , 1998 .double barrier hitting time distributions with applications to exotic options .insurance : mathematics and economics 23 ( 1 ) , 4558 .liu , s. , yu , a. j. , holmes , p. , 2009 .dynamical analysis of bayesian inference models for the eriksen task .neural computation 21 ( 6 ) , 15201553 .milosavljevic , m. , malmaud , j. , huth , a. , koch , c. , rangel , a. , 2010 .the drift diffusion model can account for the accuracy and reaction time of value - based choices under high and low time pressure .judgment and decision making 5 ( 6 ) , 437449 .navarro , d. j. , fuss , i. g. , 2009 .fast and accurate calculations for first - passage times in wiener diffusion models .journal of mathematical psychology 53 ( 4 ) , 222230 . ratcliff , r. , 1980 . a note on modeling accumulation of information when the rate of accumulation changes over time .journal of mathematical psychology 21 ( 2 ) , 178184 .ratcliff , r. , mckoon , g. , 2008 .the diffusion decision model : theory and data for two - choice decision tasks .neural computation 20 ( 4 ) , 873 922 .ratcliff , r. , rouder , j. n. , 1998 . modeling response times for two - choice decisions .psychological science 9 ( 5 ) , 347356 .ratcliff , r. , smith , p. l. , 2004 . a comparison of sequential sampling models for two - choice reaction time .psychological review 111 ( 2 ) , 333367 .servan - schreiber , d. , printz , h. , cohen , j. , 1990 . a network model of catecholamine effects-gain , signal - to - noise ratio , and behavior .science 249 ( 4971 ) , 892895 .servant , m. , white , c. , montagnini , a. , burle , b. , 2015 .using covert response activation to test latent assumptions of formal decision - making models in humans .the journal of neuroscience 35 ( 28 ) , 1037110385 .shadlen , m. n. , newsome , w. t. 
, 2001 .neural basis of a perceptual decision in the parietal cortex ( area lip ) of the rhesus monkey .journal of neurophysiology 86 ( 4 ) , 19161936 .simen , p. , contreras , d. , buck , c. , hu , p. , holmes , p. , cohen , j. d. , 2009 . reward rate optimization in two - alternative decision making : empirical tests of theoretical predictions .journal of experimental psychology : human perception and performance 35 ( 6 ) , 1865 .smith , p. l. , 2000 .stochastic dynamic models of response time and accuracy : a foundational primer . journal of mathematical psychology 44 ( 3 ) , 408 463 .usher , m. , mcclelland , j. l. , 2001 .the time course of perceptual choice : the leaky , competing accumulator model .psychological review 108 ( 3 ) , 550 . verdonck , s. , meers , k. , tuerlinckx , f. , mar 2015 .efficient simulation of diffusion - based choice rt models on cpu and gpu .behavior research methods .voss , a. , voss , j. , feb 2008 .a fast numerical algorithm for the estimation of diffusion model parameters .journal of mathematical psychology 52 ( 1 ) , 1 9 .wald , a. , 1945 .sequential tests of statistical hypotheses .the annals of mathematical statistics 16 ( 2 ) , 117186 .wald , a. , wolfowitz , j. , 1948 .optimum character of the sequential probability ratio test .the annals of mathematical statistics 19 ( 3 ) , 326339 . wang , x .- j .probabilistic decision making by slow reverberation in cortical circuits .neuron 36 ( 5 ) , 955968 .white , c. n. , ratcliff , r. , starns , j. j. , 2011 .diffusion models of the flanker task : discrete versus gradual attentional selection .cognitive psychology 63 ( 4 ) , 210238 . wong , k .- f ., wang , x .- j . , 2006 .a recurrent network mechanism of time integration in perceptual decisions .the journal of neuroscience 26 ( 4 ) , 13141328 .the laplace transform , or moment generating function , of the fpt density conditioned on response is given for by & = \frac{e^ { \frac{a_1 ( z - x_0)}{\sigma_1 ^ 2}}}{(1-\er_1 ) } \frac { \sinh ( \frac{(z+x_0 ) \sqrt{2\alpha \sigma_1 ^ 2 + a_1 ^ 2 } } { \sigma_1 ^ 2 } ) } { \sinh ( \frac{2z\sqrt{2\alpha \sigma_1 ^ 2 + a_1 ^ 2}}{\sigma_1 ^ 2 } ) } , \\ \expt[e^{-\alpha \tau_1 } |x(\tau_1)=-z ] & = \frac{e^{- \frac{a_1 ( z + x_0)}{\sigma_1 ^ 2}}}{\er_1 } \frac { \sinh ( \frac{(z - x_0 ) \sqrt{2\alpha \sigma_1 ^ 2 + a_1 ^ 2 } } { \sigma_1 ^ 2 } ) } { \sinh ( \frac{2z\sqrt{2\alpha \sigma_1 ^ 2 + a_1 ^ 2}}{\sigma_1 ^ 2})}.\end{aligned}\ ] ] references for these expressions may be found in the main text .we first establish . first consider the case .let be the filtration defined by the evolution of the msddm until time conditioned on .for some , it can be shown that = e^{-2s_i x(s)} ] yields the desired expression . 
for , we note that is a martingale .therefore , applying the optional stopping theorem , we obtain - \sigma_i^2 t_{i-1 } & = \expt[x(\hat \tau)^2 - \sigma_i^2 \hat \tau_i ] \\ & = \expt[x(\tau_i)^2 - \sigma_i^2 \tau_i|\tau_i \le t_i ] \prob(\tau_i \le t_i ) + \expt[x_i^2 - \sigma_i^2 t_i ] \prob ( \tau_i > t_i)]\\ & = ( z^2 -\sigma_i^2 \expt[\tau_i| \tau_i \le t_i ] ) \prob(\tau_i \le t_i ) + ( \expt[x_i^2 ] - \sigma_i^2 t_i ) \prob(\tau_i > t_i).\end{aligned}\ ] ] solving the above equation for ] and ] , now establish .we note that & = \sum_{i=1}^n \expt[\tau \bs 1(t_{i-1 } < \tau \le t_i ) ] \\ & = \sum_{i=1}^n \expt[\tau \bs 1(\tau \le t_i ) | \tau > t_{i-1 } ] \prob(\tau > t_{i-1 } ) \\ & = \sum_{i=1}^n \big ( \expt[\tau_i | \tau_i \le t_i ] \prob ( \tau_i \le t_i ) \prod_{j=1}^{i-1 } \prob(\tau_j > t_j ) \big ) . \end{aligned}\ ] ]
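as a numerical companion to the optional-stopping calculations above, the following sketch evaluates the classical closed-form error rate and unconditional mean decision time for a single-stage ddm with constant parameters, symmetric thresholds at plus and minus z, and no deadline; these are the standard formulas that the martingale argument yields in that special case (see, e.g., the references cited in the text), and they reduce to er = 1/(1 + exp(2 a z / sigma^2)) and dt = (z/a) tanh(a z / sigma^2) when x0 = 0. the per-stage expressions of the msddm further condition on the random initial condition and the stage deadline, which this sketch does not attempt.

import numpy as np

def ddm_error_rate(a, sigma, z, x0=0.0):
    """p(lower boundary -z is reached first) for dx = a dt + sigma dw, start x0, drift a > 0."""
    k = 2.0 * a / sigma**2
    return (np.exp(-k * x0) - np.exp(-k * z)) / (np.exp(k * z) - np.exp(-k * z))

def ddm_mean_decision_time(a, sigma, z, x0=0.0):
    """unconditional mean first-passage time via the wald / optional stopping identity."""
    k = 2.0 * a / sigma**2
    p_upper = (1.0 - np.exp(-k * (x0 + z))) / (1.0 - np.exp(-2.0 * k * z))
    return (2.0 * z * p_upper - (x0 + z)) / a

# sanity check against the x0 = 0 reductions quoted above
a, sigma, z = 0.5, 1.0, 1.0
assert np.isclose(ddm_error_rate(a, sigma, z), 1.0 / (1.0 + np.exp(2 * a * z / sigma**2)))
assert np.isclose(ddm_mean_decision_time(a, sigma, z), (z / a) * np.tanh(a * z / sigma**2))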
In this work, we use martingale theory to derive formulas for the expected decision time, error rates, and first-passage times associated with a multistage drift diffusion model, defined as a Wiener diffusion model with piecewise-constant, time-varying drift rates and decision boundaries. The model we study is a generalization of one considered in earlier work. The derivation relies on the optional stopping theorem applied to properly chosen martingales, yielding formulae that can be used to compute performance metrics for a particular stage of the stochastic decision process. We also explicitly solve the case of a two-stage diffusion model and provide numerical demonstrations of the computations suggested by our analysis. We discuss applications of these formulae to experiments involving time pressure and/or changes in attention over the course of the decision process. We further show how they can be used to semi-analytically compute reward rate in the service of optimizing the speed-accuracy trade-off. Finally, we present calculations that allow our techniques to approximate time-varying Ornstein-Uhlenbeck processes. By presenting these explicit formulae, we aim to foster the development of refined numerical methods and analytical techniques for studying diffusion models with time-varying parameters.
classical mathematical finance has , for a long time , been based on the assumption that the price process of market securities may be approximated by geometric brownian motion in liquid markets the autocorrelation of price changes decays to negligible values in a few minutes , consistent with the absence of long term statistical arbitrage .geometric brownian motion models this lack of memory , although it does not reproduce the empirical leptokurtosis . on the other hand , nonlinear functions of the returns exhibit significant positive autocorrelation . for example , there is volatility clustering , with large returns expected to be followed by large returns and small returns by small returns ( of either sign ) .this , together with the fact that autocorrelations of volatility measures decline very slowly , has the clear implication that long memory effects should somehow be represented in the process and this is not included in the geometric brownian motion hypothesis .one other hand , as pointed out by engle , when the future is uncertain , investors are less likely to invest .therefore uncertainty ( volatility ) would have to be changing over time .the conclusion is that a dynamical model for volatility is needed and in eq.([1.00 ] ) , rather than being a constant , becomes a process by itself .this idea led to many deterministic and stochastic models for the volatility ( and references therein ) . in a previous paper , using both a criteria of mathematical simplicity and consistency with market data , a stochastic volatility model was constructed , with volatility driven by fractional noise .it appears to be the minimal model consistent both with mathematical simplicity and the market data .this data - inspired model is different from the many stochastic volatility models that have been proposed in the literature .the model was used to compute the price return statistics and asymptotic behavior , which were compared with actual data .deviations from the classical black - scholes result and a new option pricing formula were also obtained .the _ fractional volatility model _ , its predictions and comparison with data will be reviewed in section 2 . when this fractional volatility model was first presented , an interesting remark by an economist was _ all right , the model seems to fit reasonably well the data , but where is the economics ? _ . the same remark might be made about the simple geometric brownian model , which does not even fit the data and is used by most of the of mathematical finance practitioners .. 
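As an aside, the stylized facts invoked in this paragraph are easy to probe numerically. The sketch below (illustrative Python on synthetic data, not the authors' analysis) computes the sample autocorrelation of returns and of absolute returns; under geometric Brownian motion both fluctuate around zero, whereas on market data the second one stays positive and decays slowly, which is the volatility-clustering signature referred to above.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a series x for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var) for k in range(1, max_lag + 1)])

# Under geometric Brownian motion the log-returns are i.i.d. Gaussian, so both
# autocorrelations below are flat at zero; on market data the second one (absolute
# returns, a volatility proxy) stays positive and decays slowly.
rng = np.random.default_rng(0)
gbm_returns = 0.0002 + 0.01 * rng.standard_normal(10_000)  # illustrative mu*dt, sigma*sqrt(dt)
print("returns   :", np.round(autocorr(gbm_returns, 5), 3))
print("|returns| :", np.round(autocorr(np.abs(gbm_returns), 5), 3))
```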
but , of course , our economist was right .the fractional volatility model seems to be a reasonable _ mathematical parametrization _ of the market behavior , but it is not sufficient to fit the data .one should also search for the mechanisms in the market that lead to the observed phenomena .no agent - based model can pretend to be the market itself , not even a realistic image of it .nevertheless it may provide a surrogate model of the basic mechanics at work there .therefore , the idea in this paper is to use stylized agent - based market models and find out which features of these models correspond to each one of the features of the mathematical parametrization of the data .the basic hypothesis for the model construction were : ( h1 ) the log - price process belongs to a probability product space of which the first one , , is the wiener space and the second , , is a probability space to be reconstructed from the data .denote by and the elements ( sample paths ) in and and by and the in and generated by the processes up to .then , a particular realization of the log - price process is denoted this first hypothesis is really not limitative .even if none of the non - trivial stochastic features of the log - price were to be captured by brownian motion , that would simply mean that is a trivial function in .( h2 ) the second hypothesis is stronger , although natural .one assumes that , for each fixed , is a square integrable random variable in .a mathematical consequence of hypothesis ( h2 ) is that , for each fixed , where and are well - defined processes in .( theorem 1.1.3 in ref. ) recall that if is a process such that with and being processes , then the process associated to the probability space could then be inferred from the data . according to ( [ 2.3 ] ) , for each fixed realization in one has each set of market data corresponds to a particular realization .therefore , assuming the realization to be typical , the process may be reconstructed from the data by the use of ( [ 2.4 ] ) .this data - reconstructed process was called the _ induced volatility_. 
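As detailed just below, the practical reconstruction replaces the instantaneous variance in (2.4) by a local estimate over a finite time window. A minimal sketch of such a rolling-window estimator (illustrative Python, not the authors' exact estimator) is:

```python
import numpy as np

def induced_volatility(log_price, window):
    """Rolling estimate of the induced volatility from a log-price series: the local
    standard deviation of log-price increments over `window` observations.  The window
    size trades locality against the reliability of the local variance estimate."""
    increments = np.diff(np.asarray(log_price, dtype=float))
    n = len(increments) - window + 1
    return np.array([increments[i:i + window].std(ddof=1) for i in range(n)])

# illustrative usage on synthetic data
rng = np.random.default_rng(0)
fake_log_price = np.cumsum(0.01 * rng.standard_normal(5000))
sigma_hat = induced_volatility(fake_log_price, window=50)
print(sigma_hat[:5])
```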
for practical purposes we can not strictly use eq.([2.4 ] ) to reconstruct the induced volatility process , because when the time interval is very small the empirical evaluation of the variance becomes unreliable .instead , was estimated from with a time window sufficiently small to give a reasonably local characterization of the volatility , but also sufficiently large to allow for a reliable estimate of the local variance of .once several data sets were analyzed , the next step towards obtaining a mathematical characterization of the _ induced volatility process _ was to look for scaling properties .it turned out that neither nor were good hypothesis for the induced volatility process .it means that the induced volatility process itself is not self - similar .if instead , one computes the empirical integrated log - volatility , one finds that it is well represented by a relation of the form the process possessing very accurate self - similar properties .a nondegenerate process , if it has finite variance , stationary increments and is self - similar must necessarily have a covariance with .the simplest process with these properties is a gaussian process called fractional brownian motion , with = 0\qquad \mathbb{e}\left [ b_{h}\left ( t\right ) b_{h}\left ( s\right ) \right ] = \frac{1}{2}\left\ { \left| t\right| ^{2h}+\left| s\right| ^{2h}-\left| t - s\right| ^{2h}\right\ } \label{2.11}\ ] ] and , for , a long range dependence therefore , mathematical simplicity suggested the identification of the process with fractional brownian motion . and , from the data , one obtains hurst coefficients in the range .finally one obtains the following _ fractional volatility model _ is a volatility intensity parameter and is the observation time scale .notice that the volatility is not driven by fractional brownian motion but by fractional noise , naturally introducing an observation scale dependence . at each fixed time is a gaussian random variable with mean and variance .then , therefore with thus , the effective probability distribution of the returns might depend both on the time lag and on the observation time scale used to construct the volatility process . that this latter dependence might actually be very weak ,seems to be implied by comparison with the data from several markets .a closed - form expression for the returns distribution and its asymptotic behavior may be obtained , namely with asymptotic behavior , for large returns with and some illustrative comparisons with market data were performed . in fig.1 nyseone - day data was used to fix the parameters of the volatility process .then , using , the one - day return distribution predicted by the model is compared with the data .the agreement is quite reasonable . for comparison a log - normal with the same mean and varianceis also plotted in fig.1 .then , in fig .2 , using the same parameters , the same comparison is made for the and data .3 shows a somewhat surprising result . using the same parameters and just changing from ( one day ) to ( one minute ) , the prediction of the modelis compared with one - minute data of usdollar - euro market for a couple of months in 2001 .the result is surprising , because one would not expect the volatility parametrization to carry over to such a different time scale and also because one is dealing with different markets . in fig.4 andfig.5 one sees the same one - day and one - minute return data discussed before , as well as the predictions of the model , both in semilogarithmic and loglog plots . 
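A hedged simulation sketch of the fractional volatility model summarised above may help fix ideas: fractional Gaussian noise is generated exactly via a Cholesky factorisation of its covariance, log-volatility is taken proportional to a fractional-noise increment over a window delta, and the log-price is driven by an independent Brownian motion scaled by that volatility. All parameter values (beta, k, delta, H) are illustrative, not the fitted ones used for the figures.

```python
import numpy as np

def fractional_gaussian_noise(n, hurst, rng):
    """Exact simulation of n unit-step fractional Gaussian noise increments via a
    Cholesky factorisation of their covariance (fine for moderate n)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) + np.abs(k - 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fractional_volatility_paths(n_steps=2000, hurst=0.85, beta=-4.5, k_vol=0.3,
                                delta=10, mu=0.0, seed=0):
    """Sketch of the fractional volatility model described above: log-volatility is
    proportional to a fractional Brownian motion increment over a window delta, and
    the log-price is driven by an independent Brownian motion scaled by that
    volatility.  All parameter values here are illustrative, not fitted ones."""
    rng = np.random.default_rng(seed)
    fgn = fractional_gaussian_noise(n_steps, hurst, rng)
    b_h = np.concatenate(([0.0], np.cumsum(fgn)))      # fractional Brownian motion
    incr = b_h[delta:] - b_h[:-delta]                   # B_H(t) - B_H(t - delta)
    log_sigma = beta + (k_vol / delta) * incr
    sigma = np.exp(log_sigma)
    dW = rng.standard_normal(len(sigma))
    log_returns = mu + sigma * dW                       # unit time step
    return sigma, log_returns

sigma, r = fractional_volatility_paths()
print("volatility range   :", sigma.min(), sigma.max())
print("kurtosis of returns:", ((r - r.mean()) ** 4).mean() / r.var() ** 2)
```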
as seen from figs . 4 and 5 , the exact result ( [ 3.10 ] ) or ( [ 3.15 ] ) resembles the double exponential distribution recognized by silva , prange and yakovenko as a new stylized fact in market data .the double exponential distribution has been shown , by dragulescu and yakovenko , to follow from heston s stochastic volatility model .notice however that our model is different from heston s model in that volatility is driven by a process with memory ( fractional noise ) . as a result , despite the qualitative similarity of behavior at intermediate return ranges , the analytic form of the distribution and the asymptotic behavior are different. new option pricing pricing formulas may be obtained from the model both in a simplified risk - neutral form or , more accurately , using fractional malliavin calculus .assuming risk neutrality , the value of an option is the present value of the expected terminal value discounted at the risk - free rate ] and ] around the current price . every time a _ buy _ order arrives it is fulfilled by the closest non - empty ask slot , the new current price being determined by the value of the ask that fulfills it .if no ask exists when a buy order arrives it goes to a cumulative register to wait to be fulfilled .the symmetric process occurs when a _ sell _ order arrives , the new price being the bid that buys it . because the window around the current price moves up and down , asks and bids that are too far away from the current price are automatically eliminated .sell and buy orders , asks and bids all arrive at random .the only parameters of the model are the width of the limit - order book and the size of the asks and bids , the sell and buy orders being normalized to one .the model was run for different widths and liquidities and , for comparison with the fractional volatility model , one computes as before and .although the exact values of the statistical parameters depend on and , the statistical nature of the results seems to be essentially the same .fig.10 shows typical plots of the price process , the volatility , and obtained for and the limit - order book divided into discrete price slots with .the scaling properties of are quite evident from the lower right plot in the figure , the hurst coefficient being .fig.11 shows the correlation and the pdf of the one - time returns . 
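The limit-order-book model sketched in words above can be written in a few lines. The code below is a rough, assumption-laden rendition in Python: details the text leaves open, such as where limit orders are placed inside the window, how waiting market orders are handled, or the exact order mix, are filled in with the simplest choices, so it should be read as an illustration of the mechanism rather than a reproduction of the authors' simulations.

```python
import numpy as np

def lob_toy_model(n_events=200_000, width=50, tick=1, seed=0):
    """Minimal limit-order-book price model: asks and bids sit on discrete price slots
    inside a moving window around the current price; market buy (sell) orders are
    matched with the closest ask (bid) and move the price there; orders left outside
    the moving window are removed.  Unmatched market orders are simply ignored here."""
    rng = np.random.default_rng(seed)
    price = 0
    asks, bids = set(), set()          # occupied price slots (integer ticks)
    prices = [price]
    for _ in range(n_events):
        event = rng.integers(4)        # 0: new ask, 1: new bid, 2: buy order, 3: sell order
        if event == 0:                 # limit sell placed at a random slot above the price
            asks.add(price + tick * int(rng.integers(1, width + 1)))
        elif event == 1:               # limit buy placed at a random slot below the price
            bids.add(price - tick * int(rng.integers(1, width + 1)))
        elif event == 2 and asks:      # market buy matched with the lowest ask
            price = min(asks)
            asks.discard(price)
        elif event == 3 and bids:      # market sell matched with the highest bid
            price = max(bids)
            bids.discard(price)
        # the window follows the price: stale orders too far away are removed
        asks = {a for a in asks if a <= price + width * tick}
        bids = {b for b in bids if b >= price - width * tick}
        prices.append(price)
    return np.array(prices, dtype=float)

p = lob_toy_model()
dp = np.diff(p)
print("std of price changes:", dp.std())
print("excess kurtosis     :", ((dp - dp.mean()) ** 4).mean() / dp.var() ** 2 - 3)
```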
From these results one concludes that the main statistical properties of the market data (fast decay of the linear correlation of the returns, non-Gaussianity and volatility memory) are already generated by the dynamics of the limit-order book with purely random behavior of the agents. This implies, as pointed out by some authors who have considered limit-order-book models in the past, that a large part of the market statistical properties (in normal, business-as-usual days) depends more on the nature of the price-fixing financial institutions than on particular investor strategies.

In summary:

(a) The fractional volatility model provides a reasonable mathematical parametrization of the bulk market data, that is, it captures the behavior of the market in business-as-usual trading days.

(b) A small modification of the original model, identifying the random generator of the log-price process with the integrator of the volatility process, also describes at least a part of the leverage effect.

(c) The market statistical behavior in normal days seems to be more influenced by the nature of the financial institutions (the double-auction process) than by the traders' strategies. Specific trader strategies and psychology should, however, play a role in market crises and bubbles.
Based on criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are discussed and, using agent-based models, we try to find which agent strategies and/or properties of the financial institutions might be responsible for the features of the fractional volatility model.

*Keywords*: fractional volatility, statistics of returns, option pricing, agent-based models
reinforcement learning ( rl ) is a machine learning framework intending to optimise the behaviour of an agent interacting with an unknown environment . for the most practical problems , trajectory collection is costly and sample efficiency is the main key performance indicatorthis is for instance the case for dialogue systems and robotics . as a consequence , when applying rl to a new problem , one must carefully choose in advance a model , an optimisation technique , and their parameters in order to learn an adequate behaviour given the limited sample set at hand . in particular , for 20 years , research has developed and applied rl algorithms for spoken dialogue systems , involving a large range of dialogue models and algorithms to optimise them . just to cite a few algorithms : monte carlo , -learning , sarsa , mvdp algorithms , kalman temporal difference , fitted- iteration , gaussian process rl , and more recently deep rl .additionally , most of them require the setting of hyper parameters and a state space representation . when applying these research results to a new problem , these choices may dramatically affect the speed of convergence and therefore , the dialogue system performance . facing the complexity of choice ,rl and dialogue expertise is not sufficient .confronted to the cost of data , the popular _ trial and error _ approach shows its limits ._ algorithm selection _ is a framework for comparing several algorithms on a given _problem instance_. the algorithms are tested on several problem instances , and hopefully , the algorithm selection learns from those experiences which algorithm should be the most efficient given a new problem instance . in our setting , only one problem instance is considered , but several experiments are led to determine the fittest algorithm to deal with it .thus , we developed an _ online _ learning version of algorithm selection .it consists in testing several algorithms on the task and in selecting the best one at a given time .indeed , it is important to notice that , as new data is collected , the algorithms improve their performance and that an algorithm might be the worst at a short - term horizon , but the best at a longer - term horizon . in order to avoid confusion , throughout the whole article , the algorithm selectoris called a _meta - algorithm _ , and the set of algorithms available to the meta - algorithm is called a _ portfolio_. defined as an online learning problem , our algorithm selection task has for objective to minimise the expected regret . at each online algorithm selection ,only the selected algorithm is experienced .since the algorithms learn from their experience , it implies a requirement for a fair budget allocation between the algorithms , so that they can be equitably evaluated and compared .budget fairness is in direct contradiction with the expected regret minimisation objective . 
In order to circumvent this, the reinforcement learning algorithms in the portfolio are assumed to be _off-policy_, meaning that they can learn from experiences generated by an arbitrary, non-stationary behavioural policy. Section [sec:rlmodel] provides a unifying view of reinforcement learning algorithms that allows information sharing between all algorithms of the portfolio, whatever their decision processes, their state representations, and their optimisation techniques. Then, section [sec:asrl] formalises the problem of online selection of off-policy reinforcement learning algorithms. It introduces three definitions of pseudo-regret and states three assumptions related to experience sharing and budget fairness among algorithms. Beyond sample efficiency, the online algorithm selection approach furthermore addresses four distinct practical problems for spoken dialogue systems, and for online RL-based systems more generally. First, it enables a systematic benchmark of models and algorithms for a better understanding of their strengths and weaknesses. Second, it improves robustness against implementation bugs: if an algorithm fails to terminate, or converges to an aberrant policy, it will be dismissed and others will be selected instead. Third, convergence guarantees and empirical efficiency may be united by covering the empirically efficient algorithms with slower algorithms that have convergence guarantees. Fourth and last, it enables staggered learning: shallow models converge the fastest and consequently control the policy in the early stages, while deep models discover the best solution later and control the policy in the late stages. Afterwards, section [sec:amabma] presents epochal stochastic bandit algorithm selection (ESBAS), a novel meta-algorithm addressing the online off-policy reinforcement learning algorithm selection problem. Its principle is to divide the time scale into epochs of exponential length, inside which the algorithms are not allowed to update their policies. During each epoch, the algorithms therefore have constant policies, and a stochastic multi-armed bandit can be put in charge of the algorithm selection with strong theoretical pseudo-regret guarantees. A thorough theoretical analysis provides upper bounds for the ESBAS pseudo-regrets defined in section [sec:asrl], under the assumptions stated in the same section.

[fig:rl] (Figure: the reinforcement learning loop: the agent sends actions to the stochastic environment and receives back observations and rewards.)

Next, section [sec:experesults] evaluates ESBAS on a simulated dialogue task and presents the experimental results, which demonstrate the practical benefits of ESBAS: in most cases it outperforms the best algorithm in the portfolio, even though its primary goal was only to be almost as good as it. Finally, sections [sec:related] and [sec:conclusion] conclude the article with related work and prospective ideas for improvement, respectively.

The goal of this section is to enable information sharing between the algorithms, even though they are considered as black boxes. We propose to share their trajectories expressed in a universal format: the _interaction process_.
reinforcement learning ( rl ) consists in learning through trial and error to control an agent behaviour in a stochastic environment .more formally , at each time step , the agent performs an action , and then perceives from its environment a signal called observation , and receives a reward .figure [ fig : rl ] illustrates the rl framework .this interaction process is not markovian : the agent may have an internal memory . in this article, the reward function is assumed to be bounded between and , and we define the rl problem as episodic .let us introduce two time scales with different notations .first , let us define _ meta - time _ as the time scale for algorithm selection : at one meta - time corresponds a meta - algorithm decision , _i.e. _ the choice of an algorithm and the generation of a full episode controlled with the policy determined by the chosen algorithm .its realisation is called a _trajectory_. second , _rl - time _ is defined as the time scale inside a trajectory , at one rl - time corresponds one triplet composed of an observation , an action , and a reward .let denote the space of trajectories .a _ trajectory _ collected at meta - time is formalised as a sequence of ( observation , action , reward ) triplets : where is the length of trajectory .the objective is , given a discount factor , to generate trajectories with high discounted cumulative reward , also called _ return _ , and noted : since and , the return is bounded : the _ trajectory set _ at meta - time is denoted by : a sub - trajectory of until rl - time is called the _ history _ at rl - time and written with .the history records what happened in episode until rl - time : the goal of each reinforcement learning algorithm is to find a policy which yields optimal expected returns .such an algorithm is viewed as a black box that takes as an input a trajectory set , where is the ensemble of trajectory sets of undetermined size : , and that outputs a policy .consequently , a reinforcement learning algorithm is formalised as follows : such a high level definition of the rl algorithms allows to share trajectories between algorithms : a trajectory as a sequence of observations , actions , and rewards can be interpreted by any algorithm in its own decision process and state representation .for instance , rl algorithms classically rely on an mdp defined on a state space representation thanks to a projection : the state representation may be built dynamically as the trajectories are collected .many algorithms doing so can be found in the literature , for instance .then , may learn its policy from the trajectories projected on its state space representation and saved as a transition set : a transition is defined as a quadruplet , with state , action , reward , and next state .off - policy reinforcement learning optimisation techniques compatible with this approach are numerous in the literature : -learning , fitted- iteration , kalman temporal difference , etc .another option would be to perform end - to - end reinforcement learning .as well , any post - treatment of the state set , any alternative decision process model , such as pomdps , and any off - policy technique for control optimisation may be used .the algorithms are defined here as black boxes and the considered meta - algorithms will be indifferent to how the algorithms compute their policies , granted they satisfy the assumptions made in the following section .algorithm selection for combinatorial search consists in deciding which experiments should be 
carried out, given a problem instance and a fixed amount of computational resources: generally speaking, computer time, memory, wall-clock time and/or money. Algorithms are considered efficient if they consume few resources. This approach, often compared and applied to a scheduling problem, has been very successful, for instance in the SAT competition. Algorithm selection applied to machine learning, also called _meta-learning_, is mostly dedicated to error minimisation given a corpus of limited size. Indeed, these algorithms do not deliver _in fine_ the same answer. In practice, algorithm selection can be applied to arbitrary performance metrics and modelled in the same framework. In the classical batch setting, the machine learning algorithm selection problem is commonly described with the following ingredients:

* the space of problem instances;
* the _portfolio_, _i.e._ the collection of available algorithms;
* the _objective function_, _i.e._ a performance metric used to rate an algorithm on a given instance;
* the features characterising the properties of problem instances.

The principle consists in collecting problem instances and solving them with the algorithms in the portfolio. The objective measures provide evaluations of the algorithms on those instances. Then, the aggregation of the instance features with these measures constitutes a training set. Finally, any supervised learning technique can be used to learn an optimised mapping between instances and algorithms; a minimal sketch of such a mapping is given below. Nevertheless, in our case, the set of problem instances is not large enough to learn an efficient model; it might even be a singleton. Consequently, it is not possible to regress general knowledge from such a parametrisation. This is the reason why the online learning approach is tackled in this article: the different algorithms are experienced and evaluated during the data collection. Since this boils down to a classical exploration/exploitation trade-off, multi-armed bandits have been used for combinatorial search algorithm selection and for evolutionary algorithm meta-learning.
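For concreteness, here is a minimal sketch of the batch mapping mentioned above, with a nearest-neighbour rule standing in for whatever supervised learner one would actually use; all arrays are synthetic placeholders, not data from any benchmark.

```python
import numpy as np

# Hypothetical batch algorithm selection: each past problem instance i is described by a
# feature vector features[i] and perf[i, k] is the measured objective of algorithm k on it.
# A simple nearest-neighbour rule then maps a new instance to the predicted best algorithm.
rng = np.random.default_rng(0)
features = rng.standard_normal((200, 5))   # instance features, illustrative
perf = rng.standard_normal((200, 3))       # algorithm performances, illustrative
best = perf.argmax(axis=1)                 # training label: empirically best algorithm

def select_algorithm(new_features):
    nearest = np.argmin(np.linalg.norm(features - new_features, axis=1))
    return best[nearest]

print("selected algorithm:", select_algorithm(rng.standard_normal(5)))
```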
in the online setting ,the algorithm selection problem for off - policy reinforcement learning is new and we define it as follows : * is the _ trajectory set _ ; * is the _ portfolio _ ; * is the _ objective function _ defined in equation [ eq : return ] .pseudo - code [ alg : algorithmselectionproblem ] formalises the online algorithm selection setting .meta - algorithm _ is defined as a function from a trajectory set to the selection of an algorithm : the meta - algorithm is queried at each meta - time , with input , and it ouputs algorithm controlling with its policy the generation of the trajectory in the stochastic environment .let be a condensed notation for the expected return of policy that was learnt from trajectory set by algorithm : .\label{eq : expmu}\ ] ] the final goal is to optimise the cumulative expected return .it is the expectation of the sum of rewards obtained after a run of trajectories : _ = _= 1^t _ ^-1 ( |)_^(),[eq : cumulativeexpectedreturn1 ] _ = _= 1^t_,[eq : cumulativeexpectedreturn2 ] _ = _ .[ eq : cumulativeexpectedreturn3 ] equations [ eq : cumulativeexpectedreturn1 ] , [ eq : cumulativeexpectedreturn2 ] and [ eq : cumulativeexpectedreturn3 ] transform the cumulative expected return into two nested expectations .the outside expectation assumes the algorithm selection fixed and averages over the trajectory set stochastic collection and the corresponding algorithms policies , which may also rely on a stochastic process .the inside expectation assumes the policy fixed and averages the evaluation over its possible trajectories in the stochastic environment .equation [ eq : cumulativeexpectedreturn1 ] transforms the expectation into its probabilistic equivalent , denoting the probability density of generating trajectory set conditionally to the meta - algorithm .equation [ eq : cumulativeexpectedreturn2 ] transforms back the probability into a local expectation , and finally equation [ eq : cumulativeexpectedreturn3 ] simply applies the commutativity between the sum and the expectation ._ nota bene _ : there are three levels of decision : meta - algorithm selects an algorithm that computes a policy that in turn controls the actions .we focus in this paper on the meta - algorithm level . in order to evaluate the meta - algorithms ,let us formulate two additional notations .first , the _ optimal expected return _ is defined as the highest expected return achievable by a policy of an algorithm in portfolio : second , for every algorithm in the portfolio , let us define as its _ canonical meta - algorithm _ , _i.e. _ the meta - algorithm that always selects algorithm : , .the _ absolute pseudo - regret _ defines the regret as the loss for not having controlled the trajectory with an optimal policy .defnabsreg [ def : abs ] the absolute pseudo - regret compares the meta - algorithm s expected return with the optimal expected return : .\label{eq : absoluteregret}\ ] ] the absolute pseudo - regret is a well - founded pseudo - regret definition .however , it is worth noting that an optimal meta - algorithm will not yield a null regret because a large part of the absolute pseudo - regret is caused by the sub - optimality of the algorithm policies when the trajectory set is still of limited size .indeed , the absolute pseudo - regret considers the regret for not selecting an optimal policy : it takes into account both the pseudo - regret of not selecting the best algorithm and the pseudo - regret of the algorithms for not finding an optimal policy . 
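To make these objects concrete, here is a minimal Python sketch (interfaces and names are assumptions, not the paper's code) of the two building blocks defined so far: an off-policy algorithm viewed as a black box mapping the shared trajectory set to a policy through its own state projection, and the online algorithm selection loop in which the meta-algorithm picks the algorithm controlling each episode.

```python
import numpy as np
from collections import defaultdict

# A trajectory is a list of (observation, action, reward) triplets; the shared trajectory
# set D is simply a list of trajectories, whatever algorithm generated them.

class OffPolicyLearner:
    """An RL algorithm seen as a black box D -> policy.  It owns a projection `phi`
    (history -> state in its own representation) and re-learns a Q-function off-policy
    from the full shared trajectory set.  Names and structure are illustrative only."""

    def __init__(self, phi, actions, gamma=0.9, sweeps=20, lr=0.1, epsilon=0.1):
        self.phi, self.actions, self.gamma = phi, actions, gamma
        self.sweeps, self.lr, self.epsilon = sweeps, lr, epsilon

    def policy(self, trajectories):
        transitions = []                                  # project onto this algorithm's states
        for traj in trajectories:
            history = []
            for (obs, action, reward) in traj:
                state = self.phi(history)
                history = history + [(obs, action, reward)]
                transitions.append((state, action, reward, self.phi(history)))
        q = defaultdict(float)                            # off-policy Q-learning sweeps
        for _ in range(self.sweeps):                      # (terminal handling omitted)
            for (s, a, r, s2) in transitions:
                target = r + self.gamma * max(q[(s2, b)] for b in self.actions)
                q[(s, a)] += self.lr * (target - q[(s, a)])
        rng = np.random.default_rng()
        def act(history):                                 # epsilon-greedy policy on Q
            if rng.random() < self.epsilon:
                return self.actions[rng.integers(len(self.actions))]
            s = self.phi(history)
            return max(self.actions, key=lambda a: q[(s, a)])
        return act

def online_algorithm_selection(meta_algorithm, portfolio, environment, T, gamma=0.9):
    """The online loop: at each meta-time tau, the meta-algorithm selects an algorithm,
    the selected algorithm's policy (learnt from the shared set) controls one episode,
    and the trajectory is appended to the shared set.  `meta_algorithm.select/update`
    and `environment.run_episode` are assumed interfaces."""
    trajectories, returns, selections = [], [], []
    for tau in range(1, T + 1):
        k = meta_algorithm.select(trajectories)           # selection based on D_{tau-1}
        policy = portfolio[k].policy(trajectories)        # policy learnt from D_{tau-1}
        trajectory = environment.run_episode(policy)
        ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(trajectory))
        meta_algorithm.update(k, ret)
        trajectories.append(trajectory)
        returns.append(ret)
        selections.append(k)
    return trajectories, returns, selections
```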
Since the meta-algorithm does not interfere with the training of the policies, it cannot account for the pseudo-regret related to the latter. In order to have a pseudo-regret that is relative to the learning ability of the best algorithm and that better accounts for the efficiency of the algorithm selection task, we introduce the notion of _relative pseudo-regret_.

Definition [def:rel]. The relative pseudo-regret compares the meta-algorithm's expected return with the expected return of the best canonical meta-algorithm: \[\rho^\sigma_{rel}(T) = \max_{\alpha} \mathbb{E}_{\sigma^\alpha}\left[\sum_{\tau=1}^T \mathbb{E}\mu_{\mathcal{D}_{\tau-1}^{\sigma^\alpha}}^{\alpha}\right] - \mathbb{E}_\sigma\left[\sum_{\tau=1}^T \mathbb{E}\mu_{\mathcal{D}_{\tau-1}^{\sigma}}^{\sigma(\tau)}\right]. \label{eq:relregret}\]

It follows directly from equations [eq:absoluteregret] and [eq:relregret] that the relative pseudo-regret can be expressed in terms of the absolute pseudo-regrets of the meta-algorithm and of the canonical meta-algorithms: \[\rho^\sigma_{rel}(T) = \rho^\sigma_{abs}(T) - \min_{\alpha} \rho^{\sigma^\alpha}_{abs}(T).\]

Since one shallow algorithm might be faster in the early stages and a deeper one more effective later, a good meta-algorithm may achieve a negative relative pseudo-regret, which makes the relative pseudo-regret ill-defined as a pseudo-regret. Still, it is useful as an empirical evaluation metric. A large relative pseudo-regret shows that the meta-algorithm failed to consistently select the best algorithm(s) in the portfolio. A small, null, or even negative relative pseudo-regret demonstrates that using a meta-algorithm is a guarantee for selecting the algorithm that is most adapted to the problem.

The theoretical analysis is hindered by the fact that the algorithm selection not only directly influences the return distribution, but also the trajectory set distribution, and therefore the policies learnt by the algorithms for the next trajectories, which in turn indirectly affect the future expected returns.
in order to allow policy comparison , based on relation on trajectory setsthey are derived from , our analysis relies on three assumptions whose legitimacy is discussed in this section and further developed under the practical aspects in section [ sec : transgressions ] .assmorass the algorithms produce better policies with a larger trajectory set on average , whatever the algorithm that controlled the additional trajectory : }.\ ] ] [ ass : monotony ] assumption [ ass : monotony ] states that algorithms are off - policy learners and that additional data can not lead to performance degradation on average .an algorithm that is not off - policy could be biased by a specific behavioural policy and would therefore transgress this assumption .assordass if an algorithm produces a better policy with one trajectory set than with another , then it remains the same , on average , after collecting an additional trajectory from any algorithm : \leq \mathbb{e}_{\alpha'}\left[\mathbb{e}\mu_{\mathcal{d}'\cup\varepsilon^{\alpha'}}^{\alpha}\right ] .\end{array}\ ] ] [ ass : compatibility ] assumption [ ass : compatibility ] states that a performance relation between two policies learnt by a given algorithm from two trajectory sets is preserved on average after adding another trajectory , whatever the behavioural policy used to generate it .from these two assumptions , theorem [ th : notworse ] provides an upper bound in order of magnitude in function of the worst algorithm in the portfolio .it is verified for any algorithm selection : thmnotthm the absolute pseudo - regret is bounded by the worst algorithm absolute pseudo - regret in order of magnitude : [ th : notworse ] see the appendix .contrarily to what the name of theorem [ th : notworse ] suggests , a meta - algorithm might be worse than the worst algorithm ( similarly , it can be better than the best algorithm ) , but not in order of magnitude .its proof is rather complex for such an intuitive and loose result because , in order to control all the possible outcomes , one needs to translate the selections of algorithm with meta - algorithm into the canonical meta - algorithm s view , in order to be comparable with it .this translation is not obvious when the meta - algorithm and the algorithms it selects act tricky .see the proof for an example .the _ fairness of budget distribution _ has been formalised in .it is the property stating that every algorithm in the portfolio has as much resources as the others , in terms of computational time and data .it is an issue in most online algorithm selection problems , since the algorithm that has been the most selected has the most data , and therefore must be the most advanced one . a way to circumvent this issueis to select them equally , but , in an online setting , the goal of algorithm selection is precisely to select the best algorithm as often as possible . in short ,exploration and evaluation require to be fair and exploitation implies to be unfair .our answer is to require that all algorithms in the portfolio are learning _ off - policy _ , _ i.e. _ without bias induced by the behavioural policy used in the learning dataset . by assuming that all algorithms learn off - policy , we allow _ information sharing _ between algorithms .they share the trajectories they generate . 
as a consequence, we can assume that every algorithm , the least or the most selected ones , will learn from the same trajectory set .therefore , the control unbalance does not directly lead to unfairness in algorithms performances : all algorithms learn equally from all trajectories .however , unbalance might still remain in the exploration strategy if , for instance , an algorithm takes more benefit from the exploration it has chosen than the one chosen by another algorithm . in this article , we speculate that this chosen - exploration effect is negligible . more formally , in this article , for analysis purposes , the algorithm selection is assumed to be absolutely fair regardless the exploration unfairness we just discussed about .this is expressed by assumption [ ass : fairlearning ] .assfairass if one trajectory set is better than another for one given algorithm , it is the same for other algorithms . [ ass : fairlearning ] in practical problems , assumptions 2 and 3 are defeated , but empirical results in section [ sec : experesults ] demonstrate that the esbas algorithm presented in section [ sec : amabma ] is robust to the assumption transgressions .an intuitive way to solve the algorithm selection problem is to consider algorithms as arms in a multi - armed bandit setting .the bandit meta - algorithm selects the algorithm controlling the next trajectory and the trajectory return constitutes the reward of the bandit .however , a stochastic bandit can not be _ directly _ used because the algorithms performances vary and improve with time .adversarial multi - arm bandits are designed for non - stationary environments , but the exploitation the structure of our algorithm selection problem makes it possible to obtain pseudo - regrets of order of magnitude lower than {t}) ] , then the meta - algorithm will not be able to distinguish the two best algorithms . still , we have the guarantee that pseudo - regret {t}) ] , then .however , the budget , _i.e. _ the length of epoch starting at meta - time , equals .in fact , even more straightforwardly , the stochastic bandit problem is known to be , which highlights the limit of distinguishability at .spare the factor in table [ tab : bounds ] bounds , which comes from the fact the meta - algorithm starts over a novel bandit problem at each new epoch , esbas faces the same hard limit . 
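A compact sketch of the ESBAS procedure as described above: meta-time is divided into epochs whose lengths double, every algorithm re-trains its policy on the shared trajectory set at each epoch start, and a fresh UCB1 bandit selects among the now-frozen policies inside the epoch, fed with episode returns. Epoch lengths, the exploration constant xi, and the environment/portfolio interfaces are illustrative assumptions consistent with the OffPolicyLearner sketch given earlier.

```python
import numpy as np

def esbas(portfolio, environment, n_epochs=12, first_epoch_len=16, gamma=0.9, xi=1.0):
    """Epochal Stochastic Bandit Algorithm Selection (sketch).  Assumes
    first_epoch_len >= len(portfolio) and bounded returns, as in the text."""
    K = len(portfolio)
    trajectories, selections = [], []
    # first two epochs have the same length, then lengths double
    epoch_lengths = [first_epoch_len] + [first_epoch_len * 2 ** b for b in range(n_epochs - 1)]
    for length in epoch_lengths:
        policies = [alg.policy(trajectories) for alg in portfolio]   # frozen during the epoch
        counts, sums = np.zeros(K), np.zeros(K)                      # a fresh UCB1 bandit
        for t in range(1, length + 1):
            if t <= K:
                k = t - 1                                            # play each arm once first
            else:
                ucb = sums / counts + np.sqrt(xi * np.log(t) / counts)
                k = int(np.argmax(ucb))
            trajectory = environment.run_episode(policies[k])
            ret = sum(gamma ** i * r for i, (_, _, r) in enumerate(trajectory))
            counts[k] += 1                                           # episode return is the bandit reward
            sums[k] += ret
            trajectories.append(trajectory)
            selections.append(k)
    return trajectories, selections
```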
as equation [ eq : ]recalls , the absolute pseudo - regret can be decomposed between the absolute pseudo - regret of the canonical meta - algorithm of the best algorithm and the relative pseudo - regret , which is the regret for not running the best algorithm alone .the relative pseudo regret can in turn be upper bounded by a decomposition of the selection regret : the regret for not always selecting the best algorithm , and potentially not learning as fast , and the short - sighted regret : the regret for not gaining the returns granted by the best algorithm .these two successive decomposition lead to theorem [ th : absolute ] that provides an upper bound of the absolute pseudo - regret in function of the canonical meta - algorithm of the best algorithm , and the short - sighted pseudo - regret , which order magnitude is known to be bounded thanks to theorem [ th : shortsightedregret ] .thmabsthm if the stochastic multi - armed bandit guarantees that the best arm has been selected in the first episodes at least times , with high probability , then : where algorithm selection selects exclusively algorithm .[ th : absolute ] see the appendix .[ c?c|c ] & + & & + & & + @ , + & & + @ , + & & @ , + + , + + & & + table [ tab : absbounds ] reports an overview of the absolute pseudo - regret bounds in order of magnitude of a two - fold portfolio in function of the asymptotic behaviour of the gap and the absolute pseudo - regret of the meta - algorithm of the the best algorithm , obtained with theorems [ th : notworse ] , [ th : shortsightedregret ] and [ th : absolute ] . table [ tab : bounds ] is interpreted by line . according the order of magnitude of in the first column, the second and third columns display the esbas absolute pseudo - regret bounds cross depending on the order of magnitude of .several remarks on table [ tab : absbounds ] can be made .firstly , like in table [ tab : bounds ] , theorem [ th : notworse ] is applied when esbas is unable to distinguish the better algorithm , and theorem [ th : shortsightedregret ] are applied when esbas algorithm selection is useful : the worse algorithm is , the easier algorithm selection gets , and the lower the upper bounds .secondly , implies that .thirdly and lastly , _ in practice _ , the second best algorithm absolute pseudo - regret is of the same order of magnitude than the sum of : .for this reason , in the last column , the first bound is greyed out , and is assumed in the other bounds .it is worth noting that upper bounds expressed in order of magnitude are all inferior to {t}) ] to agree on it .each player is considered fully empathetic to the other one . as a result ,if the players come to an agreement , the system s immediate reward at the end of the dialogue is : where is the last state reached by player at the end of the dialogue , and is the agreed option ; if the players fail to agree , the final immediate reward is : and finally , if one player misunderstands and agrees on a wrong option , the system gets the cost of selecting option without the reward of successfully reaching an agreement : players act each one in turn , starting randomly by one or the other .they have four possible actions : * refprop : the player makes a proposition : option .if there was any option previously proposed by the other player , the player refuses it . *askrepeat : the player asks the other player to repeat its proposition . 
*accept : the player accepts option that was understood to be proposed by the other player .this act ends the dialogue either way : whether the understood proposition was the right one ( equation [ eq : rsuccess ] ) or not ( equation [ eq : rfailure ] ) .* enddial : the player does not want to negotiate anymore and ends the dialogue with a null reward ( equation [ eq : rgiveup ] ) .understanding through speech recognition of system is assumed to be noisy : with a sentence error rate of probability , an error is made , and the system understands a random option instead of the one that was actually pronounced . in order to reflect human - machine dialogue asymmetry , the simulated user always understands what the system says : .we adopt the way generates speech recognition confidence scores : if the player understood the right option , otherwise .the system , and therefore the portfolio algorithms , have their action set restrained to these five non parametric actions : refinsist refprop , being the option lastly proposed by the system ; refnewprop refprop , being the preferred one after , askrepeat , accept accept , being the last understood option proposition and enddial . all learning algorithms are using fitted- iteration , with a linear parametrisation and an -greedy exploration : , being the epoch number .six algorithms differing by their state space representation are considered : * _ simple _ : state space representation of four features : the constant feature , the last recognition score feature , the difference between the cost of the proposed option and the next best option , and finally an rl - time feature . .* _ fast _ : . *_ simple-2 _ : state space representation of ten second order polynomials of _ simple _ features . , , , , , , , , , . * _ fast-2 _ : state space representation of six second order polynomials of _ fast _ features . , , , , , . * _n--\{simple / fast / simple-2/fast-2 } _ : versions of previous algorithms with additional features of noise , randomly drawn from the uniform distribution in $ ] . *_ constant- _ : the algorithm follows a deterministic policy of average performance without exploration nor learning .those constant policies are generated with _simple-2 _ learning from a predefined batch of limited size .+ + [ fig : simplevssquare ] in all our experiments , esbas has been run with ucb parameter .we consider 12 epochs .the first and second epochs last meta - time steps , then their lengths double at each new epoch , for a total of 40,920 meta - time steps and as many trajectories . 
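All portfolio algorithms differ only in their feature maps; the optimisation routine itself, fitted-Q iteration with a linear parametrisation, can be sketched as follows (illustrative Python; the epsilon-greedy schedule tied to the epoch number in the text is left here as a plain argument):

```python
import numpy as np

def fitted_q_iteration(transitions, n_actions, n_features, gamma=0.9, n_iterations=20, ridge=1e-3):
    """Fitted-Q iteration with a linear parametrisation: one weight vector per action,
    refit by ridge regression at every iteration against bootstrapped targets.
    `transitions` is a list of (phi_s, a, r, phi_s_next, terminal) with feature vectors phi_*."""
    theta = np.zeros((n_actions, n_features))
    phi = np.array([tr[0] for tr in transitions])
    a = np.array([tr[1] for tr in transitions])
    r = np.array([tr[2] for tr in transitions])
    phi_next = np.array([tr[3] for tr in transitions])
    terminal = np.array([tr[4] for tr in transitions], dtype=bool)
    for _ in range(n_iterations):
        q_next = phi_next @ theta.T                      # Q(s', a') for every action
        targets = r + gamma * np.where(terminal, 0.0, q_next.max(axis=1))
        for action in range(n_actions):                  # one ridge regression per action
            mask = a == action
            if not mask.any():
                continue
            X, y = phi[mask], targets[mask]
            theta[action] = np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ y)
    return theta

def epsilon_greedy(theta, phi_s, epsilon, rng):
    """Epsilon-greedy action choice on the linear Q-function."""
    if rng.random() < epsilon:
        return int(rng.integers(theta.shape[0]))
    return int(np.argmax(theta @ phi_s))
```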
is set to . The algorithms and ESBAS are playing with a stationary user simulator built through imitation learning from real human data. All the results are averaged over 1000 runs. The performance figures plot the curves of the algorithms' individual performance against the ESBAS portfolio control as a function of the epoch (the scale is therefore logarithmic in meta-time). The performance is the average return of the reinforcement learning problem defined in equation [eq:return]: in the negotiation game it equals , with value defined by equations [eq:rsuccess], [eq:rgiveup], and [eq:rfailure]. The ratio figures plot the average algorithm selection proportions of ESBAS at each epoch. Sampled relative pseudo-regrets are also provided in table [tab:recap], as well as the gain for not having chosen the worst algorithm in the portfolio. Relative pseudo-regrets have a 95% confidence interval of about , which is equivalent to per trajectory. Three experiment results are presented in this subsection.

Table [tab:recap]. Relative pseudo-regrets of ESBAS per portfolio: first value with respect to the best algorithm in the portfolio, second value with respect to the worst one (negative values mean ESBAS outperformed the reference).

portfolio & vs best & vs worst
_simple-2_ + _fast-2_ & 35 & -181
_simple_ + _n-1-simple-2_ & -73 & -131
_simple_ + _n-1-simple_ & 3 & -2
_simple-2_ + _n-1-simple-2_ & -12 & -38
_all-4_ + _constant-1.10_ & 21 & -2032
_all-4_ + _constant-1.11_ & -21 & -1414
_all-4_ + _constant-1.13_ & -10 & -561
_all-4_ & -28 & -275
_all-2-simple_ + _constant-1.08_ & -41 & -2734
_all-2-simple_ + _constant-1.11_ & -40 & -2013
_all-2-simple_ + _constant-1.13_ & -123 & -799
_all-2-simple_ & -90 & -121
_fast_ + _simple-2_ & -39 & -256
_simple-2_ + _constant-1.01_ & 169 & -5361
_simple-2_ + _constant-1.11_ & 53 & -1380
_simple-2_ + _constant-1.11_ & 57 & -1288
_simple_ + _constant-1.08_ & 54 & -2622
_simple_ + _constant-1.10_ & 88 & -1565
_simple_ + _constant-1.14_ & -6 & -297
_all-4_ + _all-4-n-1_ + _constant-1.09_ & 25 & -2308
_all-4_ + _all-4-n-1_ + _constant-1.11_ & 20 & -1324
_all-4_ + _all-4-n-1_ + _constant-1.14_ & -16 & -348
_all-4_ + _all-4-n-1_ & -10 & -142
_all-2-simple_ + _all-2-n-1-simple_ & -80 & -181
4*_n-2-simple_ & -20 & -20
4*_n-3-simple_ & -13 & -13
8*_n-1-simple-2_ & -22 & -22
_simple-2_ + _constant-0.97_ (no reset) & 113 & -7131
_simple-2_ + _constant-1.05_ (no reset) & 23 & -3756
_simple-2_ + _constant-1.09_ (no reset) & -19 & -2170
_simple-2_ + _constant-1.13_ (no reset) & -16 & -703
_simple-2_ + _constant-1.14_ (no reset) & -125 & -319

Figures [fig:simplevssquare1] and [fig:simplevssquare2] plot the typical curves obtained with ESBAS selecting from a portfolio of two learning algorithms.
on figure [ fig : simplevssquare1 ] , the esbas curve tends to reach more or less the best algorithm in each point as expected .surprisingly , figure [ fig : simplevssquare2 ] reveals that the algorithm selection ratios are not very strong in favour of one or another at any time .indeed , the variance in trajectory set collection makes _ simple _ better on some runs until the end .esbas proves to be efficient at selecting the best algorithm for each run and unexpectedly obtains a negative relative pseudo - regret of -90 .more generally , table [ tab : recap ] reveals that most of such two - fold portfolios with learning algorithms actually induced a strongly negative relative pseudo - regret .figures [ fig : squarevsconstant1 ] and [ fig : squarevsconstant2 ] plot the typical curves obtained with esbas selecting from a portfolio constituted of a learning algorithm and an algorithm with a deterministic and stationary policy .esbas succeeds in remaining close to the best algorithm at each epoch .one can also observe a nice property : esbas even dominates both algorithm curves at some point , because the constant algorithm helps the learning algorithm to explore close to a reasonable policy .however , when the deterministic strategy is not so good , the reset of the stochastic bandits is harmful . as a result, learner - constant portfolios may yield quite strong relative pseudo - regrets : in figure [ fig : squarevsconstant1 ] .however , when the constant algorithm expected return is over 1.13 , slightly negative relative pseudo - regrets may still be obtained .subsection [ sec : noarmreset ] offers a straightforward improvement of esbas when one or several algorithm are known to be constant .esbas also performs well on larger portfolios of 8 learners ( see figure [ fig:8learners1 ] ) with negative relative pseudo - regrets : ( and against the worst algorithm ) , even if the algorithms are , on average , almost selected uniformly as figure [ fig:8learners2 ] reveals .esbas offers some staggered learning , but more importantly , early bad policy accidents in learners are avoided . the same kind of results are obtained with 4-learner portfolios .if we add a constant algorithm to these larger portfolios , esbas behaviour is generally better than with the constant vs learner two - fold portfolios .we interpret esbas s success at reliably outperforming the best algorithm in the portfolio as the result of the four following potential added values : * calibrated learning : esbas selects the algorithm that is the most fitted with the data size .this property allows for instance to use shallow algorithms when having only a few data and deep algorithms once collected a lot . *diversified policies : esbas computes and experiments several policies .those diversified policies generate trajectories that are less redundant , and therefore more informational . as a result , the policies trained on these trajectories should be more efficient . *robustness : if one algorithm learns a terrible policy , it will soon be left apart until the next policy update .this property prevents the agent from repeating again and again the same blatant mistakes .* run adaptation : obviously , there has to be an algorithm that is the best on average for one given task at one given meta - time . 
butdepending on the variance in the trajectory collection , it is not necessarily the best one for each run .the esbas meta - algorithm tries and selects the algorithm that is the best at each run .all these properties are inherited by algorithm selection similarity with ensemble learning .simply , instead of a vote amongst the algorithms to decide the control of the next transition , esbas selects the best performing algorithm . in order to look deeper into the variance control effect of algorithm selection , in a similar fashion to ensemble learning , we tested two portfolios : four times the same algorithm _n-2-simple _ , and four times the same algorithm _n-3-simple_. the results show that they both outperform the _simple _ algorithm baseline , but only slightly ( respectively and ) .our interpretation is that , in order to control variance , adding randomness is not as good as changing hypotheses , _i.e. _ state space representations .esbas s worst results concern small portfolios of algorithms with constant policies .these ones do not improve over time and the full reset of the -multi armed bandit urges esbas to explore again and again the same underachieving algorithm . one easy way to circumvent this drawback is to use the knowledge that these constant algorithms do not change and prevent their arm from resetting . by operating this way ,when the learning algorithm(s )start(s ) outperforming the constant one , esbas simply neither exploits nor explores the constant algorithm anymore .figure [ fig : no - arm - reset ] displays the learning curve in the no - arm - reset configuration for the constant algorithm .one can notice that esbas s learning curve follows perfectly the learning algorithm s learning curve when this one outperforms the constant algorithm and achieves a strong negative relative pseudo - regret of -125 .still , when the constant algorithm does not perform as well as in figure [ fig : no - arm - reset ] , another harmful phenomenon happens : the constant algorithm overrides the natural exploration of the learning algorithm in the early stages , and when the learning algorithm finally outperforms the constant algorithm , its exploration parameter is already low .this can be observed in experiments with constant algorithm of expected return inferior to 1 , as reported in table [ tab : recap ] .several results show that , in practice , the assumptions are transgressed .firstly , assumption [ ass : compatibility ] , which states that more initial samples would necessarily help further learning convergence , is violated when the -greedy exploration parameter decreases with meta - time and not with the number of times this algorithm has been selected . indeed , this is the main reason of the remaining mitigated results obtained in subsection [ sec : noarmreset ] : instead of exploring in early stages , the agent selects the constant algorithm which results in generating over and over similar fair but non optimal trajectories .finally , the learning algorithm might learn slower because of being decayed without having explored .secondly , we also observe that assumption [ ass : fairlearning ] is transgressed .indeed , it states that if a trajectory set is better than another for a given algorithm , then it s the same for the other algorithms. 
this assumption does not prevent calibrated learning , but it prevents the run adaptation property introduced in subsection [ sec : reasons ] that states that an algorithm might be the best on some run and another one on other runs .still , this assumption infringement does not seem to harm the experimental results .it even seems to help in general .thirdly , off - policy reinforcement learning algorithms exist , but in practice , we use state space representations that distort their off - policy property .however , experiments do not reveal any obvious bias related to the off / on - policiness of the trajectory set the algorithms train on . andfinally , let us recall here the unfairness of the exploration chosen by algorithms that has already been noticed in subsection [ sec : assumptions ] and that also transgresses assumption [ ass : fairlearning ] .nevertheless , experiments did not raise any particular bias on this matter .related to algorithm selection for rl , consists in using meta - learning to tune a fixed reinforcement algorithm in order to fit observed animal behaviour , which is a very different problem to ours . in ,the reinforcement learning algorithm selection problem is solved with a portfolio composed of online rl algorithms . in those articles ,the core problem is to balance the budget allocated to the sampling of old policies through a _lag _ function , and budget allocated to the exploitation of the up - to - date algorithms .their solution to the problem is thus independent from the reinforcement learning structure and has indeed been applied to a noisy optimisation solver selection .the main limitation from these works relies on the fact that _ on - policy _ algorithms were used , which prevents them from sharing trajectories among algorithms .meta - learning specifically for the eligibility trace parameter has also been studied in .a recent work studies the learning process of reinforcement learning algorithms and selects the best one for learning faster on a new task .this approach assumes several problem instances and is more related to the batch algorithm selection ( see section [ sec : batch ] ) . as for rlcan also be related to ensemble rl . uses combinations of a set of rl algorithms to build its online control such as policy voting or value function averaging .this approach shows good results when all the algorithms are efficient , but not when some of them are underachieving .hence , no convergence bound has been proven with this family of meta - algorithms . 
horde and multi - objective ensemble rl are algorithms for hierarchical rl and do not directly compare with as .regarding policy selection , esbas advantageously compares with the rl with policy advice s regret bounds of on static policies .in this article , we tackle the problem of selecting online off - policy reinforcement learning algorithms .the problem is formalised as follows : from a fixed portfolio of algorithms , a meta - algorithm learns which one performs the best on the task at hand .fairness of algorithm evaluation is granted by the fact that the rl algorithms learn off - policy .esbas , a novel meta - algorithm , is proposed .its principle is to divide the meta - time scale into epochs .algorithms are allowed to update their policies only at the start each epoch .as the policies are constant inside each epoch , the problem can be cast into a stochastic multi - armed bandit .an implementation with ucb1 is detailed and a theoretical analysis leads to upper bounds on the short - sighted regret .the negotiation dialogue experiments show strong results : not only esbas succeeds in consistently selecting the best algorithm , but it also demonstrates its ability to perform staggered learning and to take advantage of the ensemble structure .the only mitigated results are obtained with algorithms that do not learn over time . a straightforward improvement of the esbas meta - algorithm is then proposed and its gain is observed on the task . as for next steps , we plan to work on an algorithm inspired by the _ lag _ principle introduced in , and apply it to our off - policy rl setting .1.81 [ cols="<,<,<",options="header " , ] & section [ sec : algos ] + _ n-1-fast-2 _ & f with & section [ sec : algos ] + _ constant- _ & non - learning algorithm with average performance & section [ sec : algos ] + _ _ & number of noisy features added to the feature set & section [ sec : algos ] + 1.81.5 [ tab : gloss2 ]from definition [ def : abs ] : ^_abs(t ) = t^*_- _ , ^_abs(t ) = t^*_- _ _ , ^_abs(t ) = _ _ , where is the subset of with all the trajectories generated with algorithm , where is the index of the ^th^ trajectory generated with algorithm , and where is the cardinality of finite set . by convention , let us state that if .then : ^_abs(t ) = _ _ i=1^t _ . to conclude , let us prove by mathematical induction the following inequality : _ _ ^ is true by vacuity for : both left and right terms equal .now let us assume the property true for and prove it for : _ = _ , _ = _ , _ = _ . if , by applying mathematical induction assumption , then by applying assumption [ ass : compatibility ] and finally by applying assumption [ ass : monotony ] recursively , we infer that : _ _ ^ , __ ^ , _ _ ^ , _ _^. if , the same inequality is straightforwardly obtained , since , by convention , and since , by definition .the mathematical induction proof is complete .this result leads to the following inequalities : ^_abs(t ) _ _ i=1^t _ ^ , ^_abs(t ) _^^_abs(t ) , ^_abs(t ) k_^^_abs(t ) , which leads directly to the result : , _ abs^(t)(_^k _abs^^k(t ) ) ._ this proof may seem to the reader rather complex for such an intuitive and loose result but algorithm selection and the algorithms it selects may act tricky . for instance selecting algorithm when the collected trajectory sets contains misleading examples ( _ i.e. _ with worse expected return than with an empty trajectory set ) implies that the following unintuitive inequality is always true : . 
in order to control all the possible outcomes , one needs to translate the selections of algorithm into s view . _by simplification of notation , . from definition[ def : ss ] : ^^esbas_ss(t ) = _ ^esbas , ^^esbas_ss(t ) = _ ^esbas , ^^esbas_ss(t ) _ ^esbas , ^^esbas_ss(t ) _= 0^_2(t ) _ss^^esbas ( ) , [ eq : th2:epochregret ] where is the epoch of meta - time .a bound on short - sighted pseudo - regret for each epoch can then be obtained by the stochastic bandit regret bounds in : _ ss^^esbas ( ) = _ ^esbas , _ss^^esbas ( ) ( ) , _ss^^esbas ( ) ( ) , _ 10 , _ss^^esbas ( ) , where = _ and where_^= \ { ll + & if _ ^= _ _ ^ , + _ _ ^-_^ & otherwise . .since we are interested in the order of magnitude , we can once again only consider the upper bound of : _ ( ) , ( _ ) , _ 20 , , where the second best algorithm at epoch such that is noted .injected in equation [ eq : th2:epochregret ] , it becomes : _ ss^^esbas(t ) _1_2_=0^_2(t ) , [ eq : deltaminbound ] which proves the result . if , then , where . means that only one algorithm converges to the optimal asymptotic performance and that such that , , such that , . in this case, the following bound can be deduced from equation [ eq : deltaminbound ] : _ ss^^esbas(t ) _ 4 + _ = _1^(t ) , _ ss^^esbas(t ) _ 4 + [ eq : cor3:result ] , where is a constant equal to the short - sighted pseudo - regret before epoch : _ 4 = _ ss^^esbas(2^_1 - 1 ) equation [ eq : cor3:result ] directly leads to the corollary . if , then .if decreases slower than polynomially in epochs , which implies decreasing polylogarithmically in meta - time , _ i.e. _ , , such that , , then , from equation [ eq : deltaminbound ] : _ ss^^esbas(t ) _ 6 + _ = _2^(t ) , _ ss^^esbas(t ) _ 6 + _ = _2^(t ) ^m^+1 , _ss^^esbas(t ) ^m^+2(t ) , [ eq : cor4:result ] where is a constant equal to the short - sighted pseudo - regret before epoch : _ 6 = _ ss^^esbas(2^_2 - 1 ). equation [ eq : cor4:result ] directly leads to the corollary . if , then . if decreases slower than a fractional power of meta - time , then , , , such that , , and therefore , from equation [ eq : deltaminbound ] : _ ss^^esbas(t ) _ 8 + _ = _3^(t ) , _ ss^^esbas(t ) _ 8 + _ = _3^(t ) , _ ss^^esbas(t ) _ 8 + _ = _3^(t ) ( 2^c^)^ , [ eq : sumxex ] where is a constant equal to the short - sighted pseudo - regret before epoch : _ 8 = _ ss^^esbas(2^_3 - 1 ) .the sum in equation [ eq : sumxex ] is solved as follows : _ i = i_0^n ix^i = x_i = i_0^n ix^i-1 , _ i = i_0^n ix^i = x_i = i_0^n , _ i = i_0^n ix^i = x , _ i = i_0^n ix^i = x , _ i = i_0^n ix^i = ( ( x-1)nx^n - x^n -(x-1)i_0x^i_0 - 1 + x^i_0 ) .this result , injected in equation [ eq : sumxex ] , induces that , , : _ ss^^esbas(t ) _ 8 + ( t ) 2^c^(t ) , _ ss^^esbas(t ) _ 8 + t^c^ ( t ) , which proves the corollary .the esbas absolute pseudo - regret is written with the following notation simplifications : and : note that is the optimal constant algorithm selection at horizon , but it is not necessarily the optimal algorithm selection : there might exist , and there probably exists a non constant algorithm selection yielding a smaller pseudo - regret .the esbas absolute pseudo - regret can be decomposed into the pseudo - regret for not having followed the optimal constant algorithm selection and the pseudo - regret for not having selected the algorithm with the highest return , _i.e. 
_ between the pseudo - regret on the trajectory and the pseudo - regret on the immediate optimal return : where is the expected return of policy , learnt by algorithm on trajectory set , which is the trajectory subset of obtained by removing all trajectories that were not generated with algorithm . on the one side ,assumption [ ass : fairlearning ] of fairness states that one algorithm learns as fast as any another over any history .the asymptotically optimal algorithm(s ) when is(are ) therefore the same one(s ) whatever the the algorithm selection is . on the other side ,let denote the probability , that at time , the following inequality is true : with probability , inequality [ eq : subsize ] is not guaranteed and nothing can be inferred about , except it is bounded under by .let be the subset of such that .then , can be expressed as follows : let consider the set of all sets such that and such that last trajectory in was generated by . since esbas , with , a stochastic bandit with regret in ,guarantees that all algorithms will eventually be selected an infinity of times , we know that : we recall here that the stochastic bandit algorithm was assumed to guarantee to try the best algorithm at least times with high probability and .now , we show that at any time , the longest stochastic bandit run ( _ i.e. _ the epoch that experienced the biggest number of pulls ) lasts at least : at epoch , the meta - time spent on epochs before is equal to ; the meta - time spent on epoch is equal to ; the meta - time spent on epoch is either below , in which case , the meta - time spent on epoch is higher than , or the meta - time spent on epoch is over and therefore higher than .thus , esbas is guaranteed to try the best algorithm at least times with high probability and . as a result :
Dialogue systems rely on a careful reinforcement learning design: the learning algorithm and its state-space representation. Lacking more rigorous knowledge, the designer resorts to practical experience to choose the best option. In order to automate and improve this process, this article formalises the problem of online off-policy reinforcement learning algorithm selection. A meta-algorithm receives as input a portfolio of several off-policy reinforcement learning algorithms; at the beginning of each new trajectory it determines which algorithm in the portfolio controls the behaviour during the full next trajectory, so as to maximise the return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates during each epoch and to leave a restarted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality given the structural sampling budget limitations. ESBAS is then put to the test in a set of experiments with various portfolios on a negotiation dialogue game. The results show the practical benefits of algorithm selection for dialogue systems, in most cases even outperforming the best algorithm in the portfolio, even when the aforementioned assumptions are transgressed.
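Since ESBAS is only described in prose here, a minimal sketch of the epoch/bandit structure may help fix ideas. It is an illustration, not the authors' code: the portfolio interface (`update`, `policy`), the environment call `run_episode`, and the epoch-doubling schedule are assumptions, the arm index follows the standard UCB1 formula, and returns are assumed rescaled to [0, 1].

```python
import math

def esbas(portfolio, env, n_epochs):
    """Minimal sketch of ESBAS: epoch-wise UCB1 over a portfolio of off-policy
    RL algorithms. The interface (update, policy, run_episode) is assumed."""
    trajectories = []                              # off-policy trajectory set shared by all algorithms
    for beta in range(n_epochs):
        for algo in portfolio:                     # policies are updated only at the start of the epoch
            algo.update(trajectories)
        counts = [0] * len(portfolio)              # bandit statistics, restarted at every epoch
        sums = [0.0] * len(portfolio)
        for t in range(2 ** beta):                 # epoch beta lasts 2**beta trajectories (assumed schedule)
            if t < len(portfolio):
                k = t                              # play every arm once before using the UCB1 index
            else:
                k = max(range(len(portfolio)),
                        key=lambda i: sums[i] / counts[i]
                        + math.sqrt(2.0 * math.log(t + 1) / counts[i]))
            traj, ret = env.run_episode(portfolio[k].policy)   # selected algorithm controls the whole trajectory
            trajectories.append(traj)              # every algorithm may later learn from it
            counts[k] += 1
            sums[k] += ret                         # returns assumed rescaled to [0, 1]
    return trajectories
```

Because the policies are frozen inside an epoch, each epoch is an ordinary stochastic bandit problem, which is what the theoretical analysis exploits.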
the occurrence of rare events can vastly contribute to the evolution of physical systems because of their potential dramatic effects .their understanding has gathered a strong interest and , focusing on stochastic dynamics , a large variety of numerical methods have been developed to study their properties .they range from transition path sampling to `` go with the winner '' algorithms and discrete - time or continuous - time population dynamics ( see for a review ) , and they have been generalized to many contexts . in physics , those are being increasingly used in the study of complex systems , for instance in the study of current fluctuations in models of transport , glasses , protein folding and signalling networks .mathematically , the procedure amounts to determining a large deviation function ( ldf ) associated to the distribution of a given trajectory - dependent observable , which in turns can be reformulated in finding the ground state of a linear operator ( see for a recent review of many aspects of this correspondence ) .in fact , this question is common to both statistical and quantum physics , and the very origin of population dynamics methods lies in the quantum monte - carlo algorithm .the idea of population dynamics is to translate the study of a class of rare trajectories ( with respect to a determined global constraint ) into the evolution of several copies of the original dynamics , with a local - in - time selection process rendering the occurrence of the rare trajectories typical in the evolved population .the decay or growth of the population is in general exponential , at a rate which is directly related to the distribution of the class of rare trajectories in the original dynamics . two versions of such algorithms exist : the non - constant total population and the constant total population , for which a uniform pruning / cloning is applied on top of the cloning dynamics so as to avoid the exponential explosion or disappearance of the population . while the later version is obviously more computer - friendly , the former version presents interesting features : first , it is directly related to the evolution of biological systems ( stochastic jumps representing mutations , selection rules being interpreted as darwinian pressure ) ; second , the uniform pruning / cloning of the population , although unbiased , induces correlations in the dynamics that one might want to avoid ; last , in some situations where the selection rates are very fluctuating , the constant - population algorithm can not be used in practice because of finite - population effects ( population being wiped out by a single clone ) , and one has to resort to the non - constant one . in this article ,we focus on the non - constant population algorithm , that we study numerically in a simple model where its implementation and its properties can be examined in great details . in section [ the cloning algorithm and the large deviation function ] ,we recall for completeness the relation between large deviations and the precise population dynamics . in section[ sec : avepopldf ] we describe issues related to the averaging of distinct runs , that we quantify in section [ parallel behaviour in log - populations ] . 
in section [ sec : time_correct ] we propose a new method to increase the efficiency of the population dynamics algorithm by applying a realization - dependent time delay , and we present the results of its application in section [ sec : psi_timedelay ] .we characterize numerically the distribution of these time delays in section [ sec : time - delay - prop ] .our conclusions and perspectives are gathered in section [ sec : discussion ] .a method commonly used in order to determine the large deviation function is the so - called `` cloning algorithm '' .this method has its origin in the study of continuous - time markov chains and their dynamics . in this sectionwe make a review of the theoretical background behind the algorithm , of how populations are generated and of how the ldf is evaluated .let be the set of possible configurations of a system which evolves continuously in time with jumps from to occurring at transition rate .the probability to find the system at time in configuration evolves in time following the master equation where is the escape rate from configuration .if we define an additive observable over trajectories of the system ( extensive in time , such as the number of configuration changes along the trajectory ) , which increases by an amount each time the system changes from to , the probability can be detailed into .this probability is defined as the probability of finding the system at time in the configuration and with a value of the observable . in this case we can bias the statistical weight of histories by introducing a parameter which fixes the average value of , such that favors its non - typical values ( characterizes the non - biased case ) and for , where are the `` -modified '' rates . this new stochastic process is called `` -modified dynamics '' and can be conveniently rewritten as \hat{p}(c , s , t)\ ] ] where and . equation ( [ eq:4 ] ) can be seen as the evolution equation of the ( non - conserved ) probability with rates , supplemented with a population dynamics where configurations are multiplied at a rate ] .the next evolution of each copy will occur at where is chosen from a exponential law of parameter , and is drawn independently for every copy .5 . if , is erased .if , we make copies of . the repetition of this procedure will result ( after an enough time ) in an exponential growth ( or decay ) of the number of clones .we restrict for simplicity our study to situations were the ldf is positive and the population thus increases in time . we can keep track of the different changes in the number of clones and of the times where these changes occur and we will denote by the time - dependent population . once we have generated , we can compute the ldf from the slope in time of the log - population , which constitutes an evaluation of the population growth rate .this can be done in different ways , for example by fitting by and taking the ldf as or also by computing from where and are the maximum and minimum values for and and their respective times. we will refer later to this procedure as the `` bulk '' slope estimator of the ldf .note that in some situations one can extend the previously described algorithm to keep population constant , by uniformly pruning or cloning the copies at each step so as to effectively preserve the total population size without biasing its evolution .however , we are interested in situations where such approach can not be applied in practice , for instance when the cloning rate is highly fluctuating . 
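The procedure of steps 1-6 above can be written event by event for a finite-state jump process. This is a hedged sketch rather than the exact implementation used here: the observable is taken to be the activity, so the modified rates are e^{-s} times the original ones, and the cloning factor y = floor(exp(Δt (r_s(c) - r(c))) + u), with u uniform in [0, 1), is the standard continuous-time rule, stated as an assumption consistent with the description in the text. In practice one also stops the run at a maximum population.

```python
import heapq
import math
import random

def cloning_run(states, rates, s, n_init, t_max):
    """Non-constant-population cloning sketch for a finite-state jump process.
    `rates[c]` maps each state c to {c_prime: W(c -> c_prime)}; the biased
    observable is the activity (one unit per jump)."""
    def escape(c):
        return sum(rates[c].values())

    events = []                                        # heap of (jump time, waited time, state), one entry per clone
    for _ in range(n_init):
        c = random.choice(states)
        dt = random.expovariate(math.exp(-s) * escape(c))      # waiting time drawn with the modified escape rate
        heapq.heappush(events, (dt, dt, c))
    history = [(0.0, n_init)]                          # population size N(t)

    while events and events[0][0] <= t_max:
        t, dt, c = heapq.heappop(events)
        r = escape(c)
        # cloning factor for the time dt spent in c (assumed standard rule)
        y = int(math.exp(dt * (math.exp(-s) - 1.0) * r) + random.random())
        if y == 0:
            history.append((t, len(events)))           # this clone is pruned
            continue
        targets, weights = zip(*rates[c].items())
        c_new = random.choices(targets, weights=weights)[0]    # destination of the jump
        for _ in range(y):                             # the clone is replaced by y copies at the new state
            dt_new = random.expovariate(math.exp(-s) * escape(c_new))
            heapq.heappush(events, (t + dt_new, dt_new, c_new))
        history.append((t, len(events)))
    return history                                     # log N(t) grows roughly as psi(s) * t at large times
```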
throughout this articlewe focus our attention on a toy system where population discreteness can be studied simply : the birth and death process in one site .the system presents two states and and the transition rates read and so that equation ( [ eq:1 ] ) for this process becomes additionally , for our purposes , we will consider as additive observable the activity for which : represents the total number of configuration changes up to final time .an advantage of considering this process for our analysis is that the large deviation function for the activity can be determined analytically .the large - time cumulant generating function , also corresponds to the maximum eigenvalue of following matrix ( see equation ( [ eq:5 ] ) ) which it is found to be {1 - 4c(1-c)(1-e^{-2s})}\ ] ] equation ( [ eq:9 ] ) will allow us to assess the quality of our numerical results .the inverse of the difference between the eigenvalues of allows us to define the typical convergence time to the large time behaviour for equation ( [ eq:7 ] ) .as we mentioned before , the cloning algorithm results ( as time goes to infinity ) in an exponential growth ( for ) or decay ( for ) of the number of clones .as we will see later , the `` discreteness effects '' in the evolution of our populations are strong at initial times .that is why the determination of the ldf using this algorithm is constrained not only to the parameters , the initial number of clones and the number of realizations but also to the final time ( or the maximum population ) until which the process evolves in the numerical procedure . in order to obtain an accurate estimation of , we should average several realizations of the procedure described in section [ populations and the large deviation function ] . to perform this average, we will define below a procedure that we have called `` merging '' which will allow us to determine in a systematic way the average population from which we can obtain an estimation of the ldf .noteworthy , this erroneously could be seen as obtaining from the growth rate of the average ( or equivalently the sum ) of several runs of the population dynamics .this procedure would be incorrect since it amounts to performing a single run of the total population of the different runs , with a dynamics that would partition the total population into _ non - interacting _ sub - populations , while , as described in section [ populations and the large deviation function ] , the population dynamics induces effective interactions among the whole set of copies inside the population .in fact , the right way of performing this numerical estimate comes from computing from the average growth rate of several runs of the population _i.e. _ , from taking the average of the slopes of several instead of the slope of .the two results differ in general since .one can expect that the two results become equivalent in the large limit as the distribution of growth rate should become sharply concentrated around its average value ; however , they are different in the finite regime that we are interested in .let s consider populations .the average population is defined as . in order to compute , we introduce a procedure that we have called `` merging '' of populations which is described below . 
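Before turning to the merging procedure described next, the benchmark of equation [eq:9] can be checked directly by diagonalising the tilted matrix of equation [eq:8]. The rates c (for the jump from 0 to 1) and 1 - c (for the jump back) are read off from the closed-form expression; the numerical values of s and c below are only illustrative.

```python
import numpy as np

def psi_numeric(s, c):
    """Largest eigenvalue of the s-tilted generator of the two-state process
    with rates c (0 -> 1) and 1 - c (1 -> 0); the activity bias multiplies
    the off-diagonal (jump) entries by exp(-s)."""
    w = np.array([[-c,              (1.0 - c) * np.exp(-s)],
                  [c * np.exp(-s),  -(1.0 - c)]])
    return np.linalg.eigvals(w).real.max()

def psi_exact(s, c):
    """Closed form of equation [eq:9]."""
    return -0.5 + 0.5 * np.sqrt(1.0 - 4.0 * c * (1.0 - c) * (1.0 - np.exp(-2.0 * s)))

# illustrative check for a few biases s < 0, where the population grows
for s in (-0.5, -0.2, -0.05):
    assert abs(psi_numeric(s, 0.3) - psi_exact(s, 0.3)) < 1e-12
```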
given and the result of merging these two populations is another population which represents the total number of clones for each time where a change in population for and has occurred .if is the average population for and , the merged population and the average population are related through .if we add , for example , to our previous result another population , the result is related to the average by .these `` merging '' procedure can be done for each of the populations in so that =\mathcal{m}(\mathcal{m}(\mathcal{m}( ...(\mathcal{m}(\mathcal{m}(n_{1},n_{2}),n_{3}),n_{4}), ... ),n_{j-1}),n_{j})\ ] ] is the result of systematically merging all the populations in .the average population can be recovered from ] .similarly , in the case of log - populations , ( where ) is obtained from merging all the populations in . is then computed from the slope of with ] where all the populations are defined .in other words , the average population in this interval takes into consideration all the populations while for times some populations have stopped evolving .this phenomenon is especially evident when considering a maximum population limit for the evolution of the populations ( figure [ fig : merge](a ) ) . as a consequence , depends on the distribution of final times of which are not necessarily equal to .+ an alternative that can be considered in order to overcome the influence of initial discreteness effects in the determination of is to get rid of the initial transient regime where these effects are present . in other words , to cut the initial time regime of our populations .let s call the initial cut in log - populations and equivalently the initial cut in times . is the distribution of times at . in that case , similarly as we analysed before , the average population represents only if the average is made in the interval ] .given and , we define the distance between these populations at ( with and ) , as where is the time interval spent at and is the time where changes to .evidently there are cases where but , but and and , however for these cases can also be computed . the last analysis ( and definitions )is also valid for log - populations . and enjoy interesting properties that we discuss below .+ in figure [ fig : distance1 ] , we show two log - populations and the distance between them .as can be seen in figure [ fig : distance1](a ) log - populations after a long enough time become parallel _ i.e. _ , once the populations have overcome the discreteness effects , the distance between them becomes constant as can be seen in figure [ fig : distance1](b ) .the region where the distance between populations is constant characterizes the exponential regime of the populations growth , _i.e. _ , the region where the discreteness effects are not strong anymore . + if we consider some population as reference , using the definitions above , it is possible to determine the distance between and the rest of populations in . in figure[ fig : distance2](a ) we show their average in light blue and its average over realizations {r} ] . 
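The merging operation can be implemented as a merge of step functions recorded as (time, size) change points. The sketch below assumes each run is stored as a time-sorted list of such points; a run keeps its last known value beyond its final change point, which reproduces the behaviour discussed above for runs that stop evolving early.

```python
def merge(pop_a, pop_b):
    """Merge two populations stored as time-sorted (time, size) change points.
    The result records the total size at every time where either input changed."""
    times = sorted({t for t, _ in pop_a} | {t for t, _ in pop_b})
    merged, ia, ib = [], 0, 0
    va = vb = 0
    for t in times:
        while ia < len(pop_a) and pop_a[ia][0] <= t:
            va = pop_a[ia][1]; ia += 1          # last value of run A at or before t
        while ib < len(pop_b) and pop_b[ib][0] <= t:
            vb = pop_b[ib][1]; ib += 1          # last value of run B at or before t
        merged.append((t, va + vb))
    return merged

def average_population(populations):
    """Systematically merge all runs, then divide by their number J."""
    total = populations[0]
    for pop in populations[1:]:
        total = merge(total, pop)
    return [(t, n / len(populations)) for t, n in total]
```

The same functions can be applied to log-populations by storing (time, log N) points instead of (time, N).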
by applying precisely this time delay correction to solve these two problems .first , we give more importance precisely to the region where the population growth is exponential .second , we omit naturally the very first initial times of the evolution of our populations .as we mentioned in section [ sec : bdp ] , the inverse of the difference between the eigenvalues of ( equation ( [ eq : ws ] ) ) allows us to define the typical convergence time to the large time behaviour for equation ( [ eq:7 ] ) .a crucial remark is that , as observed numerically , the duration before the population enters into the exponential regime is in fact larger than the time scale given by the gap : for instance , for the parameters used to obtain figure [ fig : delayedpop ] , from equation ( [ eq : tgap ] ) one has .the understanding of the duration of this discreteness effects regime would require a full analysis of the finite - population dynamics and its associated discreteness effects , which are not fully understood .we propose in this section a numerical procedure to reduce its influence .as can be seen from figure [ fig : delayedpop ] , and as it is proved in figure [ fig : varlogpop ] , the variance of log - populations ( black ) increases as a function of the time , faster during the transient regime , and slower during the exponential growth regime until the variance becomes constant .after the time - delay correction , the variance of the delayed log - population ( blue ) decreases to zero as a function of time .the -dependent decrease rate is shown in figure [ fig : varlogpop](b ) .as we discussed in section [ populations merging ] , the large deviation function can be recovered from the slope in time of the logarithm of the average population .we also mentioned in section [ discreteness effects at initial times ] , an alternative we can consider to overcome the discreteness effects would be to eliminate the initial transient regime where these effects are strong .as we will synthesize later , the improvement in the estimation of comes precisely from these two main contributions , the time delaying of populations and the discarding of the initial transient regime of the populations .let s call the analytical prediction for the large deviation function ( given by equation ( [ eq:9 ] ) ) . is obtained from the slope of the logarithm of the average population ( computed from merging several populations that have been generated using the cloning algorithm ) . is obtained through a time delay procedure over , as was described above .these two numerical estimations are in fact averages over realizations and over their last values .the method how and are computed is explained below .let s call an estimation of ( by some method ) as a function of the cut in log - population .if we consider as a set of cuts , is in fact .if ] over its last values , _i.e. _ , = \frac{1}{\gamma r } \sum_{i=\gamma-\gamma}^{\gamma } \sum_{r = 1}^{r } \psi_{*}^{r}(c_{n}^{i})\ ] ] as is shown in figure [ fig : psi1 ] .more details of the determination of these estimators are given in the subsection below .the estimators we defined in the last subsection can be obtained from the bulk " slope ( figure [ fig : psi2](a ) ) given by equation ( [ eq:6 ] ) and from the slope that comes from the affine fit of the average log - population by ( figure [ fig : psi2](b ) ) .figure [ fig : psi2 ] shows the average over realizations of the numerical estimators and as a function of the cut in log - population for . 
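The time-delay alignment and the two slope estimators compared in figure [fig:psi2] can be written compactly as below. The runs are assumed to have been resampled on a common time grid, and the delay rule used here (asymptotic vertical offset between log-populations divided by the common growth rate) is one natural reading of the procedure, not necessarily the exact definition of equation [eq:13].

```python
import numpy as np

def align_runs(t_grid, log_pops, fit_window):
    """Shift each run in time so that the exponential regimes overlap.
    `log_pops` has shape (runs, len(t_grid)); `fit_window` is a slice selecting
    the large-time regime where the curves are parallel."""
    ref = log_pops[0]
    slope = np.polyfit(t_grid[fit_window], ref[fit_window], 1)[0]     # common growth rate
    offsets = (log_pops[:, fit_window] - ref[fit_window]).mean(axis=1)
    delays = offsets / slope                                          # vertical lag turned into a time lag
    return delays - delays.mean(), [t_grid - d for d in delays]       # centred delays, shifted time axes

def psi_bulk(t, logpop, cut):
    """'Bulk' estimator of eq. [eq:6]: spread of the retained log-population
    divided by the time separating its extreme values."""
    t, logpop = t[cut:], logpop[cut:]
    return (logpop.max() - logpop.min()) / (t[logpop.argmax()] - t[logpop.argmin()])

def psi_fit(t, logpop, cut):
    """'Fit' estimator: slope of the affine fit a*t + b on the retained window."""
    return np.polyfit(t[cut:], logpop[cut:], 1)[0]
```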
as before , ] ( without the time delay " ) is shown in black . as we already mentioned , the estimation for becomes better if we discard the initial transient regime where the discreteness effects are strong .the black curves in figure [ fig : psi2 ] represent the standard way of estimating which come from the slope of the average ( log ) population , shown in dark green in figure [ fig : delayedpop](a ) for one realization .we can observe the effect of discarding the initial transient regime of these populations by cutting systematically this curve and computing from the growth rate computed on the interval ] and ] .the improvement in the determination of the ldf is measured through the relative distance of the numerical estimations with respect to the theoretical values and their errors .+ the relative distance between the estimator and the theoretical value is shown in figure [ fig : relatived ] . + .[ fig : relatived ] these distances were also computed from the `` bulk '' ( a ) and the `` fit '' slope ( b ) and with ( blue ) and without time delay ( black ) . as we can observe , the deviation from the theoretical value is larger for values of close to , but is smaller after the `` time correction '' for almost every value of . in figure[ fig : error ] we present the estimator error for defined as where is the number of realizations and is the standard deviation of . similarly as in previous results , the estimator error decreases as approaches to ( for both slopes ) and it is always smaller for for any value of . + .[ fig : error ]in this section we analyse properties of the distribution of time delays . this distribution has been centered with respect to its mean . in figure[ fig : timedelay](a ) , we show the variance ] of the cloning algorithm goes to zero as inducing a longer transient regime between the small and large population regimes .when we plot the variance in log - log scale , as in figure [ fig : timedelay](b ) , we can observe two linear regimes , one characterized by an exponent ( ] ) .they correspond to power - law behaviours in time of the variance of the delays , which remain to be understood .+ this dependence of the dispersion of time delays with can be better seen in the distribution of time delays shown in figure [ fig : histogram ] for various values of .this distribution is wider for values of closer to zero ( figure [ fig : histogram](a ) ) .however if we rescale the distributions of time delays by their respective , as shown in figure [ fig : histogram](b ) , the distributions become independent of as \hat{p } \left(\frac{\delta \tau } { \sigma_{s } \left[\delta\tau \right]}\right) ] which can be in fact very small and this can induce a poor estimation of ( figure [ fig : merge](b ) ) .complementary to this , we found a way of emphasizing the effects of the exponential growth regime in the determination of by using the fact that log - populations after a long enough time become parallel ( figure [ fig : distance1](a ) ) and that once the populations have overcome the discreteness effects , the distance between them becomes constant ( figure [ fig : distance1](b ) ) and the discreteness effects are not strong anymore ( section [ parallel behaviour in log - populations ] ) .we argue in section [ time delay correction ] that this initial discreteness effects or initial `` lag '' between populations could be compensated by performing over the populations a time translation ( equation ( [ eq:13 ] ) ) .this time delay procedure is chosen so as to overlap the 
population evolutions in their large-time regime (figure [fig:delayedpop](b)). The improvement in the estimation of the LDF comes precisely from these two main contributions: the time delaying of the populations and the discarding of their initial transient regime. We showed how the numerical estimations of the LDF improve as the initial instances of the populations are discarded (independently of the method used to compute the growth rate of the average population, see figure [fig:psi2]). It was also shown that if, in addition, we perform the time delay procedure, the estimation improves further and lies closer to the theoretical value (section ["bulk" and "fit" slopes]). This result was confirmed in section [relative distance and estimator error] by computing the relative distance of the numerical estimators with respect to the theoretical value, together with their errors. As we observed (figure [fig:relatived]), the deviation from the theoretical value is larger for values of s close to zero, but is smaller after the "time correction" for almost every value of s. The same holds for the estimator error (figure [fig:error]). Our numerical study was performed on a simple system, and we hope it can be extended to more complex phenomena. However, open questions remain even for the birth-and-death system we have studied. The duration of the initial discrete-population regime could be understood from an analytical study of the population dynamics itself. Our numerical results also support a power-law behaviour in time of the variance of the delays. Furthermore, it appeared that the distribution of the delays takes a universal form after rescaling its variance to one. Those observations open questions for future studies. Esteban Guevara thanks Khashayar Pakdaman for his support and discussions. This project was partially funded by the PEPS LABS and the LAABS INPHYNITI CNRS projects. Special thanks to the Ecuadorian government and the Secretaría Nacional de Educación Superior, Ciencia, Tecnología e Innovación, SENESCYT.
We analyse numerically the effects of small population size in the initial transient regime of a simple example of population dynamics. These effects play an important role in the numerical determination of large deviation functions of additive observables of stochastic processes. A method commonly used to determine such functions is the so-called cloning algorithm, which in its non-constant-population version essentially reduces to determining the growth rate of a population, averaged over many realizations of the dynamics. However, the averaging of populations depends strongly not only on the number of realizations of the population dynamics and on the initial population size, but also on the cut-off time (or population) at which their numerical evolution is stopped. This may result in an over-influence of the discreteness effects at initial times, caused by small population size. We overcome these effects by introducing a (realization-dependent) time delay in the evolution of the populations, in addition to discarding the initial transient regime of the population growth where these discreteness effects are strong. We show that the improvement in the estimation of the large deviation function comes precisely from these two main contributions. _Keywords_: cloning algorithm, large deviation function, population dynamics, birth-death process, biased dynamics, numerical approaches
the purpose of this paper is to obtain large deviation properties of stochastic differential equations with rapidly fluctuating coefficients in a form that can be used for accelerated monte carlo .such results are not available in the literature .we use methods from weak convergence and stochastic control . consider the -dimensional process satisfying the stochastic differential equation ( sde ) +\sqrt{\epsilon}\sigma\left ( x_{t}^{\epsilon},\frac{x_{t}^{\epsilon}}{\delta}\right ) dw_{t},\hspace{0.2cm}x_{0}^{\epsilon}=x_{0 } , \label{eq : ldpanda1}\ ] ] where as and is a standard -dimensional wiener process .the functions and are assumed to be smooth according to condition [ a : assumption1 ] and periodic with period 1 in every direction with respect to the second variable .if is of order while tends to zero , large deviations theory tells how quickly ( [ eq : ldpanda1 ] ) converges to the deterministic ode given by setting equal to zero .if is of order while tends to zero , homogenization occurs and one obtains an equation with homogenized coefficients . if the two parameters go to zero together then one expects different behaviors depending on how fast goes to zero relative to .using the weak convergence approach of , we investigate the large deviations principle ( ldp ) of under the following three regimes : the weak convergence approach results in a convenient representation formula for the large deviations action functional ( otherwise known as the rate function ) for all three regimes ( theorem [ t : maintheorem2 ] ) .it is based on the representation theorem [ t : representationtheorem ] , which in this case involves controlled sde s with fast oscillating coefficients . along the way, we obtain a uniform proof of convergence of the underlying controlled sde ( csde ) in all three regimes ( theorem [ t : maintheorem1 ] ) .in addition , in some cases we construct a control that nearly achieves the large deviations lower bound at the prelimit level .this control is useful , in particular , for the design of efficient importance sampling schemes .the particular use of the control will appear elsewhere .a motivation for this work comes from chemical physics and biology , and in particular from the dynamical behavior of proteins such as their folding and binding kinetics .it was suggested long ago ( e.g. , ) that the potential surface of a protein might have a hierarchical structure with potential minima within potential minima .the underlying energy landscapes of certain biomolecules can be rugged ( i.e. 
, consist of many minima separated by barriers of varying heights ) due to the presence of multiple energy scales associated with the building blocks of proteins .roughness of the energy landscapes that describe proteins has numerous effects on their folding and binding as well as on their behavior at equilibrium .often , these phenomena are described mathematically by diffusion in a rough potential where a smooth function is superimposed by a rough function ( see figure [ f : figure1 ] ) .a representative , but by no means complete , list of references is .the situation investigated in these papers is only a special case of equation ( [ eq : ldpanda1 ] ) with , and , where is the boltzmann constant and is the temperature .the questions of interest in these papers are related to the effect of taking with small but fixed .this is almost the same to requiring that goes to much faster than does .our goal is to study the related large deviations principle , so we take as well .it will become clear that the formula for the effective diffusivity ( denoted by in corollary [ c : maincorollary2 ] ) that appears in the aforementioned chemistry and biology literature is obtained under regime .singularly perturbed stochastic control problems and related large deviations problems have been studied elsewhere ( see for example and the references therein ) .in particular , in the authors study the large deviation problem for periodic coefficients , i.e. , and , using other methods . in ,the authors provide an explicit formula for the action functional in regime , whereas in regimes and the action functional is in terms of solutions to variational problems . in the present paper , we derive the same explicit expression for the action functional in regime .in addition , we also obtain the related control that nearly achieves the ldp lower bound at the prelimit level . for regimes and provide an alternative expression , from , for the action functional ( theorem [ t : maintheorem2 ] ) .it follows from these expressions that regime can be seen as a limiting case of regime by simply setting , though we are able to prove the large deviation lower bound in regime 3 only under additional conditions . for both regimeswe derive explicit expressions for the action functional in special cases of interest , and in regime 2 obtain a corresponding control that nearly achieves the ldp lower bound .note that the extension of the results of for regime 2 to include the is non - trivial , since several smoothness properties of the local rate function need to be proven ( see subsection [ s : boundedoptimalcontrolregime2 ] for details ) .apart from , regime has also been studied in under various assumptions and dependencies of the coefficients of the system on the slow and fast motion . in , the local rate functionis characterized as the legendre - fenchel transform of the limit of the normalized logarithm of an exponential moment or of the first eigenvalue of an associated operator . in the present paper, we provide a direct expression for the local rate function ( theorem [ t : maintheorem4regime2 ] ) .we note here that in the case of regime one can weaken the periodicity assumption , using the results of and the methodology of the present paper , and prove an analogous result when the fast variable takes values in . 
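Returning to the rough-potential motivation above, the following is a minimal Euler-Maruyama sketch of overdamped diffusion in a potential V(x) + eps*Q(x/delta). The particular smooth double well V, the oscillatory part Q, and the noise amplitude sqrt(2*kT) are illustrative choices made here for the sketch; they are not the precise correspondence with equation [eq:ldpanda1] used in the cited works.

```python
import numpy as np

def rough_potential_path(eps, delta, kT=1.0, dt=1e-4, n_steps=200_000, x0=0.0, seed=0):
    """Euler-Maruyama sketch of overdamped diffusion in the rough potential
    V(x) + eps * Q(x / delta); V and Q below are illustrative choices."""
    rng = np.random.default_rng(seed)
    grad_V = lambda x: x ** 3 - x          # smooth double well V(x) = x^4/4 - x^2/2
    grad_Q = lambda y: np.cos(y)           # rough part Q(y) = sin(y)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        # gradient of eps * Q(x / delta) with respect to x is (eps / delta) * Q'(x / delta)
        drift = -(grad_V(x[n]) + (eps / delta) * grad_Q(x[n] / delta))
        x[n + 1] = x[n] + drift * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    return x
```

Varying how fast delta shrinks relative to eps in such a simulation is what distinguishes the three regimes discussed in this paper.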
it also seems possible to combine the methods of the present paper together with results in to weaken the periodicity assumption for regime as well ; see remark [ r : thewholeeuclideanspace ] for more details .the paper is organized as follows . in section [s : main ] , we establish notation , review some preliminary results and state the general large deviations result ( theorem [ t : maintheorem2 ] ) . section [ s : limit ] considers the weak limit of the associated controlled stochastic differential equations . in section [ s : lowerboundlowersemicontinuity ]we prove the large deviations upper bound for all three regimes and the compactness of the level sets of the rate function .section [ s : laplaceprincipleregime1 ] contains the proof of the large deviations lower bound ( or equivalently laplace principle upper bound ) for regime , which completes the proof of the large deviations principle for regime .this section also discusses an explicit expression for a control that nearly achieves the large deviations lower bound in the prelimit level . in section [ s : laplaceprincipleregime2 ] , we prove the large deviations lower bound for regime and identify a control that nearly achieves this lower bound .section [ s : laplaceprincipleregime3 ] discusses the large deviations lower bound principle for regime and presents alternative expressions for the rate function in dimension .we work with the canonical filtered probability space equipped with a filtration that satisfies the usual conditions , namely , is right continuous and contains all -negligible sets . in preparation for stating the main results , we recall the concept of a laplace principle . throughout this paperonly random variables that take values in a polish space are considered . by definition , a rate function on a polish space maps into ] under regime , we also impose the following condition .[ a : assumption2 ] let be the unique invariant measure corresponding to the operator equipped with periodic boundary conditions in ( is being treated as a parameter here ) . under regime 1 , we assume the standard centering condition ( see ) for the unbounded drift term : where denotes the -dimensional torus .we note that under conditions [ a : assumption1 ] and [ a : assumption2 ] , for each there is a unique , twice differentiable function that is one periodic in every direction in , that solves the following cell problem ( for a proof see , theorem 3.3.4 ) : we write . our tool for proving the laplace principle will be the weak convergence approach of .the following representation theorem is essential for this approach . a proof of this theorem is given in .the control process can depend on but this is not always denoted explicitly . 
in the representation andelsewhere we take .analogous results hold for arbitrary .[ t : representationtheorem ] assume condition [ a : assumption1 ] , and given let be the unique strong solution to ( [ eq : ldpanda1 ] ) .then for any bounded borel measurable function mapping ;\mathbb{r}^{d}) ] respectively .let and let solve ( [ eq : ldpanda2 ] ) with in place of .we associate with and a family of occupation measures defined by dt , \label{def : occupationmeasures2}\ ] ] with the convention that if then .the first result , theorem [ t : maintheorem1 ] , deals with the limiting behavior of the controlled process ( [ eq : ldpanda2 ] ) under each of the three regimes , and uses the notion of a viable pair .[ def : viablepair ] a pair ;\mathbb{r}^{d})\times\mathcal{p}(\mathcal{z}\times\mathcal{y}\times\lbrack0,1]) ] , and the following hold for all ] note that a viable pair depends on the initial condition as well .since this is will be deterministic and fixed throughout the paper , we frequently omit writing this dependence explicitly .[ t : maintheorem1 ] given , consider any family of controls in satisfying and assume condition [ a : assumption1 ] .in addition , in regime 1 assume condition [ a : assumption2 ] .then the family is tight .hence given regime , , and given any subsequence of , there exists a subsubsequence that converges in distribution with limit . with probability ,the accumulation point is a viable pair with respect to according to definition [ def : viablepair ] , i.e. , . a proof is given in section [ s : limit ] .the following theorem is the main result of this paper .it asserts that a large deviation principle holds , and gives a unifying expression for the rate function for all three regimes .[ t : maintheorem2 ] let be the unique strong solution to ( [ eq : ldpanda1 ] ). assume condition [ a : assumption1 ] and that we are considering regime , where .in regime 1 assume condition [ a : assumption2 ] and in regime assume either that we are in dimension , or that and for the general multidimensional case .define }\left\vert z\right\vert ^{2}\mathrm{p}(dzdydt)\right ] , \label{eq : generalratefunction}\ ] ] with the convention that the infimum over the empty set is .then for every bounded and continuous function mapping ;\mathbb{r}^{d}) ] . in other words, satisfies the laplace principle with rate function .the proof of this theorem is given in the subsequent sections . in section [ s : laplaceprincipleregime1 ] we prove that the formulation given in ( [ eq : generalratefunction ] ) for the rate function takes an explicit form in regime which agrees with the formula provided in .we also construct a nearly optimal control that achieves the ldp lower bound ( or equivalently the laplace principle upper bound ) at the prelimit level , see theorem [ t : maintheorem3 ] . in sections[ s : laplaceprincipleregime2 ] and [ s : laplaceprincipleregime3 ] , similar constructions are provided for regimes and , respectively . 
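The displayed formulas in the statements above did not survive extraction intact. The following restatement is a plausible reconstruction, offered as an assumption rather than a verbatim copy: it relies on the epsilon/delta prefactor of the drift discussed in the introduction, the later use of gamma for regime 2, and the visible structure of the variational formula in (eq:generalratefunction).

```latex
% Reconstruction (assumed): slow-fast SDE, the three regimes, and the rate function
dX_t^{\epsilon} = \Big[\tfrac{\epsilon}{\delta}\, b\big(X_t^{\epsilon},\tfrac{X_t^{\epsilon}}{\delta}\big)
  + c\big(X_t^{\epsilon},\tfrac{X_t^{\epsilon}}{\delta}\big)\Big]\,dt
  + \sqrt{\epsilon}\,\sigma\big(X_t^{\epsilon},\tfrac{X_t^{\epsilon}}{\delta}\big)\,dW_t ,
\qquad
\lim_{\epsilon\downarrow 0}\frac{\epsilon}{\delta(\epsilon)}
  = \begin{cases}
      \infty & \text{regime } 1,\\
      \gamma\in(0,\infty) & \text{regime } 2,\\
      0 & \text{regime } 3,
    \end{cases}

S(\phi) \;=\; \inf_{(\phi,\mathrm{P}) \text{ viable}}
  \left[\tfrac{1}{2}\int_{\mathcal{Z}\times\mathcal{Y}\times[0,1]} |z|^{2}\,\mathrm{P}(dz\,dy\,dt)\right],
\qquad S(\phi)=+\infty \ \text{if no viable pair } (\phi,\mathrm{P}) \text{ exists.}
```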
in the case of regime prove the laplace principle lower bound for the general d - dimensional -dependent case .however , for reasons that will be explained in section [ s : laplaceprincipleregime3 ] , we can prove the laplace principle upper bound for the general case in dimension and under the assumption that and are independent of for the general multidimensional case .we conjecture that the full laplace principle holds without this restriction , and note also that the rate function for regime is a limiting case of that of regime obtained by setting .[ r : regularitycondition ] the regularity assumptions imposed in condition [ a : assumption1 ] can be relaxed . due to condition [ a : assumption1 ] , the solution to the cell problem ( [ eq : cellproblem ] ) is twice differentiable , which allows us to apply it s formula . consider the case and assume that they are lipschitz continuous .then , standard elliptic regularity theory ( e.g. , ) shows that the solution to equation ( [ eq : cellproblem ] ) is in . by sobolev s embedding lemma it is also in .then , using a standard approximation argument , one can still prove theorems [ t : maintheorem1 ] and [ t : maintheorem2 ] for regime .we conclude this section with a remark on possible extensions of theorem [ t : maintheorem2 ] to the case .[ r : thewholeeuclideanspace ] in the case of regime 1 and under some additional assumptions , one can extend the results to .in particular , one needs to impose structural assumptions on the coefficients and such that an invariant measure corresponding to the operator exists .also , note that for there are no boundary conditions associated with the cell problem ( [ eq : cellproblem ] ) .one looks for solutions that grow at most polynomially in , as . for more details and specific statements on homogenization for fast oscillating diffusion processes on the whole space ,see . using these results and techniques similar to the ones developed in the current paper, one can prove results that are analogous to theorem [ t : maintheorem1 ] and theorem [ t : maintheorem2 ] for regime and .the situation is a bit more complicated for regimes and .one of the main reasons is that the operators and involve the control variable as well .however , using results on the structure of solutions to ergodic type bellman equations in analogous to and techniques similar to the ones developed in the current paper , it is seems possible that one can prove a result that is analogous to theorem [ t : maintheorem2 ] for regime and .ergodic type bellman equations arise naturally in the study of the local rate function in subsection [ s : boundedoptimalcontrolregime2 ] . assuming special structure on the dynamics , the authors in and have looked at similar problems corresponding to regime when , using other methods . among other assumptions ,the author in assumes that the fast variable enters the equations of motion in an affine fashion , whereas the author in assumes that the diffusion coefficient of the fast motion is independent of the slow motion .however , the arguments used in do not seem to directly extend to the full nonlinear case .in this section we prove theorem [ t : maintheorem1 ] . 
in particular , in subsection [ ss : tightness ] we prove tightness of the pair and in subsection [ ss : convergenceallregime ] we prove that any accumulation point of is a viable pair according to definition [ def : viablepair ] for regimes .note that the approach is the same for all three regimes .therefore , we present the proof in detail for regime and for regimes and only outline the differences .in this section we prove that the pair is tight .the proof is independent of the regime under consideration .[ p : tightness]consider any family of controls in satisfying and assume condition [ a : assumption1 ] .in addition , in regime 1 assume condition [ a : assumption2 ] .then the following hold . 1 .the family is tight .the family is uniformly integrable in the sense that }\left\vert z\right\vert \mathrm{p}^{\epsilon,\delta}(dzdydt)\right ] = 0.\ ] ] ( i ) .tightness of the family is standard if we take into account the assumptions on the coefficients and the fact that the sequence of controls in satisfy ( [ a : uniformlyadmissiblecontrols ] ) .some care is needed only for regime , because of the presence of the unbounded drift term .recall that is one periodic in every direction in and satisfies applying it s formula to with , we get ds\nonumber\\ & + \int_{0}^{t}\left [ \epsilon\frac{\partial\chi}{\partial x}b+\delta \frac{\partial\chi}{\partial x}\left [ c+\sigma u_{s}^{\epsilon}\right ] + \epsilon\delta\frac{1}{2}\sigma\sigma^{t}:\frac{\partial^{2}\chi}{\partial x^{2}}+\epsilon\frac{1}{2}\sigma\sigma^{t}:\frac{\partial^{2}\chi}{\partial x\partial y}\right ] \left ( \bar{x}_{s}^{\epsilon},\frac{\bar{x}_{s}^{\epsilon}}{\delta}\right ) ds\label{eq : ldpregime1}\\ & + \sqrt{\epsilon}\int_{0}^{t}\left [ \left ( i+\frac{\partial\chi}{\partial y}\right ) \sigma+\delta\frac{\partial\chi}{\partial x}\sigma\right ] \left ( \bar{x}_{s}^{\epsilon},\frac{\bar{x}_{s}^{\epsilon}}{\delta}\right ) dw_{s}-\delta\left [ \chi\left ( \bar{x}_{t}^{\epsilon},\frac{\bar{x}_{t}^{\epsilon}}{\delta}\right ) -\chi\left ( x_{0},\frac{x_{0}}{\delta } \right ) \right ] .\nonumber\end{aligned}\ ] ] from this representation , the boundedness of the coefficients and the second derivatives of and assumption ( [ a : uniformlyadmissiblecontrols ] ) , it follows that for every = 0.\ ] ] this implies the tightness of .it remains to prove tightness of the occupation measures .we claim that the function }\left\vert z\right\vert ^{2}r(dzdydt),\hspace{0.2cm}r\in\mathcal{p}(\mathcal{z}\times\mathcal{y}\times\lbrack0,1])\ ] ] is a tightness function , i.e. , it is bounded from below and its level sets ):g(r)\leq k\} ] w.p.1 .thus it remains to show that satisfy ( [ eq : accumulationpointsprocessviable ] ) , ( [ eq : accumulationpointsmeasureviable ] ) and ( [ eq : accumulationpointsfullmeasureviable ] ). our tool for proving ( [ eq : accumulationpointsprocessviable ] ) will be the characterization of solutions to sde s via the martingale problem .let be smooth , real valued functions with compact support . 
for a measure ) ] , define }\phi _ { j}(z , y , s)r(dzdyds).\ ] ] let be given such that and let be a real valued , bounded and continuous function with compact support on .we recall that in order to prove ( [ eq : accumulationpointsprocessviable ] ) , it is sufficient to prove for any fixed such collection that , as , \right ] \rightarrow0 \label{eq : martingaleproblemregime1_1}\ ] ] and }\lambda(\bar{x}_{s},y , z)\nabla f(\bar{x}_{s})\mathrm{p}(dzdyds)\rightarrow0 \label{eq : martingaleproblemregime1_2a}\ ] ] in probability . here is defined by and since they show that solves the appropriate martingale problem , relations ( [ eq : martingaleproblemregime1_1 ] ) and ( [ eq : martingaleproblemregime1_2a ] ) imply ( [ eq : accumulationpointsprocessviable ] ) .so , let us prove now that ( [ eq : martingaleproblemregime1_1 ] ) and ( [ eq : martingaleproblemregime1_2a ] ) hold . first , for every real valued , continuous function with compact support and ] be countable and dense , and consider any and .by ( [ def : martingale]) + g(\epsilon ) e_{t}^{\epsilon}}\nonumber\\ & = \frac{1}{\delta}\int_{0}^{t}\frac{g(\epsilon)}{\delta}\left [ \int _ { t}^{t+\delta}\mathcal{g}_{\bar{x}_{s}^{\epsilon},\bar{y}_{s}^{\epsilon } , u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})ds\right ] dt+\frac { \epsilon}{\delta^{2}}\int_{0}^{t}\frac{g(\epsilon)}{\delta}\left [ \int _ { t}^{t+\delta}\mathcal{l}_{\bar{x}_{s}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon})ds\right ] dt\nonumber\\ & = \frac{g(\epsilon)}{\delta}\left ( \int_{0}^{t}\frac{1}{\delta}\int _ { t}^{t+\delta}\left [ \mathcal{g}_{\bar{x}_{s}^{\epsilon},\bar{y}_{s}^{\epsilon},u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})-\mathcal{g}_{\bar{x}_{t}^{\epsilon},\bar{y}_{s}^{\epsilon},u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})\right ] dsdt\right ) \nonumber\\ & \mbox{}+\frac{g(\epsilon)}{\delta}\left ( \int_{0}^{t}\frac{1}{\delta } \left [ \int_{t}^{t+\delta}\mathcal{g}_{\bar{x}_{t}^{\epsilon},\bar{y}_{s}^{\epsilon},u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})ds\right ] dt\right ) \nonumber\\ & \mbox{}+\frac{\epsilon g(\epsilon)}{\delta^{2}}\left ( \int_{0}^{t}\frac { 1}{\delta}\left [ \int_{t}^{t+\delta}\left [ \mathcal{l}_{\bar{x}_{s}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon})-\mathcal{l}_{\bar{x}_{t}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon})\right ] ds\right ] dt\right ) \nonumber\\ & \mbox{}+\frac{\epsilon g(\epsilon)}{\delta^{2}}\left ( \int_{0}^{t}\frac { 1}{\delta}\left [ \int_{t}^{t+\delta}\left [ \mathcal{l}_{\bar{x}_{t}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon})\right ] ds\right ]dt\right ) \nonumber\\ & = \frac{\delta}{\epsilon}\left ( \int_{0}^{t}\frac{1}{\delta}\left [ \int_{t}^{t+\delta}\left [ \mathcal{g}_{\bar{x}_{s}^{\epsilon},\bar{y}_{s}^{\epsilon},u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})-\mathcal{g}_{\bar{x}_{t}^{\epsilon},\bar{y}_{s}^{\epsilon},u_{s}^{\epsilon}}f_{\ell}(\bar{y}_{s}^{\epsilon})\right ] ds\right ] dt\right ) \nonumber\\ & \mbox{}+\frac{\delta}{\epsilon}\left ( \int_{\mathcal{z}\times \mathcal{y}\times\lbrack0,t]}\mathcal{g}_{\bar{x}_{t}^{\epsilon},y , z}f_{\ell } ( y)\mathrm{p}^{\epsilon,\delta}(dzdydt)\right ) \nonumber\\ & \mbox{}+\int_{0}^{t}\frac{1}{\delta}\left [ \int_{t}^{t+\delta}\left [ \mathcal{l}_{\bar{x}_{s}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon } ) -\mathcal{l}_{\bar{x}_{t}^{\epsilon}}^{1}f_{\ell}(\bar{y}_{s}^{\epsilon } ) \right ] ds\right ] dt\nonumber\\ & 
\mbox{}+\int_{\mathcal{z}\times\mathcal{y}\times\lbrack0,t]}\mathcal{l}_{\bar{x}_{t}^{\epsilon}}^{1}f_{\ell}(y)\mathrm{p}^{\epsilon,\delta}(dzdydt ) .\label{eq : martingaleproperty_1}\ ] ] first consider the left hand side of ( [ eq : martingaleproperty_1 ] ) . since is bounded ] is bounded above by a constant times , and so ( [ eq : martingaleproperty ] ) also follows from . finally , we claim that in probability . using condition [ a : assumption1 ] , for some constants and , \end{aligned}\ ] ] and hence the left hand side tends to zero in probability by ( [ a : uniformlyadmissiblecontrols ] ) and since .the same estimate holds for the second term in , and so the claim follows . next consider the right hand side of ( [ eq : martingaleproperty_1 ] ) .the first and the third term in the right hand side of ( [ eq : martingaleproperty_1 ] ) converge to zero in probability by the tightness of , condition [ a : assumption1 ] , ( [ a : uniformlyadmissiblecontrols ] ) and .the second term on the right hand side of ( [ eq : martingaleproperty_1 ] ) converges to zero in probability by the uniform integrability of and by the fact that .so , it remains to consider the fourth term .passing to the limit as , the previous discussion implies that except on a set of probability zero, let .then except on the set of probability zero , continuity in and denseness of imply that ( [ eq : alimit ] ) holds for all ] for every ] to deal with null sets , this property also follows . in this subsection we prove theorem [ t : maintheorem1 ] for .the proof for is similar and thus it is omitted . _ proof of theorem [ t : maintheorem1 ] for . _the proof follows the same steps as the proof of theorem [ t : maintheorem1 ] for , and hence only the differences are outlined .we have and the operator is defined as in ( [ eq : prelimitoperator ] ) , but with this particular function . the proof of ( [ eq : accumulationpointsprocessviable ] )can be carried out repeating the corresponding steps of the proof of theorem [ t : maintheorem1 ] for .a difference is that one skips the step of applying it s formula to that satisfies ( [ eq : cellproblem2 ] ) , since in this case we do not have an unbounded drift term .it remains to discuss ( [ eq : accumulationpointsmeasureviable ] ) .again , define and observe that for , smooth and dense in , defined by ( [ def : martingale ] ) is an . for any , small and recalling that in this case , + g(\epsilon ) e_{t}^{\epsilon}}\\ & \hspace{0.1cm}=\int_{0}^{t}\frac{1}{\delta}\left [ \int_{t}^{t+\delta } \left ( \epsilon\mathcal{a}_{u_{s}^{\epsilon},\bar{x}_{s}^{\epsilon}}^{\epsilon}-\gamma\mathcal{l}_{u_{s}^{\epsilon},\bar{x}_{s}^{\epsilon}}^{2}\right ) f_{\ell}(\bar{y}_{s}^{\epsilon})ds\right ] dt+\gamma\int_{0}^{t}\frac{1}{\delta}\left [ \int_{t}^{t+\delta}\mathcal{l}_{u_{s}^{\epsilon } , \bar{x}_{s}^{\epsilon}}^{2}f_{\ell}(\bar{y}_{s}^{\epsilon})ds\right ] dt.\end{aligned}\ ] ] observing that the operator converges to the operator , we can argue similarly to the corresponding part of the proof of theorem [ t : maintheorem1 ] for and conclude that this section we prove the laplace principle lower bound for theorem [ t : maintheorem2 ] and the compactness of the level sets of the action functional . for each ,let be the unique strong solution to ( [ eq : ldpanda1 ] ) . to prove the laplace principle lower boundwe must show that for all bounded , continuous functions mapping ;\mathbb{r}^{d}) ] . 
as usual with the weak convergence approach ,the proof is analogous to that of the laplace principle lower bound . in lemma[ l : equicontinuity ] we show precompactness of and in lemma [ l : lowersemicontinuous ] that it is closed . togetherthey imply compactness of .[ l : equicontinuity ] fix and consider any sequence such that for every is viable and }\left\vert z\right\vert ^{2}\mathrm{p}^{n}(dzdydt)<k.\ ] ] then is precompact . for any such that }\lambda(\phi_{s},y , z)\mathrm{p}(dzdyds)\right\vert \nonumber\\ & \leq c_{0}\left [ |t_{2}-t_{1}|+\sqrt{(t_{2}-t_{1})}\sqrt{\int _ { \mathcal{z}\times\mathcal{y}\times\lbrack t_{1},t_{2}]}\left\vert z\right\vert ^{2}\mathrm{p}(dzdydt)}\right ] .\nonumber\end{aligned}\ ] ] this implies the precompactness of .precompactness of follows from the compactness of ] and for every .the function and the operator are defined in definitions [ def : threepossiblefunctions ] and [ def : threepossibleoperators ] respectively . by fatous lemma has a finite second moment in .moreover , observe that the function and the operator are continuous in and and affine in .hence by assumption ( [ a : admissiblelimitingmeasures ] ) and the convergence and , satisfy equation ( [ eq : continuitylemma1_1 ] ) with replaced by . next we show that ( [ eq : continuitylemma1_2 ] ) holds with replaced by .since ( [ a : admissiblelimitingmeasures ] ) holds and , we can send in ( [ eq : continuitylemma1_2 ] ) and obtain finally , it follows from )=t ] for all ] into \leq \inf_{\phi\in\mathcal{c}([0,1];\mathbb{r}^{d})}\left [ s(\phi)+h(\phi)\right ] .\label{eq : laplaceprincipleupperboundregime1}\ ] ] let be given and consider ;\mathbb{r}^{d}) ] can be decomposed as stochastic kernels in the form moreover , by theorem [ t : maintheorem1 ] we have . we will use that both and are affine in . if \times\mathcal{y}\mapsto\mathbb{r}^{d}] _ proof of laplace principle upper bound for regime 2 ._ we need to prove that \leq \inf_{\phi\in\mathcal{c}([0,1];\mathbb{r}^{d})}\left [ s(\phi)+h(\phi)\right ] .\ ] ] for given we can find ;\mathbb{r}^{d}) ] anddefine .due to the linearity of integration , , and therefore taking the infimum over all admissible we get this proves the convexity , and completes the proof of the lemma. for any the subdifferential of at is defined by since is finite and convex is always nonempty .define and for let we have the following lemma .[ l : convexity2 ] consider any and any .then _ proof ._ first we prove that , which follows from for the opposite direction we use that .consider any and any .then since , the last display implies this concludes the proof of the lemma. [ l : boundedoptimalcontrolregime2 ] assume condition [ a : assumption1 ] . then there is a pair that achieves the infimum in the definition of the local rate function such that is , for any fixed , bounded and lipschitz continuous in .also , is the unique invariant measure corresponding to the operator .by lemma [ l : convexity2 ] we get that for any and , according to theorem 6.1 in , this optimization also has a representation via an ergodic control problem of the form where the infimum is over all progressively measurable controls and solutions to the controlled martingale problem associated with .[ the paper works with relaxed controls , but since here the dynamics are affine in the control and the cost is convex , the infima over relaxed and ordinary controls are the same .] 
the bellman equation associated with this control problem is = \rho .\label{eq : ergodiccontrolproblemregime2_1}\ ] ] using the standard vanishing discount approach and taking into account the periodicity condition ( see for example ) one can show that there is a unique pair , such that and is periodic in with period that satisfies ( [ eq : ergodiccontrolproblemregime2_1 ] ) . since we have a classical sense solution , by the verification theorem for ergodic control . in order to emphasize the dependence of on we write .it also follows from the verification argument that an optimal control is given by .compactness of the state space and the assumptions on the coefficients guarantee that the gradient of is bounded , i.e. , for some constant that may depend on .therefore , such an optimal control is indeed bounded and lipschitz continuous in .existence and uniqueness of the invariant measure follows from the latter and the non - degeneracy assumption .next , we prove that the local rate function is actually differentiable in .recall the operator \cdot\nabla_{y}+\gamma\frac{1}{2}\sigma(y)\sigma(y)^{t}:\nabla_{y}\nabla_{y}.\ ] ] for notational convenience we omit the superscript and write in place of .recall also that for a bounded and lipschitz continuous control there exists a unique invariant measure corresponding to .define the set of functions for a vector , and define the perturbed control for each there is a unique invariant measure corresponding to , and it is straightforward to show that in the weak topology as .moreover , under condition [ a : assumption1 ] , lemma 3.2 in guarantees that the invariant measures and have densities and respectively . in particular , there exist unique weak sense solutions to the equations where and are the formal adjoint operators to and respectively .the densities are strictly positive , continuous and in .observe that in the weak sense .next , for consider the auxiliary partial differential equation by the fredholm alternative and the strong maximum principle this equation has a unique solution .standard elliptic regularity theory yields . then by sobolev s embedding lemma we have that .denote by the usual inner product in .the following lemma will be useful in the sequel .[ l : differentiability1 ] let , , and the solution to ( [ eq : differentiabilityauxiliarypde ] ) . then , keeping in mind ( [ eq : differentiabilityperturbedoperator1 ] ) and that and are densities , the following hold this concludes the proof of the lemma . by lemma [ l : convexity1 ]we already know that is finite and convex . to show that is differentiable, it is enough to show its legendre transform is strictly convex . for \nonumber\\ & = \sup_{(v,\mu)\in\mathcal{b}^{2,o}}\left [ \left\langle \alpha , \int_{\mathcal{y}}\left ( \gamma b(y)+c(y)+\sigma(y)v(y)\right ) \mu(dy)\right\rangle -\int_{\mathcal{y}}\frac{1}{2}\left\vert v(y)\right\vert ^{2}\mu(dy)\right ] .\label{eq : dualfunction}\ ] ] [ l : differentiabilitymainlemma ] the legrendre transform of is a strictly convex function of .suppose that is not strictly convex .then there are not equal such that for all ] . as in lemma[ l : boundedoptimalcontrolregime2 ] , it can be shown that exists and can be chosen to be bounded and lipschitz continuous . 
also , is the unique invariant measure corresponding to the operator .we will argue that the last display is impossible .first observe that by subtracting we can arrange that is constant for , ] , and thus implies that is strictly convex .define by ( [ eq : differentiabilityauxiliarycontrol ] ) with and .for we have .the definition of by ( [ eq : dualfunction ] ) implies for , let be the solution to ( [ eq : differentiabilityauxiliarypde ] ) with , the component of .we write , and also denote by the solution to ( [ eq : differentiabilityauxiliarypde ] ) with .then by lemma [ l : differentiability1 ] the last display can be rewritten as-\frac{1}{2}\int_{\mathcal{y}}\left\vert \bar{u}(y)\right\vert ^{2}m(y)dy+o(\eta)\end{aligned}\ ] ] where is such that as and can be neglected .now for small ( perhaps negative ) this is strictly bigger than unless the term is zero , i.e. , unless however , in the argument by contradiction and can be replaced by any and , so long as . after performing this substitution and some algebra ,the last display becomes .\end{aligned}\ ] ] we claim that the last display can not be true since and . by considering various choices for and ,it is enough show that the term multiplying is not zero for all .let us assume the contrary , and that for all this implies that define then is a periodic , bounded and function .consider any trajectory such that .differentiation of and use of ( [ eq : falsestetement3 ] ) give which can not be true due to the periodicity and boundedness of .this implies that ( [ eq : falsestetement1 ] ) is false , i.e. , that there is such that this concludes the proof of the lemma .let us now recall the and prove that the local rate function is continuous in .[ l : continuitylocalratefunction ] the local rate function is continuous in .first , we prove that is lower semicontinuous in .we work with the relaxed formulation of the local rate function , but as noted previously .consider such that .we want to prove let such that .the definition of implies that we can find measures satisfying such that and \ ] ] it follows from ( [ l2bound ] ) and the definition of that is tight and any limit point of will be in .hence by fatou s lemma \\ & \geq\frac{1}{2}\int_{\mathcal{z}\times\mathcal{y}\times}\left\vert z\right\vert ^{2}\mathrm{p}(dzdy)\\ & \geq\inf_{\mathrm{p}\in\mathcal{a}_{x,\beta}^{2,r}}\left [ \frac{1}{2}\int_{\mathcal{z}\times\mathcal{y}\times}\left\vert z\right\vert ^{2}\mathrm{p}(dzdy)\right ] \\ & = l_{2}^{r}(x,\beta),\end{aligned}\ ] ] which concludes the proof of lower semicontinuity of .next we prove that is upper semicontinuous .fix . by lemma [ l : boundedoptimalcontrolregime2 ] , we know that the optimal control exists and can be chosen to be bounded and continuous in .hence , there is a unique invariant measure corresponding to the operator which will be denoted by .let be such that and define a control by the formula since is nondegenerate , is uniquely defined , continuous in and uniformly bounded in , i.e. , there exists a constant such that .it follows from = \left [ \sigma(x , y)-\sigma(x_{n},y)\right ] \bar{u}_{\beta}(x , y)+\gamma\left [ b(x , y)-b(x_{n},y)\right ] + \left [ c(x , y)-c(x_{n},y)\right]\ ] ] that in fact converges to uniformly in .since is bounded and lipschitz continuous there is a unique invariant measure corresponding to which will be denoted by . 
owing to the definition of via ( [ eq : definitionofcontrolcontinuityofl ] ), the operator takes the form \cdot\nabla_{y}+\gamma\frac{1}{2}\sigma(x_{n},y)\sigma(x_{n},y)^{t}:\nabla_{y}\nabla_{y}.\ ] ] hence by condition [ a : assumption1 ] , it follows that in the topology of weak convergence .let be defined by then the weak convergence , the uniform convergence of to , and the continuity in of the function imply that .thus line follows from the choice of a particular control .line follows from the uniform convergence of to , the continuity and boundedness of in , and the weak convergence .line follows from the fact that is the control that achieves the infimum in the definition of .we have shown that if then there exists such that and .we claim that in fact the same is true for any sequence .let be given . since is finite and convex , we can choose , such that the convex hull of has nonempty interior , and for each construct a sequence such that and .since for all sufficiently large is in the interior of the convex hull of , there are for all such such that and . by convexity letting concludes the proof of the lemma .[ l : continuousoptimalcontrolregime2 ] the control constructed in the proof of lemma [ l : boundedoptimalcontrolregime2 ] is continuous in , lipschitz continuous in and measurable in .moreover , the invariant measure corresponding to the operator is weakly continuous as a function of .recall that where is a subdifferential of at . by lemma[ l : differentiabilitymainlemma ] , the subdifferential of with respect to consists only of the gradient . then continuity of follows from this uniqueness and the joint continuity of established in lemma [ l : continuitylocalratefunction ] .lipschitz continuity in of was established in lemma [ l : boundedoptimalcontrolregime2 ] .we insert as the optimizer into ( [ eq : ergodiccontrolproblemregime2_1 ] ) . recall that is an operator in only and denote by the operator with the control variable . after some rearrangement of terms we get the equation where and .this is now in the standard form for the bellman equation of an ergodic control problem . as before a classical sense solution exists , and as a consequencewe have the representation where the infimum is over all progressively measurable controls .since by condition [ a : assumption1 ] and are continuous in uniformly in and since is continuous in , is continuous in .a straight forward calculation shows that for any , the function satisfies a linear equation .this observation and the general theory for uniformly elliptic equations ( see ) together with the continuity in of and and condition [ a : assumption1 ] imply that is continuous in as well .hence , due to the continuity of we conclude that is continuous in .measurability is clear .lastly , due to continuity of the optimal control in , condition [ a : assumption1 ] and uniqueness of for each , we conclude that is weakly continuous as a function of ( see , e.g. , section in ) .in this section we discuss the laplace principle upper bound for regime 3 . for notational conveniencewe drop the superscript from and .we consider the general multidimensional case when and . in remark [ r :multivaluedmapregime3_2 ] we discuss the case when the functions and depend on as well . in subsection [ ss : regime3alternativeformula ] we consider the case . 
for we can establish the ldp when the coefficients depend on as well and we provide an alternative expression for the rate function together with a control that nearly achieves the large deviations lower bound at the prelimit level .an easy computation shows that this alternate expression is equivalent to the corresponding expression in for , and for .[ remarks on the proof of laplace principle upper bound for regime 3]for each , let be the unique strong solution to ( [ eq : ldpanda1 ] ) . to prove the laplace principle upper boundwe must show that for all bounded , continuous functions mapping ;\mathbb{r}^{d})$ ] into \leq \inf_{(\phi,\mathrm{p})\in\mathcal{v}}\left [ \frac{1}{2}\int_{\mathcal{z}\times\mathcal{y}\times\lbrack0,1]}\left\vert z\right\vert ^{2}\mathrm{p}(dzdydt)+h(\phi)\right ] .\label{eq : laplaceprincipleupperbound}\ ] ] define } \left\vert z\right\vert ^{2}\mathrm{p}(dzdydt).\ ] ] let be given and consider with such that + \eta<\infty.\ ] ] we claim that there is a family of controls such that where is constructed using . with this at handthe result easily follows .the claim follows from the results in section 3 in and section 4 in .note that in the case considered here , the fast motion is restricted to remain in a compact set at all times , the dynamics are affine in the control , is uniformly nondegenerate and the functions and do not depend on .for the construction of the control and precise statements we refer the reader to .[ r : multivaluedmapregime3_2 ] 1 .the difficulties that arise in regime are due to the fact that one has to average with respect to a first order operator . in this case uniqueness of an invariant measureis not guaranteed and is actually difficult to verify in practice .suppose that the functions and depend on as well .it turns out that under some additional lipschitz type conditions in , one can still use the methodology in .these conditions are automatically satisfied for any admissible control if the functions and do not depend on .however , we were unable to verify them when the coefficients depend on without imposing any further restrictions on the class of controls under consideration . for a more detailed discussionsee .in this subsection we give an alternative expression of the rate function for regime in dimension .the proof is analogous to the proof of the statement for regime .we therefore only state the result without proving it .the reason one can prove the ldp for with the coefficients depending on is that the invariant measure takes an explicit form .then , the local rate function is the value function to a calculus of variations problem which can be analyzed by standard techniques . in particular , because everything can be written explicitly , we can easily prove that the infimum of this variational problem is attained at a control for which the corresponding ode has a unique invariant measure .consider a control . without loss of generality one can restrict attention to controls that give nonzero velocity everywhere .the control might depend on as well , but we omit writing it for notational convenience . 
decomposing the limiting occupation measure as stochastic kernels( as it was done for regimes and ) and fixing the velocity , equations ( [ eq : accumulationpointsprocessviable ] ) and ( [ eq : accumulationpointsmeasureviable ] ) with imply that the corresponding invariant measure that satisfies ( [ eq : accumulationpointsmeasureviable ] ) takes the form for define and the local rate function [ t : maintheorem4 ] assume condition [ a : assumption1 ] and that we are considering regime .let be the -dimensional diffusion process that satisfies ( [ eq : ldpanda1 ] ) .then satisfies the large deviations principle with rate function ;\mathbb{r})\text { is absolutely continuous}\\ + \infty & \text{otherwise.}\end{cases}\ ] ] we conclude this section with the following corollary . as can be easily seen from the form of in theorem[ t : maintheorem4 ] , in the case one obtains a closed form expression for the rate function .[ c : maintheorem5]in addition to the conditions of theorem [ t : maintheorem4 ] , assume that . then satisfies the large deviations principle with rate function ;\mathbb{r})\text { is absolutely continuous},\\ + \infty & \text{otherwise.}\end{cases}\ ] ]we would like to thank hui wang for his initial involvement in this project .o. alvarez , m. bardi , viscosity solutions methods for singular perturbations in deterministic and stochastic control , _ siam journal on control and optimization _ vol .40 , issue 4 , ( 2001 ) , pp . 1159 - 1188 .v. borkar , v. gaitsgory , on existence of limit occupational measures set of a controlled stochastic differential equation , _ siam journal on control and optimization _ , vol 44 , no .4 , ( 2005 ) , pp .1436 - 1473 .r. buckdahn , y. hu , s. peng , probabilistic approach to homogenization of viscosity solutions of parabolic pde s , _ nonlinear differential equations and applications _, vol . 6 , no .4 , ( 1999 ) , pp . 395 - 411 .m. freidlin , r. sowers , a comparison of homogenization and large deviations , with applications to wavefront propagation , _ stochastic process and their applications _ , vol .82 , issue 1 , ( 1999 ) , pp . 2352 .v. gaitsgory , on a representation of the limit occupational measures set of a control system with applications to singularly perturbed control systems , _ siam journal on control and optimization _1 , ( 2004 ) , pp . 325 - 340 .v. gaitsgory , m .-t , nguyen , multiscale singularly perturbed control systems : limit occupational measures sets and averaging , _ siam journal on control and optimization _ , vol .3 , ( 2002 ) , pp .954 - 974 .c. hyeon , d. thirumalai , can energy landscapes roughness of proteins and rna be measured by using mechanical unfloding experiments ?_ usa , vol . 100 , no . 18 , ( 2003 ) , pp .10249 - 10253 .khasminiskii , ergodic properties of recurrent diffusion processes and stabilization of the solution to the cauchy problem for parabolic equations , _ theory of probability and its applications _ , vol . 5 , issue 2 , ( 1960 ) , pp . 179 - 196 .saven , j. wang , p.g.wolynes , kinetics of protein folding : the dynamics of globally connected rough energy landscapes with biases , _ journal of chemical physics _, vol . 101 , no . 12 , ( 1994 ) , pp .11037 - 11043 .veretennikov , on large deviations in the averaging principle for sdes with a `` full dependence '' , correction , arxiv : math/0502098v1 [ math.pr ] ( 2005 ) .initial article in _ annals of probability _ , vol .1 , ( 1999 ) , pp . 284 - 296 .
We study the large deviations principle for locally periodic stochastic differential equations with small noise and fast oscillating coefficients. There are three possible regimes depending on how fast the intensity of the noise goes to zero relative to the homogenization parameter. We use weak convergence methods, which provide convenient representations for the action functional for all three regimes. Along the way we study weak limits of related controlled SDEs with fast oscillating coefficients and derive, in some cases, a control that nearly achieves the large deviations lower bound at the prelimit level. This control is useful for designing efficient importance sampling schemes for multiscale diffusions driven by small noise.
exponential random graphs represent an important and challenging class of models , displaying both diverse and novel phase transition phenomena .these rather general models are exponential families of probability distributions over graphs , in which dependence between the random edges is defined through certain finite subgraphs , in imitation of the use of potential energy to provide dependence between particle states in a grand canonical ensemble of statistical physics .they are particularly useful when one wants to construct models that resemble observed networks as closely as possible , but without specifying an explicit network formation mechanism .consider the set of all simple graphs on vertices ( `` simple '' means undirected , with no loops or multiple edges ) . by a -parameter family of exponential random graphswe mean a family of probability measures on defined by , for , where are real parameters , are pre - chosen finite simple graphs ( and we take to be a single edge ) , is the density of graph homomorphisms ( the probability that a random vertex map is edge - preserving ) , and is the normalization constant , intuitively , we can think of the parameters as tuning parameters that allow one to adjust the influence of different subgraphs of on the probability distribution and analyze the extent to which specific values of these subgraph densities `` interfere '' with one another .since the real - world networks the exponential models depict are often very large in size , our main interest lies in exploring the structure of a typical graph drawn from the model when is large .this subject has attracted enormous attention in mathematics , as well as in various applied disciplines .many of the investigations employ the elegant theory of graph limits as developed by lovsz and coauthors ( v.t .ss , b. szegedy , c. borgs , j. chayes , k. vesztergombi , ) . building on earlier work of aldous and hoover ,the graph limit theory creates a new set of tools for representing and studying the asymptotic behavior of graphs by connecting sequences of graphs to a unified graphon space equipped with a cut metric .though the theory itself is tailored to dense graphs , serious attempts have been made at formulating parallel results for sparse graphs . applying the graph limit theory to -parameter exponential random graphs andutilizing a large deviations result for erds - rnyi graphs established in chatterjee and varadhan , chatterjee and diaconis showed that when is large and are non - negative , a typical graph drawn from the `` attractive '' exponential model ( [ pmf ] ) looks like an erds - rnyi graph in the graphon sense , where the edge presence probability is picked randomly from the set of maximizers of a variational problem for the limiting normalization constant : where is the number of edges in .they also noted that in the edge-(multiple)-star model where is a -star for , due to the unique structure of stars , maximizers of the variational problem for for all parameter values satisfy ( [ max ] ) and a typical graph drawn from the model is always erds - rnyi . 
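For readers who want to experiment with the measure (pmf) directly, the following sketch runs single-edge Glauber dynamics for the two-parameter edge-triangle case on a small graph. This is the standard MCMC device for exponential random graphs and is not code taken from the works cited above; the parameter values, graph size and number of sweeps are arbitrary illustrative choices, and mixing is known to be slow near phase boundaries and in the near-degenerate regions discussed below.

```python
import numpy as np

# Illustrative Glauber-dynamics sampler for the 2-parameter edge-triangle
# exponential random graph on n vertices:
#   P(G) proportional to exp( n^2 [ beta1*t(edge, G) + beta2*t(triangle, G) ] ),
# using t(edge, G) = 2E/n^2 and t(triangle, G) = 6T/n^3.  Standard MCMC
# construction; all numbers below are assumptions for this sketch.

rng = np.random.default_rng(1)

def sample_ergm(n=30, beta1=-0.4, beta2=0.2, sweeps=200):
    A = (rng.random((n, n)) < 0.5).astype(np.int8)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric adjacency, no loops
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        common = int(np.dot(A[i], A[j]))          # common neighbours of i and j
        # change in n^2*(beta1*t_edge + beta2*t_triangle) when edge ij is present:
        dH = 2.0 * beta1 + 6.0 * beta2 * common / n
        p_on = 1.0 / (1.0 + np.exp(-dH))          # conditional edge probability
        A[i, j] = A[j, i] = int(rng.random() < p_on)
    return A

if __name__ == "__main__":
    A = sample_ergm()
    n = A.shape[0]
    print("edge density:", A.sum() / (n * (n - 1)))
```

The conditional log-odds of a single edge only involve the number of common neighbours of its endpoints, which is what makes the single-edge update cheap.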
since the limiting normalization constant is the generating function for the limiting expectations of other random variables on the graph space such as expectations and correlations of homomorphism densities , these crucial observations connect the occurrence of an asymptotic phase transition in ( [ pmf ] ) with an abrupt change in the solution of ( [ max ] ) andhave led to further exploration into exponential random graph models and their variations .being exponential families with finite support , one might expect exponential random graph models to enjoy a rather basic asymptotic form , though in fact , virtually all these models are highly nonstandard as the size of the network increases .the -parameter exponential random graph models have therefore generated continued research interest .these prototype models are simpler than their -parameter extensions but nevertheless exhibit a wealth of non - trivial attributes and capture a variety of interesting features displayed by large networks .the relative simplicity furthermore helps us better understand how phases transition between one another as tuning parameters vary and provides insight into the expressive power of the exponential construction . in the statistical physics literature , phase transitionis often associated with loss of analyticity in the normalization constant , which gives rise to discontinuities in the observed statistics . for exponential random graph models ,phase transition is characterized as a sharp , unambiguous partition of parameter ranges separating those values in which changes in parameters lead to smooth changes in the homomorphism densities , from those special parameter values where the response in the densities is singular . for the `` attractive '' -parameter edge - triangle model obtained by taking ( an edge ) , ( a triangle ) , and , chatterjee and diaconis gave the first rigorous proof of asymptotic singular behavior and identified a curve across which the model transitions from very sparse to very dense , completely skipping all intermediate structures .in further works ( see for example , radin and yin , aristoff and zhu ) , this singular behavior was discovered universally in generic -parameter models where is an edge and is any finite simple graph , and the transition curve asymptotically approaches the straight line as the parameters diverge .the double asymptotic framework of was later extended in , and the scenario in which the parameters diverge along general straight lines , where is a constant and , was considered .consistent with the near degeneracy predictions in , asymptotically for , a typical graph sampled from the `` attractive '' -parameter exponential model is sparse , while for , a typical graph is nearly complete .although much effort has been focused on -parameter models , -parameter models have also been examined . as shown in , near degeneracy and universalityare expected not only in generic -parameter models but also in generic -parameter models .asymptotically , a typical graph drawn from the `` attractive '' -parameter exponential model where is sparse below the hyperplane and nearly complete above it . for the edge-(multiple)-star model, the desirable star feature relates to network expansiveness and has made predictions of similar asymptotic phenomena possible in broader parameter regions .related results may be found in hggstrm and jonasson , park and newman , bianconi , lubetzky and zhao , radin and sadun , and kenyon et al . 
.these theoretical findings have advanced our understanding of phase transitions in exponential random graph models , yet some important questions remain unanswered .previous investigations identified near degenerate parameter regions in which a typical sampled graph looks like an erds - rnyi graph , where the edge presence probability or , but the speed of towards these two degenerate states is not at all clear .when a typical graph is sparse ( ) , how sparse is it ? when a typical graph is nearly complete ,how dense is it? can we give an explicit characterization of the near degenerate graph structure as a function of the parameters ?the rest of this paper is dedicated towards these goals .some of the ideas for the sparse case were partially implemented in .theorem [ 2generic ] considers generic `` attractive '' -parameter exponential random graph models and theorem [ singlestar ] derives parallel results for `` repulsive '' edge-(single)-star models .the asymptotic characterization of obtained in these theorems then make possible a deeper exploration into the asymptotics of the limiting normalization constant of the exponential model in theorem [ norm ] , which indicates that though a typical graph displays erds - rnyi feature , the simplified erds - rnyi graph and the real exponential graph are not exact asymptotic analogs in the usual statistical physics sense . in the sparse region ,the erds - rnyi graph does seem to reflect the asymptotic tendency of the exponential graph more accurately , as the two interpretations of the limiting normalization constant coincide when the parameters diverge .lastly , theorems [ generic ] and [ inf ] further extend the near degenerate analysis from -parameter exponential random graph models to -parameter exponential random graph models .this section explores the exact asymptotics of generic -parameter exponential random graph models where near degeneracy .the analysis is then extended to for the edge-(single)-star model . by including only two subgraph densities in the exponent , where is an edge and is a different finite simple graph , the -parameter models are arguably simpler than their -parameter generalizations .as illustrated in chatterjee and diaconis , when is large and is non - negative , a typical graph drawn from the `` attractive '' -parameter exponential model ( [ 2pmf ] ) behaves like an erds - rnyi graph , where the edge presence probability is picked randomly from the set of maximizers of a variational problem for the limiting normalization constant : where is the number of edges in , and thus satisfies this implicitly describes as a function of the parameters and , but a closed - form solution is not obtainable except when , which gives .for large negative , asymptotically behaves like , while for large positive , asymptotically behaves like .we would like to know if these asymptotic results could be generalized .by , taking and , when and when .equivalently , for sufficiently far away from the origin , when and when .as regards the speed of towards these two degenerate states , simulation results suggest that is asymptotically in the former case and is asymptotically in the latter case .see tables [ table1 ] and [ table2 ] and figure [ figure1 ] . 
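The behaviour reported in the tables can be checked numerically by solving the scalar variational problem directly. The sketch below assumes the standard reduction from the Chatterjee-Diaconis line of work, ell(u) = beta1*u + beta2*u^p - (1/2)[u log u + (1-u) log(1-u)], with p the number of edges of H, and locates the maximizer u* on a fine logit-spaced grid; the asymptotes printed alongside, u* of order exp(2*beta1) on the sparse side and 1 - u* of order exp(-2*(beta1 + p*beta2)) on the dense side, are the ones implied by the first-order condition of this reduced problem and are stated here as assumptions rather than copied from the displays above.

```python
import numpy as np

# Illustrative check of the scalar variational problem (assumed standard reduction):
#   u* = argmax_{0<u<1}  beta1*u + beta2*u**p - 0.5*[u*log(u) + (1-u)*log(1-u)]
# together with the near-degenerate rates implied by its first-order condition.

def ell(u, beta1, beta2, p):
    """Reduced functional beta1*u + beta2*u^p - (1/2) I(u)."""
    return (beta1 * u + beta2 * u**p
            - 0.5 * (u * np.log(u) + (1 - u) * np.log(1 - u)))

def u_star(beta1, beta2, p, grid=600_001):
    s = np.linspace(-30.0, 30.0, grid)        # logit grid, dense near 0 and 1
    u = 1.0 / (1.0 + np.exp(-s))
    return u[np.argmax(ell(u, beta1, beta2, p))]

if __name__ == "__main__":
    p, beta2 = 3, 1.0                          # triangle term, attractive case
    for beta1 in (-2.0, -4.0, -6.0):           # sparse side
        u = u_star(beta1, beta2, p)
        print(f"beta1={beta1:5.1f}  u*={u:.3e}  exp(2*beta1)={np.exp(2 * beta1):.3e}")
    for beta1 in (2.0, 4.0, 6.0):              # nearly complete side
        u = u_star(beta1, beta2, p)
        print(f"beta1={beta1:5.1f}  1-u*={1 - u:.3e}  "
              f"exp(-2*(beta1+p*beta2))={np.exp(-2 * (beta1 + p * beta2)):.3e}")
```

The logit-spaced grid is used so that the maximizer can be resolved even when it sits within exp(-12) of the boundary of [0, 1].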
even for with small magnitude, the asymptotic tendency of is quite evident .ccccc & & & & + + & & & & + & & & & + & & & & + & & & & + + [ 2generic ] consider an `` attractive '' -parameter exponential random graph model ( [ 2pmf ] ) where .for large and sufficiently far away from the origin , a typical graph drawn from the model looks like an erds - rnyi graph , where the edge presence probability satisfies : * if , * if . as explained in the previous paragraph , in the large limit , the asymptotic edge presence probability of a typical sampled graph is prescribed according to the maximization problem ( [ sd ] ) . for magnitude is sufficiently big , when and when .for , we rewrite ( [ sd ] ) in the following way : take , since , for sufficiently far away from the origin , . using , we then have this implies that as gets sufficiently large .using again , this further shows that asymptotically behaves like .nodes with edges and triangles as sufficient statistics , where and . the simulated graph displays erds - rnyi feature with edge density , matching the asymptotic predictions of theorem [ 2generic ] .table [ table1 ] : and ).,width=288 ] for , we rewrite ( [ sd ] ) in the following way : where . going one step further , we separate from : as the dominating term in the exponent carries a negative sign . take , since , for sufficiently far away from the origin , . using , we then have this implies that as gets sufficiently large , and since , also implies that the sum of all the terms in the exponent . using again, this further shows that asymptotically behaves like , or equivalently , asymptotically behaves like . in the edge-(single)-star model where is a star with edges , due to the unique structure of stars , maximizers of the variational problem for the limiting normalization constant the parameter again satisfies ( [ sd ] ) , and the near degeneracy predictions in theorem [ 2generic ] may be extended from the upper half - plane to the lower half - plane .it was shown in that for large and sufficiently far away from the origin , a typical graph drawn from the `` repulsive '' edge-(single)-star model where is indistinguishable from an erds - rnyi graph , where the edge presence probability when and when . as regards the speed of towards these two degenerate states , simulation results suggest that just as in the `` attractive '' situation , is asymptotically in the sparse case and is asymptotically in the nearly complete case .see table [ table2 ] .even for with small magnitude , the asymptotic tendency of is quite evident .[ singlestar ] consider a `` repulsive '' edge-(single)-star model obtained by taking a star with edges and in ( [ 2pmf ] ) .for large and sufficiently far away from the origin , a typical graph drawn from the model looks like an erds - rnyi graph , where the edge presence probability satisfies : * if , * if . for magnitude is sufficiently big , we examine the maximization problem ( [ sd ] ) separately when and when .first for . assume that for some fixed but arbitrary .we rewrite ( [ sd ] ) in the following way : , we then have this implies that as gets sufficiently negative . using again , this further shows that asymptotically behaves like .ccccc & & & & + + & & & 0.01832 & + & & & & + & & & & + & & & & + & & & & + & & & & + + next for .assume that for some fixed but arbitrary .we rewrite ( [ sd ] ) in the following way : where .going one step further , we separate from : as the dominating term in the exponent carries a negative sign . 
using , we then have this implies that as gets sufficiently negative , and since , also implies that the sum of all the terms in the exponent . using again, this further shows that asymptotically behaves like , or equivalently , asymptotically behaves like .though the -parameter exponential random graph looks like an erds - rnyi random graph in the large limit , we also note some marked dissimilarities . the limiting normalization constant for the -parameter exponential model ( [ 2pmf ] ) is given by ( [ reduce ] ) , while the `` equivalent '' erds - rnyi model yields that is . since is nonzero for finite ,the two different interpretations of the limiting normalization constant indicate that the simplified erds - rnyi graph and the real exponential model are not exact asymptotic analogs in the usual statistical physics sense .when the relevant erds - rnyi graph is near degenerate , theorems [ 2generic ] and [ singlestar ] give the asymptotic speed of as a function of and , allowing a deeper analysis of the asymptotics of in the following theorem [ norm ] .the theorem is formulated in the context of the edge-(single)-star model , since the asymptotics of are known in broader parameter regions for this model , but the statement for the `` attractive '' situation ( ) applies without modification to generic -parameter models .see figures [ triangle ] and [ star ] .we also note that , in the sparse region , the erds - rnyi graph seems to reflect the asymptotic tendency of the exponential random graph more accurately , as the two interpretations of the limiting normalization constant do coincide when the parameters diverge .-star model.,width=384 ] [ norm ] consider an edge-(single)-star model obtained by taking a star with edges in ( [ 2pmf ] ) .for sufficiently far away from the origin , the limiting normalization constant satisfies : * if and , * if and . for magnitude is sufficiently big , we examine the limiting normalization constant ( [ reduce ] ) separately in the sparse region and in the nearly complete region . in the sparse region ( and ) , from theorems [ 2generic ] and [ singlestar ] , and .this shows that asymptotically behaves like . in the nearly complete region ( and ) , from theorems [ 2generic ] and [ singlestar ] , and .this shows that asymptotically behaves like .we may also analyze the asymptotics of along the boundaries of the near degenerate region .the boundary of the sparse region is given by and , and satisfies though depends on in a complicated way , the asymptotic behavior of can be characterized : using , this shows that asymptotically behaves like .we recognize that the asymptotic behaviors of on the boundary of and inside the sparse region are different : inside , is asymptotically and converges to , while on the boundary , though also converges to is at a much slower rate .the boundary of the nearly complete region is given by and , and satisfies though depends on in a complicated way , the asymptotic behavior of can be characterized : since the dominating term on the left of ( [ comp ] ) is , using , we then have , which shows that is asymptotically larger compared with and further shows that asymptotically behaves like .we recognize that the asymptotic behaviors of on the boundary of and inside the nearly complete region coincide .this section extends the investigation into near degeneracy from generic -parameter exponential random graph models to generic -parameter exponential random graph models . 
for `` attractive '' models where , we derive parallel results concerning the asymptotic graph structure and the limiting normalization constant . using similar methods ,more results can be deduced for the `` repulsive '' edge-(multiple)-star model where .as illustrated in chatterjee and diaconis , when is large and are non - negative , a typical graph drawn from the -parameter exponential model behaves like an erds - rnyi graph , where the edge presence probability is picked randomly from the set of maximizers of ( [ max ] ) , and thus satisfies where is the number of edges in .we take to be an edge and assume without loss of generality that .[ generic ] consider an `` attractive '' -parameter exponential random graph model ( [ pmf ] ) where .for large and sufficiently far away from the origin , a typical graph drawn from the model looks like an erds - rnyi graph , where the edge presence probability satisfies : * if , * if .the proof follows a similar line of reasoning as in the proof of theorem [ 2generic ] .expectedly though , the argument is more involved since we are working with -parameter families rather than -parameter families . for ,we rewrite ( [ kreduce ] ) in the following way : take , since , for sufficiently far away from the origin , for . using , we then have this implies that for all as get sufficiently large . using again, this further shows that asymptotically behaves like .for , we rewrite ( [ kreduce ] ) in the following way : where .going one step further , for , we separate from : as the dominating term in the exponent carries a negative sign .take , since , for sufficiently far away from the origin , .using , we then have this implies that for all as get sufficiently large , and since , also implies that the sum of all the terms in the exponent + . using again, this further shows that asymptotically behaves like , or equivalently , asymptotically behaves like .[ inf ] consider an `` attractive '' -parameter exponential random graph model ( [ pmf ] ) where .for sufficiently far away from the origin , the limiting normalization constant satisfies : * if , * if . for magnitude is sufficiently big , we examine the limiting normalization constant ( [ max ] ) separately in the sparse region and in the nearly complete region . in the sparse region ( ) , from theorem [ generic ] , for all and .this shows that asymptotically behaves like . in the nearly complete region ( ) , from theorem [ generic ] , for all and .this shows that asymptotically behaves like . in the edge-(multiple)-star model , due to the unique structure of stars , maximizers of the variational problem for the limiting normalizationconstant satisfies ( [ kreduce ] ) for any , and the near degeneracy predictions may be extended to the `` repulsive '' region . using similar techniques as in , it may be shown that for sufficiently far away from the origin and all negative , when and when . then analogous conclusions as in theorems [ generic ] and [ inf ] may be drawn : * and if and , * and + if and .we omit the proof details .the author is very grateful to the anonymous referees for the invaluable suggestions that greatly improved the quality of this paper .she appreciated the opportunity to talk about this work in the special session on topics in probability at the 2016 ams western spring sectional meeting , organized by tom alberts and arjun krishnan .borgs , c. , chayes , j. , lovsz , l. , ss , v.t . ,vesztergombi , k. : counting graph homomorphisms . in : klazar , m. , kratochvil , j. , loebl , m. 
, Thomas, R., Valtr, P. (eds.) Topics in Discrete Mathematics, Volume 26, pp. 315-371. Springer, Berlin (2006). Hoover, D.: Row-column exchangeability and a generalized model for probability. In: Koch, G., Spizzichino, F. (eds.) Exchangeability in Probability and Statistics, pp. 281-291. North-Holland, Amsterdam (1982).
The exponential family of random graphs has been a topic of continued research interest. Despite the relative simplicity, these models capture a variety of interesting features displayed by large-scale networks and allow us to better understand how phases transition between one another as tuning parameters vary. As the parameters cross certain lines, the model asymptotically transitions from a very sparse graph to a very dense graph, completely skipping all intermediate structures. We delve deeper into this near-degenerate tendency and give an explicit characterization of the asymptotic graph structure as a function of the parameters.
in this paper , we discuss the optimal hedging strategy based on the mean - variance criterion for the fund and insurance managers in the presence of incompleteness as well as imperfect information in the market .if an unhedgeable risk - factor exists , the fund and insurance managers are forced to work in the physical measure and resort to a certain optimization technique to decide their trading strategies . in the physical measure , however , they soon encounter the problem of _ imperfect information _ which is usually hidden in the traditional risk - neutral world .one of the most important factors in the financial optimizations is the drift term in the price process of a financial security .in fact , many of the financial decisions consist of taking a careful balance between the expected return , i.e. drift , and the size of risk .however , the observation of a drift term is always associated with a noise , and we need to adopt some statistical inference method . in a large number of existing works on the mean - variance hedging problem , which usually adopt the _ duality method _ , pham ( 2001 ) , for example , studied the problem in this _ partially observable drift _ context . in spite of a great amount of literature ,results with explicit solutions which can be directly implementable by practitioners have been quite rare thus far . when the explicit forms are available , they usually require various simplifying assumptions on the dependence structure among the underlying securities and their risk - premium processes , and also on the form of the hedging target , which make the motivations somewhat obscure from a practical point of view .a new approach was proposed by mania & tevzadze ( 2003 ) , where the authors studied a minimization problem for a convex cost function and showed that the optimal value function follows a backward stochastic partial differential equation ( bspde ) .they were able to decompose it into three backward stochastic differential equations ( bsdes ) when the cost function has a quadratic form .although the relevant equations are quite complicated , their approach allows a systematic derivation for a generic setup in such a way that it can be linked directly to the dynamic programming approach yielding hjb equation . in fujii & takahashi ( 2013 ) , we have studied their bsdes to solve the mean - variance hedging problem with partially observable drifts . in the setupwhere kalman - bucy filtering scheme is applicable , we have shown that a set of simple ordinary differential equations ( odes ) and the standard monte carlo simulation are enough to implement the optimal strategy .we have also derived its approximate analytical expression by an asymptotic expansion method , with which we were able to simulate the distribution of the hedging error .the problem of imperfect information is not only about the drifts of securities .fund and insurance managers have to deal with stochastic investment flows from their clients .in particular , the timings of buy / sell orders are unpredictable and their intensities can be only statistically inferred . the same is true for loan portfolios and possibly their securitized products .it is , in fact , a well - known story in the us market that the prepayments of residential mortgages have a big impact on the residential mortgage - backed security ( rmbs ) price , which in turn induces significant hedging demand on interest rate swaps and swaptions .see , for example , as a recent practical review on the real estate finance . 
in this paper, we extend to incorporate the stochastic investment flows with _ partially observable intensities _ . in the first half of the paper , where we introduce two counting processes to describe the in- and outflow of the investment units , we provide the mathematical preparations necessary for the filtering procedures .then , we explain the solution technique for the relevant bsdes in detail , which gives the optimal hedging strategy by means of a set of simple odes and the standard monte carlo simulation . in the latter half of the paper , we further extend the framework so that we can deal with a portfolio of insurance products .we provide a method to differentiate the effects on the demand for insurance after the insured events based on their loss severities . furthermore, we explain how to utilize jackson s network that is often adopted to describe a network of computers in the queueing analysis .we show that it is quite useful for the modeling of a general network of investment flows , such as the one arising from a group of funds within which investors can switch a fund to invest. although we are primarily interested in providing a flexible framework for the portfolio management , the presented framework may be applicable to manufacturers and energy firms operating multiple lines of production .for example , they can use it to install an efficient overlay of dynamic hedging by financial derivatives , such as commodity and energy futures , in order to minimize the stochastic production as well as storage costs .we consider the market setup quite similar to the one used in except the introduction of the stochastic investment / order flows with partially observable intensities .let be a complete probability space with a filtration where is a fixed time horizon .we put for simplicity .we assume that satisfies the _ usual conditions _ and is big enough in a sense that it makes all the processes we introduce are adapted to this filtration .we consider the financial market with one risk - free asset , tradable stocks or any kind of securities , and non - tradable indexes or otherwise state variables relevant for stochastic volatilities , etc .for simplicity of presentation , we assume that the risk - free interest rate is zero . using a vector notation , the dynamics of the securities prices and the non - tradable indexes are assumed to be given by the following diffusion processes : & & ds_t=(t , s_t , y_t)(dw_t+_t dt ) + & & dy_t=(t , s_t , y_t)(dw_t+_t dt)+(t , s_t , y_t)(db_t+_t dt ) .[ sde - seq ] here , are the standard -brownian motions independent of each other and valued in and , respectively . the known functions , and are measurable and smooth mappings from \times \mbb{r}^d\times \mbb{r}^m ] with . for simplicity , we assume that they do not jump simultaneously .the total number of investment - units for the fund at time is denoted by , which is given by q_t = q_0+a_t - d_t . in this way, we model the change of the investment - units by a simple queueing system with a single server .later , we shall make use of a special type of queueing network to allow investors to switch within a group of funds , which typically bundles money - reserve , bond , equity , bull - bear , or regional equity indexes .see as a standard textbook on queueing systems .we assume that the counting processes have -compensators , i.e. & & _t:=a_t-_0^t ^a(s , x_s- ) ds + & & _t:=d_t-_0^t ^d(s , x_s- ) _ \{q_s->0}ds [ flow - intensity ] are -martingales . 
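The next paragraph specifies the modulating process X as a finite-state Markov chain; anticipating that, the sketch below simulates the pair of counting processes and the unit count Q_t = Q_0 + A_t - D_t when the chain and its intensities are fully observed. The generator R, the intensity vectors and the assumption of time-homogeneous rates are illustrative choices made only for this sketch; the filtered version discussed later replaces the true intensities by their conditional estimates.

```python
import numpy as np

# Illustrative simulation of the investment-flow queue Q_t = Q_0 + A_t - D_t
# driven by a finite-state Markov chain X with generator R and state-dependent
# intensities lam_a(X) for inflows and lam_d(X)*1{Q>0} for outflows.
# Time-homogeneous rates and the numbers below are assumptions for this sketch.

rng = np.random.default_rng(2)

R = np.array([[-0.5, 0.5],            # generator of the 2-state modulating chain
              [ 0.3, -0.3]])
lam_a = np.array([2.0, 0.5])          # arrival (inflow) intensity per chain state
lam_d = np.array([0.8, 1.5])          # departure (outflow) intensity per chain state

def simulate(T=10.0, q0=5, x0=0):
    t, x, q = 0.0, x0, q0
    path = [(t, x, q)]
    while True:
        rates = {
            "chain": -R[x, x],                       # rate of leaving the current state
            "A": lam_a[x],
            "D": lam_d[x] if q > 0 else 0.0,         # no outflow from an empty fund
        }
        total = sum(rates.values())
        t += rng.exponential(1.0 / total)
        if t >= T:
            return path
        u = rng.random() * total
        if u < rates["chain"]:
            x = 1 - x                                # chain jumps (only one other state)
        elif u < rates["chain"] + rates["A"]:
            q += 1                                   # new investment unit arrives
        else:
            q -= 1                                   # a unit is withdrawn
        path.append((t, x, q))

if __name__ == "__main__":
    for t, x, q in simulate()[:10]:
        print(f"t={t:6.3f}  state={x}  Q={q}")
```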
here , the intensity processes are modulated by a finite - state markov - chain process which takes its value in one of the unit - vectors , .the dynamics of is assumed to be given by x_t = x_0+_0^t r_s x_s- ds+u_t .[ eq - x ] here is a deterministic -valued continuous function with {i , j} ] , it is automatically satisfied by many stochastic volatility models where depends on monotonically . ] . as a result, we can see that is in fact the augmented filtration generated by , and we express this fact by . if necessary, we can extend the model of in such a way that can be generic -predictable processes , and hence can be dependent on the past history of , as long as assumption ( a1 ) is satisfied .this may represent a possible feedback from the investment flows to the financial market . _ for every , and are strictly positive -predictable processes .+ +\mbb{e}\left[\int_0^t \lambda^d(s , x_{s- } ) ds \right]<\infty ] , respectively .[ lemma-3 ] + + proof : consists of a completely decoupled queueing system with unit entrance and service intensities also in measure .although carries non trivial information through its drift , it does not affect the dynamics of by the model setup . similarly , we also need the following lemma .[ lemma-4 ] + + proof : in measure , becomes a -dimensional standard brownian motion and hence the information generated by its increments is independent of . on the other hand ,the observation of and provides non - trivial information through their intensities , .however , by assumption ( a2 ) ( i ) , any available information on diffusions can only appear in the form generated by and is irrelevant for . + we would like to obtain the filtering equations for & & _ t:=,_t:=and _ t:= . since is valued in , we have ^a_t&:=&= + & = & ( ^a(t,)_t- ) , [ hat - lambda ] and similarly for . here, we have used the inner product defined by ( ^a(t,)_t- ) : = _i=1^n ^a(t,_i)^i_t- where is the -th element of . for notational simplicity ,let us put _t:=[z_t|_t]= [ _ t|_t ] + [ _ t|_t ] .using kallianpur - striebel formula , we have _ t= and _ t= where and .note that and are and martingales , respectively .this fact can be easily proved by bayes formula and assumption .they define the inverse measure - change by : & & |__t=_1,t , |__t=_2,t .+ + of course , can also be given by the bayes formula with ] are the disjoint intervals of ] in the both hands of ( [ xi - x ] ) .due to the bounded nature of and assumption ( a2 ) , we can apply lemma [ lemma-5 ] .in particular , one can see ^_2= ^<. using the fact that for , one obtains the desired result . + since , we obtain _t= , [ hat - x ] where is a -dimensional vector . now , the filtered intensities can be obtained by ( [ hat - lambda ] ). we can show by assumption ( a2 ) that & & _ t = a_t-_0^t _ s^a ds + & & _ t = d_t-_0^t _ s^d _ \{q_s->0 } ds are -martingales .+ + let us comment on how to simulate in the physical measure . can be expressed as & & q_t = q_0+_0^t r_s q_s- ds-_0^t \{(^a_s- ) + ( ^d_s-)_\{q_s->0}}q_s - ds + & & + _ 0^t ( ^a_s-)q_s - da_s + _ 0^t ( ^d_s-)q_s - dd_s . [ q - dynamics2 ] thus , between any two jumps , follows a -predictable continuous process given by the first line of ( [ q - dynamics2 ] ) .when there is a jump , we have q_t=^a_t q_t- a_t + ^d_t q_t- d_t .[ q - jump ] in , and are counting processes whose intensities are and respectively , where is given by ( [ hat - x ] ) .thus , based on these formulas , we can carry out random draw for and by running the s process in parallel . 
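The parallel recursion just described can be sketched as follows. The update below is the textbook point-process filter for a Markov-modulated intensity (an Euler step between observed jumps, a Bayes reweighting at each jump), combined with thinning to draw the jumps of A and D from the filtered intensities; it is written from that standard result rather than transcribed from the display (hat-x), and the generator and intensity vectors are the same illustrative assumptions as in the previous sketch.

```python
import numpy as np

# Illustrative filtered-intensity simulation: the conditional law pi_t of the
# hidden chain given the observed counting processes (A, D) is propagated with
# the standard point-process filter, and the jumps of A, D are drawn by thinning
# against the filtered intensities.  All parameter values are assumptions.

rng = np.random.default_rng(3)

R = np.array([[-0.5, 0.5],
              [ 0.3, -0.3]])
lam_a = np.array([2.0, 0.5])
lam_d = np.array([0.8, 1.5])

def filtered_path(T=10.0, q0=5, dt=1e-3):
    pi = np.array([0.5, 0.5])                       # prior on the hidden state
    q = q0
    out = []
    for k in range(int(T / dt)):
        la_bar = float(pi @ lam_a)                  # filtered arrival intensity
        ld_bar = float(pi @ lam_d) if q > 0 else 0.0
        jump_a = rng.random() < la_bar * dt         # thinning on a small time step
        jump_d = (not jump_a) and (rng.random() < ld_bar * dt)
        if jump_a:
            pi = pi * lam_a / la_bar                # Bayes update at an arrival
            q += 1
        elif jump_d:
            pi = pi * lam_d / ld_bar                # Bayes update at a departure
            q -= 1
        else:
            # between jumps: chain dynamics plus the "no event observed" correction
            active_d = lam_d if q > 0 else 0.0
            drift = R.T @ pi - pi * (lam_a + active_d - la_bar - ld_bar)
            pi = pi + drift * dt
        pi = np.clip(pi, 1e-12, None)
        pi = pi / pi.sum()
        out.append((k * dt, pi.copy(), q))
    return out

if __name__ == "__main__":
    t, pi, q = filtered_path()[-1]
    print("final time", t, "filtered law", pi, "Q", q)
```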
at the jump, also jumps due to the jump of given by ( [ q - jump ] ) .in fact , it is well - known that these jumps in intensities are crucial to reproduce strong clusterings of events observed in defaults , rating migrations , and other herding behaviors among investors. it may be also the case for natural disasters affected by the global climate change . + for later purpose , let us define _t^=^which is -martingale specifying the measure change conditional on : |__t=_t^ .then , the inverse measure change is similarly given by using as |__t=_t^ .we suppose that the manager wants to minimize the square difference between the liability and the value of the hedging portfolio .the terminal liability , which is assumed to be -measurable random variable , would depend on the performance of tradable and/or non - tradable indexes as well as the number of investment - units .it can contain not only the payments to the investors but also the target profit for the management company .in addition to the terminal liability , we assume that there also exist cash flows associated with the payments of dividends , principles for unwound units , and the receipts of management fees , penalties for early terminations and the initial proceeds , etc .it is convenient for us to include the stream of cash flows into the wealth dynamics as & & _ t^(s , w)=w+_s^t _u^ds_u + & & + _ s^t _ u q_u du+_s^t e_u da_u-_s^t g_u dd_u where are -predictable processes representing various cash flows just explained .here , is a -predictable trading strategy for the tradable securities .we suppose that the goal of the fund manager is to solve v(t , w)=ess _ .[ opt - pr ] here , we denote is the set of -predictable trading strategies satisfying the <\infty ] , the process is a -submartingale .+ is optimal if and only if is a -martingale ._ + by lemma [ lemma-8 ] , we can express & & v(t , w)=v(0,w)+_0^t a(u , w)du+_0^t z(u , w)^dn_u+_0^t ( u , w)^dm_u + & & + _ 0^t j^a(u , w)d_u+_0^t j^d(u , w)d_u with appropriate -predictable processes for a given . more precisely , predictable jump components can exist , for example if there exist discrete coupon payments in the process .the necessary extension can be done straightforwardly . assuming that is twice continuously differentiable with respect to for all , we can apply it - ventzell formula .details of the it - ventzell formula are available in theorem 3.3.1 of as well as in theorem 3.1 of .note that the _ forward integral _with respect to the random measure used in simply coincides with the it integral when the integrands are predictable processes as in the current problem .now , the dynamics of is given by & & v(t,_t^)=v(s , w)+_s^t a(u,_u-^)du+_s^t z(u,_u-^)^dn_u + _ s^t ( u,_u-^)^dm_u + & & + _ s^t v_w(u,_u-^)d _ u^,c+ _ s^t dv_w^c(,_^),_^,c_u + _s^t v_ww(u,_u-^)d^,c_u + & & + _ s^t j^a(u,_u-^)d_ u^c + & & + _s^t da_u + & & + _s^t dd_u . 
herethe superscript denotes the continuous part of the process .arranging the drift term and completing the square in terms of so that it satisfies the conditions for the optimality principle , one can find & & a(t , w)+_\ { v_ww(t , w)|| _ t^_t+ ||^2- } + & & + v_w(t , w)_t q_t+_t^a + & & + _ t^d _ \{q_t->0}=0 .[ eq - drift ] assuming that there exist making vanish , which is the first term inside the of ( [ eq - drift ] ) , the value function is given by the following backward stochastic pde : & & v(t , w)=(h - w)^2-_t^t \{-v_w(s , w ) _s q_s}ds + & & + _ t^t _ s^a ds + & & + _ t^t _ s^d _ \{q_s->0}ds + & & -_t^t z(s , w)^dn_s-_t^t ( s , w)^dm_s-_t^t j^a(s , w)d_s -_t^t j^d(s , w)d_s .+ [ bspde ] although the above bspde looks much more complicated than that appears in with continuous underlyings , we can still exploit the quadratic nature of the problem . by inserting v(t , w)&= & w^2 v_2(t)-2w v_1(t)+v_0(t ) + z(t , w)&= & w^2 z_2(t)-2w z_1(t)+z_0(t),(t , w)=w^2 _ 2(t)-2w _ 1(t)+_0(t ) + j^a(t , w)&=&w^2 j^a_2(t)-2w j^a_1(t)+j^a_0(t),j^d(t ,w)=w^2 j^d_2(t)-2w j^d_1(t)+j^d_0(t ) + [ qd - decomp ] into ( [ bspde ] ) , we can decompose the bspde into the following three -independent bsdes : [ eq - v2 ] & & v_2(t)=1-_t^t ds-_t^t z_2(s)^dn_s -_t^t _ 2(s)^dm_s + + [ eq - v1 ] & & v_1(t)=h-_t^t ds + & & -_t^t\{v_2(s)}ds + & & -_t^t z_1(s)^dn_s-_t^t _1(s)^dm_s -_t^t j^a_1(s)d_s-_t^t j^d_1(s)d_s + & & v_0(t)=h^2-_t^t \ { + 2_s q_s v_1(s)}ds + & & + _ t^t _ s^a ds + & & + _ t^t _ s^d _ \{q_s->0}ds + & & -_t^t z_0(s)^dn_s-_t^t _ 0(s)^dm_s-_t^t j_0^a(s)d_s -_t^t j^d_0(s)d_s . [ eq - v0 ] in the derivation ,we have used the fact that both and are identically zero due to the continuity of the risk - premium process .+ it is difficult to give the general conditions which guarantee the existence and uniqueness of the solutions for ( [ eq - v2 ] ) , ( [ eq - v1 ] ) and ( [ eq - v0 ] ) . in particular , the unboundedness of due to its gaussian nature , makes the problem complicated .however , the following lemma is a simple consequence of the _ optimality principle_. [ lemma-9 ] + furthermore , if there exists the optimal strategy , we can show that it is unique due to the strict convexity of the cost function .( see , remark 2.2 of . )note that the form of the optimal hedging strategy in ( [ pi - opt ] ) can be easily found from ( [ eq - drift ] ) and the decomposition ( [ qd - decomp ] ) .the variance optimal measure used in the duality approach is closely related to .see propositions 1.5.2 and 1.5.3 of mania & tevzadze ( 2008 ) .+ although the three bsdes , and look very complicated at first sight , they have the following nice properties which make the mean - variance ( or quadratic ) hedging particularly useful for a large scale portfolio management : + _ + only follows a non - linear bsde. + ( and hence ) is independent from the hedging target and the cash - flow streams .+ depends on the hedging target and the cash - flow streams , but follows a linear bsde .+ ( and hence ) depends only linearly on the hedging target and the cash - flow streams . _+ + these properties are stemming from the fact that the optimal strategy is given by the projection of the hedging target in on the space spanned by the tradable securities . 
from , we can see that the optimal hedging strategy is linear in the hedging target as well as the other cash - flow streams for a given horizon .this means that , for a given wealth at time , the optimal hedging positions can be evaluated for each portfolio component separately .therefore , sharing the information about the overall wealth , a large scale portfolio can be controlled systematically by arranging desks in such a way that each desk is responsible for evaluating and hedging a certain sector of portfolio , such as equity - related and commodity - related sub - portfolios , etc .from the discussion in the last section , it becomes clear that solving the bsde for ( [ eq - v2 ] ) is the key .although the existence and uniqueness of the solution for ( [ eq - v2 ] ) are proven for the case with a bounded risk - premium process by kobylanski ( 2000 ) and kohlmann & tang ( 2002 ) , this is not the case in the current setup since arising from the kalman - bucy filter are gaussian and hence unbounded .although the general conditions are not known , we have a very useful method to directly solve it under certain conditions , which are likely to hold in most of the plausible situations .firstly , let us define the following change of variables : v_l(t)&:=&v_2(t ) + z_l(t)&:=&z_2(t)/v_2(t ) + _ l(t)&:= & _2(t)/v_2(t ) .then , ( [ eq - v2 ] ) can equivalently be given by a quadratic - growth bsde & & v_l(t)=-_t^t \{(||z_l(s)||^2-||_l(s)||^2)+ 2_s^z_l(s)+||_s||^2}ds + & & -_t^t z_l(s)^dn_s-_t^t _l(s)^dm_s .[ eq - vl ] we introduce a matrix - valued deterministic function defined by ( t):= ( _ d^_d)(t)-(_m^_m)(t ) where are matrices obtained by restricting to the first ( last ) rows of .furthermore , we use to represent a diagonal matrix whose first elements are and the others zero .[ lemma-10 ] + + proof : consistency between and can be checked easily by it - formula .one can match the dynamics of implied by and , and the dynamics obtained from it - formula applied to the hypothesized solution .see section 5 of for detailed calculation . + the ode for } ] , which is not satisfied unfortunately .however , it is clear that the solutions remain finite in a short enough interval ] order , one can directly check if the condition is satisfied in any case ._ there exists a bounded solution of },a^{[1]},a^{[0]}) ] . _+ for the case where itself follows a jump process or more generally a semimartingale , see a recent work by jeanblanc et.al.(2012 ) and the references therein . they have shown that we can still characterize the optimal strategy in terms of the three bsdes .unfortunately though , the bsde for becomes much more complicated and its solution is not yet known except very simplistic examples . in a differential form , the bsde for in ( [ eq - v1 ] ) is given by & & dv_1(t)=v_1(t)dt+e^v_l(t)dt + & & + z_1(t)^ ( dn_t+dt)+_1(t)^dm_t+j^a_1(t)d_t+j^d_1(t)d_t with the terminal condition .now , let us define_ t^&:=&1-_0^t _ s^^dn_s + & = & ( -_0^t ^dn_s- _ 0^t ||z_l(s)+_s||^2 ds ) . 
by lemma 3.9 in , is a true -martingale .thus , we can define a probability measure equivalent to on by |__t=_t^ .[ def - pa ] by girsanov - maruyama theorem , n_t^:=n_t+_0^t ds and form the standard -brownian motions .although & & _ t = a_t-_0^t ^a_s ds + & & _ t = d_t-_0^t ^d_s _ \{q_s->0}ds remain -martingales , their intensities are changed indirectly through the dependence on .then , one can easily evaluate as [ lemma-11 ] + + thus , the evaluation of is essentially equivalent to the pricing of an european contingent claim with an intermediate cash - flow stream . in the measure , the dynamics of the underlyings are & & ds_t=(t , s_t , y_t)(dn_t^-z_l(t)dt )+ & & dy_t=(t , s_t , y_t)(dn_t^-z_l(t)dt)+(t , s_t , y_t)(dm_t+_t dt ) + & & d_t=(_t - f_t_t-_d(t)^)dt+ ( t)d n^_t + m_t and are counting processes with intensity , which are , in turn , determined by . the procedures to run and these counting processes are given in remark 3 .assuming depends smoothly on the underlyings , it is easy to see _j&=&_i=1^d _ i , j + _ i = d+1^n_i , j + & & + _ i=1^n _ i , j , 1j d , [ eq - z1 ] which is the sum of the delta sensitivity with respect to each -adapted diffusion process multiplied by its volatility function .one also obtains and as & & j_1^a(t)=v_1(t-;a_t-+1)-v_1(t- ) + & & j_1^d(t)=_\{q_t->0 } where the first term is calculated by shifting the initial value of by , respectively .therefore , the pair of can be estimated by using the standard monte carlo simulations . combining the solution of obtained by the odes and the current value of wealth, one can completely specify the optimal hedging position from ( [ pi - opt ] ) .several numerical examples are available in although intermediate cash flows are not included .since follows a linear bsde , it is easy to see the following : [ lemma-12 ] + the difficulty in the evaluation of is quite similar to that of cva ( credit valuation adjustment ) , where we need to evaluate ( and its martingale coefficients ) in each path and at each point of time .naive application of nested monte carlo simulations would be too time - consuming for the practical use .the most straightforward way is to use the _ least square regression method _ ( lsm ) .if and included in given in ( [ v1-result ] ) have markovian properties with respect to , one can write as v_1(t)=f(t , s_t , y_t , a_t , d_t,_t , q_t ) [ eq - lsm ] with an appropriate measurable function .here , it is important to include and to recover the markovian property .the function is usually approximated by a polynomial function and the associated coefficients are regressed so that the square difference from the simulated is minimized . once the estimated function is given , the evaluation of in each path is straightforward .see and section 8.6 in for details on lsm .( -20,5) ( capital ) ( -200,170)variance ( -80,165) ( -40,120)a ( -279,135) ( -185,35)b although is unnecessary for getting the optimal hedging strategy , we need it to obtain the full value function . notice that the value function can provide valuable information to choose a profitable service - charge policy represented by .for example , consider the situation given in figure [ qd - variance ] , where the value functions for two different cases ( distinguished by ) of the service charges are given .note that remains the same since it is independent from . 
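as a side note on the regression step ( [ eq - lsm ] ) described above, the least-squares monte carlo approximation amounts to regressing simulated future values on a polynomial basis of the current state and then evaluating the fitted function along each path. a minimal sketch follows; the state variables and the simulated values are synthetic placeholders rather than the model of this paper.

```python
# minimal sketch of the least-squares regression step: simulated "future
# values" are regressed on a polynomial basis of the current state, and the
# fitted function is reused pathwise.  all quantities are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_paths = 5000
S = rng.lognormal(mean=0.0, sigma=0.2, size=n_paths)    # placeholder price state
Q = rng.poisson(lam=10, size=n_paths).astype(float)     # placeholder contract count
# stand-in for the simulated conditional quantity to be approximated
Y = 2.0 * S + 0.3 * Q + rng.normal(scale=0.5, size=n_paths)

# polynomial basis in (S, Q) up to second order
X = np.column_stack([np.ones(n_paths), S, Q, S**2, Q**2, S * Q])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

def regression_estimate(s, q):
    basis = np.array([1.0, s, q, s**2, q**2, s * q])
    return basis @ coef

print(regression_estimate(1.0, 10.0))   # estimate at a given state
```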
in this example , the case is definitely better than the case since it achieves a smaller hedging error with a smaller initial capital .if one allows to depend explicitly on , based on some empirical analysis for example , one can use the information of to achieve desirable intensities of investment flows .in this section , we consider a possible extension of the framework to handle the hedging problem for an insurance portfolio. for recent applications of the mean - variance criterion for life and non - life insurance , see and references therein .see for a general review on various control problems for the insurance industry .we shall show that one can work in a more realistic framework with imperfect information based on the method developed in the previous sections . for the underlyings as well as , we assume the same dynamics and the observability given in section [ sec - market ] can be dependent on the past history of as long as they satisfy the listed assumptions . ] .in addition to these processes , we introduce a random measure . the random measure , which describes the occurrence of loss event and its size ,is assumed to be observable to the fund manager .the cumulative loss process to the fund is given by _0^t _ k l(s , x ) ( dsdx ) , [ def - loss ] where is a compact support for the jump size distribution and is a positive constant . is introduced to represent the payment amount to the insured for a given loss at time .it can denote the minimum and/or maximum threshold , or the necessary triggers to be satisfied for the payment to the insured to occur .we assume , for simplicity , that there is no simultaneous jump among . in the current setup, the observable filtration is generated by .we assume that is a -predictable process for any . may represent , for example , various weather related variables such as the strength of the wind , atmospheric pressure , the amount of rainfall for the insurance - covered region for non - life insurance . for life insurance , can contain various indexes of individual health information aggregated at a portfolio level .if the insurance portfolio contains various protections written on quite different perils , covered regions or diseases , it should be better to model each of them separately to achieve a more accurate description .for this issue , we shall discuss an extension in section [ sec - jackson ] .we assume that the compensated random measure in is given by ( dtdx)=(dtdx)-_t(x)^(t , x_t-)_\{q_t->0}dxdt . here is the intensity of the event occurrence , is the density function of the loss given the occurrence of an insured event , and it is assumed to have the compact support for every ] due to the assumption on . the process _t = c_t-_0^t ^(s , x_s- ) _ \{q_s->0}ds is a -martingale . if the provided insurance contract is such that it terminates when an insured event occurs ( such as life insurance ), we can model it easily by redefining the number of contracts as , which is a queueing system with two exits . due to the assumption on and , one can see that the filtering for the risk - premium process is unaffected by the observation of .in particular , lemma [ lemma-4 ] holds also in the current case . 
as a result, the filtered risk - premium process has the same dynamics given in ( [ eq - zhat ] ) .let us now derive the filtering equation for .this can be done by defining the measure by the new process & & _2,t=1+_0^t_2,s-(-1)d_s + _ 0^t _ 2,s-(-1)d_s + & & + _ 0^t _ 2,s-(-1)d_s [ new - xi2 ] instead of ( [ eq - wtxi2 ] ) .we assume that is a true -martingale so that we can justify the measure change : .then , in addition to given in ( [ eq - wta ] ) and ( [ eq - wtd ] ) , we have _t = c_t-_0^t _ \{q_s->0}ds as a -martingale . the inverse process is given by & & _ 2,t=1+_0^t _ 2,s-(^a(s , x_s-)-1)d_s+_0^t _ 2,s-(^d(s , x_s-)-1)d_s + & & + _ 0^t _ 2,s-(^(s , x_s-)-1)d_s instead of ( [ eq - xi2 ] ) .one can confirm that lemma [ lemma-3 ] holds in the current setup due to the assumption that is -predictable process and the fact that are completely decoupled from the market in measure .thus , the unnormalized filter ] : & & q_t = q_0+_0^t r_sq_s - ds+_i_0^t ( _ s^a(i)-)q_s - d_s(i ) + _ i_0^t ( _ s^d(i)-)q_s - d_s(i ) + & & + _ i , j_0^t ( ^f_s(i , j)-)q_s - d_s(i , j ) + _ i _ 0^t ( _ s^(i)-)q_s - d_s(i ) where s are similarly defined as in lemma [ lemma-7 ] .let us suppose that the wealth process of the fund manager follows & & _ t^(s , w)=w+_s^t _ u^ds_u+_i_s^t _ u(i)q_u(i)du + _ i _s^t e_u(i)da_u(i ) + & & -_i_s^t g_u(i)dd_u(i)-_i , j_s^t f_u(i , j)df_u(i , j ) -_i_s^t _ k_il_i(u , x)_i(dudx ) + where denotes the cost associated with the switching from the -th to the -th fund , and is defined as in section [ sec - ins - setup ] for the fund .all the processes of coefficients are assumed to be -predictable and satisfy the necessary square integrability .the fund manager s problem is to minimize the quadratic hedging error v(t , w)=ess _ .the derivation of the optimal hedging strategy can be performed by a straightforward modification of those in section [ sec - insurance ] .one can check that the bspde for can still be decomposed into the three bsdes and that the optimal hedging strategy is given by the formula ( [ pi - opt ] ) with the same given in lemma [ lemma-10 ] .the expressions for and can be derived easily due to their linearity as before .in this work , the prices of securities , the occurrences of insured events and ( possibly a network of ) the investment flows are used to infer their drifts and intensities by a stochastic filtering technique , which are then used to determine the optimal mean - variance hedging strategy .a systematic derivation of the optimal strategy based on the bsde approach is provided , which is also shown to be implementable by a set of simple odes and the standard monte carlo simulation . 
as for the management of insurance portfolios, we have given a framework with multiple grades of loss severity , which allows a granular modeling of the change of demand for insurance products after the insured events with different sizes .we have applied the technique used in queueing analysis to treat a complex network of the investment flows , such as those in a group of funds within which investors can switch a fund to invest .although a lot of problems remain unsolved especially with regard to the model specifications , the recent great developments of computer systems capable of handling the so - called _ big data _ and wide interests among industries in the efficient use of information may make the installation of the framework a real possibility in near future .more concrete applications to a specific product or business model using real data will be left for a future research , hopefully in a good collaboration with financial as well as non - financial institutions .this research is partially supported by center for advanced research in finance ( carf ) .fujii , m. , takahashi , a. , 2013 , making mean - variance hedging implementable in a partially observable market_-with supplementary contents for stochastic interest rates- _ , " available at http://ssrn.com/abstract=2279398 .fujii , m. , takahashi , a. and sato , s. , an fbsde approach to american option pricing with an interacting particle method , " carf working paper series , carf - f-302 , available at http://ssrn.com/abstract=2180696 .jeanblanc , m. , mania , m. , santacroce , m. and schweizer , m , 2012 , mean - variance hedging via stochastic control and bsdes for general semimartingales , " _ the annals of applied probability _ ,22 , no . 6 , 2388 - 2428 .kohlmann , m. and tang , s. , 2002 , global adapted solution of one - dimensional backward stochastic riccati equations , with application to the mean - variance hedging , " _ stochastic processes and their applications_ 97 , 255 - 288 .
financial practitioners work in incomplete markets full of unhedgeable risk factors. making the situation worse, they are equipped with only imperfect information on the relevant processes. in addition to the market risk, fund and insurance managers have to be prepared for sudden and possibly contagious changes in the investment flows from their clients, so that they can avoid over- as well as under-hedging. in this work, the prices of securities, the occurrences of insured events and (possibly a network of) the investment flows are used to infer their drifts and intensities by a stochastic filtering technique. we utilize the inferred information to provide the optimal hedging strategy based on the mean-variance (or quadratic) risk criterion. a bsde approach allows a systematic derivation of the optimal strategy, which is shown to be implementable by a set of simple odes and standard monte carlo simulation. the presented framework may also be useful for manufacturers and energy firms to install an efficient overlay of dynamic hedging by financial derivatives to minimize the costs. *keywords:* mean-variance hedging, bsde, filtering, queueing, jackson's network, poisson random measure
there are three criteria for the eradication of an infectious disease : 1 .biological and technical feasibility ; 2 . costs and benefits ; and 3 .societal and political considerations . despite eradicationhopes for malaria , yaws and yellow fever in the twentieth century , smallpox remains the only human disease eradicated .current eradication programs include poliomyelitis ( polio ) , leprosy and guinea worm disease .measles , rubella , and hepatitis a and b are also biologically and technically feasible candidates for eradication . despite strong biological , technical and cost - benefit arguments for infectious - disease eradication , securing societal and political commitments is a substantial challenge . with communities more connected than ever ,the control or eradication of an infectious disease requires coordinated efforts at many levels , from cities to nations . for vaccine - preventable diseases , public health authorities plan immunisation strategies across varying regions with limited resources . the world health organization ( who ) has helped to organise global immunisation efforts , leading to significant global reduction in polio and measles cases .one vaccination strategy that has been utilised in the global fight against polio and measles is mass immunisation , which may be regarded as a pulse vaccination .the complex logistics required for these mass - immunisation campaigns magnifies the need for research into the effectiveness and optimal deployment of pulse vaccination .pulse vaccination has been investigated in several mathematical models , often in disease models with seasonal transmission .many diseases show seasonal patterns in circulation ; thus inclusion of seasonality may be crucial .agur et al .( 1993 ) argued for pulse vaccination using a model of seasonal measles transmission , conjecturing that the pulses may antagonise the periodic disease dynamics and achieve control at a reduced cost of vaccination .shulgin et al . (1998 ) investigated the local stability of the disease - free periodic solution in a seasonally forced population model with three groups : susceptible ( s ) , infected ( i ) and recovered ( r ) .they considered pulse vaccination and explicitly found the threshold pulsing period .recently , onyango and mller considered optimal periodic vaccination strategies in the seasonally forced sir model and found that a well - timed pulse is optimal , but its effectiveness is often close to that of constant - rate vaccination .in addition to seasonality , spatial structure has been recognised as an important factor for disease dynamics and control .heterogeneity in the population movement , along with the patchy distribution of populations , suggests the use of metapopulation models describing disease transmission in patches or spatially structured populations or regions .mobility can be incorporated and tracked in these models in various forms .common models include linear constant fluxes representing long - term population motion ( e.g. migration ) and nonlinear mass - action representing short - term mobility .liu et al .( 2009 ) and burton et al .( 2012 ) considered epidemic models with both types of movement . 
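for the basic unforced sir model with periodic pulse vaccination, the eradication condition obtained in these studies can be summarised as requiring the time average of the disease-free susceptible profile to lie below the epidemic threshold; the statement below is the standard one rather than a quotation of the cited papers, with s-tilde the periodic disease-free susceptible solution and tau the pulsing period.

```latex
% pulsed sir model without seasonal forcing: the linearised infected equation
% along the disease-free periodic solution is
%   I'(t) = ( \beta \tilde S(t)/N - (\gamma + \mu) ) I(t),
% so the infection dies out precisely when the time-averaged growth rate is
% negative, i.e.
\begin{equation*}
  \frac{1}{\tau}\int_{0}^{\tau} \tilde S(t)\,dt \;<\; \frac{N}{R_{0}},
  \qquad R_{0} \;=\; \frac{\beta}{\gamma+\mu} .
\end{equation*}
% with seasonal transmission the product \beta(t)\,\tilde S(t) is averaged instead.
```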
a possible inherent advantage of pulse vaccination in a spatially structured setting , discussed by earn et al .( 1998 ) , is that the disease dynamics in coupled regions can become synchronised by pulse vaccination , thereby increasing the probability of global disease eradication .earn et al .presented simulations of patch synchronisation after simultaneous pulse vaccinations in a seasonal seir metapopulation model in which the patch population dynamics were initially out of phase . here an additional population class of exposed ( e ) was considered .coordinating simultaneous pulse vaccination campaigns in connected regions may be vital for successful employment of pulse - vaccination strategies .indeed , employing synchronised pulse vaccinations across large areas in the form of national immunisation days ( nids ) and , on an international scale , with simultaneous nids , has been successful in fighting polio .an example of large - scale coordination among nations is operation mecacar ( the coordinated poliomyelitis eradication efforts in mediterranean , caucasus and central asian republics ) , which were initiated in 1995 .the project was viewed as a success and an illustration of international coordination in disease control . however ,public health , including the control of infectious diseases and epidemics , has usually been managed on a national or regional scale , despite the potential impact of population movement .recently , pulse vaccination has been analysed in epidemic metapopulation models . terry ( 2010 ) presented a sufficient condition for eradication in an sir patch model with periodic pulse vaccinations independently administered in each patch with linear migration rates , but left open the problem of finding a threshold quantity for eradication and evaluating the effect of pulse synchronisation and seasonality .yang and xiao ( 2010 ) conducted a global analysis of an sir patch model with synchronous periodic pulse vaccinations and linear migration rates ; however , they did not allow for the different patches to administer the pulses at distinct times and seasons . a major poliovirus transmission route in africa and the middle east is fecal - to - oral transmission . in this indirect route , often facilitated by inadequate water management, water plays an analogous role to that of a reservoir , although such an environmental reservoir does not allow the pathogen to reproduce .nevertheless , its effect on the pathogen dispersal can dramatically modify epidemic patterns .the competition between the direct and indirect transmission routes was examined in the case of highly pathogenic avian influenza h5n1 , showing that indirect fecal - to - oral transmission could lead to a higher death toll than that associated with direct contact transmission . 
in this article, we consider an sir metapopulation model with both short- and long - term mobility , direct and indirect ( environmental ) transmission , seasonality and independent periodic pulse vaccination in each patch .the primary objectives are to find the effective reproduction number , , prove that it provides a sharp eradication threshold and assess the optimal timing of pulse vaccinations in the sense of minimising .our mathematical model and analysis allow us to evaluate how pulse synchronisation across connected patches affects the efficacy of the overall pulse - vaccination strategy .we also determine how different movement scenarios affect the optimal deployment of vaccinations across the patches , along with considering how environmental transmission affects results .finally , we discuss how pulse vaccination and constant - rate vaccination strategies compare when considering the goal of poliomyelitis global eradication .this paper is organised as follows . in section [ sec2 ] ,we describe and give motivation for the mathematical model . in section [ sec3 ] ,we analyse the disease - free system , which is necessary to characterise the dynamics of the model . in section [ sec4 ] , is defined . in section [ sec5 ], we prove that if , the disease dies out , and if , then it is uniformly persistent . in section [ sec6 ], we consider a two - patch example , which provides insight into the optimal timing of pulse vaccinations , the effect of mobility and environmental transmission parameters on , and a comparison of pulse vaccination to constant - rate vaccination in this setting . in this section, we also prove that pulse synchronization is optimal for a special case of the model and provide numerical simulations to illustrate this result in more general settings .finally , in section [ sec7 ] , we provide a discussion of the implications of our results and future work to consider .we consider a variant of an sir metapopulation model with patches , each with populations of susceptible , infected and recovered denoted by , and for each patch .all three groups migrate from patch to patch , , at the rates , and .the per capita rates at which susceptible , infected and recovered leave patch are , and , respectively .the effect of short - term mobility on infection dynamics is modelled by mass - action coupling terms ; for example , . for this infection rate ,infected individuals from patch are assumed to travel to patch , infect some susceptibles in patch and then return to patch on a shorter timescale than that of the disease dynamics .conversely , susceptibles from patch can travel to patch , become infected and return to patch on the shorter timescale . in the model for poliomyelitispresented herein , both direct contact and indirect environmental routes are considered .the environmental contamination of the virus in each patch is described by a state variable , denoted by . infected individuals in patch ( )shed the virus into the environmental reservoir at the rate .the virus in the environmental reservoir can not reproduce outside of the host and decays at the rate .the virus in the environmental reservoir , , contributes to the infected population in patch through the mass - action term .direct transmission contributing to is represented by the mass - action term . 
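a minimal sketch of the continuous part of these dynamics for two patches is given below, ignoring the recovered class, vaccination of newborns, seasonality and the pulse vaccinations introduced next; the symmetric migration rate and all parameter values are illustrative assumptions.

```python
# continuous part of a two-patch sir model with linear migration, mass-action
# cross-transmission and an environmental reservoir; recovered individuals,
# newborn vaccination, seasonality and pulse vaccination are omitted.
import numpy as np
from scipy.integrate import solve_ivp

beta = np.array([[8e-4, 8e-5], [8e-5, 8e-4]])   # beta[i, j]: infected in j infect susceptibles in i
nu   = np.array([[2e-4, 2e-5], [2e-5, 2e-4]])   # environmental transmission coefficients
m    = 0.05                                     # symmetric migration rate
Lam  = np.array([10.0, 10.0])                   # birth rates
mu, gam = 0.02, 0.1                             # death and recovery rates
xi, delta = 0.5, 1.0                            # shedding and reservoir decay

def rhs(t, y):
    S, I, W = y[0:2], y[2:4], y[4:6]
    infection = (beta @ I) * S + (nu @ W) * S   # direct + environmental routes
    dS = Lam - mu * S - infection + m * (S[::-1] - S)
    dI = infection - (gam + mu) * I + m * (I[::-1] - I)
    dW = xi * I - delta * W                     # reservoir shed by infected, decaying
    return np.concatenate([dS, dI, dW])

y0 = np.array([400.0, 400.0, 1.0, 0.0, 0.0, 0.0])
sol = solve_ivp(rhs, (0.0, 365.0), y0, rtol=1e-8)
print(sol.y[2:4, -1])                           # infected in each patch after one year
```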
due to possible seasonality of poliovirus circulation ,both direct and environmental transmission parameters , , and are assumed to be periodic with a period of one year .pulse vaccination is modelled through impulses on the system occurring at fixed times .first , we consider a general pulse vaccination scheme with no periodicity .for each patch , pulse vaccinations occur at times where at time , a fraction of the susceptible population is instantly immunised and transferred to the recovered class . therefore and , where and denote limits from the right - hand side and left - hand side , respectively . for each patch , demography is modeled with constant birth rate , , into the susceptible class and a per capita death rate , .the parameter represents the fraction of newborns who are successfully vaccinated .the parameter is the recovery rate .note that both recovery from infection and successful vaccination induce perfect life - long immunity .all parameters are assumed to be non - negative , and the parameters , and are additionally assumed to be positive .we thus arrive at the following mathematical model : consider the non - negative cone of , denoted by .the following theorem shows existence and uniqueness of solutions to ( [ fmod ] ) , the invariance of and ultimate uniform boundedness of solutions . [ p1 ] for any initial condition , there exists a unique solution to system ( [ fmod ] ) , , which is smooth for all and the flow is continuous with respect to initial condition .moreover , the non - negative quadrant is invariant and there exists such that for all .the existence , uniqueness , and regularity for non - impulse times come from results that can be found in . in order to show the invariance of , consider the set . on this set , , so is invariant .also , notice that if .then , by uniqueness of solutions , we find that is invariant . to show ultimate boundedness , consider the total population of individuals , .then , adding all the appropriate equations of ( [ fmod ] ) , we obtain that where and . a simple comparison principle yields .this implies that where and .therefore , if denotes the family of solutions , then there exists such that for all . in order to analyse the asymptotic dynamics of the system , we assume some periodicity in the impulses . according to who guidelines , countries threatened by wild poliovirus should hold nids twice a year with 46 weeks separating the immunisation campaigns within a year .hence we consider a sufficiently flexible schedule in order to cover this guideline .suppose that , for patch , the pulse vaccinations occur on a periodic schedule of period .assume that there exists such that , where ; i.e. , there exists a common period for pulse vaccinations among the patches .for each patch , we assume that there are pulse vaccinations that occur within the period .more precisely , the pulse vaccinations for patch occur at times , where , and .note that the recovered ( or removed ) classes are decoupled from the remaining system and can thus be neglected .we obtain the following model : model ( [ mod ] ) will be analysed in the ensuing sections .in order to obtain a reproduction number , we need to determine the dynamics of the susceptible population in the absence of infection . with this in mind , consider the following characterisation of the vaccinations . within the time interval ] . thus the same is true for the right - hand side of ( [ rhs ] ) . 
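as a brief aside before the analysis, the impulsive structure just described, namely continuous dynamics between the scheduled pulse times with s(t_k^+) = (1 - theta) s(t_k^-) at each pulse, can be simulated directly by integrating between pulses and applying the jump; a single-patch sketch with illustrative parameter values follows.

```python
# single-patch sketch of the impulsive structure: integrate the ode between
# scheduled pulse times and move a fraction theta of susceptibles to the
# recovered class at each pulse.  all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta, gam, mu, Lam = 8e-4, 0.1, 0.02, 10.0
theta, tau = 0.4, 365.0                           # pulse fraction and period
pulse_times = np.arange(0.0, 10 * tau, tau / 2)   # two pulses per year, ten years

def rhs(t, y):
    S, I = y
    return [Lam - mu * S - beta * S * I, beta * S * I - (gam + mu) * I]

y, t = np.array([400.0, 1.0]), 0.0
for tp in pulse_times:
    if tp > t:
        y = solve_ivp(rhs, (t, tp), y, rtol=1e-8).y[:, -1]
        t = tp
    y[0] *= (1.0 - theta)                 # impulse: S(t_k^+) = (1 - theta) S(t_k^-)
print(y)                                  # state just after the final pulse
```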
on each piece , solutions have continuous dependence on parameters .therefore we can conclude that solutions will have continuous dependence on parameters for the whole interval .hence is continuous with respect to .so , for sufficiently small , since by ( [ r01 ] ) . the matrix , where represents the right - hand side of ( [ rhs ] ) as a linear vector field , is quasi - positive .without loss of generality , we can assume the non - diagonal entries of are positive .if any are zero , add a sufficiently small constant to that entry and the spectral radius of interest will still fall below unity , and inequality ( [ rhs ] ) will still hold . thus the matrix will be strictly positive ( since the vector field will point away from the boundary ) . then , by the perron frobenius theorem , we find that is a simple eigenvalue with strictly positive eigenvector .hence where and is -periodic .so as .since is quasi - positive , subsystem ( [ rhs ] ) forms a comparison system using theorem 1.2 in .choose a constant such that , where .then for all .hence and as for since .then , for any and sufficiently large time , the impulsive system representation of the left - hand side of the above inequality also has a globally stable -periodic solution .another application of the comparison system principle yields for sufficiently large .continuous dependence on parameters implies that can be made arbitrarily close to as . clearly , the fixed point equation ( from the proof of ( [ lineargas ] ) ) depends continuously on the matrix .thus where is arbitrary and is chosen sufficiently small .the functions and are uniformly continuous for where .it follows that if is chosen sufficiently small , then for all where is arbitrary .the result follows .we now turn our attention the dynamics when . in order to prove that the disease is uniformly persistent in all patches when , we need to make extra assumptions on the -periodic matrix .assume that : * there exists such that is irreducible .biologically , this irreducibility assumption means that , at some time during a period , the patches have the property that infection in an arbitrary patch can cause infection in any other patch through some chain of transmissions or migrations among a subset of patches . if this assumption is satisfied , then the system is uniformly persistent , detailed in the following theorem .[ persist ] suppose that and ( a1 ) holds .then the system ( [ mod ] ) is uniformly persistent ; i.e. , there exists such that if or , for some , then we intend to use the approach of acyclic coverings to prove uniform persistence .we will use theorem 1.3.1 from .let , and . define the poincarmap , where is a solution to the full system ( [ mod ] ) .note that is a continuous map on the complete metric space .in addition , is forward invariant under the semiflow and hence .define the maximal forward invariant set inside by .first , we show that is uniformly persistent ; i.e. 
, there exists such that , for all , .note that is a compact map and is point dissipative by proposition [ p1 ] .the global attractor of in is the singleton by proposition [ lineargas ] .therefore on the boundary subset , where and is defined in proposition [ lineargas ] .let .then since all eigenvalues of are greater than unity ( where and are defined in proposition [ lineargas ] ) .thus is acyclic .we next show that is isolated .consider the derivative of the poincar map evaluated at , .note that the eigenvalues of are also the floquet multipliers of the linearized system ( [ mod ] ) along the disease - free periodic orbit .the linearization matrix is block triangular .it can be seen that . by assumption ( a1 ) , the eigenvector corresponding to , which we call , has positive `` infection components '' ; i.e. , , .an application of the stable manifold theorem for discrete - time dynamical systems implies that is isolated .therefore the remaining hypothesis to check is that . by way of contradiction , suppose that there exists such that as .let be arbitrary .then there exists such that . in particular , for all .notice that the functions and for ( ) are uniformly continuous since their derivatives are bounded for all . by this uniform continuity and the compactness of ] ) can remain unvaccinated by being in patch 2 when patch 1 employs pulse vaccination and in patch 1 when patch 2 conducts their pulse vaccination .there is evidence that this effect has led to measles epidemics in the coupled regions of burkina faso and cte divoire in africa .synchronising the pulses can most effectively reach the migrant population .indeed , when the average total susceptible population over the year is plotted with respect to , the graph has the same shape as figure [ fig : mig ] . in other words ,synchronising the pulses will produce the highest time - averaged coverage for a fixed proportion , , of susceptibles that can be vaccinated in each pulse .this is of course expected from theorem [ r0theorem ] , where we prove these statements about synchronisation ( locally and in a more restricted setting ) .next , suppose that the patches are only coupled through mass - action cross transmission without seasonality ; i.e. , , and .figure [ fig : cross ] displays numerical calculations of versus the phase difference for this case .again , is always minimised when the pulses are synchronised ; i.e. , . however , this case is more subtle than the previous one .when the average total susceptible population over the year is taken as a function of the phase difference , it is not hard to see that this will be constant as varies between and .thus the optimality of pulse synchronisation can not be explained like the previous case where the ( averaged ) susceptible population was minimised when , and theorem [ r0theorem ] can not be applied . also , observe that the phase difference becomes a non - factor as in figure [ fig : crossb ] . in this case, the contribution of cross transmission becomes equal to within - patch transmission when , causing the infected in a patch to have equal magnitude of correlation with either pulse .this is likely the reason that the phase difference does not affect when .theorem [ r0theorem ] implies that , the reproduction number as a function of phase difference , may be most sensitive to when , , and are large ; this is confirmed in simulations . 
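the values of the reproduction number as a function of the phase difference discussed here can be reproduced numerically. one standard route for periodic systems, not necessarily the implementation behind the figures, is to locate the unique lambda at which the monodromy matrix of the linearised infection subsystem, with the new-infection terms divided by lambda, has spectral radius one. the sketch below does this for two patches without seasonality or environmental transmission, with illustrative parameter values; varying phi traces out the kind of phase-difference sweep shown in the figures.

```python
# effective reproduction number for a two-patch pulse-vaccination model,
# computed as the unique lambda > 0 at which the monodromy matrix of
# dx/dt = (F(t)/lambda - V) x over one period has spectral radius one.
# this is a sketch of the computation, not the paper's implementation.
import numpy as np
from scipy.integrate import solve_ivp

Lam, mu, gam = 10.0, 0.02, 0.1
beta, c, m   = 6e-4, 0.1, 0.05          # within-patch rate, coupling, migration
theta, tau   = 0.4, 365.0               # pulse fraction and common period
phi          = 0.5 * tau                # phase difference between the pulses
pulses = [(0.0, 0), (phi, 1)]           # (time within the period, patch index)

def sbar_rhs(t, S):
    """disease-free susceptible dynamics with symmetric migration."""
    return Lam - mu * S + m * (S[::-1] - S)

def one_period(S0):
    """advance the disease-free impulsive system over one period."""
    S, t = S0.copy(), 0.0
    for tp, i in sorted(pulses) + [(tau, None)]:
        if tp > t:
            S = solve_ivp(sbar_rhs, (t, tp), S, rtol=1e-8).y[:, -1]
            t = tp
        if i is not None:
            S[i] *= (1.0 - theta)       # pulse in patch i
    return S

S = np.array([Lam / mu, Lam / mu])      # converge to the periodic orbit
for _ in range(50):
    S = one_period(S)

V = np.array([[gam + mu + m, -m], [-m, gam + mu + m]])

def spectral_radius(lmbda):
    """rho of the monodromy matrix of dx/dt = (F(t)/lmbda - V) x."""
    def rhs(t, y):
        Sb, X = y[:2], y[2:].reshape(2, 2)
        F = beta * np.array([[Sb[0], c * Sb[0]], [c * Sb[1], Sb[1]]])
        return np.concatenate([sbar_rhs(t, Sb), ((F / lmbda - V) @ X).ravel()])
    y, t = np.concatenate([S, np.eye(2).ravel()]), 0.0
    for tp, i in sorted(pulses) + [(tau, None)]:
        if tp > t:
            y = solve_ivp(rhs, (t, tp), y, rtol=1e-8).y[:, -1]
            t = tp
        if i is not None:
            y[i] *= (1.0 - theta)       # pulses act on the susceptibles only
    return max(abs(np.linalg.eigvals(y[2:].reshape(2, 2))))

lo, hi = 0.5, 10.0                      # bracket; spectral_radius decreases in lambda
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if spectral_radius(mid) > 1.0 else (lo, mid)
print("effective reproduction number ~", 0.5 * (lo + hi))
```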
from figures [ fig : migb ] and [ fig :crossb ] , it is seen that the migration rate and coupling factor strongly affect the amplitude of .if the migration rate is large or if is close to a certain value ( around 0.1 in figure [ fig : crossb ] ) , pulse synchronisation becomes increasingly important , since can vary largely with . in figure[ fig : r0parama ] , observe that , as and increases , while keeping and fixing the other parameters , the amplitude of increases . in figure[ fig : r0paramb ] , we plot the pulse vaccination proportion required for as a function of for three different wild ( before immunisation ) reproduction numbers , .as increases , more vaccination is required to bring to unity and the `` phase effect '' increases . for the cases where the amplitude of is relatively large , it is vital to synchronise the pulses since the parameter can be the difference between extinction and persistence of the pathogen . in figure[ fig : deter ] , there are simulations of the system ( [ 2patch ] ) in the case of in - phase pulses ( ) , resulting in eradication , and out - of - phase pulses ( ) , resulting in disease persistence .now consider identical patches with seasonality , where . here is a seasonal phase - shift parameter , which allows us to vary the timing of the pulses throughout the year . in the examplesimulated in figure [ fig : seas ] , it is optimal to synchronise the pulse vaccinations and to execute just before the high - transmission season .this finding agrees with results obtained for single patch sir models .the importance of synchronising the pulses increases with migration rate , while the sensitivity to timing the pulses with respect to seasonality increases with the seasonal forcing amplitude .also , the sensitivity to timing pulses with respect to each other and seasonality both increase with , , and .vs phase difference between pulses for the system with seasonality .the seasonal transmission is of the form where is the seasonal phase shift .here , the migration rate is , there is no cross - transmission ( ) , and the other parameters are as in table [ table ] . in this case , the results show that it is best to synchronise pulse vaccinations and to execute them during the season before the high - transmission season . ,width=340 ] if the seasonal transmission coefficients for the two patches are not in phase , then optimal timing of pulses with respect to seasonality can be in conflict with synchronising the pulses .this creates a trade - off between synchronising the pulses and optimally timing the pulse in each patch according to the transmission season . in the pulse vaccination operation against polio , operation mecacar ,public health officials had to consider this trade - off . in this case, they decided that pulse synchronisation was most important .theoretically , the optimal timing of pulse vaccinations should depend on the specific parameters , especially the relative size of migration rate to seasonal forcing amplitude . to illustrate this phenomenon, we consider transmission rates and for patch 1 and patch 2 , respectively . here is the phase difference between the seasonal transmission rates of patch 1 and patch 2 . in figure[ fig : seasop ] , is calculated for the case where the seasonal transmission rates are out of phase ; i.e. , . in figure[ fig : seasopa ] , the migration rate is set to and the mass - action coupling is . 
for this case , the seasonal transmission has a larger effect than the migration , and it is best to desynchronise the pulses so that each pulse occurs in the season before the higher transmission season . in figure[ fig : seasopb ] , the migration rate is assumed to be larger ( ) ; in this scenario , it is best to synchronise the pulses .an interesting and possibly applicable exercise is to compare a constant - vaccination strategy with the pulse - vaccination strategy . from a theoretical standpoint ,it is important to reconcile results obtained for pulse vaccination with the findings for a smooth , constant vaccination rate . on the practical side, disease - control authorities may like to know the optimal vaccination strategy based on a simple cost measure . the basic measure that will be used to quantify the cost of a vaccination strategy is vaccinations per period calculated at the disease - free periodic solution . from an economic perspective, this cost measure has the appeal of simplicity . to understand why this definition can also be the dynamically sound way of measuring cost , it is instructive to consider the case of isolated patches or , without loss of generality , a single patch under a general periodic vaccination strategy .specifically , consider the following system : where the is a -periodic vaccination rate and the transmission rate , , is -periodic . in the case of constant vaccination , , where . for pulse vaccination , , where is the dirac delta mass centred at and .in , the authors rigorously define the appropriate space of periodic vaccination rates to include the dirac delta mass and guarantee existence of a unique disease - free susceptible periodic solution , , for any periodic vaccination rate in this setting .the cost of vaccination ( vaccinations per period calculated at ) is using the next - generation characterisation ( [ rcomp ] ) , the effective reproduction number , , can be explicitly found as onyango and mller studied optimal vaccination strategies in this model in terms of minimising .here we give a simple representation of that can yield insight into comparing vaccination strategies , but do not provide the rigorous construction of the optimal strategy done by onyango and mller .specifically , we rewrite for a general periodic vaccination strategy in a form that compares it to the constant - vaccination strategy of equal cost . first , as noted in , by integrating the equation over one period , the following can be obtained : define the average transmission rate as . for constant vaccination , , so we find that and the effective reproduction number is .for the periodic vaccination rate , we rewrite the effective reproduction number by comparing it to a constant - vaccination strategy of equal cost : if we normalise by letting and denote ( the reproduction number in the absence of vaccination ) , then the following is obtained : clearly , if is constant i.e. , then all vaccination strategies are equivalent , in particular pulse- and constant - vaccination strategies , and .this observation provides justification as to why is an appropriate cost measure from an epidemiological point of view .when is not constant , then a different result is obtained .define and notice that .then acts as a weighting function and can be chosen to maximise , thereby minimising . 
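for reference, the single-patch effective reproduction number being discussed here can be written in the averaged form below, with s-bar_psi the disease-free susceptible profile induced by the periodic vaccination rate psi; a mass-action normalisation is assumed where the source display is not explicit. the comparison between strategies of equal cost then amounts to comparing how the susceptible profile weights the transmission rate over the period.

```latex
% single patch, tau-periodic transmission beta(t) and periodic vaccination
% rate psi with disease-free susceptible profile \bar S_\psi(t); with a single
% infected compartment the next-generation characterisation reduces to a time
% average (mass-action normalisation assumed):
\begin{equation*}
  \mathcal R_{E}(\psi)
  \;=\; \frac{1}{(\gamma+\mu)\,\tau}\int_{0}^{\tau}\beta(t)\,\bar S_{\psi}(t)\,dt .
\end{equation*}
% for constant beta this depends on psi only through the time average of
% \bar S_\psi, so strategies of equal cost give the same value.
```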
intuitively , a susceptible profile that is minimal for the range of values where and maximal for would seem to work the best .the rigorous construction of the optimal vaccination strategy was carried out by onyango and mller .they found that a single , well - timed pulse is the optimal strategy ( assuming that the allotted cost can be exhausted by a single pulse ; otherwise , the optimal susceptible profile requires on \subset [ 0 , \tau) ] ) .for the pulse - vaccination strategy , we find that it can be inferred that the advantage of the optimal pulse vaccination over constant vaccination ( in terms of difference in ) increases with , ( when remains fixed ) , and the amplitude of .a natural question to ask is whether similar results can be obtained for the two - patch model .first , for the case of no seasonality , does the equivalence of vaccination strategies hold ?the cost can still be defined as number of vaccinations per period in each patch calculated at the disease - free periodic solution .the vaccination rate has two components . consider the diagonal matrix and the disease - free periodic solution vector .then ; here is a vector containing the cost in each patch .the effective reproduction number , , is defined in section [ sec4 ] , but can not be explicitly expressed for multiple patches . using notation from section [ sec4 ] , we note that the reproduction number , , for constant vaccination in the case of no seasonality ( i.e. , are independent of time ) is found to be , which agrees with the next - generation matrices for autonomous disease - compartmental models . under the conditions of theorem [ r0theorem ] no cross - transmission and no migration of infected the equivalence of vaccination strategies of equal cost holds for constant transmission rate by ( [ r0diag ] ) .numerical simulations showed that shifting the phase difference between the pulse vaccinations can alter the value of when there is cross transmission and no migration ( figure [ fig : cross ] ) , even though the cost remains constant .thus the equivalence of vaccination strategies of equal cost can not hold for the general constant transmission case in system ( [ 2patch ] ) . however , we performed simulations with many different parameters showing that the synchronised pulse vaccinations have values of very close ( within the range of numerical error ) or identical to the reproduction number for the constant - vaccination strategy of equal cost . for the case of no cross transmission ( but migration of _ both _ susceptible and infected ) , simulations produced identical or nearly identical reproduction numbers for constant and pulse vaccinations of equal cost , independent of the phase difference .this is not in contradiction with figure [ fig : mig ] , since shifting the phase difference in the migration model alters the cost of vaccination ; i.e. , synchronised pulses result in more vaccinations per period than desynchronised pulses in model ( [ 2patch ] ) when .one implication of the cases where pulse vaccination and constant vaccination of equal cost agree on the value of is that results obtained in prior work on the autonomous multi - patch model can carry over to the impulsive model .for example , an imbalance in migration rates has been shown to strongly affect in previous work on metapopulation models . the same result is found in the case of pulse vaccination , as shown in figure [ fig : optdeploy ] . 
for otherwise identical patches , an imbalance in migration rates causes the susceptibles and infected to concentrate more heavily in one patch , which increases the overall effective reproduction number .this affects how the vaccine should be optimally distributed among the two patches , as illustrated in figure [ fig : optdeploy ] . as in the single - patch model , including seasonality induces an advantage of well - timed pulse vaccination over constant vaccination of equal cost . in figure[ fig : compare ] , we see that , as the amplitude of seasonality increases , synchronous pulse vaccinations applied the season before the high - transmission season can become more and more advantageous . simulations also show that the migration rate does not affect for the case of identical patches and simultaneous pulses .the advantage of pulse vaccination over constant vaccination depends on the parameters , as detailed previously . for the simulations in figure [ fig : compare ] ,pulse vaccination can offer a substantive advantage over constant vaccination .hence the inherent advantage of pulse vaccination in a seasonal model may provide motivation for its employment over constant vaccination , contrary to what is stated by onyango and mller .finally , we consider how environmental transmission affects the results . to begin this section ,we state a general theorem about the effective reproduction number for the autonomous ( unpulsed ) version of the general model ( [ mod ] ) with environmental transmission .the following theorem states that the effective reproduction number for the autonomous version of the general model ( [ mod ] ) with environmental transmission is identical to the effective reproduction number of the autonomous model ( [ mod ] ) without environmental transmission , but with the redefined direct transmission parameter .[ envr0 ] denote as the effective reproduction number of the autonomous version ( [ mod ] ) .let and denote the effective reproduction number of the autonomous version of the multi - patch sir sub - model ( no environmental transmission ) in ( [ mod ] ) with the direct transmission parameter as . then . to find the reproduction number , , for the autonomous version of ( [ mod ] ), we utilize the standard next - generation approach .then the infection component linearization at the disease - free equilibrium is , where the matrices and can be written in the block - triangular form : in which are matrices . here and are diagonal matrices with and ( ) as the respective diagonal entries .the entries of matrices , and are as follows : and , where is the kronecker delta function and is the disease - free equilibrium . then is the spectral radius of , so . now define and consider the effective reproduction number , , of the autonomous version of ( [ mod ] ) with no environmental transmission , but with direct transmission rate .it is not hard to see that .thus , and the result is obtained .thus , for the autonomous case , the addition of environmental transmission to an sir metapopulation model , by considering the system ( [ mod ] ) , does not qualitatively affect the effective reproduction number .we should note that environmental transmission can result in a substantive delay in epidemic onset and its duration of first peak when compared to the analogous regime of direct transmission , so the nature of the transient dynamics is affected by environmental transmission . 
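in the statement of theorem [ envr0 ], a commonly used form of the redefined direct-transmission parameter absorbs the environmental route through its quasi-stationary contribution: with nu_i the environmental transmission rate, xi_i the shedding rate and delta_i the reservoir decay rate in patch i, it reads as below. the exact expression in the source is not reproduced here, so this should be read as the standard reduction rather than a quotation.

```latex
% environmental route folded into an effective direct-transmission rate
% (standard quasi-stationary reduction for a reservoir with shedding rate xi_i
%  and decay rate delta_i in patch i):
\begin{equation*}
  \tilde\beta_{i} \;=\; \beta_{i} \;+\; \frac{\nu_{i}\,\xi_{i}}{\delta_{i}},
  \qquad i = 1,\dots,n .
\end{equation*}
```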
for simulations ,we include the environmental parameters in the two - patch model ( [ 2patch ] ) and suppose the the patches are identical .then , using the notation from the previous identical patch case , we can write the infected - component equations as : where is the total transmission rate , , , is the fraction of environmental transmission , and is fraction of cross - transmission for direct and environmental transmission , respectively .then , by theorem [ envr0 ] , the effective reproduction number for the autonomous model with constant per capita vaccination rate of susceptibles , , is clearly , adding environmental transmission to the identical two - patch model ( [ 2patch ] ) does not alter the autonomous ( without pulse vaccination ) patch reproduction number if we re - define the transmission rate in ( [ r0hat ] ) to be .however , when pulse vaccination is introduced into the model , we find that the fraction of environmental transmission , , affects . in figure[ fig : enva ] , synchronous pulse vaccinations are compared to constant vaccination as is varied .the reproduction number , , under the pulse vaccination shows non - monotone behaviour with respect to , with a maximum occurring around and the minimum occurring at ( where all of the transmission is due to the environment ) .we remark that this graph looks the same for many different values that we utilised for the migration rate and cross transmission , in particular for the case of isolated patches .the other parameters are as in table [ table ] with days and .note that the parameter values for and are absorbed into through a rescaling .of course , we know from before that when , the vaccination strategies yield identical ( proven in the isolated patch case without seasonality ) .in contrast , even in the single patch case , environmental transmission can cause disagreement in the reproduction numbers for pulse- and constant - vaccination strategies of equal cost .this result can be viewed as an impulsive analogue to that showing that sinusoidal transmission alters the reproduction number for an seir model , but leaves the reproduction number for the sir model the same as with constant transmission .indeed , an seir model can be seen as a special case of the no - impulse environmental transmission model . in figure[ fig : envb ] , we vary the phase difference , , between the vaccination pulses for three different values of ( 0 , 0.2 and 1 ) , when .the remaining parameters are as in table [ table ] .we have studied pulse vaccination in metapopulations , using poliovirus vaccination as our focus . by allowing each patch to have distinct , periodic pulse - vaccination schedules connected with a common period along with considering seasonality , environmental transmission and two types of mobility we add more generality and complexity to prior models .the effective reproduction number , , is defined for the model , system ( [ mod ] ) , and found to be a global threshold .if , then the disease dies out ; on the other hand , when , the disease uniformly persists . 
through theoretical analysis and numerical simulations , we were able to gain insights into optimising vaccination strategies in the metapopulation setting .theorem [ r0theorem ] and the supporting numerical simulations show that synchronising vaccination pulses among connected patches is key in minimising the effective reproduction number .an open problem is to analytically prove that synchronising the pulses minimises under more general conditions than are assumed in theorem [ r0theorem ] .evidence from the epidemiological data suggests that pulse synchronisation at different spatial scales influences the effectiveness of a vaccination campaign .based on field studies , the who recommends that the duration of the vaccination campaign , in the form of national immunisation days , be as short as possible ( 12 days ) .the importance of administering the vaccine across a whole country in 12 days , as opposed to taking a longer period of time , may be in part due to the higher levels of synchronisation for the shorter duration vaccination campaign .increased seroconversion rates also seem to play a role in the optimality of mass vaccinations with short duration . on an international scale , the effectiveness of operation mecacar and a study of the effect of migration on measles incidence after mass vaccination in burkina faso point to the importance of pulse synchronisation .our study highlights the critical role that the who and national governments can play in optimising disease control by synchronising mass vaccination campaigns among countries and regions . another important problem is comparing the effectiveness of periodic mass ( pulse ) vaccination versus routine ( constant ) vaccination .disease - control authorities must consider certain logistical aspects , which may affect the cost of implementing a particular strategy . from a mathematical perspective , the fundamental starting point for comparison is to consider for strategies of equal vaccinations per period .for the case of no seasonality and environmental transmission , we find some cases where the strategies are equivalent in terms of .when seasonality is included , a well - timed pulse - vaccination strategy ( simultaneous pulses administered during the season before the high - transmission season ) is optimal ( assuming the patches have synchronous seasons ) , similar to results for the single - patch sir model .future work will consider comparing pulse vaccination and constant - vaccination strategies in a stochastic model , which yields some insights not seen in the deterministic setting .more work needs to be done in the case of environmental transmission . 
when indirect transmission was considered to be a major mode of transmission in other studies , a delay in epidemic onset and its extension when compared to the analogous regime of direct transmissionwere observed .this could be explained by the persistence of the virus in the environment leading to new infections generated over a longer duration than that of the direct contact .such a two - step mechanism , with a human - to - environment segment and an environment - to - human segment , could lead to delay and extension of the effective infectious period when compared to that of direct human - to - human transmission .interestingly , we found that varying the fraction of environmental transmission in the system alters the effective reproduction number under pulse vaccination , contrary to results for the case of constant - rate vaccination .further consideration of the interaction of environment - induced delay with the influence of seasonality on environmental transmission and pulse vaccination is the subject of our ongoing work . finally , we mention the importance of incorporating mobility and spatial structure into disease models .in addition to our findings about the how mobility induces an advantage to synchronise pulse vaccination , population movement has other implications for disease control .as found in previous work on autonomous models , imbalance in migration rates among the patches can have a large effect on the overall reproduction number , which may alter the optimal vaccine distribution among patches or may influence disease - control strategies related to movement restriction .the combination of population movement with complexities of control strategies and disease transmission presents many problems for which mathematical modelling may yield valuable insight .we are grateful to two anonymous reviewers whose comments greatly improved the manuscript .rjs ? is supported by an nserc discovery grant . for citation purposes ,note that the question mark in smith ? " is part of his name .birmingham , m.e .aylward , s.l .cochi and h.f .hull ( 1997 ) . national immunization days : state of the art . j. infect .breban , r. , j. drake , d.e .stallknecht , p. rohani ( 2009 ) .the role of environmental transmission in recurrent avian influenza epidemics .plos comput biol 5(4 ) : e1000346 .fine , p.e.m . and i.a.m .carneiro ( 1999 ) .transmissibility and persistence of oral polio vaccine viruses : implications for the global poliomyelitis eradication initiative .american journal of epidemiology 150(10 ) : 10011021 .losos j. report of the work group on viral diseases . in : goodman ra , foster kl , trowbridge fl , figueroa jp , eds . global disease elimination and eradication as public health strategies .bull world health organ .1998;76(suppl2):94102 ., r.j . , p. cloutier , j. harrison and a. desforges ( 2012 ) . a mathematical model for the eradication of guinea worm disease .in : understanding the dynamics of emerging and re - emerging infectious diseases using mathematical models , s. mushayabasa and c.p .bhunu , eds , pp133156 .world health organization regional offices for europe and the eastern mediterranean .operation mecacar : eradicating polio , final report 19952000 .copenhagen , denmark : world health organization regional office for europe , 2001 .zipursky , s. , l. boualam , d.o .cheikh , j. fournier - caruana , d. hamid , m. janssen , u. kartoglu , g. waeterloos and o. ronveaux ( 2011 ) . 
assessing the potency of oral polio vaccine kept outside of the cold chain during a national immunization campaign in chad. vaccine 29(34):5652-5656.
mass-vaccination campaigns are an important strategy in the global fight against poliomyelitis and measles. the large-scale logistics required for these mass immunisation campaigns magnifies the need for research into the effectiveness and optimal deployment of pulse vaccination. in order to better understand this control strategy, we propose a mathematical model accounting for the disease dynamics in connected regions, incorporating seasonality, environmental reservoirs and independent periodic pulse vaccination schedules in each region. the effective reproduction number is defined and proved to be a global threshold for persistence of the disease. analytical and numerical calculations show the importance of synchronising the pulse vaccinations in connected regions and of timing the pulses with respect to the pathogen circulation seasonality. our results indicate that it may be crucial for mass-vaccination programs, such as national immunisation days, to be synchronised across different regions. in addition, simulations show that a migration imbalance can increase the effective reproduction number and alter how pulse vaccination should be optimally distributed among the patches, similar to results found with constant-rate vaccination. furthermore, contrary to the case of constant-rate vaccination, the fraction of environmental transmission affects the value of the effective reproduction number when pulse vaccination is present.
1. department of mathematics, vanderbilt university, nashville tn, cameron.j.browne.edu
2. department of mathematics and faculty of medicine, the university of ottawa, ottawa on, rsmith43.ca (to whom correspondence should be addressed)
3. department of mathematics, massachusetts institute of technology, boston ma, lbouro.edu
probabilistic graphical models are widely used to model neural connectivity and the transfer of information between regions of the brain . in brief , vertices indexed by in a directed acyclic graph ( dag ) are identified with random variables that represent neural activity at a particular region and edges between the vertices describe conditional independence statements , whose interpretation depends on both the underlying statistical model for the data and the context in which data are obtained . in many neuroscience applications , subject - specific connectivity ( i.e. the set of edges ) itself is uncertain and an important challenge is to infer this structure from experimental data .there has been considerable statistical research into inference for graphical models in general over the last decade , with particular emphasis on bayesian networks ( bns ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , gaussian graphical models ( ggms ; * ? ? ?* ; * ? ? ?* ) and discrete graphical models .nevertheless there remain two substantive barriers to the inference of graphical models from data : firstly , inferred graphical structure is often not robust to reasonable perturbation of the underlying data .this is due to a combination of the high variance of graphical estimators themselves and any additional variance that is introduced if the structure learning algorithm returns only an approximation to the intended estimator .secondly , conventional model selection criteria for graphical models are often biased towards selecting more complex models ( i.e. more edges ) , since there are typically very many models in which the data - generating model is nested ; these models are also able to fit the data well ( albeit with some coefficients close or equal to zero ; * ? ? ?consequently many more data are required to exclude more complex alternatives .taken together , these factors limit the extent to which neural connectivity can be accurately recovered from data . 
many experimental designs in neuroscience involve data collected on multiple subjects, indexed by , that may differ with respect to neural connectivity, such that the corresponding graphs may be subject-specific. efforts to analyse multi-subject experimental data have previously focussed on hierarchical models and imaging data, rather than on connectivity _per se_. given that elements of neural architecture are largely conserved between subjects, it is natural to leverage this similarity in order to improve statistical efficiency, both by improving the robustness of inferred graphical structure and by reducing small-sample bias. the statistical challenge of estimating multiple related graphical models has recently received much attention: for ggms, and others exploited penalties, such as the fused graphical lasso, to couple together inference for multiple related subjects. such penalised likelihood methods are computationally tractable and scale well to high dimensions. these studies demonstrate that it is possible to increase statistical efficiency, often considerably, by formulating an appropriate joint model that couples together multiple graphs. likewise, the methodology improves robustness by requiring that graphical structure be approximately invariant to perturbations of the data that are, in effect, provided by the subjects themselves. whilst useful in many applications, ggms are undirected graphs and hence cannot represent the direction of information flow between neural regions. more fundamentally, ggms do not permit the causal inference that is typically the scientific objective. for this reason we focus attention on graphical models, such as bns, that are based on dags and have an associated theory of inferred causation. research focussing on dags in this setting includes , who constructed a hierarchical model in which graph structure was conserved between subjects but the parameters that describe the data-generating process were subject-specific.
went further by permitting subject - specific graph structure and parameters in the context of gaussian ancestral graphs whose parameters are constrained by a hierarchical model .this latter work is closest in spirit to the methodology that we discuss below , but we do not restrict attention to either stationary data or gaussian data , rendering our approach considerably more flexible .until very recently , estimation of more general dags required either the strong assumption that an ordering of the variables applied equally to all subjects , or the use of expensive computational approximations such as markov chain monte carlo that scale extremely poorly as either the number of variables or number of subjects grows .an exact algorithm that facilitates the joint estimation of multiple dags was recently developed in the sister paper , viewing the estimation problem within a hierarchical bayesian framework ( somewhat similar to a random effects model for the graph structure ) and applying advanced techniques from integer linear programming to obtain a _maximum a posteriori _ estimate of all dags simultaneously .the availability of exact algorithms offers the opportunity to analyse multi - subject neural connectivity using causal dag models , whilst leveraging the similarity between subjects in order to improve statistical efficiency and robustness .this letter illustrates the scope and applicability of these exact algorithms within neuroscience , using a small functional magnetic resonance imaging ( fmri ) time course dataset obtained on six subjects , coupled with multiregression dynamical models ( mdms ; * ? ? ?* ) that permit statistically rigorous causal inference . it is envisaged that exact algorithms will play an important r in future studies of neural connectivity and this letter serves to illustrate their application by example . 0.7 0.55 [ regions ] 0.225exact algorithms are illustrated here with a small fmri dataset consisting of six subjects from the human connectome project .scans were acquired on each subject while they were in a state of quiet repose ; data from one 15 minute session were used , with a spatial resolution of mm and a temporal resolution of 0.7 secs ; see for full details . after correcting for head motion ,all data was registered to a common reference atlas space and 100-dimensional independent component analysis ( ica ) was conducted on the temporally concatenated data .the result of this ica was 100 spatial modes ( common to all subjects ) and 100 corresponding temporal modes ( subject - specific ) ; at this high dimension , the 100 spatial modes are sparse and spatially compact ( though possibly bilaterally symmetric ) and so essentially provide a data - driven parcellation of the brain .hierarchical clustering was used on the ica temporal modes following , and the 10-mode cluster corresponding to motor cortex was selected for study here .thus our data consists of 10 nodes , with a time series for each node for each subject .figure [ ica modes ] displays the neural regions that we consider and figure [ tab ] shows the approximate description of each region ; note that region 4 was spatially diffuse and difficult to characterise , and thus is likely to be an artefactual component .the goal here is to understand neural information transfer at resting state and establish subject - specific connectivity . 
by its very nature , estimation of resting state connectivityis challenging due to limited information content in the fmri time series .indeed , reported that whilst the presence or absence of connections can sometimes be estimated from fmri time series data , estimating the direction of edges from data remains extremely challenging .the integration of data from multiple related subjects offers one route to increased statistical power and this is the approach that we pursue here .following data preprocessing we are left with a collection of random variables representing the observed activity in subject at region and time point . following recent research by into causal inference based on such fmri time course data , we model the as arising from a causal mdm .specifically , an mdm is defined on a multivariate time series is characterised by a contemporaneous dag , with information shared across time through evolution of the model parameters .we consider the case where satisfies linear gaussian structural equations , though any formulation would be compatible with the methodology that we present .write for the parents of vertex in the dag and write for the collection of univariate time series corresponding to the variables .this mdm is described by the following observation equations where , together with the system equations where is a matrix of autoregressive coefficients and .default choices for , , were assumed following .model selection for mdms is based on bayes factors ( see e.g. * ? ? ?the evidence in favour of the dag under the mdm likelihood can be calculated as in practice eqn .[ mdm evidence ] is evaluated using simple kalman filter recurrences and we refer the reader to for further details . reports that the mdms above are well - suited to the analysis of resting - state fmri data , outperforming the methods surveyed by in both the detection of edges and also the orientation of edges .this promising performance appears to be driven by the information present in temporal spike patterns , as exploited directly in recent work by .the mdms here are reified with the interpretation that edges correspond to neural connectivity .independent estimation for the subject - specific dags based on the mdm score ( eqn . [ mdm evidence ] ) yields graphs that display high between - subject variability ( fig . [ independent_neuro ] ) .thus the causal semantics that are associated with mdms imply that neural connectivity is highly variable between subjects .this is unreasonable on neuroscientific grounds and likely reflects the lack - of - robustness and small sample bias that are often associated with graphical analyses .this motivates a hierarchical statistical model and exact estimation , as we describe below . 0.8 0.8 following unsatisfactory independent estimation, we now proceed to explore exact joint estimation as enabled by the recent methodological advances of .we note that , providing that the quantities used to compute eqn . [ mdm evidence ] above have been cached , the joint analysis below does not require any further computation involving the mdm model equations .write for the collection of all dags on the vertices and write for the collection of all the dags .joint estimation proceeds within a hierarchical bayesian framework that is specified by the `` multiple dag prior '' the functions and are defined below . 
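the node-wise score in eqn. [mdm evidence] is a marginal likelihood that factorises over one-step-ahead forecasts of a dynamic linear model, so it can be accumulated with the kalman filter recurrences mentioned above. before the ingredients of the multiple dag prior are described, the sketch below illustrates that computation for a single node in a deliberately simplified setting: the child series regresses on its parent series with coefficients that drift as a random walk, and the observation and state variances are fixed and known. these variances, the prior scale and the toy data are all illustrative assumptions; the mdm used in the text has its own defaults and the cached scores described above.
....
import numpy as np

def mdm_node_log_evidence(y, X, v_obs=1.0, w_state=0.01, c0=10.0):
    """log p(y | parent series) for one node of a simplified multiregression
    dynamical model:
        observation:  y_t = X_t . theta_t + N(0, v_obs)
        state:        theta_t = theta_{t-1} + N(0, w_state * I)   (random-walk drift)
    v_obs, w_state and the prior scale c0 are illustrative assumptions; the evidence
    is the sum of log one-step-ahead predictive densities from the kalman filter."""
    T, q = X.shape
    m, C = np.zeros(q), c0 * np.eye(q)          # prior on theta_0
    logev = 0.0
    for t in range(T):
        a, R = m, C + w_state * np.eye(q)       # state prediction
        f = X[t] @ a                            # one-step-ahead forecast mean
        Q = X[t] @ R @ X[t] + v_obs             # forecast variance
        logev += -0.5 * (np.log(2.0 * np.pi * Q) + (y[t] - f) ** 2 / Q)
        A = R @ X[t] / Q                        # kalman gain
        m = a + A * (y[t] - f)
        C = R - np.outer(A, X[t] @ R)
    return logev

# toy usage: compare the empty parent set with a single true parent
rng = np.random.default_rng(0)
parent = rng.standard_normal(200)
child = 0.8 * parent + 0.3 * rng.standard_normal(200)
no_parents = np.ones((200, 1))                              # intercept only
with_parent = np.column_stack([np.ones(200), parent])
print(mdm_node_log_evidence(child, no_parents),
      mdm_node_log_evidence(child, with_parent))
....
in an actual analysis these log evidences would be computed once for every node and candidate parent set and then reused by both the independent and the joint estimation; the ingredients of the multiple dag prior introduced above, which couples those cached scores across subjects, are described next.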
here denotes an undirected network on the indices that will be used to encode a similarity structure between subjects ; the first product factorises along the edges of .when is complete , eqn .[ joint prior ] encodes an exchangeability assumption that any dag is equally likely _ a priori _ to be similar to any other dag ( ) .such an exchangeability assumption is implicit in much of the recent literature on multiple graphical models .however , exchangeability will be inappropriate when the collection of subjects is heterogeneous , for instance containing groups or subgroups that correspond to differential neural connectivities .the methodology that we present below allows for arbitrary ( and even uncertain ) , relaxing this exchangeability assumption and permitting more flexible estimation .the function is used to encode regularity between pairs of dags , with larger values corresponding to _ a priori _ more similar dag structures . showed that a particularly convenient form of regularity function is obtained by considering hyper - markov properties : .\label{hm}\end{aligned}\ ] ] here is the logical xor operator and ] , ] , whilst the vector contains the local scores and the constants and . by inspection of eqns .[ mdm evidence ] , [ hm ] and [ hyp ] we see that the posterior log - probability + \sum_{k=1}^k \sum_{l = k+1}^k \eta^{(k , l)}[(k , l ) \in a ] \nonumber \\ & & - \sum_{k=1}^k \sum_{l = k+1}^k \sum_{i , j=1}^p \lambda_{j , i}^{(k , l ) } [ ( j \in g_i^{(k ) } ) \oplus ( j \in g_i^{(l ) } ) ] \cap [ ( k , l ) \in a ] \end{aligned}\ ] ] can be written as an inner - product .the inequality constraints and equality constraints are carefully chosen to ensure that the feasible region for consists of precisely those vectors that correspond to well - defined ( multiple ) dag models .this final point is somewhat technical and we refer the reader to for full details . the class of statistical models that is amenable to exact inference is substantial , but here we focus on particularly tractable prior specifications that allows us to clearly illustrate the methodology . specifically , we reduce the number of hyperparameters to two by making the assumption that all edges are _ a priori _ equally likely to be shared between all pairs of subjects ( for all ) and that all pairs of subjects are _ a priori _ equally likely to share similar graph structure ( for all ) .prior elicitation in this reduced class of models therefore requires the specification of hyperparameters and .the impact of the choice of hyperparameters on the map estimators is clarified in the following : [ mix1 ] ( a ) when , consists of dags equal to those computed using independent estimation .( b ) for we have .( c ) for fixed there exists such that whenever we have .[ mixture ] ( d ) there exists such that is the complete network whenever . 
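property [mix1] above characterises the extreme settings of the hyperparameters; for intermediate values the joint objective has to be evaluated and maximised explicitly. the fragment below is a hedged illustration of how that log posterior can be scored for a candidate collection of dags, assuming scalar hyperparameters and node-wise log evidences precomputed as in eqn. [mdm evidence]; it is only a scoring function, and the map estimation in the text is carried out with the integer linear programming formulation summarised above rather than by enumeration.
....
from itertools import combinations

def joint_log_score(parent_sets, local_scores, A, lam, eta):
    """log posterior (up to an additive constant) for a collection of dags, with the
    structure of eqn. [joint prior]: node-wise evidences plus, for every pair of
    subjects joined in the network A, a reward eta for the tie and a penalty lam for
    each edge present in one dag but not the other (the xor term).
    parent_sets[k][i]  : frozenset of parents of node i in subject k's dag
    local_scores[k][i] : dict mapping candidate parent sets to log evidence
    A                  : set of pairs (k, l), k < l, allowed to share structure
    lam, eta           : scalar hyperparameters (an illustrative simplification)"""
    K, p = len(parent_sets), len(parent_sets[0])
    total = sum(local_scores[k][i][parent_sets[k][i]]
                for k in range(K) for i in range(p))
    for k, l in combinations(range(K), 2):
        if (k, l) in A:
            total += eta
            total -= lam * sum(len(parent_sets[k][i] ^ parent_sets[l][i])
                               for i in range(p))   # symmetric difference = xor count
    return total
....
maximising this score over all admissible combinations of dags is exactly the joint map problem; brute-force search over the score above is feasible only for very small numbers of variables and subjects, which is why the exact integer programming machinery matters.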
the above result deals with the extremes of the parameter space ; intuitively we would expect non - trivial values of to interpolate `` smoothly '' between these extremes .the following shows that this intuition is not strictly true .specifically , as is monotonically increased , it is possible for a particular edge to enter and exit the map estimator multiple times and furthermore , non - monotonicity is also exhibited by the network estimator : [ nonmono1 ] ( a ) fix a network and consider varying the hyperparameter .if is non - empty , then there exist values of the sufficient statistics such that ] is not monotonic in .thus the joint map , like other penalised likelihood approaches ( including the glasso for ggms ; * ? ? ?* ) does not obey a monotonicity property .property [ nonmono1 ] makes it surprising that exact algorithms exists in this nontrivial setting . in practice and in results below we have found that , like the glasso , monotonicity holds approximately .0.9 0.49 0.49 the elicitation of hyperparameters such as , should principally be driven by the scientific context , the nature of the data and the rle that inferences are to play in future work .for example , if the estimated networks are the basis for features within a classification algorithm , then elicitation of hyperparameters should target the classification error .however in some settings , including our illustrative example , the non - availability of relevant ancillary data ( e.g. the class labels in classification ) precludes such an objective elicitation .below we therefore illustrate diagnostics that could form the basis for subjective elicitation in quite general settings , based on retrospective inspection of the posterior .the analysis of resting state fmri data is an emerging area of research and currently neither the source nor the extent of subject - specific variation are well - understood .if the extent of variability at resting state was known , this could be directly leveraged to facilitate the objective elicitation of hyperparameters .however this is not currently the case and subjective elicitation is required . the biological knowledge that forms the basis for elicitation is qualitative in nature , as we explain below : firstly , connectivity should not change within a subject over the brief time period under which the fmri experiments were conducted .secondly , recent studies ( e.g. * ? ? ?* ) indicate that the notion of `` resting state '' is poorly defined and can correspond to several contrasting neurological activity profiles ; we would therefore not expect to obtain identical dags under a replication experiment that is unable to control for the precise nature of the resting state .a subjective analysis can be obtained using diagnostics based on retrospective examination of the posterior , that we describe below .specifically , for our fmri dataset , we performed exact estimation of the joint map based on four technical replicate datasets obtained from the first two subjects under identical laboratory conditions . to inform elicitation for the regularity parameter , we fixed the network such that if and only if datasets and were both technical replicates derived from the same subject ( fig .[ learn lambda ] ) .this corresponds to placing an exchangeability assumption on the technical replicates , but prohibiting the sharing of information between subjects .we then computed the total structural hamming distance ( shd ; * ? ? 
?* ) between all pairs of dags that are technical replicates ( fig .[ shda ] ) . this diagnostic could be used as the basis for subjective elicitation of in general situations where replicate data are available . below for illustrationwe focus on one such value , , that attributes approximately 50% of variability between technical replicates to extrinsic noise resulting from the experimental design .examination of the bayes factor as a function of provides a second diagnostic to assist with elicitation that may be useful to highlight over - regularisation . in this casethe value scores considerably better compared to the alternative that assigns the same dag to all replicate datasets ( log - bayes factor , fig .[ bfs ] ) .additional diagnostics for the subjective elicitation of are discussed in the subsequent sections . based on the elicitation , for illustration , we employed exact estimation for the joint map under the exchangeability assumption that is the complete network ( eqn .[ opt1 ] ) . in order to limit scope , here we simply consider one dataset per subject ( i.e. no technical replicates were included ) , but data aggregation is naturally accommodated in the methodology we present ( see discussion ) .results in figure [ joint_neuro ] demonstrate that the estimated dag structures are substantially more similar that our original estimate obtained using independent inference ( fig .[ independent_neuro ] ) , with a 23% decrease in total shd between dags .this estimate can be expected to more closely represent the true subject - specific neural connectivity patterns , based on the empirical conclusions of .we note however that validation of this inferred connectivity remains extremely challenging ( e.g. * ? ? ?0.49 0.49 the scientific motivation for multi - subject analysis is typically to elucidate differential connectivity between subjects , either in a purely unsupervised context for exploratory investigation , or in a supervised context to determine whether certain features of connectivity are associated with auxiliary covariates of interest such as disease status . in these casesa statistical model that assumes exchangeability between subjects may be inappropriate and `` regularise away '' the differential connectivity that is of interest .we therefore proceed to jointly estimate both subject - specific dags and the network that describes relationships between the subjests themselves ( eqn .[ opt2 ] ) .elicitation of the hyperparameter , that controls density of the network , could again be performed by retrospective inspection of the posterior . for our resting state fmri datasetwe would proceed by requiring ( i ) a moderate amount of similarity between subjects , motivated by expectation that connectivity should not differ substantially between subjects , and ( ii ) a moderate amount of heterogeneity between subjects , since we aim to highlight any potential differences between the neural connectivity of different subjects .results in figure [ n neuro ] demonstrate that for the six subjects are regularised into three distinct components , , , whilst for the higher value the subjects are regularised into two distinct components , .( when the network is complete and subject - specific dags coincide with fig .[ joint_neuro ] . 
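returning to the elicitation diagnostic used earlier in this section: the quantity tracked as the regularity parameter varies, and the 23% reduction quoted above, are both totals of structural hamming distances between estimated dags. a hedged sketch of that bookkeeping is given below, with dags represented as boolean adjacency matrices; whether a reversed edge counts as one move or two is a convention, and the version here counts it as one.
....
import numpy as np

def shd(G, H):
    """structural hamming distance between two dags given as boolean adjacency
    matrices (G[j, i] = True means an edge j -> i). an edge present in only one
    graph counts 1; a reversed edge counts 1 (for dags, simultaneous disagreement
    in both directions between a pair of nodes can only be a reversal)."""
    G, H = np.asarray(G, dtype=bool), np.asarray(H, dtype=bool)
    diff = G != H
    reversals = diff & diff.T
    return int(diff.sum() - reversals.sum() // 2)

def total_replicate_shd(dags, replicate_pairs):
    """sum of shd over the pairs (k, l) declared to be technical replicates,
    i.e. the pairs joined by an edge in the elicitation network."""
    return sum(shd(dags[k], dags[l]) for k, l in replicate_pairs)

# toy usage with three 4-node dags, the first two treated as replicates
g1 = np.zeros((4, 4), bool); g1[0, 1] = g1[1, 2] = True
g2 = np.zeros((4, 4), bool); g2[0, 1] = g2[2, 1] = True
g3 = np.zeros((4, 4), bool); g3[3, 0] = True
print(total_replicate_shd([g1, g2, g3], [(0, 1)]))
....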
)examination of the bayes factor as a function of demonstrates that the values , provide considerably better estimates compared to the dags obtained under an exchangeability assumption ( log - bayes factor , respectively ) .this suggests that group and sub - group structure may be present amoung the subjects at the level of neural connectivity and provides evidence against exchangeability of the subjects .finally we illustrate an alternative and novel approach to learning similarities between subjects , called -means clustering of dags , that does not assume exchangeability of the subjects . in brief , additional latent dags are introduced that represent cluster centres or `` prototypes '' , summarising the typical dag structure within their cluster .the ( unknown ) network on the extended vertex set is constrained to have edges that connect each of the vertices to precisely one of the vertices , so that estimation of corresponds exactly to bayesian model - based clustering .our methodology thereby facilitates joint estimation of both subject - specific dags and their optimal cluster assignment .( note that , like in any mixture model , the optimal cluster assignment is defined only up to permutation of the cluster labels . )here we applied -means clustering of dags to the six subjects using clusters ( fig .[ km l2 ] ) and clusters ( fig .[ km l3 ] ) .the optimal cluster assignment with recovers the three distinct components , , that were obtained above via joint estimation of , whilst the optimal cluster assignment with was , which differs from the assignment obtained above in the position of the fourth subject only .this analysis provides an alternative route to investigate similarity between the subjects and offers an alternative route to subjective elicitation of the hyperparameter .we note that the prototypes that summarise cluster - specific graphical structure may be useful as summary statistics for the purposes of dimensionality reduction .in neuroscience experiments it is increasingly common for data to be collected from multiple subjects whose neural connectivities are likely to be related but non - identical . to uncover the causal mechanisms that underpin neural signalling it is necessary to work within a formal statistical theory for inferred causation ,the most well - studied of which is rooted in dags . yet until recently exact estimation for multiple related dags was computationally infeasible . 
in this letterwe have illustrated , using a small fmri dataset , how recent algorithmic advances enable sophisticated causal inference using multi - subject experimental data .in particular we have seen how novel statistical models , that do not assume exchangeability between subjects , achieve both a better description of the data ( in terms of bayes factors ) and enable the more robust inference of subject - specific connectivity .the model class that we discuss is large and allows for multiple opportunities to integrate prior knowlege , pertaining to both ( i ) the connectivity between specific neural regions , and ( ii ) additional covariates that might associate with subject - specific connectivity , such as age , gender or disease status .the integration of auxiliary covariate data and the more general experimental validation of our techniques require an extensive and thorough investigation involving many more subjects than we analyse here ; this is now the focus of our ongoing research .we focused on a particularly simple formulation with two tuning parameters and illustrated through application how both tuning parameters could be elicited retrospectively through examination of map estimates .this methodology extends naturally to highly structured datasets , for example where each subject is asked to provide multiple fmri time courses . in these casesa combination of the techniques discussed above would permit all data on a particular subject to be aggregated into a single `` prototype '' and then estimation to proceed on the basis of these prototypes . at present an analysis involving subjects and dags of size requires a few minutes serial computation on a 2.10ghz intel core i7 cpu windows host with 8 gb memory .our ongoing research focuses on reducing this computational burden so that exact estimation becomes feasible for much larger datasets that include hundreds of neural regions .recent advances in estimation of single dags involving thousands of nodes suggests that much progress can be made in this direction .causal inference for neural connectivity is central to the study of brain functionality and we envisage that the techniques presented here will play an important r in the future analysis of multi - subject experimental data .\(a ) this follows since when the dags are _ a priori _ independent .since the likelihood also factorises over it follows that the dags are independent in the posterior .\(b ) the objective that we wish to maximise can be written as -\lambda \sum_{i=1}^p \sum_{j=1}^p [ ( j \in g_i^{(k ) } ) \oplus ( j \in g_i^{(l ) } ) ] \cap [ ( k , l ) \in a ] + c\end{aligned}\ ] ] where and does not depend on ] .( c , d ) to prove both statements we can take - \left[\min_{\pi \in \{1:p\}\setminus\{i\ } } s^{(k)}(i,\pi)\right].\end{aligned}\ ] ] for ( c ) note that if and , then the choice strictly maximises the objective function , since a selection incurs a penalty of at least that can not be compensated for by an increase in the likelihood term $ ] .similarly for ( d ) , we have that incurs a penalty of at least that can not be compensated for by an increase in the likelihood term .\(a ) consider the following simple system with two variables and two individuals .individual 1 has parent set scores , for variable 1 and , for variable 2 .individual 2 has parent set scores , for variable 1 and , for variable 2 .then it is directly verified that for , and , for , has no edges and and for , and . in particular , the edge is present in for but absent for . 
to embed the above example in a larger system with variables and individualswe proceed as follows : without loss of generality , assume . for all variables in and ,assign scores to any parent set that contains variables from both and . for variables in and , and all variables in individuals , take all scores to be zero ( i.e. non - informative ) .then the above proof demonstrates that the edge is present in for but absent for .\(b ) consider the following simple system with two variables and four individuals .individual 1 has parent set scores , for variable 1 and , for variable 2 .individual 2 has parent set scores , for variable 1 and , for variable 2 .individual 3 has parent set scores , for variable 1 and , for variable 2 .individual 4 has parent set scores , for variable 1 and , for variable 2 .take , as defined in property [ mix1 ] , so that whenever .then it is directly verified that for , , , , , and ; for , , , , , and ; for , , , , , and is the complete network . in particular , the edge is present in for but absent for .cjo is supported by the centre for research in statistical methodology ( crism ) uk epsrc ep / d002060/1 .lc is supported by coordenaode aperfeioamento de pessoal de nvel superior ( capes ) , brazil .ten is supported by the wellcome trust , 100309/z/12/z and 098369/z/12/z , and by nih grants u54 mh091657 - 03 , r01 ns075066 - 01a1 and r01 eb015611 - 01 .the authors are grateful to james cussens , jim smith and sach mukherjee for many helpful discussions on the methodology that is presented here , and to stephen smith for the preprocessing and preparation of fmri data .costa , l. , smith , j. , nichols , t. , & cussens , j. ( 2013 ) searching multiregression dynamical models of resting - state fmri networks using integer programming ._ crism working paper , university of warwick _ * 13*:20 .hill , s. , lu , y. , molina , j. , heiser , l.m . , spellman , p.t . , speed , t.p . ,gray , j.w . , mills , g.b . , & mukherjee , s. ( 2012 ) bayesian inference of signaling network topology in a cancer cell line ._ bioinformatics _ * 28*(21):2804 - 2810 .mechelli , a. , penny , w.d . , price , c.j . ,gitelman , d.r ., & friston , k.j .( 2002 ) effective connectivity and intersubject variability : using a multisubject network to test differences and commonalities ._ neuroimage _ * 17*(3):1459 - 1469 .stein , j.l . ,wiedholz , l.m . ,bassett , d.s . ,weinberger , d.r . ,zink , c.f . ,mattay , v.s . , & meyer - lindenberg , a. ( 2007 ) a validated network of effective amygdala connectivity ._ neuroimage _ * 36*(3):736 - 745 .werhli , a.v . , & husmeier , d. ( 2008 ) gene regulatory network reconstruction by bayesian integration of prior knowledge and/or different experimental conditions ._ journal of bioinformatics and computational biology _ * 6*(3):543 - 572 .
directed acyclic graphs ( dags ) and associated probability models are widely used to model neural connectivity and communication channels . in many experiments , data are collected from multiple subjects whose connectivities may differ but are likely to share many features . in such circumstances it is natural to leverage similarity between subjects to improve statistical efficiency . the first exact algorithm for estimation of multiple related dags was recently proposed by ; in this letter we present examples and discuss implications of the methodology as applied to the analysis of fmri data from a multi - subject experiment . elicitation of tuning parameters requires care and we illustrate how this may proceed retrospectively based on technical replicate data . in addition to joint learning of subject - specific connectivity , we allow for heterogeneous collections of subjects and simultaneously estimate relationships between the subjects themselves . this letter aims to highlight the potential for exact estimation in the multi - subject setting .
a muon collider is perhaps unparalleled for exploring the energy frontier , if an economical design for muon cooling and acceleration can be finalized .historically synchrotrons have provided low cost acceleration . herewe present results on a dipole magnet prototype for a relatively fast 400hz synchrotron for muons , which live for 2.2 .low emittance muon bunches allow small apertures and permit magnets to ramp with a few thousand volts , if the energy stored in the magnetic yoke is kept low .to minimize energy stored in the magnetic yoke , grain oriented silicon steel was chosen due to its high permeability as noted in table 1 .thin 0.011 " ak steel tran - cor h-1 laminations and 12 gauge copper wire minimize eddy current losses which go as the square of thickness .the copper wire will eventually be cooled with water flowing in stainless steel tubes .the very low coercivity of grain oriented silicon steel as noted in table 2 minimizes hysteresis losses .the power supply is an lc circuit with a 52 polypropylene capacitor and a fast igbt powerex cm600hx-24a switch .the magnet gap is 1.5 x 36 x 46 mm and .the energy stored in the gap is : an ideal dipole with n = 40 turns of copper wire and a current of = 54a would require a voltage of = 315v to generate 1.8 t at 400hz .= 0.9 mm .relative permeability for 3% grain oriented silicon steel as a function of angle to the rolling direction .dipole magnetic flux needs to be parallel to the rolling direction .the minimum at 1.3 t and 55 comes from the long diagonal ( 111 ) of the steel crystal . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] = 0.6 mm lccc steel & & h & + & cm & oersteds & (t ) + oriented 3% silicon ' '' '' & 46 & 0.09 & 14000.8 + ultra lowcarbon & 10 & 0.5 & 1900.5 + 3% silicon & 46 & 0.7 & 1600.3 + jfe 6.5% silicon & 82 & 0.2 & 1500.3 + hiperco 50a ( ) & 42 & 0.3 & 2100.1 + our first dipole prototype was made with butt joints as shown on the top of fig.1 .the mitred joints in the second prototype as shown on the bottom of fig .1 work better . to further explore mitred joints , an opera-2d simulation as shown in fig.2opera-2d only allows magnetic properties in the and directions to be entered .it then uses a ^{-0.5}$ ] elliptical approximation .this is problematic for grain oriented silicon steel which has a minimum permeability at . in two dimensions : where and .thus , , and are available for finite element iterations .the following subroutine generates a bh curve at any angle using linear interpolation of a table with 5 angles . ....subroutine bh(np , ang , fang , pm1,pm2,pm3,pm4,pm5,pm ) implicit none integer np , j real fang(5 ) , pm(10 ) , ang , dang real pm1(10 ) , pm2(10 ) , pm3(10 ) , pm4(10 ) , pm5(10 ) c if(ang.ge.fang(1 ) .and .ang.lt.fang(2 ) ) then do 10 j=1,np dang = ( ang - fang(1))/(fang(2 ) - fang(1 ) ) 10 pm(j ) = pm1(j ) + dang*(pm2(j ) - pm1(j ) ) else if(ang.ge.fang(2 ) .and .ang.lt.fang(3 ) ) then do 20 j=1,np dang = ( ang - fang(2))/(fang(3 ) - fang(2 ) ) 20 pm(j ) = pm2(j ) + dang*(pm3(j ) - pm2(j ) ) else if(ang.ge.fang(3 ) .and .ang.lt.fang(4 ) ) then do 30 j=1,np dang = ( ang - fang(3))/(fang(4 ) - fang(3 ) ) 30 perm(j ) = pm3(j ) + dang*(pm4(j ) - pm3(j ) ) else if(ang.ge.fang(4 ) .and .ang.le.fang(5 ) ) then do 40 j=1,np dang = ( ang - fang(4))/(fang(5 ) - fang(4 ) ) 40 pm(j ) = pm4(j ) + dang*(pm5(j ) - pm4(j ) ) end if return end .... 
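stepping back from the anisotropy modelling, the headline numbers quoted earlier in this section (a 1.8 t field in a 1.5 x 36 x 46 mm gap, 40 turns, about 54 a and 315 v at 400 hz, and an lc-resonant supply) can be sanity-checked with the ideal-dipole relations, ignoring fringe fields, yoke reluctance and coil resistance. the short script below does that arithmetic; it is a back-of-the-envelope check under those stated assumptions, not a field computation like the opera-2d simulation.
....
import math

mu0 = 4.0e-7 * math.pi             # vacuum permeability (H/m)
B, f, N = 1.8, 400.0, 40           # peak field (T), ramp frequency (Hz), turns
gap, w, l = 1.5e-3, 36e-3, 46e-3   # gap height and pole-face dimensions (m)

area = w * l                                 # pole-face (flux) area
U = B**2 / (2.0 * mu0) * gap * area          # energy stored in the gap
I = B * gap / (mu0 * N)                      # ampere-turn balance, infinite-permeability yoke
L = mu0 * N**2 * area / gap                  # gap-dominated inductance
V = 2.0 * math.pi * f * L * I                # peak voltage for a sinusoidal ramp at f
C_res = 1.0 / ((2.0 * math.pi * f)**2 * L)   # capacitance that would resonate at f

print(f"gap energy        {U:6.2f} J")
print(f"drive current     {I:6.1f} A    (text quotes about 54 A)")
print(f"inductance        {L*1e3:6.2f} mH")
print(f"peak voltage      {V:6.0f} V    (text quotes 315 V)")
print(f"resonant C at {f:.0f} Hz  {C_res*1e6:6.0f} uF")
....
the current and voltage come out close to the quoted values, which is consistent with the gap dominating the magnetic circuit when the yoke permeability is high; the resonance figure is only indicative, since the real coil inductance includes the yoke and coil ends, and the capacitor value quoted above ('52') lost its unit in extraction.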
to check our steel, the hysteresis loop shown in fig.3 was measured with an epstein frame and hysteresigraph 5500. fig.4 shows our magnet with mitred joints and fig.5 shows a permanent magnet used to check hall probes. dc magnetic tests were run on our two magnets as shown in fig.6. the mitred joint magnet starts to become nonlinear at 1.7 t. fig.7 shows our fast ramping igbt power supply. figs.8 and 9 show results of ringing our mitred joint dipole.
fig.3 caption (fragment): hysteresis loop for our steel; and are small; = 17000 and 9000 at 1.7 t and 1.81 t, respectively.
fig.7 caption: f capacitor power supply; powerex cm600hx-24a igbt and vla500-01 gate driver.
a 1.8 t dipole can run at 400hz. a magnetic flux circuit with a large yoke path to gap ratio works with high permeability steel. the next step is improving field quality and the accuracy of pole faces, as well as matching calculated and observed losses. simulation of anisotropic steel has proven challenging. transverse beam pipe impedance, which is proportional to the inverse cube of the beam pipe diameter, will probably dictate a 12 mm dipole gap. radiation damage of steel needs to be explored and a rogowski profile needs to be added to the magnet ends. finally, we are most grateful to s. berg, k. bourkland, a. garren, k. y. ng, r. palmer, r. riley, a. tollestrup, j. tompkins, s. watkins, and j. zweibohmer for their help.
d. neuffer, aip conf. proc. *156* (1987) 201; r. fernow and j. gallardo, phys. rev. *e52* (1995) 1039; j. gallardo _et al._, snowmass 1996, bnl-52503; c. ankenbrandt _et al._, phys. rev. st ab *2* (1999) 081001; m. alsharoa _et al._, phys. rev. st ab *6* (2003) 081001; r. palmer _et al._, phys. rev. st ab *8* (2005) 061003; r. palmer _et al._, arxiv:0711.4275; m. bogomilov _et al._ (mice collab.), arxiv:1203.4089; r. palmer _et al._, phys. rev. st ab *12* (2009) 031002; y. torun _et al._, ipac-2012-thppc037; t. hart _et al._, ipac-2012-moppc046; g. lyons _et al._, ipac-2012-tuppr008; arxiv:1112.1105.
a 1.8 t dipole magnet using thin grain oriented silicon steel laminations has been constructed as a prototype for a muon synchrotron ramping at 400hz . following the practice in large 3 phase transformers and our own opera-2d simulations , joints are mitred to take advantage of the magnetic properties of the steel which are much better in the direction in which the steel was rolled . measurements with a hysteresigraph 5500 and epstein frame show a high magnetic permeability which minimizes stored energy in the yoke allowing the magnet to ramp quickly with modest voltage . coercivity is low which minimizes hysteresis losses . a power supply with a fast insulated gate bipolar transistor ( igbt ) switch and a capacitor was constructed . coils are wound with 12 gauge copper wire . thin wire and laminations minimize eddy current losses . the magnetic field was measured with a peak sensing hall probe .
linear and nonlinear transport processes and nonequilibrium phenomena in dilute non - ionic ( neutral ) fluids have been known adequately treatable by means of singlet distribution functions obeying , for example , the boltzmann equations and related kinetic equations for singlet distribution functions . relying on the experience gained from the theories of neutral dilute fluids , theories of ionized gases , plasmas , and charge carriers in semiconductors often rely on singlet distribution functions obeying boltzmann - like kinetic equations and their suitable modifications . however , since ions in ionized fluids interact through long - ranged coulombic interactions , even if the ionized species are dilute in concentration , their spatial correlations are significant , lingering on to manifest their effects even in the infinitely dilute regime of concentration as the thermodynamic properties ( e.g. , activity coefficients ) of ionic solutions demonstrate. therefore it would be very important to find a way , and learn , to incorporate long - range correlations into the theory of nonequilibrium phenomena and transport processes in ionized fluids and therein lies the significance of the limiting theory of conductivity in ionized liquids in the external field of arbitrary strength described in this work .interestingly , in the subject fields of nonlinear phenomena in ionic liquids , the wien effect was one of the earliest experimental examples that exhibited a marked nonlinear deviation from the coulombic law of conduction and , as such , it attracted considerable attention theoretically and experimentally .being a nonlinear effect in ionic conductance which shows a strongly nonlinear , non - coulombic field - dependence of ionic conductance , the phenomenon was studied actively in physical chemistry until several decades ago to understand ionic solutions and their physical properties .recently , there appears to be a revival of experimental studies on wien effect and related aspects in ionic conductance of ionic liquids in the presence of high external electric fields .there are other many fascinating aspects of physical properties of ionic liquids recently being studied actively and reported in the recent literature , although they are mostly in the field of equilibrium phenomena . in the present series of work ,we are interested in nonlinear transport processes and , in particular , learning about the theories of the wien effect on ionic conductance in electrolyte solutions in order to gain insights and theoretical approaches to treat the currently studied properties of ionic fluids . as a first step to this aim, we will study strong binary ( symmetric ) electrolytes because of the relative simplicity of the subject matter .more complicated systems of asymmetric electrolytes , in which the charges in a molecule are asymmetric , will be treated in the sequels to this work in preparation .the ideas of physical mechanisms underlying the wien effect , which might also encompass nonlinear phenomena in general in ionic fluids , proceed as follows .it is founded on the idea of ion atmosphere in debye s theory of electrolyte solutions . according to his theory ,ion atmosphere is formed around ions in the solution , which is spherically symmetric if the ions are spherical and the system is in equilibrium . 
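the spatial extent of this equilibrium ion atmosphere is set by the debye screening length. for orientation, the snippet below evaluates it for a symmetric 1:1 aqueous electrolyte at room temperature at a few illustrative concentrations; the solvent parameters are those of water and are assumptions for illustration, not values taken from this paper.
....
import math

e, kB, eps0, NA = 1.602e-19, 1.381e-23, 8.854e-12, 6.022e23   # SI constants
T, eps_r = 298.0, 78.4      # temperature (K) and relative permittivity of water (assumed)

def debye_length(c_mol_per_L, z=1):
    """debye screening length (m) of a symmetric z:z electrolyte at concentration c."""
    n = c_mol_per_L * 1.0e3 * NA            # number density of each ion species (1/m^3)
    kappa2 = 2.0 * n * (z * e) ** 2 / (eps0 * eps_r * kB * T)
    return 1.0 / math.sqrt(kappa2)

for c in (1e-4, 1e-3, 1e-2, 1e-1):          # mol / L, illustrative
    print(f"c = {c:7.4f} mol/L   debye length = {debye_length(c)*1e9:5.2f} nm")
....
this screening length is the radius of the spherical shell that enters the electrophoretic argument discussed next.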
when the external electric field is applied to the ionic fluid ,the ions of opposite charges begin to move in directions opposite to each other .thus the basic physical mechanisms involved in the ionic movements under the external field are believed to be due to a distortion of the spherically symmetric ion atmosphere into a non - spherical form and its subsequent tendency to relax to a spherically symmetric form .the former effect gives rise to the electrophoretic effect and the latter to the relaxation time effect .it should be emphasized here that the aforementioned effects are on the ionic atmosphere , but not on the ion of attention situated at the center of ion atmosphere .this idea can be translated into a qualitative mathematical form as given below : in experiments , we measure migration of ions and accompanying flow of medium . if the external electric field is denoted , the force on ion of charge is then given by since the ion of charge in the solution creates an ion atmosphere of charge , which is distributed in the ion atmosphere to balance the charge in the solution , and this atmosphere is subjected to a force of .this force tends to move the ion atmosphere in the direction of force , while the central ion of atmosphere is carried by force in the medium in the direction opposite to the motion of ion atmosphere in order to balance the momentum .the velocity of the _ countercurrent _ generated thereby may be readily calculated if it is assumed that the entire countercharge of the atmosphere is distributed in a spherical shell of radius , where is the debye radius of ion atmosphere from the central ion , and that the motion of this sphere of radius surrounding the central charge is governed by the stokes law holding for the motion of a sphere in a viscous fluid .thus , this velocity of the countercurrent is estimated to be where is the velocity of the shell of radius and is the viscosity of the medium .we are thus led to the result that the medium in the interior of the shell travels with this velocity , and that the central ion migrates against a collective current of the medium in the shell .the deduction of this expression qualitatively elucidates the most important part of the effect of electrophoresis .clearly , this effect has to do with hydrodynamic motion of the solvent around the center ion enclosed by the ion atmosphere of radius that moves against the former .one may therefore quantify this qualitative description by means of a hydrodynamic method using the navier stokes equation , but the navier stokes equation requires _ _ a local body - force__local mean external force as an input .this local body - force can not be obtained through a purely phenomenological consideration , but , for example , must be calculated by means of statistical mechanics combined with classical electrodynamics . before proceeding to the remaining effect , it is important to point out that eq .( [ k2 ] ) gives the velocity of a physical object of radius ( i.e. 
, the radius of ion atmosphere ) in the direction of .the second effect , that is , the relaxation time effect , is seen as follows : if the central ion possessed no atmosphere , it would simply migrate with a velocity , where is the friction constant , but owing to its ion atmosphere , the ion is subjected to a net force , , where is the force arising from the dissymmetry of the ion atmosphere created by the movement of the ions in the external field , and hence it will move , relative to its environment , with a velocity of a magnitude , .this is due to the effect arising from the relaxation of the asymmetric ion atmosphere .consequently , the net velocity of ion is given by here represents the relaxation time effect on relaxation to a spherically symmetric form of the distorted ion atmosphere , and the last term the electrophoretic effect .the aforementioned two effects making up the velocity given in eq .( [ k4 ] ) are believed to underlie in charge conduction in electrolytic solutions .in fact , the mobility of ions induced by an external electric field can be calculated on the basis of the aforementioned two effects , for example , by using eq .( [ k4 ] ) .as we can see from this heuristic discussion , the aforementioned two effects require the velocity of the fluid ( medium ) , which obviously obeys the hydrodynamic equations for the system subjected to an external electric field .since such velocity solutions can be obtained from the stokes equation , more generally , navier stokes equation , we may apply the solutions thereof to calculate the charge conductance and the countercurrent of the medium to learn the mode of charge conductance in electrolyte solutions subjected to an external field .the hydrodynamic equations , however , contain external body - forces , which in the present case are the external electric field .the external electric field or body - force is generally local and depends on the local distribution of charges .the local charge distributions require molecular distributions in the system and a statistical mechanical theory for them a molecular theory .to answer this question , onsager with fuoss formulated a formal framework of theory in which a fokker planck - type differential equations for nonequilibrium pair distribution functions are derived on the assumption of a brownian motion model for ions in a continuous medium of dielectric constant and viscosity .we will refer to these differential equations for pair correlation functions as the onsager fuoss ( of ) equations henceforth .they are coupled to the poisson equations of classical electrodynamics for the ionic potentials .these two coupled systems of differential equations will be referred to as the governing equations in the present work .the governing equations were applied to study the ionic conductance of binary strong electrolytes in an external electric field by wilson in his dissertation .this theory will be referred to as the onsager wilson ( ow ) theory .wilson solved the governing equations and obtained analytic formulas for the electrophoretic and relaxation time coefficients and the equivalent ionic conductance qualitatively displaying the wien effect in the regime of strong electric fields .unfortunately , his dissertation has never been published in public domain , but only important results , such as the electrophoretic and relaxation time coefficients , had been excerpted in the well - known monograph by harned and owen on electro - physical chemistry . 
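before moving on, the stokes-law estimate of eq. ([k2]) quoted earlier is easy to evaluate numerically: the countercharge is treated as a rigid shell of radius equal to the debye length, dragged by the force the external field exerts on a charge of magnitude e. the snippet below does so for an illustrative 1:1 aqueous electrolyte; the concentration, field strengths and solvent viscosity are hypothetical values chosen only to indicate orders of magnitude, and this heuristic is not the electrophoretic coefficient eventually derived from the governing equations.
....
import math

e, kB, eps0, NA = 1.602e-19, 1.381e-23, 8.854e-12, 6.022e23
T, eps_r, eta = 298.0, 78.4, 0.89e-3          # water at 298 K; eta in Pa s (illustrative)
n_ion = 0.01 * 1.0e3 * NA                     # 0.01 mol/L as a number density (1/m^3)

kappa = math.sqrt(2.0 * n_ion * e**2 / (eps0 * eps_r * kB * T))
debye_radius = 1.0 / kappa                    # radius of the shell carrying the countercharge

for X in (1.0e5, 1.0e6, 1.0e7):               # applied field in V/m (1e7 V/m = 100 kV/cm)
    delta_v = e * X / (6.0 * math.pi * eta * debye_radius)   # eq. (k2): stokes drag balance
    print(f"X = {X:8.1e} V/m   countercurrent ~ {delta_v*1e3:8.3f} mm/s")
....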
tantalized by the possibility of the utility of the theory for recent experimental results for ionic fluids and charge carrier mobilities in semiconductors referred to earlier , we have thoroughly examined the ow theory to learn the details of it ._ surprisingly , we have discovered that the velocity solution of the stokes ( hydrodynamic ) equation in the ow theory can give rise to a divergent result rendering into question the electrophoretic coefficient calculated by wilson s procedure described in his dissertation . _we believe that the basic framework of governing equations the of equations and poisson equations should be correct , but the way the solutions are evaluated by him may be called into question . therefore , it is our principal aim of this work to analyze the solutions of the governing equations in the case of binary strong electrolytes in an external electric field and obtain physically reasonable and thus acceptable theoretical results that can be made use of to study experimental data on conductivity and other transport phenomena in the high field regime .this paper is organized as follows . in sec .ii , we present the governing differential equations , which consist of the of equations for the ionic pair distribution functions and the poisson equations for the potentials of ionic interaction .we note that kirkwood also derived a similar equation for non - ionic liquids in his kinetic theory of liquids .one ( bce ) of the present authors also derived the of equations from the generalized boltzmann equation. since wilson s dissertation has not been published anywhere in a journal , the governing equations and their solutions are discussed to the extent that the present paper can be followed intelligibly . in sec .iii , the solutions of the governing equations the pair distribution functions and potentials of ionic interaction are presented in the case of a strong binary electrolyte solution subjected to an external field .these solutions are given in one - dimensional fourier transforms in an axially symmetric coordinate system , namely , a cylindrical coordinate system whose axial coordinate is parallel to the applied external electric field .the fourier transform is with respect to the axial coordinate .the distribution functions obtained are nonequilibrium pair distribution functions which describe the nonequilibrium ionic liquid structure , and the nonequilibrium ionic potentials of interaction in the external field . since they should be of considerable interest to help us learn about the nonequilibrium ionic liquid properties we study the solutions of the governing equations in detail and obtain , especially , their spatial profiles , indicating how ions and their nonequilibrium part of the potentials are distributed in the external electric field .it should be noted that the distribution functions are the nonequilibrium corrections to the boltzmann distribution function predicted by the debye hckel theory of electrolytes , and similarly for the potentials . in sec .iv , we then discuss the solutions of the stokes equation , which replaces the navier stokes equation in the case of incompressible fluids that we assume the ionic solution of interest is . 
solving the stokes equation, we obtain the axial and transversal velocity components as well as the nonequilibrium pressure from the solutions of the stokes equation .we present the solution procedure for the stokes equation in detail , because , firstly , wilson s thesis contains only the symmetric part of the solution , leaving out the antisymmetric part that turns out to be comparable to the former in magnitude and , secondly , we believe that the solution procedure of the stokes equations , which combines statistical mechanics and hydrodynamics in a rather intriguing manner , appears to be very much worth learning , especially , if one is interested in nonequilibrium theories of ionic liquids in an external electric field . in this sectionwe also discuss the connection with the electrophoretic and relaxation time coefficients originally obtained by wilson , who evaluated them at the position of the center ion of ion atmosphere , namely , at the coordinate origin .this discussion would show that one of his integrals evaluated at the coordinate origin is divergent .therefore we evaluate explicitly the solutions to explore a way to make the ow theory of ionic conductance unencumbered by such a divergence difficulty . in sec .iv , we also compute numerically the spatial profiles of the axial velocity , and study them to guide us to avoid the divergence difficulty mentioned in connection with wilson s result and choose the optimum position coordinates at which to calculate the relaxation time and electrophoretic coefficients . to this aimwe have either evaluated analytically or reduced to one - dimensional quadratures , by means of contour integration methods , the fourier transform integrals making up the solutions of the stokes equations obtained earlier before computing their spatial profiles .the contour integration methods are described in appendix a. since they , however , do not cover the entire coordinate space owing to the condition imposed by jordan s lemma on applicability of contour integration methods involving integrations along a circle of infinite radius , the integrals must be numerically computed outside the region where the aforementioned condition is violated .the details of the condition are discussed in sec .iv and also in appendix a. these numerical studies reveal the manner in which the ions flow subject to the applied external electric field provide insight into the behavior of the velocity and valuable clues to formulate an empirical rule to select the position parameters ( ) in the electrophoretic factor , so that a physically sensible and non - divergent electrophoretic coefficient and the corresponding relaxation time coefficient can be defined and ionic conductance correctly predicted .this problem is addressed in the companion paper .v is for discussion and concluding remark .let denote the position vector of ion in a fixed coordinate system and the relative coordinate of ion from ion : and let denote the concentration of ion in the atmosphere of ion located at position other words , the distribution function to find ion at distance from ion located at . 
at equilibriumit is given by the boltzmann distribution function times the density of ion .let us also denote by the velocity of ion in the neighborhood of ion .therefore this velocity also depends on positions of ions and in the following manner: the equation of continuity for ion pair is then given by where .hence , at a steady state the steady - state equation of continuity is given by assuming that the ions , being randomly bombarded by molecules of the continuous medium ( solvent ) of dielectric constant and viscosity , move randomly , namely , execute random brownian motions , in the presence of an applied external field , the velocities and may be assumed given by the brownian motion model where is the velocity of solution at position ( ) ; is the inverse of the friction coefficient of ion , which is related to the diffusion coefficient of ion of charge in the medium of viscosity here is the boltzmann constant and the absolute temperature ; is the total force acting on ion .we assume that forces on ions are linear with respect to charge numbers so that the superposition principle of fields is preserved . here is the applied external force on ion . under the assumptions for and for stated earlier , the steady - state equation of continuity ( [ 2st ] )becomes a coupled set of differential equations satisfied by ion pair distribution functions of the ionic liquid: & = 0,\label{3}\\ \left ( i , j=1,2,\cdots , s\right ) .& \nonumber\end{aligned}\ ] ] we will call this set of differential equations the onsager fuoss ( of ) equations . here for simplicity of notation we have omitted the first position variables in the distribution functions and potentials and typeset them as follows : etc . and and .in fact , for eq .( [ 3 ] ) the coordinate origin may be regarded as fixed on position of ion .these are fokker planck - type equations for and and . in eq .( [ 3 ] ) , is density of ion and is the external ( electric ) field .the potentials appearing in this set of differential equations , eq .( [ 3 ] ) , obey the poisson equations of classical electrodynamics, the two sets ( [ 3 ] ) and ( [ 4 ] ) are coupled to each other and will be henceforth referred to as the governing equations in this work .the two sets of equations ( [ 3 ] ) and ( [ 4 ] ) are subject to the boundary conditions stated below .the number of ions , leaving and entering the interior , , of a surface should be balanced , because no ions are created or destroyed .therefore \delta ] is crossed ( is a fixed parameter ) , whereas changes discontinuously as the negative real axis is crossed , and changes discontinuously as the branch cut ] . is computed with the contour integration methods within the range defined by ineq .( [ vc ] ) and , outside this range , by means of the method of principal values for singular integrals . fig .3nonequilibrium part of the potential is plotted in ( ) plane at . here with denoting the debye hckel potential . in eq . ([ 34p ] ) is not explicitly put in since is the nonequilibrium part of the potential in the external field .therefore should be understood as /\left( \kappa ze/\sqrt{2}\pi d\right ) $ ] . within the range of and satisfying ineq .( [ 33 ] ) [ also see ineq .( [ a49c ] ) ] the contour integration method is used and outside the region the method of principal value integration is used for computation . fig .4the reduced axial velocity profile is plotted in plane at . 
within the range of and satisfying ineq .( [ vc ] ) the contour integration method is used and outside the region the method of principal value integration is used for computation .the axial velocity profile is directional , being positive the positive direction parallel to the external field before vanishing to zero at large distance whereas being negative in the transversal ( radial ) direction before vanishing to zero as increases .thus the boundary conditions are satisfied in both and directions .this figure indicates the mode of behaviors of the counterflow of the medium to the ionic movement when the external field is turned on .6the projection of surface onto plane .there are two sets of quasi - elliptical level curves ; one with the major axis on the axis and the other on the axis .the former corresponds to the contours of the negative part of the surface projected onto ( ) plane , and the latter to the contours of the positive part projected onto ( ) plane .the outermost level curve is the locus of .this level curve depicts the moving ion atmosphere distorted by the external electric field from the spherical form assumed by the ion atmosphere at .this moving ion atmosphere is seen polarized toward the field direction .7the distorted ion atmosphere is seen to have the center at ( ) on the axis .the field dependence of the center of the ion atmosphere describes the trajectory of its motion .the trajectory is shown in this figure .the curve indicates the mode of migration for the center from the origin of the coordinate system where the center is located when , as the field strength is increased .it decreases to a plateau after reaching a maximum as increases .8plot of an example for at as a function of and its comparison with wilson s electrophoretic coefficient . the solid line , the present theory ; the dotted line , the ow theory .10plot of and example for at as a function of and its comparison with wilson s electrophoretic coefficient . the solid line , the present theory ; the dotted line , the ow theory .13contour for integrals and .this contour also applies to integrals and and and .the bold line denotes the branch cut on the negative real axis .
in this paper, on the basis of the onsager wilson theory of strong binary electrolyte solutions, we completely work out the solutions of the governing equations (the onsager fuoss equations and poisson equations) for the nonequilibrium pair correlation functions and ionic potentials, and the solutions of the stokes equation for the velocity and pressure, in the case of strong binary electrolyte solutions under the influence of an external electric field of arbitrary strength. the solutions are calculated in configuration space as functions of the coordinates and the reduced field strength. thus the axial and transversal components of the velocity and the accompanying nonequilibrium pressure are explicitly obtained. computation of the velocity profiles makes it possible to visualize the movement and distortion of the ion atmosphere under the influence of an external electric field. in particular, it facilitates tracking the movement of the center of the ion atmosphere along the axis as the field strength increases. thus it is possible to imagine a spherical ion atmosphere with its center displaced to from the origin. on the basis of this picture we are able to formulate a computation-based procedure to unambiguously select the values of and in the electrophoretic factor for and thereby calculate the ionic conductance. this procedure makes it possible to overcome the mathematical divergence difficulty inherent in the method used by wilson in his unpublished dissertation on the ionic conductance theory (namely, the onsager wilson theory) for strong binary electrolytes. we thereby define divergence-free electrophoretic and relaxation time factors which enable us to calculate the equivalent conductance of strong binary electrolytes subjected to an external electric field in excellent agreement with experiment. we also investigate the nature of the approximations that yield wilson's result from the exact divergence-free electrophoretic and relaxation time coefficients. in the sequels, the results obtained in this work are applied to study ionic conductivity and nonequilibrium pressure effects in electrolyte solutions.
optimization is an important subject with many important application , and algorithms for optimization are diverse with a wide range of successful applications . among these optimization algorithms ,modern metaheuristics are becoming increasingly popular , leading to a new branch of optimization , called metaheuristic optimization .most metaheuristic algorithms are nature - inspired , from simulated annealing to ant colony optimization , and from particle swarm optimization to cuckoo search .since the appearance of swarm intelligence algorithms such as pso in the 1990s , more than a dozen new metaheuristic algorithms have been developed and these algorithms have been applied to almost all areas of optimization , design , scheduling and planning , data mining , machine intelligence , and many others .thousands of research papers and dozens of books have been published . despite the rapid development of metaheuristics ,their mathematical analysis remains partly unsolved , and many open problems need urgent attention .this difficulty is largely due to the fact the interaction of various components in metaheuristic algorithms are highly nonlinear , complex , and stochastic .studies have attempted to carry out convergence analysis , and some important results concerning pso were obtained . however , for other metaheuristics such as firefly algorithms and ant colony optimization , it remains an active , challenging topic . on the other hand , even we have not proved or can not prove their convergence , we still can compare the performance of various algorithms .this has indeed formed a majority of current research in algorithm development in the research community of optimization and machine intelligence . in combinatorial optimization, many important developments exist on complexity analysis , run time and convergence analysis . for continuous optimization , no - free - lunch - theorems do not hold . as a relatively young field, many open problems still remain in the field of randomized search heuristics . in practice , most assume that metaheuristic algorithms tend to be less complex for implementation , and in many cases , problem sizes are not directly linked with the algorithm complexity .however , metaheuristics can often solve very tough np - hard optimization , while our understanding of the efficiency and convergence of metaheuristics lacks far behind .apart from the complex interactions among multiple search agents ( making the mathematical analysis intractable ) , another important issue is the various randomization techniques used for modern metaheuristics , from simple randomization such as uniform distribution to random walks , and to more elaborate lvy flights .there is no unified approach to analyze these mathematically . in this paper , we intend to review the convergence of two metaheuristic algorithms including simulated annealing and pso , followed by the new convergence analysis of the firefly algorithm . then , we try to formulate a framework for algorithm analysis in terms of markov chain monte carlo .we also try to analyze the mathematical and statistical foundations for randomization techniques from simple random walks to lvy flights . 
finally , we will discuss some of important open questions as further research topics .the formulation and numerical studies of various metaheuristics have been the main focus of most research studies .many successful applications have demonstrated the efficiency of metaheuristics in various context , either through comparison with other algorithms and/or applications to well - known problems .in contrast , the mathematical analysis lacks behind , and convergence analysis has been carried out for only a minority few algorithms such as simulated annealing and particle swarm optimization .the main approach is often for very simplified systems using dynamical theory and other ad hoc approaches . here in this section ,we first review the simulated annealing and its convergence , and we move onto the population - based algorithms such as pso .we then take the recently developed firefly algorithm as a further example to carry out its convergence analysis .simulated annealing ( sa ) is one of the widely used metaheuristics , and is also one of the most studies in terms of convergence analysis .the essence of simulated annealing is a trajectory - based random walk of a single agent , starting from an initial guess .the next move only depends on the current state or location and the acceptance probability .this is essentially a markov chain whose transition probability from the current state to the next state is given by p= , where is boltzmann s constant , and is the temperature . herethe energy change can be linked with the change of objective values .a few studies on the convergence of simulated annealing have paved the way for analysis for all simulated annealing - based algorithms .bertsimas and tsitsiklis provided an excellent review of the convergence of sa under various assumptions . by using the assumptions that sa forms an inhomogeneous markov chain with finite states , they proved a probabilistic convergence function , rather than almost sure convergence , that p , where is the optimal set , and and are positive constants .this is for the cooling schedule , where is the iteration counter or pseudo time .these studies largely used markov chains as the main tool .we will come back later to a more general framework of markov chain monte carlo ( mcmc ) in this paper .particle swarm optimization ( pso ) was developed by kennedy and eberhart in 1995 , based on the swarm behaviour such as fish and bird schooling in nature . since then, pso has generated much wider interests , and forms an exciting , ever - expanding research subject , called swarm intelligence .pso has been applied to almost every area in optimization , computational intelligence , and design / scheduling applications .the movement of a swarming particle consists of two major components : a stochastic component and a deterministic component .each particle is attracted toward the position of the current global best and its own best location in history , while at the same time it has a tendency to move randomly .let and be the position vector and velocity for particle , respectively .the new velocity and location updating formulas are determined by _ i^t+1= _i^t + _ 1 [ ^*-_i^t ] + _ 2 [ _ i^*-_i^t ] .[ pso - speed-100 ] _ i^t+1=_i^t + _ i^t+1 , [ pso - speed-140 ] where and are two random vectors , and each entry taking the values between 0 and 1. the parameters and are the learning parameters or acceleration constants , which can typically be taken as , say , . 
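a minimal python sketch of the velocity and position updates just described may help fix ideas . the vectorised form , the function name pso_step and the small demo on the sphere function are illustrative choices rather than part of the original formulation ; the inertia weight omega anticipates the variant discussed below ( omega = 1 recovers the basic update ) .

```python
import numpy as np

def pso_step(x, v, p_best, g_best, alpha=2.0, beta=2.0, omega=1.0, rng=None):
    """one synchronous update of the standard pso equations.

    x, v        : (n_particles, dim) positions and velocities
    p_best      : (n_particles, dim) personal best locations
    g_best      : (dim,) current global best location
    alpha, beta : learning / acceleration constants (typically about 2)
    omega       : inertia weight (omega = 1 gives the basic update above)
    """
    rng = np.random.default_rng() if rng is None else rng
    eps1 = rng.random(x.shape)           # random vectors with entries in [0, 1]
    eps2 = rng.random(x.shape)
    v_new = omega * v + alpha * eps1 * (g_best - x) + beta * eps2 * (p_best - x)
    return x + v_new, v_new

# tiny demo on the sphere function f(x) = sum(x**2); parameter values are illustrative
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=(30, 2))
v = np.zeros_like(x)
p_best, g_best = x.copy(), x[np.argmin((x**2).sum(1))].copy()
for _ in range(200):
    x, v = pso_step(x, v, p_best, g_best, alpha=1.5, beta=1.5, omega=0.7, rng=rng)
    better = (x**2).sum(1) < (p_best**2).sum(1)
    p_best[better] = x[better]
    g_best = p_best[np.argmin((p_best**2).sum(1))].copy()
print(g_best)    # should be close to the optimum (0, 0)
```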
there are at least two dozen pso variants which extend the standard pso algorithm , and the most noticeable improvement is probably to use inertia function so that is replaced by where ] .the system behaviour can be characterized by the eigenvalues of , and we have .it can be seen clearly that leads to a bifurcation . following a straightforward analysis of this dynamical system , we can have three cases .for , cyclic and/or quasi - cyclic trajectories exist . in this case , when randomness is gradually reduced , some convergence can be observed . for , non - cyclic behaviour can be expected and the distance from to the center is monotonically increasing with . in a special case , some convergence behaviour can be observed . for detailed analysis, please refer to clerc and kennedy .since is linked with the global best , as the iterations continue , it can be expected that all particles will aggregate towards the the global best .firefly algorithm ( fa ) was developed by yang , which was based on the flashing patterns and behaviour of fireflies .in essence , each firefly will be attracted to brighter ones , while at the same time , it explores and searches for prey randomly . in addition , the brightness of a firefly is determined by the landscape of the objective function . the movement of a firefly is attracted to another more attractive ( brighter ) firefly is determined by _i^t+1 = _ i^t + _ 0 e^-r^2_ij ( _ j^t-_i^t ) + _ i^t , [ fa - equ-50 ] where the second term is due to the attraction .the third term is randomization with being the randomization parameter , and is a vector of random numbers drawn from a gaussian distribution or other distributions . obviously ,for a given firefly , there are often many more attractive fireflies , then we can either go through all of them via a loop or use the most attractive one . for multiple modal problems , using a loop while moving toward each brighter one is usually more effective , though this will lead to a slight increase of algorithm complexity .here is ] , whose inverse fourier transform corresponds to a gaussian distribution .another special case is , which corresponds to a cauchy distribution for the general case , the inverse integral l(s ) = _ 0^ ( k s ) dk , can be estimated only when is large .we have l(s ) , s .lvy flights are more efficient than brownian random walks in exploring unknown , large - scale search space .there are many reasons to explain this efficiency , and one of them is due to the fact that the variance of lvy flights takes the following form ^2(t ) ~t^3- , 1 2 , which increases much faster than the linear relationship ( i.e. , ) of brownian random walks .studies show that lvy flights can maximize the efficiency of resource searches in uncertain environments .in fact , lvy flights have been observed among foraging patterns of albatrosses and fruit flies .in addition , lvy flights have many applications . many physical phenomena such as the diffusion of fluorenscent molecules , cooling behavior and noise could show lvy - flight characteristics under the right conditions .it is no exaggeration to say that metahueristic algorithms have been a great success in solving various tough optimization problems . despite this huge success, there are many important questions which remain unanswered . we know how these heuristic algorithms work , and we also partly understand why these algorithms work . 
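before turning to these open issues , the lévy - flight randomization reviewed above can be made concrete with a short sampler based on mantegna s algorithm , a standard way of generating heavy - tailed step lengths of index beta . the function name levy_steps and the choice beta = 1.5 are illustrative , and the method is practical for 1 < beta < 2 rather than the full range quoted above .

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, beta=1.5, rng=None):
    """draw n heavy-tailed step lengths of index beta (1 < beta < 2)
    using mantegna's algorithm; the tail of the step distribution
    behaves like |s|**(-1 - beta)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=n)
    v = rng.normal(0.0, 1.0, size=n)
    return u / np.abs(v) ** (1 / beta)

steps = levy_steps(10000, beta=1.5, rng=np.random.default_rng(1))
print(np.median(np.abs(steps)), np.abs(steps).max())   # heavy tail: max step >> median step
```

as noted above , a unified mathematical treatment of such randomization techniques is still lacking .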
however , it is difficult to analyze mathematically why these algorithms are so successful , though significant progress has been made in the last few years .however , many open problems still remain . for all population - based metaheuristics ,multiple search agents form multiple interacting markov chains . at the moment ,theoretical development in these areas are still at early stage .therefore , the mathematical analysis concerning of the rate of convergence is very difficult , if not impossible .apart from the mathematical analysis on a limited few metaheuristics , convergence of all other algorithms has not been proved mathematically , at least up to now .any mathematical analysis will thus provide important insight into these algorithms .it will also be valuable for providing new directions for further important modifications on these algorithms or even pointing out innovative ways of developing new algorithms . for almost all metaheuristics including future new algorithms ,an important issue to be addresses is to provide a balanced trade - off between local intensification and global diversification . at present ,different algorithm uses different techniques and mechanism with various parameters to control this , they are far from optimal .important questions are : is there any optimal way to achieve this balance ?if yes , how ? if not , what is the best we can achieve ?furthermore , it is still only partly understood why different components of heuristics and metaheuristics interact in a coherent and balanced way so that they produce efficient algorithms which converge under the given conditions .for example , why does a balanced combination of randomization and a deterministic component lead to a much more efficient algorithm ( than a purely deterministic and/or a purely random algorithm ) ?how to measure or test if a balance is reached ? how to prove that the use of memory can significantly increase the search efficiency of an algorithm ?under what conditions ?in addition , from the well - known no - free - lunch theorems , we know that they have been proved for single objective optimization for finite search domains , but they do not hold for continuous infinite domains .in addition , they remain unproved for multiobjective optimization . if they are proved to be true ( or not ) for multiobjective optimization , what are the implications for algorithm development ?another important question is about the performance comparison . 
at the moment, there is no agreed measure for comparing performance of different algorithms , though the absolute objective value and the number of function evaluations are two widely used measures .however , a formal theoretical analysis is yet to be developed .nature provides almost unlimited ways for problem - solving .if we can observe carefully , we are surely inspired to develop more powerful and efficient new generation algorithms .intelligence is a product of biological evolution in nature .ultimately some intelligent algorithms ( or systems ) may appear in the future , so that they can evolve and optimally adapt to solve np - hard optimization problems efficiently and intelligently .finally , a current trend is to use simplified metaheuristic algorithms to deal with complex optimization problems .possibly , there is a need to develop more complex metaheuristic algorithms which can truly mimic the exact working mechanism of some natural or biological systems , leading to more powerful next - generation , self - regulating , self - evolving , and truly intelligent metaheuristics .l. steinhofel , a. albrecht and c. k. wong , convergence analysis of simulated annealing - based algorithms solving flow shop scheduling problems , lecture notes in computer science , * 1767 * , 277 - 290 ( 2000 ) .
metaheuristic algorithms are becoming an important part of modern optimization . a wide range of metaheuristic algorithms have emerged over the last two decades , and many metaheuristics such as particle swarm optimization are becoming increasingly popular . despite their popularity , mathematical analysis of these algorithms lacks behind . convergence analysis still remains unsolved for the majority of metaheuristic algorithms , while efficiency analysis is equally challenging . in this paper , we intend to provide an overview of convergence and efficiency studies of metaheuristics , and try to provide a framework for analyzing metaheuristics in terms of convergence and efficiency . this can form a basis for analyzing other algorithms . we also outline some open questions as further research topics . + * citation details * : yang , x. s. , ( 2011 ) . metaheuristic optimization : algorithm analysis and open problems , in : proceedings of 10th international symposium on experimental algorithms ( sea 2011 ) ( eds . p. m. pardalos and s. rebennack ) , kolimpari , chania , greece , may 5 - 7 ( 2011 ) , lecture notes in computer sciences , vol . 6630 , pp . 21 - 32 .
adaptive coding techniques are frequently employed , especially in wireless communications , in order to dynamically adjust the coding rate to changing channel conditions .an example of adaptive coding technique consists in puncturing a mother code . when the channel conditions are good more bits are punctured and the coding rate is increased . in poor channel conditionsall redundant bits are transmitted and the coding rate drops . however , in harsh conditions , the receiver might not be able to successfully decode the received signal , even if all the redundant bits have been transmitted .in such a case , the coded block can be retransmitted until the sent information is successfully decoded .this is equivalent to additional repetition coding , which further lowers the coding rate below the mother coding rate .however , the use of retransmission techniques might not be suitable nor possible in some situations , such as multicast / broadcast transmissions , or whenever the return link is strictly limited or not available ( such situations are generally encountered in satellite communications ) .the main alternative in this case is the use of _ erasure codes _ that operate at the transport or the application layer of the communication system : source data packets are extended with redundant ( also referred to as _ repair _ ) packets that are used to recover the lost data at the receiver .physical ( phy ) and upper layer ( ul ) codes are not mutually exclusive , but they are complementary to each other .adaptive coding schemes are also required at the upper layer , in order to dynamically adjust to variable loss rates . besides , codes with very small rates or even _ rateless _ are sometimes used at the application layer for fountain - like content distribution applications . in this paperwe propose a coding technique that allows to produce extra redundant bits , such as to decrease the coding rate below the mother coding rate .extra redundant bits can be produced in an incremental way , yielding very small coding rates , or can be optimized for a given target rate below the mother coding rate .as for puncturing , the proposed technique allows for using the same decoder , regardless of how many extra redundant bits have been produced , which considerably increases the flexibility of the system , without increasing its complexity .the proposed coding scheme is based on non - binary low density parity check ( nb - ldpc ) codes or , more precisely , on their _ extended binary image _ .if denotes the size of the non - binary alphabet , each non - binary symbol corresponds to a -tuple of bits , referred to as its binary image .extra redundant bits , called _ extended bits _ , are generated as the xor of some bits from the binary image of the same non - binary coded symbol .if a certain number of extended bits are transmitted over the channel , we obtain an _ extended code _ , the coding rate of which is referred to as _ extended ( coding ) rate_. in the extreme case when all the extended bits are transmitted , the mother code is turned into a _ very small rate _ code , and can be used for fountain - like content distribution applications . 
a similar approach to fountain codes , by using multiplicatively repeated nb - ldpc codes , has been proposed in .if some extended rate is targeted , we show that the extended code can be optimized by using density evolution methods .the paper is organized as follows .section [ sec : nbldpc_codes ] gives the basic definitions and the notation related to nb - ldpc codes . in section [ sec : extended_nbldpc_codes ] , we introduce the extended nb - ldpc codes and discuss their erasure decoding .the analysis and optimization of extended nb - ldpc codes are addressed in section [ sec : analysis_optimization ] .section [ sec : code_design_performance ] focuses on the code design and presents simulation results , and section [ sec : conclusions ] concludes the paper .we consider nb - ldpc codes defined over , the finite field with elements , where is a power of ( this condition is only assumed for practical reasons ) .we fix once for all an isomorphism of vector spaces : elements of will also be referred to as _ symbols _ , and we say that is the _ binary image _ of the symbol , if they correspond to each other by the above isomorphism .a non - binary ldpc code over is defined as the kernel of a sparse parity - check matrix .alternatively , it can be represented by a bipartite ( tanner ) graph containing symbol - nodes and constraint - nodes associated respectively with the columns and rows of . a symbol - node and a constraint - nodeare connected by an edge if and only if the corresponding entry of is non - zero ; in this case , the edge is assumed to be _ labeled _ by the non - zero entry .as usually , we denote by and the left ( symbol ) and right ( constraint ) edge - perspective degree distribution polynomials . hence , and , where and represent the fraction of edges connected respectively to symbol and constraint nodes of degree- .the design coding rate is defined as , and it is equal to the coding rate if and only if the parity - check matrix is full - rank .for any integer , let = ( k_0 , \dots , k_{p-1})^{\text{t}} ] , then , which is the same as puncturing the first bit , , from the binary image of .moreover , taking some is equivalent to puncturing the whole symbol .the optimization of puncturing distributions for nb - ldpc codes has been addressed in . in this paper , we restrict ourselves to the case when matrices are of the form ] by varying the parameter .figure [ fig : extended_codes ] illustrates an extended code defined over , with , and , which correspond to .the mother coding rate is and the extended coding rate is . ) , while red circles represent ( nontrivial ) extended bits . ]we consider that the extended codeword is transmitted over a binary erasure channel ( bec ) . at the receiver part ,the received bits ( both from the binary image and extended bits ) are used to reconstruct the corresponding non - binary symbols .precisely , for each received bit we know its position within the extended binary image of the corresponding symbol .hence , for each symbol node we can determine a set of _ eligible symbols _ that is constituted of symbols whose extended binary images match the received bits .these sets are then iteratively updated , according to the linear constraints between symbol - nodes . 
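a minimal sketch , assuming gf(16 ) ( m = 4 bits per symbol ) , of how the extended binary image of a symbol and the sets of eligible symbols described above can be computed . the brute - force enumeration and the helper names extended_image and eligible_symbols are ours , for illustration only .

```python
from itertools import product

M = 4                                       # bits per symbol: GF(2**M), here GF(16)

# the 2**M - 1 nonzero binary M-tuples; tuple k defines the extended bit
# k[0]*x[0] xor k[1]*x[1] xor ..., and the unit-weight tuples reproduce
# the ordinary binary-image bits themselves
K = [k for k in product([0, 1], repeat=M) if any(k)]

def extended_image(x):
    """full extended binary image (2**M - 1 bits) of the M-bit image x."""
    return [sum(ki * xi for ki, xi in zip(k, x)) % 2 for k in K]

def eligible_symbols(received):
    """symbols whose extended image matches the received (position, value) pairs;
    erased positions simply do not appear in `received`."""
    return [x for x in product([0, 1], repeat=M)
            if all(extended_image(x)[pos] == val for pos, val in received)]

# example: receiving the bit x0 (value 0) and the parity x0^x1^x2^x3 (value 1)
# leaves 2**(M - 2) = 4 eligible symbols out of 16
pos_x0 = K.index((1, 0, 0, 0))
pos_parity = K.index((1, 1, 1, 1))
print(len(eligible_symbols([(pos_x0, 0), (pos_parity, 1)])))   # -> 4
```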
alternatively ( and equivalently ), the extended code can be decoded by using the linear - time erasure decoding proposed in .the _ asymptotic threshold _ of an ensemble of codes is defined as the maximum erasure probability that allows transmission with an arbitrary small error probability when the code length tends to infinity .given an ensemble of codes , its threshold value can be efficiently computed by tracking the fraction of erased messages passed during the belief propagation decoding ; this method is referred to as _ density evolution_. in this paper , the density evolution is approximated based on the monte - carlo simulation of an infinite code , similar to the method presented in .this method has two main advantages : it can easily incorporate the extending distribution , and it can be extrapolated to more general channel models .] the goal of this section is to answer the following questions .first of all , assume that we have given a symbol node that has to be extended by bits .how should they be chosen among the ( nontrivial ) extended bits ? secondly , given an extended coding rate , how should be extended bits distributed over the symbol - nodes ? put differently , which is the optimal extending distribution ?we assume that we have given a symbol - node that has to be extended by bits .a choice of the bits among the extended bits corresponds to an extending matrix $ ] of size , with pairwise distinct columns .for each such a matrix , assume that the extended symbol is transmitted over the bec , and let be the expected number of eligible symbols at the receiver .recall that an eligible symbol is a symbol whose extended binary image match the received bits . if all transmitted bits have been erased, any symbol is eligible .conversely , if the received bits completely determine the non - binary symbol , then there is only one eligible symbol . more generally , let denote the sequence of received bits , and denote the submatrix of determined by the columns that correspond to the received positions of .then the eligible symbols are the solutions of the linear system , and their number is equal to .now , if denotes the erasure probability of the bec , it can be easily verified that : where the second sum takes over all the submatrices constituted of among the columns of .hence , in order to minimize the expected number of eligible symbols , we choose such that is maximal , where is the smallest number of linearly dependent columns of .consider the ensemble of regular ldpc codes defined over the .assume that each symbol - node is extended by bit , such as to achieve an extended rate . according to the choice of the extended bit ( among the nontrivial extended bits ), may be equal to , or .the asymptotic threshold corresponding to each choice of the extended bit is shown in figure [ fig : ext_bit_selec_expl ] .note that extended bits are ordered on the abscissa according to the corresponding . for comparison purposes ,we show also the asymptotic threshold corresponding to the repetition of some bit from the binary image ( trivial extended bit ) , in which case . 
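the selection rule described above , namely choosing the extension whose smallest set of linearly dependent columns is as large as possible , can be checked numerically by brute force . the sketch below ( the helper names gf2_rank and expected_eligible are ours ) compares , for a gf(16 ) symbol sent over a bec with erasure probability 0.5 , adding a plain repetition of one image bit against adding the 4 - way parity of the image .

```python
from itertools import combinations

def gf2_rank(vectors):
    """rank over GF(2) of a set of column vectors given as integer bit-masks."""
    basis = {}                              # pivot bit -> reduced vector
    for v in vectors:
        while v:
            pivot = v.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = v
                break
            v ^= basis[pivot]
    return len(basis)

def expected_eligible(columns, m, eps):
    """expected number of eligible symbols when the bits defined by `columns`
    (integer bit-masks of combinations of the m image bits) cross a BEC(eps)."""
    n, total = len(columns), 0.0
    for r in range(n + 1):                  # r = number of bits actually received
        for received in combinations(columns, r):
            total += (1 - eps) ** r * eps ** (n - r) * 2 ** (m - gf2_rank(received))
    return total

image = [0b0001, 0b0010, 0b0100, 0b1000]            # the four binary-image bits
print(expected_eligible(image + [0b0001], 4, 0.5))  # repeat x0: smallest dependency of size 2, ~4.22
print(expected_eligible(image + [0b1111], 4, 0.5))  # 4-way parity: smallest dependency of size 5, ~3.81
```

the parity bit leaves no dependency among fewer than five columns , whereas the repeated bit creates a dependency of size two , and the expected number of eligible symbols is correspondingly smaller for the parity choice .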
also, the blue line correspond to the threshold obtained if each symbol node was extended by choosing a random nontrivial extended bit .we observe that the best threshold is obtained when each symbol node is extended by , which isthe xor of the four bits of the binary image .-bits extension for regular and semi - regular nb - ldpc codes over ] we consider two ensembles of regular codes , and one ensemble of semi - regular codes , of coding rate , defined over . for each ensemble of codes, we consider five different cases , in which all symbol nodes are extended by the same number of bits , with , , , , and .accordingly , the extended coding rate , , , , and .the _ normalized gap to capacity _ , defined as : is shown in figure [ fig : ext_bit_selec_expl_overall ] .solid curves correspond to a -optimized choice of the extended bits , while dashed curves correspond to a random choice of the extended bits . for , there is only a small difference between these two strategies .however , when , the gain of the -optimized choice is significant for both regular and semi - regular codes .first of all , we discuss the case of regular codes . in figure[ fig : spread_cluster_regular_expl ] , we consider three ensembles of regular codes over , with coding rate . for each ensemble of codes, we consider five cases , corresponding to values of between and . in each case , a fraction of symbol - nodes are extended by bits , while the remaining symbol - nodes have no extended bit .the fraction is chosen such that the extended coding rate .hence , , for , respectively .the right most point on the abscissa corresponds to a sixth case , in which the extended rate is achieved by extending of symbol - nodes by bits ( hence , , which is the maximum number of extended bits ) . for any of the three ensembles, we can observe that the smallest gap to capacity is obtained for , which means that extended bits are _ spread _ over as many symbol nodes as possible ( in this case , % ) , instead of being _ clustered _ over the smallest possible number of symbol - nodes .] in case of irregular nb - ldpc codes , let be an extending distribution .thus , is the fraction of degree- symbols with nontrivial extended bits , . let denote the average number of extended bits per symbol - node of degree- ; that is : \ ] ] we say that _ the extending distribution is of spreading - type _if for any degree , only if or . in different words , for any degree , the extended bits are uniformly spread over all the symbol - nodes of degree . clearly , a spreading - type distribution is completely determined by the parameters , as we have , , and for .we say that _ the extending distribution is of clustering - type _ if for any degree , only if . in different words , for any degree , the extended bits are clustered over the smallest possible fraction of symbol - nodes of degree . 
clearly , a clustering - type distribution is completely determined by the parameters , as we have and for .now , let us consider the ensemble of semi - regular ldpc codes over with edge - perspective degree distribution polynomials and .the mother coding rate is , and we intend to extend symbol - nodes such as to achieve extended coding rates .several extending distributions are compared in figure [ fig : ext_distr_irreg_expl ] .there are three spreading - type distributions , which spread the extended bits over all the symbol - nodes , or only over the symbol - nodes of degree either or , and two clustering - type distributions , which cluster the extended bits over the symbol - nodes of degree either or . in all cases ,extended bits ( or , equivalently , extending matrices ) are chosen such as to maximize the corresponding values .we observe that the smallest gap to capacity is obtained for extending distributions that spread extended bits either over the degree- symbol nodes only ( ) , or over all the symbol - nodes ( ) . ]based on the above analysis , we only consider spreading - type extending distributions .such an extending distribution is completely determined by the parameters , and the extended coding rate can be computed by , where is the fraction of symbol - nodes of degree . for given degree distribution polynomials and , and a given extending rate , we use the differential evolution algorithm to search for parameters that minimize the asymptotic gap to capacity .we assume that , for each symbol - node , the extended bits are chosen such as to maximize the corresponding .the optimized extended codes are presented in the next section .in this section we present optimized extending distributions for an irregular mother code over .the mother code has coding rate , and it has been optimized by density evolution . the asymptotic threshold is , and the edge - perspective degree distribution polynomials are : we optimized extending distributions for extended rates .optimized distributions are shown in table [ tab : opt_ext_distr ] , together with the corresponding asymptotic threshold and normalized gap to capacity . for comparison purposes ,we have also indicated the normalized gap to capacity corresponding to a random choice of extended bits . the last column corresponds to extended rate , obtained by extending each symbol - node by the maximum number of extended bits , _i.e. _ bits .it can be observed that the optimized distributions allow to maintain an almost constant value of , for all extended rates . 
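a natural way to compute the extended coding rate from the parameters of a spreading - type distribution , consistent with the full - extension value of 2/15 quoted above for a rate - 1/2 mother code over gf(16 ) , is sketched below ; the function name extended_rate and the degree fractions in the example are illustrative assumptions rather than quantities taken from the text . the optimized distributions themselves are listed in the table that follows .

```python
def extended_rate(R, m, a, f):
    """extended coding rate of a spreading-type distribution (inferred relation).

    R : mother coding rate (in symbols), m : bits per symbol,
    a : {degree: node-perspective fraction of symbol nodes of that degree},
    f : {degree: average number of extended bits per degree-d symbol node}.
    every symbol node ships its m image bits plus, on average, f[d] extended bits.
    """
    extra = sum(a[d] * f.get(d, 0.0) for d in a)
    return R * m / (m + extra)

# full extension of a rate-1/2 code over GF(16): 11 extended bits on every node
# (the degree fractions below are arbitrary; the result does not depend on them here)
print(extended_rate(0.5, 4, {2: 0.5, 16: 0.5}, {2: 11, 16: 11}))   # -> 0.1333... = 2/15
```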
table [ tab : opt_ext_distr ] . each column corresponds to one target extended rate ; the four rightmost headers read 0.3 , 0.25 , 0.2 and 2/15 , while the first four headers , lost in extraction , are consistent with the values 1/2 , 0.45 , 0.4 and 0.35 ( shown in parentheses ) . the first four rows give the optimized average number of extended bits per symbol node , one row per symbol - node degree of the irregular code ( the degree labels were also lost and are denoted a d here ) ; the last three rows give the asymptotic threshold , the normalized gap to capacity of the optimized distribution , and the gap obtained with a random choice of extended bits .

extended rate               | (1/2)  | (0.45) | (0.40) | (0.35) | 0.3    | 0.25   | 0.2    | 2/15
ext . bits / node , deg . a | 0      | 0.4610 | 1.0164 | 1.7851 | 2.7442 | 4.1290 | 6.1737 | 11
ext . bits / node , deg . b | 0      | 0.3731 | 1.2113 | 1.2981 | 2.5055 | 3.5864 | 5.3409 | 11
ext . bits / node , deg . c | 0      | 0.2487 | 0.0359 | 1.8748 | 1.6831 | 2.3393 | 4.7494 | 11
ext . bits / node , deg . d | 0      | 0.1309 | 0.4871 | 0.8511 | 1.6415 | 2.9800 | 4.0234 | 11
threshold                   | 0.4945 | 0.544  | 0.5939 | 0.6406 | 0.69   | 0.74   | 0.7872 | 0.8543
gap ( optimized )           | 0.011  | 0.0109 | 0.0102 | 0.0145 | 0.0143 | 0.0133 | 0.016  | 0.0143
gap ( random choice )       | 0.011  | 0.0234 | 0.0284 | 0.0266 | 0.0251 | 0.0213 | 0.0172 | 0.0143

finally , figure [ fig : flp_opt_ext_distr_irreg ] presents the bit erasure rate ( ber ) performance of optimized extending distributions for finite code lengths . all the codes have binary dimension ( number of source bits ) equal to 5000 bits ( 1250 -symbols ) . the mother code with rate has been constructed by using the progressive edge growth ( peg ) algorithm , and symbol nodes have been extended according to the optimized distributions ( extension matrices being chosen such as to maximize ) . based on the extended binary image of nb - ldpc codes , we presented a coding technique that allows one to produce extra redundant bits , so as to decrease the coding rate of a mother code . the proposed method allows for using the same decoder as for the mother code : extra redundant bits transmitted over the channel are only used to `` improve the quality of the decoder input '' . extending distributions for regular and irregular codes have been analyzed by using simulated density evolution thresholds of extended codes over the bec . we have also presented optimized extending distributions , which exhibit a normalized gap to capacity , for extended rates from to
based on the extended binary image of non - binary ldpc codes , we propose a method for generating extra redundant bits , such as to decreases the coding rate of a mother code . the proposed method allows for using the same decoder , regardless of how many extra redundant bits have been produced , which considerably increases the flexibility of the system without significantly increasing its complexity . extended codes are also optimized for the binary erasure channel , by using density evolution methods . nevertheless , the results presented in this paper can easily be extrapolated to more general channel models . non - binary ldpc codes , extended binary image , incremental redundancy , very small coding rates .
quantum teleportation ( qt for short ) is the first quantum information processing protocol presented by bennett _ to achieve the transmission of information contained in quantum state determinately .many theoretical schemes have been proposed later .it has also been realized experimentally .latter , to save resource needed in the process of information transmission , lo put forward a scheme for remote preparation of quantum state ( rsp for short ) . compared with qt , in rsp the sender does not own the particle itself but owns all the classical information of the state he or she wants to prepare for the receiver , who is located separately from the sender . the resource consumption is reduced greatly in rsp , as the sender do not need to prepare the state beforehand .the rsp has already attracted many attentions .a number of rsp protocols were presented , such as rsp with or without oblivious conditions , optimal rsp , rsp using noisy channel , low - entanglement rsp , continuous variable rsp and so on .experimental realization was also proved . in rsp protocols ,all the classical information is distributed to one sender , which may lead to information leakage if the sender is not honest . in order to improve the security of remote state preparation ,controllers are introduced , which is the so called controlled remote state preparation ( crsp for short ) , and it has drawn the attention of many researchers .in contrast to the usual rsp , the crsp needs to incorporate a controller .the information could be transmitted if and only if both the sender and receiver cooperate with the controller or supervisor .crsp for an arbitrary qubit has been presented in a network via many agents . a two - qubit state crsp with multi - controllers using two non - maximally ghz states as shared channel is shown in .crsp with two receivers via asymmetric channel , using povm are presented .the five - qubit brown state as quantum channel to realize the crsp of three - qubit state is elaborated in .most of the existing schemes chose to use the ghz - type state , w - type state , bell state or the composite of these states as the shared quantum channel . however in this paper , we choose the general pure three - qubit state as quantum channel , which is not locc equivalent to the ghz state . andfor some special cases , the probability for successful crsp can reach unit . in , the authors proved that for any pure three - qubit state , the existence of local base , which allows one to express a pure three - qubit state in a unique form using a set of five orthogonal state .it is the called generalised schmidt - decomposition for three - qubit state . using the generalised schmidt - decomposition , gao _ et al . _ proposed a controlled teleportation protocol for an unknown qubit and gave analytic expressions for the maximal successful probabilities .they also gave an explicit expression for the pure three - qubit state with unit probability of controlled teleportation .motivated by the ideas of the two papers , we try to investigate the controlled remote state preparation using the general pure three - qubit states and their generalised schmidt - decomposition .the paper is arranged as follows . in sec .2 , the crsp for an arbitrary qubit is elucidated in detail .we find that the successful probability is the same as that of controlled teleportation for qubits with real coefficients . in sec . 3 , the crsp for a general two - qubit state is expounded . 
for two - qubit state with four real coefficients .the corresponding successful probability is the same as that of controlled teleportation of a qubit . in sec . 4, we conclude the paper .suppose that three separated parties alice , bob and charlie share a general pure three - qubit particle , the particle belongs to alice , to bob and to charlie , respectively .the distribution of the three particles are sketched in fig.1 . in figure 1 ,the small circles represent the particles , the solid line between two circles means that the corresponding two particles are related to each other by quantum correlation .according to , the general pure three qubit state has a unique generalised schmidt - decomposition in the form where for , , .the and in eq.(1 ) are decided uniquely with respect to a chosen general pure three qubit state .now alice wants to send the information of a general qubit to the remote receiver bob under the control of charlie .alice possesses the classical information of this qubit , i.e. the information of and , but does not have the particle itself .next , we make three steps to complete the crsp for ._ step 1 _ the controller charlie firstly makes a single qubit measurement under the base ]. the choice of and could be flexible according to the need of the controller . if and , and will be the base. then charlie broadcasts his measurement outcomes publicly to alice and bob using one classical bit . using eq.(2 ), the quantum channel can be rewritten as where |00\rangle \nonumber\\ & & \quad + e^{-i\eta}\sin\frac{\theta}{2}[a_{2}|01\rangle + a_{3}|10\rangle+a_{4}|11\rangle]\bigg\}_{ab}\end{aligned}\ ] ] |00\rangle \nonumber\\ & & \quad - e^{-i\eta}\cos\frac{\theta}{2}[a_{2}|01\rangle + a_{3}|10\rangle+a_{4}|11\rangle ] \bigg\}_{ab}\end{aligned}\ ] ] if the result of charlie s measurement is , the whole system collapses to with probability while collapses to with probability for the result . to ensure that the particle entangles with the whole system , we assume that and are not equal to at the same time .this is equivalent to and at the same time .note that _ step 1 _ is actually similar to that of controlled teleportation in .we arrange it here to keep the integrity of the paper .more detailed calculation can be found in . _ step 2 _ without loss of generality , we assume that the result of charlie s measurement is . then the whole system collapse to . using the schmidt - decomposition of two - qubit system, there exists bases and for particle and respectively , such that can be expressed as where , in . on receiving the result of charlie s measurement, the sender alice prepares a projective measurement utilizing the classical information of in the following form : then could be reexpressed as next we first discuss the case for real coefficients , i.e. are real. then eq.(6 ) will be alice measures her qubit under base and gets the outcome and with probability and respectively .and alice sends her measurement result to bob by 1 classical bit .the receiver bob s system will collapse to respectively ._ step 3 _ we assume that alice s measurement result is 0 . now according to charlie and alice s result , bob wants to recovery the state on his side .bob needs to introduce an auxiliary particle in initial state , then he makes a unitary operation on his particle and the auxiliary particle , and his state changes to , where {bb^{'}}.\end{aligned}\ ] ] after the unitary operation , bob makes a measurement on his auxiliary particle under the base . 
the probability for bob to get measurement result is , and he can recovery state successfully .but if the result is , the scheme fails .similarly , if alice s measurement result is 1 , bob also introduces an auxiliary particle in initial state .but the unitary operation is , and the system after the unitary operation is , where {_{bb^{'}}}.\end{aligned}\ ] ] the probability for bob to successfully reconstruct the state is . combining the process of _ step 1 _ and_ step 2 _ , when the controller charlie s measurement result is 0 , the receiver bob can reconstruct the qubit with probability similarly , if charlie s measurement result is 1 with probability , the whole system collapses to . and there are bases and for alice and bob s systems ( for reference ) , so that the schmidt - decomposition for is then continuing to use the last 2 steps as those in charlie s measurement result is 0 , we can get that the successful probability for bob to produce the desired state is . as a result ,for the real case , alice can prepare the qubit at bob s position under the control of charlie with probability , which is the same as that of controlled teleportation in .but the consumption of classical bits is reduced to 2 cbits for the whole process .next we discuss the case for complex coefficients ._ step 1 _ is the same as that of real case . in _ step 2_ , if alice s measurement result is 0 , referring to eq .( 6 ) , the remote state preparation fails .when alice gets the result 1 with probability , the whole system collapses to then _ step 3 _ is the same as that of the real case .the whole successful probability is which is half of the real case . according to the discussion of , the maximally probability for controlled teleportation will reach unit if and only if the shared channel is as for the controlled remote state preparation for a qubit using the above channel , the successful probability can also reach one for the real case , and for the complex case .in the crsp for a two - qubit state , there are also three parties alice , bob and charlie .they share a quantum channel which is the composite of and the bell state , the distribution of particles in the shared quantum channel is displayed in fig.2 , the meaning of symbols is the same as in fig.1 . the particle belongs to charlie , to alice and to bob .now the sender alice possesses the classical information of a general two qubit state , on his particle , and gets the measurement result 0 and 1 with probability and respectively . the whole system collapses to and respectively .he broadcast his measurement result using 1 cbits ._ step 2 _ we assume charlie s measurement result is 0 in _step 1_. then the system state after his measurement is . utilizing schmidt - decomposition , there exists bases and such that {aa^{'}bb^{'}}.\end{aligned}\ ] ] next we first discuss the case in which all the coefficients are real . 
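the schmidt decompositions invoked in step 2 above , and again in the two - qubit protocol below , can be obtained numerically from a singular value decomposition of the reshaped state vector . a minimal sketch with numpy ; the function name schmidt and the example state are our own choices .

```python
import numpy as np

def schmidt(state):
    """schmidt decomposition of a two-qubit pure state.

    state : length-4 vector in the computational basis |00>, |01>, |10>, |11>.
    returns (coeff, basis_a, basis_b) such that the state equals
    sum_k coeff[k] * (column k of basis_a) tensor (column k of basis_b).
    """
    m = np.asarray(state, dtype=complex).reshape(2, 2)
    u, s, vh = np.linalg.svd(m)
    return s, u, vh.T                      # columns of u and vh.T are the local bases

# example: sqrt(0.8)|00> + sqrt(0.2)|11> has schmidt coefficients sqrt(0.8), sqrt(0.2)
coeff, a, b = schmidt([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
print(np.round(coeff**2, 3))               # -> [0.8 0.2]
```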
according to her knowledge of the two - qubit state , alice constructs the measurement basis , then the system for alice and bob can be rewritten as \nonumber\\&&+|\mu_{1}\rangle[\sqrt{\lambda_{00}}(\beta|0^{'}0\rangle -\alpha|0^{'}1\rangle)-\sqrt{\lambda_{01}}(\delta|1^{'}0\rangle -\gamma|1^{'}1\rangle ) ] \nonumber\\&&+|\mu_{2}\rangle[\sqrt{\lambda_{00}}(\gamma|0^{'}0\rangle + \delta|0^{'}1\rangle)-\sqrt{\lambda_{01}}(\alpha|1^{'}0\rangle + \beta|1^{'}1\rangle ) ] \nonumber\\&&+|\mu_{3}\rangle[\sqrt{\lambda_{00}}(\delta|0^{'}0\rangle -\gamma|0^{'}1\rangle)+\sqrt{\lambda_{01}}(\beta|1^{'}0\rangle -\alpha|1^{'}1\rangle)]\bigg\}_{aa^{'}bb^{'}}.\end{aligned}\ ] ] thus alice can get result 0 or 1 with probability /2 ] .the system state after alice s measurement is with respective to the result 0 , 1 , 2 , 3 .alice then broadcasts her measurement result to bob using 2 cbits ._ step 3 _ assume that the measurement result of alice is 0 in _step 2_. then according to the result , bob introduces an auxiliary particle in the initial state , and makes unitary operation on his particles , where here is the identity matrix and the state after bob performing the unitary operation is thereafter , bob makes a projective measurement on his auxiliary particles under basis .he can get result 0 with probability . as for the other three cases , bobcan successfully reconstruct the desired two qubit state with probability , , and .similarly , in the real case , if charlie s measurement result is 1 with probability , then the system state after his measurement is . using the schmidt - decomposition we get where and are the same as those in section 2bob can also reconstruct the two - qubit state using similar method in the above three steps . as a result , for the real case, the total successful probability for the sender alice to prepare the two - qubit state at the position of bob under the control of controller charlie is \nonumber\\&&=2(p_{0}\lambda_{00}+p_{1}\lambda_{10}).\end{aligned}\ ] ] it is the same as that of the controlled teleportation for the real case of a qubit . in the whole processthe consumption of classical resource is 3 cbits . for the casein which there is at least one complex coefficient , in _ step 2 _ , alice constructs measurement basis in the following form , where , here we can assume that . because if , the number of coefficients decrease to two , which is actually the same as the single qubit case .the system for alice and bob can be reexpressed as \nonumber\\&&+|\nu_{1}\rangle[\sqrt{\lambda_{00}}\zeta(\alpha|0^{'}0\rangle -\beta|0^{'}1\rangle)-\sqrt{\lambda_{01}}\zeta^{-1}(\gamma|1^{'}0\rangle -\delta|1^{'}1\rangle ) ] \nonumber\\&&-|\nu_{2}\rangle[\sqrt{\lambda_{00}}(\beta^{*}|0^{'}0\rangle + \alpha^{*}|0^{'}1\rangle)+\sqrt{\lambda_{01}}(\delta^{*}|1^{'}0\rangle + \gamma^{*}|1^{'}1\rangle ) ] \nonumber\\&&+|\nu_{3}\rangle[\sqrt{\lambda_{00}}\zeta(-\beta^{*}|0^{'}0\rangle -\alpha^{*}|0^{'}1\rangle)+\sqrt{\lambda_{01}}\zeta^{-1}(\delta^{*}|1^{'}0\rangle + \gamma^{*}|1^{'}1\rangle)]\bigg\}_{aa^{'}bb^{'}}.\end{aligned}\ ] ] thus alice can get result 0 and 1 with probability /2 ] , respectively .the states after alice s measurement with respect to the result 0 and 1 are and we divide into two cases according to the value of .+ ( i ) , i.e. . in this case , using similar methods as in the real cases above , bob can recover the desired two - qubit state both from states in eq.(11 ) and eq.(12 ) . and the probabilities are both . 
similar scheme applies to the case that charlie s measurement result is 1 .thus the total successful probability for alice remotely to prepare the two - qubit state at bob s position under the control of charlie is \frac{\lambda_{00}}{\lambda_{00}(|\alpha|^{2}+|\beta|^{2})+\lambda_{01}(|\gamma|^{2}+|\delta|^{2 } ) } \nonumber\\ & & + p_{1}[\frac{\lambda_{10}(|\alpha|^{2}+|\beta|^{2})+\lambda_{11}(|\gamma|^{2}+|\delta|^{2})}{2 } ] \frac{\lambda_{10}}{\lambda_{10}(|\alpha|^{2}+|\beta|^{2})+\lambda_{11}(|\gamma|^{2}+|\delta|^{2})}\bigg\ } \nonumber\\&&=p_{0}\lambda_{00}+p_{1}\lambda_{10 } , \end{aligned}\ ] ] which is half of the case that all the coefficients are real . as for the result 3 and 4, the crsp protocol fails .+ ( ii ) . for this case , as bob does not know the classical information of , only when alice s measurement result is 0 , bob can reconstruct the two - qubit state .thus the successful probability reduces to half of ( i ) as .in this paper , protocols for controlled remote state preparation are presented both for a single qubit and two - qubit state .we utilize the general pure three qubit states as the shared quantum channels , which are not locc equivalent to the ghz state .we discuss protocols for both states with real and complex coefficients , and find that the general pure three - qubit states can help to complete crsp probabilistically .more than that , in some spacial cases , the crsp can be achieved with unit probability , which are deterministic crsp protocols .this overcomes the limitation that most of the existing quantum communication protocols are completed with ghz- , w- or bell states , or the composition of these states .moreover , due to the involvement of controller and multi - partities , this work may have potential application in controlled quantum communication , quantum network communication and distributed computation .bennett , c.h . ,brassard , g. , crpeau , c. , jozsa , r. , peres , a. , wootters , w.k . : teleporting an unknown quantum state via dual classical and einstein - podolsky - rosenchannels .lett . * 70 * , 1895 - 1899 ( 1993 )
the protocols for controlled remote state preparation of a single qubit and a general two - qubit state are presented in this paper . the general pure three - qubit states are chosen as shared quantum channel , which are not locc equivalent to the mostly used ghz - state . it is the first time to introduce general pure three - qubit states to complete remote state preparation . the probability of successful preparation is presented . moreover , in some special cases , the successful probability could reach unit . example.eps gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore
photon pairs from nonlinear optics are so far the only resource to have distributed quantum entanglement over more than a few kilometers , a critical link in future quantum networks , and are well - suited for use in multi - port quantum interferometers for sensing , simulation and computation , both as pairs directly and for heralded single photons . entangled photon pairs have also been used in quantum teleportation and entanglement swapping .these applications require spectrally pure photons : mixedness of the photon states leads to reduced visibility of the interference of independent photons , and therefore lower - quality final states .parametric down - conversion ( pdc ) and four - wave mixing ( fwm ) are the most common sources of photon pairs , and these photons usually possess spectral anti - correlation .this frequency entanglement can be useful for some applications , but is catastrophic for multi - photon interference or entanglement - swapping experiments .a convenient solution is narrowband filtering of both photons , which casts each into a single spectral mode , removing entanglement in favor of the spectral purity of each photon .both fwm sources and pdc sources often use filters much narrower than the photon bandwidths . but is spectral filtering compatible also with high symmetric heralding efficiency , that is , high probability of detecting either photon given detection of its partner ?high heralding efficiency is critical for scaling experiments and communications to many photons and higher rates due to the exponential increase in losses with number of photons , and also of fundamental importance : for reaching scalability in optical quantum computing , in device - independent quantum cryptography , and for tests of local causality with entangled photons .here we show that , for photon pair sources with spectral correlation or anti - correlation , increasing the spectral purity comes at a direct cost of decreasing the symmetric heralding efficiency .this tradeoff is based only on the joint spectral intensity ( jsi ) of the photons , not on the underlying physics that produce a specific jsi , meaning our results are applicable to both pdc and fwm , and to pulsed and continuous - wave pumps .we find a significant drop in achievable symmetric heralding efficiency even with ideal filters .we quantify this tradeoff by introducing the symmetric fidelity of the photon pairs to two pure single photons , and show that it is bounded well below one for spectrally - correlated sources .this is supported by an experiment using a lithium niobate photon - pair source , where we vary filter parameters , and find that heralding efficiency necessarily decreases as purity increases .similar results could be obtained for spatial correlation and spatial filtering , but here we focus on a single spatial mode .previous investigations of filtering in pdc and fwm have largely focused on heralded single photons , where the herald_ing _ photon is strongly filtered and the herald_ed _ photon is unfiltered , allowing both high spectral purity and high single - sided heralding efficiency .recent theoretical work has included also spatial entanglement and purity with spatial and spectral filters , showing again high single - sided heralding efficiency and purity .this is in contrast to source engineering methods , which achieve intrinsically pure states by controlling the dispersion and pump bandwidth .some schemes with tight spectral and time filtering can even outperform this source engineering 
, when considering production rates as well as purity . for the case where both photons are to be used , hints that filtering is incompatible with high symmetric heralding efficiency have appeared numerous times , but so far no experiments have directly studied the impact of filtering on purity and heralding efficiency simultaneously .one can get a feeling for the intrinsic tradeoff between purity and heralding efficiency from [ fig.intro ] . it shows the joint spectral intensity of an example photon pair state , overlaid with narrowband filters on each photon , labeled signal and idler . to achieve a spectrally pure state , the jsi that remains after filteringmust be uncorrelated between the two photons , either a circle , or an ellipse along the vertical or horizontal axis .but for high symmetric heralding efficiency , the two - photon amplitudes transmitted by each filter individually must overlap , otherwise signal photons will pass the filter without the corresponding idler and vice versa .purity can also be achieved by narrowband filtering just one photon , which decreases the heralding efficiency only for that photon .is multiplied by the pump envelope ( which always has angle ) to produce the total jsi .thus the overall angle of the jsi is somewhere between and . ]an uncorrelated jsi , fully contained within both filters is only possible for certain ranges of the phasematching angle , namely ] can have even without filtering , as the optimal filter bandwidth goes to infinity .this shows clearly the futility of filtering for purity in pdc : the conditions in which filters are needed are only where filtering can not recover perfect fidelity due to lowered heralding efficiency .of course without filters in these conditions the fidelity to a pure single photon would be even lower .we stress that this fidelity bound is generic for all pdc and fwm sources ( with jsis described by the pump - times - phasematching model ) , and is thus a very powerful tool in source design . finally , to show the sharpness of these effects we vary the filter bandwidths independently and set the pump and phasematching bandwidths to and respectively , which for allows an optimal symmetrized fidelity . as shown in [ fig.filters ] , the best filter heralding efficiencies for the signal photon have the largest signal filter and the smallest idler filter ; and vice versa for the idler photon .however the largest purity requires small filters on both arms , resulting in a fidelity that varies slowly over filter bandwidth and never exceeds 0.57 , falling to zero as either filter gets too narrow . ] and matching the interaction length and pump bandwidth , or , i.e. strong filtering .but for , at least one of or , implying at least one heralding efficiency tending to zero . to find the symmetized fidelity we first consider the fidelity of the signal photon to an arbitrary gaussian pure single photon state , after filtering and heralding by the ( filtered ) idler photon . the pure state is with the fidelity ( in the sense of probabilities ) is where , giving differentiating with respect to to find the state which maximizes the fidelity gives for the maximum fidelity a similar procedure for the idler yields combining these for the symmetrized efficiency gives finally we consider the purity - efficiency factor of both photons together , which allows analytic optimization over the filter bandwidths . the factor is for , the pef can have in the best case any two of approach 1 , while the other approaches 0 . 
for the phasematching angles
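the trade - off derived above can be reproduced numerically . the sketch below discretizes a generic spectrally anti - correlated gaussian two - photon amplitude ( a stand - in for , not the exact form of , the pump - times - phasematching model used in the text ) , applies gaussian amplitude filters to signal and idler , and reports the heralded - photon purity together with the two heralding efficiencies ; all parameter values are illustrative .

```python
import numpy as np

# discretized joint spectral amplitude with strong spectral anti-correlation
n = 400
nu = np.linspace(-5.0, 5.0, n)                       # detuning grid (arbitrary units)
S, I = np.meshgrid(nu, nu, indexing="ij")
rho = -0.9                                           # spectral correlation coefficient
jsa = np.exp(-(S**2 - 2 * rho * S * I + I**2) / (2 * (1 - rho**2)))

def report(sig_bw, idl_bw):
    Ts = np.exp(-S**2 / (2 * sig_bw**2))             # amplitude transmission of the filters
    Ti = np.exp(-I**2 / (2 * idl_bw**2))
    both = np.sum(np.abs(jsa * Ts * Ti) ** 2)
    only_i = np.sum(np.abs(jsa * Ti) ** 2)
    only_s = np.sum(np.abs(jsa * Ts) ** 2)
    eta_s, eta_i = both / only_i, both / only_s      # heralding efficiencies after filtering
    s = np.linalg.svd(jsa * Ts * Ti, compute_uv=False)
    lam = s**2 / np.sum(s**2)                        # schmidt coefficients of the filtered state
    return np.sum(lam**2), eta_s, eta_i              # purity, eta_signal, eta_idler

for bw in (3.0, 1.0, 0.3):                           # narrower filters on both photons
    print(bw, report(bw, bw))
```

narrowing the filters drives the purity toward one while both heralding efficiencies fall , illustrating the purity / efficiency trade - off discussed above .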
Photon pairs produced by parametric down-conversion or four-wave mixing can interfere with each other in multiport interferometers, or carry entanglement between distant nodes for use in entanglement swapping. This requires the photons to be spectrally pure, to ensure good interference, and to have high and symmetric heralding efficiency, to know accurately the number of photons involved and to maintain high rates as the photon number grows. Spectral filtering is often used to remove noise and to define the spectral properties; however, its effect on the heralding efficiency is not usually considered. For heralded single photons, high purity and high single-sided heralding efficiency are possible, but when both photons of a pair source are filtered, we show that the double-sided heralding efficiencies are strongly affected even by ideal spectral filters with 100% transmission in the passband: any improvement in purity from filtering comes at the cost of lowered heralding efficiency. We demonstrate this effect with analytical and numerical results, and with an experiment in which we vary the photon filter bandwidths and measure the increase in purity and the corresponding reduction in heralding efficiency.
we consider models for populations whose intermingling may be beneficial to both of them , harmful for both of them , or beneficial for one and detrimental for the other one .the classical models always assume individualistic behavior of each population , see e.g. part i of .new models for mimicking the herd group defense of herbivores have been recently introduced , , differing quite a bit from older ideas relying on different assumptions , , or from more recent contributions , , in which the word `` group '' is used with a completely different meaning , or at least it is mathematically modeled in a completely different way from ours .furthermore , the biological literature abounds on social , herd or pack behaviour , but these concepts are either not the same that are considered here or modeled via different mathematical tools , e.g. graph theory or game theory , than those employed here , see for instance and the wealth of literature that is cited in these papers .although deals with a demographic model , an extension to ecoepidemic systems has been proposed , .these kind of systems consider basic population interactions , on which the effects of epidemics are superimposed . in the past twenty yearsa great deal of research effort has been devoted to their understanding , for a brief introduction , see chapter 7 of .these investigations represent a natural outgrowth of the developments of the late nineties in mathematical epidemiology , when the effects of population changes began to be accounted for in models for the spreading of diseases , . in this paperwe confine ourselves to the pure demographic situation , however .we extend the community gathering idea also to predators , allowing them to hunt the prey in a coordinate fashion .thus the models we consider markedly differ from those of , in the biological assumptions and above all in the mathematical structure .in fact , this is not a mere extension of previous work to different population associations , because the predators are behaving individualistically in and therefore predator s pack behavior is absent in those models .when the predators pack hunts the prey in general the individuals that have the major benefit are those that either take the most advantageous positions in the community in order to get the best share of the loot , or simply those that get it because they are stronger .assume therefore that positions on the edge of the pack have the best returns for the individuals that occupy them , since they are the first to fall upon the prey .the main idea of community behaviors for predators had been considered in . in the framework of animals socialized behavior these ideashave recently been discussed also in and carried over to ecoepidemic systems , .we consider two situations for the prey , namely when they behave individualistically or when they gather in herds , following the assumptions of . in the latter situation , the most harmed prey during predators huntingare those staying on the boundary of the herd . herewe also extend this concept to more general types of interactions among populations thriving in the same environment .the cases of symbiosis and competition are also well - known in the literature .again , the classical approach envisions an individualistic behavior for the involved populations . 
in partthis idea has been introduced in , but assuming that only one population behaves socially , the individuals of the other one live independently of each other .we extend now the analysis to the case in which both populations show a community behavior , both when each one of the two communities benefits from the interactions with the other one , as well as the case in which the communities compete with each other .the systems introduced here mathematically model the interactions occurring on the edge of the pack via suitable nonlinear functions of the populations in place of the classical bilinear terms coming from the mass action law .these are therefore gompertz - like interaction terms , with a fixed exponent , whose value is .its value comes from its geometric meaning , it represents the fact that the perimeter of the patch occupied by the population is one - dimensional , while the patch itself is two - dimensional , as explained in detail below in section 2 .the paper is organized as follows .the next section discusses the model formulation .section 3 contains the various dynamical systems , their adimensionalizations , some mathematical preliminaries and the analysis of the equilibria in which one or more populations are absent .each of the following sections investigates instead the coexistence equilibrium respectively for the cases of symbiosis , the two predator - prey cases and competition .a final discussion concludes the paper .the basic ideas underlying modeling herd behavior have been expounded in . herewe recall the main steps for the benefit of the reader . consider a population that gathers together .let represent its size .if this population lives on a certain territory of size , the number of individuals staying at the outskirts of the pack is directly related to the length of the perimeter of the territory . therefore its length is proportional to .since is distributed over a two - dimensional domain , the density square root , i.e. will therefore count the individuals at the edge of the territory .now let us assume that another population intermingles with the one just considered .we assume that the interactions of the latter occur mainly via the individuals living at the periphery , so that the interaction term for each individual of the population must be proportional to . as a result ,if behaves individualistically , the interactions among the two populations are expressed by . alternatively ,if also gathers in herds , the interactions will occur at the edge of each bunch of individuals , and therefore will contain square root terms for both populations .they will thus be modeled via .interactions between population can be of different types .they can benefit both , in the case of symbiosis .alternatively they can damage both populations , when they compete among themselves directly or for common resources .finally , one population receives an advantage from the other one ; this happens in the predator - prey situation .each of these possible configurations could be subject to pack behavior in one or both populations .the case of one population gathering in herds while the other one behaves individualistically has already been extensively dealt with in . with one exception , that involves pack predation and individual prey , not considered in , we will therefore concentrate on models involving both populations with individuals sticking together .let us denote by and the sizes of two populations in consideration as functions of time . 
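To make the herd-herd coupling concrete, the sketch below integrates a pair of logistic populations whose mutual (here beneficial) interaction enters through a sqrt(x)*sqrt(y) term, as described above. The functional form and every parameter value are assumptions consistent with this verbal description, not the paper's equations (symb) or its parameter choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 10.0      # growth rate and carrying capacity of population x (assumed)
s, H = 0.8, 8.0       # growth rate and carrying capacity of population y (assumed)
a, b = 0.3, 0.2       # herd-herd interaction (benefit) rates (assumed)

def symbiotic(t, z):
    x, y = np.maximum(z, 0.0)            # keep the square roots well defined
    coupling = np.sqrt(x) * np.sqrt(y)   # only individuals on the herd edges interact
    return [r * x * (1 - x / K) + a * coupling,
            s * y * (1 - y / H) + b * coupling]

sol = solve_ivp(symbiotic, (0, 60), [1.0, 1.0], rtol=1e-8)
# Both populations settle above their own carrying capacities, as expected for symbiosis.
print("state at t = 60 (close to the coexistence equilibrium):", np.round(sol.y[:, -1], 3))
```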
in all the models that follow ,the parameters bear the following meaning .the parameter is the growth rate of the population , with or being its environment s carrying capacity , while , when meaningful , denotes the carrying capacity for the s .further , for the latter is the natural death rate when the s are interpreted as predators , models ( [ mod1 ] ) and ( [ mod2 ] ) , while it is once more a reproduction rate otherwise , i.e. in the symbiotic ( [ symb ] ) and competing ( [ comp ] ) cases .interaction rates between the two populations are denoted by parameter for the population and by for the s .the following systems will be considered , in which all the parameters are assumed to be nonnegative .the symbiotic situation the predator - prey interactions of pack - individualistic type , for a specialized predator the pack predation - herd defense system , for a specialized predator and finally the competing case note that when considering predator - prey interactions we need to impose that , since not the whole prey is converted in food for the predators . as remarked in , singularities could arise in the systems jacobians when one or both populations vanish . for the models ( [ symb ] ) and ( [ comp ] ) we define new variables as follows as well as new adimensionalized parameters therefore the adimensionalized systems read , for ( [ symb ] ) while for ( [ comp ] ) we find for the predator - prey cases( [ mod1 ] ) and ( [ mod2 ] ) the substitutions differ slightly .rescaling for the model ( [ mod1 ] ) is obtained through and defining the new parameters the adimensionalized system can be written as while in absence of predators , the system reduces just to the first equation . in this case , easily , the prey follow a logistic growth , toward the adimensionalized carrying capacity . for ( [ mod2 ] )we have instead define now the adimensionalized parameters the adimensionalized system for becomes note that all the new adimensionalized parameters are combinations of the old nonnegative parameters , , , , ; as a consequence , they must be nonnegative as well . for the later analysis of the equilibria stability it is imperative to consider the jacobians of these systems .we find the following matrices respectively , for ( [ adim ] ) and for ( [ adimc ] ) considering the predator - prey cases , for ( [ mod1s ] ) the jacobian is while the one for ( [ mod2s ] ) reads with this `` geometric '' expression we denote the equilibria in which at least one population vanishes .in fact , they lie on the boundary of the feasible region of the phase plane , the first quadrant .they need a special care in these kinds of group behavior models , because in eliminating the singularity we divide by and .therefore all the simplified models ( [ adim])-([mod2s ] ) hold for strictly positive populations .if one population vanishes , no information can gathered by the latter , we rather have to turn to the original formulations ( [ symb])-([comp ] ) .if one of the two populations disappears the system reduces to one equation . in this circumstancethe surviving population follows a logistic growth toward its own carrying capacity for the models ( [ symb ] ) and ( [ comp ] ) . the same occurs for the prey in absence of predators in models ( [ mod1 ] ) and ( [ mod2 ] ) . 
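A short symbolic computation makes the structure of these Jacobians, and the square-root singularity that appears when a population vanishes, explicit. The rescaled system below is an assumed canonical form of the symbiotic case (logistic growth plus a sqrt(x y) coupling); the paper's exact adimensionalized equations are not reproduced in the text, so the symbols here are illustrative.

```python
import sympy as sp

x, y, a, b, c = sp.symbols("x y a b c", positive=True)

# Assumed adimensionalised symbiotic system with herd-herd coupling.
f = x * (1 - x) + a * sp.sqrt(x * y)
g = b * y * (1 - y) + c * sp.sqrt(x * y)

J = sp.Matrix([f, g]).jacobian([x, y])
sp.pprint(sp.simplify(J))

# The off-diagonal entries behave like 1/sqrt(y) (resp. 1/sqrt(x)), so the
# Jacobian is singular on the coordinate axes; this is why the equilibria with
# a vanishing population must be analysed from the original equations.
print(sp.limit(J[0, 1], y, 0, dir="+"))
```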
in these models when prey are absent, the predators can not survive .in fact when the equation for the predators shows that they exponentially decay to zero .this makes sense biologically , since these are specialistic predators .thus in these two models the disappearance of both populations is a possibility . more generally , the equilibrium corresponding to populations collapse is the origin .its stability can be analysed by a simple linearization of the govering equations . for ( [ symb ] )we find thus symbiotic populations can not both vanish . for competition , ( [ comp ] ), there is a sign change , for which in this case the two populations may disappear , when for the predator - prey cases , ( [ mod1 ] ) leads to so that again the equilibrium is unstable . in the case ( [ mod2 ] )instead we find here again both populations under unfavorable circumstances may well disappear , and this happens when view of the preliminary results of section [ sec : b_e ] here as well as in what follows , we investigate the coexistence of both populations for the simplified models .looking for the coexistence equilibria we are led to the ninth degree equation =0\ ] ] which does not lead to significant results .however , we then consider a graphical analysis of the system of equations originated by ( [ adim ] ) .a typical situation is shown in figure [ s_phase_whole ] for an arbitrary choice of the parameter values . ) .the nullcline corresponds to the blue continuous curve , conversely the nullcline corresponds to the red dashed function .the phase plane of interest is obviously only the set .the figure is obtained for the following parameter values , , , , , , , , .,width=377 ] the internal equilibrium is unique and always feasible . * proof*. as it can be seen , for this parameters choice , all nine roots of the system are real . for other situations ,some of the intersections in the second and fourth quadrant may disappear .but we are only interested in nonnegative populations and . in view of the behavior of the cubic functions, there will always exist a real intersection between these two functions in the first quadrant .moreover , this intersection is unique , leading to the coexistence equilibrium .no hopf bifurcations can arise at the coexistence equilibrium .* proof*. to have hopf bifurcations , we need purely imaginary eigenvalues .this occurs when the trace of the jacobian vanishes and simultaneously the determinant is positive , i.e. it can be easily seen that solving for from the first condition and substituting into the second one , we find which is a contradiction .the trajectories of the system ( [ adim ] ) are ultimately bounded . is globally asympotically stable .* proof*. we follow and just outline the proof .it is enough to take a large enough box in the first quadrant that contains the coexistence equilibrium . on the vertical and on the horizontal sidesit is easy to show that the dynamical system s flow enters into the box .the axes can not be crossed , on biological grounds .mathematically however , the square root singularity prevents the right hand side of the dynamical system to be lipschitz continuous when the corresponding population vanishes , so that the assumption for the uniqueness theorem fails on the axes .but as mentioned in the model formulation , we understand that the differential equations hold only in the interior of the first quadrant , on the coordinate axes they are replaced by corresponding equations in which the vanishing population is removed . 
thus is a positively invariant set , from which the first claim follows . by the poincar - bendixson theorem , since there are no limit cycles , the coexistence equilibrium must be globally asymptotically stable .the results of the classical case , are summarized in .extensions of classical symbiotic systems have been recently investigated , to models incorporating diseases , or to food chains , . in short , the three equilibria in which at least one population vanishes are unstable , , and .the coexistence equilibrium is unconditionally stable when feasible , i.e. . note that if is infeasible the trajectories are unbounded , which is biologically scarcely possible in view of the environment s limited resources .we now compare the classical model with ( [ adim ] ) in order to understand how socialization may boost the mutual benefit of the system s populations .the symbiotic model ( [ adim ] ) has always a stable coexistence equilibrium , while in the classical model could be infeasible .considering only parameters choices where is feasible , we compare the resulting populations levels for ( [ adim ] ) and the classical model . taking for both cases , , , , , and , the behaviors are shown in figure [ compare ] .starting from the same initial conditions , different equilibria are reached . , , , , , and .trajectories originate from the same initial condition .the full green dots represent the final equilibrium values . rescaled parameter values are , , . ] , , , , , and .trajectories originate from the same initial condition .the full green dots represent the final equilibrium values . rescaled parameter values are , , . ]clearly the population level is higher in the classical model .the numerical values we obtained are , for the herd model and , for the classical model .this makes sense , since in symbiotic models the benefit comes from the mutual interactions between populations . if the latter are scattered in the environment it is more likely for each individual of one population to get in contact with one of the other . on the other hand , when herd behaviour is exhibited, only individuals on the outskirts interact with the other population and as a consequence the innermost individuals receive less benefit since they hardly have the chance to meet the other population .here , we let represent the density of the predators and denote the prey population . we consider now ( [ mod1 ] ) in the adimensionalized form ( [ mod1s ] ) . we can immediately show that the trajectories are bounded .all populations in ( [ mod1s ] ) are bounded .* proof*. introducing the environment total population , and summing the equations in ( [ mod1s ] ) , we have take the maximum of the parabola in on the right hand side , to obtain the above differential inequality leads to because the total population is bounded , also each individual population and is bounded as well .here the coexistence equilibrium is always feasible , the coexistence equilibrium is always locally asymptotically stable . * proof*. if denotes the jacobian matrix ( [ j_pp1 ] ) evaluated at , the routh - hurwitz criterion gives both conditions hold so that the eigenvalues have negative real part and is always a stable equilibrium .the phase plane picture also supports this conclusion as well , figure [ phase1 ] . ) with parameters values , , , , , , .,width=302 ] the coexistence equilibrium is also globally asymptotically stable .* proof*. 
it follows the outline of proposition 3 .more formally , consider the point , with , on the isocline through the origin in figure [ phase1 ] .the compact set , identified by the rectangle having and the origin as opposite vertices , is positively invariant . on its right vertical sideindeed we have and the system s trajectories must enter into from the right . on the upper side , for , we have here the trajectories of ( [ mod1s ] ) enter into from above . by the poincar - bendixson theorem, global stability follows .note that hopf bifurcations can not arise here , since is a strict inequality .we focus now on ( [ mod2s ] ) .all populations in ( [ mod2s ] ) are bounded .* proof*. the steps are the same as for proposition 4 , with minor changes .the differential inequality here becomes where the last estimate follows by taking the maximum of the cubic in .we then find providing a bound on both populations as well as on each subpopulation .the coexistence equilibrium is feasible for figures [ phase2 ] and [ phase22 ] illustrate geometrically the two situations in which is feasible and when it is infeasible . ) with , is infeasible .parameter values : , , , , , , . ] ) with , is infeasible .parameter values : , , , , , , . ] recalling that in the case of ( [ mod2 ] ) the origin might be stable , ( [ pp_disapp ] ) , and that when ( [ 2e2_feas ] ) becomes an equality the coexistence equilibrium vanishes , we have the following result .there is a transcritical bifurcation for which emanates from the origin when the parameter raises up to attain the critical value .* proof*. the characteristic polynomial at the origin is the routh - hurwitz stability conditions then become the second claim follows comparing the first inequality in ( [ 2e0_stab ] ) with ( [ 2e2_feas ] ) .in fact , at the origin becomes unstable , while instead becomes feasible .coexistence for the system ( [ mod2s ] ) is a locally asymptotically stable equilibrium either if and ( [ tr ] ) holds ; or if and ( [ tr ] ) holds .but if we find that ( [ tr ] ) is not true and is unstable .* proof*. let the jacobian evaluated at be denoted by .the routh - hurwitz conditions are now , which always holds by the feasibility condition ( [ 2e2_feas ] ) , and there are a few different situations for ( [ tr ] ) , represented in figure [ fig - stab ]. parameter space in which the coexistence equilibrium of ( [ mod2s ] ) is stable.,width=302 ] when locally asymptotically stable , the equilibrium is also globally asymptotically stable .* proof*. proceeding as for ( [ mod1s ] ) , we take the point with the rectangle with the origin and as opposite vertices is a positively invariant set .recall that the part of the horizontal sides of interest here is . on the right vertical and upper horizontal sides of indeed , we have all trajectories thus enter into .the only locally asymptotically stable equilibrium in its interior must also be globally asymptotically stable by the poincar - bendixson theorem .we summarize the equilibria of system ( [ mod2s ] ) in the following table .lccc parameter conditions & & & bifurcation + & stable & unfeasible + & & & transcritical + & unstable & stable + & unstable & stable + & & & hopf + & unstable & unstable + & unstable & unfeasible + the system ( [ mod2s ] ) admits a hopf bifurcation at the coexistence equilibrium when the bifurcation parameter crosses the critical value * proof*. in addition to the transcritical of proposition 8 , we show now that special parameters combinations originate hopf bifurcations near . 
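Before the argument is completed below, a quick numerical scan can illustrate where such a threshold sits. The system used here is an assumed canonical form of the pack-predation and herd-defence model, u' = u(1-u) - sqrt(u v), v' = -d v + c sqrt(u v), whose interior equilibrium has a simple closed form; it is meant only to show the mechanism (the trace of the Jacobian changing sign while the determinant stays positive), not to reproduce the paper's rescaled equations or its critical parameter value.

```python
import numpy as np

# Assumed canonical pack-predation / herd-defence form (sqrt coupling in both terms):
#     u' = u(1 - u) - sqrt(u v),      v' = -d v + c sqrt(u v)
# Interior equilibrium of this form: u* = 1 - c/d, v* = (c/d)**2 * u*, feasible for c < d.
d = 0.5

def jacobian_at_equilibrium(c):
    u = 1.0 - c / d
    r = c / d                         # equals sqrt(v*/u*) at the equilibrium
    return np.array([[1 - 2 * u - r / 2, -1 / (2 * r)],
                     [c * r / 2,         -d / 2]])

for c in np.linspace(0.05, 0.49, 12):
    J = jacobian_at_equilibrium(c)
    tr, det = np.trace(J), np.linalg.det(J)
    label = "unstable (past the Hopf threshold)" if tr > 0 and det > 0 else "locally stable"
    print(f"c = {c:.3f}:  trace = {tr:+.4f},  det = {det:+.4f}  ->  {label}")
```

For this illustrative form the trace changes sign between c = 0.41 and c = 0.45, and just above the threshold sustained oscillations of the kind shown in figure [ fig - cycles ] can appear.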
recall that purely imaginary eigenvalues are needed , and this occurs when the trace of the jacobian vanishes .thus ( [ tr ] ) must become an equality and the constant term is positive , .but the latter holds from ( [ 2e2_feas ] ) . this resultis observed in figure [ fig - stab ] , where the thick straight line indicates the critical parameter values .figure [ fig - cycles ] shows the limit cycles for the dimensionalized model ( [ mod2 ] ) , letting the simulation run for long times to show that the oscillations are indeed persistent . ) ; right : corresponding limit cycle in the phase plane .the original parameter values are , , , , , with coexistence equilibrium ; they correspond to , in the rescaled model ( [ mod2s]).,title="fig:",width=264 ] ) ; right : corresponding limit cycle in the phase plane .the original parameter values are , , , , , with coexistence equilibrium ; they correspond to , in the rescaled model ( [ mod2s]).,title="fig:",width=264 ] the system ( [ mod2 ] ) admits trajectories for which the prey go to extinction in finite time , if their initial conditions lie in the set * proof*. we follow with suitable modifications the argument exposed in . from the second equation in ( [ mod2 ] )we get the differential inequality from which , where the latter function denotes the solution of the differential equation corresponding to ( [ d_ineqp ] ) , with . from the first equation in ( [ mod2 ] )we have further let denote the solution of the differential equation obtained from ( [ d_ineqq ] ) using the rightmost term , with .it follows that . using the integrating factor , we obtain .\ ] ] the last term on the right is an increasing function of , so that there is a for which if and only if since , we have if the following inequality for the initial conditions of the trajectories is satisfied , from which the set given in ( [ d_ineqp ] ) is immediately obtained . in order to compare these results quantitatively ,we consider also the classical model with logistic correction .if we rescale it , however , since it does not contain the square root terms , we would find a different adimensionalization , rendering the comparison difficult .thus we rather return to the original formulations ( [ mod1 ] ) and ( [ mod2 ] ) . the classical lotka - volterra model with logistic correction for the prey, has two attainable equilibria : the prey - only equilibrium , from which via a transcritical bifurcation at coexistence arises , feasible for and always stable .its explicit representation follows , together with the one of coexistence for the system with individualistic hunting and prey herd response , , in dimensionalized form , the dimensional form of the coexistence equilibria of the two models presented here , respectively for ( [ mod1 ] ) and ( [ mod2 ] ) are the equilibrium prey populations of the first two models depend only on the system parameters and , i.e. the predators mortality and predation efficiency .thus they are independent of their own reproductive capabilities and of the environment carrying capacity .further , when the predators hunting efficiency is larger than the predators own mortality , i.e. , the equilibrium prey value is much lower if they gather in herds , i.e. 
in , while on the contrary the predators attain instead higher values , again at .conversely , when the prey grouping together , , allows higher equilibrium numbers than for their individualistic behavior ; the predators instead settle at lower values if the prey use a defensive strategy , , and higher ones with individualistic prey behavior , at . for ( [ mod1 ] ) and ( [ mod2 ] ) , i.e. with coordinated hunting, these values involve also the prey own intrinsic characteristics .in particular for ( [ mod2 ] ) the ratio of the predators hunting efficiency versus their mortality determines if the predators at equilibrium will be more than the prey , see .a similar result possibly extends for the model of pack hunting coupled with loose prey , ( [ mod1 ] ) , but at the predators population at equilibrium contains the prey population squared and in principle the latter may not exceed , so that the conclusion would not be immediate . indeed , at the equilibria and , the prey populations are the multiplication of the fractions in the brackets , always smaller than , by the carrying capacity , which may or not be large .the result could indeed give a population smaller than .this in principle is not a contradiction , because the population need not necessarily be counted by individuals , but rather its size could be measured by the weight of its biomass .we now deal with ( [ comp ] ) in the rescaled version ( [ adimc ] ) .the coexistence equilibria are the roots of the eighth degree equation a better interpretation treats the problem as an intersection of cubic functions , }(x)=b(1-x^2)x , \quad x_{[2]}(y)=\frac{c}{a}(1-y^2)y.\ ] ] depending on the behavior of the cubic functions , there could be either three intersections ( the origin and one each in the second and fourth quadrants ) or five ( the previous ones and one more in the first and third quadrants ) , or nine .the latter configuration is graphically shown in figure [ c_phase_whole ] .the feasible coexistence equilibria are just the intersections in the first quadrant .note that no intersections in the first quadrant exist when the slopes at the origin of the two cubic functions ( [ cubic_c ] ) satisfy the inequality }'(0)<y_{[2]}'(0) ] .this condition , rephrased in terms of the parameters , becomes ) for the functions }(x) ] .parameter values : , , , , , , , , .,scaledwidth=48.0% ] thus , for there is at most one real positive root , the one corresponding to the intersection in the fourth quadrant , that is however not feasible , see the left frame in figure [ scenario ] . thus no coexistence equilibrium arises .to better analyse the situation , we apply descartes rule of signs to ( [ 8th_deg_c ] ) .there are three sign variations , since the first four coefficients have alternating signs. the last one must be positive , because having already ruled out the case ( [ a_bc ] ) , we are left with .descartes rule shows that in this case there are at most 4 real positive roots .recall that these roots correspond to the abscissae of the intersections of the curves ( [ cubic_c ] ) . as discussed above we know that one positive root corresponds to the intersection that always exists in the fourth quadrant , figure [ c_phase_whole ] .this root must then be excluded . 
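The intersection counting can be checked numerically directly from the two cubics quoted above. In the sketch below the parameters are chosen (illustratively, not as in the paper's figures) so that both cubics have their local maxima above 1, which is the sufficient condition for three interior intersections discussed immediately below.

```python
import numpy as np
from scipy.optimize import fsolve

# Interior intersections of the two cubic nullclines quoted in the text,
#     y = b (1 - x^2) x     and     x = (c/a) (1 - y^2) y.
# Illustrative parameters with both cubic maxima above 1 (three intersections);
# these are not the parameter values used in the paper's figures.
a, b, c = 1.0, 3.0, 3.0

def nullclines(z):
    x, y = z
    return [y - b * (1 - x ** 2) * x,
            x - (c / a) * (1 - y ** 2) * y]

# Prints roughly (0.36, 0.93), (0.82, 0.82) and (0.93, 0.36).
for guess in [(0.35, 0.9), (0.8, 0.8), (0.9, 0.35)]:
    x, y = fsolve(nullclines, guess)
    print(f"interior intersection:  x = {x:.4f},  y = {y:.4f}")
```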
as a consequence in this case we have just one or three coexistence equilibria , see the center and right frames in figure [ scenario ] .sufficient conditions for three versus one equilibria to exist is that the cubic functions ( [ cubic_c ] ) have maximum -coordinate and -coordinate respectively in the first quadrant greater than 1 .this happens when both the following conditions hold the three possible situations are shown in figure [ scenario ] ., we show here the coexistence equilibria possible scenarios : left no feasible equilibrium exists for the parameter values , , , , , , , , ; center and , just one feasible equilibrium , for the parameter values , , , , , , , , ; right and , for the parameter values , , , , , , , , .the three equilibria , and are ordered left to right , for increasing values of their abscissae.,title="fig:",scaledwidth=110.0% ] [ 0equilibrium ] , we show here the coexistence equilibria possible scenarios : left no feasible equilibrium exists for the parameter values , , , , , , , , ; center and , just one feasible equilibrium , for the parameter values , , , , , , , , ; right and , for the parameter values , , , , , , , , .the three equilibria , and are ordered left to right , for increasing values of their abscissae.,title="fig:",scaledwidth=110.0% ] [ 1equilibrium ] , we show here the coexistence equilibria possible scenarios : left no feasible equilibrium exists for the parameter values , , , , , , , , ; center and , just one feasible equilibrium , for the parameter values , , , , , , , , ; right and , for the parameter values , , , , , , , , .the three equilibria , and are ordered left to right , for increasing values of their abscissae.,title="fig:",scaledwidth=110.0% ] [ 3equilibria ] in summary we have the following result . if no feasible coexistence equilibria exist .if at least one feasible equilibrium exists , .further , in such case , and are sufficient conditions for three equilibria to exist , i.e. , and , ordered for increasing values of their abscissae .the trajectories of the system ( [ adimc ] ) are ultimately bounded .* proof*. observe that decreases when and similarly decreases for .this in the phase plane corresponds to having the flow entering a suitable box with one corner in the origin and the opposite one of size large enough to contain the vertices of the cubics in all cases of figure [ scenario ] .thus we can take , , where and denote respectively the relative maxima heights of the cubics .the equilibria for which either one of the conditions hold are unstable .* proof*. if both ( [ p13_assump ] ) hold , the first routh - hurwitz condition applied to ( [ j_comp ] ) is but for the assumptions ( [ p13_assump ] ) it can not be satisfied .if only one of ( [ p13_assump ] ) is satisfied , say the first one , from the condition on the trace we obtain and substituting into the determinant , we have the estimate so that the second routh - hurwitz condition is not satisfied . 
hence the claim .considering figure [ scenario ] , in the case of just one equilibrium , it must have at least one coordinate to the left ( or below ) the one of the local maximum of the function .in the plot , it has the abscissa smaller than the one of the local maximum of the function }(x) ] is negative at .hence for the abscissa of we must have .similarly , using the slope of }(y)$ ] at .it follows that , .thus in turn .since we are in the case , follows .there is thus a subcritical pitchfork bifurcation for which from the unstable three equilibria arise , with the equilibrium becoming stable and the other ones being unstable .no hopf bifurcations arise in this model as they do not in the symbiotic one . using the same technique as in the proof of proposition 14, the condition on the trace becomes an equality , so that by solving it for we get .substituting into the second routh - hurwitz condition , we obtain the contradiction . in figure [fig : tri ] we show the behavior of the two populations in the three possible cases . , , , , , , , , .center : bistability and competitive exclusion , only one population survives ; achieved with parameter values , , , , , , , , .right : tristability , either one population only survives , or the other one , or both together ; achieved with parameter values , , , , , , , , .the green full dots represent the stable equilibria , the empty red circles are instead the initial conditions.,title="fig:",scaledwidth=110.0% ] [ origin ] , , , , , , , , . center : bistability and competitive exclusion , only one population survives ; achieved with parameter values , , , , , , , , .right : tristability , either one population only survives , or the other one , or both together ; achieved with parameter values , , , , , , , , .the green full dots represent the stable equilibria , the empty red circles are instead the initial conditions.,title="fig:",scaledwidth=110.0% ] [ bistab ] , , , , , , , , . center : bistability and competitive exclusion , only one population survives ; achieved with parameter values , , , , , , , , .right : tristability , either one population only survives , or the other one , or both together ; achieved with parameter values , , , , , , , , .the green full dots represent the stable equilibria , the empty red circles are instead the initial conditions.,title="fig:",scaledwidth=110.0% ] [ tristab ] the classical competition model , shows under suitable circumstances the competitive exclusion principle .thus , only one population survives , while the other one is wiped out .the system s outcome depends only on its initial conditions , so that if the system has population values lying in the attracting set of one of the equilibria , the dynamics will be drawn to it unless the environmental conditions , i.e. the parameters in the model , abruptely change . instead, we have found here that in presence of community behavior of both populations , the same occurs , but there is another possibility , namely tristability . 
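The tristability itself can be displayed by integrating an assumed adimensionalised competition system with the square-root coupling from initial conditions lying in different basins of attraction. The symmetric form and the parameter value below are illustrative choices, selected so that three attractors coexist; they are not the equations or parameter values behind the figures in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed symmetric competition system with herd-herd coupling:
#     u' = u(1 - u) - a sqrt(u v),     v' = v(1 - v) - a sqrt(u v)
a = 1.0 / 3.0

def competition(t, z):
    u, v = np.maximum(z, 0.0)            # keep the square root defined near the axes
    inter = a * np.sqrt(u * v)
    return [u * (1 - u) - inter, v * (1 - v) - inter]

# One initial condition per basin: coexistence, u-only survival, v-only survival.
for z0 in [(0.5, 0.5), (0.9, 0.05), (0.05, 0.9)]:
    sol = solve_ivp(competition, (0, 80), z0, rtol=1e-8, atol=1e-10)
    print(f"start {z0}  ->  final state {np.round(sol.y[:, -1], 3)}")
# The three runs end near (0.667, 0.667), (1, 0) and (0, 1): three coexisting attractors.
```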
when the conditions arise , the coexistence equilibrium may be present together with the equilibria in which one population vanishes .therefore the system s outcome is once more determined by the initial conditions , but this time the phase plane is partitioned into three basins of attractions , corresponding each to one of the possible equilibria .it would be interesting to compute explicitly the boundary of each one of them .for this task an extension of the algorithms presented in would be needed .we now compare the population levels when a coexistence equilibrium is stable in both classical and new model .considering the parameters , , , , and , with suitable initial conditions , the behavior of the two models is shown in figure [ compare_competition ] . from the same initial conditions , trajectories of the two models evolve toward different equilibria . , , , , , , , , .the full green dots represent the equilibrium points . ] , , , , , , , , .the full green dots represent the equilibrium points . ]the population levels are thus higher in the herd model , at and while for the classic model we find and .this is not surprising for the same reasons for which the opposite behavior occurs in the symbiotic models . in herd models ,only individuals at the outskirts meet individual of the other species .this means that individuals at the centre of the flock here receive less harm from the competition . on the contrary , in the classic model, individuals of the two populations are mixed together , so that the whole populations are harmed by the competition .we have presented four models for non - classical population interactions , in that the populations involved in some way exhibit a socialized way of living .this investigation completes the one undertaken in , in that all the situations that are possible in terms of individualistic or gathering populations behavior are now analysed .the models missing in are presented here : we allow predators to hunt in packs , as well as both intermingling populations to gather together , in the two cases of symbiosis and competition , so that they interact not on an individualistic basis , but rather is some coordinate fashion .the newly introduced symbiotic model on a qualitative basis behaves like the classical one .the populations settle always at the coexistence equilibrium .only , their levels are quantitatively smaller than in the classical case since the mutually beneficial interactions in the new model are somewhat reduced . for predator - prey interactions in presence of predators pack hunting , we may have the prey behave in herds or individualistically . the most prominent discrepancy between these two casesis the fact that both populations may disappear , under specific unfortunate conditions , when the prey use a defensive coordinate strategy .this does not happen instead if they move loose in the environment , i.e. exhibit individualistic behavior , since they attain a coexistence equilibrium .this finding is quite counterintuitive , because it could imply that the defensive mechanism is ineffective . butan interpretation could be provided , since herds are more easily spotted by predators than individuals who can more easily hide in the terrain configuration .once the prey herds are completely wiped out , the predators also will disappear , since they are assumed not to be generalist , i.e. 
their only food source is the prey under consideration .ecosystem extinction has also been rarely observed in the model without pack predation , .the system with prey herd behavior also shows limit cycles , i.e. the populations can coexist also through persistent oscillations , not only at a stable equilibrium , which instead is the only possible system s outcome for the model with individualistic prey .a similar result had been discovered earlier in case of individualistic predators hunting , , constituting the major difference between the prey group defense model with uncoordinated predation and the classical predator - prey system . finally , on the quantitative side ,the coexistence population values for these two models with pack hunting differ , but without specific informations on the parameter values it is not possible to assess which system will provide higher population values .the competition system presented here allows again the extinction of both populations , under unfavorable circumstances , while this never happens for the classical model .ecosystem disappearance occurs when ( [ comp_disapp ] ) holds , a condition that in the nondimensional model is equivalent to , as stated in proposition 12 .when the competition system thrives , it does at higher levels for both populations than those achieved in the classical model .thus in this case populations coordinated behavior boosts their respective sizes , in case the system parameters are in the range for which coexistence occurs .but the major finding in this context of social behavior among all possible populations behavior is found for the competition case .indeed the system in suitable conditions can show the phenomenon of competitive exclusion as the classical model does , but in addition we have discovered that both populations can thrive , together with the situations predicted by the competitive exclusion principle .in other words , we have found that the rather simple model ( [ comp ] ) or ( [ adimc ] ) may exhibit tristability , see once more the right picture in figure [ fig : tri ] .this appears to be a novel and quite interesting finding further characterizing the systems with socialized behaviors .the authors do not know of any other simple related model with such behavior .e. caccherano , s. chatterjee , l. costa giani , l. il grande , t. romano , g. visconti , e. venturino , _ models of symbiotic associations in food chains _ , in symbiosis : evolution , biology and ecological effects , a.f .camiso and c.c .pedroso ( editors ) , nova science publishers , hauppauge , ny , 189 - 234 , 2012 .e. cagliero , e. venturino , _ ecoepidemics with group defense and infected prey protected by herd _ , proceedings of the 12th international conference on computational and mathematical methods in science and engineering , cmmse 2012 , j. vigo - aguiar , a.p .buslaev , a. cordero , m. demiralp , i.p .hamilton , e. jeannot , v.v .kozlov , m.t .monteiro , j.j .moreno , j.c .reboredo , p. schwerdtfeger , n. stollenwerk , j.r .torregrosa , e. venturino , j. whiteman ( editors ) 1 ( 2012 ) 247 - 266 .r. cavoretto , s. chaudhuri , a. de rossi , e. menduni , f. moretti , m.c .rodi , e. venturino , _ approximation of dynamical system s separatrix curves _ , numerical analysis and applied mathematics icnaam 2011 , t. simos , g. psihoylos , ch .tsitouras , z. anastassi ( editors ) , aip conf . proc .1389 , 1220 - 1223 ( 2011 ) ; doi : 10.1063/1.3637836 .r. cavoretto , a. de rossi , e. perracchione , e. 
venturino , _ reconstruction of separatrix curves and surfaces in squirrels competition models with niche _ , proceedings of the 2013 international conference on computational and mathematical methods in science and engineering , i.p .hamilton , j. vigo - aguiar , h. hadeli , p. alonso , m.t .de bustos , m. demiralp , j.a .ferreira , a.q.m .khaliq , j.a .lpez - ramos , p. oliveira , j.c .reboredo , m. van daele , e. venturino , j. whiteman , b. wade ( editors ) almeria , spain , june 24th-27th , 2013 , v. 3 , p. 400- 411 .m. haque , e. venturino , _ mathematical models of diseases spreading in symbiotic communities _ , in j.d .harris , p.l .brown ( editors ) , wildlife : destruction , conservation and biodiversity , nova science publishers , new york , 2009 , 135 - 179 .
Models of coordinated behavior of populations living in the same environment are introduced for the cases in which they compete with each other, in which both gain from their mutual interactions, and in which one hunts the other. The equilibria of the systems are analysed, showing that in some cases both populations may disappear. Coexistence leads to global asymptotic stability for symbiotic populations, and to Hopf bifurcations for the predator-prey systems. Finally, a new and rather interesting phenomenon is discovered in one of these simple models: the competition case may exhibit tristability, in which competitive exclusion can occur together with population coexistence. *Keywords*: predator-prey, symbiosis, competition, group gathering, tristability, ecosystems. *AMS subject classification*: 92D25, 92D40
the statistical scattering of waves through open chaotic cavities has been of great interest to many groups along the years .the investigations that have been carried out are relevant to a variety of problems , like the electronic transport through ballistic quantum dots , or the scattering of classical waves ( e.g. , electromagnetic or elastic waves ) in chaotic billiards .the approach provided by random - matrix theory has been particularly fruitful in the study of the statistical fluctuations of transmission and reflection of waves by a number of systems , including billiards with a chaotic classical dynamics . within this approachwe wish to focus our attention on the model of refs .[ ] , which was introduced originally in the context of nuclear physics and was then applied to the domain of chaotic cavities .we recall that , very generally , we can describe a scattering process in terms of a scattering matrix . inthe model referred to above , the statistical features of the problem are represented by a measure in -matrix space which , through the assumption of ergodicity " , gives the probability of finding in a given volume element as the energy changes and wanders through that space .the problem is , of course , to find that measure .the key assumption is made that in the scattering process two distinct time scales occur , associated , respectively , with a prompt , or direct , response due to the presence of short paths , and a delayed , or equilibrated , response due to very long paths .it turns out that the prompt , or direct , processes can be expressed in terms of the energy average of , , also known as the _ optical _ matrix .the statistical distribution of the scattering matrix is then constructed through a maximum - entropy ansatz " , assuming that it depends parametrically solely on the optical matrix .the notion of ergodicity , which allows replacing energy averages by ensemble averages , e.g. , , is essential to the argument .the statistical properties of the conductance predicted by the maximum - entropy model we just described have been studied in the past ; these predictions have been also compared with the results of computer simulations which consist in solving the scalar schrdinger equation numerically for a number of structures .although in those structures the two time scales referred to above were not as well separated as in nuclear physics problems , they seemed to us to be sufficiently distinct to allow a meaningful description .it is the purpose of the present article to investigate further the validity of the maximum - entropy model , by extending our earlier studies in the following three ways .first , we wish to provide further predictions of our approach for other physical quantities in addition to the conductance . 
for this purposewe analyze the zero - frequency limit of the shot - noise power spectrum at zero temperature .for one open channel ( ) we show that the problem can be reduced to quadratures and , in a number of cases , we can even study analytically the influence of direct processes on the average , , of the zero - frequency shot - noise power spectrum over an ensemble of cavities .for an arbitrary number of channels , on the other hand , we show that can be evaluated analytically when direct processes are absent [ .second , we wish to extend the computer simulations mentioned above in a number of ways : \i ) in some of the cavities used in the present paper the short paths consist of whispering gallery modes ( wgm ) , which were excluded in refs . [ ] by the type of cavities that were used and the way the leads were attached .it is their effect that we wish to describe in terms of the optical matrix which , as we said , is precisely a measure of the short - time processes occurring in the scattering problem .information on the time scales involved could be provided by an analysis of the structure of in the complex - energy plane .although we do not have direct access to the poles of the matrix , the complex eigenvalues of the so - called effective hamiltonian " ( which essentially consists of the hamiltonian of the closed cavity plus the coupling to the continuum ) give evidence of a sea " of fine - structure , long - lived , resonances , plus a collection of shorter lived , more widely separated states .this evidence is indicated in the present paper and studied in detail in refs .\ii ) earlier numerical simulations were performed for cavities with an applied magnetic field ( the unitary universality class characterized by the dyson parameter ) , in the presence of direct processes and for one channel ( ) .the present simulations are performed for cavities with time - reversal invariance ( the orthogonal universality class , characterized by the dyson parameter ) , also in the presence of direct processes and for one ( ) and two ( ) open channels .third , we shall pay closer attention to the discrepancies between theory and numerical experiments .indeed , discrepancies similar to the ones that we shall observe in this paper were already present , to a certain extent , in ref .[ ] , but were overlooked at that time . the paper is organized as follows . in the next sectionwe first give a brief presentation of the maximum - entropy model , recalling the assumptions that are used in its derivation ; these considerations will be important in the discussion to be presented in sec .[ discussion ] .we then study a number of predictions of the model with regards to the statistical properties of the conductance and the shot - noise power spectrum at zero temperature . 
in sec .[ num ] we present the results of the numerical simulations and the comparison with theory .[ num n1 ] is devoted to the one - channel case ( ) and sec .[ num n2 ] to two channels ( ) .finally , we discuss our results in sec .[ discussion ] , putting particular emphasis on the discrepancies found between theory and numerical simulations .we include an appendix , where some of the algebraic details of the relevant one- and two - channel statistical distributions are given .we present below the main ideas behind the maximum - entropy model briefly described in the introduction .this model was introduced in the past in the domain of nuclear physics and was later used to study the quantum mechanical scattering occurring inside ballistic cavities ( whose classical dynamics is chaotic ) connected to the outside by means of waveguides .the scattering problem can be described in terms of a scattering matrix . if the cavity is connected to two waveguides supporting channels each , the dimensionality of the matrix is . as we mentioned in sec .[ intro ] , the model proposes a measure in -matrix space which , through the assumption of ergodicity , describes the probability of finding in a given volume element as the energy changes and wanders through that space .we write such a probability as where , referred to as the probability density , depends parametrically on the optical matrix , as detailed below . in the above equation , is the _ invariant measure _ for the universality class [ we shall assume throughout that . herewe shall consider the cases ( the orthogonal case ) and ( the unitary case ) , corresponding to cavities with and without time - reversal invariance , respectively , and in the absence of spin .the problem is , of course , to find .to this end , a number of assumptions are made , as we now explain ( see refs .\1 ) the study of the statistical properties of over an ensemble of cavities is simplified by idealizing , for real , as a _stationary random ( matrix ) function _ of satisfying the condition of _ergodicity_. \2 ) as explained in sec .[ intro ] , we assume that our scattering problem can be characterized in terms of two time scales , arising from the prompt and equilibrated components ; the prompt response can be described in terms of the averaged matrix , also known as the _ optical _ matrix .\3 ) we assume to be far from thresholds , so that , locally , is a meromorphic matrix function which is analytic in the upper half of the complex - energy plane and has resonance poles in the lower half plane . from thisfollow what we have called in the past the analitycity - ergodicity " ( ae ) properties : this expression involves , on its left - hand side , only matrix elements , whereas matrix elements are absent ; on the right - hand side , only the optical matrix appears .more generally , if is a function that can be expanded as a series of non - negative powers of the matrix elements , we must have the _ reproducing property _ one can then show that the probability density , known as _ poisson s kernel _, ^{(2\beta n+2-\beta ) /2 } } { |{\rm det}(i - s\left\langle s\right\rangle ^{\dag})|^{2\beta n+2-\beta } } , \label{poisson}\ ] ] is such that the average matrix is the optical matrix , the ae requirements ( [ ae ] ) and hence the reproducing property ( [ reprod ] ) are satisfied , and the entropy ] is greater than or equal to that of any other probability density satisfying the ae requirements for the same . 
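The statement that the ensemble average of S reproduces the optical matrix can be checked numerically by importance sampling: draw S from the invariant measure and reweight each sample by the kernel density. The sketch below does this for beta = 2 with one channel per lead (so S is 2 x 2), for an arbitrarily chosen diagonal optical matrix; the exponent 4 is the value of 2*beta*N + 2 - beta in this case, and the constant numerator of the kernel cancels in the normalized weights.

```python
import numpy as np
from scipy.stats import unitary_group

Sbar = np.diag([0.5, 0.2]).astype(complex)       # assumed optical matrix (illustrative)

samples = 20_000
S = unitary_group.rvs(2, size=samples)           # Haar (invariant-measure) 2x2 unitaries
weights = 1.0 / np.abs(np.linalg.det(np.eye(2) - S @ Sbar.conj().T)) ** 4
S_mean = np.tensordot(weights, S, axes=(0, 0)) / weights.sum()
print(np.round(S_mean, 3))                       # close to Sbar, up to Monte Carlo error
```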
with regards to the information - theoretic content of poisson s kernel , we have to distinguish between _ i ) _ _ general properties _ , like unitarity of the matrix ( flux conservation ) , analyticity of implied by causality , and the presence or absence of time - reversal invariance ( and spin - rotation symmetry when spin is taken into account ) which determines the universality class ( orthogonal , unitary or symplectic ) , and _ ii ) _ _ particular properties_ , parametrized by the ensemble average , which controls the presence of short - time processes .system - specific _ details other than the optical matrix are assumed to be irrelevant . _the optical matrix is the only physically relevant parameter " assumed in the model . from the probability distribution of eqs .( [ dp ] ) and ( [ poisson ] ) one can find the statistical properties of the quantities of interest over an ensemble of cavities .in this paper we shall be concerned with the conductance and the zero - frequency shot noise power spectrum .the dimensionless dc conductance [ at zero temperature and for the spinless case is given by landauer s formula where ( ) are the eigenvalues of the hermitean matrix , and the transmission matrix is an block of the -dimensional matrix which , in turn , is written as \ ; . \label{s}\ ] ] the zero - frequency limit of the shot - noise power spectrum at zero temperature can be expressed as the average of over an ensemble of cavities will be written in the two alternative ways : [ < p > ] here , is the result that would obtain if the noise were a poissonian process , i.e. , if there were no correlations among electrons and the electronic transport were completely random ; is the dimensionless conductance , eq . ( [ conductance ] ) .we see that since the shot - noise power is not determined simply by the conductance , it is only in the limit ( ) that we recover the poissonian result .it is clear that we need , for our purposes , the joint probability distribution of the s .this can be found from eq .( [ poisson ] ) as [ w(tau_a ) ] for and , respectively .the quantity is a normalization constant .the unitary matrices are the ones that occur in the polar decomposition of the matrix \left [ \begin{array}{cc } -\sqrt{1-\tau } & \sqrt{\tau}\\ \sqrt{\tau } & \sqrt{1-\tau } \end{array } \right ] \left [ \begin{array}{cc } v^{(3 ) } & 0 \\ 0 & v^{(4 ) } \end{array } \right ] \ ; , \label{s polar}\ ] ] where stands for the diagonal matrix constructed from the the eigenvalues ( ) of the hermitian matrix [ see eq .( [ s ] ) ] and the are arbitrary unitary matrices for , with the restrictions ^t ] for . in what follows we study , in particular , the cases in which the two waveguides connecting the cavity to the outside may support one , two , or an arbitrary number of open channels . in this casewe have only one , which coincides with the conductance , whose probability distribution can thus be written from eqs .( [ w(tau_a ) ] ) as [ w(g ) beta12 n=1 ] the polar representation of for is written down explicitly in eq .( [ s polar n=1 ] ) of the appendix . in the absence of direct processes ,i.e. , , the distribution of eqs .( [ w(g ) beta12 n=1 ] ) reduces to the well known results [ w(g ) n=1 s0 ] for the orthogonal ( ) and unitary ( ) cases , respectively .the distribution for the unitary case , eq .( [ w(g ) beta2 n=1 ] ) , can be integrated explicitly . 
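For the one-channel case just introduced, the predictions in the absence of direct processes can also be generated by direct sampling of the circular ensembles: CUE matrices for beta = 2, and S = U U^T with Haar-distributed U for the COE, beta = 1. The sketch below only checks the first moments of the conductance and of the shot-noise combination g(1-g) against the standard N = 1 results (w(g) = 1/(2 sqrt(g)) and <g> = 1/3 for beta = 1; w(g) = 1 and <g> = 1/2 for beta = 2).

```python
import numpy as np
from scipy.stats import unitary_group

samples = 50_000
for beta in (1, 2):
    U = unitary_group.rvs(2, size=samples)
    S = U if beta == 2 else U @ np.transpose(U, (0, 2, 1))   # COE: S = U U^T
    g = np.abs(S[:, 1, 0]) ** 2          # single transmission eigenvalue, g = tau
    p = g * (1 - g)                      # integrand of the shot-noise power
    print(f"beta = {beta}:  <g> = {g.mean():.3f},  <p> = {p.mean():.3f},  "
          f"<p>/<g> = {p.mean() / g.mean():.3f}")
```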
as an example, for the particular case , corresponding to direct reflection and no direct transmission , and assuming , for simplicity , the equivalent - channel " case ( ) , one finds ^{5/2 } } .\label{w(g ) beta2 n=1 anal}\end{aligned}\ ] ] for the case of direct transmission and no direct reflection , the result is obtained from the previous equation by replacing by and by .the distribution for the unitary case given in eq . ( [ w(g ) beta2 n=1 anal ] ) allows us to study the effect of direct processes on the averaged shot - noise power spectrum of eq .( [ < p > 2 ] ) ; this case is particularly suited to gain some physical insight , since the result for can be expressed analytically in a remarkably simple fashion . for the particular case of direct reflection and no direct transmission ( ) , and assuming , one finds , from eq .( [ w(g ) beta2 n=1 anal ] ) , the result : while for direct transmission and no direct reflection ( ) , and assuming , one obtains in fig .[ fanobeta2n1 ] the behavior of the ratio as a function of for the direct reflection case ( ) , eq .( [ direct_r ] ) , is shown as the upper solid curve ; the lower solid curve shows the case of direct transmission as a function of ( when ) , eq .( [ direct_t ] ) . for the upper curve ,the ratio increases as a function of ; since , as , , at first sight one would expect , in this limit , the ratio to increase towards the poissonian value unity . that this is not the case is due to the fact that both and tend to zero linearly with as this quantity tends to zero . for the orthogonal symmetry class ( ) we have not succeeded in finding an analytical expression for the conductance distribution , even for the particular cases studied above . for these cases ,the ratio was thus calculated numerically from eq .( [ w(g ) beta1 n=1 ] ) and the results are also presented in fig .[ fanobeta2n1 ] for comparison with the unitary case ; we observe that the ratio is always larger for than for .we wish to point out a property of the average shot - noise power of eq .( [ < p > 1 ] ) , in the present one - channel case .poisson s kernel of eq .( [ poisson ] ) has the property that has been called `` covariance '' : , where , and being fixed unitary matrices for , with for , the same transformation being applied to the optical .the invariant measure is invariant under this transformation . for , one can verify that the unitary matrices ] switches and and the corresponding optical parameters .the above transformations keeps invariant . as a consequence , remains invariant under the interchange , for , and for the particular case mentioned above .we observe that , indeed , the numerators of eqs .( [ direct_r ] ) and ( [ direct_t ] ) , which are proportional to , do fulfill this property .however , for the case considered here , this symmetry does not apply . as a function direct reflection ( indicated in the upper horizontal line as the abscissa ) for the case shown as the two upper curves .the two lower curves show the same ratio as a function direct transmission ( indicated in the lower horizontal line as the abscissa ) for the case .the dashed lines correspond to the orthogonal universality class ( ) and the solid lines to the unitary class ( ) . ] in the present one - channel case one can write down an expression for the distribution of the dimensionless " shot - noise power spectrum [ see eq .( [ p ] ) ] which lies in the range ( we are using the notation of ref . 
) .since is a function of the conductance , we can make an elementary change of variables and write {\tau = \tau(\eta ) } \\ \tau=\frac12\left [ 1\pm \sqrt{1 - 4\eta } \right ] .\end{aligned}\ ] ] [ w(eta ) ] thus the distribution in question is given by : where is given in eqs .( [ w(g ) beta12 n=1 ] ) . for , the result of this last equation ( [ w(eta ) ] )reduces to eq .( 95 ) of ref . . in the two - channel casethe matrix is two - dimensional and has two eigenvalues , whose joint probability distribution can be written from eqs .( [ w(tau_a ) ] ) as ^{5/2 } } { |{\rm det}(i - s\left\langle s\right\rangle ^{\dag})|^{5 } } d\mu ( v^{(1)})d\mu ( v^{(2 ) } ) \label{w(tau ) beta1 n=2 } \nonumber \\ \\ w_{\langle s \rangle}^{(2)}(\tau _ 1 , \tau _ 2 ) & = & 6 ( \tau _ 1 - \tau _ 2)^2 \int \cdots \int \frac { [ { \rm det}(i-\left\langle s\right\rangle \left\langle s\right\rangle ^{\dag})]^{4 } } { |{\rm det}(i - s\left\langle s\right\rangle ^{\dag})|^{8 } } d\mu ( v^{(1 ) } ) \cdots d\mu ( v^{(4 ) } ) .\label{w(tau ) beta2 n=2 } \nonumber \\\end{aligned}\ ] ] [ w(tau ) beta12 n=2 ] here , is the invariant measure for the unitary matrices used to represent in its polar form , eq .( [ s 2 ] ) ; the explicit form of is given in eqs .( [ dmu vi ] ) and ( [ range ] ) . from the above expressions we can evaluate the probability distribution of the conductance as and the ratio for the shot - noise power spectrum as in the absence of direct processes , , we obtain for the well known results : [ w(tau ) beta12 n=2 s=0 ] and for the conductance distribution ^ 3 .\end{aligned}\ ] ] [ w(t ) beta12 n=2 s=0 ] in the absence of direct processes , , various results concerning the average and variance of the conductance are known and will not be reproduced here . not known , to our knowledge , is the behavior of the shot - noise power spectrum for arbitrary , even for .we calculate below , for such a situation , the average for the orthogonal and the unitary cases . the numerator of ( [ < p > 2 ] )can be written as ^{\ast } \rangle _ 0^{(\beta ) } -\sum _ { a , b , c , d = 1}^{n}\left\langle s_{ab}^{21 } s_{cd}^{21}\left[s_{cb}^{21 } s_{ad}^{21}\right]^{\ast}\right\rangle_0^{(\beta ) } \label{f0}\end{aligned}\ ] ] the notation indicates an average over the invariant measure for the universality class . in the last line of eq .( [ f0 ] ) the upper indices indicate the block of the matrix in eq .( [ s ] ) .averages of monomials of the type ^{\ast } \right\rangle_0^{(\beta ) } \label{m}\ ] ] were studied in ref . and , for and , respectively .we now consider these two cases separately . in the orthogonal case , , we denote , just as in ref .[ ] . in that reference one finds the results \nonumber \\ & & + b\left[m_{\alpha \beta}^{\alpha ' \gamma'}m_{\gamma \delta}^{\beta ' \delta ' } + m_{\alpha \beta}^{\beta ' \delta ' } m_{\gamma \delta}^{\alpha ' \gamma ' } + m_{\alpha \beta}^{\alpha ' \delta'}m_{\gamma \delta}^{\beta ' \gamma ' } + m_{\alpha \beta}^{\beta ' \gamma ' } m_{\gamma \delta}^{\alpha ' \delta ' } \right ] , \label{m 4}\end{aligned}\ ] ] where [ m 24 ] substituting the results ( [ m 24 ] ) in eq .( [ f0 ] ) we find for the average of , eq .( [ < p > 2 ] ) , for the orthogonal case : in the unitary case , , we denote , just as in ref . [ ] . 
in that reference one finds the results \nonumber \\ & & -\frac{1}{2n[(2n)^2 -1]}\left [ \delta_{\alpha \gamma } ^{\alpha ' \gamma ' } \delta_{\beta \delta}^{\delta ' \beta ' } + \delta_{\alpha \gamma } ^{\gamma ' \alpha'}\delta_{\beta \delta}^{\beta ' \delta ' } \right ] \label{m 4}\end{aligned}\ ] ] which has to be substituted in eq .( [ f0 ] ) . for , eq . ( [ < p > 2 ] ) , we find : for a large number of open channels , , eqs .( [ < p > beta1 ] ) and ( [ < p > beta2 ] ) give , just as in refs .[ ] . the ratio from eqs . ( [ < p > beta1 ] ) and ( [ < p > beta2 ] ) is plotted in fig .[ fanobeta12 n ] as a function of the number of channels .we observe that this ratio is always larger for the orthogonal ( ) than for the unitary case ( ) , just as was noticed in the results shown in fig .[ fanobeta2n1 ] for the one - channel case .this effect indicates that time reversal symmetry pushes the distribution towards small s [ for this effect is given by eq .( [ w(g ) beta1 n=1 s0 ] ) ] in such a way that gets closer to poisson s value .the maximum - entropy approach that we have been discussing is expected to be valid for cavities in which the classical dynamics is completely chaotic a property that refers to the long - time behavior of the system as in such structures the long - time response is equilibrated and classically ergodic . in refs .[ ] the scalar schrdinger equation was integrated numerically for a number of 2d cavities in order to examine to what extent our approach really holds . in those referencesthe analysis was performed for the conductance distribution .the cavities were subjected to a magnetic field ( ) and they were connected to the outside by waveguides admitting one open channel ( ) . moreover ,the structures were such that they obviously supported short paths associated with direct reflection from a barrier , direct transmission from one lead to the other , or skipping - orbit trajectories in the presence of the magnetic field . in what followswe consider the numerical solution of the schrdinger equation for 2d structures which again support direct processes . now the system is not immersed in a magnetic field , so that it is time - reversal invariant ( ) .we mainly study the one - channel case , ( sec . [ num n1 ] below ) , although we also present some results for ( sec .[ num n2 ] ) .in addition to the conductance distribution , the average of the zero - frequency shot noise power spectrum is also studied , in order to examine further the applicability of the model .ensembles of similar systems are obtained by introducing an obstacle inside the cavity and changing its position ( see figs .[ w(t)22 - 23 ] , [ wgm ] and [ w(t ) 75 ] below ) . in all casesthe optical matrix was extracted from the data and used as an input in eq .( [ poisson ] ) , or in the various results of the sec .[ pk ] , to produce the theoretical predictions to be compared with numerical experiments . in this senseall of our fits are parameter free " . 
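before turning to the numerical solution of the schrödinger equation described next , the beta = 1 , 2 ensemble averages derived above can be cross - checked by direct random - matrix sampling . the sketch below estimates the ratio < P >/< P_poisson > = < sum tau(1-tau) >/< sum tau > for increasing channel number N ; it should reproduce the two qualitative features of fig . [ fanobeta12 n ] , namely that the ratio is larger for the orthogonal class and that both curves drift towards the familiar large - N limit of 1/4 ( the explicit value is not reproduced in the text above , so quoting it is our own assumption ) . the monte carlo recipe is again a standard one of our own choosing .

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def fano_ratio(N, beta, samples=20_000):
    """Ensemble average of P/P_Poisson = <sum tau(1-tau)>/<sum tau> for a chaotic
    cavity with N open channels in each lead and no direct processes (<S> = 0)."""
    num = den = 0.0
    for _ in range(samples):
        u = haar_unitary(2 * N)
        s = u @ u.T if beta == 1 else u          # orthogonal vs unitary class
        t = s[N:, :N]                            # transmission block of S
        tau = np.linalg.eigvalsh(t.conj().T @ t)
        num += np.sum(tau * (1 - tau))
        den += np.sum(tau)
    return num / den

for N in (1, 2, 3, 4):
    print(f"N={N}:  beta=1 -> {fano_ratio(N, 1):.3f}   beta=2 -> {fano_ratio(N, 2):.3f}")
# the beta=1 ratio should stay above the beta=2 ratio for every N, and both
# should approach the familiar large-N value of 1/4.
```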
for details of the numerical studywhen the energy lies inside the interval ] .we need to study in energy intervals not too close to either threshold , in order to avoid threshold singularities .[ w(t)22 - 23 ] shows , as insets , the structures for which the numerical study was performed : they consist of a bunimovich stadium connected to two waveguides directly , as in panels ( a ) , ( b ) and ( c ) , or through a smaller half stadium , as in ( d ) .the structures are spatially asymmetric .the histograms were obtained by solving the schrdinger equation inside these structures and collecting the data in the energy interval $ ] ( in the units explained above ) , and then across an ensemble of 200 positions of the obstacle , which is also shown in the figure . in that energy interval ,20 equally - spaced points were considered : these points are farther apart than the correlation energy , as it appears from the negligible correlation coefficient ( over the ensemble ) that was obtained for the transmission and reflection amplitudes for two successive points .the optical matrix , obtained as an energy plus an ensemble average of , i.e. , , was extracted from the data and the optical reflection and transmission matrix elements are given in table [ s - opt - fig3 ] ..the optical reflection and transmission matrix elements for the four cases in fig .[ w(t)22 - 23 ] . [ cols="^,^,^,^",options="header " , ] [ shot noise n=2 ]the statistical properties of the dc conductance in chaotic cavities have been investigated in the past in the framework of the maximum - entropy model described in the previous sections . within the same framework ,in the present paper we have gone further by studying , in addition to the conductance , the zero - frequency shot - noise power spectrum .the shot noise is a more complicated quantity than the conductance , in the sense that it involves electron correlations due to the pauli principle .we have been particularly interested in the effect that direct processes consisting of whispering gallery modes have on the conductance and on the shot - noise power ; these modes were promoted by choosing properly the structure of the cavities and the position of the leads .this kind of direct processes were , in fact , avoided in previous publications by some of the present authors .for the two symmetries ( ) studied here we have found that the ratio , as a function of the number of channels for , is larger for than for , indicating that small values of the transmission eigenvalues are favored by time - reversal symmetry .we have found that the agreement between the theoretical predictions and the results of computer simulations performed for one and two open channels is generally good . however , the systematic discrepancies that we have observed lead us to revise the notions under which our model has been constructed .indeed , the maximum - entropy model described in sec . [ pk ] relies on a number of assumptions .for instance , the extreme idealization is made of regarding as a stationary random ( matrix ) function " of energy . as a consequence ,the optical matrix is constant with energy and the characteristic time associated with direct processes is literally zero .the property of _ stationarity _ allows defining the notion of _ ergodicity _ which , together with _analitycity _ , gives the _ reproducing property _, eq . 
( [ reprod ] ) , which is essential for the definition of poisson s kernel ( pk ) of eq .( [ poisson ] ) .needless to say , in realistic dynamical problems stationarity is only approximately fulfilled , so that one has to work with energy intervals across which the local " optical matrix is _ approximately constant _ , while , at the same time , such intervals should contain many fine - structure resonances .this compromise can actually be realized in nuclear physics , where the optical arises from the tail of many distant resonances or from a single - particle resonance that lies so far away in the complex- plane to act as a smooth background on top of the fine - structure compound - nucleus resonances : hence the huge separation between the two time scales .in contrast , as we saw in sec . [ num ] , such a compromise is difficult to fulfill for the physical systems studied here : this we believe to be the origin of the discrepancies observed between theory and numerical simulations .( indeed , discrepancies similar to the ones that we have observed in this paper were already there , to a certain extent , in refs .[ ] , but were overlooked at that time . ) in the present paper we give evidence that reducing literally to a point and collecting data over an ensemble constructed by changing the position of the obstacle inside the cavity , the agreement between theory and experiment is significantly improved , being excellent in several cases . in other words ,_ pk gives a good description of the statistics of the data taken across the ensemble_. it is interesting to remark that also in ref .[ ] cases had been found in which stationarity obviously did not hold .energy averages were out of the question in those cases , so that an ensemble was generated by adding noise " along the wall : it was found that pk gave an excellent description of the data collected across the ensemble at a fixed .this point was merely indicated at that time and no results were published . thus the results shown in the present paper give evidence that pk is valid beyond the situation where it was originally derived , which required the properties of analyticity , stationarity and ergodicity , plus a maximum - entropy ansatz .it is as though the reproducing property of eq .( [ reprod ] ) were valid even in the absence of stationarity and ergodicity ( analyticity is always there , of course ) . 
even at the present moment we are unable to give an explanation of this fact .a few remarks are in order in connection with this point .let us take the invariant measure of sec .[ pk ] as a model for the description of scattering by a chaotic cavity described by the scattering matrix and assumed to have ideal coupling to the leads .brouwer has shown ( see ref .[ ] , sec .v ) that when such a chaotic cavity is coupled to the leads by means of a tunnel barrier ( non - ideal coupling ) described by a fixed scattering matrix , say , the resulting , constructed using the combination law of and , is distributed according to pk .brouwer s proof , being essentially a change of variables from to the final , does not require stationarity , or ergodicity , or the maximum - entropy ansatz ; however , it neglects evanescent modes between the barrier and the cavity .in other words , the reproducing property , which is fulfilled identically for the invariant measure , is not destroyed by the presence of the tunnel barriers .the latter certainly give rise to a nonzero , so that the direct processes described by this , being produced by the tunnel barriers , take place outside the cavity ( see fig . 2 in ref . ) .in contrast , when direct processes take place inside the system , it is not possible , in general , to write the total as the combination of an and a _ fixed _ , as required by brouwer s analysis .take , for instance , the system shown in fig .[ w(t)22 - 23](d ) .if we had , say , a long neck " between the small cavity and the big one , then we could define scattering matrices for the former and for the latter and combine them , disregarding evanescent modes , to obtain the total scattering matrix .however , this is not the case for the actual system under study . as an approximation, we might think of assigning to the small and big cavities of the system of fig .[ w(t)22 - 23](d ) the scattering matrices and , respectively , that would occur if we added the neck between the two ; the total obtained by combining these open - channel and would represent an approximation to the actual problem ; however , we are not in a position to know how close this approximation would be to the exact solution : we leave this open question for future investigation .once again we seem to find that the valididty of pk for the systems studied in the previous section goes beyond the domain in which brouwer s result was derived .brouwer has also shown that pk for the matrix can be obtained from a lorentzian ensemble of hamiltonians with an arbitrary number of levels . in the limit lorentzian ensemble becomes equivalent to a gaussian ensemble . in this limit ,in which we believe that the gaussian ensemble describes a chaotic cavity , the problem becomes once again stationary in energy .it thus seems that a derivation of pk or at least of the reproducing property for chaotic cavities with a general type of direct processes and in the absence of stationarity is , to our knowledge , still missing .when this work was completed , the present authors became aware of a study of the shot noise problem by d. savin et al ., and p. braun et al . in which results similar to those of our sec .[ n arb ] have been obtained .one of the authors ( p.a.m . 
)whishes to acknowledge the hospitality of the max - planck - institut fr physik komplexer systeme ( mpi - pks ) in dresden , for making possible a long - term visit during which the present work could be almost completed .he also acknowledges financial support by conacyt , mxico , under contract no .he also wishes to thank h. u. baranger , c. lewenkopf , m. martnez and t. h. seligman for useful discussions .e.n.b . and v.a.g .are also grateful to the mpi - pks for its hospitality during their stay in dresden .for completeness , we present the explicit parametrization of the matrix in the polar representation for and and some of its applications . we write the two - dimensional matrix in the polar representation as = \left [ \begin{array}{cc } r & t ' \\ t & r ' \end{array } \right ] \label{s polar n=1}\ ] ] and the optical as , \label{s opt}\ ] ] where the various entries are complex numbers . for has the restrictions and .the distribution of the conductance for can be reduced to quadratures , with the result given in the text , eq .( [ w(g ) beta12 n=1 ] ) .the expressions given below are used in the present work when carrying out the numerical computations ; since these are performed for the orthogonal case , , we restrict ourselves to this universality class . for write the four - dimensional matrix in the polar representation as ^t & v^{(1 ) } \sqrt{\tau}\;[v^{(2)}]^t\\ v^{(2)}\sqrt{\tau}\;[v^{(1)}]^t & v^{(2)}\sqrt{1-\tau}\;[v^{(2)}]^t \end{array } \right ] = \left [ \begin{array}{cc } r & t ' \\ t & r ' \end{array } \right ] \ ; . \label{s 2}\ ] ]the reflection and transmission matrices , , etc . , are two dimensional .the matrix is two dimensional and diagonal : , with .the matrices and are two - dimensional unitary matrices which can be written as .\label{vi 2}\ ] ] the optical is written as , \label{s opt n2}\ ] ] where the various entries are two - dimensional matrices . the joint probability distribution of is given in eq .( [ w(tau ) beta1 n=2 ] ) of the text : it is a 10-dimensional integral , with given in eq .( [ dmu vi ] ) , the range of variation of the parameters being specified by ( [ range ] ) .
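the polar representation written down in this appendix can be checked directly in code . the following sketch builds a 4x4 scattering matrix for the orthogonal class ( beta = 1 , N = 2 ) from two haar - random 2x2 unitaries and a pair of transmission eigenvalues , using one common sign convention for the central block matrix ( conventions differ between references , so the chosen sign is an assumption ) , and verifies that the resulting S is unitary and symmetric and that the eigenvalues of t^dagger t reproduce the chosen tau 's .

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# polar construction of a 4x4 symmetric unitary S for beta = 1, N = 2
v1, v2 = haar_unitary(2), haar_unitary(2)
tau = np.diag(rng.uniform(0, 1, size=2))        # transmission eigenvalues
c, s = np.sqrt(np.eye(2) - tau), np.sqrt(tau)

A = np.block([[v1, np.zeros((2, 2))], [np.zeros((2, 2)), v2]])
M = np.block([[-c, s], [s, c]])                  # one common sign convention (assumed)
S = A @ M @ A.T                                  # beta = 1: the right factor is A^T

t = S[2:, :2]                                    # transmission block
print("unitary:   ", np.allclose(S @ S.conj().T, np.eye(4)))
print("symmetric: ", np.allclose(S, S.T))
print("eigenvalues of t^dagger t reproduce tau:",
      np.allclose(np.sort(np.linalg.eigvalsh(t.conj().T @ t)), np.sort(np.diag(tau))))
```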
in the past , a maximum - entropy model was introduced and applied to the study of statistical scattering by chaotic cavities , when short paths may play an important role in the scattering process . in particular , the validity of the model was investigated in relation with the statistical properties of the conductance in open chaotic cavities . in this article we investigate further the validity of the maximum - entropy model , by comparing the theoretical predictions with the results of computer simulations , in which the schrdinger equation is solved numerically inside the cavity for one and two open channels in the leads ; we analyze , in addition to the conductance , the zero - frequency limit of the shot - noise power spectrum . we also obtain theoretical results for the ensemble average of this last quantity , for the orthogonal and unitary cases of the circular ensemble and an arbitrary number of channels . generally speaking , the agreement between theory and numerics is good . in some of the cavities that we study , short paths consist of whispering gallery modes , which were excluded in previous studies . these cavities turn out to be all the more interesting , as it is in relation with them that we found certain systematic discrepancies in the comparison with theory . we give evidence that it is the lack of stationarity inside the energy interval that is analyzed , and hence the lack of ergodicity a property assumed in the maximum - entropy model that gives rise to the discrepancies . indeed , the agreement between theory and numerical simulations is improved when the energy interval is reduced to a point and the statistics is then collected over an ensemble obtained by varying the position of an obstacle inside the cavity . it thus appears that the maximum - entropy model is valid beyond the domain where it was originally derived . an understanding of this situation is still lacking at the present moment .
the worldwide efforts to build a viable quantum computer have one source of motivation in common : the potential to solve certain problems faster on a quantum computer than on any classical computer .there are a number of ways to specify quantum algorithms but the formulation of quantum algorithms as a uniform family of quantum circuits is the most popular choice .deriving an efficient quantum circuit for a given unitary matrix is a daunting , and , frustratingly , often impossible , task .there exist a small number of efficient quantum circuits , and even fewer quantum circuit design methods .if efficient quantum circuits are rare and difficult to derive , then is only natural to try to reuse these quantum circuits in the construction of other quantum circuits .we present in this paper a new design principle for quantum circuits that is exactly based on this idea .suppose that we want to realize a given unitary matrix as a quantum circuit .suppose that we know a number of quantum circuits realizing unitary matrices of the same size as .we choose a small subset of these unitary matrices such that the algebra generated by the matrices contains . roughly speaking ,if the generated algebra has some structure , e.g. is a finite dimensional ( twisted ) group algebra , then we are able to write down a quantum circuit realizing , which reuses the implementations of the matrices . as a motivating example serves the discrete hartley transformation , which is a variant of the discrete fourier transform defined over the real numbers . in section [ motivation ] , we show how the hartley transforms can be realized by combining quantum circuits for the discrete fourier transform and its inverse .we generalize the idea behind this construction in the subsequent sections. an essential ingredient of our method are circulant matrices and certain block - diagonal matrices , which are introduced in sections [ circulant ] and [ group - indexed ] . in section [ design ] we present the main result of this paper .we show how to derive a quantum circuit for a unitary matrix , which can be expressed as a linear combination of unitary matrices with known quantum circuits . 
for ease of exposition , we do not state the theorem in full generality ; the generalizations of the method are discussed in sections [ sec : generalization ] and [ projcirculants ] .there is a relation between kitaev s method for eigenvalue estimation of unitary operations and the present method which is explored in section [ kitaevrelation ] .section [ examples ] demonstrate the design principle with the help of some simple examples .we revisit the hartley transform in section [ hartley ] , and discuss fractional fourier transforms in section [ fractional ] .we give an interpretation of the well - known teleportation circuit in terms of projective circulants in section [ teleport ] ._ notations ._ we denote by , , , and the ring of integers , the ring of integers modulo , the field of real numbers , and the field of complex numbers , respectively .the group of unitary matrices is denoted by .we denote the identity matrix in by .we consider quantum computations that manipulate the state of two - level systems .a two - level system has two clearly distinguishable states and , which are used to represent a bit .we refer to such a two - level system as a quantum bit , or shortly a qubit .the state of quantum bits is mathematically represented by a vector in of norm 1 .we choose a distinguished orthonormal basis of and denote its basis vectors by , where with .a quantum gate on qubits is an element of the group of unitary matrices .we will use single - qubit gates and controlled - not gates .a _ single - qubit gate _ acting on qubit is given by a matrix of the form with .controlled - not gate _ with control qubit and target qubit is defined by where denotes addition modulo 2 .we denote this gate by . we will refer to single - qubit gates and controlled - not gates as _elementary gates_. it is well - known that the single - qubit gates and the controlled - not gates are universal , meaning the set generates the unitary group . in other words ,each matrix can be expressed in the form with , .of special interest are the shortest possible words for .we denote by the smallest such that there exists a word , with , , such that . the complexity measure turns out to be rather rigid .it is desirable to allow a variation which gives additional freedom .we say that a unitary matrix realizes with the help of ancillae provided that maps for all .we define to be the minimum of all unitary matrices realizing with the help of ancillae . 
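for readers who want to experiment with the gate counts discussed above , the following sketch builds the elementary gates as explicit matrices : a single - qubit gate embedded on a chosen wire , and the controlled - not as a permutation matrix . the bit - ordering convention ( qubit 0 taken as the most significant bit ) is our own choice and is not fixed by the text .

```python
import numpy as np

def single_qubit_gate(u, target, k):
    """Embed a 2x2 gate u acting on qubit `target` of a k-qubit register
    (qubit 0 is taken as the most significant bit, a convention fixed here)."""
    ops = [np.eye(2)] * k
    ops[target] = u
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, k):
    """Controlled-NOT on k qubits as a 2^k x 2^k permutation matrix."""
    dim = 2 ** k
    m = np.zeros((dim, dim))
    for x in range(dim):
        bits = [(x >> (k - 1 - j)) & 1 for j in range(k)]
        if bits[control]:
            bits[target] ^= 1
        y = sum(b << (k - 1 - j) for j, b in enumerate(bits))
        m[y, x] = 1.0
    return m

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# e.g. a Bell-state preparation on two qubits: CNOT(0,1) after H on qubit 0
bell = cnot(0, 1, 2) @ single_qubit_gate(H, 0, 2)
print(np.round(bell @ np.array([1, 0, 0, 0]), 3))   # (|00> + |11>)/sqrt(2)
```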
as examples , we mention the following bounds on the complexity for well - known transforms acting on quantum bits : the hadamard transform ; the discrete fourier transform when realized without ancillae , and when realized with ancillae . various unitary signal transformations with fast quantum realizations can be found in . assume that we have already found an efficient quantum circuit for a given unitary matrix with quantum gates , some constant . we would like to find an efficient quantum circuit for a polynomial function of , allowing ancillae qubits . if we succeed , then this would prove that for some constant . as an example , consider the discrete hartley transform of length , which is defined by $\frac{1}{\sqrt{N}}\left[\operatorname{cas}(2\pi k l / N)\right]_{k,l=0,\ldots,N-1}$ , where the function is defined by $\operatorname{cas}(x)=\cos x+\sin x$ . the discrete hartley transform is well - known in classical signal processing , cf . if we denote the discrete fourier transform by , then is an immediate consequence of the definitions . let . we will now derive an efficient quantum circuit implementing the hartley transform with one auxiliary quantum bit .

[ figure [ hartleycirc ] : quantum circuit realizing the discrete hartley transform with one ancilla qubit ; graphics source omitted ]

[ factorhart ] the discrete hartley transform can be realized by the circuit shown in figure [ hartleycirc ] , where denotes the unitary circulant matrix and the hadamard transform .

* proof . * let denote the unitary matrix effecting a discrete fourier transform on the least significant bits if the most significant ( ancilla ) bit is set ; in terms of matrices . similarly , let . we now show that the circuit shown in figure [ hartleycirc ] computes the linear transformation for all vectors of unit length . proceeding from left to right in the circuit , we obtain as desired . note that we have used the property that the discrete fourier transform has order four , i.e.
, .we cast this factorization and the corresponding complexity cost in terms of elementary quantum gates in the following theorem .the discrete hartley transform can be implemented with quantum gates on a quantum computer .* recall that the discrete fourier transform can be implemented with quantum gates , see .the claim is an immediate consequence of lemma [ factorhart ] .we pause here to discuss some noteworthy features of the preceding example .we notice that the discrete fourier transform satisfies the relation , and that all powers have fast implementations with known quantum circuits .we constructed the hartley transform as a linear combination of some of those powers , namely as a linear combination of and using the circuit shown in figure [ hartleycirc ] .a nice feature of this circuit is that any improvement in the design of quantum algorithms for the discrete fourier transform will directly lead to an improved performance of the discrete hartley transform , since the circuits for the discrete fourier transform a simply _ reused _ in the hartley transform circuit .we will generalize this idea in the following sections .the methods are much more general. we will even be able to combine several different circuits , assuming that some regularity conditions are satisfied .the factorization of the hartley transform implied by lemma [ factorhart ] will be obtained as a special case of this more general theory in section [ hartley ] .our goal is to derive a circuit implementing linear combinations of matrices .this is not an easy task , because all operations need to be unitary .we assume that the algebra generated by the matrices has some structure which we can exploit when deriving the circuit .our approach will be particularly successful when the generated algebra is a finite dimensional ( twisted ) group algebra . in this case , we can write down a _ single _ generic circuit which is able to implement _ any _ unitary matrix contained in . in the case of a group algebra ,a group - circulant determines which matrix is implemented by the generic circuit .recall the definition of a group - circulant : let be a finite group of order .choose an ordering of the elements of and identify the standard basis of with the group elements of .let denote a vector indexed by the elements of .then the -matrix is a group - circulant for the group .the following example covers the important special case of cyclic circulants .let be the cyclic group of order generated by and the elements of ordered according to .the circulant corresponding to takes the form we see that each row is obtained from the previous one by a cyclic shift to the right .the following crucial observation connects group - circulants with the coefficients in linear combinations .the linear independence of the representing matrices of a finite group ensures that the circulant is a unitary matrix .let be a positive integer .let be a set of linearly independent unitary matrices which form a finite subgroup of .furthermore , let be a linear combination of the matrices , with certain coefficients . then the associated group circulant matrix is unitary .* proof . * multiplying with yields since the matrices are linearly independent , it is possible to compare coefficients with , which shows that holds , where denotes the kronecker - delta . 
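the cyclic - circulant construction above can be made concrete with the running hartley example . the sketch below verifies numerically that the hartley matrix is a linear combination of F and F^3 , and that the associated group circulant for the cyclic group of order four is unitary , as guaranteed by the preceding result . the explicit coefficients (1-i)/2 and (1+i)/2 depend on the sign convention chosen for the discrete fourier kernel and are our own reconstruction , since the paper 's formula is not reproduced in the text above .

```python
import numpy as np

N = 8
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)        # unitary DFT (sign convention assumed)
Hart = (np.cos(2 * np.pi * np.outer(k, k) / N)
        + np.sin(2 * np.pi * np.outer(k, k) / N)) / np.sqrt(N)  # normalized cas kernel

a, b = (1 - 1j) / 2, (1 + 1j) / 2        # coefficients for this sign convention (assumption)
F3 = np.linalg.matrix_power(F, 3)
print("Hartley = a*F + b*F^3 :", np.allclose(Hart, a * F + b * F3))
print("Hartley unitary       :", np.allclose(Hart @ Hart.conj().T, np.eye(N)))

# cyclic group circulant for Z_4 = <F> with coefficient vector c = (0, a, 0, b)
c = np.array([0, a, 0, b])
C = np.array([[c[(x - y) % 4] for x in range(4)] for y in range(4)])
print("group circulant unitary:", np.allclose(C @ C.conj().T, np.eye(4)))
```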
in other words , the rows of the circulant matrix orthogonal ._ if the representing matrices are not linearly independent , then the group circulant is in general not unitary .in fact , it is not difficult to see that for _ each _ unitary matrix there _ is _ a choice of coefficients such that the group - circulant is not unitary .however , we will see in theorem [ thm : unitarytrick ] that even in this case it is possible to choose the coefficients such that the associated group - circulant is unitary .the notion of circulant matrices is based on ordinary representations of a given finite group .it is possible to generalize the concepts presented in this section to projective representations . in doing so, a greater flexibility in forming linear combinations can be achieved .this will be studied in detail in section [ projcirculants ] .let be a finite group , and denote by an ordinary matrix representation of acting by unitary matrices on a system of quantum bits .our goal is to derive an efficient implementation of a block diagonal matrix containing the representing matrices , this will be an essential step in creating a linear combination of these matrices .we need an efficient implementation of this block diagonal matrix , and a suitable encoding of the group elements will allow us to find such an implementation . for simplicity , we assume that is a -group ,that is , for some integer , but the ideas easily generalize to arbitrary solvable groups .it is possible to find a composition series of the group such that the quotient group contains exactly two elements , , see .a _ transversal _ of is a sequence of elements such that , and the quotient group is generated by the image of the element , the essence of this somewhat technical construction is that we obtain a unique presentation of each element in the form this allows to `` address '' each group element by a binary string of bits .abusing notation , we identify the element with its exponent vector , and we write to denote the matrix let denote the block diagonal matrix this block diagonal matrix contains the representing matrix of each group element .we need only an implementation of the matrices , because the representing matrices satisfy the relation . we will conditionally apply these matrices on the system of quntum bits .we have control bits , one for each matrix .we need a lemma which allows us to give an estimate of the complexity of our implementation .let be an elementary gate , i.e. , an element of .then the conditional gate can be implemented using at most elementary gates .if is a single - qubit gate , then can be implemented with at most six elementary gates .if is a controlled - not gate , then is a toffoli gate , which can be implemented with 14 elementary gates .we now state the main theorem of this section , which gives an upper bound on the complexity of the case operator : [ blockdecomp ] let be a finite group of order with a unitary matrix representation .let denote a transversal of . 
if is the maximum number of operations necessary to realize one of the matrices , , then the block diagonal matrix can be realized with at most elementary operations .( 50,50 ) setunit(2 mm ) ; qubits(5 ) ; qcycoord[1 ] : = qcycoord[1]+0.5 cm ; qcycoord[2 ] : = qcycoord[2]+0.5 cm ; qcycoord[3 ] : = qcycoord[3 ] ; label.lft(btex etex , ( qcxcoord , qcycoord[3]+3 mm ) ) ; label.lft(btex etex , ( qcxcoord , qcycoord[0]+0.6 cm ) ) ; qcxcoord : = qcxcoord+4 mm ; label.lft(btex etex,(qcxcoord , qcycoord[2]+0.6 cm ) ) ;label.lft(btex etex,(qcxcoord , qcycoord[0]+0.6 cm ) ) ; wires(4 mm ) ; label(btex etex , ( qcxcoord , qcycoord[0]+0.8 cm ) ) ; circuit(1.5cm)(icnd 2 , gpos 0,1 , btex etex ) ; label(btex etex , ( qcxcoord , qcycoord[0]+0.8 cm ) ) ; circuit(1.5cm)(icnd 3 , gpos 0,1 , btex etex ) ; label(btex etex , ( qcxcoord , qcycoord[3]+3 mm ) ) ; label(btex etex , ( qcxcoord+2 mm , qcycoord[3]+4 mm ) ) ; label(btex etex , ( qcxcoord+4 mm , qcycoord[3]+5 mm ) ) ; wires(2 mm ) ; label(btex etex , ( qcxcoord , qcycoord[0]+0.8 cm ) ) ; wires(2 mm ) ; circuit(1.5cm)(icnd 4 , gpos 0,1 , btex etex ) ; label(btex etex , ( qcxcoord , qcycoord[0]+0.8 cm ) ) ; wires(4 mm ) ; * proof .* we observe that due to binary expansion of the exponent vectors the operation can be implemented as in figure [ twiddledecomp ] .the statement concerning the number of gates of this factorization follows immediately from the previous lemma .a familiar example is given by the additive cyclic group .assume that this group is represented by , where and is some unitary matrix satisfying .a composition series is given by the subgroups .a transversal for the group is , for instance , given by the group elements , that is , .the implementation described in the previous theorem realizes the powers .an arbitrary power is realized by setting the control bits according to the binary expansion , with .suppose that we want to realize a unitary matrix by a quantum circuit .we assume that some unitary matrices with efficient quantum circuits are known to us .familiar examples are discrete fourier transforms , permutation matrices , and so on .suppose that some of the matrices generate a finite dimensional group algebra containing the matrix , then , simply put , a quantum circuit can be found for .the following theorem describes how this can be accomplished . to ease the presentation , we do not state the theorem in its most general form .the more technical generalizations will be discussed in the subsequent sections .[ completecircuit ] let be a finite group of order , and denote by a transversal of , that is , each element can be uniquely represented in the form , where .let be a unitary representation of such that the images form a set of linearly independent unitary operations .suppose that is a linear combination of the representing matrices , with coefficients . 
if denotes the associated group - circulant , with elements ordered according to the choice of the transversal , then the matrix is realized by the circuit given in figure [ generic ] .

[ figure [ generic ] : the generic circuit realizing the linear combination ; it consists of hadamard transforms on the ancilla wires , conditional applications of the transversal operators , the group circulant acting on the ancilla register , a second layer of conditional operators , and final hadamard transforms ; graphics source omitted ]

* proof . * note that by the choice of the transversal the ordering of the elements of is fixed . we define to be the transformation , where the first factors are equal to the hadamard transform . the transformation corresponds to the leftmost and to the rightmost transformation in figure [ generic ] . furthermore , we define and . observe that , , and are the remaining transformations in figure [ generic ] . the circuits for and are shown in factorized form , hereby exploiting the group - structure of the case - operator . we obtain the factorization of as in figure [ twiddledecomp ] . to verify that the circuit indeed computes , we first consider the matrix identity to be more precise , the entry at position of this block - structured matrix is equal to . this means that each row of blocks of ( [ blockmat ] ) contains the set of matrices in some permuted order . the same holds for the columns of this matrix . hence we can conclude that the first row of the matrix is given by .
hence applying to the columns of this matrixwill produce with some unitary matrix and zero - matrices of the appropriate sizes .note that the entries in the same rows resp .columns as must vanish , since as well as the other operations used in ( [ blockfact ] ) are unitary ._ note that the assumptions in the theorem can be considerably relaxed .the restriction to 2-groups is not necessary .in fact , the implementation of case operators can be extended to arbitrary solvable groups .moreover , we will show in the next section that the representing matrices do not need to be linearly independent ; one can always find suitable unitary group circulants .finally , it is not necessary that the representing matrices form an ordinary representation ; the extension to projective representations is discussed in section [ projcirculants ] .note that the cost of implementing a circuit for is determined by the cost of the transversal elements , , and by the cost of the group circulant .if the group is of small order , say the order is bounded by , then the efficiency of the implementation of a transformation which has been decomposed according to theorem [ completecircuit ] depends only on the complexity of the transformations .a family of representations with this property will be studied in section [ fractional ] .the theorem in the previous section assumed that the representing matrices of the group are linearly independent .the next theorem shows that one can drop this assumption entirely .[ thm : unitarytrick ] let be an ordinary representation of a finite group . if a unitary matrix can be expressed as a linear combination then the coefficients can be chosen such that the associated group circulant is unitary .* proof . *a finite group has a finite number of non - equivalent irreducible representations .let , , be a representative set of the non - equivalent irreducible unitary representations of the finite group .let be the complex - valued function on , which is determined by the value of the -coefficient of the representing matrix .we obtain functions in this way .it was shown by schur that these functions are orthogonal , unless , , and ; see ( * ? ? ?* section 2.2 ) .denote by let denote the direct sum of the irreducible representations , which are not contained in , that is , we have for some .indeed , comparing coefficients yields a system of linear equations .this system of equations can be solved , since the coefficient functions are orthogonal .the circulant corresponding to the coefficients is unitary by theorem [ procunit ] . 
ignoring the representing matrices , we obtain as a linear combination of the representing matrices , as claimed .we have assumed in theorem [ completecircuit ] that is obtained as a linear combination of matrices , which form an ordinary representation of a finite group .it turns out that the quantum circuit used for the implementation of can also be used , with a minor modification , for projective representations .we recall a few basic facts about projective representations and then give the appropriate generalization of the circulant matrices introduced in section [ circulant ] .note that projective representations have been used in quantum information theory .they for instance turn out to be the adequate formalism to describe a class of unitary error bases .let be a projective unitary representation of a finite group with factor set .in other words , is a function from to the nonzero complex numbers such that holds for all .the associativity of matrix multiplication implies the relations for all .this shows that is a -cocycle of the group with trivial action on .we assume that the neutral element of the group is represented by the identity matrix , which implies the values of the factor system are of modulus 1 , since the representation matrices are unitary .this shows , in particular , the relations let be a vector which is labeled by the elements of .we define a _ projective group circulant _ for with respect to to be the matrix projective circulants have been introduced by i. schur .we show that in the analog situation to theorem [ completecircuit ] the associated projective circulants are unitary .[ procunit]let be an -dimensional unitary projective representation of a finite group .suppose that the operators are linearly independent .if a unitary matrix can be expressed as a linear combination then the projective group circulant of the coefficients is unitary .* proof.* it suffices to show that the rows of the projective group circulant are pairwise orthogonal and of unit length , or more explicitly that holds for all .we will show that these orthogonality relations can be derived from the matrix identity .multiplying and yields notice that the multiplication rules ( [ mult ] ) imply therefore , can be expressed as setting , and in ( [ cocycle ] ) shows the identity which allows to simplify the previous expression for to the substitution yields setting , , and in the cocycle relation ( [ cocycle ] ) shows the identity this allows to write in the form comparing coefficients on both sides yields using ( [ unit ] ) , this shows that the orthogonality relations ( [ ortho ] ) hold .thus , the projective group circulant is indeed unitary , as claimed .we remark that a theorem analogous to theorem [ completecircuit ] holds in the situation where is a projective representation of a finite group . in this casethe matrices have to be replaced by a suitably rescaled transversal and the circulant matrix has to be replaced by the corresponding projective circulant .theorem [ procunit ] guarantees that the latter matrix is unitary . using projective representationsa greater flexibility can be achieved . 
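before moving on , the ingredients assembled so far , the case operator of theorem [ blockdecomp ] and the generic circuit of theorem [ completecircuit ] , can be checked end - to - end on the running hartley example . the sketch below builds the case operator for the cyclic group of order four from two conditional powers of the fourier matrix , forms the group circulant of the coefficients ( the same convention - dependent coefficients used in the earlier sketch ) , and verifies that the resulting circuit has the hartley transform in its upper - left block . the particular operator ordering is one ordering that reproduces the statement of the theorem ; the ordering in figure [ generic ] may differ in inessential ways .

```python
import numpy as np

N = 8
idx = np.arange(N)
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)   # unitary DFT, F^4 = 1
Hart = (np.cos(2 * np.pi * np.outer(idx, idx) / N)
        + np.sin(2 * np.pi * np.outer(idx, idx) / N)) / np.sqrt(N)
c = np.array([0, (1 - 1j) / 2, 0, (1 + 1j) / 2])   # Hart = sum_x c[x] F^x (convention-dependent)

def conditional(U, control_bit):
    """Apply U to the target register iff the given ancilla bit is set
    (bit 0 = most significant of the two ancilla qubits)."""
    out = np.zeros((4 * N, 4 * N), dtype=complex)
    for x in range(4):
        B = U if (x >> (1 - control_bit)) & 1 else np.eye(N)
        out[x * N:(x + 1) * N, x * N:(x + 1) * N] = B
    return out

# case operator for Z_4 = <F>: product of conditional F^2 and conditional F
Lam = conditional(np.linalg.matrix_power(F, 2), 0) @ conditional(F, 1)
blockdiag = np.zeros_like(Lam)
for x in range(4):
    blockdiag[x * N:(x + 1) * N, x * N:(x + 1) * N] = np.linalg.matrix_power(F, x)
print("case operator = sum_x |x><x| (x) F^x :", np.allclose(Lam, blockdiag))

# generic circuit: Hadamards on the ancillas, case operator, circulant,
# adjoint case operator, Hadamards again (one ordering that works)
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
W = np.kron(np.kron(H2, H2), np.eye(N))
C = np.array([[c[(x - y) % 4] for x in range(4)] for y in range(4)])   # group circulant
T = W @ Lam.conj().T @ np.kron(C, np.eye(N)) @ Lam @ W

print("circulant unitary          :", np.allclose(C @ C.conj().T, np.eye(4)))
print("upper-left block is Hartley:", np.allclose(T[:N, :N], Hart))
print("circuit is block-diagonal  :", np.allclose(T[N:, :N], 0))
```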
in section [ teleport ] we give an example for a projective representation of the group which is given by the pauli matrices . kitaev presented in and a quantum circuit that allows one to estimate an eigenvalue of a unitary matrix provided that the corresponding eigenstate is given ; see also and for further descriptions of this scenario . the phase estimation provides a unified framework for shor s algorithm and the algorithms for abelian stabilizers . in the following we give a brief account of this method . starting from a unitary transformation on qubits and an eigenvector , we want to generate an estimate of . the precision of this approximation is controlled by the number of digits we want to compute in the binary expansion of , i.e. , . if we assume that , in addition to and , we are given efficient quantum circuits implementing for , then it is possible to accomplish the task of approximating by means of an efficient quantum circuit . this circuit , which is given in figure [ kitaevcircuit ] , consists of three parts . reading from left to right , we have a quantum circuit acting on two registers : the first holds the eigenstate and the second , which ultimately will contain the approximation , is initialized with the state . in a first step an equal - weighted superposition of all binary strings on the second register is generated by application of a hadamard transform to each wire . then the transformation is performed for . note that if has finite order , then is a case - operator ( in the sense of section [ group - indexed ] ) for the cyclic group . in a third step an inverse fourier transform is computed giving the best -bit approximation of which is stored in the second register .

[ figure [ kitaevcircuit ] : circuit for phase estimation , consisting of hadamard gates on the control register , controlled powers of the unitary , and an inverse fourier transform ; graphics source omitted ]

the connection between the algorithm for phase estimation and the circuit given in figure [ generic ] is established below ; we first give a short numerical illustration of the phase - estimation circuit itself .
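the following sketch simulates the three - step circuit just described for a small diagonal example and returns the most likely t - bit estimate of the phase . representing the joint state as a ( control register ) x ( target register ) array , and the particular test matrix , are our own bookkeeping choices and are not taken from the text .

```python
import numpy as np

def phase_estimation(A, eigvec, t):
    """Simulate the three-step circuit: Hadamards on a t-qubit control register,
    controlled powers A^x, inverse Fourier transform, and a final measurement."""
    M = 2 ** t
    # joint state stored as an (control register) x (target register) array;
    # after the Hadamards the control register is uniform over x = 0..M-1
    state = np.tile(eigvec.astype(complex), (M, 1)) / np.sqrt(M)
    for x in range(M):                       # controlled powers: |x>|v> -> |x> A^x |v>
        state[x] = np.linalg.matrix_power(A, x) @ state[x]
    kk = np.arange(M)
    Finv = np.exp(-2j * np.pi * np.outer(kk, kk) / M) / np.sqrt(M)
    state = Finv @ state                     # inverse DFT on the control register
    probs = np.sum(np.abs(state) ** 2, axis=1)
    return np.argmax(probs) / M              # most likely t-bit estimate of the phase

phi = 0.3217
A = np.diag([1.0, np.exp(2j * np.pi * phi)])           # toy unitary with eigenphase phi
print(phase_estimation(A, np.array([0.0, 1.0]), t=8))  # about 0.320 at 8-bit resolution
```

with this primitive in hand , the connection to the circuit of figure [ generic ] announced above is spelled out next .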
for the special case of generated by a unitary transformation of order first apply the circuit given in figure [ kitaevcircuit ] to each element of a basis of eigenvectors of in order to obtain the ( exact ) eigenvalues in the second register .note that there are at most different eigenvalues of .we then perform the scalar multiplication for certain and all .finally we run the circuit given in figure [ kitaevcircuit ] backwards and observe that where the vector is given by .here {k , l=0,\ldots , n-1}$ ] denotes the ( unnormalized ) discrete fourier transform .note , that the method presented in section [ design ] is more general than this twofold application of the circuit for eigenvalue estimation as it allows to work with representations of arbitrary finite groups .the decomposition method introduced in the previous sections is demonstrated by means of the _ hartley transforms _ which have been introduced in section [ motivation ] and _ fractional fourier transforms _ ( cf . ) which are a class of unitary transformations used in classical signal processing . using the method of linear combinations of unitary operationswe show how to compute them efficiently on a quantum computer .finally , we show that the quantum circuit for teleportation of a qubit can be interpreted with the help of this method .the efficient quantum circuit shown in figure [ hartleycirc ] of section [ motivation ] can be recast in terms of theorem [ completecircuit ] .first recall that the identity with and shows that the hartley transform is a linear combination of the powers of the discrete fourier transform .we can simplify this to obtain , where is defined as .since is an involution , we can apply apply theorem [ completecircuit ] in the special situation where .hence the circulant is in this case the circulant matrix since for the matrices are linearly independent , we can use theorem [ completecircuit ] to conclude that has to be unitary .hence we can implement using one auxiliary qubit with a quantum circuit as in figure [ generic ] . combining the circuits for and for we finally obtain the factorization of shown in figure [ hartleycirc ] of section [ motivation ] .a matrix having the property with is called an -th root of ( where in general this root is not uniquely determined ) . in case of the discrete fouriertransform we can use the property that to define an -th root of via where the coefficients for are defined by note that like in the previous example of the discrete hartley transforms in section [ hartley ] , we have used the property that generates a finite group of order four to obtain the linear combination shown in eq .( [ fractfact ] ) .it was shown in that the one - parameter family has the following properties : * is a unitary matrix for .* and .* for . * , for .using theorem [ generic ] we immediately obtain that can be computed in elementary quantum operations for all , since the complexity of the discrete fourier transform of length is ( cf . ) and the circulant matrix appearing in this case is which can be implemented in . 
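the fractional fourier construction can be prototyped directly from the linear combination . in the sketch below the interpolation coefficients g_m(alpha) are taken as the standard spectral - interpolation choice over the four eigenvalues i^j of the fourier matrix ; since the paper 's explicit coefficient formula is not reproduced in the text above , this choice is an assumption and may differ from it by an eigenvalue branch . the code checks the properties listed above ( unitarity , F^1 = F , the additive group law ) and that the 4x4 circulant of the coefficients is unitary .

```python
import numpy as np

N = 8
idx = np.arange(N)
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)   # unitary DFT, F^4 = 1

def frac_coeffs(alpha):
    # g_m(alpha) = (1/4) sum_{j=0..3} exp(i pi j (alpha - m) / 2)
    # (spectral interpolation over the eigenvalues i^j of F; assumed convention)
    return np.array([np.mean(np.exp(1j * np.pi * np.arange(4) * (alpha - m) / 2))
                     for m in range(4)])

def frac_fourier(alpha):
    g = frac_coeffs(alpha)
    return sum(g[m] * np.linalg.matrix_power(F, m) for m in range(4))

a, b = 0.37, 1.21
Fa, Fb, Fab = frac_fourier(a), frac_fourier(b), frac_fourier(a + b)
print("F^alpha unitary   :", np.allclose(Fa @ Fa.conj().T, np.eye(N)))
print("F^1 equals F      :", np.allclose(frac_fourier(1.0), F))
print("group law F^a F^b :", np.allclose(Fa @ Fb, Fab))

g = frac_coeffs(a)
C = np.array([[g[(x - y) % 4] for x in range(4)] for y in range(4)])
print("coefficient circulant is unitary:", np.allclose(C @ C.conj().T, np.eye(4)))
```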
more precisely we have . using the results of , we can reduce the computational complexity of to if we have no restrictions on the number of ancilla qubits .

[ circuit diagram : quantum circuit computing the fractional fourier transform , following the generic construction of figure [ generic ] ; graphics source omitted ]

hence , we obtain the following theorem which summarizes the complexity of computing a fractional fourier transform . let and be the fractional fourier transform of length and parameter . then . in this section , we show how the well - known quantum circuit for teleportation of an unknown quantum state ( cf . ) can be interpreted with the help of our method . suppose that alice wants to teleport a quantum state of a qubit in her possession to a qubit in bob s possession at a remote destination . if the destination qubit is in the state , then , conceptually , the task is to apply a unitary operation such that . specifically , the matrix can be chosen to be of the form . clearly , it would not be feasible for alice to communicate the specification of to bob by classical communication . therefore , she has to proceed in a different way . recall that the matrices form a basis of the vector space of complex matrices . thus , the matrix can be written as a linear combination . note that the pauli basis is an orthonormal basis of with respect to the inner product . as a result , the coefficient corresponding to can be easily computed by . consequently , we obtain . the pauli matrices define a projective representation of the abelian group ( see , e.g. , ) .
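the decomposition ( [ addtele ] ) and the trace formula for its coefficients are easy to verify numerically . the sketch below builds the unitary that maps |0> to an arbitrary unknown state , computes the pauli coefficients as tr(sigma U)/2 , reconstructs U from them , and also checks the projective - representation property that any product of two pauli matrices is a fourth root of unity times another pauli . the explicit matrix form of U follows the text ; the random test state is our own choice .

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# the unitary that maps |0> to the unknown state a|0> + b|1>
rng = np.random.default_rng(5)
a, b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm
U = np.array([[a, -np.conj(b)], [b, np.conj(a)]])

# coefficients w.r.t. the (orthogonal) Pauli basis: c_sigma = tr(sigma U) / 2
coeffs = np.array([np.trace(p.conj().T @ U) / 2 for p in paulis])
recon = sum(c * p for c, p in zip(coeffs, paulis))
print("U reconstructed from Pauli coefficients:", np.allclose(recon, U))
print("sum of |c_sigma|^2 (1 for unitary U):",
      np.round(np.sum(np.abs(coeffs) ** 2), 6))

# the Paulis form a projective representation: every product is a phase times a Pauli
ok = all(any(np.allclose(p @ q, ph * r) for r in paulis for ph in (1, -1, 1j, -1j))
         for p in paulis for q in paulis)
print("all Pauli products are phases times Paulis:", ok)
```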
applying the method described in section [ projcirculants ] , the decomposition ( [ addtele ] ) gives rise to the projective circulant matrix defined as follows : \beta + \overline{\beta } & \alpha - \overline{\alpha } & \beta - \overline{\beta } & \alpha + \overline{\alpha } \\[0.5ex ] \alpha + \overline{\alpha } & -(\beta - \overline{\beta } ) & \alpha - \overline{\alpha } & -(\beta + \overline{\beta } ) \\[0.5ex ] \beta - \overline{\beta } & -(\alpha + \overline{\alpha } ) &\beta + \overline{\beta } & -(\alpha - \overline{\alpha } ) \end{array } \right).\ ] ] we can express by a sequence of hadamard gates , controlled - not gates , and the single - qubit gate .indeed , a straightforward calculation shows that \beta & 0 & 0 & -\overline{\alpha } \\[0.5ex ] 0 & \beta & -\overline{\alpha } & 0 \\[0.5ex ] 0 & \alpha & \overline{\beta } & 0 \end{array } \right ) = : \widetilde{c_u}.\ ] ] applying suitable permutations from the left and the right to the matrix we finally obtain the expression overall we obtain that is given by the circuit shown in figure [ cugate ] . ( 50,50 ) setunit 1.6 mm ; qubits(2 ) ; wires(2 mm ) ; circuit(2qcheight)(gpos 0,1 , btex etex ) ; wires(2 mm ) ; label(btex etex,(qcxcoord+1/2qcstepsize , qcycoord[0]+2.5 mm ) ) ; qcxcoord : = qcxcoord + qcstepsize ; wires(2 mm ) ; gate(gpos 1,btex etex ) ; cnot(icnd 0 , gpos 1 ) ; cnot(icnd 1 , gpos 0 ) ; gate(gpos 1,btex etex ) ; cnot(icnd 1 , gpos 0 ) ; gate(gpos 1,btex etex ) ; wires(2 mm ) ; we now turn to the circuit implementing the transformation using the linear combination ( [ addtele ] ) . in the followingwe will modify the generic circuit step by step using elementary identities of quantum gates .we start with the identity ( 50,50 ) setunit 1.6 mm ; qubits(3 ) ; label.lft(btex etex,(qcxcoord , qcycoord[2 ] ) ) ; label.lft(btex etex,(qcxcoord , qcycoord[1 ] ) ) ; label.lft(btex etex,(qcxcoord , qcycoord[0 ] ) ) ; wires(3*unit ) ; gate(gpos 0 , btex etex ) ; wires(3unit ) ; label.rt(btex etex,(qcxcoord , qcycoord[2 ] ) ) ; label.rt(btex etex,(qcxcoord , qcycoord[1 ] ) ) ; label.rt(btex etex,(qcxcoord , qcycoord[0 ] ) ) ; qcxcoord : = qcxcoord + qcstepsize ; label(btex etex,(qcxcoord+2 mm , qcycoord[1 ] ) ) ; qcxcoord : = qcxcoord + qcstepsize ; wires(2 mm ) ; gate(gpos 1 , btex etex , 2 , btex etex ) ; cnot(icnd 1 , gpos 0 ) ; gate(icnd 2 , gpos 0 , btex etex ) ; circuit(2qcheight)(gpos 1,2 , btex etex ) ; cnot(icnd 1 , gpos 0 ) ; gate(icnd 2 , gpos 0 , btex etex ) ; gate(gpos 1 , btex etex , 2 , btex etex ) ; wires(2 mm ) ; which is obtained directly from the method of sections [ design ] and [ projcirculants ] . 
the matrix is given by a linear combination of pauli matrices in eq . ( [ addtele ] ) . this linear combination determines . we rewrite as a product of cnot gates and local unitary transformations using eq . ( [ cu ] ) . we obtain the circuit

[ circuit diagram omitted ]

where we also turned the first controlled- gate upside down . now we can use the basic fact that to rewrite the controlled- framed by the two hadamard gates on the top wire in the following way :

[ circuit diagram omitted ]

we can simplify the sequence of the first five gates of the last circuit . indeed , since we start from the state , the state resulting from applying the first five gates is . this state can be obtained by applying the first two of these five gates alone . this simplification yields the following circuit

[ circuit diagram omitted ]

which decomposes into three stages : ( i ) _ epr pair and state preparation _ in which , starting from the ground state , two of the bits are turned into an epr state while the third qubit holds the unknown state , ( ii ) _ bell measurement _ of the two most significant qubits , and ( iii ) a _ reconstruction _ operation which is a conditional transformation on qubit one depending on the outcome of the measurement of qubits two and three .

[ circuit diagram : three - stage circuit with blocks labelled prep , bell , and recover ; graphics source omitted ]

hence , this circuit equals the teleportation circuit for an unknown quantum state , see for instance ( * ? ? ? * section 1.3.7 ) or .
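the three - stage reading of the circuit can be confirmed end - to - end by a direct simulation . the sketch below prepares an epr pair , performs the bell measurement by rotating into the bell basis , and checks that , for every measurement outcome , the state of the destination qubit equals the unknown state up to a pauli correction . the qubit ordering and the particular bell - basis rotation ( a cnot followed by a hadamard ) are our own conventions ; they may be wired differently in the figures above .

```python
import numpy as np

rng = np.random.default_rng(11)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# unknown state psi on qubit A; qubits ordered (A, B, C), A most significant
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)     # EPR pair on (B, C)
state = np.kron(psi, epr)

# Bell measurement on (A, B): rotate into the Bell basis with CNOT_{A->B}, then H on A
cnot_ab = np.zeros((8, 8))
for x in range(8):
    a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
    cnot_ab[(a << 2) | ((b ^ a) << 1) | c, x] = 1
state = kron(H, I2, I2) @ (cnot_ab @ state)

corrections = {"I": I2, "X": X, "Z": Z, "XZ": X @ Z}
for outcome in range(4):                       # measured bits (a, b) on qubits A, B
    block = state[outcome * 2:(outcome + 1) * 2]          # unnormalized state of qubit C
    prob = np.linalg.norm(block) ** 2
    cond = block / np.linalg.norm(block)
    # find the Pauli correction that restores psi (up to a global phase)
    name = next(n for n, g in corrections.items()
                if abs(abs(np.vdot(psi, g @ cond)) - 1) < 1e-10)
    print(f"outcome {outcome:02b}: probability {prob:.2f}, correction {name}")
```

each of the four outcomes occurs with probability 1/4 and a single pauli correction recovers the unknown state , in agreement with the reconstruction stage described above .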
in summary, we have seen that it is possible to derive a transformation of a quantum circuit corresponding to linear combination of the transformation as a sum of pauli matrices into the teleportation circuit .the factorization of a unitary matrix in terms of elementary quantum gates amounts to solve a word problem in a unitary group .this problem is quite difficult , in particular since only words of small length , which correspond to efficient algorithms , are of practical interest .few methods are known to date for the design of quantum circuits .several ad - hoc methods for quantum circuit design have been proposed , mostly heuristic search techniques based on genetic algorithms , simulated annealing or the like .such methods are confined to fairly small circuit sizes , and the solutions produced by such heuristics are typically difficult to interpret . the method presented in this paper follows a completely different approach .we assume that we have a set of efficient quantum circuits available .our philosophy is to reuse and combine these circuits to build a new quantum circuit .we have developed a sound mathematical theory , which allows to solve such problems under certain well - defined conditions . following this approach ,we have demonstrated that the discrete hartley transforms and fractional fourier transforms have extremely efficient realizations on a quantum computer .it should be stressed that the method is by no means exhausted by these examples . from a practical point of view , it would be interesting to build a database of moderately sized matrix groups which have efficient quantum circuits .this database could in turn be searched for a given transformation by means of linear algebra .it is an appealing possibility to automatically derive quantum circuit implementations in this fashion .the research of a.k . has been partly supported by nsf grant eia 0218582 , and by a texas a&m titf grant .part of this work has been done while m.r .was at the institute for algorithms and cognitive systems , university of karlsruhe , karlsruhe , germany , and during a visit to the mathematical sciences research institute , berkeley , usa .he wishes to thank both institutions for their hospitality .his research has been supported by the european community under contract ist-1999 - 10596 ( q - acta ) , cse , and mitacs .a. klappenecker .wavelets and wavelet packets on quantum computers . in m.a .unser , a. aldroubi , and a.f .laine , editors , _ wavelet applications in signal and image processing vii _ , pages 703713 .spie , 1999 .m. pschel , m. rtteler , and th .in _ proceedings applied algebra , algebraic algorithms and error - correcting codes ( aaecc-13 ) _ , volume 1719 of _ lecture notes in computer science _ , pages 148159 .springer , 1999 .
the design of efficient quantum circuits is an important issue in quantum computing. it is in general a formidable task to find a highly optimized quantum circuit for a given unitary matrix. we propose a quantum circuit design method that has the following unique feature: it allows one to construct efficient quantum circuits in a systematic way by reusing and combining a set of highly optimized quantum circuits. specifically, the method realizes a quantum circuit for a given unitary matrix by implementing a linear combination of representing matrices of a group, which have known fast quantum circuits. we motivate and illustrate this method by deriving extremely efficient quantum circuits for the discrete hartley transform and for the fractional fourier transforms. the sound mathematical basis of this design method allows one to give meaningful and natural interpretations of the resulting circuits. we demonstrate this aspect by giving a natural interpretation of known teleportation circuits.
massive datasets describing the activity patterns of large human populations now provide researchers with rich opportunities to quantitatively study human dynamics , including the activities of groups or teams .new tools , including electronic sensor systems , can quantify team activity and performance . with the rise in prominence of network science ,much effort has gone into discovering meaningful groups within social networks and quantifying their evolution .teams are increasingly important in research and industrial efforts and small , coordinated groups are a significant component of modern human conflict .there are many important dimensions along which teams should be studied , including their size , how work is distributed among their members , and the differences and similarities in the experiences and backgrounds of those team members .recently , there has been much debate on the `` group size hypothesis '' , that larger groups are more robust or perform better than smaller ones .scholars of science have noted for decades that collaborative research teams have been growing in size and importance . at the same time , however , social loafing , where individuals apply less effort to a task when they are in a group than when they are alone , may counterbalance the effectiveness of larger teams .meanwhile , case studies show that leadership and experience are key components of successful team outcomes , while specialization and multitasking are important but potentially error - prone mechanisms for dealing with complexity and cognitive overload . in all of these areas ,large - scale , quantitative data can push the study of teams forward .teams are important for modern software engineering tasks , and researchers have long studied the digital traces of open source software projects to better quantify and understand how teams work on software projects .researchers have investigated estimators of work activity or effort based on edit volume , such as different ways to count the number of changes made to a software s source code .various dimensions of success of software projects , such as popularity , timeliness of bug fixes , or other quality measures have been studied .successful open source software projects show a layered structure of primary or core contributors surrounded by lesser , secondary contributors . at the same time , much work is focused on case studies of small numbers of highly successful , large projects . 
considering these studies aloneruns the risk of survivorship bias or other selection biases , so large - scale studies of large quantities of teams are important complements to these works .users of the github web platform can form teams to work on real - world projects , primarily software development but also music , literature , design work , and more .a number of important scientific computing resources are now developed through github , including astronomical software , genetic sequencing tools , and key components of the compact muon solenoid experiment s data pipeline .a `` github for science '' initiative has been launched and github is becoming the dominant service for open scientific development .github provides rich public data on team activities , including when new teams form , when members join existing teams , and when a team s project is updated .github also provides social media tools for the discovery of interesting projects .users who see the work of a team can choose to flag it as interesting to them by `` starring '' it .the number of these `` stargazers '' allows us to quantify one aspect of the * success * of the team , in a manner analogous to the use of citations of research literature as a proxy for `` impact '' .of course , as with bibliometric impact , one should be cautious and not consider success to be a perfectly accurate measure of _ quality _ , something that is far more difficult to objectively quantify .instead this is a measure of popularity as would be other statistics such as web traffic , number of downloads , and so forth . in this study, we analyze the memberships and activities of approximately 150,000 teams , as they perform real - world tasks , to uncover the blend of features that relate to success . to the best of our knowledgethis is the largest study of real - world team success to date .we present results that demonstrate ( i ) how teams distribute or focus work activity across their members ,( ii ) the mixture of experiential diversity and collective leadership roles in teams , and ( iii ) how successful teams are different from other teams while accounting for confounds such as team size .the rest of this paper is organized as follows : in sec .[ sec : methods ] we describe our github dataset ; give definitions of a team , team success , and work activity / focus of a team member ; and introduce metrics to measure various aspects of the experience and experiential diversity of a team s members .in sec . [sec : results ] we present our results relating these measures to team success . in sec .[ subsec : combined ] we present statistical tests on linear regression models of team features to control for potential confounds between team features and team success .lastly , we conclude with a discussion in sec . [sec : discuss ] .public github data covering 1 january 2013 to 1 april 2014 was collected from githubarchive.org in april 2014 . in their own words , `` github archive is a project to record the public github timeline , archive it , and make it easily accessible for further analysis '' .these activity traces contain approximately 110 m unique events , including when users create , join , or update projects .projects on github are called `` repositories '' . 
for this workwe define a * team * as the set of users who can directly update ( `` push to '' ) a repository .these users constitute the * primary * team members as they have either created the project or been granted autonomy to work on the project .the number of team members was denoted by .activity or workload was estimated from the number of pushes .a push is a bundle of code updates ( known as commits ) , however most pushes contain only a single commit ( see ; see also ref . ) . as with all studies measuring worker effort from lines - of - code metrics ,this is an imperfect measure as the complexity of a unit of work does not generally map to the quantity of edits .users on github can bookmark projects they find interesting .this is called `` stargazing '' .we take the maximum number of stargazers for a team as its measure of * success * .this is a popularity measure of success , however the choice to bookmark a project does imply it offers some value to the user . to avoid abandoned projects , studied teams have at least one stargazer ( ) and at least two updates per month on average within the githubarchive data .these selection criteria leave teams .we also collect the time of creation on github for each team project .this is useful for measuring confounds : for example , older teams may tend to both have more members and have more opportunities to increase success. of the teams studied , 67.8% were formed within our data window .beyond considering team age as a potential confounder , we do not study temporal dynamics such as team formation in this work .a small number of studied teams ( 1.08% ) have more than ten primary members ( ) ; those teams were not shown in figures , but they were present in all statistical analyses .lastly , to ensure our results are not due to outliers , in some analyses we excluded teams above the 99th percentile of .despite a strong skew in the distribution of , these highly popular teams account for only 2.54% of the total work activity of the teams considered in this study ( 2.27% when considering teams with members ) . [ [ secondary - team ] ] secondary team + + + + + + + + + + + + + + github provides a mechanism for external , non - team contributors to propose work that team members can then choose to use or not .these proposals are called pull requests .( other mechanisms , such as discussions about issues , are also available to non - team contributors . )these secondary or external team contributors are not the focus of this work and have already been well studied by oss researchers .however , it is important to ensure that they do not act as confounding factors for our results , since more successful teams will tend to have more secondary contributions than other teams .so we measure for each team , the number of unique users who submit at least one pull request , and the number of pull requests .we will include these measures in our combined regression models . despite their visibility in github , pull requests are rare ; in our data , 57.7% of teams we study have , and when present pull requests are greatly outnumbered by pushes on average : ( median ) , averaged over all teams with at least one pull request .the number of team members , , does not fully represent the size of a team since the distribution of work may be highly skewed across team members . 
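a rough sketch of this selection pipeline in pandas is given below ; the file name , column names and event type labels are illustrative assumptions , and team membership is approximated here by the users who actually pushed , which is only a proxy for the push - access definition used above .

....
import pandas as pd

# one row per public event with (at least) repo, user, type, created_at;
# all names here are assumptions about a pre-processed githubarchive dump
events = pd.read_csv("events.csv", parse_dates=["created_at"])

pushes = events[events["type"] == "PushEvent"]
stars = events[events["type"] == "WatchEvent"]   # stargazing events

months = 15                                      # 1 january 2013 - 1 april 2014
per_repo = pd.DataFrame({
    "pushes": pushes.groupby("repo").size(),
    "team_size": pushes.groupby("repo")["user"].nunique(),
    "stargazers": stars.groupby("repo")["user"].nunique(),
}).fillna(0)

# selection criteria from the text: at least one stargazer and,
# on average, at least two updates per month within the data window
teams = per_repo[(per_repo["stargazers"] >= 1) & (per_repo["pushes"] >= 2 * months)]
print(len(teams), "teams retained")
....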
to capture the * effective team size * , accounting for the relative contribution levels of members ,we use , where , and is the fraction of work performed by team member .this gives when all , as expected .this simple , entropic measure is known as perplexity in linguistics and is closely related to species diversity indices used in ecology and the herfindahl - hirschman index used in economics. denote with the set of projects that user works on ( has pushed to ) .( projects in need at least twice - monthly updates on average , as before , but may have so as to better capture s background , not just successful projects . )we estimate the * experience * of a team of size as and the experiential * diversity * as where the sums and union run over the members of the team .note that .experience measures the quantity of projects the team works on while diversity measures how many or how few projects the team members have in common , the goal being to capture how often the team has worked together .lastly , someone is a * lead * when , for at least one project they work on , they contribute more work to that project than any other member . a non - lead member of team may be the lead of project .the number of leads in team of size is : where if user is the lead of team , and zero otherwise .the first sum runs over the members of team , the second runs over all projects .of course , the larger the team the more potential leads it may contain so when studying the effects of leads on team success we only compare teams of the same size ( comparing while holding fixed ) .otherwise , and already account for team size .we began our analysis by measuring team success as a function of team size , the number of primary contributors to the team s project .since is , at least partially , a popularity measure , we expect larger teams to also be more successful .indeed , there was a positive and significant relationship ( , rank correlation ) between the size of a team and its success , with 300% greater success on average for teams of size compared to solos with ( fig .[ fig : introtodata ] ) .this strong trend holds for the median success as well ( inset ) . while this observed trend was highly significant , the rank correlation indicates that there remains considerable variation in that is not captured by team size alone .our next analysis reveals an important relationship between team focus and success . unlike bibliographic studies , where teams can only be quantified as the listed coauthors of a paper ,the data here allow us to measure the intrinsic work or volume of contributions from each team member to the project . 
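a minimal python sketch of the two composition measures just defined ; since the formulas themselves are garbled in this extraction , the code follows the verbal definitions ( effective size as the exponential of the work - share entropy , i.e. perplexity , and diversity as the fraction of the members projects that are unique ) .

....
import numpy as np

def effective_size(work):
    """effective team size: exp of the shannon entropy of the work shares (perplexity)"""
    f = np.asarray(work, dtype=float)
    f = f / f.sum()
    f = f[f > 0]
    return float(np.exp(-np.sum(f * np.log(f))))

def diversity(member_projects):
    """fraction of the projects worked on by team members that are unique"""
    sets = [set(p) for p in member_projects]
    union = set().union(*sets)
    return len(union) / sum(len(s) for s in sets)

print(effective_size([5, 5]))     # 2.0   -> two members doing equal work
print(effective_size([95, 5]))    # ~1.22 -> strongly dominated two-person team
print(diversity([{"p1", "p2"}, {"p2", "p3"}]))   # 0.75
....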
for each team we measured the contribution of a member to the team s ongoing project , how many times that member updated the project ( see methods ) .team members were ranked by contribution , so counts the work of the member who contributed the most , the second heaviest contributor , and so forth .the total work of a team is .we found that the distribution of work over team members showed significant skew , with often more than 23 times greater than ( fig .[ fig : teamsarefocused]a and ) .this means that the workloads of projects are predominantly carried by a handful of team members , or even just a single person .larger teams perform more total work , and the heaviest contributor carries much of that effort : the inset of fig .[ fig : teamsarefocused]a shows that , the fraction of work carried by the rank one member , falls slowly with team size , and is typically far removed from the lower bound of equal work among all team members .see for more details .this result is in line with prior studies , supporting the plausibility of our definition of a team and our use of pushes to measure work .this focus in work activity indicates that the majority of the team serves as a support system for a core set of members .does this arrangement play a role in whether or not teams are successful ? we investigated this in several ways .first , we asked whether or not a team was * dominated * , meaning that the lead member contributed more work than all other members combined ( ) .highly successful `` top '' teams , those in the top 10% of the success distribution , were significantly more likely to be dominated than average teams , those in the middle 20% of , or `` bottom '' teams , those in bottom 10% of the .[ fig : teamsarefocused]b ) .can this result be due to a confounding effect from success ?more successful projects will tend to have more external contributors , for example , which can change the distribution of work .for example , in one scenario a team member may be a `` community manager '' merging in large numbers of external contributions from non - team members . to test this we examined only the 57.7% of teams that had no external contributions ( ) and tested among only those teams whether dominated teams were more successful than non - dominated teams . 
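the comparison announced here is a rank - based two - sample test ; a scipy sketch with synthetic placeholder numbers ( not the github data ) shows how such a mann - whitney u test is run .

....
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# synthetic success values standing in for the two groups of teams
S_dominated = rng.lognormal(mean=1.5, sigma=1.0, size=400)
S_other = rng.lognormal(mean=1.0, sigma=1.0, size=400)

# one-sided, rank-based test (continuity correction is on by default):
# are dominated teams more successful?
U, p = mannwhitneyu(S_dominated, S_other, alternative="greater")
print(f"U = {U:.0f}, p = {p:.2e}")
....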
within this subset of teams ,dominated teams had significantly higher than non - dominated teams ( mann - whitney u test with continuity correction , ) .the mann - whitney u test ( mwu ) is non - parametric , using ranks of ( in this case ) to mitigate the effects of skewed data , and does not assume normality .we conclude from this that external contributions do not fully explain the relationship between workload focus and team success .next , we moved beyond the effects of the heaviest contributor by performing the following analysis .for each team we computed its * effective * team size , directly accounting for the skew in workload ( see methods for full details ) .this effective size can be roughly thought of as the average number of unique contributors per unit time and need not be a whole number .for example , a team of size where both members contribute equally will have effective size , but if one member is responsible for 95% of the work the team would have .note that and are positively correlated ( ) .figure [ fig : teamsarefocused]c shows that ( i ) all teams are effectively much smaller than their total size would indicate , for all sizes , and ( ii ) top teams are significantly smaller in effective size ( and therefore more focused in their work distribution ) than average or bottom teams with the same .further , success is significantly , negatively correlated with , for all ( fig .[ fig : teamsarefocused]d ) .more focused teams have significantly more success than less focused teams of the same size , regardless of total team size . performed by the -th most active member , where is the total work of the team , for different size teams .larger teams perform more work overall , but the majority of work is always done by a small subset of the members ( note the logarithmic axis ) .inset : the fraction of work performed by the most active team member is always high , often larger than half the total . the dashed line indicates the lower bound of uniform work distribution , .a team is * dominated * when the most active member does more work than all other members combined .top teams are significantly more likely to be dominated than either average teams or bottom teams for all .( _ top team _ : above the 90th percentile in ; _ average team _ : greater than the 40th percentile of and less than or equal to the 60th percentile of ; _ bottom team _ : at or below the 10th percentile of . )the effective team size ( see methods ) , a measure that accounts for the skewed distribution of work in panel a , is significantly smaller than . moreover , top teams are significantly more focused , having smaller effective sizes , than average or bottom teams at all sizes .this includes the case , which did not show a significant difference in panel b. 
the dashed line denotes the upper bound .success is universally higher for teams with smaller , independent of , further supporting the importance of focused workloads .the solid lines indicates the average trend for all teams .these results are not due to outliers in ; see .[ fig : teamsarefocused ] ] further analyses revealed the importance of team composition and its role in team success .team members do not perform their work in a vacuum , they each bring experiences from their other work .often members of a team will work on other projects .we investigated these facets of a team s composition by exploring ( i ) how many projects the team s members have worked on , ( ii ) how diverse are the other projects ( do the team members have many or few other projects in common ) , and ( iii ) how many team members were `` leads '' of other projects .an estimate of experience , , the average number of other projects that team members have worked on ( see methods ) , was significantly related to success . however , the trend was not particularly strong ( see ) and , as we later show via combined modeling efforts , this relationship with success was entirely explainable by the teams other measurable quantities. it may be that the volume of experience does not contribute much to the success of a team , but this seems to contradict previous studies on the importance of experience and wisdom . to investigate , we turned to a different facet of a team s composition , the diversity of the team s background .successful teams may tend to be comprised of members who have frequently worked together on the same projects in the past , perhaps developing an experiential shorthand .conversely , successful teams may instead have multiple distinct viewpoints , solving challenges with a multi - disciplinary perspective . 
to estimate the distinctness of team member backgrounds , the diversity was measured as the fraction of projects that team members have worked on that are unique ( see methods ) .diversity is low when all members have worked on the same projects together ( ) , but grows closer to as their backgrounds become increasingly diverse .a high team diversity was significantly correlated with success , regardless of team size ( fig .[ fig : teamcomposition ] ) .even small teams seem to have benefited greatly from diversity : high- duos averaged nearly * eight times * the success of low- duos .the relationship between and was even stronger for larger teams ( fig .[ fig : teamcomposition ] inset ) , implying that larger teams can more effectively translate this diversity into success .even if the raw volume of experience a team has does not play a significant role in the team s success , the diversity of that experience was significantly correlated with team success .see also our combined modeling efforts .considerable attention has been paid recently to collective leadership , where decision - making structures emerge from the mass of the group instead of being imposed via a top - down hierarchy .the open collaborations studied here have the potential to display collective leadership due to their volunteer - driven , self - organized nature .the heaviest contributor to a team is most likely to occupy such a leadership role .further , since teams overlap , a secondary member of one team may be the `` lead , '' or heaviest contributor to another .this poses an interesting question : even though teams are heavily focused , are teams more successful when they contain many leads , or few ?a team with many leads will bring considerable experience , but most of its members may also be unable to dedicate their full attention to the team . 
to answer this ,we measured , the number of team members who are the lead of at least one project ( , see methods ) , and found that teams with many leads have significantly higher success than teams _ of the same size _ with fewer leads ( fig .[ fig : teamshaveleads ] ) .only one team member can be the primary contributor to the team , so a team can only have many leads if the other members have focused their work activity on other projects .team members that are focused on other projects can potentially only provide limited support , yet successful teams tend to arrange their members in exactly this fashion .of course , the strong focus in work activity ( fig .[ fig : teamsarefocused ] ) is likely interrelated with these observations .however , we will soon show that both remain significantly related to success in combined models .were removed as before .[ fig : teamshaveleads ] ] expanding on this observation , table [ tbl : teamsizeleadsimpact ] illustrates the extreme case of teams of size with a single lead ( ) compared with teams of the same size comprised entirely of leads ( ) .the latter always displayed significantly higher success than the former ( mwu test , see table ) , independent of team size , underscoring the correlations displayed in fig .[ fig : teamshaveleads ] .often the difference was massive : teams of size , for example , averaged more than 1200% higher success when than when ..teams composed entirely of leads ( ) are significantly more successful ( mwu test on ) than teams of the same size with one lead ( ) , regardless of team size .teams above the 99th percentile in were excluded to ensure the differences were not due to outliers .[ tbl : teamsizeleadsimpact ] [ cols= " < , < , < , < , < , < , < , < " , ] examining the regression coefficients showed that the number of leads was the variable most strongly correlated with team success .team age , effective team size , and team size play the strongest roles after in team success , and all three were also significant in the presence of the other variables .the coefficient on was negative while for it was positive , further underscoring our result that , while teams should be big , they effectively should be small .next , the diversity of the team , followed by the total work done on the project , were also significant measures related to success . finally , overall team experience was not significant in this model ( ) . we conclude that , while and are correlated by themselves , any effects of are explained by the other quantities . 
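an illustrative statsmodels sketch of a combined linear model of this kind is given below . all feature values are random stand - ins , and details such as log - transforming success or standardizing the predictors are our own assumptions rather than the exact specification used in the paper ; the point is only to show how the coefficients , p - values and the condition number quoted for the multicollinearity check are obtained .

....
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "success":    rng.lognormal(1.0, 1.0, n),
    "n_leads":    rng.integers(1, 5, n),
    "team_age":   rng.uniform(0, 450, n),      # days since project creation
    "eff_size":   rng.uniform(1, 4, n),
    "size":       rng.integers(1, 10, n),
    "diversity":  rng.uniform(0.3, 1.0, n),
    "work":       rng.lognormal(4.0, 1.0, n),
    "experience": rng.uniform(1, 20, n),
})

features = df.drop(columns="success")
X = sm.add_constant((features - features.mean()) / features.std())   # standardized predictors
model = sm.OLS(np.log(df["success"]), X).fit()

print(model.summary())            # coefficients and p-values for each feature
print(model.condition_number)     # values below ~10 indicate no multicollinearity
....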
what about secondary contributions , those activities made by individuals outside the primary team ?we already performed one test showing that dominated teams are more successful than non - dominated teams even when there are no secondary contributions .continuing along these lines , we augmented this linear model with two more dependent variables , and .regressing on this expanded model ( see for details ) did not change the significance of any coefficients at the level ; remained insignificant ( ) .both new variables were significant ( ) .note that there were no multicollinearity effects in either regression model ( condition numbers < 10 ) .we conclude that secondary contributions can not alone explain the observations relating team focus , experience , and lead number to team success .there has been considerable debate concerning the benefits of specialization compared to diversity in the workplace and other sectors .our discoveries here show that a high - success team forms a diverse support system for a specialist core , indicating that both specialization and diversity contribute to innovation and success .team members should be both specialists , acting as the lead contributor to a team , and generalists , offering ancillary support for teams led by another member .this has implications when organizations are designing teams and wish to maximize their success , at least as success was measured in these data .teams tend to do best on average when they maximize ( fig .[ fig : introtodata]b ) while minimizing ( fig .[ fig : teamsarefocused]d ) and maximizing ( fig .[ fig : teamcomposition ] ) and ( fig .[ fig : teamshaveleads ] ) . of course , some tasks are too large for a single person or small team to handle , necessitating the need for mega - teams of hundreds or even thousands of members .our results imply that such teams may be most effective when broken down into large numbers of small , overlapping groups , where all individuals belong to a few teams and are the lead of at least one .doing so will help maximize the experiential diversity of each sub - team , while ensuring each team has someone `` in charge '' .an important open question is what are the best ways to design such pervasively overlapping groups , a task that may be project- or domain - specific but which is worth further exploration .the negative relationship between effective team size and success ( as well as the significantly higher presence of dominated teams among high success teams ) further belies the myth of multitasking and supports the `` surgical team '' arguments of brooks .focused work activity , often by even a single person , is a hallmark of successful teams .this focus both limits the cognitive costs of task switching , and lowers communication and coordination barriers , since so much work is being accomplished by one or only a few individuals .we have provided statistical tests demonstrating that the relationship between focus and success can not be due to secondary / external team contributions alone .work focus could possibly be explained by * social loafing * where individual members of a group contribute less effort as part of the group than they would alone , yet loafing does not explain the correlation between , e.g. 
, leads and success ( fig .[ fig : teamshaveleads ] ) .likewise , our team composition results on group experience , experiential diversity , and the number of leads can not be easily explained as a confound with success or secondary contributions : they study specific features of the individuals who comprise a team , those features are not related to the successes of other projects an individual may work on , and they strictly control for total team size ( except for the number of leads , so for that measure we only compared teams with the same ) .the measures we used for external team contributions , and , may be considered measures of success themselves , and studying or even predicting their levels from team features may prove a fruitful avenue of future work .lastly , there are two remaining caveats worth mentioning. we do not specifically control for automatically mirrored repositories ( where a computer script copies updates to github ) . accurately detecting such projects at scaleis a challenge beyond the scope of this work .however , we expect most will either be filtered out by our existing selection criteria or else they will likely only have a single ( automated ) user that only does the copying .the second concern is work done outside of github or , more generally , mismatched assignments between usernames and their work .this is also challenging to fully address ( one issue is that the underlying git repository system does not authenticate users ) .we acknowledge this concern for our workload focus results , but even it can not explain the significant trends we observed on team composition such as the density of leads .noise due to improperly recorded or `` out - of - band '' work has in principle affected all quantitative studies of online software repositories .all data analyzed are made publicly available by the github archive project ( https://www.githubarchive.org ) .we have no competing interests .mk participated in data collection and data analysis , and helped draft the manuscript ; jb conceived of the study , designed the study , carried out data collection and analysis , and drafted the manuscript .all authors gave final approval for publication .we thank josh bongard , brian tivnan , paul hines , michael szell , and albert - lszl barabsi for useful discussions , and we gratefully acknowledge the computational resources provided by the vermont advanced computing core , supported by nasa ( nnx-08ao96 g ) .jb has been supported by the university of vermont and the vermont complex systems center .bird c , pattison d , dsouza r , filkov v , devanbu p. latent social structure in open source projects . in : proceedings of the 16th acm sigsoft international symposium on foundations of software engineering .sigsoft 08/fse-16 .acm ; 2008 .p. 2435 .de montjoye ya , stopczynski a , shmueli e , pentland a , lehmann s. the strength of the strongest ties in collaborative problem solving .2014 06;4 . available from : http://dx.doi.org/10.1038/srep05277 .scholtes i , mavrodiev p , schweitzer f. from aristotle to ringelmann : a large - scale analysis of team productivity and coordination in open source software projects . empirical software engineering .2015;p . 142 .alali a , kagdi h , maletic j , et al .what s a typical commit ?a characterization of open source software repositories . in : program comprehension , 2008 .icpc 2008 .the 16th ieee international conference on .ieee ; 2008 .p. 182191 .kalliamvakou e , gousios g , blincoe k , singer l , german dm , damian d. 
the promises and perils of mining github . in : proceedings of the 11th working conference on mining software repositories . acm ; 2014 . p. 92-101 .
complex problems often require coordinated group effort and can consume significant resources , yet our understanding of how teams form and succeed has been limited by a lack of large - scale , quantitative data . we analyze activity traces and success levels for ,000 self - organized , online team projects . while larger teams tend to be more successful , workload is highly focused across the team , with only a few members performing most work . we find that highly successful teams are significantly more focused than average teams of the same size , that their members have worked on more diverse sets of projects , and the members of highly successful teams are more likely to be core members or ` leads ' of other teams . the relations between team success and size , focus and especially team experience can not be explained by confounding factors such as team age , external contributions from non - team members , nor by group mechanisms such as social loafing . taken together , these features point to organizational principles that may maximize the success of collaborative endeavors .
metallic first mirrors ( fms ) will play an essential role in iter to ensure well controlled fusion reactions and proper plasma analysis .they will be the first elements of a majority of optical diagnostics , guiding the light originating from the plasma or from probing light sources through the neutron shielding towards detectors . due to their proximity to the plasma, fms will experience high particle fluxes ( charge - exchange neutrals and neutrons but also ultraviolet , x - ray and gamma radiations ) leading to erosion and/or deposition . especially net deposition of particles eroded from the main wall , i.e. mainly beryllium ( be ) and tungsten ( w ) , can severely degrade the fms reflectivity and therefore endanger the reliability of optical diagnostics .insitu plasma sputtering is currently considered as one of the most promising cleaning techniques to remove deposits from fms .porous films containing be were already reported to grow in jet and in pisces - b . in our specific setup ,aluminium ( al ) depositions were used to simulate this kind of films to avoid be due to its toxicity .al and be have similar chemical properties .molybdenum ( mo ) mirrors were used as metallic mirrors as they are currently considered as one of the best candidates for fms . in previous works ,the successful cleaning of mo mirrors ( 18 mm diameter ) was achieved using a radio - frequency ( rf ) plasma operating at a frequency of 13.56 mhz .it was possible to remove pure al / al , pure w / w and mixed al / al / w deposits using argon ( ar ) , neon and ar + deuterium ( d ) mixture with ion energies between 150 and 350 ev while maintaining good optical properties .nevertheless , the size of fms used in iter ( for example : 200 mm for edge thomson scattering diagnostic ) and the presence of iter s permanent magnetic field ( several tesla at the first wall ) may affect the plasma cleaning process .to investigate the effects of these conditions ( large mirror and magnetic field ) , two separate and distinct set of experiments were conducted : ( i ) plasma cleaning without magnetic field on a poly - crystalline mo mirror with a diameter of 98 mm ( mirror a ) deposited with al and al .( ii ) plasma cleaning on mirrors consisting of a stainless steel plate of 25 mm diameter and a 300 nm coating of nano - crystalline mo ( mirror b ) .they are subsequently coated with dense al .the cleaning is performed in the presence of a magnetic field ( 0.35 t ) where the angle between the field lines and the mirror s surface was varied from 0 to 90 .each experiment presented in this work is a two - stage operation : firstly the deposition of the film on the mirror and secondly the removal of this film with plasma generated by applying rf directly to the mirror at a frequency of 13.56 mhz ( rf capacitively coupled discharge where the mirror serves the electrode ) .this type of discharge leads to the formation of a negative dc component on the mirror called self - bias .this self - bias has an influence on the sputtering energy of the ions .for the first set of experiments done at the university of basel without magnetic field ( see section [ wnmf ] ) , al films have been deposited on mirror a with magnetron sputtering as described in .the process was done in an d and ar environment at a pressure of 3 pa ( ar partial pressure : 18% ) where ar was used to enhance the al deposition rate .this leads to a similar film reported by marot _( fig . 
2 in ) ,relevant to what is expected in iter .plasma cleaning was performed with an ar plasma ( 0.5 pa ) and for different ion energies ( 200 to 350 ev ) . for the second set of experiments , mirrors bwere coated with pure and dense al .to do so , the deposition was done in an ar and o environment of 1.5 pa ( partial pressure of ar : 50% ) with facing magnetron sputtering .plasma cleaning in a magnetic field environment was carried out at the sultan facility in epfl - crpp villigen ( see section [ wmf ] ) .the cleaning of the al film was done using ar plasma ( 1.5pa ) with 200 ev ion energy ( when not mentioned , 20w of rf power were needed to achieve 200 ev ion energy ) .the vacuum chamber was set outside of sultan facility where the magnetic field was 0.35 t ( fig .[ sultan ] ) .the angle , , between the magnetic field lines and the mirror s surface could be varied from 0 to 90 . for the surface composition analysis ,the mirror a was characterized by means of energy dispersive x - ray photospectroscopy ( edx ) with a sem - fei nova nano sem230 at 15kv , mirrors b by means of x - ray photoelectron spectroscopy ( xps ) .the setup and fitting procedure are described elsewhere . for both types of mirrors ,total and diffuse reflectivity were measured with a varian cary 5 spectrophotometer ( 250 - 2500 nm ) and surface morphology was investigated using a scanning electron microscope ( sem ) hitachi s-4800 field emission at 5 kv . representation ( top view ) of the sultan facility and the position of the mirror and plasma in the magnetic field lines .the angle between the field lines and the mirrors surface is denoted .,scaledwidth=40.0% ]the polished mirror a was deposited with a 260 nm thick al / al film .the total reflectivity of this mirror decreased drastically in the uv range compared to the polished one ( fig .[ rbm ] ) . to remove the deposited film , four cleaning cycles ( ar , 0.5 pa )were necessary .two with a self - bias of 200 v for 20 and 42 hours , and two with a self - bias of 350 v for 30 and 42 hours .the edx measurements carried out after each cleaning cycle ( fig . [ edx ] ) clearly showed a homogeneous cleaning over the whole surface , i.e. the fraction of al decreased with the same speed along the x and y axis , except at the edge of the mirror .no electrical shielding of the mirror i.e. no metallic surrounding at ground potential was used for the cleaning : more ions were collected at the edge thus increasing the cleaning rates . the last edx measurement showed a total removal of all al from the mirror surface .this can also be seen by the recovering of the total reflectivity ( fig .[ rbm ] ) .on the other hand , the diffuse reflectivity increased from a few percent up to 55% after 130 hours of cleaning .ar ions bombardment ( especially 72 hours at 350 ev ) on poly - crystalline mirror is known to lead to a high roughness causing an increase in the diffuse reflectivity as reported by voitsenya _the high diffuse reflectivity may exclude poly - crystalline mo mirrors for systems which require mirror cleaning .total and diffuse reflectivity measurements of mo polished mirror , after deposition of an al/ film and after plasma cleaning .the measurements were done on position 1,2 and 3 from fig .[ edx ] ( a)).,scaledwidth=40.0% ] ( a ) picture of the mo mirror after the third cleaning cycle . 1,2 and 3 are the positions for the reflectivity measurements . the x and y axis on the mirror are used for the edx measurements . 
( b ) and ( c ) edx measurements where the al atomic concentration are normalised by the measured mo and al values ,scaledwidth=45.0% ] the next step to demonstrate the feasibility of fms plasma cleaning is to apply it to the mock - up of iter edge thomson scattering mirror ( 200 mm , fig [ ts ] ) .this mock - up was designed with a shielding to avoid edge effects .the mirror itself is composed of stainless steel with 5 polished ncmo insets to ease the characterization .these experiments were just started .picture of the iter s edge thomson scattering mirror mock - up .5 mo mirrors can be inserted.,scaledwidth=40.0% ] for this experiment 6 mirrors b were coated with a dense al film .4 were coated with a 5 nm thick film ( mirror b1 - b4 ) and 2 were coated with a 50 nm thick film ( mirror b5 , b6 ) .only mirror b1 was characterized by xps ( table [ table ] ) after the coating and the cleaning was performed without magnetic field .the remaining 5 samples were cleaned in a magnetic field environment ( 0.35 t ) and characterized by xps .the cleaning of mirror b1 was done in an ar environment ( 1.5 pa ) and ion energy of 200 ev as reference .after 2h30 cleaning , 25% mo was present on the surface and only 6% al was left .this cleaning time served as reference for the other samples .as seen in the table [ table ] , the al film was completely removed for mirrors b2 and b3 for equal to 90 and 45 , respectively .as the samples were in air after cleaning , the mo surface was oxidized and adsorbed carbon ( c ) was measured .fitting of the mo3d xps spectrum revealed 2 oxide components : moo ( 229.6 ev ) and moo ( 232.4 ev ) .for mirror b4 , the cleaning was done with the field lines parallel to the mirror s surface ( 0 ) . to achieve a self - bias of 200 v ,the rf power was increased from 20 to 145w .the plasma was only stable for 50 minutes and xps measurements ( not shown here ) revealed that the surface was pure stainless steel , i.e. the deposited al film and the ncmo coating were removed . for the momentthis result is not understood .the cleaning of thicker films was carried out for 8h30 at 90 and 45 for mirror b5 and b6 , respectively .in comparison to previous cleaning , the surface was more oxidized but the al was fully removed .the specular reflectivity of these two mirrors was below the reference as seen in fig [ ssmo12refl ] .the calculated reflectivity of a mo surface with a 5 and 10 nm mo oxide film on top is also plotted .the similar reflectivity curves and the xps results confirmed the oxidation after cleaning .the diffuse reflectivity ( fig .[ ssmo12refl ] ) was below 3% indicating no roughening of the mirror after cleaning .this effect may be due to two reasons : the lower energy of ar ions and the nano - crystalline structure of mirror b rather than a poly - crystalline mirror ( like mirror a ) .specular and diffuse reflectivity of mirror b5 and b6 cleaned for 8h15 and equal to 90 and 45 .a reference reflectivity of a nano - crystalline mo film and reflectivity calculated for a mo oxide film ( 5 nm and 10 nm thick ) are plotted for comparison.,scaledwidth=45.0% ] to validate the cleaning procedure in a higher magnetic field , a new vacuum chamber was designed and will be installed in a superconducting magnet used to operate a gyrotron located at the crpp lausanne ( fig . [ lausanne ] ) .depending on the depth on where the chamber will be inserted in the superconducting magnet , the magnetic field will vary between 1 and 3.5 t. 
the rotatable electrode provides the possibility to perform experiments for various angles .this project is ongoing and first results will be presented soon .a 98 mm diameter mirror with deposits has been cleaned using rf ar plasma without magnetic field and a bias up to 350 v. being a poly - crystalline mo mirror , the diffuse reflectivity increased drastically under the ion bombardment , highlighting the need for single or nano - crystalline materials for iter s fms . in a magnetic field ( 0.35 t ) , al films were removed from mo mirrors with ar rf plasma and for several mirror s orientation to the field .a decrease of the optical performance was observed , mainly due to oxidation of the mirror s surface .the cleaning performance seems to be enhanced when the field lines are parallel ( within a few degrees ) to the mirror surface .experiments with a 200 mm mock - up mirror and also cleaning in a magnetic field of 3.5 t were started .an other important issue is to validate the cleaning process on mirrors deposited with be and w ( laboratory and tokamak deposits ) : first tests were started in the jet beryllium handling facility under an efda fusion technology 2013 task and are looking promising for jet - ilw mirrors .the authors would like to thank p. bruzzone and his team for the possibility to work with the sultan facility at epfl - crpp villigen .this work was supported by the iter organization under contract number 4300000557 , 4300000852 and 4300000953 .the views and opinions expressed herein do not necessarily reflect those of the iter organization .the swiss federal office of energy , the federal office for education and science , the swiss national foundation ( snf ) and the swiss nanoscience institute ( sni ) are acknowledged for their financial support . [ 5 ]
to avoid reflectivity losses in iter s optical diagnostic systems , plasma sputtering of metallic first mirrors is foreseen in order to remove deposits coming from the main wall ( mainly beryllium and tungsten ) . therefore plasma cleaning has to work on large mirrors ( up to a size of 200 mm ) and under the influence of strong magnetic fields ( several tesla ) . this work presents the results of plasma cleaning of aluminium and aluminium oxide ( used as beryllium proxy ) deposited on molybdenum mirrors . using radio frequency ( 13.56 mhz ) argon plasma , the removal of a 260 nm mixed aluminium / aluminium oxide film deposited by magnetron sputtering on a mirror ( 98 mm diameter ) was demonstrated . 50 nm of pure aluminium oxide were removed from test mirrors ( 25 mm diameter ) in a magnetic field of 0.35 t for various angles between the field lines and the mirrors surfaces . the cleaning efficiency was evaluated by performing reflectivity measurements , scanning electron microscopy and x - ray photoelectron spectroscopy . plasma cleaning , iter , erosion & deposition , reflectivity , surface analysis 52.77.bn , 52.80.pi , 78.20.-e , 82.80.pv
geogebra is an educational mathematics software tool , with millions of users . in 2005 , its founder markus hohenwarter broadened its software development into an open source project .geogebra s features ( including dynamic geometry , computer algebra , spreadsheets and function investigation ) primarily focus on facilitating student experiments in euclidean geometry , and not on formal reasoning . including automated deduction tools in geogebra s dynamic geometry system ( dgs ) could introduce a whole new range of learning and teaching scenarios . since automated theorem proving ( atp ) in geometry has reached a rather mature stage , in 2010 some atp experts agreed on starting a project of incorporating and testing a number of different automated provers for geometry in geogebra .this collaboration was initiated by toms recio . since the initial kickstart this project reached the following milestones : 1 .a workshop for * theoretical planning * took place in santiago de compostela , spain , february 2011 .a second workshop for * implementation planning * took place in alcal de henares , spain , january 2012 .a * prototype * implementation was presented in alcal de henares in june 2012 by demonstrating 44 test cases using 5 different theorem prover methods .first public release * in geogebra 5.0 in october 2014 with 60 test cases .full * documentation * and fixing several issues according to users feedback in july 2015 in . 6 . *extension * of the set of the translated dynamic geometry construction tools to cover 200 test cases .in this paper we report about the last milestone . in section [ overview ]we give a comprehensive overview about the first milestones .section [ symeqs ] summarizes our results by focusing on the general improvements in geogebra .section [ benchmarks ] shows some tables concerning our test results .section [ future ] sketches up our next planned steps for another milestone in the implementation .an interactive prover system designed mainly for secondary school students can differ from expert prover systems in some aspects . for example , _ gclc _ and _ opengeoprover _ process a program code written in its special language and print the output as a precise report about the computation details .by contrast , a dgs tool should collect all pieces of information about the relationships of the objects purely by analyzing the construction being created by point - and - click edits and possibly some other input parameters for the prover commands ; finally the output is typically a yes / no answer and eventually some extra prescribed conditions to avoid degeneracy cases .there is a plenty of literature on reports on successful applications of dgs by extending one with an atp subsystem . among them , here we mention _ geoproof _ and _ laducation _ which are open - sourced , and thus it is possible to continue their efforts by external researchers also .a publicly available variant of laducation was already able to import a geogebra construction and set up an equation system which was solved by an external computer algebra system ( cas ) .other systems including _ jgex _ and _ cinderella _ are not open - sourced , but built upon a similar approach : visualizations in the dgs must be supported by atp computations .our project harnessed geogebra s success in the classrooms and tried to address some problems of the existing dgs / atp prototypes including small distribution , being unmaintained or incomplete operation . 
in our solution in geogebra a user creates a dynamic geometry construction which contains free points and dependent points as usual .all dependent points are already determined by the free points , however , all free points can be dragged by the user as desired .when a free point is dragged , some dependent points will also be changed by following the definitions in the construction steps .in such a way geometric theorems can be visualized by experiment .this technique is well known in the world of dgs .going one step forward , an atp subsystem can give a more sound answer whether the visually obvious facts ( for example , if three dependent points in a given construction are always collinear ) generally hold .geogebra s command line interface with its * prove * and * provedetails * commands and the graphical _ relation tool _ introduce a higher level interface to investigate the problem setting by using an atp subsystem . proving euclidean elementary geometry theorems was introduced in geogebra with its version 5 in september 2014 .a report shows a benchmark about 60 theorems which can be directly checked with the * prove * and * provedetails * commands in geogebra .more details are shown in about how the prover subsystem is embedded to geogebra s user interface intuitively by using and extending the relation tool .there are several approaches to compute a proof internally by using geogebra s _ portfolio prover _ , including * wu s method by using opengeoprover externally , and also * the area method ( via opengeoprover ) , moreover * recio s exact check method and * the grbner basis method . in our present work we focused on theinternally implemented _ grbner basis method _ which translates the geometric objects to algebraic equations directly and manipulates on the algebraic equation system by eliminating the dependent variables .our work could be however used for wu s method also , since we just defined a set of equations to translate geometric construction tools into an algebraic approach .we used complex algebraic geometry in our computations which is a standard way to set up a euclidean geometry question ( see ) .we report about our contributions to geogebra in two major areas : 1 .implementation of symbolic equations for various geometric tools ( section [ symeqs ] ) .2 . creating a number of tests to extend the benchmarks ( section [ benchmarks ] ) .geogebra s geometry tools have been classified by as `` easy to use '' , `` middle '' and `` difficult to use '' .preiner defines two criteria for a tool to be easy ( p. 121 ) : 1 .the tool does not depend on already existing objects , or just requires existing points which can also be created ` on the fly ' by clicking on the drawing pad .the order of actions is irrelevant and no additional keyboard input is required .the tool directly affects only one type of existing object or all existing objects at the same time and requires just one action .again , the order of actions is irrelevant and no additional keyboard input is required .the basic concept in our work was to implement theorem proving features for the easier tools in geogebra .also it was important that the usually discussed classroom theorems can be quickly constructed by using the easier tools .the classroom theorems usually require points , segments , rays , lines and circles , and angles .for some more advanced topics tangents , parabolas , ellipses and hyperbolas may be needed . implementing _ angles _ and _ conics _ may have theoretical difficulties in our approach . 
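before turning to these difficulties , a minimal sympy sketch may help to illustrate the grbner basis method just described : hypotheses and the denied ( rabinowitsch - style ) thesis are turned into polynomials , and the statement is confirmed when the basis collapses to 1 . the example below ( points on the perpendicular bisector of a segment ab are equidistant from a and b ) is our own toy case and not geogebra s internal code ; real statements additionally require non - degeneracy conditions , which are omitted here .

....
from sympy import symbols, groebner

# free point B = (b1, b2); A is fixed at the origin, as in geogebra's substitution step
b1, b2, m1, m2, p1, p2, t = symbols('b1 b2 m1 m2 p1 p2 t')

hypotheses = [
    2*m1 - b1, 2*m2 - b2,                # M is the midpoint of A and B
    (p1 - m1)*b1 + (p2 - m2)*b2,         # P lies on the line through M perpendicular to AB
]
thesis = (p1**2 + p2**2) - ((p1 - b1)**2 + (p2 - b2)**2)   # |PA|^2 - |PB|^2 = 0

# "reductio ad absurdum (denied statement)": add t*thesis - 1 and test whether 1 is in the ideal
G = groebner(hypotheses + [t*thesis - 1], b1, b2, m1, m2, p1, p2, t, order='grevlex')
print(G.exprs)   # [1] -> the denial is contradictory, so the statement holds
....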
for _ angles _, we refer to the fact that it is not possible to define only the interior bisector of an angle : we always need to work together with internal and external angles at the same time ( cf .this is a consequence of handling angles : there is no way to check equality unless one computes the tangent of them , that is , instead of checking one verifies and these formulas are equivalent only if we set up some restrictions , say . in this sense we can not distinguish and . for _ conics_ , ellipses and hyperbolas must also be handled as non - distinguishable objects , because using the synthetic approach we need to define them with their foci , and the defining relations are the same .more precisely , given foci and and conic point , another point is an element of the conic if and only if in the case of an ellipse and ( that is , ) in the case of a hyperbola . since the lengths in these equations are non - negative quantities , we either need to add constraints and ( which are not possible in complex algebraic geometry due to lack of inequalites ) , or we need to use the squared quantities , , and and express these equations exclusively by them . in this second casewe need to eliminate the non - squared quantities from the equation . with the help of the following computer algebra commandwe learn that for both the ellipse and the hyperbola we get the same product of 8th degree ( here we used giac for computations ) : .... > > factor(eliminate([ac+cb = ap+pb , ac^2=ac^2,cb^2=cb^2,ap^2=ap^2,pb^2=pb^2],[ac , cb , ap , pb ] ) ) .... returns .... [ ( ac - cb - ap - pb)*(ac - cb - ap+pb)*(ac - cb+ap - pb)*(ac - cb+ap+pb)*(ac+cb - ap - pb)*(ac+cb - ap+pb)*(ac+cb+ap - pb)*(ac+cb+ap+pb ) ] .... which has the same result as for the input .... > >factor(eliminate([(ac - cb)^2=(ap - pb)^2,ac^2=ac^2,cb^2=cb^2,ap^2=ap^2,pb^2=pb^2],[ac , cb , ap , pb ] ) ) .... interpreting the result , it is only possible to define the set in the complex algebraic geometry sense which consists of 8 theoretical curves : 1 . , the ellipse , 2 . , which according to the triangle inequality is possible only in a degenerate case when , and ( and also ) are collinear , 3 . , similar to the previous collinear case , 4 . which is possible only in a degenerate case when , 5 . , similar to the former collinear cases , 6 . , one branch of the hyperbola , 7 . , the other branch of the hyperbola , 8 . , again similar to the former collinear cases . that is , we indeed obtained that an ellipse and a hyperbola can not be distinguished in this model ( but all other non - degenerate curves can be distinguished from them ) .this issue will give some limitations to investigate special features of conics , but still enable investigating some common features of them .for example , the following generalization of pascal s hexagon theorem for conics holds ( see fig .[ pascal ] ) : let be the union of an ellipse and a hyperbola , both defined with foci , and circumpoint .let denote line and let the perpendicular bisector of be .let be the reflection of to the line and to .also let us take an arbitrary point on and by reflection to and , respectively , obtain points and .now the intersections of and , and , moreover and will be collinear . a consequence of this example that some _ formulas _ can also be difficult to distinguish , and may require further investigation by using elimination and factorization with the help of a cas . 
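the elimination above can also be reproduced with sympy ; in the sketch below the variable names are assumptions ( letter case was lost in this extraction ) : the lowercase symbols are the four distances to be eliminated and the uppercase ones their squares . it computes the elimination ideal for both the ellipse - type and the hyperbola - type defining relation and checks that they coincide .

....
from sympy import symbols, groebner, factor

ac, cb, ap, pb = symbols('ac cb ap pb')           # the distances |AC|, |CB|, |AP|, |PB|
AC2, CB2, AP2, PB2 = symbols('AC2 CB2 AP2 PB2')   # their squares, kept as parameters

def eliminate(defining_relation):
    eqs = [defining_relation,
           ac**2 - AC2, cb**2 - CB2, ap**2 - AP2, pb**2 - PB2]
    # lex order with the distances first: basis elements free of them generate the elimination ideal
    G = groebner(eqs, ac, cb, ap, pb, AC2, CB2, AP2, PB2, order='lex')
    return [factor(g) for g in G.exprs if not g.has(ac, cb, ap, pb)]

ellipse = eliminate(ac + cb - (ap + pb))              # |AC| + |CB| = |AP| + |PB|
hyperbola = eliminate((ac - cb)**2 - (ap - pb)**2)    # (|AC| - |CB|)^2 = (|AP| - |PB|)^2
print(ellipse)
print(hyperbola)
print(ellipse == hyperbola)   # True: the two relations cannot be distinguished algebraically
....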
in general , when a construction is given , it is important to identify geometrical _ hypotheses _ which are non - distinguishable from other geometrical hypotheses because they are translated with the same algebraic formula .when the prover _ disproves _ the respective statement in the algebraic translation , it should not be interpreted that the geometry statement was _this is the case when attempting to prove that the internal bisectors of a triangle are concurrent : the algebraic translation actually _ disproves _ that the union of the internal and external angle bisectors are concurrent .also it is important to identify geometrical _ theses _ which are non - distinguishable from other geometrical theses because they are translated with the same algebraic formula .when the prover _ proves _ the respective statement in the algebraic translation , it should not be interpreted that the geometry statement was _true_. apart from considering these issues , we managed to handle many typical classroom situations , and we report that most `` easy '' tools are implemented , and also some other tools from the `` middle '' and `` difficult to use '' toolset .the following basic geometrical shapes are now implemented : segment , line , ray and vector , each defined by two points , circle defined with center and through point or through three points , angle , parabola with focus point and directrix , ellipse and hyperbola defined with two focus points .this table summarizes them , and also those tools which can operate on the basic geometrical shapes ( the latter ones printed in italicized description , underlined objects are new enhancements compared to ) : > = .5x > = 1.2x > = .5x>=1.8x tool & description & difficulty & implementation remarks + & point & easy & + & line & easy & + & segment & easy & + & circle through 3 points & easy & + & _ midpoint or center _ & easy & points and segments + & _ perpendicular bisector _ & easy & at line and segment + & & easy & + & & middle & + & & middle & + & & middle & + & circle with center through point & middle & + & & middle & + & _ intersect _ & middle & line with line ( can not decide properly for segments ) , with circle , , , ; circle with circle ( for other conics we can not decide properly ) + & _ perpendicular line _ & middle & at line through point + & _ parallel line _ & middle & with line through point + & & middle & + & & middle & + & & middle & + & & middle & + & & difficult & + & & difficult & + & & & + & & & + & & & + & & & + the remaining , yet unimplemented `` easy '' tools in geogebra are : _ conic through 5 points _ and _ slope_. the former one is actually not widely used in the classroom , and the latter is a non - synthetic tool , that is , it is related to _ analytic geometry_. some other missing , but planned features are listed in section [ future ] .let be a circle with center and circumpoint .let be a line through and .now not considering some degenerate cases reflecting line about the image is a circle , that is , for arbitrary point its reflection about always lies on the same circle ( which is the circumcircle of points , and , where and are the mirror images of and about , respectively ) . in other words ,_ an inversion translates lines to circles in general_. 
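before describing how this statement is handled by the prover, a quick numerical plausibility check may be useful. the python fragment below is an illustration only and is independent of geogebra; it relies on the usual definition of circle inversion, and the random data, sample parameters and tolerance in it are arbitrary choices of the sketch:

....
import numpy as np

rng = np.random.default_rng(0)

def invert(p, center, r):
    # circle inversion: p -> center + r^2 (p - center)/|p - center|^2
    d = p - center
    return center + r**2 * d / np.dot(d, d)

a = rng.normal(size=2)              # center of the circle
b = rng.normal(size=2)              # circumpoint, so the radius is |ab|
c = rng.normal(size=2)              # second point defining the line through b and c
r = np.linalg.norm(b - a)

def circle_through(p1, p2, p3):
    # center o of the circle through three points, via a linear system
    m = 2*np.array([p2 - p1, p3 - p1])
    rhs = np.array([np.dot(p2, p2) - np.dot(p1, p1),
                    np.dot(p3, p3) - np.dot(p1, p1)])
    o = np.linalg.solve(m, rhs)
    return o, np.linalg.norm(p1 - o)

# fit the image circle from three inverted points of the line ...
samples = [b + s*(c - b) for s in (0.3, 1.7, -2.2)]
o, rho = circle_through(*[invert(p, a, r) for p in samples])

# ... and verify that further inverted points, and the center a itself,
# lie on that same circle
for s in rng.uniform(-5, 5, size=10):
    d = b + s*(c - b)
    assert abs(np.linalg.norm(invert(d, a, r) - o) - rho) < 1e-9
print(abs(np.linalg.norm(a - o) - rho) < 1e-9)   # True
....

the symbolic counterpart of this check is what the relation tool carries out next.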
to use geogebra s relation tool ( see fig .[ gt - inv ] ) one needs to set up the construction as described in the algebra view on the left ( either by selecting tools from the top , or by using commands in the input bar on the bottom ) .finally one has to select the relation tool from the top and choose point and line ( or enter the command * relation[d , d ] * in the input bar ) .geogebra now numerically checks if , the answer is yes , and the user can request a symbolical check by clicking on `` more '' .finally geogebra concludes that under some _ non - degeneracy _ conditions the statement is generally true . from the computational point of view , geogebra here uses the grbner basis method .thus it sets up the follwoing 6 equations in 13 variables , but point will be fixed to the origin ( so there are only 11 variables remaining ) .the following log information is printed only in debug mode in geogebra , including the timestamp in the first column : .... 19:48:26.550 //free point a(v1,v2 ) 19:48:26.550 // free point b(v3,v4 ) 19:48:26.550 c = circle[a , b ] / * circle through b with center a * / 19:48:26.551 // free point c(v5,v6 ) 19:48:26.551 a = line[b , c ] / * line through b , c * / 19:48:26.551 d = point[a ] / * point on a * / 19:48:26.555 // constrained point d(v7,v8 ) 19:48:26.555 hypotheses : 19:48:26.555 1 . -1*v7*v6+v8*v5+v7*v4 + -1*v5*v4 + -1*v8*v3+v6*v3 19:48:26.556 c ' = mirror[c ,c ] / * c mirrored at c * / 19:48:26.560 // constrained point c'(v9,v10 ) 19:48:26.561 2 .-1*v9*v6 ^ 2 + -1*v9*v5 ^ 2+v5*v4 ^ 2+v5*v3 ^ 2 + 2*v9*v6*v2 + -2*v5*v4*v2 + -1*v9*v2 ^ 2+v5*v2 ^ 2+v6 ^ 2*v1 + 2*v9*v5*v1+v5 ^ 2*v1 + -1*v4 ^ 2*v1 + -2*v5*v3*v1 + -1*v3 ^ 2*v1 + -2*v6*v2*v1 + 2*v4*v2*v1 + -1*v9*v1 ^ 2 + -1*v5*v1 ^ 2 + 2*v3*v1 ^ 2 19:48:26.562 3 .-1*v10*v6 ^ 2 + -1*v10*v5 ^ 2+v6*v4 ^ 2+v6*v3 ^ 2 + 2*v10*v6*v2+v6 ^ 2*v2+v5 ^ 2*v2 + -2*v6*v4*v2 + -1*v4 ^ 2*v2 + -1*v3 ^ 2*v2 + -1*v10*v2 ^ 2 + -1*v6*v2 ^ 2 + 2*v4*v2 ^ 2 + 2*v10*v5*v1 + -2*v6*v3*v1 + -2*v5*v2*v1 + 2*v3*v2*v1 + -1*v10*v1 ^ 2+v6*v1 ^ 219:48:26.562 d ' = mirror[d , c ] / * d mirrored at c * / 19:48:26.566 // constrained point d'(v11,v12 ) 19:48:26.567 4 .-1*v11*v8 ^ 2 + -1*v11*v7 ^ 2+v7*v4 ^ 2+v7*v3 ^ 2 + 2*v11*v8*v2 + -2*v7*v4*v2 + -1*v11*v2 ^2+v7*v2 ^ 2+v8 ^ 2*v1 + 2*v11*v7*v1+v7 ^ 2*v1 + -1*v4 ^ 2*v1 + -2*v7*v3*v1 + -1*v3 ^ 2*v1 + -2*v8*v2*v1 + 2*v4*v2*v1 + -1*v11*v1 ^ 2 + -1*v7*v1 ^ 2 + 2*v3*v1 ^ 2 19:48:26.568 5 .-1*v12*v8 ^ 2 + -1*v12*v7 ^ 2+v8*v4 ^ 2+v8*v3 ^ 2 + 2*v12*v8*v2+v8 ^ 2*v2+v7 ^ 2*v2 + -2*v8*v4*v2 + -1*v4 ^ 2*v2 + -1*v3 ^ 2*v2 + -1*v12*v2 ^ 2 + -1*v8*v2 ^ 2 + 2*v4*v2 ^ 2 + 2*v12*v7*v1 + -2*v8*v3*v1 + -2*v7*v2*v1 + 2*v3*v2*v1 + -1*v12*v1 ^2+v8*v1 ^ 2 19:48:26.568 hypotheses have been processed .19:48:26.574 substitutions : { v1=0 , v2=0 } 19:48:26.574 thesis reductio ad absurdum ( denied statement ) ...19:48:26.586 6 .-1 + -1*v13*v11*v10 ^ 2*v4+v13*v12 ^ 2*v9*v4+v13*v11 ^ 2*v9*v4 + -1*v13*v11*v9 ^ 2*v4+v13*v11*v10*v4 ^ 2 + -1*v13*v12*v9*v4 ^ 2 + -1*v13*v12 ^ 2*v10*v3 + -1*v13*v11 ^ 2*v10*v3+v13*v12*v10 ^ 2*v3+v13*v12*v9 ^ 2*v3+v13*v11*v10*v3 ^ 2 + -1*v13*v12*v9*v3 ^ 2+v13*v11*v10 ^ 2*v2 + -1*v13*v12 ^ 2*v9*v2 + -1*v13*v11 ^ 2*v9*v2+v13*v11*v9 ^ 2*v2 + -1*v13*v11*v4 ^ 2*v2+v13*v9*v4 ^ 2*v2+v13*v12 ^ 2*v3*v2+v13*v11 ^ 2*v3*v2 + -1*v13*v10 ^ 2*v3*v2 + -1*v13*v9 ^ 2*v3*v2 + -1*v13*v11*v3 ^ 2*v2+v13*v9*v3 ^ 2*v2 + -1*v13*v11*v10*v2 ^ 2+v13*v12*v9*v2 ^ 2+v13*v11*v4*v2 ^ 2 + -1*v13*v9*v4*v2 ^ 2 + -1*v13*v12*v3*v2 ^ 2+v13*v10*v3*v2 ^ 2+v13*v12 ^ 2*v10*v1+v13*v11 ^ 2*v10*v1 + -1*v13*v12*v10 ^ 2*v1 + -1*v13*v12*v9 ^ 2*v1 + 
-1*v13*v12 ^ 2*v4*v1 + -1*v13*v11 ^ 2*v4*v1+v13*v10 ^ 2*v4*v1+v13*v9 ^ 2*v4*v1+v13*v12*v4 ^ 2*v1 + -1*v13*v10*v4 ^ 2*v1+v13*v12*v3^ 2*v1 + -1*v13*v10*v3 ^ 2*v1 + -1*v13*v11*v10*v1 ^ 2+v13*v12*v9*v1 ^ 2+v13*v11*v4*v1 ^ 2 + -1*v13*v9*v4*v1 ^ 2 + -1*v13*v12*v3*v1 ^ 2+v13*v10*v3*v1 ^ 2 19:48:26.592 eliminating system in 11 variables ( 6 dependent ) .... then the underlying cas ( here giac ) eliminates variables v8 , v9 , v10 , v11 , v12 and v13 to describe non - degeneracy conditions between the coordinates of the free points .the obtained equation system in factorized form is produced in the following output ( which is compatible with singular s arrays , cf . ) : .... [ 1 ] : [ 1 ] : _ [ 1]=1 _ [ 2]=-v6 ^ 2-v5 ^ 2 _ [ 3]=v7 [ 2 ] : 1,1,1 [ 2 ] : [ 1 ] : _ [ 1]=1 _ [ 2]=v4 ^ 2+v3 ^ 2 _ [ 3]=v5 _ [ 4]=v7 [ 2 ] : 1,1,1,1 ... [ 12 ] : [ 1 ] : _ [ 1]=1 _ [ 2]=v4*v6*v7 ^ 3-v4*v6*v7 ^ 2*v3-v4*v6*v7*v5 ^ 2+v4*v6*v5 ^ 2*v3-v6 ^2*v5*v3 ^ 2+v7 ^ 3*v5*v3-v7 ^ 2*v5*v3 ^ 2-v7*v5 ^ 3*v3 [ 2 ] : 1,1 [ 13 ] : [ 1 ] : _ [ 1]=1 _ [ 2]=v4 ^ 2+v3 ^ 2 _ [ 3]=-v5*v4+v6*v3 _ [ 4]=-1 [ 2 ] : 2,2,1,1 .... this is interpreted by geogebra as 13 possible sets of degeneracy conditions . here because of its geometrical meaning , simplicity and being fully synthetic the 13th set will be selected , which means : `` if ` ( v4 ^ 2+v3 ^ 2)^2*(-v5*v4+v6*v3 ) ` differs from , then the thesis will be true on all possible values of the coordinates of the free points '' . since , and , this clearly means that the two non - degeneracy conditions being shown are `` differs from '' ( that is , circle is non - degenerate ) and `` , and are not collinear '' ( that is , such a line must be chosen for which is not going through the center of ) .finally , geogebra concludes that .... 19:48:26.714 statement is generally true 19:48:26.714 benchmarking : 487 ms 19:48:26.717 output for provedetails : null = { true , { " arecollinear[a , b , c ] " , " areequal[a , b ] " } } .... this computation is done faster than half of a second . technically speaking , geogebra is a java application . from the developers point of view , the java public interface symbolicparametersbotanaalgo has to be implemented in geogebra s algo * classes by creating suitable algebraic equations ( and corresponding new variables ) to describe the symbolic background of a newly used tool . to check the validity of a thesis ,the public interface symbolicparametersbotanaalgoare must be implemented .currently the following checks are implemented : collinearity , concurrency , concyclicity , congruency , equality , parallelism , perpendicularity , incidence , and formula checking ( to prove equations ) . in our improvementsthe benchmark suite was extended by additional 140 theorems .57 of these extra tests were chosen from these tests were computed in chou s book by using wu s characteristic method . herewe summarize our results by sharing a list of the recent benchmarking outputs .geogebra s prover benchmarking system is available as a command line tool in its source folder ` test / scripts / benchmark / prover/ ` .geogebra s desktop version runs as a java native application on the mostly used operating system platforms including windows , mac os x and linux . 
due to the internally used native giac cas each platform requires its own compiled version of the embedded computer algebra system .the following table is the output of the `` jar - paper '' scenario , launched by the command line ` xvfb - run ./runtests -s jar - paper -r ` in this folder .this scenario tests the * prove * command exclusively .see also .* the first column abbreviates the name of the test cases . *column e1 ( `` engine 1 '' ) refers to recio s exact check method programmed by simon weitzhofer .* column e2 ( `` engine 2 '' ) refers to the grbner basis method via singularws ( also known as botana s method ) programmed by the authors of this paper .( see for more on singularws . ) * column e2/giac refers to grbner basis method via the giac computer algebra tool ( instead of singularws ) programmed by bernard parisse and the authors of this paper .* column e3a ( `` engine 3a '' ) refers to opengeoprover s wu s method implementation programmed by ivan petrovi and predrag janii . * column e3b ( `` engine 3b '' ) refers to opengeoprover s area method programmed by damien desfontaines . *the auto approach refers to the automatic selection of methods which is already implemented in geogebra and it usually starts with `` engine 1 '' and then it continues with `` engine 2 '' ( either via singularws or giac : if singularws is available , then in singularws , otherwise in giac ) . if the grbner basis method is not conclusive , then `` engine 3a '' is tried .if it is not conclusive either , then opengeoprover s area method ( engine 3b ) is used .see for more details about the used methods .explanation of the used colors : * green means that the test returns a correct yes / no answer .intensity of green means speed ( the lighter the slower ) .numbers are in milliseconds .* pink means that geogebra returns the wrong answer . *yellow means the output is not conclusive , thus using this method geogebra shows `` undefined '' , i.e. there is no error here . *the r. ( `` result '' ) column provides some extra information about the result , such as f ( `` false '' ) when the statement was false on purpose .the s. ( `` speed '' ) column shows the timing .highlighted entries are the best results , italicized entries are the slowest ( but working ) results in a row .the test cases are also available for download in geogebra s .ggb format from the geogebra online source code directly . for testing we used a pc with 16 gb ram , 8 intel(r )core(tm ) i7 cpu 860 @ 2.80ghz , and linux mint 17.2 .we highlight that : * our theorem corpus has a significant number of test cases .. . * the best performing theorem prover when using our corpus is the complex algebraic geometry prover via singular . herethe ( * ? ? ?* , 4 ) algorithm was used .timing is remarkably under one second in most test cases .* the table can be misleading when investigating other columns .actually , there is no implementation for intersections with conics in geogebra for recio s method .also e2/giac can use a different algorithm with better ( but slightly slower ) results .some geogebra commands are not yet implemented in the communication layer between geogebra and opengeoprover , that is , columns e3a and e3b show only a limited amount of positive test cases . 
* for the end user the significant case is the last column , since singular is disabled by default to ensure the same behavior on offline and online runs .the web version runs in a web browser .all major browsers including google chrome , mozilla firefox and internet explorer are supported .the following table was generated by using the command line ` xvfb - run ./runtest -p " auto web " -r ` in this folder .it compares the outputs of the * prove * command in the desktop version ( `` auto '' ) and the web version ( `` web '' ) .we highlight that : * the web version does not return any incorrect output in any cases . *it is properly working in 125/200 cases ( 62.5% ) which is 86.2% of the performance ratio of the desktop version . *the web version is definitely slower than the desktop version by a ratio between 2 and 6 . * despite its limited availability and speed , the web version is already applicable in many classroom situations .the users only need a web browser which should be accessed not only on desktop computers and laptops , but also on tablets and mobile phones . to sum up , we list some important theorems which are usually discussed in secondary schools .now they can be proven with geogebra s help , that is , at least a yes / no answer is provided for many theorems , including : * . . .thales theorem .* concurrency of medians , bisectors , altitudes .euler line .the midline theorem , varignon s theorem . the nine points circle .simson s theorem .* basic properties of translations and . * * , . desargues s theorem , pappus theorem . *the underlined theorems can be proven with the internal complex algebraic geometry prover in geogebra by using the enhancements implemented in the last milestone in our work .finally , we summarize the currently planned new features in the forthcoming versions of geogebra .c x x geogebra tool & description & to implement + & _ area _ & of conics + & _ translate by vector _ & line , segment , ray , circle , parabola , ellipse , hyperbola , polygons + & _ reflect about line _ & ellipse , hyperbola + & _ reflect about circle _ & line + & _ rotate around point _ & general angles + there is still room for further enhancements : * improve formula handling by eliminating non - squared quantities automatically and identifying formulas for a correct decision about the truth of the statement . *currently it is not possible to mirror a line about a circle directly : in this case the implementation should handle that the object type is changing from line to circle in general . *the * showproof * command might be implemented in cases when a readable proof can be produced automatically . *allow proofs for 3d euclidean geometry ( cf . ) . *improve grbner bases computations in giac to implement transcendent coefficients ( see ( * ? ? ?* , 4 ) ) .this would speed up computations in a number of cases which are currently infeasible : an indirect reduction of variables would be achieved in this way .* geogebra s * locusequation * command is capable of computing algebraic loci . 
it would be possible to unify the code base for the locus and the prover subsystems, and the unified system could then be maintained and improved more easily. also, implementing conic sections for recio's exact check method would speed up geogebra's proofs significantly. the theorem proving subsystem in geogebra is a joint work with contributions from several researchers, programmers and teachers. we are especially thankful to bernard parisse for improving the giac cas to be competitive with singular and some commercial systems, which makes it possible for geogebra to include a robust embedded theorem prover. we are grateful to tomás recio, predrag janičić, julien narboux and francisco botana for their useful hints to improve the text of this paper, and to markus hohenwarter and judit robu for supporting our work.
we report on significant enhancements of the complex algebraic geometry theorem proving subsystem in geogebra for automated proofs in euclidean geometry, namely the extension of numerous geogebra tools with proof capabilities. as a result, a number of elementary theorems can be proven by using geogebra's intuitive user interface on various computer architectures, including native java and web-based systems with javascript. we also provide a test suite of 200 test cases for benchmarking our results.
long - term memory is thought to be stored in biochemical modulation of the inter - neuron synaptic connections . as each neuron forms about synapses such a mechanism of memory seems to be very effective , allowing potentially the storage of bits per neuron .however , although the synaptic modifications persist for a long time , it may take as long as minutes to form them .since the external environment operates at much shorter time scales , the synaptic plasticity is virtually useless for the short - term needs of survival .such needs are satisfied by the working memory ( wm ) , which is stored in the state of neuronal activity , rather than in modification of synaptic conductances ( miyashita , 1988 ; miyashita and chang , 1988 ; sakai and miyashita , 1991 ; funahashi _ et al . , _1989 ; goldman - rakic _ et al . , _ 1990 ; fuster , 1995 ) .wm is believed to be formed by recurrent neural circuits ( wilson and cowan , 1972 ; amit and tsodyks , 1991a , b ; goldman - rakic , 1995 ) .a recurrent neural network can have two stable states ( attractors ) , characterized by low and high firing frequencies .external inputs produce transitions between these states . after transitionthe network maintain high or low firing frequency during the delay period .such a bipositional `` switch '' can therefore store one bit of information . during the last few yearsit has become clear that nmda receptor is critically involved in the mechanism of wm .it is evident from the impairments of the capacity to perform the delayed response task produced by the receptor blockade ( krystal _ et al . , _ 1994 ; adler _ et al . , _ 1998 ,pontecorvo _ et al . , _ 1991 ; cole _ et al . , _ 1993 ; aura _ et al . , _ 1999 ) .similarly the injection of the nmda receptor antagonists brings about weakening of the delayed activity demonstrated in the electrophysiological studies ( javitt et al . , 1996 ,dudkin _ et al . , _at the same time intracortical perfusion with glutamate receptor agonists both improves the performance in the delayed response task and increases the duration of wm information storage ( dudkin _ et al . , _1997a and b ) .such evidence is especially interesting since administration of nmda receptor antagonists ( pcp or ketamine ) reproduces many of the symptoms of schizophrenia including the deficits in wm . out of many unusual properties of nmdachannel two may seem to be essential for the memory storage purposes : the non - linearity of its current - voltage characteristics and high affinity to the neurotransmitter , resulting in long lasting excitatory post synaptic current ( epsc ) .the former have been recently implicated in being crucial for wm ( lisman _ et al . 
, _ 1998 ) .it was shown that by carefully balancing nmda , ampa , and gaba synaptic currents one can produce an n - shaped synaptic current - voltage characteristic , which complemented by the recurrent neural circuitry results in the bistability .this approach implies therefore that a single neuron can be bistable and maintain a stable membrane state corresponding to high or low firing frequencies ( camperi and wang , 1998 ) .such a bistability therefore is extremely fragile and can be easily destroyed by disturbing the balance between nmda , gaba or ampa conductances .this may occur in the experiments with the nmda antagonist .the single neuron bistability should be contrasted to a more conventional paradigm , in which the bistability originates solely from the network feedback .it is based on the non - linear relation between the synaptic currents entering the neuron and its average firing rate ( amit and tsodyks , 1991a , b ; amit and brunel , 1997 ) .such a mechanism is not sensitive to the membrane voltage and therefore can be realized using e.g. ampa receptor .thus it may seem that this mechanism is ruled out by the nmda antagonist experiments .in this paper we study how the properties of the synaptic receptor can influence the behavior of the conventional bistable network .we therefore disregard the phenomena associated with the non - linearity of synaptic current - voltage relation ( lisman _ et al . , _ 1998 ) . in effectwe study the deviations from the mean - field picture suggested earlier ( see e. g. amit and tsodyks , 1991a , b ) .the main result of this paper concerns the question of stability of the high frequency attractor .this state is locally stable in the mean - field picture , i. e. small synaptic and other noises can not kick the system out of the basin of attraction surrounding the state . howeverthis state is not guaranteed to be globally stable . after a certain period of time a large fluctuation of the synaptic currentsmakes the system reach the edge of the attraction basin and the persistent memory state decay into the low frequency state .we therefore find how long can the working memory be maintained , subject to the influence of noises . before the network reaches the edge of attraction basin it performs an excursion into the region of parameters which is rarely visited in usual circumstances .this justifies the use of term `` instanton '' to name such an excursion , emphasizing the analogy with the particle traveling in the classically forbidden region in quantum mechanics. such analogy have been used before in application to the perception of ambiguous signals by bialek and deweese , 1995 .the present study however is based on microscopic picture , deriving the decay times from the synaptic properties . in the most trivial scenariothe network can shut itself down by not releasing neurotransmitter in a certain fraction of synapses of _ all _ neurons .thus all the neurons cross the border of attraction basin simultaneously .this mechanism of memory state decay is shown to be ineffective .instead the network chooses to cross the border by _ small groups _ of neurons , taking advantage of large combinatorial space spanned by such groups .the cooperative nature of decay in this problem is analogous to some examples of tunneling of macroscopic objects in condensed matter physics ( lee and larkin , 1978 ; levitov _ et al . 
, _ 1995 ) .the decay time of the memory state evaluated below _ increases _ with the size of the network growing ( see section [ global_stability ] ) .this is a natural result since in a large network the relative strength of noise is small according to the central limit theorem .therefore if one is given the minimum time during which the information has to be stored , there is the minimum size of the network that can perform this task .we calculate therefore the minimum number of neurons necessary to store one bit of information in a recurrent network .this number weakly depends on the storage time and for the majority of realistic cases is equal to 5 - 15 .two complimentary measures of wm stability , the memory decay time and the minimum number of neurons , necessary to store one bit , are shown below to depend on the synaptic channel properties .the former grows exponentially with increasing duration of epsc , which is unusually large for the nmda channel due to high affinity to glutamate .this allows to interpret the nmda channel blocking experiments in the framework of conventional network bistability . in recent studywang , 1999 considered a similar problem for the network subjected to the influence of _ external _ noises , disregarding the effects of finiteness of the probability of neurotrasmitter release . in our work we analyze the effects of synaptic failures on the stability of the persistent state .we therefore consider the _ internal _ sources of noise .our study is therefore complimentary to wang , 1999 . analytical calculations presented beloware confirmed by computer simulations . for the individual neurons we usethe modified leaky integrate - and - fire model due to stevens and zador , 1998 , which is shown to reproduce correctly the timing of spikes _ in vitro_. such realistic neurons may have firing frequencies within the range 15 - 30hz for purely excitatory network in the absence of inhibitory inputs ( section [ mean_field ] ) .this solves the high firing frequency problem ( see e.g. amit and tsodyks , 1991a ) .the resolution of the problem is based on the unusual property of the stevens and zador neuron , which in contrast to the simple leaky integrator has _ two _ time scales .these are the membrane time constant and characteristic time during which the time - constant changes .the latter , being much longer than the former , determines the minimum firing frequency in the recurrent network , making it consistent to the physiologically observed values ( see section [ mean_field ] for more detail ) .in this section we first examine the properties of single neuron , define our network model , and , finally solve the network approximately , using the mean - field approximation .the stevens - zador ( sz ) model of neuron is an extension of the standard leaky integrator ( see e.g. tuckwell , 1998 ) .it is shown to accurately predict the spike timings in layer 2/3 cells of rat sensory neocortex .the membrane potential satisfies the leaky integrator equation with time - varying resting potential and integration time : here is the time elapsed since the last spike generated , and the input current is measured in volts per second .when the membrane voltage reaches the threshold voltage the neuron emits a spike and the voltage is reset to . ( a ) ( b )this model is quite general to describe many types of neurons , differing only by the functions , , and the parameters and . 
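a minimal numerical sketch of such a neuron is given below. it assumes the leaky-integrator form dv/dt = (e0(t) - v)/tau(t) + i with a reset after each spike, but the functional forms and parameter values written into the code are placeholders invented for the sketch — they are not the fitted expressions and numbers of stevens and zador discussed next — and its only purpose is to show how the transduction function can be obtained by direct integration:

....
import numpy as np

# placeholder forms and values (assumptions of this sketch only)
def e0(t):                         # time-varying resting potential (volts)
    return -0.065 + 0.005 * np.exp(-t / 0.02)

def tau(t):                        # time-varying integration time (seconds)
    return 0.005 + 0.02 * (1.0 - np.exp(-t / 0.05))

v_thresh, v_reset = -0.050, -0.065     # assumed threshold and reset (volts)

def run(i_ext, t_max=1.0, dt=1e-4):
    # integrate dv/dt = (e0(t) - v)/tau(t) + i_ext, with t the time since
    # the last spike; i_ext is in volts per second, as in the text
    v, t_since, spikes, t_now = v_reset, 0.0, [], 0.0
    while t_now < t_max:
        v += dt * ((e0(t_since) - v) / tau(t_since) + i_ext)
        t_since += dt
        t_now += dt
        if v >= v_thresh:
            spikes.append(t_now)
            v, t_since = v_reset, 0.0
    return spikes

# transduction function: average firing rate for a few input currents
for i_ext in (0.5, 1.0, 2.0, 4.0):
    print(i_ext, len(run(i_ext)), 'spikes in one second')
....

with these particular placeholder numbers the neuron stays silent below a critical current of roughly 0.6 v/s and fires above it, reproducing only qualitatively the onset behaviour described below; the actual rates depend entirely on the invented constants.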
for the pyramidal cells in cortical layer 2/3 of ratsthe functions can be fitted by \label{sze}\ ] ] and .\label{sztau}\ ] ] the parameters of the model for these cells have the following numerical values : , , , , , and ( stevens and zador , 1998 ) . the resting potential and integration time with these parametersare shown in figure [ fig10 ] . to resolve the singularity an implicit runge - kutta scheme should be used in numerical integration of . ] in the next step we calculate the transduction function , relating the average external current to the average firing frequency .this function is evaluated in appendix [ appendixa ] and is shown in figure [ fig20 ] . the closed hand expression for this functioncan not be obtained .some approximate asymptotic expressions can be found however . for large frequencies ( )it is approximately given by the linear function ( dashed line in figure [ fig20 ] ) : where and for small frequencies ( ) we obtain : } \label{small_frequencies}\ ] ] the neuron therefore starts firing significantly when current exceeds the critical value we ignore spontaneous activity in our consideration assuming all firing frequencies below to be zero .we also disregard the effects related to refractory period since they are irrelevant at frequencies .we consider the network consisting of sz neurons , establishing all - to - all connections .after each neuron emits a spike a epsc is generated in the input currents of all cells with probability .this is intended to simulate the finiteness of probability of the neurotransmitter release in synapse .the total input current of -th neuron is therefore where enumerates the neurons making synapses on the -th neuron ( all neurons in the network ) , is the time of spike number emitted by cell , and is the external current . is a boolean variable equal to with probability ( in our computer simulations always equal to 0.3 ) .it is the presence of this variable that distinguishes our approach from wang , 1999 .we chose the epsc represented by to be ( see amit and brunel , 1997 ) where , if , and otherwise .as evident from ( [ epsc ] ) is the duration of epsc .it is therefore the central variable in our consideration . if the number of neurons in the network is large its dynimics is well described by the mean - field approximation .how large the number of neurons should be is discussed in the next section . for the purposes of mean - field treatmentit is sufficient to use the average firing frequency and the average input current to describe the network completely .thus the network dynamics can be approximated by one `` effective '' neuron receiving the average input current and firing at the average frequency .assume that this hypothetical neuron emits spikes at frequency ( point 1 in figure [ fig20 ] ) .due to the network feedback this results in the input current equal to the average of eq .( [ totalepsc ] ) , displaced in time by the average duration of epsc , i.e. this corresponds to the transition between points 1 and 2 in figure [ fig20 ] . 
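before completing the mean-field loop (points 2 and 3 below), the network model just defined can be sketched in the same spirit. the fragment reuses the placeholder functions e0, tau, v_thresh and v_reset from the previous sketch; a square epsc pulse of duration tau_epsc is assumed for the kernel, and apart from the release probability of 0.3, which is the value quoted in the text, every number (coupling j, external drive, cue amplitude, network size) is invented for the illustration:

....
import numpy as np   # e0, tau, v_thresh, v_reset come from the sketch above

rng = np.random.default_rng(1)

n, p_release = 20, 0.3             # network size and release probability
tau_epsc, j = 0.10, 0.004          # assumed epsc duration (s) and weight
i_ext = 0.55                       # constant sub-threshold drive (v/s)
dt, t_max = 1e-3, 5.0

v = np.full(n, v_reset)
t_since = np.zeros(n)
feedback = np.zeros(n)                        # recurrent current per neuron
pulse_buffer = np.zeros((int(tau_epsc / dt), n))    # pulses still in flight

spike_count = 0
for step in range(int(t_max / dt)):
    t_now = step * dt
    drive = i_ext + (0.6 if t_now < 0.3 else 0.0)    # brief cue pulse
    v += dt * ((e0(t_since) - v) / tau(t_since) + drive + feedback)
    t_since += dt
    fired = v >= v_thresh
    if fired.any():
        v[fired], t_since[fired] = v_reset, 0.0
        # each spike delivers a square pulse j/tau_epsc to every neuron,
        # released independently with probability p_release (synaptic failures)
        release = rng.random((int(fired.sum()), n)) < p_release
        pulse_buffer += (j / tau_epsc) * release.sum(axis=0)
        if t_now > 1.0:
            spike_count += int(fired.sum())
    feedback = pulse_buffer[0].copy()
    pulse_buffer = np.roll(pulse_buffer, -1, axis=0)
    pulse_buffer[-1] = 0.0

print('mean rate after the cue:', spike_count / (n * (t_max - 1.0)), 'hz')
....

whether and at what rate the activity survives the removal of the cue depends entirely on these invented constants; the sketch is meant only to exhibit the ingredients of eq. ( [ totalepsc ] ) — the all-to-all sum, the boolean release factor and the finite epsc duration — not to reproduce figure [ fig25 ].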
at last the transition between the input current and the firing frequency is accomplished by the transduction function ( points 2 and 3 ) .the delay due to this transition is of the order of somatic membrane time constant ( ) and is negligible compaed to .we therefore obtain the equation on the firing frequency ( wilson and cowan , 1972 ) , or using the taylor expansion to obtain the steady state solutions we set the time derivatives in ( [ mfequation ] ) to zero here is the function inverse to ( [ mfcurrent ] ) .it is shown in figure [ fig20 ] by the straight solid line .this equation has three solutions , two of which are stable . they are marked in by letters and in the figure .point represents the high frequency attractor .frequencies obtained in sz model neurons are not too high .they are of the order of , i.e. in the range hz .this coinsides with the range of frequencies observed in the delayed activity experiments ( miyashita , 1988 ; miyashita and chang , 1988 ; sakai and miyashita , 1991 ; funahashi _ et al . , _ 1989 ; goldman - rakic _ et al . , _ 1990 ; fuster , 1995 ) .the reason for the relatively low firing frequency rate is as follows . for the leaky integrator model the characteristic firing rates in the recurrent network are of the order of , i. e. are in the range . for the sz neuron even smaller ( see figure [ fig10]b ) , and therefore it seems that the firing rates should be larger than for leaky intergator .this however is no true , since for the sz neuron , in contrast to the leaky integrator , there is the second time scale .it is the characteristic time of variation of the time constant msec ( see figure [ fig10]b ) . since the time constant itself is very short it becomes irrelevant for the spike generation purposes at low firing rates and the second time scale determines the characteristic frequency .it is therefore in the range .finally we would like to discuss the stability of attactor .it can be locally stable , i.e. small noises can not produce the transition from state to state .the condition for this follows from the linerized near equilibrium eq .( [ mfequation ] ) : very rarely , however , a large fluctuation of noise can occur , that kicks the system out of the basin of attraction of state .it is therefore _ never _ globally stable .this is the topic of the next section .computer simulations show that our network can successfully generate delayed activity response .an example is shown in figure [ fig25]a where a short pulse of external current ( dashed line ) produced transition to the high frequency state .this state is well described by the mean - field treatment given in the previous section .this is true , however , only for the networks containing a large number of neurons .if the size of the network is smaller than some critical number the following phenomenon is observed ( see figure [ fig25]b ) .the fluctuations of the current reach the edge of the attraction basin ( dotted line ) and the network abruptly shuts down , jumping from high frequency to the low frequency state .the quantitative treatment of these events is the subject of this section .( a ) ( b ) similar decay processes have been observed by funahashi _et al . 
, _ 1989 in prefrontal cortex .one of the examples of such error trials is shown in figure [ fig27 ] .although the decay is abrupt , the moment at which it occurs is not reprodusable from experiment to experiment .it is of interest therefore to study the distribution of the time interval between the initiation of the delayed activity and the moment of its decay .our simulations and the arguments given in appendix [ fluctuations ] show that the decay can be considered a poisson process .the decay times have therefore an exponential distribution ( see figure [ fig30 ] ) . since failure to maintain the persistent activity entails the loss of the memory and incorrect performance in the delayed responce task , at least in monkey s prefrontal cortex ( funahashi _ et al . _ , 1989 ) , one can use the conclusion about poisson distribution to interpret some psychophysical data . in some experiments on rats performingthe binary delayed matching to position tasks deterioration of wm is observed as a function of delay time ( cole _ et al . , _the detereoration was characterized by `` forgetting '' curve , with matching accuracy decreasing from about 100% at zero delay to approximately 70% at 30 second delay . the performance of the animal approaches the regime of random guessing with 50% of correct responses in this binary task ( figure [ fig40 ] ) .our prediction for the shape of the `` forgetting '' curve that follows from the poisson distribution of delayed activity times is this prediction is used to fit the experimental data in figure [ fig40 ] .this figure also shows the effect of competitive nmda antagonist cpp .application of the antagonist reduces the average memory retention time from about 40 seconds to 15 seconds .the presence of non - compatitive antagonists impairs performance even at zero delay ( cole _ et al . , _ 1993 ; pontecorvo_et al . 
, _non - competitive antagonists have therefore an effect on the components of animal behavior different from wm .when the delayed component of `` forgetting '' curve is extracted it can be well fitted by an expression containing exponential similar to ( [ correct ] ) .such fits also show that the average wm storage time decreases with application of nmda antagonist .our calculation in appendix [ fluctuations ] show that the average memory storage time is given by .\label{averagetime}\ ] ] here is the distance to the edge of the attraction basin from the stable state and is the average feedback current .this result holds if .because is of the order of tens of seconds and is approximately msec the exponential in ( [ averagetime ] ) is of the order of - for the realistic cases .there are therefore two ways how synaptic receptor blockade can affect .first , the attenuation of the epsc decreases the average firing frequency .second , it moves the system closer to the edge of the attraction basin , reducing .both factors increase the effect of noise onto the system , decreasing the average memory storage time .this is manifested by eq .( [ averagetime ] ) .another consequence of the formula is the importance of nmda receptor for the wm storage .it is based on the large affinity of the receptor to glutamate , leading to long epsc ( msec ) , compared for example to ampa receptor ( msec ) .( [ averagetime ] ) implies that if ampa receptor is used in the bistable neural net and all other parameters ( , , , and ) are kept the the same , the memory storage time is equal to msec .thus it is not suprising that nmda receptor is chosen by evolution as a mediator of wm and the highest density of the receptor is observed in places involved into the wm storage , i.e. in prefrontal cortex ( cotman _ et al . , _ 1987 ) .we now would like to illustrate what processes lead to the decay time given by ( [ averagetime ] ) .consider a simple network consisting of three neurons ( figure [ fig50 ] ) .assume that the ratio for this network is equal to 1/3 .this implies that the neurons have to lose only 1/3 of their recurrent input current due to a fluctuation to stop firing .this can be accomplished by various means .our research shows that the most effective fluctuation is as follows . due to probabilistic nature of synaptic transmitionsome of the synapses release glutamate when spike arrives onto the presinaptic terminal ( full circles in figure [ fig50 ] ) some fail to do so ( open circle ) .it is easy to see that if the same synapse fails to release neurotransmitter in responce to any spike arriving during the time interval the reverberations of current terminate .indeed , the failing synapse ( open circle ) deprives the neuron of 1/3 of its current .this is just enough to put the input current into the neuron below the threshold .the neuron therefore stops firing .this deprives the entire network , consisting of three neurons , of 1/3 of its feedback current. therefore the delayed activity in this network terminates . in the the most reasonable alternative mechanism of decaythe average mean - field current would reach the edge of the attraction basin i.e. 
would be reduced by 1/3 .this can be accomplished by shutting down three synapses in three different neurons instead of one .such mechanism is therefore less effective than the proposed above .quantitatively the difference is manifested in reducing the exponent of the factor containg in eq .( [ averagetime ] ) from 3 to 2 .this bring about an _ increase _ in the average storage time .the mean - field mechanism is therefore less restrictive than the proposed one and is disregarded in this paper . since an increase of the number of neurons in the network dramatically influences the memory storage time according to eq .( [ averagetime ] ) , another characteristic of the reliability of wm circuit is the minimum number of neurons necessary to store one bit of information during time .we first study this quantity computationally as follows .we run the network simulation many times with the same values of and .we then determine the average decay time .having done this we decrease or increase depending on wheater is larger or smaller than a given value ( 20 sec in all our simulations ) .this process converges to the number of neurons necessary to sustain the delayed activity for the given value of .the process is then repeated for different values of synaptic time - constant . however , the network feedback is always renormalized so that the firing frequency stays the same , close to the physiologically feasible value .this corresponds to the attractor state shown in figure ( [ fig20 ] ) .the resulting dependence of versus is shown in figure [ fig52 ] by markers .there are two sets of computational results in figure [ fig52 ] .the higher values of are obtained for the network with no noise in the external inputs ( dots in figure [ fig52 ] ) .the only source of noise in such a network is therefore the unreliability of synaptic connections .it appears for this case that _ no _ delayed activity can exist for below 37 msec .the lower values of are obtained for the case of white noise added to the external input ( triangles ) .the amplitute of white noise is of the total input current and the correlation time is 1 msec .the limiting value of synaptic time constant for which the dalayed activity is not possible is much smaller for this case ( msec ) .this result may seem counter - intuitive .having added the external noise we increased the viability of the high frequency state , reducing the effects of the internal synaptic noise .this however is not so suprising if one takes the neuronal synchrony into account .synchrony of neuronal firing results in oscillations in the average input current ( see inset in figure [ fig20]a ) . such oscillations periodically bring the system closer to the edge of the attraction basin , creating additional opprotunities for decay . 
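the search for the critical size described above can be written down compactly. the sketch below builds on the placeholder network fragment from the previous section and only illustrates the procedure — run the simulation repeatedly, measure the mean decay time, then increase or decrease the number of neurons; the 20-second target is the one quoted in the text, whereas the decay detector (no spike for five epsc durations), the cue, and all numerical constants are assumptions of the sketch, so the absolute sizes it returns are not meaningful:

....
def simulate_until_decay(n, tau_epsc, noise_amp, seed, t_max=30.0, dt=1e-3):
    # integrate the placeholder network until the activity collapses and
    # return the decay time; e0, tau, v_thresh, v_reset, i_ext, j and
    # p_release are taken from the sketches above
    rng = np.random.default_rng(seed)
    v = np.full(n, v_reset); t_since = np.zeros(n)
    buf = np.zeros((int(tau_epsc / dt), n)); last_spike = 0.0
    for step in range(int(t_max / dt)):
        t_now = step * dt
        cue = 0.6 if t_now < 0.3 else 0.0
        noise = noise_amp * i_ext * rng.normal(size=n)   # external white noise
        v += dt * ((e0(t_since) - v) / tau(t_since)
                   + i_ext + cue + noise + buf[0])
        t_since += dt
        fired = v >= v_thresh
        if fired.any():
            last_spike = t_now
            v[fired], t_since[fired] = v_reset, 0.0
            release = rng.random((int(fired.sum()), n)) < p_release
            buf += (j / tau_epsc) * release.sum(axis=0)
        buf = np.roll(buf, -1, axis=0); buf[-1] = 0.0
        if t_now > 0.5 and t_now - last_spike > 5 * tau_epsc:
            return t_now                 # the high-frequency state has decayed
    return t_max

def mean_decay_time(n, tau_epsc, noise_amp=0.0, trials=10):
    return sum(simulate_until_decay(n, tau_epsc, noise_amp, s)
               for s in range(trials)) / trials

def critical_size(tau_epsc, target=20.0, noise_amp=0.0):
    # smallest network whose average decay time exceeds the target --
    # the increase/decrease search described in the text
    n = 4
    while mean_decay_time(n, tau_epsc, noise_amp) < target:
        n += 1
    return n
....

the noise_amp argument plays the role of the external white noise considered here: left at zero the synchrony of the placeholder network is unchecked, while a value of about ten percent of the external drive smears it out, which is the comparison made in figures [ fig52 ] and [ fig54 ].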
thus the syncronous network is less stable than the asynchronous one .external noice attenuates neuronal synchrony , smearing the ascillations of the average input current .thus the system with noise in the external inputs should have a larger decay time and smaller critical number of neurons .this idea is discussed quantitatively below in this section .this is similar to the stabilisation of the mean - field solutions in the networks of pulse - coupled oscillators ( abbott and vreeswijk , 1993 ) .figure [ fig54 ] shows the results of similar calculations for the attractor state with a higher average firing frequency ( ) .this network shows higher reliability and smaller values of for both synchronous and asyncronous ( 10% noise added ) regimes .this is consistent with eq .( [ averagetime ] ) .the results of analytical calculations ( solid lines ) show satisfactory agreement with the computer modeling ( markers ) .the rest of the section is dedicated to the discussion of different aspects of the analytical calculations and their results .a more thorough treatment of the problem can be found in appendix [ fluctuations ] . to evaluate the minimum number of neurons in the closed hand form we solve eq .( [ averagetime ] ) for this equation is valid for . in the opposite case it has to be amended . to obtain the correct expression in the latter casethe following considerations should be taken into account .it is obvious that when the mean - field attractor becomes less and less stable in the local sense , i.e. the dimensionless feedback coefficient [ eq.([feedbackcoefficient ] ) ] approaches unity , the global stability should also suffer .indeed , if the system is weakly locally stable the fluctuations in the average current are large . this should facilitate global instability .the facilitation can be accounted for by noticing that the transition from high frequency state to the low frequency one is most probable when the average current is low .hence to obtain the most realistic probability of transition and correct values for and one has to decrease the values of average current and distance to the edge of the attraction basin by where the standard deviation of the average current is calculated in appendix [ fluctuations ] here is the average current before the shift .this correction decreases the ratio , decreasing [ see eq .( [ averagetime ] ) ] .this implies the reduced reliability of the network due to the average current fluctuations . in the simple example with three neurons , in principle , each of them can contribute to the process of decay .if the network is larger the number of potentially dangerous groups grows as a binomial coefficient , where is the size of the dangerous group .the correction to can be expressed as follows ( appendix [ fluctuations ] ) : here is the combinatorial correction here + ( 1-x ) \ln \left[1/(1-x)\right ] . }\end{array}\ ] ] since the combinatorial contribution is proportional to it is negligible compared to at large values of .therefore as claimed by eq . ( [ nmin1 ] ) in this limit . on the other handif is small the main contribution to the critical number of neurons comes from .the right hand sides of eqs .( [ nmin1 ] ) and [ nmin2 ] contains dependence on through the shift in distance to the attraction edge and the coefficient .this dependence is weak however since the former represents a very small correction and the latter depends on only logarithmically . 
nevertheless to generate a numerically precise predictionwe iterate eq .( [ nmin1 ] ) , ( [ nmin ] ) , and ( [ nmin1 ] ) until a consistent value of is reached .the results are shown by lower solid lines in figures [ fig52 ] and [ fig54 ] .they are in a good agreement with numerical simulations in which the synchrony of firing was suppressed by external noise .the synchronizaton of neuronal firing can be critical for the stability of the high frequency attractor .consider the following simple example ( wang , 1999 ) .consider the network consisting on only _one _ neuron .assume that the external input current to the neuron exceeds the firing threshold at certain moment and the neuron starts emiting spikes .assume that the external current is then decreased to some value below the threshold .the only possibility for the neuron to keep firing is to receive enough feedback current from itself , so that the _ total _ input current is above threshfold .suppose then for illustration purposes that the synapse that the neuron forms on itself ( autapse ) is infinitely reliable , i.e. every spike arriving on the presynaptic terminal brings about the release of neurotransmitter .it is easy to see that if the duration of epsc is much shorter than the interspike interval the persistent activity is impossible .indeed , if epsc is short , at the moment , when the new spike has to be generated there is almost no residual epsc .the input current at that moment is almost entirely due to external sources , i.e. is below the threshold .the new spike generation is therefore impossible .we conclude that the persistent activity is impossible if epsc is short , i. e. for .note , that this conclusion is made assuming _ no noise _ in the system .the natural question is why this conclusion is relevant to the network consisting of many neurons ?our computer modeling shows that in the large network the neuronal firing pattern is highly sychronized on the scales of the order of .in essence all neurons fire simultaneously and periodically with the period . 
therefore they behave as one neuron .hence the delayed activity in such network is impossible if even in the absence of synatic noise .thus the synchrony facilitates the decay of the delayed activity brought about by the noise .this is consistent with the results of computer simulations presented if figures [ fig52 ] and [ fig54 ] .as it was mentioned above synchrony facilitates the instability by producing the oscillations in the average current and therefore by bringing the edge of the attraction basin closer .it therefore effectively decreases .the magnitude of this decrease is estimated in appendix [ fluctuations ] to be and here is the average current given by ( [ renormalization ] ) .this correction , which is the amplitude of the oscillations of the average current due to synchrony , is in good agreement with computer modeling ( see insert in in figure [ fig25]a ) .the synchrony does not change the average current in the network significantly .therefore the latter should not be shifted as in ( [ renormalization ] ) .these corrections imply that if the original value of is equal to the sum [ see eq .( [ deltai ] ) ] , the effective distance to the edge of attraction basin vanishes .the delayed activity can not be sustained under such a condition .this occurs at small values of and determines the positions of vertical asymptotes ( dashed lines ) in figures [ fig52 ] and [ fig54 ] .this gives a quantitative meaning to the argument of impossibility of stable delayed activity in synchronous network at small values of synaptic time constant given above ( see also wang , 1999 ) .when the synchrony is diminished by external noise , the cut off value of is determined only by and is therefore much smaller .it is about 5 msec in our computer similations .synchrony also affects the values of the critical number of neurons for large and small ( and respectively ) these equations are derived in appendix [ fluctuations ] . to obtain the value of critical number of neurons eq . ( [ nmin ] ) should be used .let us compare the latter equations to ( [ nmin1 ] ) and ( [ nmin2 ] ) . in the limit the expressions for and to the same asymptote , whereas both and go to zero .since again , as in asynchronous case , both and depend on the quantity that we are looking for ( however , again , very weakly ) , an iterative procedure should be used to determine the consistent value of .having applied this iterative procedure we obtain the ipper solid lines in figures [ fig52 ] and [ fig54 ] .we therefore obtain an excellent agreement with the results of the numerical study .in addition we calculate the cut - off , below which the delayed activity is not possible for the synchronous case . for the attractor with hzthe cut - off value is msec , while for the higher frequency attractor ( hz ) the value is msec .these values are shown in figures [ fig52 ] and [ fig54 ] by dashed lines .we therefore conclude that the delayed activity mediated by ampa receptor is impossible in the synchronous case .in this work we derive the relationship between the dynamic properties of the synaptic receptor channels and the stability of the delayed activity .we conclude that the decay of the latter is a poisson process with the average decay time exponentially depending on the time constant of epsc .our quantitative conclusion applied to ampa receptor , having a short epsc , implies that it is incapable of sustaining the persistent activity in case of synchronization of firing in the network . 
for the case of asynchronous networkone needs a large number of neurons to store one bit of information with ampa receptor . on the other hand nmda receptor seems to stay away from these problems , providing reliable quantum of information storage with about 15 neurons for both synchronized and asynchronous case .we therefore suggest an explanation to the obvious from experiments high significance of the nmda channel for wm .one can conclude from our study that if the time constant of nmda channel epsc is further increased , the wm can be stored for much longer time .assume that is increased by a factor of 2 , for instance , by genetic enhancement ( tang _ et al . , _if the network connectivity , firing frequency , and average currents stay the same as in the wild type , the exponential in eq .( [ averagetime ] ) is increased by a factor of .this implies that the working memory can be stored by such an animal for days , instead of minutes .alternatively the the number of neurons responsible for the storage of quantum of information can be decreased by a factor keeping the storage time the same .this implies the higher storage capacity of the brain of mutant animals .we predict that with normal nmda receptor the number of neurons able to store one bit of information for sec is about 15 .should our theory be applicable to rats performing delayed matching to position task ( cole _ et al . , _1993 ) the conclusion would be that the recurrent circuit responsible for this task contains 15 neurons . of coursethis would imply that other sources of loss of memory , such as distraction , are not present .the use of very simple model network allowed us to look into the nature of the global instability of delayed activity .the principal result of this paper is that the unreliability of synaptic conductance provides the most effective channel for the delayed activity decay .we propose the optimum fluctuation of the synaptic noises leading to the loss of wm .decay rates due to such a fluctuation agree well with the results of the numerical study ( section [ global_stability ] ) . thus any theory or computer simulation that does not take the unreliability of synapses into accountdoes not reproduce the phenomenon exactly .we also studied the decay process in the presence of noise in the afferent inputs .the study suggests that the effect of noise on the wm storage reliability is not monotonic .addition of small white noise ( % of the total external current ) increases the reliability , by destroying synchronization .it is therefore beneficial for the wm storage .further increase of noise ( % ) destroys wm , producing transitions between the low and high frequency states .we conclude therefore that there is an optimum amount of the afferent noise , which on one hand smoothes the synchrony out and on the other hand does not produce the decay of delayed activity itself . further work is needed to study the nature of influence of the external noise in the case of unreliable synapses . if the afferent noise is not too large the neurons fire in synchrony .the natural consequences of synchrony are the decrease of the coefficient of variation of the interspike interval for single neuron and the increase of the crosscorrelations between neurons .the latter prediction is consistent with findings of some multielectrode studies in monkey prefrontal cortex ( see dudkin _et al . , _ 1997a ) . on the other hand sinchrony may also be relevant to the phenomenon of temporal binding in the striate cortex ( engel _ et al . 
, _1999 ; roskies , 1999 ) .we argue therefore that temporal binding with the precision of dosens of millisconds can be accomplished by formation of recurrent neural networks .we also suggest the solution to the high firing frequency problem by using a more precise model of the spike generation mechanism , i.e. the leaky integrator model with varying in time resting potential and integration time .the minimum firing frequency for the recurrent network based on such model is determined by the rate of variation of the potential and time - constant and is within the range of physiologically observed values .further experimental work is needed to see if the model is applicable to other types of neurons , such as inhibitory cells . in conclusionwe have studied the stability of delayed activity in the recurrent neural network subjected to the influence of noise .we conclude that the global stability of the persistent activity is affected by properties of synaptic receptor channel .nmda channel , having a long epsc duration time , is a reliable mediator of the delayed responce . on the other hand ampa receptoris much less reliable , and for the case of synchronized firing in principle can not be used to sustain responce .effect of the nmda channel blockade on the wm task performance is discussed .the author is grateful to thomas albright , paul tiesinga , and tony zador for discussions and numerous helpful suggestions .this work was supported by the alfred p. sloan foundation .in this appendix we calculate the single - neuron transduction function .the first step is to find the membrane voltage as a function of time . from eq .( [ szequation ] ) using the variation of integration constant we obtain . } \end{array}\ ] ] here and we introduced the refractory period both for generality and to resolve the peculiarity at . for constant current and functions given by ( [ sze ] ) and ( [ sztau ] ) the expression for voltagecan be further simplified : } \\ \\ { \displaystyle+ \left(i-\frac{\delta e}{\tau_0}\right)\frac { \tau_0}{\alpha } \frac{l_{1/\alpha}\left ( e^{\alpha t/\tau_0}\right)e^{-s(t ) } } { \left(e^{\alpha t_0/\tau_0 } - 1\right)^{1/\alpha } } } , \end{array}\ ] ] where and is defined for integer by and is obtained for fractional by analytical continuation . for example solving the equation produces the interspike interval and frequency . the solution can not be done in the closed form .some asymptotes can be calculated however .the calculation depends on the value of parameter . _i ) _ . in this limit is very small and can be neglected everywhere , except for when multiplied by large factor .the latter product the equation on the interspike interval is .\ ] ] solving this quadratic equation with respect to we obtain the asymptote given by eq .( [ large_frequencies ] ) . _ii ) _ . in this casethe solution can be found directly from eq .( [ szequation ] ) by assuming . solving the resulting algebraic equation for obtain eq .( [ small_frequencies ] ) above .the average and standard deviation of the input current of each neuron , given by eqs .( [ totalepsc ] ) and ( [ epsc ] ) are and the distribution of is therefore , according to the central limit theorem . \label{indistribution}\ ] ] the probability that the input current is below threshold , i.e. the neuron does not fire , is given by the error function derived from distribution ( [ indistribution ] ) where , and . 
in derivation of ( [ singleneuronprobability ] )we used the asymptotic expression for the error function when .the reverberations of current will be impossible if neurons are below threshold .when this occurs the feedback current to each neuron is reduced with respect to the average current by , i.e. is below threshold , and further delayed activity is impossible .the probability of this event is given by the binomial distribution : here the binomial coefficient accounts for the large number of groups of neurons that can contribute to the decay .since the input currents stay approximately constant during time interval , we brake the time axis into windows with the duration .denote by the average probability of decay of the high frequency activity during such a little window .assume that windows have been passed since the delayed activity commenced .the probability that the decay occurs during the -th time window , i.e. exactly between and , is here is the probability that the decay did not happen during either of early time intervals .after simple manipulations with this expression we obtain for the density of probability of decay as a function of time this is a poisson distribution and decay is a poisson process .the reason for this is that the system retains the values of input currents during the time interval msec .therefore all processes separated by longer times are independent .since the decay time is of the order of - seconds , the attempts of system to decay at various times can be considered independent .we finally notice that since the values of current are preserved on the scales of we can conclude that given by eq .( [ p_0 ] ) .the average decay time can then be estimated using ( [ poisson2 ] ) in the limit the minimum size of the network necessary to maintain the delayed activity is small .we can assume therefore to be small in this limit .the same assumption can be made for .thus for an effective network ( using the neurons sparingly ) we can assume the combinatorial term in eq .( [ p_0 ] ) to be close to unity .since in the expression for it is also compensated by the small prefactor of the exponential [ see eq .( [ singleneuronprobability ] ) ] , we can assume for the effective network. therefore in the limit the main dependence of the parameters of the model is concentrated in the exponential factor .this set of assumptions in combination with eq .( [ averagetimeex ] ) leads to the expression ( [ averagetime ] ) in the main text .if more precision is needed eq .( [ averagetimeex ] ) can be used directly . to find the critical number of neurons we calculate logarithm of both sides of eq .( [ averagetimeex ] ) .we obtain using asymptotic expression for the logarithm of the binomial coefficient , which can be derived from the stirling s formula , + ( 1-x ) \ln \left[1/(1-x)\right ] } , \end{array}\ ] ] we derive the quadratic equation for - \ln \frac { \bar{t}}{\tau_{_{\rm epsc } } } \approx 0 . } \end{array}\ ] ] solving this equation we obtain the set of equations ( [ nmin1 ] ) , ( [ nmin ] ) , and ( [ nmin2 ] ) in the main text . in this subsectionwe assume that all the neurons in the network fire simultaneously . the important quantities which will be studied are the time dependence of the averaged over network current and the fluctuations of the input current into single neuron .the former is responsible for the shift in the distance to the edge of the attraction basin ( eq . 
( [ shifts ] ) ) , the latter determines the average decay time , as follows from the previous subsection .assume that the neuron fired at .the average number of epsc s arriving to the postsynaptic terminal is due to finitness probability of the synaptic vesicle release and all - to - all topology of the connections .the average current at times is contributed by the spikes at and by all previous spikes at times , : where we introduced the notation .the average over time value of this averaged over neurons current is , where by angular brackets we denote the time average : .the average current therefore experiences oscillations between and .the minimum value of is below the average current .this minimum value , according to the logic presented in the main text , determines the shift of the distance to the edge of the attraction basin .this shift is therefore .our computer modeling however shows that the actual amplitude of the current oscillations is consistently about % of the value predicted by this argument .this was tested at various values of parameters .the explanation of this % factor is as follows . in case if all the neurons fire simultaneously the shape of the dependence of the average current versus time is saw - like .however the spikes do not fire absolutely simultaneously .the uncertainty in the spiking time is of the order of .this uncertainty , which is intrinsic to the recurrent synaptic noises , and therefore is difficult to calculate exactly , smears out the saw - like dependence of the average current of time ( see inset in figure [ fig25]a ) . approximately this smearing can be accounted for by dumping the higher harmonics of the saw - like dependence .when only principal harmonic remains , the amplitude of the saw - like curve is reduced by a factor .this is consistent with the numerical result .we therefore conclude that the shift of is ,\ ] ] what leads to eq .( [ deltais ] ) in the text .we now turn to the calculation of the standard deviation of the input current into one neuron .similar to ( [ geometrici ] ) , using the central limit theorem we obtain this quantity is most important at when the average current reaches its minimum and the neuron stops firing with maximum probability .performing the summation we therefore obtain substituting this value into eq .( [ singleneuronprobability ] ) in the previous subsection and repeating the subsequent derivation we obtain eqs .( [ nmin1s ] ) and ( [ nmin2s ] ) in the main text . in this subsectionwe calculate the fluctuations of the average netwotk current .we consider asynchronous case for simplicity .the conclusions are perfectly good for synchronous case , for the reasons that will become clear later in this subsection , and agree well with computer similations .the average current satisfies linarized equation similar to linearized version of eq .( [ mfequation ] ) here is the deviation of the averaged over network current from the equilibrium value and is the noise .the unitless network feedback coefficient is defined by ( [ feedbackcoefficient ] ) . as evident from this equation has a slow time constant [ since in our simulations ] . 
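to make the exponential sensitivity of the storage time to the network size concrete, here is a small sketch of the poisson estimate assembled above: the probability p0 that a critical group of n_fail out of n_net neurons is simultaneously below threshold within one window of duration tau_epsc, and the resulting mean decay time tbar ~ tau_epsc / p0. the single-neuron probability p1, the window length and the critical fraction below are purely illustrative assumptions.

import numpy as np
from scipy.special import gammaln

# assumed, purely illustrative numbers (not the fitted values of this paper)
tau_epsc = 0.1        # window duration in seconds (nmda-like epsc decay)
p1 = 1.0e-3           # single-neuron probability of being below threshold
fail_frac = 0.2       # fraction of the network that must fail simultaneously

def mean_decay_time(n_net):
    """poisson estimate of the mean decay time for a network of n_net neurons."""
    n_fail = max(1, int(fail_frac * n_net))
    # log of the binomial coefficient c(n_net, n_fail), via log-gamma
    log_binom = (gammaln(n_net + 1) - gammaln(n_fail + 1)
                 - gammaln(n_net - n_fail + 1))
    # p0 ~ c(n_net, n_fail) * p1**n_fail, neglecting the (1 - p1) factors
    log_p0 = log_binom + n_fail * np.log(p1)
    return tau_epsc * np.exp(-log_p0)          # tbar ~ tau_epsc / p0

for n_net in (10, 15, 20, 25):
    print(f"n = {n_net:3d}   mean decay time ~ {mean_decay_time(n_net):.3e} s")

the point of the sketch is only the scaling: because p0 is exponentially small in the size of the failing group, the storage time grows exponentially with the number of neurons, which is the content of eq. ( [ averagetime ] ).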
on the other handthe noise is determined by synapses and has a correlation time that is relatively small ( ) .the correlation function of noise in the average current can be found from eq .( [ totalepsc ] ) here angular brackets imply averaging over time and the value of follows from eq .( [ deltain ] ) and the central limit theorem ( dispersion of the average is equal to the dispersion of each of the homogenious constituents divided by the number of elements ) .we conclude therefore that the correlation function of can then be easily found from eq .( [ lini ] ) using the fourier transform .define .then since the expression for is readily obtained by inverting the fourier transform ( [ comega ] ) . in the limit , which holds in our simulations the answer is . \label{answc}\ ] ] the value of taken at determines the standard deviation of the average current ( [ deltai ] ) .the fluctuations described by the correlation function ( [ answc ] ) have a large correlation time compared to the firing frequency .we conclude therefore that synchrony should not affect the long range component of the correlation function .cole bj , klewer m , jones gh , stephens dn ( 1993 ) contrasting effects of the competitive nmda antagonist cpp and the non - competitive nmda antagonist mk 801 on performance of an operant delayed matching to position task in rats .psychopharmacology ( berl ) 111:465 - 71 .dudkin kn , kruchinin vk , chueva iv ( 1997 ) synchronization processes in the mechanisms of short - term memory in monkeys : the involvement of cholinergic and glutaminergic cortical structures .neurosci behav physiol 27:303 - 8 .javitt dc , steinschneider m , schroeder ce , arezzo jc ( 1996 ) role of cortical n - methyl - d - aspartate receptors in auditory sensory memory and mismatch negativity generation : implications for schizophrenia .proc natl acad sci usa 93:11962 - 7 .krystal jh , karper lp , seibyl jp , freeman gk , delaney r , bremner jd , heninger gr , bowers mb jr , and charney ds ( 1994 ) subanesthetic effects of the noncompetitive nmda antagonist , ketamine , in humans .psychotomimetic , perceptual , cognitive , and neuroendocrine responses .arch gen psychiatry 51:199 - 214 .pontecorvo mj , clissold db , white mf , ferkany jw ( 1991 ) n - methyl - d - aspartate antagonists and working memory performance : comparison with the effects of scopolamine , propranolol , diazepam , and phenylisopropyladenosine .behav neurosci 105:521 - 35 .
the influence of synaptic channel properties on the stability of delayed activity maintained by a recurrent neural network is studied. the duration of the excitatory post-synaptic current (epsc) is shown to be essential for the global stability of the delayed response. the nmda receptor channel is a much more reliable mediator of the reverberating activity than the ampa receptor, due to its longer epsc. this allows one to interpret the deterioration of working memory observed in nmda channel blockade experiments. the key mechanism leading to the decay of the delayed activity originates in the unreliability of synaptic transmission. the optimum fluctuation of the synaptic conductances leading to the decay is identified. the decay time is calculated analytically and the result is confirmed computationally.
the complex langevin method solves in principle the sign problems arising in simulations of various systems , in particular in qcd with finite chemical potential . after being proposed in the early 1980s by klauder and parisi , it enjoyed a certain limited popularity , but very quickly certain problems were found .the first one was instability of the simulations with absence of convergence ( runaways ) , the second one convergence to a wrong limit .nevertheless in recent years the method has been revived with sometimes impressive success .in particular the use of adaptive stepsize has eliminated the problem of runaways .but nagging problems remained due to the lack of clear criteria to decide when an apparently convergent simulation actually represented the truth .this was linked to the lack of a clear mathematical basis for the method , that would at the same time also provide criteria for its applicability .the purpose of the present paper is to clarify the situation at least to some extent .while we are still not able to close certain mathematical gaps and reach a complete analytic solution to the problems that have plagued the method , we give some strong numerical evidence that the method is correct in some cases and also suggest a plausible explanation for the failure in other cases ; this leads to some pragmatic conclusions suggesting how to proceed in practice in a way that promises credible results .the paper is organized as follows . in sec .[ secii ] we give a formal justification of the method , highlighting the assumptions underlying the derivation . in sec .[ seciii ] three main questions raised by the formal arguments are listed .we then focus on one particular issue , boundary effects , in sec .[ seciv ] , and present detailed case studies in sec .tentative conclusions are given in sec .[ secvi ] .for simplicity we concentrate here on models in which the fields take values in flat manifolds or , where is the dimensional torus with coordinates .the complications that arise when the fields live in nontrivial manifolds , as is of course the case in qcd , have been successfully dealt with in the literature ( see for instance ref . for real , refs . for complex langevin dynamics ) .but these complications are not really relevant for our discussion . 
as is well known , the idea is to simulate a complex measure , with a holomorphic function on a real manifold , by setting up a stochastic process on the complexification of , such that the expectation values of _ entire holomorphic observables _ in this stochastic process converge to the ones with respect to the complex measure .the complex langevin equation ( cle ) on is dz= -s dt+dw , [ cle0 ] where denotes the increment of the wiener process and the equation is to be interpreted as a real stochastic process , namely with a slight generalization of eq .( [ cle1 ] ) that has been considered and will play a role in this investigation is where and are independent wiener processes , and .this is usually referred to as complex noise .the introduction of a nonzero makes it possible to solve the fokker - planck equation ( see below ) numerically and also allows a random walk discretization of the complex langevin process : \right ) , \label{e.rws1}\\ { \delta y(t ) } = & \pm \omega_y , \ \p_{y,\pm } = { \frac{1}{2}}\left(1\pm \tanh \left[\frac{\omega_y}{2 n_i } k_y\right]\right ) , \label{e.rws2}\\ & \omega_x = \sqrt{2n_r\delta t } , \ \ \ \\omega_y = \sqrt{2n_i\delta t } , \end{aligned}\ ] ] where are the transition probabilities and we have defined the steps such as to have the same in both sub - processes , to ensure correct evolution . by it calculus , if is a twice differentiable function on and z(t)=x(t)+iy(t ) is a solution of the complex langevin equation ( [ cle2 ] ) , we have [ ito ] f(x(t),y(t))= l f(x(t),y(t ) ) , where is the langevin operator [ eq : lo ] l=_x + _ y , and denotes the noise average of corresponding to the stochastic process described by eq .( [ cle2 ] ) . in the standard way eq .( [ cle2 ] ) leads to its dual fokker - planck equation ( fpe ) for the evolution of the probability density , [ realfpe ] p(x , y;t)= l^t p(x , y;t ) , with l^t=_x+ _y. is the formal adjoint ( transpose ) of with respect to the bilinear ( not hermitian ! ) pairing f , p= f(x , y ) p(x , y ) dxdy , i.e. , lf , p= f , l^t p. note that the fpe has the form of a continuity equation p(x , y;t)=_x j_x+_y j_y , where is the probability current in the dimensional space , given by j_x=(n_r_x - k_x)p , j_y= ( n_i_y - k_y)p .we will also consider the evolution of a complex density on under the following complex fpe [ complexfpe ] ( x;t)= l_0^t ( x;t ) , where now the complex fokker - planck operator is [ fpc0 ] l_0^t = _ x .a slight generalization will be useful : for any we consider a complex fokker - planck operator given by [ fpc1 ] l_y_0^t=_x . is the formal adjoint of l_y_0= _ x. the operators act on suitable complex valued distributions ( measures ) on , parameterized by the real variables .but they do not allow a probabilistic interpretation , because they do not preserve positivity .for any eq .( [ complexfpe ] ) with replaced by has the complex density [ rhostat ] _y_0(x ; ) as its ( hopefully unique ) stationary solution .we next consider expectation values .let be an entire holomorphic observable with at most exponential growth ; then we set [ eq : op ] o_p(t ) and o_(t ) .we would like to show that o_p(t)=o_(t ) , provided the initial conditions agree , o_p(0)=o_(0 ) , which is true if we choose [ init ] p(x , y;0)=(x;0)(y - y_0 ) ( for any ) . 
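as a concrete illustration of the process defined by the complex langevin equation with complex noise, here is a minimal euler-type discretization for a toy holomorphic action; the quartic action, the stepsize and the run length below are assumptions chosen only to make the sketch runnable, and the usual convention n_real = n_imag + 1 for the noise strengths is assumed.

import numpy as np

rng = np.random.default_rng(1)

def drift(z):
    """k = -ds/dz for an assumed toy action s(z) = z**2/2 + 0.1*z**4/4."""
    return -(z + 0.1 * z**3)

def complex_langevin(z0, dt=1e-3, n_steps=200_000, n_imag=0.0):
    """euler discretization of dz = k dt + dw, with real noise of strength
    n_real = 1 + n_imag in the x direction and imaginary noise n_imag in y."""
    n_real = 1.0 + n_imag
    z = complex(z0)
    samples = np.empty(n_steps, dtype=complex)
    for i in range(n_steps):
        k = drift(z)
        eta_x = rng.normal(0.0, np.sqrt(2.0 * n_real * dt))
        eta_y = rng.normal(0.0, np.sqrt(2.0 * n_imag * dt))
        z = z + k * dt + eta_x + 1j * eta_y
        samples[i] = z
    return samples

zs = complex_langevin(0.5 + 0.1j, n_imag=0.0)
print("<z>   ~", zs[10_000:].mean())        # discard thermalization steps
print("<z^2> ~", (zs[10_000:] ** 2).mean())

for n_imag > 0 both components receive noise, which is the 'complex noise' generalization discussed above; setting n_imag = 0 recovers the standard choice of purely real noise.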
in the limit the dependence on the initial condition should of course disappear by ergodicity .the goal is to establish a connection between the ` expectation values ' with respect to and for the class of observables chosen ( entire holomorphic with at most exponential growth ) .the idea is to move the time evolution from the densities to the observables and make use of the cauchy - riemann ( cr ) equations .formally ( i.e. without worrying about boundary terms and existence questions ) this works as follows : first we use the fact that we want to apply the complex fp operators only to functions that have analytic continuations to all of . on those analytic continuationswe may act with the langevin operator l _ z , whose action on holomorphic functions agrees with that of , since on such functions and so that the difference vanishes. the proliferation of langevin / fokker - planck operators may be somewhat bewildering , but it is important to realize that are really all different operators : while and act on functions on ( i.e. functions of ) , agreeing on holomorphic functions , but disagreeing on general functions , acts on functions on , i.e. functions of . we now use to evolve the observables by the equation [ obsevol ] _ t o(z;t)=l o(z;t)(t0 ) with the initial condition , which is formally solved by [ obssol ] o(z;t ) = o(z ) . in eqs .( [ obsevol ] , [ obssol ] ) , because of the cr equations , the tilde may be dropped , and we will do so now .so we will have [ obsevol2 ] _l o(z;t)(t0 ) , with its formal solution [ obssol2 ] o(z;t ) = o(z ) .in fact eq . ( [ obsevol ] ) is also equivalent to the family of equations [ obsevol3 ] _l_y_0 o(x+iy_0;t)(t0 ) y_0 . the first thing to notice is that will be holomorphic if is .this can be seen as follows : let be the cauchy - riemann operators defined by the fact that by the holomorphy of commutes with , we find [ crobsevol ] _l |_j o(z;t)(t0 ) ; this is just eq .( [ obsevol ] ) again with replaced by . under the assumption that generates a semigroup acting on , eq .( [ crobsevol ] ) has a unique solution ; since the initial condition is we conclude that for all , i.e. satisfies the cr equations for all and all .so is holomorphic in each component separately . by hartogs theorem this implies joint holomorphy .we now consider , for , f(t,)p(x , y;t- ) o(x+iy;)dxdy , and claim that it interpolates between the and the expectations : f(t,0)= o_p(t ) , f(t , t)= o_(t ) .the first equality is obvious , while the second one can be seen as follows , using eqs .( [ init ] , [ obssol2 ] ) , where we only had to assume that we can integrate by parts in without worrying about boundary terms .our desired result follows if we can show that is independent of . to see this , we differentiate integration by partsthen shows that the two terms cancel , hence and thus [ equival ] o_p(t)= o_(t ) .it is important to notice that this holds for all ; whereas the left hand side seems to depend on , the right hand side is manifestly independent of it . 
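since several of the inline formulas above were garbled, the core of the interpolation argument can be restated compactly (a reconstruction in the notation of this section, under the same assumption that the integrations by parts produce no boundary terms):

F(t,\tau) \;\equiv\; \int P(x,y;t-\tau)\, O(x+iy;\tau)\, dx\, dy ,
\qquad F(t,0)=\langle O\rangle_{P(t)} , \qquad F(t,t)=\langle O\rangle_{\rho(t)} ,

\frac{\partial}{\partial\tau}F(t,\tau)
 \;=\; -\int \big(L^{T}P\big)(x,y;t-\tau)\, O(x+iy;\tau)\, dx\, dy
 \;+\; \int P(x,y;t-\tau)\, \big(L\,O\big)(x+iy;\tau)\, dx\, dy \;=\; 0 ,

with the last equality holding only if moving l from the observable to the density by integration by parts produces no boundary terms.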
if we knew in addition that [ conv ] _t o_(t ) = o _ ( ) , with given by eq .( [ rhostat ] ) with , we could now conclude that the expectation values of the langevin process relax to the desired values ; this convergence would follow if we knew that the spectrum of lies in a half plane and is a nondegenerate eigenvalue .but note that we do not really need convergence of for eq .( [ conv ] ) to hold , since it will only be tested against analytic observables .nevertheless the numerical evidence in many cases points to the existence of a unique stationary probability density .the corresponding probability currents are divergenceless , but unlike the situation in the real langevin process , they can not vanish . a general feature of the stationary distribution that can be read off the fpe is the following :assume that is a local stationary point of , then ( n_r_x+n_i_y)p = _ x ( k_xp)+_y ( k_y p ) , for .so if , a local maximum of can only occur where the divergence of the drift force is negative and a minimum where it is positive . for conclusion is even stronger : where the divergence is negative ( positive ) , there can not be a local minimum ( maximum ) in for fixed .these properties provide some checks on numerical solutions .there are three main questions raised by the formal arguments in the previous section : \(1 ) can the operators and their transposes be exponentiated ; in more mathematical language : do these operators generate semigroups on some suitable space of functions ?\(2 ) are the various integrations by parts justified , which underlie the shifting of the time evolution from the measure to the observables and back , or are there boundary terms to worry about ?\(3 ) are the spectra of and their transposes entirely in the left half plane and is 0 a nondegenerate eigenvalue ? concerning the first question , there are treatises ( see for instance refs. ) giving rather general sufficient conditions for the existence of a semigroup generated by differential operators of the general type considered here .unfortunately it seems that the cases we have to deal with here are not covered by those general results ; the main difficulties are ( 1 ) the strong growth of the drift given by the gradients of the action in some complex directions and ( 2 ) the fact that the drift is not always ` restoring ' .question ( 1 ) for is intimately related to the question whether the stochastic process given by the complex langevin equation exists for arbitrary long times .this is not obvious because typically in the classical ( no noise ) limit there are trajectories that go to infinity in finite time .while those trajectories occur only for a subset of measure zero of initial conditions , it is not obvious what happens after adding the noise . on the one hand, the noise will typically kick the process away from the unstable trajectories ; on the other hand it may also kick it near the unstable trajectory , inducing very large excursions of the process .we will later illustrate this with some examples . 
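a trivial toy example (an assumed drift, unrelated to the models studied below) makes the finite-time blow-up of noiseless trajectories explicit: for a drift growing quadratically in the imaginary direction, dy/dt = y^2, the solution y(t) = y_0/(1 - y_0 t) diverges at t = 1/y_0. the short sketch below simply integrates this with a naive fixed stepsize.

import numpy as np

# dy/dt = y**2 blows up at t = 1/y0; a fixed-step euler integration shows
# the runaway that adaptive stepsizes are designed to control
y0 = 1.0
y, t, dt = y0, 0.0, 1e-4
while t < 2.0 and np.isfinite(y) and abs(y) < 1e12:
    y += dt * y**2
    t += dt
print(f"|y| exceeded 1e12 at t ~ {t:.3f} (analytic blow-up at t = {1.0 / y0:.1f})")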
but let us say that the accumulated numerical evidence points not only to the existence of the process for arbitrarily large times , but also to the existence of a unique equilibrium measure for the process ; unfortunately we could neither find results in the mathematical literature that would imply this , nor could we prove ourselves that this is the case .concerning the exponentiation of , which should be easier , we still could not establish mathematically that it is possible on a space of functions containing the most obvious observables , such as exponentials . there is a useful criterion for the existence of a bounded semigroup generated by an operator on a hilbert space : generates a bounded semigroup if it is _ dissipative _ , i.e. if .unfortunately even in the simplest cases of a quadratic the corresponding langevin and fp operators are _ not _ dissipative .so they can at best generate exponentially bounded semigroups ; if in addition the spectrum is in the left half plane , convergence to the equilibrium should still take place . for the second question it would be necessary to have good control over the falloff of the solutions of the fpe in the imaginary directions : if we insert the observables into eq .( [ equival ] ) , we get for the fourier transform ( fourier coefficients in the compact case ) of the complex density ( k;t ) = p(x , y;t ) e^ikx - ky dxdy .this makes sense for all _ only _ if decays more strongly than any exponential in imaginary direction .our case studies described in the following sections indicate that this does not seem to be the case : in our first example the decay is probably exponential , but not stronger ; in our second example the decay seems to be even weaker ( with ) , so that exponentials can not be used as observables .the remainder of the paper is mainly devoted to studying question ( 2 ) in some toy models . finally let us remark that the third question is more difficult than the first one , and again the answer is not known rigorously . but again the numerical evidence strongly suggests a positive answer , depending on the model and the parameter values , in many interesting and relevant cases ( see e.g. refs .in this paper we are mainly concerned with question ( 2 ) .even though we have very little analytic control , careful numerical studies reveal that , as remarked , the answer is generally _ no _ ! in our case studies we find indications that the probability density indeed relaxes to an equilibrium density , but that that limiting density decays at best like an exponential for , in other cases only power - like .this limits the class of observables for which the integrations by parts can be performed without boundary terms . 
but let us first take a closer look at the integrations by parts that occur in the formal arguments of sec .[ secii ] .the danger lies in a possibly insufficient falloff in the imaginary ( ) directions , whereas in the real ( ) direction we have either compactness or sufficient falloff due to the behaviour of the action .let us remark that for the operators and are uniformly strictly elliptic ; this is important , because it implies regularity for any solution of the stationary fpe .it is also to be expected that the semigroup has a smooth ( even real analytic ) kernel so that for will be smooth ( real analytic ) .this is supported by our numerical studies .so the problematic point is to show that the two terms on right - hand side of eq .( [ interpol ] ) actually cancel .this may fail because the observables typically grow in imaginary direction , whereas the decay of ( always assuming it exists ) may be insufficient to compensate for it . let us see in a little more detail how the argument for the independence of may fail . for simplicity of presentationwe consider the one - dimensional case ( ) .we write the langevin operator ( [ eq : lo ] ) as l = l_n_i=0+n_i , where is the langevin operator for .then let us consider we use the formula e^tl^t= _ 0^t d e^l^te^(t-)l^t , and integration by parts to rewrite eq .( [ noisedep ] ) as _ 0^t dp(x , y;t- ) o(x+iy ; ) dxdy + x. the term denoted by collects possible boundary terms arising in the integration by parts .it vanishes only if the decay of is strong enough to offset any possible growth of ; otherwise it may either converge to a finite nonzero value or diverge . by the cr equationsthe first term vanishes , but the uncontrolled boundary term remains .let us look at a simple example that shows how and when the formal argument fails .consider ( where the second term vanishes on account of the cr equations ) . by the formal argument this would be zero , being just a boundary term , but careful application of integration by parts , at first over the finite domain , gives \big|_{-y_-}^{y_+}. & \end{aligned}\ ] ] here it is clear that what matters is the combined asymptotic behaviour of and , and depending on the observable , may be zero , finite , or divergent .of course the form of the boundary terms is less simple when using integration by parts in eqs .( [ interpol],[noisedep ] ) , but we expect that it is still the decay of the products like , , that is relevant . 
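a toy numerical illustration of this point, with an assumed density profile p ~ exp(-kappa |y|) and observables growing like exp(k y), neither of which is taken from the models below: the product evaluated at a cutoff then tends to zero, stays finite, or diverges depending on k.

import numpy as np

kappa = 2.0   # assumed decay rate of the density in the imaginary direction

def boundary_product(k, y_cut):
    """|p(y) * o(y)| at the cutoff, for p ~ exp(-kappa*|y|) and o ~ exp(k*y)."""
    return np.exp((k - kappa) * y_cut)

for k in (1.0, 2.0, 3.0):
    vals = " ".join(f"{boundary_product(k, y):.2e}" for y in (2.0, 5.0, 10.0))
    print(f"k = {k}:  boundary term at y_cut = 2, 5, 10 -> {vals}")
# k < kappa: the term vanishes; k = kappa: it stays finite; k > kappa: it diverges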
when trying to investigate the effects of the boundary numerically , one would in principle like to use the probability density obtained without a cutoff .in practice this is , however , not feasible , and we therefore introduce a cutoff in the imaginary direction , which is not sent to infinity .such a device is necessary for the solution of the fpe , and even though the cle does not require it , for the purpose of comparison we also introduce it there .this will , however , introduce additional problems with the formal arguments relying on the cr equations as well as integration by parts .concretely we proceed as follows : we restrict each to lie between and and impose periodic boundary conditions on both the observables and the probability densities .this has a number of consequences .firstly , observables will in general not be continuous across the ` seam ' , where we identify with .they can therefore not be interpreted as continuous functions and a priori the it formula ( [ ito ] ) does not hold ( it may still hold in the sense of distributions , which should be sufficient for our purposes ) .furthermore , the jump across the seam will mean that the cr equations are no longer satisfied everywhere .for the evolved observables the cr equations can not be expected to hold anywhere exactly , as the violation that occurred initially only at the boundary gets propagated everywhere by the langevin evolution .similarly , the drift in the fpe is expected to be discontinuous at the seam .therefore we have to expect that has a jump there as well ; this in turn forces us to interpret the fpe in the sense of distributions. one might wonder whether it would not be better to limit the fluctuations in the imaginary direction by introducing a smooth cutoff ;but since such a smooth cutoff function will necessarily be nonholomorphic it will destroy the formal arguments even more ; we therefore stick with the simplest choice of a periodic cutoff .we conclude therefore that the introduction of a cutoff and imposing periodic boundary conditions leads to a breakdown of the formal arguments given in sec .[ secii ] .although it is difficult to quantify precisely the effect of this , it seems reasonable to expect that it is still the behavior of and similar products at large that determines what is happening . for the cle it is also clear that a very large cutoff will practically not be felt , because the system very rarely will make contact with it .this is borne out by our numerics which clearly shows convergence to the limit of infinite cutoff .for the fpe , on the other hand the issue is less clear , because there are very large boundary terms arising from the gradients of the drift across the ` seam ' . in any case , for the fpe we can not directly compare with the cutoff - free results , because these do not exist .to understand in more detail how boundary terms affect the behaviour of complex langevin simulations , we studied in some detail the u(1 ) one - link model in the hopping ( hdm ) approximation that was already discussed in ref . for .the action is s =- z -(z - i)= - a(z - ic ) , with a= , c=. the complex drift force is correspondingly and the two components of the drift read as discussed in ref . 
there are two fixed points at and ; the first one is attractive , the second one repulsive .a special feature of this model is that for the drift is purely in direction .if in addition , the langevin process will never leave the line if it starts there ( we emphasize that the properties discussed in this paragraph do not hold for the full u(1 ) one - link model , which was studied in detail in ref .it is therefore straightforward to find an explicit solution to the stationary fpe , [ fpesol ] p(x , y ; ) e^-s(x+ic)(y - c ) .it follows that this model is actually equivalent to one with a real action , once we shift and replace by ( and now embody the dependence on the parameters of the model ) .the numerics presented below show that the line is an attractor for the langevin process ; this indicates that the solution ( [ fpesol ] ) is unique ( with proper normalization ) .these properties imply that for the dynamics is completely understood . when , the presence of the repulsive fixed point is responsible for the occurrence of large excursions , which are well known to be the scourge of complex langevin simulations .for large the drift terms dominate over the noise , and the langevin process is essentially just a deterministic motion ; the ` classical ' trajectories are given by z(t)=i+ic , where the complex integration constant is related to the starting point by c= ( ) .it is easy to see that all trajectories , except those starting on the unstable trajectories ( ) are attracted to the stable fixed point at . as an illustration we show in fig .[ u1 ] a scatter plot of a langevin simulation clearly exhibiting the classical orbits .there were 500 update steps between consecutive points and we used . to enhance the classical features , this simulation was done with the noise terms in the cle ( see eqs .( [ eqcle1 ] , [ cledisc ] ) below ) suppressed by a factor 10 ; this is of course equivalent to replacing by while multiplying the force terms by a factor of 100 .this scatter plot should capture some generic features that will also be present for other choices of the model parameters .one - link model at with reduced noise ( see text ) . ] in order to numerically study the role of boundary terms and large values at nonzero , we introduce a cutoff in imaginary direction , placed symmetrically around , i.e. , and impose periodic boundary conditions . to see the effect of this cutoff ,we compute numerically the expectation values of the observables for , for various values of and , both by simulating the langevin process and by numerical solution of the fpe . note that these observables grow exponentially at large .the exact values are given by e^ikx= e^-kc , where are the modified bessel functions of the first kind .the langevin process is discretized in the usual way , where and are pseudorandom numbers with zero mean and variance 2 .we use periodic boundary conditions in the imaginary direction , as stated above .we also use an adaptive step size , choosing such that the product of . to estimate the statistical error we run 100 trajectories with independent random starting points . in the u(1 ) one - link model , obtained from a numerical solution of the real fpe , for various values of : ( top ) , ( middle ) , ( bottom ) . 
see the main text for further details ., title="fig : " ] in the u(1 ) one - link model , obtained from a numerical solution of the real fpe , for various values of : ( top ) , ( middle ) , ( bottom ) .see the main text for further details ., title="fig : " ] in the u(1 ) one - link model , obtained from a numerical solution of the real fpe , for various values of : ( top ) , ( middle ) , ( bottom ) .see the main text for further details ., title="fig : " ] dependence of ( lower points ) and ( higher points ) from fpe for various values of the cutoff .the bottom figure zooms in on smaller values of .the lines are guides to the eye , the horizontal dotted lines indicate the correct results ( 0.483564 and 0.592966 respectively ) ., title="fig : " ] + dependence of ( lower points ) and ( higher points ) from fpe for various values of the cutoff .the bottom figure zooms in on smaller values of .the lines are guides to the eye , the horizontal dotted lines indicate the correct results ( 0.483564 and 0.592966 respectively ) ., title="fig : " ] to solve the fpe numerically , we employ the periodicity in and consider the fourier decompositions with the inverse transformations given by the fpe can be rewritten in terms of these modes as \notag\\ & + \,\frac{a}{2}\sinh(y - c)\partial_y\left[\widehat p(k-1,y;t)+ \widehat p(k+1,y;t)\right ] .\;\;\;\;\end{aligned}\ ] ] this equation is solved numerically with a discretized time step and a spatial discretization in of .we vary the cutoff in the direction from up to ( in some cases ) . in the direction , with , we use points . we found that convergence was reached after time steps , corresponding to a langevin time .we found convergence for all the values of and studied .for , we were not able to solve the fpe numerically , due to the singular behaviour , but the solution is known analytically , see eq .( [ fpesol ] ) .we fix the parameters of the model to be = 1.0 , = 0.25 , = 0.5 . in fig .[ f.p_dis ] we show some examples of the real probability distribution at large langevin time for various and a given cutoff .the direction goes from left to right and the compact direction from back to front .we observe that at small , the distribution resembles the analytic result at .the distribution is very narrow in the direction and boundary effects are not expected to play a role . increasing results in a wider distribution , and boundaryeffects become clearly visible .the apparent non - smooth behaviour at the edge is to be expected from the discontinuity across the seam , discussed at the end of the previous section . after obtainingthe distribution , expectation values of observables follow from eq .( [ eq : op ] ) . in tables 1 7 ( see end of paper ) we compare the results of the langevin simulation and the fpe for increasing values of from up to .note that the langevin equation can be solved without a cutoff ( ) .the imaginary parts of the observables are consistent with zero .the data of the tables are also summarized in figs .[ f.ni-dep ] , [ f.l - dep ] ( the fpe and the cle data are indistinguishable at the scale of the figures ; the rightmost points in fig .[ f.l - dep ] correspond to for fpe and for cle ) . )dependence of ( lower data ) and ( upper data ) for various values of , for fpe ( open symbols ) and cle ( full symbols , note that the errorbars are much smaller that the points ) .the lines are guides to the eye .the horizontal dotted lines indicate the correct results . 
]the following facts can be inferred from these results : + ( 1 ) cle and fpe give rather similar results , but sometimes they differ by several ( statistical error of the cle simulation ) . + ( 2 ) all the data show a clear dependence on , in contrast to the conclusion of the formal arguments . for larger values both cle and fpe give results different from the exact values .+ ( 3 ) the best results are generally obtained for the smallest . in this casethere is also the weakest dependence , in fact no dependence whatsoever for .+ obviously the presence of the cutoff and periodic boundary conditions affects the cle and fpe in a similar way .but the dependence shows the failure of the formal argument , even for .at least for one has a clear case of ` convergence to a wrong limit ' .the data of the cle with and are actually not really converged , except for observing the scatter plots for different langevin times it appears unclear whether the observables really reach an equilibrium distribution they tend to drift out to infinity , whereas their averages , while remaining small , suffer from huge fluctuations . on the other hand the observables seem to reach a stable distribution for and any .guralnik and pehlevan studied an instructive toy model on and , called gp model henceforth .its action is [ gpaction ] s = -i(z+z^3 ) , and was studied in connection with pt invariant but non - hermitian hamiltionians ( where pt indicates the combined action of parity and time reversal ) .the action ( [ gpaction ] ) leads to the drift forces k_x=-2x y , k_y=(1+x^2-y^2 ) .there is a stable fixed point at and an unstable one at .the ` classical trajectories ' obtained by leaving out the noise are given by z(t)=. since this is a mbius transformation from to the trajectories are circles .they can be imagined to emerge from the unstable fixed point at and go to the stable fixed point as .those classical trajectories again can be seen clearly in the large excursions , since there the noise becomes negligible .[ guralnikclass ] shows the result from a langevin simulation at and ( in this case there are 50000 update steps between two consecutive points ) .dependence of in the gp model at . ]dependence of in the gp model at . ] . ] guralnik and pehlevan give the exact results of the first three moments at , they are they solved the discretized langevin equation numerically , using , and obtained good agreement with those exact results .we did some more and probably longer simulations at , , , .the results for are shown in fig .[ imz ] ; again they show a clear dependence on , in conflict with the formal reasoning . while for small there is agreement with the exact result , for larger we again have convergence to the wrong limit .( top ) and i m ( bottom ) in the gp model with , .,title="fig : " ] ( top ) and i m ( bottom ) in the gp model with , .,title="fig : " ] the results for show a similar behaviour , except that at the result is , with a statistical error that is huge compared to the error found at .the data for show an even more dramatic failure for larger : they diverge for , whereas for fluctuations are large .finally we measured , which has an exact value of . heredivergence becomes manifest already for ( but it might occur for all ) . 
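for completeness, here is a minimal complex langevin sketch for this model, assuming the normalization s = -i beta (z + z^3/3), which reproduces the drift components k_x = -2 beta x y and k_y = beta (1 + x^2 - y^2) quoted above; beta, the stepsize control and the run length are illustrative choices. the measured moments can be compared with the exact values quoted by guralnik and pehlevan.

import numpy as np

rng = np.random.default_rng(2)

def drift(z, beta=1.0):
    # k = -ds/dz for the assumed action s = -i*beta*(z + z**3/3):
    # k_x = -2*beta*x*y ,  k_y = beta*(1 + x**2 - y**2)
    return 1j * beta * (1.0 + z * z)

def run(t_max=50.0, dt0=1e-4, n_imag=0.0, z0=0.1 + 0.1j, beta=1.0):
    z, t = complex(z0), 0.0
    sums = np.zeros(2, dtype=complex)              # time-weighted sums of z, z**2
    while t < t_max:
        k = drift(z, beta)
        dt = min(dt0, 0.01 / max(abs(k), 1e-12))   # crude adaptive stepsize
        eta_x = rng.normal(0.0, np.sqrt(2.0 * (1.0 + n_imag) * dt))
        eta_y = rng.normal(0.0, np.sqrt(2.0 * n_imag * dt))
        z = z + k * dt + eta_x + 1j * eta_y
        sums += dt * np.array([z, z * z])
        t += dt
    return sums / t

avg_z, avg_z2 = run(n_imag=0.0)
print("<z>   ~", avg_z)
print("<z^2> ~", avg_z2)

the adaptive stepsize here simply caps the drift displacement per step during the large excursions, in the spirit of the adaptive-stepsize prescription mentioned in the introduction.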
in order to have an idea of the equilibrium distribution for we show in fig .[ plots2 ] a scatter plot of 50000 configurations in the complex plane .non - gaussian behaviour is quite clear from this plot .noticeable is the appearance of sharp edges of the distribution , possibly indicating jumps , but no functions like in the case in the hopping expansion . to obtain these pictures we sampled over 60000 points taken at equal intervals of 0.5 in langevin time .similar distributions have been observed in ref . for the full u(1 ) one - link model .the non - gaussian character is further demonstrated in fig . [ plots ] : the histograms for both and deviate strongly from a gaussian distribution .the question of convergence vs. divergence apparently depends on the values of , but it is more plausible that there is no qualitative difference between the different positive values of , only the time needed to observe the asymptotic behaviour is different . on the other hand for there seems to be really a qualitative difference : the distributions develop discontinuities or even functions ; but more important is the fact that they seem to drop very rapidly in the imaginary direction .we tentatively conclude that for the systems relax to equilibrium measures that show at least exponential decay in imaginary direction ( in our simple model this decay is of course much stronger the measure is zero for ) . for situation is less clear , but it seems that in the model we get a decay at least like , but probably not stronger than any exponential . in the model of guralnik andpehlevan the data suggest a power - like decay ( the power appears to be near 2 ) .the type of decay for depends on the model considered ; but for some observables their growth in imaginary direction may conspire with the falloff of the equilibrium measure in such a way that we obtain convergence to a wrong limit . in any caseone should not expect convergence of the mean value for _ all _ holomorphic observables . for very small and limited simulation time, the behaviour is of course indistinguishable from the one at , and one may reach a quasi - convergence to the right limit , even if an infinitely long simulation would diverge .if we try to generalize boldly from our toy model to real lattice gauge theories , we expect that for and for most interesting observables , such as wilson loops , polyakov loops etc . , we have to expect boundary terms contributing even as the boundary is sent to infinity . formultiply charged loops and the situation could be even worse : those boundary contributions may diverge as the boundary is moved to infinity .but these problems probably will not occur for and do not show up at very small values of either , at least if simulations are not run excessively long .we realize that definite conclusions about the falloff of the probability density in the direction have not yet been reached ; we intend to return to a more detailed study of this question in a future paper .\(1 ) in complex langevin simulations one should use . if one wants to do a random walk simulation or an iterate of the fokker - planck operator , this is not possible , but one should make sure that .\(3 ) wilson or polyakov loops of higher charge will generally have much larger fluctuations .in general they are not needed for physics applications , but they may be worth looking at because they can give information about the probability distribution .g. aarts , jhep * 0905 * ( 2009 ) 052 [ 0902.4686 [ hep - lat ] ] . c. pehlevan and g. 
guralnik, nucl. phys. b 811, 519 (2009) [0710.3756 [hep-th]]. g. guralnik and c. pehlevan, nucl. phys. b 822, 349 (2009) [0902.1503 [hep-lat]]. g. aarts, f. james, e. seiler and i. o. stamatescu, phys. lett. b (to appear) [arxiv:0912.0617 [hep-lat]]. g. g. batrouni, g. r. katz, a. s. kronfeld, g. p. lepage, b. svetitsky and k. g. wilson, phys. rev. d 32 (1985) 2736. h. gausterer and h. thaler, j. phys. 31 (1998) 2541.
we analyze to what extent the complex langevin method, which is in principle capable of solving the so-called sign problems, can be considered reliable. we give a formal derivation of its correctness and then point out various mathematical loopholes. the detailed study of some simple examples leads to practical suggestions about the application of the method.
many codes have been produced by the astronomical community, addressing many different aspects of the modelling of astrophysical phenomena. most of the time these codes are built as software packages, which can then be downloaded by the interested user onto his/her own computing system and installed. sometimes these packages are packed as precompiled `` distributions '' for specific architectures and/or operating systems, already containing binaries and executable files which, at the end of the installation process, are put in some convenient areas. in other cases, the source codes are given, and instructions for compiling them on different systems are provided. anyhow, the final user is asked to install the software directly on an available computer system. + on the other hand, the diffusion of html languages and of the related technologies has made feasible a different approach to code usage, in which one no longer needs to download and install large software packages locally. this approach makes use of _ portals _, i.e. multifunction www interfaces, to allow remote users to prepare and execute computational tasks on predetermined platforms, where the user has been authorised to execute his/her jobs. the advantage is that the user is relieved from the duty of downloading and compiling the source code: the latter phase can sometimes be rather cumbersome, particularly for an inexperienced user. the obvious disadvantage of this approach is that the user has less control over the code he is using, particularly because he can operate very few (if any) changes on the source code. + usually, web portals for the astronomical community have been designed and used mostly to enable database searches. a complex extension of this concept, namely the astrophysical virtual observatory (avo, http://www.euro-vo.org/), is currently under active development as a joint, large-scale effort involving a few international institutions. the project presented here, astrocomp (http://www.astrocomp.it), is a portal specifically designed to allow access to _ simulation _ codes, rather than pre-existing databases. the end user will be able to remotely prepare the simulation, after browsing among those existing on the specified platforms, perform the run, and finally retrieve the data, interactively operating with the portal. + this paper introduces the portal and its main functionalities, and gives a short description of the software presently available (more codes can be added on request of the authors). section 2 contains a description of the portal, while sect. 3 gives an account of the languages and technologies used to build the portal. sect. 4 describes the characteristics of the numerical codes available, while sect. 5 gives introductory information on the practical use of the facilities of the portal. finally, section 6 contains the conclusions. the astrocomp project is being developed by the inaf - astrophysical observatory of catania, the university of roma la sapienza and enea, funded by a consiglio nazionale delle ricerche (cnr) grant in the frame of the program ` agenzia 2000 '. + astrocomp is a portal (http://www.astrocomp.it) based on web technologies, aimed at managing and using codes for astrophysics through user-friendly interfaces; at present, the codes available are mainly related to the simulation of gravitational systems of astrophysical and cosmological interest. the main functionality of the portal is a user
- friendly application which allows the scientist to submit a job in a grid of computing systems . indeed , even if , at the moment , there is a direct link between the software and the system where the user job will run , the database architecture is not related to a specific system platform , and the connection with the system is made with several parameters that can be easily changed .+ at present , the computational platforms registered in astrocomp are the cineca ( casalecchio di reno , bologna , italy ; http://www.cineca.it ) mpp systems : ibm sp4 with 512 pes and ibm linux cluster with 512 pes through the formal agreement between inaf ( istituto nazionale di astrofisica ) and cineca , and the ibm sp system with 32 pes sited at the catania astrophysical observatory. registered astrocomp users can freely use them ( within the assigned time quota ) .astrocomp runs on apache - advanced extranet : this allows to set up connections with no practical limits on the number of authorized users .+ apache is a powerful , flexible , compliant web server , it has a flexible configuration and it is extensible to third - party modules . the access to the astroadmin section , i.e. the reserved area written for the administrator for authorization and authentication , is based on the apache facilities .dynamical web pages are realized with a server - side scripting php ( recursive acronym for `` php : hypertext preprocessor '' ) , a widely used open source general - purpose scripting language that is especially suited for web development and can be embedded in html .the php programming language provides a powerful extension to html to create advanced and interactive web pages .+ astrocomp has a database of the software and of the systems and codes managed by the portal ; this is done with the mysql database management system .the database tables contain a description of the codes properties and of the hardware resources .the portal is designed with a relational database containing the information related to the hardware systems accessible to astrocomp , that is arranged such to ensure an easy implementation of new hpc resources .the astrocomp system administrator updates the hardware table and in particular fills the fields related to os commands : user login , directories , disk quota , job submission and job management etc .the administration of all the portal features is done through a reserved section .a new code can be quickly added to the portal and its behaviour can be controlled on the user - side .there are no practical limits to the variety of codes that can be included in the astrocomp structure .+ the portal architecture allows us to handle each code considering the i / o files , shell scripts , boundary condition data , log files , etc .a job is an entity described by a state : idle , queued or running and some other parameters .an astrocomp section allows the user to know the system status of an hpc giving updated information like memory and cpu usage .in the following sections we shortly describe the codes that are presently available in astrocomp .more exhaustive descriptions of these codes can be found in the portal .fly is a tree parallel code that follows the evolution of a newtonian three - dimensional n - body collisionless system .it is based on the tree barnes - hut algorithm ( ) and periodical boundary conditions are implemented by means of ewald s ( ) summation technique .the evolutive differential equations are integrated using the classical ( second order ) ` leap - frog ' 
integration scheme with a fixed time - step .all the particles have assigned the same mass , and the spatial cubic grid has fixed size .the supported cosmological models are : standard cold dark matter , lambda models ( with cold dark matter ) , and open models .+ the code was originally developed on a cray t3e system using the logically shared memory access routines ( shmem ) but it also runs on sgi origin systems and on ibm sp by using low - level application programming interface routines ( lapi ) .fly is included in the cpc ( computer physics communications ) program library .more details can be found in and at http://www.ct.astro.it / fly/. atd ( adaptive tree decomposition tree - code ) is a parallel n - body code for the simulations of the dynamics of collisonal and collisionless boundary - free self - gravitating stellar systems .the gravitational interaction among the ` particles ' is computed with an algorithm based on the barnes & hut scheme ( ) including up to the quadrupole moment in the multipolar expansion .a ` dynamical ' tree reconstruction is also implemented in the code .this permits a more frequent update of the small scale ( neighbouring ) interactions ( evaluated by direct summation ) in respect to the large scale forces , without the need of a complete reconstruction of the whole data tree structure .the time integration of the trajectories of the particles representing stars in the numerical model , is performed using individual and variable time - steps in a ` leap - frog ' algorithm corrected in such a way to preserve 2 order accuracy also during the time - step change .the code has been parallelized adopting an original scheme for the distribution of the computational work among processors called ` adaptivetree decomposition ' ( interested readers can find an exhaustive description of this scheme in ) .we carried out two different parallel versions of the code which can run both on shared memomy platforms ( using openmp language directives ) and on distributed memory computers ( employing suitable mpi calls ) .further details on the code features and its usage can be found in the portal itself .mara is an mpi code for modelling sequences of light curves of single and binary stars with surface brightness inhomogeneities , in particular cool spots .it allows to derive sequences of maps of the spot surface distribution which are useful to study magnetic solar - like activity in close binaries and single stars .moreover , it allows to determine the best set of photometric parameters for a close active binary by correcting the systematic errors due to the light curve distortion induced by spots (; ) .anyone interested in using the facilities provided by astrocomp has to register , first . the steps to do are : 1 .register as a new user by clicking on the registration form button on the left - hand side of the home - page ; 2 .fill in carefully the form and submit it ; 3 .wait for a confirmation e - mail sent by the portal administrator .the scientific staff will evaluate the request and assign the access to the portal services according to three different ` user classes ' : a , b and c. the a class has the highest limit on the cpu time and disk usage ; the b class supplies the user with a limited amount of cpu time and disk quota . while the c class assigns only few resources to execute very short jobs .clicking on the user area button , any registered user can have the access to all the computational facilities of astrocomp , after the normal login procedure . 
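to illustrate the leap-frog time integration used by fly and atd, here is a minimal direct-summation sketch with softened gravity and a fixed time-step; this is not the parallel tree implementation of either code, and the particle numbers, softening and units are illustrative assumptions only.

import numpy as np

def accelerations(pos, mass, eps=1e-2):
    """softened direct-summation gravitational accelerations (g = 1)."""
    dx = pos[None, :, :] - pos[:, None, :]          # dx[i, j] = pos[j] - pos[i]
    r2 = (dx ** 2).sum(-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # no self-interaction
    return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog(pos, vel, mass, dt=1e-3, n_steps=1000):
    """second-order kick-drift-kick leap-frog with a fixed time-step."""
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel

rng = np.random.default_rng(3)
n = 64
pos = rng.normal(size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
com0 = (mass[:, None] * pos).sum(axis=0)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, n_steps=2000)
print("centre-of-mass displacement:", (mass[:, None] * pos).sum(axis=0) - com0)

with pairwise antisymmetric forces the total momentum is conserved up to round-off, which the centre-of-mass check makes visible; the tree codes described above replace the o(n^2) direct sum with a hierarchical multipole approximation.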
at present, the astrocomp user is allowed to: * start a new simulation by choosing the code to be used and by setting the simulation parameters via the on-line parameters form (initial conditions can also be uploaded from the local user's host to the server); * choose among different platforms in the pool of the available resources of astrocomp, taking into account the work-load and the accessibility of each system; or * browse the status of a previously launched job, possibly checking the intermediate results with a preliminary ` on-the-fly ' visualization tool already available in the portal; * download the final and/or intermediate results. moreover, anyone who visits the portal, without necessarily being a registered user, can also examine all the implemented software by clicking on the software button on the home-page (http://www.astrocomp.it/software), read the enclosed documentation and manuals, and take a look at the features and the status of all the computing machines that can be employed by clicking on hardware (http://www.astrocomp.it/user/hardware). the user chooses the system where the job will be submitted, and then fills in some forms specifying the parameters and all the variables involved in the job. the complete job history is stored in a mysql table. the user can also easily retrieve the parameter collection and re-use it for a new job submission. + the last phase consists in the job preparation and submission: it is the portal itself that copies and/or uploads to the remote system all the needed files and shell scripts, and compiles and submits the code. in the near future the grid environment will allow the user to register and store the output files in a storage element (e.g. the datagrid environment, see http://eu-datagrid.web.cern.ch/). the aim of astrocomp (http://www.astrocomp.it) is to run astrophysical codes on a grid of systems, avoiding the effort of learning new system-specific operating and/or parallelization commands. even if, at present, astrocomp allows execution on a limited set of systems, according to the design of the portal and its future development we foresee running astrocomp on an effective computational grid. cineca (http://www.cineca.it) has already selected this project as a pilot project for internal mpp grid usage. a formal agreement between the italian national institute of nuclear physics (infn) and the inaf - astrophysical observatory of catania allows us to build a local datagrid node at the inaf site. astrocomp will also be ported to the datagrid in the near future. + all scientists interested in the astrocomp facility are welcome and invited to ask for the inclusion of their own software in the astrocomp database. barnes, j. & hut, p., nature, 1986, 324, 446. becciani, u. & antonuccio, v., computer physics comm., 2001, 136, 54. ewald, p. p., ann. phys., 1921, 64, 253. lanza, a. & becciani, u., in preparation. miocchi & capuzzo dolcetta, a&a, 2002, 382, 758. rodonò, lanza & becciani, a&a, 2001, 371, 174.
astrocomp is a joint project developed by the inaf-astrophysical observatory of catania, the university of roma la sapienza and enea. the project has the goal of providing the scientific community with a user-friendly, web-based interface for running parallel codes on a set of high-performance computing (hpc) resources, without any need for specific knowledge of parallel programming or operating system commands. astrocomp also provides computing time, available to authorized users, on a set of parallel computing systems. at present the portal makes a few codes available, among which: fly, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; atd, a parallel tree-code for simulating the dynamics of boundary-free collisional and collisionless self-gravitating systems; and mara, a code for the analysis of stellar light curves. other codes will be added to the portal. hpc, grid computing, web-based interface, numerical astrophysics
the growth of cosmological structure in the universe is determined primarily by ( newtonian ) gravitational forces . unlike the electrostatic force , which can be both attractive and repulsive and for which shielding is important , the ubiquitous attraction of the gravitational force leads to extremely dense structures , relative to the average density in the universe .galaxies , for example , are typically times more dense than their surrounding environment , and substructure within them can be orders of magnitude more dense .modelling such large density contrasts is difficult with fixed grid methods and , consequently , particle - based solvers are an indispensable tool for conducting simulations of the growth of cosmological structure .the lagrangian nature of particle codes makes them inherently adaptive without requiring the complexity associated with adaptive eulerian methods .the lagrangian smoothed particle hydrodynamics ( sph, ) method also integrates well with gravitational solvers using particles , and because of its simplicity , robustness and ability to easily model complex geometries , has become widely used in cosmology .further , the necessity to model systems in which orbit crossing , or phase wrapping , occurs ( either in collisionless fluids or in collisional systems ) demands a fully lagrangian method that tracks mass . while full six - dimensional ( boltzmann ) phase - space models have been attempted , the resolution is still severely limited on current computers for most applications .particle solvers of interest in cosmology can broadly be divided into hybrid direct plus grid - based solvers such as particle - particle , particle - mesh methods ( 3m, ) and `` tree '' methods which use truncated low order multipole expansions to evaluate the force from distant particles .full multipole methods , are slowly gaining popularity but have yet to gain widespread acceptance in the cosmological simulation community .there are also a number of hybrid tree plus particle - mesh methods in which an efficient grid - based solver is used for long - range gravitational interactions with sub - grid forces being computed using a tree .special purpose hardware has rendered the direct pp method competitive in small simulations ( fewer than 16 million particles ) , but it remains unlikely that it will ever be competitive for larger simulations .the 3 m algorithm has been utilized extensively in cosmology .the first high resolution simulations of structure formation were conducted by efstathiou & eastwood using a modified 3 m plasma code . in 1998the virgo consortium used a 3 m code to conduct the first billion particle simulation of cosmological structure formation .the well - known problem of slow - down under heavy particle clustering , due to a rapid rise in the number of short - range interactions , can be largely solved by the use of adaptive , hierarchical , sub - grids .only when a regime is approached where multiple time steps are beneficial does the adaptive 3 m ( 3 m ) algorithm become less competitive than modern tree - based solvers .further , we note that a straightforward multiple time - step scheme has been implemented in 3 m with a factor of 3 speed - up reported .3 m has also been vectorized by a number of groups including summers . 
shortly after , both ferrell & bertschinger and theuns adapted 3 m to the massively parallel architecture of the connection machine .this early work highlighted the need for careful examination of the parallelization strategy because of the load imbalance that can result in gravitational simulations as particle clustering develops .parallel versions of 3 m that use a 1-dimensional domain decomposition , such as the p4 m code of brieu & evrard develop large load imbalances under clustering rendering them useful only for very homogeneous simulations .development of vectorized treecodes predates the early work on 3 m codes and a discussion of a combined tree+sph ( treesph ) code for massively parallel architectures is presented by dav .there are now a number of combined parallel tree+sph solvers and tree gravity solvers .pearce & couchman have discussed the parallelization of 3m+sph on the cray t3d using cray adaptive fortran ( craft ) , which is a directive - based parallel programming methodology .this code was developed from the serial hydra algorithm and much of our discussion in this paper draws from this first parallelization of 3m+sph .a highly efficient distributed memory parallel implementation of 3 m using the cray shmem library has been developed by macfarland , and further developments of this code include a translation to mpi-2 , the addition of 3 m subroutines and the inclusion of an sph solver .treecodes have also been combined with grid methods to form the tree - particle - mesh solver .the algorithm is somewhat less efficient than 3 m in a fixed time - step regime , but its simplicity offers advantages when multiple time - steps are considered . another interesting , and highly efficient n - body algorithm is the adaptive refinement tree ( art ) method which uses a short - range force correction that is calculated via a multi - grid solver on refined meshes .there are a number of factors in cosmology that drive researchers towards parallel computing .these factors can be divided into the desire to simulate with the highest possible resolution , and hence particle number , and also the need to complete simulations in the shortest possible time frame to enable rapid progress .the desire for high resolution comes from two areas .firstly , simultaneously simulating the growth of structure on the largest and smallest cosmological scales requires enormous mass resolution ( the ratio of mass scales between a supercluster and the substructure in a galaxy is ) .this problem is fundamentally related to the fact that in the currently favoured cold dark matter cosmology structure grows in a hierarchical manner .a secondary desire for high resolution comes from simulations that are performed to make statistical predictions . to ensure the lowest possible sample variance the largest possible simulation volumeis desired . for complex codes ,typically containing tens of thousands of lines , the effort in developing a code for distributed - memory machines , using an api such as mpi , can be enormous .the complexity within such codes arises from the subtle communication patterns that are disguised in serial implementations . 
indeed , as has been observed by the authors , development of an efficient communication strategy for a distributed memory version of the 3 m codehas required substantially more code than the 3 m algorithm itself ( see ) .this is primarily because hybrid , or multi - part solvers , of which 3 m is a classic example , have data structures that require significantly different data topologies for optimal load balance at different stages of the solution cycle .clearly a globally addressable work space renders parallelization a far simpler task in such situations .it is also worth noting that due to time - step constraints and the scaling of the algorithm with the number of particles , doubling the linear resolution along an axis of a simulation increases the computational work load by a factor larger than 20 ; further doubling would lead to a workload in excess of 400 times greater .the above considerations lead to the following observation : modern smp servers with their shared memory design and superb performance characteristics are an excellent tool for conducting simulations requiring significantly more computational power than that available from a workstation .although such servers can never compete with massively parallel machines for the largest simulations , their ease of use and programming renders them highly productive computing environments .the openmp ( http://www.openmp.org ) api for shared - memory programming is simple to use and enables loop level parallelism by the insertion of pragmas within the source code .other than their limited expansion capacity , the strongest argument against purchasing an smp server remains hardware cost .however , there is a trade - off between science accomplishment and development time that must be considered above hardware costs alone .typically , programming a beowulf - style cluster for challenging codes takes far longer and requires a significantly greater monetary and personnel investment on a project - by - project basis .conversely , for problems that can be efficiently and quickly parallelized on a distributed memory architecture , smp servers are not cost effective .the bottom line remains that individual research groups must decide which platform is most appropriate .the code that we discuss in this paper neatly fills the niche between workstation computations and massively parallel simulations .there is also a class of simulation problems in cosmology that have particularly poor parallel scaling , regardless of the simulation algorithm used ( the fiducial example is the modelling of single galaxies , see ) .this class of problems corresponds to particularly inhomogeneous particle distributions that develop a large disparity in particle - update timescales ( some particles may be in extremely dense regions , while others may be in very low density regions ) .only a very small number of particlesinsufficient to be distributed effectively across multiple nodeswill require a large number of updates due to their small time - steps . for this type of simulation the practical limit of scalability appears to be order 10 pes .the layout of the paper is as follows : in section 2 we review the physical system being studied .this is followed by an extensive exposition of the 3 m algorithm and the improvements that yield the 3 m algorithm .the primary purpose of this section is to discuss some subtleties that directly impact our parallelization strategy . 
at the same timewe also discuss the sph method and highlight the similarities between the two algorithms .section 2 concludes with a discussion of the serial hydra code .section 3 begins with a short discussion of the memory hierarchy in risc ( reduced instruction set computer ) systems , and how eliminating cache - misses and ensuring good cache reuse ensures optimal performance on these machines .this is followed by a discussion of a number of code optimizations for risc cpus that also lead to performance improvements on shared memory parallel machines ( primarily due to increased data locality ) . in particularwe discuss improvements in particle bookkeeping , such as particle index reordering .while particle reordering might be considered an expensive operation , since it involves a global sort , it actually dramatically improves run time because of bottlenecks in the memory hierarchy of risc systems .in section 4 we discuss in detail the parallelization strategies adopted in hydra_omp . to help provide further understanding we compare the serial and parallel call trees . in section 5we consolidate material from sections 3 & 4 by discussing considerations for numa machines and in particular the issue of data placement .performance figures are given in section 6 , and we present our conclusions in section 7 .the simulation of cosmic structure formation is posed as an initial value problem .given a set of initial conditions , which are usually constrained by experimental data , such as the wmap data , we must solve the following gravito - hydrodynamic equations ; 1 .the continuity equations , where denotes gas and dark matter .2 . the euler and acceleration equations , 3 .the poisson equation , 4 . the entropy conservation equation , where the conservation of entropy is a result of ignoring dissipation , viscosity and thermal conductivity ( an ideal fluid ) . the dynamical system is closed by the equation of state .we assume an ideal gas equation of state , with in our code , although many others are possible .alternatively , the entropy equation can be substituted with the conservation of energy equation , and the equation of state is then .we note that the use of a particle - based method ensures that the continuity equations are immediately satisfied .let us first discuss the basic features of the 3 m algorithm , a thorough review can be found in .the fundamental basis of the 3 m algorithm is that the gravitational force can be separated into short and long range components , , where will be provided by a fourier - based solver and will be calculated by summing over particles within a given short range radius .the force is typical known as the pm force , for particle - mesh , while the range force is typical known as the pp force , for particle - particle .the accuracy of the force can be improved by further smoothing the mesh force , , and hence increasing the range over the which the short - scale calculation is done , at the cost of an increased number of particle particle interactions . 
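For reference, since the displayed equations in this passage were lost in extraction, the Poisson equation and the short/long-range force split at the heart of the P3M scheme can be written in their standard textbook form (generic notation, written in physical coordinates; this is not necessarily the exact notation or normalisation of the original):

```latex
% Standard forms, quoted for reference: Poisson's equation and the
% P3M split of the gravitational force on particle i into a mesh (pm)
% part and a short-range particle-particle (pp) correction.
\nabla^{2}\phi = 4\pi G\,\rho ,
\qquad
\mathbf{F}_{i} = \mathbf{F}^{\rm pm}_{i} + \mathbf{F}^{\rm pp}_{i}
  = \mathbf{F}^{\rm pm}_{i}
  + \sum_{|\mathbf{r}_{ij}| < r_{e}}
    \left[ \mathbf{f}(\mathbf{r}_{ij}) - \mathbf{f}^{\rm mesh}(\mathbf{r}_{ij}) \right] ,
```

where f is the exact (softened) pair force, f^mesh is the pair force already supplied by the mesh solver, and r_e is the cut-off radius of the short-range sum; smoothing the mesh force further enlarges r_e and hence the number of pair interactions, exactly as stated above.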
the first step in evaluating that pm force is to interpolate the mass density of the particle distribution on to a grid which can be viewed as a map from a lagrangian representation to an eulerian one .the interpolation function we use is the the ` triangular shaped cloud ' ( tsc ) ` assignment function ' ( see for a detailed discussion of possible assignment functions ) .two benefits of using tsc are good suppression of aliasing from power above the nyquist frequency of the grid and a comparatively low directional force error around the grid spacing .the mass assignment operation count is , where is the number of particles .once the mass density grid has been constructed it is fourier transformed using an fft routine , which is an operation , where is the extent of the fourier grid in one direction . the resulting k - space fieldis then multiplied with a green s function that is calculated to minimize errors associated with the mass assignment procedure ( see hockney & eastwood for a review of the ` q - minimization ' procedure ) . following this convolution ,the resulting potential grid is differenced to recover the force grid .we use a 10-point differencing operator which incorporates off - axis components and reduces directional force errors , but many others are possible .finally , the pm accelerations are found from the force grid using the mass assignment function to interpolate the acceleration field .the pm algorithm has an operation cost that is approximately where and are constants ( the cost of the differencing is adequately approximated by the logarithmic term describing the fft ) .resolution above the nyquist frequency of the pm code , or equivalently sub pm grid resolution , is provided by the pair - wise ( shaped ) short - range force summation .supplementing the pm force with the short - range pp force gives the full p m algorithm , and the execution time scales approximately in proportion to + log l + , where is a constant and n corresponds to the number of particles in the short range force calculation within a specified region .the summation is performed over all the pp regions , which are identified using a chaining mesh of size _ ls_ ; see [ chaining ] for an illustration of the chaining mesh overlaid on the potential mesh .p m suffers the drawback that under heavy gravitational clustering the short range sum used to supplement the pm force slows the calculation down dramatically - the n term dominates as an increasingly large number of particles contribute to the short range sum .although acutely dependent upon the particle number and relative clustering in a simulation , the algorithm may slow down by a factor between 10 - 100 or possibly more . while finer meshes partially alleviate this problem they quickly become inefficient due to wasting computation on areas that do not need higher resolution .adaptive p m remedies the slow - down under clustering of 3 m by isolating regions where the n term dominates and solving for the short range force in these regions using fft methods on a sub - grid , which is then supplemented by short range calculations involving fewer neighbours .this process is a repeat of the 3 m algorithm on the selected regions , with an isolated fft and shaped force . 
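The mass-assignment step just described can be made concrete with a minimal sketch of TSC assignment for a single particle onto a periodic mesh. It assumes the particle position is already expressed in units of the mesh spacing with grid points at integer coordinates; the names (tsc_assign, rho, ngrid) are illustrative rather than hydra's own, and the production code naturally fuses this with the loop over all particles and the Q-minimised Green's function machinery.

```fortran
! Minimal sketch of TSC ('triangular shaped cloud') mass assignment for
! one particle onto a periodic mesh.  The grid spacing is taken as unity
! and all names are illustrative only.
subroutine tsc_assign(x, y, z, pmass, rho, ngrid)
  implicit none
  integer, intent(in)    :: ngrid
  real,    intent(in)    :: x, y, z, pmass
  real,    intent(inout) :: rho(ngrid, ngrid, ngrid)
  integer :: ic(3), i, j, k, ii, jj, kk
  real    :: pos(3), d, w(3,-1:1)

  pos = (/ x, y, z /)
  do i = 1, 3
     ic(i) = nint(pos(i))                 ! nearest grid point
     d     = pos(i) - real(ic(i))         ! offset from it
     w(i,-1) = 0.5*(0.5 - d)**2           ! standard TSC weights (sum to 1)
     w(i, 0) = 0.75 - d*d
     w(i,+1) = 0.5*(0.5 + d)**2
  end do

  do k = -1, 1
     kk = 1 + modulo(ic(3) + k - 1, ngrid)   ! periodic wrap
     do j = -1, 1
        jj = 1 + modulo(ic(2) + j - 1, ngrid)
        do i = -1, 1
           ii = 1 + modulo(ic(1) + i - 1, ngrid)
           rho(ii, jj, kk) = rho(ii, jj, kk) + pmass*w(1,i)*w(2,j)*w(3,k)
        end do
     end do
  end do
end subroutine tsc_assign
```

Because the same weights, evaluated at the same particle position, are reused when interpolating the differenced force grid back onto the particle, the scheme produces no self-force and conserves momentum pairwise.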
at the expense of a little additional bookkeeping, this method circumvents the sometimes dramatic slow - down of 3 m .the operation count is now approximately , ,\ ] ] where is the number of refinements .the and are all expected to be very similar to the , and of the main solver , while the are approximately four times larger than due to the isolated fourier transform .ideally during the course of the simulation the time per iteration approaches a constant , roughly 2 - 4 times that of a uniform distribution ( although when the sph algorithm is included this slow - down can be larger ) .when implemented in an adaptive form , with smoothing performed over a fixed number of neighbour particles , sph is an order scheme and fits well within the p m method since the short - range force - supplement for the mesh force can be used to find the particles which are required for the sph calculation .there are a number of excellent reviews of the sph methodology and we present , here , only those details necessary to understand our specific algorithm implementation .full details of our implementation can be found in .we use an explicit ` gather ' smoothing kernel and the symmetrization of the equation of motion is achieved by making the replacement , in the ` standard ' sph equation of motion ( see , for example ) .note that the sole purpose of ` kernel averaging ' in this implementation , denoted by the bar on the smoothing kernel , is to ensure that the above replacement is correct to .hence the equation of motion is , the artificial viscosity , , is used to prevent interpenetration of particle flows and is given by , where , and with bars being used to indicate averages over the indices .shear - correction , is achieved by including the term which reduces theunwanted artificial viscosity in shearing flows .note that the lack of symmetry in is not a concern since the equation of motion enforces force symmetry .the energy equation is given by , the solution of these equations is comparatively straightforward . as in the 3 m solver it is necessary to establish the neighbour particle lists .the density of each particle must be evaluated and then , in a second loop , the solution to the force and energy equations can be found . since the equation of motion does not explicitly depend on the density of particle ( the artificial viscosity has also been constructed to avoid this ) we emphasize that there is no need to calculate all the density values first and then calculate the force and energy equations . if one does calculate all densities first , then clearly the list of neighbours is calculated twice , or alternatively , a large amount of memory must be used to store the neighbour lists of all particles . 
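(For reference, the SPH expressions referred to in this passage did not survive extraction; the standard symmetrized momentum equation and Monaghan-type artificial viscosity that the text appears to describe are recorded below. The exact symmetrization and kernel-averaging convention used in hydra may differ in detail.)

```latex
% Standard symmetrized SPH momentum equation and Monaghan-type viscosity,
% quoted for reference only.
\frac{d\mathbf{v}_i}{dt} = -\sum_j m_j
  \left( \frac{P_i}{\rho_i^{2}} + \frac{P_j}{\rho_j^{2}} + \Pi_{ij} \right)
  \nabla_i \bar{W}_{ij},
\qquad
\Pi_{ij} =
\begin{cases}
  \dfrac{-\alpha\,\bar{c}_{ij}\,\mu_{ij} + \beta\,\mu_{ij}^{2}}{\bar{\rho}_{ij}},
    & \mathbf{v}_{ij}\cdot\mathbf{r}_{ij} < 0,\\[1ex]
  0, & \text{otherwise},
\end{cases}
\qquad
\mu_{ij} = \frac{\bar{h}_{ij}\,\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}}
                 {\mathbf{r}_{ij}^{2} + \eta^{2}} ,
```

with the thermal energy evolved from the matching $du_i/dt = \tfrac{1}{2}\sum_j m_j\,(P_i/\rho_i^{2} + P_j/\rho_j^{2} + \Pi_{ij})\,\mathbf{v}_{ij}\cdot\nabla_i \bar{W}_{ij}$; the shear-correction mentioned above multiplies $\Pi_{ij}$ by a factor built from $|\nabla\cdot\mathbf{v}|$ and $|\nabla\times\mathbf{v}|$ that suppresses the viscosity in shearing flows.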
using our methodthe density can be calculated , one list of neighbours stored , and then the force and energy calculations can be quickly solved using the stored list of neighbours ( see ) .as emphasized , the list data - structure used in the short - range force calculation provides a common feature between the 3 m and sph solvers .hence , once a list of particle neighbours has been found , it is simple to sort through this and establish which particles are to be considered for the gravitational calculation and the sph calculation .thus the incorporation of sph into ap m necessitates only the coordination of scalings and minor bookkeeping .the combined adaptive p - sph code , ` hydra ' , in serial fortran 77 form is available on the world wide web from http://coho.physics.mcmaster.ca/hydra .the solution cycle of one time - step may be summarized as follows , 1 .assign mass to the fourier mesh .convolve with the green s function using the fft method to get potential .difference this to recover mesh forces in each dimension .3 . apply mesh force and accelerate particles .4 . decide where it is more computationally efficient to solve via the further use of fourier methods as opposed to short - range forces and , if so , place a new sub - mesh ( refinement ) there . 5 .accumulate the gas forces ( and state changes ) as well as the short range gravity for all positions not in sub - meshes .repeat 1 - 5 on all sub - meshes until forces on all particles in simulation have been accumulated .update time - step and repeat note that the procedure of placing meshes is hierarchical in that a further sub - mesh may be placed inside a sub - mesh .this procedure can continue to an arbitrary depth but , typically , even for the most clustered simulations , speed - up only occurs to a depth of six levels of refinement . a pseudo call - tree for the serial algorithm can be seen in [ ctree ] .the purpose of each subroutine is as follows , *startup reads in data and parameter files * inunit calculates units of simulation from parameters in start - up files * updaterv time - stepping control * output check - pointing and scheduled data output routines * accel selection of time - step criteria and corrections , if necessary , for comoving versus physical coordinates * force main control routine of the force evaluation subroutines * rfinit & load set up parameters for pm and pp calculation , in load data is also loaded into particle buffers for the refinement .* clist & uload preparation of particle data for any refinements that may have been placed , uload also unloads particle data from refinement buffers * refforce call pm routines , controls particle bookkeeping , call pp routines . *green & igreen calculation of green s functions for periodic ( green ) and isolated ( igreen ) convolutions .* mesh & imesh mass assignment , convolution call , and calculation of pm acceleration in the periodic ( mesh ) and isolated ( imesh ) solvers. * cnvlt & icnvlt green s function convolution routines . *four3 m 3 dimensional fft routine for periodic boundary conditions . * list evaluation of chaining cell particle lists * refine check whether refinements need to be placed . 
* shforce calculate force look - up tables for pp * shgravsph evaluate pp and sph forcesthe architecture of risc cpus incorporates a memory hierarchy with widely differing levels of performance .consequently , the efficiency of a code running on a risc processor is dictated almost entirely by the ratio of the time spent in memory accesses to the time spent performing computation .this fact can lead to enormous differences in code performance .the relative access times for the hierarchy are almost logarithmic .access to the first level of cache memory takes 1 - 2 processor cycles , while access to the second level of cache memory takes approximately 5 times as long .access to main memory takes approximately 10 times longer .it is interesting to note that smp - numa servers provide further levels to this hierarchy , as will be discussed later . to improve memory performance , when retrieving a word from main memory three other wordsare typically retrieved : the ` cache line ' .if the additional words are used within the computation on a short time scale , the algorithm exhibits good cache reuse .it is also important to not access memory in disordered fashion , optimally one should need memory references that are stored within caches .thus to exhibit good performance on a risc processor , a code must exhibit both good cache reuse and a low number of cache misses . in practice, keeping cache misses to a minimum is the first objective since cache reuse is comparatively easy to achieve given a sensible ordering of the calculation ( such as a fortran do loop ) .a number of optimizations for particle codes that run on risc processors are discussed in decyk .almost all of these optimizations are included within our serial code , with the exception of the mass assignment optimizations .indeed a large number of their optimizations , especially those relating to combining x , y , z coordinate arrays into one 3-d array , can be viewed as good programming style .while decyk demonstrate that the complexity of the periodic mass assignment function prevents compilers from software pipelining the mesh writes , we do not include their suggested optimization of removing the modulo statements and using a larger grid .however , the optimization is naturally incorporated in our isolated solver .the first optimization we attempted was the removal of a ` vectorizeable ' numerical recipes fft used within the code ( fourn , see ) .although the code uses an optimized 3-d fft that can call the fourn routine repeatedly using either 1-d or 2-d fft strategy ( to reduce the number of cache misses exhibited by the fourn routine when run in 3-d ) , the overall performance remains quite poor .therefore we replaced this routine with the fftpack ( see ) routines available from netlib , and explicitly made the 3-d fft a combination of 1-d ffts .although there is no question that fftw provides the fastest ffts on almost all architectures we have found little difference between fftpack and fftw within our parallel 3-d fft routine .the greatest performance improvement is seen in the isolated solver where the 3-d fft is compacted to account for the fact that multiple octants are initially zero .linked lists ( hereafter the list array is denoted ll ) are a common data structure used extensively in particle - in - cell type codes ( see , for an extensive review of their use ) . 
for a list of particles which is cataloged according to cells in which they reside , it is necessary to store an additional array which holds the label of the first particle in the list for a particular cell .this array is denoted ihc for integer head of chain .list traversal for a given cell is frequently programmed in fortran using an if ... then ...goto structure ( although it can be programmed with a do while loop ) , with the loop exiting on the if statement finding a value of zero in the linked list . since the loop ` index ' ( the particle index i ) is found recursively the compiler can not make decisions about a number of optimization processes , particularly software pipelining , for which loops are usually better . additionally ,if the particles indices are not ordered in the list traversal direction then there will usually be a cache miss in finding the element ll(i ) within the linked list array . within the particle data arrays ,the result of the particle indices not being contiguous is another series of cache misses . sincea number of arrays must be accessed to recover the particle data , the problem is further compounded , and removal of the cache miss associated with the particle indices should improve performance significantly .the first step that may be taken to improve the situation is to remove the cache misses associated with the searching through the linked list . to do this the list must be formed so that it is ordered . in other words the first particle in cell j ,is given by ihc(j ) , the second particle is given by ll(ihc(j ) ) , the third by ll(ihc(j)+1 ) _ et cetera_. this ordered list also allows the short range force calculation to be programmed more elegantly since the if .. then .. goto structure of the linked list can be replaced by a loop .however , since there remains no guarantee that the particle indices will be ordered , the compiler is still heavily constrained in terms of the optimizations it may attempt , but the situation is distinctly better than for the standard linked list .tests performed on this ordered list algorithm show that a 30% improvement in speed is gained over the linked list code ( see [ timings ] ) .cache misses in the data arrays are of course still present in this algorithm .as has been discussed , minimizing cache misses in the particle data arrays requires accessing them with a contiguous index .this means that within a given chaining cell the particle indices must be contiguous .this can be achieved by reordering the indices of particles within chaining cells at each step of the iteration ( although if particles need to be tracked a permutation array must be carried ) .this _ particle reordering _ idea was realized comparatively early and has been discussed in the literature .a similar concept has been applied by springel who uses peano - hilbert ordering of particle indices to ensure data locality . however , in 3 m codes , prior to the implementation presented here only macfarland and anderson and shumaker , actually revised the code to remove linked lists , other codes simply reordered the particles every few steps to reduce the probability of cache misses and achieved a performance improvement of up to 45% . 
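The contrast between the list variants discussed above can be sketched for the particles of a single chaining cell j. Only one variant would exist in a real code, the precise meaning of ihc shifts between them much as it does in the text (a particle index in variant (a), the first list slot or first particle of the cell in (b) and (c)), and do_work stands in for whatever per-particle work is required; none of the names is claimed to match hydra's internals exactly.

```fortran
! Three bookkeeping variants for walking the particles of chaining cell j.
subroutine walk_cell(j, ihc, nhc, ll, np)
  implicit none
  integer, intent(in) :: j, np
  integer, intent(in) :: ihc(*), nhc(*), ll(np)
  integer :: i, n

  ! (a) linked list: the next index is found recursively, so the loop
  !     cannot be pipelined and each ll(i) lookup is a likely cache miss.
  i = ihc(j)
  do while (i /= 0)
     call do_work(i)
     i = ll(i)
  end do

  ! (b) ordered list: the indices of cell j's particles occupy
  !     consecutive slots of the list array, so a plain do loop is used.
  do n = 0, nhc(j) - 1
     call do_work(ll(ihc(j) + n))
  end do

  ! (c) reordered particles: after the per-step reordering the particle
  !     data themselves are contiguous, so no index list is needed at all.
  do i = ihc(j), ihc(j) + nhc(j) - 1
     call do_work(i)
  end do
end subroutine walk_cell
```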
since the adaptive refinements in use the same particle indexing method , the particle ordering must be done within the data loaded into a refinement , hierarchical rearrangement of indices results from the use of refinements .the step - to - step permutation is straightforward to calculate : first the particle indices are sorted according to their z - coordinate and then particle array indices are simply changed accordingly .it is important to note that this method of particle bookkeeping removes the need for an index list of the particles ( although in practice this storage is taken by the permutation array ) .all that need be stored is the particle index corresponding to the first particle in the cell and the number of particles in the cell . on a risc system particle reorderingis so efficient that the speed of the simulation algorithm _ more than doubled_. for example , at the end of the santa barbara galaxy cluster simulation , the execution time was reduced from 380 seconds to 160 seconds on a 266 mhz pentium iii processor . on a more modern 2 ghz amd opteron , which has four times the l2 cache of a pentium iii , considerably better prefetch , as well as an on - die memory controller to reduce latency, we found the performance improvement for the final iterations to be a reduction in time from 29 seconds to 17 .this corresponds to a speed improvement of a factor of 1.7 , which , while slightly less impressive than the factor of 2.4 seen on the older pentium iii , is still a significant improvement .a comparison plot of the performance of a linked list , ordered list and ordered particle code is shown in [ timings ] .particle - grid codes , of the kind used in cosmology , are difficult to parallelize efficiently .the fundamental limitation to the code is the degree to which the problem may be subdivided while still averting race conditions and unnecessary buffering or synchronization .for example , the fundamental limit on the size of a computational atom in the pp code is effectively a chaining cell , while for the fft routine it is a plane in the data cube . in practice ,load balance constraints come into play earlier than theoretical limits as the work within the minimal atoms will rarely be equal ( and can be orders of magnitude different ) .clearly these considerations set an upper bound on the degree to which the problem can be subdivided , which in turn limits the number of processors that may be used effectively for a given problem size .the code is a good example of gustafson s conjecture : a greater degree of parallelism may not allow arbitrarily increased execution speed for problems of fixed size , but should permit larger problems to be addressed in a similar time . at an abstract level, the code divides into essentially two pieces : the top level mesh and the refinements .parallelization of the top level mesh involves parallelizing the work in each associated subroutine .since an individual refinement may have very little work a parallel scheme that seeks to divide work at all points during execution will be highly inefficient .therefore the following division of parallelism was made : conduct all refinements of size greater than particles across the whole machine , for refinements with less than particles use a list of all refinements and distribute one refinement to each processor ( or thread ) in a task farm arrangement . on the t3dthe limiting was found to be approximately 32,768 particles , while on more modern machines we have found that 262,144 is a better limit . 
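A minimal sketch of this split, assuming the list of refinements and their particle counts is already known; only the threshold values are taken from the text, and the solver entry point for large refinements is an illustrative placeholder (the task farm itself is sketched further below).

```fortran
! Sketch of the work division: refinements above the particle-count
! threshold are solved with every processor cooperating, smaller ones
! are queued for the one-refinement-per-thread task farm.
subroutine dispatch_refinements(nref, npart_ref, nfarm, farm_list)
  implicit none
  integer, intent(in)  :: nref, npart_ref(nref)
  integer, intent(out) :: nfarm, farm_list(nref)
  integer, parameter   :: nlimit = 262144   ! ~32768 on the older t3d
  integer :: ir

  nfarm = 0
  do ir = 1, nref
     if (npart_ref(ir) > nlimit) then
        call refforce_whole_machine(ir)     ! all threads work on this one
     else
        nfarm = nfarm + 1                   ! defer to the task farm
        farm_list(nfarm) = ir
     end if
  end do
end subroutine dispatch_refinements
```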
in the following discussionthe term processor element ( pe ) is used to denote a parallel execution thread . since only one thread of executionis allotted per processor ( we do not attempt load balancing via parallel slackness ) , this number is equivalent to the number of cpus , and the two terms are used interchangeably .the call tree of the parallel algorithm is given in figure [ ptree ] .the openmp api supports a number of parallel constructs , such as executing multiple serial regions of code in parallel ( a single program multiple data model ) , as well as the more typical loop - based parallelism model ( sometimes denoted par do s ) , where the entire set of loop iterations is distributed across all the pes .the pragma for executing a loop in parallel , c$omp parallel do is placed before the do loop within the code body .specification statements are necessary to inform the compiler about which variables are loop ` private ' ( each processor carries its own value ) and ` shared ' variables . a full specification of the details for each loop takes only a few lines of code , preventing the ` code bloat ' often associated with distributed memory parallel codes .we use loop level parallelism throughout our code . to optimize load balance in a given routineit is necessary to select the most optimal iteration scheduling algorithm .the openmp directives allow for the following types of iteration scheduling : * static scheduling - the iterations are divided into chunks ( the size of which may be specified if desired ) and the chunks are distributed across the processor space in a contiguous fashion . a cyclic distribution , or a cyclic distribution of small chunks is also available . * dynamic scheduling -the iterations are again divided up into chunks , however as each processor finishes its allotted chunk , it dynamically obtains the next set of iterations , via a master - worker mechanism .* guided scheduling - is similar to static scheduling except that the chunk size decreases exponentially as each set of iterations is finished .the minimum number of iterations to be allotted to each chunk may be specified .* runtime scheduling - this option allows the decision on which scheduling to use to be delayed until the program is run . the desired scheduling is then chosen by setting an environment variable in the operating system .the code uses both static and dynamic scheduling .while the step - to - step permutation is in principle simple to calculate , the creation of the list permutation array must be done carefully to avoid race conditions .an effective strategy is to calculate the chaining cell residence for each particle and then sort into bins of like chaining cells .once particles have been binned in this fashion the rearrangement according to z - coordinates is a local permutation among particles in the chaining cell .our parallel algorithm works as follows : 1 .first calculate the chaining cell that each particle resides in , and store this in an array 2 . perform an increasing - order global sort over the array of box indices 3 . 
using a loop over particle indices , find the first particle in each section of contiguous like - indices ( the ihc array ) 4 .use this array to establish the number of particles in each contiguous section ( the nhc array ) 5 .write the z - coordinates of each particle within the chaining cell into another auxiliary array 6 .sort all the non - overlapping sublists of z - coordinates for all cells in parallel while at the same time permuting an index array to store the precise rearrangement of particle indices required 7 .pass the newly calculated permutation array to a routine that will rearrange all the particle data into the new order the global sort is performed using parallel sorting by regular sampling , with a code developed in part by j. crawford and c. mobarry .this code has been demonstrated to scale extremely well on shared - memory architectures provided the number of elements per cpu exceeds 50,000 .this is significantly less than our ideal particle load per processor ( see section 6 ) . for the sorts within cells ,the slow step - to - step evolution of particle positions ensures data rearrangement is sufficiently local for this to be an efficient routine .hence we expect good scaling for the sort routines at the level of granularity we typically use .a race condition may occur in mass assignment because it is possible for pes to have particles which write to the same elements of the mass array .the approaches to solving this problem are numerous but consist mainly of two ideas ; ( a ) selectively assign particles to pes so that mass assignment occurs at grid cells that do not overlap , thus race condition is avoided or ( b ) use ghost cells and contiguous slabs of particles which are constrained in their extent in the simulation space .the final mass array must be accumulated by adding up all cells , including ghosts .ghost cells offer the advantage that they allow the calculation to be load - balanced ( the size of a slab may be adjusted ) but require more memory . controlling which particles are assigned does not require more memory but may cause a load imbalance .because the types of simulation performed have particle distributions that can vary greatly , both of these algorithms have been implemented .the particles in the simulation are ordered in the z - direction within the chaining cells . because the chaining cells are themselves ordered along the z - axis ( modulo their cubic arrangement ) a naive solution would be to simply divide up the list of particles . however, this approach does not prevent a race condition occurring , it merely makes it less likely . in the craft code the race condition was avoided by using the ` _ atomic update _ ' facility which is a lock__fetch__update__store__unlock hardware primitive that allows fast updating of arrays where race conditions are present .modern cache coherency protocols are unable to provide this kind of functionality .using the linked / ordered list to control the particle assignment provides an elegant solution to the race condition problem . since the linked list encodes the position of a particle to within a chaining cell , it is possible to selectively assign particles to the mass array that do not have overlapping writes. 
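The loop-level parallelism and scheduling options described above look roughly as follows for a generic per-particle loop; this sketch is written in free form with !$omp sentinels (the production code uses the fixed-form c$omp spelling) and the routine and variable names are illustrative.

```fortran
! Sketch of loop-level OpenMP parallelism with explicit data-sharing
! clauses and a chosen iteration schedule.
subroutine kick_particles(np, vel, accel, dt)
  implicit none
  integer, intent(in)    :: np
  real,    intent(in)    :: accel(3, np), dt
  real,    intent(inout) :: vel(3, np)
  integer :: i

!$omp parallel do default(none) shared(np, vel, accel, dt) private(i) &
!$omp& schedule(static)
  do i = 1, np
     vel(:, i) = vel(:, i) + dt*accel(:, i)   ! uniform work -> static schedule
  end do
!$omp end parallel do
end subroutine kick_particles
```

For loops whose per-iteration cost varies strongly, such as the columns of chaining cells or the refinements of the task farm, schedule(dynamic, chunk) or schedule(guided) would replace the static schedule, and schedule(runtime) defers the choice to the OMP_SCHEDULE environment variable.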
to assure a good load balance it is better to use columns ( , where is the size of the chaining mesh and is a number of chaining cells ) of cells rather than slabs ( ) .since there are more columns than slabs a finer grained distribution of the computation can be achieved and thus a better load balance .this idea can also be extended to a 3-d decomposition , however in simple experiments we have found this approach to be inefficient for all but the most clustered particle distributions ( in particular cache reuse is lowered by using a 3-d decomposition ) . chaining mesh cells have a minimum width of 2.2 potential mesh cells in and [ chaining ] displays a plot of the chaining mesh overlaid on the potential mesh .when performing mass assignment for a particle , writes will occur over all 27 grid cells found by the tsc assignment scheme . thus providing a buffer zone of one cellis not sufficient to avoid the race condition since particles in chaining cells one and three may still write to the same potential mesh cell .a spacing of two chaining mesh cells is sufficient to ensure no possibility of concurrent writes to the same mesh cell .the `` buffer zones '' thus divide up the simulation volume into a number of regions that can calculated concurrently and those that can not .moreover , there will be need to be a series of barrier synchronizations as regions that can be written concurrently are finished before beginning the next set of regions .the size of the buffer zone means that there are two distinct ways of performing the mass assignment using columns : * columns in groups .assign mass for particles in each of the columns simultaneously and then perform a barrier synchronization at the end of each column .since the columns are in groups there are nine barriers .* columns which are grouped into groups . in this casethe number of barriers is reduced to four , and if desired , the size of the column can be increased beyond two while still maintaining four barriers .however , load - imbalance under clustering argues against this idea .see [ 2by2 ] for a graphical representation of the algorithm . to improve load balance ,a list of the relative work in each column ( that can be evaluated before the barrier synchronization ) is calculated by summing over the number of particles in the column .once the workload of each column has been evaluated , the list of relative workloads is then sorted in descending order .the calculation then proceeds by dynamically assigning the list of columns to the pes as they become free .the only load imbalance then possible is a wait for the last pe to finish which should be a column with a low workload .static , and even cyclic , distributions offer the possibility of more severe load imbalance . for portability reasons , we have parallelized the fft by hand rather than relying on a threaded library such as provided by fftw .the 3-d fft is parallelized over ` lines ' by calling a series of 1-d ffts .we perform the transpose operation by explicitly copying contiguous pieces of the main data array into buffers which have a long stride .this improves data locality of the code considerably as the stride has been introduced into the buffer which is a local array .the ffts are then performed on the buffer , and values are finally copied back into the data arrays . the convolution which follows the fft relies upon another set of nested loops in the axis directions . 
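One possible rendering of the four-barrier column scheme described above, under the reading that the columns are two chaining cells wide in x and y and are processed in four parity groups, so that concurrently processed columns are always separated by two chaining cells and cannot write to the same mesh cell. The implicit barrier at the end of each parallel loop separates the groups; assign_column is a placeholder for the per-column TSC assignment, and the exact grouping used in hydra may differ in detail.

```fortran
! Sketch of the four-group column decomposition for mass assignment.
! Assumes lsize (the chaining-mesh extent) is divisible by four.
subroutine assign_mass_columns(lsize)
  implicit none
  integer, intent(in) :: lsize
  integer :: phase, px, py, nsc, isc, ix, iy

  nsc = lsize/2                         ! number of 2-cell-wide columns per axis
  do phase = 0, 3
     px = mod(phase, 2)                 ! parity of the column index in x
     py = phase/2                       ! parity of the column index in y
!$omp parallel do default(none) shared(nsc, px, py) &
!$omp& private(isc, ix, iy) schedule(dynamic, 1)
     do isc = 0, (nsc/2)*(nsc/2) - 1
        ix = 2*mod(isc, nsc/2) + px     ! 0-based column coordinates
        iy = 2*(isc/(nsc/2))   + py
        call assign_column(2*ix + 1, 2*iy + 1)  ! first chaining cell of column
     end do
!$omp end parallel do                   ! implicit barrier separates the groups
  end do
end subroutine assign_mass_columns
```

In practice the columns within each group would also be ranked by particle count and handed out largest first, as described above, and the same decomposition is reused for the short-range gravity and SPH pass.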
to enable maximum granularity we have combined the z- and y - directions into one larger loop which is then statically decomposed among the processors .parallel efficiency is high for this method since if the number of processors divides the size of the fft grid we have performed a simple slab decomposition of the serial calculation .the short range forces are accumulated by using 3 nested loops to sort through the chaining mesh . as in mass assignment ,a race condition is present due to the possibility of concurrent writes to the data arrays . again, in the craft code , this race condition was avoided by using the atomic update primitive . because a particle in a given chaining mesh cell may write to its 26 nearest - neighbour cells it is necessary to provide a two cell buffer zone .we can therefore borrow the exact same column decomposition that was used in mass assignment .tests showed that of the two possible column sorting algorithms discussed in section [ cpa ] , columns are more efficient than the columns .the difference in execution time in unclustered states was negligible , but for highly clustered distributions ( as measured in the santa barbara cluster simulation ) , the method was approximately 20% faster .this performance improvement is attributable to the difference in the number of barrier synchronizations required by each algorithm ( four versus nine ) and also the better cache reuse of the columns .as discussed earlier , the smaller sub - meshes ( ) are distributed as a task farm amongst the pes .as soon as one processor becomes free it is immediately given work from a pool via the dynamic scheduling option in openmp .load imbalance may still occur in the task farm if one refinement takes significantly longer than the rest and there are not enough refinements to balance the workload over the remaining pes .note also the task farm is divided into levels , the refinements placed within the top level , termed ` level one refinements ' must be completed before calculating the ` level two refinements ' , that have been generated by the level one refinements . however , we minimize the impact of the barrier wait by sorting refinements by the number of particles contained within them and then begin calculating the largest refinements first .this issue emphasizes one of the drawbacks of a shared memory codeit is limited by the parallelism available and one has to choose between distributing the workload over the whole machine or single cpus .it is not possible in the openmp programming environment to partition the machine into processor groups .this is the major drawback that has been addressed by the development of an mpi version of the code .because of the comparatively low ratio of work to memory read / write operations the code is potentially sensitive to memory latency issues . 
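The task farm for the small refinements can be sketched as follows: the refinements of the current level are sorted into descending order of particle count and handed out one per thread with a dynamic schedule, so the most expensive ones start first and the tail of small refinements balances the load. refforce is the refinement solver named in the call tree; the sort routine and array names are illustrative placeholders, and in the real code each level of refinements is farmed out before any deeper level it spawns.

```fortran
! Sketch of the one-refinement-per-thread task farm.
subroutine refinement_farm(nref, refsize, reflist)
  implicit none
  integer, intent(in)    :: nref
  integer, intent(inout) :: refsize(nref), reflist(nref)
  integer :: ir

  call sort_descending(refsize, reflist, nref)   ! biggest refinements first

!$omp parallel do default(none) shared(nref, reflist) private(ir) &
!$omp& schedule(dynamic, 1)
  do ir = 1, nref
     call refforce(reflist(ir))      ! full sub-grid + pp/sph solution
  end do
!$omp end parallel do
end subroutine refinement_farm
```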
to test this sensitivity in a broad sense ,we have examined the performance of the code for a range of problem sizes , from particles to , the smallest of which is close to fitting in l2 cache .a strong latency dependence will translate into much higher performance for problem sizes resident in cache as opposed to those requiring large amounts of main memory .we also consider the performance for both clustered and unclustered particle distributions since the performance envelope is considerably different for these two cases .the best metric for performance is particle updates per second , since for the unclustered distribution p m has an operation dependence dominated by factors , while in the clustered state the algorithm dominated by the cost of the sph solution which also scales as .the results are plotted in figure [ latency ] , as a function of memory consumption .we find that the simulations show equal performance for both the linked list and ordered particle code under both clustering states .however , for larger problem sizes the unclustered state shows a considerable drop - off in performance for the linked list code , while the ordered particle code begins to level off at the problem size .the clustered distributions show little sensitivity to problem size , which is clearly indicative of good cache reuse and a lack of latency sensitivity .we conclude that the algorithm is comparatively insensitive to latency because the solution time is dominated largely by the pp part of the code which exhibits good cache reuse .the increased performance improvement seen for the ordered particle code is caused by the increased data locality .on numa architectures this has a direct benefit as although the penalty for distant memory fetches is large ( several hundreds of nanoseconds ) the cache reuse ensures this penalty is only felt rarely .we have found that the locality is sufficiently high to render direct data placement largely irrelevant on the sgi origin .the only explicit data placement we perform is a block distribution of the particle data over pes .the constant reordering of particles ensures that this is an effective distribution . for the remainder of the arrays we use the `` first touch '' placement paradigm , namely that the first pe to request a specific memory pageis assigned it . despite the simplicity, this scheme works very effectively .since the granularity of the chaining cells is smaller than the smallest memory page size , prefetching is better strategy than memory page rearrangement .this works particularly effectively in the pp part of the algorithm where a comparatively large amount of work is done per particle . in this section of codewe specify that two cache lines should always be retrieved for each cache miss , and we also allow the compiler to make further ( aggressive ) prefetching predictions .the net effect of this is to almost completely hide the latency on the origin .this can be seen in the performance scaling , where excellent results are achieved up to 64 nodes ( see section [ perf ] ). 
however , there is one particularly noticeable drawback to numa architectures .a number of the arrays used within the pm solver are equivalenced to a scratch work space within a common block .first touch placement means that the pages of the scratch array are distributed according to the layout of the first array equivalenced to the common block .if the layout of this array is not commensurate with the layout of subsequent arrays that are equivalenced to the scratch area then severe performance penalties result .our solution has simply been to remove the scratch work space and suffer the penalty of increased memory requirements .our initial tests of correctness of large simulations ( ) , comparing serial to parallel runs , showed variation in global values , such as the total mass within the box at the 0.01 percent level .however , this turned out to be a precision issue , as increasing the summation variables to double precision removed any variation in values . with these changes made ,we have confirmed that the parallel code gives identical results to the serial code to machine - level rounding errors .an extensive suite of tests of the code are detailed in and .our standard test case for benchmarking is the ` santa barbara cluster ' used in the paper by frenk . this simulation models the formation of a galaxy cluster of mass in a einstein - de sitter scdm cosmology with parameters , =0.1 , =0.6 , , and box size 64 mpc .our base simulation cube has particles , which yields 15300 particles in the galaxy cluster , and we use an s2 softening length of 37 kpc .particle masses are for dark matter and for gas . to prepare a larger data set we simply tile the cube as many times as necessary .an output from z=7.9 is used as an ` unclustered ' data set , and one from z=0.001 as a ` clustered ' data set .we were given access to two large smp machines to test our code on , a 64 processor sgi origin 3000 ( o3k , hereafter ) at the university of alberta and a 64 processor hewlett packard gs1280 alphaserver .both of these machines have numa architectures , the o3k topology being a hypercube , while the gs1280 uses a two dimensional torus .the processors in the o3k are 400 mhz mips r12000 ( baseline specfp2000 319 ) while the gs1280 processors are 21364 ev7 alpha cpus running at 1150 mhz ( baseline specfp2000 1124 ) .there is an expected raw performance difference of over a factor of three between the two cpus , although in practice we find the raw performance difference to be slightly over two .we conducted various runs with differing particle and data sizes to test scaling in both the strong ( fixed problem size ) and weak ( scaled problem size ) regimes .the parallel speed - up and raw execution times are summarized in tables [ tab1 ] & [ tab2 ] and speed - up is shown graphically in figure [ scaling ] .overheads associated with i / o and start - up are not included .further , we also do not include the overhead associated with placing refinements on the top level of the simulation , as this is only performed every 20 steps . with the exception of the clustered run , parallel scaling is good ( better than 73% ) to 32 processors on both machines for all runs .the clustered simulation does not scale effectively because the domain decomposition is not sufficiently fine to deal with the load imbalance produced by this particle configuration .only the largest simulation has sufficient work to scale effectively beyond 32 processors . 
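A minimal sketch of the precision fix mentioned above: global diagnostics such as the total mass are accumulated in double precision (here with an OpenMP reduction), even though the particle data themselves remain single precision, which removes the dependence of the summed value on the order in which threads accumulate their partial sums. The names are illustrative.

```fortran
! Double-precision accumulation of a global diagnostic.
function total_mass(np, pmass) result(msum)
  implicit none
  integer, intent(in) :: np
  real,    intent(in) :: pmass(np)
  double precision    :: msum
  integer :: i

  msum = 0.0d0
!$omp parallel do default(none) shared(np, pmass) private(i) &
!$omp& reduction(+:msum)
  do i = 1, np
     msum = msum + dble(pmass(i))
  end do
!$omp end parallel do
end function total_mass
```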
to estimate the scaling of the we estimated the speed - up on 8 nodes of the gs1280 as 7.9 ( based upon the slightly lower efficiencies observed on the compared to the o3k ) , while on the o3k we estimated the speed up as 8.0 .we then estimated the scaling from that point .speed - up relative to the 8 processor value is also given in table 1 , and thus values may be scaled as desired ..parallel scaling efficiencies and wall clock timings for a full gravity - hydrodynamic calculation on the sgi origin 3000 .results in parenthesis indicate that the values are estimated .the 64 processor results for the two smallest runs have been omitted because they resulted in a slowdown relative to the 32 processor run . [cols="^,<,^,^,^,^,^",options="header " , ] [ tab2 ] to quantify our results further we summarize the performance of the code using a popular performance metric for cosmological codes , namely the number of particle updates per second . as a function of the number of nodes within the calculationthis also gives a clear picture of the scaling achieved . because the simulation results we obtained were run using the combined gravity - hydrodynamic solver it is necessary for us to interpolate the gravitational speed .to do this we calculated the ratio of the code speed with and without hydrodynamics , and also without the pp correction , on 1 cpu of our local gs160 alphaserver , and on 1 cpu of the o3k . to ensure this approximation is as reasonable as possible we calculated the ratios for both the z=7.9 and z=0.001 datasets .relative to the speed obtained for the combined solver , the gravity - only solver was found to be 1.63(1.29 ) times faster for the z=7.9 dataset and 1.84(1.49 ) times faster for the z=0.001 dataset , for the gs1280 ( and o3k ) .the pm speed was found to 2.4(2.5 ) times faster for the z=7.9 dataset and 9.21(10.3 ) times faster for the z=0.001 dataset . in figure [ pups ]we show the estimated number of gravitational updates per second achieved on in both the clustered and unclustered state of the simulation ( other simulation sizes show almost identical speeds ) on the gs1280 .the clustered state is approximately three times slower than the unclustered state for all simulation sizes . 
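The metric and the single-cpu rescaling described above can be summarised compactly (illustrative symbols, not the paper's own notation):

```latex
% Particle updates per second, and the rescaling used to estimate the
% gravity-only speed of a parallel run from the measured combined run.
S \equiv \frac{N_{\rm part}}{t_{\rm step}}\ \ [\mbox{particle updates s}^{-1}],
\qquad
S^{\rm grav}_{N_{\rm cpu}} \approx S^{\rm full}_{N_{\rm cpu}} \times
  \left. \frac{t^{\rm full}}{t^{\rm grav}} \right|_{1\ \rm cpu} .
```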
to provide comparison to other published workwe have also included results presented by dubinski for a simulation conducted on a grid using a distributed memory tree - pm code ( `` gotpm '' ) .although a direct comparison of speed is not as instructive as might be hoped , since both the machine specifications and particle distributions differ , it is intriguing that the raw pm speed of both codes are very similar , with our code showing a moderate speed advantage ( between 2.4 and 1.8 times faster depending on clustering ) .comparing the speed of the full solutions ( for the simulation ) in the clustered state shows hydra to be 2.3 times faster , although the initial configuration is 3.9 times faster , while reportedly tree - pm codes have a roughly constant cycle time with clustering .this highlights the fact that while tree - pm codes have a roughly constant cycle time with clustering , there is still significant room for improving the execution on unclustered data sets .it is also worth noting that , as yet , our implementation of 3 m lacks any multiple time - step capability , and implementing a mechanism that steps refinements within different time bins has potentially very significant performance gains .such an integrator would bear similarities to the mixed - variable symplectic integrators used in planetary integrations .although overall performance is the most useful measure of utility for the code , analysis of the time spent in certain code sections may elucidate performance bottlenecks .hence , for timing purposes , we break the code into three main sections ; the top level pm , the top level pp and the refinement farm .the speed of list making and particle book - keeping is incorporated within these sections . the execution time is initially dominated by the solution time for the top level grid , but the growth of clustering makes the solution time strongly dependent upon the efficiency of the refinement farm . while the top level solution ( necessarily ) involves a large number of global barriers , the refinement farm only uses a small number and performs a large number of independent operations .the only exception is a critical section where the global list of refinements is updated , however we ensure the critical section is only entered if a refinement has indeed derived new refinements . thus , potentially , the refinement farm can scale better than the top level solution .in figure [ farmvstop ] we plot the relative scaling of the top level solution compared to the refinement farm for a several different particle numbers .provided sufficient work is available for distribution , the refinement farm is seen to scale extremely well , with parallel efficiencies of 99% and 83% observed for the data set on 64 processors for the o3k and gs1280 respectively .conducting high resolution simulations of cosmological structure formation necessitates the use of parallel computing .although distributed architectures provide an abundance of cheap computing power , the programming model for distributed systems is fundamentally complex . shared memory simplifies parallel programming greatly since the shared address space means that only the calculation itself need be distributed across nodes . 
in this paperwe have discussed a code for parallel shared memory computers that exhibits only marginally higher complexity than a serial version of the code and which also exhibits excellent performance .additional constructs for parallel execution introduce only a small ( 10% ) penalty for running on 1 node compared to the serial code .although the code does have some problems with regards load balancing , in particular a deficit in performance occurs when a refinement is too large to be calculated as part of the task farm but is not large enough to be efficient across the whole machine , these situations are comparatively rare .the poor scaling of sph under heavy clustering is the most significant cause of load imbalance .in particular , if the heavy calculational load is confined to one refinement that is part of the task farm all threads will block until this refinement is completed .the most satisfactory solution to this problem is to substitute an alternative algorithm for the sph in high density regions .we will present details of an algorithm that improves the sph cycle time for high density regions elsewhere ( thacker in prep ) .most of the performance limitations can be traced to applying a grid code in a realm where it is not suitable .as has been emphasized before , treecodes are particularly versatile , and can be applied to almost any particle distribution . however , for periodic simulations they become inefficient since ewald s method must used to calculate periodic forces .fft - based grid methods calculate the periodic force implicitly , and exhibit particularly high performance for homogeneous particle distributions under light to medium clustering .highly clustered ( or highly inhomogeneous ) particle distributions are naturally tailored to the multi - timestepping capability of treecodes . although we see scope for introducing a multi - time stepping version of 3 m where sub - grids are advanced in different time step bins it is unclear in details what efficiencies could be gained .there are clearly parts of the algorithm , such as mass assignment , that are unavoidably subject to load imbalances .we expect that since the global grid update would be required infrequently the global integrator can still be made efficient .an efficient implementation of multiple time - steps is the last area where an order of magnitude improvement in simulation time can be expected for this class of algorithm . in terms of raw performance ,the code speed is high relative to the values given by dubinski et al . 
on the gs1280 the full solution time for the unclustered distributioneven exceeds that of the pm solution quoted for gotpm on 64 processors .3 m has been criticized previously for exhibiting a cycle time that fluctuates depending upon the underlying level of clustering .the data we have presented here shows the range in speeds is comparatively small ( a factor of 4 ) .we would also argue that since the cost of the short range correction is so small at early times , this criticism is misplaced .while recent implementations of tree - pm have an approximately constant cycle time irrespective of clustering , the large search radius used in the tree correction leads to the tree part of the algorithm dominating execution time for all stages of the simulation .conversely , only at the end of the simulation is this true for hydra .arguments have also been presented that suggest the pm cycle introduces spurious force errors that can only been corrected by using a long range pp correction ( out to 5 pm cells ) .it is certainly true that pm codes implemented with the so called ` poor man s poisson solver ' , and cloud - in - cell interpolation do suffer from large ( % ) directional errors in the force around 2 - 3 grid spacings .however , as has been shown , first by eastwood ( see for references ) and more recently by couchman , a combination of higher order assignment functions , q - minimized green s functions , and directionally optimized differencing functions can reduce errors in the inter - particle forces to sub 0.3% levels ( rms ) .surprisingly , although cic gives a smooth force law ( as compared to ngp ) , it does not reduce the angular isotropy of the mesh force .indeed , in two dimensions , moving from cic to tsc interpolation reduces directional errors from 50% to 6% and q - minimization of the green s function reduces the anisotropy to sub 0.5% levels .furthermore , the technique of interlacing can be used to improve the accuracy of the pm force still further , but the additional ffts required for this method rapidly lead to diminished returns .to date we have used this code to simulate problems ranging from galaxy formation to large - scale clustering . as emphasized in the introduction ,the simple programming model provided by openmp has enabled us to rapidly prototype new physics algorithms which in turn has lead to the code being applied across a diverse range of astrophysics . developing new physics models with this code takes a matter of hours , rather than the days typical of mpi coding .we plan to make a new version of the code , incorporating more streamlined data structures and minor algorithmic improvements , publically available in the near future .we thank an anonymous referee for comments which improved the paper .runs on the gs1280 machine were performed on our behalf by andrew feld of hewlett packard .we thank john kreatsoulas for arranging time for us on this machine .figures 1 , 2 , 4 and 5 were prepared by dr l. campbell .rjt is funded in part by a cita national fellowship .hmpc acknowledges the support of nserc and the canadian institute for advanced research .sharcnet and westgrid computing facilities were used during this research . c. s. frenk ,s. d. m. white , p. bode , j. r. bond , g. l. bryan , r. cen , h. m. p. couchman , a. e. evrard , n. gnedin , a. jenkins , a. m. khokhlov , a. klypin , j. f. navarro , m. l. norman , j. p. ostriker , j. m. owen , f. r. pearce , u .- l . pen , m. steinmetz , p. a. thomas , j. v. villumsen , j. w. wadsley , m. s. warren , g. 
xu , and g. yepes , _ astrophys . j. _ , * 525 * , ( 1999 ) , 554 .
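as a footnote to the force-accuracy discussion above: the gain in going from cloud-in-cell (cic) to triangular-shaped-cloud (tsc) assignment comes from the higher-order weights. a one-dimensional sketch of the two assignment functions on a hypothetical periodic grid (not the code's actual kernels) is:

import numpy as np

def assign_cic(x, ngrid):
    """cloud-in-cell: each particle shared linearly between its 2 nearest cells."""
    rho = np.zeros(ngrid)
    i = np.floor(x).astype(int)
    f = x - i
    np.add.at(rho, i % ngrid, 1.0 - f)
    np.add.at(rho, (i + 1) % ngrid, f)
    return rho

def assign_tsc(x, ngrid):
    """triangular-shaped-cloud: quadratic weights over the 3 nearest cells."""
    rho = np.zeros(ngrid)
    i = np.round(x).astype(int)
    d = x - i                                   # |d| <= 0.5 by construction
    np.add.at(rho, (i - 1) % ngrid, 0.5 * (0.5 - d) ** 2)
    np.add.at(rho, i % ngrid, 0.75 - d ** 2)
    np.add.at(rho, (i + 1) % ngrid, 0.5 * (0.5 + d) ** 2)
    return rho

x = np.random.uniform(0, 64, size=1000)   # hypothetical particle positions in grid units
print(assign_cic(x, 64).sum(), assign_tsc(x, 64).sum())   # both conserve total mass (= 1000)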
we discuss the design and implementation of hydra_omp , a parallel implementation of the smoothed particle hydrodynamics adaptive p3m ( sph - ap3m ) code hydra . the code is designed primarily for conducting cosmological hydrodynamic simulations and is written in fortran77+openmp . a number of optimizations for risc processors and smp - numa architectures have been implemented , the most important being hierarchical reordering of particles within chaining cells , which greatly improves data locality and thereby removes the cache misses typically associated with linked lists . parallel scaling is good , with a minimum parallel efficiency of 73% achieved on 32 nodes for a variety of modern smp architectures . we give performance data in terms of the number of particle updates per second , which is a more useful performance metric than raw mflops . a basic version of the code will be made available to the community in the near future . keywords : simulation , cosmology , hydrodynamics , gravitation , structure formation
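the chaining-cell reordering highlighted in this abstract amounts to permuting the particle arrays so that particles processed together are contiguous in memory. a simplified illustration on a uniform grid (our assumption; the code reorders hierarchically within its chaining mesh) is:

import numpy as np

def cell_sorted_order(pos, box, ncell):
    """return an index that groups particles by chaining cell so that particles
    processed together are contiguous in memory (removes linked-list pointer chasing)."""
    ijk = np.floor(pos / box * ncell).astype(int) % ncell
    cell_id = (ijk[:, 0] * ncell + ijk[:, 1]) * ncell + ijk[:, 2]
    return np.argsort(cell_id, kind="stable")

pos = np.random.uniform(0.0, 100.0, size=(100000, 3))   # hypothetical positions
order = cell_sorted_order(pos, box=100.0, ncell=32)
pos = pos[order]                                         # reorder all per-particle arrays the same way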
in this article , we describe our research prototype system that can pick piled waste from a conveyor belt . the motivation for this prototype is grounded in the existing industrial robotic application of our company : robotic waste sorting .zenrobotics robots have been sorting waste on industrial waste processing sites since 2014 . at one of our sites ,4200 tons of construction and demolition waste has been processed . of that waste ,2300 tons of metal , wood , stone and concrete objects have been picked up from the conveyor by our sorting robots .performance of the robot in this environment is critical for paying back the investment .currently the robots are able to identify , pick and throw objects of up to 20 kg in less than 1.8 seconds , 24/7 . the current generation robot was taught to grasp objects using human annotations and a reinforcement learning algorithm as mentioned in .robotic recycling is rapidly growing , and is already transforming the recycling industry .robots ability to recognize , grasp and manipulate an extremely wide variety of objects is crucial . in order to provide this ability in a cost - effective way , new training methods which do not rely on hardcoding or human annotationwill therefore be required .for example , changing the shape of the gripper or adding degrees of freedom might require all picking logic to be rewritten or at least labor - intensive retraining unless the system is able to learn to use the new gripper or degrees of freedom by itself .we have chosen to tackle a small subproblem of the whole sorting problem : learning to pick objects autonomously .this problem differs from the more studied problems of `` cleaning a table by grasping '' and bin picking in several ways : 1 ) the objects are novel and there is a large selection of different objects .objects can be broken irregularly . in effect, anything can ( and probably will ) appear on the conveyor eventually .2 ) the objects are placed on the conveyor belt by a random process and easily form random piles .3 ) on the other hand , this problem is made slightly easier by the fact that it is not necessary to be gentle to the objects ; fragile objects will likely have been broken by previous processes already .scratching or colliding with objects does not cause problems as long as the robot itself can tolerate it ( see fig .[ fig : gripper ] ) .our solution starts with no knowledge of the objects and works completely autonomously to learn how to make better pickups using feedback , for example from sensors in the gripper like opening or force feedback . in the following sections, we will first describe the system in detail , describe our experiments with the system and conclude .in this section we describe our prototype system in detail .the hardware of our system consists of a waste merry - go - around ( fig .[ fig : merry - go - around ] ) , a 3d camera ( asus xtion ) , and a gantry type robot ( a prototype version of our product model ) .the gantry robot includes a wide - opening gripper and a large - angle compliance system ( fig . [fig : gripper ] ) .the gripper has evolved in previous versions of our product step by step to be morphologically well - adapted to the task .the gripper is position - controllable and has a sensor giving its current opening .in addition to the gripper opening , the robot has four degrees of freedom , the coordinates and rotation around the vertical axis ( i.e. , the gripper always faces down ) . 
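the four controllable degrees of freedom plus the gripper opening together define a pick proposal; a minimal container for such a proposal (field names are ours, not the product's) could look like:

# a minimal container for a pick proposal; units and field names are assumptions.
from dataclasses import dataclass

@dataclass
class PickProposal:
    x: float                    # gantry position along the belt (m)
    y: float                    # gantry position across the belt (m)
    z: float                    # grasp height above the belt (m)
    angle: float                # rotation around the vertical axis (rad)
    opening: float              # gripper opening when closing on the object (m)
    extra_opening: float = 0.0  # additional opening applied before the grasp (m)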
in our prototype system , we make use of our product s existing software modules that handle conveyor tracking and motion planning to execute a pick for a given, a data structure similar to the rectangle representation of jiang et al . containing gripper coordinates , gripper angle , and gripper opening for grasping an object . in our prototype , we replace those modules of our product that use information from line cameras to decide where to grip .recently several methods have been developed ( see and the references therein ) for calibrating sensors to robots . for the present prototype , we use a simplified automatic procedure for calibrating the 3d camera s coordinates to the gantry coordinates ( fig .[ fig : grippermask ] ) . the gripper s angle and opening parameters are calibrated separately using known gripper geometry parameters .coordinate of the tip of the closed gripper is detected from each position and stored with the corresponding gantry coordinates ( the 3d camera image and detected gripper tip for one position is shown in the image ) .a projective transformation is fitted to the data . ] the 3d camera image , [ fig : linesearch ] , and [ fig : handle - evaluation ] show depth images from an earlier version of our prototype using a higher resolution industrial ensenso n20 depth sensor instead of the asus xtion that was used in the expreriments reported here . ]is projected using gpu into an isometric heightmap defined on gantry coordinates ( fig .[ fig : dcam - projection ] ) .the projection code marks pixels that are occluded by objects to their maximum possible heights and additionally generates a mask indicating such unknown pixels .the handle generation happens in two stages : first , we exhaustively search through all _ closed handles _ , that is , gripper configurations where each finger of the gripper touches the heightmap and the heightmap rises between the two points ( fig .[ fig : linesearch ] ) .the full set of closed handles are weighted by the sum + [ h(s_1 - 1\textnormal { pixel})-h(s_1)]\ ] ] of height differences at the gripper contact points shown in fig .[ fig : linesearch ] .a sample of 200 handles is generated using probabilities proportional to the weights .after this , each handle in the sample is duplicated for all possible extra - openings allowed by the heightmap ( taking into account the nonlinear movement of the gripper as it opens and closes ) and the maximum opening of the gripper .this completes the hard - coded stage of handle generation . for every handle of the first stage, features are generated from the heightmap around the handle .the features are based on * pixel ( cm ) slices of the heightmap aligned at the left finger , center , and right finger of the gripper ( including a margin of 4 cm around the rectangle inside the gripper fingers ) , * the opening of the handle and extra opening to be applied when grasping , and * the height of the handle ( which is subtracted from the heightmap slices so as to yield translation invariant features ) . of these, the image features are further downsampled and transformed by a dual - tree complex wavelet transform to yield the inputs for a random forest that is trained to classify the handles into those that succeed and those that fail .the handle that gets the best score ( most votes from the random forest ) is chosen for picking ( except when its score is below 0.1 in which case it is only attempted with a 5% probability in order to avoid picking the empty belt for aesthetic reasons ) . 
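the first stage described above can be sketched as a weighted sampling of closed handles; the code below is a simplified illustration (the sign and orientation of the pixel offsets in the weight, and the assumption that weights are non-negative, are ours):

import numpy as np

rng = np.random.default_rng(0)

def handle_weight(heightmap, s1, s2):
    """first-stage weight of a closed handle: how sharply the 1d heightmap slice
    rises just inside each gripper contact point s1 < s2 (offset orientation assumed)."""
    h = heightmap
    return (h[s1 + 1] - h[s1]) + (h[s2 - 1] - h[s2])

def sample_handles(candidates, weights, n_sample=200):
    """draw up to n_sample handles with probability proportional to their weight."""
    p = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    p = p / p.sum()
    idx = rng.choice(len(candidates), size=min(n_sample, len(candidates)),
                     replace=False, p=p)
    return [candidates[i] for i in idx]

each sampled handle is then duplicated over the allowed extra openings, featurized, and scored by the second-stage classifier as described in the next section.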
when there is no trained model available , a random handle from the output of the first stage is chosen for picking . during each picking attempt, the system monitors the gripper opening and if the gripper closes ( almost ) completely before completing the throw , it is determined that the object has slipped and the pick is aborted .this post - verification signal yields the necessary feedback for training .the features and result of each pick attempt are stored and a background process reads these training samples periodically and trains a new handle model based on all collected data .when a new model is trained , the system starts using it on the next pick attempt .the immediate feedback from failed and successful attempts allows the system to learn quickly and autonomously and to adapt to novel objects .in this experiment , the conveyor under the system was cleared for calibration , the calibration was run , and the conveyor was started at a slow constant speed . when there were objects coming under the robot ,the picking software was started .the system started picking with just the hard - coded first stage model .after every 100 pick attempts , the system trained the second - stage model using data from all pick attempts from the beginning and started using the newly trained model on subsequent picks . for technical reasons related to data collection ,the system was paused briefly every 15 minutes .the results of this experiment are shown in fig .[ fig : exper23]a .the same experiment was repeated running the training every 10 seconds .the results are shown in fig .[ fig : exper23]b . from these results , it is clear that the immediate feedback from post - verification allows autonomous learning that can be very fast .a ) b ) in this experiment , the conveyor under the system was cleared for calibration , the calibration was run , and after moving the conveyor until there were objects in the working area , the picking software was started .then , the conveyor movement was controlled manually , moving it short distances at a time , so as to let the robot pick the conveyor clean .the system started picking using just the hard - coded first stage model and the second stage model was trained on data from all picking attempts from the beginning every 10 seconds . the picking performance improved during the experiment as in the other experiments .although somewhat more pick attempts will fail than on a constantly moving conveyor , the system will retry picking any objects left on the working area until it succeeds .the accompanying video shows how , after some training , the system clears a large pile from the conveyor ( fig .[ fig : emptying ] ) .we have demonstrated a prototype system that is able to pick a pile of novel waste objects from a conveyor and which has autonomously learned to select better points to pick from .we have shown that performing this task with a 4-dof robot with a single camera not on top of the system is possible .it is easy to think of several ways to improve the performance of the system . forthe picking the conveyor clean -task , simply adding better edges to the conveyor and making the working area slightly larger would help - currently the working area is very limited due to the 3d camera used .the machine learning algorithm used is very simple . 
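a schematic of this autonomous training loop, assuming (as a stand-in for unspecified learner details) a scikit-learn random forest, with function and variable names that are ours:

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

attempts_X, attempts_y = [], []     # features and post-verification outcomes of all picks
model = None                        # second-stage model; None until first training

def record_attempt(features, gripper_closed_early):
    """post-verification: a pick counts as failed if the gripper closed (almost)
    completely before the throw, i.e. the object slipped out."""
    attempts_X.append(features)
    attempts_y.append(0 if gripper_closed_early else 1)

def choose_handle(candidate_features):
    """use the latest trained model if available, otherwise fall back to a
    random handle from the hard-coded first stage."""
    if model is None:
        return int(np.random.randint(len(candidate_features)))
    scores = model.predict_proba(np.asarray(candidate_features))[:, 1]
    return int(np.argmax(scores))

def retrain(last_trained, interval=10.0):
    """background job: refit on all data collected so far every `interval` seconds."""
    global model
    if time.time() - last_trained < interval or len(set(attempts_y)) < 2:
        return last_trained
    model = RandomForestClassifier(n_estimators=100).fit(
        np.asarray(attempts_X), np.asarray(attempts_y))
    return time.time()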
enlargingthe set of candidate handles could boost performance significantly and would be easy to parallelize on the gpu .it would also be possible to make the hard - coded first stage less conservative regarding shadows . on the other hand, it would be possible to address some of the specific types of errors that were observed : * grasping shadow : our current handle model does not make use of the mask indicating areas with unknown height ( i.e. , areas occluded by objects from the 3d camera s point of view ) ; using this information in the features would allow learning to better handle the shadows ; alternatively two 3d cameras could be used to reduce shadows * grasping at object ( corner ) that just came in range : this could be improved by additional logic to avoid handles at the edge * grasping at empty belt : when there are no objects , small variations of the conveyor height , small particles , or sensor noise may yield handles ; we have reduced such pick attempts by avoiding picking ( except by small probability ) when the score of the best handle is below certain threshold * thin objects : the postverification may yield incorrect failure signal when grasping a thin object and the system may learn to avoid picking thin objects ; this shows the importance of the feedback signal * heavy stones slipping : could use slower throw , adding throw acceleration as another degree of freedom for the generated handles . on the other hand , with this system , the point of diminishing returns is quickly reached because the system can retry picks that failed .the difference between an 80% success rate and 90% success rate is relatively minor , as opposed to the same difference in a line scanning system where 80% would mean double the number of unpicked objects from 90% . at the moment, the cycle time of the prototype , around 6 seconds , is a far cry from our production system s 1.8 s cycle time .however , there is no fundamental reason why such a cycle time could not be reached by this type of system ; the difference is mostly caused by the prototype being very conservative about when the images are taken and not being yet optimized .more interesting extensions of the systems in terms of practical applicability would be , e.g , learning to control the conveyor in order to maximize some function of the amount of picked material and the percentage of objects that get picked ; sorting objects by some characteristic while picking , and learning to carefully pick one object at a time . in the current setup , the last one was not a problem ; two - or - more - object picks were rare but this may be more related to the size of the objects and the gripper .the authors would like to thank the zenrobotics team of research assistants for helping in this work , especially risto sirvi for supervising many of the experiments and risto sirvi and sara vogt for annotating experiment data .the authors would also like to thank risto bruun , antti lappalainen , arto liuha , and ronald tammepld for discussions and plc work , timo tossavainen for many discussions , and risto bruun , juha koivisto , and jari siitari for hardware work .this work also makes use of the contributions of the whole zenrobotics team through the parts of our product that were reused in this prototype .t. j. lukka , t. tossavainen , j. v. kujala , and t. raiko , `` zenrobotics recycler robotic sorting using machine learning , '' in _ proceedings of the international conference on sensor - based sorting ( sbs ) _ , 2014 .d. rao , q. v. le , t. 
phoka , m. quigley , a. sudsang , and a. y. ng , `` grasping novel objects with depth segmentation , '' in _ proceedings of the international conference on intelligent robots and systems ( iros ) _ , 2010 , pp . 25782585 .y. domae , h. okuda , y. taguchi , k. sumi , and t. hirai , `` fast graspability evaluation on single depth maps for bin picking with general grippers , '' in _ proceedings of the international conference on robotics and automation ( icra ) _ , 2014 , pp. 19972004 .d. holz , m. nieuwenhuisen , d. droeschel , j. stckler , a. berner , j. li , r. klein , and s. behnke , `` active recognition and manipulation for mobile robot bin picking , '' in _ gearing up and accelerating cross - fertilization between academic and industrial robotics research in europe_.1em plus 0.5em minus 0.4emspringer , 2014 , pp .133153 .m. nieuwenhuisen , d. droeschel , d. holz , j. stuckler , a. berner , j. li , r. klein , and s. behnke , `` mobile bin picking with an anthropomorphic service robot , '' in _ proceedings of the international conference on robotics and automation ( icra ) _ , 2013 , pp .23272334 .y. jiang , s. moseson , and a. saxena , `` efficient grasping from rgbd images : learning using a new rectangle representation , '' in _ proceedings of the international conference on robotics and automation ( icra ) _ , 2011 , pp .33043311 .v. pradeep , k. konolige , and e. berger , `` calibrating a multi - arm multi - sensor robot : a bundle adjustment approach , '' in _ proceedings of international symposium on experimental robotics ( iser ) _ , 2014 , pp .
we present a research picking prototype related to our company 's industrial waste sorting application . the goal of the prototype is to be as autonomous as possible : it both calibrates itself and improves its picking with minimal human intervention . the system learns to pick objects better based on a feedback sensor in its gripper and uses machine learning to choose the best proposal from a random sample produced by simple hard - coded geometric models . we show experimentally that the system improves its picking autonomously by measuring the pick success rate as a function of time . we also show how the system can pick a conveyor belt clean , depositing 70 out of 80 objects from a difficult - to - manipulate pile of novel objects into the correct chute . we discuss potential improvements and next steps in this direction .
when a suspension of particles reaches an asymmetric bifurcation , it is well - known that the particle volume fractions in the two daughter branches are not equal ; basically , for branches of comparable geometrical characteristics , but receiving different flow rates , the volume fraction of particles increases in the high flow rate branch .this phenomenon , sometimes called the zweifach - fung effect ( see * ? ? ?* ; * ? ? ?* ) , has been observed for a long time in the blood circulation . under standard physiological circumstances ,a branch receiving typically one fourth of the blood inflow will see its hematocrit ( volume fraction of red blood cells ) drop down to zero , which will have obvious physiological consequences .the expression attraction towards the high flow rate branch is sometimes used in the literature as a synonymous for this phenomenon .indeed , the partitioning not only depends on the interactions between the flow and the particles , which are quite complex in such a peculiar geometry , but also on the initial distribution of particles . + apart the huge number of in - vivo studies on blood flow ( see for a review ) , many other papers have been devoted to this effect , either to understand it , or to use it in order to design sorting or purification devices . in the latter case, one can play at will with the different parameters characterizing the bifurcation ( widths of the channels , relative angles of the branches ) , in order to reach a maximum of efficiency . as proposed in many papers , focusing on rigid spheres can already give some keys to understand or control this phenomenon ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . in - vitro behavior of red blood cells has also attracted some attention ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?the problem of particle flow through an array of obstacles , which can be somehow considered as similar , has also been studied recently ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?all the latter papers consider the low - reynolds - number limit , which is the relevant limit for applicative purposes and for the biological systems of interest .therefore , this limit is also considered throughout this paper .+ in most studies as well as in in vivo blood flow studies , which are for historical reasons the main sources of data , the main output is the particle volume fraction in the two daughter branches as a function of the flow rate ratio between them .such data can be well described by empirical laws that still depend on some ad - hoc parameters but allow some rough predictions ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , which have been exhaustively compared recently ( see * ? ? ?* ) . and the dashed line stands for the separating streamline between the flows that will eventually enter branches 1 and 2 in the absence of particles .( b ) the t - bifurcation that is studied in this paper and also in in order to get rid of geometrical effects as much as possible . ] on the other hand , measuring macroscopic data such as volume fractions does not allow to identify the relevant parameters and effects involved in this asymmetric partitioning phenomenon . 
for a given bifurcation geometry and a given flow rate ratio between the two outlet branches , the final distribution of the particles can be straightforwardly derived from two data : first , their spatial distribution in the inlet ; second , their trajectories in the vicinity of the bifurcation , starting from all possible initial positions .if the particles follow their underlying unperturbed streamlines ( as would a sphere do in a stokes flow in a straight channel ) , their final distribution can be easily computed , although particles near the apex of the bifurcation require some specific treatment , since they can not approach it as much as their underlying streamline does .the relevant physical question in this problem is thus to identify the hydrodynamic phenomenon at the bifurcation that would make flowing objects escape from their underlying streamlines , which would have as a consequence that a large particle would be driven towards one branch while a tiny fluid particle located at the same position would go to the other branch . in order to focus on this phenomenon, we need to identify more precisely the other parameters that influence the partitioning , for a given choice of flow rate ratio between the two branches . + 1 . _the bifurcation geometry ._ and made it clear , for instance , that the partitioning in y - shaped bifurcations depends strongly on the angles between the two branches ( see figure [ fig : schema]a ) . for instance , while the velocity is mainly longitudinal , the effective available cross section to enter a perpendicular branch is smaller than in the symmetric y - shaped case . even in the latter case , the position of the apex of the bifurcation relatively to the separation line between the fluids going in the two branches might play a role , due to the finite size of the flowing objects .2 . _ the radial distribution in the inlet channel ._ in an extreme case where all the particles are centered in the inlet channel and follow the underlying fluid streamline , they all enter in the high flow rate branch ; more generally the existence of a particle free layer near the walls favours the high flow rate branch , since the depletion in particles it entails is relatively more important for the low flow rate branch , which receives fluid that occupied less place in the inlet branch .the existence of such a particle free layer near the wall has been observed for long in blood circulation , under the name of plasma skimming .more generally , it can be due to lateral migration towards the centre , which can be of inertial origin ( high reynolds number regime)(see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , or viscous one .in such a situation of low reynolds number flow , while a sphere does not migrate transversally due to symmetry and linearity in the stokes equation , deformable objects such as vesicles ( closed lipid membranes ) ( see * ? ? ?* ; * ? ? ?* ) , red blood cells ( see * ? ? ?* ; * ? ? ?* ) that exhibit similar dynamics as vesicles ( see * ? ? ?* ; * ? ? ?* ) , drops ( see * ? ? ?* ; * ? ? ?* ) or elastic capsules ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , might adopt a shape that allows lateral migration .this migration is due to the presence of walls ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) as well as to the non - constant shear rate ( see * ? ? ?* ; * ? ? 
?even in the case where no migration occurs , the initial distribution is still not homogeneous : since the barycentre of particles can not be closer to the wall than their radius , there is always some particle free layer near the walls .this sole effect will favour the high flow rate branch ._ interactions between objects ._ as illustrated in or , interactions between objects tend to smoothen the asymmetry of the distribution , in that the second particle of a couple will tend to go in the other branch as the first one .a related issue is the study of trains of drops or bubbles at a bifurcation , that completely obstruct the channels and whose passage in the bifurcation greatly modifies the pressure distribution in its vicinity , and thus influences the behaviour of the following element ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?+ in spite of the huge literature on this subject , but probably because of the applicative purpose of most studies , the relative importance of these different parameters are seldom quantitatively discussed , although most authors are fully aware of the different phenomena at stake .as we want to focus in this paper on the question of cross streamline migration in the vicinity of the bifurcation , we will consider rigid spheres , for which no transverse migration in the upstream channel is expected , that are in the vanishing concentration limit and flow through symmetric bifurcations , that is the symmetric y - shaped and t - shaped bifurcations shown on figure [ fig : schema ] , where the two daughter branches have same cross section and are equally distributed relatively to the inlet channel .+ indeed , this rigid spheres case is already quite unclear in the literature . in the following ,we first make a short review of some previous studies that consider a geometrically symmetric situation and thoroughly re - analyze their results in order to detect whether the zweifach - fung effect they see is due to initial distribution or to some attraction in the vicinity of the bifurcation , which was generally not done ( section [ sec : litt ] ) .we then present in sections [ sec : method ] and [ sec : results ] our two - dimensional simulations and quasi - two - dimensional experiments ( in a sense that the movement of the three - dimensional objects is planar ) .we mainly focus on the t - shaped bifurcation , in order to avoid as much as possible the geometrical constraint due to the presence of an apex .our main result is that there is some attraction towards the low flow rate branch ( section [ sec : mig ] ) .this result is then analyzed and explained through basic fluid mechanics arguments , which are compared to the ones previously evoked in the literature . in a second time , we discuss which consequences this drift has on the final distribution in the daughter branches . 
to do so , we focus on what the particles concentrations at the outlets would be in the simplest case , that is particles homogeneously distributed in the inlet channel , with the sole ( and unavoidable ) constraint that they can not approach the walls closer than their radius ( denominated as _ depletion effect _ in the following , see figure [ fig : schema](b ) ) .this is done through simulations , which allow us to easily control the initial distribution in particles ( section [ sec : distr ] ) .consequences for the potential efficiency of sorting or purification devices are discussed .we finally come back , in section [ sec : consistency ] , to some of the previous studies found in the literature with which quantitative comparisons can be done in order to check the consistency between them and our results .+ before discussing the results from the literature and presenting our own data , we shall introduce useful common notations ( see figure [ fig : schema]b ) . the half - width of the inlet branch is set as the length scale of the problem .the inlet channel divides into two branches of width ( the case will be mainly considered here by default , unless otherwise stated ) , and spheres of radius are considered .the flow rate at the inlet is noted , and and are the flow rates at the upper and lower outlets ( ) . in the absence of particles, all the fluid particles situated initially above the line will eventually enter branch 1 .this line is called the ( unperturbed ) fluid separating streamline . is the initial transverse position of the considered particle far before it reaches the bifurcation ( ) . and are the numbers of particles entering branches 1 and 2 by unit time , while have entered the inlet channel .the volume fractions in the branches are , where v is the volume of a particle . with these notations, we can reformulate our question : if , does the particle experience a net force in the direction ( e. g. a pressure difference ) that would push it towards one of the branches , while a fluid particle would remain on the separating streamline ( by definition of ) ?if so , for which position does this force vanish , so that the particle follows the streamlines and eventually hits the opposite wall and reaches an ( unstable ) equilibrium position ? if and , then one will talk about _ attraction towards the low flow rate branch_. following these notations , we have : where is the mean density in particles at height in inlet branch , and are respectively the particles and flow longitudinal upstream velocities . and given by the same formula with .the zweifach - fung effect can then be written as follows : if ( branch 1 receives less flow than branch 2 ) then ( branch 1 receives even less particles than fluid ) or equivalently ( the particle concentration is decreased in the low flow rate branch ) .in the literature , the most common symmetric case that is considered is the y - shaped bifurcation with daughter branches leaving the bifurcation with a angle relatively to the inlet channel , and cross sections identical as the one of the inlet channel ( figure [ fig : schema]a ) ( see * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) .the t - shaped bifurcation ( figure [ fig : schema]b ) has attracted little attention ( see * ? ? ?* ; * ? ? 
?all studies but show results for rigid spherical particles , while some results for deformable particles are given in and .explicit data on a possible attraction towards one branch are scarce as they can only be found in a recent two - dimensional simulations paper ( see * ? ? ?in three other papers , dealing with two - dimensional simulations ( see * ? ? ? * ) or experiments in square cross section channels ( see * ? ? ? * ; * ? ? ?* ) , the output data are the concentrations at the outlets . in this section, we re - analyze their data in order to discuss the possibility of an attraction towards one branch .experiments in circular cross section channels were also developed ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , on which we comment in a second time . in the two - dimensional simulations presented in , some trajectories around the bifurcation are shown , however the authors focused on an asymmetric y - shaped bifurcation .in addition , some data for in a symmetric y - shaped bifurcation and are presented ._ ran experiments with balls of similar size ( ) in a symmetric y - shaped bifurcation with square cross section and also showed data for as a function of ( see * ? ? ?experiments with larger balls ( ) in square cross section channels were carried out in .once again , the output data are the ratios . in both experiments, the authors made the assumption that the initial ball distribution is homogeneous , as considered also in the simulation paper by audet and olbricht .in all the latter papers , although the authors are sometimes conscious that the depletion and attraction effects might screen each other , the relative weight of each phenomenon is not really discussed .however , yang __ consider explicitly that there must be some _ attraction towards the high flow rate branch _ and give some qualitative arguments for it .this opinion , initially introduced by fung ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , is widely spread in the literature ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?we shall come back to the underlying arguments in the following .+ in figure [ fig : compar ] we present the data of as a function of taken from for ( two - dimensional simulations ) , for ( experiments ) and for ( experiments ) .it is very instructive to compare these data with the corresponding values calculated with a very simple model based on the assumption that no particular effect occurs at the bifurcation , that is , the particles follow their underlying streamline ( _ no - attraction assumption _ ) .to do so , we consider the two - dimensional case of flowing spheres and calculate the corresponding according to equation ( [ eq : n1 ] ) .the no - attraction assumption implies that and , as in the considered papers , the density is considered constant for .the particles velocity is given by our simulations presented in section [ sec : distr ] .since we consider only flow ratios , this two - dimensional approach is a good enough approximation to discuss the results of the three - dimension experiments , as the fluid separating plane is orthogonal to the plane where the channels lie ; moreover , the position of this plane differs only by a few percent from the position of the separating line in two dimensions . 
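the comparison just described rests on equation ([eq:n1]), whose inline form was lost in extraction. a plausible reconstruction, consistent with the notation introduced earlier (the integration bounds involving the particle radius r and the separating positions y_c and y_0 are our assumption), is

\[
\frac{N_1}{N} \;=\; \frac{\displaystyle\int_{y_c}^{1-r} \rho(y)\, v_p(y)\,\mathrm{d}y}{\displaystyle\int_{-1+r}^{1-r} \rho(y)\, v_p(y)\,\mathrm{d}y},
\qquad
\frac{Q_1}{Q_t} \;=\; \frac{\displaystyle\int_{y_0}^{1} v_f(y)\,\mathrm{d}y}{\displaystyle\int_{-1}^{1} v_f(y)\,\mathrm{d}y}.
\]

under the no-attraction assumption (y_c = y_0, uniform particle density for |y| < 1 - r), the comparison curve can be evaluated numerically. a minimal sketch, which for simplicity advects particles at the local unperturbed poiseuille velocity rather than the simulated particle velocity used here, is:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def v_fluid(y):
    """unperturbed 2d poiseuille profile in the inlet channel of half-width 1."""
    return 1.0 - y**2

def y0_from_flow_ratio(q1_over_qt):
    """separating streamline position y0 such that the flow above y0 equals q1."""
    qt = quad(v_fluid, -1.0, 1.0)[0]
    return brentq(lambda y: quad(v_fluid, y, 1.0)[0] - q1_over_qt * qt, -1.0, 1.0)

def n1_over_n(q1_over_qt, r):
    """particle flux fraction into branch 1 under the no-attraction assumption:
    uniform density for |y| < 1 - r, particles moving at the local fluid speed."""
    y0 = y0_from_flow_ratio(q1_over_qt)
    if y0 >= 1.0 - r:
        return 0.0                       # below the critical flow ratio: no particle enters branch 1
    num = quad(v_fluid, max(y0, -(1.0 - r)), 1.0 - r)[0]
    den = quad(v_fluid, -(1.0 - r), 1.0 - r)[0]
    return num / den

for q in (0.1, 0.3, 0.5):
    print(q, n1_over_n(q, r=0.4))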
in all curves, it is seen that , if , then , which is precisely the zweifach - fung effect .note that this effect is present even under the no - attraction assumption : as already discussed , the sole depletion effect is sufficient to favour the high flow rate branch .let us first consider spheres of medium size ( : audet and olbricht / yang _ et al .if we compare the data from the literature with the theoretical curve found under the no - attraction assumption , we see that the enrichment in particles in the high flow rate branch is less pronounced in the simulations by audet and olbricht and of the same order in the experiments by yang _et al._. therefore , we can assume that in the two - dimensional simulations by audet and olbricht , there is an attraction towards the low flow rate branch , which lowers the enrichment of the high flow rate branch .the case of the experiments is less clear : it seems that no peculiar effect takes place .the case is even more striking : under the no - attraction assumption , we can see that for , because and no sphere can enter the low flow rate branch .in the meantime , a non negligible amount of particles are found to enter branch 1 for by roberts and olbricht in their experiments ( see figure [ fig : compar ] ) .it is clear from this that there must be some attraction towards the low flow rate branch .+ for channels with circular cross sections , the data found in the literature do not all tell the same story , although spheres of similar sizes are considered . in , spheres are considered in a t - shaped bifurcation .the y - shaped bifurcation was considered twice by the same research group , with very similar spheres : ( see * ? ? ?* ) and ( see * ? ? ?in a circular cross section channel , the plane orthogonal to the plane where the channels lie , parallel to the streamlines in the inlet channel and located at distance from the inlet channel wall corresponds to the flow separating plane for . at low concentrations , very few spheresare observed in branch 1 for in ( figure 3d ) and ( figure 3 ) , in agreement with a no - attraction assumption . in , the authors also show their data can be well described by the theoretical curve calculated by assuming the particles follow their underlying streamlines . in marked contrast with these results , a considerable amount of spheres is still observed in branch 1 in the same situation in ( figure 4 ) .similarly , in ( figure 4 ) , many particles with are found to enter the low flow rate branch 1 even when , which would indicate some attraction towards the low flow rate branch .thus , in a channel with circular cross section , the results are contradictory . in the pioneering work presented in ,a t - shaped bifurcation is also considered , with flexible disks mimicking red blood cells , but the deformability of these objects and the noise in the data do not allow us to make any reasonable discussion . + more recently , presented simulations of two - dimensional spheres with and two - dimensional deformable objects mimicking red blood cells in a symmetric y - shaped bifurcation .the values of as a function of the flow rate ratios and the spheres radius are clearly discussed . 
for spheres , it is shown that if , that is , there is an _ attraction towards the low flow rate branch _ , which increases with .deformable particles are also considered .however , it is not possible to discuss from their data ( as , probably , from any other data ) whether the cross streamline migration at the bifurcation is more important in this case or not : for deformable particles , transverse migration towards the centre occurs , due to the presence of walls and of non homogeneous shear rates .this migration will probably screen the attraction effect , at least partly , and it seems difficult to quantify the relative contribution of both effects .in particular , depends on the ( arbitrary ) initial distance from the bifurcation . in , attraction towards the low flow rate branchis also quickly evoked , but considered as negligible since the authors mainly focus on large channels and interacting particles .+ finally , from our new analysis of previous results of the literature ( and despite some discrepancies ) it appears that there should be some attraction towards the low flow rate branch , although the final result is an enrichment of the high flow rate branch due to the depletion effect in the inlet channel .this effect was seen by barber __ in their simulations . on the other hand, if one considers the flow around an obstacle , as simulated in , it seems that spherical particles are attracted towards the high flow rate side .+ from this we conclude that the different effects occurring at the bifurcation level are neither well identified nor explained .moreover , to date , no direct experimental proof of any attraction phenomenon exists . in section[ sec : mig ] , we show experimentally that attraction towards the low flow rate branch takes place and confirm this through numerical simulations .it is then necessary to discuss whether this attraction has important consequences on the final distributions in particles in the two daughter channels .this was not done explicitly in .it is done in section [ sec : distr ] where we discuss the relative weight of the attraction towards the low flow rate branch and the depletion effect , which have opposite consequences , by using our simulations .we studied the behaviour of hard balls as a first reference system .since the potential migration across streamlines is linked to the way the fluid acts on the particles , we also studied spherical fluid vesicles .they are closed lipid membranes enclosing a newtonian fluid .the lipids that we used are in liquid phase at room temperature , so that the membrane is a two - dimensional fluid . in particular , it is incompressible ( so that spherical vesicles will remain spherical even under stress , unlike drops ) , but it is easily sheared : it entails that a torque exerted by the fluid on the surface of the particle can imply a different response whether it is a solid ball or a vesicle .moreover , since vesicle suspensions are polydisperse , it is a convenient way to vary the radius of the studied object .the experimental setup is a standard microfluidic chip made of polydimethylsiloxane bonded on a glass plate ( figure [ fig:5branches ] ) .we wish to observe what happens to an object located around position that is , in which branch it goes at the bifurcation . in order to determine the corresponding , we need to scan different initial positions around . one solution would be to let a suspension flow and hope that some of the particles will be close enough to the region of interest . 
in the meantime , as we shall see , the cross streamline effect is weak and requires precise measurement , and noticeable effects appear only at high radius , typically . with such objects, clogging is unavoidable , which would modify the flow rates ratio , and if a very dilute suspension is used , it is likely that the region of interest will only partly be scanned . to branch 1 after having been focused on a given streamline thanks to flows from lateral branches and . ]therefore we designed a microfluidic system allowing to use only one particle , that would go through the bifurcation with a controlled initial position , would be taken back , its position modified , would flow again in the bifurcation , and so on .moreover , we allowed continuous modification of the flow rate ratio between the two daughter branches .the core of the chip is the five branch crossroad shown in inset on figure [ fig:5branches ] .these five branches have different lengths and are linked to reservoirs placed at different heights , in order to induce a flow by hydrostatic pressure gradient . a focusing device ( branches , and )is placed before the bifurcation of interest ( branches 1 and 2 ) , in order to control the lateral position of the particle .particles are initially located in the central branch , where the flow is weak and the incoming particles are pinched between the two lateral flows . in order to modify the position of the particle ,the relative heights of the reservoir linked to the lateral branches are modified . the total flow rate and the flow rate ratios between the two daughter branches after the bifurcation are controlled by varying the heights of the two outlet reservoirs .note the flow rates ratio also depends on the heights of the reservoirs linked to inlet branches , and . since the two latter must be continuously modified to vary the position of the incoming particle in order to find for a given flow rate ratio , it is convenient to place them on a pulley so that their mean height is always constant ( the resistances of branches and being equal ) . if the total flow rate is a relevant parameter ( which is not be the case here since we consider only stokes flow of particles that do not deform ) , one can do the same with the two outlet reservoirs . in such a situation ,if reservoir of branch is placed at height 0 , reservoirs of branches and at heights , and reservoirs of branches 1 and 2 at height and , the flow rate ratio is governed by setting and can be modified independently in order to control .once the particle has gone through the bifurcation , height and the height of reservoir are modified so that the particle comes back to branch , and is modified in order to get closer and closer to position . ( or , equivalently , ) is a function of , , and the flow resistances of the five branches of rectangular cross sections , which are known functions of their lengths , widths and thicknesses ( see * ? ? ?the accuracy of the calculation of this function was checked by measuring for small particles , that must be equal to .note that the length of the channel is much more important than the size of a single flowing particle , so that we can neglect the contribution of the latter in the resistance to the flow : hence , even though we control the pressures , we can consider that we work at fixed flow rates . 
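the flow-rate ratio is thus obtained from the reservoir heights and the hydraulic resistances of the five rectangular channels. a minimal sketch of that type of calculation, using the standard wide-rectangular-channel approximation for the resistance (an assumption on our part; the authors cite their own reference for the exact expressions) and hypothetical geometry, is:

import numpy as np

def resistance(length, width, thickness, mu=1.0e-3):
    """approximate hydraulic resistance of a rectangular microchannel
    (valid for thickness < width); standard lubrication-type estimate."""
    h, w = thickness, width
    return 12.0 * mu * length / (w * h**3 * (1.0 - 0.63 * h / w))

# hypothetical geometry: branches o, a, b feed the crossroad, branches 1 and 2 drain it.
R = {"o": resistance(5e-3, 100e-6, 50e-6),
     "a": resistance(5e-3, 100e-6, 50e-6),
     "b": resistance(5e-3, 100e-6, 50e-6),
     "1": resistance(8e-3, 100e-6, 50e-6),
     "2": resistance(8e-3, 100e-6, 50e-6)}

def flow_ratio(p_o, p_a, p_b, p_1, p_2, R):
    """solve for the crossroad pressure by mass conservation, then return q1/q2."""
    g = {k: 1.0 / r for k, r in R.items()}                 # conductances
    p = {"o": p_o, "a": p_a, "b": p_b, "1": p_1, "2": p_2}
    p_x = sum(g[k] * p[k] for k in R) / sum(g.values())    # sum_k (p_k - p_x)/R_k = 0
    return (g["1"] * (p_x - p["1"])) / (g["2"] * (p_x - p["2"]))

rho_g = 1000.0 * 9.81   # hydrostatic pressure per metre of reservoir height
print(flow_ratio(rho_g * 0.02, rho_g * 0.03, rho_g * 0.03, 0.0, rho_g * 0.01, R))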
finally ,as it can be seen on figure [ fig : discrimination ] , our device allows us to scan very precisely the area of interest around the sought , so that the uncertainty associated to it is very low . + starting just above and just below its separating line . no clear difference between these two starting positionscan be seen with eyes , which illustrates the accuracy we get in the measurement of . is set to . ] at the bifurcation level , channels widths are all equal to m .their thickness is m . we used polystyrene balls of maximum radius m in soapy water ( therefore ) and fluid vesicles of size .vesicles membrane is a dioleoylphosphatidylcholine lipid bilayer enclosing an inner solution of sugar ( sucrose or glucose ) in water .vesicles are produced following the standard electroformation method ( see * ? ? ?maximum flow velocity at the bifurcation level was around 1 mm.s , so that the reynolds number re . in the simulations, we focus on the two - dimensional problem ( invariance along the axis ) .our problem is a simple fluid / structure interaction one and can be modeled by navier - stokes equations for the fluid flow and newton - euler equations for the sphere .these two problems can be coupled in a simple manner : * the action of fluid on the sphere is modeled by the hydrodynamic force and torque acting on its surface .they are used as the right hand sides of newton - euler equations . *the action of the sphere on fluid can be modeled by a non - slip boundary conditions on the sphere ( in the navier - stokes equations ) .however , this explicit coupling can be unstable numerically and its resolution often requires very small time steps .in addition , as we have chosen to use finite element method ( for accuracy reasons ) and since the position of the sphere evolves in time we have to remesh the computational domain at each time step or in best cases at each few time steps .for all these reasons we chose another strategy to model our problem . instead of using newton - euler equations for modeling the sphere motion and navier - stokes equations for the fluid flow ,we use only the stokes equations in the whole domain of the bifurcation ( including the interior of the sphere ) .the use of stokes equations is justified by the small reynolds number in our case and the presence of the sphere is rendered by a second fluid with a huge viscosity on which we impose a rigid body constraint .this type of strategy is widely used in the literature with different names e.g. the so called fpd ( fluid particle dynamics ) method ( see * ? ? ?* ; * ? ? ?* ) ) but we can group them under the generic name of penalty - like methods . the one that we use is mainly developed by lefebvre _( see * ? ? ?* ; * ? ? ?* ) and we can find a mathematical analysis of these types of methods in . in what follows we describe briefly the basic ingredients of the finite element method and penalty technique applied to our problem . the fluid flow is governed by stokes equations that can be written as follow : where : * , and are respectively the viscosity , the velocity and the pressure fields of the fluid , * is the domain occupied by the fluid .typically if we denote by the whole bifurcation and by the rigid particle , * is the border of , * is some given function for the boundary conditions .it is known that under some reasonable assumptions the problem ( [ eq:1])-([eq:2])-([eq:2a ] ) has a unique solution ( see * ? ? 
?in the sequel we will use the following functional spaces : as we will use for the numerical resolution of problem ( [ eq:1])-([eq:2])-([eq:2a ] ) , we need to rewrite it in a variational form ( an equivalent formulation of the initial problem ) . for sake of simplicity , we start by writing it in a standard way ( fluid without sphere ) , then we modify it using penalty technique to take into account the presence of the particle . in what followwe describe briefly these two methods , the standard variational formulation for the stokes problem and the penalty technique .let us first recall the deformation tensor which will be useful in the sequel thanks to incompressibility constraint we have hence , the problem ( [ eq:1])-([eq:2])-([eq:2a ] ) can be rewritten as follows : find such that : by simple calculations ( see appendix for details ) we show that problem ( [ eq:1b])-([eq:2b])-([eq:2ab ] ) is equivalent to this one : find such that : where denotes the double contraction .we chose to use the penalty strategy in the framework of that we will describe briefly here ( see for more details ) .the first step consists in rewriting the variational formulation ( [ eq:3bs])-([eq:4bs])-([eq:5bs ] ) by replacing the integrals over the real domain occupied by the fluid ( ) by those over the whole domain ( including the sphere ) . which means that we extend the solution to the whole domain .more precisely , by the penalty method we replace the particle by an artificial fluid with huge viscosity .this is made possible by imposing a rigid body motion constraint on the fluid that replaces the sphere ( in ) . obviously , the divergence free constraint is also insured in .the problem ( [ eq:3bs])-([eq:4bs])-([eq:4bs ] ) is then modified as follows : find such that : where is a given penalty parameter . finally ,if we denote the time discretization parameter by , the velocity and the pressure at time by , the velocity of the sphere at time by and its centre position by , we can write our algorithm as : solves : the implementation of algorithm ( [ eq:15])-([eq:15a])-([eq:3bpf])-([eq:4bpf])-([eq:5bpf ] ) is done by using a user - friendly finite element software : ` freefem++ ` ( see * ? ? ?finally , we consider the bifurcation geometry shown in figure [ fig : schema](b ) and impose no - slip boundary conditions on all walls and we prescribe parabolic velocity profiles at the inlets and outlets such that , for a given choice of flow rate ratio , . for a given initial position of the sphere of given radius at the outlet , the full trajectory is calculated until it definitely enters one of the daughter branches .a dichotomy algorithm is used to determine the key position .spheres of radius up to are considered . in practice ,the penalty technique may deteriorate the preconditionning of our underlying linear system . to overcome this problem, one can regularize equation ( [ eq:4bpf ] ) by replacing it with this one : where is a given parameter .the t - bifurcation with branches of equal widths is considered .branch 1 receives flow from high values , so for indicates attraction towards the low flow rate branch ( see also figure [ fig : schema]b ) .( a ) data from quasi - two - dimensional experiments and comparison with two - dimensional case for one particle size . 
the two - dimensional and three - dimensional fluid separating lines are shown to illustrate the low discrepancy between the two cases , as requested to validate our new analysis of the literature in section [ sec : litt ] .the horizontal dotted line shows the maximum position for spheres .its intersection with the curve yields the critical flow rate ratio below which no particle enters branch 1 , the low flow rate branch .this expected critical flow rates for the two- and three - dimensional cases are shown by arrows .( b ) data from two - dimensional simulations . ] in figure [ fig : lignessep ] we show the position of the particle separating line relatively to the position of the fluid separating line when branch 1 receives less fluid than branch 2 ( see figure [ fig : schema]b ) , which is the main result of this paper . for all particles considered , in the simulations or in the experiments , we find that the particle separating line lies below the fluid separating line , the upper branch being the low flow rate branch .these results clearly indicate an attraction towards the low flow rate branch : while a fluid element located below the fluid separating streamline will enter into the high flow rate branch , a solid particle can cross this streamline and enter into the low flow rate branch , providing it is not too far initially .it is also clear that the attraction increases with the sphere radius . in particular , in the experiments ( figure [ fig : lignessep]a ) , particles of radius like fluid particles . balls show a slight attraction towards the low flow rate branch , while the effect is more marked for big balls of radius .vesicles show comparable trend and it seems from our data that solid particles or vesicles with fluid membrane behave similarly in the vicinity of the bifurcation . in the simulations ( figure [ fig : lignessep]b ) we see clearly that for a given , the discrepancy between the fluid and particle behaviour increases when decreases .on the contrary , in the quasi - two - dimensional case of the experiments , the difference between the flow and the particle streamlines seems to be rather constant in a wide range of values .finally , for small enough values of , the attraction effect is more pronounced in the two - dimensional case than in the quasi - two - dimensional one , as shown on figure [ fig : lignessep](a ) for .this was to be expected , since this effect has something to do with the non zero size of the particle and the real particle to channel size ratio is lower in the experiments for a given , due to the third dimension . in all cases , below a given value of , the critical position would enter the depletion zone , so that no particle will eventually enter the low flow rate branch .the corresponding critical is much lower in the two - dimensional case than in the experimental quasi - two - dimensional situation ( see figure [ fig : lignessep]a ) . the first argument for some attraction towards one branchwas initially given by fung ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) and strengthened by recent simulations ( see * ? ? 
?* ) : a sphere in the middle of the bifurcation is considered ( ) and it is argued that it should go to the high flow rate branch since the pressure drop is higher than because ( see figure [ fig : schema](b ) for notations ) .this is true ( we also found when ) but this is not the point to be discussed : if one wishes to discuss the increase in volume fraction in branch 2 , therefore to compare the particles and fluid fluxes and , one needs to focus on particles in the vicinity of the fluid separating streamline ( to see whether or not they behave like the fluid ) and not in the vicinity of the middle of the channel . on the other hand ,this incorrectly formulated argument by fung has led to the idea that there must be some attraction towards the high flow rate branch in the vicinity of the fluid separating streamline ( see * ? ? ?* ) , which appears now in the literature as a well established fact ( see * ? ?* ; * ? ? ?* ) . in ,fung s argument is rejected , although it is not explained why .arguments for attraction towards the low flow rate branch ( that is , on figure [ fig : schema](b ) ) are given , considering particles in the vicinity of the fluid separating streamline .the authors main idea is , first , that some pressure difference builds up on each side of the particle because it goes more slowly than the fluid .then , as the particle intercepts a relatively more important area in the low flow rate branch region ( ) than in the high flow rate region , they consider that the pressure drop is more important in the low flow rate region , so that .the authors call this effect daughter vessel obstruction. indeed , it is not clear in this paper where the particles must be for this argument to be valid : at the entrance of the bifurcation , in the middle of it , or close to the opposite wall as we could think since their arguments are used to explain what happens in case of daughter branches of different widths .indeed , we shall see that the effects can be quite different according to this position and , furthermore , the notion of relatively larger part intercepted is not the key phenomenon to understand the final attraction towards the low flow rate branch , even though it clearly contributes to it .particle when for ( a ) branches of equal widths , ( b ) daughter branches 2.5 times wider than the inlet branch and ( c ) daughter branches 7.5 times wider than the inlet branch .the unperturbed fluid separating streamline starting at is shown in black .the particle is shown approximatively at its stagnation point . ] to understand this , let us focus on the simulated trajectories starting around shown on figure [ fig : traj](a ) ( , ) .these trajectories must be analysed in comparison with the unperturbed flow streamlines , in particular the fluid separating streamline , starting at and ending up against the front wall at a stagnation point .particles starting around show a clear attraction towards the low flow rate branch ( displacement along the axis ) as they enter the bifurcation .more precisely , there are three types of motions : for low initial position ( in particular ) , particles go directly into the high flow rate branch .similarly , above , the particles go directly into the low flow rate branch . 
between some and , the particles first move towards the low flow rate branch , butfinally enter the high flow rate branch : the initial attraction towards the low flow rate branch becomes weaker and the particle eventually follows the streamlines entering the high flow rate branch .this non monotonous variation of for a particle starting just below is also seen in experiments , as shown in figure [ fig : discrimination ] , right part : the third position of the vesicle is characterized by a slightly higher than the initial one . back to the simulations , note that , at this level , there is still some net attraction towards the low flow rate branch : the particle stagnation point near the opposite wall is still below the fluid separating streamline ( that is , on the high flow rate side ) .this two - step effect is even more visible when the width of the daughter branches is increased , so that the entrance of the bifurcation is far from the opposite wall , as shown on figures [ fig : traj](b , c ) .the second attraction is , in such a situation , more dramatic : for , the particle stagnation point is even on the other side of the fluid separating streamline , that is , there is some attraction towards the high flow rate branch ! thus , there are clearly two antagonistic effects along the trajectory .in the first case of branches of equal widths , where the opposite wall is close to the bifurcation entrance , the second attraction towards the high flow rate branch coexists with the attraction towards the low flow rate branch and finally only diminishes it .+ these two effects occur in two very different situations . at the entrance of the channel , an attraction effectmust be understood in terms of streamlines crossing : does a pressure difference build up orthogonally to the main flow direction ? near the opposite wall , the flow is directed towards the branches and being attracted means flowing up- or downstream . in both cases , in order to discuss whether some pressure difference builds up or not , the main feature is that , in a two - dimensional stokes flow between two parallel walls , the pressure difference between two points along the flow direction scales like , where is the flow rate and the distance between the two walls .this scaling is sufficient to discuss in a first order approach the two effects at stake .+ the second effect is the simplest one : indeed , the sphere is placed in a quasi - elongational , but asymmetric , flow . as shown on figure [ fig : schemasimple](b ) , around the flow stagnation point , the particle movement is basically controlled by the pressure difference , than can be written . focusing on the component of the velocity field , which becomes all the more important as is larger than 1 , we have . around the flow stagnation point, the pressure difference has then the same sign as and is thus negative , which indicates attraction towards the high flow rate branch . for wide daughter branches ,when this effect is not screened by the first one , this implies that the stagnation point for particles is above the fluid separating line , as seen on figure [ fig : traj](c ) .the argument that we use here is similar to the one introduced by fung ( see * ? ? ?* ; * ? ? ?* ) but resolves only one part of the problem . 
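the scaling invoked in this discussion can be checked symbolically : for a plane poiseuille flow between two parallel walls a distance h apart , the pressure drop over a length l along the flow is proportional to the flow rate divided by the cube of the gap . the short sympy sketch below ( an illustration only , not part of the simulations of the paper ) verifies this from the assumed parabolic profile with no - slip at both walls .

```python
# symbolic check of dp ~ Q / h**3 for plane Poiseuille flow (assumed profile
# with no-slip at y = 0 and y = h, constant pressure gradient dp / L).
import sympy as sp

y, h, L, mu, dp = sp.symbols('y h L mu dp', positive=True)
u = dp / (2 * mu * L) * y * (h - y)      # parabolic velocity profile
Q = sp.integrate(u, (y, 0, h))           # flow rate per unit depth
print(sp.simplify(Q))                    # -> dp*h**3/(12*L*mu)
Qflow = sp.Symbol('Q', positive=True)
print(sp.solve(sp.Eq(Q, Qflow), dp)[0])  # -> 12*L*Q*mu/h**3, i.e. dp scales like Q/h**3
```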
following these authors ,it can also be pointed out that the shear stress on the sphere is non zero : in a two - dimensional poiseuille flow of width , the shear rate near a wall scales as , so the net shear stress on the sphere is directed towards the high flow rate branch , making the sphere roll along the opposite wall towards this branch . finally , this situation is similar to the one of a flow around an obstacle , that was considered in as a model situation to understand what happens at the bifurcation .indeed , the authors find that spheres are attracted towards the high velocity side of the obstacle .however , we show here that this modeling is misleading , as it neglects the first effect , which is the one which eventually governs the net effect .+ this first effect leads to an attraction towards the low flow rate branch . to understand this ,let us consider a sphere located in the bifurcation with transverse position .the exact calculation of the flow around it is much too complicated , and simplifications are needed . just as we considered the large case to understand the second mechanism eventually leading to attraction towards the high flow rate branch , let us consider the small limit to understand the first effect : as soon as the ball enters the bifurcation , it hits the front wall .on each side , we can write in a first approximation that the flow rate between the sphere and the wall scales as , where is the pressure difference between the back and the front of the sphere , and the distance between the sphere and the wall ( see figure [ fig : schemasimple]a ) . ) .( b ) opposite wall : attraction towards the high flow rate branch ( ) . ]since the ball touches the front wall , the flow rate is either or and is , by definition of , the integral of the unperturbed poiseuille flow velocity between the wall and the line , so , where ( see figure [ fig : schemasimple](a ) for notations ) .we have then , on each side : to make the things clear , let us consider then the extreme case of a flat particle : . then is a decreasing function of , that is , a decreasing function of .therefore , the pressure drop is more important on the low flow rate side , and finally : there is an attraction towards the low flow rate branch .this is exactly the opposite result from the simple view claiming that there is some attraction towards the high flow rate branch since scales as so as .since one has to discuss what happens for a sphere in the vicinity of the separating line , and are not independent .this is the key argument .note finally that there is no need for some obstruction arguments to build up a different pressure difference on each side .it only increases the effect since the function decreases faster than the function .one can be even more precise and take into account the variations in the gap thickness as the fluid flows between the sphere to calculate the pressure drop by lubrication theory .still , it is found that is a decreasing function of .+ in the more realistic case , the flow repartition becomes more complex , and the particle velocity along the axis is not zero . yet , as it is reaching a low velocity area ( the velocity along axis of the streamline starting at drops to 0 ) , its velocity is lower than its velocity at the same position in a straight channel .in addition , as the flow velocities between the sphere and the opposite wall are low , and since the fluid located e. g. 
between and the top wall will eventually enter the top branch by definition , we can assume it will mainly flow between the sphere and the top wall .note this is not true in a straight channel : there are no reasons for the fluid located between one wall and the line , where is the sphere lateral position , to enter completely , or to be the only fluid to enter , between the wall and the particle .therefore , we can assume that the arguments proposed to explain the attraction towards the low flow rate branch remain valid , even though the net effect will be weaker .note finally that , contrary to what discussed for the second effect , the particle rotation probably plays a minor role here , as in this geometry the shear stress exerted by the fluid on the particle will mainly result in a force acting parallel to the axis .+ finally , this separation into two effects can be used to discuss a scenario for bifurcations with channels of different widths : if the inlet channel is broadened , the first effect becomes less strong while the second one is not modified , which results in a weaker attraction towards the low flow rate branch .if the outlet channels are broadened , as in figures [ fig : traj](b , c ) , it becomes more subtle .let us start again by the second effect ( migration up- or downstream ) before the first effect ( transverse migration ) . as seen on figure [ fig : traj ] , the position of the particle stagnation point ( relatively to the flow separating line ) is an increasing function of , so the second effect is favoured by the broadening of the outlets : for , we end up with the problem of flow around an obstacle , while for small , one can not write that the width of the gap between the ball and the wall is just , therefore independent from , as it also depends on the position of the particle relatively to . in other words , in such a situation , the second effect is screened by the first effect . on the other hand , as increases ,the distance available for transverse migration becomes larger , which could favour the first effect , although the slow down of the particle at the entrance of the bifurcation becomes less pronounced .finally , it appears to be difficult to predict the consequences of an outlet broadening : for instance , in our two - dimensional simulations presented in figure [ fig : traj ] ( , ) , varies from 0.27 when the outlet half - width is equal to 1 , to 0.31 when is equal to 2.5 and drops down to 0.22 for ! note that the net effect is always an attraction towards the low flow rate branch ( ) .for daughter branches of different widths , it was illustrated in that the narrower branch is favoured .this can be explained through the second effect ( see figure [ fig : schemasimple]b ) : the pressure drop increases when the channel width decreases , which favours the narrower branch even in case of equal flow rates between the branches . for two - dimensional spheres of different radii in the inlet channel where a poiseuille flow of velocity is imposed at infinity .the full lines show the fits by quartic law . ]as there is some attraction towards the low flow rate branch , we could expect some enrichment of the low flow rate branch .however , as already discussed , even in the most uniform situation , the presence of a free layer near the walls will favour the high flow rate branch .we discuss now , through our simulations , the final distribution that results from these two antagonistic effects . 
as in most previous papers of the literature, we focus on the case of uniform number density of particles in the inlet ( in equation [ eq : n1 ] ) . in order to compute the final splitting of the incoming particles as a function of flow rate ratio needs to know , according to equation ( [ eq : n1 ] ) , the position of the particle separating line and the velocity of the particles in the inlet channel . from figure[ fig : lignessep ] we see that depends roughly linearly on , so we will consider a linear fit of the calculated data in order to get values for all . the longitudinal velocity was computed for all studied particles as a function of transverse position . as shown on figure [ fig : profils ] , the function is well described by a quartic function , which is an approximation also used in . values for the fitting parameters for this velocity profile and for the linear relationship are given in table [ table : quartic ] ..values for the fitting parameters for the longitudinal velocity of a two - dimensional sphere of radius in a poiseuille flow of imposed velocity at infinity ; for , the velocity profile is too flat to be reasonably fitted by a 3-parameter law , since all velocities are equal to in the explored interval ] , while we focused on in order to compare with the t - shaped bifurcation .in addition , their apex has a radius 0.75 ( for the case ) while ours is sharper ( radius of 0.1 ) .these differences seem to impact only partly the results , as discussed above .we can expect this slight discrepancy to be due to the treatment of the numerical singularities that appear when the particle is close to one wall . for , the maximum position is 0.33 , which is close to the separating streamline position .it is also interesting to compare our results in the y - shaped bifurcation with the results in the t geometry , which was chosen to make the discussion easier .we can see that , for low enough , the attraction towards the low flow rate branch is slightly higher .this can be understood by considering a particle with initial position slightly below the critical position found in the t geometry : in the latter geometry , it will eventually enter the high flow rate branch , by definition of .as shown in figure [ fig : conclulitt](b ) , in the y geometry , this movement is hindered by the apex since the final attraction towards the high flow rate branch occurs near the opposite wall ( the second effect discussed in section [ sec : discussion ] ) .finally from this comparison we see that comparing results in t and symmetric y geometry is relevant but for highly asymmetric flow distributions . + in section[ sec : litt ] , the analysis of the two - dimensional simulations for spheres shown in showed that there should be some attraction towards the low flow rate branch .our simulations for showed that this effect is non negligible ( figure [ fig : lignessep]b ) and modifies greatly the final distribution ( figure [ fig : volfrac ] ) .finally , we can see in figure [ fig : concluaudet ] that our simulations give similar results as the simulation by audet and olbricht . as for the experiments presented in for , we showed that the final distribution was consistent with a no - attraction assumption . as we showed in figure[ fig : lignessep](a ) , in a three - dimensional case , the attraction towards the low flow rate region is weak for spheres of radius or smaller , which is again coherent with the results of yang _et al._. 
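returning to the evaluation of equation ( [ eq : n1 ] ) described at the beginning of this section , the toy sketch below illustrates how the fraction of particles entering branch 1 is obtained from the quartic velocity fit and the position of the particle separating line , for a uniform number density in the inlet . the channel half - width is normalised to 1 , the particle centres are restricted to the free layer , and all numerical values are placeholders rather than the fitted coefficients of table [ table : quartic ] .

```python
# toy evaluation of the particle splitting: uniform number density, quartic
# velocity fit v(x) = v0 + v2*x**2 + v4*x**4, particle separating line at x_c,
# particle centres restricted to |x| <= 1 - a.  all coefficients are placeholders.
import numpy as np

def quartic_velocity(x, v0=1.0, v2=-0.5, v4=-0.2):
    return v0 + v2 * x**2 + v4 * x**4

def branch1_particle_fraction(x_c, a, n=4001):
    trapz = lambda yv, xv: float(np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv)))
    x_all = np.linspace(-(1.0 - a), 1.0 - a, n)
    x_b1 = np.linspace(x_c, 1.0 - a, n)
    flux_all = trapz(quartic_velocity(x_all), x_all)   # total particle flux
    flux_b1 = trapz(quartic_velocity(x_b1), x_b1)      # flux entering branch 1
    return flux_b1 / flux_all

print(branch1_particle_fraction(x_c=0.3, a=0.4))
```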
note that , while their results were considered by the authors as a basis to discuss some attraction effect towards the high flow rate branch , we see that their final distributions are just reminiscences of the depletion effect in the inlet channel . the other consistent set of studies in the literature deals with large balls in three dimensional channels .we have studied balls of radius that stop entering branch 1 when ( figure [ fig : lignessep]a ) , while this critical flow rate would be around in case they would follow the fluid streamlines .this critical flow rate is expected to be slightly higher for larger balls of radius , but far lower than , which would be the no - attraction case . in the experiments of , some balls are still observed in branch 1 when ( figure [ fig : compar ] ) , indicating a stronger attraction effect towards the low flow rate branch , which can be associated to the fact that the authors considered a square cross section channel , while the confinement in the third direction is in our case .the experiments with circular cross section channels lead to contradictory results : in and , the results were consistent with a no - attraction assumption , therefore they are in contradiction with our results . on the contrary , in ,the critical flow rate for is around 0.2 , which would show a stronger attraction than in our case .note that all these apparently contradictory observations are to be considered keeping in mind that the data of as a function of are sometimes very noisy in the cited papers .in this paper , we have focused explicitly on the existence and direction of some cross streamline drift of particles in the vicinity of a bifurcation with different flow rates in the daughter branches .a new analysis of some previous unexploited results of the literature first gave us some indications on the possibility of an attraction towards the low flow rate branch .then the first direct experimental proof of attraction towards the low flow rate branch was shown and arguments for this attraction were given with the help of two - dimensional simulations .in particular , we showed that this attraction is the result of two antagonistic effects : the first one , that takes place at the entrance of the bifurcation , induces migration towards the low flow rate branch , while the second one takes place near the stagnation point and induces migration towards the high flow rate branch but is not strong enough , in standard configurations of branches of comparable sizes , to counterbalance the first effect .this second effect is the only one that was previously considered in most papers of the literature , which has lead to the misleading idea that the enrichment in particles in the high flow rate branch is due to some attraction towards it . on the contrary ,it had been argued by barber __ that there should be some attraction towards the low flow rate branch .by distinguishing the two effects mentioned above , we have tried to clarify their statements . in a second step ,we have discussed the consequences of such an attraction on the final distribution of particles .it appears that the attraction is not strong enough , even in a two - dimensional system where it is stronger , to counterbalance the impact of the depletion effect . 
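for reference , the " no - attraction " critical flow rate ratio invoked in these comparisons can be estimated in an idealised two - dimensional setting : if particles of radius a followed the fluid streamlines , none would enter branch 1 once the fluid separating line lies above the highest admissible centre position 1 - a . the sketch below evaluates this ratio for a plane poiseuille profile in a channel of half - width 1 ; it is a two - dimensional idealisation , and the numbers it produces are not the values quoted from the experiments .

```python
# critical flow rate ratio under the no-attraction assumption for a plane
# Poiseuille profile u(x) = 1 - x**2 on [-1, 1]; particles of radius a cannot
# have their centre above x = 1 - a.
import numpy as np

def no_attraction_critical_ratio(a, n=100001):
    x = np.linspace(-1.0, 1.0, n)
    u = 1.0 - x**2
    trapz = lambda yv, xv: float(np.sum(0.5 * (yv[1:] + yv[:-1]) * np.diff(xv)))
    total = trapz(u, x)                    # total flow rate
    mask = x >= 1.0 - a
    branch1 = trapz(u[mask], x[mask])      # flow rate above the highest centre position
    return branch1 / total

for a in (0.2, 0.4, 0.6):
    print(a, no_attraction_critical_ratio(a))
```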
even in the most homogeneous case where the particles are equally distributed across the channel butcan not approach the wall closer than their radius , the existence of a free layer near the walls favours the high flow rate branch , which eventually receives more particles than fluid . however , these two antagonistic phenomena are of comparable importance , and none can be neglected : the particle volume fraction increase in the high flow rate branch is typically divided by two because of the attraction effect . on the other hand ,the initial distribution is a key parameter for the prediction of the final splitting . for deformable particles ,initial lateral migration can induce a narrowing of their distribution , which will eventually favours the high flow rate branch .for instance , in , the authors had to adjust the free layer width in their simulations in order to fit experimental data on blood flow .on the other hand , in a network of bifurcations , the initially centered particles will find themselves close to one wall after the first bifurcation , which can favour a low flow rate branch in a second bifurcation .note finally that , as seen in , these effects become weaker when the confinement decreases .typically , as soon as the sphere diameter is less than half the channel width , the variations of volume fraction do not exceed a few percent . for applicative purposes ,the consequences of this attraction have been discussed and some prescriptions have been proposed . of course , one can go further than our symmetric case and modify the angle between the branches , or consider many - branch bifurcations , and so on .however , the t - bifurcation case allowed to distinguish between two goals : concentrating a population of particles , or obtaining a particle - free fluid .the optimal configuration can be different according to the chosen goal .similar considerations are also valid when it is about doing some sorting in polydisperse suspensions , which is an important activity ( see * ? ? ?* ) : getting an optimally concentrated suspension of big particles might not be compatible with getting a suspension of small particles free of big particles .+ now that the case of spherical particles in a symmetric bifurcation has been studied and the framework well established , we believe that quantitative discussions could be made in the future about the other parameters that we put aside here . in particular , discussing the effect of the deformability of the particles is a challenging problem if one only considers the final distribution data , as the deformability modifies the initial distribution , but most probably also the attraction effect . in a network ,the importance of these contributions will be different according , in particular , to the distance between two bifurcations , so they must be discussed separately .considering concentrated suspensions is of course the next challenging issue .particles close to each other will obviously hydrodynamicaly interact , but so will distant particles , through the modification of the effective resistance to flow of the branches . in such a situation ,considering pressure driven or flow rate driven fluids will be different . 
for concentrated suspensions of deformable particles in a network , like blood in the circulatory system , the relevance of a particle - based approach can be questioned .historical models for the major blood flow phenomena are continuum models with some ad - hoc parameters , which must be somehow related to the intrinsic mechanical properties of the blood cells ( for a recent example , see ) .building up a bottom - up approach in such a system is a long quest . for dilute suspensions ,some links between the microscopic dynamics of lipid vesicles and the rheology of a suspension have been recently established ( see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . for red blood cells , that exhibit qualitatively similar dynamics ( see * ? ?* ; * ? ? ?* ; * ? ? ?? * ; * ? ? ?* ) , we can hope that such a link will soon be established , following . for confined and concentrated suspensions ,the distribution is known to be non homogeneous , which has direct consequences on the rheology ( the fahraeus - lindquist effect ) . once again , while empirical macroscopic models are able to describe this reality , establishing the link between the viscosity of the suspension and the local dynamics is still a challenging issue .the final distribution of the flowing bodies is the product of a balance between migration towards the center , which has already been discussed in the introduction of the present paper , and interactions between them that can broaden the distribution ( see * ? ? ?* ; * ? ? ?the presence of deformable boundaries also needs to be taken into account , as shown in . in the meantime, the development of simulations techniques for quantitative three - dimensional approaches is a crucial task , which is becoming more and more feasible ( see * ? ? ?* ; * ? ? ?* ) .we introduce first the scalar product in as follows : the variational formulation of problem ( [ eq:1b])-([eq:2b])-([eq:2ab ] ) is obtained by taking the scalar product of the equation ( [ eq:1b ] ) in with a test function and we multiply equation ( [ eq:2b ] ) by a test function .it leads to this problem : find such that : applying green s formula to equation ( [ eq:13 ] ) we obtain where denotes the outer unit normal on . taking into account that vanishes on ( recall that we have chosen the test function ) , the problem ( [ eq:13])-([eq:13a])-([eq:13b ] ) is now equivalent to this one : find such that : note that is symmetric ( ) . so that we can write , the variational formulation of our initial problem ( [ eq:1])-([eq:2])-([eq:2a ] ) is given by : find such that : as we have the first integral in equation ( [ eq:3b ] ) can be rewritten thanks to this identity indeed , by integration by part and using the incompressibility constraint we have .thus we can retrieve the formulation of our problem as a minimization of a kind of energy .the velocity field is then the solution of this problem where
the problem of the splitting of a suspension in bifurcating channels dividing into two branches of unequal flow rates is addressed . as has long been observed , in particular in blood flow studies , the volume fraction of particles generally increases in the high flow rate branch and decreases in the other one . in the literature , this phenomenon is sometimes interpreted as the result of some attraction of the particles towards the high flow rate branch . in this paper , we examine the existence of such an attraction through microfluidic experiments and two - dimensional simulations and show clearly that no such attraction occurs ; on the contrary , the attraction is directed towards the low flow rate branch . arguments for this attraction are given , together with a discussion of the sometimes misleading arguments found in the literature . finally , the enrichment in particles of the high flow rate branch is shown to be mainly a consequence of the initial distribution in the inlet branch , which necessarily exhibits some depletion near the walls . particle / fluid flow ; microfluidics ; blood flow
reachable sets have attracted several mathematicians since longer times both in theoretical and in numerical analysis .one common definition collects end points of feasible solutions of a control problem starting from a common inital set and reaching a point _ up to _ a given end time , the other definition is similar but prescribes a _ fixed _ end time in which the point is reached .the former definition automatically leads to a monotone behavior of the reachable sets with respect to inclusion , since the reachable set up to a given time is the union of reachable sets for a fixed time .reachable sets with and without control constraints appear in control theory ( e.g. in stability results ) , in optimal control ( e.g. in analysis for robustness ) and in set - valued analysis . for reachable sets at a given end time of linear or nonlinear control problems , properties like convexity for linear control problems at a given end time ( due to aumann and his study of aumann s integral for set - valued maps in ) , closedness and connectedness under weak assumptions for nonlinear systems ( see e.g. ) , are well - known .the lipschitz continuity of reachable sets with respect to the initial value is also established and is a result of the filippov theorem which proves the existence of neighboring solutions for lipschitz systems . to mention one further resultis the density of solutions of the non - convexified control problem in the relaxed system in which the right - hand side is convexified . on the other hand reachable sets appear in many applications .they appear in natural generalizations of differential equations with discontinuous right - hand side and hybrid systems ( e.g. via the filippov regularization in ) , in gradient inclusions with maximally monotone right - hand side ( see e.g. ) , as generalizations of control problems ( see e.g. ) , .many practical examples are mentioned in and in references therein .the approaches for the numerical computation of reachable sets mainly split into two classes , those for reachable sets up to a given time and the other ones for reachable sets at a given end time .we will give here only exemplary references , since the literature is very rich .there are methods based on overestimation and underestimation of reachable sets based on ellipsoids , zonotopes or on approximating the reachable set with support functions resp . supporting points .other popular and well - studied approaches involve level - set methods , semi - lagrangian schemes and the computation of an associated hamilton - jacobi - bellman equation , see e.g. or are based on the viability concept and the viability kernel algorithm .further methods are set - valued generalizations of quadrature methods and runge - kutta methods initiated by the works . solvers for optimal control problems are another source for methods approximating reachable sets , see . in a more detailed review of some methods up to 1994 appeared , see also and the books and book chapters in for a more recent overview and references therein . here, we will focus on set - valued quadrature methods and set - valued runge - kutta methods with the help of support functions or supporting points , since they do not suffer on the wrapping effect or on an exploding number of vertices and the error of restricting computations only for finitely many directions can be easily estimated .furthermore , they belong to the most efficient and fast methods ( see ( * ? ? ?3.1 ) , ( * ? ? ?* chap . 9 , p. 
128 ) ) for linear control problems to which we restrict the computation of the minimum time function .these methods enjoy an increasing attention also in neighboring research fields , e.g. in the computation of viability kernels or reachable sets for hybrid systems in as well as in the computation of interpolation of set - valued maps , minkowski sums of convex sets ( ) as well as of the dini , michel - penot and mordukhovich subdifferentials .we refer to ( and references therein ) for technical details on the numerical implementation , although we will lay out the main ideas of this approach for reader s convenience . in optimal control theory the regularity of the minimum time functionsis studied intensively , see e.g. in and references therein . forthe error estimates in this paper it will be essential to single out example classes for which the minimum time function is lipschitz ( no order reduction of the set - valued method ) or hlder - continuous with exponent ( order reduction by the square root ) .minimum time functions are usually computed by solving hamilton - jacobi - bellman ( hjb ) equations and by the dynamic programming principle , see e.g. . in this approach , the minimal requirement onthe regularity of is the continuity , see e.g. .the solution of a hjb equation with suitable boundary conditions gives immediately after a transformation the minimum time function and its level sets provide a description of the reachable sets .a natural question occurring is whether it is also possible to do the other way around , i.e. reconstruct the minimum time function if knowing the reachable sets .one of the attempts was done in , where the approach is based on pde solvers and on the reconstruction of the optimal control and solution via the value function . on the other hand ,our approach in this work is completely different .it is based on very efficient quadrature methods for convex reachable sets as described in section 3 . in this articlewe present a novel approach for calculating the minimum time function .the basic idea is to use set - valued methods for approximating reachable sets at a given end time with computations based on support functions resp . supporting points . by reversing the time andstart from the convex target as initial set we compute the reachable sets for times on a ( coarser ) time grid . due to the strictly expanding condition for reachable sets , the corresponding end timeis assigned to all boundary points of the computed reachable sets .since we discretize in time and in space ( by choosing a finite number of outer normals for the computation of supporting points ) , the vertices of the polytopes forming the fully discrete reachable sets are considered as data points of an irregular triangulated domain . on this simplicial triangulation, a piecewise linear approximation yields a fully discrete approximation of the minimum time function . 
the well - known interpolation error and the convergence results for the set - valued method can be applied to yield an easy - to - prove error estimate that takes into account the regularity of the minimum time function . it requires at least continuity and involves the maximal diameter of the simplices in the triangulation used . a second error estimate is proved without explicitly assuming the continuity of the minimum time function and depends only on the time interval between the computed ( backward ) reachable sets . in contrast to the hamilton - jacobi - bellman approach , the computation does not need a target set with nonempty interior ; for singletons the error estimate even improves . the approach is also able to compute discontinuous minimum time functions , since the underlying set - valued method can also compute lower - dimensional reachable sets . there is no explicit dependence of the algorithm and the error estimates on the smoothness of optimal solutions or controls . further results are devoted to reconstructing discrete optimal trajectories which reach a set of supporting points from a given target for a class of linear control problems , and to proving the convergence of discrete optimal controls by the use of nonsmooth and variational analysis . the main tool is attouch's theorem , which allows us to benefit from the convergence of the discrete reachable sets to the time - continuous one . the plan of the article is as follows : in section 2 we collect notations , definitions and basic properties of convex analysis , set operations , reachable sets and the minimum time function . the convexity of the reachable set for linear control problems and the characterization of its boundary via the level set of the minimum time function is the basis for the algorithm formulated in the next section . we briefly introduce the reader to set - valued quadrature methods and runge - kutta methods and their implementation , and discuss the convergence order for the fully discrete approximation of reachable sets at a given time , both in time and in space . in the next subsection we present the error estimate for the fully discrete minimum time function , which depends on the regularity of the continuous minimum time function and on the convergence order of the underlying set - valued method . another error estimate expresses the error only in terms of the time period between the calculated reachable sets . the last subsection discusses the construction of discrete optimal trajectories and the convergence of discrete optimal controls . various accompanying examples can be found in the second part . in this section we will recall some notations , definitions as well as basic facts of convex analysis and control theory for later use . let be the set of convex , compact , nonempty subsets of , be the euclidean norm and the inner product in , be the closed ( euclidean ) ball with radius centered at and be the unit sphere in . let be a subset of , be a real matrix ; then , denotes the _ lub - norm _ of with respect to , i.e. the spectral norm . the _ convex hull _ , the _ boundary _ and the _ interior _ of a set are denoted by respectively . we define the support function , the supporting points in a given direction and the set arithmetic operations as follows . let . the _ support function _ and the _ supporting face _ of in the direction are defined as , respectively , an element of the supporting face is called _ supporting point _ . let .
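for sets represented as convex hulls of finitely many points , which is exactly the situation arising in the fully discrete computations below , the support function and a supporting point reduce to a maximisation over the generating points , and the hausdorff distance between two such point clouds is obtained from the pairwise distances . the python helpers below are illustrative sketches , not the implementation referred to in the text ; for general polytopes the point - cloud value only gives an upper bound for the hausdorff distance of the sets themselves , with an error controlled by the spacing of the supporting points .

```python
# support function, supporting point and Hausdorff distance for sets stored
# as finite point clouds (one point per row of a numpy array).
import numpy as np

def support_function(vertices, l):
    """delta*(l, conv(vertices)) = max over the vertices of <l, v>."""
    vertices = np.asarray(vertices, dtype=float)
    return float(np.max(vertices @ l))

def supporting_point(vertices, l):
    """one element of the supporting face of conv(vertices) in direction l."""
    vertices = np.asarray(vertices, dtype=float)
    return vertices[np.argmax(vertices @ l)]

def hausdorff(P, Q):
    """Hausdorff distance between the finite point sets P and Q."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

box = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])  # vertices of [-1,1]^2
l = np.array([1.0, 0.0])
print(support_function(box, l), supporting_point(box, l))   # 1.0 and the vertex (1, 1)
print(hausdorff(box, box + np.array([2.0, 0.0])))           # 2.0 for a copy shifted by (2, 0)
```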
then the _ scalar multiplication _, the _ image of a set under a linear map _ and the _ minkowski sum _ are defined as follows : in the following propositions we will recall known properties of the convex hull , the support function and the supporting points when applied to the set operations introduced above ( see e.g. ( * ? ? ?0 ) , ( * ? ? ?4.6 , 18.2 ) , ) .especially , the convexity of the arithmetic set operations becomes obvious .let and .then , [ prop : minkowski ] let , , and .+ then and .moreover , by means of the support function or the supporting points , one can fully represent a convex compact set , either as intersection of halfspaces by the minkowski duality or as convex hull of supporting points .let .then where is an arbitrary selection of .we also recall the definition of hausdorff distance which is the main tool to measure the error of reachable set approximation .let .then the _ distance function _ from to is and the hausdorff distance between and is defined as the next proposition will be used for a special form of the space discretization of convex sets via the convex hull of finitely many supporting points .let , choose with a finite set of normed directions [ prop : approx_conv_set ] with , and consider the approximating polytope where is an arbitrary selection of , .then where stands for the diameter of the set .some basic notions of nonsmooth and variational analysis which are needed in constructing and proving the convergence of controls are now introduced .the main references for this part are .let be a subset in and be a function .the _ indicator function _ of and the _ epigraph _ of be defined as [ pro : indicator ] let be a closed , convex and nonempty set . then is a lower semicontinuous , convex function and is a closed , convex set .see e.g. ( * ? ? ?* exercise 2.1 ) .[ def : sub ] + let be a given closed convex set and be a lower semicontinuous , convex function. then is _ normal _ to at if the set of such vectors is the _ normal cone _ to at , denoted by .we say that is a _ subgradient _ of at if is an element of the normal cone .the possibly empty set of all subgradients of at , denoted by , is called the _ ( moreau - rockafellar ) subdifferential _ of at .[ def : setconvg ] for a sequence of subsets of , the _ outer limit _ is the set and the _ inner limit _ is the set the _ limit _ of the sequence exists if the outer and inner limit sets are equal : we also need two more convergence terms for set - valued maps and functions .[ def : graph_epi_graph_lim ] consider and the set - valued map .then the _ graph _ of is defined as a sequence of functions , , converges _ epi - graphically _ , if the outer and the inner limit of their epigraphs coincide .the _ epi - limit _ is the function for which its epigraph coincides with the set limit of the epigraphs in the sense of painlev - kuratowski ( see ( * ? ? ?* definition 7.1 ) ) .we say that the sequence of set - valued maps with _ converges graphically _ to a set - valued map if and only if its graphs , i.e. the sets , converge to in the sense of definition [ def : setconvg ] ( see ( * ? ? 
?* definition 5.32 ) ) .we cite here attouch s theorem in a reduced version which plays an important role for convergence results of discrete optimal controls and solutions .[ theo : attouch ] let and be lower semicontinuous , convex , proper functions from to .+ then the epi - convergence of to is equivalent to the graphical convergence of the subdifferential maps to .now we will recall some basic notations of control theory , see e.g. ( * ? ? ? * chap .iv ) for more detail . consider the following linear time - variant control dynamics in the coefficients are and matrices respectively , is the initial value , is the set of control values . under standard assumptions , the existence and uniqueness ofare guaranteed for any measurable function and any .let , a nonempty compact set , be the _ target _ and the set of _ admissible controls _ and is the solution of .we define the _ minimum time starting from to reach the target _ for some as the _ minimum time function to reach from _ is defined as see e.g. ( * ? ? ?we also define the _ reachable sets for fixed end time _ , _ up to time _ resp ._ up to a finite time _ as follows : }\mathcal{r}(s)= \mbox { } { \lbrace y_0 \in { { \mathbb r}}^n : \textit { there exists } u\in \mathcal{u},\,y(s , y_0,u)\in \mathcal{s } \text { for some } s\in [ t_0,t ] \rbrace } , \\ \mathcal{r } & : = { \lbrace y_0 \in { { \mathbb r}}^n : \textit { there exists some finite time with } y_0 \in \mathcal{r}(t ) \rbrace } = \bigcup_{t\in [ t_0,\infty ) } \mathcal{r}(t ) .\end{aligned}\ ] ] by definition is a sublevel set of the minimum time function , while for a given maximal time and some ] , i.e. for all the reader can find sufficient conditions for assumption [ standassum](iv ) for in ( * ? ? ?17 ) , ( * ? ? ?2.22.3 ) . under this assumption, it is obvious that [ rem : strict_expand ] under our standard hypotheses , the control problem can equivalently be replaced by the following linear differential inclusion with absolutely continuous solutions ( see ( * ? ? ?* appendix a.4 ) ) .we recall the notion of aumann s integral of a set - valued mapping defined as follows .consider and the set - valued map \rightarrow { { \mathbb r}}^n ] .the integrable linear growth condition holds due to assumptions [ standassum](i ) so that the filippov - gronwall theorem in ( * ? ? ?* theorem 2.3 ) applies yielding the compactness of the closure of the set of solutions in the maximum norm on . as a consequence the compactness of follows easily .let ] with . we distinguish two cases .1 . 2 . for every there exists such that for all \cap [ t_0,t_f] ] with .assuming we get the contradiction from assumption [ standassum](iv ) .`` '' : assume that there exists ( i.e. ) be such that .since by and we assume that , then . hence , there exists with the continuity of ensures for \cap i ] , i.e. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ then [ prop : bd_descr_w_level_set ] _the proof can be found in ( * ? ? ?* proposition 7.1.4 ) ._ assumption ( iv) implies that the considered system is small - time controllable , see ( * ? ? ? 
* chap .iv , definition 1.1 ) .moreover , under the assumption of small - time controllability the nonemptiness of the interior of and the continuity of the minimum time function in are consequences , see ( * ? ? ?iv , propositions 1.2 , 1.6 ) .assumption ( iv) is essentially weaker than ( iv ) , since the convexity of and the strict expandedness of follows by remark [ rem : strict_expand ] . in the previous propositionwe can allow that is lower - dimensional and are still able to prove the inclusion `` '' in , since the interior of would be empty and can not lie in the interior which also creates the ( wanted ) contradiction . for the other inclusion `` '' the nonemptiness of the interior of in proposition [ prop : bd_descr_monotone_case_w_level_set ] resp .the one of in proposition [ prop : bd_descr_w_level_set ] is essential .therefore , the expanding property in assumptions ( iv ) resp .( iv) can not be relaxed by assuming only monotonicity in the sense for as ( * ? ? ?* example 2.6 ) shows .consider the linear control dynamics . for a given , the problem of computing approximately the minimum time to reach by following the dynamics deeply investigated in literature .it was usually obtained by solving the associated discrete hamilton - jacobi - bellman equation ( hjb ) , see , for instance , . neglecting the space discretization we obtain an approximation of . in this paper, we will introduce another approach to treat this problem based on approximation of the reachable set of the corresponding linear differential inclusion .the approximate minimum time function is not derived from the pde solver , but from iterative set - valued methods or direct discretization of control problems .our aim now is to compute numerically up to a maximal time based on the representation by means of set - valued methods to approximate aumann s integral .there are many approaches to achieving this goal .we will describe three known options for discretizing the reachable set which are used in the following .consider for simplicity of notations an equidistant grid over the interval ] in the corresponding set - valued quadrature method .this choice is obvious in the approaches , and e.g. for set - valued riemann sums in or . in the recursive formulas for the set - valued riemann sum, this means that for . recall that from ( ii ) the discrete reachable set reads as follows . or equivalently we set for some and a piecewise constant grid function with , .if there does not exist such a grid control which reaches from by the corresponding discrete trajectory , . then the discrete minimum time function is defined as y_0\in \mathcal{s } } } \ , t_h(y_0,y , u_h).\ ] ] in all of the constructions ( i)(iii ) described above , is a convex , compact and nonempty set .the key idea of the proof of this proposition is to employ the linearity of , in conjunction with the convexity of and proposition [ prop : minkowski ] .in particular , it follows analogously to the proof of ( * ? ? ? * proposition 3.3 ) .[ dhrrh ] consider the linear control problem . assume that the set - valued quadrature method and the ode solver have the same order .furthermore , assume that and have absolutely continuous -nd derivative , the -st derivative is of bounded variation uniformly with respect to all and is uniformly bounded for .then where is a non - negative constant. see ( * ? ? ?* theorem 3.2 ) .for the requirements of theorem [ dhrrh ] are fulfilled if are absolutely continuous and are bounded variation ( see , ( * ? ? 
?1.6 , 2.3 ) ) .the next subsection is devoted to the full discretization of the reachable set , i.e. we consider the space discretization as well .since we will work with supporting points , we do this implicitly by discretizing the set of normed directions .this error will be adapted to the error of the set - valued numerical scheme caused by the time discretization to preserve its order of convergence with respect to time step size as stated in theorem [ dhrrh ] .then we will describe in detail the procedure to construct the graph of the minimum time function based on the approximation of the reachable sets .we will also provide the corresponding overall error estimate . for a particular problem , according to its smoothness in an appropriate sense we are first able to choose a difference method with a suitable order , say for some , to solve numerically effectively , for instance euler scheme , heun s scheme or runge - kutta scheme etc .. then we approximate aumann s integral in by a quadrature formula with the same order , for instance riemann sum , trapezoid rule , or simpson s rule etc . to obtain the discrete scheme of the global order .we implement the set arithmetic operations in only approximately as indicated in proposition [ prop : approx_conv_set ] and work with finitely many normed directions satisfying to preserve the order of the considered scheme approximating the reachable set . with this approximationwe generate a finite set of supporting points of and with its convex hull the fully discrete reachable set . to reach this target , we also discretize the target set and the control set appearing in and , e.g. along the line of proposition [ prop : approx_conv_set ] : hence , are polytopes approximating resp . .let be the fully discrete version of ( it will be defined later in details ) .our aim is to construct the graph of up to a given time based on the knowledge of the reachable set approximation .we divide ] of its range .+ [ algorithm ] 1 .set , as in , .2 . compute as follows where 3 .compute the set of the supporting points and set where is an arbitrary element of and set 4 .if , set and go back to step 2 .otherwise , go to step 5 .5 . construct the graph of by the ( piecewise ) linear interpolation based on the values at the points , .the algorithm computes the set of vertices of the polygon which are supporting points in the directions .the following proposition is the error estimate between the fully discrete reachable set and .[ dhr_deltahl ] let assumptions [ standassum](i)(iii ) , together with for the set - valued combination method in ( ii ) , be valid .furthermore , finitely many directions are chosen with then , for small enough , where are some positive constants and ._ the proof can be found in ( * ? ? ?* proposition 7.2.5 ) ._ if is a singleton , we do not need to discretize the target set .the overall error estimate in even improves in this case , since .as we can see in this subsection the convexity of the reachable set plays a vital role .therefore , this approach can only be extended to special nonlinear control systems with convex reachable sets . in the following subsection , we provide the error estimation of obtained by the indicated approach under assumptions [ standassum ] , the regularity of and the properties of the numerical approximation . after computing the fully discrete reachable sets in subsection [ subsec : algorithm ] ,we obtain the values of for all , . 
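the following python sketch ( an illustration under simplifying assumptions , not the implementation used for the examples of the paper ) combines the simplest of these schemes , the set - valued euler / riemann - sum method of order one for an autonomous system with the box [ -1 , 1]^m as control set , with the skeleton of algorithm [ algorithm ] : the fully discrete reachable sets are stored as convex hulls of supporting points in a fixed grid of directions , time is reversed by replacing A with -A and B with -B , and every supporting point produced at stage i receives the stage time as its value of the discrete minimum time function .

```python
# set-valued Euler step and the marching of the backward reachable sets.
# the double integrator data at the end are placeholders.
import numpy as np

def euler_step(vertices, A, B, h, directions):
    """one step R_{k+1} = (I + h A) R_k + h B U with U = [-1, 1]^m,
    sets being represented by their supporting points in the given directions."""
    M = np.eye(A.shape[0]) + h * A
    new_pts = []
    for l in directions:
        y = vertices[np.argmax(vertices @ (M.T @ l))]  # supporting point of M R_k in direction l
        u = np.sign(B.T @ l)                           # maximiser of <B^T l, u> over the box
        new_pts.append(M @ y + h * (B @ u))            # supporting point of the Minkowski sum
    return np.array(new_pts)

def minimum_time_point_cloud(A, B, target_vertices, h, n_per_stage, n_stages, directions):
    """supporting points of the backward reachable sets together with their stage times."""
    R = np.asarray(target_vertices, dtype=float)
    points, values = [p for p in R], [0.0] * len(R)    # points of the target carry the value 0
    for i in range(1, n_stages + 1):
        for _ in range(n_per_stage):
            R = euler_step(R, -A, -B, h, directions)   # Euler step of the time-reversed system
        points.extend(p for p in R)                    # supporting points at the stage time t_i
        values.extend([i * n_per_stage * h] * len(R))
    return np.array(points), np.array(values)

# placeholder example: double integrator x'' = u, |u| <= 1, target {0}
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
theta = 2 * np.pi * np.arange(64) / 64
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)
points, values = minimum_time_point_cloud(A, B, np.zeros((1, 2)), 0.01, 10, 10, dirs)
```

the strictly expanding behaviour of the reachable sets guarantees that each new layer of supporting points lies outside the previously computed sets , so assigning the stage time to these points is consistent with the definition of the discrete minimum time function .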
for all boundary points and some , we define the task is now to define a suitable value of in the computational domain if is neither a boundary point of reachable sets nor lies inside the target set .first we construct a simplicial triangulation over the set of points with grid nodes in .hence , * is a simplex for , * , * the intersection of two different simplices is either empty or a common face * all supporting points in the sets are vertices of some simplex , * all the vertices of each simplex have to belong either to the fully discrete reachable set or to for some .for the triangulation as in figure [ fig : part_triang ] , we introduce the maximal diameter of simplices as + assume that is neither a boundary point of one of the computed discrete reachable sets nor an element of the target set and let be the simplex containing .then where with and being the vertices of .if lies in the interior of , the index of this simplex is unique .otherwise , lies on the common face of two or more simplices due to our assumptions on the simplicial triangulation and is well - defined .let be the index such that .since is either or due to , we have the latter holds , since the convex combination is bounded by and equality to only holds , if all vertices with positive coefficient lie on the boundary of the reachable set .the following theorem is about the error estimate of the minimum time function obtained by this approach .[ errt ] assume that is continuous with a non - decreasing modulus in , i.e. let assumptions [ standassum ] be fulfilled , furthermore assume that holds .then where is the supremum norm taken over .we divide the proof into two cases .1 . for some .+ let us choose a best approximation of so that where we used in the latter equality. clearly , , show that then due to for some .+ let be a simplex containing with the set of vertices .then where .we obtain where we applied the continuity of for the first term and the error estimate of case 1 for the other . combining two cases and noticing that if , we get the proof is completed .[ rem_errt ] theorem [ dhrrh ] provides sufficient conditions for set - valued combination methods such that holds .see also e.g. for set - valued euler s method resp . for heun s method .if the minimum time function is hlder continuous on , becomes for some positive constant .the inequality shows that the error estimate is improved in comparison with the one obtained in and does not assume explicitly the regularity of optimal solutions as in .one possibility to define the modulus of continuity satisfying the required property of non - decrease in theorem [ errt ] is as follows : an advantage of the methods of volterra type studied in which benefit from non - standard selection strategies is that the discrete reachable sets converge with higher order than 2 .the order 2 is an order barrier for set - valued runge - kutta methods with piecewise constant controls or independent choices of controls , since many linear control problems with intervals or boxes for the control values are not regular enough for higher order approximations ( see ) .there are many different triangulations based on the same data . 
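one concrete and easily available realisation of step 5 is a delaunay triangulation of the collected supporting points combined with barycentric , i.e. piecewise linear , interpolation of the recorded stage times . the sketch below uses scipy for this purpose and assumes the arrays ` points ` and ` values ` produced by the previous listing ; coincident supporting points are merged keeping the smallest recorded time , and queries outside the largest computed reachable set return nan .

```python
# piecewise linear reconstruction of the fully discrete minimum time function
# on a Delaunay triangulation of the collected supporting points.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_discrete_minimum_time(points, values):
    order = np.argsort(values)                       # so that duplicates keep the smallest time
    pts, vals = points[order], values[order]
    uniq, idx = np.unique(pts.round(decimals=9), axis=0, return_index=True)
    return LinearNDInterpolator(uniq, vals[idx])     # Delaunay triangulation + linear interpolation

T_h = build_discrete_minimum_time(points, values)
print(T_h([[0.1, -0.2], [5.0, 5.0]]))                # a finite value inside, nan far outside
```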
among them, we can always choose the one with a smaller diameter close to the hausdorff distance of the two sets by applying standard grid generators .for example , from the same set of data we can build the two following grids and it is easy to see in figure [ fig : diff_triang ] that the left one ( for which only three edges are emerging from the corner of the bigger reachable set ) gives a better approximation , since the maximal diameter in the triangulation at the right is much bigger .[ errtt2 ] let the conditions of theorem [ errt ] be fulfilled . furthermore assume that the step size is so small such that in is smaller than , where then and where is the supremum norm taken over . for some choose a constant such that .since does not intersect the complement of bounded with and both are compact sets , there exists such that we will show that a similar inclusion as holds for the discrete reachable sets for small step sizes .if the step size is so small that in is smaller than , then we have the following inclusions : {palurb } } \mathcal{r}(t_i ) + \frac{2}{3}\varepsilon b_1(0 ) & \subset { \operatorname{int}}\mathcal{r}_{h\delta}(t_{i+1 } ) \nonumber \intertext{and } \mathcal{r}_{h\delta}(t_i ) + \frac{\varepsilon}{3 } b_1(0 ) & \subset \big ( \mathcal{r}(t_i ) + \frac{\varepsilon}{3 } b_1(0 ) \big ) + \frac{\varepsilon}{3 } b_1(0 ) \subset { \operatorname{int}}\mathcal{r}_{h\delta}(t_{i+1 } ) \label{est2levfull}. \end{aligned}\ ] ] we have , then with .2 . , then with . to prove 1 )the inequality is clear .assume that for some .then . by the estimates , and , it follows that which is a contradiction to the assumption .hence , .assume that .then , .furthermore , can not be an element of , since otherwise which is a contradiction to .+ therefore , which contradicts .hence , the starting assumption must be wrong which proves .+ to prove 2 ) if we assume for some , then and by estimate .but this contradicts .therefore , .assuming for some , then .furthermore , if is an element of , which is a contradiction to . + therefore , which contradicts .hence , the starting assumption must be wrong which proves .consequently , 1 ) and 2 ) are proved .+ notice that 1 .the case 1 ) means & \quad & ( i \geq 1 ) , \\t(x_j ) & = t_0 & \quad & ( i = 0 ) \end{aligned}\ ] ] and due to .2 . from the case 2 ) , we obtain \quad ( i \geq 2 ) , \\t_{h\delta}(x_j ) & - t(x ) < t_i - t_{i-2 } = 2 \delta t , \\t_{h\delta}(x_j ) & - t(x ) >t_{i-1 } - t_{i+1 } = -2 \delta t. \end{aligned}\ ] ] therefore , for ( similarly with estimates for ) . altogether , is proved . in this subsectionwe first prove the convergence of the normal cones of to the ones of the continuous - time reachable set in an appropriate sense . using this resultwe will be able to reconstruct discrete optimal trajectories to reach the target from a set of given points and also derive the proof of -convergence of discrete optimal controls . 
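returning to the choice of triangulation , the maximal simplex diameter entering the error estimate of theorem [ errt ] can be evaluated directly for the triangulation that is actually used ; the sketch below does this for a delaunay triangulation of the ( merged ) point set of the previous listing and is , again , only an illustration .

```python
# maximal diameter of the simplices of a Delaunay triangulation of the data points.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

def max_simplex_diameter(pts):
    pts = np.asarray(pts, dtype=float)
    tri = Delaunay(pts)
    diam = 0.0
    for simplex in tri.simplices:
        for i, j in combinations(simplex, 2):        # all edges of the simplex
            diam = max(diam, float(np.linalg.norm(pts[i] - pts[j])))
    return diam
```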
in the following only convergence under weaker assumptions andno convergence order 1 as in are proved ( see more references therein for the classical field of direct discretization methods ) .we also restrict to linear minimum time problems .+ the following theorem plays an important role in this reconstruction and will deal with the convergence of the normal cones .if the normal vectors of converge to the corresponding ones of , the discrete optimal controls can be computed with the discrete pontryagin maximum principle under suitable assumptions .+ for the remaining part of this subsection let us consider a fixed index .we choose a space discretization with ( compare with ( * ? ? ?3.1 ) ) and often suppress the index for the approximate solutions and controls .[ theo : normaconver ] consider a discrete approximation of reachable sets of type ( i)(iii ) with under assumptions [ standassum ] , the set - valued maps converge graphically to the set - valued map for .let us recall that , under assumptions [ standassum ] and by the construction in subsec .[ subsec : sv_discr_meth ] , , are convex , compact and nonempty sets .moreover , we also have that the indicator functions are lower semicontinuous convex functions ( see proposition [ pro : indicator ] ) . by (* example 4.13 ) the convergence in with respect to the hausdorff set also implies the set convergence in the sense of definition [ def : setconvg ] .hence , ( * ? ? ?* proposition 7.4(f ) ) applies and shows that the corresponding indicator functions converge epi - graphically . since the subdifferential of the ( convex ) indicator functions coincides with the normal cone by ( * ? ? ? * exercise 8.14 ) , attouch s theorem [ theo : attouch ] yields the graphical convergence of the corresponding normal cones. the remainder deals with the reconstruction of discrete optimal trajectories and the proof of convergence of optimal controls in the _ -norm _, i.e. as for being defined later , where the _ -norm _ is defined for as .to illustrate the idea , we confine to a special form of the target and control set , i.e. ^m,\,t\in [ 0,t_i] ] and so is .therefore , .now we compute the inner product of : now assume that for some indices . then define another sequence of controlsas follows let be the end point of the discrete trajectory following .we have which implies that or which contradicts the construction of , an outer normal vector of at . therefore , .+ conversely , assume that for some nontrivial discrete adjoint response , the controls satisfies for every indices .we will show that the end point of the corresponding trajectory will lie at the boundary of , not at any point belonging to its interior .suppose , by contradiction , lies in the interior of .let be a point reached by a sequence of controls in in such that our assumption implies that for all .as above , due to , we show that which is a contradiction to .consequently , . motivated by the outer normality of the adjoints in continuous resp . discrete time and the maximum conditions, we define the optimal controls as follows ) , \\ \hat{u}_h(t)&=\hat{u}_{kj } & & \text{if } t\in [ t_{kj},t_{k(j+1)}),\,k=0, ... ,i-1,\ , j=0, ... ,n-1,\\ \hat{u}_h(t_{(i-1)n})&=\hat{u}_{(i-1)(n-1 ) } & & \text{for } t = t_{(i-1)n } , \end{aligned } \right.\ ] ] where and is the _ signum function _ and , . 
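A minimal sketch of this reconstruction, under purely illustrative assumptions: a double-integrator dynamics x' = Ax + Bu with control values in [-1, 1], an explicit Euler discretization, and an arbitrarily chosen terminal normal vector. The discrete adjoint is propagated backwards through powers of the transition matrix and the discrete control is taken as the componentwise sign of eta B, in the spirit of the maximum condition stated above.

```python
import numpy as np

# Illustrative linear system (double integrator) and explicit Euler discretization.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
h, N = 0.05, 40

Phi = np.eye(2) + h * A                  # one-step Euler transition matrix

# Discrete adjoint eta_k = eta_N Phi^(N-k), started from an (assumed) outer
# normal vector at the end point of the discrete reachable set.
eta_N = np.array([1.0, 0.5])
eta = [eta_N]
for _ in range(N):
    eta.append(eta[-1] @ Phi)
eta = eta[::-1]                          # eta[k] belongs to time step k

# Discrete bang-bang control from the maximum condition: u_k = sign(eta_k B).
u = [np.sign(e @ B) for e in eta[:-1]]

# Forward discrete trajectory driven by the reconstructed control.
x = np.zeros(2)
for k in range(N):
    x = Phi @ x + h * (B @ u[k])
print("end point of the discrete trajectory:", x)
```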
owing to theorem [ theo : normaconver ] , we have that the set - valued maps converge graphically to which implies that for every sequence in the graphs there exists an element of the graph such that where .thus are chosen such that is realized .then it is obvious that as with uniformly in . for a function , we denote the total variation , where is a usual total variation of the -th components of over a bounded interval .now if we assume furthermore that if the system is normal , converges to in the -norm .consider that the minimum time problem with the dynamics in .assume that the normality condition holds , i.e. for each ( nonzero ) vector along an edge of ^m ] if .then , under assumptions [ standassum ] , as for any . due to defined as in on is the optimal control to reach the state of the corresponding optimal solution from the origin .moreover , it has a finite number of switchings see ( * ? ? ?2.5 , corollary 2 ) .therefore , the total variation , ) ] .then taking a sum over we obtain )+h\sum_{k=0}^{i-1}\sum_{j=0}^{n-1 } \|{\operatorname{sign } } ( \eta(t_{kj})\bar b))^\top-{\operatorname{sign}}(\eta_{kj}\bar b))^\top \|_1.\\\ ] ] since has a finite number of switchings and are non - trivial with the convergence as for , the variation )$ ] and are bounded. therefore , the proof is completed .the authors want to express their thanks to giovanni colombo , especially for pointing us to attouch s theorem , and to lars grne . both of them supported us with helpful suggestions and motivating questions .they are also grateful to matthias gerdts about his comments to optimal control .aubin , a. m. bayen , and p. saint - pierre . .springer , heidelberg , second edition , 2011 .first edition : j .- p .aubin in systems & control : foundations & applications , birkhuser boston inc . , boston , ma , 2009 .r. baier .selection strategies for set - valued runge - kutta methods . in z.li , l. g. vulkov , and j. wasniewski , editors , _ numerical analysis and its applications , third international conference , naa 2004 , rousse , bulgaria , june 29 - july 3 , 2004 , revised selected papers _ ,volume 3401 of _ lecture notes in comput ._ , pages 149157 , berlin heidelberg , 2005 .springer .r. baier and e. farkhi .the directed subdifferential of dc functions . in a.leizarowitz , b. s. mordukhovich , i. shafrir , and a. j. zaslavski , editors , _ nonlinear analysis and optimization ii : optimization . a conference in celebration of alex ioffe s 70th and simeon reich s 60th birthdays , june 1824 , 2008 , haifa , israel _ , volume 513 of _ ams contemporary mathematics _ , pages 2743 .ams and bar - ilan university , 2010 .r. baier , e. farkhi , and v. roshchina . on computing the mordukhovich subdifferential using directed sets in two dimensions .in r. s. burachik and jen - chih yao , editors , _ variational analysis and generalized differentiation in optimization and control . in honor of boris s. mordukhovich _ , volume 47 of _ springer optimization and its applications _ , pages 5993 .springer , new york london , 2010 .r. baier and f. lempio . approximating reachable sets by extrapolation methods . in p.j. laurent , a. le mhaute , and l. l. schumaker , editors , _ curves and surfaces in geometric design .papers from the second international conference on curves and surfaces , held in chamonix - mont - blanc , france , july 1016 , 1993 _ , pages 918 , wellesley , 1994 . a k peters .r. baier and f. lempio .computing aumann s integral . in a.b. kurzhanski and v. m. 
veliov , editors , _ modeling techniques for uncertain systems , proceedings of a conference held in sopron , hungary , july 610 , 1992 _ , volume 18 of _ progress in systems and control theory _ , pages 7192 , basel , 1994 .birkhuser .m. bardi and m. falcone .discrete approximation of the minimal time function for systems with regular optimal trajectories . in a.bensoussan and j. l. lions , editors , _ analysis and optimization of systems .proceedings of the 9th international conference antibes , june 1215 , 1990 _ , volume 144 of _ lecture notes in control and inform ._ , pages 103112 .springer , berlin heidelberg , 1990 .m. bardi , m. falcone , and p. soravia . numerical methods for pursuit - evasion games via viscosity solutions .in _ stochastic and differential games _ , volume 4 of _ ann . internat .games _ , pages 105175 .birkhuser boston , boston , ma , 1999 .m. falcone. numerical solution of dynamic programming equations .appendix a. in m. bardi and i. capuzzo - dolcetta , editors , _ optimal control and viscosity solutions of hamilton - jacobi - bellman equations _ , systems & control : foundations & applications , pages 471504 .birkhuser boston inc . ,boston , ma , 1997 .a. girard , c. le guernic , and o. maler .efficient computation of reachable sets of linear time - invariant systems with inputs . in _hybrid systems : computation and control _ , volume 3927 of _ lecture notes in comput ._ , pages 257271 .springer , berlin , 2006 .l. grne and t. jahn . computing reachable sets via barrier methods on simd architectures . in j.eberhardsteiner , h. j. bhm , and f. g. rammerstorfer , editors , _ proceedings of the 6th european congress on computational methods in applied sciences and engineering ( eccomas 2012 ) held at the university of vienna , vienna , austria , september 1014 , 2012 _ , pages 20762095 , vienna , austria , 2012 .vienna university of technology .paper no . 1518 , e - book .n. kirov and m. krastanov .volterra series and numerical approximations of odes . in z.li , l. g. vulkov , and j. wasniewski , editors , _ numerical analysis and its applications , third international conference , naa 2004 , rousse , bulgaria , june 29 - july 3 , 2004 , revised selected papers _ , volume 3401 of _ lecture notes in comput ._ , pages 337344 , berlin heidelberg , 2005 . springer. m. krastanov and n. kirov .dynamic interactive system for analysis of linear differential inclusions . in a.b. kurzhanski and v. m. veliov , editors , _ modeling techniques for uncertain systems , proceedings of a conference held in sopron , hungary , july 610 , 1992 _ , volume 18 of _ progress in systems and control theory _ ,pages 123130 , basel , 1994 .birkhuser . c. le guernic and a. girard .reachability analysis of hybrid systems using support functions . in a.bouajjani and o. maler , editors , _ computer aided verification .proceedings of the 21st international conference ( cav 2009 ) held in grenoble , june 26july 2 , 2009 _ , volume 5643 of _ lecture notes in comput ._ , pages 540554 , berlin , 2009 .springer .f. lempio .set - valued interpolation , differential inclusions , and sensitivity in optimization . in r.lucchetti and j. revalski , editors , _ recent developments in well - posed variational problems _ , volume 331 of _ mathematics and its applications _ , pages 137169 , dordrecht boston london , 1995 .kluwer academic publishers .z. r , j. gravesen , and b. jttler . computing convolutions and minkowski sums via support functions . in p.chenin , t. lyche , and l. l. 
schumaker , editors , _ curve and surface design .avignon 2006 .proceedings of the 6th international conference on curves and surfaces , june 29july 5 in avignon , france _ , mod .methods math ., pages 244253 , brentwood , tn , 2007 .nashboro press .
The first part of this paper introduces an approach for computing the approximate minimum time function of control problems, based on reachable set approximation and arithmetic operations for convex compact sets. In particular, the theoretical justification of the proposed approach is restricted to a class of linear control systems. The error estimate of the fully discrete reachable set is provided by employing the Hausdorff distance to the continuous-time reachable set. The detailed procedure for solving the corresponding discrete set-valued problem is described. Under standard assumptions, by means of convex analysis and knowledge of the regularity of the true minimum time function, we estimate the error of its approximation. Numerical examples are included in the second part.
the methods developed in studying complex physical systems have been successfully applied throughout decades to analyze financial data .the quantitative study of financial data continue to attract the growing interest motivated by the existence of universal features in the dynamics of different markets , such as power - law tails of the return distributions , scaling as a first approximation and deviations from scaling of the empirical return distributions , volatility clustering , and leverage effect .the phenomenological and microscopic models have been proposed to explain the established stylized facts .the field of research connected to modeling financial markets has been named econophysics .a stock s volatility represents the simplest measure of its riskiness or uncertainty .formally , the volatility is the annualized standard deviation of the stock s returns during the period of interest .the random walk model proposed by bachelier in 1900 year presupposes a constant volatility .there is an ample empirical evidence , however , that the volatility is not a constant , but represents a random variable .two well established stylized facts concerning the volatility are long ranged volatility - volatility correlations that are also known as volatility clustering and return - volatility correlations that are also known as leverage effect .the volatility is a key variable to control risk measures associated with the dynamics of prices of financial assets .the implied volatility extracted from options prices represents a market estimate of future volatility . a pure exposure to future volatility is provided by the volatility swaps .the volatility enters all options pricing models , so its knowledge has a great value for estimate of the equilibrium options state - price distributions .the volatility clustering manifests itself in the occurrence of large changes of the index at neighboring times ( observed localized outbursts ) .the leverage effect has its origin in the observed negative correlation between the past returns and future volatility .the possible explanation to this effect is due to the fact that negative returns increase financial leverage and extend the risk for investors and thereby a stock s volatility .a statistical study demonstrates clearly that the leverage effect is one - directional : past returns correlate with future volatility only . in this paper, we propose an analytical method to evaluate future volatility as a linear function of the lagged volatility and lagged returns .the method takes the volatility clustering and leverage effect into account and provides for stationary stochastic processes the smallest forecasting error in the class of all linear functions . in this precise sense, we talk on the best linear forecast ( blf ) of the volatility .the blf problem for a stationary stochastic process was formulated and solved by kolmogorov in 1941 year and wiener in 1949 year .a modern review of the blf methods can be found in ref .we apply these methods to construct the blf volatility function for the dow jones 30 industrial average ( djia ) .the outline of the paper is as follows : in the next sect . 
, we remove the leverage effect from the original time series to work with a reduced volatility that has by definition a vanishing covariance with the past returns .the spectral density of a stochastic process can be factorized , if its correlation function represents a superposition of the exponential functions .an explicit expression is derived for the amplitude .the analytical properties of the amplitude in the complex -plane are important to provide an explicit representation of the predictor function . in sect .3 , the blf problem is analyzed further to account for the reduced volatility clustering and to construct the blf function . in sect .4 , we fit 100 + years of data of the daily historical volatility of the djia in order to determine parameters of the blf function .numerical estimates are given to illustrate the developed method .the minimization of the forecasting error for the reduced volatility predictor function is shown to be equivalent to the minimization of the forecasting error of the original volatility time series .an explicit expression for the forecasting error is given . in conclusion , a connection of the blf method with the arch models , in which future variance is also represented as a linear combination of the past observables , is discussed .the evolution of a market index value or a stock price is described by equation ( see e.g. ) : the value is a noise added to the path followed by with the expectation value =0 ] the volatility represents a generic measure of the magnitude of market fluctuations . we consider a discrete version of the random walk problem by setting , and the sampling intervals are enumerated by integer time parameter . the volatility is a hidden variable and its extraction form the market observables is a separate difficult task .the possible estimator of the volatility is defined in terms of returns in what follows , the term volatility refers to the estimator , the annualizing factor will not apply .a use of the variance estimator would complexify the problem due to divergences connected to the existence of power - law tails ( variance of variance is infinite , =\infty , ] , so depends on the lagged price increments only . note that =\mathrm{e}[\chi ] , ] .due to the definition ( [ mod ] ) and in virtue of equation =\delta _ { ts}\mathrm{var}[\xi ] , \label{corr1}\ ] ] that holds true for sampling intervals greater than 20 min , we have =0 .\label{corr2}\ ] ] the reduced volatility does not experience the leverage effect .so , its predictor depends on the past only .it is possible therefore to focus on the volatility clustering only , while the leverage effect is taken into account explicitly through eq.([mod ] ). the autocorrelation function /\mathrm{e}% [ \chi ^{2}]] + \sum_{s=0}^{+\infty } c(s)\zeta ( t - s ) , \label{firepr}\ ] ] provided that the spectral function admits the factorization and the amplitude is regular at ( see e.g. ) .the expansion coefficients equal it is remarkable that only retarded enter the summation in eq.([firepr ] ) .this is a consequence of the convergence of the taylor expansion of the amplitude at , which is in turn a consequence of the analyticity at : the convergence radius of the expansion is associated with the first pole at .the stationary stochastic process can be interpreted as a result of filtering the normal sequence .= 12 cm the blf function for the time horizon has the form +\sum_{s=\tau}^{+ \infty } \frac{\xi _ { \tau } ( 0)^{(s)}}{s!}(\chi ( t - s)-\mathrm{e}[\chi ] ) . 
\label{xihat}\ ] ] the weight coefficients are derivatives of the function at . here, for constructing a linear prognosis function , the overall normalization factor in is not important , since it drops out from the ratio . in virtue of eq.([corr2 ] ) , =0 .\label{corr3}\ ] ] in terms of the stochastic process , the blf function looks like + \sum_{s=\tau}^{+\infty } c(s)\zeta ( t - s ) .\label{zeta}\ ] ] at we obtain and at the last two equations complete solution of the blf problem for the case when the correlation function is a superposition of the exponent functions ( [ corr ] ) .the terms in the right side of eq.([lgt0 ] ) are not all positive definite . .parameters and entering the fit of the autocorrelation function ( [ corr ] ) of the reduced volatility , parameters which determine the roots of equation and parameters which determine the additive representation ( [ phi additive ] ) of the function .the value of represents the normalization constant according to eq.([parameters ] ) .[ cols= " < , < , < , < , < , < , < , < " , ] the blf volatility function looks like +\sum_{s=\tau}^{+ \infty } \frac{\xi _ { \tau } ( 0)^{(s)}}{s!}(\chi ( t - s)-\mathrm{e}[\chi ] ) + \sum_{s=\tau}^{+ \infty } \mathrm{cov}[\eta ( 0),\xi ( -s)]\mathrm{var}% ^{-1}[\xi ] \xi ( t - s ) \label{final}\ ] ] where the unknown future returns set qual to zero : =0 ] , in agreement with the fact that . ] due to the power - law tails of the return distributions .nonlinear models for volatility forecasting , which take into account besides the volatility clustering and leverage effect also heavy tails of the returns distributions and the approximate scaling , represent an alternative class of the stochastic volatility models .the efficiency of such models can be tested in general using monte carlo simulations and/or backtests over historical data .the approach of refs . is more general , since it allows a calculation of the probability density function of the volatility .the blf method predicts the average volatility only .it can , however , be extended to forecasting for arbitrary such that <\infty . ] of the future distribution are known , the reconstruction of the probability density function of the volatility must be possible within the blf method also .the blf problem for a stationary stochastic process was formulated in 1941 year by kolmogorov and later by wiener .a modern review of the blf methods can be found in ref . . in this paper , we reported an explicit analytical solution of the blf problem for practically important case when the autocorrelation function represents a superposition of exponential functions .the autocorrelation function of the volatility in a financial time series is known to be fitted well by such a superposition .we applied the obtained results to construct the blf volatility function for the djia .the popular autoregressive conditional heteroskedasticity ( arch ) models of time dependent volatility , proposed by engle ( for a review see ) , describe the variance as a linear function of the past observables .the arch models are conceptually very close to the blf approach .eq.([final ] ) expresses the forecasting volatility also as a linear function of the past volatility and past returns .eq.([final ] ) gives , however , the best linear forecast with the proved smallest forecasting error ( [ theorerror ] ) . 
the weight coefficients allow to evaluate the magnitude and number of terms needed for the arch models to quantify future variance with sufficiently good precision .the arch models receive an additional support and more general framework through the blf formula ( [ final ] ) .the accurate estimates of the future volatility are important for risk management and options pricing .the blf formula ( [ final ] ) represents an interest as the proved most accurate estimate in the class of all linear functions of the past volatility and past returns .the author wishes to thank e. alessio and v. frappietro for several useful discussions and the dow jones global indexes for providing the djia historical quotes .this work has been supported in part by federal program of the russian ministry of industry , science and technology no .40.052.1.1.1112 .k. demeterfi , e. derman , m. kamal and j. zou , _ more than you ever wanted to know about volatility swaps , _ goldman sachs , quantitative strategies research notes , 1999 .[ http://www.ederman.com/emanuelderman/gsqspapers/volswaps.pdf ] e. alessio , v. frappietro , m. i. krivoruchenko , and l. j. streckert , _ multivariate distribution of returns in financial time series _ , 2003 .in : proceedings of the international conference of computational methods in sciences and engineering 2003 ( iccmse 2003 ) , ed . t.e .simos ( world scientific publishing co. , singapore , 2003 ) , pp .323 - 326 [ http://arxiv.org/abs/cond-mat/0310300 ] m. i. krivoruchenko , e. alessio , v. frappietro , and l. j. streckert , _ modeling stylized facts for financial time series _ , 2003 .talk given at the conference `` applications of physics in financial analysis 4 '' , warsaw , 13 - 15 november , 2003 .[ http://arxiv.org/abs/cond-mat/0401009 ]
The autocorrelation function of volatility in financial time series is fitted well by a superposition of several exponentials. Such a case admits an explicit analytical solution of the problem of constructing the best linear forecast of a stationary stochastic process. We describe and apply the proposed analytical method for forecasting volatility. The leverage effect and volatility clustering are taken into account. Parameters of the predictor function are determined numerically for the Dow Jones 30 Industrial Average. The connection of the proposed method to the popular ARCH models is discussed.
this article concerns the problem of computing equilibrium averages of time homogeneous , ergodic markov chains in the presence of metastability .a markov chain is said to be _metastable _ if it has typically very long sojourn times in certain subsets of state space , called _metastable sets_. a new method , called the parallel replica method ( or parrep ) , is proposed for efficiently simulating equilibrium averages in this setting .markov chains are widely used to model physical systems . in computational statistical physics the main setting for this article markov chains are used to understand macroscopic properties of matter , starting from a mesoscopic or microscopic description .equilibrium averages then correspond to bulk properties of the physical system under consideration , like average density or internal energy .a popular class of such models are the markov state models .markov chains also arise as time discretizations of continuous time models like the langevin dynamics , a popular stochastic model for molecular dynamics . for examples of markov chain modelsnot obtained from an underlying continuous time dynamics , see for example .it should be emphasized that the discrete in time setting is generic even if the underlying model is continuous in time , what must be simulated in practice is a time - discretized version . in computational statistical physics ,metastability arises from entropic barriers , which are bottlenecks in state space , as well as energetic barriers , which are regions separating metastable states through which crossings are unlikely ( due to , for example , high energy saddle points in a potential energy landscape separating the states ). see figures 12 for simple examples of entropic and energetic barriers .the method proposed here is closely related to a recently proposed algorithm , also called parrep , for efficient simulation of metastable markov chains on a coarsened state space .that algorithm can be considered an adaptation of a.f .voter s parallel replica dynamics to a discrete time setting .( for a mathematical analysis of a.f .voter s original algorithm , see . ) parrep was shown to be consistent with an analysis based on quasistationary distributions ( qsds ) , or local equilibria associated with each metastable set .parrep uses parallel processing to explore phase space more efficiently in real time .a cost of the parallelization is that only a _coarse _ version of the markov chain dynamics , defined on the original state space modulo the collection of metastable sets , is obtained . in this articleit is shown that a simple modification of the parrep algorithm of nonetheless allows for computation of equilibrium averages of the original , _ uncoarsened _ markov chain .-140pt on state space with an entropic barrier .at each step , a direction up , down , left or right is selected at random , each with probability .then moves one unit in this direction , provided this does not result in crossing a barrier , i.e. , one of the edges of the two boxes pictured .the walk can cross from the left box to the right box only through the narrow pathways indicated .the metastable sets are . ]-130pt -130pt on state space with energy barriers .the random walk moves one unit left or right according to a biased coin flip : if and the slope of the pictured graph at is , then with probability , , and with probability , .the metastable sets are . 
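As a concrete illustration of the first of these examples, the following sketch simulates a random walk of the type shown in figure 1: two square boxes that communicate only through a narrow opening, with proposed moves that would cross a wall simply rejected. The box size, the size of the opening and the number of steps are illustrative choices consistent with the description, not the exact geometry of the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20                         # each box is L x L
gate_y = L // 2                # y-coordinate of the single-site opening

def allowed(x, y):
    """True if (x, y) lies inside the two-box domain."""
    if not (0 <= y < L):
        return False
    if 0 <= x < L or L < x <= 2 * L:   # interior of the left / right box
        return True
    return x == L and y == gate_y      # the narrow passage between the boxes

steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
x, y = L // 2, L // 2                  # start deep inside the left box
n_steps, time_left = 500000, 0
for _ in range(n_steps):
    dx, dy = steps[rng.integers(4)]
    if allowed(x + dx, y + dy):        # moves through a wall are rejected
        x, y = x + dx, y + dy
    time_left += (x < L)

print("fraction of time spent in the left box:", time_left / n_steps)
```

Long stretches during which this fraction barely moves correspond to the long sojourns in the metastable sets that ParRep is designed to exploit.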
]-130pt the parrep algorithm proposed here is very general .it can be applied to any markov chain , and gains in efficiency can be expected when the chain is metastable and the metastable sets can be properly identified ( either a priori or on the fly ) . in particular , it can be applied to metastable markov chains with both energetic and entropic barriers , and no assumptions about barrier heights , temperature or reversibility are required .while there exist many methods for sampling from a distribution , most methods , particularly in markov chain monte carlo , rely on a priori knowledge of relative probabilities of the distribution .in contrast with these methods , parrep does not require _ any _ information about the equilibrium distribution of the markov chain .the article is organized as follows .section [ sec : qsd ] defines the qsd and notation used throughout .section [ sec : parrep ] introduces the parrep algorithm for computing equilibrium averages ( algorithm [ alg1 ] ) . in section[ sec : numerics ] , consistency of the algorithm is demonstrated on the simple models pictured in figures 1 and 2 .a proof of consistency in an idealized setting is given in the appendix .some concluding remarks are made in section [ sec : conclude ] .throughout , is a time homogeneous markov chain on a standard borel state space , and is the associated measure when , where denotes equality in law . all sets and functions are assumed measurable without explicit mention .the collection of metastable sets will be written , with elements of denoted by .formally , is simply a set of disjoint subsets of state space .[ d1 ] a probability measure with support in is called a quasistationary distribution ( qsd ) if for all and all , that is , if , then conditionally on , .it is not hard to check that , if for every probability measure supported in and every , then is the unique qsd in .informally , if holds , then is close to whenever it spends a sufficiently long time in without leaving .of course depends on , but this will not be indicated explicitly .let be ergodic with equilibrium measure , and fix a bounded real - valued function defined on state space .the output of parrep is an estimate of the average of with respect to .the algorithm requires existence of a unique qsd in each metastable set , so it is assumed for each there is a unique satisfying .this assumption holds under very general mixing conditions ; see .the user - chosen parameters of the algorithm are the number of replicas , ; the decorrelation and dephasing times , and ; and a polling time , . the parameters and are closely related to the time needed to reach the qsd ; both may depend on . to emphasize this ,sometimes or are written .the parameter is a polling time at which the parallel replicas resynchronize .see below for further discussion .[ alg1 ] set the simulation clock to zero : , and set . then iterate the following : * * _ decorrelation step . _ * + evolve from time until time , where is the smallest number such that there exists with .meanwhile , update then set and proceed to the dephasing step , with now the metastable state having . * * _ dephasing step . _* + generate independent samples , , of the qsd in . then proceed to the parallel step . * * _ parallel step .* + \(i ) set and .let be replicas of , that is , markov chains with the same law as which are independent of and one another . set , ... 
, + \(ii ) evolve all the replicas from time to time .+ \(iii ) if none of the replicas leave during this time , update and return to ( ii ) above . otherwise , let be the smallest number such that leaves during this time , let ] , for the remainder of section [ sec : ee ] , fix satisfying assumption [ a3 ] , and define [ l6a ] let be any probability measure on with support in .then for all , by assumption [ a3 ] , whenever . by definition of the extended process , the following holds . first, for any and any , second , for any , third , for any , let .for any and , due to , and , where by convention the product and intersection from to do not appear above if and .lemmas [ l6aa]- [ l6a ] lead to the following .[ l6 ] there exists and such that for all probability measures on and all , fix a probability measure on . since , by assumption [ a2 ] one may choose and such that for all probability measures on and all , let and define as in . for , define probability measures on by , for , by lemma [ l3 ] and , for all and , so by lemma [ l6aa ] , for all , let and fix .define a probability measure on with support in by , for and , by lemma [ l6a ] and , taking completes the proof .finally ergodicity of the extended process can be proved , using the tools of section [ sec : harris ] .[ t2 ] there exists a ( unique ) measure on such that for any probability measure on and any bounded measurable function , moreover , for any probability measure on , first , it is claimed is a harris chain . recall that and are defined as in .lemma [ l6 ] shows that for any , define a probability measure on with support in by : for and , let with .then with , . from assumption [ a3 ] , for any , one can check is a harris chain by taking , , and as above in the definition of harris chains in section [ sec : harris ] . next it is proved that is ergodic .let be the auxiliary chain defined as in section [ sec : harris ] .note that this shows the second assumption of lemma [ l5 ] holds , that is , since is in the set .consider now the first assumption .it must be shown that by lemma [ l6 ] , one can choose and such for all probability measures on and all , define a probability measure on by and let be the probability measure on which is the restriction of to . by andlemma [ l4 ] with , for all , using , for , now by , for , thus holds .the result now follows from lemma [ l5 ] .next , ergodicity of , the parrep process with one replica , is proved .[ t3 ] for all probability measures on and all bounded measurable functions , fix a probability measure on and a bounded measurable function . define by and define a probability measure on by , for and , by theorem [ t2 ] , there exists a ( unique ) measure on such that and define a measure on by , for , from this and the definition of , so by lemma [ l3a ] and , also , by lemma [ l3 ] and , using assumption [ a2 ] one can conclude .so from , here the main result , theorem [ maintheorem ] , is finally proved .the idea is to use ergodicity of along with the fact that the _ average _ value of the contribution to from a parallel step of algorithm [ alg1 ] does not depend on the number of replicas. 
a law of large numbers applied to the contributions to from all the parallel steps will then be enough to conclude .note that the law of depends on the number of replicas , but this is not indicated explicitly .fix a probability measure on and a bounded measurable function .define by let algorithm [ alg1 ] start at .the quantity will be decomposed into contributions from the decorrelation step and the parallel step .let denote the contribution to from the decorrelation step up to time , and let denote the contribution to from the parallel step up to time .thus , let start at , with defined as in . because the starting points sampled in the dephasing step are independent of the history of algorithm , each parallel step in particular the pair is independent of the history of the algorithm .this and theorem [ t0 ] imply that has the same law for every number of replicas .in particular when , from lemma [ l3a ] , meanwhile , from the preceding independence argument , where are iid random variables and counts the number of sojourns of in by time : from idealization [ a0 ] and definition [ d1 ] , each term in the sum in or of the parallel step has expected value .so from linearity of expectation and theorems [ t0 ] , for any number of replicas , & = \left({\mathbb e}[\tau_{acc}]-1\right)\int_s f\,d\nu \\&=\left({\mathbb p}_\nu(x_1 \notin s)^{-1}-1\right)\int_s f\,d\nu.\end{split}\end{aligned}\ ] ] combining , and , for any number of replicas , where it is assumed the processes on the left and right hand side of are independent .let start at . from definition of and , when the number of replicas is , where the processes in are assumed independent . since is markov , the number of time steps for which is either finite almost surely , or infinite almost surely . by theorem[ t0 ] and assumption [ a4 ] , the expected value of each of the sojourn times of in is , so the sojourn times are finite almost surely .this means that either has infinitely many sojourns in almost surely , or has finitely many sojourns in almost surely . thus : define and for , note that are iid and if almost surely as , then by the strong law of large numbers there is a constant ( depending on ) such that from , and the strong law of large numbers , there is a constant such that and due to this does not depend on the number of replicas . by using theorem [ t3 ] along with and , now using , and , for any number of replicas , author would like to acknowledge gideon simpson ( drexel university ) , tony lelivre ( ecole des ponts paristech ) and lawrence gray ( university of minnesota ) for fruitful discussions .metastability for markov chains : a general procedure based on renormalization group ideas . in g.grimmett , editor , _ probability and phase transition _ , volume 420 of _ nato asi series _ , pp . 303322 , springer verlag ( 1994 )
An algorithm is proposed for computing equilibrium averages of Markov chains which suffer from metastability, the tendency to remain in one or more subsets of state space for long time intervals. The algorithm, called the parallel replica method (or ParRep), uses many parallel processors to explore these subsets more efficiently. Numerical simulations on a simple model demonstrate consistency of the method. A proof of consistency is given in an idealized setting. The parallel replica method can be considered a generalization of A.F. Voter's parallel replica dynamics, originally developed to efficiently simulate metastable Langevin stochastic dynamics.
we read newspapers and watch tv every day .there are many issues and many controversies .since media are free , we can hear arguments from every possible side .how do we decide what is wrong or right ?the first condition to accept a message is to understand it ; messages that are too sophisticated are ignored .so it seems reasonable to assume that our understanding depends on our ability and our current knowledge .here we show that the consequences of this statement are surprising and funny .to demonstrate this , we propose a computational model with two assumptions .the first is that messages can be represented as points on a plane of a finite area , say , a square .a consequence is that we can measure the distance between messages .the second assumption is that we can understand a message if it is not too far from what we already know . as a direct consequence of these two assumptions ,we obtain a simple model of learning . in this model the mind is represented by an area around the messages understood by the mind s owner .her / his ability is represented by a critical distance .a new message can be grasped if its distance to the closest previously understood message is shorter than .if this distance is longer , the message is ignored .let us consider a new area of experience : differential calculus , traffic regulations , stock market , foreign policy or classic latin grammar can serve as examples . initially we know a small area on a square .step by step , we expand our knowledge each time when a new message is found to be comprehensible .the speed at which the known area expands is determined by the parameter . if it is comparable with size a of the square , the mind understands everything after a few messages .however , if is small , initially most of the incoming messages are ignored , and the area of understanding expands very slowly .this is demonstrated in fig .[ fig:1 ] , where we see an area equivalent to the gained knowledge for after 10 messages in panel a ) , and for after 100 messages in panel b ) . in fig .[ fig:2 ] we show the fraction of messages that are understood as a function of the number of all messages , also for these two values of .\a ) the ` mental history ' of a single actor : positions of understood messages for a ) and b ) .each actor starts from one message at the center of the square . in casea ) the actor understands almost all messages after a few steps . in caseb ) the actor remains confined with her / his knowledge , with a bias towards right ( the bias direction is random).,title="fig : " ] + b ) the ` mental history ' of a single actor : positions of understood messages for a ) and b ) .each actor starts from one message at the center of the square . in casea ) the actor understands almost all messages after a few steps . in caseb ) the actor remains confined with her / his knowledge , with a bias towards right ( the bias direction is random).,title="fig : " ] the fraction of messages understood as a function of the number of incoming messages , for ( upper , green curve ) and ( lower , red curve ) .each actor starts from one message at the center of the square . in this log - log plot, the lower curve shows a vertical step between two subsequently understood messages .initially , such events are rather rare . ]as we are political animals , let us apply the model to our political beliefs . in this field ,public discussions are most aggressive and arguments least convincing . 
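Before moving to the political application, the learning rule described above fits in a few lines of code. The sketch below reproduces the qualitative behaviour of figure 2, the fraction of understood messages as a function of the ability parameter d; the square size, the number of messages and the random seed are illustrative, while the rule itself (messages arrive uniformly on the square, the first understood message sits at the centre, a message is grasped if it lies within distance d of a previously understood one) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_mind(d, n_messages=1000, a=1.0):
    """Fraction of messages understood by one 'mind' on an a x a square."""
    known = [np.array([a / 2, a / 2])]      # start from one message at the centre
    understood = 0
    for _ in range(n_messages):
        msg = rng.uniform(0.0, a, size=2)   # messages arrive uniformly on the square
        # A message is grasped if it lies within distance d of something already known.
        if min(np.linalg.norm(msg - k) for k in known) < d:
            known.append(msg)
            understood += 1
    return understood / n_messages

for d in (0.5, 0.1, 0.02):
    print(f"d = {d:5.2f}: fraction understood = {run_mind(d):.3f}")
```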
trying to be objective as scientists should be we choose our square to be symmetrically divided between two orientations : left and right .the vertical axis can be interpreted as a measure of the distance between authoritarian and libertarian , as in .let us suppose that our model mind is target of a stream of messages , evenly distributed in the square .again , if is close to 1 , the situation is rather trivial ; the model mind quickly arrives at full understanding .however , for small values of the ratio the situation is less trivial because bias comes into play .let us assume at first that our mind is initially unbiased ; its owner accepts the first message if it appears within a circle of radius around the square centre .yet we can expect that a certain degree of bias will soon develop : it is unlikely that the left - right symmetry is preserved for the trajectory of a single mind .an example of this effect is shown clearly in fig .[ fig:1]b . to explain this, we refer to theory of random walk .suppose that our model is simplified to one dimension , with steps towards left and right at discrete times with equal probabilities .suppose also that our mind made a step in a given direction .we can then ask the question : how long will it take until it returns ? the theory tells us : infinitely long on average . herewe touch upon an important feature of our model .it is clear that each mind will reach full understanding after a sufficiently long time .however , the difference between able ( large ) and less able ( small ) minds manifests itself always within a finite time .it is just our own lifetime which is finite , and its length provides a time scale for everything we do , including understanding things . compared with the infinitely long case from the previous paragraph , this means that , once biased , many of us will never reach objectivity again .to generalize our model , let us consider a large number of minds , which are again target of a stream of messages . as we have seen , whether we include their initial bias or not is of secondary importance .now we are going to design a test of social common sense in our artificial society . as the messages are evenly distributed , neither left nor right arguments prevail . knowing this, we can expect that a reasonable person remains objective .what is the result ? to answer this , let us introduce a probability that a given mind s owner , when asked about her / his preference , is going to answer ` right ' .likewise , a probability is assigned to the answer ` left ' , with the obvious condition . for each mind, the probability will be calculated as follows .the number of all messages he or she understood within a given time is .this set is divided into and , where is the number of understood messages placed on the left part of the square , and n(r ) for the right part .obviously , .then , , where is the mean -coordinate of messages on the right half - plane , and , is the mean absolute value of the -coordinate of all messages .probability is now calculated separately for each mind .what is the probability distribution of itself ?the answer is shown in fig .[ fig:3 ] , for different values of the ability parameter . as we see , both plots preserve the left - right symmetry within the accuracy of statistical errors .for large values of the resulting probability distribution is centred around the value . 
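In simulation, the individual probability p is assembled directly from the x-coordinates of the messages a mind has understood. The weighting used below, p equal to the sum of x over understood messages in the right half-plane divided by the sum of |x| over all understood messages, is one reading of the partially garbled formula above and should be treated as an assumption; the simpler fraction N(R)/N gives the same qualitative picture. The histogram over many independent minds reproduces the single central peak for large d and, as discussed next, two extreme peaks for small d.

```python
import numpy as np

rng = np.random.default_rng(3)

def opinion_probability(d, n_messages=100, a=1.0):
    """Probability of answering 'right' for one mind after n_messages."""
    known = [np.zeros(2)]                        # first understood message at the centre
    for _ in range(n_messages):
        msg = rng.uniform(-a / 2, a / 2, size=2)   # x < 0 is 'left', x > 0 is 'right'
        if min(np.linalg.norm(msg - k) for k in known) < d:
            known.append(msg)
    xs = np.array([k[0] for k in known[1:]])     # x-coordinates of understood messages
    if xs.size == 0 or np.abs(xs).sum() == 0.0:
        return 0.5                               # no information: "I don't know"
    return xs[xs > 0].sum() / np.abs(xs).sum()

for d in (0.5, 0.04):
    ps = [opinion_probability(d) for _ in range(300)]
    hist, _ = np.histogram(ps, bins=10, range=(0.0, 1.0))
    print(f"d = {d}: histogram of p over 300 minds:", hist)
```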
by contrast , for small the distribution consists of two sharp maxima close to and .in other words , in the former case of large ability a statistical mind answers ` left ' and ` right ' with equal probabilities .this is equivalent to the answer `` i do nt know '' , the only reasonable answer because the incoming messages do not provide arguments for a more decisive statement .however and this is our most important result for a society characterized with small ability a statistical mind answers either surely ` left ' , or surely ` right ' . in other words , in the case of small mental ability all opinions are extreme .histogram of the individual probabilities for ( central peak ) and ( binomial curve ) .the small asymmetry of the latter is a statistical fluctuation .the results are averaged over actors , 100 messages and runs ; one run means one set of messages , the same for all actors . here , initial positions of actors ( their first understood messages ) are evenly distributed on the square . ]the model has been further developed to include consequences of interpersonal communication : minds not only hear but also articulate their opinions , which are included to the stream of messages . to mention two main results , we note that an intensive communication leads to a clustering of opinions , which become more extreme even for the case of moderate ability .on the other hand , the latter unanimity disappears if messages are addressed to minds which are neighbours in the square of issues .then , again , the opinions are less extreme .these results lead one to be cautious about situations in which unanimity is treated as good and conflict as evil .alas , in our world unanimity is almost always against somebody else . in that casethe contradistinction is not ` unanimity vs. conflict ' , but rather ` diversity vs. extreme ' . paraphrasing paul graldy, one could say that it is the political party who chooses the man who will choose her .this means that everybody will be chosen by some party . yet a simple `` i do nt know '' seems a good remedy against an extreme ` yes ' or an extreme ` no ' .what is funny ( at least for us ) is that this is the result of a model based on statistical mechanics .k. k. ( born 1952 ) and k. m. ( born 1972 ) have been working together at agh university , krakow , for some 15 years on networks , sociophysics and other interdisciplinary applications of statistical mechanics .k. k. ( ph.d . in physics ,full professor ) teaches nonlinear dynamics and game theory , k. m. ( ph.d . in physics , assistant professor ) teaches cellular automata .both are members of the polish physical society , while k. k. is also member of the polish sociological society .k. m. is an individual member of the european physical society as well .9 k. kuakowski , physica a * 388 * , 469 ( 2009 ) .w. feller , _ an introduction to probability theory and its applications _ , vol .i ( j. wiley and sons , ny 1961 ) .k. malarz , p. gronek , and k. kuakowski , jasss j. artif .s. * 14 * ( 1 ) , 2 ( 2011 ) .k. malarz and k. kuakowski , acta phys . pol .a * 121 * , b-86 ( 2012 ) .
Having equally valid premises _pro_ and _contra_, what does a rational human being prefer? The answer is: nothing. We designed a test of this kind and applied it to an artificial society characterized by a given level of mental ability. A stream of messages from the media is supplemented by ongoing interpersonal communication. The result is that high ability leads to well-balanced opinions, while low ability produces extreme opinions.
hydrodynamic stability is typically studied by the method of linearisation and subsequent modal analysis .this approach considers the asymptotic behaviour of small perturbations to a steady or time - periodic base flow .such asymptotic behaviour is determined by the eigenvalues of a linear operator arising from the analysis , describing the time - evolution of the eigenmodes .many canonical problems , such as flow in a channel , permit such stability analysis to be performed about a velocity field which depends on a single coordinate .however in more complex geometries we can extend the classical hydrodynamic stability analysis to use fully resolved computational stability analysis of the flow field .this is referred to as _ biglobal stability analysis _ or _ direct linear stability analysis _ in analogy to direct numerical simulation ( dns ) .this approach is able to resolve fully the base flow in two or three dimensions and to perform a stability analysis with respect to perturbations in two or three dimensions .this methodology does not need to resort to any approximations beyond the initial linearisation and the imposition of inflow and outflow conditions .in particular the biglobal stability analysis method allows us to consider flows with rapid streamwise variation in two spatial dimensions such as the case of interest , flow over a low pressure turbine blade . by postulating spanwise homogeneous modal instabilities of the form : , asymptotic instability analysis becomes a large scale eigenvalue problem for the modal shape and eigenvalue .this permits use of algorithms and numerical techniques which provide the leading eigenvalues and eigenmodes for the resulting large problems , typically through iterative techniques such as the arnoldi method .this approach is extremely effective at determining absolute instabilities in many complex geometry flows , both open and closed including weakly nonlinear stability .direct linear stability analysis has not been routinely applied to convective instabilities that commonly arise in open domain problems with inflow and outflow conditions .one reason is that such flows are not typically dominated by modal behaviour , but rather by significant growth of transients that can arise owing to the non - normality of the eigenmodes .a large - scale eigenvalue analysis is not designed to detect such behaviour , although for streamwise - periodic flow , it is possible to analyse convective instability through direct linear stability analysis .to examine this situation , hydrodynamic stability analysis has been extended to cover non - modal stability analysis or transient growth analysis .this approach poses an initial value problem to find the linear growth of infinitesimal perturbations over a prescribed finite time interval .much of the initial focus in this area has been on large linear transient amplification and the relationship of this to subcritical transition to turbulence in plane shear flows .this approach was recently employed in backward - facing - step flows . with different emphasis from the present approach ,ehrenstein & gallaire have directly computed modes in boundary - layer flow to analyze transient growth associated with convective instability and hpffner _ et al _ investigated the transient growth of boundary layer streaks . 
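A minimal finite-dimensional illustration of this non-modal amplification may be useful before continuing. For a linear system dx/dt = Lx, the forward evolution operator over an interval tau is the matrix exponential, and the optimal energy growth is the square of its largest singular value, attained by the corresponding right singular vector as initial condition. The 2x2 operator below is purely illustrative: it is asymptotically stable (both eigenvalues have negative real part) yet strongly non-normal, and the Euclidean norm stands in for the energy norm.

```python
import numpy as np
from scipy.linalg import expm, svd

# Illustrative stable but non-normal operator: eigenvalues -0.05 and -0.1,
# with strongly non-orthogonal eigenvectors.
L = np.array([[-0.05, 1.0],
              [ 0.00, -0.1]])

def optimal_growth(tau):
    A = expm(tau * L)            # forward evolution operator A(tau)
    U, s, Vh = svd(A)
    return s[0] ** 2, Vh[0]      # optimal growth G(tau) and optimal initial condition

for tau in (1.0, 5.0, 10.0, 20.0, 40.0):
    G, v_opt = optimal_growth(tau)
    print(f"tau = {tau:5.1f}:  G = {G:8.3f},  optimal IC = {np.round(v_opt, 3)}")
```

Despite modal stability, G rises well above one at intermediate tau before decaying, which is the behaviour that the transient growth analysis described below quantifies for the full linearized Navier-Stokes operator.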
a comparable study to the present work , studied steady and periodic flows past a cylinder .a preliminary study along the lines of the current work , concentrating on a steady base flow , is described in .methods developed for direct linear stability analysis of the navier stokes equations in general geometries have been previously described in detail , and extensively applied .subsequently , large - scale techniques have been extended to the transient growth problem . in a method suitable for such direct optimal growth computations for the linearized navier stokes equations in general geometries was described in detail .the extension to periodic base flows was presented in , for the case of a stenotic / constricted pipe flow .the approach has been applied to steady flows past a low pressure turbine blades , a backward facing step and a cylinder , , and is the method adopted in this study .recent work on the flow past the same t-106/300 low - pressure turbine blade ( lpt ) as used in this study , concentrated on a biglobal stability analysis in order to understand the instability mechanisms in this class of flows . at a reynolds number of 2000 ,the base flow displays periodic shedding .the work imposed periodic boundary conditions which imply synchronous shedding from all blades .relevant to the current study , the work found that these periodic boundary conditions caused the marginally stable flow to go very marginally unstable ( floquet multiplier just over ) at a reynolds number of 2000 and spanwise wavelengths of approximately to , where is the projection of the chord length on the streamwise axis .this instability is understood to be due to strict enforcement of the periodic boundary conditions , which is arguably not physical .relaxing the strict periodicity by using a double - bladed mesh resulted in stable eigenmodes ( ) for all reynold numbers explored , presumably because small subharmonic effects were sufficient to supress synchronous shedding and allow asynchronous shedding .the same geometry is analysed here . for our purposes ,the flow is best characterised as marginally unstable with floquet multiplier ( ) .this work highlights that modal analysis with secondary instabilities does not explain transition in this flow .abdessemed et al also considered the transient growth problem using a steady base flow at a reynolds number of 895 , where significant transient growth up to order was observed .the present study extends this to periodic base flow at a higher reynolds number of 2000 , where the base flow is periodic .the paper is outlined as follows . in section [sec : method ] we outline the direct stability analysis method and the direct transient growth analysis needed to determine the peak growth and the associated perturbations . in section [ sec : results ] we present results for transient growth about a periodic base flow , considering in turn variation by spanwise wavelength , period and starting point in the base flow phase .the flow over the blade is governed by the incompressible navier stokes equations , written in non - dimensional form as [ eq : fullnse ] where (x , y , z , t)$ ] is the velocity field , is the kinematic ( or modified ) pressure field and is the flow domain illustrated in figure [ fig : mesh ] . 
in what followswe define reynolds number as , with being the inflow velocity magnitude , the projection of the axial blade chord on the streamwise axis , and the kinematic viscosity .thus , we non - dimensionalise using as the length scale , as a velocity scale , so the time scale is then . in the present workall numerical computations of the base flows , whose two and three - dimensional energy growth characteristics we are interested in , will exploit the homogeneity in and require only a two - dimensional computational domain .we first consider a base flow about which we wish to study the linear stability .the base flows for this problem are two - dimensional , time - dependent flows that obey equations [ eq : fullnse ] with and is defined as the associated base - flow pressure .the boundary conditions imposed on in the base flow equations are uniform velocity at the inflow , fully developed ( ) at the outflow , periodic connectivity at the lower and upper boundaries and no - slip conditions at the blade surface .our interest is in the evolution of infinitesimal perturbations to the base flows .the linearized navier stokes equations governing these perturbations are found by substituting where is the pressure perturbation , into the navier stokesequations and keeping the lowest order ( linear ) terms in .the resulting equations are [ eq : pert_full ] these equations are to be solved subject to appropriate initial conditions and the boundary conditions .the initial condition is an arbitrary incompressible flow which we denote by , i.e. . the boundary conditions we consider are homogeneous dirichlet on all boundaries , _ i.e. _ . as discussed in , such homogeneous dirichlet boundary conditionssimplify the treatment of the adjoint problem because they lead to corresponding homogeneous dirichlet boundary conditions on the adjoint fields .we note that the action of equations ( [ eq : pert_full ] ) ( a ) and ( b ) on an initial perturbation over time interval may be stated as the modal decomposition of this forward evolution operator determines the asymptotic stability of the base flow . in this casethe solution is proposed to be the sum of eigenmodes , and we obtain the eigenvalue problem since for the case of interest is -periodic , we set and consider this as a temporal floquet problem , in which case the are floquet multipliers and the eigenmodes of are the -periodic floquet modes evaluated at a specific temporal phase .our primary interest is in the energy growth of perturbations over an arbitrary time interval , .we treat and as parameters to be varied in this study . as is conventional we define transient growth with respect to the energy norm of the perturbation flow , derived from the inner product where is the kinetic energy per unit mass of a perturbation , integrated over the full domain .the transient energy growth over interval is where we introduce , the adjoint of the forward evolution operator in .the action of is obtained by integrating the adjoint linearized navier stokes equations [ eq.adj ] backwards in time over interval . the action of the symmetric component operator on is obtained by serial time integration of and , _i.e. 
_ we first use to initialise the integration of forwards in time over interval , then use the outcome to initialise the integration of backwards in time over the same interval .the optimal perturbation is the eigenfunction of corresponding to the compound operator s dominant eigenvalue , and so we seek the dominant eigenvalues and eigenmodes of the problem we use to denote the maximum energy growth obtainable at time from initial time , while the global maximum is denoted by .specifically , we note that the eigenfunctions correspond to right singular vectors of operator , while their ( -normalised ) outcomes under the action of are the left singular vectors , _i.e. _ where the sets of vectors and are each orthonormal .the singular values of are , where both and are real and non - negative .while long - time asymptotic growth is determined from the eigenvalue decomposition , optimal transient growth is described in terms of the singular value decomposition . specifically , the optimal initial condition and its ( normalised ) outcome after evolution over time are respectively the right and left singular vectors of the forward operator corresponding to the largest singular value .the square of that singular value is the largest eigenvalue of and is the optimal energy growth . as already mentioned for an open flow , the most straightforward perturbation velocity boundary conditions to apply on both the inflow and outflow are homogeneous dirichlet , _i.e. _ , for both the forward and adjoint linearized navier stokes equations .the primitive variable , optimal growth formulation adopted in this work is discussed in further detail in and follows almost directly from the treatments given by for strictly parallel or weakly non - parallel basic states .macro element mesh made up of approximately 2000 elements _( left ) _ full mesh , _ ( inset ) _ enlarged view around the blade and _ ( right ) _ the double blade configuration .within each element drawn , a polynomial expansion of order is applied ] spectral/ elements are used for spatial discretisation , coupled with a fourier decomposition in the homogeneous direction .time integration is carried out using a velocity - correction scheme .the same discretisation and time integration schemes are used to compute base flows , and the actions of the forward and adjoint linearised navier stokes operators .the base flows are pre - computed and stored as data for the transient growth analysis in the form of time - slices .the base flow over one period of the evolution is reconstructed as required using fourier interpolation . 
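the singular - value characterisation of optimal growth described above can be made concrete with a small stand - alone sketch . in the actual computations the forward operator is available only through time integration of the linearised and adjoint equations , so the explicit matrix exponential below is purely illustrative of the linear algebra ( python , with numpy and scipy assumed available ; all names are local to this example ) :

import numpy as np
from scipy.linalg import expm

# toy linear system dx/dt = L x with a non-normal, asymptotically stable L;
# non-normality is what allows transient energy growth despite stable eigenvalues
L = np.array([[-0.05, 1.00],
              [ 0.00, -0.30]])

def optimal_growth(A):
    # largest singular value squared of the propagator A = optimal energy growth;
    # the corresponding right/left singular vectors are the optimal initial
    # condition and its normalised outcome
    U, s, Vt = np.linalg.svd(A)
    return s[0] ** 2, Vt[0], U[:, 0]

for tau in (1.0, 5.0, 10.0, 20.0):
    A = expm(tau * L)                      # forward evolution operator A(tau)
    G, v_opt, u_opt = optimal_growth(A)
    print(f"tau = {tau:5.1f}   G(tau) = {G:8.3f}")
# G(tau) rises well above 1 at intermediate tau and decays for large tau,
# mimicking the transient growth of a marginally stable base flow.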
as a check, the results for and were repeated with 16 time slices and found to differ by under .figure [ fig : mesh ] shows the computational domain for the t-106/300 low pressure turbine blade .the blade geometry is approximated by a cubic b - spline interpolation over 200 points to give a smooth flow surface .the hybrid mesh consists of approximately 2000 elements , 270 structured elements for the boundary layer around the blade surface and an unstructured mesh for the remainder of the field .the elements each have a polynomial order of , with degrees of freedom for the quadrilateral elements and for the triangular elements .comparison with higher polynomial - order results showed that computations at the chosen polynomial order are sufficient to resolve the eigenvalue to about .+ + + + + + + + to calculate the base flow around which we apply perturbations , a two - dimensional dns was performed .the two - dimensional periodic base flow is calculated at a reynolds number .the present study considers two and three - dimensional linear perturbations to this base flow .spatial periodicity between the planes has been assumed when computing the flow around one single blade .the imposition of zero dirichlet boundary conditions on the outflow for all perturbation computations implies a natural limit to the integration period .care was therefore be taken that the solution does not depend on the boundary conditions , in particular the outflow conditions .this was ensured by comparing results using an extended domain of twice the length to ensure independence of boundary conditions and solution for the cases under study . doubling the domain length ( for the case )changed the result for by less than , for by less than and for by less than .all these results are within plotting accuracy where presented . in the case of ,the results for the extended domain were presented .throughout the study the reynolds number is fixed at 2000 , corresponding to a periodic base flow ( after the first hopf bifurcation at re=905 ) .the shedding period of the base flow is and the phase of the initial condition in the base flow shedding cycle is fixed at as depicted in figure [ fig : baseflow ] and as defined in the previous section , unless otherwise specified . as indicated in the introduction , the flow is best characterised as marginally unstable with floquet multiplier ( ) .the dependence of growth on and is shown in figure [ fig : t_lz_surf ] and the same data is shown for in figure [ fig : t_beta_surf ] .we find that , while for spanwise wavelengths above about , the growth achievable depends little on spanwise wavelength , below , the optimal growth in a given is limited , particularly for longer times .as tending to infinity represents a purely two - dimensional problem , it appears that it is the two - dimensional perturbations that dominate potential instabilities at this reynolds number .growth of about over is achievable for . for longer timesthe disturbance has convected too far downstream to be of interest . to examine the case of shorter optimal disturbances we take and vary as a parameter .figure [ fig : lz , t0.25,t00 ] shows the first two leading growth values associated to the two most significant optimum modes .it can be seen that the growth is moderate and concentrated at short .the maximum occurs at for this integration time . 
for slightly longer times ( and above ) the growth rises with and then flattens out , as shown in figure [ fig : lz , t1.00,t00 ] . this effect is also evident in figure [ fig : lz , t16.00,t00 ] . taking the peak of the first mode at ( see figure [ fig : lz , t1.00,t00 ] ) and varying produces the results shown in figure [ fig : t , lz1.95,t00 ] . again , strong growth is seen up to , after which a plateau is observed as the disturbance aligns with the least stable eigenmode ( nearly marginally stable ) . the optimal mode for and is shown in figure [ fig : t01.00,lz01.95,t00,s1 ] ; it shows a disturbance beginning at the trailing edge and the shear layer exciting the near wake . the spanwise - constant optimal mode for is shown in figure [ fig : t08.00,beta0,t00,s1 ] . in contrast to the short mode shown in the left panel of figure [ fig : t01.00,lz01.95,t00,s1 ] , the wake mode is excited some distance downstream . although the initial disturbance also involves separation at the shear layer and a disturbance at the trailing edge , it extends further up the blade surface . figure [ fig : t01.00,lz01.95,t00,3d ] shows iso - surfaces of streamwise vorticity at of the same mode combined with the base flow . [ figure captions : optimal modes at zero initial phase , showing spanwise vorticity ; the right - hand plots show the modes associated with the second singular value ; the three - dimensional views show iso - surfaces of streamwise vorticity of the mode combined with the base flow ; one panel shows the 2d case . ] the preceding discussion has assumed an initial phase of ( relating to the point in the shedding cycle ) at as defined in figure [ fig : baseflow ] . an investigation was also carried out to find the effects on the maximum attainable growth of varying this initial phase . at short growth horizons , the growth achieved is only weakly dependent on and is achieved just before , as can be seen in figure [ fig : t0_surfs ] . furthermore , the figure also shows that the dependence on initial phase remains weak over a range of integration times . we conclude from the figure that the dependence of growth on the initial phase is relatively unimportant . the relevant previous study demonstrated that although subharmonic effects were small , they were sufficient to induce asynchronous shedding and prevent the flow from going unstable . for the present problem we do not consider the asymptotic or long - time behaviour of perturbations but their behaviour over a shorter time horizon . results for the double - bladed mesh were tested for times up to eight shedding periods and agree well with the single - blade situation .
at these longer times ,consideration of the least stable eigenmode would be more suitable .this leads to the conclusion that subharmonic effects are relatively unimportant in the transient growth problem and the assumed periodic boundary condition is a valid means to reduce the computational domain for the problem under study .the transient behaviour of perturbations to linearised flow past a periodic array of t-106/300 low pressure turbine fan blade was investigated .the analysis was carried out at a reynolds number of 2000 associated with periodic vortex shedding , used as a periodic base flow .it is known from asymptotic analysis , the flow past an array of these turbine blades is marginally stable .the current analysis shows that long wavelength optimal perturbations associated with long time integration periods convect far downstream and eventually align with the asymtotically least stable eigenmode .the discovery of converging optimum growth for long integration times confirms the lack of a strong asymptotic instability .it is found that the long - wavelength perturbations tend toward a purely two - dimensional case and that these perturbations are associated to maximum optimum growth in the asymptotic case approached by long time - integration .however , as we already know from two - dimensional dns , the two - dimensional baseflow is stable for nonlinear flows and therefore also stable in a linear sense .it therefore may be assumed that the identified optimum growth associated to long wavelength perturbations is less significant than perturbations that are limited in spanwise wavelength . when considering short integration times it is indeed found that short - wavelength perturbations are of higher significance .short integration times have maximum optimal growth at shorter spanwise wavelengths and convect only a short distance downstream , exciting the near wake .we might hypothesise that in the presence of the neglected nonlinearity , the associated optimum modes cause shear layer separation , triggering wake instability .this would have to be demonstrated in a full nonlinear dns .furthermore , the spanwise length of a real lpt blade is naturally limited , giving an additional reason why short wavelength perturbation are more important than long wavelength disturbances of theoretical spanwise extent .our results are consistent with the understanding that transient growth mechanisms are associated with shear in the base flow feeding the perturbation energy growth .the results presented are consistent therefore with the cylinder results reported in the literature ( both on transient growth mechanisms and on receptivity via the adjoint ) and with the previous work on the lpt fan blade used in this study .it is hoped that the understanding developed will prove useful in controlling laminar boundary layer separation with a view to improving performance .for instance , the spatially periodic pattern in the shear layer can be quantified in the time - domain indicating disturbance frequencies susceptible to optimum amplification .a sharma wishes to thank the uk engineering and physical sciences research council ( epsrc ) for their support and s sherwin wishes to acknowledge financial support from the epsrc advanced research fellowship .partial support has been received by the air force office of scientific research , under grant no .f49620 - 03 - 1 - 0295 to nu - modelling s.l . , monitored by dr t. beutner ( now at darpa ) , lt col dr r. jefferies and dr j. d. 
schmisseur of afosr and dr s. surampudi of the european office of aerospace research and development . theofilis , v. , fedorov , a. , obrist , d. , dallmann , u.c . : the extended görtler - hämmerlin model for linear instability of three - dimensional incompressible swept attachment - line boundary layer flow . j. fluid mech . * 487 * , 271 - 313 ( 2003 ) . tuckerman , l. , barkley , d. : bifurcation analysis for timesteppers . in : e. doedel , l. tuckerman ( eds . ) numerical methods for bifurcation problems and large - scale dynamical systems , vol . 543466 . springer , new york ( 2000 ) .
a direct transient growth analysis for two dimensional , three component perturbations to flow past a periodic array of t-106/300 low pressure turbine fan blades is presented . the methodology is based on a singular value decomposition of the flow evolution operator , linearised about a steady or periodic base flow . this analysis yields the optimal growth modes . previous work on global mode stability analysis of this flow geometry showed the flow is asymptotically stable , indicating a non - modal explanation of transition may be more appropriate . the present work extends previous investigations into the transient growth around a steady base flow , to higher reynolds numbers and periodic base flows . it is found that the notable transient growth of the optimal modes suggests a plausible route to transition in comparison to modal growth for this configuration . the spatial extent and localisation of the optimal modes is examined and possible physical triggering mechanisms are discussed . it is found that for longer times and longer spanwise wavelengths , a separation in the shear layer excites the wake mode . for shorter times and spanwise wavelengths , smaller growth associated with excitation of the near wake are observed .
in a variety of applications researchers are interested in comparing two treatment groups on the basis of several , potentially dependent outcomes .for example , to evaluate if a chemical is a neuro - toxicant , toxicologists compare a treated group of animals with an untreated control group in terms of various correlated outcomes such as tail - pinch response , click response and gait score , etc . ;the statistical problem of interest is to compare the multivariate distributions of the outcomes in the control and treatment groups .moreover , the outcome distributions are expected to be ordered in some sense .the theory of stochastic order relations [ ] provides the theoretical foundation for such comparisons . to fix ideas let and be -dimensional random variables ( rvs ) ; is said to be smaller than in the multivariate stochastic order , denoted , provided for all upper sets [ ] .if for some upper set the above inequality is sharp , we say that is strictly smaller than ( in the multivariate stochastic order ) which we denote by . recall that a set is called an upper set if implies that whenever , that is , if , .note that comparing and with respect to the multivariate stochastic order requires comparing their distributions over all upper sets in .this turns out to be a very high - dimensional problem .for example , if and are multivariate binary rvs , then provided where and are the corresponding probability mass functions . here where is the family of upper sets defined on the support of a -dimensional multivariate binary rv .it turns out that the cardinality of , denoted by , grows super - exponentially with .in fact , , , , and .the values of and are also known , but is not . however , good approximations for are available for all ; cf .obviously the number of upper sets for general multivariate rvs is much larger . since in many applications is large , it would seem that the analysis of high - dimensional stochastically ordered data is practically hopeless . as a consequence ,the methodology for analyzing multivariate ordered data is underdeveloped .it is worth mentioning that as well as studied stochastically ordered bivariate multinomial distributions .they noted the difficulty of extending their methodology to high - dimensional data due to the large number of constraints that need to be imposed .recently proposed a framework for testing for order among , -dimensional , ordered multivariate binary distributions . in this paperwe address the dimensionality problem by considering an easy to understand stochastic order which we refer to as the linear stochastic order .[ def - lst]the rv is said to be smaller than the rv in the ( multivariate ) linear stochastic order , denoted , if for all where in ( [ l - st ] ) denotes the usual ( univariate ) stochastic order .note that it is enough to limit ( [ l - st ] ) to all nonnegative real vectors satisfying , and accordingly we denote by the positive part of the unit sphere in .we call each a `` direction . 
'' in other words the rvs and are ordered by the linear stochastic order if every nonnegative linear combination of their components is ordered by the usual ( univariate ) stochastic order .thus instead of considering all upper sets in we need for each to consider only upper sets in .this is a substantial reduction in dimensionality .in fact we will show that only one value of need be considered .note that the linear stochastic order , like the multivariate stochastic order , is a generalization of the usual univariate stochastic order to multivariate data .both of these orders indicate , in different ways , that one random vector is more likely than another to take on large values . in this paperwe develop the statistical theory and methodology for estimation and testing for linearly ordered multivariate distributions . for completeness we note that weaker notions of the linear stochastic order are discussed by and applied to various optimization problems in queuing and finance . comparing linear combinationshas a long history in statistics .for example , in phase i clinical trials it is common to compare dose groups using an overall measure of toxicity . typically , this quantity is an ad hoc weighted average of individual toxicities where the weights are often known as `` severity weights ; '' cf . and . this strategy of dimension reduction is not new in the statistical literature and has been used in classical multivariate analysis when comparing two or more multivariate normal populations .for example , using the union - intersection principle , the comparison of multivariate normal populations can be reduced to the comparison of all possible linear combinations of their mean vectors .this approach is the basis of roy s classical largest root test [ , ] .our proposed test may be viewed as nonparametric generalization of the classical normal theory method described above with the exception that we limit consideration only to nonnegative linear combinations ( rather than all possible linear combinations ) since our main focus is to make comparisons in terms of stochastic order .we emphasize that the linear stochastic order will allow us to address the much broader problem of directional ordering for multivariate ordered data , that is , to find the direction which best separates two ordered distributions . based on our survey of the literature, we are not aware of any methodology that addresses the problems investigated here .this paper is organized in the following way . in section [ sec2 ] some probabilistic properties of the linear stochastic orderare explored , and its relationships with other multivariate stochastic orders are clarified . 
in section [ sec3 ]we provide the background and motivation for directional inference under the linear stochastic order and develop estimation and testing procedure for independent as well as paired samples .in particular the estimator of the best separating direction is presented and its large sampling properties derived .we note that the problem of estimating the best separating direction is a nonsmooth optimization problem .the limiting distribution of the best separating direction is derived in a variety of settings .tests for the linear stochastic order based on the best separating direction are also developed .one advantage of our approach is that it avoids the estimation of multivariate distributions subject to order restrictions .simulation results , presented in section [ sec4 ] , reveal that for large sample sizes the proposed estimator has negligible bias and mean squared error ( mse ) .the bias and mse seem to depend on the true value of the best separating direction , the dependence structure and the dimension of the problem .furthermore , the proposed test honors the nominal type i error rate and has sufficient power . in section [ sec5 ]we illustrate the methodology using data obtained from the national toxicology program ( ntp ) .concluding remarks and some open research problems are provided in section [ sec6 ] .for convenience all proofs are provided in the where additional concepts are defined when needed .we start by clarifying the relationship between the linear stochastic order and the multivariate stochastic order .first note that if and only if for all which is equivalent to for all where is the collection of all upper half - planes , that is , sets which are both half planes and upper sets . thus . the converse does not hold in general .let and be bivariate rvs such that and .it is easy to show that is smaller than in the linear stochastic order but not in the multivariate stochastic order .the following theorem provides some closure results for the linear stochastic order .[ thm - closure ] if , then for any affine increasing function ; if , then for each subset if for all in the support of , then if are independent rvs with dimensions and similarly for and if in addition , then ; finally , if and where convergence can be in distribution , in probability or almost surely and if for all , then .theorem [ thm - closure ] shows that the linear stochastic order is closed under increasing linear transformations , marginalization , mixtures , conjugations and convergence . in particular parts ( ii ) and ( iii ) of theorem [ thm - closure ]imply that if , then and for all and ; that is , all marginals are ordered as are all convolutions .although the multivariate stochastic order is in general stronger than the linear stochastic order , there are situation in which both orders coincide .[ thm - ervs]let and be continuous elliptically distributed rvs supported on with the same generator . then if and only if .note that the elliptical family of distributions is large and includes the multivariate normal , multivariate and the exponential power family ; see .thus theorem [ thm - ervs ] shows that the multivariate stochastic order coincides with the linear stochastic order in the normal family .incidentally , in the proof of theorem [ thm - ervs ] we generalize the results of on multivariate stochastic ordering of elliptical rvs .another interesting example is the following : [ thm - mvbs]let and be multivariate binary rvs .then is equivalent to if and only if . 
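the combinatorial burden mentioned in the introduction , namely the super - exponential growth of the number of upper sets of the support of a p - dimensional binary random vector , is easy to verify by brute force for very small p . the following sketch ( python ; illustrative only , and feasible only for p <= 4 since it enumerates all 2^(2^p) subsets ) returns the dedekind numbers 3 , 6 , 20 , 168 for p = 1 , ... , 4 ; the paper's convention for the count may differ by whether the trivial upper sets are included :

from itertools import product

def count_upper_sets(p):
    points = list(product((0, 1), repeat=p))     # the binary cube {0,1}^p
    n = len(points)

    def dominates(y, x):                         # y >= x componentwise
        return all(yi >= xi for yi, xi in zip(y, x))

    count = 0
    for mask in range(1 << n):                   # every subset of the cube
        subset = {points[i] for i in range(n) if (mask >> i) & 1}
        # upper set: x in U and y >= x together imply y in U
        if all((not dominates(y, x)) or (y in subset)
               for x in subset for y in points):
            count += 1
    return count

for p in range(1, 5):
    print(p, count_upper_sets(p))                # 3, 6, 20, 168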
in the proof of theorem [ thm - ervs ] distributional properties of the elliptical family play a major role .in contrast , theorem [ thm - mvbs ] is a consequence of the geometry of the upper sets of multivariate binary rvs which turn out to be upper half planes if and only if .we now explore the role of the dependence structure .[ thm - copula]let and have the same copula .then if and only if .theorem [ thm - copula ] establishes that if two rvs have the same dependence structure as quantified by their copula function [ cf . ] , then the linear and multivariate stochastic orders coincide .such situations arise when the correlation structure among outcomes is not expected to vary with dose .the orthant orders are also of interest in statistical applications .we say that is smaller than in the upper orthant order , denoted , if for all where is the collection of upper orthants , that is , sets of the form for some fixed .the lower orthant order is similarly defined ; cf . or .it is obvious that the orthant orders are weaker than the usual multivariate stochastic order , that is , and . in general the linear stochastic order does not imply the upper ( or lower ) orthant order , nor is the converse true .however , as stated below , under some conditions on the copula functions , the linear stochastic order implies the upper ( or lower ) orthant order .[ thm - ort]if and for all ^{p} ] , then . note that and above are the copula and tail - copula functions for the rv [ cf .joe ( ) ] and are defined in the and similarly for and .further note that the relations and/or indicate that the components of are more strongly dependent than the components of .this particular dependence ordering is known as positive quadrant dependence .it can be further shown that strong dependence and the linear stochastic order do not in general imply stochastic ordering .additional properties of the linear stochastic order as they relate to estimation and testing problems are given in section [ sec - dir ] .[ sec3 ] there exists a long history of well - developed theory for comparing two or more multivariate normal ( mvn ) populations . methods for assessing whether there are any differences between the populations [ which differ ?in which component(s ) ? and by how much ? ] have been addressed in the literature using a variety of simultaneous confidence intervals and multiple comparison methods ; cf . . of particular interest to us is roy s largest root test . to fix ideas consider two multivariate normal random vectors and with means and , respectively , and a common variance matrix . using the union - intersection principle expressed the problem of testing versus as a collection of univariate testing problems , by showing that and are equivalent to and where and .implicitly roy s test identifies the linear combination that corresponds to the largest `` distance '' between the mean vectors , that is , the direction which best separates their distributions . the resulting test , known as roy s largest root test , is given by the largest eigenvalue of where is the matrix of between groups ( or populations ) sums of squares and cross products , and is the usual unbiased estimator of . in the special case when there are only two populations , this test statistic is identical to hotelling s statistic . 
from the simultaneous confidence intervals point of view , the critical values derived from the null distribution of this statistic can be used for constructing scheffe s simultaneous confidence intervals for all possible linear combinations of the difference . further note that the estimated direction corresponding to roy s largest root test is where and are the respective sample means .our objective is to extend and generalize the classical multivariate method , described above , to nonnormal multivariate ordered data. our approach will be nonparametric . recall that comparing mvns is done by considering the family of statistics for all .in the case of nonnormal populations , the population mean alone may not be enough to characterize the distribution .in such cases , it may not be sufficient to compare the means of the populations but one may have to compare entire distributions .one possible way of doing so is by considering rank statistics .suppose and are independent random samples from two multivariate populations .let be the rank of in the combined sample . for fixed the distributions of and can be compared using a rank test .for example , if we use our comparison is done in terms of wilcoxon s rank sum statistics .it is well known that rank tests are well suited for testing for univariate stochastic order [ cf . , ] where the restrictions that must be made .although any rank test can be used , the mann whitney form of wilcooxon s ( wmw ) statistic is particularly attractive in this application .therefore in the rest of this paper we develop estimation and testing procedures for the linear stochastic order based on the family of statistics where varies over .note that ( [ psi - nm ] ) unbiasedly estimates the following result is somewhat surprising .[ prop - smax]let and be independent mvns with means and common variance matrix .then roy s maximal separating direction also maximizes .proposition [ prop - smax ] shows that the direction which separates the means , in the sense of roy , also maximizes ( [ psi ] ) .thus it provides further support for choosing ( [ psi - nm ] ) as our test statistic . note that in general may not belong to .since we focus on the linear statistical order , we restrict ourselves to .consequently we define and refer to as the best separating direction . further note that if and are independent and continuous and if , then for all .this simply means that tends to be smaller than more than of the time .note that probabilities of type ( [ psi ] ) were introduced by and further studied by for comparing estimators .random variables satisfying such a condition are said to be ordered by the precedence order [ ] .once is estimated we can plug it into ( [ psi - nm ] ) to get a test statistic .hence our test may be viewed as a natural generalization of roy s largest root test from mvns to arbitrary ordered distributions . however , unlike roy s method , which does not explicitly estimate , we do . on the other handthe proposed test does not require the computation of the inverse of the sample covariance matrix whereas roy s test and hotteling s test require such computations .consequently , such tests can not be used when whereas our test can be used in all such instances . 
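for reference , the displayed definitions referred to above as eqs . ( [ psi - nm ] ) and ( [ psi ] ) were lost in extraction ; given that ( [ psi - nm ] ) is described as the mann whitney form of wilcoxon's statistic applied to the projections , they are presumably of the form ( a reconstruction )

\[
\psi_{n,m}(\mathbf{s}) \;=\; \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m}
   I\{ \mathbf{s}^{T}\mathbf{X}_{i} \le \mathbf{s}^{T}\mathbf{Y}_{j} \} ,
\qquad
\psi(\mathbf{s}) \;=\; \mathbb{P}\bigl( \mathbf{s}^{T}\mathbf{X} \le \mathbf{s}^{T}\mathbf{Y} \bigr) ,
\]

with the best separating direction the maximiser of \psi over the positive part of the unit sphere . likewise , the estimated direction attributed above to roy's largest root test is , in the two - sample normal setting , presumably proportional to the pooled - covariance - whitened mean difference , \hat{\mathbf{s}} \propto S^{-1}(\bar{\mathbf{y}} - \bar{\mathbf{x}}) ( again a reconstruction of the lost display ) .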
in the above description and independent for all and and therefore the probability is independent of both and .however , in many applications such as repeated measurement and crossover designs , the data are a random sample of dependent pairs for which are i.i.d .for example , such a situation may arise when and , where are pair - specific random effects and the rvs ( as well as ) are i.i.d .in this situation is independent of and is well defined .moreover the objective function analogous to ( [ psi - nm ] ) is in the following we consider both sampling designs which we refer to as : ( a ) independent samples and ( b ) paired or dependent samples .results are developed primarily for independent samples , but modification for paired samples are mentioned as appropriate .consider first the case of independent samples , that is , and are random samples from the two populations .rewrite ( [ psi - nm ] ) as where .the maximizer of ( [ psi - nm - z ] ) is denoted by , that is, finding ( [ s - max - hat ] ) with is a nonsmooth optimization problem .consider first the situation where . in this casewe maximize ( [ psi - nm - z ] ) subject to .geometrically is a quarter circle spanning the first quadrant .now let , and without any loss of generality assume that .we examine the behavior of the function as a function of . clearly if , that is , if , then for all we have . in other words any value of on the arc maximizes . similarly if then for all we have and again the entire arc maximizes .now let and .it follows that provided .thus for all on the arc ] .the value of is given by ( [ z->theta ] ) .in other words each is mapped to an arc on as described above .now , the function ( [ psi - nm - z ] ) simply counts the number of arcs covering each .the maximizer of ( [ psi - nm - z ] ) lies in the region where the maximum number of arcs overlap .clearly this implies that the maximizer of ( [ psi - nm - z ] ) is not unique . a quick way to findthe maximizer is the following : [ alg - max2]let denote the number of s which belong to the second or fourth quadrant . map and order the resulting angles as }<\cdots < \theta _ { [ m]} ] and }=\pi/2 ] where ,1}=\cos(\theta _ { [ i]}) ] .if a maximum is attained at } ] or },\theta _ { [ j+1]}] ] on where is a quadratic function and is a zero mean gaussian process described in the body of the proof .theorem [ them - lst3 ] shows that in paired samples is consistent , but in contrast with theorem [ them - lst1 ] it converges at a cube - root rate to a nonnormal limit .the cube root rate is due to the discontinuous nature of the objective function ( [ psi - n ] ) .general results dealing with this kind of asymptotics for independent observations are given by .the main difference between theorems [ them - lst1 ] and [ them - lst3 ] is that the objective function ( [ psi - nm ] ) is smoothed by its -statistic structure while ( [ psi - n ] ) is not .since the parameter space is the surface of a unit sphere it is natural to define the confidence set for centered at by where satisfies . for more detailssee or .hence the confidence set is the set of all which have a small angle with . in theory onemay appeal to theorem [ them - lst1 ] to derive the critical value for any .however the limit law in theorem [ them - lst1 ] requires knowledge of unknown parameters and functions . 
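returning to the maximisation step , for p = 2 the search over directions reduces to a one - dimensional sweep over the boundary angles of the arcs described in algorithm [ alg - max2 ] . the sketch below ( python ; a plain re - implementation of the idea for illustration , not the authors' code ) works from the pooled difference vectors z ( all y_j - x_i for independent samples , or y_i - x_i for paired data ) and returns an angle attaining the maximum of the empirical objective :

import numpy as np

def best_direction_2d(z):
    # z: (k, 2) array of difference vectors; returns (theta, value) with
    # theta in [0, pi/2] maximising the fraction of z's satisfying s.z >= 0,
    # where s = (cos theta, sin theta)
    z = np.asarray(z, float)
    cand = [0.0, np.pi / 2]
    for z1, z2 in z:
        if z1 * z2 < 0:                  # arc boundary falls inside (0, pi/2)
            cand.append(float(np.arctan(-z1 / z2)))
    cand = np.sort(np.array(cand))
    # the objective is piecewise constant between boundaries, so evaluating it
    # at the boundaries and at interval midpoints is enough
    probe = np.concatenate([cand, (cand[:-1] + cand[1:]) / 2])
    s = np.column_stack([np.cos(probe), np.sin(probe)])
    frac = (z @ s.T >= 0).mean(axis=0)
    k = int(np.argmax(frac))
    return float(probe[k]), float(frac[k])

rng = np.random.default_rng(1)
x = rng.normal(size=(40, 2))
y = rng.normal(size=(50, 2)) + np.array([0.8, 0.2])    # y shifted upwards
z = (y[None, :, :] - x[:, None, :]).reshape(-1, 2)     # all pairwise differences
theta_hat, psi_hat = best_direction_2d(z)
print("theta_hat =", round(theta_hat, 3), " psi_hat =", round(psi_hat, 3))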
for this reason, we explore the bootstrap for estimating .since in the case of paired samples , the estimator converges at cube root rate rather than the square root rate , the standard bootstrap methodology may yield inaccurate coverage probabilities ; see and .for this reason we recommend the `` m out of n '' bootstrap methodology .for further discussion on the `` m out of n '' bootstrap methodology one may refer to , , .consider first the case of independent samples where interest is in testing the hypothesis thus ( [ h0vsh1 ] ) tests whether the distributions of and are equal or ordered ( later on we briefly discuss testing versus ) . in this sectionwe propose a new test for detecting an ordering among two multivariate distributions based on the maximal separating direction .the test is based on the following observation : [ thm - teststat]let and be independent and continuous rvs . if, then for all , and if both and for some hold , then .theorem [ thm - teststat ] says that if it is known a priori that , that is , the rvs are either equal or ordered [ which is exactly what ( [ h0vsh1 ] ) implies ] , then a strict linear stochastic ordering implies a strict ordering by the usual multivariate stochastic order . in particular under the alternative there must exist a direction for which .[ rmtestphilosophy - i]the assumption that is natural in applications such as environmental sciences where high exposures are associated with increased risk . nevertheless if the assumption that is not warranted then the alternative hypothesis formulated in terms of the linear stochastic order actually tests whether there exists a for which .this amounts to a precedence ( or pitman ) ordering between and .[ rmtestphilosophy - ii]in the proof of theorem [ thm - teststat ] we use the fact that given that we have provided for some . note that if , then .thus it is possible to test ( [ h0vsh1 ] ) by comparing means ( or any other monotone function of the data ) .although such a test will be consistent it may lack power because tests based on means are often far from optimal when the data is not normally distributed . the wmw procedure , however , is known to have high power for a broad collection of underlying distributions . hence ( [ h0vsh1 ] ) can be reformulated in terms of the linear stochastic .in particular it justifies using the statistic to the best of our knowledge this is the first general test for multivariate ordered distributions . in practicewe first estimate and then define and where and .hence ( [ snm ] ) is nothing but a wmw test based on the s and s .it is also a kolmogorov smirnov type test . the large sample distribution of ( [ snm ] ) is given in the following .[ thm - testlimit]suppose the null ( [ h0vsh1 ] ) holds .let and .then where is a zero mean gaussian process with covariance function given by ( [ cu , v - h0 ] ) .since by slutzky s theorem the power of test ( [ snm ] ) converges to the power of a wmw test comparing the samples and .the `` synthetic '' test , assuming that is known , serves as a gold standard as verified by our simulation study .furthermore , the power of the test under local alternatives , that is , when and is bounded by the power of the wmw test comparing the distributions of and . 
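once a direction has been fixed , the statistic described above is just a one - sided wilcoxon mann whitney comparison of the projected samples , for which the scipy implementation can be used directly . a minimal sketch ( python ) ; note that when the direction has itself been estimated from the same data , the nominal p - value below ignores the selection step , and the critical values implied by the limit theorem ( or a resampling scheme ) should be used instead :

import numpy as np
from scipy.stats import mannwhitneyu

def projected_wmw(x, y, s):
    # project both samples on the direction s and test 'x stochastically
    # smaller than y' with a one-sided wilcoxon-mann-whitney test
    s = np.asarray(s, float)
    s = s / np.linalg.norm(s)
    return mannwhitneyu(x @ s, y @ s, alternative="less")

rng = np.random.default_rng(2)
x = rng.normal(size=(40, 3))
y = rng.normal(size=(50, 3)) + 0.5          # y tends to be larger
print(projected_wmw(x, y, s=np.ones(3)))    # placeholder direction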
alternatives to the `` sup '' statistic ( [ snm ] ) are the `` integrated '' statistics\,d\mathbf{s } \quad\mbox{and}\nonumber\\[-8pt]\\[-8pt ] i_{n , m}^{+}&=&\int_{\mathbf{s}\in\mathcal{s}_{+}^{p-1 } } \bigl[n^{1/2}\bigl(\psi_{n , m}(\mathbf{s})-1/2\bigr ) \bigr]_{+}\,d\mathbf{s},\nonumber\end{aligned}\ ] ] where {+}=\max ( 0,x ) ] is also an upper set and \subseteq [ s-\bolds{\mu}^{\prime}] ] where } ] , , the value of ( [ psi - nm - pf ] ) may increase or decrease .it follows that for all } ) , \ldots,\psi_{n , m}^{\prime}(\mathbf{s } _ { [ m+1 ] } ) \} ] are defined in algorithm [ alg - max2 ] .therefore the maximum value of ( [ psi - nm ] ) is an element of the above list .now suppose that } ] or } ) = \psi_{n , m}^{\prime } ( \mathbf{s } _ { [ i+1 ] } ) ] or } , \theta _ { [ i+1 ] } ] ] for all and , so by the strong law of large numbers and both converge to zero with probability one .now, the set is compact , and the function is continuous in for all values of and bounded [ in fact .thus the conditions in theorem 3.1 in are satisfied , and it follows that as . similarly as .since is bounded all its moments exist ; therefore from theorem 5.3.3 in we have that with probability one .moreover it is clear that the latter holds uniformly for all .thus , by assumption for all so we can apply theorem 2.12 in to conclude that that is , is strongly consistent .this completes the first part of the proof .since the densities of and are differentiable , it follows that is continuous and twice differentiable .in particular at , the matrix exists and is positive definite .a taylor expansion implies that it is also obvious that and finally as noted above for all as .therefore by theorem 1 in we have that that is , converges to at a rate .this completes the second part of the proof .the functions and on the right - hand side of ( [ hajek - p-1 ] ) all admit a quadratic expansion .a bit of algebra shows that for in an neighborhood of , we have \\[-8pt ] & & { } + \frac{1}{2 } ( \mathbf{s}-\mathbf{s}_{\max } ) ^{t}\mathbf{v } ( \mathbf{s}-\mathbf{s}_{\max } ) + o_{p } ( 1/n ) , \nonumber\end{aligned}\ ] ] where , for the function is the gradient of evaluated at , and the matrix is given by note that the term in ( [ hajek - p-2 ] ) absorbs in ( [ hajek - p-1 ] ) as well as the higher - order terms in the quadratic expansions of and . now by the clt and slutzky s theorem ,we have that where finally it follows by theorem 2 in that where , completing the proof . proof of theorem [ them - lst2 ] suppose that and are discrete rvs with finite support .let and where and and are finite .define the set .a simple argument shows that where is finite , the sets are distinct , and with .thus is a simple function on , and is the set associated with the largest .we will assume , without any loss of generality , that for all .now note that where where and .clearly is also a simple function .moreover for large enough and we will have and for all and , and consequently is defined over the same sets as , that is , where with and .furthermore the maximizer of is any provided that is associated with the largest .hence,\\[-8pt ] & \leq & \sum_{k=2}^{k}\mathbb{p}({\hat } { \alpha}_{1}\leq{\hat}{\alpha}_{k})\leq(k-1 ) \max_{2\leqk\leq k}\mathbb{p}({\hat}{\alpha}_{1}\leq{\hat } { \alpha}_{k}).\nonumber\end{aligned}\ ] ] a bit of rearranging shows that where note that may be viewed as a kernel of a two sample -statistic .moreover is bounded ( here denotes set cardinality ) and by assumption . 
applying theorem 2 and the derivations in section 5b in we have that where as .finally from ( [ bound-1 ] ) and ( [ bound-2 ] ) we have that where and completing the proof .proof of theorem [ them - lst3 ] choose .we have already seen that under the stated conditions , is continuous , and therefore for each the set is open .the collection is an open cover for .since is compact there exists a finite subcover for where .hence each belongs to some and therefore by construction for all . by the law of large numbers as for each . since is finite , now for any , , and .this implies that we can choose large enough so .moreover this bound holds for all and so on .since is arbitrary we conclude that as . by assumption for all , so we can apply theorem 2.12 in to conclude that that is , is strongly consistent .this completes the first part of the proof .we have already seen that holds .we now need to bound , where denotes the outer expectation and .we first note that the bracketing entropy of the upper half - planes is of the order .the envelope function of the class where is bounded by whose squared norm is note that we may replace the rv in ( [ vv-2 - 2 ] ) with the rv whose mass is concentrated on the unit sphere . the condition that implies that the angle between and is of the order and therefore is computed as surface integral on a spherical wedge with maximum width .it follows that ( [ vv-2 - 2 ] ) is bounded by where is the area of , and is the supremum of the density of .clearly since the density of is bounded by assumption .thus by corollary 19.35 in we have it now follows that which implies by theorem 5.52 in and theorem 14.4 of that that is , converges to at a cube root rate .this completes the second part of the proof .the limit distribution is derived by verifying the conditions in theorem 1.1 of , denoted henceforth by kp . 
first note that ( [ kp-1 ] ) is condition ( i ) in kp .since is consistent , condition ( ii ) also holds , and condition ( iii ) holds by assumption .the differentiability of the density of implies that is twice differentiable .the uniqueness of the maximizer implies that is positive definite , and hence condition ( iv ) holds ; see also example 6.4 in kp for related calculations .condition ( v ) in kp is equivalent to the existence of the limit which can be rewritten as.\end{aligned}\ ] ] with some algebra we find that this limit exists and equals where is the usual dirac function ; hence integration is with respect to the surface measure on .it follows that condition also ( v ) holds .conditions ( vi ) and ( vii ) were verified in the second part of the proof .thus we may apply theorem 1.1 in kp to get where by kp and is a zero mean gaussian process with covariance function .this completes the proof .proof of proposition [ prop - unique ] note that now , by assumption the df is independent of .therefore is uniquely maximized on if and only if the function is uniquely maximized on .if , then , and we wish to maximize a linear function on .it is easily verified ( by using ideas from linear programming ) that the maximizer is unique if which is true by assumption .incidentally , it is easy to show directly that is maximized at where now let and assume that a unique maximizer does not exist ; that is , suppose that is maximized by both and .it is clear that for all that is , the value of is constant along rays through the origin .the rays passing through and , respectively , intersect the ellipsoid at the points and .it follows that , moreover and maximize on the ellipsoid .now since we must have . recall that a linear function on ellipsoid is uniquely maximized ( just like on a sphere ; see the comment above ) .therefore we must have which implies that as required .proof of theorem [ thm - teststat ] if , then for all we have . by assumptionboth and are continuous rvs , so .suppose now that both and for some , hold .then we must have .since we have for .one of these inequalities must be strict ; otherwise contradicts the fact that .now use theorem 1 in to complete the proof .proof of theorem [ thm - testlimit ] the functions and defined in the proof of theorem [ them - lst1 ] are donsker ; cf .example 19.7 in .hence by the theory of empirical processes applied to ( [ hajek - p-1 ] ) , we find that where is a zero mean gaussian process , and convergence holds for all .we also note that ( [ kp-2 ] ) is a two - sample -processes . a central limit theorem for such processesis described by .hence by the continuos mapping theorem , and under , we have where the covariance function of , denoted by , is given by\\[-8pt ] & & \qquad{}+\frac{1}{1-\lambda}\mathbb{p}\bigl(\mathbf{u}^{t } \mathbf{x}_{1}\leq\mathbf{u}^{t } \mathbf{x}_{2},\mathbf{v}^{t}\mathbf{x}_{3}\leq \mathbf{v}^{t}\mathbf{x}_{2}\bigr)-\frac{1}{4\lambda(1-\lambda)},\nonumber\end{aligned}\ ] ] where are i.i.d . from the common df .proof of theorem [ thm - consistent&monotone ] suppose that .then for some we have which implies that . by definition so .it follows from the proof of theorem [ them - lst1 ] that with probability one .thus, therefore by slutzky s theorem, where is the critical value for an level test based on samples of size and and .hence the test based on is consistent .consistency for and is established in a similar manner .now assume that so that for all .fix , , and choose . 
without any loss of generality assume that .define and .clearly and take values in .now , for we have where we use the fact that it follows that for .moreover and are all independent and it follows from theorem 1.a.3 in that .thus .the latter holds for every value of , and therefore it holds unconditionally as well , that is, it follows that for all where and are defined in ( [ psi - nm ] ) and the superscripts emphasize the different arguments used to evaluate them .thus and as a consequence as required .the monotonicity of the power function of and follows immediately from the fact that for all .we thank grace kissling ( niehs ) , alexander goldenshluger , yair goldberg and danny segev ( university of haifa ) , for their useful comments and suggestions .we also thank the editor , associate editor and two referees for their input which improved the paper .
researchers are often interested in drawing inferences regarding the order between two experimental groups on the basis of multivariate response data . since standard multivariate methods are designed for two - sided alternatives , they may not be ideal for testing for order between two groups . in this article we introduce the notion of the linear stochastic order and investigate its properties . statistical theory and methodology are developed to both estimate the direction which best separates two arbitrary ordered distributions and to test for order between the two groups . the new methodology generalizes roy s classical largest root test to the nonparametric setting and is applicable to random vectors with discrete and/or continuous components . the proposed methodology is illustrated using data obtained from a 90-day pre - chronic rodent cancer bioassay study conducted by the national toxicology program ( ntp ) .
in the monte carlo simulation , we sometimes suffer from the problem of slow dynamics .the critical slowing down near the critical point , the phase separation dynamics at low temperature , the slow dynamics due to the randomness or frustration , and the low - temperature slow dynamics in quantum monte carlo simulation are examples of the problems of slow dynamics .we may classify the attempts to conquer the slow dynamics in the monte carlo simulation into two categories .the first category is the cluster algorithm , such as the swendsen - wang ( sw ) algorithm and the wolff algorithm .the second one is the extended ensemble method .the multicanonical method , the broad histogram method , and the flat histogram method are examples of the second category .recently , wang and landau proposed an efficient algorithm to accelerate the calculation of the energy density of states ( dos ) .yamaguchi and okabe have successfully used the wang - landau algorithm for the study of the antiferromagnetic -state potts models . tomita and okabe recently proposed an effective cluster algorithm , which is called the probability - changing cluster ( pcc ) algorithm , of tuning the critical point automatically .the pcc algorithm is an extension of the sw algorithm ; we change the probability of connecting spins of the same type , essentially the temperature , in the process of the monte carlo spin update .we showed the effectiveness of the pcc algorithm for the two - dimensional ( 2d ) and three - dimensional ( 3d ) potts models , determining the critical point and exponents .we can extract information on critical phenomena with much less numerical effort .the pcc algorithm is quite useful for studying random systems , where the distribution of the critical temperature , , is important .we applied the pcc algorithm to the 2d diluted ising model , investigating the crossover and self - averaging properties of random systems .we also extended the pcc algorithm to the problem of the vector order parameter ; studying the 2d xy and clock models , we showed that the pcc algorithm is also useful for the kosterlitz - thouless ( kt ) transition .the combination of approaches of two categories , the cluster algorithm and the extended ensemble method , is a challenging problem .janke and kappler proposed a trial to combine the multicanonical method and the cluster algorithm ; their method is called the multibondic ensemble method .recently , yamaguchi and kawashima improved the multibondic ensemble method , and also showed that the combination of the wang - landau algorithm and the improved multibondic ensemble method yields much better statistics compared to the original multibondic ensemble method . 
here, we pick up two recent topics of new monte carlo algorithms .we first discuss the generalization of the pcc algorithm .this generalized scheme is based on the finite - size scaling ( fss ) property of the correlation ratio .second , for the algorithm to combine the cluster algorithm and the extended ensemble method , we derive a rigorous broad histogram relation for the bond number , and propose the flat histogram method for the bond number .we start with reviewing the idea of the pcc algorithm .we explain the case of the ferromagnetic -state potts model , whose hamiltonian is given by we construct the kasteleyn and fortuin ( kf ) clusters using the probability , where is the probability of connecting spins of the same type , .the correspondence of the spontaneous magnetization of the -state potts model and the percolation probability of the bond percolation model was discussed by hu .then , we check whether the system is percolating or not .if the system is percolating ( not percolating ) in the previous test , we decrease ( increase ) by .spins are updated following the same rule as the sw algorithm . after repeating the above processes, the distribution of for monte carlo samples approaches the gaussian distribution of which mean value is ; is the probability of connecting spins , such that the existence probability becomes 1/2 .the existence probability is the probability that the system percolates .since follows the fss near the critical point , where is the critical value of for the infinite system and is the correlation - length critical exponent , we can estimate from the size dependence of using eq .( [ scale ] ) and , in turn , estimate through the relation . in the original formulation of the pcc algorithm , we used the kf representation in two ways .first , we make a cluster flip as in the sw algorithm .second , we change the probability of connecting spins of the same type , , depending on the observation whether clusters are percolating or not .the point is that has the fss property with a single scaling variable .we may use quantities other than which have a similar fss relation . in the fss analysis of the monte carlo simulation, we often use the binder ratio , which is essentially the ratio of the moments of the order parameter .the moment ratio has the fss property with a single scaling variable , as far as the corrections to fss are negligible .the angular brackets indicate the thermal average .the moment ratio derived from a snapshot spin configuration is always one ; therefore , the instantaneous moment ratio can not be used for the criterion of judgment whether we increase or decrease the temperature . as another quantity ,we may treat the correlation ratio , the ratio of the correlation functions with different distances . for an infinite system at the critical point, the correlation function decays as a power of , with the decay exponent .precisely , the distance is a vector , but we have used a simplified notation . away from the critical point , the ratio of the distance and the correlation length plays a role in the scaling of the correlation function .moreover , for finite systems , two length ratios , and , come in the scaling form of the correlation function . then , the ratio of the spin - spin correlation functions with different distances and takes the fss form with a single scaling variable , if we fix two ratios , and . 
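before turning to the scaling analysis below , the percolation - driven update reviewed above can be illustrated with a compact toy implementation for the q - state potts model on an open l x l square lattice ( python ; a re - implementation for illustration only , not the authors' code ) . one swendsen - wang sweep builds the kf clusters with bond probability p , a left - right spanning check stands in for the percolation test , and p is nudged down ( up ) after a spanning ( non - spanning ) configuration ; for q = 2 the loop should settle near the size - dependent pseudo - critical point , close to the exact value p_c = sqrt(2)/(1 + sqrt(2)) , about 0.586 :

import numpy as np

def find(parent, i):
    # union-find with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def pcc_sweep(spins, p, q, rng):
    # one swendsen-wang sweep; returns new spins and a spanning flag
    L = spins.shape[0]
    parent = np.arange(L * L)
    idx = lambda x, y: x * L + y
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):
                xn, yn = x + dx, y + dy
                if xn < L and yn < L and spins[x, y] == spins[xn, yn] \
                        and rng.random() < p:
                    ri, rj = find(parent, idx(x, y)), find(parent, idx(xn, yn))
                    if ri != rj:
                        parent[rj] = ri
    roots = np.array([find(parent, i) for i in range(L * L)]).reshape(L, L)
    spanning = bool(set(roots[:, 0].tolist()) & set(roots[:, -1].tolist()))
    relabel = {r: int(rng.integers(q)) for r in np.unique(roots)}
    spins = np.array([[relabel[r] for r in row] for row in roots])
    return spins, spanning

L, q, dp = 16, 2, 0.001
rng = np.random.default_rng(0)
spins = rng.integers(q, size=(L, L))
p = 0.5
for _ in range(3000):
    spins, spanning = pcc_sweep(spins, p, q, rng)
    p += -dp if spanning else dp            # pcc rule: lower p when percolating
print("p after the pcc drift:", round(p, 3))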
in case the correlation length diverges with a power law , , eq . ( [ corr_ratio ] ) becomes the same form as eq . ( [ mom_ratio ] ) ; however , eq . ( [ corr_ratio ] ) is also applicable to the case of the kt transition , where the correlation length diverges more strongly than the power - law divergence . in order to examine the fss properties of the correlation ratio , we here give the result of the 2d 6-state clock model on the square lattice with periodic boundary conditions . the 2d -state clock model is known to exhibit two phase transitions of the kt type at and ( ) for . we simulate the 2d 6-state clock model by the use of the wang - landau algorithm . we show the temperature dependence of both the moment ratio and the correlation ratio in fig . [ fig_1 ] . as for the distances and , we choose and ; we take the horizontal or vertical direction of the lattice for the orientation of two sites . for the correlation ratio , the curves of different sizes merge in the intermediate kt phase ( ) , and spray out for the low - temperature ordered and high - temperature disordered phases , which is expected from the fss form of eq . ( [ corr_ratio ] ) . then , we can make a fss analysis based on the kt form of the correlation length , where . using the data of for =12 , 16 , 24 , 32 , 48 , and 64 , we estimate two kt transition temperatures . the best - fitted estimates are =0.698(4 ) and =0.898(4 ) , in units of , which are compatible with the recent results using the pcc algorithm , =0.7014(11 ) and =0.9008(6 ) . on the contrary , as seen from fig . [ fig_1 ] , the corrections to fss are larger for the moment ratio , which makes the fss analysis difficult . we have shown that the correlation ratio is a good estimator especially for the kt transition . this is due to the fact that we only use the properties of the correlation function ; the characteristic of the kt transition is that the correlation function shows a power - law decay at all temperatures of the kt phase . [ fig_1 caption : temperature dependence of the moment ratio and the correlation ratio of the 2d 6-state clock model for = 12 , 16 , 24 , 32 , 48 , and 64 . ] we may use the fss properties of the correlation ratio for the generalization of the pcc algorithm . instead of checking whether the clusters are percolating or not , we ask whether the instantaneous correlation ratio is larger or smaller than some fixed value . of course , we can use other sets of distances . we decrease ( increase ) the temperature , if is smaller ( larger ) than . we start the simulation with some temperature . we make the amount of the change of temperature , , smaller during the simulation ; in the limit of , the system approaches the canonical ensemble . as an example , we apply the generalized scheme of the pcc algorithm to the study of the 2d 6-state clock model . we treat the systems with linear sizes = 8 , 16 , 32 , 64 , and 128 . we start with = 0.005 , and gradually decrease to the final value , 0.0001 . after 20,000 monte carlo sweeps of determining , we make 10,000 monte carlo sweeps to take the thermal average ; we make 20 runs for each size to get better statistics and to evaluate the statistical errors . we calculate to check whether it is larger than or not . [ fig_2 caption : ( a ) the scaling plot and ( b ) the logarithmic plots of at of the 2d 6-state clock model for = 8 , 16 , 32 , 64 , and 128 , where ; the closed circles are data for , and the open circles are those for . ]
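the quantity monitored in the generalised scheme just described , the instantaneous correlation ratio of a single spin configuration , is straightforward to evaluate . a minimal sketch for two - component ( clock / xy ) spins stored as angles , using correlations along one lattice axis with periodic wrap - around and the distance pair ( l/2 , l/4 ) discussed above ( python ; illustrative only , and in practice the authors' estimator of choice , e.g. a cluster - improved one , would be substituted ) :

import numpy as np

def correlation_ratio(theta, r1=None, r2=None):
    # theta: (L, L) array of spin angles; returns g(r1)/g(r2) with
    # g(r) = mean of cos(theta_i - theta_j) over pairs separated by r
    # along the first lattice axis (periodic wrap-around)
    L = theta.shape[0]
    r1 = L // 2 if r1 is None else r1
    r2 = L // 4 if r2 is None else r2
    g = lambda r: np.cos(theta - np.roll(theta, r, axis=0)).mean()
    return g(r1) / g(r2)

# schematic use inside the generalised pcc loop, following the rule above:
# decrease t when the ratio falls below the reference value, increase it otherwise
# t += dt if correlation_ratio(theta) > c_ref else -dt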
[Fig. 2 caption: (a) the estimated transition temperature as a function of system size and (b) the logarithmic plot of the correlation at the transition, for the 2D 6-state clock model with L = 8, 16, 32, 64, and 128; closed and open circles correspond to the two fixed values of the correlation ratio.]

We use the FSS analysis based on the KT form of the correlation length, Eq. ([corr_length]). For the size dependence of the estimated transition temperature we have the corresponding relation, and we plot the estimate as a function of system size, with the best-fitted parameters, in Fig. [fig_2](a). Here, we concentrate on the high-temperature transition. The fixed value of the correlation ratio has been set to 0.87 and 0.85. The error bars are smaller than the size of the marks. We should mention that the data for the two fixed values collapse onto a single curve in this plot, which means that the fit parameter in Eq. ([t_kt]) depends on the fixed value of the ratio, and the difference between the two fixed values can be absorbed in that parameter. The present estimate of the transition temperature is 0.899(3). This value is consistent with the estimate of the PCC algorithm using the percolating properties, 0.9008(6). We also estimate the critical exponent η from the size dependence of the correlation at the transition. In Fig. [fig_2](b), we plot the correlation at the transition as a function of system size on a logarithmic scale. For the estimate of η, we use the FSS form including small multiplicative logarithmic corrections. Our estimate is η = 0.250(3) from the data for one fixed value of the ratio and η = 0.265(2) from the data for the other, which are compatible with the theoretical prediction, 1/4 (= 0.25). The detailed description of the generalized scheme of the PCC algorithm based on the FSS of the correlation ratio, together with its application to the 2D quantum XY model, will be reported elsewhere.

In this section, we consider the combination of the cluster algorithm and the extended ensemble method. One calculates the energy DOS in the multicanonical method and the Wang-Landau method; the energy histogram is checked during the Monte Carlo process. In contrast, the DOS for the bond number is calculated in the multibondic ensemble method or its improvement by Yamaguchi and Kawashima; the histogram for the bond number is checked in the Monte Carlo process. In proposing the broad histogram method, Oliveira et al. paid attention to the number of potential moves, or the number of possible energy changes, for a given state. The total number of moves for a single-spin-flip process is fixed and scales with the number of spins. The energy DOS is related to the number of potential moves as Ω(E) ⟨N(E, +ΔE)⟩_E = Ω(E+ΔE) ⟨N(E+ΔE, −ΔE)⟩_{E+ΔE}, where ⟨...⟩_E denotes the microcanonical average at fixed energy. This relation is shown to be valid on general grounds, and hereafter we call Eq. ([bhr]) the broad histogram relation (BHR) for the energy. One may use the number of potential moves for the probability of updating states. Alternatively, one may employ other dynamics which have no relation to the number of potential moves, but Eq. ([bhr]) is still used when calculating the energy DOS. It is interesting to ask whether there is a relation similar to the BHR, Eq. ([bhr]), for the bond number. Here, using the cluster (graph) representation, we derive the BHR for the bond number. We consider the q-state Potts model as an example.
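Before turning to the bond number, here is a small illustration of how the energy BHR, Eq. ([bhr]), is used in practice. The sketch accumulates ln Ω(E) by telescoping the relation along the energy axis; the microcanonical averages are assumed to have been measured during the run with whatever dynamics was used, and the names are illustrative.

```python
import numpy as np

def log_dos_from_bhr(E_levels, n_up, n_down):
    """ln Omega(E) from the broad histogram relation
       Omega(E) <N(E, +dE)>_E = Omega(E + dE) <N(E + dE, -dE)>_{E+dE}.

    E_levels : sorted energy levels separated by a constant dE
    n_up     : microcanonical averages <N(E, +dE)>_E  (moves that raise E)
    n_down   : microcanonical averages <N(E, -dE)>_E  (moves that lower E)
    """
    log_omega = np.zeros(len(E_levels))
    for k in range(len(E_levels) - 1):
        # ln Omega(E_{k+1}) - ln Omega(E_k) = ln <N(E_k, +dE)> - ln <N(E_{k+1}, -dE)>
        log_omega[k + 1] = log_omega[k] + np.log(n_up[k]) - np.log(n_down[k + 1])
    return log_omega   # determined only up to an additive constant
```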
with the framework of the dual algorithm , the partition function is expressed in the double summation over state and graph as where is a function that takes the value one when is compatible to and takes the value zero otherwise .a graph consists of a set of bonds .the weight for graph , , is defined as for the -state potts model , where is the number of `` active '' bonds in .we say a pair is satisfied if , and unsatisfied otherwise .satisfied pairs become active with a probability for given . by taking the summation over and with fixing the number of bonds ,the expression for the partition function becomes where is the total number of nearest - neighbor pairs in the whole system .here , is the dos for the bond number defined as the number of consistent combinations of graphs and states such that the graph consists of bonds .then , the canonical average of a quantity is calculated by where is the microcanonical average with the fixed bond number for the quantity .thus , if we obtain and during the simulation process , we can calculate the canonical average of any quantity .in deriving the bhr for the bond number , we follow a method similar to that used to derive the bhr for the bond number . instead of using the relation between states , we consider the relation between graphs .the number of potential moves from the graph with the bond number to the graph with , , for fixed is equal to that of the number of potential moves from the graph with to that with , . taking a summation over states and using the definition of the microcanonical average with the fixed bond number , we have this is the bhr for the bond number .it should be noted that is a possible number of bonds to add , and related to the number of satisfied pairs for the given state , , by with use of the microcanonical average with fixed bond number for , we have the relation on the other hand , the possible number of bonds to delete , , is simply given by , that is , from the bhr for the bond number , eq .( [ eq : broad ] ) , we have then , substituting eqs .( [ eq : transition1 ] ) and ( [ eq : transition2 ] ) into eq .( [ eq : broad2 ] ) , we obtain the bond - number dos , , as when calculating the bond - number dos from the bhr for the bond number , we only need the information on , the microcanonical average with fixed of the number of satisfied pairs .it is much simpler than the case of the bhr formulation for the energy dos .moreover , in the computation of , we can use an improved estimator . if a pair of sites belong to the different cluster , this pair is satisfied with a probability of .if a pair of sites belong to the same cluster , this pair is always satisfied .then , we can employ an improved estimator as where represent a cluster that a site belongs to .only the information on graph is needed .let us consider the update process for the monte carlo simulation . in the multibondic ensemble method ,a graph is updated by adding or deleting a bond for a satisfied pair of sites based on .we may use the number of potential move for the bond number , , for the probability of update . 
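A minimal sketch of this improved estimator is given below, assuming the clusters of the current graph have already been labeled (for instance by the union-find pass of the cluster update); the function name and data layout are mine.

```python
def improved_n_satisfied(cluster_of, pairs, q):
    """Improved estimator for the number of satisfied pairs, using only the graph.

    cluster_of : sequence mapping each site to the label of its cluster
    pairs      : iterable of (i, j) nearest-neighbour pairs
    q          : number of Potts states

    A same-cluster pair is always satisfied; a different-cluster pair is
    satisfied with probability 1/q once clusters are flipped independently.
    """
    n_sp = 0.0
    for (i, j) in pairs:
        if cluster_of[i] == cluster_of[j]:
            n_sp += 1.0
        else:
            n_sp += 1.0 / q
    return n_sp
```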
Using Eqs. ([eq:broad]), ([eq:transition1]), and ([eq:transition2]), we get the probability to delete a bond and the probability to add a bond, respectively. The actual Monte Carlo procedure is as follows. We start from some state (spin configuration) and an arbitrary graph consistent with it. We add or delete a bond on satisfied pairs with the probability ([eq:prob3]) or ([eq:prob4]). After making such a process as many times as the total number of pairs, we flip every cluster with probability 1/2, and we repeat the process. Since we do not know the exact form of the microcanonical average of the number of satisfied pairs, we use its accumulated average. The dynamics proposed here can be regarded as the flat histogram method for the bond number, which we call the cluster-flip flat histogram method. As the accumulated average converges to the exact value, the histogram of the bond number becomes flat. We calculate the bond-number DOS, and then calculate various quantities by Eq. ([eq:canonical]).

As a test, we calculate the Ising model on the simple cubic lattice with periodic boundary conditions by using the cluster-flip flat histogram method. We show the microcanonical average of the number of satisfied pairs as a function of the bond number by the solid line in Fig. [fig_3](a); the bond number itself is given by the dotted line. The difference between the solid and dotted lines represents the number of potential moves for adding a bond, whereas the difference between the dotted line and the horizontal axis represents the number of potential moves for deleting a bond, as expected from Eq. ([eq:np2]). The logarithm of the bond-number DOS obtained in this way is shown in Fig. [fig_3](b) as a function of the bond number. The temperature dependence of the specific heat is shown in Fig. [fig_3](c), which reproduces the result obtained by the conventional method.

[Fig. 3 caption: (a) the microcanonical average of the number of satisfied pairs and (b) the bond-number DOS of the Ising model obtained by the cluster-flip flat histogram method; the dotted line in (a) denotes the bond number. (c) The temperature dependence of the specific heat per spin.]

As another example, we simulate the 3D 3-state Potts model on the simple cubic lattice. A first-order phase transition occurs in this model. We show the microcanonical average of the number of satisfied pairs as a function of the bond number by the solid line in Fig. [fig_4](a); the bond number itself is given by the dotted line. The numbers of potential moves are given in the same manner as in the case of the Ising model. The logarithm of the bond-number DOS is shown in Fig. [fig_4](b). The temperature dependence of the free energy is given in Fig. [fig_4](c). The first-order point, where the derivative of the free energy has a jump, is indicated by the arrow.
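The post-processing used to produce the bond-number DOS and the canonical curves in Figs. 3 and 4 can be sketched as follows. The recursion is the natural reading of Eqs. ([eq:broad]), ([eq:transition1]) and ([eq:transition2]); the reweighting step assumes the standard Fortuin-Kasteleyn weight v = exp(βJ) − 1 (any b-independent prefactor cancels in the average). Both functions and their names are illustrative, not the authors' code.

```python
import numpy as np

def log_dos_bond(n_sp_avg):
    """ln Omega(b) from the bond-number BHR:
       Omega(b) (<N_sp>_b - b) = Omega(b+1) (b + 1).

    n_sp_avg : accumulated microcanonical averages <N_sp>_b for b = 0 .. N_pair
               (note <N_sp>_b >= b, so the logarithm below is well defined
                except at the very top of the range).
    """
    B = len(n_sp_avg) - 1
    log_omega = np.zeros(B + 1)
    for b in range(B):
        log_omega[b + 1] = log_omega[b] + np.log(n_sp_avg[b] - b) - np.log(b + 1)
    return log_omega

def canonical_average(obs_b, log_omega, beta, J=1.0):
    """Reweight fixed-b averages <A>_b to a canonical average, assuming
    Z = sum_b Omega(b) v^b with v = exp(beta*J) - 1."""
    b = np.arange(len(log_omega))
    log_w = log_omega + b * np.log(np.expm1(beta * J))
    log_w -= log_w.max()                 # guard against overflow
    w = np.exp(log_w)
    return np.sum(w * obs_b) / np.sum(w)
```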
[Fig. 4 caption: (a) the microcanonical average of the number of satisfied pairs and (b) the bond-number DOS of the 3D 3-state Potts model obtained by the cluster-flip flat histogram method; the dotted line in (a) denotes the bond number. (c) The temperature dependence of the free energy per spin.]

The detailed report on the subject of this section will be published in a separate paper. There, the results for the application to the 2D Ising and 10-state Potts models will be given, and the efficiency of the method will be discussed.

We have discussed the recent progress in Monte Carlo algorithms. First, we have argued for the generalization of the PCC algorithm based on the study of the FSS property of the correlation ratio, the ratio of the correlation functions at different distances. We apply this generalized scheme of the PCC algorithm to the 2D 6-state clock model. Since we do _not_ use the percolating property of the system, we can apply the PCC algorithm where the mapping to the cluster formalism does _not_ exist. It can be applied to many problems. For example, the cluster formalism does not work well for frustrated systems, but we can use the generalized PCC algorithm. We can also apply the generalized scheme to a wide variety of quantum systems. Second, we have discussed the combination of the cluster algorithm and the extended ensemble method. We have derived the rigorous BHR for the bond number, investigating the cluster (graph) representation of the spin models. We have shown that the bond-number DOS can be calculated in terms of the microcanonical average of the number of satisfied pairs. We have proposed a Monte Carlo dynamics based on the number of potential moves for the bond number, which is regarded as the flat histogram method for the bond number. We have applied the cluster-flip flat histogram method to the 3D Ising and 3-state Potts models.

We thank N. Kawashima for fruitful discussions and the collaboration on a part of the present work. We also thank H. Otsuka, J.-S. Wang and C.-K. Hu for valuable discussions. The computation in this work has been done using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo. This work was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, Japan.

R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. 58, 86 (1987).
U. Wolff, Phys. Rev. Lett. 62, 361 (1989).
B. A. Berg and T. Neuhaus, Phys. Lett. B 267, 249 (1991); Phys. Rev. Lett. 68, 9 (1992).
J. Lee, Phys. Rev. Lett. 71, 211 (1993).
J.-S. Wang, Eur. Phys. J. B 8, 287 (1998).
J.-S. Wang and L. W. Lee, Comput. Phys. Commun. 127, 131 (2000); J.-S. Wang, Physica A 281, 147 (2000).
F. Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
C. Yamaguchi and Y. Okabe, J. Phys. A 34, 8781 (2001).
We describe a generalized scheme for the probability-changing cluster (PCC) algorithm, based on the study of the finite-size scaling property of the correlation ratio, the ratio of the correlation functions at different distances. We apply this generalized PCC algorithm to the two-dimensional 6-state clock model. We also discuss the combination of the cluster algorithm and the extended ensemble method. We derive a rigorous broad histogram relation for the bond number. A Monte Carlo dynamics based on the number of potential moves for the bond number is proposed, and applied to the three-dimensional Ising and 3-state Potts models.

Keywords: cluster algorithm; finite-size scaling; correlation ratio; broad histogram relation
agents may engage in conversation for a range of reasons , e.g. to acquire information , to establish a contract , to make a plan , or to be social . at each point in a dialogue, agents must make communicative choices about what to say and how and when to say it .this paper focuses on agents communicative choice in collaborative planning dialogues , dialogues whose purpose is to discuss and agree on a plan for future action , and potentially execute that plan .i will argue that agents choices in communicative action , their algorithms for language behavior , must be determined with respect to two relatively unexplored factors in models of collaborative planning dialogues : ( 1 ) agents resource limits , such as limits in attentional and inferential capacity ; and ( 2 ) features of collaborative planning tasks that affect task difficulty , such as inferential complexity , the degree of belief coordination required , and tolerance for errors .a primary dimension of communicative choice is the degree of explicitness .for example , consider a simple task of agent a and agent b trying to agree on a plan for furnishing a two room house .imagine that a wants b to believe the proposition realized by and believes that b can infer this from the propositions realized in : and are abstractions from naturally occurring examples in which the propositions realized here are realized in a number of different ways . herethe focus is on the logical relationships between the contents of each proposition : a and b are minor premises and c is the major premise for the inference under discussion . ] in naturally - occurring dialogues , a may produce utterances realizing the propositions in to , or other variations .the communicative choices in through illustrate a general fact : for any communicative act , the same effect can be achieved with a range of acts at various levels of explicitness .this raises a key issue : on what basis does a choose among the more or less explicit versions of the proposal in 3 to 6 ?the single constraint that has been suggested elsewhere in the literature is the redundancy constraint : a should not say information that b already knows , or that b could infer .the redundancy constraint appears in the form of simple dictums such as ` do nt tell people what they already know ' , as grice s quantity maxim do not make your contribution more informative than is required and as constraints on planning operators for the generation and recognition of communicative plans .so , if we assume that b knows b and c , then the only possibility for what a can say is .the redundancy constraint is based on the assumption that agent a should leave implicit any information she believes that b already knows or she believes that b could infer , in other words , that agent b can always ` fill in what is missing ' by a combination of retrieving facts from memory and making inferences . 
in section [ iru - sec ], i will show that agents in natural dialogues consistently violate the redundancy constraint .i will argue that this should not be particularly surprising since the redundancy constraint is based on several simplifying assumptions : 1 .unlimited working - memory assumption : everything an agent knows is always available for reasoning ; 2 .logical omniscience assumption : agents are logically omniscient ; 3 .fewest utterances assumption : utterance production is the only process that should be minimized ; 4 .no autonomy assumption : assertions and proposals by agent a are accepted by default by agent b. when agents are autonomous and resource - limited , these assumptions do not always hold , and the problem of communicative choice remains . the plan for the paper is as follows : section [ iru - sec ] motivates a number of hypotheses about the relationship of communicative choice , resource limits and task features using evidence from natural collaborative planning dialogues .these hypotheses are the basis of a model of collaborative planning presented in section [ model - sec ]. then section [ dw - sec ] describes how the model is implemented in a testbed for collaborative planning dialogues called design - world , which supports experiments on the interaction of agents communicative choice , resource limits , and features of the task . at this point , in section [ method - sec ], i review the steps of the method applied so far , and motivate the use of simulation as a method for testing the hypotheses .section [ results - sec ] presents the experimental results and discusses the extent to which the hypotheses were confirmed , and then section [ discussion - sec ] discusses the theoretical implications of these results and the extent to which they can be generalized to other tasks , agent properties , and communication strategies .naturally occurring collaborative planning dialogues are design , problem solving , diagnostic or advice - giving dialogues . in order to generate hypotheses about the relation of communicative choice to agent properties and task features ,this section examines communicative choice in naturally occurring collaborative planning dialogues .most of the examples discussed below are excerpts from a corpus of dialogues from a radio talk show for financial planning advice , but i will also draw on data from collaborative design , collaborative construction , and computer support dialogues .dialogue , in general , is modeled as a process by which conversants add to what is assumed to be already mutually believed or intended .this set of assumed mutual beliefs and intentions is called the discourse model , or the common ground . in collaborative planning dialogues ,the conversants are attempting to add mutual beliefs about the current state of the world and mutual beliefs and intentions about a plan for future action to the discourse model .it is obvious that the efficacy of the final plan and the efficiency of the planning process must be affected by agents algorithms for communicative choice .however previous work has not systematically varied factors that affect communicative choice , such as resource limits and task complexity .furthermore , most previous work has been based on the redundancy constraint , and apparently , its concomitant simplifying assumptions ( but see ) . 
To explore the relation of communicative choice to effective collaborative planning, the analysis of naturally occurring collaborative planning dialogues in this paper focuses on communicative acts that violate the redundancy constraint. These acts are informationally redundant utterances, IRUs, defined as:

Definition of informational redundancy [iru-def]. An utterance is informationally redundant in a discourse situation:
1. if it expresses a proposition, and another utterance that entails that proposition has already been said in the discourse situation;
2. if it expresses a proposition, and another utterance that presupposes or implicates that proposition has already been said in the discourse situation.

A statistical analysis of the financial advice corpus showed that about 12% of the utterances are IRUs. As mentioned in section [intro-sec], this should not be particularly surprising since the definition of IRUs reflects several simplifying assumptions. For example, the definition reflects the logical omniscience assumption because it assumes that all the entailments of propositions uttered in a discourse, and certain default inferences from propositions uttered in a discourse, become part of the discourse model. The definition reflects the no autonomy assumption because it assumes that merely saying an utterance U that expresses a proposition P is sufficient for adding P to the discourse model. The fact that IRUs occur shows that the simplifying assumptions are not valid. The distributional analysis suggests that there are at least 3 functional categories of IRUs:

Communicative functions of IRUs:
* attitude: to provide evidence supporting beliefs about mutual understanding and acceptance;
* attention: to manipulate the locus of attention of the discourse participants by making a proposition salient;
* consequence: to augment the evidence supporting beliefs that certain inferences are licensed.

IRUs have antecedents in the dialogue, which are the utterances that originally realized the content of the IRU either through direct assertion or by an inferential chain; in the definition above, the earlier utterance is an antecedent for the IRU. The 3 communicative functions of IRUs were identified by correlations with distributional features based in part on relations between the IRU and its antecedent, such as textual distance, discourse structure relations, and logical relations. The distributional analysis also analyzed utterance features such as the intonational realization of the IRU, the form of the IRU, and the relation of the IRU to adjacent utterances. Below, I will briefly give examples of each type of IRU. For each type I will explain how the four simplifying assumptions of previous dialogue models predict that the utterance is informationally redundant.
Then we will consider hypothetical agent and task properties under which IRUs function as hypothesized above. Attitude IRUs provide evidence supporting beliefs about mutual understanding and acceptance by demonstrating the speaker's attitude to an assertion or proposal made by another agent in dialogue. An attitude IRU, said with a falling intonation typical of a declarative utterance, is given in (27), where M repeats what H has asserted in (26). M and H have been discussing how M and her husband can handle funds invested in IRAs (individual retirement accounts). In this example, and in the other naturally occurring examples below, the antecedents of the IRUs are _italicized_ and the IRUs are in caps. The IRU in (27) provides direct evidence that M heard exactly what H said. According to arguments elaborated below and elsewhere, M's response indirectly provides evidence that she accepts and therefore believes what H has asserted.

[Dialogue excerpt (26)-(27): table not reproduced.]

We will use these cost parameters to explore three extremes in this space: (1) when processing is free; (2) when retrieval effort dominates other processing costs; and (3) when communication effort dominates other processing costs. The parameters support modeling various instantiations of the agent architecture given in figure [irma-fig]. For example, varying the cost of retrieval models different assumptions about how the beliefs database, plan library and working memory are implemented. Varying the cost of communication models situations in which communication planning is very costly. The relation between the values of these parameters and the utilities of the steps in the plan determines experimental outcomes, rather than the absolute values. As an example of the effect of varying these costs, consider the plots of performance distributions shown in figures [baseline-fig] and [retcost-baseline-fig] for low, mid and high AWM. In these figures, performance is plotted on the x-axis and the number of simulations at that performance level is given by bars on the y-axis. The performance distributions in figure [baseline-fig] demonstrate the increase in quality of solution that we would expect with increases in AWM, given no processing costs. Figure [retcost-baseline-fig] shows what happens when processing is not free: here a retrieval cost of .001 means that every memory access reduces quality of solution by 1/1000 of a point (remember that the utilities of plan steps range between 10 and 56). As figure [retcost-baseline-fig] shows, the ability to access the whole beliefs database in reasoning does not always improve performance, since high AWM agents perform similarly to mid AWM agents.

Section [iru-sec] proposed hypotheses about the function of IRUs in human-to-human collaborative planning dialogues, and then section [model-sec] presented a model for collaborative planning dialogues based on the observations in section [iru-sec]. Section [dw-sec] then described Design-World as a testbed of the model, and sections [task-def-sec] and [comm-choice-sec] introduced a number of parameters of the testbed that are intended to model the features of the human-human dialogues and support testing of the hypotheses.
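As a rough sketch of how such a cost-parameterized performance measure can be computed: the function and parameter names below (commcost, retcost, infcost) are illustrative, not the testbed's actual code. The retrieval value 0.001 per access is taken from the example just discussed, and the communication value of 10 per message from the later discussion of the zero-nonmatching-beliefs experiments.

```python
def performance(valid_step_utilities, n_messages, n_retrievals, n_inferences,
                commcost=0.0, retcost=0.0, infcost=0.0):
    """Design-World style performance: quality of solution (sum of utilities of
    valid plan steps) minus the parameterized cost of collaborative effort."""
    raw_score = sum(valid_step_utilities)
    effort = (commcost * n_messages
              + retcost * n_retrievals
              + infcost * n_inferences)
    return raw_score - effort

# The three cost situations explored in the experiments, roughly:
free_processing     = dict(commcost=0.0,  retcost=0.0,   infcost=0.0)
retrieval_heavy     = dict(commcost=0.0,  retcost=0.001, infcost=0.0)
communication_heavy = dict(commcost=10.0, retcost=0.0,   infcost=0.0)
```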
herei wish to summarize the mapping between the naturally occurring dialogues and the design of the testbed in order to clarify the basis for the experiments in the next section .the testbed and the experimental parameters are based on the following mapping between human - human collaborative planning dialogues and the testbed . first , the planning and deliberation aspects of human processing are modeled with the irma architecture , and resource limits on these processes are modeled by extending the irma architecture with a model of attention / working memory ( awm ) which has been shown to model a limited but critical set of properties of human processing .second , the processing of dialogue is tied to the agent architecture .third , the mapping of a warrant relation between an act and a belief in naturally occurring examples such as [ walnut - examp ] is modeled with a warrant relation between an act and a belief in design - world as seen in the explicit - warrant communication strategy in section [ comm - choice - sec ] .fourth , the mapping assumes that arbitrary content based inferences in natural dialogues such as that discussed in relation to example [ certif - examp ] can be mapped to content based inferences in design - world such as those required for doing well on the matched - pair tasks .fifth , the mapping is based on the assumption that task difficulty in naturally occurring tasks such as those in the financial advice domain can be related to three abstract features : ( 1 ) inferential complexity as measured by the number of premise required for making an inferences ; ( 2 ) degree of belief coordination required on intentions , inferences and beliefs underlying a plan ; and ( 3 ) task determinacy and fault tolerance .finally the mapping assumes that it is reasonable to evaluate the performance of the agents in collaborative planning dialogues by using domain plan utility for a measure of the quality of solution and defining the cost to achieve that solution as collaborative effort , appropriately parameterized .the details of this mapping specifies how the testbed implements the model of collaborative planning and provides the basis for extrapolating from the testbed experimental results to the human - human dialogues that are being modeled .the testbed provides an excellent environment for testing the hypotheses to the extent that the model captures critical aspects of human - human dialogues .the experiments examine the interaction between tasks , communication strategies and awm resource limits .every experiment varies awm over three ranges : low , mid , and high . in order to run an experiment on a particular communicative strategy for a particular task ,200 dialogues for each awm range are simulated .because the awm model is probabilistic , each dialogue simulation has a different result .the awm parameter yields a performance distribution for very resource limited agents ( low ) , agents hypothesized to be similar to human agents ( mid ) , and resource unlimited agents ( high ) .sample performance distributions for quality of solution ( with no collaborative effort subtracted ) from runs of two all - implicit agents for each awm setting are shown in figure [ bill - kim - hist - fig ] . to test our hypotheses ,we want to * compare * the performance of two different communicative strategies for a particular task , under different asssumptions about resource limits and processing costs . 
To see the effect of communicative strategy and AWM over the whole range of AWM settings, we first run a two-way analysis of variance (ANOVA) with AWM as one factor and communication strategy as another. The ANOVA tells us whether: (1) AWM alone is a significant factor in predicting performance; (2) communication strategy alone is a significant factor in predicting performance; and (3) there is an interaction between communication strategy and AWM. However, ANOVA alone does not enable us to determine the particular AWM range at which a communication strategy aids or hinders performance, and many of the hypotheses about the benefits of particular communication strategies are specific to how resource limited an agent is. Furthermore, whenever strategy affects performance positively for one value of AWM and negatively for another value of AWM, the potential effects of strategy cannot be seen from the ANOVA alone. Therefore, we conduct planned comparisons of strategies using the modified Bonferroni test (hereafter MB) within each AWM range setting, to determine which AWM range the strategy affects (the critical F values include 5.06 for p < .025, 6.66 for p < .01, and 9.61 for p < .002). On the basis of these comparisons we can say whether a strategy is beneficial for a particular task for a particular AWM range.

A strategy A is beneficial as compared to a strategy B, for a particular AWM range, in the same task situation, with the same cost settings, if the mean of A is significantly greater than the mean of B, according to the modified Bonferroni (MB) test.

The converse of beneficial is detrimental:

A strategy A is detrimental as compared to a strategy B, for a particular AWM range, in the same task situation, with the same cost settings, if the mean of A is significantly less than the mean of B, according to the modified Bonferroni (MB) test.
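The analysis pipeline, a two-way ANOVA followed by planned within-range comparisons, could be reproduced along the following lines. This is a sketch rather than the original analysis code: it assumes a tidy table of simulated dialogues with illustrative column names, uses standard statsmodels/scipy routines, and omits the modified-Bonferroni adjustment of the per-comparison critical values.

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

def analyze(df, strategy_a, strategy_b):
    """df: one row per simulated dialogue, with columns
    'performance', 'awm' (low/mid/high) and 'strategy' (assumed names)."""
    # Two-way ANOVA: main effects of AWM and strategy, plus their interaction.
    model = ols('performance ~ C(awm) * C(strategy)', data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)

    # Planned comparisons within each AWM range (the MB test would compare the
    # resulting F values against adjusted criticals; not implemented here).
    comparisons = {}
    for awm in ('low', 'mid', 'high'):
        sub = df[df['awm'] == awm]
        a = sub[sub['strategy'] == strategy_a]['performance']
        b = sub[sub['strategy'] == strategy_b]['performance']
        comparisons[awm] = stats.f_oneway(a, b)   # (F statistic, p value)
    return anova_table, comparisons
```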
Strategies need not be either beneficial or detrimental; there may be no difference between two strategies. Also, with the definition given above a strategy may be both beneficial and detrimental depending on the range of AWM over which the two strategies are compared, i.e. a strategy may be beneficial for low AWM agents and detrimental for high AWM agents.

A difference plot such as that in figure [free-ret-iei-fig] is used to summarize a comparison of two strategies, strategy 1 and strategy 2. In the comparisons below, strategy 1 is either close-consequence (since only one agent will ever produce a closing statement for any dialogue segment, only one agent is given the option in the simulations), explicit-warrant, or matched-pair-inference-explicit, and strategy 2 is the all-implicit strategy. *Differences* in performance means between the two strategies are plotted on the y-axis against AWM ranges on the x-axis. Each point in the plot represents the difference in the means of 200 runs of each strategy at a particular AWM range. These plots summarize the information from 1200 simulated dialogues.

Remember that the standard task is defined so that the quality of solution that agents achieve for a design-house plan, constructed via the dialogue, is the sum of the utilities of each valid step in their plan. The task has multiple correct solutions and is fault tolerant because the point values for invalid steps in the plan are simply subtracted from the score, with the effect that agents are not heavily penalized for making mistakes. Furthermore, the task has low inferential complexity: the only inferences agents are required to make are those for deliberation and means-end reasoning. In both of these cases, to make these inferences, agents are only required to access a single minor premise. All-implicit agents do fairly well at the standard task, under the assumption that all processing is free, as shown in the performance plot in figure [baseline-fig]. However, as retrieval costs increase, high AWM agents do not do as well as when retrieval is free, because they expend too much effort on retrieval during collaborative planning. Compare the high AWM distribution in figure [baseline-fig] with that in figure [retcost-baseline-fig]. Thus for the standard task, high AWM agents have the potential to benefit from communication strategies that reduce the total effort for retrieval, when retrieval is not free. In addition, although the task has minimal inferential complexity, easy access to information that is used for deliberation, which the explicit-warrant strategy provides, could benefit low AWM agents, since they might otherwise make nonoptimal decisions.
furthermore , although the task is fault tolerant , agents are still penalized for making errors since errors do not contribute to performance .thus for the standard task , communication strategies such as close - consequence that can reduce the number of errors could be beneficial .below we will compare the all - implicit strategy to the explicit - warrant strategy and the close - consequence strategy .[ [ explicit - warrant-1 ] ] explicit - warrant + + + + + + + + + + + + + + + + the explicit - warrant strategy can be used in the standard task to test hypothesis a1 : agents produce attention irus to support the processes of deliberating beliefs and intentions .it can also be used to test hypothesis a4 : the choice to produce an attention iru is related to the degree to which an agent is resource limited in attentional capacity .thus one prediction is that the explicit - warrant strategy will result in higher performance for low awm agents even when processing is free by ensuring that they can access the warrant and use it in deliberation , thus making better decisions .figure [ free - ret - iei - fig ] plots the differences in the performance means between the explicit - warrant strategy and the all - implicit strategy for low , mid and high awm agents .a two - way anova exploring the effect of awm and the explicit - warrant strategy for the standard task shows that awm has a large effect on performance ( f= 336.63 , p .000001 ) .there is no main effect for communicative strategy ( f = 1.92 , p 0.16 ) .however , there is an interaction between awm and communicative choice ( f=1136.34 , p .000001 ) . by comparing performance within a particular awm range for each strategy we can see which awm settings interact with communicative strategy .the planned comparisons using the modified bonferonni ( mb ) test show that the explicit - warrant strategy is neither beneficial nor detrimental in the standard task , in comparison with the all - implicit strategy , if all processing is free ( mb(low ) = 0.29 , ns ; mb(mid ) = 2.79 , ns ; mb(high ) = 0.39 , ns ) .note that there is a trend towards the explicit - warrant strategy being detrimental at mid awm .the hypothesis based on the corpus analysis was that low awm agents might benefit from communicative strategies that include irus .however , this hypothesis is disconfirmed .further analysis of this result suggests a hypothesis not apparent from the corpus analysis : any beneficial effect of an iru can be cancelled for resource limited agents because irus may displace other information from working memory that is more useful . 
in this case , despite the fact that the warrant information is useful for deliberation , making the warrant salient displaces information that can be used to generate other options .when agents are very resource - limited making an optimal decision is not as important as being able to generate multiple options .the explicit - warrant strategy can also be used in the standard task to test hypothesis i1 : strategies that reduce collaborative effort overall may be beneficial .thus , another prediction is that by providing the warrant used in deliberating a proposal with every proposal , the explicit - warrant strategy has the potential to reduce resource consumption when accessing memory has some processing cost .figure [ ret - iei - fig ] plots the differences in the performance means between the explicit - warrant strategy and the all - implicit strategy for low , mid and high awm agents when retrieval effort dominates processing . a two - way anova exploring the effect of awm and the explicit - warrant strategy for the standard task , when retrieval cost dominates processing , shows that awm has a large effect on performance ( f= 330.15 , p .000001 ) .there is also a main effect for communicative strategy ( f = 5.74 , p 0.01 ) , and an interaction between awm and communicative choice ( f= 1077.64 , p .000001 ) . the planned comparisons using the mb test to compare performance at each awm range show that , in the standard task , in comparison with the all - implicit strategy , the explicit - warrant strategy is neither beneficial nor detrimental for low awm agents ( mb(low ) = 0.27 , ns ) .however , hypothesis i1 is confirmed because the explicit - warrant strategy is beneficial for mid awm agents mb(mid ) = 86.43 , p .002 .the explicit - warrant strategy also tends towards improving performance for high awm agents mb(high ) = 2.07 , p .10 ) . for higher awm values ,this trend is because the beliefs necessary for deliberating the proposal are made available in the current context with each proposal , so that agents do nt have to search memory for them . as an additional test of hypothesis i1 ,a final experiment tests the explicit - warrant strategy against the all - implicit strategy in a situation where the cost of communication dominates other processing costs .figure [ exp - iei - fig ] plots the differences in the performance means between the explicit - warrant strategy and the all - implicit strategy for low , mid and high awm agents when communication effort dominates processing .a two - way anova exploring the effect of awm and the explicit - warrant strategy for the standard task , when communication effort dominates processing , shows that awm has a large effect on performance ( f= 409.52 , p .000001 ) .there is also a main effect for communicative strategy ( f = 28.12 , p 0.000001 ) , and an interaction between awm and communicative choice ( f= 960.24 , p .000001 ) .the planned comparisons using the mb test to compare performance at each awm range show that in this situation , when communication effort dominates processing , the explicit - warrant strategy is neither beneficial nor detrimental for mid awm agents ( mb(mid ) = 0.12 , ns . 
however , the explicit - warrant strategy is detrimental for both low and high awm agents , mb(low ) = 7.69 , p .01 ; mb(high ) = 39.65 , p .01 ) .since this strategy includes an extra utterance with every proposal and provides no clear benefits , it is detrimental to performance in the standard task when communication effort dominates processing .below , when we compare this situation with that in the zero - nonmatching - beliefs task , we will see that this is due to the fact that the standard task has low coordination requirements .[ [ close - consequence-1 ] ] close - consequence + + + + + + + + + + + + + + + + + the close - consequence strategy of making inferences explicit can be used in the standard task to test hypothesis c4 : the choice to produce a consequence iru is related to a measure of ` how important ' the inference is .even though the standard task is fault tolerant , every invalid step reduces the quality of solution of the final plan .making act - effect inferences explicit decreases the likelihood of making this kind of error .the difference plot in figure [ cost - clc - kim - fig ] plots performance differences between the close - consequence strategy and the all - implicit strategy , in the standard task , when all processing is free .a two - way anova exploring the effect of awm and the close - consequence strategy in this situation shows that awm has a large effect on performance ( f= 249.20 , p .000001 ) , and that there is an interaction between awm and communicative choice ( f= 919.27 , p .000001 ). planned comparisons between strategies for each awm range shows that the close - consequence strategy is detrimental in comparison with all - implicit for low awm agents ( mb(low ) = 8.70 , p .01 ) .this is because generating options contributes more to performance for agents with low awm than avoiding errors , and the additional utterances that make inferences explicit in the close - consequence strategy has the effect of displacing facts that could be used in means end reasoning to generate options .there is no difference in performance for mid awm agents ( mb(mid ) = .439 , ns ) .however , comparisons between the two strategies for high awm agents shows that the close - consequence strategy is beneficial in comparison with all - implicit ( mb(high ) = 171.71 , p .002 ) .see figure [ cost - clc - kim - fig ] .this is because the belief deliberation algorithm increases the probability of high awm agents choosing to believe out of date beliefs about the state of the world .the result is that they are more likely to have invalid steps in their plans .thus the close - consequence strategy is beneficial because reinforcing the belief that a furniture item has been used makes it less likely that agents will believe that they still have that furniture item .this result is not predicted by any hypotheses , but as discussed in section [ irma - sec ] , this property of the belief deliberation mechanism has some intuitive appeal . in any case , this result provides a data point for the benefit of a strategy for making inferences explicit when the probability of making an error increases if that inference is not made .remember that the zero - nonmatching - beliefs task requires a greater degree of belief coordination by requiring agents to agree on the beliefs underlying deliberation ( warrants ) .thus , it it increases the importance of making particular deliberation - based inferences , and can therefore be used to test hypotheses a1 , a4 and a5 . 
belowwe will compare the performance of agents using the all - implicit strategy with the explicit - warrant strategy in the zero - nonmatching - beliefs task .figure [ iei - nmb - fig ] plots the mean performance differences of the explicit - warrant strategy and the all - implicit strategy in the zero - nonmatching - beliefs task .a two - way anova exploring the effect of awm and communicative strategy for the zero - nonmatching - beliefs task , shows that awm has a large effect on performance ( f= 471.42 , p .000001 ) .there is also a main effect for communicative strategy ( f = 379.74 , p 0.000001 ) , and an interaction between awm and communicative choice ( f= 669.24 , p .000001 ) .comparisons within each awm range of the two communicative strategies in this task shows that the explicit - warrant strategy is highly beneficial for low and mid awm agents ( mb(low ) = 260.6 , p 0.002 ; mb(mid ) = 195.5 , p 0.002 ) .the strategy is also beneficial for high awm agents mb(high ) = 4.48 , p 0.05 ) .when agents are resource limited , they may fail to access a warrant .the explicit - warrant strategy guarantees that the agents always can access the warrant for the option under discussion .thus , even agents with higher values of awm can benefit from this strategy , since the task requires such a high degree of belief coordination .hypothesis i1 can also be tested in this task .we can ask whether it is possible to drive the total effort for communication high enough to make it inefficient to choose the explicit - warrant strategy over all - implicit .however , the benefits of the explicit - warrant strategy for low and mid awm agents for this task are so strong that they can not be reduced even when communication cost is high ( mb(low ) = 246.4 , p 0.002 ; mb(mid ) = 242.7 , p 0.002 ) .see figure [ iei - exp - nmb - fig ] .in other words , even when every extra warrant message increases collaborative effort by 10 and reduces performance by 10 , if the task is zero - nonmatching - beliefs , resource - limited agents using explicit - warrant do better .contrast figure [ iei - exp - nmb - fig ] with the standard task and same cost parameters in figure [ exp - iei - fig ] .however , when communication cost is high , the strategy becomes detrimental for high awm agents ( mb(high ) = 7.56 , p 0.01 ) .these agents can usually access warrants and the increase in belief coordination afforded by the explicit - warrant strategy does not offset the high communication cost .the two versions of the matched - pair tasks described in section [ task - def - sec ] ( 1 ) increase the inferential complexity of the task and ( 2 ) increase the degree of belief coordination required by requiring agents to be coordinated on inferences that follow from intentions that have been explicitly agreed upon .both tasks increases inferential difficulty to a small degree : all - implicit agents do fairly well at making matched pair inferences .the matched - pair - same - room task requires the same inferences as the matched - pair - two - room task , but these inferences should be easier to make in the matched - pair - same - room since the inferential premises are more likely to be salient .the matched - pair tasks provide an environment for testing hypotheses a2 , a3 , a4 and a5 .the attention strategy that is used to test these hypotheses is the matched - pair - inference - explicit strategy ; this strategy makes the premises for matched - pair inferences salient , thus increasing the likelihood of agents making matched - 
pair inferences .the predictions are that this strategy should be beneficial for low and possibly for mid awm agents , but that high awm agents can access the necessary inferential premises without attention irus .furthermore , we predict that the beneficial effect should be stronger for the matched - pair - two - room task .figure [ imi - imi2-mpr - fig0 ] plots the performance differences between all - implicit agents and matched - pair - inference - explicit agents for the matched - pair - same - room task .a two - way anova exploring the effect of awm and communicative strategy in this task , shows that awm has a large effect on performance ( f= 323.93 , p .000001 ) .there is no main effect for communicative strategy ( f = .03 , ns ) , but there is an interaction between awm and communicative choice ( f= 1101.51 , p .000001 ) .comparisons within awm ranges between agents using the all - implicit strategy and agents using the matched - pair - inference - explicit strategy in the matched - pair - same - room task ( figure [ imi - imi2-mpr - fig0 ] ) shows that matched - pair - inference - explicit strategy is beneficial for low awm agents ( mb(low ) = 4.47 , p .05 ) , but not significantly different for either mid or high awm agents ) . in the matched - pair - same - room task the content of the iru was recently inferred and is likely to still be salient , thus the beneficial effect is relatively small and is restricted to very resource limited agents .in contrast , in the matched - pair - two - room task , the effect on performance of the matched - pair - inference - explicit strategy is much larger , as we predicted .figure [ imi - imi2-mpr - fig1 ] plots the mean performance differences of agents using the matched - pair - inference - explicit strategy and those using the all - implicit strategy .the all - implicit agents do not manage to achieve the same levels of mutual inference as matched - pair - inference - explicit agents . a two - way anova exploring the effect of awm and communicative strategy in this task, shows that awm has a large effect on performance ( f= 171.79 , p .000001 ) .there is a main effect for communicative strategy ( f = 57.12 , p .001 ) , and an interaction between awm and communicative choice ( f= 567.34 , p .000001 ) .comparisons within awm ranges between agents using the all - implicit strategy and agents using the matched - pair - inference - explicit strategy in the matched - pair - two - room task ( figure [ imi - imi2-mpr - fig1 ] ) shows that matched - pair - inference - explicit strategy is beneficial for low , mid and high awm agents ( mb(low ) = 21.94 , p .01 ) ; mb(mid ) = 7.71 , p .01 ) ; mb(high ) = 38.85 , p .002 ) . in other words , this strategy is highly effective in increasing the ability of low , mid and high awm agents to make matched pair inferences in the matched - pair - two - room task .we predicted the strategy to be beneficial for low and possibly for mid awm agents because it gives agents access to premises for inferences which they would otherwise be unable to access .this confirms the effect of the hypothesized discourse inference constraint .however , we did not expect it to be beneficial for high awm agents .this surprising effect is due to the fact that , in the case of higher awm values , the matched - pair - inference - explicit strategy keeps the agents coordinated on which inference the proposing agent intended in a situation in which multiple inferences are possible . 
in other words , when agents have high awm they can make * divergent * inferences , and a strategy of making inferential premises salient improves agents inferential coordination .thus the strategy controls inferential processes in a way that was not predicted based on the corpus analysis alone .hypothesis i1 can also be tested in this task .we can ask whether it is possible to drive the effort for communication high enough to make it inefficient to choose the matched - pair - inference - explicit strategy over all - implicit .figure [ imi - imi2-mpr - fig4 ] plots the mean performance differences between these two strategies when communication cost is high .comparisons within each awm range shows that this strategy is still beneficial for low , mid and high awm agents even with a high communication cost ( mb(low ) = 19.10 , p .01 ) ; mb(mid ) = 3.94 , p .05 ) ; mb(high ) = 10.46 , p .01 ) . in other wordsit would be difficult to find a task situation that required coordinating on inference in which this strategy was not beneficial .this result is strong support for the discourse inference constraint , which may explain the prevalence of this strategy in naturally occurring dialogues , remember that the zero - invalids task is a fault - intolerant version of the task in which any invalid intention invalidates the whole plan .thus the zero - invalids task provides an environment for testing hypotheses c2 and c4 with respect to the inferences made explicit by the close - consequence strategy .figure [ clc - inval - fig ] plots the mean performance differences between agents using the close - consequence strategy and agents using the all - implicit strategy in the zero - invalids task .a two - way anova exploring the effect of awm and communicative strategy in this task , shows that awm has a large effect on performance ( f= 223.14 , p .000001 ) .there is a main effect for communicative strategy ( f = 75.81 , p .001 ) , and an interaction between awm and communicative choice ( f= 103.38 , p .000001 ) .the close - consequence strategy was detrimental in the standard task for low awm agents .comparisons within awm ranges between agents using the all - implicit strategy and agents using the close - consequence strategy in the zero - invalids task shows that there are no differences in performance for low awm agents in the fault - intolerant zero - invalids task ( mb(low ) = 3.64 , ns ) .however , the close - consequence strategy is beneficial for mid and high awm agents ( mb(mid ) = 26.62 , p .002 ) ; mb(high ) = 267.72 , p .002 ) . in other words ,this strategy is highly beneficial in increasing the robustness of the planning process by decreasing the frequency with which agents make mistakes .this is a direct result of * rehearsing * the act - effect inferences , making it unlikely that attention - limited agents will forget these important inferences .this paper showed how agents choice in communicative action can be designed to mitigate the effect of their resource limits in the context of particular features of a collaborative planning task . 
in section [ model - sec ] ,i presented a model of collaborative planning in dialogue and discussed a number of parameters that can affect either the efficacy of the final plan or the efficiency of the collaborative planning process .then in section [ results - sec ] , i presented the results of experiments testing hypotheses about the effects of these parameters on collaborative planning dialogues .these results contribute to the development of the model of collaborative planning dialogue presented here .in addition , since the testbed implementation is compatible with many current theories , these results could be easily incorporated into other dialogue planning algorithms , _ inter alia_. a secondary goal of this paper was to argue for a particular methodology for dialogue theory development .the method was specified in section [ method - sec ] .the design - world testbed was introduced in section [ dw - sec ] and sections [ task - def - sec ] and [ comm - choice - sec ] described the parameterizations of the model that support testing the hypotheses .four parameters for communicative strategies were tested : ( 1 ) all - implicit ; ( 2 ) close - consequence ; ( 3 ) explicit - warrant ; and ( 4 ) matched - pair - inference - explicit .four parameters for tasks were tested : ( 1 ) standard ; ( 2 ) zero - nonmatching - beliefs ; ( 3 ) matched - pair ( mp ) ; ( 4 ) zero - invalid .three situations of varying processing effort were tested . in this section, i will first summarize the hypotheses and the experimental results in section [ summary - sec ] , then i will discuss how the experimental results might generalize to situations not implemented in the testbed .section [ future - work - sec ] proposes future work and section [ conc - sec ] consists of concluding remarks .the hypotheses that were generated by the statistical analysis of the dialogue corpora are repeated below for convenience from sections [ iru - sec ] and [ dw - plan - eval - sec ] . * hypoth - c1: agents produce consequence irus to demonstrate that they made the inference that is made explicit .* hypoth - c2 : agents choose to produce consequence irus to ensure that the other agent has access to inferrable information . *hypoth - c3 : the choice to produce a consequence iru is directly related to a measure of ` how hard ' the inference is .* hypoth - c4 : the choice to produce a consequence iru is directly related to a measure of ` how important ' the inference is .* hypoth - c5 : the choice to produce a consequence iru is directly related to the degree to which the task requires agents to be coordinated on the inferences that they have made . * hypoth - a1 : agents produce attention irus to support the processes of deliberating beliefs and intentions . 
*hypoth - a2 : there is a discourse inference constraint whose effect is that inferences in dialogue are derived from propositions that are currently discourse salient ( in working memory ) .* hypoth - a3 : the choice to produce an attention iru is related to the degree of inferential complexity of a task as measured by the number of premises required to make task related inferences .* hypoth - a4 : the choice to produce an attention iru is related to the degree to which an agent is resource limited in attentional capacity .* hypoth - a5 : the choice to produce an attention iru is related to the degree to which the task requires agents to be coordinated on the inferences that they have made .* hypoth - i1 : strategies that reduce collaborative effort without affecting quality of solution are beneficial .below i will summarize the experimental results reported in section [ results - sec ] with respect to the hypotheses above .hypotheses c3 and c4 were tested by comparing the close - consequence strategy with the all - implicit strategy in the standard task . in this experimental setup ,the inference made explicit by the consequence iru was neither hard to make nor critical for performance .hypothesis c3 was only weakly tested by the experiments because agents always make this inference .the results in figure [ cost - clc - kim - fig ] show that the close - consequence strategy is detrimental for low awm agents .this is because irus can displace useful information from working memory and because the inference made explicit with this iru is not ` hard enough ' .the standard task also provides a weak test of hypothesis c4 .the fact that the standard task is fault tolerant means that making the inference is not as critical as it might be .however , errors can results from either not making the inference or forgetting it once it is made . 
at lower values of awm ,the probability of such errors is not that high .however , the results shown in figure [ cost - clc - kim - fig ] show that the probability of error is higher for high awm agents in this case , because of their belief deliberation algorithm , and thus the close - consequence strategy is beneficial for high awm agents , even in the standard task .the zero - invalids task provides another test of hypothesis c4 by increasing the importance of the inference made explicit by the close - consequence strategy .figure [ clc - inval - fig ] shows that hypothesis c4 is confirmed because the close - consequence strategy is beneficial for low , mid and high awm agents .in addition to the reasons discussed for the standard task , this strategy is beneficial for high awm agents because they have more potential to improve their scores by ensuring that they do nt make errors .the experiments did not test hypothesis c1 because agents in the testbed are not designed to actively monitor evidence from other agents as to what inferences they might have made .hypothesis c5 was not tested by the experiments because agents always rectify the situation if they detect a discrepancy in beliefs about act effect inferences : they reject proposals whose preconditions do not hold .hypotheses a1 , a4 and a5 were tested by experiments in which the explicit - warrant strategy was compared with the all - implicit strategy in the standard task .hypothesis a1 is disconfirmed for low awm agents .figure [ free - ret - iei - fig ] shows that the explicit - warrant strategy is neither beneficial nor detrimental for low awm agents for the standard task , when processing is free .this counterintuitive result arises because , when agents are highly resource limited , irus can displace other information that is more useful . 
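the displacement effect invoked here is easy to see in a toy model . the sketch below is not the paper 's attention / working memory model ( that model is defined earlier in the paper ) ; purely for illustration it treats memory as a buffer of the most recent items and counts how many useful facts survive when each fact is followed by a redundant utterance .

```python
from collections import deque

def facts_remembered(awm, n_facts=12, irus_per_fact=0):
    """toy stand-in (not the paper's awm model): memory keeps only the most
    recent `awm` items, so redundant utterances can push useful facts out."""
    memory = deque(maxlen=awm)
    for f in range(n_facts):
        memory.append(("fact", f))
        for _ in range(irus_per_fact):
            memory.append(("iru", f))
    return sum(1 for kind, _ in memory if kind == "fact")

for awm in (5, 11, 30):                       # low, mid, high capacity
    print(awm, facts_remembered(awm, 0), facts_remembered(awm, 1))
```

in this toy the low and mid capacity agents lose facts when redundant items are added ( 5 versus 2 , and 11 versus 5 ) , while the high capacity agent retains everything , which is the shape of the effect reported above .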
to test hypothesis i1 in this situation, we also examined two situations where processing is not free .when communication cost dominates other processing costs , the explicit warrant strategy is detrimental for low and high awm agents .however , when retrieval cost dominates other processing costs , the explicit warrant strategy is beneficial for mid awm agents and there is a trend toward a beneficial effect for high awm agents .thus these two situations show that hypothesis i1 is confirmed : processing effort has a major effect on whether a strategy is beneficial .we also tested hypotheses a1 , a4 and a5 with experiments in which the explicit - warrant strategy was compared with the all - implicit strategy in the zero - nonmatching - beliefs task ( see figures [ iei - nmb - fig ] and [ iei - exp - nmb - fig ] ) .this task increases the importance of making deliberation based inferences by requiring agents to be coordinated on these inferences in order to do well on the task .in this situation , we saw a very large beneficial effect for the explicit - warrant strategy , which is not diminished by increasing communication effort .thus in situations in which agents are required to be coordinated on these inferences , strategies which include attention irus can be very important .hypotheses a2 , a3 , a4 , and a5 were tested by experiments comparing the matched - pair - inference - explicit strategy with the all - implicit strategy in the two versions of the matched - pair task .the results shown in figures [ imi - imi2-mpr - fig0 ] and [ imi - imi2-mpr - fig1 ] provide support for these hypotheses .however these results also included an unpredicted benefit of attention irus for inferentially complex tasks where agents must coordinate on inferences .figure [ imi - imi2-mpr - fig1 ] shows that both mid and high awm agents performance improves with the matched - pair - inference - explicit strategy .this can be explained by the fact that attention irus increase the likelihood that agents will make the * same * inference , rather than * divergent * inferences , when multiple inferences are possible .furthermore , although the matched - pair - inference - explicit strategy is specifically tied to matched - pair inferences , it provides a test of a general strategy for making premises for inferences salient , when tasks are inferentially complex and require agents to remain coordinated on inferences .thus it provides strong support for the discourse inference constraint . to generalize this strategy to other cases of plan - related inferences ,the clauses in the strategy plan operator that specifically refer to matched - pair inferences can be replaced with a more general inference , e.g. the more general ( generates ?act3 ) , where the generates relation is to be inferred .hypothesis i1 was tested by examining extremes in cost ratios for retrieval effort and communication effort whenever a hypothesis about the beneficial effects of irus was confirmed .figure [ exp - iei - fig ] shows that high communication effort can make the explicit - warrant strategy detrimental in the standard task . figure[ iei - exp - nmb - fig ] shows that high communication effort does not eliminate the benefits of the explicit - warrant strategy in the zero - nonmatching - beliefs task .figure and [ imi - imi2-mpr - fig4 ] shows that high communication effort does not eliminate the benefits of the matched - pair - inference - explicit strategy in the matched - pair - two - room task . 
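the trade - off between communication and retrieval effort can be made concrete with a small utility calculation . the cost weights and counts below are purely illustrative inventions ( the testbed 's actual scoring function and parameter values are defined earlier in the paper , not here ) ; the point is only that the sign of a strategy 's benefit flips with the cost ratio .

```python
def performance(raw_score, messages, retrievals, inferences,
                commcost=0.0, retcost=0.0, infcost=0.0):
    """hedged sketch: task score minus weighted processing costs."""
    return raw_score - (commcost * messages + retcost * retrievals + infcost * inferences)

# explicit-warrant sends more messages but saves retrievals (counts invented
# for illustration only).  when communication dominates it loses; when
# retrieval dominates it wins.
for commcost, retcost in [(1.0, 0.01), (0.01, 1.0)]:
    all_implicit = performance(120, messages=20, retrievals=60, inferences=10,
                               commcost=commcost, retcost=retcost)
    explicit_warrant = performance(120, messages=30, retrievals=40, inferences=10,
                                   commcost=commcost, retcost=retcost)
    print(commcost, retcost, explicit_warrant - all_implicit)
```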
thus the strategy of making premises for inferences salient is robust against extremes in processing effort . this section addresses concerns raised in that simulation is ` experimentation in the small ' . hanks writes that ( , section 5.1.5 ) : the ultimate value , arguably the * only * value , of experimentation is to constrain or otherwise inform the designer of a system that solves interesting problems . in order to do so the experimenter must demonstrate three things : 1 . that her results , the relationships she demonstrates between agent characteristics and world characteristics , extend beyond the particular agent , world , and problem specification she studied ; 2 . that the solution to the problem area she studied in isolation will be applicable when that same problem area is encountered in a larger , more complex world ; and 3 . that the relationship demonstrated experimentally actually constrains or somehow guides the design of a larger , more realistic agent . points 1 to 3 are all different ways of saying that the results should generalize beyond the specifics of the experiment , and this after all is a basic issue with all experimental work . typically generalizations can be shown by a series of multiple experiments modifying multiple variables , as we have done here . for example , the modifications to the task are specifically designed to test whether beneficial communicative strategies generalize across tasks . however , we might also ask to what extent the variables manipulated in the simulation abstract out key properties of real situations . below i will briefly discuss why the results presented above are potentially generalizable . i will focus on generalizations along three dimensions : ( 1 ) task ( or environmental ) properties ; ( 2 ) agent architectural properties ; and ( 3 ) agent behaviors . these dimensions are the same as those in cohen s ` ecological triangle ' . generalizations about tasks . the design - world task was selected as a simple planning task that requires negotiation of each step . the structure of this task is isomorphic to a subcomponent of many collaborative planning tasks . in addition , to test generalizability of hypothesized benefits across tasks , we examined more complex variants of the task by manipulating three abstract features : ( 1 ) inferential complexity as measured by the number of premises required for making a task related inference ; ( 2 ) degree of belief coordination required on intentions , inferences and beliefs underlying a plan ; and ( 3 ) the task
determinacy and fault tolerance of the plan .these general features can certainly be applied to other tasks in other domains .in fact it is difficult to think of a task or domain in which these features could not be applied .[ [ generalizations - about - agent - properties ] ] generalizations about agent properties + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + design - world agents are artificial agents that are designed to model the resource limited qualities of human agents .the planning and deliberation aspects of human processing are modeled with the irma architecture , and resource limits on these processes are modeled by extending the irma architecture with a model of attention / working memory ( awm ) which has been shown to model a limited but critical set of properties of human processing . the way that agents process dialogueis tied to the agent architecture .the experimental results will extend to dialogues between artificial agents to the extent that those agents exhibit similar cognitive properties . here, we looked at a resource bound on access to memory as modeled by a size of memory subset limit , however size is directly correlated to * time * to access memory .artificial agents are often time limited in rapidly changing worlds , so it seems quite plausible that artificial agents would benefit from similar communicative strategies .for example , i would predict that agents in the phoenix simulation testbed would benefit from the strategies discussed here . in other workartificial agents do ` make inferences explicit ' by communicating to other agents partial computations when the other agent might have been able to make these computations .in addition , defining inferential complexity as a direct consequence of the number of premises simultaneously in memory bears a strong resemblance to problems artificial processors have when a computation requires a large working set .the experimental results should extend to dialogues between humans and artificial agents because design - world agents are designed to model humans. 
however it may be desirable to change the definition of collaborative effort for modeling human - computer interaction to allow the computer to handle processing that is easy for the computer to do and for the human to handle processing that is easy for the human to do .furthermore , most of the claims about the awm model are based on a limited set of human working memory properties , and these properties will also hold for other cognitively based architectures such as soar .[ [ generalizations - about - agent - behaviors ] ] generalizations about agent behaviors + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this work the agent behaviors that were tested were the agent communication strategies .one reason to believe that the strategies are general to human - human discourse is that they were based on observed strategies in different corpora of natural collaborative planning dialogues .it is possible to find all three types of irus in the trains , map - task and design corpora , as well as in the financial advice domain .in addition to this empirical evidence , there are further reasons why we might expect generalizations .the communicative acts and discourse acts used by design - world agents are similar to those used in .thus communicative strategies based on these acts should be implementable in any of these systems .the experimental results based on these strategies should generalize to other discourse situations because the strategies are based on general relations between utterance acts and underlying processes , such as supporting deliberation and inference .for example , the mapping of a warrant relation between an act and a belief in naturally occurring examples such as [ walnut - examp ] was modeled with a warrant relation between an act and a belief in design - world , as seen in the explicit - warrant communication strategy .the claims made about the use of the explicit - warrant communication strategy should generalize to any dialogue planning domain where agents use warrants to support deliberation . similarly , content based inferences in natural dialogues such as that discussed in relation to example [ certif - examp ] were modeled with content based inferences in design - world such as those required for doing well on the matched - pair tasks .this inferential situation was designed to test the discourse inference constraint , that inferences in dialogue are restricted to premises that are currently salient .both experimental and corpus based evidence was provided in support of the discourse inference constraint .the claims made about the use of the matched - pair - inference - explicit communication strategy , based on experimental evidence , should generalize to any dialogue strategy where agents make premises for inferences available , and to any planning domain where agents are required to make content based inferences in support of deliberation or planning .the evaluation metrics applied to these strategies should also generalize whenever domain plan utility is a reasonable measure of the quality of solution for a dialogue task .the model of collaborative planning dialogues presented in section [ model - sec ] draws from previous work on cooperative dialogue , and the results are applicable to other current research on collaborative planning . 
the agent architecture andthe model of deliberation and means - end reasoning is based on the work of and , and on pollack s tileworld simulation environment .the use of irma as an underlying model of intention deliberation to provide a basis for a collaborative planning model was first proposed in , and has been incorporated into other work .the architecture includes a specific model of limited working memory , but most of the claims about the model are based on its recency and frequency properties , which might also be provided by other cognitively based architectures such as soar . since the testbed architecture is consistent with that assumed in other work , the experimental results should be generalizable to those frameworks .the relationship between discourse acts and domain - based options and intentions in this work is based on litman s model of discourse plans and is similar to the approach in .the emphasis on autonomy at each stage of the planning process and the belief reasoning mechanism of design - world agents is based on the theory of belief revision and the multi - agent simulation environment developed in the automated librarian project .the design - world testbed is based on the methods used in the tileworld and phoenix simulation environments : rapidly changing robot worlds in which an artificial agent attempts to optimize reasoning and planning .tileworld is a single agent world in which the agent interacts with its environment , rather than with another agent .design - world uses similar methods to test a theory of the effect of resource limits on communicative behavior between two agents .design - world is also based on the method used in carletta s jam simulation for the edinburgh map - task .jam is based on the map - task dialogue corpus , where the goal of the task is for the planning agent , the instructor , to instruct the reactive agent , the instructee , how to get from one place to another on the map .jam focuses on efficient strategies for recovery from error and parametrizes agents according to their communicative and error recovery strategies .given good error recovery strategies , carletta argues that ` high risk ' communicative strategies are more efficient , but did not attempt to quantify efficiency .in contrast , the approach here provides a way of quantifying what is an effective or efficient strategy , and the results suggest that a combination of the agents resource limitations and the task definition determine when strategies are efficient . future work could test carletta s claims about recovery strategies within this extended framework . to my knowledge, none of this earlier work has considered the factors that affect the range of variation in communicative choice , or the effects of different choices , or measured how communicative choice affects the construction of a collaborative plan and the ability of the conversants to stay coordinated . nor have other theories of collaborative planning been explicit about the agent architecture , or tested specific ideas about resource bounds in dialogue , and none have used utility as the basis for agents communicative choice . in addition , no earlier work on cooperative task - oriented dialogue argued that conversational agents resource limits and task complexity are major factors in determining effective conversational strategies in collaboration . 
a promising avenue for future work is to investigate beneficial strategies for teams of heterogeneous agents .in the experiments here , pairs of agents in dialogue were always parameterized with the same resource limits .pilot studies of dialogues between heterogeneous agents suggest that strategies that are not effective for homogeneous agents may be effective for heterogeneous ones .for example , in i tested an attention iru strategy in which agents would tell one another about all the options they knew about at the beginning of planning each room .this strategy is not beneficial for homogeneous agents because irus can displace other useful information .however if one agent is not limited , then it can be helpful for the resource limited agent to exploit the capabilities of the more capable agent by telling the other agent important facts before it forgets them .another extension would be to extend the agent communication strategies or to test additional ones .for example , other work proposes a number of strategies for information selection and ordering in dialogue and provides some evidence that these strategies are efficient or efficacious .support for these claims could be provided by design - world experiments in which agents used these strategies to communicate .future work could also modify the properties of the world or of the task .for example , it would be possible to make design - world more like tileworld by making the world change in the course of the task , by adding or removing furniture .these results may also be incorporated as input into decision algorithms in which agents decide online which strategy to pursue , and investigate additional factors that determine when strategies are effective in collaborative planning dialogues .the results presented here show what information an agent should consider .for example , a comparison between low , mid and high awm agents shows how to design decision algorithms for agents who have to decide whether to expend additional effort .another promising avenue is make the agents capable of remembering and learning from past mistakes so that they can adapt their strategies to the situation .finally , these results should be incorporated into the design of multi - agent problem - solving systems and into systems for human - computer communication , such as those for teaching , advice and explanation , where for example the use of particular strategies might be premised on the abilities of the learner or apprentice .the goal of this paper was to show how agents choice in communicative action , their algorithms for language behavior , can be designed to mitigate the effect of their resource limits in the context of particular features of a collaborative planning task . 
in this paper ,i first motivate a number of hypotheses based on a statistical analysis of natural collaborative planning dialogues .then a functional model of collaborative planning dialogues is developed based on these hypotheses , including parameters that are hypothesized to affect the generalizability of the model .the model is then implemented in a testbed in which these parameters can be varied , and the hypotheses are tested .the method used here can be contrasted with other work on dialogue modeling .much previous work on dialogue modeling only carries out part of the process described above : only the initial part of the process up to specifying a functional model is completed .followon research that is based on these models must judge the model according to subjective criteria such as how well it fits researcher s intuitions or how elegant the model is .the models developed here on the basis of empirical evidence can also be judged according to these subjective criteria , but this work carries out additional steps to further test and refine the model suggested by the corpus analysis . implementing a model with parameters to test the generalizability of the model and testing hypotheses in a testbed implementation provides a way to check subjective evaluations and suggests many ways in which our initial hypotheses must be refined and further tested .the design - world testbed is the first testbed for conversational systems that systematically introduces several different types of independent parameters that are hypothesized to affect the efficacy of a collaborative plan negotiated through a dialogue , and the efficiency of that dialogue process .experiments in the testbed examined the interaction between ( 1 ) agents resource limits in attentional capacity and inferential capacity ; ( 2 ) agents choice in communication ; and ( 3 ) features of communicative tasks that affect task difficulty such as inferential complexity , degree of belief coordination required , and tolerance for errors .the results verified a number of hypotheses that depended on particular assumptions about agents resource limits that were not possible to test by corpus analysis alone .several unpredicted and counterintuitive results were also demonstrated by the experiments .first , the task property of belief coordination in * combination * with resource limits ( as in the zero - nonmatching - beliefs and matched - pair tasks ) , were shown to produce the most robust benefits for irus , rather than resource limits alone as originally hypothesized .second , i predicted that irus would always be beneficial for low awm agents , but found that irus can be detrimental for these agents through a side effect of displacing other , more useful , beliefs from working memory .third , it would seem plausible that high awm agents should always perform better than either low or mid awm agents since these agents always have access to more information .however the results showed that there are two situations in which this is not an advantage : ( 1 ) when accessing information has some cost ; and ( 2 ) when access to multiple beliefs can lead agents to make divergent inferences . 
in this case , restricting agents to a small shared working set is a natural way to limit inferential processes .this limit intuitively corresponds to potential benefits of limited working memory for humans and explains how humans manage to coordinate on inferences in conversation .these results clearly demonstrate that factors not previously considered in dialogue models must be taken into account of claims if cooperativity , efficiency , or efficacy are to be supported . in addition , i have shown that a theory of dialogue that includes a model of resource - limited processing can account for both the observed language behavior in human - human dialogue and the experimental results presented here .the work reported in this paper has benefited from discussions with steve whittaker , aravind joshi , ellen prince , mark liberman , max mintz , bonnie webber , scott weinstein , candy sidner , owen rambow , beth ann hockey , karen sparck jones , julia galliers , phil stenton , megan moser , johanna moore , christine nakatani , penni sibun , ellen germain , janet cahn , jean carletta , jon oberlander , julia hirschberg , alison cawsey , rich thomason , cynthia mclemore , jerry hobbs , pam jordan , barbara di eugenio , susan brennan , rebecca passonneau , rick alterman and paul cohen .i am grateful to julia galliers for providing me with an early implementation of the belief revision mechanism used in the automated librarian project , and to julia hirschberg who provided me with tapes of the financial advice talk show .thanks also to the two anonymous reviewers who provided many useful suggestions .this research was partially funded by aro grants daag29 - 84-k-0061 and daal03 - 89-c0031pri , darpa grants n00014 - 85-k0018 and n00014 - 90-j-1863 , nsf grants mcs-82 - 19196 and iri 90 - 16592 and fellowship int-9110856 for the 1991 summer science and engineering institute in japan , and ben franklin grant 91s.3078c-1 at the university of pennsylvania , and by hewlett - packard laboratories .edmund h. durfee , piotr gmytrasiewics , and jeffrey rosenschein .the utility of embedded communications and the emergence of protocols . in _aaai workshop on planning for interagent communication _ , 1994 .brian logan , steven reece , and karen sparck jones .modelling information retrieval agents with belief revision . in _ seventh annual international acm sigir conference on research and development in information retrieval _ , pages 91100 , london , 1994 .springer - verlag .ellen f. prince .the zpg letter : subjects , definiteness and information status .in s. thompson and w. mann , editors , _ discourse description : diverse analyses of a fund raising text _ , pages 295325 .john benjamins b.v ., 1992 .richmond thomason .propagating epistemic coordination through mutual defaults i. in r. parikh , editor , _ proceedings of the third conference on theoretical aspects of reasoning about knowledge _ , pages 2939 .morgan kaufmann , 1990 .i. zukerman and j. pearl .comprehension - driven generation of meta - technical utterances in math tutoring . in _ proceedings of the the 5th national conference on artificial intelligence _ , pages 606611 .morgan kaufmann publishers , inc . , 1986 .
this paper shows how agents choice in communicative action can be designed to mitigate the effect of their resource limits in the context of particular features of a collaborative planning task . i first motivate a number of hypotheses about effective language behavior based on a statistical analysis of a corpus of natural collaborative planning dialogues . these hypotheses are then tested in a dialogue testbed whose design is motivated by the corpus analysis . experiments in the testbed examine the interaction between ( 1 ) agents resource limits in attentional capacity and inferential capacity ; ( 2 ) agents choice in communication ; and ( 3 ) features of communicative tasks that affect task difficulty such as inferential complexity , degree of belief coordination required , and tolerance for errors . the results show that good algorithms for communication must be defined relative to the agents resource limits and the features of the task . algorithms that are inefficient for inferentially simple , low coordination or fault - tolerant tasks are effective when tasks require coordination or complex inferences , or are fault - intolerant . the results provide an explanation for the occurrence of utterances in human dialogues that , prima facie , appear inefficient , and provide the basis for the design of effective algorithms for communicative choice for resource limited agents .
finding hidden patterns or regularities in data sets is a universal problem which has a long tradition in many disciplines from computer science to social sciences .for example , when the data set can be represented as a graph , i.e. a set of elements and their pairwise relationships , one often searches for tightly knit sets of nodes , usually called communities or modules .the identification of such communities is particularly crucial for large network data sets that require new mathematical tools and computer algorithms for their interpretation .most community detection methods find a partition of the set of nodes where most of the links are concentrated within the communities . herethe communities are the elements of the partition , and so each node is in one and only one community .a popular class of algorithms seek to optimise the modularity of the partition of the nodes of a graph .the simplest definition of modularity for an undirected graph , i.e. the adjacency matrix is symmetric , is \ , \label{modadef}\end{aligned}\ ] ] where and is the degree of node .the indices and run over the nodes of the graph .the index runs over the communities of the partition .modularity counts the number of links between all pairs of nodes belonging to the same community , and compares it to the expected number of such links for an equivalent random graph in which the degree of all nodes has been left unchanged . by construction with larger that more links remain within communities then would be expected in the random model . uncovering a node partition whichoptimises modularity is therefore likely to produce useful communities . this node partitioning approach has , however , the drawback that nodes are attributed to only one community , which may be an undesirable constraint for networks made of highly overlapping communities. this would be the case , for instance , for social networks , where individuals typically belong to different communities , each characterised by a certain type of relation , e.g. friendship , family , or work . in scientific collaboration networks ( for example ) , authors may belong to different research groups characterised by different research interests . such inter - community individuals are often of great interest as they broker the flow of information between otherwise disconnected contacts , thereby connecting people with different ideas , interests and perspectives .only a few alternative approaches have been proposed in order to uncover overlapping communities of nodes , for example .our suggestion is to define communities as a partition of the links rather than of the set of nodes .a node may then have links belonging to several communities and in this it belongs to several communities .the central node in a bow tie graph is a simple example , see fig .[ fbowtiec ] .this link partition approach should be especially efficient in situations when the nodes of a network are connected by different types of links , i.e. in situations where the nodes are heterogeneous while the links are very homogeneous . in the case of the social networkmentioned above , this would occur when the friendship network and work network of individuals only have a very small overlap .this paper is organised as follows . in section [ sdynmod ], we review a definition of modularity which uses the statistical properties of a dynamical process taking place on the nodes of a graph . 
in section [ slinkpart ] ,we propose three dynamical processes taking place on the links of the graph and derive their corresponding modularities , now defined for a partition of the links of a network .to do so , we make connections to the concept of a line graph and with the projection of bipartite networks . in section [ sempanal ] , we optimise the three modularities for some examples and interpret our results . in section [ sdiscussion ]we conclude and propose ways to improve our method .to motivate our link partition quality function , let us first consider how to interpret the usual modularity ( [ modadef ] ) in terms of a random walker moving on the nodes .suppose that the density of random walkers on node at step is and the dynamics is given by from now on , we will only consider networks that are undirected ( the adjacency matrix is symmetric ) , connected ( there exists a path between all pairs of nodes ) , non - bipartite ( it is not possible to divide the network into two sets of nodes such that there is no link between nodes of the same set ) , and simple ( without self - loops nor multiple links ) . if the first three conditions are respected , it is easy to show that the stationary solution of the dynamics is generically given by .let us now consider a node partition of the network and focus on one community .if the system is at equilibrium , it is straightforward to show that the probability a random walker is in on two successive time steps is while the probability of finding two independent walkers at nodes in are this observation allows us to reinterpret as a summation over the communities of the difference of these two probabilities .this interpretation suggests natural generalisations of modularity allowing to tune its resolution .indeed , is based on paths of length one but it can readily be generalised to paths of arbitrary length as \ , , \label{stability}\end{aligned}\ ] ] where .this quantity is called the stability of the partition . because is an eigenvector of eigenvalue one of , one can show that the symmetric matrix corresponds to a time - dependent graph where the degree of node is always equal to .therefore can be interpreted as the modularity of , a matrix that connects more and more distant nodes of the original adjacency matrix as time grows .it can be shown that optimising ( [ stability ] ) typically leads to partitions made of larger and larger communities for increasing times and that the optimal partition when is made of two communities .the above discussion suggests that we should look at a random walker moving on the links of network in order to define the quality of a link partition .such a walker would therefore be located on the links instead of the nodes at each time and move between adjacent links , i.e. links having one node in common . in the case of the random walk on the nodes ( [ discrete ] ) ,a walker at node follows one of its links with probability , i.e. all links are treated equally . however , a link between nodes and is characterised by two quantities and , so a random walk on the links is more subtle . in the following ,we will focus on two different types of dynamical process that account differently for the degrees and ( see fig .[ frandwalks ] ) .in the first process , a walker jumps with the same probability to one of the links leaving and .when , the walker goes with a different probability through or , and we therefore call this process an `` link - link random walk '' ( see fig[frandwalks]a ) . 
in the second process , a walker jumps to one of the two nodes to which it is attached , say , then moves to a link attached to that node ( excluding the link it came from ) . thus it will arrive at a link leaving that node with a probability , and similarly it will arrive at a link attached to the other node with probability . we will refer to this as a `` link - node - link random walk '' ( see fig.[frandwalks]b ) . this process is well - defined unless the link is a leaf , namely one of its extremities has a degree one , say . in that case , the walker will jump with a probability to one of the links leaving . these two types of dynamics are different in general except if the degrees at the extremities and of each link are equal . in the case of a connected graph , this condition is equivalent to demanding that the graph is regular , i.e. the degree of all the nodes is a constant . when this condition is not respected , the link - link random walk favours the passage of the walker through the extremity having the largest degree . the difference between the two processes will be maximal when the network is strongly disassortative , namely when links typically relate nodes with very different degrees . in order to study these two types of random walk more carefully , it is useful to represent a network by its incidence matrix . the elements of this matrix ( is the number of links ) are equal to if link is related to node and otherwise . the incidence matrix of may be seen as the adjacency matrix of a bipartite network ( see fig.[fbowtieall]b ) , the incidence graph of , where the two types of nodes correspond to the nodes and the links of the original graph ( one may picture the nodes as points in some euclidean space of no particular interest , each link of then being a line which always intersects exactly two such points ) . by construction , all the information of the graph is incorporated in . for instance , the degree of a node and the number of nodes attached to a link ( always equal to two ) are given by , and the adjacency matrix of the graph can also be obtained . this operation ( [ adjdef ] ) can be interpreted as a projection of the bipartite incidence graph onto the unipartite network . in a similar way , an adjacency matrix for the links can be obtained by projecting the bipartite network onto its links . in the following , we will focus on two standard types of projection that , as we will show , are directly related to the two random walks introduced above . ( caption of fig.[fbowtieall ] : the bow tie graph , whose incidence matrix of eqn . ( [ adjdef ] ) has other equivalent graph representations . in ( b ) the incidence matrix of the bow tie is shown as a bipartite network , the incidence graph . the line graph of the bow tie is the unweighted version of the graph labelled ( c , d ) , with adjacency matrix of eqn . ( [ adjc ] ) ; the weighted version in diagram ( c , d ) has the adjacency matrix of eqn . ( [ adjd ] ) . the weighted line graph with self loops , labelled ( e ) , has the adjacency matrix of eqn . ( [ adjdtilde ] ) . circles represent entities which correspond to nodes of the original graph , while triangles come from links in the original graph . ) the simplest way to project a bipartite graph consists of taking all the nodes of one type for the nodes of the projected graph . a link is added between two nodes in this projected graph if these two nodes had at least one node of the other type in common in the original bipartite graph . the operation ( [ adjdef ] ) is of this type . when applied to the links of the graph , the second type of vertex in the bipartite incidence graph , it leads to the adjacency matrix whose elements are . it is easy to verify that this adjacency matrix is symmetric and that its elements are equal to 1 if two links have one node in common , and zero otherwise . it is interesting to note that this adjacency matrix corresponds to another well known graph , usually called the _ line graph _ of g and denoted by ( see fig.[fbowtieall]c ) . it is a simple graph with nodes . by construction , each node of degree of the original graph corresponds to a fully connected clique in . thus it has links . line graphs have been studied extensively and among their well - known properties , whitney s uniqueness theorem states that the structure of can be recovered completely from its line graph , for any graph other than a triangle or a star network of four nodes . this result implies that projecting the incidence matrix onto does not lead to any loss of information from the network structure . this is a remarkable result that is not generally true when projecting generic bipartite networks . it is now straightforward to express the dynamics of the link - link random walk ( fig.[frandwalks]a ) in terms of the projected adjacency matrix . here is the density of random walkers on link at step , and and are the extremities of . this dynamical process therefore only depends on the sum of the degrees and . the stationary solution is found to be , where . when is simple , then . by reapplying the steps described in , it is now straightforward to derive a quality function , eqn . ( [ modcdef ] ) , for the link partition of the graph . this is just the usual modularity ( [ modadef ] ) for a graph with adjacency matrix . as we noted , a single node in leads to a connected clique of links in the line graph . this seems to suggest that the line graph gives too much prominence to the high degree nodes of the original graph . our response is to define a weighted line graph whose links are scaled by a factor of .
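both quality functions introduced so far are ordinary modularities of ( possibly weighted ) adjacency matrices , so a single generic routine evaluates them ; the same routine applied to powers of the random - walk transition matrix gives the discrete - time stability of section ii . the sketch below ( plain numpy , conventions as i read them from the text ) builds the incidence matrix and unweighted line graph of the bow tie and recovers the 0.1 quoted below for the two - triangle link partition , which i take to be the unweighted line - graph value .

```python
import numpy as np

def modularity(M, communities):
    """usual modularity of a hard partition for any symmetric (weighted) adjacency matrix M."""
    M = np.asarray(M, dtype=float)
    s = M.sum(axis=1)                      # degrees / strengths
    w = s.sum()
    q = 0.0
    for group in communities:
        idx = np.ix_(group, group)
        q += (M[idx] - np.outer(s[group], s[group]) / w).sum()
    return q / w

def stability(M, communities, t=1):
    """discrete-time stability: modularity of a t-step random walk (t = 1 recovers modularity)."""
    M = np.asarray(M, dtype=float)
    s = M.sum(axis=1)
    w = s.sum()
    T = M / s                              # T_ij = M_ij / s_j, column-stochastic
    Tt = np.linalg.matrix_power(T, t)
    pi = s / w                             # stationary distribution
    return sum(Tt[i, j] * pi[j] - pi[i] * pi[j]
               for group in communities for i in group for j in group)

# bow tie graph: incidence matrix B (5 nodes x 6 links) and unweighted line graph C.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]      # node 2 is the centre
B = np.zeros((5, 6))
for a, (i, j) in enumerate(edges):
    B[i, a] = B[j, a] = 1
C = B.T @ B - 2 * np.eye(6)               # C_ab = 1 iff links a and b share a node

two_triangles = [[0, 1, 2], [3, 4, 5]]    # the natural link partition
print(modularity(C, two_triangles))       # 0.1
print(stability(C, two_triangles, t=1))   # identical: stability at t = 1 is modularity
```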
in order to derive the quality of a link partition associated to the link - node - link random walk , it is useful to project the incidence matrix in a different way and to define another graph with a symmetric adjacency matrix given by this weighted line graph has the intuitive property that the degree of a link is equal to two ( a link always has two extremities ) unless is a leaf in ( then except for one trivial case ) .for example this weighted line graph of the bow tie network is shown in fig.[fbowtieall]d . only if is regular will this weighted line graph be equivalent ( up to an overall scale ) to the original unweighted line - graph .this construction is a well - known method for projecting bipartite networks .for instance in the case of collaboration networks the normalisation is justified by the desire that two authors should be less connected if they wrote a joint paper with many co - authors than a paper with few authors .this weighted line graph allows us to write the dynamics of the link - node - link random walk in a natural way and , by reusing the above arguments to define another quality function for the link partition of a graph ,\ ] ] where is twice the number of links minus the number of leaves in the original graph , .again , this is the same functional form as the usual modularity , of ( [ modadef ] ) , only the adjacency matrix has changed .the random walks proposed in the previous sections have been defined on the line graph , and therefore consist of walkers moving among adjacent links of the original graph .however , such processes can not be related to the original random walk ( [ rw ] ) on the nodes of , because a walker moving on links can pass at two subsequent steps through the same node of while such self - loops are forbidden in ( [ rw ] ) .this observation suggests an alternative approach where the dynamics would be driven by the original random walk ( [ rw ] ) but would be projected on the links of the network . to do so ,let us assume that a walker has not moved yet and is located at node . in that case, it is reasonable to assume that all the neighbouring links of are connected by a weight .the corresponding adjacency matrix for the links is therefore given by and is based on an unconstrained unbiased two - step random walk on the bipartite incidence graph to transitions in the link - link walk of fig.[frandwalks]a .that is we could define an unweighted line graph with self loops with adjacency matrix . since it differs from the standard unweighted line graph only by the addition of a self - loop to every node , this can be interpreted within the scheme of who add self - loops to control the number and size of communities found . ] . unlike our previous line graph constructions , of ( [ adjc ] ) and of ( [ adjd ] ) , this weighted line graph has self loops .it is illustrated for the bow tie graph in fig.[fbowtieall]e .all nodes in have strength two , , reflecting the fact that the links in the original graph all have two ends . is constructed when a walker is located on a node and has not moved yet .the motion of the walker according to ( [ rw ] ) generates a new adjacency matrix , , defined as where we note that .the corresponding graph is still regular with , and it is again weighted with self - loops . the quality function associated with this dynamics is simply ,\ ] ] where again .this quality function is particularly interesting because it has a simple relationship to the modularity of the original graph , of ( [ modadef ] ) . 
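a sketch of the two weighted projections , and of the relationship just claimed , is given below . the explicit formulas are my reading of the definitions whose equations were lost in extraction : the link - node - link projection weights shared nodes by 1/(k_i - 1 ) and drops self - loops , the `` walker not yet moved '' projection weights them by 1/k_i and keeps self - loops , and the one - step matrix sandwiches the original adjacency matrix between the degree - normalised incidence matrices ; the graph is assumed to have no leaves . the final two numbers illustrate numerically the relationship derived in the next paragraph .

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]      # bow tie again
B = np.zeros((5, 6))
for a, (i, j) in enumerate(edges):
    B[i, a] = B[j, a] = 1
k = B.sum(axis=1)                                             # node degrees (all >= 2 here)
A = B @ B.T - np.diag(k)                                      # original adjacency matrix
W = k.sum()

D = B.T @ np.diag(1.0 / (k - 1)) @ B                          # link-node-link projection ...
np.fill_diagonal(D, 0.0)                                      # ... without self-loops
E0 = B.T @ np.diag(1.0 / k) @ B                               # walker not yet moved (self-loops kept)
E1 = B.T @ np.diag(1.0 / k) @ A @ np.diag(1.0 / k) @ B        # assumed form after one step of the node walk
print(D.sum(axis=1), E0.sum(axis=1), E1.sum(axis=1))          # every link has strength 2 in all three

# link partition into the two triangles, and its projection onto the nodes:
U = np.zeros((6, 2)); U[[0, 1, 2], 0] = 1; U[[3, 4, 5], 1] = 1
V = (B @ U) / k[:, None]                                      # v_ic: fraction of i's links in community c

def modularity(M, communities):
    s = M.sum(axis=1); w = s.sum()
    return sum((M[np.ix_(g, g)] - np.outer(s[g], s[g]) / w).sum() for g in communities) / w

Q_line = modularity(E1, [[0, 1, 2], [3, 4, 5]])               # link-partition modularity on the one-step matrix
Q_soft = np.einsum('ic,ij,jc->', V, A - np.outer(k, k) / W, V) / W
print(Q_line, Q_soft)                                         # equal (about 0.167 for the bow tie)
```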
to show this relationship , let us assign a weight representing the strength of the membership of link in community . such weights may be defined and constrained in many ways . for instance , in a link partition we have for any , i.e. every link belongs to just one community . in order to translate into a community structure on the nodes , it is natural to use the incidence matrix of ( [ adjdef ] ) and to define the rectangular matrix through . if is a link partition , then the projected node community structure is simply the fraction of links in community incident at node . also , if then so is . now using the definition of the adjacency matrix in ( [ adjdef ] ) , we find that the modularity of this line graph for a link partition reduces to the modularity of the original graph evaluated on the induced node memberships , \begin{equation} q = \frac{1}{w}\sum_{c}\sum_{i,j} v_{ic}\left[ a_{ij} - \frac{k_i k_j}{w}\right] v_{jc} = q(\mathbf{a};\{v_{ic}\}) . \label{ecvcrel} \end{equation} thus finding modularity optimal link partitions of the line graph with adjacency matrix of ( [ adje1def ] ) is equivalent to the optimisation of the modularity of the original graph but with a different constraint on the node community structure from that imposed when finding node partitions . in the previous sections , we have proposed three quality functions , and , for the partition of the links of a network . each represents a different dynamical process and therefore explores the structure of the original graph in a different way . in order to tune the resolution of the optimal partitions , it is straightforward to define the stabilities , and of the three processes by generalising the concept of modularity to paths of arbitrary length ( see section ii ) . the optimal partitions of these quality functions can be found by applying standard modularity optimisation algorithms to the corresponding line graphs . in this paper , we have used two different algorithms and have verified that both algorithms give consistent results . as a first check , let us look at the bow tie graph of figure [ fbowtiec ] . the optimisation of the three quality functions leads to the expected partition into two triangles , with the values =0.1 , , . in this case , the central node belongs equally to the two link communities , which is a far better way to split the network than any node partition . the best node partition gives when three nodes in one triangle form one community and the remaining two nodes form a second community . in order to compare node partitions and link partitions in the following , we will use the idea of a ` boundary link ' and a ` boundary node ' . a boundary link of a node partition is one which connects two nodes from different communities . we will then define a boundary node of a link partition to be a node which is connected to links from more than one link community . thus the central node of the bow tie graph is a boundary node . a less contrived graph is the karate club of zachary , which is made of thirty four members . historically , the club split into two distinct factions . it is standard to compare the partition produced by a community detection method to the actual split of the club . the node partition having the largest value of modularity contains four communities , but the resolution can be lowered by optimising the stability for larger values of . when is large enough , the optimal partition is always made of two communities ( see figure [ fkaratevp ] ) , e.g.
, that agree with zachary s partition into `` sink '' and `` source '' communities using the ford - fulkerson binary community algorithm .the link partitions found by optimising , and are shown in fig .[ fkaratecde ] .they are respectively made of , and communities .these three partitions are consistent with the historical two - way split of the network , as the boundary links of the two - way partition of fig .[ fkaratevp ] are always connected to a boundary node of a link partition . in general , however , the three optimal partitions are as different as their corresponding dynamical processes are . the most striking difference is observed around node . in the node partition optimising , this node is connected to several boundary links and connects the community of nodes ( 5,6,7,11,17 ) to the rest of the network .such a position is consistent with the link partitions obtained from and , but not with the link partition optimising . in this latter case, one observes that node 1 is rather the focus of one of the link communities on the left hand side in fig .[ fkaratecde ] .this difference originates from the high degree of node 1 which implies that a link - link random walk is biased to pass through this node ( see fig.[frandwalks ] ) , and therefore heavily connects its adjacent links .this is a general problem of the unweighted line graph that gives too much emphasis to high degree nodes ( also noted in ) and therefore tends to produces communities centred around hubs .such a problem does not take place for the weighted line graphs and , and in both these cases node 1 is a boundary node , part of several communities .the main difference between the optimal partitions of and is the number of the communities in each , as expected because the line graph connects more distance links of the original graph than .let us also note that the optimal partition of resembles very much the one of , as suggested by ( [ ecvcrel ] ) .+ + before concluding , let illustrate how longer random walks can be used to tune the resolution of the link partition .we focus on the weighted line graph , whose optimal partition into seven communities is difficult to compare against the standard two and four community node partitions of fig.[fkaratevp ] . let us therefore focus on the stability , which is based on paths of length of a random walker on .as expected , larger and larger communities are uncovered when is increased and , when is large enough , we obtain a two way link partition ( see fig.[fkarated2 ] ) that shows a perfect match with the node partition shown in fig.[fkaratevp ] . of the karate club.,scaledwidth=50.0% ] as a final example , let us use the university of south florida free association norms data set to create a simple network .we end up with 5018 words connected by 58536 links and from this a line graph with 1266910 links is created . ] in the manner of .we obtain a link partition by optimising the modularity for the weighted line graph of ( [ adjd ] ) but where the null model term has been scaled by a factor of in order to control the resolution and in this case obtain 321 communities in the whole network .the corresponding quality function can be seen as a linear approximation of the stability .it is easier to optimise for large networks . 
in fig.[fsfwan ]we show part of the network near the word ` bright ' which is part of eleven communities .the topology of our communities is much less constrained than those of k - clique percolation which means we can pick out a wider range of structures .there are some tight clique - like subsets , e.g. the names of the planets . at the other extremethe method finds more tree like structures such as the sequence ` lit - on - switch - lever - handle ' which is the backbone of another community linked to bright . on the other hand this flexibility in the structure can produce a confusing picture since many words are members of several communities though mostly having just one or two links per community .for instance for the word ` bright ' , it is linked to eight of its eleven communities by just one link .however one can exploit this feature to start to define strength of membership in different communities .for instance for visualisation , we have found it useful to view only those words which have a large number of links within one community , as in fig.[fsfwan ] . where the null model factor was .this controls the number of communities found .the subgraph shown contains the word ` bright ' along with nodes which have at least of their links in one of the communities connected to ` bright ' . , scaledwidth=30.0% ]when describing a network , there seems to be a natural tendency to put the emphasis on its nodes whereas a graph is a both a set of nodes and a set of links .it is therefore not surprising that node partitioning has been studied extensively in recent years while link partitioning has been overlooked so far . in this paper, we have shown that the quality of a link partition can be evaluated by the modularity of its corresponding line graph .we have highlighted that optimising the modularity of some of our weighted line graphs uncovers meaningful link partitions .our approach has several advantages .a key criticism of the popular node partitioning methods is that a node must be in one single community whereas it is often more appropriate to attribute a node to several different communities .link partitioning overcomes this limitation in a natural way .moreover , the equivalence of a link partition of a graph with the node partitioning of the corresponding line graph means that one can use existing node partitioning code with only the expense of producing a line graph transformation and an increase in memory to accommodate the larger line graph .even the memory cost can be reduced to be since we have shown our link partitioning is equivalent to a process occurring on the links of the original graph , so a line graph need not be produced explicitly .our method can be seen as a generalisation of the popular k - clique percolation , which finds sets of connected k - cliques . by way of comparisonwe find collections of two - cliques which are more densely connected than expected in an equivalent null model .thus the link partitioning of our paper can be seen as an extension of two - clique percolation that allows for the uncovering of finer modules , i.e. two - clique percolation trivially uncovers connected components .an interesting generalisation would be to apply our approach to the case of triangles , 4-cliques , etc .to do so , one has to replace the incidence matrix ( relating nodes and links ) by a more general bipartite graph , representing the membership of nodes in a clique of interest . 
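for readers who want to reproduce the flavour of these experiments without the authors ' own code , the sketch below is one possible pipeline using networkx : partition the links of zachary 's karate club by running an off - the - shelf modularity heuristic on the nodes of the unweighted line graph , then map the link communities back to overlapping node memberships and filter by membership strength , in the spirit of the visualisation filter used above . caveats : this uses the unweighted line graph , which the text argues over - weights hubs , and greedy modularity optimisation rather than the algorithms used in the paper , so the communities found will not match the figures exactly ; the 0.5 threshold is an arbitrary illustration .

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from collections import Counter, defaultdict

G = nx.karate_club_graph()
L = nx.line_graph(G)                          # nodes of L are the links of G

# a node partition of the line graph is a link partition of the original graph
link_communities = list(greedy_modularity_communities(L))

# membership strength: fraction of each node's links falling in each link community
counts = defaultdict(Counter)
for c, links in enumerate(link_communities):
    for u, v in links:
        counts[u][c] += 1
        counts[v][c] += 1

membership = {n: {c: x / G.degree(n) for c, x in cs.items()} for n, cs in counts.items()}
boundary_nodes = sorted(n for n, cs in membership.items() if len(cs) > 1)
strong = {n: max(cs, key=cs.get) for n, cs in membership.items()
          if max(cs.values()) >= 0.5}         # keep nodes firmly inside one community
print(len(link_communities), "link communities")
print("boundary nodes:", boundary_nodes)
print("nodes kept by the 0.5 filter:", sorted(strong))
```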
our random walk analysis in terms of this bipartite graphwould then proceed in the same fashion , and should allow to uncover finer modules than those obtained by k - clique percolation .all our expressions also hold for the case of weighted networks .even multiedges can be accommodated if we start from the incidence matrix , .however the beauty of our approach is that any type of graph analysis , be it community detection or something else , can be applied to a line graph rather than the original graph . in this way , one can view a network from a completely different angle yet use well established techniques to obtain fresh information about its structure .r.l . would like to thank m. barahona and v. eguiluz for interesting discussions , and uk epsrc for support .after this work was finished , we received the paper of ahn et al. who also look at edge partitions but not in terms of weighted line graphs .a. noack and r. rotta , lecture notes in computer science * 5526 * , 257 - 268 ( 2009 ) .j. reichardt and s. bornholdt , phys .e * 74 * , 016110 ( 2006 ) .m. girvan and m.e.j .newman , proc .usa * 99 * , 7821 ( 2002 ) .g. agarwal and d. kempe , eur .j. b * 66 * , 409 - 418 ( 2008 ) .nelson , c.l . mcevoy and t.a .schreiber , the university of south florida word association , rhyme , and word fragment norms ` http://www.usf.edu/freeassociation/ ` .
In this paper, we use a partition of the links of a network in order to uncover its community structure. This approach allows communities to overlap at nodes, so that nodes may belong to more than one community. We do this by making a node partition of the line graph of the original network. In this way we show that any algorithm which produces a partition of nodes can be used to produce a partition of links. We discuss the role of degree heterogeneity and propose a weighted version of the line graph to account for it.
the geant4 monte carlo toolkit was originally developed to support the high energy physics experiments of cern ( allison 2006 ) ; it has since enjoyed widespread usage in the medical physics community for many years ( foppiano 2004 , paganetti 2004 , aso 2007 , wroe 2007 , oborn 2009 , constantin 2010 ) .geant4 possesses electromagnetic physics models that have been validated for materials and photon / electron energies relevant to radiotherapy ( poon and verhaegen 2005 , tinslay 2007 ) .geant4 possesses many features that make it suitable for use in radiotherapy .it enables the simulation of complex geometries using combinatorial geometry , has support for voxelised geometries such as computed tomography ( ct ) data ( aso 2007 ) , and the ability to incorporate tesselated volumes generated by computer aided design ( cad ) programs ( constantin 2010 ) .this capability is particularly useful in radiation detector development where complicated compositions and geometries can be modelled and sources of artefacts can be identified and mitigated ( othman 2010 ) .time dependent geometries can also be modelled , which is applicable to modern radiotherapy modalities such as proton therapy ( paganetti 2004 ) , tomotherapy , sliding window imrt , volumetric modulated arc therapy , as well as tumour motion tracking technologies in these modalities .lastly , geant4 is able to model neutron production from photo - nuclear reactions , which is useful for out - of - field dosimetry studies relevant to higher energy photon beams . with this in mind , there is increasing interest in the use of geant4 as a monte carlo tool for external beam radiotherapy ( jan 2011 , grevillot , constantin 2010 , foppiano 2004 ) .a recently developed multi - threaded version of geant4 will enable porting of geant4 applications to many core processing units , further placing mc in the reach of routine treatment plan verification studies ( dong 2010 ) .this note describes an accurate model of a varian radiotherapy linac created using the geant4 toolkit , integration with a dicom - rt interface , and validation by comparison with experimental data .this tool will ultimately be used for routine independent verification of treatment plans and to support various research projects within the group including : gel dosimetry , plastic scintillator development , neutron dosimetry , and ultrasound based organ motion tracking .the geometry of the 6 mv varian ix clinac model is illustrated in figure [ fig : geometry ] and is based on the vendor supplied documentation .particular attention was paid to target , flattening filter , primary collimator , jaw , and mlc geometry and material composition . 
for the most part, geant4 s standard combinatorial geometry ( allison 2006 ) classes were used to model the linac components .for complex imrt and rapidarc treatments , contributions to the dose distribution from mlc leakage can be significant ( bush 2008 ) ; an accurate model of mlcs is therefore essential .each individual mlc leaf was modelled using solidworks , including target and isocentre half - leaves , full leaves , and outboard leaves using the approach of constantin ( 2010 ) .the leaf components were then exported individually as step files ( an iso compliant file format ) before being converted to geometry description markup language ( gdml ) using the fastrad software .gdml is an xml extension used by geant4 to allow definition of geometry without the need for hard coding .these gdml files were then loaded into the geant4 application using the gdml parser . in order to create the mlc bank ,each leaf was placed along the y - axis according to manufacturer specification , taking into account the interleaf gap and leaf divergence from the source .a number of user interface commands were developed to allow configuration of many aspects of the geometry without the need for recompilation between runs .this includes gantry angle , collimator angle , jaw and mlc positions , phantom composition and dimensions ( solid water or water ) , phantom source - to - surface distance ( ssd ) , and voxellisation of the readout geometry .the international electrotechnical commission ( iec 2002 ) has defined a standard for coordinate systems and positive rotation directions for gantry and collimator in radiotherapy .it provides a hierarchical approach , with each component coordinate system described relative to its mother coordinate system .geant4 similarly uses a hierarchy of mother - daughter volumes in the construction of a simulation geometry , enabling straight - forward implementation of the iec standard .standard parameterised electromagnetic physics models were used , taking into account the following processes : for photons , the photoelectric effect , compton scattering , rayleigh scattering , and pair production ; for electrons , bremsstrahlung production , ionisation , and multiple scattering ; and for positrons , multiple scattering , ionisation , and the annihilation process . to preclude the tracking of very low energy secondary particles, geant4 uses the concept of range cuts ; that is , if a secondary particle is produced with residual range less than the range cut , it is not tracked and assumed to deposit all energy at the point of generation ; range cuts were used throughout the geometry .this corresponds to secondary electron energy thresholds of 84.7 kev in water , 352 kev in tungsten , and 250 kev in copper .phase space files were used to reduce computation times . using an approach similar to that of bush ( 2007 ) , each simulation was executed in three phases .phase i involves the simulation of a number of primary electrons incident on the target , the simulation of the subsequent electromagnetic cascade , and the scoring of photons at a plane below the ionisation chamber ( scoring plane 1 ) .this section of the geometry remains fixed during any treatment plan and need only be simulated once .the details of each particle crossing this scoring plane ( position , direction cosines , energy , particle - type , particle weight ) were recorded to a binary phase space file and the particle was no longer tracked . 
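The phase-space record just described (position, direction cosines, energy, particle type, statistical weight) is naturally handled as a file of fixed-size binary records. The layout below is hypothetical — the actual file format used in this work is not specified — and is only meant to illustrate the idea of writing particles out at a scoring plane and re-reading them in later phases.

```python
# Hypothetical fixed-record phase-space layout (NOT the actual format used in
# this work): position (x, y, z), direction cosines (u, v, w), kinetic energy,
# particle type code, statistical weight.
import numpy as np

record = np.dtype([
    ("x", "f4"), ("y", "f4"), ("z", "f4"),
    ("u", "f4"), ("v", "f4"), ("w", "f4"),
    ("E", "f4"),
    ("pdg", "i4"),
    ("wt", "f4"),
])

def write_phsp(filename, particles):
    """Append an array of records (dtype=record) to a binary phase-space file."""
    with open(filename, "ab") as f:
        np.asarray(particles, dtype=record).tofile(f)

def read_phsp(filename):
    """Read the whole phase-space file back as a structured array."""
    return np.fromfile(filename, dtype=record)

# Recycling the file N times in a later phase simply means sampling each stored
# particle N times, with independent downstream transport.
```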
during phase ii , particles were sampled from the first phase space file and transported through the jaws and mlcs , then recorded again at a second scoring plane below the mlcs ( scoring plane 2 ) .the region between scoring plane 1 and scoring plane 2 does not change between control points ( beamlets ) during an imrt / rapidarc treatment .the first phase space file was recycled times to populate the second phase space file .phase iii of the simulation sampled the second phase space file and transported particles through the patient / phantom geometry ; this phase space file was again recycled times .uniform bremmstrahlung splitting ( ubs ) was implemented using the approach of faddegon ( 2008 ) with a splitting factor of .a simple form of geometry biasing was also implemented via `` kill zones '' : surfaces placed above the target and around the primary collimator so as to remove particles from the simulation that are unlikely to contribute to a response in the readout geometry .values of and were optimised via calculation of simulation efficiency for a square field using the methodology of karakow and walters ( 2006 ) .dose is scored in a voxellised water or patient geometry using the approach of aso ( 2007 ) . the dose delivered to each voxel per primary electronis calculated for each control point of a treatment plan ( or simulation run ) along with an estimation of the standard error .all information needed to fully describe a radiotherapy treatment plan can be defined in a number of files using the dicom - rt format . in order to facilitate a monte carlo simulation of treatment plans ,a dicom - rt interface is required . to this end , the vega library ( locke and zavgorodni 2008 ) was incorporated into the geant4 application .this library enables parsing of all dicom - rt files as well as the exporting of an mc calculated dose distribution into a dicom - rt dose file for subsequent importation back into the tps . in our implementation , each beam ( gantry angle , collimator angle ) and control point ( jaws , mlc settings , and number of mus ) is read from the tps dicom - rt plan file and translated into geant4 user interface ( macro ) commands .these in turn modify the geant4 simulation geometry between runs and simulate a pre - determined number of particles from the phase space file .treatment plans generally give the absolute dose distribution within the phantom / patient , whereas the monte carlo calculation results in dose per primary particle .analogous to the calibration of a linac monitor chamber , the virtual monitor chamber of a monte carlo simulation may be calibrated using the method outlined by popescu ( 2005 ) . 
in this approach, the reference conditions of the linac are simulated , typically a field at an ssd of 100 cm and calibration depth of ( iaea 2000 ) .a simulation is executed to determine the dose per primary particle at the calibration depth , as well as the dose to the virtual monitor chamber per primary particle .the absolute dose distribution under general conditions can then be determined by : where is the normalised dose per incident particle , is the absolute dose per monitor unit as measured at the reference position during calibration of the linac , is the contribution to the monitor chamber dose per incident particle by the beam entering from above ( phase i of the simulation ) , is the contribution by the beam entering from the rear ( phases ii and iii of the simulation ) , is this contribution under reference conditions , is the simulated dose per primary particle under reference conditions , and finally is the number of monitor units delivered for a particular irradiation ( or control point ) .according to the suggested protocol of verhaegen and seuntjens ( 2003 ) the first stage in the commissioning of a monte carlo linac model is to optimise primary electron beam parameters in order to obtain agreement with experimental results .these parameters are - the electron beam energy , and - the standard deviation in the approximated gaussian fluence distribution of the primary electron beam that is normally incident on the target ; ie . ,the spot - size .dose distributions in a scanditronix water phantom were measured using a standard imaging exradin a16 chamber and scanditronix ic13 chamber ( for larger fields ) as part of the commissioning procedure for the tps .a subset of this was used for the tuning process ; namely , a and square field irradiation of a water phantom .the dose distribution in a water phantom was simulated using a spatial resolution of .initial values of and were based on manufacturer specifications .a comparison between simulated and experimental percentage depth dose ( pdd ) profiles , sensitive to beam energy , and a crossplane profile ( sensitive to spot - size variations ) at shallow depth ( ) was made .the spot - size and energy were tuned by nominal amounts until a gamma evaluation ( low , 1998 ) criterion was achieved for 98% of data points . following primary beam tuning ,a subset of the complete commissioning dataset was used for validation .field - sizes relevant to imrt and rapidarc treatments ranging from to were considered with the square fields defined by the x and y jaws .comparison was made using pdds and cross - plane profiles at depths of , , , , and . againa gamma evaluation criterion was used to quantify accuracy of the simulation and fine adjustments in the model were made where necessary . 
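The gamma evaluation of Low et al. (1998) used for these comparisons combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. A brute-force one-dimensional version, suitable for depth-dose or cross-plane profiles, might look like the following; this is a sketch of the standard algorithm, not the analysis code used in this work, and the tolerances are left as parameters.

```python
# Brute-force 1-D gamma index (Low et al. 1998) for two dose profiles sampled
# at positions x_ref / x_eval (same length units).  A point passes if gamma <= 1.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol, dist_tol):
    """dose_tol: dose tolerance in the same units as d; dist_tol: DTA."""
    x_ref = np.asarray(x_ref, dtype=float)
    d_ref = np.asarray(d_ref, dtype=float)
    gammas = np.empty(len(x_eval))
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        dist2 = ((x_ref - xe) / dist_tol) ** 2
        dose2 = ((d_ref - de) / dose_tol) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# pass_rate = np.mean(gamma_1d(...) <= 1.0), compared against the 98% target
# used for the tuning and validation described above.
```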
in order to validate the cad model of the multi - leaf collimator , a methodology similar to the approach of heath and seuntjens ( 2003 )leaf leakage measurements were performed using ebt2 film placed at depth in a solid water phantom at ssd of .the phantom was irradiated by a jaw - defined field with mlcs fully closed with a collimator rotation of 90 degrees .optical density changes in the ebt2 film were measured using an epson perfection v700 scanner , 72 dpi resolution ( spatial resolution of ) and 48 bit colour .perspex frames were used to separate the film pieces from the scanner surface in order to avoid newton s rings artefacts ( kairn 2010 ) .analysis of the film pieces was conducted using imagej image processing software .firstly the glass and frame were scanned to obtain a baseline image for the scanner , .next the film piece was placed on the frame and a scan acquired , . for both images ,the red colour channel was isolated and converted to 32 bit . in order to calculate optical density the logarithm of the ratio of glass to film imageswas taken : ( hartmann 2010 ) . to calculate net optical density variation induced by radiation dose ,the film was rescanned in this manner ( placed at the same position on the frame as the pre - scanned film ) .co - registration of images is performed ( typically translation of the image by a few pixels ) and the images are subtracted to calculate the net optical density image ( kairn 2010 ) .calibration pieces of dimensions were used from the same sheet as the measurement pieces and were exposed to doses in the range 0 - 400 cgy using a square field at depth of in a solid water phantom ( at an ssd of ) .the pieces were scanned prior to irradiation and at after irradiation and net optical densities were calculated as above . a 2nd order polynomialfit to the calibration curve was made and used to convert net optical density to dose . to ensure that measured doses lay within the range of doses used for the calibration of film , two irradiations were conducted ( heath and seuntjens 2003 ) .the first was approximately 50 times the number of mus required to give an open field dose of 100 cgy , given that the dose under the leaves is approximately 2% of the open field dose .likewise , the dose under the abutted leaves is approximately 20% of the open field dose , as such 5 times the mus were considered for this measurement ( heath and seuntjens , 2003 ) . 
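Numerically, the film analysis described above reduces to forming net optical densities from the co-registered red-channel images and mapping them to dose through the fitted second-order calibration curve. A sketch with numpy follows (the actual analysis in this work was performed in ImageJ).

```python
# Net optical density and dose calibration for radiochromic film (sketch).
# i_glass: scan of frame/glass only; i_pre, i_post: red-channel film scans
# before/after irradiation, co-registered pixel-to-pixel (floating point).
import numpy as np

def optical_density(i_glass, i_film):
    return np.log10(i_glass / i_film)            # OD = log10(I_glass / I_film)

def net_od(i_glass, i_pre, i_post):
    # net optical density induced by the delivered dose
    return optical_density(i_glass, i_post) - optical_density(i_glass, i_pre)

def fit_calibration(net_od_cal, dose_cal):
    # 2nd-order polynomial fit of dose against net OD from calibration pieces
    return np.polyfit(net_od_cal, dose_cal, 2)   # coefficients, highest power first

def od_to_dose(net_od_map, coeffs):
    return np.polyval(coeffs, net_od_map)
```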
a simulation of the experimentwas then performed .validation of the linac model , dicom - rt ( plan ) interface , and method of absolute dose calibration , was carried out by simulating a commonly used imrt qa procedure known as the chair test ( van esch 2003 ) .this test is commonly used to verify the correct functioning of the leaf motion controller , as well as correctness of treatment planning parameters such as transmission and dosimetric leaf separation .similarly , it was used here to validate these aspects of the geant4 simulation .a typical chair test plan was delivered to a mapcheck two dimensional detector array at 5 cm depth in the mapphan solid water phantom ( ) , the plan was then simulated with the simulation tool , using the tps generated dicom - rt ( plan ) file for input .primary electron beam parameters that provided the best - fit to experimental pdds and crossplane profiles at a depth of 15 mm in water phantom were found to be and .this result is in contrast to published values for tuned beam energy that can often vary from the nominal beam energy by up to 5% ( mesbahi 2006 , verhaegen and seuntjens 2003 ) .particular attention was paid to the geometry and composition of the key components of the linac . in particular ,the correct density of target and flattening filter is essential , an over - estimation by a few percent in flattening filter density leads to significant underestimation of beam `` horns '' in the cross - plane profile for larger fields .furthermore , the simulated pdd was particularly sensitive to the range cuts employed in the simulation .too high a cut value leads to an over - estimation of the average beam energy ( beam is too hard ) which in turn can lead to an underestimation of the peak dose , requiring an unusually low tuned beam energy to achieve agreement . for each simulation primary electrons were used and simulation times were approximately 2 hrs on 100 cores ( 2.33ghz 64bit intel xeon processor cores ) of a high performance computing facility .figure [ fig : commissionpdd ] shows pdd curves , both experimental and simulated , for a number of field sizes relevant to rapidarc and imrt treatments .the curves are normalised to the dose at a depth of 10 cm .the uncertainty of simulation results is approximately 2% and curves agree within the specified gamma criteria for at least 98% of points .figure [ fig : commissionprofile ] shows crossplane dose profiles , both experimental and simulated , for a number of field sizes and depths in a water phantom .agreement between experiment and simulation is observed to satisfy a gamma criterion of ( 98% of data points ) and verifies the accuracy of the primary beam model , geometry of the beam modifying components ( barring the mlcs ) , physics processes , and the spectral properties of the 6 mv photon beam . in order to mitigate the problem of inhomogeneity of ebt2 film sensitivity reported in the literature ( kairn 2010 ), a simple correction technique was used for which two pieces from the same sheet were used to measure the dose distribution with one of the pieces rotated 180 degrees . 
using fiducial markers ,the resulting dose distributions were co - registered and averaged .two leakage profiles were considered for comparison with simulation predictions ( see figure [ fig : films ] ) : the first being parallel to the direction of leaf travel on the beam axis , referred to herein as the abutted leaf leakage profile ; the second was perpendicular to the direction of leaf travel and offset from the beam axis by several centimetres , referred to here - in as the interleaf leakage profile .the results of the abutted and interleaf leakage profiles are shown in figure [ fig : leakage ] , dose profiles are normalised to the dose at the same position in the phantom for a square field .the abutted leaf leakage profiles show excellent agreement , satisfying the criterion of .the interleaf leakage profiles show good agreement between the mean value of leakage dose : ( standard deviation ) measured compared to simulated . both in good agreement with the results of heath and seuntjens ( 2003 ) for millenium mlc modelling using beamnrc .heath and seuntjens ( 2003 ) observed a discrepancy at for the abutted leaf leakage profiles and attributed this to the existence of a single calibration point below on the calibration curve .this effect was also observed during the current study and mitigated by additional low dose calibration points below .results of the chair - test measurement and simulation are shown in figure [ fig : chair ] .figure [ fig : chair](b ) shows an x - profile along the y - axis , through the ` seat ' of the chair , where agreement between simulation and experiment is good .figure [ fig : chair](a ) shows an x - profile at a y - offset of ; ie . , across the back of the chair .discrepancy in absolute doses of around 4% between experiment and simulation can be seen for the region that is blocked by the mlcs for the entire irradiation ( ) .the discrepancy was thought to be due to an over - response by the mapcheck array to the lower energy photons in this region ( dominated by particles scattered in the phantom or transmitted through the leaves ) . to test this hypothesis , leaf leakage measurements shown in figure [ fig : leakage ]were repeated using the mapcheck device .this yielded an interleaf leakage measurement of , around higher than simulation and film measurements , yet agreeing within the stated uncertainty .figure [ fig : chair](c ) shows an x - profile at a y - offset of ; ie ., across the legs of the chair .again discrepancy is seen in the central region that is either directly under a stationary leaf - body , or traversed by fully closed leaves . 
despite these discrepancies , these results do confirm the ability of the tool to simulate the control points of a treatment plan , accurately reproducing leaf movements , number of mus per control point , as well as the method of absolute dose calculation .a geant4 based simulation tool has been developed that is capable of accurately simulating the dosimetric properties of a 6 mv varian ix clinac .the simulation includes detailed modelling of key components using cad software and was tuned and validated against both film and ionisation chamber dosimetry measurements in solid water and water phantoms .the accuracy of the multi - leaf collimator model and dicom - rt interface was verified against mapcheck measurements in a solid water phantom subject to irradiation by a chair - test .this tool will form the basis of a treatment plan verification tool for radiotherapy and the model will be extended to include higher energy beams as well as cad modelled electron applicators for modelling electron beams .further validation will be performed by simulating rapidarc treatment plans delivered to homogeneous solid water phantoms and anthropomorphic phantoms and comparisons with experimental measurement .this project is funded by the queensland cancer physics collaborative ( queensland health ) , australia .computational resources and services used in this work were provided by the hpc and research support unit , queensland university of technology , brisbane , australia ( 404 core lyra altix se compute cluster ) .the authors would like to thank the geant4 collaboration for providing the toolkit and examples , regular updates , documentation , and the online user forum .the authors would also like to thank tanya kairn and john kenny of premion for discussions and guidance related to the use of ebt2 film .allison j _ et al _ 2006 geant4 developments and applications _ ieee trans . nucl .* 53 * 270278 constantin m , constantin d e , keall p j , narula a , svatos m and perl j 2010 linking computer - aided design ( cad ) to geant4-based monte carlo simulations for precise implementation of complex treatment head geometries n211n220 faddegon b a , asai m , perl j , ross c , sempau j , tinslay j , salvat f 2008 benchmarking of monte carlo simulation of bremsstrahlung from thick targets at radiotherapy energies _ med . phys . _ * 35(10 ) * 43084317 foppiano f , mascialino b , pia m.g and piergentili m 2004 a geant4-based simulation of an accelerator s head used for intensity modulated radiation therapy _ ieee nucl .rec . _ * 4 * 21282132 othman m a r , cutajar d l , hardcastle n , guatelli s and rosenfeld a b 2010 monte carlo study of mosfet packaging , optimised for improved energy response : single mosfet filtration _ rad .dos . _ * 141(1 ) * 1017 oborn b m , metcalfe p e , butson m j and rosenfeld a b 2009 high resolution entry and exit monte carlo dose calculations from a linear accelerator 6 mv beam under the influence of transverse magnetic fields * 36(8 ) * 35493559 van esch a , bohsung j , sorvari p , tenhunen m , paiusco m , iori m , engstrm p , nystrm h and huyskens d p 2002 acceptance tests and quality control ( qc ) procedures for the clinical implementation of intensity modulated radiotherapy ( imrt ) using inverse planning and the sliding window technique : experience from five radiotherapy departments ._ radiother .oncol . _ * 65(1 ) * 5370
A Geant4-based simulation tool has been developed to perform Monte Carlo modelling of a 6 MV Varian iX Clinac. The computer-aided design interface of Geant4 was used to accurately model the linac components, including the Millennium multi-leaf collimators (MLCs). The simulation tool was verified by comparison with standard commissioning dosimetry data acquired with an ionisation chamber in a water phantom. Verification of the MLC model was achieved by simulation of leaf leakage measurements performed using Gafchromic film in a solid water phantom. An absolute dose calibration capability was added by including a virtual monitor chamber in the simulation. Furthermore, a DICOM-RT interface was integrated with the application to allow the simulation of radiotherapy treatment plans. The ability of the simulation tool to accurately model leaf movements and doses at each control point was verified by simulation of a widely used intensity-modulated radiation therapy (IMRT) quality assurance (QA) technique, the chair test.
auditory signal processing based on phenomenological models of human perception has helped to advance the modern technology of audio compression .it is of interest therefore to develop a systematic mathematical framework for sound signal processing based on models of the ear .the biomechanics of the inner ear ( cochlea ) lend itself well to mathematical formulation ( among others ) .such models can recover main aspects of the physiological data for simple acoustic inputs ( e.g. single frequency tones ) . in this paper, we study a nonlinear nonlocal model and associated numerical method for processing complex signals ( clicks and noise ) in the time domain .we also obtain a new spectrum of sound signals with nonlinear hearing characteristics which can be of potential interest for applications such as speech recognition .linear frequency domain cochlear models have been around for a long time and studied extensively .the cochlea , however , is known to have nonlinear characteristics , such as compression , two - tone suppression and combination tones , which are all essential to capture interactions of multi - tone complexes . in this nonlinear regime , it is more expedient to work in the time domain to resolve complex nonlinear frequency responses with sufficient accuracy .the nonlinearity in our model resides in the outer hair cells ( ohc s ) , which act as an amplifier to boost basilar membrane ( bm ) responses to low - level stimuli , so called active gain .it has been shown that this type of nonlinearity is also nonlocal in nature , encouraging near neighbors on the bm to interact .one space dimensional transmission line models with nonlocal nonlinearities have been studied previously for auditory signal processing .higher dimensional models give sharper tuning curves and higher frequency selectivity . 
in section 2 ,we begin with a two space dimensional ( 2-d ) macromechanical partial differential equation ( pde ) model .we couple the 2-d model with the bm micromechanics of the active linear system in .we then make the gain parameter nonlinear and nonlocal to complete the model setup , and do analysis to simplify the model .in section 3 , we discretize the system and formulate a second order accurate finite difference scheme so as to combine efficiency and accuracy .the matrix we need to invert at each time step has a time - independent part ( passive ) and a time - dependent part ( active ) .in order to speed up computations , we split the matrix into the passive and active parts and devise an iterative scheme .we only need to invert the passive part once , thereby significantly speeding up computations .the structure of the system also allows us to reduce the complexity of the problem by a factor of two , giving even more computational efficiency .a proof of convergence of the iterative scheme is given in the appendix .in section 4 , we discuss numerical results and show that our model successfully reproduces the nonlinear effects such as compression , multi - tone suppression , and combination difference tones .we demonstrate such effects by inputing various signals into the model , such as pure tones , clicks and noise .a nonlinear spectrum is computed from the model and compared with fft spectrum for the acoustic input of a single tone plus gaussian white noise .the conclusions are in section 5 .the cochlea consists of an upper and lower fluid filled chamber , the scala vestibuli and scala tympani , with a shared elastic boundary called the basilar membrane ( bm ) ( see figure [ fig : coch ] ) .the bm acts like a fourier transform with each location on the bm tuned to resonate at a particular frequency , ranging from high frequency at the basal end to low frequency at the apical end .the acoustic wave enters the ear canal , where it vibrates the eardrum and then is filtered through the middle ear , transducing the wave from air to fluid in the cochlea via the stapes footplate . a traveling wave of fluid moves from the base to the apex , creating a skew - symmetric motion of the bm .the pressure difference drives the bm , which resonates according to the frequency content of the passing wave .we start with simplification of the upper cochlear chamber into a two dimensional rectangle \times [ 0,h] ] is the active gain control . in ,this is a constant , but in our case will be a nonlinear nonlocal functional of bm displacement and bm location .bringing to the left , we have where , \ \{ k_{\mathrm{a}}}= \left[\begin{array}{cc } k_{4 } & -k_{4 } \\ 0 & 0 \end{array } \right ] \label{eq : actmat}\ ] ] thus , the micromechanics consist of equations ( [ eq : passmat])([eq : actmat ] ) .a compressive nonlinearity in the model is necessary to capture effects such as two - tone suppression and combination tones .also , to allow for smoother bm profiles , we make the active gain nonlocal .thus we have and gain where are constants . solving the pressure laplace equation on the rectangle using separation of variables , we arrive at where substituting ( [ eq : sov ] ) into ( [ eq : force ] ) and then discretizing ( [ eq : active ] ) in space into grid points , we have where \ ] ] \ ] ] \ ] ] \ ] ] , , and are now block diagonal , where and . 
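As an illustration of how the nonlocal gain enters the discrete system, the sketch below evaluates a gain of this general type on the BM grid using the quadrature weights mentioned above. The functional form shown (quadratic in the displacement, smoothed by a Gaussian kernel, compressively saturating) is a generic stand-in, since the model's exact expression is the one given in the equations referenced above; all parameter values are purely illustrative.

```python
# Illustrative discretisation of a nonlocal, compressive gain functional of BM
# displacement.  The functional form is a generic stand-in, NOT the model's
# exact expression.  x: grid points along the BM, u: BM displacement,
# w: numerical integration (quadrature) weights.
import numpy as np

def nonlocal_gain(x, u, w, gamma0=1.0, theta=1.0, sigma=0.05):
    gain = np.empty_like(u)
    for i, xi in enumerate(x):
        kernel = np.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2))   # nonlocal smoothing
        smoothed = np.sum(w * kernel * u ** 2)                    # quadratic in u
        gain[i] = gamma0 / (1.0 + theta * smoothed)               # compressive saturation
    return gain

# Trapezoidal quadrature weights on a uniform grid of spacing dx:
# w = np.full(len(x), dx); w[0] = w[-1] = dx / 2.
```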
also , , and .the numbers are numerical integration weights in the discretization of ( [ eq : integral ] ) and are chosen based on the desired degree of accuracy .note that we can write , where and is symmetric and positive definite .the result of separation of variables produced the matrix , which is essentially the mass of fluid on the bm and _ dynamically couples _ the system .in formulating a numerical method , we note that the matrices in ( [ eq : disc ] ) can be split into a time - independent passive part and a time - dependent active part . in splitting in this way , we are able to formulate an iterative scheme where we only need to do one matrix inversion on the passive part for the entire simulation .thus , using second order approximations of the first and second derivates in ( [ eq : disc ] ) , we arrive at where superscript denotes discrete time , denotes iteration and \ ] ] proof of convergence will follow naturally from the next discussion .notice that this is a system .we shall simplify it to an system and increase the computational efficiency .we write and in block matrix form as \ ] ] \ ] ] where it is easily seen that the left inverse of is given by \ ] ] where \nonumber \\ & = & \{2\alpha { m_{\mathrm{f}}^{\mathrm{s}}}+ [ 2m_{1 } + p_{1 } + p_{3}(i - \tilde{m}_{2}^{-1}p_{3})]w^{-1}\}w \nonumber \\ & \equiv & { d_{\mathrm{s}}}w \label{eq : dsdef}\end{aligned}\ ] ] note that is invertible since is positive definite , thus invertible , and all other terms are positive diagonal matrices , and thus their sum is positive definite and invertible .we also have \label{eq : lpinvla}\ ] ] letting , we have \label{eq : iter1 } \\ \vec{v}^{n+1,k+1 } & = & w^{-1}\tilde{m}_{2}^{-1}p_{3}[\zeta_{2}^{n } + { d_{\mathrm{s}}}^{-1}\gamma^{n } p_{4}(\vec{u}-\vec{v})^{n+1,k } ] \label{eq : iter2}\end{aligned}\ ] ] where \label{eq : zeta1}\ ] ] \label{eq : zeta2}\ ] ] at each time step , we do 2 matrix solves in ( [ eq : zeta1 ] ) and ( [ eq : zeta2 ] ) to initialize the iterative scheme .then , since the same term appears in both equations ( [ eq : iter1 ] ) and ( [ eq : iter2 ] ) , for each we only have to do 1 matrix solve . in practice ,since is symmetric , positive definite and _ time - independent _ , we compute the cholesky factorization of at the start of the simulation and use the factorization for more efficient matrix solves at each step . as a side note ,if we subtract ( [ eq : iter2 ] ) from ( [ eq : iter1 ] ) , we have one equation for the ohc displacement .we start with a modification of the parameters in ( see table [ tab : params ] ) .it is known that higher dimensional models give higher sensitivity .this is the case with this model .the 1-d model gives a 90 db active gain at 16 khz , whereas the 2-d model gives a 160 db active gain .thus , we need to tune the system to reduce the gain .there are many ways to do this , and the method we choose is to increase all the damping coefficients in the table by the following : .model parameters in cgs units [ cols= " < , < , < , < " , ] in an isocontour plot , a probe is placed at a specific location on the bm where the time response is measured and analyzed for input tones covering a range of frequencies .figure [ fig : sens ] shows isointensity curves for , which corresponds to .the characteristic place ( cp ) for a frequency is defined as the location on the bm of maximal response from a pure tone of that frequency in the fully linear active model ( ) . 
the characteristic frequency ( cf ) at a bm location is the inverse of this map .the left plot is the linear steady state active case .the parameter is the active gain , and for each value of the active gain we get a curve that is a function of the input frequency .the value of this function is the ratio , where is bm displacement at the characteristic place and is pressure at the eardrum .this is known as sensitivity .it is basically an output / input ratio and gives the transfer characteristics of the ear at that particular active level .notice that when , the bm at the characteristic place is most sensitive at the corresponding characteristic frequency , but at lower values of the gain , the sensitivity peak shifts to lower frequencies .analogously , the second plot in figure [ fig : sens ] shows isointensity curves for the nonlinear time domain model where now the parameter is the intensity of the input stimulus in db spl ( sound pressure level ) . for the time domain ,we measure the root - mean - square bm amplitude from 5 ms ( to remove transients ) up to a certain time .note that for high - intensity tones , the model becomes passive while low - intensity tones give a more active model .this shows _compression_. again , there is a frequency shift of the sensitivity peak ( about one - half octave ) from low to high - intensity stimuli in agreement with , so called half - octave shift .the plot agrees well with figure 5 in .the first non - sinusoidal input we look at is a click . in the experiment in the left plot of figure [ fig :click ] , we put probes at varying characteristic places associated with frequencies ranging from 0.5 - 4 khz to measure the time series bm displacement .the click was 40 db with duration 0.1 ms starting at 0.4 ms .all responses were normalized to amplitude 1 .the plot is similar to figure 4 in . in the right plot of figure[ fig : click ] , a probe was placed at cp for 6.4 khz and the time series bm volume velocity was recorded for various intensities and the sensitivity plotted .this shows , similar to figure [ fig : sens ] , the compression effects at higher intensities .see figure 9 in for a similar plot .the second non - sinusoidal input we explore is gaussian white noise .figure [ fig : noise ] is similar in all regards to figure [ fig : click ] .notice again in the right plot the compression effect .any nonlinear system with multiple sinusoidal inputs will create difference tones .if two frequencies and are put into the ear , will be created at varying intensities , where and are nonnegative integers .the cubic difference tone , denoted , where , is the most prominent .figure [ fig : cdt ] contains three plots of one experiment .the experiment consists of two sinusoidal tones , 7 and 10 khz at 80 db each .the cubic difference tone is 4 khz .the plot on the left is the bm profile for the experiment at 15 ms .we see combination tone peaks at 1.21 cm ( cp for 4 khz ) , 1.54 cm ( cp for 2 khz ) and 1.85 cm ( cp for 1 khz ) .the middle plot shows the snapshot at 15 ms of the active gain parameter , showing the difference tones getting an active boost . 
finally , the right plot is a spectrum plot of the time series for bm displacement at 1.21 cm , the characteristic place for 4 khz .the cubic difference tone is above 1 nm and can therefore be heard .two - tone ( and multi - tone ) suppression is characteristic of a compressive nonlinearity and has been recognized in the ear .figure [ fig : supp ] illustrates two - tone suppression and is a collection of isodisplacement curves that show decreased tuning in the presence of suppressors and is similar to figure 16 in .we placed a probe at the cp for 4 khz ( 1.21 cm ) and input sinusoids of various frequencies . at each frequency , we record the pressure at the eardrum that gives a 1 nm displacement for 4 khz in the fft spectrum of the time series response at cp .the curve without suppressors is dashed with circles .we then input each frequency again , but this time in the presence of a low side ( 0.5 khz ) tone and high side ( 7.5 khz ) tone , both at 80 db .notice the reduced tuning at the cf . also notice the asymmetry of suppression , which shows low side is more suppressive than high side , in agreement with .for multi - tone suppression , we look at tonal suppression of noise . in figure[ fig : multisupp ] , for each plot , a probe was placed at every grid point along the bm and the time response was measured from 15 ms up to 25 ms .the signal in each consisted of noise at 50 db with a 2 khz tone ranging from 40 db to 80 db ( top to bottom ) .an fft was performed for each response and its characteristic frequency amplitude was recorded and plotted in decibels relative to the average of the response spectrum of 0 db noise from 0.5 - 16 khz .we see suppression of all frequencies , with again low - side suppression stronger than high - side suppression .figure [ fig : multisupp ] is qualitatively similar to figure 3 in .it is useful to compare this figure with figure [ fig : multisuppfft ] .this figure is the same as figure [ fig : multisupp ] , except we do an fft of the input signal at the eardrum .comparing these two figures shows that we have a new spectral transform that can be used in place of an fft in certain applications , for example signal recognition and noise suppression .we studied a two - dimensional nonlinear nonlocal variation of the linear active model in .we then developed an efficient and accurate numerical method and used this method to explore nonlinear effects of multi - tone sinusoidal inputs , as well as clicks and noise .we showed numerical results illustrating compression , multi - tone suppression and difference tones .the model reached agreement with experiments and produced a novel nonlinear spectrum . in future work, we will analyze the model responses to speech and resulting spectra for speech recognition .it is also interesting to study the inverse problem of finding efficient and automated ways to tune the model to different physiological data .applying the model to psychoacoustic signal processing will be another fruitful line of inquiry .the work was partially supported by nsf grant itr-0219004 .j. x. would like to acknowledge a fellowship from the john simon guggenheim memorial foundation , and a faculty research assignment award at ut austin .we need the following lemma : let be a non - zero eigenvalue of with non - trivial eigenvector . thus , gives subtracting the two equations , we have now , if , then from [ equ : a1 ] and [ equ : a2 ] above and , we have . 
but this means , which is a contradiction .thus , is an eigenvalue of with non - trivial eigenvector by the above lemma applied to ( [ eq : lpinvla ] ) , with constant , we have \end{aligned}\ ] ] where denotes spectrum .thus , we have now , let be the eigen - pair of with the smallest eigenvalue and .note that since is positive definite .thus , we have is the largest eigenvalue of , which gives thus , using the definition of from ( [ eq : dsdef ] ) , we have ^{-1}\}\vec{x}\\ & \geq & \vec{x}^{t}\{[2m_{1}+p_{1}+p_{3}(i-\tilde{m}_{2}^{-1}p_{3})]w^{-1}\}\vec{x}\\ & \geq & \min\{[2m_{1}+p_{1}+p_{3}(1-\tilde{m}_{2}^{-1}p_{3})]w^{-1}\}\end{aligned}\ ] ] where lowercase represents diagonal entries .the third line above follows from being positive definite .finally , we have \max(p_{4})}{\min\{[2m_{1}+p_{1}+p_{3}(1-\tilde{m}_{2}^{-1}p_{3})]w^{-1}\}}\end{aligned}\ ] ] for small enough , we have convergence . e. de boer and a. l. nuttall , `` properties of amplifying elements in the cochlea '' , in : a. w. gummer , ed . ,biophysics of the cochlea : from molecules to models , proc ., titisee , germany , 2002 j. xin , y. qi , and l. deng , `` time domain computation of a nonlinear nonlocal cochlear model with applications to multitone interaction in hearing '' , comm ., vol . 1 , no . 2 , 2003 , pp .
A two-space-dimensional active nonlinear nonlocal cochlear model is formulated in the time domain to capture nonlinear hearing effects such as compression, multi-tone suppression and difference tones. The micromechanics of the basilar membrane (BM) are incorporated to model active cochlear properties. An active gain parameter is constructed in the form of a nonlinear nonlocal functional of BM displacement. The model is discretized with a boundary integral method and solved numerically using an iterative, second-order accurate finite difference scheme. A block matrix structure of the discrete system is exploited to simplify the numerics with no loss of accuracy. Model responses to multiple-frequency stimuli are shown to agree with hearing experiments. A nonlinear spectrum is computed from the model and compared with the FFT spectrum for noisy tonal inputs. The discretized model is efficient and accurate, and can serve as a useful auditory signal processing tool.

Keywords: auditory signal processing, cochlea, nonlinear filtering, basilar membrane, time domain
as customers and competitors rely on yelp reviews to judge the quality of a business , it is important for yelp to be able to predict the sentiment polarity and rating of a given review . with yelp s newly released dataset , we perform two types of classifications based on the review text alone : simple positive / negative classification , and a star rating ( 1 through 5 inclusive ) . to build these classifiers, we will use naive bayes , support vector machines , and logistic regression .note that we are using the review text as the only input to the classifier ( e.g. , given the review `` the best british food in new york '' , we want to predict ` positive ' , or 5 stars ) .it is useful for yelp to associate review text with a star rating ( or at least a positive or negative assignment ) accurately in order to judge how helpful and reliable certain reviews are .perhaps users could give a good review but a bad rating , or vice versa .also yelp might be interested in automating the rating process , so that all users would have to do is write the review , and yelp could give a suggested rating .given a review of a business , we try to solve two problems : positive or negative ( sentiment polarity ) classification , and 5-star classification . in both cases the input to the classifier is a string representing a review , and the output ( or class label ) is either _ positive _ or _ negative _ in the former case , and an integer in the interval ] , where is the index of in the dictionary .note that the majority of the elements in will be zero , that is , is a sparse matrix .this is because for each review , a relatively small number of distinct words are used .( an implementation detail : to save memory , is stored as a ` scipy.sparse ` matrix , so that only the nonzero elements of the feature vectors remain in memory . ) now we need to build the transformer to calculate the weight of each word . for thiswe use the _ tf - idf _ statistic .so at this point , although we have a count of the occurrences of each word , there is a discrepancy between long and short reviews , as the former will have a higher count than the latter .therefore we divide the occurrence count of each word in a review by the total number of words in the review .we call these new features term frequencies , or _ tf_. + there remains a problem with _tf _ : some words could appear in many reviews ranging from 1 to 5 stars , so they are not informative because they are not unique to a certain class of reviews .the solution is called inverse document frequency , or _idf_. the idea is to offset or downscale the weight of a word based on its frequency in the corpus . finally , the class labels in this problem are simple . in the case of binary positive or negative classification ,a review is assigned the _ positive _ label if its star rating is greater than or equal to 3 , and any review with a rating less than 3 is assigned the _ negative _ label . in the case of 5-star classification , the review is assigned its star rating . to build the classifier we use three different supervised techniques : naive bayes , support vector machines , and logistic regression .these techniques were chosen as they are simple to understand and implement , they run relatively quickly , and they have historically given good results for text classification . in each casewe used 70% of the data for training , and the rest for testing . 
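The feature construction described above — sparse bag-of-words counts followed by tf-idf weighting, plus the simple star-to-polarity labelling — can be expressed in a few lines of scikit-learn. The sketch below uses standard library calls with made-up example reviews; options beyond those stated in the text (such as the English stop-word list) are only the ones mentioned above.

```python
# From (review text, star rating) to (tf-idf features, class labels) -- sketch.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

def polarity_label(stars):
    # positive if the star rating is >= 3, negative otherwise
    return "positive" if stars >= 3 else "negative"

reviews = ["the best british food in new york", "slow service and cold food"]
stars = [5, 2]
labels = [polarity_label(s) for s in stars]   # or keep `stars` for the 5-star task

counts = CountVectorizer(stop_words="english").fit_transform(reviews)  # sparse counts
features = TfidfTransformer().fit_transform(counts)                    # tf-idf weights
```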
to read the data file , which has a json object on each line, we use a function ` loaddata(filename , startline , endline ) ` to read between lines # ` startline ` and # ` endline ` . ` loaddata ` returns two lists : one containing each review , and the other containing the corresponding rating .the elements of the latter will be either _ positive _ or _ negative _ , or an integer in the interval $ ] , depending on whether we do positive / negative or 5-star classification .+ note that we use ` loaddata ` to load both the training and testing data .so of course , the intersection between the two sets is empty , thanks to the interval of line numbers as parameters .+ now that the data is loaded and we have built the vectorizer and transformer , we can build the classifier using ` sklearn.pipeline.pipeline ` .the ` pipeline ` class `` behaves like a compound classifier '' class .the transformer is the _ tf - idf _statistic , and the classifier is built using either ` sklearn.naive_bayes.multinomialnb ` for naive bayes , ` sklearn.linear_model.sgdclassifier ` for support vector machines , or , for logistic regression , ` sklearn.linear_model.logisticregression ` .for example , using the ` pipeline ` class , we can build and train a naive bayes classifier as in the code in figure [ fig : code ] . in figure[ fig : code ] , we first instantiate ` countvectorizer ` and specify that we want to remove english stop words. then we use ` tfidftransformer ` to calculate the weights of each word , and lastly we create the naive bayes classifier with ` multinomialnb ` .after the classifier is built , we train it by simply passing in the training data and training labels .for this project we use the dataset from the yelp dataset challenge .the data are in json format .the format of the review data is shown in figure [ fig : data ] .we are interested in two fields only : ` text ' and ` stars ' .the other fields are not used here , but they could be useful as features for future work ( see the final discussion section ) .+ the file containing the review data has a json object formatted as in figure [ fig : data ] on each line .this file has 1,569,264 reviews , of which we use varying amounts ( see figures [ fig : pndataplot ] and [ fig:5dataplot ] ) .the review text in the data file is uncleaned and taken directly from the website without modification .therefore , as described above , some text preprocessing was necessary before building the classifier . to read this data we use the ` json ` package built in to python+ it is worth mentioning that this review data is by no means comprehensive .it includes data from 10 cities total , in the united kingdom , germany , canada , and the united states .furthermore , the data comes from all types of businesses on yelp , not just restaurants , for instance .the results for both positive / negative classification and 5-star classification are shown in figure [ fig : accuracy ] . 
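Figure [fig:code] is not reproduced here, but from the description a pipeline along the following lines can be assembled. The `loaddata` helper keeps the stated signature, while its body is a guess at the described behaviour (one JSON object per line, reading only the `text` and `stars` fields); the file name is a placeholder.

```python
# Sketch of the classifier pipeline described above.  loaddata() follows the
# stated signature; its body and the file name are illustrative guesses.
import json
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

def loaddata(filename, startline, endline):
    texts, ratings = [], []
    with open(filename) as f:
        for i, line in enumerate(f):
            if startline <= i < endline:
                review = json.loads(line)        # one JSON object per line
                texts.append(review["text"])
                ratings.append(review["stars"])
    return texts, ratings

train_text, train_stars = loaddata("reviews.json", 0, 70000)
test_text, test_stars = loaddata("reviews.json", 70000, 100000)

clf = Pipeline([
    ("vect", CountVectorizer(stop_words="english")),
    ("tfidf", TfidfTransformer()),
    ("clf", MultinomialNB()),   # or SGDClassifier / LogisticRegression
])
clf.fit(train_text, train_stars)
print(clf.score(test_text, test_stars))
```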
[ cols="^,^,^,^",options="header " , ] like their speeds , the accuracies for both naive bayes and support vector machines are quite similar , differing by at most about 4% when using the same amount of data .the accuracy of logistic regression is the highest in both types of classification .the results we obtained are similar to what we had anticipated in some ways , and surprising in other ways .first of all , positive / negative classification is a much simpler problem than 5-star classification , largely because it is less specific ; for example , both 1 and 2 star reviews will be classified as negative , whereas with 5-star classification , it could be hard to tell the difference between the reviews since they will both use similar language .therefore it was expected that we would have far better accuracy for positive / negative classification than for 5-star classification , which was indeed the case .another expected result was that more data would improve accuracy .this turned out to be true to some extent , as more data provided further training examples .+ one surprising result was the better performance of naive bayes in the case of positive / negative classification .it is a very simple classifier , and we had expected support vector machines to outperform it in both types of classification .the difference in accuracy in both cases is about 4% , which is slight but not insignificant .+ another surprise was the excellent performance of logistic regression .it was expected to outperform naive bayes , but not necessarily support vector machines , and not by such a margin : its accuracy was 9.84% and 4.36% higher than support vector machines in positive / negative and 5-star classification respectively . according to jong s paper `` predicting rating with sentiment analysis '' , which also uses yelp s dataset , `` for opinionated texts , there is usually a 70% agreement between human raters '' .our best result for positive / negative classification exceeds that by 22.90% .however , for 5-star classification , our maximum accuracy is about 6.08% lower , but far better than a random star guess , which would have an accuracy of only 20% .+ for positive / negative classification , jong achieved a maximum test accuracy of about 78% with naive bayes , compared to our maximum test accuracy of 92.90% with logistic regression .+ carbon et al . worked on 5-star classification of yelp reviews as well , using several models including gaussian discriminant analysis and logistic regression . using their entire feature set, they achieved a maximum test accuracy of 46.09% with gda , compared to our maximum test accuracy of 63.92% with logistic regression .a. _ use a different learning model . _the models used here are quite simple .a more advanced one such as random forests or neural networks could produce better results , perhaps with a trade - off in time depending on the model .b. _ use more data ._ although there are almost 1.6 million reviews in yelp s dataset , more data could improve classification accuracy . in our results ,the accuracy of logistic regression in particular increased with greater data usage ( see figures [ fig : pndataplot ] and [ fig:5dataplot ] ) .c. 
_ use more features ._ the only input to our classifiers was the review text .however , there is an abundance of other interesting features in yelp s dataset that could be used for classification .for example , each review itself has a rating ( called ` votes ' ) of how funny , cool , or useful it is .perhaps a review with many such votes could be considered more reliable than other reviews with fewer votes , and therefore the predicted rating from the classifier would be deemed more trustworthy with a higher score assigned to it .another idea would be to use the business i d of the review to look up attributes of the business to help you determine the reliability of a review s rating .so you would effectively be building and combining two different classifiers ( one for the review , and one for the business being reviewed ) , both with the goal of predicting a review s star rating .overall the results were satisfactory for positive / negative classification , but there is room for improvement for 5-star classification .there are many general challenges to overcome besides using more data or more features for 5-star classification . as pang and leediscuss , `` some potential obstacles to accurate rating inference include lack of calibration ( e.g. , what an understated author intends as high praise may seem lukewarm ) , author inconsistency at assigning fine - grained ratings , and ratings not entirely supported by the text '' . these are nontrivial problems , and it is hard to imagine how to solve them by any kind of data preprocessing before building the feature vectors , or by other means . with that said ,there are several promising avenues for future research which could considerably improve 5-star classification accuracy .bo pang and lillian lee , `` seeing stars : exploiting class relationships for sentiment categorization with respect to rating scales , '' cornell univ . ,ithaca , ny and carnegie mellon univ . , pittsburgh , pa , 2005 .
Online reviews of businesses have become increasingly important in recent years, as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites on which users write such reviews, and it would be useful for Yelp to be able to predict the sentiment or even the star rating of a review. In this paper, we develop two classifiers to perform positive/negative classification and 5-star classification. We use naive Bayes, support vector machines, and logistic regression as models, and achieve the best accuracy with logistic regression: 92.90% for positive/negative classification and 63.92% for 5-star classification. These results demonstrate the quality of the logistic regression model using only the text of the review, yet there is a promising opportunity for improvement with more data, more features, and perhaps different models.
series , measurements of some quantity taken over time , are measured and analyzed across the scientific disciplines , including human heart beats in medicine , cosmic rays in astrophysics , rates of inflation in economics , air temperatures in climate science , and sets of ordinary differential equations in mathematics .the problem of extracting useful information from time series has similarly been treated in a variety of ways , including an analysis of the distribution , correlation structures , measures of entropy or complexity , stationarity estimates , fits to various linear and nonlinear time - series models , and quantities derived from the physical nonlinear time - series analysis literature . however , this broad range of scientific methods for understanding the properties and dynamics of time series has received less attention in the temporal data mining literature , which treats large databases of time series , typically with the aim of either clustering or classifying the data . instead, the problem of time - series clustering and classification has conventionally been addressed by defining a distance metric between time series that involves comparing the sequential values directly . using an extensive database of algorithms for measuring thousands of different time - series properties ( developed in previous work ) , herewe show that general feature - based representations of time series can be used to tackle classification problems in time - series data mining .the approach is clearly important for many applications across the quantitative sciences where unprecedented amounts of data are being generated and stored , and also for applications in industry ( e.g. , classifying anomalies on a production line ) , finance ( e.g. , characterizing share price fluctuations ) , business ( e.g. , detecting fraudulent credit card transactions ) , surveillance ( e.g. , analyzing various sensor recordings ) , and medicine ( e.g. , diagnosing heart beat recordings ) .two main challenges of time - series classification are typically : ( i ) selecting an appropriate _ representation _ of the time series , and ( ii ) selecting a suitable measure of dissimilarity or _ distance _ between time series . the literature on representations and distance measures for time - series clustering and classification is extensive .perhaps the most straightforward representation of a time series is its time - domain form , then distances between time series relate to differences between the time - ordered measurements themselves .when short time series encode meaningful patterns that need to be compared , new time series can be classified by matching them to similar instances of time series with a known classification .this type of problem has traditionally been the focus of the time series data mining community , and we refer to this approach as _ instance - based _ classification . 
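The contrast drawn above between instance-based classification and feature-based representations, developed in what follows, is easy to make concrete. The toy sketch below shows the two treatments of a single series: comparing the time-ordered values directly with a distance, versus reducing the series to a short vector of summary properties; the particular features chosen here are illustrative, not those selected by the method described later.

```python
# Instance-based vs feature-based treatment of a time series (toy sketch).
import numpy as np

def euclidean(ts_a, ts_b):
    # instance-based: compare the time-ordered values directly
    return np.sqrt(np.sum((np.asarray(ts_a) - np.asarray(ts_b)) ** 2))

def simple_features(ts):
    # feature-based: reduce a series of any length to a fixed-length summary vector
    ts = np.asarray(ts, dtype=float)
    lag1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]     # lag-1 autocorrelation
    return np.array([ts.mean(), ts.std(), lag1])

# A new series is then classified either by matching it to labelled instances
# (e.g. nearest neighbour under `euclidean`) or by feeding `simple_features(ts)`
# to any off-the-shelf static classifier.
```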
an alternative approach involves representing time series using a set of derived properties , or features , and thereby transforming the temporal problem to a static one .a very simple example involves representing a time series using just its mean and variance , thereby transforming time - series objects of any length into short vectors that encapsulate these two properties .here we introduce an automated method for producing such _ feature - based _ representations of time series using a large database of time - series features .we note that not all methods fit neatly into these two categories of instance - based and feature - based classification .for example , time - series shapelets classify new time series according to the minimum distance of particular time - series subsequences ( or ` shapelets ' ) to that time series .although this method uses distances calculated in the time - domain as a basis for classification ( not features ) , new time series do not need to be compared to a large number of training instances ( as in instance - based classification ) . in this paperwe focus on a comparison between instance - based classification and our feature - based classifiers .feature - based representations of time series are used across science , but are typically applied to longer time series corresponding to streams of data ( such as extended medical or speech recordings ) rather than the short pattern - like time series typically studied in temporal data mining . nevertheless , some feature - based representations of shorter time series have been explored previously : for example , nanopoulos et al . used the mean , standard deviation , skewness , and kurtosis of the time series and its successive increments to represent and classify control chart patterns , mrchen used features derived from wavelet and fourier transforms of a range of time - series datasets to classify them , wang et al .introduced a set of thirteen features that contains measures of trend , seasonality , periodicity , serial correlation , skewness , kurtosis , chaos , nonlinearity , and self - similarity to represent time series , an approach that has since been extended to multivariate time series , and deng et al .used measures of mean , spread , and trend in local time - series intervals to classify different types of time series .as with the choice of representations and distance metrics for time series , features for time - series classification problems are usually selected manually by a researcher for a given dataset .however , it is not obvious that the features selected by a given researcher will be the best features with which to distinguish the known data classes perhaps simpler alternatives exist with better classification performance ?furthermore , for many applications , the mechanisms underlying the data are not well understood , making it difficult to develop a well - motivated set of features for classification . in this work ,we automate the selection of features for time - series classification by computing thousands of features from across the scientific time - series analysis literature and then selecting those with the best performance .the classifier is thus selected according to the structure of the data rather than the methodological preference of the researcher , with different features selected for different types of problems : e.g. 
, we might discover that the variance of time series distinguishes classes for one type of problem , but their entropy may be important for another .the process is completely data - driven and does not require any knowledge of the dynamical mechanisms underlying the time series or how they were measured .we describe our method as ` highly comparative ' and draw an analogy to the dna microarray , which compares large numbers of gene expression profiles simultaneously to determine those genes that are most predictive of a target condition ; here , we compare thousands of features to determine those that are most suited to a given time - series classification task . as well as producing useful classifiers , the features selected in this way highlight the types of properties that are informative of the class structure in the dataset and hence can provide new understanding .central to our approach is the ability to represent time series using a large and diverse set of their measured properties . in this section ,we describe how this representation is constructed and how it forms a basis for classification . in sec .[ sec : dm_data ] , the datasets analyzed in this work are introduced .the feature - vector representation of time series is then discussed in sec .[ sec : dm_feature_vector ] , and the methodology used to perform feature selection and classification is described in sec . [sec : dm_feature_selection ] .the twenty datasets analyzed in this work are obtained from _ the ucr time series classification / clustering homepage _ .all datasets are of labeled , univariate time series and all time series in each dataset have the same length .note that this resource has since ( late in 2011 ) been updated to include an additional twenty - five datasets , which are not analyzed here .the datasets ( which are listed in tab .[ tab : cfnresults ] and described in more detail in supplementary table i ) , span a range of : ( i ) time - series lengths , , from 60 for the synthetic control dataset , to 637 samples for lightning ( two ) ; ( ii ) dataset sizes , from a number of training ( ) and test ( ) time series of 28 and 28 for coffee , to 1000 and 6164 for wafer ; and ( iii ) number of classes , , from 2 for gun point , to 50 for 50 words .the datasets are derived from a broad range of systems : including measurements of a vacuum - chamber sensor during the etch process of silicon wafer manufacture ( wafer ) , spectrograms of different types of lightning strikes ( lightning ) , the shapes of swedish leaves ( swedish leaf ) , and yoga poses ( yoga ) .all the data is used exactly as obtained from the _ ucr _ source , without any preprocessing and using the specified partitions of each dataset into training and test portions .the sensitivity of our results to different such partitions is compared for all datasets in supplementary table ii ; test set classification rates are mostly similar to those for the given partitions .we present only results for the specified partitions throughout the main text to aid comparison with other studies .feature - based representations of time series are constructed using an extensive database of over 9000 time - series analysis operations developed in previous work .the operations quantify a wide range of time - series properties , including basic statistics of the distribution of time - series values ( e.g. , location , spread , gaussianity , outlier properties ) , linear correlations ( e.g. , autocorrelations , features of the power spectrum ) , stationarity ( e.g. 
, statav , sliding window measures , prediction errors ) , information theoretic and entropy / complexity measures ( e.g. , auto - mutual information , approximate entropy , lempel - ziv complexity ) , methods from the physical nonlinear time - series analysis literature ( e.g. , correlation dimension , lyapunov exponent estimates , surrogate data analysis ) , linear and nonlinear model fits [ e.g. , goodness of fit estimates and parameter values from autoregressive moving average ( arma ) , gaussian process , and generalized autoregressive conditional heteroskedasticity ( garch ) models ] , and others ( e.g. , wavelet methods , properties of networks derived from time series , etc . )all of these different types of analysis methods are encoded algorithmically as operations .each operation , , is an algorithm that takes a time series , , as input , and outputs a single real number , i.e. , .we refer to the output of an operation as a ` feature ' throughout this work .all calculations are performed using matlab 2011a ( a product of the mathworks , natick , ma ) .although we use over 9000 operations , many groups of operations result from using different input parameters to the same type of time - series method ( e.g. , autocorrelations at different time lags ) , making the number of conceptually - distinct operations significantly smaller : approximately 1000 according to one estimate .the matlab code for all the operations used in this work can be explored and downloaded at www.comp-engine.org/timeseries .differences between instance - based time - series classification , where distances are calculated between the ordered values of the time series , and feature - based time - series classification , which learns a classifier using a set of features extracted from the time series , are illustrated in fig .[ fig : method_illustration ] . although the simplest ` lock step ' distance measure is depicted in fig .[ fig : method_illustration]a , more complex choices , such as dynamic time warping ( dtw ) , can accommodate unaligned patterns in the time series , for example .the method proposed here is depicted in fig .[ fig : method_illustration]b , and involves representing time series as extensive feature vectors , , which can be used as a basis for selecting a reduced number of informative features , , for classification .although we focus on classification in this work , we note that dimensionality reduction techniques , such as principal components analysis , can be applied to the full feature vector , , which can yield meaningful lower - dimensional representations of time - series datasets that can be used for clustering , as demonstrated in previous work , and illustrated briefly for the swedish leaf dataset in supplementary fig . 1 . in some rare cases, an operation may output a ` special value ' , such as an infinity or imaginary number , or it may not be appropriate to apply it to a given time series , e.g. , when a time series is too short , or when a positive - only distribution is being fit to data that is not positive .indeed , many of the operations used here were designed to measure complex structure in long time - series recordings , such as the physical nonlinear time - series analysis literature and some information theoretic measures , that require many thousands of points to produce a robust estimate of that feature , rather than the short time - series patterns of 100s of points or less analyzed here . 
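concretely , each operation is just a function that maps a time series to a single real number . the toy python examples below illustrate this interface with a handful of elementary features ; they are illustrative stand - ins written for this discussion ( the actual library is implemented in matlab and its operation names differ ) .

```python
import numpy as np

def f_mean(x):
    return float(np.mean(x))

def f_std(x):
    return float(np.std(x, ddof=1))

def f_skewness(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def f_autocorr_lag1(x):
    z = x - x.mean()
    return float(np.sum(z[1:] * z[:-1]) / np.sum(z * z))

OPERATIONS = [f_mean, f_std, f_skewness, f_autocorr_lag1]

def feature_vector(x, operations=OPERATIONS):
    """Map a time series to a static feature vector (one real number per operation)."""
    return np.array([op(np.asarray(x, dtype=float)) for op in operations])
```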
in this work , we filtered out all operations that produced any special values on a dataset prior to performing any analysis . after removing these operations , between 6220 and 7684 valid operations remained for the datasets studied here . feature selection is used to select a reduced set of features from the large initial set of thousands , with the aim of producing a set that best contributes to distinguishing a known classification of the time series . many methods have been developed for performing feature selection , including the _ lasso _ and recursive feature elimination . in this work we use a simple and interpretable method : greedy forward feature selection , which grows a set of important features incrementally by optimizing the linear classification rate on the training data . although better performance could be achieved using more complex feature selection and classification methods , we value transparency over sophistication to demonstrate our approach here . the greedy forward feature selection algorithm is as follows : ( i ) using a given classifier , the classification rates of all individual features are calculated and the feature with the highest classification rate is selected as the first feature in the reduced set . ( ii ) the classification rates of all remaining features in combination with the first are calculated , and the feature that produces the classifier with the highest classification rate is chosen next . ( iii ) the procedure is repeated , choosing the operation that provides the greatest improvement in classification rate at each iteration until a termination criterion is reached , yielding a reduced set of selected features . for iterations at which multiple features produce equally good classification rates , one of them is selected at random . feature selection is terminated at the point at which the improvement in the training set classification rate upon adding an additional feature drops below 3% , or when the training set misclassification rate drops to 0% ( after which no further improvement is possible ) . our results are not highly sensitive to setting this threshold at 3% ; this sensitivity is examined in supplementary fig . 3 . to determine the classification rate of each feature ( or combination of features ) , we use a linear discriminant classifier , implemented using the * classify * function from matlab's statistics toolbox , which fits a multivariate normal density to each class using a pooled estimate of covariance . because the linear discriminant is so simple , over - fitting to the training set is not problematic , and we found that using 10-fold cross validation within the training set produced similar overall results . cross validation can also be difficult to apply to some datasets studied here , which can have as few as a single training example for a given class .
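a minimal sketch of this greedy forward selection procedure is given below , assuming the features have already been computed into a matrix . it uses scikit - learn 's linear discriminant analysis as a stand - in for matlab 's * classify * function ( both fit class - conditional gaussians with a pooled covariance ) , breaks ties by taking the first best feature rather than a random one , and applies the 3% improvement / zero - error stopping rules described above . note that for more than two classes the paper builds pairwise boundaries with voting ( described next ) , whereas scikit - learn 's lda handles the multiclass case directly .

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def training_error(F, y, feature_idx):
    """Misclassification rate on the training set using only the listed feature columns."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(F[:, feature_idx], y)
    return float(np.mean(clf.predict(F[:, feature_idx]) != y))

def greedy_forward_selection(F, y, improvement_threshold=0.03):
    """F: (n_series, n_features) matrix of precomputed features; y: class labels.
    Grows a feature set greedily until the training error stops improving by
    more than `improvement_threshold`, or reaches zero."""
    selected, best_err = [], 1.0
    remaining = list(range(F.shape[1]))
    while remaining:
        errs = [training_error(F, y, selected + [j]) for j in remaining]
        j_best = remaining[int(np.argmin(errs))]
        err = min(errs)
        if selected and best_err - err < improvement_threshold:
            break                       # improvement below threshold: stop
        selected.append(j_best)
        remaining.remove(j_best)
        best_err = err
        if best_err == 0.0:             # training set classified perfectly
            break
    return selected, best_err
```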
for datasets with more than two classes , linear classification boundaries are constructed between all pairs of classes , and new time series are classified by evaluating all classification rules and then assigning the new time series to the class with the most ` votes ' from this procedure . the performance of our linear feature - based classifier is compared to three different instance - based classifiers , which are labeled as : ( i ) ` euclidean 1-nn ' , a 1-nn classifier using the euclidean distance , ( ii ) ` dtw 1-nn ' , a 1-nn classifier using a dynamic time warping distance , and ( iii ) ` dtw 1-nn ( best warping window ) ' , a 1-nn classifier using a dynamic time warping distance with a warping window learned using the sakoe - chiba band . these results were obtained from _ the ucr time series classification / clustering homepage _ . results using a 1-nn classifier with euclidean distances were verified by us and were consistent with the ucr source . in this section , we demonstrate our highly comparative , feature - based approach to time - series classification . in sec . [ sec : particular_datasets ] we illustrate the method using selected datasets , in sec . [ sec : all_results ] we compare the results to instance - based classification methods across all twenty datasets , and in sec . [ sec : computational_issues ] we discuss the computational complexity of our method . for some datasets , we found that the first selected feature ( i.e. , the feature with the lowest linear misclassification rate on the training data ) distinguished the labeled classes with high accuracy , corresponding to vast dimensionality reduction : from representing time series using all measured points , to just a single extracted feature . examples are shown in fig . [ fig:1feat ] for the trace and wafer datasets . the trace dataset contains four classes of transients relevant to the monitoring and control of industrial processes . there are 25 features in our database that can classify the training set without error , one of which is a time - reversal asymmetry statistic : an average , taken across the time series , of a simple function of differences between values separated by a fixed time lag . this operation produces distributions for the four classes of the trace dataset as shown in figs . [ fig:1feat]a and b for the training and test sets , respectively . simple thresholds on this feature , learned using a linear classifier , allow new time series to be classified by computing the feature and comparing it with the learned thresholds . in this way , the test set of trace is classified with 99% accuracy , producing similar performance to dtw ( which classifies the test set without error ) but using just a single feature , and circumventing the need to compute distances between pairs of time series . a second example is shown in figs . [ fig:1feat]c and [ fig:1feat]d for the wafer dataset , which contains measurements of various sensors during the processing of silicon wafers for semiconductor fabrication that are either ` normal ' or ` abnormal ' .
as can be seen from the annotations in figs . [ fig:1feat]c and [ fig:1feat]d , each class of time series in this dataset is quite heterogeneous . however , the single feature selected for this dataset simply counts the frequency of the pattern ` decrease - increase - decrease - increase ' in successive pairs of samples of a time series , expressed as a proportion of the time - series length . a simple threshold learned on this feature classifies the test set with an accuracy of 99.98% , slightly higher than the best instance - based result of 99.5% for euclidean 1-nn , but much more efficiently : using a single extracted feature rather than comparing all 152 samples of each time series to find matches in the training set . feature - based classifiers constructed for most time - series datasets studied here combine multiple features . an example is shown in fig . [ fig : syntheticcontrolsupervised ] for the synthetic control dataset , which contains six classes of noisy control chart patterns , each with distinctive dynamical properties : ( i ) ` normal ' ( dark green ) , ( ii ) ` cyclic ' ( orange ) , ( iii ) ` increasing trend ' ( blue ) , ( iv ) ` decreasing trend ' ( pink ) , ( v ) ` upward shift ' ( light green ) , ( vi ) ` downward shift ' ( yellow ) . in statistical process control , it is important to distinguish these patterns to detect potential problems with an observed process . as shown in fig . [ fig : syntheticcontrolsupervised]a for greedy forward feature selection , the misclassification rate in both the training and test sets drops sharply when a second feature is added to the classifier , but plateaus as subsequent features are added . the dataset is plotted in the space of these first two selected features in fig . [ fig : syntheticcontrolsupervised]b . the first feature , named * ph_forcepotential_sine_10_004_10_median * , is plotted on the horizontal axis of fig . [ fig : syntheticcontrolsupervised]b . this feature behaves in a way that is analogous to performing a cumulative sum through time of the z - scored time series ( i.e. , the running total of its values ) and then returning the median of that cumulative sum . this feature takes high values for time series that have a decreasing trend ( the cumulative sum of the z - scored time series initially increases and then decreases back to zero ) , moderate values for time series that are approximately mean - stationary ( the cumulative sum of the z - scored time series oscillates about zero ) , and low values for time series that have an increasing trend ( the cumulative sum of the z - scored time series initially decreases and then increases back to zero ) . as shown in fig . [ fig : syntheticcontrolsupervised]b , this feature on its own distinguishes most of the classes well , but confuses the two classes without an underlying trend : the uncorrelated random number series , ` normal ' ( dark green ) , and the noisy oscillatory time series , ` cyclic ' ( orange ) . the second selected feature , named * sp_basic_pgram_hamm_power_q90mel * , is on the vertical axis of fig . [ fig : syntheticcontrolsupervised]b and measures the mel - frequency at which the cumulative power spectrum ( obtained as a periodogram using a hamming window ) reaches 90% of its maximum value . this feature gives low values to the cyclic time series ( orange ) , which have more low - frequency power , and high values to the uncorrelated time series ( dark green ) . even though this feature alone exhibits poor classification performance ( a misclassification rate of 52.3% on the test data ) , it compensates for the weakness of the first feature , which confuses these two classes . these two features are selected automatically and thus complement one another in a way that facilitates accurate classification of this dataset . although dtw is more accurate at classifying this dataset ( cf . [ fig : syntheticcontrolsupervised]a ) , this example demonstrates how selected features can provide an understanding of how the classifier uses interpretable time - series properties to distinguish the classes of a dataset ( see supplementary fig . 2 for an additional example using the two patterns dataset ) . furthermore , these results follow from a dimensionality reduction of 60-sample time series down to two simple extracted features , allowing the classifier to be applied efficiently to massive databases and to very long time series ( cf . [ sec : computational_issues ] ) . for many datasets , such as the six - class osu leaf dataset , the classification accuracy is improved by including more than two features , as shown in fig . [ fig : osuleaf ] . the classification rates of all three 1-nn instance - based classifiers ( horizontal lines labeled in fig . [ fig : osuleaf ] ) are exceeded by the linear feature - based classifier with just two features . the classification performance improves further when more features are added , down to a test set misclassification rate of just 9% with eleven features ( the test set classification rate plateaus as more features are added while the training set classification rate slowly improves , indicating a modest level of over - fitting beyond this point ) . the improvement in training - set misclassification rate from adding an additional feature drops below 3% after selecting five features , yielding a test set misclassification rate of 16.5% ( shown boxed in fig . [ fig : osuleaf ] ) , outperforming all instance - based classifiers by a large margin despite dimensionality reduction from 427-sample time series to five extracted features . having provided some intuition for our method using specific datasets as examples , we now present results for all twenty time - series datasets from _ the ucr time series classification / clustering homepage _ ( as of mid-2011 ) . for these datasets of short patterns whose values through time can be used as the basis of computing a meaningful measure of distance between them , dtw has been shown to set a high benchmark for classification performance . however , as shown above , it is possible for feature - based classifiers to outperform instance - based classifiers despite orders of magnitude of dimensionality reduction . results for all datasets are shown in table [ tab : cfnresults ] , including test set misclassification rates for three instance - based classifiers and for our linear feature - based classifier . the final two columns of table [ tab : cfnresults ] demonstrate extensive dimensionality reduction using extracted features for all datasets , using an average of 3.2 features to represent time series containing an average of 282.1 samples .
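simplified stand - ins for the single features discussed above ( the trace time - reversal asymmetry statistic and the wafer motif frequency ) and for the two synthetic control features are sketched below on z - scored input . the exact hctsa implementations are not reproduced here ( for example , the normalization and lag of the time - reversal statistic , the force - potential formulation of the first synthetic control feature , and the mel - frequency scaling of the spectral feature are assumptions or simplifications ) , so these functions should be read as illustrations of the quantities described in the text rather than the operations themselves .

```python
import numpy as np

def trev(x, tau=1):
    """Time-reversal asymmetry statistic at lag tau (assumed form): mean cubed
    increment normalized by the mean squared increment to the power 3/2."""
    d = x[tau:] - x[:-tau]
    return float(np.mean(d ** 3) / np.mean(d ** 2) ** 1.5)

def motif_didi_frequency(x):
    """Frequency of 'decrease-increase-decrease-increase' in successive samples,
    as a proportion of the time-series length (wafer feature stand-in)."""
    s = np.sign(np.diff(x))                      # +1 for increase, -1 for decrease
    pattern = np.array([-1.0, 1.0, -1.0, 1.0])
    hits = sum(np.array_equal(s[i:i + 4], pattern) for i in range(len(s) - 3))
    return hits / len(x)

def median_cumsum_zscore(x):
    """Median of the cumulative sum of the z-scored series (behaves like the
    first synthetic control feature described above)."""
    z = (x - x.mean()) / x.std()
    return float(np.median(np.cumsum(z)))

def cumulative_power_90(x):
    """Index of the frequency bin at which the cumulative Hamming-windowed
    periodogram reaches 90% of its total power (plain-frequency simplification
    of the mel-scaled spectral feature)."""
    z = (x - x.mean()) / x.std()
    p = np.abs(np.fft.rfft(z * np.hamming(len(z)))) ** 2
    c = np.cumsum(p) / np.sum(p)
    return int(np.searchsorted(c, 0.9))
```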
a direct comparison of 1-nn dtw with our linear feature - based classifier is shown in fig . [ fig : compare_performance ] for all datasets . both methods yield broadly similar classification results for most datasets , but some datasets exhibit large improvements in classification rate using one method over the other . note that results showing the variation across different training / test partitions ( retaining training / test proportions ) are shown in supplementary table ii for euclidean 1-nn and our linear feature - based classifier ; these results are mostly similar to those shown here for the fixed partitions . across all datasets , a wide range of time - series features were selected for classification , including measures of autocorrelation and automutual information , motifs in symbolized versions of time series , spectral properties , entropies , measures of stationarity , outlier properties , scaling behavior , and others . names of all features selected for each dataset , along with their matlab code names , are provided in supplementary table iii . [ table [ tab : cfnresults ] : test set misclassification rates on all twenty ucr datasets for the three instance - based classifiers and the linear feature - based classifier , together with the number of selected features and the time - series length for each dataset . ] as with all approaches to classification , a feature - based approach is better suited to some datasets than others . indeed , we found that feature - based classifiers outperform instance - based alternatives on a number of datasets , and sometimes by a large margin . for example , in the ecg dataset , the feature - based classifier yields a test set misclassification rate of 1.0% using just a single extracted feature , whereas the best instance - based classifiers ( euclidean 1-nn and dtw 1-nn using the best warping window ) have a misclassification rate of 12.0% . in the coffee dataset , the test set is classified without error using a single extracted feature , whereas the best instance - based classifiers ( both using dtw ) have a misclassification rate of 17.9% . in other cases , instance - based approaches ( including even the straightforward euclidean 1-nn classifier ) performed better . for example , the 50 words dataset has a large number of classes ( fifty ) and a large heterogeneity in training set size ( from as low as 1 to 52 training examples in a given class ) , for which matching to a nearest neighbor using instance - based methods outperforms the linear feature - based classifier . the face ( four ) dataset also has relatively few , quite heterogeneous , and class - unbalanced training examples , making it difficult to select features that best capture the class differences ; instance - based methods also outperform our feature - based approach on this dataset .
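for reference , the dynamic time warping distance used by the instance - based benchmarks above , including the sakoe - chiba warping window , can be written as a textbook dynamic program . the sketch below is a generic implementation for illustration , not the ucr benchmark code , and it omits the lower - bounding and indexing speedups discussed later .

```python
import numpy as np

def dtw_distance(x, y, window=None):
    """Dynamic time warping distance between two series, optionally constrained
    to a Sakoe-Chiba band of half-width `window` (window=None is unconstrained).
    Cost is O(len(x)*len(y)) unconstrained, or O(len(x)*window) with a band."""
    n, m = len(x), len(y)
    w = max(window, abs(n - m)) if window is not None else max(n, m)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))

def dtw_1nn(train_X, train_y, x, window=None):
    """Sequential-search 1-NN: one DTW distance per training series per query."""
    d = [dtw_distance(t, x, window) for t in train_X]
    return train_y[int(np.argmin(d))]
```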
the ability of dtw to adapt on a pairwise basis to match each test time series to a similar training time series can be particularly powerful mechanism for some datasets , and is unavailable to a static , feature - based classifier , which does not have access to the training data once the classifier has been trained .this mechanism is seen to be particularly important for the lightning ( seven ) dataset , which contains heterogenous classes with unaligned patterns dtw performs well here ( misclassification rate of 27.2% ) , while 1-nn euclidean distance and feature - based classifiers perform worse , with misclassification rates exceeding 40% .our feature - based classifiers are trained to optimize the classification rate in the training set , and thus assume similar class proportions in the test set , which is often not the case ; by simply matching instances of time series to the training set , discrepancies between class ratios in training and test sets are less problematic for instance - based classification .this may be a contributing factor to the poor performance of feature - based classification for the 50 words , lightning ( seven ) , face ( four ) and face ( all ) datasets .a feature - based representation also struggles when only a small number of heterogenous training examples are available , as with the 50 words , lightning ( seven ) , and cbf datasets . in this caseit can be difficult to select features that represent differences within a class as ` the same ' , and simultaneously capture differences between classes as ` different ' .although we demonstrate improved performance on the adiac dataset , with a misclassification rate of 35.5% , this remains high .we note that the properties of this dataset provide multiple challenges for our method , that may also contribute to its difficulty with instance - based approaches , including a small number and large variation in the number of examples in the training set ( between 5 and 15 examples per class ) , a negative correlation between training set size and test set size ( where our method assumes the same class proportions in the test set ) , and a large number of classes ( 37 ) , which are relatively heterogenous within a given class , and visually quite similar between classes . despite some of the challenges of feature - based classification ,representing time series using extracted features brings additional benefits , including vast dimensionality reduction and , perhaps most importantly , interpretable insights into the differences between the labeled classes ( as demonstrated in sec .[ sec : particular_datasets ] ) .this ability to learn about the properties and mechanisms underlying class differences in the time series in some sense corresponds to the ` ultimate goal of knowledge discovery ' , and provides a strong motivation for pursuing a feature - based representation of time - series datasets where appropriate . in this section , the computational effort required to classify time series using extracted featuresis compared to that of instance - based approaches . calculating the euclidean distance between two time serieshas a time complexity of , where is the length of each time series ( which must be constant ) .the distance calculation for dynamic time warping ( dtw ) has a time complexity of in general , or using a warping window , where is the warping window size . classifying a new time series using a 1-nn classifier and sequential search ( i.e. 
, sequentially calculating distances between a new time series and all time series in the training set ) therefore has a time complexity of for euclidean distances and either or for dtw , where is the number of time series in the training set . although the amortized time complexity of the distance calculation can be improved using lower bounds , andspeedups can be obtained using indexing or time - domain dimensionality reduction , the need to calculate many distances between pairs of time series is fundamental to instance - based classification , such that scaling with the time - series length , , and the size of the training set , , is inevitable .while the use of shapelets addresses some of these issues , here we avoid comparisons in the time domain completely and instead classify time series using a static representation in terms of extracted features .time - domain classification can therefore become computationally prohibitive for long time series and/or very large datasets .in contrast to instance - based classifiers , the bulk of the computational burden for our feature - based method is associated with learning a classification rule on the training data , which involves the computation of thousands of features for each training time series , which can be lengthy . however , this is a one - off computational cost : once the classifier has been trained , new time series are classified quickly and independent of the training data . for most cases in this work ,selected features correspond to simple algorithms with a time complexity that scales linearly with the time series length , as .the classification of a new time series then involves simply computing features , and then evaluating the linear classification rule . hence , if all features have time complexities that scale as , the total time complexity of classifying a new time series scales as if the features are calculated serially ( we note , of course , that calculating each of the features can be trivially distributed ) .this result is independent of the size of the training dataset and , importantly , the classification process does not require any training data to be loaded into memory , which can be a major limitation for instance - based classification of large datasets .having outlined the computational steps involved in feature - based classification , we now describe the actual time taken to perform classification using specific examples .first we show that even though the methods used in this work were applied to relatively short time series ( of lengths between 60 and 637 samples ) , they are also applicable to time series that are orders of magnitude longer ( indeed many operations are tailored to capturing complex dynamics in long time - series recordings ) . 
for example , the features selected for the trace and wafer datasets shown in fig .[ fig:1feat ] were applied to time series of different lengths , as plotted in fig .[ fig : timescaling ] .note that the following is for demonstration purposes only : these algorithms were implemented directly in matlab and run on a normal desktop pc with no attempt to optimize performance .the figure shows that both of these operations have a time complexity that scales approximately linearly with the time - series length , as .feature - based classification is evidently applicable to time series that are many orders of magnitude longer than short time - series patterns ( as demonstrated in previous work )in this case a 100000-sample time series is converted to a single feature : either , or the decrease - increase - decrease - increase motif frequency , in under 5ms . note that although simple operations tended to be selected for many of the datasets studied in this work , other more sophisticated operations ( those based on nonlinear model fits , for example ) have computational time complexities that scale nonlinearly with the time - series length , .the time complexity of any particular classifier thus depends on the features selected in general ( however , in future computational constraints could be placed on the set of features searched across , e.g. , restricting the search to features that scaled linearly as , as discussed in sec [ sec : discussion ] ) .next we outline the sequence of calculations involved in classifying the wafer dataset as a case study .we emphasize that in this paper , we are not concerned with optimizing the one - off cost of training a classifier and simply calculated the full set of 9288 features on each training dataset , despite high levels of redundancy in this set of features and the inclusion of thousands of nonlinear methods designed for long streams of time - series data . in future , calculating a reduced set ( of say 50 features ) could reduce the training computation times reported here by orders of magnitude .the calculation of this full set of 9288 features on a ( 152-sample ) time series from the wafer dataset took an average of approximately 31s . performing these calculations serially for this very large training set with 1000 , this amounts to a total calculation time of 8.6 hours .this is the longest training time of any dataset studied here due to a large number of training examples ; other datasets had as few as 24 training examples , with a total training time under 15min .furthermore , all calculations are independent of one another and can be trivially distributed ; for example , with as many nodes as training time series , the total computation is the same as for a single time series , in this case ( or , furthermore , with as many nodes as time series / operation pairs , the total computation time is equal to that of the slowest single operation operating on any single time series , reducing the computation time further ) . for the wafer dataset ,feature selection took 6s , which produced a ( training set ) misclassification rate of 0% and terminated the feature selection process .although just a single feature was selected here , more features are selected in general , which take 610s per feature to select .it then took a total of 32.5s to load all 6164 test time series into memory , a total of 0.1s to calculate the selected feature and evaluate the linear classification rule on all time series on a basic desktop pc . 
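a scaling check of the kind plotted in fig . [ fig : timescaling ] can be reproduced roughly by timing a linear - time feature on synthetic series of increasing length ; absolute timings depend on the machine , and the random input below is a stand - in for the trace and wafer data ( the feature repeats one of the toy definitions from the earlier sketch ) .

```python
import time
import numpy as np

def median_cumsum_zscore(x):
    """Example linear-time feature: median of the cumulative sum of the z-scored series."""
    z = (x - x.mean()) / x.std()
    return float(np.median(np.cumsum(z)))

for n in (10**2, 10**3, 10**4, 10**5):
    x = np.random.randn(n)
    t0 = time.perf_counter()
    median_cumsum_zscore(x)
    print(f"n = {n:>6d}: {time.perf_counter() - t0:.2e} s")
```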
the resultclassified 6163 of the 6164 , or 99.98% , of the test time series correctly .in summary , the bulk of the computational burden of our highly comparative feature - based classification involves the calculation of thousands of features on the training data ( which could be heavily optimized in future work ) .although instance - based methods match new time series to training instances and do not require such computational effort to train , the investment involved in training a feature - based classification rule allows new time series to be classified rapidly and independent of the training data .the classification of a new time series simply involves extracting feature(s ) and evaluating a linear classification rule , which is very fast ( 3 per time series for the wafer example above ) , and limited by the loading of data into memory ( 5 per time series for the wafer dataset ) . in general , calculation times will depend on the time - series length , , the number of features selected during the feature selection process , , and the computational time complexity of those selected features .performing feature - based classification in this way is thus suited to applications that value fast , real - time classification of new time series and can accommodate the relatively lengthy training process ( or where sufficient distributed computational power is available to speed up the training process ) .there are clear applications to industry , where measured time series need to be checked in real time on a production line , for quality control , or the rapid classification of large quantities of medical samples , for example .in summary , we have introduced a highly comparative method for learning feature - based classifiers for time series .our main contributions are as follows : + ( i ) previous attempts at feature - based time - series classification in the data mining literature have reported small sets of ( or fewer ) manually selected or generic time - series features . here , for the first time , we apply a diverse set of thousands of time - series features and introduce a method that compares across these features to construct feature - based classifiers automatically .+ ( ii ) features selected for the datasets studied here included measures of outlier properties , entropy measures , local motif frequencies , and autocorrelation - based statistics .these features provide interpretable insights into the properties of time series that differ between labeled classes of a dataset .+ ( iii ) of the twenty ucr time - series datasets studied here , feature - based classifiers used an average of 3.2 features compared to an average time - series length of 282.1 samples , representing two orders of magnitude of dimensionality reduction .+ ( iv ) despite dramatic dimensionality reduction and an inability to compare and match similar patterns through time , our feature - based representations of time series produced good classification performance that was in many cases superior to dtw , and in some cases by a large margin .+ ( v ) unlike instance - based classification , the training of a feature - based classifier incurs a significant computational expense .however , this one - off cost allows new time series to be classified extremely rapidly and independent of the training set .furthermore , there is much scope for optimizing this training cost in future by exploiting redundancy in our massive feature set . 
to introduce the highly comparative approach ,we have favored the interpretability of feature selection and classification methods over their sophistication .feature selection was achieved using greedy forward selection , and classification was done using linear discriminant classifiers .many more sophisticated feature selection and classification methods exist ( e.g. , that allow for more robust and/or nonlinear classification boundaries ) and should improve the classification results presented here .this flexibility to incorporate a large and growing literature of sophisticated classifiers operating on feature vectors , including decision trees and support vector machines or even -nn applied in a feature space , is a key benefit of our approach .considering combinations of features that are not necessarily the result of a greedy selection process ( e.g. , classifiers that combine features with poor individual performance have been shown to be very powerful on some datasets ) , should also improve classification performance .however , we note that complex classifiers may be prone to over - fitting the training data and thus may require cross - validation on the training data to reduce the in - sample bias . however , cross - validation is problematic for some of the datasets examined here that have small numbers of training examples ( as low as just a single training example for a class in the 50 words dataset ) .we used the total classification rate as a cost function for greedy forward feature selection to aid comparison to other studies , even though many datasets have unequal numbers of time series in each class and different class proportions in the training and test sets , thus focusing the performance of classifiers towards those classes containing the greatest number of time series . in future ,more subtle cost functions could be investigated , that optimize the mean classification rate across classes , for example , rather than the total number of correct classifications . in summary ,the simple classification and feature selection methods used here were chosen to demonstrate our approach as clearly as possible and produce easily - interpretable results ; more sophisticated methods could be investigated in future to optimize classification accuracies for real applications . because we used thousands of features developed across many different scientific disciplines ,many sets of features are highly correlated to one another .greedy forward feature selection chooses features incrementally based on their ability to increase classification performance , so if a feature is selected at the first iteration , a highly correlated feature is unlikely to increase the classification rate further .thus , the non - independence of features does not affect our ability to build successful feature - based classifiers in this way .however , strong dependencies between operations can mean that features selected using different partitions of the data into training and testing portions can be different ( or even for the same partition when two or more features yield the same classification rate and are selected at random ) . 
for homogenous datasets ,features that differ for different data partitions are typically slight variants of one another ; for example , the second feature selected for the synthetic control dataset ( cf .[ sec : particular_datasets ] ) is a summary of the power spectrum for some partitions and an autocorrelation - based measure for others both features measure aspects of the linear correlation present in the time series and thus contribute a similar understanding of the time series properties that are important for classification .the selection of either feature yields similar performance on the unseen data partition .we also note that this redundancy in the feature set could be exploited in future to produce a powerful reduced set of approximately independent , computationally inexpensive , and interpretable features with which to learn feature - based classifiers for time series .future work could also focus on adding new types of features found to be useful for time - series classification ( or comparing them to our implementation of existing methods , cf . ) , as our ability to construct useful feature - based time - series classifiers is limited by those features contained in our library of features , which is currently comprehensive but far from exhaustive .together , these refinements of the feature set could dramatically speed up the computation times reported here , improve the interpretability of selected features , and increase classification performance .many features in our database are designed for long , stationary streams of recorded data and yet here we apply them to short and often non - stationary time series . for example , estimating the correlation dimension of a time - delay embedded time series requires extremely long and precise recordings of a system . although the output of a correlation dimension estimate on a 60-sample time series will not be a robust nor meaningful estimate of the correlation dimension , it is nevertheless the result of an algorithm operating on a time series and may still contain some useful information about its structure .regardless of the conventional meaning of a time - series analysis method therefore , our approach judges features according to their demonstrated usefulness in classifying a dataset .appropriate care must therefore be taken in the interpretation of features should they prove to be useful for classifying a given dataset . although feature - based and instance - based approaches to time - series classification have been presented as opposing methodologies here , future work could link them together . for example , batista et al . used a simple new feature claimed to resemble ` complexity ' , to rescale conventional euclidean distances calculated between time series , demonstrating an improvement in classification accuracy .rather than using this specific , manually - selected feature , our highly comparative approach could be used to find informative but computationally inexpensive features to optimally rescale traditional euclidean distances .in 1993 , timmer et al . wrote : `` the crucial problem is not the classificator function ( linear or nonlinear ) , but the selection of well - discriminating features .in addition , the features should contribute to an understanding [ ... ] . 
'' in this work , we applied an unprecedented diversity of scientific time - series analysis methods to a set of classification problems in the temporal data mining literature and showed that successful classifiers can be produced in a way that contributes an understanding of the differences in properties between the labeled classes of time series . although the datasets studied here are well suited to instance - based classification , we showed that a highly comparative method for constructing feature - based representations of time series can yield competitive classifiers despite vast dimensionality reduction . relevant features and classification rules are learned automatically from the labeled structure in the dataset , without requiring any domain knowledge about how the data were generated or measured , allowing classifiers to adapt to the data , rather than attempting to develop classifiers that work ` best ' on generic datasets . although the computation of thousands of features can be intensive ( if not distributed ) , once the features have been selected and the classification rule has been learned , the classification of new time series is rapid and can outperform instance - based classification . the approach can be applied straightforwardly to time series of variable length , and to time series that are many orders of magnitude longer than those studied here . perhaps most importantly , the results provide an understanding of the key differences in properties between different classes of time series , insights that can guide further scientific investigation . the code for generating the features used in this work is freely available at http://www.comp-engine.org/timeseries/ . the authors would like to thank sumeet agarwal for helpful feedback on the manuscript .
b. d. fulcher , m. a. little , and n. s. jones , `` highly comparative time - series analysis : the empirical structure of time series and their methods , '' _ j. roy . soc . interface _ , vol . 10 , no . 83 , p. 20130048 , 2013 .
x. wang , a. mueen , h. ding , g. trajcevski , p. scheuermann , and e. keogh , `` experimental comparison of representation methods and distance measures for time series data , '' _ data min . _ , 2012 .
l. ye and e. keogh , `` time series shapelets : a new primitive for data mining , '' in _ proc . 15th acm sigkdd intl conf . knowledge discovery and data mining _ , new york , ny , usa : acm , 2009 , pp . 947 - 956 .
a. nanopoulos , r. alcock , and y. manolopoulos , _ information processing and technology _ , commack , ny , usa : nova science publishers , inc . , 2001 , ch . feature - based classification of time - series data , pp . 49 - 61 .
x. wang , a. wirth , and l. wang , `` structure - based statistical features and multivariate time series clustering , '' in _ ieee intl conf . data mining _ , ieee computer society , 2007 , pp . 351 - 360 .
d. eads , d. hill , s. davis , s. perkins , j. ma , r. porter , and j. theiler , `` genetic algorithms and support vector machines for time series classification , '' in _ applications and science of neural networks , fuzzy systems , and evolutionary computation v _ , b. bosacchi , d. b. fogel , and j. c. bezdek , eds . , vol . 4787 , seattle , wa , usa , 2002 , pp . 74 - 85 .
l. wei and e. keogh , `` semi - supervised time series classification , '' in _ proc . of the 12th acm sigkdd intl conf . knowledge discovery and data mining _ , vol . 20 , no . 23 , new york , ny , usa , 2006 , pp . 748 - 753 .
i. guyon , c. aliferis , and a. elisseeff , _ computational methods of feature selection ( data mining and knowledge discovery series ) _ , boca raton , london , new york : chapman and hall / crc , 2007 , ch . causal feature selection , pp . 63 - 85 .
d. roverso , `` multivariate temporal classification by windowed wavelet decomposition and recurrent neural networks , '' in _ 3rd ans intl topical meeting on nuclear plant instrumentation , control and human - machine interface _ , vol . 20 , washington , dc , usa , 2000 .
x. xi , e. keogh , c. shelton , l. wei , and c. a. ratanamahatana , `` fast time series classification using numerosity reduction , '' in _ proc . 23rd intl conf . machine learning _ , new york , ny , usa : acm , 2006 , pp . 1033 - 1040 .
t. rakthanmanon , b. campana , a. mueen , g. batista , b. westover , q. zhu , j. zakaria , and e. keogh , `` searching and mining trillions of time series subsequences under dynamic time warping , '' in _ proc . 18th acm sigkdd intl conf . knowledge discovery and data mining _ , ser . kdd 12 , new york , ny , usa : acm , 2012 , pp . 262 - 270 .
h. ding , g. trajcevski , p. scheuermann , x. wang , and e. keogh , `` querying and mining of time series data : experimental comparison of representations and distance measures , '' _ proc . vldb endowment _ , 2008 .
j. shieh and e. keogh , `` isax : indexing and mining terabyte sized time series , '' in _ proc . 14th acm sigkdd intl conf . knowledge discovery and data mining _ , acm , 2008 , pp .
a highly comparative , feature - based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series . these features are derived from across the scientific time - series analysis literature , and include summaries of time series in terms of their correlation structure , distribution , entropy , stationarity , scaling properties , and fits to a range of time - series models . after computing thousands of features for each time series in a training set , those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier . the resulting feature - based classifiers automatically learn the differences between classes using a reduced number of time - series properties , and circumvent the need to calculate distances between time series . representing time series in this way results in orders of magnitude of dimensionality reduction , allowing the method to perform well on very large datasets containing long time series or time series of different lengths . for many of the datasets studied , classification performance exceeded that of conventional instance - based classifiers , including one nearest neighbor classifiers using euclidean distances and dynamic time warping and , most importantly , the features selected provide an understanding of the properties of the dataset , insight that can guide further scientific investigation .
index terms : time - series analysis , classification , data mining
the main goal of the cognitive radio technology is to improve the efficiency in the use of limited , temporally and spatially under - utilized licensed radio frequency spectrum .a cognitive radio was first introduced by mitola in as a smart wireless device , which senses the environment , learns and automatically adapts its transmission parameters without changing any hardware structure . through such cognition and the reconfigurability features , cognitive radio systemsenable cognitive users ( unlicensed or secondary users ) to perform spectrum sensing and access the channels based on the sensing results .hence , the spectrum can be utilized opportunistically by allowing the cognitive users to either use the channel if there is no activity of primary users ( licensed users ) or share the spectrum with primary users under certain interference constraints .motivated by the concept of a cognitive radio for efficient spectrum management , ieee recently published ieee 802.22 standard for wireless regional area networks ( wran ) , which is the first cognitive radio based standard for using spectrum holes in tv broadcast bands by cognitive users .it is required that cognitive users transmission does not degrade the performance of primary users , such as tv users , through harmful interference .the performance of cognitive radio systems has been extensively studied in order to obtain more insights regarding their potential applications . in particular , the performance limits of spectrum - sharing schemes were studied in by deriving the capacity of non - fading awgn and fading channels under peak and average received - power constraints at the primary receiver .in addition to interference power constraints , peak and average transmit power constraints were taken into consideration in , where the authors determined the optimal power allocation strategies for the ergodic and outage capacity of a secondary user fading channel under spectrum sharing system subject to different combinations of these constraints .in practical scenarios , errors in channel sensing are inevitable because of uncertainties in a communication channel , e.g. , noise and fading .therefore , the authors in considered the impact of imperfect channel sensing results on the ergodic capacity subject to average interference and transmit power constraints .moreover , the outage capacity and truncated channel inversion with fixed rate ( tifr ) capacity were studied in the presence of sensing errors in .the work in investigated the optimal sensing duration that maximizes the achievable throughput of the secondary users . on the other hand , to overcome the problem of sensing - throughput tradeoff , the authors in proposed a novel cognitive radio system in which spectrum sensing and data transmission are performed at the same time by using the novel receiver structure based on perfect cancellation of the secondary signal . 
recently , the authors in proposed optimal power allocation schemes to minimize the average bit error rate subject to peak / average transmit power and peak / average interference power constraints in spectrum sharing systems .all of the above works assume the availability of perfect channel side information ( csi ) of the interference channel between the secondary transmitter and primary receiver as well as the transmission channel between the secondary transmitter and the secondary receiver .however , in practice it is not an easy task to obtain perfect knowledge of the fading realizations .therefore , the authors in consider the capacity of cognitive radio systems under imperfect channel side information . in ,ergodic capacity was analyzed under average - received power and peak - received power constraints in the presence of only channel estimation error of the link between the secondary transmitter and primary receiver .another capacity metric , namely secondary user mean capacity , was investigated in with partial csi knowledge of the interference channel under a peak interference constraint .recently , the authors in provided unified expressions of the ergodic capacity for different csi level of the transmission link between the secondary transmitter and secondary receiver , and the interference link between the secondary transmitter and primary receiver subject to an average or a peak transmit power constraint together with an interference outage constraint .different from these works , the authors in also considered a minimum signal - to - interference noise ratio ( sinr ) constraint for the primary user and the interference from the primary transmitter on the secondary user mean capacity under different level of channel knowledge of the link between the primary transmitter and the primary receiver , and the link between the secondary transmitter and the primary receiver .another important consideration for cognitive radio systems especially in streaming and interactive multimedia applications is to support quality - of - service ( qos ) requirements of secondary users in terms of buffer or delay constraints . in this respect , the authors in obtained the optimal power adaptation policy to maximize the effective capacity subject to a given qos constraint in multichannel communications . in ,the optimal rate and power allocation strategy for the ergodic capacity in nakagami fading channels was investigated under statistical delay qos constraints . moreover, the recent work in mainly focused on the impact of adaptive -qam modulation on the effective capacity of secondary users under interference power and delay - qos constraints .notably , in most studies as also seen in the aforementioned works , it is implicitly assumed that channel codes with arbitrarily long codewords can be used for transmission and consequently the well - known logarithmic channel capacity expressions of gaussian channels are employed for analysis . 
in this paper , we depart from this idealistic assumption and assume that finite blocklength codes are used by the cognitive secondary users for sending messages .hence , in our setup , transmission rates are possibly less than the channel capacity and errors can occur leading to retransmission requests .we further assume that the cognitive users operate under qos constraints imposed as limitations on the buffer violation probability .the secondary users first detect the primary user activity , which is modeled as two - state markov chain with busy and idle states .subsequently , depending on the sensing result , the secondary user adapts its transmission power and rate and sends the data .channel between the secondary users is assumed to be a block - fading channel in which the fading coefficient remains constant within each frame during the transmission .we first consider the scenario with perfect csi at the secondary receiver and no csi at the secondary transmitter . in this case, transmission is performed at two constant rate levels , depending on the sensing decision . in the second scenario ,csi is assumed to be available at both the secondary transmitter and receiver , enabling the secondary user to adapt its transmission rate according to the channel conditions . under these assumptions ,we analyze the throughput in the presence of buffer constraints by making use of the effective capacity formulation , , and the recent results in .the analysis described above is conducted for a cognitive radio system model in which we have a single secondary transmitter , a single secondary receiver , and one or more primary users .the rest of this paper is organized as follows : we introduce the system model in the next section .section [ sec : preliminaries ] provides preliminaries regarding the channel capacity with finite blocklength codes and effective throughput under statistical qos constraints . in section [ sec : effective_throughput ] , we study effective throughput under the following two assumptions : csi is known perfectly by either the receiver only or both the transmitter and receiver .the numerical results are presented and discussed in section [ sec : sim_results ] , followed by conclusions in section [ sec : conclusion ] .in the cognitive radio model we consider , secondary users first determine the channel status ( i.e. , idle or busy ) through spectrum sensing and then enter into data transmission phase with rate and power that depend on the sensing decision .secondary users are allowed to coexist with primary users in the channel as long as their interference level does not deteriorate the performance of primary users .we also assume that channel sensing and data transmission are performed in frames of seconds .duration of first seconds is allocated to channel sensing in which the secondary users observe either primary users faded sum signal plus gaussian background noise or just gaussian background noise , and make a decision on primary user activity . in the remaining seconds , data transmission is performed over a flat - fading channel with additive gaussian background noise and possibly additive interference arising due to transmissions from active primary users .it is assumed that the primary users activity in the channel remains the same during the frame duration of seconds . 
on the other hand ,activity from one frame to another or equivalently the channel being busy or idle is modeled as a two - state markov chain depicted in figure [ fig : transtion_primary ] .the busy state indicates that primary users are active in the channel whereas idle state represents no primary user activity . in fig .[ fig : transtion_primary ] , , with , denotes the transition probability from state to state , satisfying .note that we set and .given the above two - state markov chain , we can easily determine the prior probabilities that the channel is busy and idle , denoted by and , respectively , as follows : with notations and described below .channel sensing is performed in the first seconds .the remaining duration of seconds is reserved for data transmission . as in , we formulate channel sensing as a binary hypothesis testing problem : where denotes complex circularly symmetric background gaussian noise samples with mean zero and variance , i.e. , . denotes the primary users faded sum signal at the cognitive secondary receiver and can , for instance , be expressed as where is the number of active primary transmitters , is the primary user s transmitted signal and denotes the fading coefficient between the primary transmitter and the secondary receiver .therefore , hypothesis above corresponds to the case in which primary users are inactive in the channel whereas hypothesis models the presence of active primary users . above, denotes the bandwidth of the system and therefore we have complex signal samples in the sensing duration of seconds .we further assume that is an independent and identically distributed ( i.i.d . )sequence of circularly symmetric , complex gaussian random variables with mean zero and variance , i.e. , .the optimal neyman - pearson energy detector is employed for channel sensing , and under the above - mentioned statistical assumptions , the test statistic is the total energy gathered in seconds , which is compared with a threshold : above , is the sum of independent -distributed complex random variables and hence is itself -distributed with degrees of freedom . with this characterization ,the false alarm and detection probabilities can be expressed as where is the regularized gamma function ( * ? ? ?6.5.1 ) , is the lower incomplete gamma function ( * ? ? ?6.5.2 ) , and is the gamma function ( * ? ? ?additionally , and denote busy and idle sensing decisions , respectively .we further express the rest of the conditional probabilities of channel sensing decisions given channel true states , i.e. , , in terms of and , as follows : combining ( [ false_alarm_probability ] ) ( [ eq : sensing_condprobs ] ) and applying the bayes rule , we can obtain the probabilities of channel being sensed to be busy and idle as finally , we would like to note that channel sensing can be performed by either the secondary receiver or transmitter , and we have implicitly assumed that the secondary receiver performs this task . in such a case , we further assume that the binary sensing decision made by the secondary receiver is reliably fed back to the secondary transmitter through a low - rate control channel . following channel sensing , secondary users initiate the data transmission phase in the remaining seconds . 
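as a concrete illustration of the sensing step just described, the following python sketch computes the false alarm and detection probabilities of the energy detector together with the prior probabilities of the two-state markov chain and the resulting probabilities of the channel being sensed busy or idle. this is only a minimal sketch under stated assumptions: the symbol names (number of complex sensing samples m, noise variance sigma_w2, primary-signal variance sigma_s2, threshold lam, transition probabilities p_bi and p_ib) and the illustrative numbers are ours, since the extracted text omits the original symbols and values.

from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)


def sensing_probabilities(m, lam, sigma_w2, sigma_s2):
    """false alarm and detection probabilities of the energy detector."""
    # under H0 the collected energy of m i.i.d. complex Gaussian samples is
    # Gamma(m, sigma_w2); under H1 it is Gamma(m, sigma_w2 + sigma_s2), so both
    # tail probabilities reduce to regularized upper incomplete gamma functions
    p_f = gammaincc(m, lam / sigma_w2)                # P(T > lam | H0)
    p_d = gammaincc(m, lam / (sigma_w2 + sigma_s2))   # P(T > lam | H1)
    return p_f, p_d


def channel_state_probabilities(p_bi, p_ib, p_f, p_d):
    """prior busy/idle probabilities of the two-state markov chain and the
    probabilities of the channel being sensed busy or idle."""
    rho_busy = p_ib / (p_ib + p_bi)   # stationary P(channel busy)
    rho_idle = p_bi / (p_ib + p_bi)   # stationary P(channel idle)
    pr_sensed_busy = rho_busy * p_d + rho_idle * p_f
    pr_sensed_idle = rho_busy * (1.0 - p_d) + rho_idle * (1.0 - p_f)
    return rho_busy, rho_idle, pr_sensed_busy, pr_sensed_idle


if __name__ == "__main__":
    # illustrative (assumed) numbers only
    p_f, p_d = sensing_probabilities(m=1000, lam=1.1e3, sigma_w2=1.0, sigma_s2=0.2)
    print("P_f =", p_f, "P_d =", p_d)
    print(channel_state_probabilities(p_bi=0.1, p_ib=0.1, p_f=p_f, p_d=p_d))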
they adapt transmission rates and power levels depending on the channel sensing decision and availability of channel side information ( csi ) .more specifically , in the absence of csi at the secondary transmitter , fixed - rate transmission is performed with constant power level while in the presence of perfect csi , data is sent at a variable rate . additionally , the average power is and transmission rate is in the case of channel being sensed to be busy , and average power is and transmission rate is in the case of channel being sensed to be idle .the two - level transmission scheme described above is adopted to limit the interference inflicted on the primary users .therefore , we in general have .if cognitive users are not allowed to transmit when the primary user activity is detected in the channel , then we set . in general, power should be below a certain threshold in order to limit the interference inflicted on the primary users .note that when the transmission power is , the average interference experienced by a primary user is where is the fading coefficient of the channel between the secondary transmitter and primary receiver .then , an upper bound on the transmission power can be expressed as where is the maximum average interference power that the primary users can tolerate and is the channel gain between the secondary transmitter the and primary receiver . however, this may not provide sufficient protection in the presence of sensing errors since primary receivers are disturbed with average transmission power of in the case of miss - detections . therefore , as an additional mechanism to control the interference , an upper bound on the probability of miss detection or equivalently a lower bound on the detection probability should be imposed in cognitive radio systems so that miss - detections occur rarely . yet , another method to limit the average interference power experienced by the primary users is to impose the following constraint on the transmission powers : together with possibly peak constraints and . above , is the detection probability in channel sensing . note that primary receiver is disturbed with transmissions of power and with probabilities and , respectively , which are the probabilities of correct detection and miss - detection events .hence , average interference power is proportional to .we note that such an average interference power constraint was , for instance , considered in .finally , we remark that the analysis in section [ sec : effective_throughput ] is conducted for given average power constraints and given signal - to - noise ratios .therefore , any of the interference constraints discussed above can be easily be accommodated in the subsequent throughput analysis .next , we describe the channel model . the channel between the secondary users is assumed to experience flat fading .we also consider the block - fading assumption in which the fading coefficients are constant within the frame of seconds and change independently between the frames . under these assumptions ,the complex input - complex output relationship is above , is the circularly - symmetric complex fading coefficient with a finite variance , i.e. , . and are the complex channel input and output vectors , respectively . since we assume that transmissions are power constrained by or , the average energy available in the data transmission period of seconds is for , and hence . 
with energy uniformly distributed across input symbols ,the average energy per symbol becomes is imposed in the data transmission period rather than an average power constraint , the average energy per symbol becomes .this leads to the scaling of the signal - to - noise ratio by a factor of .since the analysis in section [ sec : effective_throughput ] is conducted for given signal - to - noise ratio expressions , an average energy constraint given as above can be incorporated into the analysis easily . ] . in ( [ eq : io_relations ] ) , denotes the vector of i.i.d .noise samples that are circularly symmetric , gaussian random variables with mean zero and variance , and again represents the vector of active primary users faded sum signal received at the secondary receiver similarly as in ( [ eq : fadedsumsignal ] ) .we again assume that the components of are i.i.d .gaussian random variables with mean zero and variance .in this section , we briefly review rates achieved with finite blocklength codes and effective throughput under statistical qos constraints . in , polyanskiy , poor andverd studied the channel coding rate achieved with finite blocklength codes and identified a second - order expression for the channel capacity of the real additive white gaussian noise ( awgn ) channel in terms of the coding blocklength , error probability , and signal - to - noise ratio ( snr ) .as done in , this result can be slightly modified to obtain the following approximate expression for the instantaneous channel capacity of a flat - fading channel attained in the data transmission duration of symbols ) to hold , we assume that is sufficiently large but finite . ] : where is the gaussian -function and denotes the signal - to - noise ratio which can be expressed as average energy per symbol normalized by the variance of the noise random variable .the above expression provides the rate that can be achieved with error probability for a given fading coefficient and signal - to - noise ratio snr .note that as the blocklength grows without bound , the second term on the right - hand side of ( [ eq : codingratef ] ) vanishes and transmission rate approaches the instantaneous channel capacity for any arbitrarily small .equivalently , we can also conclude from ( [ eq : codingratef ] ) that transmission with a given fixed rate can be supported with error probability where the dependence of the error probability on fading is made explicit by expressing with subscript . , and .] in order to observe the effect of finite - length codewords on the reliability of transmissions , in fig .[ fig : e_vs_r ] we display the error probability vs. transmission rate when the transmitter is assumed to employ finite - length codewords together with the asymptotical behavior as the codeword length grows without bound . according to the shannon capacity limit , when the codeword length increases without bound, we can achieve reliable transmission with no decoding errors ( i.e. , ) for any transmission rate less than the instantaneous channel capacity , i.e. , , whereas reliable communication is not possible when .indeed , as noted in , by the strong converse , when , probability of error goes exponentially to 1 as the blocklength increases .therefore , we have the sharp cutoff at the instantaneous capacity in fig .[ fig : e_vs_r ] for the asymptotic scenario of codewords of infinite length .close inspection of ( [ eq : errorprob ] ) leads to the same conclusion as well .let . 
then, as the blocklength increases to infinity , the term vanishes and in the limit , we have .if , we asymptotically have .on the other hand , for finite - length codewords , when we plot ( [ eq : errorprob ] ) , we see that we have a relatively smooth transition .this behavior indicates that for transmissions with rates less than the instantaneous capacity , we can still have errors , albeit with relatively small probabilities , while transmission rates above the instantaneous capacity can lead to successful transmissions but again only with small probability . in , wu and negidefined the effective capacity as the maximum constant arrival rate that a given service process can support in order to guarantee a statistical qos requirement characterized by the qos exponent .if we define as the stationary queue length , then is the decay rate of the tail of the distribution of the queue length : therefore , for large , the buffer overflow probability can be approximated as exponentially decaying at a rate : hence , larger values of represent more strict qos constraints whereas lower values of indicate looser qos guarantees . the effective capacity , which quantifies the throughput under a buffer constraint in the form of ( [ eq : bufferconstraint ] ) , is given by ( , ) }\ } } \triangleq -\frac{\lambda(-\theta)}{\theta}\end{gathered}\ ] ] where }\} ] is the time - accumulated service process and denotes the discrete - time stationary and ergodic stochastic service process . in the remainder of the paper , will be referred as the effective rate rather than the effective capacity since we study the performance in the finite blocklength regime . before a detailed analysis, we in this subsection briefly describe the impact of considering finite - blocklength regime in the throughput analysis of cognitive radio channels in the presence of buffer constraints .as pointed out in section [ subsec : finiteblocklength ] , the critical difference from the studies with infinite - blocklength codes is that we now have non - zero error probabilities even if the transmission rates are less than the instantaneous capacity .moreover , we observe from ( [ eq : errorprob ] ) that error probabilities , for fixed - rate transmissions , fluctuate depending on the channel conditions . in general , such error eventswill be reflected in the subsequent analysis by the presence of off states in which reliable communication is not achieved due to errors and consequently retransmissions are required .this potentially has significant impact in buffer - limited systems as frequent communication failures and retransmission requests can easily lead to buffer overflows .therefore , coding rates and error probabilities in the finite - blocklength regime should be judiciously analyzed and optimal transmission parameters should be identified .situation is further exacerbated in cognitive radio systems in which channel sensing is performed imperfectly and interference constraints are imposed .firstly , time allocated to channel sensing results in reduced transmission duration , leading to reduced codeword blocklength with consequences on both the rates and error probabilities . 
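to make the finite-blocklength relations above concrete, the following python sketch evaluates one common form of the normal approximation for a complex-gaussian channel; it is a hedged illustration rather than a verbatim reproduction of ([eq:codingratef]) and ([eq:errorprob]), whose exact constants are not recoverable from the extracted text. here n is the number of complex symbols in the data transmission phase, eps the block error probability, snr the signal-to-noise ratio, and z the fading power |h|^2.

import numpy as np
from scipy.stats import norm


def achievable_rate(snr, z, n, eps):
    """rate (bits/s/Hz) supported with block error probability eps."""
    g = 1.0 + snr * z
    capacity = np.log2(g)                                # instantaneous capacity
    dispersion = (1.0 - 1.0 / g**2) * np.log2(np.e)**2   # channel dispersion (bits^2)
    return capacity - np.sqrt(dispersion / n) * norm.isf(eps)


def error_probability(snr, z, n, rate):
    """block error probability when transmitting at a fixed rate (bits/s/Hz)."""
    g = 1.0 + snr * z
    dispersion = 1.0 - 1.0 / g**2
    arg = np.sqrt(n / dispersion) * (np.log(g) - rate * np.log(2.0))
    return norm.sf(arg)   # Q-function


if __name__ == "__main__":
    # as the blocklength n grows, the error probability approaches a step at the
    # instantaneous capacity, reproducing the sharp cutoff discussed above
    for n in (100, 1000, 10000):
        print(n, error_probability(snr=10.0, z=1.0, n=n, rate=3.0))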
additionally , false - alarms and miss - detections , experienced due to imperfect sensing , cause over- or underestimations of the channel , and resulting mismatches cause transmission rates and/or error probabilities to exceed or be lower than required or target levels ( for instance , as will be discussed in section [ sec : state_transition_perfectcsi ] ) .in this section , we first construct an eight - state markov chain in order to model the cognitive radio channel , and then derive the corresponding state transition probabilities when csi is assumed to be perfectly known either at the receiver only or at both the receiver and transmitter .subsequently , we analyze the throughput achieved with finite blocklength codes in the presence of buffer constraints under these two assumptions .it is assumed that perfect knowledge of fading realizations is available at the secondary receiver , but not at the secondary transmitter .therefore , the transmitter performs data transmission with constant rate of or based on the sensing decision about the channel occupancy by the primary users . before analyzing the throughput achieved by the secondary users with finite blocklength codes under buffer constraints ,we construct a state transition model for the cognitive radio channel .first , we list the four possible scenarios , together with corresponding signal - to - noise ratio expressions , arising as a result of different channel sensing decisions and the true nature of primary users activity : * _ scenario slowromancap1@ _( correct - detection denoted by joint event ) : + busy channel is sensed as busy and . * _ scenario slowromancap2@ _ ( miss - detection denoted by ) : + busy channel is sensed as idle and . * _ scenario slowromancap3@ _ ( false - alarm denoted by ) : + idle channel is sensed as busy and . * _ scenario slowromancap4@ _( correct - detection denoted by ) : + idle channel is sensed as idle and . additionally , transmission rate is bits / s / hz in scenarios 1 and 3 above , and is bits / s / hz in scenarios 2 and 4 .when codewords of length are used to send the data at these fixed rates , we know from the discussion in section [ subsec : finiteblocklength ] that information is received reliably with probability while errors occur and retransmission is needed with probability as formulated in ( [ eq : errorprob ] ) .more specifically , the error probabilities in scenarios 1 and 3 are for and 3 , respectively .similarly , we have in scenarios 2 and 4 for and 4 , respectively . above, we see that error probability is a function of the fading coefficient and snr . from this discussion , we conclude that the channel can be either in the on state ( in which information is reliably received ) or the off state ( in which erroneous reception occurs ) in each scenario .hence , we have eight states in total in the markov model for the cognitive radio channel as depicted in fig .[ fig : transtion_cognitive ] . note that since reliable communication can not be achieved in the off states , the transmission rate is effectively zero and the data has to be retransmitted in these states .therefore , the service rates ( in bits / frame ) in four scenarios can be expressed , respectively , as for and .next , we identify the transition probabilities from state to state denoted by in the eight state transition model of the cognitive radio channel .we initially analyze in detail , the probability of staying in the topmost on state . 
' '' '' we can first express as in ( [ main_probability ] ) shown at the top of the next page .subsequently , we can write ( [ seperated_probability ] ) by noting that channel being actually busy in the current frame depends on its state in the previous frame due to the two - state markov chain , and channel being detected as busy in the frame depends only on the true state of the channel being busy or idle in the frame and not on previous true states and sensing decisions since channel sensing is performed in each frame independently .moreover , channel being on does not depend on the sensing decisions and channel being on or off in the previous frames due to the block - fading assumption .finally , we have ( [ separated_probability4 ] ) by observing that the first probability in ( [ seperated_probability ] ) is in the markov chain , the second probability is the correct detection probability in channel sensing , and channel is on with probability as discussed above .+ by following the same steps , transition probabilities from all eight states to state 1 can be found as the channel is busy in the first four states and we see that the transition probabilities from these four states to the first state are the same .the channel is idle in the last four states and similarly their transition probabilities are equal .hence , ( [ eq : prob1 ] ) shows that we can group the transition probabilities into two with respect to the true nature of the channel , i.e. , busy or idle .the rest of the transition probabilities between each state can be derived in a similar fashion and the overall result can be listed as follows for and : the set of transition probabilities can be expressed in an state transition matrix \!=\!\left[\begin{array}{cccc}p_{i1}&.&.&p_{i8}\\ .&.&.&.\\p_{i1}&.&.&p_{i8}\\p_{k1}&.&.&p_{k8}\\.&.&.&.\\p_{k1}&.&.&p_{k8}\end{array}\!\right ] .\end{aligned}\ ] ] note that the rank of is 2 since it has only two linearly independent column vectors . in this subsection, we determine the throughput achieved with finite blocklength codes subject to buffer constraints by obtaining the effective rate of the cognitive radio channel with the state - transition model constructed in section [ sec : state - transition ] . the approach and techniques in this sectionclosely follow with the difference that we now consider performance in the finite blocklength regime . in ( * ? ? ?7 , example 7.2.7 ) , it is shown for markov modulated processes that where is defined underneath ( [ eq : eff - cap - def ] ) , is the spectral radius or the maximum of the absolute values of the eigenvalues of the matrix , is the transition matrix of the underlying markov process , and is a diagonal matrix whose components are the moment generating functions of the processes in states ( in our model ) . in our case , we have .\end{aligned}\ ] ] note that is a rank-2 matrix as well . 
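before turning to the closed-form expression obtained from this rank-2 structure, the construction can be checked numerically. the python snippet below builds the eight-state transition matrix from the markov-chain, sensing and block-error probabilities and evaluates the effective rate from the spectral radius; the state ordering (on/off pairs of the four scenarios, busy states first), the omission of any normalization by the frame duration and bandwidth, and the illustrative numbers are our assumptions read off the extracted text, not the authors' exact conventions.

import numpy as np


def transition_matrix(p_bb, p_bi, p_ib, p_ii, p_d, p_f, eps):
    """8x8 transition matrix; eps = (eps1, eps2, eps3, eps4) are the block
    error probabilities in the four sensing scenarios."""
    e1, e2, e3, e4 = eps

    def row(to_busy, to_idle):
        return [to_busy * p_d * (1 - e1), to_busy * p_d * e1,              # scenario 1: busy sensed busy
                to_busy * (1 - p_d) * (1 - e2), to_busy * (1 - p_d) * e2,  # scenario 2: miss-detection
                to_idle * p_f * (1 - e3), to_idle * p_f * e3,              # scenario 3: false alarm
                to_idle * (1 - p_f) * (1 - e4), to_idle * (1 - p_f) * e4]  # scenario 4: idle sensed idle

    busy_row = row(p_bb, p_bi)   # rows 1-4: channel currently busy
    idle_row = row(p_ib, p_ii)   # rows 5-8: channel currently idle
    return np.array(4 * [busy_row] + 4 * [idle_row])   # rank-2 by construction


def effective_rate(theta, P, service_bits):
    """-(1/theta) log sp(Phi(-theta) P), with Phi the diagonal matrix of moment
    generating functions e^{-theta * service} of the per-state service rates."""
    phi = np.diag(np.exp(-theta * np.asarray(service_bits)))
    spectral_radius = np.max(np.abs(np.linalg.eigvals(phi @ P)))
    return -np.log(spectral_radius) / theta


if __name__ == "__main__":
    # illustrative (assumed) numbers; r1, r2 in bits/frame for the sensed-busy
    # and sensed-idle transmission rates, off states carry zero bits
    r1, r2 = 2.0e4, 6.0e4
    service = [r1, 0.0, r2, 0.0, r1, 0.0, r2, 0.0]
    P = transition_matrix(p_bb=0.9, p_bi=0.1, p_ib=0.1, p_ii=0.9,
                          p_d=0.95, p_f=0.05, eps=(0.01, 0.3, 0.005, 0.01))
    print(effective_rate(theta=1e-4, P=P, service_bits=service))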
as the -rowed ( ) principal minors of are zero, the coefficients of the characteristic polynomial of the matrix can be found in terms of adding only the 1-rowed and 2-rowed principal minors , then the maximum root of this polynomial gives the spectral radius , which is expressed in ( [ eq : sp ] ) on the next page .now , combining ( [ eq : eff - cap - def ] ) , ( [ eq : theta - envelope ] ) , and ( [ eq : sp ] ) , we can easily express the effective rate of the cognitive channel as in ( [ eq : eff - cap - cognitive ] ) on the next page .note that in ( [ eq : eff - cap - cognitive ] ) characterizes the maximum constant arrival rates that the cognitive radio channel can support in the finite blocklength regime under buffer limitations characterized by the qos exponent .note that this throughput is maximized over transmission rates and .throughput in the absence of any buffer constraints , which can be easily determined by letting in , is given in ( [ eq : eff - cap - theta - zero ] ) . \\&+ \frac{1}{2}\bigg\{\big[\phi_1(\theta)p_{i1 } + \dots + \phi_4(\theta)p_{i4 } - \phi_5(\theta)p_{k5 } - \dots - \phi_8(\theta)p_{k8}\big]^2\\&+ 4(\phi_1(\theta)p_{k1 } + \dots + \phi_4(\theta)p_{k4})\times(\phi_5(\theta)p_{i5 } + \dots + \phi_8(\theta)p_{i8})\bigg\}^{\frac{1}{2 } } \end{split}\ ] ] ' '' '' \\+ & \frac{1}{2}\big\{\big[(p_{i1 } - p_{k5})e^{-\theta ( t - n)br_1 } + ( p_{i3 } - p_{k7})e^{-\theta ( t - n)br_2 } + p_{i2 } + p_{i4}- p_{k6 } - p_{k8}\big]^2\\+ & 4(p_{k1}e^{-\theta ( t - n)br_1 } + p_{k2 } + p_{k3}e^{-\theta ( t - n)br_2 } + p_{k4})\times(p_{i5}e^{-\theta ( t - n)br_1 } + p_{i6 } + p_{i7}e^{-\theta ( t - n)br_2 } + p_{i8})\big\}^{\frac{1}{2}}\bigg ) \end{split}\ ] ] ' '' '' ' '' '' instead of csi known by the receiver only , we in this section consider that both the secondary transmitter and receiver have access to perfect csi .therefore , in contrast to section [ sec : fixed_rate ] , the secondary transmitter can adapt its transmission scheme by varying the rate depending on the instantaneous values of the fading coefficient . under the assumption of perfect csi at the transmitter , the eight - state markov model for the cognitive radio channel with four possible scenarios and on / off states is unchanged as we defined in section [ sec : state - transition ] . additionally , the snr expressions in each scenario are still the same .in contrast to fixed - rate transmission schemes , for a given fixed target error probability , the secondary transmitter now varies its transmission rate according to the channel conditions and channel sensing decision .more specifically , in the case of channel being sensed as busy , the secondary transmitter initiates data transmission with rate on the other hand , if no primary user activity is sensed in the channel , we have the following transmission rate before specifying the transition probabilities of the cognitive radio channel , we initially determine the error probabilities in each scenario that are associated with the transmission rates or : * in scenario 1 , the fixed target error probability is attained with the transmission rate defined above .* in scenario 2 ( in which we have missed detection ) , due to the primary user activity and the resulting interference on secondary users , the actual channel rate associated with error probability is however , the secondary users do not know the true state of the channel , and they only have the imperfect channel sensing result . 
in this case , the channel is detected as idle even if the primary users are active .hence , for the given target error probability , the secondary users send data with rate , which is obviously higher than the actual rate in ( ) that the channel actually supports with error probability .+ as a result , we have in fact higher error probability ( compared to the given target error probability ) when the transmission rate is ) . equating the transmission rate to that in ( ) , and rearraging the terms , the final expression of the actual error probability can be found as ( [ eq : r1_epsilondoublehead ] ) shown at the top of next page .+ + ' '' '' + in this case , due to the sensing error , we are subject to more transmission errors resulting in lower reliability in data transmission . we also see that error probability in ( [ eq : r1_epsilondoublehead ] ) that can be achieved with transmission rate is a function of the fading coefficient . * in scenario 3 ( in which we have false alarm ) , for a given error probability , the channel supports the rate which is higher than the rate because there is actually no interference from the primary users , i.e , .therefore , the error probability that can be attained with this transmission rate is less than the given fixed target error probability .following the same approach adopted in scenario 2 , the actual error probability can be expressed as ( [ eq : r1_epsilonhead ] ) shown at the top of next page .+ + ' '' '' + note that , the error probability again varies with the fading coefficient . * in scenario 4 , the constant error probability is attained with rate . by combining the above error probability expressions ,the average probability of error for variable - rate transmissions is given by we can further express by using the prior probabilities of the channel state given in ( [ eq : prior_probs ] ) and the probabilities of channel sensing decisions in ( [ false_alarm_probability ] ) ( [ eq : sensing_condprobs ] ) as follows : now we can obtain the transition probabilities in a similar fashion as in section [ sec : state - transition ] for and : where the transition probabilities to states 1 , 2 , 7 and 8 are constant while the rest of the transition probabilities depend on the fading coefficient . we will use the same techniques described in section [ sec : throughput_underqos_nocsi ] . since service rates in on states are functions of the fading coefficient in variable - rate transmission , the only difference comes from the moment generating functions of the processes in on states as follows : then , the approach given in section [ sec : throughput_underqos_nocsi ] can be applied to obtain the effective rate under qos constraints as in ( [ eq : eff - cap - cognitive_perfectcsi ] ) on the next page .the target error probability can be optimized to maximize the effective throughput . when the cognitive radio channel is not subject to any buffer constraints ,hence qos exponent , we have the effective rate expression given in ( [ eq : eff - cap - theta - zero_perfectcsi ] ) . 
+ \frac{1}{2}\big\{\big[(p_{i1 } - p_{k5})\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_1({\text{\footnotesize{snr}}}_1 , |h|^2)}\ } + ( p_{i3 } - p_{k7})\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_2({\text{\footnotesize{snr}}}_4 , |h|^2)}\ } \\+ & p_{i2 } + p_{i4}- p_{k6 } - p_{k8}\big]^2 + 4(p_{k1}\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_1({\text{\footnotesize{snr}}}_1 , |h|^2)}\ } + p_{k2 } + p_{k3}\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_2({\text{\footnotesize{snr}}}_4 , |h|^2)}\ } + p_{k4})\\ & \times(p_{i5}\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_1({\text{\footnotesize{snr}}}_1 , |h|^2)}\ } + p_{i6 } + p_{i7}\mathbb{e}_{|h|^2}\{e^{-\theta ( t - n)br_2({\text{\footnotesize{snr}}}_4 , |h|^2)}\ } + p_{i8})\big\}^{\frac{1}{2}}\bigg ) \end{split } \normalsize\ ] ] ' '' '' ' '' ''in this section , the results of numerical computations are illustrated .more specifically , we numerically investigate optimal transmission parameters such as optimal fixed transmission rates and optimal target error probabilities in variable - rate transmissions .furthermore , we analyze the impact sensing parameters and performance ( e.g. , sensing duration and threshold , and detection and false - alarm probabilities ) , different levels of qos constraints , and codeword blocklengths on the throughput in cognitive radio systems . numerically , we provide characterizations for key tradeoffs . in the simulations , we consider rayleigh fading channel with exponentially distributed fading power with unit mean , i.e. , . it is assumed that the channel bandwidth khz , noise power , interference power and . in the two state markov model , the transition probabilities from busy to idle state and from idle to busy state set to and , respectively .the average power values are db and db in the cases of channel being sensed to be busy and idle , respectively . sensing threshold is chosen as in order to have reasonable probabilities of false alarm and detection . in this case, we have and .unless mentioned explicitly , frame duration is ms , sensing duration is ms , and hence data transmission is performed with complex signal samples . vs. fixed transmission rates and in the rayleigh fading environment .the code blocklength is . ] in fig .[ fig : re_tranmissionrates ] , the effective rate is plotted as a function of fixed transmission rates and .the qos exponent is set to .we see that effective rate is maximized at unique and values . , the probabilities of false alarm and detection , the probability of idle detection vs. sensing duration in fixed - rate transmission . ]we analyze the tradeoff between the sensing duration and the effective rate .hence , in fig . [ fig : effectiverate_sensing_nocsi ] , we plot the effective rate , the probabilities of false alarm and detection , the probability of idle detection as a function of the channel sensing duration for and .the qos exponent is set to .again , fixed - rate transmissions are considered and the effective rate is maximized over transmission rates . for ,the false alarm and detection probabilities increase to and approximately , respectively with increasing . since the false alarm probability is higher , we have lower probability of detecting channel as idle as seen in the lower right figure .hence , the channel is not efficiently utilized by cognitive users due to imperfect channel sensing decisions .therefore , the effective rate is small . 
on the other hand , when , we have lower false alarm and detection probabilities since the threshold level in hypothesis testing is higher. the probabilities of false alarm and detection diminish to as increases .thus , the secondary user senses the channel as idle more frequently and performs data transmission with higher average power level , which leads to higher effective rate .but , this comes at the expense of higher interference on the primary users , which may be prohibitive since primary users transmission can not be sufficiently protected . if we impose the average interference power constraint in ( [ eq : avg - interference - power ] ) with db , and peak transmission power constraints db and db for and , respectively , the power level is limited by the interference constraint for lower values of detection probability .hence , we have lower effective rate with power control imposed through the constraint in ( [ eq : avg - interference - power ] ) when . as a result, we provide effective protection for primary users . in the case of ,reliable channel sensing is achieved since the probabilities of false alarm and detection approach and , respectively .the effective rate increases until a certain threshold due to reliable channel sensing .however , after that threshold , the effective rate decreases with increasing channel sensing duration .the reason is that as channel sensing takes more time , less time is available for data transmission .additionally , shorter coding blocklength for data transmission further affects adversely , leading to lower effective throughput .thus , there is a more intricate tradeoff between channel sensing duration and throughput in the finite blocklength regime . , the probabilities of detection and false alarm , probabilities of idle and busy detection vs. sensing threshold in fixed - rate transmissions .] in order to analyze the impact of the choice of the sensing threshold on the effective rate , in fig .[ fig : effectiverate_lambda_nocsi ] , we plot the effective rate , probabilities of false alarm and detection , probabilities of idle and busy detection vs. sensing threshold for the values of qos exponent , and in the fixed - rate transmission case . since the channel sensing method is independent of , we display the behavior of the above - mentioned probabilities without any buffer limitations in the lower subfigures .the effective rate is again maximized with respect to transmission rates .initially , as increases , the probability of false alarm starts to diminish .this improves the detection performance , and hence secondary users obtain more accurate channel sensing results .therefore , the effective rate starts increasing .as continues to increase , the false alarm probability approaches and the probability of detection starts to decrease as well .hence , the cognitive users fail to detect the primary users activity even if they are active in the channel ( i.e. , we have higher miss detection probability ) , and use the channel more frequently by transmitting data with higher average power level , which explains the second increase in the effective rate .however , experiencing significant interference can deteriorate the primary users data transmission . to avoid this harmful interference caused by the secondary user , the lower bound on the detection probabilitycan be imposed , i.e. 
, .also , the transmission power can be limited by the average interference constraint in ( [ eq : avg - interference - power ] ) with db , which leads to decreasing effective rate as the secondary users fail to detect the primary users activity . in the figure, we also see that effective rate decreases with increasing .thus , the effective rate takes the highest values in the absence of qos constraints , i.e. , when . vs the probability of error for different values of qos exponent in variable - rate transmission . ] in fig .[ fig : re_vsepsilon ] , we consider variable - rate transmissions , and display numerical results for the effective rate as a function of the target error probability for and . aslarger values of the error probability indicate that cognitive users data transmission is subject to more errors , they enter into off states frequently , where rate of reliable transmission is effectively zero .therefore , effective rate decreases as increases beyond a threshold .we also observe that effective rate is maximized at a unique optimal error probability .moreover , effective rate decreases as qos constraints become more stringent ( i.e. , for larger values of ) . and the probability of error vs. blocklength in variable - rate transmission . ]the tradeoff between the blocklength and effective rate in variable - rate transmission is analyzed .hence , in fig .[ fig : re_epsilon_vsblocklength ] , we display the behavior of the optimized error probability and effective rate as a function of the code blocklength for .in the lower subfigure we see that as the code blocklength increases , the optimal error probability , which maximizes the effective rate , decreases for given values . in the upper subfigure, we observe that if there is no such buffer limitation , effective rate increases with increasing blocklength .however , under buffer constraints with and , as code blocklength increases until a certain threshold , data transmission is performed with decreasing error probability , which improves the system performance because longer codewords are transmitted more reliably . on the other hand ,the effective rate starts to decrease after the threshold .this is due to our assumption that fading stays constant over the frame of seconds . as the blocklength and hence the value of increase ,cognitive users experience slower fading .therefore , possible unfavorable deep fading lasts longer , leading to degradation in performance . in order to avoid buffer overflows , secondary transmitter becomes more conservative and supports only smaller arrival rates . andprobabilities of false - alarm , detection vs. sensing threshold . ] in fig .[ fig : avgerror_vsthreshold ] , we plot the average error probability , which maximizes the effective rate in variable - rate transmission and the probabilities of detection and false alarm vs. sensing threshold for sensing duration of ms and ms . in the presence of csi knowledge at the transmitter , secondary transmitter performs variable - rate data transmission with given fixed target error probability and . 
as we know from the analysis in section[ sec : state_transition_perfectcsi ] , error probability does not stay fixed at the target level of in scenarios 2 , 3 where busy channel is sensed as idle and idle channel is sensed as busy , respectively .as increases , the probability of false alarm starts decreasing .hence , average error probability decreases .when the probability of detection and the probability of false alarm approach 1 and 0 , respectively ( in the case of perfect channel sensing ) , the average error probability is equal to the fixed target error probability . as continues to increase , the detection probability diminishes and miss detection ( scenario 2 )occurs more frequently , resulting in error probabilities greater than .cognitive users can experience frequent errors in miss detections with variable error probability , which is larger than the fixed target error probability of .therefore , we have higher average error probability .we can see that channel sensing plays a critical role on the average error probability in variable - rate transmissions .finally we note that as sensing duration increases , the probabilities of false alarm and detection decrease with higher slopes as threshold increases .we also note that lower average error probability is achieved with larger values when . and probabilities of false - alarm , detection vs. channel sensing duration .] next , we analyze the tradeoff between the reliability of the variable - rate transmission and the sensing duration . in fig .[ fig : avgerror_vssensing ] , the average probability of error , which achieves the highest effective rate , the probabilities of detection and false alarm are given as a function of sensing duration for and .the target error probability is fixed to .when , the detection probability approaches and the false alarm probability approaches as sensing duration increases .thus , cognitive users detect the channel as busy more and transmit data with fixed error probability or variable error probability ( scenario 1 and scenario 3 , respectively ) .the average error probability decreases when channel sensing takes more time and approaches approximately .for , cognitive users almost perfectly sense the channel with false alarm and detection probabilities approaching and , respectively with increasing sensing duration .thus , average error probability decreases and approaches .therefore , data transmission is performed at the target error rate .if is chosen as , error probability increases until a certain threshold since we have lower false alarm and detection probabilities and the channel is detected as idle even though it is occupied by primary users , where cognitive users transmission rate is achieved with error rate that is much bigger than the target error probability .after that threshold , less time is allocated for data transmission .therefore , lower transmission rates are supported , yielding more reliable data transmission , and hence decreasing the average error probability . in this subsection, we compare the effective rate achieved under fixed - rate and variable - rate transmission schemes . vs. qos exponent for fixed - rate and variable - rate transmission for different values .] in fig .[ fig : re_vstheta ] , we display numerical results for the effective rate vs. 
qos exponent in fixed - rate and variable - rate transmissions for ms , ms and s , ms .larger values of indicate that data transmission is performed under more strict qos constraints .we see that increasing diminishes the effective rate for both transmission schemes .the variable - rate transmission achieves better performance when s , ms for all values of . on the other hand, fixed - rate transmission outperforms for low values of when ms , ms . under more strict buffer limitations ( higher values of ) ,cognitive users send data with lower rates .thus , the reliability of transmission becomes more important .therefore , instead of sending data at constant rates , transmitter benefits more by varying the rate . vs. blocklength for fixed - rate and variable - rate transmission , .] effective rate is given as a function of blocklength for fixed - rate and variable - rate transmissions in fig .[ fig : re_vsblocklength ] .we previously observed that effective rate increases until a certain threshold with increasing code blocklength .after that threshold , effective rate starts to diminish .the reason of this trend is explained in fig .[ fig : re_epsilon_vsblocklength ] for variable - rate transmission . in this figure, we also see that the same behavior is observed for fixed - rate transmission .we interestingly note that transmitting with constant rates leads to higher effective rate compared to varying the rate based on channel conditions when code blocklength is less than complex signal samples . when is increased beyond complex signal samples , keeping the error probability constant and performing data transmission with variable rate result in better performance .in this paper , we have analyzed the throughput of cognitive radio systems in the finite blocklength regime under buffer constraints . through the effective capacity formulation ,we have characterized the maximum constant arrival rates that the cognitive radio channel can support with finite blocklength codes while satisfying statistical qos constraints imposed as limitations on the buffer violation probability .we have first focused on the scenario in which the csi of the secondary link is assumed to be perfectly known at the secondary receiver only . in this case , the secondary transmitter sends the data at two different constant rate levels , which depend on the channel sensing decision , and error rates vary with the channel conditions . in the second scenario ,perfect csi is available at both the secondary transmitter and receiver . under this assumption , the secondary transmitter ,considering a target error rate level , varies its transmission rate according to the time - varying channel conditions . for both scenarios ,we have determined the throughput as a function of state transition probabilities of the cognitive radio channel , prior probabilities of idle / busy state of primary users , sensing decisions and reliability , the block error probability , qos exponent , frame and sensing durations .we have investigated the interactions and tradeoffs between different buffer , sensing , transmission , and channel parameters and the throughput . through the numerical results ,we have demonstrated that sensing threshold , duration and reliability have significant impact on the performance . 
in particular , we have observed that highly inaccurate sensing can either lead to inefficient use of resources and low throughput or cause possibly high interference on the primary users .we have also noted that sensing - throughput tradeoff is more involved since increasing the sensing duration for improved sensing performance not only decreases the time allocated to data transmission but also results in shorter codewords being sent , lowering the transmission reliability .additionally , we have seen in the case of variable transmission - rate that average error probability can deviate significantly from the target error rate due to imperfect sensing .moreover , we have remarked that throughput generally decreases as the qos exponent increases ( i.e. , as qos constraints become more stringent ) , and variable - rate transmissions have better performance under more strict qos restrictions while fixed - rate transmissions lead to higher throughput under looser qos constraints .ieee standard , ieee recommended practice for information technology - telecommunications and information exchange between systems wireless regional area networks(wran)- specific requirements - part 22.2 : installation and deployment of ieee 802.22 systems , " 2012 .x. kang , y .- c .liang , a. nallanathan , h. k. garg , and r. zhang , optimal power allocation for fading channels in cognitive radio networks : ergodic capacity and outage capacity , " _ ieee trans .wireless commun .940 - 950 , feb .2009 .s. stotas , and a. nallanathan , on the outage capacity of sensing - enhanced spectrum sharing cognitive radio systems in fading channels , " _ ieee trans . on commun .10 , pp . 2871 - 2882 , oct .2011 .h. kim , h. wang , s. lim , and d. hong , on the impact of the outdated channel infromation on the capacity of the secondary user in spectrum sharing environments , " _ ieee trans .on commun .284 - 295 , jan . 2012 .h. a. suraweera , p. j. smith and m. shafi , capacity limits and performance analysis of cognitive radio with imperfect channel knowledge , " _ ieee trans .4 , pp . 1811 - 1822 , may . 2010 .p. j. smith , p. a. dmochowski , h. a. suraweera , and m. shafi , the effects of limited channel knowledge on cognitive radio system capacity , " _ ieee trans .927 - 933 , feb . 2013 .j. tang , and x. zhang , quality - of - service driven power and rate adaptation for multichannel communications over wireless links , " _ ieee trans .wireless commun ._ , vol . 6 , no . 12 , pp4349 - 4360 , dec .2007 .l. musavian , s. aissa , and s. lambotharan , adaptive modulation in spectrum - sharing channels under delay quality - of - service constraints , " _ ieee trans .901 - 911 , mar .2011 .m. c. gursoy , throughput analysis of buffer - constrained wireless systems in the finite blocklength regime , " proc . 
of the 2011 ieee international conference on communications ( icc ) , kyoto , japan , june 2011 .gozde ozcan received the b.s .degree in electrical and electronics engineering from bilkent university , ankara , turkey in 2011 .she is currently working towards the ph.d .degree in the department of electrical engineering and computer science , syracuse university .her research interests are in the fields of wireless communications , statistical signal processing and information theory .currently , she has particular interest in cognitive radio systems .mustafa cenk gursoy received the ph.d .degree in electrical engineering from princeton university , princeton , nj , in 2004 , and the b.s .degree in electrical and electronics engineering from bogazici university , istanbul , turkey , in 1999 with high distinction .he was a recipient of the gordon wu graduate fellowship from princeton university between 1999 and 2003 . in the summer of 2000, he worked at lucent technologies , holmdel , nj , where he conducted performance analysis of dsl modems . between 2004 and 2011 ,he was a faculty member in the department of electrical engineering at the university of nebraska - lincoln ( unl ) .he is currently an associate professor in the department of electrical engineering and computer science at syracuse university .his research interests are in the general areas of wireless communications , information theory , communication networks , and signal processing .he is currently a member of the editorial boards of ieee transactions on wireless communications , ieee transactions on vehicular technology , ieee communications letters , and physical communication ( elsevier ) .he received an nsf career award in 2006 .more recently , he received the eurasip journal of wireless communications and networking best paper award , the unl college distinguished teaching award , and the maude hammond fling faculty research fellowship .
in this paper , throughput achieved in cognitive radio channels with finite blocklength codes under buffer limitations is studied . cognitive users first determine the activity of the primary users through channel sensing and then initiate data transmission at a power level that depends on the channel sensing decisions . it is assumed that finite blocklength codes are employed in the data transmission phase . hence , errors can occur in reception and retransmissions can be required . primary users activities are modeled as a two - state markov chain and an eight - state markov chain is constructed in order to model the cognitive radio channel . channel state information ( csi ) is assumed to be perfectly known by either the secondary receiver only or both the secondary transmitter and receiver . in the absence of csi at the transmitter , fixed - rate transmission is performed whereas under perfect csi knowledge , for a given target error probability , the transmitter varies the rate according to the channel conditions . under these assumptions , throughput in the presence of buffer constraints is determined by characterizing the maximum constant arrival rates that can be supported by the cognitive radio channel while satisfying certain limits on buffer violation probabilities . tradeoffs between throughput , buffer constraints , coding blocklength , and sensing duration for both fixed - rate and variable - rate transmissions are analyzed numerically . the relations between average error probability , sensing threshold and sensing duration are studied in the case of variable - rate transmissions .
index terms : channel sensing , channel side information , effective rate , finite blocklength codes , fixed - rate transmission , markov chain , probability of detection , probability of false alarm , qos constraints , and variable - rate transmission .
kernel density estimation ( kde ) is such a ubiquitous and fundamental technique in statistics that our claim in this paper of an interesting and useful new contribution to the enormous body of literature ( see , e.g. , ) almost inevitably entails some degree of hubris .even the idea of using kde to determine the multimodal structure of a probability density function ( henceforth simply called `` density '' ) has by now a long history that has very recently been explicitly coupled with ideas of topological persistence . in this paper , we take precisely the opposite course and use the multimodal structure of a density to perform bandwidth selection for kde , an approach we call _ topological density estimation _ ( tde ) .the paper is organized as follows : in [ sec : topologicaldensityestimation ] , we outline tde via the enabling construction of unimodal category and the corresponding decomposition detailed in . in [ sec : evaluation ] , we evaluate tde along the same lines as and show that it offers advantages over its competitors for highly multimodal densities , despite requiring no parameters or nontrivial configuration choices . finally , in [ sec : remarks ] we make some remarks on tde . scripts and code used to produce our resultsare included in appendices , as are additional figures .surprisingly , our simple idea of combining the ( already topological ) notion of unimodal category with ideas of topological persistence has not hitherto been considered in the literature , though the related idea of combining multiresolution analysis with kde is well - established .the work closest in spirit to ours appears to be ( see also ) , in which the idea of using persistent homology to simultaneously estimate the support of a compactly supported ( typically multivariate ) density and a bandwidth for a compact kernel was explored . in particular , our work also takes the approach of simultaneously estimating a topological datum and selecting a kernel bandwidth in a mutually reinforcing way .furthermore , in the particular case where a density is a convex combination of widely separated unimodal densities , their constructions and ours will manifest some similarity , and in general using a kernel with compact support would allow the techniques of and the present paper to be used in concert .however , our technique is in most ways much simpler and kde in one dimension typically features situations in which the support of the underlying density is or can be assumed to be topologically trivial , so we do not explore this integration here .let denote the space of continuous densities on . is _ unimodal _ if is contractible ( within itself ) for all .the _ unimodal category _ is the least integer such that there exist unimodal densities for and with and : we call the rhs a _ unimodal decomposition _ of ( see figure [ fig : unidec ] ) .that is , is the minimal number of unimodal components whose convex combination yields . for example , in practice the unimodal category of a gaussian mixture model is usually ( but not necessarily ) the number of components .note that while a unimodal decomposition is very far from unique , the unimodal category is a topological invariant . for, let be a homeomorphism : then since , it follows that is unimodal iff is . in situations where there is no preferred coordinate system ( such as in , e.g. 
, certain problems of distributed sensing ) , the analytic details of a density are irrelevant , whereas the topologically invariant features such as the unimodal category are essential .the essential idea of tde is this : given a kernel and sample data for , for each proposed bandwidth , compute the density estimate and subsequently the concomitant estimate of the unimodal category is where denotes an appropriate measure ( nominally lebesgue measure ) .now is the set of bandwidths consistent with the estimated unimodal category .tde amounts to choosing the bandwidth that is , we look for the largest set where is constant i.e . , where the value of the unimodal category is the most prevalent ( and usually in practice , also _persistent_)and pick as bandwidth the central element of .in this section , we evaluate the performance of tde and compare it to other methods following .however , in the present context it is also particularly relevant to estimate highly multimodal densities .sharp peaks and highly oscillating functions [ because they ] should not be tackled with kernels anyway . ''we feel that this reasoning is debatable in light of the qualitative performance and runtime advantages of tde on highly multimodal densities with a number of samples sufficient to plausibly permit good estimates . ] towards that end we also consider the following family : for and ( see figure [ fig : fkm ] ) . as in ,all results below are based on simulation runs and sample sizes , noting that values were evaluated but not explicitly shown in the results of .we consider both gaussian and epanechnikov kernels , and it turns out to be broadly sufficient to consider just tde and ordinary least - squares cross - validation kde ( cv ) . before discussing the results, we finally note that it is necessary to deviate slightly from the evaluation protocol of in one respect that is operationally insignificant but conceptually essential . because tde hinges on identifying a persistent unimodal category , selecting bandwidths from the sparse set of 25 logarithmically spaced points from to used across methods in is fundamentally inappropriate for evaluating the potential of tde .instead , we use ( for both cv and tde ) the general data - adaptive bandwidth set , where and denotes the sampled data . , with details determined ( as usual ) by the problem at hand .while it may be reasonable to dispense with constant spacing in a bandwidth set , it is absolutely essential to have enough members of the set to give persistent results . ]the results of cv and tde on the family are summarized in figures [ fig : hsscomparisongauss ] and [ fig : hsscomparisonepan ] .results of cv and tde on the family for are summarized in figures [ fig : fkmeval500matlab]-[fig : fkmeval200matlabtop ] , and results for are summarized in figures [ fig : fkmeval100matlab]-[fig : fkmeval25matlabtop ] in [ sec : smallsamples ] .while tde underperforms cv on the six densities in , it is still competitive on the three multimodal densities , , and when using a gaussian kernel. as we shall see below using , the relative performance of tde improves with increasing multimodality , to the point that it eventually outperforms cv on qualitative criteria such as the number of local maxima and the unimodal category itself . 
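a minimal python sketch of the tde bandwidth-selection loop described above follows. it assumes a gaussian kernel and, for simplicity, uses the number of interior local maxima of the discretized estimate as a stand-in for the unimodal category estimate (the unimodal-decomposition construction underlying the method is more refined than this); the bandwidth grid and the illustrative two-component mixture are our assumptions.

import numpy as np


def kde_gauss(x, grid, h):
    """gaussian kernel density estimate with bandwidth h, evaluated on grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2.0 * np.pi))


def num_modes(f):
    """number of interior local maxima of a sampled function."""
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] >= f[2:])))


def tde_bandwidth(x, n_bandwidths=100, grid_size=512):
    """pick the central bandwidth of the set of bandwidths on which the
    most prevalent (persistent) mode count is attained."""
    pad = 3.0 * x.std()
    grid = np.linspace(x.min() - pad, x.max() + pad, grid_size)
    # assumed data-adaptive, logarithmically spaced bandwidth set
    hs = np.geomspace(np.diff(np.sort(x)).max() / 10.0, np.ptp(x), n_bandwidths)
    counts = np.array([num_modes(kde_gauss(x, grid, h)) for h in hs])
    values, freq = np.unique(counts, return_counts=True)
    prevalent = values[np.argmax(freq)]
    candidates = hs[counts == prevalent]
    return candidates[len(candidates) // 2], int(prevalent)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-3.0, 0.5, 250), rng.normal(3.0, 0.5, 250)])
    h_star, modes = tde_bandwidth(data)
    print("selected bandwidth:", h_star, "estimated modes:", modes)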
the relative performance of tde is better with a gaussian kernel than with an epanechnikov kernel : this pattern persists for the family .indeed , the relative tendency of tde to underestimate diminishes if a gaussian versus an epanechnikov kernel is used . as the degree of multimodality increases , the relative tendency of tde to underestimate eventually disappears altogether , and the performance of tde is only slightly worse than that of cv .meanwhile , shows that cv outperforms all the other methods considered there with respect to .therefore we can conclude that tde offers very competitive performance for ( or ) for highly multimodal densities . since cv is expressly designed to minimize the expected value of ( i.e. , ), it is hardly surprising that tde does not perform as well in this respect .however , it is remarkable that tde is still so competitive for the distribution of : indeed , the performance of both techniques is barely distinguishable in many of the multimodal cases .furthermore , the convergence of tde with increasing is clearly comparable to that of cv , which along with its derivatives has the best convergence properties among the techniques in . while points out that cv gives worse values for than competing methods , the performance gap there is fairly small , and so we can conclude that tde again offers competitive performance for ( or and ) for highly multimodal densities . the only respect in which cv offers a truly qualitative advantage over tde for highly multimodal densities is ( or its surrogates and ) .however , there is still considerable overlap in the distributions for tde and cv , and for multimodal densities and sample sizes of or more , cv offers nearly the best performance in this respect of all the techniques considered in .therefore we can conclude that tde offers reasonable though not competitive performance for ( or and ) for highly multimodal densities .the preceding considerations show that tde is competitive overall with other methods though still clearly not optimal for highly multimodal densities ( and sample sizes sufficient in principle to resolve the modes ) when traditional statistical evaluation criteria are used .however , when _ qualitative _ criteria such as the number of local maxima and the unimodal category itself are considered , tde outperforms cv for highly multimodal densities ( see figures [ fig : fkmeval500matlabtop ] and [ fig : fkmeval200matlabtop ] ) . in practice ,such qualitative criteria are generally of paramount importance .for example , precisely estimating the shape of a density is generally less important than determining if it has two or more clearly separable modes .perhaps the most impressive feature of tde , and one that cv is essentially alone in sharing with it , is the fact that tde requires no free parameters or assumptions .indeed , tde can be used to evaluate its own suitability : for unimodal distributions , it is clearly not an ideal choice but it is good at detecting this situation in the first place .in fact while for cv is uniformly better at determining unimodality , for the situation has reversed , with tde uniformly better at determining unimodality .
we introduce _ topological density estimation _ ( tde ) , in which the multimodal structure of a probability density function is topologically inferred and subsequently used to perform bandwidth selection for kernel density estimation . we show that tde has performance and runtime advantages over competing methods of kernel density estimation for highly multimodal probability density functions . we also show that tde yields useful auxiliary information and that it can determine its own suitability for use , and we explain its performance .
non - invasive measurements of density are of special importance for stratified fluids experiments . over the last decades numerous optical techniqueswere developed .for instance , schlieren , shadowgraphy , interferometry , and others , allow for density variation measurements based on the differences in the fluid refractive index .progress in digital imaging helped to create the digital counterparts , such as synthetic schlieren methods .the refractive index varies as , where are the light velocity in the medium and in the vacuum , respectively .light ray passing through a non - uniform density region is deflected from its original path .the relation between the density inhomogeneity and the refractive index are defined by gladstone - dale , where is the gladstone - dale constant , is the wavelength of a light beam and is the density of the fluid .the background oriented schlieren belongs to the family of synthetic schlieren methods .the bos exploits a light source that projects a textured background ( typically a random pattern of dots ) located on one side of a test chamber onto the camera sensor positioned on the opposite side .the first image ( called reference image ) is recorded on the background pattern through a stagnant fluid of uniform density . upon fluid motion or density variations , density gradients induce the distortions of the projected pattern compared to the reference image .the distortions are quantified using optical flow or particle image velocimetry ( piv ) methods , and provide the field of virtual displacements proportional to the derivatives of the index of refraction .dalziel and co - authors utilized the synthetic schlieren methods to calculate the gradient of the density fluctuations in an air flow around a cylinder . generally , the majority of the bos applications are in aerodynamic applications .raffel & richard performed an intensive bos investigation on the formation and interaction of the blade vortex phenomenon , aiming to reduce the noise generated by a rotor of a helicopter .validation of the bos technique can be found in venkatakrishnan & meier .the authors compared the 2d - density field of a cone traveling in air at mach 2 with the known cone - charts .the authors also integrated the field of displacements applying the poisson equation and reconstructed the density field .recently , bos was extended to three - dimensional measurements for air flows by berger et al .the authors extended the image processing algorithms to capture time resolved unsteady gas density fields . over the last decadeseveral studies have focused on the accuracy of the technique .for instance , elsinga and vinnichenko tested the influence of various parameters on the measurement sensitivity and resolution of the technique , and suggested a set of guides for an optimal set - up of a bos system . to the best of our knowledge , the bos measurements in stratified liquid flows are limited to the density gradients only .for instance , sutherland tracked internal waves using the fields of density gradients .the reconstruction of the density field , based on the poisson equation to the density field , is not found in the literature . in this studywe note that the distortions arise from the several sources in the multimedia ( air - glass - liquid - glass - air ) imaging , specifically in its digital version . 
the air - glass - liquid interface with sharp refractive index changes its behavior as a lens that amplifies the optical aberrations of the light source , camera and lenses .therefore , it is necessary to calibrate the bos optical system through the here proposed multi - step method .the procedure starts with the bos pattern image through air in the test - section ( air - glass - air - glass - air ) , followed by a homogeneous liquid calibration ( air - glass - liquid - glass - air ) and digital image remapping .the new multi - step calibration procedure is accompanied by a novel digital image remapping method which corrects the displacements field .the remapping is based on the displacement field , which is obtained by correlation of the reference image and the image of a homogeneous liquid .this step is followed by the reference image captured through a stagnant stratified fluid .the important result of this study is that the calibration and digital image correction allow us to reconstruct the correct density field based on solution of the poisson equation .we validate the method correctly reconstructing the density of two tests : a ) air - water interface ; and b ) the multi - layer stably stratified saline solution .the paper is organized as follows . in section[ sec : principles ] we briefly review the relevant principles of the background oriented schlieren method .section [ sec : algorithms ] explains the image processing and reconstruction algorithms applied to the experimental data .section [ sec : setup ] shows the experimental setup and the experimental results of the two tests .finally , we summarize the study in section [ sec : conclusions ] .in this section the basic principles of the bos technique are summarized for the sake of brevity and augmented by an extension that allows reconstructing fluid density in stratified liquids .the common setup is a background random dots pattern illuminated by a light source and a digital camera facing the light passing the fluid ( gas or liquid ) , as shown schematically in fig .[ fig_system ] .-axis is the imaging axis and are the orthogonal coordinates .important cross - sections are marked 1 - 6 .[ fig_system ] ] the bos technique is based on the distortion of the background image due to density changes . the image is distorted with respect to the reference image of the random pattern , without the fluid .the distortion is a cumulative effect of the refractive index variation along the light ray passing through the fluid .the displacement fields and can be calculated based on the correlation of the reference and the distorted images .for example , fig .[ fig_displ ] demonstrates the distorted image in our experiment and the corresponding displacement field . ]light ray crossing the interface of different indices of refraction changes the direction ( refracts ) proportional to the ratio of the indices , based on snell s law .consequently , a curvature of the light ray can be approximated as in settles : integrating eq.([int_1 ] ) for a thickness of a fluid layer , , the deflection angles and are obtained : when the bos is implemented for gas flows or for the density gradient field , these models are sufficient for further analysis . however , in the specific case of our interest , light rays refract also at the air - glass , glass - liquid , liquid - glass and glass - air interfaces . 
according to fig .[ fig_system ] , it passes through sections 1 - 5 ( number of layers in our case is ) .therefore , an extended analysis is required in order to reconstruct the density field . for every layer of air , glass or liquid , of thicknesssay , the displacement is estimated according to eq.([int_1 ] ) : = h_i^{2}\frac{1}{n_{i}}\frac{\partial n}{\partial x } \ ] ] and obtained equivalently .the total displacement is the sum of the individual deflections : the following step of the analysis is the reconstruction of the density field based on the poisson equation : \ ] ] where the multiplier is the inverse of the contributions of different layers : ^{-1 } \ ] ] the bos method reverses the use of the poisson equation eq . [ eq_poisson ] .the displacements are first obtained for each coordinate using the cross - correlation piv algorithm .then the gradient of the displacement field is numerically estimated ( using high order accuracy numerical methods ) and the poisson equation is solved for an unknown . in our case the stratified fluid layer index of refraction is the required result .boundary conditions are necessary for the numerical solution of a partial differential equation .schematically , the boundary conditions are summarized in fig .[ fig_bc ] . and .left / right conditions are neumann type and [ fig_bc ] ] eventually , the gladstone - dale relation ( ) is used to transform the field of to the density field .literature reviews revealed the difficulty to implement the synthetic schlieren method to the stratified fluids .we have identified that the key problem relates to the multi - media imaging path .the method proposed here is called an image remapping method .the method utilizes a multi - step calibration and image processing routine known as remapping .remapping is the shift of each pixel in the image by a distance prescribed by the displacement field .two reference images are captured when the tank is full with air and water ( or another liquid of a uniform index of refraction , close to the final solution ) .liquid in the tank causes an apparent displacement of the dots ( referred to the initial image ) in the image recorded by the camera . in short ,the method explained here separates the result from this apparent distortion using calibration .the order of the steps are shown in a block diagram in fig.[fig_flowchart ] : * we capture three images of the background pattern , through air , water and a saline stratified solution , .( stands for image ) . *the first calibration is the displacement field obtained correlating the air and water images , , where is a convolution operator and subscript is the conjugate ( reversed ) image . *the background pattern image obtained through the saline stratified solution is first remapped using the displacement field whose origins are in the optical system and aberrations due to the multi - media ( air - glass - water - glass - air ) imaging * the corrected image is correlated with the original reference image taken in air , and the result is used to construct and solve the poisson equation . * the result of the poisson equation solution is the desired density field ( applying the conversion . ) , scaledwidth=50.0% ] the result of single steps are shown in fig .[ fig_table_cor ] .the left plot shows the displacement field as arrows . 
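A minimal numerical sketch of the reconstruction chain just outlined may help fix ideas: the divergence of the measured displacement field supplies the right-hand side of the Poisson equation, the refractive-index field is obtained by a simple relaxation solve with Dirichlet values at the top and bottom and zero-flux conditions at the sides, and the Gladstone-Dale relation converts the index to density. The geometric prefactor collecting the layer thicknesses and indices is folded into a single calibration constant calib, and the water reference values and Gladstone-Dale constant are approximate, illustrative numbers rather than values taken from the experiment.

# sketch of displacement field -> Poisson solve -> density (assumptions noted above)
import numpy as np

def reconstruct_density(dx, dy, pitch, calib, n_top, n_bottom,
                        K=0.334, n0=1.333, rho0=0.998, n_iter=20000):
    """dx, dy: displacement components (pixels) on a grid of spacing pitch;
    calib absorbs the layer-geometry prefactor; K in ml/g, densities in g/ml."""
    # source term: divergence of the displacement field times the calibration constant
    src = calib * (np.gradient(dx, pitch, axis=1) + np.gradient(dy, pitch, axis=0))
    ny, nx = src.shape
    n = np.linspace(n_top, n_bottom, ny)[:, None] * np.ones((1, nx))  # initial guess
    for _ in range(n_iter):                 # Jacobi relaxation for  laplacian(n) = src
        n_new = n.copy()
        n_new[1:-1, 1:-1] = 0.25 * (n[2:, 1:-1] + n[:-2, 1:-1] + n[1:-1, 2:] +
                                    n[1:-1, :-2] - pitch**2 * src[1:-1, 1:-1])
        n_new[0, :], n_new[-1, :] = n_top, n_bottom            # Dirichlet top/bottom
        n_new[:, 0], n_new[:, -1] = n_new[:, 1], n_new[:, -2]  # zero-flux left/right
        n = n_new
    rho = rho0 + (n - n0) / K   # Gladstone-Dale: a change in n maps linearly to a change in rho
    return n, rho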
]the plot emphasizes a typical distortion created by the multi - media imaging of the background pattern .the displacement field is quantified by cross - correlating the air and water images , and .the two images were analyzed using a standard fft - based correlation method with an interrogation area of pixels .the magnitude of the displacement field is shown as contours in the right plot of fig .[ fig_table_cor ] .the maximum displacement is tens of pixels and it is larger at the edges of the image .note that we present the whole field measurement method and the image corresponds to the field of approximately cm .seventy pixels distortion is not visible in the four megapixel image .however , we understand that since the poisson equation utilizes the integration from the edges of boundary conditions , the error propagates to the place where the result is important . in order to correct the image distortion ,the image remapping code was developed .first , a calibration field is obtained by correlating the reference image and the image of the background pattern through the full tank of water , . in the following stage , this calibration fieldis used to correct the distortion generated by the saline water image .the calibration vector field is re - sampled on a dense rectangular mesh grid using linear interpolation .the resulting new field is then applied to each pixel of the saline solution image using the standard image remapping method : ( denotes the transform map ) .it remaps each pixel of the distorted image , inverting the displacement field of the calibration image .the effect of remapping is not significant and therefore not easily visualized .we demonstrated the contour maps of displacement fields with and without correction for the : a ) air - water case ; and b ) multi - layer stratified solutions in fig .[ fig_comp_panel ] .the original , not remapped results , are shown by solid contours and the corrected ones by the dashed lines .the contour maps are very similar .nevertheless , as we explained above , the error accumulates during the solution of the poisson equation .as the next section shows , the results are very significant for the reconstructed density field . , scaledwidth=80.0% ]the experiments are performed in a glass tank with a cross - section and a height of 30 cm .the random dot pattern is created using a matlab script ( makebospattern.m courtesy of frederic moisy , http://www.fast.u-psud.fr/pivmat ) .there are 200,000 black dots distributed randomly over an a4 size transparent sheet .the transparent sheet with a background pattern was attached at the back - side of the tank .it is illuminated with a white led light , equipped with a plastic light diffusing sheet .the light distribution is approximately uniform .the non - uniformity of the light due to the lack of parabolic mirrors in the digital bos application is corrected by the proposed method .the imaging system uses a four megapixel ccd camera ( optronis cl4000cxp ) with a 10-bit sensor of pixels that yields a magnification of 56.2 / pixel .all the bos image pairs were processed similarly to the piv images ( in the present case using fft - based cross - correlation of pixels ) .we present here two important tests , namely test i and ii . 
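The remapping step itself can be sketched in a few lines. The PIV-style cross-correlation that produces the sparse calibration displacement field is assumed to have been done already (for instance with an FFT-based PIV code); the sketch below only upsamples that field from the interrogation-window grid to the pixel grid and applies the inverse mapping with scipy.ndimage.map_coordinates. The window size, the grid layout and the sign convention of the displacement are assumptions that depend on how the correlation step was configured.

# sketch of the image remapping used to undo the calibration distortion
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage import map_coordinates

def remap_image(img, u_cal, v_cal, win):
    """img: distorted image (ny, nx); u_cal, v_cal: calibration displacements in
    pixels, sampled at the centres of win x win interrogation windows."""
    ny, nx = img.shape
    yc = win / 2 + win * np.arange(u_cal.shape[0])   # window-centre coordinates
    xc = win / 2 + win * np.arange(u_cal.shape[1])
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    pts = np.stack([yy.ravel(), xx.ravel()], axis=-1)
    u = RegularGridInterpolator((yc, xc), u_cal, bounds_error=False,
                                fill_value=0.0)(pts).reshape(ny, nx)
    v = RegularGridInterpolator((yc, xc), v_cal, bounds_error=False,
                                fill_value=0.0)(pts).reshape(ny, nx)
    # inverse map: each output pixel is read from its displaced position
    # (flip the signs of u, v if the displacement was defined the other way round)
    coords = np.stack([yy + v, xx + u])
    return map_coordinates(img, coords, order=1, mode="nearest")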
in testi , we use the reference image in air only , and implement the method to obtain the position of the air - water interface and reconstruct the density of the two fluids .the air - water interface was shifted between different runs , removing the water from the tank through a valve . in test ii, we implement the method using a reference image in air , a reference image in water , and attempt to reconstruct the density field of the stratified solution of water / epsom salt ( ) . in order to emphasize the accuracy of the method, we establish four layers of distinct density difference , g / ml ( using a calibrated pycnometer ) .each layer , in addition , is naturally stratified .the results of the two tests are presented below .two example images obtained in test i ( air - water interface ) are shown in fig . [ fig_shift ] .the top panel shows the images of the background pattern .a dark line is the interface between air ( top part ) and water ( bottom part ) .we tested several positions of the air - water interface ( not shown here for the sake of brevity ) .[ fig_shift]c shows the result of the first step in the bos method , the correlation of these images with a reference image .the largest displacements appear at the interface .it is important to note that the results depend on a relative height of the air - water interface with respect to the imaging axis of the camera ( i.e. the angle between the ray and the imaging axis in fig .[ fig_system ] ) . ]the result of the algorithm is the density field from two layers , and for air and water , respectively .the density field is homogeneous in the horizontal direction and the result is shown as a spatial average along in fig .[ fig_shift]d .the position of the interface is shown by a jump from the density of air to . in testii we measure the density field of the four layers of stratified saline solution .the original image is shown in fig .[ fig_original_4layers]a - b together with the remapped image . the effect of the remapping algorithm is not clearly seen . however , after the bos calculations we observe a striking difference . utilizing the naive bos method to the air - saline solutioncreates an artifact . for the sake of completeness , we first plot the result of the correlation in fig .[ fig_original_4layers]c , compared with the result of the remapped ( corrected ) image pair in fig . [ fig_original_4layers]d ., title="fig : " ] , title="fig : " ] the final and the major result of the bos method presented in this work is shown in fig .[ fig_4layers_profile ] .the plot shows the magnitude of the displacement field that is used in the poisson equation .next we demonstrate the solution of the poisson equation , converted to the density units .the right panel demonstrates the spatially averaged profile of density .the solid line corresponds to the result shown in fig .[ fig_4layers_profile ] that uses the remapped , corrected pair of images .the dashed line represents the result of the water - saline solution pair analysis without correction .although at the boundaries the values are correct by definition , obviously the non - corrected image pair leads to a completely wrong density profile .we verify that the reconstruction method provides the correct positions of the layers of different density .the results in our case show that not only the density layers are accurately reconstructed , but also the actual values are within a 5% error . 
, ( center ) the poisson equation solution .color maps correspond to the displacement field in pixels and the density in [ g / ml ] , respectively ( right ) .the final result of the bos method , the density profile for the four layer saline solution .solid line - corrected solution using the new method , dashed the poisson solution of the water - saline solution image pair displacement field without correction .[ fig_4layers_profile ] ]literature reviews show an increase of applications of optical measurement methods of density fields based on the index of refraction of the fluid .this is partially due to the progress in imaging technology .it promotes the use of synthetic , digital optical methods , such as synthetic schlieren . in most cases , the methods applied to gas flows or non - stratified liquid flows .generally the result of the density gradient field is sufficient .obtaining the density field could be difficult due to the phenomenon disclosed in this work the stratification affects the accuracy of reconstruction of the density field .for instance , in the setup using water and saline solutions , the changes of the index of refraction are modified due to stratification .the consequence of the density differences in the stratified flow is the distortion of images .for some optical measurement techniques , such as particle image velocimetry ( piv ) , the variation of refractive index in the bulk of fluid is just a source of additional error . applying the background oriented schlieren ( bos ) method to the piv result of distorted images results in a completely wrong density field . in this paper , the method to reconstruct the density field in stratified liquid flows using bos is proposed . to the best of our knowledge , the reconstruction of the correct density field in multi - layer stratification is performed for the first time .the method works only for stagnant fluid in which the density field is two dimensional , without density gradients along the imaging axis . by analyzing differences in images of background pattern in air and water, we identified the cause of image distortion .there is an apparent displacement of the dots due to refractive index variations .these have a non negligible effect on the magnitude and orientation of the displacement vector field .the error is especially high in the corners of the images , partially due to imaging optics .the distortion then propagates into the final result through the solution of the poisson equation .the algorithm developed in this work corrects the images using deconvolution methods .the correction reduces the measurement errors and allows for quantitative density measurements in stratified fluids .in addition , this correction is useful for the non - perfectly parallel light sources and digital imaging optics .our method improves the applicability of bos .the corrected images enable producing quantitative density field data , comparable with direct , intrusive , density measurements .hopefully , this method will increase the application of bos to the stratified flows .there is an additional study required to verify the accuracy of the method of gaseous and liquid flows of strongly changing index of refraction .elsinga , g. , van oudheusden , b. , scarano , f. , watt , d. : assessment and application of quantitative schlieren methods : calibrated color schlieren and background oriented schlieren .fluids * 36 * , 309325 ( 2004 ) sutherland , b. , daziel , s. , hughes , g. , linden , p. 
: visualization and measurement of internal waves by synthetic schlieren .part 1 . vertically oscillating cylinder .dynamics of atmospheres and oceans * 390 * , 93126 ( 1999 ) vinnichenko , n. , uvarov , a. , plaksina , y. : accuracy of background oriented schlieren for different background patterns and means of refraction index reconstruction . in : 15th international symposium flow visualization ,minsk ( 2012 )
non - intrusive quantitative fluid density measurement methods are essential in stratified flow experiments . digital imaging leads to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally . in this study , an important extension to one of these methods , called background oriented schlieren ( bos ) , is proposed . the extension enables an accurate reconstruction of the density field in stratified liquid experiments . typically , the experiments are performed with the light source and background pattern on one side of a transparent vessel and the camera on the opposite side . the multi - media imaging through air - glass - water - glass - air introduces an additional aberration that corrupts the reconstruction . a two - step calibration and an image remapping transform are the key components that correct the images acquired through the stratified media and provide non - intrusive full - field density measurements of transparent liquids .
in this communication a workable algorithm is derived and presented that allows each processor to store all information required to quickly look up any two - electron integral , involving four basis functions , needed for either density - functional or multiconfigurational wavefunction methods .the method is demonstrated by applications of a uniform electron gas , confined to a cubic box , for electrons with wavevectors that are enclosed in a fermi sphere .strategies for rapid calculation or efficient storage of two - electron integrals , for density - functional calculations , or multiconfigurational active space methods continue to evolve as different mathematical techniques and different types of computing platforms arise and as different types of basis functions are implemented for use in electronic structure calculations . a recent comprehensive review of these efforts by reine _ includes discussions of least - square variational fitting methods and rys polynomials .other methods such as direct methods , analytic algebraic decompositions , tensor hypercontraction and multipole methods are also widely used .many of these methods support the hypothesis that the space of two - electron integrals is smaller than naively expected .this paper seeks to formally prove , for separable functions used in electronic structure calculations , that the set of information on which the n coulomb integrals truly depends is much smaller than expected from a permutational analysis .further a practical approach is developed and applied to the uniform electron gas .the algorithm is based upon a three - dimensional fourier transform , a one - dimensional laplace transform , an additional one - dimensional integral transform , and the use of gaussian quadrature .the storage requirements needed to calculate matrix elements associated with the coulomb operator is reduced to o( ) for either planewaves or gaussians .another motivation for this work is that the development of massively parallel methods requires one to break a problem up into many independent subtasks that can then be performed simultaneously by a large number of computer processors . to achieve high efficiency on massively parallel architectures ,it is necessary to ensure that the amount of information exchanged between processors is small and that the rate of information exchange is intrinsically faster than the computing time used by any processor . for future low - power computing platforms it is desireable ,if not expected , for each processor to have a very limited amount of computer memory .thus , in reference to many - electron quantum mechanics or density functional theory , it is appropriate to reconsider whether there are other means for reconstructing matrix elements that might be more efficient on modern massively parallel architectures . 
for such systems it would be ideal to allow each processor to quickly reconstruct any possible coulomb integral needed for a quantum - mechanical simulation without information transfer to or from other processors .there is one important aspect of this derivation that appears to be universally correct for many , possibly all , choices of separable basis functions and that is definitely correct for planewave and gaussian basis functions .therefore some general considerations are discussed before moving the focus of this paper to applications within planewave basis sets .given a set of infinitely differentiable and continuous one - dimensional functions , labeled as f , it is possible to create three - dimensional basis functions according to : with .common examples of such basis functions include planewaves inside a box or unit cell or products of one - dimension gaussian functions which generally also have separable polynomial prefactors . in the former case onegenerally uses all possible products subject to the constraint that and then seeks convergence by performing the calculation as a function of the cutoff wavenumber ( ) . assuming one chooses a total of n three - dimensional basis functions , it is then clear that there are approximately one - dimensional basis functions for each cartesian coordinate . for simplicity , but not actually required for this observation, the assumption is that the same one - dimensional basis sets are used for each cartesian component .so , even though there are pairs of three dimensional basis functions , there are only n one dimensional products of basis functions . for planewaves ,the complexity is further reduced to since the product of a planewave is a plane wave . for gaussiansthis number becomes , with a characteristic number of neighbors , since the product of two well separated gaussians is identically zero .the matrix elements that are needed to solve the coulomb problem in density functional theory or to determine matrix elements required for either hartree - fock or multi - configurational calculations are given by however , by using a continuous fourier transform of , followed by a laplace transform of , the above equation can be written in quasi - separable form according to : eq .4 follows from eq . 3 by a continuous fourier transform of .5 follows from eq . 4 by a continuous laplace transform of .6 follows from eq .5 since all functions are separable . in the above equation , the nine - dimensional integralis reduced to a triple product .each one of these products are composed of three dimensional integrals that is defined according to : for either one - dimensional planewaves or gaussians , the above three - dimensional integral can be determined , as a function of , without significant difficulty .it is possible that for other separable functions these integrals would be difficult to calculate .however , since in the worst case there are only of these integrals , one can imagine calculating them only once and storing them forever .this means that one only needs to find an efficient numerical method for performing the laplace integral in eq .6 . from this standpoint , an observation that is absolutely key to capitalizing on this quasi - separable formis that by integrating the above expression ( eq . 
8) over , the -dependent part of the , now , two dimensional integral , can in principal , be reduced to products of quantities with the following form : with .therefore , for a large enough value of , it follows that eq .( 7 ) may be rewritten , to any desired precision , according to : in the above equation the are hard - to - determine constants that depend upon the functional form of separable basis sets , the taylor expansion coefficients , , in eq .( 9 ) , a lot of really complicated algebra , triple products of two - dimensional integrals associated with eq .( 8) , and the collection of common coefficients of arising from the occurrence of triple summations associated with each cartesian coordinate . it would be algebraically difficult and computationally inefficient but not impossible to calculate these numbers .* however , for the purpose here it is only necessary to know that the value of could , in principle , be found and to accept that knowledge about the asymptotic power law associated with the laplace integrand provides very important information about how to numerically evaluate the integral which extends to infinity . * to make further progress , the second term in the eq .10 is temporarily rewritten by making the substitution , and .this leads to : now , since both definite integrals are to be evaluated over a finite interval , these integrals can be evaluated using gaussian - quadrature or other one - dimensional numerical integration meshes according to : in the above expressions the two sets of gaussian - quadrature weights and points , and depend only on the choice of and methods and codes for choosing these points are widely available and well known . a back transformation of the right - hand sum , obtained by setting , and defining , the integral collapses to the original recognizable form : \prod_x a_x(\alpha_i , i_x , j_x , k_x , l_x ) \\ \nonumber + & & 4\pi \sigma_{i=1}^q \omega_i \sigma_{n=0}^{\infty } \gamma_n({\bf i , j , k , l } ) \frac{1}{\alpha_i^{n+\frac{3}{2}}}.\end{aligned}\ ] ] with a suitable redefinition of notation for the volume elements and the recognition that the second term includes a summation which is exactly equal to , the laplace integral is reduced to quadratures over products of three one - dimensional integrals ( eq .here , it is emphasized , that eq .( 7 ) could have been immediately written in terms of numerical integrals .however the analysis followed allows one to determine how the asymptotic form of the integrand scales so that the particular case of gaussian quadrature methods , that are amenable to numerical evaluation of polynomials over finite intervals , may be used for performing the integrations .as written , it has been demonstrated that one needs to store at most n one dimensional integrals to reconstruct any of the n integrals .based on past usage of quadrature methods , it is reasonable to expect that one can perform multiscale numerical one - dimensional integration , such as the laplace transformation here , with approximately 30 - 100 sampling points . 
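The practical content of the argument is the quadrature strategy: because the integrand falls off like a power of alpha at large alpha, the infinite tail can be mapped onto a finite interval with the substitution u = alpha^(-1/2) and handled with ordinary Gauss-Legendre points. The toy integrand below has the same alpha^(-3/2) tail and a known exact integral of 2, so the convergence of the split scheme can be checked directly; it is only a stand-in for the triple product of one-dimensional factors in eq. (14), and the single split at alpha = 1 ignores the further geometric subdivision used in table I.

# illustration of the split Gauss-Legendre quadrature with the u = alpha**-0.5 tail substitution
import numpy as np

def split_quadrature(f, q):
    x, w = np.polynomial.legendre.leggauss(q)   # nodes and weights on [-1, 1]
    t, wt = 0.5 * (x + 1.0), 0.5 * w            # mapped to [0, 1]
    head = np.sum(wt * f(t))                    # integral over [0, 1]
    # tail: alpha = u**-2, d(alpha) = -2 u**-3 du, so [1, inf) becomes an integral over (0, 1]
    tail = np.sum(wt * 2.0 * t**-3 * f(t**-2.0))
    return head + tail

f = lambda a: (1.0 + a)**-1.5                   # same alpha**-1.5 tail as the physical integrand
for q in (4, 8, 16, 32):
    print(q, split_quadrature(f, q), "exact:", 2.0)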
while the results discussed here are a factor of 2 - 4 away from this goal , it is likely that the number of sampling points can be significantly decreased by determining the value of which allows for the most efficient numerical integration , by breaking the integral ( laplace transformation ) into more than two intervals , and/or by using techniques similar to the variational one - dimensional exponential quadrature methods of ref .for example , a quadrature mesh constructed to integrate polynomials of , rather than x , would be twice as efficient as the standard gaussian quadratures meshes . except for the clear need to exploit the transformation for the final interval that extends to , finding the best quadrature sumsare expected to depend on the form of the separable functions being employed . here , for simplicity and reproducibility by others , only standard gaussian - quadrature methods , with , are used .for planewaves , the product of the one - dimensional functions reduce to a product of two one - dimensional planes waves which is itself a planewave .if one starts with one dimensional planewaves ( e.g. , the products will only provide plane waves .therefore the number of one - dimensional integrals that are required is reduced to . as a simple application, the m - dependence of the exact exchange energy of an unpolarized gas of 2 m electrons in a box with finite volume ( v = lxlxl ) is determined in this section .as m gets very large , the exchange energy will converge to the kohn - sham value of .it is also easy to verify based on scaling arguments that for any number of planewaves placed inside such a box , the exact exchange energy will scale a /l with depending on the occupations as a function of wavevector and the number of electrons m placed in the box . here to validate the numerics , the standard choice of occupation numbers are taken to be unity for all planewaves enclosed in a fermi sphere of various radii .the radii , or fermi wavevector , are chosen so that there are shell closings in reciprocal space . for a finite system ,it is possible to fully occupy a fermi sphere for a well defined cutoff wavevector if one chooses m= 7 , 33 , 123 , 257 , 515 , 925,1419 , 2109 , 3071 , 4169 , 5575 , 7153 , or 9171 electrons of each spin . in fig . 1 ,the ratio of the exact exchange energy to the kohn - sham energy is presented as a function of . in the large m limit ,it is evident that this ratio converges linearly to 1 .this indicates that all the integrals are being performed accurately . in fig . 2 ,the time required per electron , as a function of the total number of electrons , is shown . for cases where each ks - orbital is identically equal to a planewave the time required for the calculation of the exchange ( or coulomb ) interaction scales as the square of the number of electrons . for 9171 electrons ,the hartree - fock exchange energy can be calculated in four seconds on a macbook air . in tablei , the convergence of the hartree - fock energy for m=9171 parallel spin electrons is shown as a function of gaussian - quadrature mesh . for purposes of reproducibility, the first mesh is determined by q quadrature points on the interval between 0 and 1 .these points , designated by in eq .( 12 ) are then transformed as described above to reduce the calculation of each exchange integral to the form shown in eq .14 ( e.g. 
a total of 2q mesh points for the two intervals ) .the results show that with standard quadrature methods , and an overly simple tesselation into only two sub - intervals , it is difficult to efficiently converge the energy due to sharp structure near .however , as shown in the right - most columns , if one further breaks the first interval into sub - intervals defined by ,[1/5 ^ 7,1/5 ^ 6], ... ,[1/5,1]$ ] and then uses 5- , 10- , and 15-point quadrature meshes in each of these sub intervals , convergence of the energy for m=9171 electrons is achieved ..ratio of exchange energy to kohn - sham exchange energy for a cube containing 2m=18342 electrons as a function of the number of quadrature points used in eq . 14 .mesh 1 uses q quadrature points on an interval between 0 and 1 .mesh 2 , which breaks interval 1 into eight sub - intervals with geometrically varying length scales is numerically more efficient and allows for at least six - place precision .this suggests that the variational exponential quadrature methods , used for radial integrations in ref . may be more efficient [ cols=">,>,>,>,>,>",options="header " , ] to summarize , this paper provides a practical and systematically improvable algorithm that reduces the storage required for the coulomb integrals to for the special cases of basis sets that are commonly used in electronic structure calculations . for the case of planewave calculations , it is only recently that researchers have begun to entertain the possibility of performing multiconfigurational corrections using such basis sets .the results of this paper significantly lower the storage requirements needed for either dft , hartree - fock , or multi - configurational methods based upon planewaves .future improvements of this method , with initial applications of the self - interaction correction to the uniform electron gas calculations are in progress . as compared to structurally simpler plane - wave methods , conversion of this algorithm for use withing gaussian - based - orbital methodologies , will require a large investment of programming time but are fully expected to provide the same reduction of memory / disk requirements for reconstruction of the two - electron integrals .p. hohenberg and w. kohn , phys . rev . *136 * , b864 ( 1964 ) .w. kohn and l.j .sham , phys .* 140 * , a1133 ( 1965 ). f. aquilante , t. b. pedersen , v. veryazov and r. lindh , wires comput .* 3 * 143 ( 2013 ) .10.1002/wcms.1117 d. ma , g. li manni , l. gagliardi , j. of chem .phys . * 135 * 044128 , ( 2011 ) .doi : 10.1063/1.3611401 .reine simen , helgaker trygve , lindh roland .multielectron integrals .wires comput mol sci * 2 * 290 ( 2012 ) .a.m. kster , j. chem . phys . * 118 * , 9943 ( 2003 ) . b.i .dunlap , j.w.d .connolly , j.r .sabin , j. chem . phys . * 71 * , 4993 ( 1979 ) . m. dupuis , j. rys , and h.f .king , j. chem .* 65 * , 111 ( 1976 ) .o. vahtra , j. almlof , m.w .feyereisen , chem .* 213 * 514 ( 1993 ) .pederson , d.v .porezag , j. kortus and d.c .patton , phys .solidi b * 217 * , 197 ( 2000 ) .parrish , c.d .sherrill , e. g. hohenstein , s.i .l. kokkila , and t. j. martinez , j. chem . phys . * 140 * , 181102 ( 2014 ) .lambrecht , c. ochsenfeld , j. chem . phys . * 123 * , 184101 ( 2005 ) . j. p. perdew , j. a. chevary , s. h. vosko , k. a. jackson , m. r. pederson , d. j. singh , and c. fiolhais , phys . rev .b * 46 * , 6671 ( 1992 ) .j. p. perdew , k. burke , and m. ernzerhof , phys .lett . * 77 * , 3865 ( 1996 ) .n. mardirossian and m. head - gordon , j. chem . phys . 
*142 * , 074111 ( 2015 ) .y. zhao , n.e .schultz , and d.g .truhlar , j. chem .theory comput .* 2 * , 364 ( 2006 ) .m.r . pederson and k.a .jackson , phys .b * 41 * , 7453 ( 1990 ) .press , william h. ; teukolsky , saul a. ; vetterling , william t. ; flannery , brian p. ( 2007 ) . numerical recipes : the art of scientific computing ( 3rd ed . )( new york : cambridge university press . isbn 978 - 0 - 521 - 88068 - 8 ) .perdew and a. zunger , phys .b * 23 * , 5048 ( 1981 ) .pederson , a. ruzsinszky , and j.p .perdew , j. chem . phys . * 140 * , 121105 ( 2014 ) . m.r .pederson , j. chem .* 142 * , 064112 ( 2015 ) .j. sun and m.r .pederson ( to appear ) .
it is tacitly accepted that , for practical basis sets consisting of n functions , solution of the two - electron coulomb problem in quantum mechanics requires storage of o(n ) integrals in the small n limit . for localized functions , in the large n limit , or for planewaves , due to closure , the storage can be reduced to o(n ) integrals . here , it is shown that the storage can be further reduced to o( ) for separable basis functions . a practical algorithm , that uses standard one - dimensional gaussian - quadrature sums , is demonstrated . the resulting algorithm allows for the simultaneous storage , or fast reconstruction , of any two - electron coulomb integral required for a many - electron calculation , on each and every processor of massively parallel computers even if such processors have very limited memory and disk space . for example , for calculations involving a basis of 9171 planewaves , the memory required to effectively store all coulomb integrals decreases from 2.8gbytes to less than 2.4 mbytes .
over the last two decades , instrumented indentation technique ( iit ) has become a widespread procedure that is used to probe mechanical properties for samples of nearly any size or nature .however , the intrinsic heterogeneity of the mechanical fields underneath the indenter prevents from establishing straightforward relationships between the measured load _ vs. _ displacement curve and any expected mechanical properties as it would be the case for a tensile test .many models have been published in the literature in order to enable the measurement of properties such as an elastic modulus , hardness or various plastic properties . despite their diversity ,most of these models deeply rely on the accurate measurement of the projected contact area between the indenter and the sample s surface .the existing methods that are dedicated to estimating the true contact area can be classified into two subcategories : the direct methods which rely on the sole load _ vs. _ displacement curve and the _ post mortem _ methods that use additional data extracted from the residual imprint left on the sample s surface .for example , vickers , brinell and knoop hardness scales rely on _ post mortem _ measurements of the geometric size of the residual imprint .however , in the case of vickers hardness , the contact area is only estimated through the diagonals of the imprint , the possible effect of piling - up or sinking - in is then neglected .post mortem _ methods use indent cross sections to estimate the projected contact area . in the 1990s ,the development of nanoindentation led to a growing interest in direct methods because they do not require time consuming _ post mortem _ measurement of micrometer or even nanometer scale imprints , typically using atomic force microscopy ( afm ) or scanning electron microscopy ( sem ) .uncertainty level on direct measurements remains high , mainly because of the difficulty to predict the occurrence of piling - up and sinking - in .oliver and pharr have eventually considered this issue as one of the _ `` holy grails '' _ in iit .recent development in scanning probe microscopy ( spm ) using the indentation tip ( itspm ) brought new interest in _ post mortem _ measurements .indeed , itspm allows systematic imprint imaging without manipulating the sample or facing repositioning issues to find back the imprint to be imaged . yetitspm imaging technique suffers from drawbacks when compared to afm : it is slower , it uses a blunter tip associated with a much wider pyramidal geometry and a higher force applied to the surface while scanning . while the later may damage delicate material surfaces, the formers will introduce artifacts .nonetheless , these artifacts will not affect the present method .in addition , itspm only allows for contact mode imaging , non contact or intermittent contact modes are not possible . as a consequence ,only the techniques based on altitude images can be used with itspm and there is a need for new methods as very recently reviewed by marteau _this article introduces a new _ post mortem _ procedure that relies only on the altitude image and that is therefore valid for most types of spm images , including itspm . 
in this paper , a benchmark based on both numerical indentation tests as well as experimental indentation tests on properly chosen materials to span all possible behaviors is first introduced .then , the existing direct methods are reviewed and a complete description of the proposed method is given .these methods are then confronted using the above mentioned benchmark and the results are finally discussed .a typical instrumented indentation test features a loading step where the load is increased up to a maximum value , then held constant in order to detect creep and finally decreased during the unloading step until contact is lost between the indenter and the sample .a residual imprint is left on the initially flat surface of the sample . during the test ,the load as well as the penetration of the indenter into the surface of the sample is continuously recorded and can be plotted as shown in figure [ fig : figure_1 ] . for most materials , the unloading step can be cycled with only minor hysteresis , it is then assumed that only elastic strains develop in the sample . as a consequence , the initial slope of the unloading stepis called the elastic contact stiffness .useful data can potentially be extracted from both the load _ vs. _ displacement curve and the residual imprint .the contact area is defined as the projection of the contact zone between the indenter and the sample at maximum load on the plane of the initially flat surface of the sample .finite element modeling ( fem ) simulations are performed using a two - dimensional axisymmetrical model represented in figure [ fig : figure_2 ] .the sample is meshed with 3316 four - noded quadrilateral elements .the indenter is considered as a rigid cone exhibiting an half - angle to match the theoretical area function of the vickers and modified berkovich indenters .the displacement of the indenter is controlled and the force is recorded .the dimensions of the mesh are chosen to minimize the effect of the far - field boundary conditions .the typical ratio of the maximum contact radius and the sample size is about .the problem is solved using the commercial software abaqus ( version 6.11 , 3ds.com ) .the numerical model is compared to the elastic solution from ( see ) using a blunt conical indenter ( ) to respect the purely axial contact pressure hypothesis used in the elastic solution .the relative error is computed from the load _vs. 
_ penetration curve and is below .pre - processing , post - processing and data storage tasks are performed using a dedicated framework based on the open source programming language python 2.7 and the database engine sqlite 3.7 .the indented material is assumed to be isotropic , linearly elastic .the poisson s ratio has a fixed value of and the young s modulus is referred to as .the contact between the indenter and the sample s surface is taken as frictionless .two sets of constitutive equations ( ce1 and ce2 ) are investigated in order to cover a very wide range of contact geometries and materials : ce1 : : this first constitutive equation used in this benchmark is commonly used in industrial studies and in research papers on metallic alloys .it uses -type associated plasticity and an isotropic hollomon power law strain hardening driven by the tensile behavior ( stress , strain ) given by eq .[ eq : hollomon ] : plastic parameters are the tensile yield stress and the strain hardening exponent .ce2 : : the second constitutive equation is the drucker - prager law which was originally dedicated to soil mechanics but was also found to be relevant on bulk metallic glasses ( bmgs ) ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) and some polymers .the yield surface is given by eq .[ eq : drucker - prager ] where is the von mises equivalent stress in tension and the hydrostatic pressure .perfect plasticity is used in conjunction with an associated plastic flow .the plastic behavior is controlled by the compressive yield stress and the friction angle that tunes the pressure sensitivity .+ dimensional analysis is used to determine the influence of elastic and plastic parameters on the contact area : in this equation , is the maximum value of penetration of the indenter into the sample s surface . in both cases ,the dimensionless functions show that , since the poisson s ratio has a fixed value ( ) , only the yield strains ( , in the case of ce1 and , in the case of ce2 ) and the dimensionless plastic parameters ( in the case of ce1 and in the case of ce2 ) have an influence on the contact area . as a consequence ,the value of the young s modulus has a fixed arbitrary value pa and only the values of the yield stresses and , the hardening exponent and the friction angle are modified .the simulated range of these parameters are given in tables [ tab : hollomon_params ] and [ tab : dp_params ] .after each simulation , a load _ vs. _ displacement into surface curve and an altitude spm like image using the gwyddion ( http://gwyddion.net/ ) gsf format are extracted .the use of such a procedure allows one to consider both numerical and experimental tests in the benchmark and to derive mechanical properties in the same way .since the simulations are two - dimensional axisymmetric , the contact area is computed as where stands for the contact radius of the contact zone ( see fig .[ fig : figure_2 ] ) . 
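For reference, a small sketch of the two constitutive models is given below, e.g. for generating tensile curves or sanity-checking parameter sets before running the FEM benchmark. The continuity construction at the yield strain for the Hollomon law and the ABAQUS-style cohesion of the linear Drucker-Prager model are standard conventions assumed here, not quantities copied from the paper's input files, and the numerical values in the usage lines are arbitrary.

# sketch of the CE1 (Hollomon) and CE2 (linear Drucker-Prager) models (assumptions noted above)
import numpy as np

def hollomon_stress(strain, E, sigma_y, n):
    """uniaxial tensile curve: linear elasticity up to the yield strain sigma_y / E,
    power-law hardening with exponent n beyond it (CE1)."""
    eps_y = sigma_y / E
    strain = np.asarray(strain, dtype=float)
    return np.where(strain <= eps_y, E * strain, sigma_y * (strain / eps_y)**n)

def drucker_prager_yield(q, p, sigma_c, beta_deg):
    """yield function of the linear Drucker-Prager model (CE2): f = q - p*tan(beta) - d,
    with the cohesion d chosen so that uniaxial compression yields at sigma_c
    (p counted positive in compression); f >= 0 signals plastic flow."""
    tanb = np.tan(np.radians(beta_deg))
    d = (1.0 - tanb / 3.0) * sigma_c
    return q - p * tanb - d

if __name__ == "__main__":
    print(hollomon_stress(np.linspace(0.0, 0.2, 5), E=100e9, sigma_y=1e9, n=0.2))
    print(drucker_prager_yield(q=1e9, p=1e9 / 3.0, sigma_c=1e9, beta_deg=20.0))  # ~0 at yield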
the tested materials ( see table [ tab : samples ] )are chosen in order to cover a very wide range of contact geometries , from sinking - in ( fq ) , intermediate behavior ( wg ) , and to piling - up ( bmg ) .glasses are chosen over metallic alloys because they exhibit negligible creep for temperatures well below the glass transition temperature , no visible size effect and are very homogeneous and isotropic in the test conditions .the fq and wg samples are tested as received ( please note the wg sample was test on it s `` air '' side ) whereas the bmg sample is polished .nanoindentation testing is performed using a commercial hysitron ti950 triboindenter . during each test, the load is increased up to mn with a constant loading rate n / s .the load is then held for 10 s and relieved with a constant unloading rate n / s .four tests are performed on each sample and each residual imprint is scanned with the built - in itspm device with an applied normal force of 2 as summarized in figure [ fig : figure_3 ] .tests are load controlled and the maximum load is set to mn . the true contact area is not known as in the case of the numerical simulations .it is then estimated through the sneddon s eq .[ eq : sneddon ] and is called . young s moduli , and the poisson s ratios of each sample are known prior to indentation testing from the literature or from ultrasonic echography measurements ( _ cf ._ table [ tab : samples ] ) . recalling that is the initial unloading contact stiffness ( cf .figure [ fig : figure_1 ] ) , we have : methods rely on the sole load vs. penetration curve to determine the contact height using equations given in table [ tab : direct_methods ] .let us recall that in the case of sinking - in ( as seen in fig .[ fig : figure_2 ] ) and for piling - up .three direct methods are investigated in this paper : dn : : the doerner and nix method was one of the first to be published ( along with similar work done by bulyshev_ et al . _ ) and it provided the basic relationships later improved by the two other methods .op : : the oliver and pharr method is an all - purpose method that is widely used in the literature , commercial software and standards .the main drawback of this method is that it can not take piling - up into account .lo : : the loubet method is an alternative to the op method , especially for materials exhibiting piling - up . regardless of the chosen method , the value of is used to compute the value of the contact area thanks to the indenter area function ( iaf ) .the iaf depends on the theoretical shape of the indenter as well and on its actual defects which are measured during a calibration procedure .different tip calibration methods are used in the literature : * measurement either of the indenter geometry or the imprint geometry made on soft materials for multiple loads using afm or other microscopy techniques . *the iaf introduced by oliver and pharr requires a calibration procedure on a reference material using only the curve : + + where the factors are fitting coefficients obtained from a calibration procedure on fused quartz . for a given indenter ,the value of the coefficients depend on the penetration depth range used for the calibration procedure . in the case of a perfect modified berkovich tip , and . *the method introduced by loubet ( see ) : + + it is assumed that the only origin of the defects is tip blunting . 
then , comes from the indenter s theoretical shape ( here ) and is the offset caused by the tip defect and is calibrated using a linear fit made on the upper portion of the curve .this procedure can be performed on any material exhibiting neither significant creep nor size effect , typically fused quartz .this method is intrinsically very efficient when the penetration is high compared to . in the experimental benchmark ,all tests are performed at nm using a diamond modified berkovich tip that exhibits a truncated length nm on all the twelve tests performed on the three samples used in the experimental benchmark and the error is the represented one standard deviation . ] .theses values were calibrated on the fq sample . as a consequence, the iaf introduced by loubet is used on every direct method . by contrast ,numerical simulations use a perfect tip so that the iaf is .spm imaging grants access to a mapping of the altitude of the residual imprint .it is assumed that the surface of the sample is initially plane and remains unaffected far from the residual imprint .this plane is extracted from the raw image using a disk shaped mask centered on the imprint and a scan by scan linear fit on the remaining zone .it is considered as the reference surface and is subtracted to the raw image to remove the tilt of the initial surface . under maximum load ,the contact contour can exhibit either sinking - in and piling - up . in the first case, its altitude is decreased _ vis - - vis _ the reference surface .this behavior is typical of high yield strength materials such as fused quartz . on the opposite ,piling - up occurs when the the increased and is typically triggered by unconfined plastic flow around the indenter as usually observed on low strain hardening metallic alloys .when the indenter is not axisymmetric ( as it is the case for pyramids ) , both sinking - in and piling - up may occur simultaneously .for example , pyramidal indenters can produce piling - up on their faces and sinking on their edges ( or no piling - up to the least ) . during the unloading step ,the whole contact contour is pushed upward with only minor radial displacement .a residual piling - up may form even if sinking - in initially occurred under maximum load .we now focus on a half cross - section of the imprint starting at the bottom of the imprint and formulate two assumptions : 1 .the highest point of any half cross - section is the summit of the residual piling - up .the summit of the residual piling - up indicates the position of the contact contour . as a consequence , the highest point of the cross - section gives the radial position of the contact zone boundary .however , from an experimental point of view , the roughness of the sample s surface may make the true position of the residual piling - up s summit unclear .this issue is particularly true for materials exhibiting high levels of sinking - in such as fused quartz but also for most materials along the edges of pyramidal indenters . in order to limit the effect of surface roughness on the radial localization of the contact zone boundary , each profileis slightly rotated by a small angle value along an axis perpendicular to the cross - section plane and running through the bottom of the imprint ( _ i .e. 
the four methods are confronted with the numerical benchmark and their ability to accurately compute the contact area is assessed . the results focus on the relative error between and the contact area predicted by each method . please note that a relative error on the contact area means roughly a relative error on hardness but only on the elastic modulus ( see eq . [ eq : sneddon ] ) . the results are plotted in figures [ fig : figure_5 ] and [ fig : figure_6 ] for constitutive equations ce1 and ce2 respectively , and a summary of the key statistics is given in table [ tab : num_bench ] . in figures [ fig : figure_5 ] and [ fig : figure_6 ] , * ( a ) * and * ( b ) * represent the relative error and its absolute value respectively . the latter was chosen to emphasize the magnitude of the error , while * ( c ) * indicates whether piling-up or sinking-in occurs . the success rate of the methods is also measured through their ability to match within a error . this value was chosen because , even if challenging , it is still realistic from an experimental point of view . these data can be discussed individually for each method : * the dn method systematically tends to underestimate regardless of the type of constitutive equation and of the occurrence of piling-up or sinking-in . the magnitude of the relative error is the highest among the four tested methods . this lack of accuracy can be put into perspective by recalling that the dn method states that the contact between the indenter and the sample behaves as if the indenter were a flat punch during the first stages of the unloading process . this approach was later proved to be too restrictive by oliver and pharr , who improved it by taking into account the actual shape of the indenter through the coefficient . as in the case of the modified berkovich tip , the value of the contact depth is systematically increased ( see table [ tab : direct_methods ] ) . * as stated above , the op method drastically improves on the overall performance of the dn method . however , its error level depends strongly on the type of contact behavior ( _ i.e. _ piling-up or sinking-in ) and on the mechanical properties of the tested material . typically , it performs well for the ce1 law when the strain hardening exponent verifies . it also performs well ( relative errors below ) on materials exhibiting very high yield strains ( higher than ) in the case of ce2 . the main drawback of the method is its intrinsic inability to cope with piling-up , since can never be higher than 1 . this is particularly visible for low values of the strain hardening exponent ( ce1 : ) and with ce2 when the compressive yield strain is lower than . the op method has a low success rate ( see table [ tab : num_bench ] ) , but this has to be weighed against the fact that it is very efficient for a large number of metallic alloys , which can be described by ce1 and exhibit moderate values of hardening exponents .
* the lo method allows values and is therefore recommended for piling-up materials ; it is overall very efficient with ce2-type materials . the drawback is that it tends to overestimate the contact area when sinking-in occurs ; this is particularly true in the case of ce1 with moderate to high hardening exponents ( ) . these observations are in agreement with the results of cheng and cheng regarding the influence of piling-up and sinking-in on the direct estimation of the contact depth . the success rate of this method is the highest among the direct methods and it is clearly the best available direct method for ce2-type materials and for low hardening ce1-type materials . * the proposed method exhibits a success rate ( with the relative error target ) and an average absolute relative error of . the error level remains stable regardless of both the type of constitutive equation and its parameters . this result highlights the fact that , when experimentally possible , the use of such a _ post mortem _ method will drastically improve the overall error level of the contact area measurement . the results of the experimental benchmark are represented in fig . [ fig : figure_7 ] . the tendencies observed in the numerical benchmark are confirmed . the dn method systematically underestimates the contact area . the op and lo methods perform well only on a given spectrum of contact behaviors : both methods give accurate results on the fq sample ; this is consistent with the fact that both of them were optimized using this material as a reference . while the lo method also exhibits a low error level on the wg sample , the op method leads to an unexpectedly high error level . it is presumed that , even though the wg sample has a very high yield strain , it has no strain hardening mechanism and is therefore outside the scope of the op method . the bmg sample , which exhibits a large residual piling-up , obviously leads the op method to drastically underestimate the contact area . the lo method performs better , although it also underestimates the contact area . this latter method systematically exhibits relative errors of , while the method proposed in this paper is even more reliable , with errors lower than . we observe that the direct methods' overall performance is better than in the case of the numerical benchmark . the contact friction , which is neglected in the numerical benchmark , may improve the accuracy of the direct methods without affecting the proposed method . both benchmarks highlight the precision gap between the new method and the existing direct methods . however , the proposed method differs by nature from the three direct methods it is compared to . this section emphasizes the pros and cons of this method : disadvantages : : : * the proposed method relies on spm imaging of the residual imprint while direct methods do not .
however , indentation devices tend to be equipped with itspm capability that can be used automatically in conjunction with the indentation testing itself with only a small increase in test duration .* advantages : : : * the proposed method can be run automatically , it requires no adjustable parameters and is user independent .* sample holder and machine stiffness affect the measurement of the penetration into surface as well as the measured contact stiffness and , as a consequence , they also affect all direct methods .the value of the machine stiffness can be measured once and for all while the sample holder s stiffness may change between two samples and requires systematic calibration .this concern is particularly true in the case of small samples such as fibers as well as very hard materials ( such as carbides ) .the contact area measurement provided by the proposed method does not rely on and is then insensitive to the effect of those spurious stiffness issues . yet , let us note that while the value of the contact area is unaffected by stiffness issues , the value of the contact stiffness is of course affected . as a consequence ,the value of the hardness probed with the proposed method is free from any stiffness concern ( as ) while the value of the reduced modulus still requires stiffness calibration ( as ) .* the method does not require any tip calibration procedure and is compatible with all tip shapes .* the method is unaffected by erroneous surface detection also because it does not rely on .we have proposed a new procedure to estimate the indentation contact area based on the residual imprint observation using altitude images produced by spm .this area is the key component of instrumented indentation testing for extracting mechanical properties such as hardness or elastic modulus . for the estimation of this contact area ,the method has been confronted with three widely used direct methods .we have showed , by means of an experimental and numerical benchmark covering a large range of contact geometries and materials , that the proposed method is far more accurate than its direct counterparts regardless of the type of material .we have also discussed the fact that such _post mortem _ procedures are indeed more time consuming than direct methods ; yet they are the future alternative to direct methods with the development of indentation tip scanning probe microscopy techniques .we have also emphasized the fact that this new method has numerous advantages : it can be automated , it is user independent , it is unaffected by stiffness issues and does not require any indenter calibration .the authors would like to thank the brittany region for its support through the cper prin2tan project and the european university of brittany ( ueb ) through the bresmat rtr project .vincent keryvin also acknowledges the support of the ueb ( ept compdynver ) .w. c. oliver and g. m. pharr , `` measurement of hardness and elastic modulus by instrumented indentation : advances in understanding and refinements to methodology , '' _ j. mater ._ , vol . 19 , no . 1 ,pp . 320 , 2004 .x. zhou , z. jiang , h. wang , and r. yu , `` investigation on methods for dealing with pile - up errors in evaluating the mechanical properties of thin metal films at sub - micron scale on hard substrates by nanoindentation technique , '' _ mater .a _ , vol . 488 , no . 1 - 2 , pp .318332 , 2008 .bucaille , s. stauss , e. felder , and j. 
michler , `` determination of plastic properties of metals by instrumented indentation using different sharp indenters , '' _ acta mater ._ , vol .51 , no . 6 , pp .16631678 , 2003 .m. dao , n. chollacoop , k. j. van vliet , t. a. venkatesh , and s. suresh , `` computational modeling of the forward and reverse problems in instrumented sharp indentation , '' _ acta mater ._ , vol .49 , no . 19 , pp . 38993918 , 2001 .s. i. bulychev , v. p. alekhin , m. k. shorshorov , a. p. ternovskii , and g. d. shnyrev , `` determining young s modulus from the indentor penetration diagram , '' _ ind . lab ._ , vol .41 , pp .1409 1412 , 1975 . w. c. oliver and g. m. pharr , `` an improved technique for determining hardness and elastic modulus using load and displacement sensing indentation , '' _ j. mater ._ , vol . 7 , no . 6 , pp .15641583 , 1992 .t. chatel , h. pelletier , v. le hourou , c. gauthier , d. favier , and r. schirrer , `` original in situ observations of creep during indentation and recovery of the residual imprint on amorphous polymer , '' _ j. mater ._ , vol .27 , no . 01 , pp .1219 , 2012 .y. yokoyama , t. yamasaki , p. k. liaw , and a. inoue , `` study of the structural relaxation - induced embrittlement of hypoeutectic zr cu al ternary bulk glassy alloys , '' _ acta mater ._ , vol .56 , no . 20 , pp .60976108 , 2008 ..simulated range of the dimensionless ratios for the numerical simulations using the constitutive equation ce1 .the `` number '' column stands for the number of values chosen as simulation inputs in the given range . [cols="^,^,^,^ " , ]
the determination of the contact area is a key step to derive mechanical properties such as hardness or an elastic modulus by instrumented indentation testing . two families of procedures are dedicated to extracting this area : on the one hand , _ post mortem _ measurements that require residual imprint imaging , and on the other hand , direct methods that only rely on the load _ vs. _ the penetration depth curve . with the development of built - in scanning probe microscopy imaging capabilities such as atomic force microscopy and indentation tip scanning probe microscopy , last generation indentation devices have made systematic residual imprint imaging much faster and more reliable . in this paper , a new _ post mortem _ method is introduced and further compared to three existing classical direct methods by means of a numerical and experimental benchmark covering a large range of materials . it is shown that the new method systematically leads to lower error levels regardless of the type of material . pros and cons of the new method _ vs. _ direct methods are also discussed , demonstrating its efficiency in easily extracting mechanical properties with an enhanced confidence . nanoindentation , atomic force microscopy , hardness , elastic behavior , finite element analysis
quantum information processing seeks to harness quantum mechanics to enhance information processing capabilities .just as classical communication and computation requires memory buffers , quantum information systems will require memories for quantum states .an optical quantum memory allows coherent , noiseless and efficient storage and recall of optical quantum states .they are an essential building block for quantum repeaters , which will extend the range of quantum communication .they could also find applications as a synchronization tool for optical quantum computers , and in a deterministic single - photon sources .much progress has been achieved towards this goal in recent years , with efficiencies up to 87% , storage times of over one second , as well as bandwidths above a gigahertz and over 1000 pulses stored at once , all being separately demonstrated using different storage techniques .+ if , however , we move towards manipulation of the stored information , a new range of possible uses for quantum memories appear .for instance , the ability to coherently manipulate the spectrum of pulses would prove a key tool for allowing quantum information transfer between systems with different bandwidths .this ability could also lead to increased bit rates over quantum communication channels .another way of improving bit rates is the idea of multiplexing in quantum memories - a powerful tool where multiple signals are bundled into one over a communication channel .multiplexing could be achieved with , for instance , different spatial , temporal or frequency modes in a quantum memory .being able to alter a pulse s shape , as well as its bandwidth , could also lead to increased bit rates due to a decrease in losses caused by pulse aberrations through various media ( i.e. optical fiber ) . +various coherent pulse manipulation techniques have already been demonstrated without the aid of a quantum memory .for instance , three - wave mixing , quantum pulse gates , and pulsed frequency up - conversion have all been shown to be able to coherently alter the temporal , and in some cases spectral , profile of optical pulses .work has also been carried out with pulse shaping and splitting inside a coherent memory using electromagnetically - induced transparency ( eit ) .+ in this paper we will investigate the coherent spectral manipulation abilities of the gradient echo memory ( gem ) scheme .gem has been shown to have high efficiencies and not add noise to the quantum state , while also being able to store up to 20 pulses simultaneously , making it a promising candidate as an optical quantum memory . + previous experimental work has shown that gem is capable of manipulating stored light in a number of ways .-gem , based on three - level atoms , has been used to resequence pulses , stretch or compress the bandwidth of stored pulses , add a frequency offset to the recalled light , and interfere two pulses within the memory .modelling has shown that gem is capable of much more .for example , it could be used as an optical router or all - optical multiplexer . 
in this paperwe investigate proposals that make particular use of the frequency encoding nature of gem to coherently manipulate the spectrum of stored pulses , filter modulated pulses and combine or interfere pulses of different frequencies .+ the remainder of this paper is structured as follows : sec .[ sec : ff_overview ] presents an overview of the gem protocol and relevant theory , before describing the experimental details in sec .[ sec : ff_setup ] .we then , in sec .[ sec : ff_experiments ] present experimental results characterizing basic frequency manipulation operations , as well as demonstrating frequency domain engineering with fine control of magnetic field gradients provided by a multi - element coil ., bandwidth , and centre frequency enters the storage medium at time where the optical information is stored in the atomic excitation . the memory has a linear frequency gradient placed along it in the -direction and a input frequency bandwidth .( b ) at time the sign of the frequency gradient is reversed , with the memory output bandwidth . in this schemethe echo is emitted at time with pulse shape and centre frequency . ]a linearly varying frequency gradient placed along an ensemble of atoms is the key component to the gradient echo memory scheme .the detuning of each atom from its original resonance , and therefore the frequency it will absorb , is proportional to its position along the memory .therefore gem is a frequency encoding memory , with pulses being stored as their spatial fourier transform along the memory . for a linear gradient , , the bandwidth of the memoryis determined by , where is the length of the memory .we assume here that the centre pulse frequency is stored in the middle of the memory .+ the basic two - level gem operation is shown in fig .[ fig : ff_singlegrad ] . the equations that govern the storage of a light pulse with a slowly varying envelope operator inside a two - level ensemble with atomic polarization operator in this situation are \hat{\sigma}_{12 } + ig \hat{\mathcal{e } } \nonumber \\\partial_z \hat{\mathcal{e } } & = & i\frac{gn}{c}\hat{\sigma}_{12 } , \label{eq : ff_gemeqn}\end{aligned}\ ] ] where is the decay rate from the excited state , is the coupling strength between the two levels , is the number of atoms , and is the speed of light .this equation assumes a weak probe field such that holds , and that all atoms are initially in this ground state .+ to recall the pulse , under normal gem operation , the linear gradient is exactly reversed a time after the pulse has entered the memory , i.e. .this leads to a time - reversal of the absorption process described in eq .[ eq : ff_gemeqn ] and an emission of a time - reversed copy of the input pulse in the forwards direction , i.e. , at time , with the centre frequency of the echo being the same as the input pulse . 
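to make the storage and recall mechanism concrete , the sketch below integrates the two-level gem equations numerically in dimensionless units : the coherence is advanced with an exponential euler step and the field is obtained by integrating along the cell from the input face . all parameter values are illustrative choices rather than the experimental ones , and the integration scheme is an assumption of this sketch , not the authors' numerical method ; reversing the gradient at the flip time produces a time-reversed echo afterwards .

```python
import numpy as np

# illustrative, dimensionless parameters (not the experimental values)
M, Nt = 200, 6000            # spatial points, time steps
L, T = 1.0, 20.0             # memory length, total simulated time
z = np.linspace(0.0, L, M)
dt = T / Nt
g, beta = 1.0, 50.0          # coupling g and gN/c
gamma = 0.0                  # decoherence neglected over the storage time
eta = 30.0                   # gradient: detuning = +/- eta * (z - L/2)
t_flip = 8.0                 # time at which the gradient is reversed
t0, tau = 3.0, 0.7           # centre and width of the input Gaussian pulse

sigma = np.zeros(M, dtype=complex)     # atomic coherence sigma_12(z, t)
E_out = np.zeros(Nt)

for n in range(Nt):
    t = n * dt
    delta = (eta if t < t_flip else -eta) * (z - L / 2.0)
    E_in = np.exp(-((t - t0) / tau) ** 2)                     # weak probe envelope
    # dE/dz = i (gN/c) sigma, integrated from the input face z = 0
    inc = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(z)
    E = E_in + 1j * beta * np.concatenate(([0.0], np.cumsum(inc)))
    # d(sigma)/dt = -(gamma + i delta) sigma + i g E   (exponential Euler step)
    sigma = sigma * np.exp(-(gamma + 1j * delta) * dt) + 1j * g * E * dt
    E_out[n] = abs(E[-1])

after_flip = E_out * (np.arange(Nt) * dt > t_flip + 1.0)
print("echo peaks near t =", round(dt * np.argmax(after_flip), 2),
      "for an input centred at t =", t0, "and a flip at t =", t_flip)
```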
+ it is not necessary , however , to recall with an exact reversal of the input gradient to recall a pulse , or to have a constant gradient along the entire length of the memory .indeed , having fine control of the input and output gradients , as discussed in the following sections , is what provides us with the ability to perform spectral manipulation operations using gem .- probe electric field envelope ; bs - non - polarizing beam - splitter ; and - - currents supplied to the individual solenoids .inset shows the level scheme and the equivalence between the system used and a two - level atom : - one - photon detuning ; - two - photon detuning ; - coupling field rabi frequency ; - decay rate from the excited state ; - decoherence rate between the ground states ; - coupling strength between ground and excited states ; and - effective coupling strength for the equivalent two - level system . ]an overview of the set - up used for the following spectral manipulation experiments is shown in fig . [fig : ff_setup ] .we use the -gem scheme and warm rubidium-87 vapor for our experiments .this three - level system , where a strong classical field is used to couple the two ground states , is equivalent to the two - level one described in the previous section as long as ( i ) the one - photon detuning , and ( ii ) , where is the on - resonance optical depth of the system , is the excited state decay rate , and is the fastest timescale of the system .this equivalence is shown in fig .[ fig : ff_setup ] inset .the coupling strength of the equivalent two - level system is given by , where is the rabi frequency of the coupling field .the advantage of the system is that the storage time of the memory is now controlled by the ground state decoherence rate which is much less than the excited state lifetime .indeed there are a wide range of atoms with stable ground state configurations that are suitable for -gem .+ the weak probe and strong coupling fields are derived from the same laser , which is blue detuned by approximately 3 ghz from the d1 transition .a small part of the laser is sent through a fibre - coupled electro - optic modulator driven at 6.8 ghz , the ground state splitting of , and the positive sideband selected by passing it through a filtering cavity .this field , now 3 ghz blue detuned from the transition , is used for both the probe and local oscillator ( lo ) .the probe and control fields , having the same circular polarisation , are combined on a ring cavity that is resonant with the probe .the probe and coupling fields then enter the memory - a 25 mm diameter , 20 cm long gas cell containing isotopically enhanced , and 0.5 torr krypton buffer gas , heated to approximately 80 using an electronic filament heater .+ eight separate solenoid coils , with four turns each , are placed along the length of the memory .this multi - element coil ( mec ) is used to create the complex gradients for the experiments discussed in the following sections by placing a different current in each coil , and using the superposition principle for magnetic fields , i.e. . 
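the choice of coil currents can be illustrated with a hedged sketch : assuming some per-ampere field profile for each of the eight solenoids ( modelled here , purely for illustration , as broad gaussians centred on the coils ; the real profiles would be measured or computed from the coil geometry ) , the currents that best reproduce a target detuning profile follow from a linear least-squares fit .

```python
import numpy as np

# choose the eight coil currents I_i so that sum_i I_i * b_i(z) approximates a
# target profile; the per-ampere profiles b_i(z) are modelled here as broad
# Gaussians centred on the coils (illustration only)
L_cell = 0.20                                    # 20 cm gas cell
z = np.linspace(0.0, L_cell, 400)
centres = (np.arange(8) + 0.5) * L_cell / 8.0
width = 0.6 * L_cell / 8.0
basis = np.exp(-((z[:, None] - centres[None, :]) / width) ** 2)   # shape (400, 8)

B_target = 1.0 * (z - L_cell / 2.0)              # target: a linear gradient (arb. units)

currents, *_ = np.linalg.lstsq(basis, B_target, rcond=None)
residual = basis @ currents - B_target
print("coil currents (arb. units):", np.round(currents, 3))
print("rms deviation from target:", np.sqrt(np.mean(residual ** 2)))
```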
the two - photon detuning of each atom can then be defined as a function of position along the memory , where mhz / g is the land factor , and is an arbitrary two - photon offset ( for instance , in this case is defined for a set dc magnetic field and coupling field frequency ) .the memory cell and coils are surrounded with a layer of -metal to shield against external magnetic fields .the heater is turned off during the storage process to ensure there are no stray magnetic fields interacting with the atoms . +upon leaving the memory , the probe and coupling fields pass through a filter cell containing a natural mixture of rb ( i.e. and ) and heated to approximately 150 . due to the detunings chosen above, the coupling field is resonant with a transition , leading to approximately 40 db suppression through the cell .the probe field , which passes through the filter cell with 70% efficiency , is then combined with the local oscillator signal on a non - polarizing beam - splitter and heterodyne detection is performed .fine control of the frequencies of all fields , as well as gating of the probe and coupling fields , is achieved using acousto - optic modulators .this experiment is controlled with an augmented version of the labview code presented in ref . .in this section , we present the various spectral manipulation experiments undertaken with the set - up presented in the previous section . as a function of position along the memory ( normalized to length ) due to ( i ) input gradient and ( ii ) output gradients withminimum and maximum gradient offset ( as noted on figure ) .blue ( dashed ) line corresponds to the desired field and points correspond to the measured magnetic field ( error bars due to sensitivity of gauss - meter ) .( b ) heterodyne data showing ( i ) input pulse ; ( ii ) echo for recall with ; and ( iii ) echo for recall with khz offset .orange points correspond to raw data , black lines correspond to modulated gaussian fit to data , and values correspond to the centre frequencies of pulses extract from the fits .( c ) the change in centre frequency of the output pulses relative to the input pulse as a function of .points represent the measured centre frequency ( error bars from standard deviation of 100 traces ) , and the dashed line corresponds to the theoretical behaviour . ] by adding an offset to the recall field , i.e. , the centre frequency of the echo relative to the input can be altered and will be given by the sign of is dependent on which ground state is used . in this caseit is the state and therefore the sign will be negative .figure [ fig : ff_gradoffset](a ) shows plots of ( i ) input and ( ii ) output gradients with varying magnetic field offsets .figure [ fig : ff_gradoffset](b ) shows single heterodyne traces for ( i ) input and ( ii ) echo with no applied offset , as well as ( iii ) echo with an applied offset of 1.4 mhz .+ the stretched form of the echoes indicates that there is dispersion present in the memory .this is not surprising considering that the bandwidth of the memory is only slightly greater than the bandwidth of the pulse .this effect is accentuated for longer storage times .the additional elongation for recall with a greater offset indicates that fringes of the magnetic field ( i.e. 
those components that tail off at either end of the cell ) may have affected the stored pulse , leading to greater dispersion .we note , however , that such effects are easily compensated , as we explain in section [ sec : ff_bandwidth ] .+ a modulated gaussian was fitted to the main body of the output pulses in order to extract the value of relative to the lo frequency .this is also shown in fig .[ fig : ff_gradoffset](b ) for the input , as well as the two echoes .figure [ fig : ff_gradoffset](c ) shows a characterization of the change in for a range of values of .this is compared with the behaviour expected from eq .[ eq : ff_gradoffset ] .as can be seen , the two are in good agreement . + as a function of position along the memory ( normalized to length ) due to ( i ) input gradient and ( ii ) minimum and maximum output gradients ( ratios noted on figure ) .blue ( dashed ) line corresponds to the desired field and points correspond to the measured magnetic field ( error bars due to sensitivity of gauss - meter ) .( b ) amplitude plot , normalised to size of input pulse , showing ( i ) input pulse ( shown in red , scaled by a factor of ) , ( ii ) output pulses recalled with varying output gradients as noted .points correspond to demodulated data , dashed lines correspond to gaussian fit to data , and values correspond to the centre frequencies of pulses relative to the lo .bracketed ratios indicate .( c ) the fwhm of the output pulses normalized to the fwhm of the input pulse , as a function of input gradient over output gradient .points represent measured fwhm ( error bars from standard deviation of 100 traces ) , red ( dashed ) line corresponds to eq .[ eq : ff_grad ] , blue ( solid ) line corresponds to linear fit to data . ] by recalling with a steeper output gradient than input gradient , i.e. , the output bandwidth of the memory will be made greater than the input bandwidth .this change in bandwidth is , in turn , passed on to the echo as the absolute emission frequency relative to the centre frequency of each atom along the ensemble will be greater , while the total excitation will remain unchanged . in this case , the output pulse will be compressed in time due to its now greater frequency spectrum .the opposite is also true , i.e. by recalling with a shallower gradient the output pulse bandwidth will be reduced and the pulse elongated in time .+ the temporal profile of the output pulse , measured using the pulse full - width - half - maximum ( fwhm ) can be simply expressed as a function of the input profile and input / output gradient as + this has already been experimentally demonstrated . here ,however , we present a more quantitative study with the extra control of the gradient we obtain with the mec .figure [ fig : ff_gradchange](a ) shows experimental plots of ( i ) the input gradient , and ( ii ) output gradients with ratios from 1:1 to 3:1 . performing fits to individual pulses , as illustrated in fig .[ fig : ff_gradoffset](b ) , allows for in - phase digital demodulation of the heterodyne data .this , in turn , allows for averaging over many traces , something that would not be possible with the non - demodulated data due to phase fluctuations between the probe and local oscillator .+ figure [ fig : ff_gradchange](b ) shows averaged demodulated input and output pulse amplitudes for different recall gradients . 
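the fitting step referred to above can be illustrated with a short script : a modulated gaussian ( a gaussian envelope times a beat note at the probe-lo difference frequency ) is fitted to a synthetic heterodyne trace to extract the centre frequency and the envelope fwhm . the model , units and parameter values are assumptions made for illustration , not the authors' analysis code .

```python
import numpy as np
from scipy.optimize import curve_fit

def modulated_gaussian(t, A, t0, sigma, f, phi):
    # Gaussian envelope times a beat note at the probe-LO difference frequency
    return A * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2)) * np.cos(2.0 * np.pi * f * t + phi)

# synthetic heterodyne trace (time in us, frequency in MHz) -- illustration only
rng = np.random.default_rng(1)
t = np.linspace(0.0, 40.0, 4000)
trace = modulated_gaussian(t, 1.0, 20.0, 3.0, 2.4, 0.7) + 0.05 * rng.standard_normal(t.size)

# initial guesses: beat frequency from the FFT peak, centre from the trace maximum
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f0 = freqs[np.argmax(np.abs(np.fft.rfft(trace)))]
p0 = [trace.max(), t[np.argmax(np.abs(trace))], 2.5, f0, 0.0]

popt, _ = curve_fit(modulated_gaussian, t, trace, p0=p0, maxfev=10000)
A, t0, sigma, f, phi = popt
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(f"fitted centre frequency: {f:.3f} MHz, envelope FWHM: {fwhm:.2f} us")
# under a change of recall gradient, the echo FWHM is expected to scale with the
# ratio of input to output gradients (eq. [eq:ff_grad])
```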
as predicted , the output pulses become more compressed as the recall gradient is increased .though it does follow a linear relationship , it does not , however , follow eq .[ eq : ff_grad ] , as can be seen from fig .[ fig : ff_gradchange](c ) .this discrepancy is most probably a result of the highly dispersive nature of gem storage , with large changes in the absorptive profile of the system especially at either side of the gem frequency storage window when the pulse bandwidth is approximately equal to the memory bandwidth , as discussed in the previous section .having a larger input bandwidth would reduce the effect of dispersion on the pulse .+ it can also be seen from fig .[ fig : ff_gradchange](b ) that the echoes are emitted from the memory earlier , i.e. at a time , when recalled with a steeper output gradient .this is because a steeper gradient will cause the rephasing process to occur at a faster rate , and will also affect the amount of dispersion .the frequency of the echo is not the same as the input pulse due to the inherent gem frequency shift predicted in ref . , which is greater for shorter storage times . + as a function of position along the memory ( normalized to length ) due to gradients ( i)-(iii ) corresponding to times ( i)-(iii ) in ( b ) . for traces ( a ) and ( c )blue ( dashed ) lines correspond to the desired field and points correspond to the measured magnetic field ( error bars due to sensitivity of gauss - meter ) .( b ) spectral filtering of ( i ) a gaussian envelope containing two frequency components separated by 700 khz ( red , non - demodulated , scaled by ) , and the demodulated retrieval ( blue ) of ( ii ) higher , and ( iii ) lower frequency components averaged over 100 traces . for traces ( b ) and( d ) points correspond to data , lines correspond to fit to data , and values correspond to centre frequencies of pulses .( c ) two - photon detuning due to ( i ) input and ( ii ) , ( iii ) , and ( iv ) output gradients corresponding to times ( i)-(iv ) in ( d ) , which shows the conversion from the time to frequency domain of ( i ) a gaussian pulse with two modulation sidebands at khz ( red , non - demodulated , scaled by ) , and the demodulated retrieval of ( ii ) higher frequency sideband , ( iii ) carrier , and ( iv ) lower frequency sideband averaged over 100 traces ( blue ) . ] if we now consider the storage of a modulated pulse , the frequency encoding nature of gem will mean that the carrier and sideband components of the pulse will be stored in different parts of the memory .therefore , if we had fine enough control over the recall gradient , we could choose when to recall the different frequency components by switching the gradient only in the pertinent part of the memory .+ an experimental demonstration of this filtering is shown in fig.s [ fig : ff_spectralfiltering](a)-(b ) . here a carrier pulse with a gaussian envelope and two frequency components separated by 700 khz is sent into the memory . 
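because gem stores each spectral component at the position where it is resonant , a component detuned by delta from the pulse centre sits at z = l/2 + delta/eta for a linear gradient eta . the toy calculation below , with an assumed ( not experimental ) total memory bandwidth of 2 mhz across the 20 cm cell , shows that two components separated by 700 khz are stored about 7 cm apart , comfortably in different regions of the cell ; which component ends up in which half depends on the sign of the gradient and on the ground state used .

```python
import numpy as np

# a spectral component detuned by delta from the pulse centre is stored where
# it is resonant: z = L/2 + delta/eta for a linear gradient eta; the bandwidth
# assumed below is an illustrative number only
L = 0.20                               # 20 cm memory
bandwidth = 2.0e6                      # assumed total memory bandwidth (Hz)
eta = 2.0 * np.pi * bandwidth / L      # angular-frequency gradient (rad/s per m)

for name, delta in [("lower component", -2.0 * np.pi * 350e3),
                    ("upper component", +2.0 * np.pi * 350e3)]:
    z = L / 2.0 + delta / eta
    print(f"{name}: stored near z = {100.0 * z:.1f} cm along the cell")
```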
by reversing the gradient only in one half of the memory at a time , the different frequency components of the pulse can be recalled separately .the output pulses in this case both have the same due to the offset in the recall gradient for the lower frequency component .+ furthermore , following the same logic , by being able to switch the gradient slowly along the length of the memory , the stored pulse can be recalled as its fourier transform .this is shown experimentally in fig.s [ fig : ff_spectralfiltering](c)-(d ) for a modulated gaussian with two sidebands at khz . in this casethe gradient is reversed in three stages , rather than a gradual reversal of the entire memory , due to the limitations of the labview code refresh rate ( ) .the three outputs were fit separately to allow for their different frequencies .the time window for each demodulation is denoted by the dashed lines in fig .[ fig : ff_spectralfiltering](d ) .+ in the above experimental demonstration , equal power was put into the sidebands and carrier .the reason this is not the case for the echo is because of coupling field - induced scattering of light . in normal -gem storagethe coupling field is switched off during the storage process to limit this effect .this is not possible , however , for multi - pulse recall in a single gas cell as the coupling field must be present for recall to occur . + as a function of position along the memory ( normalized to length ) due to ( i)-(ii ) input gradients and ( iii ) output gradient corresponding to times ( i)-(iii ) in ( b ) .blue ( dashed ) lines correspond to the desired field and points correspond to the measured magnetic field ( error bars due to sensitivity of gauss - meter ) .( b ) interference of two initially time separated pulses ( i ) and ( ii ) , shown in red , which are also separated in frequency by 700 khz .( iii ) shows the superposition of the two pulses .the inset shows the output from the memory for storage of only a single pulse : recall ( , green ) ; or recall ( , blue ) .points correspond to demodulated data averaged over 100 traces , lines correspond to gaussian fit to data , and values correspond to centre frequencies of pulses .( c ) the change in relative phase of the fitted interference pulse as a function of the relative phase of the input pulses .points represent data extracted from fit ( error bars from standard deviation of 100 traces ) , and the dashed line corresponds to the theoretical behaviour . ]a time reversal of the spectral filtering process is also possible .that is , if we take two pulses with different frequencies and store them one at a time in different halves of the memory we can alter the gradients in the different halves at different times .this will cause the recalled echoes to overlap , and therefore interfere , at the output of the memory .previous experiments in pulse interference using gem have shown how the memory can facilitate interference between modes separated in either the time or frequency domains . herewe look at an alternate method using complex gradients made possible with the mec . + thisis shown in fig.s [ fig : ff_difffreqint](a)-(b ) . here, two pulses separated in frequency by 700 khz are stored in separate halves of the memory . the lower ( higher ) frequency pulse being stored in the second ( first ) half . setting the gradient to 0 in the second half of the memory when the first ( lower frequency ) pulse enters ensures it will be stored in the first half of the memory . 
setting the gradient to 0 in the first half of the memorywhile the higher frequency pulse enters and is stored serves two purposes : apart from ensuring that none of is stored in the first half of the memory , it also means that the stored will not undergo any additional dephasing .therefore , after is stored we can reverse the gradient across the entire memory at once , causing the superposition of the echoes on the output . + to investigate the phase preserving quality of the memory, we altered the relative phase between and and looked at the phase of the interference pattern of the echo .as can be seen from fig .[ fig : ff_difffreqint](c ) , the change in relative phase of the two input pulses matches the relative phase of the interference pattern at the output .the only free parameter in the fitting of the echoes was the relative phase , with the amplitude , timings and frequencies of the two individual pulses taken from storage of individual echoes and , shown in fig .[ fig : ff_difffreqint](b ) inset . + as a function of position along the memory ( normalized to length ) due to ( i)-(ii ) input gradients and ( iii)-(iv ) output gradients corresponding to times ( i)-(iv ) in ( b ) .blue ( dashed ) lines correspond to the desired field and points correspond to the measured magnetic field ( error bars due to sensitivity of gauss - meter ) .( b ) interference of two initially time separated pulses ( i ) and ( ii ) ( red dashed lines ) , which have the same centre frequency .( iii ) initial , and ( iv ) secondary superpositions of the two pulses .blue points and solid line correspond to maximum constructive interference for , while green crosses and dashed line correspond to maximum destructive interference for .points correspond to demodulated data averaged over 100 traces , lines correspond to gaussian fit to data , and values correspond to centre frequencies of pulses .( c ) the change in area ( normalized to the maximum intensity of the individual echoes ) as a function of the relative phase of the input pulses for ( i ) , and ( ii ) .points represent data extracted from fit ( error bars from standard deviation of 100 fits ) , and the dashed line corresponds to a fit to the data . ] two pulses with the same frequency can also be interfered in a similar manner .an experimental demonstration of same frequency interference is shown in fig .[ fig : ff_samefreqint ] .the two pulses are , as before , stored in different halves of the memory by setting in the other half . 
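the phase dependence of the recalled light can be captured by a toy model in which the retrieved field is the coherent sum of the two single-pulse echoes with a relative phase . the amplitudes below are illustrative only : a 3:1 imbalance is chosen so that the computed visibility comes out near the experimental value quoted below , although in practice decoherence , timing errors and mode mismatch also reduce it .

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 1000)
dt = t[1] - t[0]
E1 = np.exp(-t ** 2 / 2.0)             # echo of the pulse stored in one half
E2 = E1 / 3.0                          # weaker echo of the other pulse (3:1 imbalance)

phis = np.linspace(0.0, 2.0 * np.pi, 73)
areas = np.array([np.sum(np.abs(E1 + np.exp(1j * phi) * E2) ** 2) * dt for phi in phis])
visibility = (areas.max() - areas.min()) / (areas.max() + areas.min())
print(f"fringe visibility of the combined echo: {visibility:.2f}")
```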
as can be seen from fig .[ fig : ff_samefreqint](a)(iii ) , the first recall gradient is no longer monotonic and , therefore , the pulse stored in the first half of the memory will be partly re - absorbed in the second half .this is why has a greater amplitude than .the non - absorbed component of will interfere with the retrieved light from the second part of the memory .if these two echoes are in phase then there will be constructive interference and an enhanced echo will be retrieved .if they are out of phase then will be small and the residual energy will remain as atomic excitation inside the memory .therefore , if the gradient in the second half of the memory is switched again , a second echo can be recalled from the memory from these leftover excitations .this is shown in fig.s [ fig : ff_samefreqint](b)(iii)-(iv ) .+ figure [ fig : ff_samefreqint](c ) shows the areas of the two echoes as the relative phase between and is varied .interference fringes can be seen , with a phase shift between the two echoes .the visibility for both and is approximately 60% ( normalized to maximum echo output for and separately ) .these values could potentially be improved with finer gradient control , especially in terms of timings .in this paper we have experimentally demonstrated a number of different spectral manipulation operations using gem .these operations could have various uses in a quantum information network .for instance , the ability to alter the bandwidth of the pulse , demonstrated in section [ sec : ff_bandwidth ] , could be used to match systems with different bandwidths and , by increasing bandwidths , help to improve bit rates for a number of time - bin qubits as described in . combining this with the ability to change the centre frequency of the stored information , demonstrated in section [ sec : ff_frequency ] ,would allow one , in theory , to match any two optical systems .+ this latter ability would allow for the conversion of time - bin qubits into frequency - bin qubits .it would also , along with the frequency encoding nature of gem , allow for the ability of frequency multiplexing .a number of pulses with different frequencies could be combined into one temporal pulse inside the memory , as demonstrated in section [ sec : ff_interference ] , and sent down the communication channel .once they reached the other end of the channel they could be separated with a second memory , as demonstrated in section [ sec : ff_filtering ] .this could greatly improve qubit rates over optical channels in quantum information networks .multiplexing quantum memories and nodes has also been suggested as a way of improving quantum repeater designs by speeding up the entanglement generation process , and against memory coherence times .+ the phase sensitive interference of initially time separated pulses , demonstrated in section [ sec : ff_interference ] , could also find applications in quantum computing .in an all optical switch , for instance , where it is the relative phase between the pulses that determines how much light is emitted at different times . + all these potential applications require high efficiencies and therefore high optical depths .a high optical depth is especially important as increasing the bandwidth of the system will decrease the recall efficiency .also , a drawback to using one physical memory ( i.e. 
the 20 cm long gas cell ) as a system of sub - memories is that the optical depth for each individual sub - memory will be of the total memory optical depth .+ an alternative method would be to use physical memories placed in series to create a memory network .not only would this increase the overall optical depth of the system but it could help to alleviate two other drawbacks to the sub - memory approach taken here : finer control of the gradient ; and coupling field - induced scattering .much care was taken with the construction of the multi - element coil , and the decision on the order of gradients , to ensure the desired and physical gradients matched as well as possible .this does , however , place limitations on the operations that can be applied .using many physical memories as one memory network would automatically increase the resolution of the gradients with respect to the length of each sub - memory .another option for improving the resolution would be to move to an alternate gradient creation technique such as the ac stark effect .+ the coupling field - induced scattering was discussed in section [ sec : ff_filtering ] with regards to the extra decay of information that is left in the memory while other information is recalled .this is a concern as this scattering leads to a decoherence rate almost 10 times larger than other decoherence mechanisms present and can not be combatted in a single memory .however , this issue could be addressed with a network of memories if one were to use orthogonal polarizations for the probe and coupling fields and place polarizing beam - splitters between the memories .in this paper we have presented experimental demonstrations of theoretical spectral manipulation operations originally investigated in ref .we showed that using the gradient echo memory scheme we can alter the bandwidth ( and therefore temporal profile ) of a pulse , as well as change its centre frequency .we also demonstrated the ability of gem to act as a spectral filter and , using the frequency - encoding nature of gem , were able to recall a modulated pulse as its fourier transform .finally we showed that two initially time separated pulses , with the same or different frequencies , could be caused to interfere coherently at the output of the memory .these abilities could be used to improve qubit rates across quantum communication channels , as well as potential uses in quantum computing applications .many thanks to shane grieves for his work constructing the hardware for the mec , and to peter uhe for their initial testing .this research was conducted by the _australian research council centre of excellence for quantum computation and communication technology _ ( project number ce110001027 ) .
the ability to coherently spectrally manipulate quantum information has the potential to improve qubit rates across quantum channels and find applications in optical quantum computing . in this paper we present experiments that use a multi - element solenoid combined with the three - level gradient echo memory scheme to perform precision spectral manipulation of optical pulses . these operations include bandwidth and frequency manipulation , spectral filtering of separate frequency components , as well as time - delayed interference between pulses with both the same , and different , frequencies . these operations have potential uses in quantum information applications .
recently we have shown that the pareto law appears asymptotically ( ) in the distribution of money among the agents in the steady state of a trading market model : when the agents have random saving propensities .the market is modeled as an ideal gas where each molecule is identified with an agent , with the additional attribute that each agent has a random saving propensity , and each trading event between two agents is considered to be an elastic or money conserving collision between two molecules . in another model a geometric model for earthquakes we have shown that a power law similar to the gutenberg - richter law appears in the asymptotic distribution of the overlap between two dynamically intersecting cantor sets : since a geological fault is formed of a pair of fractal rock surfaces that are in contact and in relative motion , it is modeled by a pair of overlapping cantor sets ( the simplest known fractal ) , one shifting over the other .the overlap between the two cantor sets represents the area of contact between the two surfaces of the fault and hence it is proportional to the energy released in an earthquake resulting from ruptures in the regions of contact . in both the models we get simple power laws with the exponents .although these have been obtained separately for the two models , using both numerical and analytic methods , we show here that the two cases have a common feature that results in a common mode of origin of the power laws observed in the distribution of money and fractal overlap .the derivation of the power laws presented here shows that the common feature is a log - normal distribution in which the normal factor spreads indefinitely thus leaving the power - law factor to dominate the asymptotic distribution .let us first consider the ideal gas model of an isolated economic system that we refer to as the market in which the total money and the total number of agents are both constant ; there is neither any production nor any destruction of money within the market and no migration of agents occurs between the market and its environment .the only economic activity allowed in the market is trading among the agents . each agent possesses an amount of money at time .the time is discrete and each event of trading is counted as a unit time step . in an event of trading ( shown schematically in fig .1 ) a pair of agents and randomly redistribute their money between themselves such that their total money is conserved and none of the two agents emerges from the trading process with negative money ( i.e. , debt is not allowed ) : it has already been shown that in the steady state market ( ) the money with the individual agents follow the gibbs distribution : when there is no restriction on the amount of money each agent can trade with except that it must satisfy the conditions of eq .( [ eq : local - conservation ] ) . here represents the economic equivalent of temperature and is defined as the average money per agent in the market .if each agent saves a fraction ( ) of its own money at every trading and is the same for all agents at all time steps , the individual money with the agents in the steady state follows the gamma distribution . 
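the kinetics described above are easy to reproduce with a short monte carlo sketch ( sizes and variable names are illustrative ) : pairs of agents exchange money according to the conserving rule with saving fractions . setting all saving fractions to zero reproduces the gibbs distribution , a common nonzero value gives the gamma distribution , and the quenched random saving propensities discussed next produce the pareto tail , i.e. a probability density falling off roughly as the inverse square of the money .

```python
import numpy as np

rng = np.random.default_rng(0)
N, kicks = 1000, 1_000_000        # agents and trading events (illustrative sizes)
m = np.ones(N)                    # one unit of money per agent; the total is conserved
lam = rng.random(N)               # quenched saving propensities, uniform on [0, 1);
                                  # set lam[:] = 0.0 for the Gibbs case or a common
                                  # constant for the gamma-distributed case

for _ in range(kicks):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    eps = rng.random()
    pool = (1.0 - lam[i]) * m[i] + (1.0 - lam[j]) * m[j]   # money put on the table
    m[i], m[j] = lam[i] * m[i] + eps * pool, lam[j] * m[j] + (1.0 - eps) * pool

# crude look at the high-money tail, where P(m) ~ m^(-2) is expected
hist, edges = np.histogram(m, bins=np.logspace(-2, 2, 30), density=True)
mids = np.sqrt(edges[1:] * edges[:-1])
keep = (mids > 1.5) & (hist > 0)
slope = np.polyfit(np.log(mids[keep]), np.log(hist[keep]), 1)[0]
print("fitted tail exponent:", round(slope, 2), "(about -2 expected)")
```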
if we consider the effect of randomly distributed saving fraction among the agents , the money distribution in the steady state assumes the form of the pareto law .the evolution of the agents money in a trading can be written as \label{eq : evolution - i}\ ] ] and \label{eq : evolution - j}\ ] ] where and are the saving fractions of agents and respectively .the saving fractions are quenched , i.e. , fixed in time for each agent and are distributed randomly and uniformly ( like white noise ) on the interval .the random division of the total traded money is given by the number that varies randomly with the trading events .the money distribution in the steady state is found to have a long power - law tail ( shown in fig.2 ) that fits with the pareto law for .we also have analytic proofs of the pareto distribution of money observed in this random - saving gas - like model ; all these proofs proceed by formulating the trading events as scattering processes and show that the pareto distribution is a steady state solution of the scattering problem . herewe give a simple derivation of the asymptotic distribution of money in the steady state of the market model using an argument of the mean - field type , thereby avoiding the intricacies of the previous proofs . in our approachthe money redistribution equations ( [ eq : evolution - i ] ) and ( [ eq : evolution - j ] ) are reduced to a single stochastic map by taking the product of the two equations : now we introduce a mean - field - like approximation by replacing each of the quadratic quantities , and by a mean quantity . therefore eq .( [ eq : evolution - ij ] ) is replaced by its mean - field - like approximation where is an algebraic function of , and ; it has been observed in numerical simulations of the model that the value of , whether it is random or constant , has no effect on the steady state distribution ( illustrated in fig .2 ) and the time dependence of results from the different values of and encountered during the evolution of the market . denoting by , eq . ( [ eq : stochastic - map ] ) can be written as : where is a random number that changes with each time - step .the transformed map ( eq . [ eq : random - walk ] ) depicts a random walk and therefore the ` displacements ' in the time interval $ ] follows the normal distribution now where is the log - normal distribution of : .\label{eq : log - normal1}\ ] ] the normal distribution in eq .( [ eq : rw - distribution ] ) spreads with time ( since its width is proportional to ) and so does the normal factor in eq .( [ eq : log - normal1 ] ) which eventually becomes a very weak function of and may be assumed to be a constant as .consequently assumes the form of a simple power law : that is clearly the pareto law ( [ eq : pareto - law ] ) for the model .next we consider a geometric model of the fault dynamics occurring in overlapping tectonic plates that form the earth s lithosphere . a geological faultis created by a fracture in the earth s rock layers followed by a displacement of one part relative to the other .the two surfaces of the fault are known to be self - similar fractals . in this modela fault is represented by a pair of overlapping identical fractals and the fault dynamics arising out of the relative motion of the associated tectonic plates is represented by sliding one of the fractals over the other ; the overlap between the two fractals represents the energy released in an earthquake whereas represents the magnitude of the earthquake . 
in the simplest form of the modeleach of the two identical fractals is represented by a regular cantor set of fractal dimension .this is the only exactly solvable model for earthquakes known so far .the exact analysis of this model for a finite generation of the cantor sets with periodic boundary conditions showed that the probability of the overlap , which assumes the values , follows the binomial distribution of : since the index of the central term ( i.e. , the term for the most probable event ) of the above distribution is , , for large values of eq .( [ eq : binomial - regular ] ) may be written as by replacing with . for , we can write the normal approximation to the above binomial distribution as since , we have } , \label{eq : normal - approx'}\ ] ] not mentioning the factors that do not depend on .now where \label{eq : log - normal2}\ ] ] is the log - normal distribution of . as the generation index , the normal factor spreads indefinitely ( since its width is proportional to ) and becomes a very weak function of so that it may be considered to be almost constant ; thus asymptotically assumes the form of a simple power law with an exponent that is independent of the fractal dimension of the overlapping cantor sets : this is the gutenberg - richter law ( eq . [ eq : gutenberg - richter - law ] ) for the fractal - overlap model of earthquakes .it was also observed in numerical simulations that for several other regular and random fractals , thus suggesting that the exponent may be universal . the exact result of eq . ( [ eq : binomial - regular ] ) in ref . disagreed with the asymptotic power law of eq .( [ eq : power - law2 ] ) obtained previously by renormalization group analysis of the model for in ref .the disparity between the two results had appeared because it was overlooked that the former is the exact distribution of whereas the latter was the asymptotic distribution of .however the above analysis shows that the power law in eq .( [ eq : power - law2 ] ) is indeed the asymptotic form of the exact result .this is qualitatively similar to what is observed in the distribution of real earthquakes : the gutenberg - richter power law is found to describe the distribution of earthquakes of small and intermediate energies ; however deviations from it are observed for the very small and the very large earthquakes . the fact that the fractal - overlap model produces an asymptotic power law distribution of overlaps suggests that the gutenberg - richter law owes its origin significantly to the fractal geometry of the faults .furthermore , since this model contains the geometrical rudiments ( i.e. , the fractal overlap structure ) of geological faults and it produces an asymptotic distribution of overlaps that has qualitative similarity with the gutenberg - richter law , we are inclined to believe that the entire distribution of real earthquake energies is log - normal that is wide enough for the gutenberg - richter power law to be observed over a large range of energy values .in the trading market model , we have shown that the money redistribution equations for the individual agents participating in a trading process can be reduced to a stochastic map in ( eq . [ eq : stochastic - map ] ) . using the transformation , the map was reduced to a random walk in the variable and hence the distributions of and found to be normal and log - normal respectively ; in the steady state , i.e. 
, for , the latter was found to assume the form of a power law identical to the pareto law with the exponent .likewise , in the fractal - overlap model for earthquakes the distribution of overlaps was found to be log - normal for large generation indices of the cantor set and it further reduced asymptotically ( as ) to a power law similar to the gutenberg - richter law for earthquake energies . in both the cases , the original distribution of the relevant variable ( and ) was log - normal in which the normal factor became a very weak function of the variable in the asymptotically ( and respectively ) , thus rendering a power - law form to the distribution .our derivations of the two power laws in the two vastly different models also indicate the universality of the exponents . in particular , the value of the gutenberg - richter exponent in the fractal - overlap model is clearly independent of the dimension of the fractals used and therefore the result is of a general nature . in the context of this paper it may be mentioned that in a similar fashion pietronero et al found a common mode of origin for the laws of benford and zipf .a. chakraborti and b. k. chakrabarti , eur .j. b * 17 * ( 2000 ) 167 ; a. das and s. yarlagadda , phys .t * 106 * ( 2003 ) 39 ; m. patriarca , a. chakraborti and k. kaski , phys .e * 70 * ( 2004 ) 016104 .s. pradhan , b. k. chakrabarti , p. ray and m. k. dey , phys .* t106 * ( 2003 ) 77 ; s. pradhan , p. chaudhuri and b. k. chakrabarti , in _ continuum models and discrete systems _ , ed .d. j. bergman , e. inan , nato sc .series , kluwer academic publishers ( dordrecht , 2004 ) pp .245 - 250 ; arxiv : cond - mat/0307735 .
we show that there is a common mode of origin for the power laws observed in two different models : ( i ) the pareto law for the distribution of money among the agents with random saving propensities in an ideal gas - like market model and ( ii ) the gutenberg - richter law for the distribution of overlaps in a fractal - overlap model for earthquakes . we find that the power laws appear as the asymptotic forms of ever - widening log - normal distributions for the agents money and the overlap magnitude respectively . the identification of the generic origin of the power laws helps in better understanding and in developing generalized views of phenomena in such diverse areas as economics and geophysics .
quantum algorithms have proven to be more efficient than classical algorithms in solving a number of problems .for instance , shor s quantum algorithm can factor numbers exponentially faster than classical algorithms .grover s search algorithm does not change the complexity class , but provides significant speed up for large databases .we would like to explore the power of quantum algorithms in the context of knot theory .classification of knots and links in a three - dimensional space is one of the open problems .jones introduced a recursive procedure for determining a polynomial relation for these knots and links .jones polynomials do classify some knots and links .there are other generalized polynomials which improve the classification but none of them have achieved complete classification . it is known that the evaluation of jones polynomial classically is a hard problem . hence it will be interesting to study the computation of knot and link polynomials using a quantum algorithm .there are diverse approaches in physics to obtain polynomials for knots and links . following alexander s theorem ,any knot can be viewed as a closure or capping of an -strand braid .therefore the polynomials for knots and links can be determined by studying representation theory of braid groups .the common ingredient in these approaches is to find different representations of braid groups .we now present a brief summary of some of these approaches : \1 ) * n - state vertex models * which are two - dimensional statistical mechanical models where the bonds of the square lattice carry spin representations of .the number of possible states of spin is denoted by .the properties of these models are described by the so - called -matrix which is an matrix .the number of nonzero elements in the -matrix for -state vertex models is given by .in the literature , the vertex models are referred to either as -state vertex model or -vertex models ; both are equivalent .for example , 2-state vertex models carry spin on the lattice bonds and they are equivalently called as six - vertex models where `` six '' denotes the number of non - zero -matrix elements . in ref . , braid group representations and knot polynomials from the -matrices of -state vertex models were obtained .\2 ) * chern - simons gauge theory * is a topological field theory which provides a natural framework for the study of knots and links .the knot polynomials are given by the expectation value of wilson loop observables .in particular , the jones polynomial corresponds to the wilson loop carrying spin representation in chern - simons theory . clearly , arbitrary representations of any compact gauge group can result in generalized polynomials .the polynomials are in the variable which is a function of the coupling constant and the rank of the gauge group . the field theoretic polynomials were obtained by exploiting the connection between chern - simons theory on a three - manifold with boundary and the corresponding wess - zumino - witten conformal field theory ( wzw ) on the boundary .the polynomials crucially depended on various representations of the monodromy or braiding matrices in the wzw models . recently , freedman et al . 
have attempted simulation of topological field theories by quantum computers .the topological quantum computation proposed in refs .- is at a mathematically abstract level .it exploits the connection between fractional quantum hall states and chern - simons theory at the appropriate integer coupling .\3 ) * state sum method * of obtaining bracket polynomials . in ref . , the construction of a unitary representation has been shown for the three - strand braid .further within this approach , it has been shown that it is not possible for a quantum computer to evaluate the knot polynomial . however , for a specific choice of the polynomial variable , the linking number can be determined .our aim is to determine the jones polynomial for any knot or link obtained from braids using a quantum algorithm . for this purpose, we need to determine matrix representation for braid generators .we will recapitulate the construction of braid group representation from the six - vertex model . then, we can determine the eigenvalues and eigenstates for these braid generators .this exercise suggests that we can associate product of unitary operators for any braid word .hence the unitary transformations , corresponding to any braid word , can be implemented using quantum gates .it is important to stress that the quantum computation in this paper is crucially dependent on the mapping of any braid word to a product of unitary operators .the polynomials for knots and links can be directly computed by choosing a suitable eigenbasis of the braiding matrices .essentially , a quantum algorithm will determine the probability of finding unitarily evolved initial state ( ) in a final state ( ) : where represents the series of unitary operators corresponding to the braid word . for the braiding matrices obtained from the six - vertex model ,the above matrix element gives the modulus - square of the jones polynomial ( up to an overall constant ) .this paper is organized as follows . in section[ s - braid ] , we present a general method to find representation of braid groups using n - state vertex models .we discuss in detail the six - vertex model , braiding eigenvalues and eigenstates . using these eigenstates, we evaluate jones polynomial . in section [ comp ] , we present a method to perform the evaluation of the modulus - square of the jones polynomial as a quantum computation by considering any knot or link as a composition of cups , a series of braiding operations and caps . in the concluding section ,we summarize the results obtained and discuss the significance of the quantum algorithm .in this section , we review the construction of braid group representations from n - state vertex models . in order to compare the eigenstates of the braiding operator with the qubit states ,the six - vertex model ( spin on the bonds of the square lattice ) is relevant .hence , we shall present the explicit form of the -matrix and the braid matrix for the six - vertex model . as mentioned in the introduction ,vertex models are two - dimensional statistical mechanical models with the spins lying on the bonds of a square lattice .the properties of these models are described by the -matrix elements between edge states and : where is the spectral parameter . 
heretake values .the integrability condition of these models requires the following equations to be satisfied : where are called yang - baxter operators and the relation ( [ yb ] ) is called yang - baxter equation .the explicit form of in terms of the -matrix elements is given by where is the identity acting at the -th position and is a matrix such that .the solution to eq .( [ yb ] ) can be written in a compact form : where the terms in parenthesis are the quantum clebsch - gordan coefficients ( q - cg ) which are nonzero if and only if takes a value in the range and satisfies the condition . here is given by where the q - cg coefficient variable .these solutions are spectral parameter dependent solutions .the explicit form of the -matrix for the six - vertex model is [ cols="^,^,^,^,^ " , ] where and .these eigenvalues are equal up to an overall normalization to the eigenvalues of the wess - zumino - witten model monodromy matrices .we observe that the eigenvalues of the matrix on the coupled states depend only on and not on .therefore , we can suppress the dependence on the eigenstates of the braiding operator and equivalently write it as a tensor product state involving the spin placed on the bonds of the six - vertex model .that is , even though we have explicitly diagonalized the matrix , we must remember that all the braid group generators s can not be simultaneously diagonalized .the spectral parameter independent form of eqs .( [ yb],[yb1 ] ) are the defining relations of the braid group which implies that we can simultaneously diagonalize either s or s . in this subsection , we would like to address the eigenvectors and eigenvalues of braid generators from the viewpoint of obtaining polynomial invariants of knots from platting or capping of braids .it is well known that knots from braids are not unique .that is , braids related by markov moves i and ii give rise to the same knot .these two moves indeed completely remove the non - uniqueness .so the construction of polynomial invariants for knots must be such that the polynomial does not change under markov moves .one such procedure for knots obtained from closure of braids has been presented in .we will use the eigenstates of the braiding operators to directly compute the polynomial invariant for any knot obtained from closure or capping of an -strand braid . in order to remove the non - uniqueness due to markov moves ,we place orientations on the strands of the braid .further , we introduce a correction factor to the braid eigenvalues obtained from six - vertex model such that the polynomial does not change under markov moves i and ii .the correction factor on braiding eigenvalues depends on the relative orientations between the two strands . for right - handed half - twists between strands of parallel orientation ,the braiding eigenvalues are similarly , for right - handed half twists between strands of antiparallel orientation , the braiding eigenvalues are the eigenvalues for left - handed half - twists are inverse of the right - handed half - twists eigenvalues .suppose we consider any knot obtained from capping of a -strand oriented braid . 
clearly , capping is possible if the number of outgoing strands is equal the number of incoming strands in the oriented braid .in other words , the quantum states s on the strands ( [ nev ] ) should be such that therefore to study knots from braids , only the subspace of the states satisfying eq .( [ zer ] ) needs to be considered .hence the construction of the eigenstates of braiding operators should be consistent with eq .( [ zer ] ) .we will now present the eigenstates of the braiding matrices which will enable direct evaluation of knot polynomials . for a -strand oriented braid , we can write the most general eigenbasis of braiding operators with eigenvalue , for all s , as : recall that the appropriate braiding eigenvalues ( [ para],[antip ] ) need to be substituted depending on the relative orientations of the two strands involved in braiding and the handedness .the brackets within the basis kets should be identified with the notation in eq .( [ notat ] ) .that is , , , and so on .note that the final combined state in the above basis is chosen to be spin which is essential to satisfy the condition ( [ zer ] ) to describe knots from closure or capping of braids . in the similar fashion ,we can write a different eigenbasis for braiding operators with eigenvalue for all s : in order to achieve the final spin state , we require . incidentally , these two bases are equivalent to the conformal blocks in wess - witten conformal field theory .the two different bases ( [ odd],[even ] ) are related by an orthogonal ( unitary ) duality matrix the duality matrix can be written in terms of products of quantum - racah coefficient matrices : \prod_{m=0}^{n-2}a_{t_i r_{i-1}}\left[\begin{matrix}t_{i-1 } & j_{2i}\cr r_i&s_{2m}\end{matrix}\right]\right)~\nonumber\\ & ~ & \prod_{m=0}^{n-2 } a_{l_i j_{2i+2 } } \left [ \begin{matrix}t_m & s_{2m+2}\cr s_{2m+3 } & t_{m+1}\end{matrix}\right]\end{aligned}\ ] ] where the closed form expression for the quantum - racah coefficient matrix is &= & ( -1)^{s_1+s_2+s_3+s_4 } \sqrt{[2j+1][2l+1 ] } \delta(s_1,s_2,j ) \delta(s_3,s_4,j ) \delta(s_1,s_4,l ) \delta(s_2 , s_3,l)\nonumber\\ ~&~&\times \sum_{m \geq 0}(-1)^m [ m+1 ] ! \{[m - s_1-s_2-j ] ! [ m - s_3-s_4-j ] ! [ m - s_1-s_4-l]!\nonumber\\ ~&~&\times [ m - s_2-s_3-l]![s_1+s_2+s_3+s_4-m ] ! [ s_1+s_3+j+l - m]!\nonumber\\ ~&~&\times[s_2+s_4+j+l - m]!\}^{-1}\end{aligned}\ ] ] where ![a - b+c]![a+b - c]!\over[a+b+c+1]!} ] and diagonal braiding matrices .these unitary representations play the role of quantum gates in the quantum computation of jones polynomial .in this section , we attempt to compute the jones polynomials , for knots and links obtained from platting or capping of strand braid as shown in fig . 1 , through a quantum algorithm . we have already elaborated in the previous section that we can associate ( product of unitary matrices ) for every braid word .the quantum algorithm involves the following steps : + : let the initial -qubit state be ( ) .+ : we perform the sequence of unitary operations corresponding to the braid word in fig . 1. the unitarily transformed state will be : finally , we determine the probablity of the unitarily evolved state in a specific final state as taking the final state to be , we get the modulus square of the jones polynomial ( up to an overall normalisation) ( [ jone ] ) . for a subclass of knots(links ) called achiral knots(links ) , is unchanged under . 
in other words ,the matrix element will be real .for these achiral knots and links , the quantum algorithm directly gives the jones polynomial ( up to an overall normalisation ) .in this paper , we have presented matrix representations for braiding matrices from six - vertex models .we have discussed the representation theory of braids , namely , the eigenbasis and eigenvalues of the braid generators obtained from six - vertex models .the explicit evaluation of jones polynomial , for any knot / link from braids , is presented . from the evaluation, we have shown that we can associate a series of unitary operators for any braid word .this is the significant result of the paper enabling quantum computation .we have demonstrated a quantum algorithm , involving these unitary operators , which can determine the modulus square of the jones polynomial for any knot or link .the algorithm gives jones polynomial for achiral knots and links .we must realize that the quantum computation essentially determines the probablity of unitarily evolved initial state in a specific final state .further , the number of unitary operators is dependent on the braid word and at most equal to twice the length of the braid word . *acknowledgments * : we would like to thank l.h .kauffman for comments and queries which significantly helped us to improve the paper .we would also like to thank umasankar for going over the manuscript and suggesting corrections .10 a. ekert , p. hayden , and h. inamori , quant - ph/0011013 and references therein .+ a. ekert and r. jozsa , reviews of modern physics * 68 * ( 1996 ) 733 - 753 .jones , bull .ams * 12 * ( 1985 ) 103 - 111 .p. freyd , d. yetter , j. hoste , w.b.r .lickorish , k. millet and a. oceanu , bull .ams * 12 * ( 1985 ) 239 - 246 .kauffman , ann .studies ( princeton univ .press , 1987 ) ; ( world scientific , 1991 ) .f. jaeger , d. vertigen , and d. welsh , math .cambridge philos .* 108 * ( 1990)35 - 53 .y. akutsu and m. wadati , jour . of the phys .soc . of japan ,* 56 * ( 1987 ) 3039 - 3051 ; m. wadati , t. deguchi and y. akutsu , phys . rept . * 180 * ( 1989 ) 247 - 332 .e. witten , commun .* 121 * ( 1989 ) 351 - 391 .p. ramadevi , t.r .govindarajan and r.k .kaul , nucl .b * 402 * ( 1993 ) 548 - 566 .l. h. kauffman , math.qa/0105255 .freedman , a. kitaev , and z. wang , commun .* 227 * ( 2002)587 - 603 , quant - ph/0001071 .freedman , quant - ph/0003128 .freedman , a. kitaev , m.j .larsen , z. wang , quant - ph/0101025 .l. h. kauffman , math.qa/0105255 .l. h. kauffman and s. j. lomonaco jr . , quant - ph/0205137 .kaul , hep - th/9804122 , frontiers of field theory , quantum gravity and strings , nova science , ( 1999 ) 45 ; p. ramadevi , ph.d thesis ( 1996 ) . v. pasquier , commun .* 118 * ( 1988 ) 355 - 364 .kirillov , n.yu .reshetikhin , ed .kac , world scientific , 1989 .kaul , commun .* 162 * ( 1994 ) 289 - 320 .vandersypen , m. steffen , g. breyta , c.s .yannoni , m.h .sherwood , and i.l .chaung , nature * 414 * ( 2001 ) 883 - 887 .
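to make the algorithmic structure of section [ comp ] concrete, here is a minimal python sketch of the three steps: prepare an initial state, apply the product of unitaries associated with a braid word, and read out the squared matrix element. the braiding matrices of the six-vertex model are not reproduced in the text above, so the sketch uses placeholder unitaries of the right shape; only the composition of the braid word and the final probability are meant to be illustrative.

```python
import numpy as np

def random_unitary(dim, rng):
    """placeholder unitary standing in for a braiding matrix b_i
    (the actual six-vertex matrices depend on q and are omitted here)."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(a)
    return q

def braid_word_probability(word, generators, psi_in, psi_out):
    """|<psi_out| b_{i_k}^{e_k} ... b_{i_1}^{e_1} |psi_in>|^2 for a braid word
    given as a list of (generator_index, exponent_sign) pairs."""
    state = psi_in.copy()
    for idx, power in word:
        b = generators[idx]
        state = (b if power > 0 else b.conj().T) @ state
    return abs(np.vdot(psi_out, state)) ** 2

rng = np.random.default_rng(0)
dim = 4                                  # toy state-space dimension (e.g. two qubits)
gens = [random_unitary(dim, rng) for _ in range(3)]   # stand-ins for b1, b2, b3
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                            # |00...0>
word = [(0, +1), (1, -1), (2, +1)]       # e.g. the braid word b1 b2^{-1} b3
print(braid_word_probability(word, gens, psi0, psi0))
```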
it is a challenging problem to construct an efficient quantum algorithm which can compute the jones polynomial for any knot or link obtained from platting or capping of a 2n-strand braid. we recapitulate the construction of braid-group representations from vertex models. we present the eigenbases and eigenvalues of the braiding generators and their usefulness in the direct evaluation of the jones polynomial. the calculation suggests that it is possible to associate a series of unitary operators with any braid word. hence we propose a quantum algorithm using these unitary operators as quantum gates acting on a qubit state. we show that the quantum computation gives the jones polynomial for achiral knots and links.
fashion as an object of research was introduced to sociology at the beginning of xx century by georg simmel , in direct connection with social classes . according to simmel ,fashion `` is a product of class distinction '' .dynamics of fashion is driven by two forces ( we would prefer to say `` processes '' ) : adaptation to society and individual departure from its demands .the social stratification is projected into a division of roles : elites tend to differ from lower classes , while the latter tend to imitate elites .these processes of imitation and avoidance produce a stream of given status symbols ( clothing , social conduct , amusement ) from elites to lower classes , where finally they disappear , replaced by new patterns .later , the phenomenon was called `` simmel effect '' .+ while imitation as an object of social simulations has attracted common interest , the thread of avoidance is much less popular .the original simmel thoughts were converted to simulations by roberto pedone and rosaria conte , but these works remain almost unnoticed . in the original version of the famous model of dissemination of culture by robert axelrod avoidance is absent , and it has been added only recently . similarly , social repulsion appeared to be a useful concept when added to the deffuant model of dynamics of social opinion .+ it is somewhat surprising that the authors of these papers do not compare their results with real data on fashion . the conclusions of are concentrated on the fact that the simmel effect is present in the numerical results , and on the mutual comparison of different variants of calculations .current interpretations of the results of the axelrod model seem to follow large scale theory , as suggested by the term cultural . in , the authors suggest that their numerical results could be related to the distribution of languages. however , in their model the number of equivalent options of cultural traits ( the variable ) is of the order of hundreds . obviously , nobody in this world has a choice of one hundred languages .the interpretation of fashion ( as for example clothing ) , although natural here , remains unexplored .+ the aim of this text is to connect the calculations of pedone and conte to some sets of real data on fashion , available in literature .namely , we intend to apply the model to the datasets of american babies names in the period 1880 - 2010 .some comments will also be possible on the data on skirt lengths .the model itself is slightly modified ; a scale - free network is used as the structure of a model social network , instead of a square lattice or a torus .the number of options remains as a model parameter , as in ; however , we do not take into account any interaction between different variables , so the number of variables ( in ) is set to one .the social status of agents remains constant during the simulation and is read from the node degree .+ in two subsequent sections , the model is explained and numerical results are shown .section 4 is devoted to the datasets , and section 5 - to the simmel effect in the data on babies names . 
in the last section ,we summarize the similarities and differences between the results and the data .scale - free networks are constructed with new nodes attached to nodes , according to the known principle of preferential attachment .the network size was kept large enough to assure the mean shortest path not less than 3 ; for example , for the minimal network size is nodes .as noted above , the social status of nodes is determined by their degree .the network structure should be complex enough to contain nodes of middle class between elite nodes and nodes of low class ; hence the distances in networks should be large enough .+ a variable is assigned to each node .the values of these variables belong to the set .the parameter marks then the number of options of a cultural feature . in the initial state for all nodes . + for each node , its neighbours are divided into two sets : those with degree larger or equal to the degree of form the set , and the remaining neighbours the set .a node which has no neighbours in is marked . during the simulation, nodes are selected randomly .for the selected node , say , states of the set are estimable , and states of the set are shaming .if a state ( a number within the allowed range ) exists which is simultaneously estimable and not shaming , it is legal .then we substitute . for marked nodes ,all states are estimable .if more legal states exist , takes the value which is most frequent in . if no legal states exist for , remain unchanged . for marked nodes ,the legal state is chosen randomly . to speed up the simulation , we start from one of marked nodes. + this part of algorithm is taken from , where the authors write : `` the symbols of higher level neighbors will be marked as legal only if they are not equal to the symbols of the lower level neighbors .the agent will modify its color if ( a ) no symbol of higher level neighbors is equal to its own or ( b ) one at least of the lower level neighbors is equal to it . in either case , the agent will randomly assume one of the remaining legal colors . ''the difference between our algorithm and the algorithm in is that we use the scale - free network and not the torus .also , in our approach high degree is equivalent to high social status , while in it is assigned randomly .this means that in , the structure of interacting nodes is not coupled with the status ; in our approach , this coupling is present .indications , that this coupling exists , abound in the literature ; for a review see .a set of scale - free networks is investigated , of =1000 and 5000 , =3 , 5 and 8 .for all networks , the number of options appears to be sufficiently large to observe the same behaviour as for all higher values of .namely , once a new value appears at a marked node , the number of nodes described with this value increases , later it decreases . inmost observed cases , finally it decreases to zero .this does not prevent a reappearance of this value in subsequent cycles .the cyclic behaviour , rise and fall of some values , is visualised in fig .+ on the contrary to this seemingly universal behaviour , for small the system behaviour is different .namely , for =5 the observed values never disappear : once a value is present , it will be present forever , and the fluctuations of its frequency remain relatively small .a typical example is shown in fig .as increases , a tendency towards smaller frequencies can be seen , and it prevails above =20 , where more and more symbols disappear . 
however , the change is not sharp , and the term `` crossover '' seems to be more appropriate than `` transition '' .+ we made an attempt to check if the time period of a typical `` rise and fall '' depends on .indeed , for this time is found to be about time steps , while for it is about time steps , i.e. more than 4 times smaller . herethe calculation was made for .( the time step is equivalent to a check of a pair of nodes . )however , we note that the distribution of is rather wide ; the statistics ( 413 cases for , 2251 in the same time length of the sample for ) does not allow to infer about the character of the plots , shown in fig .3 . yet we can see that the cycle length decreases with : more names , shorter the cycles .at the webpage of the u. s. social security administration , the statistics is available on the frequency of male and female given names in 1880 - 2011 .the datasets are provided also for particular states , but the latter is not analyzed here . from these data , we extracted the time dependences of number of events , when a child got a given name in a given year .these data show irregularities , which can not be atributed to random noise . actually , we found it fascinating to trace how the names of different kinds of celebrities are visible or not , appear and disappear over the course of the years .sometimes the relation to a given person seems obvious , as for the name charlie .the number of newborn boys with this name was almost constant before 1910 , but the data show an increase , more or less linear , from about 800 in 1909 to almost 2900 in 1919 .then , the plot started to decrease so slowly that the plateau about 500 was obtained not earlier than in 1970 .in other cases , an interpretation is less straightforward . in 1910 , a similar increase of popularity is observed for the name albert , from about 2000 to more than 10000 in 1921 . in 1910 , einstein was proposed for the first time as a candidate for the nobel prize .however , some contribution is possible also from albert i of belgium , who started his reign in 1909 .a study on history of american culture could bring some light in this matter . to end with more recent example , a sharp increase of number of small angelinas from 1000 to almost 6000 between 2000 and 2005does not leave doubts about its origin . + two facts about the data can be of interest for further research .first , the diversity of names tends to increase in time .this can be seen , for example , in the time dependence of the fragmentation index , defined as , where is the name index and is the percentage of babies with this name in -th year . the plot is shown in fig .it is remarkable that despite the overall tendency of decrease , a small maximum is observed in 1947 : again a question for cultural studies .the second fact is that the distribution of is close to the power law , and the quality of the fit does not worsen in time . in 2011 ,the exponent was 1.9 ( see fig .5 ) and its older values are not far from this .we add that in our fitting procedure the first point is purposefully omitted .this point refers to most frequent names .why it does not fit to the power law ?this question should be discussed together with the question about the origin of the power law itself .+ when compared with babies names , the data on skirt lengths are much less complete . 
in , we get the mean skirt length ratios to height of figure , in the uk and west germany .the data come from measurements of photos of day dress in two autumn magazines , littlewoods ltd and neckermann versand , in 1954 - 1990 .plots and tables are given also on the ratios of skirt width to height and on the standard deviations of skirt lengths and widths within the year in the same period .while the data on skirt lengths show approximately the same time dependence in both magazines , the ratio of width to height differ more clearly before 1970 .we note that the standard deviation for skirt length is the largest after 1980 , while the one for skirt width is the largest before 1962 . on the other hand , conclusions of an analysis of data from 1789 - 1980 suggest that according to the general trend , the within - year variance increase in time .+ earlier data reveal that the skirt length loses its discriminatory power in terms of today . in , we find data from fashion journals for 1845 - 1915 , where the accuracy is one milimeter .as we read there , the ratio of length of dress to height of figure was not smaller than 0.95 till 1912 , with a sudden fall to 0.842 in 1919 - what a come down ! in the same period the skirt diameter passed a variation from 55 cm to 110 cm , then to 30 cm . on the other hand , as we read in ( the data from 1860 to 1980 , four fashion journals ) , more shirt lengths were concurrently present and the measurements give only most typical results .in , the data are provided in a coarse - grained form ; for skirt length , the categories are as follows : train , floor , ankle , calf , cover knee , above knee .the last category was rather broad already in 1980 .there are five classic names , with their popularity top above 60 thousands babies born around 1946 - 1950 , with remarkable maxima also at 1920 or a few years later .these are : james , john , mary , robert and william . in the case of mary ,the top at 1920 is the highest .however , the top record belongs to linda ( see fig.6 ) .this name , popular already in 1946 ( 52 thousands ) , jumped to above 99 thousands in 1947 ; we believe that the effect is due to a popular song entitled linda , released in november 1946 .other names from top ten are : michael , david and jennifer with broad maxima in 1957 , 1960 and 1972 , respectively , and lisa : a sharper maximum in 1965 .+ in the next ten names , again we find peaks . in five of them ,the slope is larger on the left , as in lambda. four of them is more or less symmetric , and in only one ( patricia ) perhaps the slope is larger on the right .we stress that after inspecting a hundred of most popular names , we have always seen a structure which can not be attributed to uncorrelated fluctuations . in most cases ,an eye armed with the rayleigh criterion sees only one maximum .often , the lambda shape can be identified : an example is given in fig .7 , together with a calculated plot which can be considered as typical. from numerous associations to contemporary celebrities , we mention the shortest career of shirley , from 14316 newborn babies in 1933 to 35149 in 1936 , but less than 18 thousands already in 1940 . +our main goal is to connect the approach of pedone and conte with the reference data on fashion .indeed , the model captures the difference between the case of small ( say , 5 ) and large ( 30 or more ) . 
in the former case ,particular values of never disappear ; in the latter , clear cycles are observed , cf .1 vs fig.2 .this follows the difference between the length of skirts and the babies names . +a critical comment is that the skirt length is a continuous variable and the classification in literature ( above ankle , calf etc . )is arbitrarily rough .if we enter into details of a given female dress , we could find that the pattern used , for example , by two ladies in natchez in 1848 was never repeated in later history .on the other hand , if the length of names is used instead of name itself , the results on name sets could be the same as those for the skirt length .then , the above difference can be assigned not as to the difference between the data sets but rather to our rule of classification .it is even possible to draw an analogy of the phenomenon of fashion with the idea of micro- and macroscopic states .a continuation of his thread is , however , out of the scope of this text .+ we conclude that the combination of imitation and avoidance allows to interpret the dynamics of frequency of babies names . in this interpretation ,the rise and fall of popularity of a name is a consequence of social imitation and avoidance .we note that today , these changes of popularity are more abrupt that in times of simmel . then , the correspondence of his theory with the contemporary data was not as clear yet as it is today .the paper is dedicated to dietrich stauffer on the occasion of his future 70-th birthday .g. simmel , _ fashion _ , international quarterly 10 ( 1904 ) 130 .r. pedone and r. conte , _ the simmel effect : imitation and avoidance in social hierarchies _ , lnai 1979 ( 2000 ) 149 .r. pedone and r. conte , _ dynamics of status symbols and social complexity _ , social science computer review 19 ( 2001 ) 249 . c. castellano , m. marsili and a. vespignani , _ nonequilibrium phase transition in a model of social influence _ ,85 ( 2000 ) 3536 .k. sznajd - weron and j. sznajd , _ opinion evolution in closed community _ , intc 11 ( 2000 ) 1157 .g. deffuant , d. neau , f. amblard and g. weisbuch , _ mixing beliefs among interacting agents _ , adv .sys . 3 ( 2000 ) 87 .s. galam , _ sociophysics . a review on galam models _ ,c 19 ( 2008 ) 409 . c. castellano , s. fortunato and v. loreto , _ statistical physics of social dynamics _ , rev .( 2009 ) 591 .r. axelrod , _ the dissemination of culture : a model with local convergence and global polarization _ , j. conflict resolution 41 ( 1997 ) 203 .a. radillo - diaz , l. a. perez and m. del castillo - mussot , _ axelrod models of social influence with cultural repulsion _ ,e 80 ( 2009 ) 066107 .m. j. krawczyk and k. kulakowski , _ combinatorial aspect of fashion _ , submitted ( arxiv:1205.2251 ) .s. huet , g. deffuant and w. jager , _ a rejection mechanism in 2d bounded confidence provides more conformity _ , adv .sys . 11 ( 2008 ) 529 .b. dybiec , n. mitarai and k. sneppen , _ information spreading and development of cultural centers _ ,e 85 ( 2012 ) 056116 .the official website of the u. s. social security administration ( www.ssa.gov ) l. curran , _ an analysis of cycles in skirt lengths and widths in the uk and germany , 1954 - 1990 _ , clothing and textile res .j. 17 ( 1999 ) 65 .b. d. belleau , _ cyclical fashion movement : women s day dresses : 1860 - 1980 _ , clothing and textile res .j. 5 ( 1987 ) 15 .e. d. lowe and j. w. g. 
lowe , _ velocity of the fashion process in women s formal evening dress , 1789 - 1980 _ , clothing and textile res .j. 9 ( 1990 ) 50 .barabsi and r. albert , _ emergence of scaling in random networks _ , science 286 ( 1999 ) 509 s. p. borgatti and p.c. foster , _ the network paradigm in organizational research : a review and typology _ , j. of management 29 ( 2003 ) 991 .a. l. kroeber , _ on the principle of order in civilization as exemplified by changes of fashion _ , american anthropologist 21 ( 1919 ) 235 .
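for readers who want to experiment with the update rule of section 2 (estimable, shaming and legal states on a scale-free network), the following python sketch is one possible implementation. it is a sketch only: the network size, the number of options f, the tie-breaking rule for equally frequent legal values and the treatment of a marked node's own current value are choices of ours, not specifications taken from the paper.

```python
import random
import networkx as nx

def step(g, state, f):
    """one update of the imitation/avoidance rule: for a random node i, values
    held by neighbours of degree >= deg(i) are 'estimable', values held by
    lower-degree neighbours are 'shaming'; an estimable value that is not
    shaming is 'legal'.  a node with no higher-or-equal-degree neighbours is
    'marked' and treats all f values as estimable."""
    i = random.choice(list(g.nodes))
    up = [j for j in g[i] if g.degree[j] >= g.degree[i]]
    down = [j for j in g[i] if g.degree[j] < g.degree[i]]
    shaming = {state[j] for j in down}
    if up:                                       # ordinary node
        legal = [state[j] for j in up if state[j] not in shaming]
        if legal:                                # adopt the most frequent legal value
            state[i] = max(set(legal), key=legal.count)
    else:                                        # 'marked' (locally top) node
        legal = [v for v in range(f) if v not in shaming]
        if legal:
            state[i] = random.choice(legal)

g = nx.barabasi_albert_graph(1000, 3, seed=1)    # scale-free network, m = 3
f = 30                                           # number of options of the feature
state = {i: 0 for i in g}                        # all nodes start with value 0
for _ in range(200_000):
    step(g, state, f)
print("values still in use:", sorted(set(state.values())))
```

the final print is one way to probe the small-f / large-f crossover discussed above: counting how many of the f values survive after a long run.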
simulations of the simmel effect are performed for agents in a scale-free social network. the social hierarchy of an agent is determined by the degree of her node. particular features, once selected by a highly connected agent, become common in the lower classes but soon fall out of fashion and become extinct. numerical results reflect the dynamics of the frequency of american babies' names in 1880-2011. * the simmel effect and babies names * + m. j. krawczyk, a. dydejczyk and k. kułakowski + _ _ faculty of physics and applied computer science, agh university of science and technology, al. mickiewicza 30, pl-30059 kraków, poland + kulakowski.agh.edu.pl _ pacs numbers: _ 89.65.ef; 07.05.tp _ keywords: _ fashion, simulation, babies names, simmel effect
several pathogens , including _ plasmodium falciparum _ malaria and african trypanosomes , achieve immune escape by the so - called _ antigenic variation _( see a recent review by gupta ) .the latter essentially refers to a process by which a pathogen keeps changing its surface proteins , thus preventing antibodies from recognizing and destroying it .antigenic variation is achieved by exploiting a large repertoire of antigenic variants that differ in some of their epitopes .an important requirement here is that the variants must not be expressed all at the same time , as otherwise the resulting immune response will detect and destroy all of them , thus terminating the infection .here we examine a particular model of antigenic variation for _ p. falciparum _ malaria , put forward by recker __ . within this framework ,each variant is assigned one major epitope , which is unique to that variant , and also several minor epitopes that are shared between different variants .both types of epitopes elicit epitope - specific responses , but in the case of the minor epitopes these are cross - protective between variants that share them .a critical feature of the model is that the immune response to the major epitope ( uniquely variant - specific ) is long - lasting in comparison with the immune responses ( frequently cross - protective ) to the minor epitopes . under these conditions ,the dynamics may be characterized by sequential domination of different variants .thus , the conclusion of the model is that effectively the host immune system can itself be responsible for prolonging the malaria infection and causing chronicity . through numerical simulations and by analysing a caricature of the model involving complete synchrony between variants , recker and gupta have shown that stronger cross - protective immune responses lead to prolonged length of infection and reduced severity of the disease .this was explained by the conflicting interaction of cross - protective and variant - specific immune responses . in this paperwe perform a detailed study of this model with particular emphasis on stability aspects , as well as possible bifurcation scenarios .first , we consider the case when the variant - specific immune responses to the major epitopes do not decay . in this case , the phase space of the system possesses a very peculiar geometry with a high - dimensional surface of equilibria having different types of stability .we show , it is exactly this curious structure that causes successive re - appearance of different malaria variants in the dynamics until the specific immune responses reach sufficient protective level to prevent further appearance of given variants in the dynamics . if the specific immune responses can decay ( even slightly ) , the dynamics is qualitatively different , as the phase space geometry changes significantly .now it contains a large number of distinct equilibria with different number of non - zero variants , some of which can be related by the permutation symmetry of the system .an interesting tool for investigating the dynamics is the imposition of synchrony among the variants . from mathematical perspective , in the case of complete synchrony the dimension of the system is drastically reduced . here, we use the tools of synchronization theory to investigate the robustness of such state .the outline of this paper is as follows . 
in the next section the model of cross - reactive immune response to malariais introduced and its basic properties are discussed .section 3 contains the analysis of a particular case when the decay rate of a long - lasting immune response vanishes .numerical simulations will be presented that illustrate the behaviour of the system in this case .a general situation of arbitrary non - decaying specific immune responses is considered in section 4 . in section 5the stability of the fully symmetric state of the system is investigated by means of numerical computation of transverse lyapunov exponents .the paper concludes in section 6 with a discussion .in this section we use the above - mentioned multiple epitope description to introduce a model of the interaction of malaria variants with the host immune system .our derivation follows that of recker _ with some refinements .it is assumed that each antigenic variant consists of a single unique major epitope , that elicits a long - lived ( specific ) immune response , and also of several minor epitopes that are not unique to the variant .assuming that all variants have the same net growth rate , their temporal dynamics is described by the equation where and denote the rates of variant destruction by the long - lasting immune response and by the transient immune response , respectively , and index spans all possible variants .the dynamics of the variant - specific immune response can be written in its simplest form as with being the proliferation rate and being the decay rate of the immune response .finally , the transient ( cross - reactive ) immune response can be described by the minor modification of the above equation ( [ zeq ] ) : where the sum is taken over all variants sharing the epitopes with the variant .we shall use the terms long - lasting and specific immune response interchangeably , likewise for transient and cross - reactive . to formalize the above construction , one can introduce the adjacency matrix , whose entries are equal to one if the variants and share some of their minor epitopes and equal to zero otherwise .obviously , the matrix is always a symmetric matrix .prior to constructing this matrix it is important to introduce a certain ordering of the variants according to their epitopes . for this purpose we shall use the _ lexicographic _ ordering , as explained below . to illustrate this ,suppose we have a system of two minor epitopes with two variants in each epitope , which is the simplest non - trivial system of epitope variants . in this case, the total number of variants is four , and they are enumerated as it is clear that for a system of minor epitopes with variants in each epitope , the total number of variants is given by now that the ordering of variants has been fixed , it is an easy exercise to construct the adjacency matrix of variant interactions . for the particular system of variants ( [ var4 ] ) , this matrix has the form in general , for a system of two minor epitopes with variants in the first epitope and variants in the second , the matrix will be an block matrix consisting of blocks of ones along the main diagonal , with the rest of the matrix being filled with identity matrices . 
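the block structure just described can be generated programmatically; the short python sketch below (an illustration of ours) builds the variant-interaction matrix for two minor epitopes with m1 and m2 variants in the lexicographic ordering used above, with the all-ones diagonal blocks implying that a variant interacts with itself. the 2 x 2 case reproduces the four-variant matrix.

```python
import numpy as np

def adjacency(m1, m2):
    """variant-interaction matrix for two minor epitopes with m1 and m2 variants,
    in the lexicographic ordering used above: an m1 x m1 block matrix with
    all-ones blocks on the diagonal and identity blocks elsewhere."""
    ones, eye = np.ones((m2, m2), dtype=int), np.eye(m2, dtype=int)
    return np.block([[ones if i == j else eye for j in range(m1)]
                     for i in range(m1)])

print(adjacency(2, 2))
# [[1 1 1 0]
#  [1 1 0 1]
#  [1 0 1 1]
#  [0 1 1 1]]
```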
for simplicity , in the rest of the paper we will concentrate on the case of two minor epitopes , but the resultscan easily be generalized for arbitrary number of minor epitopes .using the adjacency matrix one can rewrite the system ( [ yeq])-([weq ] ) in a vector form where etc ., denotes a vector of the length with all components equal to one , and in the right - hand side of the first equation multiplication is taken to be entry - wise so that the output is a vector again . to better understand the symmetry of the system it is convenient to represent graphically relations between different variants .figure [ fig1 ] shows such relations in the case of two minor epitopes .one can observe that within each horizontal and each vertical stratum , the network of variants is characterized by an all - to - all " coupling . besides this, if the number of variants in both minor epitopes is the same , then there is an additional reflectional symmetry .formally this means the system is equivariant with respect to the following symmetry group this construction can be generalized in a straightforward way for a larger number of minor epitopes .it is noteworthy that each of the variants has exactly the same number of connections to other variants .we finish this section by noting that the system ( [ vs ] ) is well - posed , in that provided the initial conditions for this system are non - negative , the solutions satisfy , for all . + * remark . * in many cases it is reasonable to assume the initial conditions for the system ( [ vs ] ) to be of the form .a possible exception is when the immune system has already built - up a long - lasting response from prior exposure to a certain variant . in this case , the initial condition for the system ( [ vs ] ) will contain non - zero entries for some of variables .we begin our analysis of the system ( [ vs ] ) by considering a particular case of vanishing decay rate of the long - lasting immune response ( some partial results for this case have been obtained in ) . in this casethe only steady states of this system are given by this is a rather degenerate situation as the fixed points are not separated in the phase space , but rather form an -dimensional hypersurface with each point of it being a fixed point of the system ( [ vs ] ) .linearization near one such fixed point has the eigenvalues of multiplicity , zero of multiplicity , and the rest of the spectrum is given by the generalized eigenvectors of the zero eigenvalue correspond to the directions along the hypersurface of the fixed points .as long as there is at least one , the corresponding steady state is a saddle , otherwise it is a stable node . 
from the dynamical systems perspective , the case corresponds to the so - called _ bifurcations without parameters _ .indeed , in the space as one crosses the hyperplane , one of the eigenvalues crosses zero along the real axis .furthermore , since the hypersurface is in general high - dimensional , the cases of two or more eigenvalues crossing zero at the same time ( this happens along the lines ) are still generic , and these lead to `` bifurcations '' of a higher co - dimension .it is important to note that all these bifurcations are of the steady state type and there is no possibility of a hopf bifurcation that could lead to temporally periodic solutions .figure [ fig2 ] shows numerical simulations of a typical behaviour in the system ( [ vs ] ) for .these results were obtained by integrating the system ( [ vs ] ) using the variable order solver based on backward differentiation formulas to account for the stiffness of the system . initially most variants have quite high amplitudes ,but as the time progresses , their amplitudes decrease as illustrated in fig .[ fig2](a ) .figures ( b ) and ( d ) illustrate this feature in more detail by showing the dynamics of a single variant and its specific immune response . with each subsequent re - appearance of the variant , the specific immune response to it is building up , andultimately it reaches a protective level , which prevents this variant from ever re - appearing in the dynamics . as suggested by fig .[ fig2](c ) , sometimes more than one variant appear at the same time , and this is very good from the immune system perspective , as it allows simultaneous destruction of all of these variants. the question of synchronization between different variants will be investigated in section 5 .the fact that the system exhibits the jumps from one variant to another can be explained by the existence of the above - mentioned hypersurface of equilibria .when one of the variants decays , the trajectory approaches the neighbourhood of the hypersurface of equilibria , and since all points on this hypersurface are saddles of different dimensions , the trajectory is pushed away along the unstable manifold of one of these fixed points .this behaviour is reminiscent of that in the neighbourhood of a heteroclinic cycle , with the major difference being that in the present case the nodes of the cycle are not distinct but rather form a smooth hypersurface .there is a clear separation of time scales in the dynamics : the trajectories move quickly to / away from the invariant plane , and then they slowly move towards the hyper - axis before the next iteration . 
with time , the phase space excursions between subsequent returns to the equilibrium manifold become shorter ( they are restricted by the ever growing variables ) , and eventually all trajectories converge to a point .similar behaviour takes place in the phase coordinates of other variants , which all approach the point .the importance of such a point for understanding the dynamics has been previously highlighted , for instance , in the analysis of adaptive control systems , where it gave rise to bad point bifurcations at which the close - loop systems could never be stabilised .if during time evolution , the trajectory reaches the hyperplane for some , and at least one of or is different from zero , then this trajectory will escape the basin of attraction of the point and instead it will asymptotically converge to the hypersurface of equilibria with the value of without any further phase space excursions .this will happen provided the initial amount of a given variant is high enough .figure [ fig3 ] ( a ) shows in red the projection of the stable manifold of the point on the reduced phase space of a single variant together with a representative trajectory in blue . in the same figure a trajectory in green illustrates the scenario in which the protective level of immune response is reached within one parasitemia peak , and hence there are no further oscillations . in fig .[ fig3 ] ( b ) we show the close - up of the phase dynamics in the neighbourhood of the hypersurface of equilibria .one can clearly observe recurrent oscillations of parasitemia , during which the specific immune response is monotonically increasing until it reaches the protective level .next we would like to discuss the issues of peak dynamics and the threshold for chronicity , which have been previously studied in recker and gupta .the chronicity threshold is defined as the critical ratio of the variant destruction rates , such that if , then during the first peak the protective level of immunity will be reached , so that the system will display no further oscillations .there are several simplifying assumptions , which have to be made in order to derive analytical expressions for the solutions needed for the analysis of peak dynamics .first of all , it is reasonable to assume that all variants in the full system ( [ vs ] ) are identical , and therefore this system can be replaced by where is the number of connections for each variant , and , etc .the second assumption is that for a single parasitemia peak the cross - reactive immune response does not have time to decay , i.e. for a peak dynamics we have .this reduces the system to assuming zero initial conditions , which correspond to the absence of pre - existing specific or cross - reactive immune responses , the analytic expression for the solutions of the system ( [ fss ] ) can be found as ,}\\\\ \displaystyle{y(t)=\frac{c_1}{\beta}\left[1-\tanh\left(\sqrt{\frac{c_1\psi}{2}}(t+c_2)\right)^{2}\right ] , } \end{array}\ ] ] where the integration constants and are given by and is defined as initially , monotonically increases , until it reaches its peak of exactly at , after which is monotonically decreasing . due to the symmetry of the solution , at , has the same value as it had at the initiation of parasitemia peak . 
by considering the equation for , one can argue that if at the end of the parasitemia peak the combined specific and cross - reactive immune response has reached the protective level of , then this will prevent further oscillations .evaluating and at the end of parasitemia peak , we find the threshold for chronicity as if , then during the first peak should be sufficiently high to allow the build - up of protective immunity .conversely , if , then will be too low for protective immunity to be reached within one peak , and therefore the system will display further oscillations .it is important to note that the trajectories shown in red and blue in fig .[ fig3 ] satisfy the condition for chronicity , but still for these trajectories the protective level of immune response is not reached within a single parasitemia peak .the reason for this discrepancy is due to the fact that in the system describing the peak dynamics , the cross - reactive immune response does not decay , because if it did , then at the end of the parasitemia peak the combined immunity would be below the protective level , and therefore further oscillations would occur , as shown in fig .this also highlights the importance of initial conditions , and in particular , the initial amount of variants , which may play a crucial role in whether or not the protective immunity level will be reached within one peak for the same parameter values .in the previous section we considered the case , in which the long - lasting immune responses can only grow with time , unless they are saturated at the level preventing further re - emergence of particular variants . for ,the situation is drastically different as the hypersurface of equilibria no longer exists , and instead , it degenerates into two separate steady states .one of these is the origin , which is always a saddle with an -dimensional unstable manifold and a -dimensional stable manifold .the other steady state originating from the hypersurface of equilibria is the fully symmetric equilibrium where is the number of connections for each variant .using fig .1 , this number can easily be interpreted as the number of elements in the horizontal and vertical strata , to which the current variant belongs . when considered in the context of a reduced system ( [ fss ] ) , in which all variants are assumed to behave in the same manner ( see next section for further analysis of this case ) , this steady state is stable for all values of parameters , as shown in recker and gupta . at the same time , this result does not hold for the full system ( [ vs ] ) , as the stability of the fully symmetric equilibrium does depend on parameters of the system . 
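the simulations of section 3 can be reproduced qualitatively with a few lines of python. since the model equations are only described verbally in the text above, the sketch below assumes the standard form of the recker et al. system, y_i' = y_i(phi - alpha z_i - alpha' w_i), z_i' = beta y_i - mu z_i, w_i' = beta' sum_j T_ij y_j - mu' w_i, and integrates the four-variant case with mu = 0 using a stiff (bdf) solver; all parameter values and initial conditions are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# four variants (two minor epitopes, two variants each); equations assumed in
# the standard recker et al. form, with mu = 0 as in section 3.
T = np.array([[1, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
phi, alpha, alphap, beta, betap, mu, mup = 1.0, 1.0, 0.8, 0.01, 0.01, 0.0, 0.1

def rhs(t, s):
    y, z, w = s[:4], s[4:8], s[8:]
    return np.concatenate([y * (phi - alpha * z - alphap * w),
                           beta * y - mu * z,
                           betap * (T @ y) - mup * w])

y0 = np.concatenate([1e-4 * (1.0 + np.arange(4)), np.zeros(8)])  # unequal seeds
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="BDF", max_step=1.0)
for i in range(4):
    print(f"variant {i + 1}: largest peak at t = {sol.t[np.argmax(sol.y[i])]:.0f}")
```

the printed peak times give a quick check of whether, for a given parameter set, the variants rise one after another or together.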
more specifically, the fully symmetric equilibrium can undergo hopf bifurcation , thus giving rise to periodic occurrences of parasitemia peaks .figures [ fig4 ] ( a ) and ( b ) show the boundary of the hopf bifurcation in the parameter space of the system ( [ vs ] ) with two variants in each of the two minor epitopes .these figures indicate that the higher is the decay rate of variant specific immune response , the larger should be the values of the relative immune efficiency and that of a ratio of proliferation rates to guarantee the occurrence of the hopf bifurcation .the corresponding temporal evolution of variants in the parameter regime beyond the hopf bifurcation is illustrated in figures ( c ) and ( d ) .one can observe periodic oscillations of all four variants , which have approximately the same maximum amplitudes and are slightly out - of - phase with each other .the plot of the dynamics of one variant shown in fig .[ fig4 ] ( d ) indicates that peaks of parasitemia corresponding to this variant have decreasing amplitudes , and after several occurrences there are large periods of time when the variant is quiescent .cross - reactivity between different variants causes subsequent re - appearance of large - amplitude oscillations after such periods of quiescence .the reason for this is as follows .the long - lasting and cross - reactive immune responses show anti - phase oscillations , which are quite regular both in amplitude and in period .these oscillations lead to a slightly irregular oscillations of the combined rate of variant destruction .intervals of parasitemia peaks correspond to the combined variant destruction rate oscillating around the critical value of with long - lasting immune response increasing .after such intervals , the combined variant destruction rate stays above keeping the variant absent from the dynamics , and during this time the long - lasting immune response wanes , until it starts to recover during the next cycle .we emphasize that this dynamics can only occur in the case when the long - lasting immune response can decay , hence this feature could not be observed in the previously analysed case of .besides the origin and a fully symmetric equilibrium , the system also possesses steady states characterized by a different number of non - zero variants .one should notice that the symmetry of the system mentioned earlier implies that for a given number of non - zero variants , many of the corresponding steady states are symmetry - related . at the same time, one can identify several clusters of the steady states with different values of the steady states which can not be transformed into each other by a symmetry .for example , if we consider the system with two variants in each of the two minor epitopes , then there exist six steady states with two non - zero variants .introducing the notation the steady states with non - zero variants 12 , 13 , 24 and 34 form one cluster : while the steady states with non - zero variants 14 and 23 are in another cluster where and .all the steady states in the first cluster are related by permutation , and the steady states in the second cluster are also related by some permutation , but the steady states from the first cluster can not be related to those in the second cluster .the reason for this becomes clear if one more closely analyses the structure of the adjacency matrix given in ( [ mat4 ] ) . in the case of a steady state from the first cluster , both rows and of matrix contain ones in positions and ( i.e. 
the variants and cross - react with each other ) , while in the case of the second cluster the rows and contain only a single one in either position or position ( i.e. the variants and are completely unrelated ) . due to this difference the steady states from the two clusters are different and it is impossible to change from one cluster to another by permutation .as far as stability of the steady states different from the origin and the fully symmetric equilibrium is concerned , they all are saddles of different dimensions . even though they are unstable as steady states , it is possible for some of them to form some sort of a heteroclinic cycle . +* remark . * in the case when the number of malaria variants participating in the dynamics exceeds four , the symmetry of the system increases the co - dimension of the hopf bifurcation for the fully symmetric steady state .moreover , the purely imaginary eigenvalues at the hopf bifurcation would coincide , thus creating extra complications for the analysis by virtue of increasing the dimension of the centre manifold .some details of possible bifurcation scenarios in systems with an `` all - to - all '' coupling can be found in , and the extension of those results should provide an insight into the effects of symmetry on the dynamics of system ( [ vs ] ) .the complete analysis of these effects will be presented elsewhere .an interesting dynamical regime occurs when , by virtue of initial conditions or time evolution , the system behaves in such a way that all variants are indistinguishable from each other , in other words , the system is in a state of complete symmetry . in this case , the dimension of the system reduces drastically from to just three . as several insightful results have been obtained for this case , it is important to study how robust this state of complete symmetry is with respect to perturbations that attempt to break the symmetry . to characterize stability properties of the symmetric state one can use transverse lyapunov exponents , as is customary in the studies of synchronization , see , for instance . by analogy with synchronization theory we shall call the hypersurface of complete symmetry a _ symmetry manifold_. writing , one can split the total dynamics into that inside the symmetry manifold and the linearized dynamics in the transverse direction given by here is again the number of connections of a given variant , and denote zero and unity matrices , respectively .the minimal condition for the stability ( or robustness ) of the symmetric state is that the maximum lyapunov exponent associated with the system ( [ lin_dyn ] ) has negative real part . by solving equations ( [ lin_dyn ] ) in combination with ( [ reduced ] ) ,we determine the dependence of the leading lyapunov exponent on the system parameters . in figure [ fig5 ] we show the results of numerical simulations for the maximal transverse lyapunov exponent . 
in both plots we kept the rates of variant destruction equal to each other andalso the proliferation rates were taken to be the same .figure [ fig5 ] indicates that for small values of or , the fully symmetric state of the system is transversely unstable , as signified by the positive transverse lyapunov exponent .this means that in such parameter regime different variants will not synchronize in time , hence it is unlikely to observe the fully symmetric state in experiment .however , when the variant destruction rates / proliferation rates are increased , the fully symmetric state becomes transversely stable , i.e. independently on initial conditions for each particular variants , they will all ultimately follow the same time evolution . the increase in the decay rate ofthe specific immune response plays a stabilizing role , since it lowers the values of of the maximal transverse lyapunov exponent .the robustness of the fully symmetric state appears to be independent on the relative efficiency of immune responses .in this paper the temporal behaviour in a model of antigenic variation in malaria has been studied from a dynamical systems perspective . using the model of immune response to multiple epitopes, we have demonstrated that when the long - lasting immune response does not decay , the system possesses a high - dimensional surface of equilibria , and these exhibit steady - state bifurcation without parameters , i.e. some part of the surface of equilibria consists of saddles of different dimensions , while another part contains stable nodes .the existence of these two parts of the surface of equilibria with different stability properties accounts for the observed patterns of behaviour of malaria variants , when different variants exhibit out - of - phase parasitemia peaks that decay with time .if the initial amounts of all variant are not very large , then phase space excursions between successive re - appearances of the variants become shorter as the time grows , and eventually all trajectories approach the single steady state * t * characterized by all coordinates equal to each other and equal to the value at the boundary between the saddles and the nodes on the surface of equilibria .if , however , an initial amount of a given variant is sufficiently high , then a trajectory with such initial condition will escape the basin of attraction of the above - mentioned point * t * by reaching the protective level of long - lasting immune response to a given variant while having either a non - zero transient response to this variant or a non - zero amount of the variant itself . in this case, the eventual time evolution of the solution will be different in that it will also approach the surface of equilibria but now it will be above the critical protective level without any further excursions in the phase space .when both variant - specific and cross - reactive immune responses are allowed to decay with a certain rate , the dynamics are quite different . in this casethe surface of equilibria disintegrates , and instead the phase space of the system contains a large number of distinct fixed points many of which are related to each other by the permutation symmetry of the variants .at the same time , they may form separate clusters which are not related by symmetry . 
provided the decay rate of the specific immune response is high enough , the fully symmetric equilibrium will exhibit a hopf bifurcation , thus giving rise to periodic oscillations of the variants . these oscillations appear to be out - of - phase for different variants , and such oscillations are separated by extended time intervals during which the amount of a variant is very small . in order to investigate to what extent the results obtained in the approximation of complete symmetry between variants describe the general patterns of behaviour , we have numerically computed the transverse lyapunov exponents of the fully symmetric state . this analysis indicates that while the fully symmetric state is not robust to small perturbations for small proliferation / variant destruction rates , the robustness is restored as these rates increase . in this case the dynamics of the completely symmetric system faithfully represents that of the full original system . finally , we note that the robustness of complete synchronization between variants increases with the decay rate of the specific immune response . the authors would like to thank marty golubitsky , hinke osinga , oleksandr popovych and mario recker for useful discussions . they would also like to thank two referees for their comments and suggestions , which have helped to improve the presentation in this paper . recker , m. , nee , s. , bull , p.c . , kinyanjui , s. , marsh , k. , newbold , c. , gupta , s. : transient cross - reactive immune responses can orchestrate antigenic variation in malaria . _ nature _ * 429 * , 555 - 558 ( 2004 ) .
we examine the properties of a recently proposed model for antigenic variation in malaria which incorporates multiple epitopes and both long - lasting and transient immune responses . we show that in the case of a vanishing decay rate for the long - lasting immune response , the system exhibits the so - called `` bifurcations without parameters '' due to the existence of a hypersurface of equilibria in the phase space . when the decay rate of the long - lasting immune response is different from zero , the hypersurface of equilibria degenerates , and a multitude of other steady states are born , many of which are related by a permutation symmetry of the system . the robustness of the fully symmetric state of the system is investigated by means of a numerical computation of transverse lyapunov exponents . the results of this analysis indicate that for a vanishing decay rate of the long - lasting immune response , the fully symmetric state is not robust in a substantial part of the parameter space , and instead all variants develop their own temporal dynamics contributing to the overall time evolution . at the same time , if the decay rate of the long - lasting immune response is increased , the fully symmetric state can become robust provided the growth rate of the long - lasting immune response is sufficiently high .
wireless personal communication enables ubiquitous exchange of various data types such as voice , video , photos , and text among individuals .the emergence of new advanced systems such as the ieee 802.11ac and the 3gpp lte - advanced are expected to achieve additional data rates .of late , wireless communication industries have begun to discuss their scenarios serving machine - type communication devices such as meters / sensors as well as user equipments such as smart phones .these machine - to - machine ( m2 m ) communications have extensive applications , from monitoring environments to full electrical / mechanical automation ( e.g. smart grid , smart city , internet of things ) , which has been being considered as one of the most crucial technologies in future .the sensor network can also be regarded as a kind of m2 m , and there have been many studies in the form of ad - hoc networks .this paper only considers the environment with specific data collectors directly communicating with sensors .this environment is suitable when sensor nodes support only simple single - hop communication functionalities and deployment of many data collectors is easy .this type of m2 m communication is similar to cellular communication systems where the base stations serve user equipment within their coverage , but it has the unique characteristics : there can be a huge number of devices ( e.g. trillions ) each of which has only a small amount of data and a low activity , and their functionalities have to be simple .these characteristics may require technologies differentiated from the conventional high data rate human - to - human ( h2h ) communications .for example , machine - type devices such as meters and sensors need uplink resources intermittently for reporting measured or sensed data to their serving data collector , but it is hard to dedicate limited uplink resources to each .thus , simple random access can be considered as a solution for directly transmitting measured data or initially requesting uplink resources .the data collectors that receive many sensors measured data simultaneously can successfully decode only signals with signal - to - interference - plus - noise ratio ( sinr ) above a certain value . in order to keep a high success probability of many sensor nodes intermittent transmissions ,the system may need a lot of data collectors , and conventional macro / micro base stations may not be appropriate for these roles . in other words ,data collectors have to be easy to deploy and cost - effective .they support only simple functionalities and are interconnected with external networks through wired or wireless links .it can be considered that not only a new type of device for data collection is defined but also such devices as pico / femto base stations around sensor nodes play the role of data collectors .[ fig_system_model ] shows a system architecture with data collectors and sensor nodes . in this environment ,some questions are : how many data collectors are needed ?how much transmit power sensors have to use for successful transmission ? and , how the wireless channels affect the performance. this paper will provide answers to those questions through a stochastic analysis based on a spatial point process and on simulations .the main factor of determining system performance is the interference from neighbor sensor nodes .this interference depends on the spatial distribution and sensor - node access methods . 
because the spatial configurations of transmitting and receiving nodes can have enormous possibilities , it is impossible to consider each possibility .stochastic geometry provides a useful mathematical tool to model network topology , and it also enables analysis of essential quantities such as interference distribution and outage .this stochastic geometry has mainly been applied to pure ad hoc networks and their performance has been analyzed under the assumption of random transmitter location and receiver with fixed distances to its transmitter .this paper considers the environment where both transmitters ( sensor nodes ) and receivers ( data collectors ) are randomly deployed and each transmitter are served by the data collector nearest to it . - analyzed the distribution of signal - to - interference ratio ( sir ) or sinr in random cellular networks where both transmitter and receiver are randomly located ; analyzed the distribution of sir considering the path loss and shadowing , derived a simple - form sinr distribution in case of rayleigh fading and a path - loss exponent of four , and expanded the analysis results in into the results for a more general fading model including nakagami- fading .but , they assumed that each base station always has the user equipment within its coverage and communicates with a user equipment that is scheduled exclusively within one cell and focused on transmitter - centric coverage ( i.e. downlink ) . modeled cdma uplink interference power as a log - normal distribution using the moment - matching method . also , asymptotically analyzed uplink spectral efficiency in spatially distributed wireless networks , where the base stations have multiple antennas , by using infinite - random - matrix theory and stochastic geometry .the current system is similar to the uplink cellular systems , but this paper will only consider random access without any explicit scheduling for the simple functionalities of sensor nodes and data collectors .the three contributions of this paper are : first , an analysis shows how the channel affects the sir distribution for nakagami- fading .a simple form on the sinr distribution is found for some special channel models .second , an analysis describes how many data collectors per area are on average required to meet the outage probability for the given mean number of sensor nodes per area in case of rayleigh fading .third , this paper suggests a simple design method of the transmit power and the mean number of data collectors to meet the given outage probability .the remainder of this paper is organized as follows : section [ sec_model ] presents the system model based on a homogeneous poisson point process ( ppp ) .section [ sec_sinr ] analyzes the sir distribution for nakagammi- fading channels and the sinr distribution for rayleigh fading channels .section [ sec_density ] derives the intensity of data collectors required to keep the outage probability below a certain value and suggests a design method of the transmit power .section [ sec_results ] discusses numerical results . finally , section [ sec_conclusions ] concludes .a sensor node senses or measures environments and then transmits its data to the closest data collector .sensor nodes do not always have data to transmit but send them only when their sensing data are generated .for example , machines such as meters and event sensors may transmit data intermittently rather than continuously , and it has to be successful with probability above a certain value . 
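before formalising the model , it may help to see the scenario as a short monte carlo sketch . the ingredients used below — nearest - collector association , a path loss exponent , rayleigh fading and an activity factor — are exactly the ones introduced in the rest of this section ; the window size , the unit transmit power and all numerical values are illustrative assumptions of the sketch , not values taken from the paper .

```python
import numpy as np

def outage_mc(lam_s, lam_c, p_act, alpha, beta, noise=0.0,
              n_iter=20_000, win=2_000.0, seed=1):
    """Empirical outage probability for the uplink scenario of this section.

    Assumptions of this sketch: a square window of side `win` centred at a
    typical sensor placed at the origin, unit transmit power, unit-mean
    Rayleigh fading (exponential power gains) on every link, interference
    treated as noise, and no scheduling, so every other sensor transmits
    independently with probability p_act.
    """
    rng = np.random.default_rng(seed)
    area = win * win
    fails = 0
    for _ in range(n_iter):
        # data collectors: homogeneous PPP with intensity lam_c
        n_c = rng.poisson(lam_c * area)
        if n_c == 0:
            fails += 1
            continue
        coll = (rng.random((n_c, 2)) - 0.5) * win
        dist = np.hypot(coll[:, 0], coll[:, 1])
        serv = coll[np.argmin(dist)]          # nearest collector serves the typical sensor
        d_serv = dist.min()
        # simultaneously transmitting sensors: thinned PPP of intensity p_act * lam_s
        n_i = rng.poisson(p_act * lam_s * area)
        intf = (rng.random((n_i, 2)) - 0.5) * win
        d_i = np.hypot(intf[:, 0] - serv[0], intf[:, 1] - serv[1])
        interference = float(np.sum(rng.exponential(1.0, n_i) * d_i ** (-alpha)))
        denom = interference + noise
        sinr = np.inf if denom == 0.0 else rng.exponential(1.0) * d_serv ** (-alpha) / denom
        fails += int(sinr < beta)
    return fails / n_iter

# illustrative call (all numbers are made up for the sketch)
# print(outage_mc(lam_s=1e-3, lam_c=1e-4, p_act=0.01, alpha=4.0, beta=1.0))
```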
in order to model intermittent transmissions , the sensor node s activity is defined as .this value of is between and , and this paper considers its small values .meanwhile , data collectors that receive data from sensor nodes , are always ready to receive data from them .this paper considers environments where both of sensor nodes and data collectors are randomly deployed .sensor nodes are distributed according to a homogeneous ppp , , and they transmit sensed data to their nearest data collectors through random access schemes . denotes the intensity of sensor nodes that is the average number of them per area . in order to consider unplanned deployments of data collectors , the random locations of data collectorsare modeled as a homogeneous ppp , , with intensity , like sensor nodes .each sensor node transmits its data to a data collector closest to it , so a data collector builds a coverage based on voronoi tessellation , as shown in fig [ fig_voronitessellation ] . the standard power loss propagation model with the path loss exponent and the nakagami- fading model are considered . in the nakagami- fading model , , and model rayleigh fading , ricianfading with parameter , and no fading , respectively .also , it is assumed that all sensor nodes transmit with the same power .a typical data collector located on the origin receives the signal with from a typical sensor node when the distance between them is and the fading channel gain is . by slyvnyak s theorem ,interfering nodes except for a typical sensor node located on still constitute a homogeneous ppp with intensity .thus , the interference power of the link between a typical sensor node and a typical data collector can be expressed as where denotes the location of a interfering node and means the fading gain of a link between a typical data collector and a interfering sensor node . are independently and identically distributed ( i.i.d . )random variables . here, it is assumed that a typical data collector does not perform any scheduling for sensor nodes within coverage served by itself , so they may interfere with each other even though they are served by a common data collector . eventually , the link of a transmitter - receiver pair experiences interference from interfering nodes distributed according to a homogeneous ppp with effective intensity .if a sensor transmits using one of resources that is chosen at random , is . decreases as increases and this means that is also a parameter for the system design . in this paper, is assumed . when the interference is dealt with as noise and single antennais equipped on both transmitters and receivers , the sinr is given by where is the noise power and is equal to . in case of , ( [ eqn_sinrmodel ] )means the sir .this section analyzes the sinr distributions and derive simple - form sir or sinr distribution for some specific channels . for more generalization of results ,the fading gain distribution of ( [ eqn_generalfading ] ) is first considered . for some finite set and a finite integer set .this type of complementary cumulative distribution function ( ccdf ) includes a variety of fading - gain distributions such as exponential distribution , chi - square distribution and gamma distribution . [ lem_sinr_gneralfading ] _let sensor nodes and data collectors distributed with homogeneous ppp s with intensities and , respectively and each sensor node builds communication link with a data collectors closest to it . 
when the ccdf of the fading gain of a desired signal is given by ( [ eqn_generalfading ] ) and the fading gain of the interfering signal is denoted as a random variable , , the ccdf of sinr is given by _ _ where , is the expectation of and denotes the gamma function .the derivative in ( [ eqn_sinr_generalfading ] ) can be reexpressed as follow . _^j \frac{\partial^k}{\partial \zeta^k } \left[\lambda_s \xi(\zeta,\alpha)+\zeta \tilde{\sigma}^2 \right]^{l - j } \end{array}\ ] ] see appendix [ app_lem_sinr_gneralfading ] .the result of sinr distribution in lemma [ lem_sinr_gneralfading ] requires cumbersome integrations and differentiations , but simple - form result can be obtained for specific channel models . to begin with, an analysis considers nakagami- fading channel .the received signal power experiencing nakagami- fading channel can be modeled using gamma distributions .thus , assuming that desired signals experience nakagami- fading while interfering signals experience nakagami- fading , the ccdf of fading gains can be give by ( [ eqn_nakagamifading_s ] ) and ( [ eqn_nakagamifading_i ] ) have the forms of ( [ eqn_generalfading ] ) , so the sinr distribution can be derived by using lemma [ lem_sinr_gneralfading ] .generally , lemma [ lem_sinr_gneralfading ] requires the calculation of a derivative in ( [ eqn_sinr_generalfading_derivative ] ) and it is too complex to calculate it for any , and .fortunately , it is possible to obtain a simple form for the ccdf of sinr under interference limited environments , i.e. . [ pro_sir_nakagami ] _ let sensor nodes be randomly located with intensity and served by the nearest data collectors randomly deployed with intensity intensity . when their links experience nakagami- fading given by ( [ eqn_nakagamifading_s ] ) and ( [ eqn_nakagamifading_i ] ) , and , the ccdf of sir is given by _^l \end{array}\ ] ] _ where and ] for and . 
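for the rayleigh special case the expressions above can be evaluated in a few lines . the sketch below integrates the ccdf over the serving distance and compares it with the simple closed - form ratio it collapses to ; the interference factor used here , xi ( z , a ) = pi * gamma ( 1 + 2 / a ) * gamma ( 1 - 2 / a ) * z ** ( 2 / a ) for unit - mean exponential interference powers , is our reading of the lemma and should be treated as an assumption of the sketch rather than a restatement of the paper s formulas . a small helper anticipating the design question of the next section — inverting the ratio for the collector intensity needed to meet a target outage — is included as well .

```python
import numpy as np
from math import gamma, pi
from scipy.integrate import quad

def sir_ccdf_rayleigh(beta, lam_s, lam_c, alpha):
    """P[SIR >= beta] with Rayleigh fading on all links (interference limited).

    Sketch assumptions (alpha > 2): serving distance density
    2*pi*lam_c*r*exp(-pi*lam_c*r^2), interferers an unthinned PPP of intensity
    lam_s, and Laplace-transform factor xi(z,a) = pi*Gamma(1+2/a)*Gamma(1-2/a)*z**(2/a).
    """
    c = gamma(1.0 + 2.0 / alpha) * gamma(1.0 - 2.0 / alpha)

    def integrand(r):
        xi = pi * c * (beta * r ** alpha) ** (2.0 / alpha)
        return 2.0 * pi * lam_c * r * np.exp(-pi * lam_c * r ** 2 - lam_s * xi)

    numeric, _ = quad(integrand, 0.0, np.inf)
    closed_form = lam_c / (lam_c + lam_s * c * beta ** (2.0 / alpha))
    return numeric, closed_form

def required_collector_intensity(lam_s_eff, beta_t, eps_t, alpha):
    """Collector intensity keeping the outage at or below eps_t.

    Obtained by inverting the closed-form ratio above (same assumptions);
    lam_s_eff is the effective intensity of *simultaneously transmitting*
    sensors, i.e. activity and random-access resources already folded in.
    """
    rho = gamma(1.0 + 2.0 / alpha) * gamma(1.0 - 2.0 / alpha) * beta_t ** (2.0 / alpha)
    return lam_s_eff * rho * (1.0 - eps_t) / eps_t

# example with illustrative numbers: target outage 5% at beta_t = 0 dB, alpha = 4
# print(required_collector_intensity(1e-5, beta_t=1.0, eps_t=0.05, alpha=4.0))
```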
here , .proposition [ pro_sinr_rayleigh ] is the result for .when is not , the ccdf of sinr can be expressed by generalized hypergeometric functions .but they are not simple , so this paper does not deal with them .when sensor nodes are spatially distributed according to a homogeneous ppp with a certain intensity , it is important to decide how many data collectors should be deployed in order to keep the success probability of random accesses above a certain value .this section analyzes the requirement of the intensity of data collectors deployed at random , and the effect of channels on its required intensity , given the intensity of sensor nodes and a target outage probability .the outage probability , , is defined as where is the minimal sinr value required for the successful receptions .the required intensity of data collectors for rayleigh fading is presented in corollary [ cor_density_rayleigh_sir ] and proposition [ pro_density_rayleigh_sinr ] .[ cor_density_rayleigh_sir ] _ let sensor nodes randomly located with intensity and served by the nearest data collectors .it is assumed that all links experience rayleigh fading with unit mean and .the necessary and sufficient condition of the intensity of data collectors randomly deployed , , for keeping the outage probability below , is _ _ where is defined in corollary [ cor_sir_rayleigh ] ._ ( [ eqn_density_rayleigh_sir ] ) can be directly derived from ( [ eqn_sir_rayleigh ] ) .[ pro_density_rayleigh_sinr ] _ let sensor nodes randomly located with intensity and served by the nearest data collectors .it is assumed that all links experience rayleigh fading with unit mean and .the sufficient condition of the intensity of data collectors randomly deployed , , for keeping the outage probability below , is _\lambda_s \end{array}\ ] ] _ where ._ see appendix [ app_pro_density_rayleigh_sinr ] .the condition of in ( [ eqn_density_rayleigh_sir ] ) under interference - limited environments is necessary and sufficient while the condition in ( [ eqn_density_rayleigh_sinr ] ) under environments with non - neglectable noise is just sufficient .in fact , ( [ eqn_density_rayleigh_sinr ] ) has been derived from a lower bound of the complementary error function .but , for small values of , ( [ eqn_density_rayleigh_sinr ] ) also gives a tight lower bound of , and in particular , ( [ eqn_density_rayleigh_sinr ] ) is the same as ( [ eqn_density_rayleigh_sir ] ) with when . here , given , , and , the design method of the transmit power ( ) of sensor nodes and the intensity ( ) of data collectors is suggested , for a path loss exponent of four .the relations among these variables are given by ( [ eqn_sinr_rayleigh ] ) , but it is not easy to use ( [ eqn_sinr_rayleigh ] ) directly for the design of and . on the contrary , the lower bound of the ccdf of sinr with a simpler form of ( [ eqn_proof_pro_density_rayleigh_sinr_lb ]) can give a simple design method for them .the lower bound of sinr ccdf in ( [ eqn_proof_pro_density_rayleigh_sinr_lb ] ) is equivalent to the intensity condition of data collectors of ( [ eqn_density_rayleigh_sinr ] ) and the second term within a square root in ( [ eqn_density_rayleigh_sinr ] ) approximately models the effect of noise .now , the transmit power and the intensity of data collectors can be separately designed .first , for neglecting the noise effect , the second term within the square root of ( [ eqn_density_rayleigh_sinr ] ) has to be much smaller that .the definition of gives a condition of the transmit power . 
where ( a ) follows from the inequality of arithmetic and geometric means and the definition of .thus , the transmit power can be set to where is a constant much less than one and it is a design parameter . has to be set not only to neglect the noise power but also to keep transmit power as small as possible for sensor node s power saving .next , the intensity of data collectors can be designed according to ( [ eqn_density_rayleigh_sir ] ) because the intensity condition ( [ eqn_density_rayleigh_sinr ] ) is almost equal to ( [ eqn_density_rayleigh_sir ] ) if is set by ( [ eqn_txpw_design ] ) . in this design ,the transmit power is reciprocally proportional to and this means that the longer the distance among sensor nodes is , the larger the required transmit power is , because of the noise effect , when the intensity of data collector is determined by ( [ eqn_density_rayleigh_sir ] ) . even though this design method is very simple , but it gives a good design method for the random deployment of data collectors to serve randomly distributed wireless sensors .its performances will be shown in section [ sec_results ] . in interference - limited environments with rayleigh fading channels ,corollary [ cor_density_rayleigh_sir ] shows the effect of the path loss exponents obviously . because the function is a increasing function of , it is obvious that the required density of data collectors decreases as the path loss exponent increases , when and for given and , from the definition of and ( [ eqn_density_rayleigh_sir ] ) . on the contrary ,it is not easy to express the required intensity of date collectors in case of the nakagami- fading with general s , in a simple form . here, the effect of wireless channels on system designs is analyzed by comparing the performances to those of rayleigh fading , rather than deriving their requirements exactly , only when .when deploying data collectors with intensity for randomly distributed sensor nodes with intensity , let and denote the outage probabilities for the rayleigh fading model and the another examined - fading model for the required sir , respectively .first , in case of reference channel model assuming the rayleigh fading , the and have the following relation from ( [ eqn_sinr_rayleigh ] ) . on the other hand , in case of the examined - fading channel , is defined as where is derived from ( [ eqn_sir_nakagami ] ) . in other words , ( [ eqn_density_effect_nakagami ] ) means that the deployment of data collectors with in the nakagami- fading channel is equal to the deployment of data collectors with in the rayleigh fading channel in term of outage probability .hence , quantifies the effect of wireless fading channels on the system design and is simplified from ( [ eqn_density_effect_rayleigh ] ) and ( [ eqn_density_effect_nakagami ] ) , as follows .this section evaluates and discusses the performance of systems with data collectors randomly deployed to serve randomly distributed wireless sensors , based on results of section [ sec_sinr ] and section [ sec_density ] .it is assumed that the total intensity of sensor nodes ( ) spatially distributed according to a homogeneous ppp is . also , is set to and it means that the sensor nodes awake on average every sec ( about 17 minutes ) , when they transmit data to data collectors during msec on each awake mode . 
also , the minimal sinr value ( ) required for the successful reception of db is considered .[ fig_sinrcdf_rayleigh ] shows the cdf of sinr according to and .this can be interpreted as the outage probability for which is a value on x - axis . s ( or ) of db and db are assumed .these values mean that the transmit powers of sensor nodes are dbm ( mw ) and dbm ( mw ) , when the power spectral density of the noise is dbm / hz and the bandwidth is mhz .[ fig_sinrcdf_rayleigh ] indicates that analysis results in ( [ eqn_sir_rayleigh ] ) and ( [ eqn_sinr_rayleigh ] ) definitely coincide with the simulation results .when is db , the s of and result in the outage probabilities of and , respectively .as increases , outage probability decreases .in other words , larger intensity of data collectors and higher transmit power lead to less outage probability .[ fig_effectnoise_rayleigh ] and fig .[ fig_effecttxpower_rayleigh ] explain these effects more quantitatively . in fig .[ fig_effectnoise_rayleigh ] , the outage probability decreases as the intensity of data collectors increases , and their required intensity can be obtained for a given outage probability . also its lower bound by ( [ eqn_density_rayleigh_sinr ] ) is shown .the lower bound of in ( [ eqn_density_rayleigh_sinr ] ) is tighter when the effect of noise is reduced .the effect of noise on outage probability decreases as increases .this is because the increase of leads to the increase of received snr because of the decrease in distances between data collectors and sensor nodes .[ fig_effecttxpower_rayleigh ] shows how the transmit power of sensor nodes affect the outage probability .the results of fig .[ fig_effecttxpower_rayleigh ] were evaluated by changing the intensity of sensor nodes for given relative intensities of data collectors . in other words, it shows the effect of noise by changing the geometric size of networks .the larger geometric size , i.e. larger distances between sensor nodes and data collectors , leads to the bigger effects of noise on the system performance .these results also verify that the design of the transmit power not only reduces the noise effect but also keeps the transmit power as small as possible . also , under the environments of fig .[ fig_sinrcdf_rayleigh ] , the design by ( [ eqn_txpw_design ] ) with provides the transmit power of dbm ( i.e. db ) , and it is observed that db approaches the performance of in fig .[ fig_sinrcdf_rayleigh ] .these results confirm that ( [ eqn_txpw_design ] ) is a very efficient design method .the path loss exponent is another crucial factor to have an effect on system performances . as fig .[ fig_sinrcdf_rayleigh_pathlossexp ] indicates , they result in very different performance for the same transmit power . at db ,the noise can be neglected in case of a pathloss exponent while it causes severe performance degradation in case of a pathloss exponent . by contrast ,when the noise effect can be neglected , larger s result in less outage probability for given and .in fact , for given and , the required for a pathloss exponent of increases by the factor of , compared to a pathloss exponent of when , where is defined in corollary [ cor_density_rayleigh_sir ] . 
for example , when , the path loss exponents of and requires and times of the intensity of data collectors for the path loss exponent of .[ fig_sinrcdf_nakagami ] - fig .[ fig_effectchannel_density ] examine the performance for nakagami- fading channels .[ fig_sinrcdf_nakagami ] explains how the line - of - sight factors of fading channels contribute to the sinr distribution .the increase in results in the decrease in outage probability .but , more than two does not have an big effect on the performance , compared to equal to two .[ fig_sinrcdf_nakagami ] also indicates that analysis results exactly coincide with simulation results when considering that the performance of db is as good as that of .[ fig_effectchannel_nakagami ] shows the effect of channels on outage probability under the interference - limited environments .the outage probability decreases as and the pathloss exponent increase .it means that the rician fading and awgn environments need less intensity of data collectors than the rayleigh fading environments for the same path loss exponent .moreover , from this figure , the intensity of data collectors required to meet a certain outage probability can be obtained .[ fig_effectchannel_density ] examines the relative effect of other fading channels compared to the rayleigh fading channel in term of the intensity of data collectors , which is defined in ( [ eqn_density_gain ] ) .it shows that and has a big effect on the system design such as the deployment of data collectors .so far , this paper analyzed and discussed the effect of the wireless channels , the transmit power and the intensity of data collectors on system performances when data collectors are randomly deployed to successfully collect the data from randomly - located sensor nodes .as the number of wireless nodes increases enormously in future , it is more and more difficult to design the system . for reducing these difficulties , efficient system design methodsis required to deal with a huge number of wireless nodes , so the rigorous understanding about the spatial distribution and effect of interference will be basics for them . even though this paper has considered only simple random access , these results will be able to be used as basic models for developing more sophisticated spatial resource management methods .this paper has considered the environment where receivers ( data collectors ) as well as transmitters ( sensor nodes ) are randomly deployed and each transmitter is served by the receiver nearest to it . in networktopology modeled by homogeneous poisson point processes , analysis and simulation results showed the sinr distribution , and a simple design method of transmit power was suggested . under interference - limited environments , the larger the path loss exponent and the portion of line - of - site factors were , the less the outage probability was . 
on the contrary , under non - neglectable noise environments ,the large path loss exponent caused severe performance degradation .moreover , the intensity of data collectors required to keep the outage probability above a certain value was derived , and it depends on required outage probability , an intensity of sensor nodes , a fading channel model , a path loss exponent and noise power .this required intensity helps to design such parameters as the amount of wireless resources and the access probability for medium access control .random access scheme is very simple and does not cause control - overhead problems even under environments with a huge number of sensor nodes , but its required intensity of data collectors is never small .thus , it is needed to find more sophisticated spatial resource management schemes and the result of this paper may be used as a basic model for them .this proof is similar to proof of theorem 1 in that has considered the transmitter - centric coverage ( or downlink ) and only the transmitter intensity . here, an analysis focuses on the receiver - centric coverage by data collectors ( or uplink ) and allows that multiple transmitters within the service area of a common data collector simultaneously transmit . for those differences and the completeness, this paper provides the full derivation of the ccdf of sinr .the probability that there is a data collector at a distance of from a typical sensor node is . for this data collector to be a serving data collector of a typical sensor node, all other data collectors must be farther than from a typical sensor node , and its probability is .thus , the probability density function of the distance between a typical sensor node and its serving data collector , , is equal to .the ccdf of sinr is where . from ( [ eqn_generalfading ] ) , ) \cdot \right. \\ \hspace{2.5 cm } \left .\sum_{k \in \mathcal{k } } a_{nk}(\beta r^\alpha [ i_r + \tilde{\sigma}^2])^k \right\ } \\ = \sum_{n \in \mathcal{n}}\sum_{k \in \mathcal{k } } a_{nk } ( \beta r^\alpha)^k \cdot \\ \hspace{2.5cm}\mathrm{e}_{i_r}\left\ { ( i_r + \tilde{\sigma}^2)^k \exp(-n\beta r^\alpha [ i_r + \tilde{\sigma}^2 ] ) \right\ } \\ \stackrel{\mathrm{(a)}}{= } \sum_{n \in \mathcal{n}}\sum_{k \in \mathcal{k } } a_{nk } ( -\beta r^\alpha)^k \left.\frac{d^k \mathrm{e}\{\exp\left(-\zeta(i_r+\tilde{\sigma}^2)\right)\ } } { d\zeta^k } \right|_{\zeta = n \beta r^\alpha } \\ \stackrel{\mathrm{(b)}}{= } \sum_{n \in \mathcal{n}}\sum_{k \in \mathcal{k } } a_{nk } ( -\beta r^\alpha)^k \left.\frac{d^k \mathcal{l}_{i_r } ( \zeta ) \exp(-\zeta \tilde{\sigma}^2 ) } { d\zeta^k } \right|_{\zeta = n \beta r^\alpha } \end{array}\ ] ] where ( a ) and ( b ) follow from the definition of laplace transform , , its property , , and the independence of and .the laplace transform of is v dv \right ) \\ & \stackrel{\mathrm{(d)}}{= } \exp\left ( -2\pi\lambda_s \cdot \right . 
\\ & \hspace{1.3 cm } \left .\int_{0}^{\infty } \left(\int_{0}^{\infty}[1- \exp(-\zeta v^{-\alpha } g ] v dv \right ) f_{g_i}(g ) dg \right ) \\ & \stackrel{\mathrm{(e)}}{= } \exp\left ( -\frac{2\pi\lambda_s \zeta^{\frac{2}{\alpha}}}{\alpha } \gamma\left(-\frac{2}{\alpha}\right ) \int_{0}^{\infty } g^{\frac{2}{\alpha } } f_{g_i}(g ) dg \right ) \\ &\stackrel{\mathrm{(f)}}{= } \exp\left ( -\lambda_s \xi(\zeta,\alpha ) \right ) \end{array}\ ] ] where ( c ) follows from the probability generating functional ( pgfl ) of the ppp ; ( d ) uses the probability density function of a random variable ; ( e ) follows from the change of variable and the definition of the gamma function ; ( f ) follows from the property of gamma function and the definition of . by substituting ( [ eqn_proof_lem11_gs_ccdf ] ) and ( [ eqn_proof_lem11_l_ir ] ) into ( [ eqn_proof_lem11_sinr ] ) , ( [ eqn_sinr_generalfading ] )is derived .also , ( [ eqn_sinr_generalfading_derivative ] ) is obtained from the following equation which can be derived by the derivative of the exponential function and the chain rules : where denotes .the fading gain of nakagami- fading channel given in ( [ eqn_nakagamifading_s ] ) can be reexpressed as so , the nakagami- fading is the case of , and .thus , where is defined as .when , the derivative ( [ eqn_sinr_generalfading_derivative ] ) is calculated into ^j \frac{\partial^k}{\partial \zeta^k } \left[\lambda_s \pi \zeta^{\frac{2}{\alpha } } c(m_i,\alpha ) \right]^{l - j } \\ = \exp \left(-\lambda_s \pi \zeta^{\frac{2}{\alpha } } c(m_i,\alpha ) \right ) \sum_{l=0}^{k } \frac{1}{l!}\sum_{j=0}^{l}(-1)^{l+j}\binom{l}{j } \\ \hspace{1 cm } [ -\lambda_s \pi c(m_i,\alpha)]^{l } \left [ \prod_{i=0}^{k-1 } \left(\frac{2}{\alpha}(l - j)-i\right ) \right ] \zeta^{\frac{2}{\alpha}l - k } \end{array}\ ] ] from ( [ eqn_nakagamifading_s_reform ] ) and ( [ eqn_sir_nakagami_derivative ] ) , ( [ eqn_sinr_generalfading ] ) is ^{l } \cdot \\ \hspace{1 cm } \delta_{k , l } \cdot \exp\left(-[\lambda_s \pi c(m_i,\alpha)(m_s\beta)^{\frac{2}{\alpha } } + \lambda_c \pi]r^2\right ) r dr \\ \stackrel{\mathrm{(b)}}{= } 2 \pi \lambda_c \sum_{k=0}^{m_s-1 } \frac{(-1)^k}{k ! } \\ \hspace{1 cm } \sum_{l=0}^{k } \frac{(-1)^l}{l ! } \left [ \lambda_s \pi c(m_i,\alpha)(m_s\beta)^{\frac{2}{\alpha } } \right]^l \delta_{k , l } \\ \hspace{1 cm } \int_{0}^{\infty } \exp\left ( -[\lambda_s \pi c(m_i,\alpha)(m_s\beta)^{\frac{2}{\alpha } } + \lambda_c \pi]r^2 \right)r^{2l+1}dr \\\stackrel{\mathrm{(c)}}{= } 2 \pi \lambda_c \sum_{k=0}^{m_s-1 } \frac{(-1)^k}{k ! } \\ \hspace{1 cm } \sum_{l=0}^{k } \frac{(-1)^l}{l ! } \left [ \lambda_s \pi c(m_i,\alpha)(m_s\beta)^{\frac{2}{\alpha } } \right]^l \delta_{k , l } \\ \hspace{1 cm } \left ( \frac{1}{2 } \left [ \lambda_s \pi c(m_i,\alpha)(m_s\beta)^{\frac{2}{\alpha}}+\lambda_c \pi \right]^{-l-1 } \gamma(l+1)\right ) \\\frac{\lambda_c}{\lambda_c + \lambda_s c(m_i , \alpha ) ( m_s\beta)^{\frac{2}{\alpha } } } \sum_{k=0}^{m_s-1 } \frac{1}{k ! 
} \cdot \\ \hspace{1 cm } \sum_{l=0}^{k } ( -1)^{k+l } \delta_{k , l } \left[\frac{\lambda_s c(m_i , \alpha ) ( m_s\beta)^{\frac{2}{\alpha}}}{\lambda_c + \lambda_s c(m_i , \alpha ) ( m_s\beta)^{\frac{2}{\alpha } } } \right]^l \end{array}\ ] ] where ( a ) follows from the definition of in proposition [ pro_sir_nakagami ] , ( b ) follows from the interchange of a summation and an integration , ( c ) follows from the calculation of the integral part by the definition of the gamma function , and ( d ) follows from the property of the gamma function for a nonnegative integer .let and .( [ eqn_sinr_rayleigh ] ) can be expressed as where ( a ) follows from the lower bound of the complementary error function . from ( [ eqn_sinr_rayleigh ] ) and ( [ eqn_proof_pro_density_rayleigh_sinr_erfclb ] ) , where ( b ) follows from the definition of and .( [ eqn_proof_pro_density_rayleigh_sinr_lb ] ) is rewritten into \lambda_c\\ \hspace{2 cm } - ( 1-\varepsilon_t)(k^2 \beta_t \lambda_s^2 + 2 \beta_t \tilde{\sigma}^2 ) \geq 0 \end{array}\ ] ] which is a quadratic inequality with the form of where and for .thus , ( [ eqn_proof_pro_density_rayleigh_sinr_quadeq ] ) gives a positive lower bound of . by solving the inequality ( [ eqn_proof_pro_density_rayleigh_sinr_quadeq ] ) for a variable , ( [ eqn_density_rayleigh_sinr ] )is derived .r. v. kulkarni , a. forster , and g. k. venayagamoorthy , computational intelligence in wireless sensor networks : a survey , _ ieee communications surveys & tutorials _ , vol . 13 , no . 1 ,68 - 96 , feb .2011 m. haenggi , j. g. andrews , f. baccelli , o. dousse , and m. franceschetti , stochastic geometry and random graphs for the analysis and design of wireless networks , _ ieee journal on selected area in communications _27 , no . 7 , pp .1029 - 1046 , sep .2009 j. g. andrews , r. k. ganti , m. haenggi , n. jindal , and s. weber , a primer on spatial modeling and analysis in wireless networks , _ ieee communications magazine _ , vol .48 , no . 11 , pp . 156 - 163 ,a. m. hunter , j. g. andrews , and s. p. weber , `` transmission capacity of ad hoc networks with spatial diversity '' , _ ieee transactions on wireless communications _ ,vol . 7 , no . 12 , pp . 5058 - 5071 , dec .2008 m. nakagami , the -distribution- a general formula of intensity distribution of rapid fading , in _ statistical methods in radio wave propagation _ ( w. g. hoffman , ed . ) , pp . 3 - 36 , pergamon press , oxford , england , 1960
recently , wireless communication industries have begun to extend their services to machine - type communication devices as well as to user equipments . such machine - type communication devices as meters and sensors need intermittent uplink resources to report measured or sensed data to their serving data collector , but it is hard to dedicate limited uplink resources to each of them . simple random access may thus be considered as a solution for efficiently serving a tremendous number of devices with low activity . the data collectors receiving the measured data from many sensors simultaneously can successfully decode only signals with a signal - to - interference - plus - noise ratio ( sinr ) above a certain value . the main design issues for this environment become how many data collectors are needed , how much power sensor nodes should transmit with , and how wireless channels affect the performance . this paper provides answers to those questions through a stochastic analysis based on a spatial point process and on simulations . m2 m , stochastic geometry , spatial reuse , outage probability , network design , poisson point process .
for the past 90 years , a number of methods have been proposed to calculate the electrostatic potential in ionic crystals . these methods can be separated into two categories , the direct summation methods and the indirect summation ones . the former uses a real space summation of the electrostatic potential generated by the ions within a finite volume ( ) . however , when enlarging the volume , such partial summations are conditionally convergent . the convergence depends on the specific shape of . in addition , when achieved , the convergence is quite slow . the indirect summation methods do not present these drawbacks since the long range part of the potential is calculated in the reciprocal space . indeed , the summation is divided into two parts , a short range one , evaluated by a direct summation in real space , and a long range one , evaluated in the reciprocal space . among these methods , the most widely used is the ewald s method , which is actually considered as the reference for madelung potential calculations . despite its quality , the ewald s method is not easily usable in different domains of physics . this is for instance the case in cluster ab - initio calculations used for the treatment of strongly correlated systems , the study of dilute defects in materials or adsorbates , or for qm / mm types of calculations . for these types of calculations , real space direct summation methods are used . there is thus a need for efficient and accurate techniques for the determination of the madelung potential in real space . the convergence problems found in real space summation are linked to the shape of the summation volume , , and more specifically to the charges at its surface . in order to ensure the convergence of the summation , the surface charges are renormalized . several methods have been proposed for this purpose . the most common and simplest one is the evjen s method . this method uses a volume , built from a finite number of crystal unit cells , and renormalizes the surface charges by a factor 1/2 , 1/4 or 1/8 according to whether the charge belongs to a face , edge or corner of . this method ensures , in most cases , the convergence of the electrostatic potential when increases . however , in some cases such as the famous it does not converge to the proper value .
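as a concrete illustration of the evjen recipe , the sketch below applies the 1/2 , 1/4 , 1/8 surface weights to the rocksalt ( nacl ) lattice , one of the cases where the method does converge ; the sum approaches the accepted madelung constant of about 1.7476 in magnitude already for small cubes . the structure , units and loop bounds are choices made for the example only .

```python
import numpy as np

def evjen_madelung_nacl(n):
    """Evjen-weighted direct sum for the rocksalt (NaCl) structure.

    Unit charges of alternating sign sit on the integer lattice inside the
    cube [-n, n]^3; a charge on a face, edge or corner of the cube is
    weighted by 1/2, 1/4 or 1/8 respectively.  The returned value is the
    potential at the origin in units of q/(4*pi*eps0*d), with d the
    nearest-neighbour distance; its magnitude is the Madelung constant.
    """
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                w = 1.0
                for c in (i, j, k):            # Evjen surface renormalisation
                    if abs(c) == n:
                        w *= 0.5
                sign = -1.0 if (i + j + k) % 2 else 1.0
                total += sign * w / np.sqrt(i * i + j * j + k * k)
    return total

for n in (2, 4, 8):
    # converges quickly towards about -1.7476 (the minus sign reflects the
    # opposite charge of the nearest neighbours)
    print(n, evjen_madelung_nacl(n))
```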
other authors proposed to renormalize not only the surface charges , but also the charges included in a thin skin volume .the adjustment of the renormalization factors are , in this case , numerically determined so that to reproduce the exact potential , previously computed using the ewald s method at a chosen set of positions .such a method presents the advantage of reaching a very good precision .however several drawbacks can be pointed out : i ) the previous calculation of the electrostatic potential using the ewald s method at a large number of space positions , ii ) the necessity to invert a large linear system to determine the renormalization factors and iii ) finally the fact that the latter are not chosen on physical criteria .indeed , this last point induces the possibility that the renormalization factors can be either larger than one or negative .it results that even if the electrostatic potential is very accurate at the chosen reference positions , its spatial variations can be unphysical and thus , the precision can strongly vary when leaving the reference points .marathe _ et al _ suggested a physical criterion , based on the analysis of the convergence of the real space summation , for the choice of the renormalization factors .indeed , it is known that the direct space summation converges to the proper limit if the volume presents null dipole and quadrupole moments .the authors of reference showed , on the simple example of a linear alternated chain , that it is possible to find a finite number of charge renormalization factors allowing the cancellation of these two multipolar moments .they also assert that the cancellation of additional multipolar moments increases the speed of convergence .unfortunately they did not prove this affirmation and more importantly , they did not proposed a practical way to determine the renormalization factors in order to reach this goal . in the present paperwe propose a systematic method for the determination of the renormalization factors allowing the cancellation of a given number of multipolar moments as well as a careful analysis of the direct space summation convergence as a function of the number of canceled multipolar moments .the next section will present the convergence proof , section 3 will develop the method for determination of the renormalization factors and section 4 will present the optimization of the method and illustration on a typical example .as already mentioned in the introduction , several papers already exist on this subject .however the results are only partial and there is not complete analysis of the convergence issue .we will thus present in this section a global analysis of the electrostatic potential convergence in a real space approach and an estimation of the error .we want to evaluate the limit of the following series where is a vector of the bravais s lattice , j refers to a charge located at the position of the unit cell . 
is a set of volumes such that for the sake of simplicity we require that the set of also presents the following conditions the well known problem of this series is that the limit depends on the particular choice of the unit cell and of the volumes .in fact different shapes of the charge set will result in different limits due to the surface effects .however , it has been demonstrated that this conditional convergence disappears if one considers a unit cell with zero dipolar and quadrupolar moments .we will consider in the following that the cell fulfills this condition .in this case , one only obtains the so - called `` bulk contribution '' as in the ewald s and related methods .the error on the electrostatic potential evaluation , , can be written as for large values of , can be evaluated by a multipolar expansion . for practical reasons , we will use an expansion expressed in spherical coordinates .indeed , for a given order , this expansion contains less terms than the usual multipolar expansion based on cartesian coordinates .the use of the later multipolar expansion is still possible but is more complex ( see ref . ) . in spherical coordinates , the multipolar expansion of the error made on reads : where are the multipolar moments of unit cell at .they can be expressed as are the spherical coordinates of the bravais s vector and the spherical coordinates of .the are schmidt semi - normalized spherical harmonics : where are legendre functions . in order to overvalue the error, one needs to overvalue the spherical harmonics .we thus consider the addition formula : where is a legendre polynomial and is the angle between and .it comes for and and thus using this result , one obtains the following overvalue for the moments : where is the typical size of ( i.e. the diameter of the circumsphere of ) and is the sum of the absolute values of its charges one should notice at this point that has the same parity as .the contributions of and unit cells to thus cancel when is odd . using equations [ eq : majy ] and [ eq : majm ] ,one gets the following overvaluation : where is the order of the first even , non - zero moment .let us now overvalue the sum over the powers by a volume integral .for each cell located at , the norm of the position vectors belonging to the volume is smaller than .one can thus overvalue by being the volume of the unit cell .if is the radius of the insphere of , it comes and the later sum converges for large enough , i.e. for .the decreasing function can be overvalued by its value in .further summation over leads to the following expression with the electrostatic potential at thus converges as where is the first , even , non - zero moment of the unit cell .in several applications , as for instance in cluster ab initio calculation , the problem depends on the spatial variations of the potential and not on its absolute value . in such casesit is sufficient to cancel the dipolar moment of the unit cell in order to ensure the convergence of the calculation .the convergence rate can also be expected to be faster than for the calculation of the potential at a point as we will show in this section .let us overvalue the error made on the calculation of a difference of potential between two points located at and : where and are the spherical coordinates of and respectively .as in the preceding section will be the spherical coordinates of and those of . 
in order to express the previous expression as a function of , we use following expansion of solid spherical harmonics ( for simple derivation see ref . , see also ref . ) : where the sum over spans all integer values .nevertheless , only a finite number of terms will contribute , since if .setting and introducing spherical harmonics leads to : ^{\frac{1}{2 } } \nonumber\\ * & & \times\ , r_1^{l - l}\ , y_{l - l}^{m - m}(\theta_1,\phi_1)\end{aligned}\ ] ] considering in this equation instead of is equivalent to the transformation that results in an overall factor . inserting relation [ eq : ylmexp ] into eq .[ eq : dv2 ] and inverting the summation over and leads to the expansion : with : ^{\frac{1}{2 } } \nonumber\\ * & & \times ( -1)^{m - m } \ ; r_1^{l - l}\ ; y_{l - l}^{m - m}(\theta_1,\phi_1 ) \nonumber\\ * & & \times\left(1-(-1)^{l - l } \right ) \label{eq : alm}\end{aligned}\ ] ] considering the parity of , one can see from eq .[ eq : dv2_alm ] that the contributions from cells located at and cancel when is odd . moreover , due to the last term of eq . [ eq : alm ] , the coefficients are zero when is even .only terms with even and odd have a non zero contribution , thus only moments with odd order will contribute to the error .the consequence is that the first non - zero contribution in equation [ eq : dv2_alm ] corresponds to where is the first , non - zero , odd moment of the unit cell .let us now find an overvalue of the terms .it is easy to show using a recurrence relation on the values of and , that if , and , the following relation holds : using the previous overvaluation of the moments ( eq . [ eq : majm ] ) one obtains : where the summation runs only up to since and are of different parity .it comes as in previous section , the sum over can be overvalued by a volume integral ( cf . eq [ eq : sumk_maj ] ) .overvaluation of the error thus reads : the sum over converges if is larger than .it can be calculated using derivative of power series .after simplification , one obtains : where as one increase the size of the set of charges , the difference of electrostatic potential between two points converges like , where is now the first , odd , non - zero moment of the unit cell .this convergence is slightly faster than the convergence of the absolute value of potential .when the order of the first non zero moment is even , the convergence rates differ by a factor , otherwise they are similar .the convergence problem of the electric field at a point is very similar to the problem of the difference of potential between two points .however since it could be of practical interest , for instance for molecular dynamists in the calculation of ionic forces , we will provide in this section the analysis of the electric field convergence .the error on the evaluation of the electric field at a given point is related to the error of the potential difference between two nearby points as where is the component of the electric field and is the unit vector in the direction . can thus be overvalued using equation [ eq : res2 ] as expected , one sees that the electric field converges with the same rate as the potential energy difference between two points , that is as , where is the first , odd , non - zero moment of the unit cell .as depicted in the previous section , convergence can be considerably increased if one cancels several multipolar moments of the unit cell . 
in general the evjen methodallows to only cancel the dipolar moment , and thus provides a convergence of the potential differences in . in order to really take advantage of the former property, one needs a method allowing the cancellation of several multipolar moments . in this sectionwe will establish a method to construct unit cells with a chosen number of zero multipolar moments .the method , based on the usage of partial charges , is general and can be applied to any bravais s crystal .let be the lattice vectors of the bravais s crystal , and the associated unit cell .in order to introduce partial charges , we consider a larger cell of dimensions , that we will refer as the `` _ construction cell _ '' .the construction cell thus contains original unit cells which positions in can be labeled by , and indices , ranging from to .if we note the number of charges in the original cell , the cell now contains charges .these charges will be corrected by a factor ( where refers to the charge ) .when on rebuild the lattice using the construction cells , the cells overlap , and the final charge at position corresponds to the superposition of partial charges from several construction cells .it is straightforward to show that the condition to retrieve the nominal value of the charges reads : at this stage , considering the latter equations , the cell contains free parameters that could be used to cancel multipolar moments . for the sake of simplicity and generality ( i.e. for the method not to depend on the particularity of a given crystal ), we will impose further conditions on the coefficients .we first reduce the problem to a one dimensional problem by setting : where the three coefficients , and are used to cancel multipolar moments of the problems obtained when the cell is respectively projected on the three axes of the crystal . for each one dimensional problem ,the construction cell contains projected unit cells .the condition on the coefficients now reads : it is easy to show that , if these coefficients cancel a fixed number of multipolar moments in each one dimensional problems , the coefficients will cancel the moments of same order in the original three dimensional problem .we further impose to the coefficients to only depend on the fractional coordinates ( ) of the charges , in the corresponding direction : the functions are thus the same for the three directions and for all charges . as a consequencetheir expression is the same for all crystals . for a given value of , and considering the condition for the reconstruction of the crystal ( eq . [ eq : recon2 ] ) , we are left with degrees of freedom .we thus impose to the functions to cancel multipolar moments .this can be done by setting the moments created at the center of the construction cell by a unique charge : where , is the fractional coordinate of the charge and are constant values ( independant of the crystal specifications ) .the equation obtained for corresponds to the condition for the reconstruction of the crystal ( ) .the moments of the construction cell , can thus be obtained by summing the contributions of all charges . as the unit cell is neutral ,these contributions cancel out .the equations [ eq : l3 ] and [ eq : mk ] thus define sets of partial charges that allow to construct cells with zero multipolar moments .the shape of these partial charges depends on the choice of the constants values . 
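whatever constants are finally retained , it is straightforward to check numerically how many multipolar moments a candidate construction cell really cancels . the sketch below evaluates the solid - harmonic moments of an arbitrary weighted charge set ; it uses orthonormal spherical harmonics instead of the schmidt semi - normalised ones of the previous section , which only changes each moment by an ( l , m ) - dependent constant and therefore does not affect a zero test .

```python
import numpy as np
from scipy.special import sph_harm

def multipole_moments(charges, positions, l_max):
    """Solid-harmonic multipole moments of a finite, weighted charge set.

    charges   -- array of (possibly renormalised) charges q_j
    positions -- array of shape (n, 3) with the charge positions (cartesian)
    l_max     -- highest order l to evaluate

    Returns a dict {(l, m): Q_lm} with Q_lm = sum_j q_j r_j**l conj(Y_l^m).
    """
    pos = np.asarray(positions, dtype=float)
    q = np.asarray(charges, dtype=float)
    r = np.linalg.norm(pos, axis=1)
    # polar angle theta in [0, pi], azimuth phi in [0, 2*pi)
    theta = np.arccos(np.divide(pos[:, 2], r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(pos[:, 1], pos[:, 0])
    moments = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy's argument order: sph_harm(m, l, azimuth, polar)
            y = sph_harm(m, l, phi, theta)
            moments[(l, m)] = np.sum(q * r ** l * np.conj(y))
    return moments
```

applied to the charges of a candidate construction cell multiplied by their renormalisation factors , this shows directly which moments vanish , and hence which of the convergence rates derived above applies .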
in order to find the most reasonable choice of partial charges , we search for constants that satisfy the following physical conditions : 1 . ] : from this relation , it is obvious that the functions satisfy the first three conditions mentioned above .we now consider a fragment of crystal made of construction cells .as these cells are composed of original cells , they partially overlap , and the size of the fragment corresponds to cells . cells in the center contains charges with the nominal values , and cells on each side of the fragment contains partial charges .the latter partial charges are proportional to the coefficients : these coefficients are represented on fig .[ f : mp ] .the abscise values have been shifted by , so that the origin corresponds to the position of the effective edge of the fragment ( i.e. the position of the edge obtained when using original cells without partial charges ) .one can see that the renormalization of the charges is relatively small .indeed , even in the case , this renormalization is weaker then for charges at distances larger than .as already mentioned , the and are polynomial functions of order .it is also interesting to notice that at the junction point between the ( resp . ) and ( resp . ) functions , the renormalization function and all its the derivatives are continuous , except for the last one ( ^{\rm th}$ ] derivative ) .+ + we first use the cells to illustrate the convergence of the standard , real - space calculation of the potential established in section [ s : conv ] . in order to observe the general behavior of the different methods , we chose the quartz structure , that possesses a reasonable number of atoms per unit cell and not too many symmetries .a cell is constructed for a fixed number of .the cells are used to produce sets of charges of increasing size .the potential and the electric field are calculated at the position of the twelve atoms of the central unit cell .[ f : dv ] a ) represents the maximum of the error made on these potential values as a function of the width of the set of charges , fig .[ f : dv ] b ) represents the maximum of the error made on the sixty six differences of potential , and finally fig .[ f : dv ] c ) represents the maximum of the error made on the electric field .as expected the errors decrease as power functions of the width of the set of charges .it fully agrees with eq .[ eq : res1 ] , [ eq : res2 ] and [ eq : res3 ] .in particular , the fact that the cancellation of a moment of odd order do not increase the convergence rate of the potential at a point , clearly appears .similarly , the cancellation of even order moment do not improve the convergence rate of the calculation of differences of potential and electric field .+ + one can see from the previous figures that this standard approach is not the more efficient .indeed the increase of the number of zero multipolar moments clearly yield a faster convergence rate than the increase of the volume of the system for a fixed value of .let us therefore fix the width of the volume containing the nominal charge , and let increase .the variation of the maximum error made on the potential , on the potential differences and on the electric field are respectively represented on fig .[ f : dvexp ] a ) , b ) and c ) .one sees that the present approach leads to an exponential convergence of the potential in all cases .the convergence is very fast , since an increase of the set of charges width by two unit cells results in a precision increase by a factor better 
than . increasing the number of central cells without partial charges has a small influence on the convergence speed . for method even becomes less efficient since increasing increases the size of the total set of charges .the best convergence is obtained for .it corresponds to the case where the cell in which the potential is calculated is surrounded by one shell of cells with the nominal charge values .finally we compare our method to the famous ewald s method which mixes calculation in real space and reciprocal space .this method introduces gaussian distribution of charge , where the coefficient can be adjusted .increasing coefficient increases the convergence rate of the real space sum , but slows down the sum in reciprocal space . a width of gaussian proportional to the characteristic length of the unit cell , which corresponds to ,is generally assumed to give a good compromise .we calculated the error made on the value of the potential using the ewald s method for different values of around .the results are represented on figure [ f : dvewald ] , as well as the error of our method obtained for .figure [ f : dvewald ] reports the error on the potential as a function of the number of construction cells used in the calculation .let us point out that , while this variable is pertinent for the global convergence rate analysis , for each charge , the ewald s method requires an error function evaluation resulting in a non negligible pre - factor , not present in our method and not taken into account in figure [ f : dvewald ] .one sees that the convergence rate of the present method is comparable with the ewald s method . if one is only interested in the potential evaluation at a single point , the ewald s method with an optimal parameter is somewhat faster than the present one .one the other hand , once the renormalization have been computed , the value of the potential at any other point of the of the central area can be calculated with a similar precision at little cost .more important , properties using potential integrals or complex potential functions can be more easily evaluated since our method used only algebraic functions .number of authors have searched for a fast converging method for the evaluation of the electrostatic potential in real space .similarly , many works where done yielding partial results on the convergence rate of such real series .the present work fills the gaps and proposes a general analysis of both the convergence of the potential at one point and of the convergence of differences of potential .indeed , we gave a general and rigorous proof of the relation ( claimed by other authors ) between the power law convergence of the series and the number of zero multipolar moments of the crystal construction cell .based on these convergence analyses we derived a general real space method with an exponential convergence rate , comparable with the ewald s method .the exponential convergence is reached as a function of the number of canceled multipolar moments in the _ construction _ cell .the crystal is indeed constructed using overlapping _ construction _ cells with renormalized charges .we derived a general analytical expression of the renormalization factors , for any given number of zero multipolar moments .finally , we would like to point out that our method warrants continuous and smooth variations of the renormalization factors .this property is of particular interest for molecular dynamic usage since it insures continuous and smooth variations of the ionic forces 
as a particle crosses the cell boundaries . one can see the present functions as optimized cut - off functions .
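to make the real - space convergence discussion above concrete, the following minimal python sketch evaluates the madelung constant of rock salt with evjen - style fractional boundary charges (faces weighted 1/2, edges 1/4, corners 1/8), which cancel the lowest multipolar moments of the summation cube. it is not the renormalized construction - cell scheme derived in this work, only an illustration of how killing low - order moments accelerates the direct lattice sum; the reference value 1.7476 is standard, while the function names and cube half - widths are illustrative assumptions.

```python
import math

def madelung_evjen(n_shells):
    """potential at the origin ion of a rock-salt lattice from a cube of half-width
    n_shells, with evjen weights 1/2, 1/4, 1/8 on faces, edges and corners."""
    total = 0.0
    for i in range(-n_shells, n_shells + 1):
        for j in range(-n_shells, n_shells + 1):
            for k in range(-n_shells, n_shells + 1):
                if i == 0 and j == 0 and k == 0:
                    continue
                w = 1.0
                for c in (i, j, k):           # each coordinate on the cube surface halves the charge
                    if abs(c) == n_shells:
                        w *= 0.5
                sign = -1.0 if (i + j + k) % 2 else 1.0   # alternating ionic charges
                total += w * sign / math.sqrt(i * i + j * j + k * k)
    return -total                              # conventional sign: about +1.7476 for nacl

for n in (1, 2, 4, 8):
    print(n, madelung_evjen(n))
```

the printed values should approach 1.7476 quickly as the cube grows, whereas the raw, unweighted sum over the same cubes is only conditionally convergent; cancelling further moments, as done above with the renormalized construction cells, pushes this toward the exponential rate discussed in the text.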
an efficient real - space method is derived for the evaluation of the madelung potential of ionic crystals . the proposed method is an extension of evjen s method . it takes advantage of a general analysis of the potential convergence in real space . indeed , we show that the series convergence is exponential as a function of the number of annulled multipolar moments of the unit cell . the method proposed in this work reaches such an exponential convergence rate . its efficiency is comparable to that of ewald s method ; however , unlike the latter , it uses only simple algebraic functions .
the emergence of electronic trading as a major means of trading financial assets makes the study of the order book central to understanding the mechanisms of price formation . in order - driven markets ,buy and sell orders are matched continuously subject to price and time priority .the _ order book _ is the list of all buy and sell limit orders , with their corresponding price and size , at a given instant of time .essentially , three types of orders can be submitted : * _ limit order _ : specify a price ( also called `` quote '' ) at which one is willing to buy or sell a certain number of shares ; * _ market order _ : immediately buy or sell a certain number of shares at the best available opposite quote ; * _ cancellation order _ : cancel an existing limit order . in the econophysics literature ,`` agents '' who submit exclusively limit orders are referred to as _liquidity providers_. those who submit market orders are referred to as _ liquidity takers_.limit orders are stored in the order book until they are either executed against an incoming market order or canceled .the _ ask _ price ( or simply the ask ) is the price of the best ( i.e. lowest ) limit sell order .the _ bid _price is the price of the best ( i.e. highest ) limit buy order .the gap between the bid and the ask is always positive and is called the _spread_. prices are not continuous , but rather have a discrete resolution , the _ tick _ , which represents the smallest quantity by which they can change .we define the _ mid - price _ as the average between the bid and the ask the price dynamics is the result of the interplay between the incoming order flow and the order book .figure [ fig1 ] is a schematic illustration of this process .note that we chose to represent quantities on the bid side of the book by non - positive numbers .although in reality orders can have any size , we shall assume throughout this paper that all orders have a fixed unit size .this assumption is convenient to carry out our analysis and is , for now , of secondary importance to the problem we are interested in .we start with the simplest agent - based market model : * the order book starts in a full state : all limits above and below are filled with one limit order of unit size .the spread starts equal to 1 tick ; * the flow of market orders is modeled by two independent poisson processes ( buy orders ) and ( sell orders ) with constant arrival rates ( or intensities ) and ; * there is one liquidity provider , who reacts immediately after a market order arrives so as to maintain the spread constantly equal to 1 tick .he places a limit order on the same side as the market order ( i.e. a buy limit order after a buy market order and vice versa ) with probability and on the opposite side with probability .the mid - price dynamics can be written in the following form where is a bernoulli random variable the infinitesimal generator is the operator , if exists , defined to act on sufficiently regular functions , by - f(\mathbf{x})}{t}.\ ] ] it provides an analytical tool to study . ]associated with this dynamics is it is well known that a continuous limit is obtained under suitable assumptions on the intensity and tick size . 
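before moving to that limit, a short simulation sketch of this toy model may help. the assumptions here, which the text does not spell out explicitly, are that after a buy (sell) market order the mid - price moves up (down) by one tick exactly when the liquidity provider reposts on the same side, which happens with probability u, and that the spread stays pinned at one tick; the function names and parameter values are illustrative only. the final loop checks, crudely, that the variance of the price displacement grows linearly with the horizon, as expected from the diffusive limit derived next.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_midprice(T, lam_plus, lam_minus, u, tick=1.0, p0=100.0):
    """mid-price path of the toy 'perfect market making' model on [0, T]."""
    t, p = 0.0, p0
    times, prices = [0.0], [p0]
    lam = lam_plus + lam_minus
    while True:
        t += rng.exponential(1.0 / lam)            # next market order (merged poisson clock)
        if t > T:
            break
        is_buy = rng.random() < lam_plus / lam     # thinning: buy versus sell market order
        if rng.random() < u:                       # provider reposts on the same side
            p += tick if is_buy else -tick
        times.append(t)
        prices.append(p)
    return np.array(times), np.array(prices)

# crude check of the diffusive limit: var[p(T) - p(0)] should grow linearly in T
for T in (100.0, 400.0, 1600.0):
    displacements = [simulate_midprice(T, 1.0, 1.0, 0.5)[1][-1] - 100.0 for _ in range(500)]
    print(T, np.var(displacements))
```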
noting that can be rewritten as and under the following assumptions the generator converges to the classical diffusion operator corresponding to a brownian motion with drift .this simple case is worked out as an example of the type of limit theorems that we will be interested in in the sequel .one should also note that a more classical approach using the functional central limit theorem ( fclt ) as in or yields similar results ; for given fixed values of , and , the rescaled - centered price process converges as , to a standard brownian motion where let us mention that one can easily achieve more complex diffusive limits such as a local volatility model by imposing that the limit is a function of and this would be the case if the original intensities are functions of and themselves .we now consider the dynamics of a general order book under a poisson type assumption for the arrival of new market orders , limit orders and cancellations .we shall assume that each side of the order book is fully described by a _finite _ number of limits , ranging from to ticks away from the best available opposite quote .we will use the notation where designates the ask side of the order book and the number of shares available at price level ( i.e. ticks away from the best opposite quote ) , and designates the bid side of the book . by doing so , we adopt the representation described e.g. in or , but depart slightly from it by adopting a _ finite moving frame _ , as we think it is realistic and more convenient when scaling in tick size will be addressed .let us now recall the events that may happen : * arrival of a new market order ; * arrival of a new limit order ; * cancellation of an already existing limit order .these events are described by _independent _ poisson processes : * : arrival of new market order , with intensity and ; * : arrival of a limit order at level , with intensity ; * : cancellation of a limit order at level , with intensity and . is the size of any new incoming order , and the superscript `` '' ( respectively `` '' ) refers to the ask ( respectively bid ) side of the book .note that the intensity of the cancellation process at level is proportional to the available quantity at that level . that is to say , each order at level has a lifetime drawn from an exponential distribution with intensity .note also that buy limit orders arrive below the ask price , and sell limit orders arrive above the bid price .we impose constant boundary conditions outside the moving frame of size : every time the moving frame leaves a price level , the number of shares at that level is set to ( or depending on the side of the book ) .our choice of a finite moving frame and constant and independent positive random variables would not change much our analysis .we take constants for simplicity . ]boundary conditions has three motivations .firstly , it assures that the order book does not empty and that , are always well defined .secondly , it keeps the spread and the increments of , and bounded this will be important when addressing the diffusive limit of the price .thirdly , it makes the model markovian as we do not keep track of the price levels that have been visited ( then left ) by the moving frame at some prior time .figure [ dynamics ] is a representation of the order book using the above notations .order book dynamics : in this example , , , , . 
the shape of the order book is such that and .the spread ticks .assume that at time a sell market order arrives , then , and .assume instead that at a buy limit order arrives one tick away from the best opposite quote , then , and . ]we can now write the following coupled sdes for the quantities of outstanding limit orders in each side of the order book s are non - positive . ] where the s are _ shift operators _ corresponding to the renumbering of the ask side following an event affecting the bid side of the book and vice versa .for instance the shift operator corresponding to the arrival of a sell market order of size is instead of etc . for the shift operators . ] with similar expressions can be derived for the other events affecting the order book . in the next sections, we will study some general properties of such models , starting with the generator associated with this -dimensional continuous - time markov chain .let us now work out the infinitesimal generator associated with the jump process described above .we have + ; j^{m^+}(\mathbf{b})\right ) -f ) \notag\\ & + \sum_{i=1}^{n}{\lambda_i^{l^+ } ( f\left(a_i+q ; j^{l_i^+}(\textbf{b})\right ) - f)}\notag\\ & + \sum_{i=1}^{n}{\lambda_i^{c^+ } \frac{a_i}{q } ( f\left(a_i - q ; j^{c_i^{+}}(\textbf{b})\right ) - f)}\notag\\ & + \lambda^{m^- } { \left ( f\left(j^{m^-}(\mathbf{a } ) ; [ b_i + ( q - b({i-1 } ) ) _ + ] _-\right ) -f \right)}\notag\\ & + \sum_{i=1}^{n}{\lambda_i^{l^- } ( f\left(j^{l_i^-}(\textbf{a } ) ; b_i - q\right ) - f)}\notag\\ & + \sum_{i=1}^{n}{\lambda_i^{c^- } \frac{|b_i|}{q } ( f\left(j^{c_i^-}(\textbf{a } ) ; b_i+q\right ) - f ) } , \label{infgen}\end{aligned}\ ] ] where , to ease the notations , we note instead of etc . and the operator above , although cumbersome to put in writing , is simple to decipher : a series of standard difference operators corresponding to the `` deposition - evaporation '' of orders at each limit , combined with the shift operators expressing the moves in the best limits and therefore , in the origins of the frames for the two sides of the order book .note the coupling of the two sides : the shifts on the s depend on the s , and vice versa .more precisely the shifts depend on the profile of the order book on the other side , namely the cumulative depth up to level defined by and the generalized inverse functions thereof where designates a certain quantity of shares .note that a more rigorous notation would be for the depth and inverse depth functions respectively .we drop the dependence on the last variable as it is clear from the context .the index corresponding to the best opposite quote equals the spread in ticks , that is now focus on the dynamics of the best ask and bid prices , denoted by and .one can easily see that they satisfy the following sdes ,\\ dp^b(t ) & = - \delta p [ ( b^{-1}(q ) - b^{-1}(0 ) ) dm^-(t ) \\ & - \sum_{i=1}^n{(b^{-1}(0)-i)_+ dl_i^{-}(t ) }+ ( b^{-1}(q)-b^{-1}(0))dc_{i_b}^-(t ) ] , \end{aligned } \right.\ ] ] which describe the various events that affect them : change due to a market order , change due to limit orders inside the spread , and change due to the cancellation of a limit order at the best price .one can summarize these two equations in order to highlight , in a more traditional fashion , the respective dynamics of the mid - price and the spread , \label{midpriceincrement}\end{aligned}\ ] ] .\end{aligned}\ ] ] the equations above are interesting in that they relate in an explicit way the profile of the order book to the size of an increment of 
the mid - price or the spread , therefore linking the price dynamics to the order flow .for instance the `` infinitesimal '' drifts of the mid - price and the spread , conditional on the shape of the order book at time , are given by & = \frac{\delta p}{2 } \left [ ( a^{-1}(q ) - a^{-1}(0 ) ) \lambda^{m^+ } - ( b^{-1}(q ) - b^{-1}(0 ) ) \lambda^{m^- } \right.\notag\\ & - \sum_{i=1}^n{(a^{-1}(0)-i)_+ \lambda_i^{l^+ } } + \sum_{i=1}^n{(b^{-1}(0)-i)_+ \lambda_i^{l^-}}\notag\\ & + \left .( a^{-1}(q ) - a^{-1}(0 ) ) \lambda_{i_a}^{c^+ } \frac{a_{i_a}}{q } - ( b^{-1}(q ) - b^{-1}(0 ) ) \lambda_{i_b}^{c^- } \frac{|b_{i_b}|}{q } \right]dt , \label{midpricedrift}\end{aligned}\ ] ] and & = \delta p \left [ ( a^{-1}(q ) - a^{-1}(0 ) ) \lambda^{m^+ } + ( b^{-1}(q ) - b^{-1}(0 ) ) \lambda^{m^- } \right.\notag\\ & - \sum_{i=1}^n{(a^{-1}(0)-i)_+ \lambda_i^{l^+ } } - \sum_{i=1}^n{(a^{-1}(0)-i)_+ \lambda_i^{l^- } } \notag\\ & + \left .( a^{-1}(q ) - a^{-1}(0 ) ) \lambda_{i_a}^{c^+ } \frac{a_{i_a}}{q } + ( b^{-1}(q ) - b^{-1}(0 ) ) \lambda_{i_b}^{c^- } \frac{|b_{i_b}|}{q } \right ] dt .\label{spreaddrift}\end{aligned}\ ] ]in this section , our interest lies in the following questions : 1 . is the order book model defined above stable ? 2 .what is the stochastic - process limit of the price at large time scales ? the notions of `` stability '' and `` large scale limit '' will be made precise below .we first need some useful definitions from the theory of markov chains and stochastic stability .let be the markov transition probability function of the order book at time , that is , \ ; t \in \mathbb{r}_+ , \mathbf{x}\in \mathcal{s } , e \subset \mathcal{s},\ ] ] where is the state space of the order book .we recall that a ( aperiodic , irreducible ) markov process is _ ergodic _ if an invariant probability measure exists and where designates for a signed measure the _ _ total variation norm _ _ ] note that since the state space is countable , one can formulate the results without the need of a `` measure - theoretic '' framework .we prefer to use this setting as it is more flexible , and can accommodate possible generalizations of our results . ] defined as in , is the borel -field generated by , and for a measurable function on , _ -uniform ergodicity . _ a markov process is said _ergodic _ if there exists a coercive as . ] function , an invariant distribution , and constants , and such that ergodicity can be characterized in terms of the infinitesimal generator of the markov process .indeed , it is shown in that it is equivalent to the existence of a coercive function ( the `` lyapunov test function '' ) such that for some positive constants and . ( theorems 6.1 and 7.1 in . ) intuitively , condition says that the larger the stronger is pulled back towards the center of the state space .a similar drift condition is available for discrete - time markov processes and reads where is the _ drift operator _ .\ ] ] and a finite set .( theorem 16.0.1 in . )we refer to for further details . of utmost interestis the behavior of the order book in its stationary state .we have the following result : if , then is an ergodic markov process .in particular has a _ stationary distribution _ .moreover , the rate of convergence of the order book to its stationary state is _exponential_. 
that is , there exist and such that let be the total number of shares in the book ( shares ) .using the expression of the infinitesimal generator we have where the first three terms in the right hand side of inequality correspond respectively to the arrival of a market , limit or cancellation order ignoring the effect of the shift operators .the last two terms are due to shifts occurring after the arrival of a limit order inside the spread .the terms due to shifts occurring after market or cancellation orders ( which we do not put in the r.h.s . of ) are negative , hence the inequality . to obtain inequality, we used the fact that the spread is bounded by consequence of the boundary conditions we impose and hence is bounded by . the drift condition can be rewritten as for some positive constants .inequality together with theorem 7.1 in let us assert that is -uniformly ergodic , hence .the spread has a well - defined stationary distribution this is expected as by construction the spread is bounded by .let denote the embedded markov chain associated with . in event time , the probabilities of each event are `` normalized '' by the quantity for instance , the probability of a buy market order when the order book is in state , is := p^{m^+}(\mathbf{x } ) = \frac{\lambda^{m^+}}{\lambda(\mathbf{x})}.\ ] ] the choice of the test function does not yield a geometric drift condition , and more care should be taken to obtain a suitable test function . let be a fixed real number and consider the function for the test function . ] we have is -uniformly ergodic .hence , there exist and such that where is the transition probability function of and its stationary distribution . if we factor out in the r.h.s of , we get where then with the usual notations denote the r.h.s of .clearly hence there exists such that for and let denote the finite set we have with therefore is -uniformly ergodic , by theorem 16.0.1 in . the proof above can be applied to the case where the cancellation rates are independent of the state of order book shall denote the order book in order to highlight that the assumption of proportional cancellation rates is relaxed .the probability of a cancellation in ] means ] . by taking the expectation over on both sides of and noting that $ ] is finite by theorem 14.3.7 in , we get | \leq r_2 \rho^n = : \rho(n ) , k , n \in \mathbb{n}.\ ] ] hence the stationary version of satisfies a _ geometric mixing condition _ , and in particular theorems 19.2 and 19.3 in on functions of mixing processes let us conclude that + 2 \sum_{n=1}^{\infty}\mathbb{e}_{\mu}[\overline{\eta}_0 \overline{\eta}_n ] \label{asymptoticvariance}\ ] ] is well - defined the series in converges absolutely and coincides with the asymptotic variance = \sigma^2.\ ] ] moreover where is a standard brownian motion . the convergence in happens in , the space of -valued cdlg functions , equipped with the skorohod topology .obviously , theorem [ mainresult ] is also true with non - proportional cancellation rates under condition . 
in this casethe result holds both in event time and physical time .indeed , let denote a poisson process with intensity .the price process in physical time can be linked to the price in event time by then in the large scale limit , the mid - price , the ask price , and the bid price converge to the same process .figures [ fig5][fig8 ] are obtained by numerical simulation of the order book .we note in particular the asymptotic normality of price increments , the fast decay of autocorrelation and the linear scaling of variance with time , in accordance with the theoretical analysis . average depth profile .] histogram of the spread . ] histogram of price increments . ] price sample path . ]autocorrelation of price increments . ]variance in event time .the dashed line is a linear fit . ]this paper provides a simple markovian framework for order book modeling , in which elementary changes in the price and spread processes are explicitly linked to the instantaneous shape of the order book and the order flow parameters .two basic properties were investigated : the ergodicity of the order book and the large scale limit of the price process .the first property , which we answered positively , is desirable in that it assures the stability of the order book in the long run .the scaling limit of the price process is , as anticipated , a brownian motion .a key ingredient in this result is the convergence of the order book to its stationary state at an exponential rate , a property equivalent to a geometric mixing condition satisfied by the stationary version of the order book .this short memory effect , plus a constraint on the variance of price increments guarantee a diffusive limit at large time scales .we hope that our approach offers a plausible microscopic justification to the much celebrated bachelier model of asset prices .we conclude with a final remark regarding two possible extensions : the assumption of a finite order book size andhence a bounded spread may seem artificial , and one can seek more general stability conditions for an order book model in which the spread is unbounded _ a priori_. in addition , richer price dynamics ( heavy tailed return distributions , long memory , more realistic spread distribution etc . ) can be achieved with more complex assumptions on the order flow ( e.g. feedback loops , or mutually exciting arrival rates ) .these extensions may , however , render the model less amenable to mathematical analysis , and we leave the investigation of such interesting ( but difficult ) questions for future research .
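the figures referred to above come from an event - by - event simulation of the order book. the sketch below is a deliberately simplified, zero - intelligence version of such a simulation: it lives on an absolute price grid rather than the moving frame of size K used in the model, draws the event type from fixed probabilities instead of the state - dependent poisson intensities (in particular the cancellation rate is not proportional to the standing volume), and re - seeds a side if it ever empties as a crude stand - in for the constant boundary conditions. all names, rates and sizes are illustrative assumptions; the point is only to show how the stationary spread, the near - zero autocorrelation of price increments and the variance of increments can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_book(n_events=100_000, grid=1_000, K=10, depth0=5,
                  p_market=0.1, p_limit=0.6, tick=1.0):
    """toy zero-intelligence book: book[i] > 0 is sell depth, book[i] < 0 is buy depth.
    the grid is taken large enough that its edges are never reached in this toy run."""
    book = np.zeros(grid, dtype=int)
    mid0 = grid // 2
    book[mid0:mid0 + K] = depth0            # initial sell side
    book[mid0 - K:mid0] = -depth0           # initial buy side
    mid, spread = [], []
    for _ in range(n_events):
        ask = int(np.argmax(book > 0))                      # lowest occupied sell level
        bid = int(grid - 1 - np.argmax(book[::-1] < 0))     # highest occupied buy level
        buy_side = rng.random() < 0.5                       # does the event concern buyers?
        u = rng.random()
        if u < p_market:                                    # market order hits the best opposite quote
            if buy_side:
                book[ask] -= 1
            else:
                book[bid] += 1
        elif u < p_market + p_limit:                        # limit order within K ticks of the best opposite quote
            if buy_side:
                book[rng.integers(ask - K, ask)] -= 1       # buy limit strictly below the ask
            else:
                book[rng.integers(bid + 1, bid + K + 1)] += 1   # sell limit strictly above the bid
        else:                                               # cancellation, with the remaining probability
            levels = np.nonzero(book < 0)[0] if buy_side else np.nonzero(book > 0)[0]
            if levels.size:
                book[rng.choice(levels)] += 1 if buy_side else -1
        if not (book > 0).any():                            # crude boundary condition: never let
            book[bid + K] = depth0                          # a side of the book empty out
        if not (book < 0).any():
            book[ask - K] = -depth0
        ask = int(np.argmax(book > 0))
        bid = int(grid - 1 - np.argmax(book[::-1] < 0))
        mid.append(0.5 * (ask + bid) * tick)
        spread.append((ask - bid) * tick)
    return np.array(mid), np.array(spread)

mid, spread = simulate_book()
dp = np.diff(mid[::100])                    # increments on a coarse event-time grid
print("mean spread (ticks):", spread.mean())
print("lag-1 autocorrelation of increments:", np.corrcoef(dp[:-1], dp[1:])[0, 1])
print("variance of coarse increments:", dp.var())
```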
we present a mathematical study of the order book as a multidimensional continuous - time markov chain where the order flow is modeled by independent poisson processes . our aim is to bridge the gap between the microscopic description of price formation ( agent - based modeling ) and the stochastic differential equations approach used classically to describe price evolution at macroscopic time scales . to do this , we rely on the theory of infinitesimal generators and foster - lyapunov stability criteria for markov chains . we motivate our approach using an elementary example where the spread is kept constant ( `` perfect market making '' ) . then we compute the infinitesimal generator associated with the order book in a general setting , and link the price dynamics to the instantaneous state of the order book . in the last section , we prove that the order book is _ ergodic _ ( in particular it has a _ stationary distribution _ ) , that it converges to its stationary state _ exponentially fast _ , and that the large - scale limit of the price process is a _ brownian motion _ . * keywords : * limit order book ; agent - based modeling ; order flow ; bid - ask spread ; markov chain ; stochastic stability ; fclt ; geometric mixing .
data aggregation is a key task performed within sensor networks to fuse information from multiple sensors and deliver it to a sink node in a manner that eliminates redundancy and enables energy saving .the redundancy is a consequence of the correlation inherent in smooth data fields , such as temperature , pressure , and sound measurements , in practical applications that include surveillance and habitat monitoring .suppose a sensor transfers its single measurement to the sink over intermediate sensors along a routing path .each intermediate sensor combines the data it receives with its own data and forwards it along the route .this data aggregation process usually involves data transmission .however when there is redundancy in data then it lends itself to sparse representations and in - network compression thereby yielding energy savings in the information transfer .recently the use of compressive data gathering has been examined , and shown to reduce transmission requirements to , where m represents the number of random measurements , and .if we ignore temporal change and only consider data in a certain time snapshot , then each sensor only has one measurement . according to compressive sensing ( cs ) theory , the compressive data gathering ( cdg ) method requires all the sensors to collectively provide at the sink at least measurements to fully recover the signal , where is the sparsity of signal .we note that when the cdg method is applied in a large scale network , may still be a large number .moreover in the initial data aggregation phase in , leaf nodes unnecessarily transmit measurements , which is in excess of their sensed data and therefore introduces redundancy in data aggregation . recognizing this , the hybrid cs aggregation method proposed an amalgam of the merits of non - cs aggregation and plain cs aggregation .it optimized the data aggregation cost by setting a threshold and applying cs aggregation when data gathered in a sensor equals or exceeds .the data transmission cost and hence the energy consumption is reduced .however , we observe that only a small fraction of sensors utilize cs aggregation method and the transmission measurement number for those nodes using cs aggregation method is still large .our work here shows significant improvement is possible and it stems from a hierarchical clustering architecture that we propose .the central idea is to configure sensor nodes such that instead of one sink node being targeted by all sensors , several nodes are designated for intermediate data collection and concatenated to yield a hierarchy of clusters at different levels .the use of the hierarchical architecture reduces the measurement number in the algorithm for cs aggregation since in the new architecture it is based on the cluster size rather than the global sensor network size . in this paper , we propose a novel cs - based data aggregation hierarchical architecture over the sensor network and investigate its performance in terms of data rate and energy savings . to the best of our knowledge , we are the first to investigate compressive sensing method for hierarchical data aggregation in sensor network .we refer to our method as hierarchical data aggregation using compressive sensing ( hdacs ) .the proposed data aggregation architecture distributes the workload of one sink to all the sensors , which is crucial for balancing energy consumption over the whole network . 
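as a concrete illustration of the compressive data gathering idea recalled above (every node along a route forwards the same number M of running random sums, so the sink ends up with y = Phi x), the sketch below checks that hop - by - hop accumulation reproduces the centralized projection and compares the naive transmission count on a chain with the cdg count. the chain topology, the unit - packet counting and all parameter values are assumptions made for illustration, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, k = 256, 40, 5                      # sensors on a routing path, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)   # toy k-sparse readings
Phi = rng.normal(size=(M, N)) / np.sqrt(M)                     # random projection matrix

# in-network aggregation: node i receives a running M-vector from its upstream
# neighbour, adds phi_i * x_i, and forwards it -- every hop carries exactly M values
y_innet = np.zeros(M)
for i in range(N):
    y_innet += Phi[:, i] * x[i]

y_central = Phi @ x                        # what the sink would compute with all raw data
print(np.allclose(y_innet, y_central))     # True: the sink indeed receives y = Phi x

# unit-packet transmission counts on a chain with no compression versus cdg;
# note that cdg makes the leaf end of the chain wasteful (M values instead of 1),
# which is the redundancy criticized above
plain = N * (N + 1) // 2                   # node i forwards its own reading plus all upstream ones
cdg = N * M                                # every node forwards exactly M values
print(plain, cdg)
```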
in this paperwe also perform a theoretical analysis of the data transmission requirements and energy consumption in hdacs .we implement our proposed architecture on a sidnet - swans simulation platform and test different sizes of two - dimensional randomly deployed sensor network .the results validate our theoretical analysis .substantial energy savings are reported for a large portion of sensors on the different hierarchical positions , ranging from 50% to 77% when compared with , and from 37% to 70% when compared with .the main idea behind this new architecture is that all sensors will no longer aim at flowing their data into one sink . instead, plenty of collecting clusters have been concatenated forming different types of clusters in different levels .the data flows from the source node through the architecture to the sink .suppose n sensors have been uniformly and randomly deployed in a 2d square space with area s. let be the unit area in the lowest level and the clusters have been defined in a multi - resolution way with highest level t. in each level , we define : * : the area of cluster * : the cluster head in the network * : the number of sensors in one cluster * : the transmission number of measurements for cluster * : the sum of distances between cluster head and its children nodes . * : the ratio of transmitting data size and receiving data size .* : the transmission energy cost in the cluster * : the collection of cluster heads * : the number of cluster heads * : the sum of measurements for transmission within network * : the transmission energy cost for all the clusters here , is defined as the collection of all the clusters , which implies .total transmission measurements is and total energy cost is . in order to simplify the model analysis and get quantitative comparisons with previous work, we put some constraints on our cs data aggregation architecture .figure [ dataaggarc ] shows the logical tree of clustering configuration , which consists of identical nodes in level i for and random leaf nodes . consider a n nodes network , where , we have the following formula : besides , in level , is the sum of distances between cluster head and its children cluster heads in the level i-1 .the number of cluster heads is .the area of cluster will be the same as that of all the other clusters at that level .we denote them as which combines subregions from i-1 level , and it satisfies the relation of .we distribute the sensors in a 2d randomly deployed network with some constraints .* there will be at least n nodes in each cluster in level 1 .this property requires we have to maximize the probability of n nodes in one cluster : it requires to minimize . from historical experiences , we are prone to set up to guarantee the full coverage of the whole region for each clusters with square area without producing intersections between two neighboring clusters . therefore , . andwe get the minimum of . *the remaining sensors are uniformly and randomly distributed in clusters .so we set to maximize the probability of of nodes in clusters . this has already been achieved in the constrains ( [ p1 ] ) .the main advantage of this network deployment is that it is based on 2d randomly deployed network topology , which corresponds to practical sensors distribution .it also addresses issues when the the condition that is not met . besides, the number of cluster heads will be at most , and the leaf nodes will be . 
if , .this result implies that only a small number of nodes will be involved with multiple level data processing and aggregation .the only job of other sensors is just sending their data directly to the cluster head . the balance in load distributionis achieved by randomly choosing different cluster heads in each duty circle . in the initial phase , sensors in each region only send their raw data to their cluster head , which adopts the same strategy as paper so as to reduce cs data aggregation redundancy . compressed them into random measurements .in level i ( ) , the cluster head receives random measurements , where ] .the total transmission measurements for the whole data aggregation task is : let and and get the closed form of : and therefore , the lower bound of data transmission number m is : and upper bound is on the other hand if data is sent using the same data architecture , the total measurements with the plain or non - hybrid cs ( ncs ) algorithm in paper is : . in paper , the total measurements for hybrid cs ( hcs )algorithm is : . in the following analysis, we assume the sparsity k as unity to rule out the effects from data field for data aggregation comparison .figure [ meacom ] shows the quantitative comparison of total data transmission measurements with cluster size n = 4 , 16 , 64 for proposed hdacs method , ncs data aggregation and hcs method with 1024 sensor nodes . from figure[ meacom ] , we find the bigger the cluster size is , the less measurements needed for data transmission .however , this theoretical analysis does not consider the realistic routing protocol underlying the network architecture in the lower layer. simply expanding the cluster size within local cluster and all the nodes forward their sensed data into cluster head directly , which definitely will lead to severe data flooding and data loss .therefore , cluster size will be fixed as 4 and 16 in the following analysis .figure [ diffsize ] shows total data transmission measurements changes with increase of sensor nodes under these two fixed cluster sizes . from the figure, we observe that the ncs method introduces a large number of data redundancy .the measurements required by hcs method is a little worse than proposed method , but we need to point out that this comparison is based on the premise that the data is propagated on the muli - resolution data architecture . since a lot of sensors are leaf nodes and only transmit their raw data to their cluster heads both in the proposed method in this paper and hcs method in the first level , they lead to very similar result in the theoretical analysis .the data compression ratio is calculated as follows : + compression ratio resides in the range of : , \text{if } i\geq 2 ] , where and are the location coordinates of and its children nodes respectively . in a large dense uniformly and randomly distributed sensor network , if , where . and for , .the final total energy consumption will be : let and we get the closed form of : and and its closed form is : therefore , the lower bound of total energy consumption e is : % \omega(e)=\frac{1}{4 } c s^{1/2}(n - n^{t-1 } ) + \frac{1}{4}ck \log{n}s^{1/2 } ( n-1 ) n^ t s_1'\ ] ] and upper bound of e is : \ ] ] follow a similar derivation , we get the transmission energy consumption for ncs method in paper with the same data aggregation architecture \ ] ] and energy consumption with hcs method in paper \ ] ] to ignore the effects of all the constant parameters , we assume as unity . 
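since several of the closed - form expressions above do not survive in the excerpt, the small script below only reproduces the qualitative comparison of figure [meacom] under an assumed counting model: leaf sensors send one raw reading each, a level - i cluster head forwards m_i = min(n^i, ceil(C k log n^i)) values, and plain (ncs) gathering makes every node forward ceil(C k log N) values. the constant C, the choice k = 1 and the restriction to N = n^T are assumptions, so the numbers are indicative only.

```python
import math

def m_cs(n_signals, k=1, C=4):
    """assumed number of cs measurements for n_signals values of sparsity k."""
    return min(n_signals, math.ceil(C * k * math.log(n_signals)))

def total_hierarchical(N, n, k=1, C=4):
    """total values transmitted when N = n**T sensors aggregate over a T-level hierarchy."""
    T = round(math.log(N, n))
    total = N - N // n                              # level-1 members send their raw readings
    for i in range(1, T):                           # level-i heads forward m_i values upward
        total += (N // n**i) * m_cs(n**i, k, C)
    return total

def total_flat(N, k=1, C=4):
    """plain cdg: every one of the N nodes forwards the same m measurements."""
    return N * m_cs(N, k, C)

for n, N in ((4, 1024), (16, 4096)):
    print(n, N, total_hierarchical(N, n), total_flat(N))
```

under these assumptions the hierarchical total is roughly an order of magnitude below the flat cdg total, consistent with the qualitative claim above that plain cs aggregation introduces a large amount of redundancy.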
figure [ ec_networksize ] reflects final transmission energy consumption trend with 300 , 400 , 500 , 600 , 700 , 800 network scale for cluster size 4 .the proposed hdacs method achieves the highest efficiency in energy consumption compared with other methods . in the following paper, we set up 2d irregular network deployments on java - based sidnet - swans simulation platform to demonstrate feasibility and robustness of our hierarchy model . a variety of practical applications in survelillance and habitat monitoring , the data fields such as temperature , sound , pressure measurements are usually smooth .in this paper we ignore the effect of variation of sparsity k in each level .therefore , smooth data field with uniform noise is a practical choice to get the sparse signal representation with identical sparsity k. we perform discrete cosine transform ( dct ) for each of the collecting clusters before taking random measurements .the main reasons for choosing dct are : a ) .it yields fast vanishing moments of signal representation and gives real coefficients unlike discrete fourier transform ( dft ) .it also does not require that cardinality of measurements be a power of 2 as wavelet transform does .we perform the truncating process for dct coefficients by forcing those magnitudes below a threshold to zero in order to further sparsify the signals .the threshold has been set up by percentile of the first dominant magnitudes . in actual simulation , is chosen as 0.01 , 0.005 .the multi - scale routing protocol matches well with hierarchical data aggregation mechanism .since our model mainly focuses on the dense and large - scale network topology , it guarantees the existence of shortest path between any two nodes .cosamp algorithm has been adopted as the cs recovery algorithm in our implementation .this algorithm takes as a proxy to represent signal inspired by the restricted isometry property of compressive sensing .compared with other recovery methods such as various versions of omp algorithms , convex programming methods , combinatorial algorithms , cosamp algorithm guarantees computation speed and provides rigorous error bounds .sidnet - swans is a sensor network simulation environment for various aspects of applications , which provides with java based visual tool , has been utilized to study the performance of the proposed algorithm .the jist system , which stands for java in simulation time , is a java - based discrete - event simulation engine .jist system has been used to obtain the transmission time and energy consumption for each sensor .figure [ simuimage ] is a snapshot of user interface of newly designed cs data aggregation architecture on sidnet - swans for 400 sensors network . in this sectionthe performance has been evaluated on sidnet - swans platform with jist system to demonstrate all the theoretical analysis process .the algorithms was tested against five network sizes : 300 , 400 , 500 , 600 , and 700 nodes over a flat data field with uniformly distributed additive white noise . in all these network, we choose and .the leaf nodes number in the level one is flexible , which fits the characteristics of two - dimensional random deployment of sensor networks .therefore , . 
in the recovery procedure, we adopt the idea of model - based cosamp algorithm as the dct representation makes the support location of coefficients visible and design a new cosamp algorithm for dct based signal ensemble , which accurately recovers the data .we define the signal to noise ratio ( snr ) as the logarithm of the ratio of signal power from each sensors over recovery error in the fusion center . as we see from figure [ snr ], the change of sensor size does not affect snr performance .figure [ er ] shows the comparison of transmission energy consumption distribution for 400 sensor networks .ratio1 is defined as transmission energy consumption ratio of proposed hdacs and ncs data aggregation .ratio2 is defined as transmission energy consumption ratio of proposed hdacs and hcs data aggregation .as we see from the figure [ er ] , ratio1 is less than 0.5 , which means 50% transmission energy will be saved compared with ncs data aggregation .ratio2 is almost equal or less than 1 , which is owing to the fact that most nodes only transmit data in the level one and finish their job .both proposed hdacs and hcs data aggregation adopt the same strategy that only raw data is transmitted for those leaf nodes , which explains why most ratio2 values of nodes are equal to one .but for those nodes working as collecting clusters in the levels that are higher than one , ratio2 values are less or equal to 0.633 as we expect . the nodes with highest levelsave almost 70% power .moreover , the results we obtain so far depend on the frame size per transmission in mac layer to some extent . if the data size becomes larger , data will be segmented into more frames for transmission . and this will definitely cost more power . since the comparison of proposed hdacs , ncs and hcs algorithms always refers to compare the number of and .suppose one frame size is , then the frame number of data size and are and respectively .if and two frame number are and .when , and , frame number are 1 and 2 respectively , which explains how 50% transmission energy is saved by using hdacs data aggregation .in this paper , we presented a novel power - efficient hierarchical data aggregation architecture using compressive sensing for a large scale dense sensor network .it was aimed at reducing the data aggregation complexity and therefore enabling energy saving .the proposed architecture is designed by setting up multiple types of clusters in different levels .the leaf nodes in the lowest level only transmit the raw data .the collecting clusters in other levels perform dct to get sparse signal representation of data from their own and children nodes , take random measurements and then transmit them to their parent cluster heads .when parent collecting clusters receive random measurements , they use inverse dct transformation and dct model based cosamp algorithm to recover the original data . by repeating these procedures , the cluster heads in the top levelwill collect all the data .we perform theoretical analysis of hierarchical data aggregation model with respect to total data transmission number , data compression ratio and transmission energy consumption .we also implement this model on sidnet - swans simulation platform and test different sizes of two - dimensional randomly deployed sensor network .the results demonstrate the validation of our model .it guarantees the accuracy of collecting data from all the sensors .the transmission energy is significantly reduced compared with the previous work . 
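the recovery pipeline described above (dct for a sparse representation, truncation of the small coefficients, random measurements at the cluster head, cosamp reconstruction, snr in db at the fusion center) can be sketched end to end in a few lines. the field below is built from a handful of low - frequency dct atoms plus small uniform noise, the 1% truncation threshold mimics the percentile rule mentioned above, and the cosamp routine is a bare - bones textbook implementation rather than the model - based variant used in the paper; sizes, seeds and all other parameters are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

def cosamp(A, y, k, n_iter=30):
    """bare-bones cosamp: recover a k-sparse s from y = A s by greedy support pursuit."""
    s = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(n_iter):
        proxy = A.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * k:]                # 2k largest proxy entries
        support = np.union1d(omega, np.flatnonzero(s)).astype(int)
        ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)    # least squares on the merged support
        idx = np.argsort(np.abs(ls))[-k:]                         # prune back to the k largest
        s[:] = 0.0
        s[support[idx]] = ls[idx]
        residual = y - A @ s
        if np.linalg.norm(residual) < 1e-9:
            break
    return s

N, M = 128, 32                                     # sensors in one cluster, measurements
s0 = np.zeros(N)
s0[[1, 3, 6, 11]] = [6.0, -3.5, 2.0, 1.2]          # a few low-frequency dct atoms
x = idct(s0, norm='ortho') + 0.005 * rng.uniform(-1, 1, N)    # smooth field + uniform noise

s_true = dct(x, norm='ortho')
s_true[np.abs(s_true) < 0.01 * np.max(np.abs(s_true))] = 0.0  # percentile-style truncation
x_sparse = idct(s_true, norm='ortho')
k = int(np.count_nonzero(s_true))

Psi = idct(np.eye(N), norm='ortho', axis=0)        # dct synthesis matrix: x = Psi @ s
Phi = rng.normal(size=(M, N)) / np.sqrt(M)         # random measurements taken at the cluster head
y = Phi @ x_sparse

x_hat = Psi @ cosamp(Phi @ Psi, y, k)
snr = 10 * np.log10(np.sum(x_sparse**2) / np.sum((x_sparse - x_hat)**2))
print("sparsity:", k, "recovery snr (db):", round(snr, 1))
```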
in our future work , we will also take into consideration changeable factors of sparsity . this refers to more complex data fields , and an adaptive model will be set up to handle the dynamic nature of the data aggregation fields . besides , other cs recovery algorithms will also be investigated to reduce recovery complexity and improve signal recovery accuracy . distributed compressive sensing , which factors in the spatial correlation of data , turns out to be a very promising recovery approach . moreover , other tasks besides data aggregation will also be explored on our proposed hierarchical architecture .
jun luo , liu xiang , and catherine rosenberg , _ does compressed sensing improve the throughput of wireless sensor networks ? _ , in proceedings of the ieee international conference on communications ( icc10 ) , pp 16 , cape town , south africa , may 2010 .
m. a. t. figueiredo , r. d. nowak , and s. j. wright , _ gradient projection for sparse reconstruction : application to compressed sensing and other inverse problems _ , ieee j. selected topics in signal processing : special issue on convex optimization methods for signal processing , 1(4):586598 , 2007 .
d. baron , m. b. wakin , m. f. duarte , s. sarvotham , and r. g. baraniuk , _ distributed compressed sensing _ , technical report ece-0612 , electrical and computer engineering department , rice university , december 2006 .
compressive sensing ( cs ) is a burgeoning technique being applied to diverse areas including wireless sensor networks ( wsns ) . in wsns , it has been studied in the context of data gathering and aggregation , particularly aimed at reducing the data transmission cost and improving power efficiency . existing cs - based data gathering work in wsns assumes a fixed and uniform compression threshold across the network , regardless of the data field characteristics . in this paper , we present a novel data aggregation architecture that combines a multi - resolution structure with compressed sensing . the compression thresholds vary over the aggregation hierarchy , reflecting the underlying data field . compared with previous relevant work , the proposed model shows significant energy savings in the theoretical analysis . we have also implemented the proposed cs - based data aggregation framework on the sidnet - swans platform , a discrete event simulator commonly used for wsn simulations . our experiments show substantial energy savings , ranging from 37% to 77% for different nodes in the network depending on their position in the hierarchy . keywords : data aggregation , compressive sensing , hierarchy , power - efficient algorithm , wireless sensor network
relativistic viscous hydrodynamics is a popular choice for modeling high - energy heavy - ion collisions .hydrodynamics is appropriate when collisions are sufficiently rapid to keep the various species moving with a single collective velocity and local kinetic temperature , and to keep the stress - energy tensor sufficiently isotropic to warrant a viscous treatment .however , these conditions are lost near the end of the reaction , when the various hadrons begin to cool separately and move with different collective velocities . the final stage of the reaction and the decoupling are then best modeled with a microscopic simulation , which in the limit of many particles , or with a high over - sampling , becomes equivalent to a boltzmann description .often these simulations are referred to as hadronic cascades .particles are emitted into the hadronic cascade through a hyper - surface separating the hydrodynamic region from the cascade .if the phase space density of a particle s of momentum at the boundary is , the number of particles emitted into the simulation side of the boundary through a hyper - surface element is given by the cooper - frye formula , here , is a small hyper - surface element .variations of the cooper - frye formula have been applied to numerous hybrid models . in each of these approachesthe boltzmann equation is solved by sampling techniques , i.e. , rather than storing information for each phase - space element , one follows the evolution of sample particles chosen consistently with the phase space density . for sampling ratios of one ,the cascade is a one - to - one simulation of the hadronic stage . for higher sampling ratios ,the models approach the limit of a boltzmann equation . since hadronic cascades are modeling the low - density stage at the end of the collision where velocity gradients are reduced , sampling ratios of unity are nearly indistinguishable from the boltzmann limit .this is in contrast to the case of simulating the early partonic stage , where sampling ratios need to be of order 10 or more to approach the boltzmann limit .whether the sampling ratios are unity or not , one needs to generate particles into the cascade code consistently with the hydrodynamic description at the hyper - surface . for a time - like element , , one can consider the emission from a frame where the emission is simultaneous across the element and represents a volume element undergoing sudden emission .for a space - like element , one can choose a frame where the surface is stationary .in this frame represents the area of the element multiplied by the time of the emission . depending on the hyper - surface element , , and the particle s momentum ,the number can be either positive or negative .the positive contribution describes particles being fed into the cascade , whereas the negative contribution represents the backflow , i.e. 
those particles which leave the cascade region , cross the interface , and enter the hydrodynamic domain .both the positive and negative contributions are necessary if energy , momentum and charge are to be conserved across the interface .similar issues to those being discussed here in the context of a hydrodynamics / cascade interface have also been studied with regards to coupling a hydrodynamics directly to the vacuum , instantaneous freezeout , see for example .when coupling directly to the vacuum the stress - energy tensor is discontinuous , while with coupling to a boltzmann code , one can aim for a continuous description if the interface is performed at a sufficiently high density that collision rates justify the hydrodynamic description .most hadronic cascades ignore backflow .such codes apply a step function , , to eq .( [ eq : cooperfrye ] ) , and do nt erase particles from the simulation that flow backward across the boundary . for modeling of particles at mid - rapidity , it has been shown that the backflow is on the order of one half of one percent .however , it has been reported that the error grows to the level of several percent away from mid - rapidity .the purpose of this short paper is to introduce a method for correcting for backflow .the approach accounts for the backflow through the production and propagation of negative - weight particles .local conservation laws are exactly conserved in the limit of high sampling .the theoretical underpinnings of this approach are described in the next two sections .a hadronic cascade was altered to incorporate these changes , and a brief evaluation of the behavior is presented in the subsequent section along with conclusions about the approach . the appendix provides a description of the algorithm used to numerically sample the particle flow across the hydrodynamic / cascade interface .in order to apply the cooper - frye formalism of eq .( [ eq : cooperfrye ] ) one needs to first generate a list of hyper - surface elements , , where the index denotes each small element .formally , can be described as the portion of the hyper - surface that falls within a small four - volume element , .the surface is described by a criteria based on local properties , such as the temperature or density , and can be written parametrically as , where the function could be any quantity propagated through the hydrodynamic evolution , such as the particle or energy density .for instance , if the interface is chosen to occur at fixed temperature , , the location of the surface would be described by .the function should be defined such that one is inside the hydrodynamic region if and outside if .the hyper - surface element is then where is a step function .for example , if is defined by a surface of constant temperature , is parallel to . for time - like elements , , one can always consider the element in a frame where points purely in the time direction . in this framethe emission is simultaneous across the hyper - surface , and represents a small volume element . if the criteria is falling with time , e.g. the temperature is falling , and particles are being emitted into cascade region .the majority of the emission in heavy - ion collisions comes from such elements . 
for ,the criteria is rising , and the hydrodynamic region is absorbing the volume element .this can be understood by considering the sign of the term in the cooper - frye formula .having is unusual in a time - like element , because it is difficult to find an area where the density or temperatures are rising . even with lumpy initial conditions , the longitudinal expansion is driving the densities downward even in regions where two lumps are expanding into one another . for space - like elements , , one can always find a boost and rotation which point along the positive axis . in this framethe emission surface is temporarily stationary , and the emission is positive for particles with and negative for . if the flow velocity in this frame , , is greater than zero , there will more positive contribution than negative contribution . for explosive collisionsthis is usually the case .the negative contributions represent those particles flowing into the hydrodynamic region , i.e. , the backflow .both the positive and negative contributions are required to conserve charge .for instance , the charge that travels through an element , from the hydrodynamic to the cascade region between times and , is the first term vanishes from current conservation and the last two terms describe the difference between the net charges in the region at the two times .the net current density for all the hadron species can be written in terms of the phase space density as since the contributions from all momenta are required to construct the conserved current density at an position , one can not throw away the contributions to the cooper - frye formula from those momenta with without violating current conservation .similar expressions can be derived for the energy and momentum , thus showing that the negative contributions to the cooper - frye expressions are essential if one wishes to satisfy any of the conservation laws . thereexist numerous algorithms for finding the hyper - surface , depending on whether the initial conditions are lumpy or smooth , whether the calculation is one - dimensional , two - dimensional , or three dimensional , and depending on what sort of mesh is used to model the hydrodynamic expansion .a particularly robust algorithm that works for three - dimensional systems of arbitrary topology is described in .the output of hydrodynamic codes is a list of the hyper - surface elements , , their space - time coordinates , and any additional information required to reconstruct the phase space density , e.g. , the collective flow , temperature , densities , and anisotropies of the stress - energy tensor . for each within each , one can calculate the probability of creating a particle , .one then decides to create a particle with probability .an efficient algorithm for doing this is presented in the appendix .the issue of backflow occurs when is negative . in the field of relativistic heavy ions ,two approaches have been applied thus far .the first approach is simply to neglect the negative contribution . as mentioned in the introduction , this violates energy , momentum and charge conservation at the one percent level at mid - rapidity , and higher away from mid - rapidity . 
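the relative size of the negative contribution can be estimated with a small monte carlo over a single space - like element. the sketch below samples thermal (boltzmann) pion momenta in the fluid rest frame, boosts them to the frame in which the surface element is momentarily at rest with its outward normal along +x, and weights each sample by the cooper - frye factor evaluated with the rest - frame energy, so the ratio of the negative to the positive part is the backflow fraction for that element. the temperature, pion mass and flow velocities are illustrative assumptions, and a single space - like element naturally overstates the half - a - percent to few - percent global numbers quoted above, since most of the yield comes from time - like elements.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_boltzmann_p(T, m, size):
    """momentum magnitudes from p^2 exp(-E/T): draw from the gamma(3, T) envelope
    p^2 exp(-p/T) and accept with probability exp((p - E)/T)."""
    out = np.empty(size)
    n = 0
    while n < size:
        p = -T * np.log(rng.random((size, 3))).sum(axis=1)
        E = np.sqrt(p * p + m * m)
        keep = p[rng.random(size) < np.exp((p - E) / T)]
        take = min(keep.size, size - n)
        out[n:n + take] = keep[:take]
        n += take
    return out

def backflow_fraction(T=0.150, m=0.138, v=0.3, n_samples=200_000):
    """ratio of the negative to the positive cooper-frye contribution through a
    space-like element at rest, outward normal along +x, fluid velocity v along +x."""
    p = sample_boltzmann_p(T, m, n_samples)        # magnitudes in the fluid rest frame (gev)
    cos_t = rng.uniform(-1.0, 1.0, n_samples)      # isotropic directions in the rest frame
    px = p * cos_t
    E = np.sqrt(p * p + m * m)
    gamma = 1.0 / np.sqrt(1.0 - v * v)
    px_elem = gamma * (px + v * E)                 # boost to the frame of the surface element
    # samples are drawn from f(p) d^3p in the rest frame, so the remaining
    # cooper-frye weight per sample is p.dSigma / p0_rest, proportional to px_elem / E
    w = px_elem / E
    positive = w[w > 0].sum()
    negative = -w[w < 0].sum()
    return negative / positive

for v in (0.1, 0.3, 0.6):
    print("v =", v, " backflow / outflow =", round(backflow_fraction(v=v), 3))
```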
a second method for handling the backflow has been to erase those particles from the cascade simulation that cross back into the hydrodynamic region .this requires storing the location of the hyper - surface from the hydrodynamic module for consideration by the cascade .this method would satisfy the conservation laws as long as the phase space density is continuous across the boundary .a strong discontinuity would suggest that the hydrodynamic treatment was not justified for the densities or temperatures that determined the interface , or that the viscous corrections to the stress - energy tensor in the hydrodynamic treatment is not consistent with the dynamics of a hadron gas . choosing a higher interface density or temperature , or better accounting for viscous effects , might solve the problem .for the purpose of this paper , we will assume that the phase space density is continuous , though with the recommendation that the continuity be better studied in the future .a more daunting problem with this approach comes from the book - keeping required to decide whether particles crossed the boundary .hydrodynamic approaches tend to use very small cells , on the order of 0.1 fm .particles would typically cross hundreds of such cell boundaries , with a very small fraction of such cell boundaries representing an interface with the hydrodynamic treatment .although there are a variety of algorithms used in cascades , it would seem that the majority of the numerical cost in performing the cascades might then be applied to tracking the backflow .given the field s interest in fluctuating ( lumpy ) initial conditions , the topology of the breakup surface could be complex , and could vary event - to - event .this makes it difficult to find a robust algorithm that avoids constantly checking for interface crossings .this approach was applied in but that was for a smooth , azimuthally symmetric , boost - invariant hydrodynamic treatment , where the hyper - surface could be represented by giving the radius of the transition hyper - surface as a function of time .a third scheme is presented here . 
instead of tracking particles relative to the interface, this scheme simulates the evolution of particles emitted with the negative weights , , from eq .( [ eq : cooperfrye ] ) .a weight of negative unity is originally assigned to such particles , whereas a weight of positive unity is assigned to particles emitted with .since the negative - weight particles are a small fraction of the overall particles , the additional numerical cost should be on the order of a few percent .the particles are then evolved through the cascade , but with the products of each collision being assigned weights as described below .when analyzing the final products for their impact on specific observables , each particle would contribute proportional to its weight .for instance , when incrementing a bin used to calculate spectra , a particle with weight of -1 would reduce the bin count by one .if the phase space density is continuous across the interface , the particles being scattered from the cascade region back into the hydrodynamic region should exactly cancel the negatively weighted particles emitted according to the cooper - frye formula .even if the phase space density turns out to be discontinuous , this method will still satisfy all the conservation laws .it is propagating the weights through the subsequent collisions where the weighting becomes complicated .if a pair of incoming particles each has a weight of positive unity , both incoming particles are removed from the particle list after the collision ( a weight of zero is assigned to the incoming particles ) and a weight of positive unity is assigned to each of the collision products .however , if the incoming particles have arbitrary weights , , both the new outgoing particles and incoming particles may need to be assigned post - collision weights if the conservation laws are to be satisfied .if the new weights assigned to the incoming particles are non - zero , those particles can not be deleted from the particle list after the collision .the weights for the outgoing particles ( those that are newly created or are scattered into new momentum states ) , are labeled , and are all set to the product of the weights of the incoming particles , the weights for the scattered or newly created particles are all equal to the product of the incoming weights .this ensures that if any of the incoming weights are changed by a factor , that the weight of each of the outgoing products is changed by the same factor . instead of being erased ,the incoming particles are assigned post - collision weights , if all the incoming weights are unity , for all , and the incoming particles can be erased . 
assigning non - zero post - collision weights to the incoming particles as prescribed in eq .( [ eq : weights2 ] ) ensures that conservation laws are enforced in each collision , given that the charge is conserved in the reaction , the conserved charge could just as easily refer to a continuous variable like momentum or energy .for the several simple process involving scatterings or decays , the weights of the outgoing particles are shown in eq.s [ eq : weights ] .each incoming particle is labeled , where refers to all information about the particle and its trajectory , while is its weight .the outgoing particles are similarly referenced , with referring to the particle s information .if all weights were unity , the weights could be ignored and the reactions would be described as .one can understand the weightings for each line of eq .( [ eq : weights ] ) from eq .( [ eq : weights1 ] ) . for example, one can consider a case where a positive - weight particle collides with a negative - weight particle , .the positive - weight particle should not have scattered since the negative - weight particle should not be there , plus the negative - weight particle is in principle canceling the effect of a spurious positive - weight particle that should have been erased from cascade for having re - entered the hydrodynamic region .thus , not only should particle-1 not scatter , one needs to correct for the fact that such particles sometimes scatter spuriously , which means the forward going track weight needs to go from 1 to 2 .the scattered tracks are given negative weight so that the cancel the effect of particle-1 scattering off a spurious track .similar arguments follow for each row in the list . if the phase space density is continuous across the border , the original negative weight tracks should exactly cancel the positive - weight tracks that would have been erased when reentering the hydrodynamic region if one were applying the second method described earlier . implementing the procedure described in eq .( [ eq : weights ] ) turns out to be non - tenable due to the growth in the weights of the outgoing products .for example , if two particles with weight 2 collide , the scattered particles would have weights of 4 . in practice , this lead to exponentially growing weights as a function of the number of collisions .in addition to the growing weights , the number of trajectories being considered increases since the incoming particles are often reweighted , rather than deleted . fora central au+au collision at the highest rhic energy , final - state weights would often exceed the numerical range of the computer , and the number of tracks would exceed the available memory . even if the simulation were allowed to finish , the high - weighted tracks would overwhelm the answer with noise .thus , the procedure needs to be modified so that the effects of scattering with non - unity - weighted particles is regulated . 
to limit the growth and associated noise of heavily weighted tracks , the procedure described abovewas modified .particles were first divided into two sets : base particles and backflow tracer particles .the base particles are created and evolve exactly as if the backflow particles had never been included in the cascade .the tracer particles begin as the set of negative - weighted particles representing the backflow .they also include all the trajectories required to trace the influence of the backflow .tracer particles are allowed to interact with base particles , but are not allowed to collide with one another .for this reason the effect of the backflow is handled correctly to linear order in the backflow .since the fraction of backflow particles is unlikely to exceed a few percent , the error associated with this approximation should be on the order of a tenth of a percent or less .charge , energy and momentum is conserved in each collision as well as in the generation of the particles through the hydrodynamic / cascade boundary .all of the base particles are assigned weights of positive unity , whereas the tracer particles are allowed to have weights of .the weights for scattering processes needs to be assigned consistently with those given in eq .[ eq : weights ] .however , because collisions between two tracer particles is neglected , the list of interactions is abbreviated and given in eq .( [ eq : altreactionlist ] ) . for a collision between incoming particleswhose momenta , position and charges are labeled by and while the outgoing particles are denoted by , the cascade must consider three cases .in the first case both particles are base particles and have weights of positive unity .such particles are described by the notation . in the second case ,one of the particles is a tracer particle with positive weight , , and in the third case one has a tracer particle of negative weight . for decays , , the outgoing decay products all have the same weights as the incoming particle , and are all either base or all tracer particles depending on whether is a base or tracer particle . for decays ,the original track is deleted . 
from inspecting the three reactions in eq .( [ eq : altreactionlist ] ) one can see that the evolution of the base particles is exactly the same as it would be without the tracer particles , as there is no change to the base particles occurring when the base particle collides with a tracer particles .the tracer particles for the final states are all added into the equations on the right - hand sides of the last two expressions in eq .( [ eq : altreactionlist ] ) to make the reactions consistent with the weights in eq .[ eq : weights ] .for the two reactions involving tracer particles there are two outgoing particles that differ only by their weights or by whether they are tracer particles .since having two particles with the same trajectory would cause numerical problems , the second particle is translated randomly in relative rapidity in the case of bjorken boost invariance , or if boost invariance is not implemented , moved by a small random step in coordinate space .this procedure correctly reproduces the evolution described by the algorithm described in eq .[ eq : weights ] to first order in the backflow .however , for the processes described in eq .( [ eq : altreactionlist ] ) that involve an incoming tracer particle , the number of tracer particles triples from one to three .thus , if a large number of collisions occur the number or tracer particles quickly grows and the method can become noisy .for that reason , the scattering of tracer particles is curtailed once a tracer particle and its ancestors have suffered a number of collisions .this is accomplished by storing a number for each tracer particle .then , when a incoming tracer particle scatters according to the list in eq .( [ eq : altreactionlist ] ) , the number is incremented by one and assigned to all the tracer products of the final state .the sensitivity to the cutoff is shown in figure [ fig : fake ] .the figure shows results from a simulation of 100 gev on 100 gev au+au central au+au collisions at rhic . the hadronic cascade , b3d , modeled a longitudinally boost invariant system by describing hadrons with spatial rapidities between -1 and 1 with cyclic boundary conditions .the algorithm for modeling backflow described above was incorporated into the model with several values of . for set to zero , the backflow particles were created , but were not allowed to collide .after decays , the average number of backflow particles per collision was approximately 4.5 per event , a small fraction of the over 2300 final - state base particles .a typical hadron might collide two to three times per cascade event but backflow particles were likely to collide significantly more often due their being emitted into the dense hydrodynamic region .since the number of tracer particles triples with each collision , the number of tracer particles rises rapidly as a function of , as shown in the lower panel of fig .[ fig : fake ] , and by the time the number of final - state tracer particles exceeds the number of final - state base particles . from a simulation of the central two units of rapidity in a 100 gev on 100 gev au+au collision ,the number of tracer particles , the additional particles required to model the effects of the backflow , is shown in the lower panel as a function of , which limits the amount of scattering the tracer particles can undergo . for backflow particles never scatter and only 4.5 particles are required. 
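a minimal sketch of this tracer bookkeeping is given below. the exact final-state assignments of eq. ( [ eq : altreactionlist ] ) are not visible in this excerpt, so the outgoing tracer list (a negatively weighted copy of the base trajectory plus the scattered products, all inheriting an incremented scatter counter) is an inferred reading of the description above, chosen so that the base evolution is untouched, one incoming tracer yields three outgoing tracers, and scattering stops once the counter reaches n_max.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    pid: int
    weight: float        # base particles: +1; tracer particles: +w or -w
    is_tracer: bool = False
    n_scatter: int = 0   # collisions suffered by this tracer and its ancestors

def base_tracer_collision(base, tracer, outgoing_pids, n_max):
    """One reading of the abbreviated reaction list for a base + tracer collision.

    The base particle is returned unchanged, so the base evolution is exactly
    what it would be without the backflow.  The single incoming tracer is
    replaced by three outgoing tracers: a copy of the (spuriously scattered)
    base trajectory with weight -w, plus the scattered products with weight +w,
    each inheriting an incremented scatter counter.  Once the counter reaches
    n_max the collision is ignored, which is the cutoff described in the text.
    The exact final states of eq. (altreactionlist) are not visible in this
    excerpt, so this is an inferred sketch, not the reference implementation.
    """
    if tracer.n_scatter >= n_max:
        return [base, tracer]                 # curtail further tracer scattering

    n = tracer.n_scatter + 1
    w = tracer.weight
    tracers_out = [Particle(base.pid, -w, True, n)]            # corrects the base
    tracers_out += [Particle(pid, +w, True, n) for pid in outgoing_pids]
    # note: in practice one of these tracers would be nudged by a small random
    # rapidity (or coordinate-space) step so that no two tracks share a trajectory
    return [base] + tracers_out

print(base_tracer_collision(Particle(211, 1.0), Particle(111, -1.0, True, 0),
                            outgoing_pids=[211, 111], n_max=4))
```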
however , the effect of a scattering spreads over more and more particles as is increased , and by the time , the number of tracer particles is greater than the number of actual particles . the lower panel shows the net transverse energy carried by tracer particles in the final state .the effect saturates for large . however , large is numerically more expensive and can introduce more statistical noise into the analysis ., scaledwidth=60.0% ] for , the backflow particles carried nearly 2.2 gev of transverse energy , which subtracts from the net transverse energy of the event due to their negative weight .this is denoted by , and is shown in the upper panel of fig .[ fig : fake ] as a function of .once , tracer particles can have both positives and negative weights .the net transverse energy carried by the tracer particles then involves a large cancellation and can become noisy for large due to the large number of tracer particles . by letting the tracer particles scatter , the net transverse energy carried by the tracer particlesis reduced by the longitudinal work done in the expansion .the quantity saturates for larger . the bulk of the correction for scattering is accounted for with just a few scatterings . for example , by setting , one simulates nearly 90% of the change of with little additional computational cost since the average number of tracer particles is only on the order of 100 .the backflow contribution from a cooper - frye interface between hydrodynamic and boltzmann / cascade models can indeed can be accounted for with the methods above .the method was shown to correctly account for the backflow in the limit that the backflow is sufficiently small so that the effects are linear in the backflow .additionally , a second approximation can be applied that limits the simulation of backflow to a finite number of scatterings .most of the secondary scattering effects occur in the first few scatterings , which allows one to impose the cutoff for small .this prevents the method from consuming significant additional numerical resources or from adding significant statistical noise to the output .even with , charge , energy and momentum flow through the interface is satisfied .this work was mainly motivated by the desire to find a method to account for backflow without requiring disproportionate resources to describe an effect whose impact is modest at best . at mid - rapidity in high - energy heavy - ion colliisons ,the number of particles traveling back into the hydrodynamic region is only a fraction of a percent of the total number of particles . 
for slightly lower energies , or away from mid - rapidity , that fraction might increase , though is unlikely to exceed a few percent .the method described here seems ideal for such cases .the method was implemented into the cooper - frye generator and cascade used in the code b3d .this code was designed to perform the same tasks as the hadronic cascade urqmd , but to be better suited for a high - energy and to be faster .b3d can generate particles from the hypersurface using the algorithms described in the appendix , and perform the hadronic cascade in approximately 0.25 seconds on a single core in a typical laptop .adding backflow corrections with slowed the code down by 3% and increased the size of the final - state data files by 5% .the results of fig .[ fig : fake ] show that using allows one to calculate the effects of the backflow to the level .given that backflow is a small effect , this may be more than sufficient for most purposes .for a small hyper - surface element , the number of particles emitted is aside from the factor of , the second line looks like the usual thermal emission .the primed quantities refer to momenta and phase space densities measured in the reference frame of the fluid , i.e. the frame where .this formula works for any , but for a monte carlo procedure it must be chosen large enough that never exceeds unity for any .efficiency is lost if it becomes larger than necessary .the choice above is the optimum value as it corresponds to the smallest acceptable value that keeps . 1 .choose particles according to a static thermal distribution with volume .the quantity should incorporate any viscous corrections .2 . boost the particles by 3 . keep or reject the particle with probability .if rejected , continue to the next species .this procedure should be repeated for each species . however , volume elements tend to be small and there might be many species with small probabilities . rather than throwing random numbers for each speciesit is more efficient to implement the following algorithm : 1 .generate a series of thresholds separated by random amounts , , where generates a random number between zero and unity .this produces thresholds separated by lengths with probabilities .if one is at a position and one increments by , the chance of hitting a threshold is , independent of any previous threshold .2 . thus one can make a cumulative sum for each species , where is the density in the rest frame of species .if crosses a threshold , one considers creating a particle as described above . if crosses two thresholds , one attempts to create two particles . for each differential probabilitythe chance of crossing a threshold will be independent of any other differential probability , so the emissions are independent , and thus poissonian .with this approach , the number of random numbers one generates in deciding whether to attempt creating a particle is the number of particles one would make if the weights , in eq .( [ eq : weightdef ] ) , were unity .this can be a significant improvement compared to throwing random numbers for each species ( perhaps over 300 ) in each volume element ( perhaps many millions ) .since one knows the total density of hadrons , , one can check to see whether crosses a threshold . if not , one can increment the running total in one swipe and avoid considering each species independently .this procedure assumes that you already know . 
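the threshold bookkeeping of the second algorithm can be illustrated with a few lines of code. the species list, mean multiplicities and flat acceptance used in the example are placeholders, and the actual cooper-frye weight of eq. ( [ eq : weightdef ] ), including any viscous correction, is abstracted into the accept callback.

```python
import math
import random

def emit_from_element(mean_counts, accept):
    """Poissonian emission attempts from one hyper-surface element.

    mean_counts: dict species -> n_i * dV (rest-frame density times the
                 element's effective volume, i.e. the mean number of attempts)
    accept:      callable species -> probability of keeping an attempted
                 particle (stands in for the Cooper-Frye weight w <= 1)

    Each time the running cumulative sum crosses a threshold separated from
    the previous one by -ln(r), one attempted emission of the current species
    is made; crossing two thresholds means two attempts, so the emissions of
    different species are independent and Poisson distributed without a
    per-species random number.
    """
    emitted = []
    threshold = -math.log(random.random())
    running = 0.0
    for species, mean_n in mean_counts.items():
        running += mean_n
        while running > threshold:
            if random.random() < accept(species):
                emitted.append(species)
            threshold += -math.log(random.random())
    return emitted

# toy element: small mean multiplicities per species, flat 70% acceptance
random.seed(1)
counts = {"pi+": 0.03, "pi-": 0.03, "K+": 0.006, "p": 0.002}
print(emit_from_element(counts, accept=lambda s: 0.7))
```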
for calculations where the interfaces is defined by a fixed temperature , and with , one can calculate these densities once , and store them . for calculations with a variety of breakup densities , one might store an array of values . [ [ generating - a - particle - with - the - thermal - phase - space - density - fbf - p . ] ] generating a particle with the thermal phase space density .~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in the absence of viscous corrections one can generate a particle in the fluid rest frame with the following algorithm .for a probability distribution , one can sample the distribution by taking the natural log of the product of three random numbers , i.e. , where are random numbers chosen uniformly between zero and unity . for a three - dimensional distribution of massless particles , the choice of coordinates is ^ 2}{[\ln(r_1r_2r_3)]^2}.\end{aligned}\ ] ] to check that this works one can calculate the jacobian , for massive particles one can throw a fourth random number , and if , one repeats the procedure until one satisfies the additional condition . unfortunately , this becomes inefficient for large masses , , because the probability of successfully satisfying the condition for becomes small for small . in that case one applies an alternative algorithm . for , it was found that a more efficient method is based on the expression , where is the kinetic energy .the strategy is to generate ignoring the factor , then do a keep - or - repeat based on that weight . to generate a value of consistent with , one breaks up the factor into three terms , onefirst throws a random number and chooses which of the three terms to use as a distribution based on the integrated weights for each term , and .once one has picked a given term , one can pick as and respectively . with this value of , one can now do a keep or repeat decision based on the weight .after is chosen , and can be picked with new random numbers , .viscous corrections can be applied according to .this involves transforming the momentum according to ( in the rest frame ) where is the shear tensor in the frame of the fluid , and is a constant chosen so that the generated distribution will indeed have the stress - energy tensor one wishes assuming that the viscous correction is much smaller than the pressure .this coefficient can be found analytically given the list of masses and spins of the hadrons . for ,the linear approximation is good to the one percent level or better in that it consistently reproduces the viscous correction according to h. sorge , phys .b * 373 * , 16 ( 1996 ) [ nucl - th/9510056 ] .s. pratt and j. murray , phys .c * 57 * , 1907 ( 1998 ) . c. anderlik , l. p. csernai , f. grassi , w. greiner , y. hama , t. kodama , z. i. lazar and v. k. magas _ et al ._ , phys .c * 59 * , 3309 ( 1999 ) [ nucl - th/9806004 ] .f. cooper and g. frye , phys .d * 10 * , 186 ( 1974 ) . c. nonaka and s. a. bass , phys .c * 75 * , 014902 ( 2007 ) [ nucl - th/0607018 ] . c. gale , s. jeon , b. schenke , p. tribedy and r. venugopalan , phys .* 110 * , 012302 ( 2013 ) [ arxiv:1209.6330 [ nucl - th ] ] .h. song , s. a. bass , u. heinz , t. hirano and c. shen , phys .c * 83 * , 054910 ( 2011 ) [ erratum - ibid .c * 86 * , 059903 ( 2012 ) ] [ arxiv:1101.4638 [ nucl - th ] ] .j. novak , k. novak , s. pratt , c. coleman - smith and r. wolpert , arxiv:1303.5769 [ nucl - th ] .d. molnar and m. 
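a minimal sketch of the rest-frame momentum sampling follows. it draws only the magnitude of the momentum, omits the angular variables, the dedicated large-mass algorithm and the viscous momentum transformation, and assumes the rejection weight exp((p - E)/T), which is the standard correction of the massless proposal p^2 exp(-p/T) to the massive distribution p^2 exp(-E/T); the elided keep-or-repeat condition in the text is presumably of this form.

```python
import math
import random

def sample_thermal_momentum(mass, temperature):
    """Draw |p| from a relativistic Boltzmann distribution ~ p^2 exp(-E/T).

    Massless limit: p^2 exp(-p/T) is sampled exactly by p = -T ln(r1 r2 r3).
    For massive particles the same proposal is kept with probability
    exp((p - E)/T); this rejection step becomes inefficient for m >> T, where
    the heavier-mass algorithm described in the text should be used instead
    (not reproduced here).
    """
    while True:
        r1, r2, r3 = (random.random() for _ in range(3))
        p = -temperature * math.log(r1 * r2 * r3)
        energy = math.hypot(p, mass)
        if random.random() < math.exp((p - energy) / temperature):
            return p

# toy usage: a pion-like mass at a 150 MeV breakup temperature (units: MeV)
print(sample_thermal_momentum(mass=138.0, temperature=150.0))
```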
gyulassy , nucl .a * 697 * , 495 ( 2002 ) [ erratum - ibid .a * 703 * , 893 ( 2002 ) ] [ nucl - th/0104073 ] .s. cheng , s. pratt , p. csizmadia , y. nara , d. molnar , m. gyulassy , s. e. vance and b. zhang , phys . rev .c * 65 * , 024901 ( 2002 ) [ nucl - th/0107001 ] .p. huovinen and h. petersen , arxiv:1206.3371 [ nucl - th ] .s. pratt and j. vredevoogd , phys .c * 78 * , 054906 ( 2008 ) [ erratum - ibid .c * 79 * , 069901 ( 2009 ) ] [ arxiv:0809.0516 [ nucl - th ] ] .s. pratt and g. torrieri , phys .c * 82 * , 044901 ( 2010 ) [ arxiv:1003.0413 [ nucl - th ] ] .
methods for building a consistent interface between hydrodynamic and simulation modules are presented. these methods account for the backflow across the hydrodynamic/simulation hyper-surface. the algorithms are efficient, relatively straightforward to implement, and enforce conservation laws across the hyper-surface. the methods also account for the spurious interactions between particles in the backflow and other particles by following the subsequent impact of such particles. since the number of altered trajectories grows exponentially in time, a cutoff is built into the procedure so that the effects of the backflow are ignored beyond a certain number of collisions.
the 2004 sumatra tsunami struck the indian ocean and presented a wake - up call around the globe for improved preparedness and awareness for tsunamis and other coastal hazards . in its aftermath , many countries initiated the assessment or , in some cases reassessment , of their tsunami risks which mainly have been based on the maximum runup determination . has been broadly referenced for this purpose , who developed an analytical model for non - breaking waves runup. however , the approach for breaking analysis must be numerical due to its complexity . in their report for u.s .coastlines , mentioned the importance of numerical modeling in hazard assessments to quantify the impacts of future events .it has been recognized that as long as waves are not breaking , boussinesq and shallow water equations can be applied to simulate tsunami wave dynamics .especially the boussinesq equations are appropriate to study the wave approach and the runup process , and significant efforts have been made to include the proper dispersive terms . for a better representation of the flow field characteristics with boussinesq equations , proposed to use multiple layers along the vertical direction .however , such a multi - layer high - order boussinesq model is computationally expensive .furthermore , very close to the coast where the waves break , the irrotational assumption , which is appropriate for non - breaking waves , is violated .therefore , to fully understand and comprehensively study the very near - shore dynamics and effects of tsunami waves is better to explore with three - dimensional computational models .additionally , also highlighted that the runup value alone might not be appropriate to quantify the damage caused by tsunamis .runup describes the inundated area to the first order and can be employed for a large - scale overview of what happens after a tsunami strikes .however , runup values might not be sufficient to explore better coastal management strategies and solutions . in this contribution, we use lagrangian numerical simulations to revisit the wave breaking hydrodynamics .we simulate the experimental setup for the canonical problem for long - wave runup with gpusph .we utilize three - dimensional solitary waves in order to be consistent with even though and discussed the fact that solitary waves are not the best model for tsunami waves .we use gpusph , a computer code that employs smoothed particle hydrodynamics ( sph ) to simulate breaking and non breaking solitary waves .sph solves the navier - stokes equations aided by the computational resource of graphical processing units ( gpu , * ? ? ?* ; * ? ? ?because of its lagrangian nature , sph is an appropriate approach to simulate flows with high turbulence such as breaking waves .based on the general sph formulation , the motion equations are written as : in which , , and represent the density , mass , velocity and pressure of the fluid particle and its neighboring fluid particles . the distance between particles and represented by , is the kernel or interpolation function and is the artificial viscosity to prevent spurious particle movements : the parameter is the speed of sound . for surface flows ,the parameters and are constants with values of 0.01 and 0 respectively .the initial distance between particles , , is constant . 
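the artificial viscosity expression is not fully reproduced in this excerpt; the sketch below therefore uses the standard monaghan form, with mu_ij = h v_ij . r_ij / (|r_ij|^2 + 0.01 h^2) for approaching particles and zero otherwise, which is consistent with the quoted constants alpha = 0.01 and beta = 0 but should be read as an assumption rather than the exact expression implemented in gpusph.

```python
import numpy as np

def monaghan_viscosity(r_ij, v_ij, rho_i, rho_j, h, c, alpha=0.01, beta=0.0):
    """Standard Monaghan artificial viscosity between two SPH particles.

    r_ij, v_ij : relative position and velocity vectors (numpy arrays)
    rho_i, rho_j : particle densities;  h : smoothing length;  c : sound speed
    Returns Pi_ij, the dissipative term added to the pressure part of the
    SPH momentum equation for approaching particle pairs.
    """
    vr = float(np.dot(v_ij, r_ij))
    if vr >= 0.0:                      # particles receding: no dissipation
        return 0.0
    mu = h * vr / (np.dot(r_ij, r_ij) + 0.01 * h * h)
    rho_bar = 0.5 * (rho_i + rho_j)
    return (-alpha * c * mu + beta * mu * mu) / rho_bar

# toy pair of approaching particles
print(monaghan_viscosity(np.array([0.01, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]),
                         rho_i=1000.0, rho_j=1000.0, h=0.015, c=40.0))
```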
for each particle to consider its neighbors , the radius of the support domain is , and therefore related to the initial particle distribution .all particles within the kernel are referred to as neighbor particles and form the neighbor list for the calculation of the physical properties . due to the relative movement of particles ,neighbor lists need to be updated at very time step . due to the fact that eq .( [ sphdensity ] ) is weakly compressible , an equation of state is required to relate pressure to density : for a more complete description of the sph method , we refer to , and .we employ the experimental setup of for the numerical simulations with gpusph ( fig .[ scheme ] ) .the particle diameter is m. the tank is 37.73 m long , 0.39 m wide , and 0.61 m high .the toe of the beach , , is located at 14.68 m measured from the initial wave maker location .parameter represents the slope of the beach ( ) . is the wave height measured from the initial depth , .the origin of the coordinate system is located at the initial position of the shoreline , increases towards the wave maker and increases upward .the solitary waves have the following surface profile : \ ] ] where is the wave height and refers to the location where the wave elevation corresponds to the amplitude of the solitary wave , . from eq .[ mu ] , obtained the following wave maker formula : in which is the wave maker displacement and is the wave number ( ) . and located at 14.68 m from the initial wave maker position . is the wave height measured from the initial depth , .the beach slope is .coordinate system origin is located at the initial beach shoreline.,scaledwidth=100.0% ] the piston is represented by boundary particles regularly distributed and applies lennard - jones boundary condition .the beach , the bottom surfaces and the domain sides are represented by the method proposed in which applies a smoothing kernel to the boundaries . to analyze the results of the simulations , we divide the flume domain into segments .each segment is m thick ( direction in fig .1 ) , 0.39 m wide and 0.70 m high ( width and height of the flume respectively ) .the analysis time increment , , is 0.10 s. for a certain , we compute in each segment the averaged flow momentum , averaged flow kinetic energy and averaged flow force ( * ? ? ?* hydrodynamic force ) per unit volume of fluid contained in the segment . in computing the flow forces ,the drag coefficient , , is assumed to be 1 ( * ? ? ?* table 8 - 2 ) as we are not considering any physical objects in the flow domain .then we pick the maximum values and their locations of the aforementioned variables .we refer to these maximum values as the maximum flow momentum , maximum flow kinetic energy and maximum flow force .we repeat this process at each so we track the maxima in space and time . then we obtain the absolute maximum of the flow momentum , flow kinetic energy and flow force from all maxima . 
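the segment analysis described above can be prototyped as follows. the momentum and kinetic energy per unit volume follow directly from the averaged density and speed in each slab; the "flow force" is taken here as the drag-like quantity 0.5 rho c_d u^2 with c_d = 1, which is an assumed reading of the hydrodynamic force used in the paper rather than its exact definition, and the particle arrays in the example are placeholders.

```python
import numpy as np

def segment_maxima(x, vel, rho, x_min, x_max, dx=0.1, c_d=1.0):
    """Average flow quantities in slabs along x and return the per-slab maxima.

    x    : (N,)  particle positions along the flume axis
    vel  : (N,3) particle velocities
    rho  : (N,)  particle densities
    Per slab: momentum density rho*|u|, kinetic energy density 0.5*rho*|u|^2,
    and an assumed drag-like flow force 0.5*rho*c_d*|u|^2 with c_d = 1.
    """
    edges = np.arange(x_min, x_max + dx, dx)
    speed = np.linalg.norm(vel, axis=1)
    mom, ke, force = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x < hi)
        if not np.any(sel):
            mom.append(0.0); ke.append(0.0); force.append(0.0)
            continue
        u = speed[sel].mean()
        r = rho[sel].mean()
        mom.append(r * u)
        ke.append(0.5 * r * u * u)
        force.append(0.5 * r * c_d * u * u)
    return max(mom), max(ke), max(force)

# toy snapshot of 1000 particles spread over 2 m of flume
rng = np.random.default_rng(0)
print(segment_maxima(rng.uniform(0, 2, 1000), rng.normal(0, 1, (1000, 3)),
                     np.full(1000, 1000.0), 0.0, 2.0))
```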
for data analysis , we employ dimensionless time , , dimensionless length , , and dimensionless wave height , .additionally , we use the wave crest , , defined by the water elevation that represents the wave amplitude at each .the tracking of the wave crest during the simulation defines the wave crest path .wave front , , is defined by the wave bore whose elevation is 40 of .the 40 threshold is obtained by comparing the wave front of case shown in fig .3.5.5 of synolakis ( 1986 ) and the sph simulation with the same setup .the best fit is determined by defining wave front as 40 of .we keep this criterion for the rest of cases studied in this work .to validate gpusph for the canonical problem , we simulate the case 225a from .the case 225a refers to experiments with a depth of m and .this is a case with strong breaking . in fig .[ validation ] we compare the experiment 225a ( crosses ) and gpusph simulation ( red lines ) before , during and after wave breaking .the fit between the measurements from the laboratory experiment and simulation is excellent , even after breaking .furthermore , in fig . [ maximumheight ]we compare the distribution of the maximum wave heights between the solitary wave from ( * ? ? ? * black crosses ) and our simulation ( solid line ) . the good fit between the laboratory experiments and the numerical simulation appears .we also present the distribution of the maximum wave amplitudes for the rest of the simulations developed in this work .solitary waves up 1:19.85 beach between the 225a experiment adapted from ( crosses ) and the numerical simulation ( red solid line).,scaledwidth=100.0% ] adapted from up 1:19.83 beach.,scaledwidth=70.0% ] a further validation is carried out by comparing the velocities distribution between the experiments presented in irish et al .( 2014 ) and the sph simulations .the setup is 1:10 steep beach whose toe is located at the end of a horizontal plane , 22 m long from the initial wave maker location ( = 0 ) .the initial basin depth is 0.73 m measured from the horizontal plane and the wave height is 0.43 m. velocities from experiments are measured using acoustic doppler velocimeters ( advs ; 50 hz ) at locations = 32.87 m ( a ) and = 35.06 m ( b ) respectively .figure 4 shows the horizontal velocity comparison between the irish et al .( 2014 ) experiments and our gpusph simulations .this good fit also indicates the suitability of the sph particle size to simulate the solitary waves presented in this work .notice that there is a lack of data from experiments during initial runup compared with sph .this is because the bubbly and turbulent bore resulted in noisy measurements from advs .-velocity comparison between the experiments and the sph simulations at locations = 32.87 m ( a ) and = 35.06 m ( b ) respectively from the initial wave maker location ( = 0).,scaledwidth=60.0% ] figure 3.5.5 in presents the comparison of the wave front path , for a solitary wave , between experiments and the solution of the nonlinear theory .he noted the presence of an intense wave front acceleration in the shoreline environment that has to be analyze .we study this effect by simulating solitary waves with different ratios and show the dimensionless spatial - temporal evolutions of their wave front paths in fig .[ curves ] ( * ? ? ?note in fig .5 that the more horizontal the slope of the wave path is , higher its velocity is and vice versa .we observe a slope change of the wave front trajectory around the shoreline . 
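the crest and front definitions above translate into a small helper; the 40% threshold is the calibrated value quoted in the text, while the sampled free-surface profile in the example is a placeholder for whatever surface extraction gpusph provides.

```python
import numpy as np

def crest_and_front(x, eta, threshold=0.40):
    """Locate the wave crest and the wave front from one surface profile.

    x, eta : sampled positions and free-surface elevations at one instant,
             with x increasing toward the wave maker (shoreline at x = 0).
    The crest is the location of the maximum elevation; the front is the most
    shoreward point where the elevation still exceeds `threshold` times the
    crest elevation (40% is the calibration quoted in the text).
    """
    i_crest = int(np.argmax(eta))
    above = np.where(eta >= threshold * eta[i_crest])[0]
    i_front = int(above[np.argmin(x[above])])   # most shoreward above-threshold point
    return x[i_crest], x[i_front]

# toy sech^2-like profile travelling toward the shoreline
x = np.linspace(0.0, 20.0, 401)
eta = 0.3 / np.cosh(0.8 * (x - 12.0))**2
print(crest_and_front(x, eta))
```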
hereis where flow accelerations occur and where we focus .the maxima of the flow momentum , flow kinetic energy and flow force are represented with different dashed lines in fig .[ curves ] , [ maxima ] , [ space ] and [ lines ] .the format of these lines is consistent among these figures.the absolute maxima of flow momentum , flow kinetic energy and flow force is represented consistently with squares , circles and triangles respectively in these figures . . and .origin of normalized time is set when wave front reaches the vertical of the beach toe , , shown in fig .[ scheme ] .squares , circles and triangles represent the absolute maxima of flow momentum , the absolute maxima of flow kinetic energy and the absolute maxima of flow force respectively.,scaledwidth=70.0% ] the temporal evolution of the maxima cited previously and their absolute maxima are presented in fig .[ maxima ] for different ratios .figure [ maxima]a shows the evolutions of the maxima flow momentum in lines and their absolute maxima in squares .figure [ maxima]b depicts the maxima flow kinetic energy evolution and their absolute maxima with circles .the maximum kinetic energy evolution has two peaks .the first and smaller of the peaks occur due to the wave breaking .figure [ maxima]c shows the maxima flow force evolution and their absolute maxima with triangles . shows the time evolution of the maxima flow momentum for different ratios and its absolute maxima with squares .subfigure depicts the time evolution of the maxima flow kinetic energy for different ratios and its absolute maxima with circles .subfigure shows the time evolution of the maxima flow force for different ratios and its absolute maxima with triangles . for legend use the one shown in fig .5 . note that all values are per volume of fluid contained within the segments where they are computed from.,scaledwidth=70.0% ] for the different solitary waves studied , fig .[ space ] shows the spatial evolution of the variables mentioned in fig .[ maxima ] .figure [ space]a shows the maxima flow momentum over space and their absolute maxima with squares and fig .[ space]b depicts the maxima flow kinetic energy over space and their absolute maxima with circles . as in fig .[ maxima]b , the maxima kinetic energy evolution has two peaks as well .moreover fig .[ space]c shows the maxima flow force over space and its absolute maximum with triangles .note that the absolute maxima of the flow force occurs just before the shoreline and where the maxima flow momentum reach their local maximum or second peaks .these locations are also where the maxima flow kinetic energy experience sharp increases. shows the spatial evolution of the maxima flow momentum for different ratios and its absolute maxima with squares .subfigure depicts the spatial evolution of the maxima kinetic energy for different ratios and its absolute maxima with circles .subfigure shows the spatial evolution of the maxima flow force for different ratios and its absolute maxima with triangles . for legend use the one shown in fig .note that all values are per volume unit of fluid contained within the segments where they are computed from.,scaledwidth=70.0% ] we relate in fig .[ lines]a the relationship between the value of the flow force absolute maximum with . 
absolute maxima of flow force is normalized in this figure using the maximum value which is provided by the ratio simulation .figure [ lines]b shows the relationship between the locations of the absolute maxima of the flow momentum ( squares ) , the absolute maxima of the flow kinetic energy ( circles ) and the absolute maxima of the flow force ( triangles ) with the ratio . furthermore fig .[ lines]c displays the relationships between the times of the flow momentum absolute maxima ( squares ) , flow kinetic energy absolute maxima ( circles ) and flow force absolute maxima ( triangles ) with the ratio . provides the relationship between the absolute maxima of the flow force normalized with the value from the run with the ratio .subfigure shows the relationship between the location where the absolute maxima of the flow momentum , flow kinetic energy and flow force occur and the ratio .subfigure depicts the relationship between the time when they appear at with the ratio . as shown in fig .[ curves ] , [ maxima ] and [ space ] , squares , circles and triangles represent the absolute maxima of the flow momentum , absolute maxima of the flow kinetic energy and the absolute maxima of the flow force respectively for each simulated .for colors use the legend shown in fig .in this contribution , we studied the three - dimensional breaking process of several solitary waves using gpusph model . variations of the wave front velocity ( accelerations ) during breaking are analyzed in fig .[ curves ] , where wave front velocities are represented by the inverse of the absolute values of the slopes of the curves .note that the slopes of the curves are negative as the waves approaches the shoreline until the rundown starts .the more vertical the slope of the curve , the slower the wave front and vice versa .so , if the slope of the curve is less steep , then there is acceleration of the wave front .these accelerations occur between the squares and circles ( fig . 5 , 6 , 7 and 8) . the higher the ratio , the higher the intensity of the acceleration of the wave front .the presence of wave front acceleration during breaking is related to wave shoaling and has implications in the spatio - temporal evolution of the flow momentum , flow kinetic energy and flow force .while shoaling , as the wave becomes higher and steeper , the maximum flow momentum also increases .the maximum elevation of the wave crest generates the absolute maximum of the flow momentum ( squares in fig .[ curves ] , fig .[ maxima]a and fig . [ space]a ) .after this absolute maximum , the maximum flow momentum decreases because the wave becomes lower and the water depth shallower . for flow kinetic energywe observe that the largest velocities of the wave tip occur during breaking .however , the averaged velocity of the wave tip segment is not the largest because the averaging also considers slower water located deeper .the maximum flow kinetic energy reaches its absolute maximum when the water becomes shallower ( circles in fig .[ maxima]b ) .it is important to note that circles in fig .[ maxima]b are also the locations where wave front and wave crest converge . 
when the absolute maximum flow momentum occurs ( squares in fig .[ maxima]a ) , the maximum kinetic energy at that time is very low compared with its absolute maximum ( circles in fig .[ maxima]b ) .conversely , when the flow is at its absolute maximum kinetic energy ( circles in fig .[ maxima]b ) , the maximum flow momentum at that time is very low compared with its absolute maximum ( squares in fig .[ maxima]a ) . additionally , the water depth is shallow where the absolute maxima of the flow kinetic energy occur .hence , although the kinetic energy is involved in the dangerousness of the tsunamis , it is not the only one to be considered .the maximum flow momentum and especially the maximum flow force should also be taken into account . by analyzing the evolutions of maximum flow momentum and the maximum kinetic energy ,we observe that the second peak of the maximum momentum ( fig .6a ) almost coincides with the first peak of the maximum kinetic energy ( fig .6b ) and concurs with the sharp increment of the maximum kinetic energy ( fig .7a and 7b ) . from hereon ,the aim is to employ a variable that has an absolute maximum at points where the maximum flow momentum and maximum flow kinetic energy have their local maxima that is nearly coincident both spatially and temporally .figures [ maxima]c and [ space]c depicts the spatio - temporal evolution of the above mentioned variable which is given by the flow force. the triangles in both figures are points of absolute maximum of flow force which synchronizes with local maxima of flow momentum and flow kinetic energy .the absolute maximum of the flow force appears just after the wave tip falls down on the water surface .this fall down causes a highly chaotic and turbulent layer of flow with significantly higher velocity at the surface and its vicinity .this process suddenly increases the kinetic energy ( first sharp increase in fig . [ maxima]b and in fig .[ space]b ) and the water level which leads to an increase in the flow momentum ( local maxima peaks in fig .[ maxima]a and fig . [ space]a ) .both together lead to the absolute maximum of the flow force ( triangles in fig .[ maxima]c and fig . [ space]c ) .this location and time is when the flow becomes the most turbulent and dangerous .absolute maximum of flow force occurs just before shoreline ( triangles around in fig .[ space]c . )regardless of the ratio of the wave . for the larger waves , = 0.45 , 0.40 and 0.35 runs , secondary peaks are present following the absolute maxima of the flow force , adding extra hazards .these results have to be considered in the shoreline facilities design .figure [ lines]a depicts the absolute maxima of the flow force as a function of .this figure allows to estimate the absolute maximum flow force for any solitary wave not studied in this work with the same setup .additionally , fig .[ lines]b and [ lines]c provide the location and time respectively of the absolute maximum of flow momentum ( squares ) , the absolute maximum of flow kinetic energy ( circles ) and the absolute maximum of flow force ( triangles ) . the larger the ratio is , the larger the distance between squares and circles gets ( see fig . 
[ lines]b and fig .[ lines]c ) and also the longer the breaking process takes and the farther the runup reaches .therefore by knowing the ratio of any solitary wave , it is possible to estimate not only the magnitude of the flow force , but also its occurrence spatially and temporally .in this study , we carried out numerical simulations with a non - depth integrated formulation of solitary waves to study their hazards during their proximity to the shoreline . because wave breaking is a three - dimensional process , neither the shallow water equations nor the boussinesq approximations can describe breaking analysis accurately .we approached this by averaging momentum , kinetic energy and force of the flow in segments along axis and by obtaining their maxima evolutions from three - dimensional sph simulations .figure 5 shows the wave front paths for different ratios . from fig . 6 and 7we obtain the locations and time of the absolute maxima of flow momentum , flow kinetic energy and flow force for various . when these points are plotted on fig .5 they appear on the curve of their respective wave path .this implies that the absolute maximum of flow momentum , the absolute maximum of flow kinetic energy and the absolute maximum of flow force always occur in the wave front , regardless of the ratio .so we conclude that the wave front will always be the most dangerous part of the tsunami at any time regardless of the wave ratio .also we conclude that flow force is the most important variable to utilize , over the flow momentum or flow kinetic energy , in order to identify the tsunamis dangerousness during breaking .this is mainly because the peaks of the absolute maxima of the flow force occur at points of nearly simultaneous local maxima of flow momentum and flow kinetic energy .this also occurs regardless of their ratio .the absolute maximum flow force for any occurs just before the shoreline .it denotes that the areas where the absolute maximum flow force occurs are most prone to hazards during the breaking and inundation processes .these conclusions render a better understanding of the breaking process and let us identify the physical variables that must be considered to evaluate the destructive hazards of solitary waves in space and time .also they provide important considerations to be taken into account to develop more reliable tsunami - risk evaluations in order to protect marine infrastructures and human lives .the work presented in here is based upon work partially supported by the national science foundation under grants nsf - cmmi-1208147 and nsf - cmmi-1206271 .15 natexlab#1#1url # 1`#1`urlprefix dunbar , p. k. , weaver , c. s. , 2008 . usstates and territories national tsunami hazard assessment : historical record and sources for waves .us department of commerce , national oceanic and atmospheric administration washington , dc .fema , 2011 .principles and practices of planning , siting , designing , constructing , and maintaining residential buildings in coastal areas .fema p-55 , 4th edition .federal emergency management agency , usa . irish , j. l. , weiss , r. , yang , y. , song , y. k. , zainali , a. , marivela - colmenarejo , r.,2014 .laboratory experiments of tsunami run - up and withdrawal in patchy coastal forest on a steep beach .natural hazards 74 ( 3 ) , 19331949 .jones , j. e. , 1924 . on the determination of molecular fields .ii . from the equation of state of a gas . 
in : proceedings of the royal society of london a : mathematical , physical and engineering sciences . vol .106 . the royal society , pp . 463477 .lynett , p. , liu , p. l .- f ., 2004a . a two - layer approach to wave modelling . in : proceedings of the royal society of london a : mathematical , physical and engineering sciences .the royal society , pp .26372669 .synolakis , c. e. , bernard , e. n. , 2006 .tsunami science before and beyond boxing day 2004 .philosophical transactions of the royal society of london a : mathematical , physical and engineering sciences 364 ( 1845 ) , 22312265 .
a plethora of studies in the past decade describe tsunami hazards and study their evolution from the source to the target coastline, but mainly focus on coastal inundation and maximum runup. nonetheless, anecdotal reports from eyewitnesses, photographs and videos suggest counterintuitive flow dynamics, for example rapid initial acceleration when the wave first strikes the initial shoreline. further, the details of the flow field at or within tens of meters of the shoreline are exquisitely important in determining damage to structures and evacuation times. based on a set of three-dimensional numerical simulations using solitary waves as a model, we show the spatial-temporal distribution of the flow momentum, kinetic energy and force during the breaking process. we infer that the flow reaches its highest destructive capacity not when flow momentum or kinetic energy reach their maxima, but when flow force reaches its maximum. this occurs in the initial shoreline environment, which needs to be considered in nearshore structure design. sph, solitary wave, tsunami, breaking wave.
calculating the derivatives of noisy functions is of prime importance in many applications .the problem consists of calculating stably the derivative of a smooth function given its noisy data , .this is an ill - posed problem : a small error in may lead to a large error in .many methods have been introduced in the literature .a review is given in [ 7 ] .divided differences method with has been first proposed in [ 4 ] , see also .necessary and sufficient conditions for the existence of a method for stable differentiation of noisy data are given in , see also . in our paper a method for stable differentiation based on solving the regularized volterra equation is proposed ( see also ) .one often applies the variational regularization ( vr ) method for stable differentiation . in this paper ( and in ) an approach , based on the fact that the quadratic form of the operator is nonnegative in real hilbert space , , is used .consider two different approaches to solving equation . the first approach consists of solving directly regularized equation .the second approach is based on the dynamical systems method ( dsm ) and an iterative scheme from . in ,the derivatives of a noisy function are obtained by solving the equation if is continuous on , and then the following result holds ( see ) : assume .then where solves with .the solution of is : this formula and an _ a priori _ choice , where , is a constant , yield a scheme for stable differentiation .when is known , the problem is reduced to calculating integral .there are many methods for calculating accurately and fast integral ( see e.g. ) . however , there is no known algorithm for choosing which are optimal in some sense .the advantage of our approach is that the cpu time for the method is very small compared with the vr and dsm , see section [ sectionfirst ] .moreover , one can calculate the solution analytically when the function is simple by using tables of integrals or maple .another approach to stable differentiation is to use the dsm ( see ) .the dsm yields a stable solution of the equation : where is a hilbert space and is a linear operator in which is not necessarily bounded but closed and densely defined .the dsm to solve is of the form : where and is a nonincreasing function such that as .the unique solution to is given by an iterative scheme for computing in is proposed in : with satisfying one chooses and as follows : where , . to increase the speed of computing we recommend choosing . at each iteration onechecks if this is a stopping criterion of discrepancy principle type ( see ) .if is the first time such that is satisfied , then one stops and takes as the solution to .the choice of satisfying is done by iterations as follows : 1 . as an initial guess for one takes , where .2 . if , then one takes as the next guess and checks if condition is satisfied .if then one takes .if , then is used as the next guess .4 . after is updated , one checks if is satisfied .if is not satisfied , one repeats steps 2 and 3 until one finds satisfying condition .algorithms for choosing and computing are detailed in algorithms 1 and 2 in .numerical experiments are carried out in matlab in double - precision arithmetic . in all experiments , by , }(t) ] .let us compute the derivatives of the function contaminated by the noise function .the derivative of is . 
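as a reference point for the first approach, the regularized volterra equation a u(t) + int_0^t u(s) ds = f_delta(t) has the closed-form solution u(t) = f_delta(t)/a - (1/a^2) int_0^t exp(-(t-s)/a) f_delta(s) ds, obtained by elementary means; the sketch below evaluates it with the trapezoidal rule. the choice a ~ sqrt(delta) in the example is an illustrative a priori choice in the spirit of the text, not necessarily the constants used in the experiments.

```python
import numpy as np

def stable_derivative(f_noisy, t_grid, a):
    """Regularized derivative of noisy data via the Volterra equation.

    Solves a*u(t) + int_0^t u(s) ds = f_noisy(t) through the closed form
    u(t) = f(t)/a - (1/a^2) * int_0^t exp(-(t-s)/a) f(s) ds,
    with the inner integral evaluated by the trapezoidal rule.
    """
    u = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        s = t_grid[: i + 1]
        kernel = np.exp(-(t - s) / a) * f_noisy[: i + 1]
        integral = 0.0
        if i > 0:
            integral = np.sum((kernel[1:] + kernel[:-1]) * np.diff(s)) / 2.0
        u[i] = f_noisy[i] / a - integral / a**2
    return u

# toy example: f(t) = sin(t) plus noise of size delta; exact derivative cos(t)
t = np.linspace(0.0, 3.0, 601)
delta = 1e-3
f = np.sin(t) + delta * np.sin(40 * t)
u = stable_derivative(f, t, a=np.sqrt(delta))     # a ~ sqrt(delta): assumed choice
print(np.max(np.abs(u[100:] - np.cos(t[100:]))))  # error away from the boundary t = 0
```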
to solve this problem we use three methods : the first method , based on computing integral , the vr method , and the dsm method , based on a discretized version of .numerical results for this problem are presented in figure [ fig1 ] . in our experiments , since the results otained by the dsm and the vr are nearly the same , we present only the results for the dsm in figure [ fig1 ] and [ fig12 ] in order to make these figures simple . in this experimentthe trapezoidal quadrature rule is applied to integral equation and is used for computing integral .one may use higher order intepolation methods to compute integral .however , it does not necessarily bring improvements in accuracy .this is so because using a high order intepolation method for inaccurate data may even lead to worse results .this is the case when the noise level is large .the approximate derivative formula for close to 0 does not use much information about .thus , we only use for computing for ] , we take and use formula for with ] except for the which are close to the boundary of the interval .indeed , it can be showed analytically that the solution to equation satisfies .however , the derivative of in figure [ fig1 ] satifies and . if the computed derivatives at the points close to the boundary are discarded , then in both cases the dsm and the vr are more accurate than the first method .figure [ fig12 ] presents the numerical experiment for contaminated by the same noise function . for this problem ,since the function to be differentiated satisfies both the dsm and the vr give more accurate results than the first method . from figure [ fig1 ] and [ fig12 ] one can see that for the computed derivatives are very close to the exact derivative at all points except for those close to the boundary in figure [ fig1 ] .let us give numerical results for computing the second derivatives of noisy functions .the problem is reduced to an integral equation of the first kind .a linear algebraic system is obtained by a discretization of the integral equation whose kernel is green s function here $ ] and as the right - hand side and the corresponding solution one chooses one of the following ( see ) : collocation method is used for discretization .this discretization can be improved by other methods but we do not go into detail .we use and , where is a vector containing random entries , normally distributed with mean 0 , variance 1 , and scaled so that .this linear algebraic system is mildly ill - posed : the condition number of is ..results for case 1 and 2 with , .[ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ , > , < " , ] table [ table1 ] shows that numerical results obtained by the dsm are more accurate than those by the vr .figure [ fig2 ] plots the numerical solutions for these cases .the computation time of the dsm in these cases is about the same as or less than that of the vr . from table[ table1 ] one can see that both the dsm and the vr perform better in case 2 than in case 1 .note that the regularized equation to solve for second derivatives in this case is of the same form as equation . as we discussed earlier , it is because in case 2 we have .we conclude that in this experiment the dsm competes favorably with the vr . , .] 
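a sketch of such a discretization is given below. the kernel used (the green's function of -d^2/dt^2 with homogeneous dirichlet conditions on [0,1]), the midpoint collocation nodes and the relative scaling of the noise vector are assumptions made for illustration; the paper's exact kernel, grid and noise normalization are not reproduced in this excerpt.

```python
import numpy as np

def greens_matrix(n):
    """Collocation matrix for int_0^1 G(s,t) u(t) dt with midpoint nodes.

    G is the Green's function of -d^2/dt^2 with u(0) = u(1) = 0:
    G(s,t) = s*(1-t) for s <= t and t*(1-s) otherwise (an assumed choice;
    the paper's exact kernel and boundary conditions are not shown here).
    """
    t = (np.arange(n) + 0.5) / n
    s_col, t_row = np.meshgrid(t, t, indexing="ij")
    G = np.where(s_col <= t_row, s_col * (1 - t_row), t_row * (1 - s_col))
    return G / n, t                        # 1/n is the midpoint quadrature weight

n, delta = 100, 0.01
A, t = greens_matrix(n)
x_exact = np.sin(np.pi * t)                # a smooth test solution for illustration
b = A @ x_exact
rng = np.random.default_rng(0)
e = rng.standard_normal(n)
b_noisy = b + delta * np.linalg.norm(b) * e / np.linalg.norm(e)   # assumed relative scaling
print(np.linalg.cond(A))                   # the resulting system is mildly ill-posed
```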
looking at figure [ fig2 ] case 1 , one can see that the computed values at and are zeros .again , the regularized scheme forces the computed derivative to satisfy the relations .if one wants to compute the derivative of a noisy function on an interval by the proposed method , one should collect data on a larger interval and use this method to calculate the derivative at the points which are not close to the boundary .in this paper two approaches to stable differentiation of noisy functions are discussed .the advantage of the first approach is that it contains neither matrix inversion nor solving of linear algebraic systems .its computation time is very small .the drawback of the method is that there is no known _ a posteriori _ choice of .the second approach is an implementation of the dsm .it competes favorably with the vr in both computation time and accuracy .the dsm competes favorably with the vr in solving linear ill - conditioned algebraic systems . _a posteriori _ choice of , an efficient way to compute integral for the first method , and an efficient discretization of the volterra equation with the implementation of the dsm are planned for future research .
based on a regularized volterra equation, two different approaches for numerical differentiation are considered. the first approach consists of solving a regularized volterra equation, while the second approach is based on solving a discretized version of the regularized volterra equation. numerical experiments show that these methods are efficient and compete favorably with the variational regularization method for stably calculating the derivatives of noisy functions. * keywords * : ill-posed problems, numerical differentiation. * ams subject classification * : primary 65d05. secondary 65d25.
and x - rays respectively ) is clearly visible .improved energy calibration managed to unite both responses and improve the energy resolution by up to a factor of 2 ( from 32% to 16%fwhm at 8.2kev for this apd ) ., width=332 ] avalanche photodiodes ( apds ) are silicon - based solid state detectors that convert photons into a charge current .they provide a compact , robust , magnetic field insensitive solution for light and x - ray detection with gains on the order of 100 and fast response times . due to this, apds are extensively used in a large variety of physics , medical and aerospace applications .we have studied x - rays with energies between 1 and 10kev and observed two distinct apd responses to monoenergetic x - rays absorbed in different depths inside the apd . by constructing apd specific standard traces , and using a pulse - by - pulse fitting technique , we improved the apd energy resolution by a factor of 2 , and the time resolution by 30% .in addition , we were able to identify background signals stemming from electrons that deposit a few kev energy in the apd .the data presented in this work were gathered in the muonic helium lamb shift experiment , using a set of twenty large area avalanche photo diodes ( laapds ) from radiation monitoring devices ( model s1315 ; active surface area each ) .the muonic helium ions represent an extended x - ray source that emits predominantly monoenergetic x - rays of 1.52kev and 8.22kev as well as electrons with up to 50mev of kinetic energy ( see appendix [ app : exp ] ) .previous tests of these apds found 40% detection efficiency for 8.2kev x - rays , and an average energy resolution of 16% ( fwhm ) after calibration .our x - ray detection setup consists of two linear arrays of 10 laapds each , in which each laapd is mounted on a separate titanium piece for efficient cooling and easy replacement .the detector arrays are mounted inside a vacuum around 10 , and inside a 5tesla magnetic field , above and below the x - ray source .custom - built low - noise , fast response preamplifiers are fitted to the laapds .both laapd / pre - amplifier assemblies are cooled using an external ethanol circulation system and are actively temperature stabilized at around .the achieved short term temperature stability was better than .highly stable temperatures are crucial for the operation of laapds since their gain depends strongly on their operating temperature .bias voltages were chosen to provide the best energy resolution per apd and ranged from 1.61kv up to 1.69kv , approximately 50 volts below the breakdown voltage .the pre - amplifiers with two bipolar input transistors in cascode configuration ( bfr 182 npn , bft 92 pnp ) have been used for the generation of a fast response from the large capacitance ( 120pf ) of the laapd .an overall gain of 150mv/ at 50 has been measured with a test pulse .outgoing apd signals were further amplified by gain 4 main - amplifiers and fed to the caen v1720 waveform digitizers ( 250 ms / s , 12 bit ) for recording .our experiment requires pileup detection in the x - ray detectors to reduce background effects .standard shaping amplifiers that are commonly used feature integration times too long to separate pulses on a 100ns scale .this deteriorated the performance in our previous measurements where we used rutherford appleton laboratory ( ral ) 108a pre - amplifiers with - long integration times ( see fig .16 ) . for our new project we used fast pre - amplifiers with 30ns rise time . 
when calculating a simple integral over the recorded pulses ,a poor energy resolution became visible as seen in fig .[ fig : ecal ] .the double peak structure that was clearly resolved in 6 out of 20 apds is a result of two different apd responses to the monoenergetic 8.2kev x - rays as can be seen in the upper part of fig.[fig : trace ] .similar effects were previously reported for beveled edge apds and 14.4kev x - rays .we first observed the same behaviour in a separate test setup without magnetic field .hence the features described here can not be attributed to magnetic trapping effects in the drift region .we can only speculate why this effect was not seen for x - rays taken in another experiment at cryogenic temperatures .pre - selection of apds with good energy resolution at 5.9kev can lead to the vanishing of the double peak structure .this was the case in our previous measurement .also the large average angle of incidence in our setup increseases this effect significantly . to compensate, we developed a simple standard response fitting technique that allowed us to distinguish between different responses on a hit - by - hit basis , improving the energy resolution by a factor of two ( see fig.[fig : ecal ] , bottom ) and correcting for a 25ns time shift between both signal types as discussed in section[sec : time ] . in the secs .[ sec : inter]-[sec : summary ] , the different features of the measured x - ray signals are discussed before the fitting routine and the improved energy calibration are presented .then timing difference between both responses and the influence of electron signals in the analysis are reviewed before a brief summary and outlook is given .component has a rise time of about 35ns while the component shows a rise time of about 70ns .separate averaging of both individual data sets allows to produce standard traces that accurately describe all x - ray traces between 2kev and 10kev .bottom : electron induced signals that correspond to an x - ray energy of 8.2kev after calibration .the dashed curves show the average of the and x - rays . even though similar in shape to fast 8kev x - ray signals , a fit was able to identify 86% of these electrons correctly ., width=332 ]-i - p - n doping profile .the weakly doped intrinsic part ( ) serves as conversion region for most incoming x - rays ( case 1 ) .photoelectrons created are transferred towards the avalanche region . in this high field areasecondary electrons are generated through impact ionization providing charge gain .low energy x - rays have a high probability of being stopped in the initial drift region ( )(case 2 ) . these experience additional signal delay and reduced gain .some photons convert in the multiplication region ( ) , also leading to reduced signal amplitudes ( case 3 ) .more about this effect can be found in .the bottom figure shows the electric field profile in the several regions of the apd together with the x - ray absorption profile for 1.5kev and 8.2kev x - rays ., width=332 ] the working principle of the apds used in our setup is explained in fig.[fig : scheme ] . in the conversion region ( ) , incoming photons produce primary photoelectrons .differences in the thickness of this layer ( ) give rise to changes in detector energy acceptance .a p - n junction is placed on the back side of the active volume creating high local field strengths . 
inside this avalanche region ( )electron impact ionization at the high field p - n junction leads to a multiplication of free charge carriers providing gain for the initially converted primary photoelectrons .the calculated absorption length for 8.2kev and 1.5kev x - rays is 70 m and 9 m , respectively . due to the extended size of our x - ray source ,the average incident angle of 52 degrees in our geometry gives rise to an effective 1.6 times longer average path inside the apds .this absorption length for 8kev x - rays is similar to the apd layer thicknesses and therefore leads to a number of different effects on the apd output depending on the region where the photon is absorbed .the different possibilities are also shown and explained in fig.[fig : scheme ] .the largest part of the recorded 8kev x - rays stops in the conversion region ( ) and follows the normal apd working principle that provides high charge collection efficiency and fast amplification .nevertheless , some x - rays are absorbed either in the drift layer ( ) or in the avalanche region ( ) .the x - rays absorbed in region ( ) undergo only partial amplification resulting in low amplitudes down to zero .this gain reduction is responsible for the flat energy tails seen in fig.[fig : ecal ] .x - rays absorbed in region ( ) generate electrons which are only slowly transfered to the following region ( ) due to the lower field strengths in ( ) . traps in this regionmay hold electrons for non - negligible times , lengthening the pulse and causing a reduction in amplitude ( see fig.[fig : trace ] ) .similar effects of reduced charge collection efficiency were also studied for x - ray energies below the silicon k - edge . from fig.[fig : trace ] we also observe that these x - rays only show a single amplitude and not a continous distribution up to the one of x - rays absorbed in region ( ) .this indicates that the trapping mechanism occurs at the boundary between regions ( ) and ( ) .0.3 ) , and one with significantly faster rise time ( slope 0.7 ) . the last contribution with an integral above 700 arises from mev electrons depositing kev energy in the apd active region ., width=332 ] in a range of 200ns after the leading edge of the pulse .all spectra show two prominent peaks at 1.5kev and 8.2kev .the fast rising component provided by in dark blue enfolds signals converted in or behind the conversion region ( ) .it consists mostly out of 8.2kev x - rays and the visible low energy tail is created by the loss of gain for x - rays converted in the avalanche region ( ) .the light blue distribution stands for all traces that were best described by the slow rising pulse shape and consists mostly out of 1.5kev x - rays and some 8.2kev x - rays mixed in .signals best matching the electron trace are shown in the orange division .these signals are formed by a continuous electron background and a contribution of wrongly identified x - rays ., width=332 ] in order to investigate this effect a set of roughly x - ray traces were recorded per apd .fitted baseline fluctuations were below 10mv for all analyzed signals , compared to average signal amplitudes of 500mv for 8.2kev x - rays .our analysis routine starts with an edge finder ( square weighting function with a width of 200ns ) to find the beginning of the pulse in the recorded trace .then the slope of the leading edge is fitted with a linear function . 
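Before turning to the pulse-shape analysis, the layer-by-layer absorption picture sketched above can be made quantitative with a simple Beer–Lambert estimate. The snippet below uses the absorption lengths (70 μm and 9 μm) and the 52-degree average incidence quoted above; the layer thicknesses, however, are illustrative placeholders rather than the actual doping profile of the LAAPDs, so the printed fractions only indicate the qualitative trend (most 8 keV photons convert in the conversion region, while 1.5 keV photons stop close to the entrance).

```python
import numpy as np

# Illustrative layer thicknesses in micrometres.  These are placeholders, not
# the actual doping profile of the RMD S1315 LAAPDs described above.
layers = {
    "drift region (a)":      5.0,
    "conversion region (b)": 80.0,
    "avalanche region (c)":  15.0,
}

# Absorption lengths in silicon quoted in the text (normal incidence).
absorption_length_um = {"8.2 keV": 70.0, "1.5 keV": 9.0}

# The average 52-degree incidence lengthens the path inside the APD by ~1.6x.
path_factor = 1.0 / np.cos(np.radians(52.0))

for energy, lam in absorption_length_um.items():
    remaining = 1.0                      # fraction of photons not yet absorbed
    print(f"{energy} x-rays:")
    for name, thickness in layers.items():
        # Beer-Lambert: fraction absorbed while crossing this layer at the
        # average incidence angle.
        absorbed = remaining * (1.0 - np.exp(-thickness * path_factor / lam))
        remaining -= absorbed
        print(f"  {name:24s} {absorbed:6.1%}")
    print(f"  {'transmitted':24s} {remaining:6.1%}")
```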
using a criterionwe improve the accuracy of the slope determination by varying start time of the pulse within 20ns while keeping the fitting window fixed .finally , we normalize the slope to the pulse integral provided by the edge finder to obtain the ( amplitude - independent ) rise time of the pulse . and x - rays for a single apd .these time spectra have been optained by plotting the time of the 8.2kev x - ray signal detected in a laapd relative to the 1.5kev signal detected in another laapd for both classes .both laapds including preamp - delay line etc . have been synchronized using electrons .the origin is chosen as the center of gravity of the response .a 25ns timing difference could be measured between both responses .the relatively poor time resolution is given by the coincident 1.5kev x - rays that are detected just above the noise level in our setup leading to a washed out signal . , width=332 ] when the rise time is plotted versus the integral of the pulse , four different contributions to the spectra can be identified as seen in fig.[fig : slopevsint ] .the two most prominent peaks are created by converted 8.2kev photons with slow and fast detector responses , labeled and respectively . for these peakswe see a clear difference in rise time and integral while most of the low energy 1.5kev x - rays show a slow rise time .the rise time distribution for small signals is broadened due to low amplitudes and noise .the last visible component is generated by the already mentioned high energy ( up to 50mev ) electrons ( created by muon - decay , further explained in the appendix[app : exp ] ) .these electrons deposit energies up to 50kev in the apds and their signals display a third kind of standard pulse shape , namely a mixture of fast and slow x - ray pulse shapes .this is shown in the lower panel of fig.[fig : trace ] . in order to further analyze the two classes of 8.2kev x - rays ,two sets of apd traces for and were created by selecting the respective peaks in fig.[fig : slopevsint ] with adequate cuts .a collection of selected traces for the and cases is shown in fig.[fig : trace ] .for each of the two x - ray classes , traces were numerically averaged after shifting each trace to correct for the variation of the pulse starting time .this averaging created the standard traces of the subsets ( & ) .these traces had to be produced once per each apd for a measurement period of several months and stayed constant throughout multiple heating / cooling cycles of the apd assembly .for the final analysis , each apd pulse is fitted with all available standard traces .starting at the time provided by the edge finder , the standard trace is fitted to the pulse .the timing is then varied and the is recorded for each fit . to save computational effortthat would arise for a 2-parameter fit ( amplitude and time ) , the amplitude of the standard trace is always fixed by matching its integral to the integral of the signal ( after baseline subtraction ) in a 200ns wide time window . 
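A minimal sketch of the leading-edge analysis just described: a square-weighting edge finder locates the pulse start, the rising edge is fitted with a straight line while the start time is scanned within ±20 ns to minimise the chi-square, and the resulting slope is normalised to the pulse integral so that it becomes amplitude independent. The 4 ns sampling (250 MS/s) and the 200 ns windows are taken from the text; the fit length and all function names are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

DT_NS = 4.0                       # 250 MS/s digitiser -> 4 ns per sample
EDGE_WIN = int(200 / DT_NS)       # 200 ns square weighting function

def find_edge(trace):
    """Return the sample index where the pulse starts (simple edge finder).

    The trace is correlated with a step-like kernel: -1 over the leading half
    (baseline) and +1 over the trailing half (pulse)."""
    kernel = np.concatenate([-np.ones(EDGE_WIN), np.ones(EDGE_WIN)])
    response = np.convolve(trace, kernel[::-1], mode="valid")
    return int(np.argmax(response)) + EDGE_WIN

def rise_time_estimator(trace, fit_len=10, shift_ns=20.0):
    """Fit the leading edge with a line, scanning the start time within +-20 ns,
    and return the slope normalised to the pulse integral (amplitude independent)."""
    t0 = find_edge(trace)
    best = None
    for shift in range(-int(shift_ns / DT_NS), int(shift_ns / DT_NS) + 1):
        start = t0 + shift
        y = trace[start:start + fit_len]
        x = np.arange(fit_len) * DT_NS
        slope, offset = np.polyfit(x, y, 1)          # linear fit of the edge
        chi2 = np.sum((y - (slope * x + offset)) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, slope, start)
    chi2, slope, start = best
    integral = np.sum(trace[start:start + EDGE_WIN])  # 200 ns integration window
    return slope / integral
```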
finally , the minimal between the various standard traces is used to separate the pulses into different classes : and ( and electrons , see below ) .the result from the best - fitting class is used to get amplitude , integral and timing values of the recorded signal .the allocation of the recorded apd signals in the and classes according to the fit routine can be seen in the top parts of fig.[fig : energy ] .calibration of the two x - ray spectra created by the and fits is done by matching the peaks in both separate integral spectra to the respective energy of 8.2kev . as expected , the component of x - rays is the largest part of the recorded signals in our setup as seen in fig.[fig : energy ]. the observed 1:1.7 ratio of to x - rays agrees roughly with the expected absorption ratio of 1:1.5 estimated from the thicknesses of layers ( ) and ( ) .in addition to the variation in the observed 8.2kev x - ray energy we were also able to measure a difference in timing between the and the components . in order to achieve a common timing reference point for this study , coincidence events between the 8kev x - rays recorded in the apd under investigation and the 1.5kev x - rays registered in neighboring apds were studied .these two x - ray types are emitted within a picosecond time window from the muonic atoms used ( as is further explained in the appendix[app : exp ] ) .special attention was given to time calibration in order to avoid possible timing shifts created by the distinct standard traces and used for different signals .therefore calibration of the different apds and traces against each other was done using the supplementary measured electron signals .the mev electrons create hits in multiple detectors on their spiraling motion in the surrounding magnetic field enabling us to get a common timing for all apds . when comparing the timing of the measured 8.2kev x - rays we observed a 25ns delay between signals and normal signals .a time spectrum showing this effect for a single apd is shown in fig.[fig : time ] . correcting for this effectimproves the apd time resolution of our setup by more than 30 when the two responses seen in fig.[fig : time ] are unified .better results might be achieved when a more clearly defined common timing is provided since the timing resolution is limited by the low amplitude 1.5kev signals just above the noise level .apart from the improved energy resolution that was achieved with the methods described in section[sec : meth ] , we were also able to differentiate high energy electron signals in the apds from similar x - ray signals .these mev electrons deposit up to 50kev in the apd active volume and were always present in the experiment .due to their passage through all the apd layers , electrons show signals with yet another shape that can be distinguished from the previously discussed and x - ray responses . a third standard trace created by averaging a set of clearly identified electron signals that correspond to a mean energy of 12 kev .this was done supplementary to the already known x - ray traces and .a comparison of an electron induced signal shape at 8.2kev and the respective real x - ray traces is also shown in fig.[fig : trace ] ( bottom ) . as electrons with mev energies deposit energy in all three apd layers .( , , ) .the corresponding standard trace can be approximately parameterized as a mixture of the and standard responses . 
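The classification step itself can be sketched as follows: each recorded pulse is compared with the stored standard traces (the fast and slow x-ray shapes and the electron shape introduced above), the amplitude of each template being fixed by matching its integral to the baseline-subtracted pulse integral while only the time offset is varied, and the template with the smallest chi-square wins. Names and window sizes below are again illustrative, not those of the actual analysis code.

```python
import numpy as np

def fit_template(pulse, template, t0, baseline, scan=(-5, 6)):
    """Best chi-square of a single standard trace against one pulse.

    The template amplitude is not a free parameter: it is fixed by matching the
    template integral to the baseline-subtracted pulse integral (here the
    template length plays the role of the 200 ns window).  Only the timing is
    scanned."""
    window = len(template)
    signal = pulse[t0:t0 + window] - baseline
    scale = np.sum(signal) / np.sum(template)         # integral matching
    best_chi2, best_shift = np.inf, 0
    for shift in range(*scan):                        # vary the timing only
        shifted = np.roll(template, shift) * scale
        chi2 = np.sum((signal - shifted) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_shift = chi2, shift
    return best_chi2, best_shift, scale

def classify(pulse, templates, t0, baseline):
    """Return (class_name, time_shift, amplitude_scale) of the best-fitting
    standard trace, e.g. 'x_fast', 'x_slow' or 'electron'."""
    results = {name: fit_template(pulse, tmpl, t0, baseline)
               for name, tmpl in templates.items()}
    name = min(results, key=lambda k: results[k][0])
    chi2, shift, scale = results[name]
    return name, shift, scale
```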
using the same routine as for the previous pulse analysis, the fit was able to differentiate between x - ray and electron signals with very high fidelity , leading to a correct electron identification in 86% of the cases .we have observed effects from the apd layer structure that lead to two distinct responses to x - rays in the 6 - 10kev range .the individual signal types can be identified with high fidelity by examining the rising edge of the measured pulses .correcting for this effect improves the energy resolution by up to a factor of 2 depending on the apd .additionally we were able to correct for timing differences between both responses .while the different rise time classes were observed in all 20 apds under investigation , only 6 of them showed a resolved double - peak structure in the energy spectrum obtained by a simple integral . using the rise time analysis , it was also possible to filter mev energy decay electrons .an electron - specific standard trace was clearly distinguishable from the two different kinds of x - ray signals recorded for 8.2kev x - rays .a fit of the signal shape was used to exclude them from the x - ray data with an overall effectiveness of 86% , while only 14% of the 8kev x - rays were wrongly identified as electrons .this lead to significant background reduction in the lamb shift experiment .we thank ulf rser , matteo nssli , hanspeter v. gunten , werner lustermann , adamo gendotti , florian barchetti , ben van den brandt , paul schurter , michael horisberger and the mpq , psi , eth workshops and support groups for their help ., f.m . and r.p .acknowledge support from the european research council ( erc ) through stg .f.d.a . , l.m.p.f , a.l.g ., c.m.b.m . and j.m.f.s .acknowledge support from feder and fct in the frame of project ptdc / fis - nuc/0843/2012 .c.m.b.m . acknowledges the support of fct , under contract no .sfrh / bpd/76842/2011 .f.d.a . acknowledges the support of fct , under contract no .sfrh / bpd/74775/2010 .a.a , k.k and k.s acknowledge support from snf 200021l-138175 .b.w and m.a.a .acknowledge support of dfg_gr_3172/9 - 1 .this research was supported in part by fundao para a cincia e a tecnologia ( fct ) , portugal , through the projects no .pestoe / fis / ui0303/2011 and ptdc / fis/117606/2010 , financed by the european community fund feder through the compete .p. a. and j. m. 
acknowledges the support of the fct , under contracts no .sfrh / bpd/92329/2013 and sfrh / bd/52332/2013 .the data presented in this work were acquired using muonic helium ions as x - ray source during the recent lamb shift experiment .the experiment is performed at the high intensity proton accelerator facility at paul scherrer institute in switzerland .its purpose was to measure the different 2s transitions in the and exotic ions via laser spectroscopy .the required information about its environment and working principle will be briefly sketched in this section .the accelerator physics environment leads to stringent demands on stability and robustness of the apds and the analysis routine employed that exceed common specifications .for example , the apd arrays used are placed inside a 5 t solenoidal magnet where they are mounted next to a low pressure helium gas target .muonic ions are created in this 20 cm long gas volume operated at by low energy muons that are provided by the accelerator beam line .the dataset described in this work was obtained during the measurement campaign in 2013 that offers multiple transitions in the low kev x - ray region .these consist of the , and transitions at 1.52 kev , 2.05 kev and 2.30kev respectively as well as the , and transitions at 8.22kev , 9.74kev and 10.28kev , emitted by the muonic helium ions during the so - called atomic cascade within a time frame of few ns total .the muons decay after an average lifetime of 2.2 into muon neutrino , electron antineutrino and " high energy " electrons in the mev range .these electrons deposit energy when transversing the apd , creating electron hole pairs in all regions of the apd quasi simultaneously .the induced signals correspond to virtual x - ray energies of up to 50kev .this would raise background effects for the experiment that uses the recorded 8.2kev x - rays as signal for laser spectroscopy .therefore a supplemental set of 4 plastic scintillators surrounds the gas target and apd arrays for additional means of electron detection and exclusion of background .since the overall detection efficiency for electrons in the mentioned plastic scintillators is only roughly 30% , additional means for electron identification were desirable .this was achieved by waveform analysis described in sections [ sec : meth ] and [ sec : elec ] .gentile et al ., magnetic field effects on large area avalanche photodiodes at cryogenic temperatures , nucl . instrum . and meth . a 652 520 ( 2011 ) .see http://physics.nist.gov/physrefdata/ffast/html/form.html . for x - ray penetration depths in silicon
Avalanche photodiodes are commonly used as detectors for low-energy x-rays. In this work we report on a fitting technique used to account for different detector responses resulting from photoabsorption in the various APD layers. The use of this technique results in an improvement of the energy resolution at 8.2 keV by up to a factor of 2, and corrects the timing information by up to 25 ns to account for the space-dependent electron drift time. In addition, this waveform analysis is used for particle identification, e.g. to distinguish between x-rays and MeV electrons in our experiment.
during the past two decades , the lattice boltzmann ( lb ) method has been developed rapidly and has been successfully applied to various fields , ranging from magnetohydrodynamics , to flows of suspensions , flows through porous media , compressible fluid dynamics , wave propagation yangw - jcp-2000-wave , yangw - pre - wave , etc .apart from the fields listed above , this versatile method is particularly promising in the area of multiphase systems .this is partly owing to its intrinsic kinetic nature , which makes the inter - particle interactions ( ipi ) be incorporated easily and flexibly , and , in fact , the ipi is the underlying microscopic physical reason for phase separation and interfacial tension in multiphase systems .so far , several lb multiphase models have been proposed . among them , the four well - known models are the chromodynamic model by gunstensen et al . , the pseudo - potential model by shan and chen ( sc ) , the free - energy model by swift et al . , and the hcz model by he , chen , and zhang .the chromodynamic model is developed from the two - component lattice gas automata ( lga ) model originally proposed by rothman and keller . in this model ,the red and blue colored particles are employed to represent two different fluids .phase separation is achieved through controlling the ipi based on the color gradient .similar to the treatment in molecular dynamics ( md ) , in sc model , non - local interactions between particles at neighboring lattice sites are incorporated .the interactions determine the form of the equation of state ( eos ) .phase separation or mixing is governed by the mechanical instability when the sign of the ipi is properly chosen . in the free - energy model , besides the mass and momentum conservation constraints , additional ones are imposed on the equilibrium distribution function , which makes the pressure tensor consistent with that of the free - energy functional of inhomogeneous fluids . in the hcz model ,two distribution functions are used .the first one is used to compute the pressure and the velocity fields .the other one is used to track interfaces between different phases .molecular interactions , such as the molecular exclusion volume effect and the intermolecular attraction , are incorporated to simulate phase separation and interfacial dynamics .the aforementioned models have been successfully applied to a wide variety of multiphase and/or multicomponent flow problems , including drop breakup , drop collisions pof - pre-2005-abraham , wetting , contact line motion contact - line - motion-2004 , contact angle shanchen , chemically reactive fluids , phase separation and phase ordering phenomena , hydrodynamic instability rt - pre-1998,zhangry , etc . however , despite this , the current lb versions for multiphase flows are still subjected to , at least , one of the following constrains ( i ) the isothermal constraint ( i.e. , the deficiency of temperature dynamic ) , ( ii ) the limited density ratio and temperature range , ( iii ) the spurious velocities .this paper addresses mainly the last restriction and the total energy conservation in practical simulations .spurious velocities extensively exist in simulations of the liquid - vapor system and reach their maxima at the interfacial regions , indicating deviation from the real physics of a fluid system . 
reducing and eliminating the unphysical velocities are of great importance to the simulations of multiphase flows .firstly , large spurious velocities will lead to numerical instability .secondly , the local velocities are small during phase separation and coarsening. if the spurious currents are too large , then we may not be able to separate the spurious currents from the real local flows , which is especially true in the case of phase separation with high viscosity .thirdly , for a thermal multiphase system , accurate flow velocities are required in order to obtain an accurate temperature field . in dealing with this issue ,extensive efforts have been made during the past years .wagner pointed out that the origin of the spurious currents is due to the incompatibility between the discretizations of driving forces for the order parameter and momentum equation .therefore , he suggested to cure the spurious velocities by removing the nonideal terms from the pressure tensor and introducing them as a body force .sofonea and cristea et al . presented a finite difference lb ( fdlb ) approach and proposed two ways to eliminate the unwanted currents . in the first way , a high - accuracy numerical scheme , the flux limiter method is employed to calculate the convection term of the lb equation . in the second way , a correction force term is introduced to the lb equation that cancels the spurious velocities and allows to recover the mass equation correctly .shan and succi et al . showed that the origin of the spurious currents are due to the insufficient isotropy in the calculation of density gradient .therefore , using the information of the density field on an extended neighboring of a given site to construct high order isotropic difference operators , is the key for the correct discretization of spatial derivatives and taming the spurious currents in the interface .yuan et al . demonstrated that smaller parasitical velocities and higher density ratio can be achieved using more realistic eos in a single - component multiphase lb model .lee and fischer reported that the use of the potential form of the surface tension and the isotropic fd scheme can eliminate parasitic currents to round - off .seta and okui composed a more accurate fourth order scheme to calculate the derivatives in the pressure tensor .this convenient approach reduces the amplitude of spurious velocities to about one half of that from the second order scheme .pooley and furtado sten-2008 analyzed the causes of spurious velocities in a free - energy lb model and provided two improvements .first , by making a suitable choice of the equilibrium distribution and using the nine - point stencils ( nps ) scheme to calculate derivatives , the magnitude of spurious velocities can be decreased by an order .moreover , a momentum conserving force is presented to further reduce the spurious velocities .yeomans et al . identified two sources of the spurious velocities , the long range effects and the bounce - back boundary conditions , when a single relaxation time ( srt ) lb algorithm is used to solve the hydrodynamic equations of a binary fluids .aiming to reduce the unwanted velocities , they proposed a revised lb method based on a multiple - relaxation - time ( mrt ) algorithm . 
in this work , we present a thermal lb model for simulating thermal liquid - vapor system with neglectable spurious velocities .this model is a further development of the one originally proposed by watari and tsutahara ( wt ) and then developed by gonnella , lamura and sofonea ( gls ) .the original wt model works only for ideal gas .gls introduced an appropriate ipi force term to describe van der waals ( vdw ) fluids . herewe introduce a windowed fft ( wfft ) scheme to calculate the convection term and the force term .the improved model is convenient to compromise the high accuracy and stability . with the new model ,non - conservation problem of total energy due to spatiotemporal discretizations is much better controlled and spurious currents in equilibrium interfaces are significantly damped .the rest of the paper is structured as follows . in the next section the thermal lb models for ideal gas and for vdw fluidsare briefly reviewed . in section iiiwe illustrate the necessity of the further development and detail the usage of the wfft scheme and its inverse .comparisons and analysis of numerical results from different schemes are presented in section iv , where we will show how the spurious velocities around linear and curved interfaces can be reduced by the new model .finally , in section v , we summarize the results and suggest directions for future research .the thermal multiphase model is developed from the thermal lb model , originally , proposed by wt , which is based on a multispeed approach . in this approach ,additional speeds are required and higher order velocity terms are included in the equilibrium distribution function to obtain the macroscopic temperature field .wt model uses the following discrete - velocity - model ( dvm ) which involves a set of 33 nondimensionalized velocities \text { , } \label{dvm_eq_1}\]]where subscript , ..., indicates the -th group of the particle velocities whose speed is and , ..., indicates the direction of particle s speed . in our simulations we set ,, , and .the distribution function , discrete in space and time , evolves according to a srt boltzmann equation \text { , } \label{bgk_eq}\]]where , , and denote the local equilibrium distribution function , the spatial coordinate , and the relaxation time , respectively . is expressed as a series expansion in the local velocity +\cdot \cdot \cdot \text { , } \label{feq}\end{aligned}\]]where the weight factors are }\text { { , } } \label{fk}\end{aligned}\]] hydrodynamic quantities , such as density , velocity , and temperature are determined from the following moments the combination of the above dvm and the general fd scheme with first - order forward in time and second - order upwinding in space composes the original fdlb model by wt . in the fdlb model ,particle velocities are independent from the lattice configuration . 
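As a concrete illustration of the moment relations above, the snippet below builds a 33-velocity set (one rest particle plus four rings of eight equally spaced directions) and recovers density, velocity and temperature from the distribution values at a single node. The ring speeds are placeholders, since the values actually used in the simulations are set separately, and the normalisation ρT = Σ f|c − u|²/2 assumes two spatial dimensions with unit particle mass.

```python
import numpy as np

# 33-velocity set: one rest particle plus 4 rings of 8 equally spaced directions.
# The ring speeds below are placeholders; the actual v_k are chosen in the simulations.
speeds = [1.0, 2.0, 3.0, 4.0]
angles = np.arange(8) * np.pi / 4.0
velocities = [np.zeros(2)]
for c in speeds:
    velocities += [c * np.array([np.cos(a), np.sin(a)]) for a in angles]
velocities = np.array(velocities)        # shape (33, 2)

def moments(f):
    """Density, velocity and temperature from the 33 distribution values at one node.

    Assumes 2D, unit particle mass and k_B = 1, so rho*T equals the peculiar
    kinetic energy sum f |c - u|^2 / 2."""
    rho = f.sum()
    u = (f[:, None] * velocities).sum(axis=0) / rho
    peculiar = velocities - u
    energy = 0.5 * (f * (peculiar ** 2).sum(axis=1)).sum()
    return rho, u, energy / rho
```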
as a result, higher - order numerical schemes can be used to reduce the numerical viscosity and to enhance the stability of the model .this is of great importance to lb simulations , especially in phase separation studies , where long lasting simulations are needed to establish the growth properties .wt model can be applied to compressible flows with small mach number and the revised version extends it to compressible flows with high mach number due to better numerical stability .nevertheless , neither the original one nor the improved one has the ability to describe multiphase flows , since both models lead to the ideal eos only , which do not support thermodynamical two - phase state .fortunately , by incorporating a forcing term , the improved model can be applied to thermal liquid - vapor systems .compared to isothermal models , the variable temperature that the gls model can be implemented is of great importance , since thermal effects are ubiquitous and sometimes dominant in an important class of flows .examples are referred to boiling , distillation , as well as the dynamics of phase separation , where the freedom in temperature limits the rate of phase separation and induces different rheological and morphological behaviors .dynamic effects of temperature can not be considered in isothermal models , therefore , most studies have been restricted to either isothermal systems or the systems where effects of temperature dynamics are negligible .the forcing term introduced by gls is added into the right - hand side ( rhs ) of eq .( 2 ) + i_{ki}\text { , } \label{iki - bgk}\]]where takes the following form , {ki}^{eq}\text{. } \label{iki}\]] in eq .( 10 ) is introduced to control the equilibrium properties of the liquid - vapor systems and allows to recover the following equations for vdw fluids , =0\text{% , } \label{energy}\]]where the non - viscous stress , the dissipative tensor , and the total energy density , respectively . , , and are heat conductivity , shear , and bulk viscosities . and in eq .( [ nvs ] ) are the vdw eos and the contribution of density gradient to pressure tensor , which have the following expressions\delta _ { \alpha \beta } -m(\rho \nabla ^{2}\rho + \left\vert \nabla \rho\right\vert ^{2}/2)\delta _ { \alpha \beta } \text{.}\]]the expression allows a dependence of the surface tension on temperature , where is the surface tension coefficient and is a constant . in order to recover eqs .( [ mass])-([energy ] ) , five constraints are imposed on the forcing term , which make coefficients in eq .( [ iki ] ) as the following form \text { , } \label{bbb}\]]\}\text{,}\end{aligned}\]]\text{. } \label{ccqq}\]]it is worth noting that in this model the prandtl number can be changed by adjusting the parameter in the term .in this section , we present our contribution to the thermal multiphase lb model : spatial derivatives in the convection term and in the forcing term , are calculated via the wfft algorithm and its inverse . to illustrate the necessity , we present simulation results for a thermal phase separation process by various numerical schemes . here the time derivative is calculated using the first - order forward euler fd scheme .the spatial derivatives in are calculated using the second - order central difference scheme .spatial derivatives in the convection term are calculated using various schemes listed as follows : let , and be three successive nodes of the one dimensional lattice . 
using the second - order central difference scheme to discretize the convention term , eq .( [ iki - bgk ] ) can be rewritten in a conservative form and the time step and the courant - friedrichs - levy ( cfl ) number .compared with the second - order central difference scheme , the lw scheme contributes a dissipation term , which is in favor of the numerical stability .then , by using this scheme , eq .( [ iki - bgk ] ) can be formulated as as we know , the lw scheme is very dissipative and has a strong smoothing effect " . obviously , it is not favorable to recover the sharp interface in the multiphase system . to further improve the numerical accuracy , the modified partial differential equation ( mpde ) remainder after discretizing with eq .( [ ml- ] ) is derived is clear that the first and the second terms in the rhs of eq .( [ mpde ] ) correspond to the third - order dispersion error and the fourth - order dissipation error , respectively .therefore , we can add the dispersion term into the rhs of eq .( [ iki - bgk ] ) to compensate the dispersion error , we can add the dissipation term into rhs of eq .( [ dispersion ] ) . using the 2nd - cd scheme to discrete and gives bars above and indicate that they are discretized . if only is added into the rhs of eq .( [ ml- ] ) , for convenience of description , we refer to this scheme as mlw1 . if both and are added into the rhs of eq .( [ ml- ] ) , then a more accurate lb equation is obtained , and we refer to this scheme as mlw2 . the fl scheme has been widely employed by sofonea et al .sofonea - multiphase , sofonea - us to reduce the spurious velocities and to improve the numerical stability in liquid - vapor systems .figure 1 shows the characteristic line on the square lb lattice for direction when using this approach to compute the convective term along the characteristic line , eq .( [ iki - bgk ] ) becomes \label{fl } \\ & & -\frac{{1}}{\tau } ( f_{ki , j}^{n}-f_{ki , j}^{n,{eq}})\delta t+i_{ki , j}^{n}\delta t\text{,}\end{aligned}\]]with and in eq .( [ fl ] ) are two fluxes , which are defined as \psi ( \theta _ { ki , j}^{n})\text{,}\]] the flux limiter is expressed as a smooth function in particular , if , it corresponds to the first - order upwind scheme and to the lw scheme .a wide choice of flux limiters can work well with lb models . in this work, we will use the monitorized central difference ( mcd ) fl , which is most widely used by sofonea et al . recently , a new scheme , named the nps scheme , has been widely used to calculate the spatial derivatives by many scholars so as to ensure higher isotropy and to reduce spurious velocities . the general choice of stencils for calculating the derivatives and the laplacian are \notag \\ & = & \partial _ { x}+\frac{1}{6}\delta x^{2}\partial _ { x}^{3}+2b\delta x^{2}\partial _{ x}\partial _ { y}^{2}+\cdots \text{,}\end{aligned}\]]and \notag \\ & = & \nabla ^{2}+\frac{{\delta x^{2}}}{{12}}(\partial _ { x}^{4}+\partial _ { y}^{4})+f\delta x^{2}\partial _ { x}^{2}\partial _ { y}^{2}+\cdots \text{,}\end{aligned}\]]with and to keep consistency between the continuous and discrete operators . the bars above and represent that they are discrete operators .the central entry denotes the lattice node at which the derivative is calculated , and the remaining entries are the eight neighbor nodes around the central one . and are two free parameters that are chosen to minimize the spurious velocities .a large amount of numerical tests indicate that the best choice is and in the gls model . 
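To make the flux-limiter variant concrete, here is a one-dimensional sketch of an MCD-limited update of the convection term along a single characteristic direction (velocity v > 0, periodic boundaries). The smoothness ratio θ and the limiter ψ follow the standard monitorized central-difference form quoted above; the collision and forcing terms of the full LB equation are omitted, so this is an illustration of the scheme rather than a transcription of the actual implementation.

```python
import numpy as np

def mcd_limiter(theta):
    """Monitorized central-difference (MCD) flux limiter.
    psi = 0 recovers the first-order upwind scheme, psi = 1 the Lax-Wendroff scheme."""
    return np.maximum(0.0, np.minimum.reduce([2.0 * theta,
                                              0.5 * (1.0 + theta),
                                              2.0 * np.ones_like(theta)]))

def convect_step(f, v, dx, dt):
    """One explicit step of f_t + v f_x = 0 (v > 0) with MCD-limited fluxes and
    periodic boundaries; collision and forcing terms are left out here."""
    cfl = v * dt / dx
    df = np.roll(f, -1) - f               # forward differences  f_{j+1} - f_j
    db = f - np.roll(f, 1)                # backward differences f_j - f_{j-1}
    theta = np.zeros_like(f)
    nz = df != 0.0
    theta[nz] = db[nz] / df[nz]           # smoothness ratio
    psi = mcd_limiter(theta)
    # flux through the j+1/2 interface: upwind flux plus limited anti-diffusive part
    flux = v * f + 0.5 * v * (1.0 - cfl) * psi * df
    return f - (dt / dx) * (flux - np.roll(flux, 1))
```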
in our simulations , both the convection term and the forcing term are calculated by this way .next , we conduct simulations of a thermal phase separation process with numerical schemes listed above. initial conditions of our test are chosen as is a random density with an amplitude and can be regarded as incipient nuclei in the density field .periodical boundary conditions ( pbc ) are imposed on a square lattice with . unless otherwise stated , the remaining parameters are , , , , , , , throughout our simulations .figure 2 shows the variations of total energy for the phase separating process with various numerical schemes .the legend in each case is composed of two parts , ` a'+`b ' , where ` a ' is ` cd ' , ` lw ' , ` mlw1 ' , ` mlw2 ' , ` fl ' and ` nps ' and it shows the scheme to calculate the convection term ; ` b ' is ` cd ' and ` nps ' and it shows the scheme to calculate the forcing term . figure 2 demonstrates that the total energy density is not conservative in simulations even though it is in theoretical analysis. further survey of these results indicates that the derivation decreases by increasing the accuracy of scheme .therefore , we conclude that the non - conservation of total energy is mainly due to the spatial discretization errors . to overcome the problem of energy non - conservation , a new algorithm based on wfft is proposed .this approach is especially powerful for periodic system and also provides spatial spectral information on hydrodynamic quantities .moreover , with this approach , higher - order derivatives and fractional - order derivatives can be computed in a convenient way . for the sake of clarity , we start with the definition of fourier transform of its inverse is the module of wave vector , is the imaginary unit , and stands for the fourier transform of a spatial function . in eq .( [ ift ] ) , , and is the length of the system divided into equal segments .the above two equations are exactly correct when is infinitely large or is infinitely small . a general theorem of derivative based on fft states that is the fourier transform of .the theorem suggests a way to calculate spatial derivative , as shown in fig .firstly , transform in real space into in reciprocal space ; then , multiply with ; finally , take the inverse fourier transform ( ift ) of , the spatial derivative can be obtained .the approach mentioned above has excellent accuracy properties , typically well beyond that of standard discretization schemes . in principle , it gives the exact derivative with infinite order accuracy if the function is infinitely differentiable , which is another advantage of fft scheme compared to the fd scheme . 
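The plain FFT derivative described above takes only a few lines with numpy: transform, multiply by ik, transform back. This is the unfiltered version; the windowed variant introduced next simply replaces k by a filtered wave number.

```python
import numpy as np

def fft_derivative(f, L):
    """Spectral derivative of a periodic, smooth function sampled at N equispaced
    points on a domain of length L:  f' = IFFT( i*k * FFT(f) )."""
    N = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wave numbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Quick check on an infinitely differentiable periodic function.
L, N = 2.0 * np.pi, 128
x = np.arange(N) * L / N
err = np.max(np.abs(fft_derivative(np.sin(3 * x), L) - 3 * np.cos(3 * x)))
print(f"max error: {err:.2e}")   # round-off level for a smooth function
```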
in our manuscript , using this virtue , the fft scheme is designed to approximate the true spatial derivatives , as a result , to eliminate spurious velocities and to guarantee energy conservation .however , the trouble in proceeding in this manner is that , in many cases , it is difficult to ensure that infinite differentiability condition is satisfied .for example , the sod shock tube problem contains the shock wave , the rarefaction wave and the contact discontinuity .then the derivative of hydrodynamic quantity , or has a discontinuity as the same character as the square wave ( see fig.6 for more details ) .then the discontinuity will induce oscillations , known as the gibbs phenomenon .the gibbs phenomenon influences the accuracy of the fft not only in the neighborhood of the point of singularity , but also over the entire computational domain .more importantly , sometimes , it will cause numerical instability .for example , for the problems shown in figs . 9 and 17iv ) , the above approach is unstable due to the gibbs phenomenon .recently , there is a trend to use smoothing procedures which attenuate higher - order fourier coefficients to avoid or at least to reduce these oscillations ( i.e. , wfft method ) .a straightforward and convenient way to attenuate the higher - order fourier coefficients is to multiply each fourier coefficients by a smoothing factor ( filter ) , such as the lanzos filter , raised cosine filter , sharpened raised cosine filter and exponential cutoff filter , as listed in refs . . in the present study , based on taylor series expansion of wave number , we present a way to construct smoothing factors .firstly , we expand in taylor series }{\delta x/2 } \notag \\ & = & \frac{1}{\delta x/2}[\sin ( k\delta x/2)+\frac{1}{6}\sin^{3}(k\delta x/2)+% \frac{3}{40}\sin ^{5}(k\delta x/2)+\frac{5}{112}\sin ^{7}(k\delta x/2)+ ...] \notag \\ & = & \frac{1}{\delta x/2}\sum_{n=0}^{\infty } \frac{\gamma ( n/2)\delta _ { 0,\theta ( n)}\varepsilon ( -1+n)}{\sqrt{\pi } n\gamma ( \frac{n+1}{2})}\sin ^{n}(k\delta x/2)\text { , } \label{k - taylorseries}\end{aligned}\]]where is the gamma function , $ ] is the mod function , and is the unit step function . thus , in order to damp the gibbs oscillations , or in order to filter out more high frequency waves, may take the form of an appropriately truncated taylor series expansion of sin .for example , may take the following forms is consistent with the one used in ref .some simple derivations indicate that the above approach with , , , and has a second - order , fourth - order , sixth - order , and eighth - order accuracy in space , respectively ( see appendix for more details ) . therefore , smoothing factor for takes the following form smoothing factors for , , and can be calculated in a similar way and are represented in fig .4 . it is clear that the lower - order smoothing factors and , filter out more high frequency waves , and may result in excessively smeared approximations , which are unfaithful representations of the truth physics . on the other hand , the higher - order smoothing factors and , reserve more higher frequency waves , but may not damp the gibbs phenomenon ( see fig . 7 for more details ) , then cause numerical instability . the smoothing factorsshould survive the dilemma of stability versus accuracy .in other words , they should be minimal but make the evolution stable at the same time . 
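A sketch of the windowed scheme: the exact wave number is replaced by a truncated expansion in powers of sin(kΔx/2), which attenuates the highest Fourier modes. Keeping one, two, three or four terms reproduces the second-, fourth-, sixth- and eighth-order filters discussed above; the series coefficients 1, 1/6, 3/40, 5/112 are those of the expansion quoted in the text.

```python
import numpy as np

# Coefficients of the expansion of k in powers of s = sin(k*dx/2):
# k = (2/dx) * ( s + s^3/6 + 3 s^5/40 + 5 s^7/112 + ... )
COEFFS = [1.0, 1.0 / 6.0, 3.0 / 40.0, 5.0 / 112.0]

def filtered_wavenumber(k, dx, order):
    """Filtered wave number of the windowed FFT scheme.
    order = 1..4 gives 2nd-, 4th-, 6th- and 8th-order spatial accuracy."""
    s = np.sin(0.5 * k * dx)
    return (2.0 / dx) * sum(c * s ** (2 * n + 1) for n, c in enumerate(COEFFS[:order]))

def wfft_derivative(f, L, order=4):
    """Windowed-FFT derivative: the plain spectral derivative with the filtered
    wave number, which damps the highest Fourier modes."""
    N = len(f)
    dx = L / N
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    return np.real(np.fft.ifft(1j * filtered_wavenumber(k, dx, order) * np.fft.fft(f)))
```

In this form the lower-order filters (order 1 or 2) are the dissipative, shock-capturing choices, while order 3 or 4 keeps near-spectral accuracy for the smooth liquid-vapor profiles considered below.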
as a simple test , using the wfft algorithm , the derivative of a infinite differentiable function , is calculated with , , , , and plotted in fig .it is clearly seen that when is used , the errors reduce to round - off . as another test, the validity of the wfft scheme is verified by the modified sod shock tube with higher pressure ratio . for the problem considered ,the initial condition is described by " and " indicate macroscopic variables at the left and right sides of the discontinuity .the size of grid is , time step is , and relaxation time is .figure 6 shows the computed density , pressure , velocity , and temperature profiles at , where the circles are for simulation results and solid lines are for analytical solutions .the two sets of results have a satisfying agreement .figure 7 shows the temperature profiles obtained from the wfft schemes with , , , , and in ( a ) and local details of the part near the shock wave in ( b ) .one can see that higher - order filters , such as and , have higher accuracy in smooth regions , but can not refrain the gibbs phenomenon in unsmooth regions effectively .the lower - order filters , although are too dissipative , can damp the spurious oscillations to neglectable scale , which are capable of shock capturing . therefore , it should be noted that , for flows without shock waves and/or discontinuities , the wfft scheme with higher - order filter is stable , valid and appropriate . while for the compressible flows with shock waves and/or discontinuities , the wfft scheme with lower - order filter is a more appropriate choice . in the present study ,we focus on the liquid - vapor systems without shock waves and strong discontinuities .therefore , the wfft schemes with higher - order filters are used . for comparisons , we verify the proposed fft algorithm with the same problem described in fig . 2 and display variations of total energy obtained from wfft schemes with , , , and in fig .it is found that , for each case , oscillates at the beginning , then approaches a nearly constant value .behaviors of can be interpreted as follows . at the beginning of phase separation ,the fluids separate spontaneously into small regions with higher and lower densities , and more liquid - vapor interfaces appear . as a result ,spacial discretization errors in eq .( 10 ) induced by interfaces arrive at their maxima that account for the initial oscillations .as time evolves , under the action of surface tension , the total liquid - vapor interface length decreases due to the mergence of small domains , then the discretization errors decrease . with the increase of precision , variations of total energy decreases .we can , therefore , come to the conclusion that wfft scheme with higher - order filter has more advantage in guaranteeing energy conservation than the one with lower - order filter and other fd schemes used above .in this section , two kinds of typical benchmarks are performed to validate the physical properties of the thermal multiphase model and the newly proposed algorithm . the first one is related to a planar interface .the second one is related to a circular interface . 
to checkif the thermal lb multiphase model can correctly reproduce the equilibrium thermodynamics of the system and the numerical accuracy of the new scheme , a series of simulations about the liquid - vapor interface at different temperatures were performed .unless otherwise stated , the wfft scheme with is used throughout our simulations .simulations were carried out over a domain with pbc in both directions .the initial conditions are set as and are the theoretical values at .parameters are set to be , and others are unchanged .the initial temperature is set to be , but dropping by , when the equilibrium state of the system is achieved .simulations were then run until the temperature had reduced to . in fig .9 , the liquid - vapor coexistence curves from lb simulations using various numerical schemes at different temperatures are compared to the theoretical predictions from maxwell construction .one can see that when using the wfft and the nps schemes , the results are closer to the theoretical phase diagram .nevertheless , when the temperature is lower than , the nps scheme becomes unstable .physically , this is owing to the sharp interface when ( see fig .11 ) and the nature of the vdw eos , since large density ratio occurs when the temperature is much lower than the critical one .from another perspective , it demonstrates that the wfft algorithm has a better numerical stability for this test .results from the mlw2 and the fl schemes deviate remarkably from the theoretical values , especially for the vapor branch at lower temperatures .besides physical reasons listed above , numerical accuracy of these two schemes is also an important factor .the velocity and density profiles at are shown in fig .10 and fig .11 , respectively . as one can see in fig .10 , for all schemes , spurious velocities exist and reach their maxima near the interface regions. however , the maximum spurious velocities obtained from different schemes are greatly different . for the mlw2 and the fl schemes ,the maxima of are on the order of and , respectively .a significant reduction of the is achieved by using the nps scheme , decreasing the maximum to about . through the usage of the wfft algorithm , is further reduced by an order of magnitude compared to nps scheme .density profiles in fig .11(a ) indicate that spurious interfaces ( scatter symbols near the interfaces ) have been produced when using the mlw2 scheme or the fl scheme because of excess numerical diffusion , which does not provide us a clear picture of phase separation , especially when the temperature is close to the critical value .this feature is not present when using the nps scheme or the wfft scheme ( see fig .11b ) . for all numerical schemes, the strength of surface tension plays an important role in reducing spurious velocities , as shown in fig .12 . in the case of ,the amplitudes of can be reduced by a factor of approximately with respect to the case of .subsequent simulations indicate that will decrease to when increases to .this is due to the existence of a wider interface and a smaller density gradient in the interface region when increases . with the decrease of spurious velocities , a more accurate phase diagram , especially in the vapor branch ,is also achieved ( see fig .13 ) , even in the mlw2 case and the fl case . besides the strength of surface tension, temperature is another key factor affecting spurious velocities , as displayed in fig .14 . 
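For reference, the theoretical coexistence curve used in the comparison above can be generated by a Maxwell equal-area construction on the van der Waals isotherms. The sketch below is written in reduced units (critical point at ρ = T = p = 1), which may differ from the parameterisation used in the simulations, so the densities would need to be rescaled before a direct comparison; the equal-area integral is evaluated analytically.

```python
import numpy as np
from scipy.optimize import fsolve

def p_vdw(v, T):
    """Reduced van der Waals pressure (critical point at v = T = p = 1)."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v ** 2

def coexistence(T):
    """Liquid and vapour densities at reduced temperature T < 1 from the
    Maxwell equal-area construction on the reduced vdW isotherm."""
    def equations(x):
        v_l, v_v = x
        p_l, p_v = p_vdw(v_l, T), p_vdw(v_v, T)
        # equal pressure, and equal area (the integral of p dv is analytic)
        area = (8.0 * T / 3.0) * np.log((3.0 * v_v - 1.0) / (3.0 * v_l - 1.0)) \
               + 3.0 / v_v - 3.0 / v_l - p_l * (v_v - v_l)
        return [p_l - p_v, area]
    d = 2.0 * np.sqrt(1.0 - T)                 # rough near-critical guess for the gap
    guess = [1.0 / (1.0 + d), 1.0 / max(1.0 - d, 0.05)]
    v_l, v_v = fsolve(equations, guess)
    return 1.0 / v_l, 1.0 / v_v                # densities rho = 1 / v

for T in (0.99, 0.95, 0.90, 0.85):
    rho_l, rho_v = coexistence(T)
    print(f"T = {T:.2f}:  rho_liquid = {rho_l:.3f}   rho_vapour = {rho_v:.3f}")
```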
for all numerical schemes ,a lower temperature makes larger spurious velocities .it is worth mentioning that , at the same temperature , the wfft algorithm also allows to reduce of about an order of magnitude compared to nps scheme . in our simulations, so far , we have not discussed in detail the width of interface . according to the vdw theory ,the interface width , can be determined by numerically solving the following integral for a planar interface ^{1/2}}\text{,}\]]where , and , -\rho ^{\ast 2}\text{,}\]] , in this model .note that the solution of the above equations gives the exact density profile for a planar interface for any value of .equilibrium density profiles across the liquid - vapor interface at from lb simulations versus results from vdw theory are shown in fig . 15 .it is clear that although the liquid and the vapor densities calculated from the mlw2 and the fl schemes coincide with the theoretical ones , neither the mlw2 nor the fl scheme produces the correct interface profile .the wider interface in these two cases is due to the excess numerical diffusion , as shown in fig.11a .the nps approach leads to a small deviation from the vdw theory , while the wfft scheme presents a perfect consistency with the theoretical solution . in fig .16 we display density profiles obtained from wfft algorithm with and , respectively . as expected , the interface becomes wider as increases .a wider interface decreases the density gradient in the interface region and helps to stabilize the liquid - vapor system at lower temperature . in this subsection, we will look at the dynamics of the relaxation of a deformed droplet driven by surface tension , and investigate the magnitude , as well as the spatial extent of the spurious currents for the circular interface .initially , an equilateral triangular droplet with an initial sharp interface is placed at the center of the computational domain with lattice units .initial conditions are given by subscripts ` in ' and ` out ' indicate macroscopic variables inside and outside the liquid drop , respectively .pbc are employed on both the vertical and horizontal boundaries .the surface tension parameter is , leaving the others unchanged .after time steps , the system reaches equilibrium .contour plots of the fluid density at four representative times are shown in fig .it is clearly seen that due to the effects of surface tension , the droplet relaxes to a circle slowly .the velocity fields at obtained from the nps scheme and the wfft scheme with are plotted in fig .18 . to illustrate the structure of the velocity field clearly, the lengths of the velocity vectors are multiplied by . 
to be seenis that spurious currents exist in each case and are roughly aligned in the direction normally to the interface and rapidly disappear away from the interface .however , the magnitude of the spurious currents are significantly reduced as the wfft approach is used .figure 19 shows temporal evolution of the maximum velocity with the second - order , the fourth - order , the sixth - order , and the eighth - order fft schemes , and the nps scheme .we can see that , in each case , decreases , and tends to nearly a constant when .more importantly , with the increase of precision , decreases .there is a decrease of a factor for the velocities when using the wfft scheme with respect to the nps scheme .the density and the pressure profiles along the center line of the droplet are plotted in fig .in the inner and outer of the droplet , pressures and are two constants and a rapid change occurs across the interface .the difference between the two constants is usually used to compute the surface tension for a given .for this purpose , we introduce the laplace law which states is the mean pressure inside the droplet averaged over all points of from the droplet center and is the external mean pressure averaged over all the points of . in this way, only the particles far from the interfaces are considered .surface tension can also be computed in such a way in order to test these relations , a series of simulations with sides ranging from to are run with three different surface tension parameters , and . in fig .21 , we present a plot of versus at and linear relation is well satisfied . by measuring the slope ,the surface tension is found to be , , and , which are in excellent agreement with the theoretical values , obtained from eq .( 55 ) , , , and .we mention that the relative error of surface tension increases with the decrease of surface tension parameter .there are two reasons accounting for this behavior .firstly , a larger will cause a larger surface tension and a larger pressure difference .this helps to measure the pressure difference with high accuracy .secondly , when decreases larger spurious velocities will be produced and larger pressure oscillations will be induced .in this paper , a thermal lb model for liquid - vapor system is developed .the present model experienced mainly three stages .it was originally composed by wt for ideal gas , then developed by gls by adding an interparticle force term . herewe propose to use the wfft scheme to calculate both the convection term and the external force term .the usage of the wfft scheme is detailed and analyzed .it is found that the lower - order filters , with better numerical stability and lower accuracy , can effectively reduce the gibbs phenomenon at discontinuity , while the higher - order filters help the scheme to maintain high resolution in smooth regions .one can choose appropriate filter according to the specific problem .with the higher - order wfft algorithm , one can better control the non - conservation problem of total energy due to spatiotemporal disterizations .the model has been successfully applied to the calculation of interfacial properties of liquid - vapor systems . very sharp interfaces can be achieved . by adopting the new modelthe magnitude of spurious currents can be greatly reduced . 
as a result , the phase diagram of the liquid - vapor system obtained from simulations are more consistent with that from theoretical calculation .the accuracy of the simulation results is also verified by the laplace law .besides the numerical effects , both the surface tension and temperature have also significant influences on the spurious velocities .a stronger surface tension and/or a higher temperature can decrease the density gradient near the interfaces and stabilize the simulations . the analysis presented in this work provides an convenient way of extending the wfft approach to multiphase lb models and to numerical solving partial differential equations . in further studies we will increase the depth of separation , which the model can undergo and investigate the similarities and differences between thermal and isothermal phase separationsthe authors sincerely and warmly thank the anonymous reviewers for their valuable comments , encouragements , and suggestions , and we warmly thank dr .victor sofonea for many instructive discussions .ax and gz acknowledge support of the science foundations of lcp and caep [ under grant nos .2009a0102005 , 2009b0101012 ] , national natural science foundation of china [ under grant no .11075021 ] .yg and yl acknowledge support of national basic research program ( 973 program ) [ under grant no .2007cb815105 ] , national natural science foundation of china [ under grant no . 11074300 ] , fundamental research funds for the central university [ under grant no . 2010ys03 ] , technology support program of langfang [ under grant nos .2010011029/30/31 ] , and science foundation of nciae [ under grant no .2008-ky-13 ] .substituting eq . ( [ k - taylorseries ] ) into eq .( [ fft ] ) , the rhs of eq .( [ fft ] ) can be expressed as , \label{ikfk}\end{aligned}\ ] ] taking ift of the rhs of the first line of eq .( [ ikfk ] ) gives & = & \frac{1}{l}% \sum_{n =- n/2}^{n/2 - 1}e^{\mathbf{i}kx_{j}}\times \frac{\mathbf{i}}{\delta x/2}% \sin ( k\delta x/2)\times \widetilde{f}(k ) \nonumber \\ & = & \frac{1}{l}\sum_{n =- n/2}^{n/2 - 1}e^{\mathbf{i}kx_{j}}\frac{e^{\mathbf{i}% k\delta x/2}-e^{-\mathbf{i}k\delta x/2}}{\delta x}\times \widetilde{f}(k ) \nonumber \\ & = & \frac{f(x_{j}+\delta x/2)-f(x_{j}-\delta x/2)}{\delta x } \nonumber \\ & = & f^{\prime } ( x_{j})+\frac{1}{24}\delta x^{2}f^{\prime \prime \prime } ( x_{j})+ ...\end{aligned}\]]it is clear that the fft scheme with operator has a second - order accuracy in space . in a similar way, we have =f^{\prime } ( x_{j})+\frac{1}{1920}% \delta x^{4}f^{(5)}(x_{j})+ ... ,\]]=f^{\prime } ( x_{j})+\frac{1}{% 322560}\delta x^{6}f^{(7)}(x_{j})+ ... ,\]]=f^{\prime } ( x_{j})+\frac{1}{% 92897280}\delta x^{8}f^{(9)}(x_{j})+ ... ,\]]where , , and represent the fifth order , the seventh order , and the ninth order derivatives , respectively . therefore , the wfft approach with , , , and has a second - order , fourth - order , sixth - order , and eighth - order accuracy in space , respectively . 
from another perspective , it should be noted that , the fft scheme is not a local scheme or , in other words , is not a local operator spectral - methods - book-2,plasma - book , since each fft coefficient is determined by all the grid point values of , as shown in eqs.(ft-[ift ] ) .therefore , the fft scheme is not a finite - point formula , like the second - order fd is a 3-point formula , or the fourth order expression , is a 5-point formula ; rather , the fft scheme is -point formulas .but there are important reasons for expressing derivatives as local operators . in a continuous space, the derivative of a function is defined locally .hence , when modeling a continuous system with a discrete system , it is desirable to retain the local character of the derivative .this can be especially true near boundaries or marked internal inhomogeneities . from the above derivations , we find that the fft scheme with corresponds a 3-point fd scheme .therefore , from the point of numerical analysis , , , , and can maintain the local characteristic of in some extent .hence , errors arising from the discontinuity are also localized and the accuracy away from the discontinuity can be ensured .m. swift , w. osborn , j. yeomans , phys .lett . 75 ( 1995 ) 830 ; g. gonnella , e. orlandini , j. yeomans , phys . rev .78 ( 1997 ) 1695 ; a. wagner , j. yeomans , phys .80 ( 1998 ) 1429 ; d. marenduzzo , e. orlandini , j. yeomans , phys . rev . lett .92 ( 2004 ) 188301 ; r. verberg , c. pooley , j. yeomans , a. balazs , phys .( 2004 ) 184501 ; d. marenduzzo , e. orlandini , j. yeomans , phys .98 ( 2007 ) 118102 .a. g. xu , g. gonnella , a. lamura , phys .e 67 ( 2003 ) 056105 ; a. g. xu , g. gonnella , a. lamura , phys .e 74 ( 2006 ) 011505 ; a. g. xu , g. gonnella , a. lamura , g. amati , f. massaioli , europhys .71 ( 2005 ) 651 .a. onuki , phys .94 ( 2005 ) 054501 ; a. onuki , phys . rev .e 75 ( 2007 ) 036304 ; r. teshigawara , a. onuki , europhys .84 ( 2008 ) 36003 ; r. teshigawara , a. onuki , phys .e 82 ( 2010 ) 021603 .
We further develop a thermal LB model for multiphase flows. In the improved model, we propose to use the windowed FFT and its inverse to calculate both the convection term and the external force term. With the new scheme, Gibbs oscillations can be damped effectively in unsmooth regions while the high-resolution character of the spectral method is retained in smooth regions. As a result, spatiotemporal discretization errors are decreased dramatically and the conservation of total energy is much better preserved. A direct consequence of these improvements is that the unphysical spurious velocities at the interfacial regions can be damped to a negligible scale. With the new model, the phase diagram of the liquid-vapor system obtained from simulation is more consistent with that from theoretical calculation, and very sharp interfaces can be achieved. The accuracy of the simulation results is also verified by the Laplace law. The high resolution, together with the low complexity of the FFT, endows the proposed method with considerable potential for studying a wide class of problems in the field of multiphase flows and for solving other partial differential equations.

Keywords: lattice Boltzmann method; spurious velocities; liquid-vapor systems; windowed FFT

PACS: 47.11.-j, 47.55.-t, 05.20.Dd
[ [ ams - subject - classification - msc2010 ] ] ams subject classification ( msc2010 ) the study of heavy traffic in queueing systems began in the 1960s , with three pioneering papers by kingman .these papers , and the early work of prohorov , borovkov and iglehart , concerned a single resource . since then there has been significant interest in networks of resources , with major advances by harrison and reiman , reiman , williams and bramson . for discussions , further references and overviews of the very extensive literature on heavy traffic for networks ,williams , bramson and dai , harrison and whitt are recommended .research in this area is motivated in part by the need to understand and control the behaviour of communications , manufacturing and service networks , and thus to improve their design and performance . but researchers are also attracted by the elegance of some of the mathematical constructs : in particular , the multi - dimensional reflecting brownian motions that often arise as limits . a question that arises in a wide variety of application areasconcerns how flows through a network should be controlled , so that the network responds sensibly to varying conditions .road traffic was an area of interest to early researchers , and more recently the question has been studied in work on modelling the internet . in each of these cases the network studied is part of a larger system : for example , drivers generate demand and select their routes in ways that are responsive to the delays incurred or expected , which depend on the controls implemented in the road network .it is important to address such interactions between the network and the larger system , and in particular to understand the signals , such as delay , provided to the larger system .work on internet congestion control generally addresses the issue of fairness , since there exist situations where a given scheme might maximise network throughput , for example , while denying access to some users . in this areait has been possible to integrate ideas of fairness of a control scheme with overall system optimization : indeed fairness of the control scheme is often the means by which the right information and incentives are provided to the larger system .might some of these ideas transfer to help our understanding of the control of road traffic ? in this paper we present a preliminary exploration of a particular topic : ramp metering .unlimited access to a motorway network can , in overloaded conditions , cause a loss of capacity .ramp metering ( signals on slip roads to control access to the motorway ) can help avoid this loss of capacity .the problem is one of access control , a common issue for communication networks , and in this paper we describe a ramp metering policy , _ proportionally fair metering , inspired by rate control mechanisms developed for the internet . _the organisation of this paper is as follows . in section [ asq ]we review early heavy traffic results for a single queue . in section [ mic ]we describe a model of internet congestion control , which we use to illustrate the simplifications and insights heavy traffic allows . 
in section [ bnm ]we describe a brownian network model , which both generalizes a model of section [ asq ] and arises as a heavy traffic limit of the networks considered in section [ mic ] .sections [ mic ] and [ bnm ] are based on the recent results of .these heavy traffic models help us to understand the behaviour of networks operating under policies for sharing capacity fairly .in section [ mcm ] we develop an approach to the design of ramp metering flow rates informed by the earlier sections . for each of three examples ,we present a brownian network model operating under a proportionally fair metering policy .our first example is a linear network representing a road into a city centre with several entry points ; we then discuss a tree network , and , in section [ rc ] , a simple network where drivers have routing choices . within the brownian network models we show that in each case the delay suffered by a driver at an entry point to the network can be expressed as a sum of dual variables , one for each of the resources to be used , and that under their stationary distribution these dual variables are independent exponential random variables . for the final examplewe show that the interaction of proportionally fair metering with choices available to arriving traffic has beneficial consequences for the performance of the system .john kingman s initial insight , that heavy traffic reveals the essential properties of queues , generalises to networks , where heavy traffic allows sufficient simplification to make clear the most important consequences of resource allocation policies .consider a queue with a single server of unit capacity at which customers arrive as a poisson process of rate .customers bring amounts of work for the server which are independent and identically distributed with distribution , and are independent of the arrival process .assume the distribution has mean and finite second moment , and that the load on the queue , , satisfies .let be the _ workload _ in the queue at time ; for a server of unit capacity this is the time it would take for the server to empty the queue if no more arrivals were to occur after time . kingman showed that the stationary distribution of is asymptotically exponentially distributed as .current approaches to heavy traffic generally proceed via a weaker assumption that the cumulative arrival process of work satisfies a functional central limit theorem , and use this to show that as , the appropriately normalized workload process can be approximated by a reflecting brownian motion on . in the interior of , behaves as a brownian motion with drift and variance determined by the variance of the cumulative arrival process of work .when hits zero , then the server may become idle ; this is where delicacy is needed .the stationary distribution of the reflecting brownian motion is exponential , corresponding to kingman s early result .we note an important consequence of the scalings appearing in the definition ( [ scaling ] ) , the _ snapshot principle . because of the different scalings applied to space and time , the workload is of order while the workload can change significantly only over time intervals of order . hence the time taken to servethe amount of work in the queue is asymptotically negligible compared to the time taken for the workload to change significantly . 
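kingman's exponential approximation can be probed with a very small simulation. the sketch below (python with numpy; an illustration added here, not part of the original analysis) generates customer waiting times in an m/m/1 queue via the lindley recursion; by the poisson-arrivals-see-time-averages property these waits have the stationary workload distribution. their mean is compared with lambda E[S^2] / (2(1 - rho)), the pollaczek-khinchine mean (a standard queueing formula quoted from outside the text), which is also the mean of the approximating exponential, and the fraction of waits beyond that exponential's 90th percentile is printed; it should approach 0.10 as the load approaches one.

import numpy as np

rng = np.random.default_rng(0)

def mm1_waiting_times(lam, mu, n_customers):
    # waiting times seen by arriving customers, via the lindley recursion
    # w_{n+1} = max(w_n + s_n - a_{n+1}, 0)
    a = rng.exponential(1.0 / lam, n_customers)    # interarrival times
    s = rng.exponential(1.0 / mu, n_customers)     # service requirements
    w = np.zeros(n_customers)
    for i in range(1, n_customers):
        w[i] = max(w[i - 1] + s[i - 1] - a[i], 0.0)
    return w

mu = 1.0
for rho in (0.8, 0.9, 0.95):
    lam = rho * mu
    w = mm1_waiting_times(lam, mu, 400_000)[100_000:]   # discard warm-up
    mean_approx = lam * (2.0 / mu**2) / (2.0 * (1.0 - rho))
    # fraction of waits beyond the approximating exponential's 90th percentile
    tail = np.mean(w > mean_approx * np.log(10.0))
    print(f"rho={rho}: simulated mean={w.mean():.2f}, "
          f"approx mean={mean_approx:.2f}, tail fraction={tail:.3f}")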
_note that the workload does not depend on the queue discipline ( provided the discipline does not allow idling when there is work to be done ) , although the waiting time for an arriving customer certainly does .kingman makes elegant use of the snapshot principle to compare stationary waiting time distributions under a range of queue disciplines .it will be helpful to develop in detail a simple example .consider a markov process in continuous time with state space and non - diagonal infinitesimal transition rates let . if then the markov process has stationary distribution ( here , the superscript signals that the random variable is associated with the stationary distribution ) .the markov process corresponds to an m / m/1 queue , at which customers arrive as a poisson process of rate , and where customers bring an amount of work for the server which is exponentially distributed with parameter .next consider an m / g/1 queue with the processor - sharing discipline ( under the processor - sharing discipline , while there are customers in the queue each receives a proportion of the capacity of the server ) . the process is no longer markov , but it nonetheless has the same stationary distribution as in ( [ geom ] ) .moreover in the stationary regime , given , the amounts of work left to be completed on each of the customers in the queue form a collection of independent random variables , each with distribution function a distribution recognisable as that of the forward recurrence time in a stationary renewal process whose inter - event time distribution is .thus the stationary distribution of is just that of the sum of independent random variables each with distribution , where has the distribution ( [ geom ] ) .let be a random variable with distribution .then we can deduce that the stationary distribution of has the property that in probability , the mean of the distribution , as .for fixed , under the stationary distribution for the queue , let be the number of customers in the queue with a remaining work requirement of not more than .then , in probability as . at the level of stationary distributions , this is an example of a property called state - space collapse : in heavy traffic the stochastic behaviour of the system is essentially given by , with more detailed information about the system ( in this case , the numbers of customers with various remaining work requirements ) not being necessary .the amount of work arriving at the queue over a period of time , , has a compound poisson distribution , with a straightforwardly calculated mean and variance of and respectively , where .an alternative approach is to directly model the cumulative arrival process of work as a brownian motion with matching mean and variance parameters : thus where is a standard brownian motion .let a brownian motion starting from the origin with drift and variance . 
in this approachwe define the queue s workload at time by the system of equations the interpretation of the model is as follows .while is positive , it is driven by the brownian fluctuations caused by arrival of work less the work served .but when hits zero , the resource may not be fully utilized .the process defined by equation ( [ q2 ] ) is continuous and non - decreasing , and is the minimal such process that permits , given by equation ( [ q1 ] ) , to remain non - negative .we interpret as the cumulative unused capacity up to time .note that can increase only at times when is at zero .the stationary distribution of is exponential with mean .this is the same as the distribution of where has the stationary distribution of the reflecting brownian motion that approximates the scaled process given by ( [ scaling ] ) .furthermore , the mean of the stationary distribution of is the same as the mean of the exact stationary distribution of the workload , calculated from its representation as the geometric sum ( [ geom ] ) of independent random variables each with distribution and hence mean .in other words , for the m / g/1 queue , we obtain the same exponential stationary distribution either by ( a ) approximating the workload arrival process directly by a brownian motion without any space or time scaling , or by ( b ) approximating the scaled workload process in ( [ scaling ] ) by a reflecting brownian motion , finding the stationary distribution of the latter , and then formally unwinding the spatial scaling to obtain a distribution in the original spatial units .furthermore , this exponential distribution has the same mean as the exact stationary distribution for the workload in the m / g/1 queue and provides a rather good approximation , being of the same order of accuracy as the exponential approximation of the geometric distribution with the same mean .the main point of the above discussion is that , in the context of this example , we observe that for the purposes of computing approximations to the stationary workload , using a direct brownian model for the workload arrival process ( by matching mean and variance parameters ) provides the same results as use of the heavy traffic diffusion approximation coupled with formal unwinding of the spatial scaling , and the approximate stationary distribution that this yields compares remarkably well with exact results .we shall give another example of this kind of fortuitously good approximation in section [ bnm ] .chen and yao have also noted remarkably good results from using such ` strong approximations ' without any scaling . in this sectionwe describe a network generalization of processor sharing that has been useful in modelling flows through the internet , and outline a recent heavy traffic approach to its analysis .consider a network with a finite set of _ resources . be a non - empty subset of , and write to indicate that resource is used by route .let be the set of possible routes .assume that both and are non - empty and finite , and let and denote the cardinality of the respective sets .set if , and otherwise .this defines a matrix of zeroes and ones , the _ resource - route incidence matrix .assume that has rank , so that it has full row rank . __ suppose that resource has capacity , and that there are connections using route .how might the capacities be shared over the routes , given the numbers of connections ? 
this is a question which has attracted attention in a variety of fields , ranging from game theory , through economics to political philosophy .here we describe a concept of fairness which is a natural extension of nash s bargaining solution and , as such , satisfies certain natural axioms of fairness ; the concept has been used extensively in the modelling of rate control algorithms in the internet .let . a _ capacity allocation policy _ , where , is called _ proportionally fair _ if for each , solves note that the constraint ( [ pf2 ] ) captures the limited capacity of resource , while constraint ( [ pf4 ] ) requires that no capacity be allocated to a route which has no connections .the problem ( [ pf1])([pf4 ] ) is a straightforward convex optimizationproblem , with optimal solution where the variables are lagrange multipliers ( or _ dual variables _ ) for the constraints ( [ pf2 ] ) .the solution to the optimization problem is unique and satisfies for by the strict concavity on and boundary behaviour of the objective function in ( 1.6 ) .the allocation describes how capacities are shared for a given number of connections on each route .next we describe a stochastic model for how the number of connections within the network varies .a connection on route corresponds to continuous transmission of a document through the resources used by route .transmission is assumed to occur simultaneously through all the resources used by route .let the number of connections on route at time be denoted by , and let .we consider a markov process in continuous time with state space and non - diagonal infinitesimal transition rates where is the -th unit vector in , and , , .the markov process corresponds to a model where new connections arrive on route as a poisson process of rate , and a connection on route transfers a document whose size is exponentially distributed with parameter . in the case where and , the transition rates ( [ sflm ] ) reduce to the rates ( [ mm1 ] ) of the m / m/1 queue .define the _ load _ on route to be for .it is known that the markov process is positive recurrent provided these are natural constraints : the load arriving at the network for resource must be less than the capacity of resource , for each .let ] , , ] . for each , define to be the unique value of that solves the following optimization problem : where the function was introduced in and can be used to show positive recurrence of under conditions ( [ capcon ] ) . in difference is used as a lyapunov function to show that any fluid model solution converges towards the invariant manifold .it is straightforward to check that if and only if and it turns out that a'(a[\mu]^{-1 } [ \nu ] [ \mu]^{-1 } a')^{-1 } w.\ ] ] note that if lives in the space then , given by equations ( [ req : w ] ) and ( [ rfluone ] ) as ^{-1}\bar n^r ] , which we can write as ^{-1 } [ \nu ] [ \mu]^{-1 } a ' q \\hbox { for some } q\in \mathbb{r}_+^{j } \big\},\ ] ] generally a space of lower dimension .call the _ workload cone .let ^{-1 } [ \nu ] [ \mu]^{-1 } a ' q \\ & \qquad \qquad \hbox{for some } q\in \mathbb{r}_+^{j } \hbox { satisfying } q_j=0\big\ } , \end{split}\ ] ] which we refer to as the face of the workload cone . _we define diffusion scaled processes , as follows . for each and , let in the next sub - section we outline the convergence in distribution of the sequence as . 
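before turning to that limit, the proportionally fair allocation defined by ( [ pf1])([pf4 ] ) can be computed numerically for a small example. the sketch below uses python with the cvxpy package (an implementation choice made purely for illustration; nothing in the text prescribes it) on a toy network with two resources and three routes, the third route using both resources; the connection numbers and capacities are invented. it also reads off the lagrange multipliers of the capacity constraints and checks the optimality relation in which each route's allocation equals its number of connections divided by the sum of the multipliers of the resources it uses.

import numpy as np
import cvxpy as cp

# toy network: routes 0 and 1 each use one resource, route 2 uses both
a = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # resource-route incidence matrix
c = np.array([1.0, 1.0])                 # capacities
n = np.array([2.0, 1.0, 3.0])            # connections on each route

alloc = cp.Variable(3, pos=True)
objective = cp.Maximize(cp.sum(cp.multiply(n, cp.log(alloc))))
capacity = [a @ alloc <= c]
prob = cp.Problem(objective, capacity)
prob.solve()

p = capacity[0].dual_value               # lagrange multipliers for (pf2)
print("allocation  :", alloc.value)
print("multipliers :", p)
print("n / (a^T p) :", n / (a.T @ p))    # should reproduce the allocation

for these particular numbers both resources are fully used and the optimum works out to an allocation of one half on every route, with multipliers 4 and 2, so the optimality relation reads 2/4, 1/2 and 3/6.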
as preparation ,note that if and for all , then for all .suppose , as a thought experiment , that for each the component behaves as the queue - length process in an independent m / m/1 queue , with a server of capacity .then a brownian approximation to would have variance .next observe that if the covariance matrix of is ] is ^{-1 } [ \nu ] [ \mu]^{-1 } a ' . \label{gamma}\ ] ] let be as in section [ mic ] : thus is a matrix of zeroes and ones of dimension and of full row rank , and , are vectors of positive entries of dimension . let , , and .let and be defined by expressions ( [ wcone ] ) and ( [ wsupj ] ) respectively .let and be given by ( [ gamma ] ) . in the following ,all processes are assumed to be defined on a fixed filtered probability space and to be adapted to the filtration .let be a probability distribution on .define a brownian network model by the following relationships : a. for all , b. has continuous paths , for all , and has distribution , c. is a -dimensional brownian motion starting from the origin with drift and covariance matrix such that is a martingale under , d. for each , is a one - dimensional process such that a. is continuous and non - decreasing , with , b. for all .the interpretation of the above brownian network model is as follows . in the interior of the workload cone each of the resources are fully utilized , route is receiving a capacity allocation for each , and the workloads are driven by the brownian fluctuations caused by arrivals and departures of connections .but when hits the face of the workload cone , resource may not be fully utilized .the cumulative unused capacity at resource is non - decreasing , and can increase only on the face of the workload cone .the work of dai and williams establishes the existence and uniqueness in law of the above diffusion . in it is shown that , if for all , then has a unique stationary distribution ; furthermore , if denotes a random variable with this stationary distribution , then the components of are independent and is exponentially distributed with parameter for each .now let be a vector of positive entries of dimension , define a sequence of networks as in section [ ht ] , and suppose and are related by the heavy traffic condition ( [ req : numu ] ) . in it is shown that , subject to a certain local traffic condition on the matrix and suitable convergence of initial variables , the pair converges in distribution as to a continuous process where is the above diffusion and .the proof in relies on both the existence and uniqueness results of and an associated invariance principle developed by kang and williams .( the local traffic condition under which convergence is established requires that the matrix contains amongst its columns the columns of the identity matrix : this corresponds to each resource serving at least one route which uses only that resource .the local traffic condition is not needed to show that has the aforementioned stationary distribution ; that requires only the weaker condition that have full row rank . )it is convenient to define , a process of _ dual variables . from this ,the form of , and the relation , it follows that a ' \tilde q ] has the stationary distribution of . 
then ,after formally unwinding the spatial scaling used to obtain our brownian approximation , we obtain the following simple approximation for the stationary distribution of the number - of - connections process in the original model described in section [ mic2 ] : where , , are independent and is exponentially distributed with parameter . as mentioned in section [ asq ] ,an alternative approach is to directly model the cumulative arrival process of work for each route as a brownian motion : where , , are independent standard brownian motions ; here the form of the variance parameter takes account of the fact that the document sizes are exponentially distributed . under this model ,the potential netflow ( inflow minus potential outflow , ignoring underutilization of resources ) process of work for resource is a -dimensional brownian motion starting from the origin with drift and covariance matrix [ \mu]^{-1}a'=\gamma ] , which is the same as the distribution of the right member of ( [ approx ] ) .thus , just as in the simple case considered in section [ asq ] , in this connection - level model , using the direct brownian model yields the same approximation for the stationary distribution of the number - of - connections process as that obtained using the heavy traffic diffusion approximation and formally unwinding the spatial scaling in its stationary distribution .if we specialize the direct brownian network model to the case where and , then we obtain the brownian model of section [ asq ] , with and where the stationary distribution for is exponentially distributed with mean , yielding the same approximation as in section [ asq ] .a more interesting example is obtained when and is the matrix : so that routes each use a single resource in such a way that there is exactly one such route for each resource , and one route uses all resources . in this case , the stationary distribution given by ( [ approxp ] ) accords remarkably well with the exact stationary distribution described by massouli and roberts ; it is again of the order of accuracy of the exponential approximation of the geometric distribution with the same mean .( we refer the interested reader to for the details of this good approximation . ) in this section and in section [ asq ] we have seen intriguing examples of remarkably good approximations that the direct brownian modelling approach can yield .inspired by this , in the next two sections we explore the use of the direct brownian network model as a representation of workload for a controlled motorway .rigorous justification for use of this modelling framework in the motorway context has yet to be investigated .see the last section of the paper for further comments on this issue .once motorway traffic exceeds a certain threshold level ( measured in terms of density the number of vehicles per mile ) both vehicle speed and vehicle throughput drop precipitously .the smooth pattern of flow that existed at lower densities breaks down , and the driver experiences stop - go traffic .maximum vehicle throughput ( measured in terms of the number of vehicles per minute ) occurs at quite high speeds about 60 miles per hour on californian freeways and on london s orbital motorway , the m25 while after flow breakdown the average speed may drop to 2030 miles per hour . 
particularly problematic is that flow breakdown may persist long after the conditions that provoked its onset have disappeared. variable speed limits lessen the number and severity of accidents on congested roads and are in use, for example, on the south-west quadrant of the m25. but variable speed limits do not avoid the loss of throughput caused by too high a density of vehicles. ramp metering (signals on slip roads to control access to the motorway) can limit the density of vehicles, and thus can avoid the loss of throughput. but a cost of this is queueing delay on the approaches to the motorway. how should ramp metering flow rates be chosen to control these queues, and to distribute queueing delay fairly over the various users of the motorway? in this section we introduce a modelling approach to address this question, based on several of the simplifications that we have seen arise in heavy traffic. consider the linear road network illustrated in figure [ fig : road ]. traffic can enter the main carriageway from lines at entry points, and then travels from left to right, with all traffic destined for the exit at the right hand end (think of this as a model of a road collecting traffic all bound for a city). let , , , taking values in be the line sizes at the entry points at time , and let , , , be the respective capacities of sections of the road. we assume the road starts at the left hand end, with line feeding an initial section of capacity , and that . the corresponding resource-route incidence matrix is the square matrix ( [ asquare ] ). (figure [ fig : road ] label residue: "users", "long, uncongested link", "short".) we model the traffic, or work, arriving at line , , as follows: let be the cumulative inflow to line over the time interval ] , where , and suppose these processes are independent over . suppose the metering rates for lines 1, 2, , at time can be chosen to be any measurable vector-valued function satisfying constraints ( [ pf2])([pf4 ] ) with , and such that for . observe that we do not take into account travel time along the road: motivated by the snapshot principle, we suppose that varies relatively slowly compared with the time taken to travel through the system, say from entry point . a more refined treatment might insist that the rates satisfy the capacity constraints ( [ pf2 ] ). we adopt the simpler approach, since we expect that in heavy traffic travel times along the motorway will be small compared with the time taken for to change significantly .
]how might the rate function be chosen ?we begin by a discussion of two extreme strategies .first we consider a strategy that prioritises the upstream entry points .suppose the metered rate from line , , is chosen so that for each the cumulative outflow from line , , is maximal , subject to the constraint ( [ eeet ] ) and for all : thus there is equality in the latter constraint whenever is positive .for each of , , , 1 in turn define to be maximal , subject to the constraint ( [ eeet ] ) and in consequence there is equality in constraint ( [ eet ] ) at time if , and by induction for each the cumulative flow along link , , is maximal , for , , , 1 .thus this strategy minimizes , for all times , the sum of the line sizes at time , .the above optimality property is compelling if the arrival patterns of traffic are exogenously determined .the strategy will , however , concentrate delay upon the flows entering the system at the more downstream entry points .this seems intuitively unfair , since these flows use fewer of the system s resources , and it may well have perverse and suboptimal consequences if it encourages growth in the load arriving at the upstream entry points .for example , growth in may cause the natural constraint ( [ capcon ] ) to be violated , even while traffic arriving at line suffers only a small amount of additional delay .next we consider a strategy that prioritises the downstream entry points . to present the argument most straightforwardly ,let us suppose that the cumulative inflow to line is discrete , i.e. , is constant except at an increasing , countable sequence of times , for each .suppose the inflow from line is chosen to be whenever is positive , and zero otherwise. then link will be fully utilized by the inflow from line a proportion of the time .let whenever both is positive and for , and let otherwise .this strategy minimizes lexicographically the vector at all times .provided the system is stable , link will be utilized solely by the inflow from line a proportion of the time .hence the system will be unstable if and thus may well be unstable even when the condition ( [ capcon ] ) is satisfied .essentially the strategy starves the downstream links , preventing them from working at their full capacity .our assumption that the cumulative inflow to line is discrete is not essential for this argument : the stability region will be reduced from ( [ capcon ] ) under fairly general conditions .the two extreme strategies we have described each have their own interest : the first has a certain optimality property but distributes delay unfairly , while the second can destabilise a network even when all the natural capacity constraints ( [ capcon ] ) are satisfied .given the line sizes , we suppose the metered rates are chosen to be proportionally fair : that is , the capacity allocation policy solves the optimization problem ( [ pf1])([pf4 ] ) .hence for the linear network we have from relations ( [ cs1])([cs2 ] ) that where the are lagrange multipliers satisfying under this policy the total flow along section will be its capacity whenever . 
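the same convex program gives the proportionally fair metered rates for the linear network. in the sketch below (python with cvxpy again; the line sizes and section capacities are invented for illustration) the incidence matrix is taken to be the square lower-triangular matrix of ones, our reading of the elided display ( [ asquare ] ), so that traffic entering at line r flows through sections r, r+1 and so on to the exit. the printed ratios of line size to metered rate are the delay estimates discussed in the next paragraph; consistent with the multiplier representation, they are non-increasing in the direction of travel, and each section's flow equals its capacity whenever its multiplier is positive.

import numpy as np
import cvxpy as cp

n_sections = 4
a = np.tril(np.ones((n_sections, n_sections)))   # section j carries routes 1..j
c = np.array([4.0, 3.0, 2.0, 3.0])               # section capacities (illustrative)
n = np.array([5.0, 2.0, 4.0, 1.0])               # current line sizes (illustrative)

rate = cp.Variable(n_sections, pos=True)
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(n, cp.log(rate)))),
                  [a @ rate <= c])
prob.solve()

print("metered rates   :", rate.value)
print("delay estimates :", n / rate.value)        # non-increasing downstream
print("section flows   :", a @ rate.value)        # = capacity where multiplier > 0

for these numbers only the two most downstream sections bind, so the first three entry points see a common, larger delay estimate and the last entry point a smaller one.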
given line sizes , the ratio is the time it would take to process the work currently in line at the current metered rate for line .thus give estimates , based on current line sizes , of queueing delay in each of the lines .note that these estimates do not take into account any change in the line sizes over the time taken for work to move through the line .next we describe our direct brownian network model for the linear network operating under the above policy .we make the assumption that the inflow to line is a brownian motion starting from the origin with drift and variance parameter , and so can be written in the form for , where , , are independent standard brownian motions . for example, if the inflow to each line were a poisson process , then this would be the central limit approximation , with .more general choices of could arise from either a compound poisson process , or the central limit approximation to a large class of inflow processes .our brownian network model will be a generalization of the model ( [ q1])([q2 ] ) of a single queue , and a specialization of the model of section [ bnm ] to the case where , , and the matrix is of the form ( [ asquare ] ) .let note that the first term is the cumulative workload entering the system for resource over the interval ] .we assume the stability condition ( [ capcon ] ) is satisfied , so that .write a ' \ , \mathbb{r}_+^j \label{wc}\ ] ] for the workload cone , and a ' q : q \in \mathbb{r}_+^j , q_j = 0 \ } , \label{wcj}\ ] ] for the face of .our brownian network model for the resource level workload is then the process defined by properties ( i)(iv ) of section [ bnm ] with in place of , in place of and ' ] .since is our model for , our brownian model for the line sizes is given by a ' { \breve q}. \label{delay1}\ ] ] within our brownian model we represent ( nominal ) delays at each line as given by since these would be the delays if line sizes remained constant over the time taken for a unit of traffic to move through the line , with both the arrival rate and metered rate at line . for line at time will not in general be the realized delay ( the time taken for the amount of work found in line at time to be metered from line ) . since the metered rate will in general differ from even when and .our definition of nominal delay is informed by our earlier heavy traffic results : as approaches we expect scaled realized delay to converge to scaled nominal delay .metered rates do fluctuate as a unit of traffic moves through the line , but we expect less and less so as the system moves into heavy traffic . ] relation ( [ delay2 ] ) becomes , for the linear network , parallelling relation ( [ delay ] ) .note that when hits the face of the workload cone , then and ; thus the loss of utilization at resource when hits the face of the workload cone is just sufficient to prevent the delay at line becoming smaller than the delay at the downstream line .if has the stationary distribution of , then the components of ')^{-1 } { \breve w}^s ] enforces the relationship .since the matrix is not invertible , this is no longer a necessary consequence of the network topology , but is a natural modelling assumption , motivated by the forms of state - space collapse we have seen earlier .essentially lines and use the same network resources and face the same queueing delays . 
a brownian network model of the first strategy from section [ aln ] could also be constructed , but the workload cone and its faces would not be of the required form ( [ wc ] ) and ( [ wcj ] ) , but instead would be defined by and the requirement that if then , with the interpretation .thus face represents the requirement that the workload for resource comprises at least the workload for resource , for , 2 , , .under this model , resource is fully utilized except when hits the face of the workload cone ( [ wineq ] ) : it is not possible for to leave , since the constraints expressed in the form ( [ wineq ] ) follow necessarily from the topology of the network embodied in .the model corresponds to the assumption that there is no more loss of utilization than is a necessary consequence of the network topology .note that the proportionally fair policy may fail to fully utilize a resource not only when this is a necessary consequence of the network topology , but also when this would cause an upstream entry point to obtain more than what the policy considers a fair share of a scarce downstream resource .next consider the tree network illustrated in figure [ fig : roadtree ] .access is metered at the six entry points so that the capacities , , , are not overloaded .there is no queueing after the entry point , and the capacities satisfy the conditions , , . given the line sizes , we suppose the metered rates are chosen to be proportionally fair : that is , the capacity allocation policy solves the optimization problem ( [ pf1])([pf4 ] ) where for this network we assume , as in the last section , that the cumulative inflow of work to line is given by equation ( [ et ] ) for , where , , are independent standard brownian motions .our brownian network model is again the process defined by properties ( i)(iv ) of section [ bnm ] with in place of , in place of , ' ] are independent and is exponentially distributed with parameter for each .the brownian model line sizes and delays are again given by equations ( [ delay1 ] ) and ( [ delay2 ] ) respectively , each with stationary distributions given by a linear combination of independent exponential random variables , one for each section of road .a key feature of the linear network , and its generalization to tree networks , is that all traffic is bound for the same destination . in our application to a road network this ensures that all traffic in a line at a given entry point is on the same route . if traffic on different routes shared a single line it would not be possible to align the delay incurred by traffic so precisely with the sum of dual variables for the resources to be used .ensures that the queueing delays in the proportionally fair brownian network model are partially ordered .a technical consequence is that a wide class of fair capacity allocations , the -fair allocations , share the same workload cone : in the notation of , the cone does not depend upon . ]next consider the road network illustrated in figure [ fig : roadparallel ] .three parallel roads lead into a fourth road and hence to a common destination .access to each of these roads is metered , so that their respective capacities , , , are not overloaded , and . 
there are four sources of traffic with respective loads , , , : the first source has access to road alone , on its way to road ; the second source has access to both roads and ; and the third source can access all three of the parallel roads .we assume that traffic arriving with access to more than one road distributes itself in an attempt to minimize its queueing delay , an assumption whose implications we shall explore .we could view sources of traffic as arising in different geographical regions , with different possibilities for easy access to the motorway network and with real time information on delays .or we could imagine a priority access discipline where some traffic , for example high occupancy vehicles , has a larger set of lines to choose from .given the line sizes , we suppose the metered rates are chosen to be proportionally fair : that is , the capacity allocation policy solves the optimization problem ( [ pf1])([pf4 ] ) . for this network andso , from relations ( [ cs1])([cs2 ] ) , we assume the ramp metering policy has no knowledge of the routing choices available to arriving traffic , but is simply a function of the observed line sizes , the topology matrix and the capacity vector .how might arriving traffic choose between lines ?well , traffic that arrives when the line sizes are and the metered rates are might reasonably consider the ratios in order to choose which line to join , since these ratios give the time it would take to process the work currently in line at the current metered rate for line , for , 2 , 3 .but these ratios are just for , 2 , 3 . given the choices available to the three sources, we would expect exercise of these choices to ensure that , or equivalently that the delays through lines 1 , 2 , 3 are weakly decreasing . because traffic from sources and has the ability to make route choices ,condition ( [ capcon ] ) is sufficient , but no longer necessary , for stability .the stability condition for the network of figure [ fig : roadparallel ] is and is thus of the form ( [ capcon ] ) , but with and replaced by and respectively , where the forms , capture the concept of four _ virtual resources _ of capacities , , 2 , 3 , 4 .given the line sizes , the workloads for the four virtual resources are . 
for , 2 , 3 , 4 , we model the cumulative inflow of work from source over the interval ] .we assume the stability condition ( [ enl ] ) is satisfied , so that all components of the drift are strictly negative .let be defined by ( [ wc ] ) , ( [ wcj ] ) respectively , with replaced by .define a process of dual variables for the virtual resources : ')^{-1 } { \breve w} ] are independent and for , 2 , 3 , 4 , is exponentially distributed with parameter where under the brownian network model , the stationary distribution for line sizes and for delays at each line are given by the distributions of and , respectively , where the brownian network model thus corresponds to natural assumptions about how arriving traffic from different sources would choose their routes .the results on the stationary distribution for the network are intriguing .the ramp metering policy has no knowledge of the routing choices available to arriving traffic , and hence of the enlarged stability region ( [ enl ] ) .nevertheless , under the brownian model , the interaction of the ramp metering policy with the routing choices available to arriving traffic has a performance described in terms of dual random variables , one for each of the virtual resources of the enlarged stability region ; when a driver makes a route choice , the delay facing a driver on a route is a sum of dual random variables , one for each of the virtual resources used by that route ; and under their stationary distribution , the dual random variables are independent and exponentially distributed .the design of ramp metering strategies can not assume that arriving traffic flows are exogenous , since in general drivers behaviour will be responsive to the delays incurred or expected . in this paperwe have presented a preliminary exploration of an approach to the design of ramp metering flow rates informed by earlier work on internet congestion control .a feature of this approach is that it may prove possible to integrate ideas of fairness of a control policy with overall system optimization .there remain many areas for further investigation . in particular , we have seen intriguing examples , in the context of a single queue and of internet congestion control , of remarkably good approximations produced for the stationary distributions of queue length and workload by use of the direct brownian modelling approach .furthermore , in the context of a controlled motorway , where a detailed model for arriving traffic is not easily available , use of a direct brownian model has enabled us to develop an approach to the design and performance of ramp metering and in the context of that model to obtain insights into the interaction of ramp metering with route choices .nevertheless , we expect that the use of direct brownian network models will not always produce good results . indeed , it is possible that such models may be suitable only when the scaled workload process can be approximated in heavy traffic by a reflecting brownian motion that has a product - form stationary distribution .we believe that understanding when the direct method is a good modelling approach and when it is not , and obtaining a rigorous understanding of the reasons for this , is an interesting topic worthy of further research .abou - rahme , n. , beale , s. , harbord , b. , and hardman , e. 2000 .monitoring and modelling of controlled motorways .pages 8490 of : _ tenth international conference on road transport information and control_. dai , j. g. , and williams , r. j. 
1995 .existence and uniqueness of semimartingale reflecting brownian motions in convex polyhedrons ._ theory probab ._ , * 40 * , 140 .correction : * 50 * ( 2006 ) , 346347 . harrison , j. m. 1988 .brownian models of queueing networks with heterogeneous customer populations .pages 147186 of : fleming , w. , and lions , p. l. ( eds ) , _ stochastic differential systems , stochastic control theory and their applications _ , i m a vol .appl . 10 .new york : springer - verlag . ,kelly , f. p. , lee , n. h. , and williams , r. j .. 2009 .state space collapse and diffusion approximation for a network operating under a fair bandwidth sharing policy ._ , * 19 * , 17191780 .kingman , j. f. c. 1963 .the heavy traffic approximation in the theory of queues .pages 137169 of smith , w. l. , and wilkinson , r. i. ( eds ) , _ proceedings of the symposium on congestion theory_. chapel hill , nc : univ . of north carolina .reiman , m. i. 1982 .the heavy traffic diffusion approximation for sojourn times in jackson networks .pages 409422 of : disney , r. i. , and ott , t. ( eds ) , _ applied probability computer science : the interface _ , vol .boston : birkhauser .williams , r. j. 1996 . on the approximation of queueing networks in heavy traffic .pages 3556 of : kelly , f. p. , zachary , a. , and ziedins , i. ( eds ) , _ stochastic networks : theory and applications_. oxford : oxford univ .
unlimited access to a motorway network can , in overloaded conditions , cause a loss of capacity . ramp metering ( signals on slip roads to control access to the motorway ) can help avoid this loss of capacity . the design of ramp metering strategies has several features in common with the design of access control mechanisms in communication networks . inspired by models and rate control mechanisms developed for internet congestion control , we propose a brownian network model as an approximate model for a controlled motorway and consider it operating under a proportionally fair ramp metering policy . we present an analysis of the performance of this model .
let us reconsider why the unification of general relativity and quantum theory has proven so difficult .mathematically , the problems clearly begin with the fact that the two theories are formulated in the quite different languages of differential geometry and functional analysis .physically , an important problem appears to be that general relativity and quantum theory , when considered together , are indicating that the notion of distance loses operational meaning at the planck scale of about ( assuming 3 + 1 dimensions ) .namely , if one tries to resolve a spatial structure with an uncertainty of less than a planck length , then the corresponding momentum uncertainty should randomly curve and thereby significantly disturb the very region in space that is meant to be resolved .one of the problems in the effort of finding a unifying theory of quantum gravity is , therefore , to develop a mathematical framework which combines differential geometry and functional analysis such as to give a precise description of a notion of a shortest distance in nature .candidate theories may become testable when introduced to inflationary cosmology and compared to the cmb measurements , see . in the literature ,there has been much debate about whether the unifying theory will describe space - time as being discrete or continuous .it is tempting , also , to speculate that a quantum gravity theory such as m theory , see e.g. , a noncommutative geometric theory , see e.g. , or a foam theory , see e.g. , once fully understood , might reveal the structure of space - time as being in some sense in between discrete and continuous , possibly such as to combine the the differentiability of manifolds with the ultraviolet finiteness of lattices . at first sight, this third possibility seems to be ruled out , however : as gdel and cohen proved , no set can be explicitly constructed whose cardinality would be in between discrete and continuous , see e.g. .the message of this talk is that , nevertheless , there still is at least one mathematical possibility by which a theory of quantum gravity might yield a description of space - time which combines the differentiability of manifolds with the ultraviolet finiteness of lattices :let us recall that physical theories are formulated not directly in terms of points in space or in space - time but rather in terms of the functions in space or in space - time .this suggests a whole new class of mathematical models for a finite minimum length .namely , fields in space - time could be functions over a differentiable manifold as usual , while , crucially , the class of physical fields is such that if a field is sampled only at discrete points then its amplitudes can already be reconstructed at _ all points in the manifold - if the sampling points are spaced densely enough .the maximum average sample spacing which allows one to reconstruct the continuous field from discrete samples could be on the order of the planck scale , see ._ since any one of all sufficiently tightly spaced lattices would allow reconstruction , no particular lattice would be preferred .it is because no particular lattice is singled out that the symmetry properties of the manifold can be preserved . the physical theory , i.e. fields and actions etc .could be written , equivalently , either as living on a differentiable manifold , thereby displaying e.g. 
external symmetries , or as living on any one of the sampling lattices of sufficiently small average spacing , thereby displaying its ultraviolet finiteness .physical fields , while being continuous or even differentiable , would possess only a finite density of degrees of freedom .the mathematics of classes of functions which can be reconstructed from discrete samples is well - known , namely as _ sampling theory , in the information theory community , where it plays a central role in the theory of sources and channels of continuous information as developed by shannon , see . _the simplest example in sampling theory is the shannon sampling theorem : choose a frequency . consider the class of continuous functions whose frequency content is limited to the interval , i.e. for which : if the amplitudes of such a function are known at equidistantly spaced discrete values whose spacing is or smaller , then the function s amplitudes can be reconstructed for all .the reconstruction formula is : }{(x - x_n)\omega_{max}}\ ] ] the theorem is in ubiquitous use in digital audio and video as well as in scientific data taking .sampling theory , see , studies generalizations of the theorem for various different classes of functions , for non - equidistant sampling , for multi - variable functions and it investigates the effect of noise , which could be quantum fluctuations in our case . as was shown in , generalized sampling theorems automatically arise from stringy uncertainty relations , namely whenever there is a finite minimum position uncertainty , as e.g. in uncertainty relations of the type : , see . a few technical remarks :the underlying mathematics is that of symmetric non self - adjoint operators . through a theorem of naimark ,unsharp variables of povm type arise as special cases .let us consider as a natural ( because covariant ) analogue of the bandwidth restriction of the shannon sampling theorem in curved space the presence of a cutoff on the spectrum of the laplace operator on a riemannian manifold ( or the dalembert or the dirac operator on a pseudo - riemannian or a spin manifold respectively ) .we start with the usual hilbert space of square integrable scalar functions over the manifold , and we consider the dense domain on which the laplacian is essentially self - adjoint .using physicists sloppy but convenient terminology we will speak of all points of the spectrum as eigenvalues , , with corresponding eigenvectors " .since we are mostly interested in the case of noncompact manifolds , whose spectrum will not be discrete , some more care will be needed , of course . for hilbert space vectorswe use the notation , in analogy to dirac s bra - ket notation , only with round brackets .let us define as the projector onto the subspace spanned by the eigenspaces of the laplacian with eigenvalues smaller than some fixed maximum value .( for the dalembertian and for the dirac operator , let bound the absolute values of the eigenvalues . )we consider now the possibility that in nature all physical fields are contained within the subspace , where might be on the order of .in fact , through this spectral cutoff , each function in acquires the sampling property : if its amplitude is known on a sufficiently dense set of points of the manifold , then it can be reconstructed everywhere .thus , through such a spectral cutoff a sampling theorem for physical fields arises naturally . 
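the shannon reconstruction formula quoted above is easy to demonstrate. the sketch below (python with numpy; the bandlimit, the number of samples and the test function are arbitrary choices) samples a function whose frequency content lies strictly inside the band at the standard nyquist spacing pi/omega_max and rebuilds it everywhere with the kernel sin[(x - x_n) omega_max] / [(x - x_n) omega_max]; the small residual error is due to truncating the infinite sum to finitely many samples. exactly the same reconstruction idea lies behind the spectral-cutoff claim just made for fields on a manifold.

import numpy as np

omega_max = np.pi                         # bandlimit
dx = np.pi / omega_max                    # standard nyquist spacing
x_n = np.arange(-60, 61) * dx             # equidistant sampling points

def test_function(x):
    # frequency content strictly inside [-omega_max, omega_max]
    return np.sinc(0.45 * x) ** 2         # np.sinc(t) = sin(pi t)/(pi t)

samples = test_function(x_n)

def reconstruct(x):
    # f(x) = sum_n f(x_n) sin[(x - x_n) omega_max] / [(x - x_n) omega_max]
    u = (x[:, None] - x_n[None, :]) * omega_max
    return np.sinc(u / np.pi) @ samples

x = np.linspace(-5.0, 5.0, 1001)
print("max reconstruction error:", np.max(np.abs(reconstruct(x) - test_function(x))))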
to see this , assume for simplicity that one chart covers the -dimensional manifold .consider the coordinates , for as operators that map scalar functions to scalar functions : . on their domain within the original hilbert space , these operators are essentially self - adjoint , with an hilbert basis " of non - normalizable joint eigenvectors .we can write scalar functions as , i.e. scalar functions are the coefficients of the abstract hilbert space vector in the basis of the vectors .the continuum normalization of the is with respect to the measure provided by the metric . on the domain of physical fields , , the multiplication operators are merely symmetric but not self - adjoint .the projections of the eigenvectors onto the physical subspace are in general no longer orthogonal .correspondingly , the uncertainty relations are modified , see .consider now a physical field , i.e. a vector , which reads as a function : .assume that only at the discrete points the field s amplitudes are known .then , if the discrete sampling points are sufficiently dense , they fully determine the hilbert space vector , and therefore everywhere . to be precise , we assume the amplitudes to be known .we use the sum and integral notation because may be discrete and or continuous ( the manifold may or may not be compact ) .define .the set of sampling points is dense enough for reconstruction iff is invertible , because then : and we therefore obtain the reconstruction formula : in communication theory , the stability of the reconstruction is important due to noise and is handled as in . here , not only may quantum fluctuations act as ` noise ' , but information can also be entangled .still , following shannon and landau , it is natural to define the density of degrees of freedom through the number of dimensions of the space of functions in with essential support in a given volume .clearly , we recover conventional shannon sampling as a special case .the shannon case has been applied to inflationary cosmology in for flat space .it should be very interesting to apply to cosmology also the general approach presented here , both to generic non - flat spatial slices , and also to the fully covariant case based on a cutoff of the spectrum of the dirac or dalembert operator .in particular , the analysis of the analog of sampling theory in the case of indefinite metrics should provide a new approach to the problem of generally covariant uv cutoffs .we also note that higher than second powers of the fields ( second powers occur as scalar products in the hilbert space of fields ) are now nontrivial in quantum field theoretical actions : this is because the multiple product of fields needs to be defined such as to yield a result within the cut - off hilbert space . in this context, it should be interesting also to reconsider the mechanism of sakharov s induced gravity , see .a sampling theoretical cutoff can be applied in arbitrary dimensions and it should of interesting , e.g. , to model a maximum achievable information density on black hole horizons this way .for holography , see e.g. .note that the sampling theoretical cutoff is always holographic in the sense that all information is encoded already in zero - dimensional sets , namely in any set of sampling points from which reconstruction is possible . 
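the reconstruction formula above amounts to inverting a matrix of eigenfunctions evaluated at the sampling points. as a concrete low-dimensional illustration (python with numpy; a sketch on the circle rather than on a general riemannian manifold, with an ordinary least-squares solve standing in for the operator-theoretic construction), keep only the laplacian eigenfunctions exp(imx) with m^2 below a cutoff, sample a field from this subspace at scattered points, and recover it everywhere from the resulting linear system; with equidistant points this is essentially the circle analogue of conventional shannon sampling.

import numpy as np

rng = np.random.default_rng(1)

m_max = 6
modes = np.arange(-m_max, m_max + 1)     # laplacian eigenvalues m^2 <= m_max^2

def basis(x):
    # cut-off laplacian eigenfunctions exp(i m x) on the circle, at points x
    return np.exp(1j * np.outer(x, modes)) / np.sqrt(2.0 * np.pi)

# a "physical" field: an arbitrary vector in the cut-off subspace
coeff_true = rng.normal(size=modes.size) + 1j * rng.normal(size=modes.size)

def field(x):
    return basis(x) @ coeff_true

# sample at scattered points; any sufficiently dense set of points will do
x_sample = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=modes.size + 4))
samples = field(x_sample)

# reconstruction: recover the coefficients from the sampled amplitudes
coeff_rec = np.linalg.lstsq(basis(x_sample), samples, rcond=None)[0]

x_test = np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
err = np.max(np.abs(basis(x_test) @ coeff_rec - field(x_test)))
print("max reconstruction error on the circle:", err)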
in principle , holography in this sense should not be surprising : any quantum theory which lives on a separable hilbert space , in any space - time dimension , lives on a hilbert space with a countable basis .for example , the hilbert space of ordinary qm in three dimensions is unitarily equivalent to the hilbert space of qm in any other number of dimensions , simply because all separable hilbert spaces are unitarily isomorphic .the key observation in our sampling theory approach is that discrete sets of formal position eigenvectors can be chosen as such a countable basis in the hilbert space , if there is bandwidth cutoff .our approach to sampling on curved space significantly simplifies in the case of compact manifolds , where the spectrum of the laplacian is discrete and the cut off hilbert space is finite dimensional .intuitively , it is clear that knowledge of a function at as many points as is the dimension of the cutoff hilbert space generically allows one to reconstruct the function everywhere .if the compact manifold is a group , the peter - weyl theorem provides us explicitly with the finite - dimensional hilbert spaces of functions of finite bandwidth " obtained by cutting of the spectra of casimir operators . in the particular case of and the laplacian we obtain the fuzzy sphere ,see e.g. , which has been much discussed in the context of noncommutative geometry , . in the literature ,sampling theory on generic riemannian manifolds has been little studied so far .this is because sampling theory originated and finds most of its applications in communication engineering .interesting results that are of relevance here were obtained , however , by pesenson , see e.g. , who considered , in particular , the case of homogeneous manifolds . in ,the starting point is also a cutoff on the laplace operator s spectrum .reconstruction , however , works differently , namely by approaching the solution iteratively in a sobolev space setting .a. kempf , 2001 , phys.rev.*d63 * 083514 , astro - ph/0009209 , a. kempf , j. c. niemeyer , 2001 , phys.rev.*d64 * 103501 , astro - ph/0103225 , r. easther , b. r. greene , w. h. kinney , g. shiu , 2001 , phys.rev.*d64 * 103502 , hep - th/0104102 , r. easther , b.r .greene , w.h .kinney , g. shiu , 2002 , phys.rev.*d66 * 023518 , hep - th/0204129 j. polchinski , hep - th/0209105 , a. connes , _ noncommutative geometry , academic press ( 1994 ) , s. majid , _ foundations of quantum group theory , cambridge university press ( 1996 ) f. markopoulou , gr - qc/0203036 , d. oriti , h. pfeiffer , gr - qc/0207041 , l. smolin , hep - th/0209079 a. kempf , in proceedings 18th iap colloquium on the nature of dark energy , paris , france , 1 - 5 jul 2002 .e - print archive : gr - qc/0210077 a. kempf , 2000 , phys.rev.lett . *85 * , 2873 , hep - th/9905114 , a. kempf , 1997 , europhys.lett .* 40 * 257 , hep - th/9706213 c. e. shannon , w. weaver , 1963 , _ the mathematical theory of communication , univ . of illinois press .benedetto , p.j.s.g .ferreira , 2001 , _ modern sampling theory , birkaeuser e. witten , 1996 , phys . today * 49 * 24 , a. kempf , 1994 , j. math . phys . * 35 * , 4483 a. kempf , j.math.phys . * 35 * 4483 ( 1994 ) , hep - th/9311147 , a. kempf , g. mangano , r. b. mann , phys.rev.*d52 * 1108 ( 1995 ) , hep - th/9412167 h.j .landau , proc . of the ieee ,* 10 * , 1701 ( 1967 ) a.d .sakharov , reprinted in gen .grav . * 32 * , 365 ( 2000 ) j.d .bekenstein , acta phys.polon . * b32 , 3555 ( 2001 ) , quant - ph/0110005 , r. bousso , phys.rev.lett . 
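the compact-manifold statement above can be made equally concrete on the two-sphere. the sketch below (python with numpy and scipy; purely illustrative, and not tied to the noncommutative fuzzy-sphere construction itself) cuts the spectrum of the laplacian on the sphere at angular momentum l_max, so the space of bandlimited functions has dimension (l_max + 1)^2, and reconstructs such a function everywhere from its values at a slightly larger number of scattered points. note that scipy's sph_harm takes the azimuthal angle before the polar angle.

import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(2)

l_max = 4
lm = [(l, m) for l in range(l_max + 1) for m in range(-l, l + 1)]  # (l_max+1)^2 modes

def basis(theta, phi):
    # spherical harmonics Y_lm at polar angles theta and azimuthal angles phi;
    # scipy's sph_harm expects the order (m, l, azimuthal, polar)
    return np.column_stack([sph_harm(m, l, phi, theta) for (l, m) in lm])

coeff_true = rng.normal(size=len(lm)) + 1j * rng.normal(size=len(lm))

def field(theta, phi):
    return basis(theta, phi) @ coeff_true

n_pts = len(lm) + 10                                   # a few more than the dimension
theta_s = np.arccos(rng.uniform(-1.0, 1.0, n_pts))     # scattered polar angles
phi_s = rng.uniform(0.0, 2.0 * np.pi, n_pts)           # scattered azimuthal angles

coeff_rec = np.linalg.lstsq(basis(theta_s, phi_s),
                            field(theta_s, phi_s), rcond=None)[0]

theta_t = np.arccos(rng.uniform(-1.0, 1.0, 400))
phi_t = rng.uniform(0.0, 2.0 * np.pi, 400)
err = np.max(np.abs(basis(theta_t, phi_t) @ coeff_rec - field(theta_t, phi_t)))
print("max reconstruction error on the sphere:", err)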
* 90 , 121302 ( 2003 ) , hep - th/0210295 , p.s.custodio , j.e.horvath , gr - qc/0305022 j. madore , class .quantum grav .* 9 , 69 ( 1992 ) i. pesenson , trans ., * 352 * , 4257 ( 2000 ) , i. pesenson , j. fourier analysis and applic ., * 7 * , 93 ( 2001 ) b. davies , y. safarov , _ spectral theory and geometry , cambridge univ . press ( 1999 )gilkey , j.v .leahy , j. park , _ spinors , spectral geometry , and riemannian submersions , electronic library of mathematics , http://www.emis.de/elibems.html ( 1998 ) g. esposito , _ dirac operators and spectral geometry , cambridge univ . press ( 1998 ) m. puta , t. m. rassias , m. craioveanu , _ old and new aspects in spectral geometry , kluwer ( 2001 ) _ _ _ _ * * * _ _ _ _
the often-asked question whether space-time is discrete or continuous may not be the right question to ask: mathematically, it is possible that space-time possesses the differentiability properties of manifolds as well as the ultraviolet finiteness properties of lattices. namely, physical fields in space-time could possess a finite density of degrees of freedom in the following sense: if a field's amplitudes are given on a sufficiently dense set of discrete points, then the field's amplitudes at all other points of the manifold are fully determined and calculable. which lattice of sampling points is chosen should not matter, as long as the lattice spacings are tight enough, for example not exceeding the planck distance. this type of mathematical structure is known within information theory as sampling theory, and it plays a central role in all of digital signal processing.
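as a minimal one-dimensional illustration of this idea, the sketch below reconstructs a bandlimited signal everywhere from its values on a discrete set of sample points, using the classical shannon (sinc) interpolation formula. the bandlimit, sample spacing and test signal are arbitrary choices made for the example and are not taken from the text; the curved-manifold generalization discussed above is not attempted here.

```python
import numpy as np

# illustrative (assumed) parameters: bandlimit and the corresponding nyquist spacing
omega_max = 10.0                       # angular frequency cutoff of the "bandlimited field"
dt = np.pi / omega_max                 # nyquist sample spacing for this bandlimit
n = np.arange(-40, 41)                 # finite set of sample points (truncation of the ideal infinite sum)
t_samples = n * dt

def field(t):
    # an arbitrary bandlimited test signal: two modes below the cutoff
    return np.sin(3.0 * t) + 0.5 * np.cos(7.5 * t + 0.3)

samples = field(t_samples)

def reconstruct(t, t_s, f_s, spacing):
    # shannon interpolation: f(t) = sum_n f(t_n) sinc((t - t_n)/spacing)
    # np.sinc is the normalized sinc, sin(pi x)/(pi x)
    return np.sum(f_s * np.sinc((t[:, None] - t_s[None, :]) / spacing), axis=1)

t_fine = np.linspace(-8.0, 8.0, 1001)
f_rec = reconstruct(t_fine, t_samples, samples, dt)
err = np.max(np.abs(f_rec - field(t_fine)))
print(f"max reconstruction error inside the sampled window: {err:.2e}")
```

choosing a coarser spacing than the nyquist spacing, or a test signal with content above the cutoff, makes the reconstruction fail; this is the elementary analogue of exceeding the assumed finite density of degrees of freedom.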
extracting quantitative information about the position and motion of features in video images is often key to understanding fundamental problems in science .for example , the tracking of colloidal hard spheres in three - dimensional confocal images has provided important insights into phenomena such as melting , crystallization , and the glass transition .biophysical experiments such as the investigation of cell mechanics by microrheology or the measurement of single biomolecule mechanics using optical or magnetic tweezers rely on the precise positional measurement of single colloidal particles .moreover , the tracking of single proteins in live cells provided a powerful tool for understanding biological processes , and eventually lead to the development of super - resolution microscopy techniques such as palm and storm .crucial for these studies is a method to extract trajectories of features from video images , which has been described extensively in colloidal science as well as in single molecule tracking .most single particle tracking algorithms have been designed for spherical features , as it is the most common type of signal .recent developments in colloidal synthesis provide means to assemble spheres in so - called colloidal molecules .single particle tracking of these clusters of spheres will provide insights into the role of anisotropy in for instance crystallization and diffusion .as the basic building blocks of these studies contain closely spaced particles , a robust automated method is required to perform accurate particle tracking on partially overlapping features .automated methods for single - particle tracking follow roughly the following pattern : an image with features of interest is first preprocessed , then single features are identified in a process called `` segmentation '' , these feature coordinates are refined to sub - pixel accuracy , and finally the features are linked to the features in the previous image .iteration of this algorithm over a sequence of images results in particle trajectories that can be used for further analysis .although this method has proven itself as a robust and accurate method , issues arise when features become so closely spaced that their signals overlap .this essentially limits studies to dilute systems , repelling particles , or model systems with very specific characteristics such as index - matched and core - shell fluorescent particles . in particular ,overlapping feature signals give rise to two complications : firstly , the segmentation step regularly recognizes two closely spaced features as one feature due to the overlap of signals . in order to identify the trajectories of closely spaced features completely ,tedious frame - by - frame manual corrections are necessary , prohibiting the analysis of large data sets . in super - resolution microscopy methods ,reported approaches to solve this issue are repeated subtraction of point - spread functions of detected features , or advanced statistical models classifying merge and split events .notably , these tracking methods do not use all the available information : as the feature locations are known in the previous frame , the segmentation of the image may be enhanced using the projected feature locations . 
herewe will present a fast and simple method for image segmentation that makes use of this history of the feature locations .we will test this method on artificial images and experimental data of colloidal dimers .a second issue that arises when two feature signals overlap is that their refined coordinates will underestimate the separation distance .especially the commonly employed center - of - mass centroiding suffers from this systematic `` overlap bias '' , leading to on apparent attraction between colloidal particles . for fluorescence images ,this issue can be addressed by least - squares fitting to a sum of gaussians , which has been reported as a way to measure the distance between overlapping diffraction limited features . here, we will apply this method to images with features that are not diffraction limited .we conduct systematic tests on the accuracy ( bias ) and precision ( random error ) of the obtained feature positions . to demonstrate the new automated segmentation and refinement methods, we will apply it to three - dimensional confocal images of a diffusing colloidal cluster consisting of two spheres and use the obtained trajectories to extract its diffusion tensor .as our algorithm for single particle tracking is based on the widely employed algorithm by crocker and grier , we will first introduce their algorithm and call it `` cg - algorithm '' . throughout this work a python implementation of this algorithm , trackpy , was used for comparison .the cg - algorithm consists of four subsequent steps : preprocessing , feature segmentation , refinement , and linking .see figure [ fig : flowchart](a ) for a schematic overview .the preprocessing consists of noise reduction by convolution with a sized gaussian kernel and background reduction by subtracting a rolling average from the image with kernel size .the length scale is chosen just larger than the feature radius .the subsequent segmentation step finds pixels that are above a given relative intensity threshold and are local maxima within a certain radius .the length scale is the minimum allowed separation between particles .after the refinement step ( see next section ) the linking connects the features in frame with features in frame by minimizing the total displacement between the frames . between two frames ,particles are allowed to move up to a maximum distance . in this process , each frame is treated individually : only during the final step ( linking ) , features are connected into trajectories .we rearranged this process such that the information about the particle locations in the previous frame is used already in the segmentation .this allows us to project the expected feature locations in consecutive frames and therefore increase the success rate of segmentation .see figure [ fig : flowchart](b ) for a schematic overview .we describe the new segmentation algorithm here using a minimal example of two closely spaced features in two subsequent frames , which can be generalized to an arbitrary number of features in any number of frames .see figures [ fig : relocate](a)-(c ) .we will assume that feature finding and refinement was performed successfully on the previous frame ( figure [ fig : relocate](d ) ) .the current frame is first subjected to gray dilation and thresholding step , just as in the cg - algorithm .because features are closely spaced in the frame 2 , this leads to segmentation into only one single feature ( figure [ fig : relocate](e ) ) . 
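to make the preprocessing and segmentation steps described above concrete, the following sketch implements them with scipy.ndimage. the kernel sizes, threshold value and function names are illustrative assumptions, not the exact parameters of the cg algorithm or of trackpy.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, noise_size=1.0, background_size=11):
    # noise reduction: convolution with a small gaussian kernel
    smoothed = ndimage.gaussian_filter(image.astype(float), noise_size)
    # background reduction: subtract a rolling (boxcar) average of size ~ feature diameter
    background = ndimage.uniform_filter(image.astype(float), background_size)
    return np.clip(smoothed - background, 0, None)

def segment(image, separation=9, threshold=0.3):
    # grey dilation replaces every pixel by the maximum in a neighbourhood of ~ separation;
    # a pixel is a candidate feature if it equals that maximum and exceeds a relative threshold
    dilated = ndimage.grey_dilation(image, size=(separation, separation))
    maxima = (image == dilated) & (image > threshold * image.max())
    return np.column_stack(np.nonzero(maxima))   # integer (row, col) candidate positions
```

applied to a frame with two closely spaced features, `segment` would typically return only one candidate, which is exactly the failure mode that the combined segmentation and linking described next is designed to repair.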
.therefore these features could belong to a single subnet via a missing feature .( g ) subsequently , a region is defined up to distance from the features in frame 1 ( dashed yellow line ) , that is used to ( h ) mask frame 2 . in this step ,also all features that were found already are masked up to , which enables the detection of the second feature that is less than distance from the features in frame 1 and farther than from other features in frame 2 .the newly found feature is then added to the subnet so that the linking can be completed ( i ) . ]then a part of the linking step is executed : features are divided into so - called subnetworks .this is a necessary step in the cg algorithm to break the sized combinatorial problem of linking two sets of features into smaller parts .first , linking candidates are identified using a kd - tree . linking candidates for features in frame 1 are features that are displaced up to a distance in frame 2 and vice versa .then subnetworks are created such that all features that share linking candidates are in the same subnetwork . for a sufficiently large distance ,all features in figure [ fig : relocate](f ) belong to the same subnet : the feature in frame 2 is a linking candidate for both features in frame 1 . from the subnetworks , the number and estimated location of missing features is obtained `` for free '' : if a subnetwork contains fewer particles in frame 2 than in frame 1 , there must be missing features in its vicinity . to account for the possibility that a missing feature could connect two subnetworks , we combine subnetworks if they are less than distance apart in frame 1 whenever missing features are being located . in order to estimate the location of the missing features , a region up to distance around the features in the previous frame is masked in the current frame ( dashed yellow line in figures [ fig : relocate](g)-(h ) ) .subsequently , all already found features are masked up to a radius of ( figure [ fig : relocate](f ) ) .this enables us to find local maxima that are further than distance from all other features in the current frame and closer than distance from the features in the previous frame .from the masked subimage , local maxima are obtained again through gray dilation and thresholding .after this , feature selection filters can be inserted in order to select appropriate features , for example with a minimum amount of integrated intensity .then the new feature is added to the subnetworks and linking is completed by minimizing the total feature displacement ( figure [ fig : relocate](i ) ) . by performing the linking during the segmentation process, additional information is taken into account : not only the present image is used to identify the features , but also the coordinates from the previous frame .therefore , we expect a higher number of correctly identified feature positions for the combined linking and segmentation method . because all the computationally intensive tasks were already present in the original algorithm , the execution time of our new algorithmwas observed to be similar .subpixel accuracy and precision is a key feature of single particle tracking . although the size of a single pixel is diffraction limited to approximately , localization precisions down to have been reported .these subpixel feature locations are obtained by starting from an initial guess supplied by the segmentation step , which is then improved in the so - called `` refinement '' step . 
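returning to the subnetwork construction described above, the sketch below shows how linking candidates within a search range, and the subnetworks they generate, might be obtained with a kd-tree. the function names and the use of a connected-components pass over a joint graph are our own illustrative choices and are not taken verbatim from the cg algorithm or trackpy.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def subnetworks(coords_prev, coords_curr, search_range):
    """group features of two consecutive frames into subnetworks of mutual linking candidates."""
    tree_prev, tree_curr = cKDTree(coords_prev), cKDTree(coords_curr)
    # pairs[i] lists the current-frame features within search_range of previous-frame feature i
    pairs = tree_prev.query_ball_tree(tree_curr, r=search_range)
    n_prev, n_curr = len(coords_prev), len(coords_curr)
    rows, cols = [], []
    for i, candidates in enumerate(pairs):
        for j in candidates:
            rows.append(i)
            cols.append(n_prev + j)            # offset current-frame nodes in a joint graph
    adjacency = coo_matrix((np.ones(len(rows)), (rows, cols)),
                           shape=(n_prev + n_curr, n_prev + n_curr))
    # features sharing linking candidates end up in the same connected component (= subnetwork)
    n_comp, labels = connected_components(adjacency, directed=False)
    return labels[:n_prev], labels[n_prev:]
```

a subnetwork that contains fewer current-frame labels than previous-frame labels signals missing features, whose approximate locations can then be searched for in the masked sub-image as described above.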
here, we will describe a general - purpose framework for refinement of overlapping features using non - linear least squares fitting to summed radial model functions .we will compare this method to the center - of - mass centroiding that is present in the cg algorithm . for radially symmetric features ,the feature position is given by its center - of - mass .due to its simplicity and computational efficiency , this method is a preferred choice for many tracking applications . in the center - of - mass refinement ,the center coordinate of the feature is obtained iteratively from the image , such that : non - linear least squares fitting to a model function is conceptually different , since it goes beyond assuming only feature symmetry and requires knowledge on the feature shape .if image noise is uncorrelated and normal distributed , this method gives the maximum likelihood estimate of the true centroid .although this assumption is not strictly valid , the precision of this method is generally higher than the center - of - mass method when the image is subject to noise . by simultaneously fitting a sum of multiple model functions, this method can be extended to tracking multiple overlapping features .we employ this approach here and formulate the feature model function in the following way : here , is the image coordinate , the feature center , its intensity , its radius , and a model function of a single feature , which is a function of and a list of parameters .the reduced radial coordinate is defined for any number of dimensions and allows for anisotropic pixel sizes through the vector nature of .the feature model function is defined only up to distance from the feature center .it is in principle possible to use any function for and apply it to images with different signal intensities and physical pixel sizes through the separate parameters and . in this article , we limit ourselves to the gaussian function }$ ] so that we do not have extra parameters .we keep constant and allow and to be optimized .the model image is constructed by the summation of the individual features , which are each only defined within a region with radius .this additivity is a good assumption for fluorescence microscopy techniques .we add a fixed background signal , which we keep constant within each cluster of overlapping features , but we allow it to vary between clusters to account for spatially different background values . for an image or video consisting of features , the following `` objective function '' is minimized : the feature model function is defined by eq . [ eq : generic_model ] .if all features are separated by more than , this minimization can be separated into single feature problems . 
however , when features have overlapping regions , their objective functions can not be separated and have to be minimized simultaneously .we separate the full image objective function ( eq .[ eq : global_obj ] ) into groups ( `` clusters '' ) using the kd - tree algorithm .each of the resulting cluster objective function is minimized using the sequential linear least squares programming ( slsqp ) algorithm interfaced through the open - source python package scipy .this slsqp algorithm allows for additional constraints and bounds on the parameters .we use bounds to suppress diverging solutions and constraints to for example fix the distance between two features to a known value .the optimizer is supplied with an analytic jacobian of eq .[ eq : global_obj ] to increase performance .the here described framework of feature refinement in principle allows refinement of any feature that can be described by a radial function .although less computational efficient than the conventional refinement by center - of - mass , it can take into account feature overlap and additionally allows for constraints on parameters .the above described methods for single particle tracking were tested quantitatively on both artificial and experimental data .artificial images were generated by evaluating the following analytical functions for disc- and ring - shaped features on an integer grid : }&r \ge d\\ 1&\mathrm{otherwise}\\ \end{cases } , \label{eq : disc}\\ f_{ring}(r , t ) & = \exp{\left[-\left(\frac{r - t - 1}{t}\right)^2\right]}. \label{eq : ring}\end{aligned}\ ] ] here , the reduced radial coordinate is given by eq .[ eq : r ] , is the solid disc radius in units of , and is the ring thickness in units of .the true feature location was generated at a random subpixel location . unless stated otherwise , we chose , , , and . see figure [ fig : model_features ] for two example model features generated with these parameters .images were discretized to integer values and a poisson distributed , signal - independent background noise with a mean intensity of is added to each image .the signal - to - noise ratio is defined as .each refinement test was performed on 100 images having two overlapping features with a given center - to - center distance and random orientations .in order to ensure that the choice of initial coordinates did not affect the refined coordinate , we generated the initial coordinates randomly within from the actual coordinate .experimental measurements on colloidal particles were performed with an inverted nikon tie microscope equipped with a nikon a1r resonant confocal scanhead scanning lines at .for the two - dimensional diffusion measurements , we used a 20x objective ( na = 0.75 ) , resulting in a physical pixel size of .for the three - dimensional measurements , a 100x ( na=1.45 ) oil immersion objective was used , resulting in an xy pixel size of . a calibrated mcl nanodrive stage enabled fast z stack acquisition with a z step size of . as the objective immersion liquid ( )is closely matched with the sample solvent ( ) , this step size equals the physical pixel size in z direction within an error of .we acquired 5.13 three - dimensional frames per second with a size of 512x64x35 pixels ( x - y - z ) . 
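a minimal sketch of the refinement of one cluster of overlapping gaussian features by slsqp minimization, in the spirit of eqs. [eq:generic_model] and [eq:global_obj], is given below. it fits over the whole sub-image rather than masking each feature to a radius, omits the analytic jacobian, and uses illustrative parameter names; constraints such as a fixed inter-particle distance could be added through the constraints argument of scipy.optimize.minimize.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_cluster_model(params, yx, n_features, sigma):
    # params = [y0, x0, A0, y1, x1, A1, ..., background]
    model = np.full(yx.shape[1], params[-1])
    for k in range(n_features):
        cy, cx, amp = params[3 * k: 3 * k + 3]
        r2 = (yx[0] - cy) ** 2 + (yx[1] - cx) ** 2
        model += amp * np.exp(-r2 / sigma ** 2)
    return model

def refine_cluster(image, initial_centers, sigma=3.0, mask_radius=8.0):
    """least-squares refinement of a cluster of (possibly overlapping) gaussian features."""
    ys, xs = np.indices(image.shape)
    yx = np.vstack([ys.ravel(), xs.ravel()]).astype(float)
    data = image.ravel().astype(float)
    p0, bounds = [], []
    for cy, cx in initial_centers:
        p0 += [cy, cx, float(image[int(round(cy)), int(round(cx))])]
        # bound the displacement of each centre to the mask radius to suppress diverging solutions
        bounds += [(cy - mask_radius, cy + mask_radius),
                   (cx - mask_radius, cx + mask_radius),
                   (0.0, None)]
    p0.append(float(data.min()))        # one constant background per cluster
    bounds.append((0.0, None))

    def objective(params):
        residual = gaussian_cluster_model(params, yx, len(initial_centers), sigma) - data
        return np.sum(residual ** 2)

    result = minimize(objective, np.array(p0), method="SLSQP", bounds=bounds)
    centers = result.x[:-1].reshape(-1, 3)[:, :2]
    return centers, result.x[-1]        # refined centres, fitted background
```

because the objective sums over all features in the cluster, overlapping intensity is shared between the model features instead of being attributed twice, which is what removes the overlap bias discussed below.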
for two - dimensional diffusion measurements we employed samples consisting of partially clustered tpm ( ) colloids with a diameter of containing a fitc ( fluorescein ) fluorescent marker , as described in .particles were confined to the microscope coverslip through sedimentation .the samples for three - dimensional measurements consisted of core - shell ritc ( rhodamine b ) labeled pmma ( ) colloidal clusters that were synthesized via an emulsification - evaporation method according to .the average distance between the two constituent spheres of radius in a cluster is , determined by scanning electron microscopy using an fei nanosem at .the clusters were both index and density matched using a mixture of cyclohexyl bromide and cis - decalin in a weight ratio of 72:28 and imaged in a rectangular capillary , similar to experiments described in .the python code on which this work is based is available on - line and will be integrated into a future version of trackpy , that is available through conda as well as through the python package index .all tests described in this work are implemented as `` unittests '' that ensure the correct functioning of the code on each update .as described in the method section , the integrated segmentation and linking step extends the frame - by - frame segmentation used in the cg algorithm in such a way that it makes use of the history of feature locations . in order to test the effect of our extension, we compared the segmentation in the cg algorithm with our integrated segmentation and linking on experimental video images .the video images contain a single diffusing colloidal dimer , which consists of two permanently connected spheres .the identified trajectories for 800 frames are displayed in figure [ fig : dimer ] .clearly , by taking into account the history of the feature positions , the dimer positions can be tracked significantly better : for the new algorithm two features were detected in all of the 800 frames , while for the cg algorithm , only one third of the frames had 2 features , resulting in short disconnected trajectories that appear to hop between two feature locations .the here described extension of segmentation increases the number of correctly segmented features significantly .it has to be noted though that the segmentation of the first frame is not enhanced by our method because of the lack of information on the previous feature positions .generally , there is a start - up period of a few frames in which the number of correctly segmented features increases .these potentially incorrectly tracked frames can be ignored for most tracking applications . for cases where the first frames are relevant , the algorithm could be ran backwards from the first correctly segmented frame .after the segmentation step , the subpixel position is obtained in the refinement step . in this sectionwe will analyze the effect of signal overlap on the accuracy and precision in the refined feature coordinates using both center - of - mass and the here described least - squares fitting to sums of model functions .we define the accuracy or bias as the mean difference between the measured and the true value .the precision is the random deviation around the measured average , which we calculate with the root of squared deviations from the measured average .first , we took two gaussian model features ( eq . [ eq : disc ] with ) and varied their spacing. 
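before turning to the results, the sketch below indicates how such a test could be set up: artificial dimer images with poisson noise are generated and the bias and precision of a set of refined positions are computed. the disc profile is an assumed approximation of eq. [eq:disc], and the signal and background levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def disc_feature(shape, center, radius=4.0, edge=0.5):
    # flat disc of radius `radius` with a gaussian edge of relative width `edge`
    # (an assumed stand-in for the disc model of eq. [eq:disc])
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1]) / radius
    return np.where(r <= 1.0, 1.0, np.exp(-((r - 1.0) / edge) ** 2))

def noisy_dimer(shape, c1, c2, signal=200.0, background=10.0):
    # two overlapping features, discretised to integers, with poisson-distributed noise
    clean = signal * (disc_feature(shape, c1) + disc_feature(shape, c2)) + background
    return rng.poisson(clean).astype(float)

def bias_and_precision(measured, true):
    # accuracy (bias): mean deviation from the true value; precision: rms spread about the mean
    deviations = np.asarray(measured) - np.asarray(true)
    return deviations.mean(axis=0), deviations.std(axis=0)
```

repeating `noisy_dimer` over many random orientations and separations, refining each image, and passing the refined and true centres to `bias_and_precision` reproduces the kind of test reported in the following figures.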
see figure [ fig : refine_distance ] .the deviations of the obtained positions are measured parallel and perpendicular to the line connecting the two actual feature positions .we found for both refinement methods that there is no bias in the perpendicular coordinate .for the parallel coordinate , however , we found a clear difference between the two refinement methods : in center - of - mass centroiding , the parallel coordinate was negatively biased because of feature overlap , meaning that the distance between the two overlapping features was systematically underestimated .for the least - squares fitting to sums of model functions , however , the bias stayed within . and signal - to - noise ratio .the bias for the center - of - mass ( com ) refinement is shown for mask radius from 6 to 10 , both with rolling average background subtraction ( denoted with dots ) and without ( denoted with crosses ) .the bias for the least - squares fitting to a sum of gaussians method is denoted with tilted crosses .the dashed black line denotes the bias at which features are detected precisely in between the two actual feature positions . ] this negative bias for center - of - mass centroiding has been described before and is a logical consequence of the method : if two features overlap , each of the features obtains extra intensity on the inside of the dimer .this bias increases in magnitude with decreasing particle separation , until both features are detected precisely in between the two actual positions .the bias increases also with increasing mask radius , as shown in figure [ fig : refine_distance ] . apart from this negative bias , we observed a longer ranged positive bias .this effect has its origin in the preprocessing . for center - of - mass centroiding ,it is vital that the constant image background is subtracted .this is conventionally achieved by subtracting a rolling average of the image with box size of typically .although this method has proven to be robust for background subtraction , it also introduces a skew in the feature signals when features are closer than ( see figure s1 ) . here , is the typical feature diameter . from thiswe conclude that it is important not to use a rolling average background subtraction in order to accurately track features that are spaced closer than .if the background subtraction was omitted , the positive bias was indeed not observed , as can be seen in figure [ fig : refine_distance ] . in order to account for the background signal in the least - squares fitting algorithm, we introduced a background variable in the objective function ( eq . [ eq : global_obj ] ) instead .secondly , we analyzed the bias and precision of overlapping gaussian features , disc shaped features , and ring shaped features while keeping the particle separation constant at .see figure [ fig : dimer_study ] .in all cases , we observed no bias in the perpendicular coordinate , as is expected from the symmetry of the dimer . also , the precision for the perpendicular direction was in close agreement with the precision of parallel direction . in figure[ fig : dimer_study](a ) , it can be seen that the signal - to - noise ( s / n ) ratio did not influence the bias for gaussian shaped model features , while the precision improved with increasing s / n ratio . at ,the least - squares optimizer was always able to find a minimum . 
at ,the optimizer sometimes diverged and yielded random results .this failure of least - squares fitting was reported already for by cheezum _as the slsqp minimization allows for bounds on the feature parameters , we were able to suppress the diverging solutions by limiting the displacements of center coordinates to the mask size .this enhancement enables us to also use the least - squares method for . in figure[ fig : dimer_study](b ) , it is visible that the bias in the parallel coordinate decreased with increasing feature size .although the bias was so small that we can still speak of `` subpixel accuracy '' , the bias of approximately for typical values of might be problematic for super resolution techniques in which sum of gaussians are used as model functions for overlapping point spread functions . as the magnitude of the bias increased with decreasing feature size and not with increasing s / n , we conclude that the bias is caused by the discretization of the feature shape , which depends on the used discretization model . as colloidal molecules are often larger than the diffraction limit , their feature shape is typically not gaussian . herewe will assess the effect of non - gaussian shapes on the tracking bias and precision using a disc - shaped model feature as described by eq .[ eq : disc ] .see figures [ fig : dimer_study](c)-(e ) .the observed precision in the refined position of the overlapping discs was surprisingly high , and the precision even slightly increases up to a disc size of ( figure [ fig : dimer_study](c ) ) .this was probably caused by the larger integrated signal intensity of the disc shaped feature , which increased the s / n ratio integrated over the feature . for disc sizes greater than , the precision degraded due to the absence of smooth feature edges .the bias was lowest for small feature sizes ( figure [ fig : dimer_study](e ) ) , since the disc - sized feature is then almost equal to the gaussian shaped feature .still , the magnitude of the bias did not exceed for all tested disc - shaped features .finally , in figure [ fig : dimer_study](f ) , we tested the least - squares fitting of gaussians on ring - shaped model features ( eq . [ eq : ring ] ) , such as may be obtained for particles with fluorescent markers on their surface only .although a gaussian is clearly a poor model function for these ring - shaped model features , it still performed remarkably well with absolute bias and precision both below for any ring thickness above , probably because the tails of the features are still gaussian - shaped . for thin rings with a thickness below ,the least - squares optimization diverges . for these feature shapes ,a more appropriate model function should be used . to summarize, we observed that least - squares fitting to sums of gaussians is able to accurately refine the location of overlapping gaussian - shaped features .the negative bias of multiple pixels present in center - of - mass centroiding is reduced to less than if the feature radius is above .this makes fitting to sums of gaussian an appropriate method for refining overlapping features with typical radii around and s / n ratios above 2 .although the gaussian is not a perfect model for disc - shaped or ring - shaped features , the bias and precision were very similar due to the limited pixel size for typical images of overlapping colloidal particles , given that the feature edges are smooth . 
for overlapping features that are not well modeled by a gaussian and that have a radius larger than , different model functions should be used . as described by jenkins _ , it is possible to experimentally obtain an average feature shape and successfully use this for feature refinement of single features . for an extension to multiple overlapping features, we found that this technique is computationally too demanding , as there are no efficient optimizers for functions with discretized parameters . in order to use this technique for overlapping features ,the average feature shape should be described with a continuous function , which can be directly used in our framework for least - squares minimization .although an accuracy of is sufficient for many applications , a further improvement in accuracy could be reached by maximizing the log - likelihood corresponding to eq .[ eq : global_obj ] instead of using the direct least - squares minimization .for single features , using a maximum likelihood estimator has been proven to give a more precise estimate of the true feature positions .if additional information about the tracked features is available , constraints can be applied to increase tracking accuracy . in our framework for least - squares optimization of summed radial model functions , any combinations of parameters in the image model function ( eq . [ eq : generic_model ] ) can be constrained by equations of the following form : here , is a function and is an array consisting of all parameters of features that are in a cluster of size .we demonstrate the use of constraints here using colloidal dimers with known distance between the two constituent spheres . using our algorithmwe automatically tracked 1006 out of 1170 recorded frames .a constraint was chosen such that the distance between the constituent spheres equals the average distance measured on sem images ( ) .the resulting tracked three - dimensional images can be seen in supporting video s1 .as the shape of a colloidal cluster is anisotropic , the short - term diffusion of such a particle is also anisotropic : for example , a dimer has a lower hydrodynamic friction when moving along its -axis , compared to when moving along its -axis . in general , the dynamics of any brownian object is described by a symmetric second - rank tensor of diffusion coefficients , consisting of 21 independent elements .we chose the point of highest symmetry for the origin of the cluster based coordinate system and aligned the z - axis with the long axis of the dimer , so that all off - diagonal terms in the diffusion tensor are zero .see figure [ fig : clusters](a ) .the computed diffusion tensors were averaged over lagtimes up to .the resulting diffusion tensor reflects the symmetry of the dimer and can be seen in supporting table s1 ..anisotropic diffusion coefficients of the colloidal dimer .the coordinate system is defined in figure [ fig : clusters ] .the error denotes the confidence interval estimated using a bootstrap algorithm . [ cols="<,<,<,<",options="header " , ]
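as an illustration of this last step, the sketch below estimates a diffusion tensor from a trajectory whose generalized coordinates are assumed to be already expressed in the cluster-based frame of figure [fig:clusters](a). averaging over lag times while neglecting the rotation of that frame during a lag is a simplification valid only for short lags, and the function name is our own.

```python
import numpy as np

def diffusion_tensor(body_frame_positions, dt, max_lag=10):
    """estimate the symmetric diffusion tensor D_ij from a trajectory.

    body_frame_positions: (n_frames, n_dof) array of generalized coordinates
    (translations and, if available, rotations) in the particle-based frame;
    D is obtained from <dx_i dx_j> = 2 D_ij * lag * dt, averaged over lag times.
    """
    estimates = []
    for lag in range(1, max_lag + 1):
        dx = body_frame_positions[lag:] - body_frame_positions[:-lag]
        cov = dx.T @ dx / len(dx)              # <dx_i dx_j> at this lag time
        estimates.append(cov / (2.0 * lag * dt))
    return np.mean(estimates, axis=0)
```

for a dimer aligned with the z-axis of the body frame, the off-diagonal elements of the returned tensor should vanish within error, which provides a consistency check on the tracking.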
quantitative tracking of features from video images is a basic technique employed in many areas of science . here , we present a method for the tracking of features that partially overlap , in order to be able to track so - called colloidal molecules . our approach implements two improvements into existing particle tracking algorithms . firstly , we use the history of previously identified feature locations to successfully find their positions in consecutive frames . secondly , we present a framework for non - linear least - squares fitting to summed radial model functions and analyze the accuracy ( bias ) and precision ( random error ) of the method on artificial data . we find that our tracking algorithm correctly identifies overlapping features with an accuracy below 0.2 px and a precision of 0.1 to 0.01 px for a typical image of a colloidal cluster . finally , we use our method to extract the three - dimensional diffusion tensor from the brownian motion of colloidal dimers .
bitcoin is an emergent phenomenon realized through the subtle interaction of multiple data structures and incentive mechanisms . in isolationthe various components of the bitcoin technology ecosystem are well known and in some cases have existed for years .the novelty of bitcoin was to combine these elements in a previously unimagined way .the success of bitcoin as a cryptocurrency has generated interest in the underlying design principles of the cryptocurrency .this in turn has prompted some to critically reassess traditional methods used to process information .the purpose being to determine the extent to which architectural aspects of bitcoin might be leveraged to reduce or eliminate current inefficiencies .one of the architectural components of bitcoin is a modified linked list known as a _ blockchain _ , demonstrated in figure [ fig : ecund ] . at a fundamental levela blockchain can be thought of as nothing more than a linear collection of data elements , i.e. nodes ( ) .each node , , is pointed to by the subsequent node , , through a reference to its hash .therefore maintains a hash of .one of the characteristics of this data representation format is that the integrity of the complete list can be easily verified with relatively low storage requirements , in fact by maintaining only the single hash at the head of the list .this construct , introduced in 1990 by haber & stornetta , is integral to the bitcoin specification .a blockchain can be used for verifiably representing and persistently storing information related to supply chains in manufacturing .however as noted earlier the innovation of bitcoin is not due to any one element but rather to the interplay of many technical and non - technical components . in this workwe examine a subset of these components and detail a methodology for utilizing them towards the creation of an efficient supply chain management system .the process by which data pertaining to particular parties is safely shared in environments of low trust , such as exists between organizations cooperating in a multi - node supply chain network can be problematic .prior efforts to apply the broadly defined concept of `` blockchain '' towards the creation of efficiency gains in supply chain management systems have emphasized the use of distributed computing environments . the scripting language of bitcoinis limited in the sense that it is not turing complete .various altcoin implementations have endeavored to provide that functionality .the purpose of which being the fashioning of a decentralized virtual machine .such a system would constitute a distributed app - engine capable of executing programs in a peer - to - peer network . in order to operate effectivelythe system would seek to prevent the execution of programs that could be detrimental to the network .the fundamental issue that operations of this nature are grappling with is the halting problem .operators of such platforms must answer the question of how to ensure that a program eventually terminates and does not waste network resources .one workaround employed to bypass this impediment has been to rent computation cycles for a fee . 
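returning to the hash-linked list described above, a minimal sketch of such a structure, and of the verification of the entire list against the single hash kept at its head, is given below. the block layout and function names are illustrative only and do not follow the actual bitcoin block format.

```python
import hashlib
import json

def block_hash(block):
    # hash of a canonical (assumed json) serialisation of a block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    prev = block_hash(chain[-1]) if chain else None
    chain.append({"prev_hash": prev, "payload": payload})
    return chain

def verify(chain, head_hash):
    # integrity of the whole list is checked by recomputing hashes block by block
    # and comparing only the single hash retained at the head of the list
    for later, earlier in zip(chain[1:], chain[:-1]):
        if later["prev_hash"] != block_hash(earlier):
            return False
    return block_hash(chain[-1]) == head_hash

chain = []
for record in ["raw material lot 17", "assembly step a", "final inspection"]:
    append_block(chain, record)
head = block_hash(chain[-1])
print(verify(chain, head))            # True
chain[0]["payload"] = "tampered"
print(verify(chain, head))            # False
```

the example shows the property exploited below: any retroactive modification of an earlier record invalidates the head hash, so a single retained value certifies the whole recorded history.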
to date, there have been initial endeavors to apply such systems to the improvement of supply chain management information flows. the contributions of these efforts to the creation of a viable solution have thus far been found wanting. provenance, one such attempt, states explicitly in the first paragraph of their technical whitepaper that ``_the decentralized application (dapp) proposed in this paper is still in development_'', and goes on to furnish no precise technical specifications. skuchain likewise provides no technical information describing the implementation of their proposed system. distinct from these commercial operations, the use of distributed app-engines towards the improvement of supply chain data transmission has been examined in, which introduces an ontology-driven model for tracing the provenance of goods. the relative immaturity of decentralized application systems, such as those employed by the endeavors referenced above, presents a compelling reason to exclude them from studies such as this one, which deal with practical applications realizable with current technology. in this work we dispense with the notion of employing decentralized application systems. we examine exclusively the characteristics inherent to the bitcoin protocol at the current state of the art. this paper is intended to provide concrete recommendations for the practical amelioration of supply chain management systems that are realizable with modern engineering and technological capabilities. one of the most vulnerable attack vectors in bitcoin-protocol-inspired supply chain management systems is the mapping from the ``real world'' to the digital world. the application of rfid to facilitate this inter-linkage, as described by, takes the approach followed here in its identification of tangible inefficiencies and its proposal for an actionable corrective measure. the exclusive attribution of effects to the various properties native to the bitcoin cryptocurrency protocol is not straightforward, as the validity of the system emerges from a complex balance of forces. there exists a multi-faceted interchange of data representations and game-theoretic incentive mechanisms that gives rise to the transaction network which supports bitcoin.
accordingly, this section does not attempt to exhaustively enumerate the attributes of that protocol. what follows is a high-level characterization of a subset of the properties that are inherent to bitcoin. those properties are examined in light of their potential utility to information retention and transmission in the environment of a multi-party (inter-organizational) supply chain. the selected aspects of the protocol are evaluated in order to provide a mapping between their relevance to bitcoin and their utility to a supply chain management system. anonymity (and pseudo-anonymity) remains one of the fundamental tenets of the ``cypherpunk'' movement that gave rise to the concept of cryptocurrency. the related property of fungibility with respect to individual bitcoins is important to the long-term viability of this technology as a medium of exchange. association with nefarious activity, including coins that have been involved in deep-web drug deals such as those that regularly take place on alphabay market, would potentially subject those coins to censure by authorities, were they to be positively identified. such a result could diminish public confidence in the currency and precipitate the ultimate dissolution of the transaction network. what makes cryptocurrencies attractive to counterparties in these illicit transactions is the relative size of the anonymity set associated with their use. network participants are represented only by their address, which, if used in accordance with best security practices, can be difficult to associate with a real-world identity. this serves to emphasize the point that pseudo-anonymity of users is a characteristic regarded by some as indispensable to bitcoin. the practices employed by present-day drug dealers, detailed above, stand in stark contrast to the techniques employed by drug dealers of previous eras, such as the 1980s. the 2001 film blow presents a portrayal of this reality whilst simultaneously serving to illustrate an important aspect of supply chain management. the scene wherein the american cocaine importer, george jung, introduces his colombian connection to the head of his california distribution network initiates a process by which george is subsequently extricated from this commercial pipeline. this brief anecdote underscores the importance of anonymity amongst nodes in a supply chain. the inability to identify nodes to whom one is not directly connected is a critical feature of a supply chain management system, lest that information be used to ``cut out'' intermediary nodes. however, representing nodes in a supply chain in such a way that we are able to trace the goods they move through the network is important for preserving the provenance of the entities exchanged. a strict prohibition on the de-anonymization of nodes with whom one shares no common edge must therefore be balanced against the need to trace the provenance of interchanged information. there is an analogy in this trade-off to the operational characteristics of the bitcoin protocol.
accordingly, the pseudo-anonymity properties of the bitcoin network can be usefully employed in the creation of a shared data structure amongst supply chain nodes with the specified constraints. nodes must be represented by means of a persistent pseudonymous identifier through which it is possible to associate them with the information exchanged ( ) at an approximate time ( ). this representation should be resistant to attempts at associating such nodes with ``real world'' identifying information. the use of data replication to guard against the deterioration of information resources in systems requiring a high degree of fault tolerance has been shown to be effective. this technique has been employed successfully in the distributed processing of large data sets across clusters of computers, such as hadoop, which utilizes redundant copies of information to guard against unintentional data corruption. replication in computing involves sharing information (distribution) so as to ensure consistency between redundant records. this process often employs a rudimentary consensus mechanism to establish the canonical source in the event of malfunction, explored later in more detail. the bitcoin protocol utilizes the concept of data replication in the historical transaction data maintained by all full nodes on the network. there is a clear analogy here to the individual cluster members in the hadoop architecture, many of which maintain a distinct copy of (often a subset over) the database. in the context of a supply chain, this feature would enable each node to independently verify its own copy of the flow of goods or merchandise throughout the network. multiple copies of the shared database, including associated blocks and component transactions, should be maintained by actors operating in the supply chain network exceeding a predetermined threshold. the consensus model in its most elemental instantiation would take the form of simple majority, as in the case of hadoop detailed above. mutual agreement between counter-parties provides another form of consensus, whereby the definitive characterization of an exchange can be recorded after its having taken place. representation of individual nodes in the supply chain by a unique digital signature provides a mechanism for the manufacturing of trust. if two transacting nodes consent to affix their personal signatures to a transaction, we can consider that this process has been successfully concluded. nodes have permission only to assent to transactions in which they are directly involved. this procedure is roughly sketched in figure 1. such a process tends to result in a large common history. the signatures take the place of ``proof-of-work'' in the form of reputation staked on the veracity of the nodes' participation in an atomic transaction. if an individual node were itself to become corrupt, we rely on the integrity of the remaining network participants that maintain a replicated copy of that transaction. the quorum necessary to officiate a particular interpretation can be based on a simple majority or on a fixed percentage ( e.g.
3 + 1 ) ) to establish an implicit consensus on the canonical database representation .this conception of consensus is dissimilar to the bitcoin network where trust is a function of the hashing power that a node is able to control .the threat model herein considered differs markedly from that necessitated by the maintenance of a distributed cryptocurrency network since it is assumed that nodes exchanging goods in a supply chain already foster a modicum of trust , i.e. a working business relationship .this assumption permits of more flexibility in the optimal behaviour policy we can expect from nodes .consensus is established by ascent of mutually transacting nodes .these atomic instances are committed to a common shared history replicated throughout a subset of the network which serves as the future canonical representation based on a predetermined consent parameter ( network percentage ) . .blocks are depicted in blue , the connection between blocks ( hash pointers ) in red .node is a supplier of raw materials . in two instancesthese have been contaminated by the mismanagement of .contaminated units are green .the principle of provenance enables latter nodes , including & , to trace the origin of the contaminated goods back up the tree to identify their source.,scaledwidth=80.0% ] theoretically to spend a bitcoin requires that its provenance be explicitly verified against the entire transaction history of the network from the present epoch through to inception .this feature is beneficial to supply chain systems concerned with targeted recall of defective products , especially so since each individual bitcoin exists as a unique unit within the closed system .there is no individual actor with the capacity to ` crtl + c , ctrl + v ` a bitcoin into existence .bitcoin units are recorded by the unspent transaction output ( utxo ) set .the balance of any one wallet is the summation over the utxo instances assigned to the private key with which it is associated .analogously an automobile , or similarly a simple chair , is the summation over it s constituent ( unique ) parts .targeted recall of products effected by particular contagion is an important concern to many organizations , for instance large automotive manufacturers . in 2015 the volkswagen emissions scandal ( vw - abgasaffre ) prompted a vast recall campaign of large swaths of vehicles , likely including vehicles unaffected by the defective component .the problem of tracking unique product constituents through the inherent `` mixing '' process that goes on throughout a supply chain , from material aggregation to finished product , can be conceived of as a task related to that of tracking `` tainted '' coins through the bitcoin network .this traceability would allow manufacturers to implement targeted recalls with surgical precision , a substantial efficiency gain for supply chain management systems .representation of information interchange units should be unique facilitating a navigable trail of provenance for individual components throughout the shared database .the bitcoin proof - of - work exhausts computational power , and ultimately electricity ( among other considerations , i.e. 
the raw materials used to fashion hardware ) in the expending of scarce resources , viz .time and money , in order to bring new bitcoins into existence .reputation and social capital are likewise a scarce resource .proof - of - work describes the procedure whereby nodes exchange one resource to be remunerated in kind with another .for instance in a supply chain management system nodes consent to the veracity of a transaction by affixing it with their digital signature , expending reputation , and are remunerated with a certified representation of data they consider important . [ designp_v ] transaction quora can establish the degree to which they are concerned with the integrity of some exchange unit by committing to the data structure that represents it with a proof - of - work .the bitcoin protocol is optimized to combat problems inherent to the distributed exchange of value in a peer - to - peer network such as double spending . in a supply chainthe idea of double spending is nonsensical .nodes that maintain a working business relationship preserve this arrangement by acting ( in most cases ) with integrity . thereby the proposed system is construed to utilize this degree of mutual cooperation between transacting parties , for instance in the exchange quora mechanism ( design principle [ designp_v ] ) .this assumption obviates the need for an external arbitrator under the belief that parties at loggerheads would be unable to achieve mutual agreement .the sale of counterfeit products is an issue that modern brands with multinational supply chain networks perpetually combat .disingenuous goods circulate widely on the deep web markets mentioned above .initiatives such as ` code.moncler.com ` by french luxury goods manufacturer moncler attempts to encourage users to register the qr code stitched into their garments online .the system we propose here would enable such manufacturers to keep stricter account of the flow of goods and production materials .this process is closely related to that of tracking the provenance of contaminated or malfunctioning goods as depicted in figure 2 .the simplest potential threat is that of incidental data - corruption , loss , or human - input error .the distributed nature of the data model , together with a pre - arranged consensus threshold would serve to stem the adverse effect of this inevitability .it has been stated that a `` private blockchain '' is nothing more than an atypical name for a shared database . in this workwe have demonstrated that this is in fact the case .however we have also endeavored to provide examples whereby a shared database with certain properties , under specific assumptions , can solve useful problems . in assessing the merits of a technology oneis never fully correct ( or fully incorrect ) to prefer one method over another .in creating this paper we might have individually type - set each letter , printed it with ink , and scanned the result into a computer .we chose the more difficult method and used latex . 
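as a concrete illustration of the provenance tracing discussed above (cf. figure 2), the sketch below walks the ``mixing'' graph of constituent units upstream from a finished product to locate contaminated sources for a targeted recall; the data layout and names are hypothetical.

```python
def trace_provenance(unit_id, parents, contaminated_sources):
    """walk the supply chain graph upstream from a finished unit.

    `parents` maps each unit to the set of constituent units it was assembled from
    (analogous to a transaction spending previous outputs); the function returns the
    contaminated ancestors, enabling a targeted recall of only the affected products.
    """
    found, stack, seen = set(), [unit_id], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        if u in contaminated_sources:
            found.add(u)
        stack.extend(parents.get(u, ()))
    return found

# toy example in the spirit of figure 2: car_1 contains a contaminated batch, car_2 does not
parents = {
    "car_1": {"chassis_a", "engine_x"},
    "car_2": {"chassis_b", "engine_y"},
    "engine_x": {"steel_batch_3"},
    "engine_y": {"steel_batch_4"},
}
print(trace_provenance("car_1", parents, {"steel_batch_3"}))   # {'steel_batch_3'}
print(trace_provenance("car_2", parents, {"steel_batch_3"}))   # set()
```

run in reverse (from a contaminated source down to finished products), the same traversal yields exactly the set of end products that need to be recalled, leaving unaffected units untouched.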
in the modern world, businesses are often burdened with long and convoluted supply chains. the final determination of the degree to which the data management techniques described above are practically useful is empirical. it has been pointed out that satoshi was probably not an academic, because he implemented his system first and wrote about it later. this work asserts the plausibility of that conjecture insofar as the protocol herein described remains to be actualized. what we have done in this work is to venture a rough framework and design methodology. through careful consideration of the processes by which one might derive concrete value from the actionable properties inherent in the bitcoin protocol, we seek to solve useful problems for the supply chain management community. tian, feng: an agri-food supply chain traceability system for china based on rfid & blockchain technology. 2016 13th international conference on service systems and service management (icsssm), 16 (2016)
heretofore the concept of `` blockchain '' has not been precisely defined . accordingly the potential useful applications of this technology have been largely inflated . this work sidesteps the question of what constitutes a blockchain as such and focuses on the architectural components of the bitcoin cryptocurrency , insofar as possible , in isolation . we consider common problems inherent in the design of effective supply chain management systems . with each identified problem we propose a solution that utilizes one or more component aspects of bitcoin . this culminates in five design principles for increased efficiency in supply chain management systems through the application of incentive mechanisms and data structures native to the bitcoin cryptocurrency protocol .
spontaneous confinement transitions occurring in magnetically confined plasmas are vital for the development of fusion as a viable future energy source .it is generally assumed that zonal flows are an essential ingredient for understanding these transitions .these flows can be generated spontaneously from turbulence via reynolds stress .however , the experimental identification of zonal flows is a hard problem .zonal flows are low - frequency phenomena with a global character , i.e. , with long wavelengths in the toroidal and poloidal directions .more specifically , they are associated with a potential perturbation with toroidal mode number , poloidal mode number and finite radial wavelength .the detection of these flows has been difficult , partly due to the fact that fusion - grade plasmas are characterized by the presence of many instabilities and waves , making it hard to isolate the low - frequency , long - wavelength zonal flow in the data , containing other fluctuating and/or oscillating contributions .several techniques have been used , including the measurement of spectra and ( long - range ) correlation , the hilbert - huang transform , bicoherence and non - linear energy transfer .the fact that zonal flows are ` global ' in the sense that they are long - wavelength phenomena should facilitate their unambiguous detection provided simultaneous measurements at several remote points inside the plasma are available , at the ( radial ) location where the zonal flow exists . given such multipoint data ,a technique is needed to isolate and extract the ` global ' component from the fluctuation data .this task is ideally suited for the biorthogonal decomposition ( bod ) , also known as proper orthogonal decomposition ( pod ) or singular value decomposition ( svd ) , among others .however , there may be other global oscillations affecting the fluctuation data , different from zonal flows .therefore , in the present work , the bod technique is complemented with additional techniques to quantify the long - range character of the biorthogonal modes ( cf . ) or their propagating nature , thus facilitating the distinction between global modes associated with magneto - hydrodynamic ( mhd ) activity or other oscillations , on the one hand , and proper zonal flows , on the other .the details of the biorthogonal decomposition are well - known , and the reader is referred to the references for detailed information . here, we will summarize its main features .the multipoint measurements constitute a data matrix , where the index labels the time and the detector . 
typically , the time corresponding to the time index , , is equally spaced , since measurements are typically taken at a fixed sampling rate , although this is not strictly necessary .the physical location of the detectors , , however , is often dictated by practical convenience or space limitations and will not necessarily correspond to a regular grid .finally , to facilitate the physical interpretation of the results , the measurements performed at the various detectors should ideally be of the same physical quantity ( in our case , an electric potential ) and be given in the same units .the bod method decomposes the data matrix as follows : where is a ` chrono ' ( a temporal function ) and a ` topo ' ( a spatial or detector - dependent function ) , such that the chronos and topos satisfy the following orthogonality relation : an alternative normalization ( indicated by the superscript ) is such that the root - mean - square ( rms ) value of the topos and chronos equals 1 . in this case , with . with this normalization , the number represents the contribution of mode to the total rms ` fluctuation amplitude 'thus , the fractional contribution of a given mode to the total ` fluctuation energy ' ( proportional to the square of the rms ) can be computed from the superscript ` ' is irrelevant in this expression .the combination chrono / topo at a given , , is called a spatio - temporal ` mode ' of the fluctuating system , and is constructed from the data matrix without any prejudice regarding the mode shape .the decomposition is performed by computing the singular value decomposition ( svd ) of the data matrix .note that this is always possible for any real - valued rectangular matrix , and standard software packages are available for this purpose .the are the eigenvalues ( sorted in decreasing order ) , where .a threshold may be set for cutting off the expansion in bod modes ( based on , e.g. , a noise level ) and so keeping only the dominant contribution of modes to the data matrix for further analysis and ignoring noisy or minor contributions .when the bod expansion is cut off at a certain mode , the reconstructed data , , are defined by eq .( [ biortho ] ) while restricting the sum to . in this case, the reconstructed data minimize the least squares error with respect to the actual data among all possible sets of spatiotemporal modes .if the physical system under study contains normal mode oscillations and is sufficiently well - sampled ( spatially and temporally ) , the probability that the bod modes correspond closely to the said normal modes is high ; the bod is particularly sensitive to resonant linear normal modes . on the other hand ,the bod technique is perhaps less suited for the analysis of non - linear systems in which no normal modes exist , as the bod modes are unlikely to correspond to any meaningful physical modes in this case .an advantage of the bod analysis is that no _ a priori _ assumption is made regarding the mode shape or spectral properties , unlike standard analysis techniques such as fourier decomposition .this has the mentioned advantage that the bod modes concentrate a maximum of fluctuation power in the lowest modes , but the disadvantage that the modes are not guaranteed to coincide exactly with the normal modes of the system ( if known ) .furthermore , it should be noted that the svd decomposition is not unique .first , the sign of the chrono and topo of a given mode can trivially be inverted without affecting their product , i.e. 
, without affecting the reconstruction according to eq .( [ biortho ] ) .second , if two svd modes and have the same eigenvalues , a rotation in the two - dimensional vector space spanned up by the chronos and and a compensating rotation in the space spanned up by the corresponding topos can be defined so that the reconstructed data , according to eq .( [ biortho ] ) , are unchanged , giving rise to a ( rotational ) indeterminacy . as will be clarified below, this may occur in the case of propagating modes .as such , it does not imply a great disadvantage , since such modes will be considered pairwise anyway .the covariance between a signal and a signal ( being the temporal and the spatial index ) is defined as : since the signals are expanded according to eq .( [ biortho ] ) , this expression can be simplified , using eq .( [ norm ] ) , to : i.e. , the covariance between two signals is simply the sum of the products of the corresponding topos , weighed by the square of the eigenvalues . in other words ,the topos reflect the covariance between the measurement signals ; and the contribution of each mode to the covariance is simply . from eq .( [ covariance_topos ] ) one immediately obtains , again using eq .( [ norm ] ) : thus , the topos are the eigenvectors of the covariance matrix , and the corresponding eigenvalues .this suggests that the topos and eigenvalues can be found by computing the eigenvectors and eigenvalues of the covariance matrix , after which the corresponding chronos can be obtained by simple matrix multiplication : this view of the bod mode decomposition may be helpful when interpreting the meaning of the bod modes .the fact that the topos are the eigenvectors of the covariance matrix immediately suggests an alternative approach : instead of analyzing the raw data , one could first normalize the data to their rms value when computing the svd of instead of , the resulting topos will then be the eigenvectors of the _ correlation _ matrix . by definition ,the correlation between measurements and is obtained by normalizing the covariance , eq .( [ covariance ] ) , to the rms of each signal : from eq .( [ covariance_topos ] ) , it is clear that this quantity only depends on the eigenvalues and the topos .the contribution of a given mode to the correlation is ( cf . ) thus , it is possible to quantify the contribution of a specific bod mode to the so - called _ long range correlation _ ( lrc ) . 
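The two equivalent routes to the decomposition just described can be illustrated concretely. The sketch below is not the authors' code: the synthetic data matrix, all variable names, and the use of numpy are assumptions introduced here, and numpy's SVD returns chronos and topos with unit L2 norm rather than the rms-one normalization mentioned in the text. It computes chronos, topos and eigenvalues with a plain SVD, and then checks that the topos also arise as eigenvectors of the detector-detector covariance matrix, with the chronos recovered by matrix multiplication.

```python
import numpy as np

# Stand-in data matrix Y[i, j]: i = time index, j = detector index (purely illustrative).
rng = np.random.default_rng(0)
N_t, N_d = 4096, 20
Y = rng.standard_normal((N_t, N_d)) @ rng.standard_normal((N_d, N_d))

# Route 1: biorthogonal decomposition via SVD, Y = sum_k lambda_k * chrono_k * topo_k.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
chronos, lambdas, topos = U.T, s, Vt              # modes sorted by decreasing eigenvalue
energy_fraction = lambdas**2 / np.sum(lambdas**2)

# Least-squares optimal reconstruction keeping only the first K modes.
K = 3
Y_rec = (U[:, :K] * s[:K]) @ Vt[:K, :]
rel_err = np.linalg.norm(Y - Y_rec) / np.linalg.norm(Y)

# Route 2: topos as eigenvectors of the covariance matrix, chronos by projection.
C = Y.T @ Y                                       # detector-detector covariance (up to 1/N_t)
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
lambdas_cov = np.sqrt(np.clip(evals[order], 0.0, None))
topos_cov = evecs[:, order].T
chronos_cov = (Y @ topos_cov.T) / lambdas_cov

# Agreement of the two routes for the leading mode, up to the trivial sign indeterminacy.
sign = np.sign(np.dot(topos_cov[0], topos[0]))
print("energy fractions:", np.round(energy_fraction[:4], 3), " K-mode error:", round(rel_err, 3))
print(np.allclose(sign * topos_cov[0], topos[0], atol=1e-6),
      np.allclose(sign * chronos_cov[:, 0], chronos[0], atol=1e-6))
```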
to achieve this ,assume we may subdivide the set of measurements into two complementary sets : , with elements , and , with elements , such that measurements in set are taken at a remote location from measurements in set .the long range correlation is then defined as the correlation between any pair of measurements taken at remote locations .we define the contribution to the _ mean _ long range correlation of a given mode by averaging the correlation due to this mode among any two remote measurement points : this definition may lead to low values in the case of partial cancellations due to the existence of a mixture of correlations and anti - correlations between individual measurements .to measure the overall correlation ` intensity ' regardless of sign , we also define the mean absolute long range correlation : one important aspect of the bod method is that it assumes that all modes are separable in space and time .standing waves are of this type .however , one often has to contend with propagating structures , such as running waves .it is important to be able to recognize and distinguish these different structures .a typical example of a running wave is .this can trivially be decomposed into a sum of two standing wave patterns : by fourier s theorem , this result can be generalized to any linearly propagating spatiotemporal structure of argument .thus , linearly propagating or ` traveling ' waves or structures generate a pair of bod modes , with similar eigenvalues and a mutual phase difference ( in space and time ) of around 90 .consequently , a method is needed to identify such mode pairs .if the propagating mode has a clearly defined frequency , the fourier power spectra of the concerned chronos will have a peak at this frequency ( here , the circumflex indicates the fourier transform ) , and the cross phase between the chronos is easily determined as the phase of the complex cross spectrum at the frequency , where the star signifies complex conjugate . as the low - frequency modes of interestdo not always have a clearly defined spectral peak , we will use a technique for quadrature detection based on the hilbert transform . in this framework, we must assume that the bod topos and chronos fluctuate symmetrically about zero , which can be achieved by subtracting their mean or applying a suitable high - pass frequency filter prior to analysis .this is necessary as it is a well - known fact that the hilbert transform does not produce a reasonable estimate of the quadrature if the mean of the signal is not zero or is drifting . with this additional assumption , it is feasible to compute the quadrature of a given topo or chrono by means of the hilbert transform .thus , we can use the full set of topos and chronos and compute the following spatial and temporal quadrature matrices : where is the hilbert transform and the tilde refers to the removal of the mean ( or lowest frequencies ) mentioned above .the elements of these quadrature matrices are restricted to the range $ ] ; the absolute value of a given element will differ significantly from 0 when the corresponding modes are in approximate quadrature . by means of this technique ,it becomes possible to identify linearly propagating structures by finding pairs of modes with similar eigenvalues , , such that and are significantly different from zero . 
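A minimal sketch of the two quantities just introduced is given below. It is an illustration only: it assumes the unit-norm SVD convention of the previous snippet, a detector split into two "remote" index sets chosen by hand, and scipy's hilbert routine for the quadrature; the exact normalizations used in the text are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

def bod(Y):
    """Biorthogonal decomposition of a (time x detector) data matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U.T, s, Vt                                  # chronos, eigenvalues, topos

def lrc_contributions(Y, lambdas, topos, set_a, set_b):
    """Mean and mean-absolute contribution of each mode to the long range correlation
    between two complementary detector sets taken at remote locations."""
    rms = np.sqrt(np.mean(Y**2, axis=0))
    corr_k = (lambdas[:, None, None]**2 / Y.shape[0]
              * topos[:, set_a][:, :, None] * topos[:, set_b][:, None, :]
              / (rms[set_a][:, None] * rms[set_b][None, :]))
    return corr_k.mean(axis=(1, 2)), np.abs(corr_k).mean(axis=(1, 2))

def quadrature_matrix(M):
    """Normalized projection of each mode onto the Hilbert quadrature of every other mode;
    entries close to +-1 flag pairs that are approximately 90 degrees out of phase."""
    Mz = M - M.mean(axis=1, keepdims=True)             # remove the mean before the transform
    H = np.imag(hilbert(Mz, axis=1))                   # 90-degree shifted copies
    return (Mz @ H.T) / np.outer(np.linalg.norm(Mz, axis=1), np.linalg.norm(H, axis=1))

# Toy usage: detectors 0..9 form one probe, detectors 10..19 a remote probe.
rng = np.random.default_rng(1)
Y = rng.standard_normal((2048, 20))
chronos, lambdas, topos = bod(Y)
lrc_mean, lrc_abs = lrc_contributions(Y, lambdas, topos, np.arange(10), np.arange(10, 20))
Qt, Qx = quadrature_matrix(chronos), quadrature_matrix(topos)
```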
by default , modes that do not occur in pairsthen correspond to standing wave structures .to test whether the analysis methods described in the preceding section are capable of delivering the promised results , in this section we will apply these techniques to gyrokinetic simulations with known properties , carried out with the code euterpe .euterpe is a global gyrokinetic particle - in - cell monte carlo code developed originally at crpp lausanne with the aim of simulating plasma turbulence in arbitrary threedimensional geometries .it is being developed at the max plank ipp ( greifswald , germany ) and is presently used at the national fusion laboratory ( ciemat , spain ) for the simulation of tj - ii plasmas . for more information about the codethe reader is referred to the cited references .several linear simulations have been run in tj - ii geometry : linear simulations of zonal flow relaxation and simulations of ion temperature gradient ( itg ) instabilities .results from these two kinds of simulations are used in this work to test the zonal flow detection technique . in this kind of simulation, the relaxation of an initial zonal perturbation to the density is studied following the seminal work by rosenbluth and hinton .the simulation is initiated with a perturbation to the density of the form , where is the normalized toroidal flux used as radial coordinate in euterpe , the equilibrium density , the radial wavenumber of the perturbation , the ion larmor radius , and the ion mass and charge , respectively , the equilibrium ion temperature , the magnetic field and means flux surface average .the simulation is carried out in the standard magnetic configuration of tj - ii ( labelled 100_44_64 ) under plasma conditions in which a neoclassical root confinement transition ( the so - called low density transition ) occurs at the plasma edge .the simulation corresponds to a numeric experiment in which this root transition is simulated by slowly evolving density and temperature profiles .the density and temperature profiles and also the neoclassical equilibrium electric field are included in the simulation .the initial perturbation is allowed to evolve linearly without taking into account collisions and the plasma potential is monitored at several positions emulating several multi - pin langmuir probes measuring plasma potential at a set of radial positions , as shown in the fig .[ euterpe_probe ] : probe 1 is located at and probe 2 at half the period ( ) . the probe pins ( 30 for each probe ) are ordered from large to small radius and cover most of the minor radius . 
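Before turning to the actual simulation results, a toy stand-in for such synthetic-probe signals is useful for exercising the pipeline. In the sketch below all frequencies, damping times, pin positions and amplitudes are invented for illustration and are not the EUTERPE values: a damped low-frequency zonal-flow-like component, identical at two remote 30-pin probes, is superposed with a short-lived radially propagating GAM-like oscillation and noise. Feeding this data matrix to the LRC and quadrature routines sketched earlier should flag the zonal part as a single long-range-correlated mode and the GAM-like part as a mode pair in quadrature.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2e-3, 4000)                        # 2 ms of "simulated" time
rho = np.linspace(0.15, 0.9, 30)                        # assumed radial pin positions
zf_radial = np.cos(2 * np.pi * (rho - 0.5) / 0.5)       # finite radial wavelength, n = m = 0
zf_time = np.exp(-t / 1e-3) * np.cos(2 * np.pi * 7e3 * t)   # damped ~7 kHz relaxation

def probe():
    # Radially propagating, short-lived GAM-like part; a propagating structure of
    # argument (omega*t - k*rho) decomposes into a quadrature pair of BOD modes.
    gam = (np.exp(-t / 2e-4)[:, None]
           * np.cos(2 * np.pi * 5e4 * t[:, None] - 2 * np.pi * rho[None, :] / 0.3))
    return (np.outer(zf_time, zf_radial) + 0.4 * gam
            + 0.05 * rng.standard_normal((t.size, rho.size)))

# The deterministic part is identical at both remote probes (flux-surface symmetry);
# only the noise differs.  Columns 0..29 = probe 1, columns 30..59 = probe 2.
Y = np.hstack([probe(), probe()])
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
print("energy fractions of the leading modes:", np.round(s[:4]**2 / np.sum(s**2), 3))
```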
in this simulation ,the zonal component of the potential ( ) is observed to relax via a low frequency damped oscillation , typical for stellarators , and also higher frequency oscillations identified as geodesic acoustic modes ( gam ) appear ( see and references therein ) .the frequency of the slow oscillation is in the range of 510 khz and the gam , only noticeable at the very beginning of the simulation , is around khz .the gam oscillation is associated with low- modes having much lower amplitude than the component .raw simulation data are shown in fig .[ euterpe_zf_data ] .the bod eigenvalues and the long range correlation contribution of each mode , , are shown in fig .[ euterpe_zf_lrc ] .clearly , nearly all lrc is concentrated in the first bod mode .[ euterpe_zf_topos ] shows the first four topos , fig .[ euterpe_zf_chronos ] the first four chronos , and fig .[ euterpe_zf_spectra ] the spectra of the first four chronos .note that the spectrum of the first chrono exhibits the expected temporal behavior of a zonal flow : it peaks at very low frequency . comparing the spectrum calculated for the whole time window and for ms , it is clear that the peak at about 50 khz corresponds mainly to the gam appearing in the initial part of the time window ( ms ) . in this respect , the quadrature analysis , shown in fig .[ euterpe_zf_quadrature ] , shows that modes 3 and 4 , most clearly showing the mentioned spectral peak ( fig .[ euterpe_zf_spectra ] ) , form a propagating mode pair , as one might have suspected on the basis of their similar eigenvalues ( fig . [ euterpe_zf_lrc ] ) and spectra .this propagation would be _ radial _ , not poloidal / toroidal , in view of the near identity of the topos 3 and 4 in the two remote probes , which is consistent with the gam nature of the fluctuations ( constancy of potential on flux surfaces ) . thus having identified modes 1 and modes 3 + 4 , mode 2 remains .note that this radially oscillating mode with very similar shape in both probes ( fig .[ euterpe_zf_topos ] ) hardly oscillates in time but mainly decays in amplitude ( fig . [ euterpe_zf_chronos ] ) , while exhibiting a very similar spectrum as mode 1 for ms ( fig . [ euterpe_zf_spectra ] ) .we conclude that in this case , the zonal flow is captured by the first 2 bod modes , with interesting properties : mode 1 has no radial sign changes and oscillates in time , while mode 2 has no temporal sign changes and oscillates in space .note that in view of the orthogonality requirements of the bod modes , eq .( [ norm ] ) , there can be only one topo that does not change sign and only one chrono that does not change sign .[ reconstruction ] shows a comparison between the , the flux surface averaged component of the simulated potential , and the reconstructed potential oscillations using only the first and the first two bod modes , respectively .very close agreement is observed when both zf bod modes are included . in this case , a linear simulation of ideal itg instability is studied in the tj - ii standard configuration .the ion and electron density profiles and the electron temperature profiles are taken to be flat while the ion temperature profile is defined to have a typical tanh shape , such that it has a large gradient at mid - radius .this renders the itg modes unstable in this region of the plasma .the electron temperature , which is the same as the ion temperature at the maximum gradient position , is ev . 
in this kind of simulation ,the energy of the unstable modes increases with time , as no saturation mechanism is included in the simulation .the potential shows typical structures with resonant mode numbers ( ) such that and centered around , being the average minor radius of the flux surface at half radius . in this case , and .the unstable modes propagate in the ion diamagnetic direction as corresponds to an ion drift wave . in this simulation ,no zonal flow is generated by these modes , as it is linear and no mode interaction is taken into account .raw simulation data of potential at the locations of the synthetic probes are shown in fig .[ euterpe_itg_data ] .the bod eigenvalues and the long range correlation contribution of each mode , , are shown in fig .[ euterpe_itg_lrc ] .by contrast with the preceding section , the lrc is very small here .on the other hand , the first two bod eigenvalues have a similar amplitude , suggesting a possible mode pair ( corresponding to a propagating mode ) . indeed , the quadrature analysis shown in fig .[ euterpe_itg_qxqt ] confirms this suspicion : clearly , modes 1 and 2 form a ( propagating ) pair , as do modes 3 and 4 .[ euterpe_itg_topos ] shows the topos , reflecting the radial mode structure of these propagating modes .+ fig .[ euterpe_itg_chronos ] shows the corresponding chronos , which exhibit the expected exponential growth of the oscillation amplitude .these observations are in accordance with the theoretical mode structure obtained from the distribution of the mode shapes and amplitudes in the simulation .in this section , we will apply the techniques described in section [ method ] and tested on gyro - kinetic simulations in section [ euterpe ] to data obtained from a toroidally confined plasma in tj - ii .the goal is to see whether it is actually possible to identify a zonal flow with some degree of confidence , using appropriately located potential measurements .tj - ii disposes of a double reciprocating probe system , allowing the simultaneous measurement of the floating potential on various radially separated probe pins on two probes with a large toroidal separation .fluctuating structures with a high level of correlation between the toroidally separated probes ( ` long range correlation ' or lrc ) can possibly be identified with zonal flow ( zf ) structures .the radially spaced pins on each of the two probes provide information about the radial wave number of the detected structures. however , zonal flows are not the only phenomena that can give rise to lrcs .in particular , it has been shown that other modes ( e.g. , drift waves or magnetohydrodynamic ( mhd ) oscillations ) may also produce lrcs .the question , therefore , is whether one can distinguish between different origins of observed lrcs : on the one hand , zf - like behavior and on the other , mhd or other ( rotating / propagating ) oscillations .zfs are electrostatic potential fluctuations having toroidal mode number , poloidal wave number , and a finite radial wavenumber . due to this symmetry, the zf potential structure may fluctuate relatively slowly but _ does not rotate_. the latter property allows one to distinguish a zf from , e.g. , a low - frequency mhd mode , which ( with very rare exceptions ) usually does rotate .the two techniques discussed in section [ method ] ( i.e. 
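The pairing argument used in this and the previous subsection (similar eigenvalues plus mutual quadrature of the chronos and of the topos) can be automated. The heuristic below is only a sketch with arbitrarily chosen thresholds, not the criterion actually used for the figures; the growing travelling wave in the usage example is likewise an invented stand-in for the ITG-like signals.

```python
import numpy as np
from scipy.signal import hilbert

def quadrature(a, b):
    """|projection of a onto the Hilbert quadrature of b|, normalized to [0, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    h = np.imag(hilbert(b))
    return abs(np.dot(a, h)) / (np.linalg.norm(a) * np.linalg.norm(h))

def find_propagating_pairs(lambdas, chronos, topos, lam_tol=0.25, q_min=0.5):
    """Flag consecutive modes (k, k+1) as a propagating pair when their eigenvalues are
    close and both their chronos and their topos are approximately in quadrature."""
    pairs = []
    for k in range(len(lambdas) - 1):
        close = abs(lambdas[k] - lambdas[k + 1]) / lambdas[k] < lam_tol
        if (close and quadrature(chronos[k], chronos[k + 1]) > q_min
                and quadrature(topos[k], topos[k + 1]) > q_min):
            pairs.append((k, k + 1))
    return pairs

# Example: a growing travelling wave sampled at 40 points and 2000 times; the two
# leading BOD modes should come out as a single propagating pair.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2000)[:, None]
x = np.linspace(0, 1, 40)[None, :]
Y = np.cos(2 * np.pi * (5 * t - 3 * x)) * np.exp(3 * t) + 0.01 * rng.standard_normal((2000, 40))
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
print(find_propagating_pairs(s[:6], U.T[:6], Vt[:6]))
```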
, the quantification of the contribution of each mode to lrcs and the quadrature detection technique ) should enable one to make this distinction based on the experimental data : zfs are long - range correlated ` standing wave structures ' , while mhd , rotating or propagating modes are ` traveling wave structures ' .tj - ii is a heliac type stellarator with four field periods , having major radius m , minor radius m , and toroidal magnetic field t. the plasmas considered here are heated by the electron cyclotron resonant heating ( ecrh ) system , consisting of two beam lines with an injected power of up to 400 kw each .electron density is controlled by means of a gas puffing system and the line average electron density typically reaches values of about m .the central electron temperature is typically ev , and the central ion temperature is around ev .langmuir probe d is located in a top port at toroidal position , while probe b is located in a bottom port at .thus , the two probe systems are remote , separated toroidally by a distance of m ( about half a toroidal turn ) . in the experiments analyzed here ,each probe is fitted with a ` rake ' probe head measuring floating potential at radially spaced pins .probe pins are separated radially by 3 mm in the d probe , and by 1.7 mm in the b probe ; probe d covers a radial range of about 3.5 cm , while probe b covers about 1.5 cm .we analyze shot 36012 in the time interval ms . in this discharge ,the rotational transform , , has a value of around 1.455 at the magnetic axis and 1.55 at the edge , so that the 3/2 rational surface is located at about . in the mentioned time interval ,both the line average electron density and the plasma energy content are fairly constant ; m , near the critical density of the electron to ion root confinement transition at tj - ii .[ 36012_spectrum ] shows the mean spectrum of all probe pins .several spectral peaks are visible , which we will identify in the following . fig .[ 36012_lambda ] shows the bod eigenvalues and the lrc . to calculate the lrc , eqs .( [ lrc ] ) and ( [ lrcabs ] ) , the probe pins have been subdivided into two sets : namely , pins corresponding to probe d or b , respectively . fig .[ 36012_spectra ] shows the spectra of the first 3 modes .the first bod mode suggests a strong long range _the second and third bod modes form a propagating pair ( confirmed by quadrature analysis , not shown ) ; the spectrum shows that this propagating mode has a clear frequency peak at khz . regarding the negative value of , it should be noted that topo 1 exhibits a complex radial structure , cf .[ 36012_topos ] .thus , the global character of the definition of may not capture the details of this structure .the two peaks appearing in topo 1 at are in fact _ correlated _ and correspond to a zonal flow - like structure , indicated schematically by a grey area in fig .[ 36012_topos ] .this structure is considered zonal flow - like for the following reasons : ( 1 ) it is long range correlated ( recall eq .( [ covariance_topos ] ) ) ; ( 2 ) the structure has the same value of floating potential in the two remote probes , suggesting an global structure : ( 3 ) the spectral characteristics of the structure show no clear peak and spectral power is concentrated at very low frequency , cf . 
fig .[ 36012_spectra ] ; ( 4 ) the radial position and radial width are very similar for the two remote probe systems ( as indicated by the grey area in fig .[ 36012_topos ] ) .the radial width of the zf - like structure is rather small , namely about one centimeter .the four points mentioned above constitute the most unambiguous identification of a zf - like structure reported yet in literature .further outward ( ) , the floating potentials in the two probes are predominantly _ anti- _ correlated such that topo 1 has an opposite sign for the two probes .the temporal variation of this anti - correlated spatial structure is linked to the zf fluctuations , as is evident from its inclusion in the same chrono .we speculate that the fluctuations of the zf intensity ( at ) cause a variation of outward transport ( in the range ) with some toroidal / poloidal asymmetry probably associated with the different local curvature and flux expansion at the two probe locations . at this point, we would like to emphasize that the radial structure revealed by this technique is very hard to extract using traditional methods based on , e.g. , two - point correlation functions , due to the fact that the signals contain contributions from both the zf - like structure and the propagating modes .traditional techniques can not separate these contributions clearly , unless a hypothesis is made about the spectral distribution and frequency filters are applied . here ,no hypotheses are needed and the results follow directly from the multipoint analysis of the raw data .in this work , we propose a novel analysis method to identify ` global ' , long range correlated ( lrc ) components from multipoint fluctuation measurements .the analysis is based on the well - known biorthogonal decomposition ( bod ) , which is extended by defining a quantity that measures the contribution of each bod mode to the overall lrc , cf .( [ lrc]-[lrcabs ] ) .in addition , a quadrature detection technique is introduced , based on the hilbert transform , to determine whether two bod modes ( with similar eigenvalues ) are in quadrature and thus are likely to correspond to a propagating mode .the combination of these various items of information ( mode contribution to lrc and quadrature among modes , as well as the spectral and spatial characteristics of the bod modes ) allows distinguishing zonal flow - like ( ` global ' ) , low frequency oscillations from propagating oscillations provided sufficient and adequately placed multipoint data are available .the method was tested on gyrokinetic simulations with the global code euterpe , using data from synthetic probes .two simulations were analyzed , using synthetic probes to generate signals . in the first simulation ,the linear , collisionless relaxation of an initial zonal perturbation to the density was studied .the bod analysis successfully extracted the zf mode from the generated signals .the gam - like oscillations appearing at the beginning of the simulation were identified as associated to a `` radially propagating structure '' . in the second simulation ,the linear itg instability was simulated , and no zf was produced . in this case , the bod analysis technique detected propagating modes and yielded their structure , and did not detect any zf ( as indeed it should nt ) .then , the method was applied to tj - ii langmuir probe data . 
in a discharge near the electron - ion root confinement transition ,the method was shown to be able to separate zf - like fluctuations from propagating contributions to the lrc .unambiguous identification of the zf - like structure was possible , using four elements : ( 1 ) the structure exhibits ( positive ) long range correlation ; ( 2 ) it has the same value of floating potential in the two remote probes , suggesting an global structure ; ( 3 ) the spectrum shows no clear peak and spectral power is concentrated at very low frequency ; ( 4 ) the radial position and radial width are very similar in the two remote probes . by contrast , the propagating structure was recognized by the quadrature of two topos and chronos having similar bod eigenvalues , and a peak in the corresponding spectra . it is noted that with only two toroidally / poloidally separated multi - pin probes , as used in this work , the capacity to identify modes propagating in the various directions is somewhat limited .more sophisticated probes , providing more detailed spatial information in the toroidal / poloidal directions , or the combination of information from various different types of diagnostics might overcome such limitations and allow a better identification of propagating modes .while bearing this in mind , it is shown that the bod is a powerful technique to separate zf - like , global oscillations from other ( propagating ) oscillations and to extract their mode structure . in this framework , it has been speculated that microscale turbulence might not only interact with zonal flows , but also with mhd modes .this would imply the existence of an ( indirect ) interaction between zfs and mhd modes .indeed , in the past it has been observed that the growth of zfs may modify mhd activity ( e.g. , ) .it is expected that the methodology presented here could help clarifying this issue , or more generally , yield fruitful results for a wide range of multipoint ( and remote ) measurements in systems with long range correlated structures .the authors thankfully acknowledge the computer resources , technical expertise and assistance provided by the barcelona supercomputing center centro nacional de supercomputacin and the ciemat computing center . research sponsored in part by the ministerio de economa y competitividad of spain under project nr .ene2012 - 30832 .this project has received funding from the european union s horizon 2020 research and innovation programme under grant agreement number 633053 .the views and opinions expressed herein do not necessarily reflect those of the european commission .10 a. fujisawa , t. ido , a. shimizu , s. okamura , k. matsuoka , h. iguchi , y. hamada , h. nakano , s. ohshima , k. itoh , k. hoshino , k. shinohara , y. miura , y. nagashima , s .-itoh , m. shats , h. xia , j.q .dong , l.w .yan , k.j .zhao , g.d .conway , u. stroth , a.v .melnikov , l.g .eliseev , s.e .lysenko , s.v .perfilov , c. hidalgo , g.r .tynan , c. holland , p.h .diamond , g.r .mckee , r.j .fonck , d.k .gupta , and p.m. schoch .experimental progress on zonal flow physics in toroidal plasmas ., 47(10):s718 , 2007 .mckee , r.j .fonck , m. jakubowski , k.h .burrell , k. hallatschek , r.a .moyer , w. nevins , d.l .rudakov , and x. xu .observation and characterization of radially sheared zonal flows in diii - d ., 45:a477 , 2003 .pedrosa , c. hidalgo , e. caldern , t. estrada , a. fernndez , j. herranz , i. 
pastor , and the tj - ii team .threshold for sheared flow and turbulence development in the tj - ii stellarator ., 47:777 , 2005 .pedrosa , c. hidalgo , e. caldern , a. alonso , r.o .orozco , j.l .de pablos , the tj - ii team , and p. balan .spontaneous edge sheared flow development studies in the tj - ii stellarator ., 55(12):1579 , 2005 .zhao , t. lan , j.q .dong , l.w .yan , w.y .hong , c.x .liu , j. qian , j. cheng , d.l .yang , x.t . ding , y. liu , and c.h .toroidal symmetry of the geodesic acoustic mode zonal flow in a tokamak plasma ., 96:255004 , 2006 .pedrosa , c. silva , c. hidalgo , b.a .carreras , r.o .orozco , d. carralero , and the tj - ii team .evidence of long - distance correlation of fluctuations during edge transitions to improved - confinement regimes in the tj - ii stellarator ., 100:215003 , 2008 . y. xu , s. jachmich , r.r .weynants , m. van schoor , m. vergote , a. krmer - flecken , o. schmitz , b. unterberg , c. hidalgo , and textor team .long - distance correlation and zonal flow structures induced by mean shear flows in the biasing h - mode at textor . , 16:110704 , 2009 .liu , t. lan , c.x .zhao , l.w .yan , w.y .hong , j.q .dong , k.j .zhao , j. qian , j. cheng , x.r .duan , and y. liu .characterizations of low - frequency zonal flow in the edge plasma of the hl-2a tokamak ., 103:095002 , 2009 .carreras , b.ph .van milligen , r.b .perez , m.a .pedrosa , c. hidalgo , and c. silva .reconstruction of intermittent waveforms associated with the zonal flow at the transition leading to the edge shear flow layer .51:053022 , 2011 . p.h .diamond , m.n .rosenbluth , e. snchez , c. hidalgo , b. van milligen , t. estrada , b. braas , m. hirsch , h.j .hartfuss , and b.a .carreras . in search of the elusive zonal flow using cross - bicoherence analysis . , 84(21):4842 , 2000 . m. xu , g. r. tynan , p. h. diamond , p. manz , c. holland , n. fedorczak , s. chakraborty thakur , j. h. yu , k. j. zhao , j. q. dong , j. cheng , w. y. hong , l. w. yan , q. w. yang , x. m. song , y. huang , l. z. cai , w. l. zhong , z. b. shi , x. t. ding , x. r. duan , and y. liu .frequency - resolved nonlinear turbulent energy transfer into zonal flows in strongly heated l - mode plasmas in the hl-2a tokamak ., 108:245001 , 2012 .g. kerschen , j .- c .golinval , a.f .vakakis , and l.a .the method of proper orthogonal decomposition for dynamical characterization and order reduction of mechanical systems : an overview ., 41:147 , 2005 .e. snchez , r. kleiber , r. hatzky , m. borchardt , p. monreal , f. castejn , a. lpez - fraguas , x. sez , j.l .velasco , i. calvo , a. alonso , and d. lpez - bruna .collisionless damping of flows in the tj - ii stellarator ., 55:014015 , 2013 .e. snchez , r. kleiber , r. hatzky , m. borchardt , p. monreal , f. castejn , a. soba , z. sez , and j.m .simulaciones girocinticas de turbulencia en plasmas de fusin con geometra tridimensional . in real sociedadespaola de fsica , editor , _ proc .33 biennial meeting of the royal spanish physics soc ., santander _ , volume iv , page 114 , 2011 .velasco , j.a .alonso , i. calvo , j. arvalo , e. snchez , l. eliseev , s. perfilov , t. estrada , a. lpez - fraguas , c. hidalgo , and the tj - ii team .damping of radial electric field fluctuations in the tj - ii stellarator ., 55:124044 , 2013 .pedrosa , a. lpez - snchez , c. hidalgo , a. montoro , a. gabriel , j. encabo , j. de la gama , l.m .martnez , e. snchez , r. prez , and c. sierra .fast movable remotely controlled langmuir probe system ., 70(1):415 , 1999 .van milligen , a. 
lopez fraguas , m.a .pedrosa , c. hidalgo , a. martin de aguilera , and e. ascasbar .parallel and perpendicular turbulence correlation length in the tj - ii stellarator ., 53:093025 , 2013 . c. hidalgo , m.a .pedrosa , e. snchez , b. gonalves , j.a .alonso , e. caldern , a.a .chmyga , n.b .dreval , l. eliseev , t. estrada , l. krupnik , a.v .melnikov , r.o .orozco , j.l .de pablos , and c. silva .physics of sheared flow development in the boundary of fusion plasmas ., 48:s169 , 2006 .a. ishizawa and n. nakajima .excitation of macromagnetohydrodynamic mode due to multiscale interaction in a quasi - steady equilibrium formed by a balance between microturbulence and zonal flow . , 14:040702 , 2007 .
This work addresses the identification of zonal flows in fusion plasmas. Zonal flows are large-scale phenomena, hence multipoint measurements taken at remote locations are required for their identification. Given such data, the biorthogonal decomposition (or singular value decomposition) is capable of extracting the globally correlated component of the multipoint fluctuations. By using a novel quadrature technique based on the Hilbert transform, propagating global modes (such as MHD modes) can be distinguished from the non-propagating, synchronous (zonal flow-like) global component. The combination of these techniques with further information, such as the spectrogram and the spatial structure, then allows an unambiguous identification of the zonal flow component of the fluctuations. The technique is tested using gyro-kinetic simulations. The first unambiguous identification of a zonal flow at the TJ-II stellarator is presented, based on multipoint Langmuir probe measurements.
facility location problems are widely investigated in the fields of operations research and theoretical computer science .the -center problem is a classic one in this line of investigation . given a graph with positive edge lengths , a supply set , and a demand set , the -center problem asks for elements from such that is minimized , where denotes the distance from to in .conventionally , , where and are the set of vertices and _ points _ of , respectively . a point of a graph is a location on an edge of the graph , and is identified with the edge it locates on and the distance to an end vertex of the edge .the -center problem in general graphs , for arbitrary , is np - hard , and the best possible approximation ratio is 2 , unless np = p .when is fixed or the network topology is specific , many efficient algorithms were proposed .there are many generalized formulations of the center problem , like the capacitated center problem and the minmax regret center problem .the backup center problem is formulated based on the _ reliability model _ , in which the deployed facilities may sometimes fail , and the demands served by these facilities have to be reassigned to functioning facilities . more precisely , in the backup -center problem , facilities may fail with _ failure probabilities _ .given that the facilities do not fail simultaneously , the goal is to find locations that minimize the expected value of the maximum distance over all vertices to their closest functioning facility .we leave the formal problem definition to section [ section : pre ] .the backup -center problem is np - hard since it is a generalized formulation of the -center problem . for , wang et al . proposed a linear time algorithm for the problem on trees . when the edges are of identical length , hong and kang proposed a linear time algorithm on interval graphs .recently , bhattacharya et al . consider a weighted formulation of the backup 2-center problem , in which each vertex is associated with a nonnegative weight , and the distance from vertex to is weighted by the weight of .they proposed - , - , - , and -time algorithms on paths , trees , cycles , and unicycles , respectively , where is the number of vertices . in this paper , we focus on the weighted backup 2-center problem on a tree and design a linear time algorithm to solve this problem .the algorithm is asymptotically optimal , and therefore improves the current best result on trees , given by bhattacharya et al .the strategy of our algorithm is _ prune - and - search _ , which is widely applied in solving distance - related problems .the rest of this paper is organized as follows . in section [ section :pre ] , we formally define the problem and briefly review the result given by bhattacharya et al . . based on their observations , a further elaboration on the objective function is given . 
in section [ section :linear ] , we design the linear time algorithm , and concluding remarks are given in section [ section : conclusion ] .let be a tree , on which each vertex is associated with a nonnegative weight , and each edge is associated with a nonnegative length .a location on an edge is identified as a _ point _ , and the set of points of denoted by .the unique path between two points and is denoted by , and the _ distance _ between two points and is defined to be the sum of lengths of the edges on .the _ weighted distance _ from vertex to point is defined as .eccentricity _ of a point is defined as and the point with minimum eccentricity is said to be the _ weighted center _ of .note that the weighted center of a tree is unique . for , the eccentricity of a vertex w.r.t . is defined as let and be two points of .the partition of is defined as , where and .a _ weighted 2-center _ consists of two points and minimizing where .we denote a weighted 2-center by . unlike the weighted center of a tree, there may be more than one weighted 2-center .now we are ready to define the weighted backup 2-center problem . given a tree and two real numbers and in , the weighted backup 2-center problem asks for a point pair minimizing , where and . to ease the presentation, we assume that . with the assumption , minimizing equivalent to minimizing , where we note here that all the proofs in this paper can immediately be extended to the case where failure probabilities are different .moreover , and are not restricted to be deployed on different points .if and are identical , the point must be the weighted center , as shown in proposition [ proposition : b2center_identical ] .[ proposition : b2center_identical ] let be a weighted backup 2-center of tree . if and are identical , then it is the weighted center of .let be the weighted center of .suppose to the contrary that , but .since , we have , and therefore which contradicts that is a weighted backup 2-center . when computing a weighted backup 2-center , any vertex with weight zero can be treated as a point on an edge , and any edge with length zero can be contracted to be a vertex with weight . with this manipulation , an instance with `` nonnegative constraints '' on vertex weights and edge lengths can be reduced to one with `` positive constraints '' , and there is a straightforward correspondence between the solutions .therefore , in the discussion below , we may focus on the instances with positive vertex weights and edge lengths . .the number beside each vertex and each edge is the weight of the vertex and the length of the edge , respectively .edges with no number aside are of length one.,width=2 ] throughout the rest of this paper , we use the tree given in figure [ figure : tree ] as an illustrative example .in addition , weighted centers and weighted 2-centers are referred to as centers and 2-centers , respectively , for succinctness .the algorithm of bhattacharya et al . depends on the following observations .[ lemma : solution_location ] let be any 2-center . there is a weighted backup 2-center such that ( resp . ) lies on a path between ( resp . ) and .[ lemma : left_equal_right ] if , then holds for a weighted backup 2-center on a tree , where . by lemma [ lemma : solution_location ] , we may focus the nontrivial case where .the path is embedded onto the -axis with each point on corresponding to point on the -axis . for simplicity, we use to denote both the set of points on this path and the corresponding set of points on the -axis . 
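To fix ideas, the sketch below evaluates the quantities just defined on a small example tree and brute-forces the conditional expectation that the backup 2-center minimizes. It is an illustration only: the tree, its weights and edge lengths, the failure probability, and the restriction of candidate locations to vertices are all assumptions introduced here (the actual problem allows centers in the interiors of edges), and the objective is written directly as the conditional expectation from the problem statement rather than the reformulated objective used in the paper.

```python
import itertools
from collections import defaultdict

# A small weighted tree (vertex weights and edge lengths are arbitrary illustrative values).
edges = [("a", "b", 2.0), ("b", "c", 1.0), ("b", "d", 3.0), ("d", "e", 1.5)]
weight = {"a": 1.0, "b": 2.0, "c": 1.0, "d": 0.5, "e": 3.0}
rho = 0.3                               # assumed common failure probability of both centers

adj = defaultdict(list)
for u, v, ell in edges:
    adj[u].append((v, ell))
    adj[v].append((u, ell))

def dist_from(src):
    """Distances from src to every vertex (stack-based traversal; the graph is a tree)."""
    d, stack = {src: 0.0}, [src]
    while stack:
        u = stack.pop()
        for v, ell in adj[u]:
            if v not in d:
                d[v] = d[u] + ell
                stack.append(v)
    return d

dists = {v: dist_from(v) for v in weight}

def weighted_ecc(centers):
    """Weighted eccentricity of a set of centers: max_v  w_v * min_c d(v, c)."""
    return max(w * min(dists[c][v] for c in centers) for v, w in weight.items())

def backup_objective(c1, c2):
    """Expected max weighted distance to the closest *functioning* center, conditioned
    on the two centers not failing simultaneously (independent, equal probabilities)."""
    both, only1, only2 = weighted_ecc((c1, c2)), weighted_ecc((c1,)), weighted_ecc((c2,))
    return ((1 - rho)**2 * both + rho * (1 - rho) * (only1 + only2)) / (1 - rho**2)

# Brute force over vertex pairs only, to keep the illustration short.
best = min(itertools.combinations(weight, 2), key=lambda p: backup_objective(*p))
print(best, round(backup_objective(*best), 3))
```

Such an enumeration scales roughly cubically with the number of vertices even with vertex-restricted candidates; the point of the rest of the paper is to avoid any enumeration and reach linear time.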
for each vertex , the _ cost function _ is defined as clearly , is a v - shape function .assume that the minimum of occurs at .let and be defined as and let the upper envelopes of and be denoted by and , respectively .an example is given in figure [ figure : envelopes_and_quasiconvex](a ) . and of the tree given in figure [ figure : tree ] .dotted curves are functions , and solid curves are and .( b ) function with different failure probabilities . the failure probability ranges from to .,width=4 ] in the algorithm of bhattacharya et al . , they focus on processing the information within the path from to , where is a 2-center satisfying the following property . [property : center_2center ] for the center , we have . in addition , there is a 2-center satisfying and , where .property [ property : center_2center ] holds due to the continuity of the solution space , i.e. , the set of points of .moreover , it can be derived from property [ property : center_2center ] that for any point pair with , , and , the partition satisfies since is a 2-center . as a result ,bhattacharya et al . gave an algorithm to compute a weighted backup 2-center on a tree .we summarize it as in . center of 2-center of compute and each bending point , where evaluate , and keep the minimum this algorithm runs in time , where is the number of vertices .the bottleneck is the computation of and .once and are computed , the remainder can be done in time since there are _ bending points _ , at which the function is not differentiable w.r.t . . while processing from left to right , the corresponding moves monotonically to the left , and therefore a one - pass scan is sufficient to find the optimal solution .readers can refer to for details . to improve the time complexity , we elaborate on some properties of the objective function below .as in , the discussion below focuses on the behavior of the objective function on .we observe that the objective function possesses a good property ( the quasiconvexity , given in lemma [ lemma : quasiconvex ] ) when and satisfy certain restrictions .the existence of such a 2-center is proved in proposition [ proposition : two_center_equal ] .[ proposition : two_center_equal ] for any tree , there is a 2-center satisfying , where .let be a 2-center , and we embed onto the -axis as in section [ section : review ] .without loss of generality assume that , where , and it follows that due to the continuity of .we claim that one can have the requested 2-center by moving , towards along , to a point such that .let .clearly , and , and .suppose to the contrary that is not the requested 2-center .it follows that either ( i ) is not a 2-center , or ( ii ) . for ( i ), there is a vertex in satisfying a contradiction . for ( ii ), it can be derived that , which contradicts the definition of .any 2-center with satisfies .once a 2-center is computed , can then be computed in linear time , based on the arguments in the proof of proposition [ proposition : two_center_equal ] . in the rest of this paper , we assume that is a 2-center satisfying , where .next , we elaborate on and . as noted in , both and are piecewise linear . on a path ,both and are obviously continuous and convex .it also holds on in a tree , as shown in lemma [ lemma : continuity ] .is not continuous .all vertices are of weight except the bottom one , whose weight is .numbers beside edges are the edge lengths . 
clearly , is not continuous at .,width=1 ] [ lemma : continuity ] let be a 2-center of a tree satisfying , where .the function and are continuous and convex on . because of symmetry , we prove the lemma only for , and we claim that is continuous .the convexity then follows since is the upper envelope of half lines of positive slope .suppose to the contrary that is not continuous .there is a point , with , satisfying let .clearly , at point we have it follows that since otherwise . however , for , we have which leads to a contradiction .notice that lemma [ lemma : continuity ] does not hold for all 2-centers .an example is given in figure [ figure : discontinuity ] .by lemma [ lemma : left_equal_right ] , an optimal solution occurs at the point pair satisfying , , and .thus , we may focus on the single variable function , defined as to design an efficient algorithm , we expect some good properties on .unlike the eccentricity function , function is not convex on ( see figure [ figure : envelopes_and_quasiconvex](b ) ) .fortunately , it is quasiconvex .moreover , for any interval ] ( see lemma [ lemma : quasiconvex ] ) .[ lemma : quasiconvex ] for , the following statements hold .* implies ; * implies .we prove only the statement that implies .the other statement can be proved in a similar way . with the assumption that , we have and therefore thus , .we note here that lemma [ lemma : quasiconvex ] holds for function in a symmetric manner , where this property will be used in designing our algorithm , and its proof is similar to that of lemma [ lemma : quasiconvex ] .the bottleneck on the time complexity of the algorithm of bhattacharya et al . is the computation of and .fortunately , due to the strict quasiconvexity of and the piecewise linearity of and , one can apply the strategy of prune - and - search to obtain an optimal solution in linear time .the quasiconvexity of a function implies that a local minimum of is the global minimum of , and the idea of the prune - and - search algorithm is to search the local minimum over an interval ] is recursively reduced to a subinterval , and once it is reduced , the size of the instance can also be pruned with a fixed proportion . in more detail, the following steps are executed in each recursive call : 1 .[ step : choose_point ] choose a point in ] .[ step : search_direction ] determine whether , , or .[ step : discard ] depending on the result of step [ step : search_direction ] , update ] ; * for each ] or ] , does the process of discarding vertices .an illustration is given in figure [ figure : reduceinstance1 ] . the median of ] , we have clearly , and can be computed in time linear to and , respectively .if is given , then can also be computed in time .it remains to show how is determined . given a value , since , the point on satisfies if and only if therefore , for ] is given as . the evaluation of at a given point can be done symmetrically , and the details are omitted . with the procedures given in sections [ subsection : step1and3 ] and [ subsection :step2 ] , we may implement the idea given in the beginning of section [ section : linear ] , which recursively reduces the size of the problem instance .the procedure is given as . [ line : choose_median ] [ line : discard_critical_vertices ] an example is given in figure [ figure : reduceinstance1 ] . 
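The search-direction test at the heart of the procedure can be illustrated in isolation. In the sketch below, a stand-in for the upper envelope is built from a few weighted V-shaped cost functions, parametrized here as f_v(x) = w_v (a_v + |x - b_v|) with b_v the attachment coordinate of vertex v on the embedded path and a_v its distance to it (one possible reconstruction of the stripped formulas, not a quote from the paper). The envelope is then minimized over a list of candidate positions by comparing the value at a probe point with its right neighbour. This halves the candidate list per round, i.e. it is only a binary search; the actual linear-time algorithm additionally discards a fixed fraction of the vertices defining the envelope in every round, which is not reproduced here. All numbers are invented.

```python
# Weighted V-shaped cost functions along the embedded path: (w_v, a_v, b_v) triples.
vshapes = [(1.0, 0.0, 0.0), (2.0, 0.0, 2.0), (1.0, 1.0, 2.0), (0.5, 0.0, 5.0), (3.0, 1.5, 5.0)]

def envelope(x):
    """Upper envelope of the cost functions at position x on the embedded path.
    A pointwise maximum of V-shapes with positive weights is convex with no flat
    pieces, hence strictly quasiconvex."""
    return max(w * (a + abs(x - b)) for w, a, b in vshapes)

def argmin_quasiconvex(g, xs):
    """Minimizer of a strictly quasiconvex g over a sorted list of candidate positions,
    found by comparing g at a probe index with its right neighbour to pick a direction."""
    lo, hi = 0, len(xs) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if g(xs[mid]) <= g(xs[mid + 1]):
            hi = mid            # the minimum cannot lie strictly to the right of mid
        else:
            lo = mid + 1        # g is still decreasing at mid, search to the right
    return xs[lo]

# Candidate positions: a regular grid standing in for the bending points of the envelope.
xs = [0.1 * k for k in range(51)]
x_star = argmin_quasiconvex(envelope, xs)
print(round(x_star, 2), round(envelope(x_star), 3))
```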
since is chosen as the median of ( line [ line : choose_median ] ) , it can be derived that a fixed proportion of vertices in are discarded after the execution of ( line [ line : discard_critical_vertices ] ) . moreover , together with property [ property : prune_and_search ] , it can be easily derived that for ] ; * for ] since is piecewise linear .we denote this procedure by .the integration is given as , and the procedure for computing a weighted backup 2-center is given as .the correctness and time complexity are analyzed in theorem [ theorem : correctness_and_time ] . and [ line : reducebegin ] [ line : reduceend ] center of 2-center of with the property in proposition [ proposition : two_center_equal ] is the vertex set of [ theorem : correctness_and_time ] the weighted backup 2-center problem on a tree can be solved in linear time by .let with .by lemma [ lemma : solution_location ] , ] , and thus , , , and are initialized accordingly as in . besides , by definition , we have and . with lemma [ lemma : discarded_portion ] ,the initialization of and guarantees , for $ ] , is computed correctly .similar arguments hold for the initialization of and . for the time complexity , both the center and the 2-centercan be computed in time . for ,let , , , and .lines [ line : reducebegin][line : reduceend ] of are executed if either or .together with lemma [ lemma : discarded_portion ] , it can be derived that where the equality holds when , , , and . as a result ,let the execution time of be , where .it follows from ( [ eq : proportion ] ) that therefore , .in this paper , we propose a linear time algorithm to solve the weighted backup 2-center problem on a tree , which is asymptotically optimal . based on the observations given by bhattacharya et al . , `` good properties '' of the objective function are further derived . with these properties ,the strategy of prune - and - search can be applied to solve this problem . for future research , the hardness of the backup -center problem on trees is still unknown , even for the unweighted case .it worth investigation on this direction .the author would also like to thank professor kun - mao chao , ming - wei shao , and jhih - heng huang for fruitful discussions .hung - lung wang was supported in part by most grant 103 - 2221-e-141 - 004 , from the ministry of science and technology , taiwan .ben - moshe , b. , bhattacharya , b. , shi , q. : an optimal algorithm for the continuous / discrete weighted 2-center problem in trees .7th latin american symposium on theoretical informatics ( lncs 3887 ) , valdivia , chile , pp .166177 ( 2006 ) .bhattacharya , b. , de , m. , kameda , t. , roy , s. , sokol , v. , song , z. : back - up 2-center on a path / tree / cycle / unicycle .20th international conference on computing and combinatorics ( lncs 8591 ) , atlanta , ga , usa , pp . 417428 ( 2014 ) .
In this paper, we are concerned with the weighted backup 2-center problem on a tree. The backup 2-center problem is a kind of center facility location problem, in which one is asked to deploy two facilities, each with a given probability of failure, in a network. Given that the two facilities do not fail simultaneously, the goal is to find two locations, possibly on edges, that minimize the expected value of the maximum distance over all vertices to their closest functioning facility. In the weighted setting, each vertex in the network is associated with a nonnegative weight, and the distance from a vertex to a facility is weighted by the weight of that vertex. With the strategy of prune-and-search, we propose a linear time algorithm, which is asymptotically optimal, to solve the weighted backup 2-center problem on a tree.
in recent years , synchronization effects of two or more interconnected classical systems have aroused comprehensive attention because synchronization phenomena are found widely in nature . for examples, huygens found that two clocks with different swings at the initial time will appear synchronization with time evolvement ; it was also observed that the fireflies glow synchronously and the oscillation of heart cells in human or animal will keep in step with each other . at the same time ,synchronization effects exhibit inimitable application potential in many fields , such as the synchronous transmission of information in the internet , the synchronous transmission and amplification of signals between coupled lasers , the encryption and decryption of signals using chaotic synchronization technology , and so on . to date, the synchronization of classical system has gradually become the investigation focus in the numerous fields .the groundbreaking work on theoretical exploration of classical synchronization is marked by yamada and fujisaka who put forward a criterion to judge synchronization behaviors through calculating the lyapunov exponent of the coupling system and obtained synchronization conditions [ 1 ] .after this , pecora and carroll found synchronization phenomena in the electronic circuit and designed a circuit scheme of encrypted communications using synchronization techniques , which demonstrates the attractive application prospect of synchronization effects and arouses the intense research interest for synchronization theory and application [ 2 ] .recently , many effective synchronization techniques have been proposed in order to achieve complete or phase synchronization of classical systems [ 3 - 7 ] .naturally , it is expected to found a similar synchronization phenomenon in quantum systems in order to realize the synchronous transmission of quantum information or states due to its unique advantages of synchronization effects .however , it is difficult to define precisely some concepts which describe synchronization in quantum systems , like `` tracks '' and errors " .even some concepts used in classical dynamics are completely unsuitable to be used in quantum dynamics because of remarkable differences of two kinds of systems .therefore , the relative research of quantum synchronization once is thought as unfeasible .the optomechanical system , as a representative of mesoscopic systems , has attracted widespread attention and systematic discussion recently [ 8 - 11,26 ] .mesoscopic systems exhibit simultaneously both properties of classical and quantum system under certain conditions because the scale of the system is in - between macro - system and micro - system .so some phenomena , no matter what chaos behaviors and limit cycle in classical kingdom [ 12 - 14,23 ] or quantum entanglement and quantum coherent in quantum domain [ 15 - 18,23 ] , have been observed in optomechanical systems , which provides reliable basis to expand synchronization theory from classical to quantum . at 2013 ,mari _ et al ._ extended the classical synchronization concepts to the quantum system [ 19 ] and developed quantitative theory of synchronization for continuous variable systems evolving in the quantum regime . 
and in their work ,two different measures quantifying the level of synchronization of coupled continuous variable are also introduced .whereafter , some progress has been made in interrelated theories and experiments [ 23 - 25,27 ] .however in the general case , the synchronization effects are very sensitive to parameters of the systems , such as driving field , coupled intensity , and so on .therefore , it is expected further to investigate and obtain a quantitative synchronization criterion in order to determine directly whether the synchronization can be realized .meanwhile , the synchronization criterion can also be regarded as a necessary and sufficient condition of the synchronization effect , which means that the quantum coupling systems can be adjusted and controlled to satisfy the synchronization criterion and to realize the aim of quantum synchronization .further , the controllability and practicability of quantum synchronization can be improved . in this work, we present a general method for discussing synchronization effects in mesoscopic quantum systems .we introduce the first order and second order measurements to describe the expectation value and the fluctuation of error respectively and give the necessary conditions to estimate the presence of quantum synchronization effects . using this theory, we design a model based on optomechanical system to realize logic control of quantum synchronization .subsequently , we validate the criterion through the simulation .this paper is organized as follows : in sec . , the classical synchronization theory is briefly introduced . in sec . , the processing method of quantum mesoscopic synchronization is described and the quantitative criteria for determining quantum complete synchronization and quantum phase synchronization are proposed . in sec . , a controllable quantum synchronization model base on optomechanical system is designed and the phase synchronization effect is discussed . finally , the summary and the prospects are given in sec ..considering two classical coupled systems where and are state variables of two systems , and are couplings between systems , respectively .if the error when , the complete synchronization between classical systems is realized . if the phases and of and meet , the phase synchronization between the systems is obtained .the classical synchronization criteria reported previously are mainly to analyze the stability of the error or phase error and to determine whether or can converge asymptotically to zero through calculating the largest lyapunov exponent . in the next section, we will propose a criterion for quantum synchronization based on the classical synchronization theory .in the heisenberg picture , we use quadrature operators and to describe two coupled quantum systems ( here ; =i{\delta}_{jj'} ] and /\sqrt{2} ] .we introduce as the measurement of the quantum phase synchronization similar to the discussion of complete quantum synchronization , let and rewrite eq.(7 ) : where should be a second order measurement as well under . in summary , the synchronization effects in mesoscopic quantum systems can be discussed through the following steps : + a. write the operator equations of system s conjugate mechanical quantities in the heisenberg picture , define the error operators and take them as the form of fluctuations near their expectation value , that is , .+ b. 
make stability analysis for and calculate the largest lyapunov exponent of the error equations .if the largest lyapunov exponent is less than zero , the evolution of can tend to zero stably after a certain time , whereas it may be ruleless oscillation .+ c. if the largest lyapunov exponent is less than zero , the following work is to discuss the magnitude of the noise ( ) and to calculate and base on eq.(5 ) and eq.(8 ) , respectively .oppositely , if and keeps a constant but not zero , the synchronization between the quantum systems is achieved .a controlled quantum synchronization model is designed based on the quantum optomechanical system in order to check the validity of the above - mentioned quantitative criteria . in this model, we can realize quantum synchronization control through different logical relationship of the switches , shown in fig.1 .two coupled optomechanical systems are driven by laser and interact mutually through a phonon tunnel and a fiber which can be controlled by the open or close of the switches and .the hamiltonian of the system can be given directly after a rotating approximation [ 20 , 21 ] ( ) .\\ & -\mu(b_1{b}^\dagger_2+{b}^\dagger_1b_2)+\lambda(a_1{a}^\dagger_2+{a}^\dagger_1a_2 ) \end{split}\ ] ] here and are the optical creation and annihilation operators for the system , and are the mechanical creation and annihilation operators . and are the optical detunings and the mechanical frequencies , respectively . is the optomechanical coupling constant and is the laser intensity which drives the optical cavities . is the intensity of the phonon tunnel and is coupling constant of the fiber.the switches and can change and values from zero to a positive constant by on and off . after considering the dissipative effects, the following quantum langevin equations can be written in heisenberg picture through the input - output properties +e - i\lambda a_2+\sqrt{2\kappa}{a}^{in}_1\\ & \partial_{t}a_2=[-\kappa+i\delta_2+ig({b}^\dagger_2+b_2)]a_2+e - i\lambda a_1+\sqrt{2\kappa}{a}^{in}_2\\ & \partial_{t}b_1=[-\gamma - i\omega_1]b_1+ig{a}^\dagger_1a_1+i\mu b_2+\sqrt{2\gamma}{b}^{in}_1\\ & \partial_{t}b_2=[-\gamma - i\omega_2]b_2+ig{a}^\dagger_2a_2+i\mu b_1+\sqrt{2\gamma}{b}^{in}_2 \end{split}\ ] ] here and are the optical and mechanical damping rates . and are the input bath operators , which satisfy and , where ^{-1} ] , ] and ] and , . can be obtained by substituting the matrix into eq.(17 ) and can be expressed as }^{-1 } \end{split}\ ] ] time - averaged is further calculated in order to show directly the size of quantum fluctuation under the different parameters . and the calculation result is shown in fig.4 . with varied and .in this calculation , we let and other parameters are same with fig.2 ] although the phase expectation values of every system tend to be equal , the quantum fluctuations between the systems under different parameters still influence the perfection of quantum phase synchronization , as shown in fig.4 .the fluctuation of system error will be reduced to minimum extent while taking and , which draws the conclusion that the best effect of synchronization has been reached .the dynamical evolution of the system is simulated here to test the validity of our criterion . 
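The steady-state fluctuation analysis referred to above reduces to solving a Lyapunov equation for the covariance matrix of the quadrature fluctuations. The snippet below is a generic, hedged illustration of that single step: the 2x2 drift and diffusion matrices are placeholders (the real matrices are 8x8 and follow from linearizing the Langevin equations around the mean-field solution, which is not reproduced here), solve_continuous_lyapunov is scipy's standard routine, and the last line only indicates the type of second-order measure one would read off from the covariance, not the exact expression of eq. (8).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_covariance(A, D):
    """Stationary covariance V of linearized fluctuations, from A V + V A^T + D = 0.
    Requires every eigenvalue of the drift matrix A to have a negative real part."""
    if np.max(np.linalg.eigvals(A).real) >= 0:
        raise ValueError("drift matrix is not stable; no stationary covariance exists")
    return solve_continuous_lyapunov(A, -D)

# Placeholder drift/diffusion matrices standing in for the full linearized system.
A = np.array([[-0.1,  1.0],
              [-1.0, -0.1]])
D = np.diag([0.5, 0.5])
V = stationary_covariance(A, D)

# A second-order synchronization measure would then be built from the variances of the
# error quadratures, e.g. something of the form S ~ 1 / <dq^2 + dp^2> (illustrative only).
S = 1.0 / (V[0, 0] + V[1, 1])
print(np.round(V, 4), round(S, 4))
```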
before the simulationwe let and and assume that the initial phase error between the systems is .the remaining parameters are same as ones used in fig.2 .the simulation results are illustrated in fig.5 in which the unit of ordinate is .it can be known from figs.5(a)(c ) that the synchronization between systems will not be achieved as long as any a switch of and is opened . upon further inspection, we notice that two different systems will never achieve phase synchronization in the other parameters ( ,, and ) but the same when couples disappear .only when two switches are both closed , the systems can realize synchronization , shown as fig.5(d ) .this result is identical with our analysis and it also verifies the quantum synchronous criterion proposed in our work . by the way , the logical relationship and " of two switches is taken as an example in fig.3 , however , the logical relationships or " or exclusive - or " between the switches can also be selected to realize the quantum synchronization of the systems by adjusting appropriate parameters of and .in this paper , we investigate quantum synchronization effects and present the quantitative criteria of complete synchronization and phase synchronization between quantum systems .further , we realize the quantum phase synchronization between coupled optomechanical systems by using our criterisa . through calculating the largest lyapunov exponent , we find that the systems will not reach synchronization unless switches and are closed synchronously ( satisfying the logic relation and " ) when the parameter values are taken at the ranges ] . at the same time , we obtain the information that the fluctuation of system error will reduce to minimum while and by calculating the second order measurement of the quantum phase synchronization .finally , the dynamical evolution of the system is simulated in order to test the validity of our criterion under above parameters .since the concrete quantum synchronization criteria have been proposed and the control theory of quantum synchronization effects is simple and efficient in the work , other designers can set different synchronization conditions to satisfy themselves aims .we believe that our work can bring certain application values in quantum communication , quantum control and quantum logical gates .[ 15]vitali d , gigan s , ferreira a , bhm h r , tombesi p , guerreiro a , vedral v , zeilinger a , aspelmeyer m. optomechanical entanglement between a movable mirror and a cavity field .lett . , 2007 , 98(3 ) :030405 - 4 .[ 22]in eq.(2 ) and eq.(6 ) , we make the error operators divided by artificially in order to ensure .discussing the expectation value of the error operator , however , the physical significance of error will not clear if make the error operators divided by .so , we substitute into eq.(21 ) , instead of .
we propose a quantitative criterion for determining whether coupled quantum systems can achieve complete synchronization or phase synchronization . adopting this criterion , we discuss the quantum synchronization effects between optomechanical systems and find , by calculating the largest lyapunov exponent of the model and the quantum fluctuations respectively , that the error between the systems and the fluctuation of that error are both sensitive to the coupling intensity . by choosing an appropriate coupling intensity , we can control quantum synchronization even under different logical relationships between the switches . finally , we simulate the dynamical evolution of the system to verify the quantum synchronization criterion and to demonstrate the ability to control synchronization .
in orthogonal frequency - division multiple access ( ofdma ) multi - cell networks , the main factor that has a direct impact on the system performance is intercell interference ( ici ) which is caused by the use of the same frequency resource in adjacent cells at the same time .intercell interference coordiantion ( icic ) techniques have been introduced as an effective technique that can significantly mitigate the ici and improve users performance , especially for users experiencing low signal - to - interference - plus - noise ratio ( sinr ) .the two dimensional ( 2-d ) traditional hexagonal network model with deterministic bs locations is the most popular model that is used to analyze a cellular network . in this model , a service area is divided into several hexagonal cells with _same _ radius and each cell is served by a bs which is often located at the center of the cell .tractable analysis was often achieved for a fixed user with limited number of interfering bss or in case of ignoring propagation pathloss .another tractable and simple model is the wyner model which was developed by information theorists and has been widely used to evaluate the performance of cellular networks in both uplink and downlink directions . in wyner and its modified models ,users were assumed to have fixed locations and interference intensity was assumed to be deterministic and homogeneous .however , for a real wireless network , it is clear that users locations may be fixed sometimes , but interference levels vary moderately depending on several factors such as receiver and transmitter locations , transmission conditions , and the number of instantaneous interfering bss . hence , these models are no longer accurate to evaluate the performance of multi - cell wireless networks , thus the ppp network model has been proposed and developed as the accurate and flexible tractable model for cellular networks . in ppp model , the service area is partitioned into non - overlapping voronoi cells in which the number of cells is a random poisson variable .each cell is served by a unique bs that is located at its nucleus .users are distributed as some stationary point process and allowed to connect with the strongest or the closest bss . in the strongest model , each user measures sinr from several candidate bss andselects the bs with the highest sinr . in the closest model , the distances between the user and bss are estimated , and the bs which is nearest to the user is selected . in this work , we assume that each user associates with the nearest bs. the ppp network performance can be evaluated by coverage probability approach and moment generating function ( mgf ) approach .coverage probability approach was proposed to calculate the coverage probability and capacity of a typical user that associates with its nearest base station , and then extended for ppp network enabling frequency reuse . in these work , the closed - form expressions were evaluated by ignoring gaussian noise and only in rayleigh fading . the closed - form expression for coverage probability is yet to be investigated and developed for a composite rayleigh - lognormal fading channel .mgf approach was proposed in to avoid the complexity of coverage probability approach . 
by using this approach, the authors derived the average capacity of a user in a simple ppp network with generalized fading channels .the final equations , however , were not simple because they contained the gauss hypergeometric function which is expressed as an integral .some work that evaluated the effects of rayleigh and shadowing were considered in . however , in , shadowing was not incorporated in channel gain and assumed to be constant when the origin ppp model is rescaled .instead of rescaling the network model , authors in introduced a new approach to derive the mathematical expression for coverage probability for ppp network neglecting noise . in most of papers , it was assumed that each cell had either a user or a single rb , and all bss have same power and transmit continuously .these assumptions led to the fact that the neighbouring bss always created ici to a typical user .hence , the impacts of scheduling algorithms such as round robin on network performance were not clearly presented .furthermore , in all papers that discussed above , the expressions of coverage probability were only presented in the close - form expression in the case of high snr or neglecting gaussian noise , otherwise they were presented with two layer integrals which could not be evaluated . in this paper, it is assumed that each bs is allocated rbs to serve users and has different transmission power .these assumptions are relevant to the practical network because in cellular networks , the transmission powers of bss in different tiers such as macro , pico and fermto , are significantly different .even , the transmission powers of bss in a given tier still vary and depend on the location or transmission condition . the closed - form expression for coverage probability of a typical user in the closest ppp network modelis derived by using coverage probability approach and gauss - legendre approximation .a simple part of this paper was presented in with assumptions that there is only a rb and a user in the network and all bss have same transmission power .furthermore , in this paper , the variance of simulation results is presented to confirm the stable and accuracy of simulation programs .homogeneous poisson model of wireless network is the simplest ppp model with a single hierarchical level . in this model ,the service area is partitioned into non - overlapping voronoi cells in which the number of cells is a random poisson variable .each cell is served by a unique bs that is located at its nucleus ( see figure [ fig : pppnetwork ] ) .users are distributed as some stationary point process and allowed to connect with the closest bss . and,width=340,height=245 ] in the nearest model , an importance parameter is defined as the distance from a typical user to its associated bs . sinceeach user connects with the closest bs , all neighboring bss must be further than .the null probability of a 2-d poisson process with density in a globular area with radius is , then the cumulative distribution function ( cdf ) of is given by : . 
the pdf can be obtained by finding the derivative of the cdf : in figure [ fig : pppnetwork ] , a 6 km x 6 km service area is considered where the distribution of bss is a poisson spatial process with density .it can observed that the boundaries of the cell as well as the locations of bss in this model are generated randomly to correspond with the changes of network operations .the main weakness of this model is that sometimes bss are located very close together , but this can be overcome by taking the average from multiple results of network performance . in this paper, it is assumed that every cell in the network has users and is allocated resource block ( rb ) .the probability where the probability where a bs causes intercell interference ( ici ) to a typical user is represented by a indicator function .this indicator function takes values 1 if the base station in cell and transmit on the same rb at the same time .when the round robin scheduling is deployed , the expected values of is archived by : in downlink cellular network , the transmitted signal from a bs usually experiences multiple propagation phenomena including fast fading , slow fading and path loss .fast fading is caused by multipath propagation phenomena that results in rapid fluctuations of the received signal in terms of phase and amplitude .slow fading , which occurs as the signal travels through large obstructions such as buildings or hills , leads to the slower phase and amplitude changes over the period of transmission .path loss is a natural phenomenon in which the transmitted signal power gradually reduces when it travels over a distance . in this session, we will discuss about the statistical models of these propagation phenomena . in most statistical models of wireless networks, it is assumed that all receiver antennas have the same gain and height .the received signal power at a receiver at a distance from the transmitter can be given by equation [ eq : pathloss ] : the propagation path loss in db unit is obtained by in which is path loss exponent ; p and are standard transmission power of a bs and a power adjustment coefficient , respectively , .the values of , which were found from field measurements are listed in table [ table : pathloss] ' '' '' environment & path loss coefficient + ' '' '' free space & 2 + ' '' '' urban area & 2.7 - 3.5 + ' '' '' suburban area & 3 - 5 + ' '' '' indoor ( line - of - sight ) & 1.6 - 1.8 + due to the variation of with changes of transmission environment , as a signal propagates over a wide range of areas , it can be affected by different attenuation mechanisms . for example , the first propagation area near the bs is free - space area where and the second area closer to the user may be heavily - attenuated area such as urban area where . in a real network ,the path loss can be estimated by measuring signal strength and then be overcome by increasing the transmission power . 
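as a small numerical check of this spatial model , the sketch below draws base - station locations from a 2 - d poisson process in a square window and compares the empirical mean distance from a centrally placed user to its nearest bs with the mean implied by the null probability stated above , namely the pdf f(r ) = 2*pi*lam*r*exp(-pi*lam*r^2 ) with mean 1/(2*sqrt(lam ) ) ; the density , window size and number of trials are arbitrary choices made only for the illustration .

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.25          # BS density per km^2 (illustrative)
side = 60.0         # km; window large enough that edge effects are negligible for a central user
n_trials = 20000

def nearest_bs_distance():
    # the number of BSs in the window is Poisson with mean lam * area
    n_bs = max(rng.poisson(lam * side * side), 1)
    bs = rng.uniform(0.0, side, size=(n_bs, 2))
    user = np.array([side / 2.0, side / 2.0])        # typical user at the centre
    return np.min(np.linalg.norm(bs - user, axis=1))

r = np.array([nearest_bs_distance() for _ in range(n_trials)])

# the pdf implied by the null probability exp(-lam*pi*r^2) is f(r) = 2*pi*lam*r*exp(-pi*lam*r^2),
# whose mean is 1 / (2*sqrt(lam))
print("empirical mean nearest-BS distance :", r.mean())
print("analytical mean nearest-BS distance:", 1.0 / (2.0 * np.sqrt(lam)))
```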
the multipath effect at the mobile receiver due to scattering from local scatters such as buildings in the neighborhood of the receiver causes fast fading , while the variation in the terrain configuration between the base - station and the mobile receiver causes slow shadowing ( figure [ fig : fading ] ) .the received signal envelope is composed of a small scale multipath fading component superimposed on a larger scale or slower shadowing component .the signal envelope of the multipath component can be modeled as a rayleigh distributed rv , and its power can be modeled as an exponential rv .thus , the path power gain has a mixed rayleigh - lognormal distribution which is also known as the suzuki fading distribution model .the pdf of power gain of a signal experiencing rayleigh and lognormal fading is found from the pdf of the product two cascade channels . in which and are mean and variance of rayleigh - lognormal random variable .+ using the substitution , , then the equation [ eq : originalcompo ] becomes the integral in equation [ tmpgauss - hermite ] has the suitable form for gauss - hermite expansion approximation .thus , the pdf can be approximated by : in which * and are the weights and the abscissas of the gauss - hermite polynomial respectively .the approximation becomes more accurate with increasing approximation order .for sufficient approximation , is used . * .hence , the cdf of rayleigh - lognormal rv is obtained by the integral of pdf from 0 to , and is derived in the following steps : since is defined as the channel power gain , is a positive real number .the mgf of can be found as shown below : dx \nonumber \\ & = \sum_{n=1}^{n_p}\frac{\omega_n}{\sqrt{\pi}}\frac{1}{1+s\gamma(a_n ) } \label{eq : mgf}\end{aligned}\ ] ] the received signal power for a user that is communicating with it s serving bs at a distance and a channel power gain is given by : the set of interfering bss is denoted as ; and are the distance and channel power gain from a user to an interfering bs , respectively .the interfering bss are assumed to transmit at the same power .the intercell interference at a user is obtained by combining equation [ eq : desiredsignalpower ] and [ eq : interfernce ] , the received instantaneous sinr(r ) at a user is found from equation [ eq : receivedsinr ] where denotes the gaussian noise at the receiver .the coverage probability of a typical user at a distance from its serving bs for a given threshold is defined as the probability of event in which the received sinr in equation [ eq : receivedsinr ] is larger than a threshold . in other words ,if the received sinr(r ) at a user is larger than sinr threshold , the user can successfully decode the received signal and communicate with the serving bs .the value of is dependent on the receiver sensitivity of the ue .the coverage probability can be written as a function of sinr threshold , bs density and attenuation coefficient and the distance between the user and its serving bs : or for a given user , if is the distance from the user to its serving bs then depends on the power gain from bs , the power gain from interfering bs , is the set of interfering bs , and is the distance from a user to its interfering bs . 
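the gauss - hermite step above can also be checked numerically . the sketch below evaluates the approximated mgf of the composite rayleigh - lognormal ( suzuki ) power gain , i.e. the weighted sum of terms w_n / sqrt(pi ) / ( 1 + s*gamma(a_n ) ) , and compares it with a direct monte carlo average . the node mapping gamma(a_n ) = exp( sqrt(2)*sigma*a_n + mu ) and the db - to - natural - log conversion are the standard choices for this construction and are stated here as assumptions , since the exact normalisation used in the paper is not reproduced ; the shadowing parameters are illustrative .

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# illustrative lognormal shadowing parameters, given in dB and converted to natural-log units
sigma_db, mu_db = 8.0, 0.0
xi = np.log(10.0) / 10.0
sigma, mu = xi * sigma_db, xi * mu_db

n_p = 12
a, w = hermgauss(n_p)                     # Gauss-Hermite abscissas a_n and weights w_n

def mgf_gh(s):
    """Gauss-Hermite approximation of E[exp(-s*g)] for a Suzuki (Rayleigh-lognormal) power gain."""
    gamma_n = np.exp(np.sqrt(2.0) * sigma * a + mu)   # assumed node mapping for the lognormal mean power
    return np.sum(w / np.sqrt(np.pi) / (1.0 + s * gamma_n))

# Monte Carlo check: exponential power (Rayleigh envelope) whose local mean power is lognormal
rng = np.random.default_rng(0)
mean_power = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
g = rng.exponential(scale=mean_power)
for s in (0.1, 1.0, 10.0):
    print(f"s = {s:5.1f}   Gauss-Hermite: {mgf_gh(s):.5f}   Monte Carlo: {np.mean(np.exp(-s * g)):.5f}")
```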
in equation [ eq : coveragesinrform ] , stands for the conditional average coverage probability and it is expressed as a function of variables and , then equation [ eq : coveragesinrform ] can be written as the coverage probability of a typical user in rayleigh - lognormal fading in which bss are distributed as ppp with density and are allocated sub - bands randomly is given by where is the signal - to - noise ratio at the transmitter , ; is defined in equation [ eq : singlefinalexp ] .[ theo : singlecoverage ] : see the appendix .it is observed that there are two exponential parts in equation [ eq : singlecoveragepro ] .the first part , i.e , which represents the transmission power of the serving bs and the coverage threshold , indicates that the coverage probability is proportional to . the second part , i.e , which represents the ici , indicates that the coverage probability is inversely proportional to the exponential function of the ratio between the number of users and rbs .the average coverage probability of a typical user over a cellular network with composite rayleigh - lognormal fading is [ lemmaaveragecov ] in which and are weights and nodes of gauss - legendre rule with order ; as defined in equation [ theo : singlecoverage ] . the average coverage probability is achieved by taking the expected value of in equation [ eq : singlecoveragepro ] with variable the integral in equation [ eq : tmpgausslegenre ] has the suitable form of gauss - legendre approximation .hence , the average coverage probability is approximated by the lemma [ lemmaaveragecov ] is proved .the close - form expression of the average coverage probability has been not yet been derived .hence , the use of gauss - legendre rules is considered as the appropriate approach to find the close - form expression . for or high ,the average coverage probability can be achieved as follows : this is the close - form expression of the average coverage probability of a typical user in the interference - limited ppp network .it is observed from equation that the average coverage probability does not depend on the density of bs which means the power of the desired signal in this case counter - balanced with the power of ici .this results is comparable with others that were published in for the case of rayleigh fading and a single user .the coverage probability of a typical user over network in rayleigh fading only . where f_i(t)=c^ + _m=1^n_gl where is the signal - to - noise ratio at the transmitter , ; is defined in equation [ eq : singlefinalexp ] .rayleigh fading is a special case of composite rayleigh - lognormal fading with and given that , then the coverage probability in this case is derived by equation [ eq : singlecovray ] .the average coverage probability over network is calculated by integrating equation [ eq : singlecoveragepro ] with variable , and then its closed - form is expressed as in equation [ eq : singlecoveragepro ] where was defined in equation [ eq : singlecovray ] .this analytical result is comparable to the corresponding result for rayleigh fading given in .the average rate , i.e. ergodic rate , of a typical randomly user located in the network is defined as \ ] ] where is the received sinr at the user given in equation [ eq : receivedsinr ] ; represents the conditional expected values of over the ppp network with variable . 
since , \nonumber\\ & = \int\limits_{0}^{\infty}\mathbb{p}\left[sinr(r)>e^t-1\right]dt \nonumber \\ & = \int\limits_{0}^{\infty}\overline{p}_c(e^t-1,\lambda,\alpha)dt\end{aligned}\ ] ] in which is the average coverage probability of the typical user in the ppp network and obtained by equation [ eq : averagecoverp ]. using the similar approach in theorem [ lemmaaveragecov ] , the average rate can be approximated by where and are weights and nodes of gauss - legendre rule with order ; and is defined in equation [ eq : averagecoverp ] .the simulation algorithms is described in the following steps : ' '' '' _ * for i=1:1:nor * _ count = 0 ; + _ * for i=1:1:nos * _ + _ 1 .generate numbers of bss _ + _ 2 .generate distances between a user and bss . _ + _ 3 .generate rayleigh - lognormal power gain values . _ + _ 4 .calculate sinr . _ + _ 5 .count outage event _+ if + _ count = count+1 ; _ + end + * end * + coverage probability _p = count / nos ; _ + * end * + variance is obtained by equation [ eq : variance ] + ' '' '' in which and are number of simulation runs and samples per each run , respectively .higher values of and give more accurate and stable results , however , it takes time and requires high performance computers . in this work , and are appropriate choices to obtain the acceptable variance of simulation results ( smaller than ) . the relationship between coverage probability and related parameters are validated and visualized by monte carlo simulations as shown in the following figures .the simulation parameters in figures ( _ if be not mentioned in figures _ ) are summarised in table [ simulationpara ] . ' '' '' * parameter * & * value * + ' '' '' density of bss & + ' '' '' number of rbs & 15 + ' '' '' standard transmission power & ( db ) + ' '' '' power adjustment coefficient & + of serving bs & + ' '' '' coverage threshold & ( db ) + ' '' '' fading channel & db + ' '' '' & db + ' '' '' pathloss exponent & + with higher values of , total power of interfering signals decreases at a faster rate with distance compared to desired signal since the user receives only one useful signal from serving cell and often suffers more than one interfering signals .the average coverage probability is , hence , inversely proportional to path loss exponent . and different values of pathloss exponent ,width=340,height=245 ]figure [ fig : singlesnr ] indicates that when coverage threshold db and , pathloss exponent increases from 3.0 to 3.5 and ends at 4.0 , the average coverage probability will increase by and .the variance of average coverage probability with different values of is shown in table [ table : tc0coverage ] . ' '' '' path loss exponent & 3.0 & 3.5 & 4 + ' '' '' average coverage probability & 0.2362 & 0.3228 & 0.387 + when the coverage threshold increases that means the ue need a higher received sinr to detect and decode the received signals , the probability of successful communication between the user and its associated bs reduces which is reflected in the decrease of coverage probability as shown in figure [ fig : singlesnr ] .it is observed that when the coverage threshold increases from 0 db to 5 db , the average coverage probability reduces by around from 0.2362 to 0.136 ., width=340,height=245 ] when the transmission power p is much greater than the power of gaussian noise , i.e. 
, the equation [ eq : receivedsinr ] can be approximated by hence in this case , the average coverage probability is consistent with the changes of standard transmission power .figure [ fig : multisnr ] indicates that the average coverage probability is proportional to the standard transmission power when db and reaches the upper bound when db .furthermore , it is observed that the upper bound is inversely proportional to the transmission power ratio .for example , when the transmission power ratio increase by 5 times from 1 to 5 , the upper bound reduces by 30% from 0.6 to around 0.42 . , width=340,height=245 ]the impact of the ratio between the number of users and rbs ( i.e. user ratio ) is presented in figure [ fig : differentuserratio ] .when the user ratio increases , it means that more users have connections with the bs and more rbs should be used .hence , the probability which two bss transmit on the same rb at the same time increase which result in an increase of the ici .consequently , the average coverage probability reduces . , width=340,height=245 ] it is clear that an increase in the density of bss means that the user has more opportunities to connect with the bs and the distance from the users and its serving bs may be reduced .however , when the density of bss increases , the number of interfering bss increases .hence , the power of the interfering bss in this case is counter - balanced by the power of the serving bs . consequently , average the coverage probability does not depend on the density of the bs as shown in figure [ fig : differentlambda ] . ,width=340,height=245 ] the square of the variance of lognormal random variable , i.e. , denotes the power of the fading channel .that means if increases , the signal will be more strongly affected by the fading .hence , the average capacity is inversely proportional to the .figure [ fig : capacity ] indicates that when the power of fading channel doubles from 5 db to 8 db , the average data rate reduces by from 1.792 to 1.426 ( bit / hz / s ) in the case of , i.e. all bss have the same transmission power . in all simulation results ,the power adjustment coefficient of the serving bs is set to 1 while the coefficient of the interfering bs can take three values 1 , 5 and 10 from figure [ fig : multisnr ] to [ fig : capacity ] and from 1 to 10 in figure [ fig : capacitypowerratio ] .hence , in this case represents the ratio between the interfering and serving bs of the typical user .the effects of power ratio on user s performance are demonstrated through the gap between curves with different values of and highlighted in the table [ performancedifferntrho ] . ' '' ''power ratio & 1 & 5 & 10 + ' '' '' average coverage probability & 0.4815 & 0.3770 & 0.3195 + & & ( -21.70% ) & ( -33.64% ) + ' '' '' average capacity & 1.426 & 1.089 & 0.9037 + & & ( -23.63% ) & ( -36.63 ) + in the table [ performancedifferntrho ] , the negative percentage represents the percentage by which the user s performance , e.g. average coverage probability and average capacity , reduce when compared to those in the case when power ratio equals 1 . 
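the monte carlo procedure listed in the simulation algorithm above ( generate the number of bss , the distances , the rayleigh - lognormal power gains , form the sinr and count outage events ) can be reproduced along the following lines . this is a sketch under assumed parameter values ; in particular , treating the round - robin interference indicator as an independent thinning of the interferers with probability ( number of users ) / ( number of rbs ) , and normalising the noise through the transmit snr , are modelling assumptions made for the example rather than a verbatim reproduction of the authors' code .

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed values in the spirit of the simulation parameters above (not the exact table entries)
lam = 0.25               # BS density per km^2
side = 30.0              # km, square observation window centred on the typical user
alpha = 3.5              # path-loss exponent
n_user, n_rb = 10, 15    # users and resource blocks per cell
snr_db, t_db = 10.0, 0.0 # transmit SNR and coverage threshold
sigma_db = 8.0           # lognormal shadowing standard deviation
rho = 1.0                # interfering-to-serving power ratio
n_samples = 50_000

xi = np.log(10.0) / 10.0
snr, t_c = 10.0 ** (snr_db / 10.0), 10.0 ** (t_db / 10.0)

def suzuki_gain(size):
    """composite Rayleigh-lognormal power gain: exponential power with a lognormal local mean."""
    return rng.exponential(scale=rng.lognormal(mean=0.0, sigma=xi * sigma_db, size=size))

covered = 0
for _ in range(n_samples):
    n_bs = max(rng.poisson(lam * side * side), 1)
    d = np.sort(np.linalg.norm(rng.uniform(-side / 2, side / 2, (n_bs, 2)), axis=1))
    g = suzuki_gain(n_bs)
    signal = g[0] * d[0] ** (-alpha)                        # nearest BS serves the user
    same_rb = rng.random(n_bs - 1) < n_user / n_rb          # round-robin collision as a thinning (assumption)
    interference = rho * np.sum(g[1:][same_rb] * d[1:][same_rb] ** (-alpha))
    covered += signal / (interference + 1.0 / snr) > t_c    # SINR normalised by the transmit power

print("estimated coverage probability:", covered / n_samples)
```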
for example, and mean the average coverage probability decreases by and when the power ratio increase from 1 to 5 and ends at 10 .the accuracy of simulation is represented through the variance of the simulation results which is defined by in which * nos is the number of simulations * is the simulation result at run .* is the expected vale of nos simulation times .+ in simulation , the results are obtained by taking the average values from 5 runs , the number of samples in each run is upto ( sample ) .the variances of the results obtained is shown in figures are presented in figure [ fig : variance ] .it is observed that in all cases , the variance of simulation results are smaller than .hence , it is said that the results obtained from simulation are accurate and stable .in this paper , the performance of the typical user in terms of coverage probability and capacity in the ppp network in rayleigh - lognormal fading channel was presented .the analytical results for the network with users user in each cell are comparable with the corresponding published results for the network with either a user or a rb .furthermore , the paper assumed that the interfering and serving bss have different transmission power .this assumption corresponds to the differences between the transmission power of bss in different tiers or even in a given tier .the numerical results show that when the coverage threshold which represents the sensitivity of ue increased three times from 0 to 5 db , the average coverage probability reduces by around 42.2% .furthermore , when the power ratio between the transmission power of interfering and serving bs increased from 1 to 5 and ends at , the average capacity of a link reduced by 23.63% and 36.63% , respectively .the coverage probability of a typical user , which is located in cell and served on rb , is defined in equation [ coveragadefinition ] : \nonumber \\ & = \sum_{n=1}^{n_p}\frac{w_n}{\sqrt{\pi } } \mathbb{e}\left[\exp\left(-\frac{t_cr^\alpha(i_\theta+\sigma^2)}{\zeta p\gamma(a_n ) } \right ) \right ] \nonumber \\ & = \sum_{n=1}^{n_p}\frac{w_n}{\sqrt{\pi}}\exp\left(-\frac{t_cr^\alpha\sigma^2}{\zeta p\gamma(a_n ) } \right ) \mathbb{e}\left[\exp\left(-\frac{t_cr^\alpha i_\theta}{\zeta p\gamma(a_n ) } \right ) \right ] \nonumber \\ & = \sum_{n=1}^{n_p}\frac{\omega_n}{\sqrt{\pi}}\exp\left(-\frac{t_cr^\alpha}{\gamma(a_n)}\frac{1}{\zeta snr } \right)\mathbb{e}\left(\exp\left(-f(n)i_\theta \right ) \right)\end{aligned}\ ] ] in which . is the standard transmission power - noise ratio at the base station .considering the expectation and given that the ici was defined in equation [ eq : interfernce ] \nonumber\\ & = \mathbb{e}_\theta\left[\mathbb{e}_{g_u}\prod_{u\in\theta}\tau(rb_i = rb_j)\exp\left(- f(n)\rho pg_ur_u^{-\alpha } \right ) \right ] \nonumber \\ & = \mathbb{e}_\theta\left[\prod_{u\in\theta}\mathbb{e}_{g_u}\epsilon \exp\left(- f(n)\rho pr_u^{-\alpha}g_u \right ) \right]\end{aligned}\ ] ] since is rayleigh - lognormal fading channel whose mgf is calculated from equation [ eq : mgf ] , then \label{eq : coverdefi}\end{aligned}\ ] ] using the properties of ppp probability generating function given that and letting , then the integral becomes \\ = & r^2 ( i_1-i_2)\end{aligned}\ ] ] using properties of gamma function , the first integral is obtained by for accurate computation , is chosen .subsequently , the expectation can be approximated by
the spatial poisson point process ( ppp ) network , in which base stations ( bss ) are distributed according to a poisson distribution , is currently used as an accurate model for analysing the performance of a cellular network . most current work evaluating ppp networks in rayleigh fading channels assumes that the bss have fixed transmission power levels and that there is only one resource block ( rb ) or one user in each cell . in this paper , rayleigh - lognormal fading channels are considered , and it is assumed that each cell is allocated resource blocks ( rbs ) to serve users . furthermore , the serving and interfering bss of a typical user are assumed to transmit at different power levels . the closed - form expression for the network coverage probability at both low and high snr is derived by using the gauss - legendre approximation . the analytical results indicate that the performance of the typical user is proportional to the transmission power and density of bss when db and , and reaches its upper bound when db or . the variance of the monte carlo simulation is examined to verify the stability and accuracy of the simulation results . _ index terms _ : random cellular network , homogeneous cellular network , coverage probability , frequency reuse , rayleigh - lognormal .
the reconstruction of a function from given measurements is a fundamental task in data processing and occupies numerous directions of research in mathematics and engineering .a typical problem requires the reconstruction or approximation of a physical field from pointwise measurements .a field may be a distribution of temperatures or water pollution or a solution to a diffusion equation , in mathematical terminology a field is simply a smooth function of several variables .the standard assumption on the smoothness is that the field is bandlimited to a compact spectrum .if the spectrum is a fundamental domain of a lattice in or a symmetric convex polygon in , then there exist precise reconstruction formulas from sufficiently many samples in analogy to the shannon - whittaker - kotelnikov sampling theorem .let be the fourier transform in the exponent .] of or , where denotes the imaginary unit and denotes the scalar product between vectors and in .we say that is bandlimited to the closed set , if its fourier transform is supported on . in this case we write for the space of fields with finite energy bandlimited to the spectrum . in the context of field estimationwe always assume that the spectrum is a compact , symmetric , convex set .the classical theory of sampling and reconstructing of such high - dimensional bandlimited fields dates back to petersen and middleton in signal analysis and to beurling in harmonic analysis .both identified conditions for reconstructing such fields from their point measurements in .further research on non - uniform sampling generated more results on conditions for perfect reconstruction from samples taken at non - uniformly distributed spatial locations .see and the survey .previous work deals primarily with the problem of reconstructing the field from measurements taken by a collection of static sensors distributed in space , like that shown in figure [ fig : sampr2a ] . in this casethe performance metric for quantifying the efficiency of a sampling scheme is the spatial density of samples .this is the average number of sensors per unit volume required for the stable sampling of the monitored region . in this paperwe investigate a different method for the acquisition of the samples , which we call _ mobile sampling_. the samples are taken by a mobile sensor that moves along a continuous path , as is shown in figure [ fig : sampr2b ] . in such a caseit is often relatively inexpensive to increase the spatial sampling rate along the sensor s path while the main cost of the sampling scheme comes from the total distance that needs to be traveled by the moving sensor .hence it is reasonable to assume that the sensor can record the field values at an arbitrarily high but finite resolution on its path .the new method for the acquisition of samples changes the mathematical nature of the problem completely . when using samples from static sensors , we need to establish a sampling inequality with evaluations of the form for constants independent of . for mobile sampling ,we need to establish a `` continuous '' sampling inequality of the form where is the sum of line integrals along the paths .again , the performance metric should reflect the cost required for the data acquisition .for the appropriate metric is the average number of sensors , i.e. , samples , per unit volume .for some of us have argued in and in that the relevant metric should be the average path length traveled by the sensors per unit volume ( or area , if ) .we call this metric the _ path density_. 
such a metric is directly relevant in applications like environmental monitoring using moving sensors , .in retrospect this metric is also useful in designing -space trajectories for magnetic resonance imaging ( mri ) , where the path density can be used as a proxy for the total scanning time per unit area in -space .the continuous sampling inequality raises many interesting questions both for engineers and for mathematicians . on the mathematical sideare the abstract construction of continuous frames in the sense of ( * ? ? ?* chaps . 3 and 5 ) or or the analysis of sampling measures and their properties , see for a theory of sampling measures for fock spaces and for bergmann spaces .on the engineering side , we need to design concrete , realizable trajectories with a small path density for bandlimited fields with convex spectrum .this problem was introduced by some of us in and and answered for the special case of trajectory sets that consist of a union of uniformly spaced lines .the contribution in this article is twofold .first , we study arbitrary trajectory sets of parallel lines and derive a necessary condition for the minimal path density in the style of landau s famous result in . extending the results in we show , in theorem [ th : parallel ] , that the minimal path density achievable by sampling along trajectories of arbitrary parallel lines is exactly the area of the maximal hyperplane section of the spectrum .we work under the standard assumption that the spectrum of the signals is convex and symmetric ( although some results holds for more general spectra , see section [ sec : conc ] ) . at first glance, the sampling along parallel lines seems to be an easy generalization of point sampling , because it can be reduced to the sampling problem in smaller dimensions .however , even this case offers some interesting and challenging problems that we did not envision before .for instance , in section 3 we will use the existence of universal samplings sets as established by olevsky and ulanovsky and by matei and meyer in order to prove that the frame bounds are uniform for sections of convex sets .in addition , this result enriches our knowledge about the properties of universal sampling sets . for another crucial argumentwe need the brunn - minkowski inequality .of course , the mathematician s immediate instinct is to study more general sets of trajectories and try to prove a result analogous to landau s necessary condition for the path density .we show in proposition [ prop : infzero ] that such a result can not hold by constructing stable trajectory sets with arbitrarily small path density .thus in a sense there is no optimal configuration of paths and the problem of optimizing the path density is ill - posed .this answers a question raised in .however , as soon as we minimize over trajectory sets with given stability parameters ( uniform frame bounds ) the optimization problem becomes well - posed .our main density result ( theorem [ th : infab ] ) shows that the path density for a stable set of trajectories is bounded below by an expression involving the stability parameters and the geometry of the spectrum .this is a report on a successful and fertile collaboration between engineers and mathematicians .we , the mathematicians , are intrigued by the questions that motivate mobile sensing . 
although the mathematical literature has investigated generalizations of sampling ( the theorems of sereda - logvinenko and the theory of sampling measures ) for the sake of generalization , we would never have dreamt of the particular conditions on the paths that are imposed by practical considerations ( see condition ) .we , the engineers , are intrigued by the mathematical subtleties that popped up at every corner and subsequently led to an extended theory of path sampling .the paper is organized as follows . in section [ sec : probstat ]we describe the formal problem statement .then , in section [ sec : optresults ] , we characterize the minimal density of sampling trajectories consisting of parallel lines .section [ sec : disc ] treats the problem of optimizing over arbitrary trajectories , and section [ sec : conc ] presents some conclusions .the proofs of some technical lemmas needed throughout the article are postponed to section [ sec_tec ] , so as not to obstruct the flow of the article .* notation*. we use to denote the canonical inner product on and , and to denote the unit vector along the -th coordinate axis . for denote the hyperplane orthogonal to through the origin by , and denotes the orthogonal projection of a set onto the hyperplane .for a set we use to denote the volume of with respect to lebesgue measure . by denote the closed euclidean ball of radius centered at and , and by ^d ] be a bounded interval and is a curve in .we say that is rectifiable , if is finite , where the supremum is taken over all finite partitions . in this case is called the arc length of .every piecewise differentiable curve is rectifiable . if quantities and satisfy the condition that there exist with we write .we also use the notation to indicate that there exists such that . to compare the size of functions , we use the landau notation and .the symbol means that there exist and such that for all , we have , and if .we say that a set of points is _ uniformly discrete _ or _ separated _ if , i.e. , there exists such that for any two distinct points we have .for example , a lattice in is uniformly discrete , but a sequence in converging to a point in is not .the lower and upper beurling densities of are for every compact set with non - empty interior and whose boundary has measure zero , the lower density can be also calculated as : and a similar statement holds for the upper density ( * ? ? ?* lemma 4 ) .the _ covering constant _ of a set is a set is called _ relatively separated _ if it has a finite covering constant , which holds if and only if it has finite upper beurling density .a set is called a _ convex body _ if it is convex , compact and has non - empty interior .a convex body is called _ centered _ if and symmetric if .the following fact will be frequently used in approximation arguments .[ lemma_conv_1 ] let be a centered convex body .let , then and .a _ trajectory _ in is the image of curve , i.e. , such that the restriction of to any finite interval is rectifiable .trajectory set _ is defined as a countable collection of trajectories : where is a countable set of indices and every is a trajectory . in analogy to the beurling density we define the lower and upper _ _ of a trajectory set as follows : let be the total arc - length of the trajectories in . then the _ lower path density _ and the upper path density are if , then is said to possess the homogeneous path density .an illustration comparing beurling and path densities is provided in figure [ fig : beurlingandpd ] . 
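the comparison can also be made on a toy numerical example . the sketch below estimates , in the plane , the beurling density of the integer lattice and the path density of a uniform set of horizontal lines with spacing delta , by counting points and arc length inside a large disc and dividing by its area ; the former tends to 1 and the latter to 1/delta as the radius grows . the radius , centre and spacing are arbitrary choices made only for the illustration .

```python
import numpy as np

def beurling_density_estimate(points, center, r):
    """number of points of the set inside the disc B_r(center), divided by its area (d = 2)."""
    inside = np.linalg.norm(points - center, axis=1) <= r
    return inside.sum() / (np.pi * r ** 2)

def path_density_estimate(spacing, center, r):
    """total arc length inside B_r(center) of the horizontal lines y = k*spacing, divided by the area."""
    ks = np.arange(np.floor((center[1] - r) / spacing), np.ceil((center[1] + r) / spacing) + 1)
    y = ks * spacing
    chord = 2.0 * np.sqrt(np.maximum(r ** 2 - (y - center[1]) ** 2, 0.0))
    return chord.sum() / (np.pi * r ** 2)

center, r = np.array([0.3, 0.7]), 200.0
lattice = np.array([(i, j) for i in range(-250, 251) for j in range(-250, 251)], dtype=float)

print("Beurling density of the integer lattice :", beurling_density_estimate(lattice, center, r))  # ~1
print("path density of lines with spacing 0.5  :", path_density_estimate(0.5, center, r))          # ~2
```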
as with beurling s density ,the path density does not depend on the particular choice of the euclidean ball .more precisely , we have the following result . [ lemma_dens_tile ]let be a compact set with non - empty interior and with a boundary of measure zero and let be the total arc - length of trajectories from located in . then and .lemma [ lemma_dens_tile ] can be proved by following landau s proof of the analogous result for beurling s density ( * ? ? ?* lemma 4 ) .we refer the reader to that article .the simplest example of a trajectory set in is a sequence of equispaced parallel lines in ( e.g. , see figure [ fig : unifset2d ] ) .we call such a trajectory set a _uniform set in . such a uniform set has a path density equal to , where is the spacing between the lines ( see ( * ? ? ?* lemma 2.2 ) ) .similarly a _ uniform set in _ is defined as a collection of parallel lines in such that the cross - section forms a -dimensional lattice , see figure [ fig : unifset3d ] .recall that for static sampling with fixed sensors the appropriate notion of stability was the sampling inequality . for mobile sampling alongtrajectory sets we require similar conditions for the stability and are led to the following definition .[ trajset ] a trajectory set of the form ( [ eqn : trajset ] ) is called a _stable nyquist trajectory set _ for if satisfies the following conditions : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [ con : recon2 ] _ [ nyquist ] _ there exists a uniformly discrete set of points on the trajectories in , , such that forms a set of stable sampling for .[ con : path3 ] _ [ non - degeneracy ] _ there exists a function such that with the following property : for every and every , there is a rectifiable curve \to { \mathbb{r}^d } , ] consisting of concatenated line segments that contains each point , , and we can now prove the following . [lem : c2forq ] let be a trajectory set consisting of lines parallel to a vector and let be the intersection of with the hyperplane orthogonal to .assume that . then the trajectory set satisfies condition . for every ball ,we need to construct a single path containing all line segments of , but without increasing the path length significantly . for this we need to connect the points of intersection of on each hemisphere by a short path .such a choice in dimension is plotted in figure [ fig : ext ] . in higher dimensions ,we resort to lemma [ lemma_short_path ] . 
for a rigorous argument, we may assume without loss of generality that the lines in are parallel to .let be arbitrary .let and denote the half - spaces and the hyperplane let , and .to each point corresponds a symmetric point .let us further denote and .since , it follows that by lemma [ lemma_short_path ] , there exists a path contained in consisting of line segments , that passes through all the points in , and such that .let the sequence denote the order in which points in appear in . by symmetry , the sequence of line segments connecting the points is a path contained in that connects all points in and has length . and ball is prolongated to a rectifiable path with negligible additional length . ]we construct a rectifiable curve containing as follows .let denote the curve comprising the sequence of line segments connecting the points since for all , the curve contains the line segment connecting and exactly once , it follows that contains .furthermore for all , the curve contains either the line segment connecting and or that connecting and . thus counting all line segments in we obtain invoking again lemma [ lemma_short_path ] we obtain a curve contained in that goes through each point in point and has length .finally we form by linking to by means of a line segment contained in ( of length at most ) .the curve is completely contained in , it contains and it is rectifiable since it consists of a finite number of line segments .in addition , from the length estimates above we conclude that , as desired .( note that in all the estimates , the implicit constants depend on the set but not on the center of the ball . ) for the proof of lemma [ lem : c2forq ] we do not need the full strength of lemma [ lemma_short_path ] . if we accept ( without proof )that in condition instead of balls one may use cubes with side length and aligned parallel to lines in , then lemma [ lemma_short_path ] can be replaced by the following , more elementary argument .if a cube is parallel to , then contains two copies of .as is relatively separated , it can be approximated by a finite union of lattices isomorphic to ( with asymptotically small error ) .it is now elementary to connect the lattice points in ^{d-1} ] , we have . using the fact that is a sampling set for we have hence for every bandlimited to . by landau s result on necessary density conditions for sampling we deduce that .thus , by lemma [ lemma_dens_parallel ] , it follows that the conclusion follows because was arbitrary and . to prove that equality holds in wemust show that there are nyquist trajectories with path density arbitrarily close to the volume of the section of through the origin .the following proposition shows that this problem can be reduced to finding sampling sets for each section of with uniform bounds . precisely , for let be the section of at height . then .[ prop_sections ] let be a closed set . 
assume that is a set of stable sampling for with uniform bounds for all , then for every with if in addition is compact , then there exists a lattice , such that set and for .then and .we further define the partial fourier transform since plancherel s theorem yields using the support property , we obtain for almost all that with constants independent of by assumption .finally , if in addition is compact , then is bandlimited to a compact set and the integrals involving can be replaced by sums over a suitably dense lattice .proposition [ prop_sections ] applies to spectra of the form with two continuous functions .this set can have a very large projection onto the last coordinate while remains small . in order to prove theorem [ th : parallel ], we need to find , for each centered symmetric convex body and each direction , a stable nyquist trajectory for consisting of lines parallel to and with a path - density close to the measure of the central section of by . after a rotation, we may assume that and analyze the horizontal sections of . according to proposition [ prop_sections ] ,we need to find a set , such that ( a ) its beurling density is close to and ( b ) is simultaneously a sampling set for all spaces of functions bandlimited for all with uniform sampling bounds . in the special casewhen is contained in an `` oblique '' cylinder , i.e. , for some vector and all ( figure [ fig : sampr2c ] ) , it suffices to find a sampling set only for with density close to the critical one .this problem was already solved in . in general, the horizontal sections are _ not _ contained in translates of the central section . as a simple example we mention the regular octahedron and two sections perpendicular to .the octahedron fits into a cylinder with a cross - section that is strictly larger than the central _ minimal _ cross - section ( see figure [ fig_secs ] ) . therefore the simple argument sketched above does not work . to solve the general case ,we need the concept of universal sampling sets , as introduced in . given , a _-universal sampling set _ is a set with uniform density that is a sampling set for , for all compact spectra with .it is known that for all there exist universal sampling sets .for example , in dimension the set is a universal sampling set with density ( with denoting the fractional part of ) .on the other hand , if the requirement that be compact is dropped , universal sampling sets do not exist .a universal -sampling set is a set of stable sampling for all compact spectra with , but the frame bounds may depend on .we now argue that when the spectra consist of sections of a compact convex body , then these bounds can be chosen to be uniform . we need the following technical lemma ,whose proof is deferred to section [ sec_proof_lemma_sections ] .[ lemma_sections ] let be a convex and compact set , , and . then there exists such that for all we now show that the sections of a convex compact set admit a universal sampling set with uniform stability bounds .[ prop_unif_bounds ] let be a convex and compact set and let let be an -universal sampling set . then is a sampling set for all , , with sampling bounds uniform in .let be compact interval such that .let .since is closed , there exists such that we let denote the slightly enlarged section . 
with this notation , by lemma [ lemma_sections ] , there exists such that the family of intervals is an open cover of .then , by compactness , for finitely many .hence , for every , there exists such that since for , the universal sampling property implies that is a sampling set for with bounds .let hence , is a sampling set for with bounds for all .since , according to , every section is contained in some set , it follows that is a sampling set with bounds for all with .note finally that , for .this completes the proof .with proposition [ prop_unif_bounds ] we can now show the estimates for the necessary path density for convex spectra . from proposition [ prop : unifbetterthanparallel ]it follows that let us show that all these inequalities are actually equalities .assume without loss of generality that and note that since is convex and symmetric the section through the origin is the one with maximal area .this is a consequence of the brunn - minkowski inequality , see for example . given a number satisfying let be a -universal sampling set and let be a set of lines parallel to that go through . since possesses finite ( uniform ) density , satisfies condition by lemma [ lem : c2forq ] .in addition , the fact that possesses a uniform density and lemma [ lemma_dens_parallel ] imply that and that is homogeneous .propositions [ prop_unif_bounds ] and [ prop_sections ] imply that is a nyquist trajectory set .this shows that .the conclusion follows by letting tend to .we now consider the problem of designing trajectory sets without requiring the trajectories to be straight lines . in the following propositionwe show that the optimization problem is ill - posed by constructing a sequence of trajectory sets in with arbitrarily small path density . [prop : infzero ] let be a compact set . for every exists a trajectory set , such that .thus , by enlarging if necessary , we can assume that it is a cube .since the statement to be proved is invariant under dilations we further assume that ^ 2 ] be a finite set of cardinality and its periodization with period . then is separated and contained in . since , it follows that is a sampling set for } ] be the curve granted by condition in definition [ trajset ]. then ) \supset p \cap b^d_a(x ) \supset\lambda \cap b^d_a(x) ] , where and is a constant that depends only on . * if ^d ] be a rectifiable curve .let ] such that ( i ) contains the entire portion of inside and ( ii ) .let us consider the set ).\end{aligned}\ ] ] we estimate in two different ways .firstly , if , then . in view of we have consequently , secondly , since is a sum of two compact sets , is compact .let and consider the ( open ) set .by lemma [ lemma_conv_1 ] , .consequently , \right \}}}\end{aligned}\ ] ] is an open cover of , and there exists a finite set ] , then explicitly since contains a sampling set with stability parameters , proposition [ prop : gaps ] implies that for all and with .then and thus also . by proposition [ prop : optpdforgap ]we obtain that .since by ^d } = 2^{d-1 } \delta_{[-1/2,1/2]^d } c^{d-1 } \big(\frac{b \sigma ( \partial\omega)}{a { \ensuremath{\left| \omega \right| } } } \big)^{d-1 } \ , , \end{aligned}\ ] ] the conclusion follows . for the case ^ 2 ] and the explicit estimate for from proposition [ prop : gaps ] : have studied the problem of designing trajectories for sampling bandlimited spatial fields using mobile sensors . 
we have identified trajectory sets composed of parallel lines that ( i ) possess minimal path density and ( ii ) admit the stable reconstruction of bandlimited fields from measurements taken on these trajectories .we also have shown that the problem of minimizing the path density is ill - posed if we allow arbitrary trajectory sets that admit stable reconstruction . as a positive resultwe have shown that the problem is well - posed if we restrict the trajectory sets to contain a stable sampling set with given stability parameters .we point out that , for the results presented here , the assumption that the spectrum of the signals is convex is not essential , but a matter of convenience . indeed , in most results the convexity of can be replaced by a suitable assumption on the regularity of its boundary ( eg . lemma [ lemma_sections ] ) . in theorem [th : parallel ] the convexity of is used to guarantee that the maximal area of the cross - sections by hyperplanes is attained by a hyperplane that goes through the origin . for non - convex spectra, a characterization analogous to the one in theorem [ th : parallel ] should consider cross - sections by arbitrary hyperplanes .this work opens up several possible research directions .one question is whether we can solve the problem exactly .this would require a tight lower bound on the path density of every trajectory set in .another interesting variation concerns trajectory sets consisting of arbitrary , not necessarily parallel lines and the necessary path density .k. grchenig was partially supported by national research network s106 sise and by the project p 26273-n25 of the austrian science fund ( fwf ) .j. l. romero gratefully acknowledges support from the project m1586-n25 of the austrian science fund ( fwf ) and from an individual marie curie fellowship , within the 7th .european community framework program , under grant piif - ga-2012 - 327063 .j. unnikrishnan and m. vetterli were supported by erc advanced investigators grant : sparse sampling : theory , algorithms and applications sparsam no .by applying a suitable rotation , we may assume without loss of generality that for some .then the projection of onto the hyperplane determined by is simply for we set and . since is compact ,the minima and maxima exist ; and since is convex , the line segments are contained in , so that consequently \setminus [ \tau _ - ( x ' ) , \tau _ + ( x')]\ } \ , , \end{aligned}\ ] ] and each fibre over has length . now using fubini s theorem, we obtain that \setminus [ \tau _ - ( x ' ) , \tau _ + ( x')]}(t ) dt \ , dx ' \\ & \leq \alpha \int _ { p_{q^\perp } e } 1 \ , dx ' = \alpha |p_{q^\perp } e| = the following proposition - that is part ( b ) of proposition [ prop : gaps ] , restated for convenience - gives an explicit estimate for the gap of sampling sets for the spectrum ^d ] , where is given by .let , so .we start by noting some facts . without loss of generalitylet us assume that .suppose that the conclusion does not hold .then there exists a sequence of real numbers such that and hence there exist points such that since is closed , there exists such that .since is convex , so is .consequently , .let us estimate therefore , there exist such that .since , this contradicts the fact that .let us enumerate the points of as .without loss of generality we further assume that , for ( indeed , if , for some , then we may remove from the set without altering the set . ) .let us consider the sets . for ,let . 
by lemma [ lemma_slide ], it follows that since , , for all .hence , considering the vectors we see that therefore , let us decompose as hence , as claimed .j. j. benedetto and h .- c .nonuniform sampling and spiral mri reconstruction . in a.aldroubi , a. laine , and m. unser , editors , _ proc .spie symp .wavelets applications in signal and image processing viii _ ,volume 4119 , pages 130141 , june 2000 .a. beurling .local harmonic analysis with some applications to differential operators . in _ some recent advances in the basic sciences , vol .annual sci .belfer grad .school sci ., yeshiva univ . , new york , 19621964 )_ , pages 109125 .belfer graduate school of science , yeshiva univ ., new york , 1966 .a. beurling .on balayage of measures in fourier transforms ( seminar , inst .for advanced studies , 1959 - 60 , unpublished ) . in l.carleson , p. malliavin , j. neuberger , and j. wermer , editors , _ collected works of arne beurling_. birkhauser , boston , 1989 .k. grchenig and t. strohmer .numerical and theoretical aspects of non - uniform sampling of band - limited images .in f. marvasti , editor , _ nonuniform sampling : theory and applications _, chapter 6 , pages 283 324 .kluwer , 2001 .j. unnikrishnan and m. vetterli .sampling trajectories for mobile sensing . in _ proc. 2011 49th annual allerton conference on communication , control , and computing ( allerton ) _ , pages 12301237 , allerton house , uiuc , illinois , usa , sept . 2011 .j. unnikrishnan and m. vetterli . on sampling a high - dimensional bandlimited field on a union of shifted lattices .in _ information theory proceedings ( isit ) , 2012 ieee international symposium on _ , pages 1468 1472 , july 2012 .j. unnikrishnan and m. vetterli . on optimal sampling trajectories for mobile sensing . in_ proceedigns of the 10th international conference on sampling theory and applications ( sampta 2013 ) _ , pages 352355 , july 2013 .
we study the design of sampling trajectories for the stable sampling and reconstruction of bandlimited spatial fields using mobile sensors. the spectrum is assumed to be a symmetric convex set. as a performance metric we use the path density of the set of sampling trajectories, defined as the total distance traveled by the moving sensors per unit volume of the spatial region being monitored. focusing first on parallel lines, we identify the set of parallel lines with minimal path density that contains a set of stable sampling for fields bandlimited to a known set. we then show that the problem becomes ill-posed when the optimization is performed over arbitrary trajectories, by exhibiting feasible trajectory sets with arbitrarily small path density. the problem becomes well-posed, however, if the stability margins are specified explicitly: we demonstrate this by obtaining a non-trivial lower bound on the path density of any set of trajectories that contains a sampling set with explicitly specified stability bounds.
highly efficient solvers for elliptic partial differential equations ( pdes ) are required in many areas of fluid modelling , such as numerical weather- and climate- prediction ( nwp ) , subsurface flow simulations and global ocean models .often these equations need to be solved in `` flat '' domains with high aspect ratio , representing a subsurface aquifer or the earth s atmosphere . in both casesthe horizontal extent of the area of interest is much larger than the vertical size .for example , the euler equations , which describe the large scale atmospheric flow , need to be integrated efficiently in the dynamical core of nwp codes like the met office unified model .many forecast centres such as the met office and european centre for medium - range weather forecasts ( ecmwf ) use semi - implicit semi - lagrangian ( sisl ) time stepping to advance the atmospheric fields forward in time because it allows for larger model time steps and thus better computational efficiency .however , this method requires the solution of a anisotropic elliptic pde for the pressure correction in a thin spherical shell at every time step . as the elliptic solve can account for a significant fraction of the total model runtime , it is important to use algorithmically efficient and parallel scalable algorithms .suitably preconditioned krylov - subspace and multigrid methods ( see e.g. ) have been shown to be highly efficient for the solution of elliptic pdes encountered in numerical weather- and climate prediction ( see and the comprehensive review in ) .multigrid methods are algorithmically optimal , i.e. the number of iterations required to solve a pde to the accuracy of the discretisation error is independent of the grid resolution . however - as far as we are aware - multigrid algorithms are currently not widely implemented operationally in atmospheric models and one of the aims of this paper is to demonstrate that they can be used very successfully in fluid simulations at high aspect ratio . whereas `` black - box '' algebraic multigrid ( amg ) solvers such as the ones implemented in the dune - istl and hypre libraries can be applied under very general circumstances on unstructured grids and automatically adapt to potential anisotropies , they suffer from additional setup costs and lead to larger matrix stencils on the coarse levels . on( semi- ) structured grids which are typical in many atmospheric and oceanographic applications , geometric multigrid algorithms usually give much better performance , as they can be adapted to the structure of the problem by the developer .in contrast to amg algorithms which explicitly store the matrix on each level , it is possible to use a matrix - free approach : instead of reading the matrix from memory , it is reconstructed on - the - fly from a small number of `` profiles '' .this leads to a more regular memory access pattern and significantly reduces the storage costs , in particular if these profiles can be factorised into a horizontal and vertical component .as the code is memory bandwidth limited this also has a direct impact on the performance of the solver .robust geometric multigrid methods adapt the smoother or coarse grid transfer operators to deal with very general anisotropies in the problem ( see e.g. ). however , this robustness comes at a price and these methods are often computationally expensive and difficult to parallelise . 
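to make the storage argument concrete, consider a discretisation with an $n_{\mathrm{stencil}}$-point stencil, $n_h$ horizontal cells and $n_z$ vertical levels (the counts below are rough and only illustrate the scaling; they are not the exact bookkeeping of any particular code). storing the stencil explicitly requires of the order of
\begin{equation*}
  n_{\mathrm{stencil}}\, n_h\, n_z
\end{equation*}
matrix coefficients, whereas with factorised profiles it suffices to keep a handful of scalars per horizontal cell or edge plus a few vertical vectors of length $n_z$ that are shared by all columns, i.e. of the order of
\begin{equation*}
  c_1\, n_h + c_2\, n_z
\end{equation*}
numbers, with small constants $c_1, c_2$. since the solver is memory bandwidth limited, this reduction translates directly into less data traffic per operator application.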
in the problems we consider ,the tensor - product structure of the underlying mesh and the grid - aligned anisotropy make it possible to use the much simpler but highly efficient tensor - product multigrid approach described for example in : line - relaxation in the strongly coupled direction is combined with semi - coarsening in the other directions only .the implementation is straightforward : in addition to an obvious modification of the intergrid operators , every smoother application requires the solution of a tridiagonal system of size in each vertical column with grid cells .the tridiagonal solve requires operations and hence the total cost per iteration is still proportional to the total number of unknowns .the method is also inherently parallel as in atmospheric applications domain decomposition is typically in the horizontal direction only . in method was analysed theoretically for equations with a strong vertical anisotropy on a two dimensional tensor - product grid .the authors show that optimal convergence of the tensor - product multigrid algorithm in two dimensions follows from the optimal convergence of the standard multigrid algorithm for a set of one - dimensional elliptic problems in the horizontal direction .while the original work in applies in two dimensions , it has been extended to three dimensions in and the algorithm has recently been applied successfully to three dimensional problems in atmospheric modelling in .although the proof in relies on the coefficients in the pde to factorise exactly into horizontal - only and vertical - only contributions , we stress that this property is not required anywhere in the implementation of the multigrid algorithm . in practicewe expect the algorithm to work well also for approximately factorising coefficients and under suitable assumptions we are able to also prove this rigorously .to demonstrate this numerically , we carry out experiments for the elliptic pde arising from semi - implicit semi - lagrangian time stepping in the dynamical core of atmospheric models such as the met office unified model , where the coefficients only factorise approximately but the multigrid convergence is largely unaffected .alternatively , we also investigate approximate factorisations of the atmospheric profiles and then apply the tensor product multigrid algorithm to the resulting , perturbed pressure equation to precondition iterative methods for the original system , such as a simple richardon iteration or bicgstab . as the operator is usually `` well behaved '' in this direction ( i.e. it is smooth and does not have large variations on small length scales ) , the multigrid algorithm will converge in a very small number of iterations .an additional advantage of applying the multigrid method only to the perturbed problem with factorised profiles is the significant reduction in storage requirements for the matrix .as the algorithm is memory bound and the cost of a matrix application or a tridiagonal solve depends on the efficiency with which the matrix can be read from memory this leads to performance gains in the preconditioner : we find that the time per iteration can be reduced by around , but this has to be balanced with a possibly worse convergence rate .nevertheless , our numerical experiments show , that in some cases the factorised preconditioner can be faster overall . 
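to make the cost argument explicit, the following is a minimal sketch of the thomas algorithm for a single vertical column, written with generic names and a plain std::vector interface rather than the data structures of the actual solver; it assumes the tridiagonal system needs no pivoting, which is a reasonable assumption for the diagonally dominant column matrices arising here:
\begin{verbatim}
#include <cstddef>
#include <vector>

// minimal sketch of the thomas algorithm for one vertical column of length nz.
// the tridiagonal matrix is passed as its sub-diagonal a, main diagonal b and
// super-diagonal c (a[0] and c[nz-1] are unused); the right hand side d is
// overwritten with the solution.  names are illustrative only.
void thomas_solve(const std::vector<double>& a,
                  const std::vector<double>& b,
                  const std::vector<double>& c,
                  std::vector<double>& d)
{
  const std::size_t nz = b.size();
  std::vector<double> bp(b);                 // working copy of the diagonal
  // forward elimination: one pass down the column
  for (std::size_t k = 1; k < nz; ++k) {
    const double m = a[k] / bp[k - 1];
    bp[k] -= m * c[k - 1];
    d[k]  -= m * d[k - 1];
  }
  // backward substitution: one pass back up the column
  d[nz - 1] /= bp[nz - 1];
  for (std::size_t k = nz - 1; k-- > 0;) {
    d[k] = (d[k] - c[k] * d[k + 1]) / bp[k];
  }
}
\end{verbatim}
each column is independent of the others, so one application of the line smoother calls such a kernel once per horizontal cell, and the work per smoother sweep stays proportional to the total number of unknowns.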
on novelmanycore computer architectures , such as gpus , where around 30 - 40 floating point operations can be carried out per global memory access , we expect the performance gains from this matrix - free tensor - product implementation to be even more dramatic . if the matrix is stored in tensor product format and the local stencil is calculated on - the - fly , the costs for the matrix construction can essentially be neglected compared to the cost of reading fields from memory .for example , carrying out a sparse matrix - vector product requires 1 global memory read and global writes per grid cell compared to reads and 1 write if the -point matrix stencil is stored explicitly - a speedup of almost a factor two .the benefits of this matrix - free implementation on gpus has recently been shown in a similar context in . in state - of - the - art global weather prediction modelsthe horizontal resolution is of the order of tens of kilometres with the aim of reducing this to around one kilometre in the next decade ( the number of vertical grid cells is typically around ) .the resulting problems with degrees of freedom can only be solved on operational timescales if their scalability can be guaranteed on massively parallel computers .in addition to the sequential algorithmic performance we demonstrate the parallel scalability of our solvers on hector , the uk s national supercomputer which is hosted and managed by the edinburgh parallel computing centre ( epcc ) . we find that our solvers show very good weak scaling on up to 20,480 amd opteron cores and can solve a linear system with 11 billion unknowns in less than 5 seconds ( reducing the residual by five orders of magnitude ) .all our code is implemented in the distributed and unified numerics environment ( dune ) , which is an object oriented c++ library and provides easy to use interfaces to common parallel grid implementations such as alugrid and ug . due to the modular structure of the library and because we can rely on the underlying parallel grid implementations , the implementation of our solvers on tensor - product grids is straightforward . throughout the code performanceis guaranteed by using generic metaprogramming based on c++ templates .[ [ structure ] ] structure + + + + + + + + + this paper is organised as follows . in section [ sec : ellipticpde ] we describe the pressure correction equation arising in semi - implicit semi - lagrangian time stepping in atmospheric models and discuss the discretisation of the resulting linear pde with particular emphasis on the tensor - product structure of the grid .the theory of the tensor - product multigrid algorithm is reviewed in section [ sec : tensorproductmultigrid ] where we extend the analysis in to three dimensions following . 
in this sectionwe also prove the convergence of the preconditioned richardson iteration for non - factorising profiles .the grid structure and the discretisation of the equation as well as the implementation of our algorithms in the dune framework are described in section [ sec : implementation ] .numerical results for different test cases are presented together with parallel scaling tests in section [ sec : numericalresults ] .we conclude and present ideas for future work in section [ sec : conclusions ] .some more technical aspects can be found in the appendices , in particular the finite - volume discretisation is described in detail in appendix [ sec : discretisation ] .the elliptic pde which arises in semi - lagrangian semi - implicit time stepping in atmospheric forecast models is derived for example in for the endgame dynamical core of the unified model . for simplicity ( and in contrast to )the work in this paper is based on a finite volume discretisation of a continuous version of this pde and in the following we outline the main steps in the construction of the corresponding linear algebraic problem .the euler equations describe large scale atmospheric flow as a set of coupled non - linear differential equations for the velocity , ( exner- ) pressure , potential temperature and density . { \frac{d\theta}{dt } } & = r_\theta \qquad \text{(thermodynamic equation)}\\[1ex ] { \frac{d\rho}{dt } } & = - \rho \nabla\cdot v \qquad\text{(mass conservation)}\\[1ex ] \rho\theta & = \gamma\pi^{\gamma } \qquad\text{(equation of state ) } \end{aligned } \label{eqn : eulerequations}\ ] ] the -terms describe external- and sub - gridscale- forcings such as gravity and unresolved convection .the constants and are defined as 3 & p_0/r_d , & & , & & r_d / c_p , where is a reference pressure ; and are the specific heat capacity and specific gas constant of dry air .system can be written schematically for the state vector as .\label{eqn : eulerschematic}\ ] ] advection is described in the semi - lagrangian framework , i.e. material time derivatives are replaced by where is the departure point of a parcel of air at time which is advected to position at time .the right - hand - side of ( [ eqn : eulerschematic ] ) is treated semi - implicitly .because of the small vertical grid spacing and the resulting large courant number of vertical sound waves , vertical advection needs to be treated fully implicitly , but some of the other terms are evaluated at the previous time step and thus treated explicitly ; we write .we use the -method with off - centering parameter for implicit time stepping and replace & = \mathcal{n}^{(\text{impl.})}[\phi({\ensuremath{\boldsymbol{x}}},t ) ] + \mathcal{n}^{(\text{expl.})}[\phi({\ensuremath{\boldsymbol{x}}},t ) ] \\ & \mapsto \mu \mathcal{n}^{(\text{impl.})}[\phi^{(t+\delta t)}({\ensuremath{\boldsymbol{x } } } ) ] + ( 1-\mu ) \mathcal{n}^{(\text{impl.})}[\phi^{(t)}({\ensuremath{\boldsymbol{x } } } ) ] + \mathcal{n}^{(\text{expl.})}[\phi^{(t)}({\ensuremath{\boldsymbol{x } } } ) ] \end{aligned } \label{eqn : semiimplicit}\ ] ] and in the following we always assume that which corresponds to the scheme described in . by eliminating the potential temperature , density and all velocities from the resulting equation ,one ( non - linear ) equation for the pressure at the next time step can be obtained . 
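for reference, combining the semi-lagrangian treatment of advection with the off-centred implicit weighting gives, for a generic prognostic variable $\phi$, an update of the following schematic form (a standard way of writing such a scheme, with generic symbols; it is meant as a summary of the description above rather than the exact discrete equations of the model):
\begin{equation*}
  \frac{\phi^{(t+\Delta t)}(\boldsymbol{x}) - \phi^{(t)}(\boldsymbol{x}_D)}{\Delta t}
  \;=\; \mu\, \mathcal{N}^{(\mathrm{impl.})}\!\left[\phi^{(t+\Delta t)}(\boldsymbol{x})\right]
  + (1-\mu)\, \mathcal{N}^{(\mathrm{impl.})}\!\left[\phi^{(t)}(\boldsymbol{x})\right]
  + \mathcal{N}^{(\mathrm{expl.})}\!\left[\phi^{(t)}(\boldsymbol{x})\right],
\end{equation*}
where $\boldsymbol{x}_D$ is the departure point and $\mu$ the off-centring parameter. eliminating all variables except the pressure from the resulting coupled system yields the non-linear pressure equation whose solution by an inexact newton iteration is described next.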
to solve this equation via ( inexact ) newton iteration, all fields are linearised around a suitable reference state ( which can for example be the atmospheric fields at the previous time step ) denoted by subscript `` ref '' . to thisend the pressure at the next time step is written as with analogous expressions for and ; the reference velocities are assumed to be zero .it should , however , be kept in mind that the linearisation does not need to be `` exact '' as the non - linear equation can be solved with an inexact newton iteration . in particular ,some terms can be moved to the right hand side which is equivalent to treating them explicitly or lagging them in the non - linear iteration .naturally , there will be a tradeoff between faster convergence of the newton iteration and the cost of the inversion of the linear operator ; for example , in all couplings to non - direct neighbours , which can be large in the case of steep orography , are moved to the rhs to reduce the size of the stencil of the linear operator . while these considerations are relevant for the optimisation of the non - linear solve in a particular model , in this article we focus on the solution of the linear equation , which is the computationally most expensive component of the newton iteration .once the exner pressure has been calculated , the evaluation of the remaining atmospheric fields at the next time step is straightforward and does not require any additional ( non-)linear solves .in contrast to explicit time stepping methods the courant number can be chosen significantly larger than 1 , which makes semi - implicit semi - lagrangian time stepping very popular in operational models . however , because of the short advective time scale and to ensure that large scale flow is described correctly , the courant number is usually limited to around 10 , i.e. the implicit time step size is no more than one order of magnitude larger than what would be allowed in an explicit method . to evaluate the overall performance of the method , the benefits of a larger time step would have to be balanced against the additional cost for the elliptic solve . for ease of notationwe simply write in the following and drop the time indices .then the non linear equation for is of the form to solve this equation iteratively we expand all fields around a reference state ( which can , for example , be given by the fields at the previous time step ) to obtain a linear operator . as discussed above , in practise some termsmight be lagged in the non - linear iteration , i.e. moved to the right hand side of the linear equation . at each step of the nonlinear iterationwe write for the approximate solution to ( [ eqn : non_linear ] ) and update the pressure as follows : every iteration requires the solution of a linear equation for the pressure correction , which we denote as in the following . to construct the linear operator we proceed as follows : starting from ( [ eqn : eulerequations ] ) the semi - lagrangian framework in ( [ eqn : semilagrangian ] ) is used for horizontal advection and vertical advection is treated implicitly ( to ensure that mass is exactly conserved , advection is treated implicitly in all three spatial dimensions in the mass equation ) .the right hand sides are expanded according to ( [ eqn : semiimplicit ] ) .we linearise around reference profiles , and which fulfil the equation of state , i.e. write etc . 
andassume that the velocity expansion is around zero .if we split up the velocity into a tangential- and vertical- component the time - discretised euler equations in ( [ eqn : eulerequations ] ) finally become in a spherical geometry w & = & r_w ' - { \ensuremath{\mu \delta t}}c_p \left({\ensuremath{\theta_{\operatorname{ref}}}}{\partial_r}\pi ' + ( { \partial_r}{\ensuremath{\pi_{\operatorname{ref } } } } ) \theta'\right ) , \label{eqn : sislmom_vert}\\[1ex ] \theta ' & = & r'_\theta - { \ensuremath{\mu \delta t}}({\partial_r}{\ensuremath{\theta_{\operatorname{ref}}}})w , \label{eqn : sislthermodyn}\\[1ex ] \rho ' & = & r_\rho ' - { \ensuremath{\mu \delta t}}\left ( \frac{1}{r^2}{\partial_r}\left(r^2 { \ensuremath{\rho_{\operatorname{ref } } } } w\right ) + \frac{1}{r}\left({\ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}\cdot \left({\ensuremath{\rho_{\operatorname{ref}}}}{\ensuremath{\boldsymbol{v}}}_{{{\mathcal{s}}}}\right)\right ) \right),\label{eqn : sislmass}\\[1ex ] \pi ' & = & \frac{{\ensuremath{\pi_{\operatorname{ref}}}}}{\gamma}\left(\frac{\rho'}{{\ensuremath{\rho_{\operatorname{ref } } } } } + \frac{\theta'}{{\ensuremath{\theta_{\operatorname{ref}}}}}\right ) , \label{eqn : sislstate}\end{aligned}\ ] ] where is the normal component of the derivative and is the component tangential to a unit sphere with outer normal .any terms that depend on the current time step are absorbed in the -terms .we then rewrite ( [ eqn : sislstate ] ) as a function of and insert it together with ( [ eqn : sislmom_horiz ] ) into ( [ eqn : sislmass ] ) to obtain an equation with , and only by solving ( [ eqn : sislmom_vert ] ) and ( [ eqn : sislthermodyn ] ) for and we obtain 2 w & = ( f_2- (_r ) ) , & & = ( f_3+()^2 c_p (_r)(_r ) ) [ eqn : wtheta ] where arises from the implicit treatment of vertical advection and the ( squared ) vertical buoyancy ( or brunt - visl- ) frequency is given by the functions and only depend on the fields at the current time step .we rescale the vertical coordinate by the radius of the earth and the potential temperature by a reference temperature at ground level to make it dimensionless .finally , we multiply equation by and denote the typical horizontal velocity by is the speed of sound in a parcel of air with temperature . furthermore we introduce the dimensionless quantity after eliminating and from ( [ eqn : substitution1 ] ) with the help of ( [ eqn : wtheta ] ) we obtain a second order equation for the pressure correction : \right\ } \\ &-\omega^4 \frac{1}{r^2 } { \ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}\cdot\left({\ensuremath{\lambda_{\operatorname{ref}}}}{\ensuremath{\rho_{\operatorname{ref}}}}({\ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}{\ensuremath{\pi_{\operatorname{ref}}}})({\partial_r}{\ensuremath{\theta_{\operatorname{ref}}}})({\partial_r}\pi')\right ) + \gamma \frac{{\ensuremath{\rho_{\operatorname{ref}}}}}{{\ensuremath{\pi_{\operatorname{ref}}}}}\pi ' = rhs \end{aligned}\label{eqn : helmholtzapp}\ ] ] the term arises due to the last term in ( [ eqn : sislmom_horiz ] ) . in term is not included in the linear operator since all terms which stem from reference profiles that do not depend exclusively on the vertical coordinate are neglected . 
to be consistent with this approach, the term is assumed to be moved to the right hand side of the linear equation in the following .the first two terms in the curly brackets are the sum of a vertical advection and a vertical diffusion term .in contrast , in , the linear pressure correction equation is derived from the discretised euler equations .however , it can be shown that ( [ eqn : helmholtzapp ] ) is identical to the continuum limit of equation ( 67 ) in if the latter is written down explicitly in spherical coordinates . denoting the unknown pressure correction by , as is common in the mathematical literature , the elliptic operator can be written as & = -\omega^2 \begin{pmatrix } { \partial_r } , & \frac{1}{r}{\ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}\end{pmatrix}^t \begin{pmatrix } \alpha_r & 0 \\[1ex ] 0 & \alpha_{{{\mathcal{s}}}}{\ensuremath{\operatorname{id}_{2\times 2}}}\end{pmatrix } \begin{pmatrix } { \partial_r}\\[1ex ] \frac{1}{r}{\ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}\end{pmatrix } u - \omega^2 \begin{pmatrix } \xi_r , & 0 \end{pmatrix}^t \begin{pmatrix } { \partial_r}\\[1ex ] \frac{1}{r}{\ensuremath{{\boldsymbol{\nabla}}_{\!\!{\mathcal{s}}}}}\end{pmatrix } u + \beta u \end{aligned } \label{eqn : helmholtzvector}\ ] ] where is the identity matrix .the equation is solved in a thin spherical shell , ] and .in contrast to global latitude - longitude grids , on quasi - uniform grids the ratio between the smallest and largest grid spacing is bounded . to ensure that the horizontal acoustic courant number ( where is the smallest grid spacing ) remains unchanged as the horizontal resolutionis increased , the time step size has to decrease linearly with .a simple scaling argument shows that the vertical advection term is much smaller than the diffusion term at high resolution .the functions , , and are referred to as `` profiles '' in the following and can be obtained from the background fields , and by comparing the elliptic operators in ( [ eqn : helmholtzapp ] ) and ( [ eqn : helmholtzvector ] ) : 4 _ r & = r^2 ( = r^2 _ ) , & _ & = , & _ r & = (_r ) , & & = .[ eqn : profiles ] after discretisation , the helmholtz equation in ( [ eqn : helmholtzvector ] ) can be written as a large algebraic system of the form where the finite - dimensional field vector represents the pressure correction in the entire atmosphere .if we assume that the horizontal resolution is around 1 kilometre and vertical grid levels are used , each atmospheric variable has degrees of freedom . problems of this size can only be solved with highly efficient iterative solvers and on massively parallel computers .current forecast models , such as the met office unified model , use suitable preconditioned krylov subspace methods ( see e.g. 
for an overview ) such as bicgstab .due to the flatness of the domain the equation is highly anisotropic : typical grid spacings in the horizontal direction are at the order of tens of kilometres , whereas the distance between vertical levels can be as small as a few metres close to the ground .while this anisotropy is partially compensated by the ratio in ( [ eqn : profiles ] ) , it remains large in particular for small time steps for which ( recall that we chose units such that ) .as discussed in the literature , a highly efficient preconditioner for krylov methods in this case is vertical line relaxation .this amounts to a block jacobi or block sor iteration where the degrees of freedom in one vertical column are relaxed simultaneously by solving a tridiagonal equation .however , ( geometric ) multigrid algorithms have also been considered by the atmospheric modelling community and recently some of the authors have demonstrated their superior behaviour for a simplified model equation .efficient algorithms for the solution of anisotropic equations have been studied extensively in the multigrid literature . for general anisotropies in convection dominated problems ,robust schemes have been designed by adapting the smoother ( see e.g. ) or the coarsening strategy and the restriction / prolongation operators ( see e.g. ) .for example , in alternating approximate plane- and line- smoothers are discussed . alternatively ,if algebraic multigrid ( amg ) is used , the coarse grids and smoothers will automatically adapt to any anisotropies and the method can even be applied on unstructured grids . however , amg has additional setup costs for the coarse grids and explicitly stores the coarse grid matrices .this has a significant impact on the performance in bandwidth - dominated applications .while these `` black - box '' approaches work well for very general problems and do not require anisotropies to be grid - aligned , they can be computationally expensive and difficult to parallelise .the problem is simplified significantly in the case of grid - aligned anisotropies , which are typical in atmospheric- and ocean - modelling applications .it has long been known that if the problem is anisotropic in one direction only , this can be dealt with effectively by either adapting the smoother or coarsening strategy ( see e.g. and also the discussion for simple anisotropic model problems in ) .both methods can be combined as for example discussed in where the solution of two dimensional anisotropic problems with grid - aligned anisotropies is studied . by using line - relaxation in the -direction together with semi - coarsening in the -direction only , the multigrid solver is robust with respect to anisotropies in both the - and -direction as long as they are grid - aligned . in the followingwe will refer to multigrid algorithms which combine horizontal semi - coarsening with vertical line relaxation in the strongly coupled direction as _ tensor product multigrid _ ( tpmg ) methods ( both in 2d and in 3d ) . in convergence of such a tensor - product multigrid solver for elliptic equations of the form in a two dimensional domain \times[0,1] ] : +\beta(r,{\ensuremath{\hat{{\ensuremath{\boldsymbol{r } } } } } } ) u(r,{\ensuremath{\hat{{\ensuremath{\boldsymbol{r } } } } } } ) = f(r,{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}})\ ] ] with ] and tensorise them to obtain the product space over .we write and . 
for any two functions , in bilinear form associated with the operator in can be expressed in terms of the bilinear forms as using the kronecker product , the galerkin - matrix representation of the bilinear form can then be expressed in terms of the galerkin matrices of the bilinear forms in ( [ eqn : bilinearformstp ] ) , i.e. here correspond to , and respectively and describe the vertical derivative- and mass- matrices .analogously the derivative and mass matrix in the horizontal direction are described by , which correspond to and . to use the tensor - product multigrid approach , we further assume that there is a nested sequence of finite element spaces over , where the subscript denotes the multigrid level ; for the icosahedral and cubed sphere grid this hierarchy naturally exists .we then use to discretise the full three dimensional problem on the multigrid level , i.e. we do not coarsen in the vertical direction .the line smoother then corresponds to collectively relaxing all degrees of freedom in each of the -dimensional subspaces where are the nodal basis functions on level .the two - dimensional prolongation and restriction naturally induce intergrid transfer operators between the three dimensional spaces and by , . on each multigrid levelthe matrix can be constructed recursively using the galerkin product and it is easy to see that and the ( block-)smoother can be written as in the case of weighted block - jacobi relaxation , for example , the matrices and are the weighted diagonals of and .one v - cycle of the tensor product multigrid algorithm can now be written down compactly as follows .on the finest level , this v - cycle is applied to the right hand side of the original problem until the residual error is reduced below a certain tolerance .we typically choose the numbers of smoothing steps to be and , for , and on the coarsest grid . to simply apply a few steps of the smoother on the coarsest gridis sufficient because the cfl condition ensures that the system matrix on the coarsest grid is dominated by the mass matrix term and thus well - conditioned .[ [ reduction - of - the - theory - to - two - dimensions ] ] reduction of the theory to two dimensions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the crucial idea in is now that it is possible to construct a set of invariant -dimensional subspaces such that the convergence of the tensor product multigrid method for the problem in can be analysed by independently studying the convergence of a standard multigrid algorithm in each of these subspaces over .this can be seen as follows : because both and are positive definite , there exists an eigenbasis , , of and a corresponding set of strictly positive eigenvalues such that 2 ( ^2 a^r+b^r)_j^r & = _ j m^r_j^r , & m^r_j^r,_k^r= _ j , k j , k\{1, ,n}. [ eqn : eigenvalueequation ] it follows from simple identities for the inner product on tensor product spaces that and so the subspaces spanned by the different are -orthogonal , with a similar property for the smoother matrix . as we do not coarsen in the vertical direction , the intergrid operators and do not mix different subspaces . for each space is trivially isomorphic to and each of the independent subspaces corresponds to a two dimensional problem on with the following matrix representation of the linear operator and smoother : 2 a^_,j & ^2 a_^+ _ j m_^ , & w^_,j & ^2 w^a,_+ _ j w^m,_. 
in particular , is the galerkin matrix which is obtained from discretising the bilinear form on .this bilinear form is the weak formulation of the following two dimensional operator : [ [ convergence - of - two - dimensional - multigrid ] ] convergence of two dimensional multigrid + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + according to theorem 10.7.15 in , the multigrid v - cycle converges for each of the two dimensional operators , if there exists a such that the smoothing property and the approximation property are satisfied on all levels .the smoothing property ( [ eqn : smoothingproperty2d ] ) is automatically satisfied for ( sufficiently damped ) point jacobi and sor smoothers ( remark 4.6.5 in ) . to see this ,denote the matrix consisting only of the diagonal entries of by and use , i.e. weighted point jacobi relaxation .the relaxation parameter is chosen such that where is the spectral norm .then ( [ eqn : smoothingproperty2d ] ) follows by definition from the equivalence applied to .a proof of the approximation property is significantly harder and we will not give it here ( see lemma 10.7.8 and remark 10.7.13 in ) .it depends on some minimal regularity assumptions on the profiles and .the constant may depend on the contrast , i.e. the maximum variation of the profiles .we stress again that we use quasi - uniform grids for the horizontal discretisation ( see the review in for a discussion of grids considered in meteorological application ) .in contrast to latitude - longitude grids , where the convergent grid lines near the pole introduce an additional horizontal anisotropy , the ratio between the smallest and largest grid spacing is bounded from below in the grids we consider .hence the simple block - jacobi and block - sor smoothers which relax all degrees of freedom in one vertical column simultaneously will be efficient and no additional horizontal plane smoothing or selective semi - coarsening as described in is required .as the two dimensional equations are solved on the unit sphere , the operator could become near - singular if .however , it is easy to see that this is not the case . as noted in section [ sec : linearequation ] we require the scaling to keep the courant number fixed as the horizontal resolution increases .therefore the second order term in ( [ eqn : cont2doperator ] ) is of order and hence the relative importance of the two terms in ( [ eqn : cont2doperator ] ) is independent of grid resolution .it follows that all the eigenvalues of ( [ eqn : eigenvalueequation ] ) are of order 1 .it is a reasonable assumption that the profiles , are `` well - behaved '' in the sense that they are dominated by large scale variations due to global weather systems , small scale phenomena such as strong local variations carry substantially less energy . in this casewe expect the spectrum of to be bounded from above and below by two constants which are independent of .[ [ convergence - of - three - dimensional - multigrid ] ] convergence of three dimensional multigrid + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as argued above , the three dimensional problem can be decoupled into a set of two - dimensional problems . 
due to the particular form of the smoother and of the prolongation / restriction matrices ,it is in fact easy to verify that the smoothing property and the approximation property 2 a^ _ & w^ _ & 0 ( a^_+1)^-1 - p_^h ( a^_)^-1 r_^h & c_a ( w^_+1)^-1 , for the tensor product multigrid algorithm for the original 3d problem on follow directly from the respective properties ( [ eqn : smoothingproperty2d ] ) and ( [ eqn : approximationproperty2d ] ) for the 2d problems on , for all .let us assume that ( [ eqn : smoothingproperty2d ] ) and ( [ eqn : approximationproperty2d ] ) are satisfied , for all , and let denote the iteration matrix for one step of the tensor product multigrid v - cycle defined above , i.e. where is the exact solution of the equation .then the convergence rate independent of , where is the energy norm induced by .this is the main result given and proved for the two dimensional case in ( * ? ? ?* theorem 2 ) . as we have seen above, the proof extends directly also to three dimensions and to our pressure correction problem here . in that casethe assumptions of the theorem are satisfied as discussed above .we now assume that the matrix can be written as the sum of a perfectly factorising symmetric positive definite matrix and a small correction , namely .we quantify the deviation from perfect factorisation by and assume that .we also assume that the theory in section [ sec : prooffactorising ] applies and the multigrid iteration for the factorising operator converges , i.e. the error is reduced by a factor in every multigrid v - cycle .the richardson iteration for the full operator preconditioned with multigrid v - cycle cycles for can then formally be written as \left({{a^{\otimes}}}\right)^{-1}\left(\mathbf{f}-a\mathbf{u}^{(k)}\right).\ ] ] then at every step the error to the exact solution is reduced by a factor \left({{a^{\otimes}}}\right)^{-1}a||_{{{a^{\otimes } } } } } \nonumber\\ & \ \le \ { ||\left({{a^{\otimes}}}\right)^{-1}a-{\operatorname{id}}||_{{{a^{\otimes}}}}}+{||{{m^{\otimes}}}||_{{{a^{\otimes}}}}}^\mu{||\left({{a^{\otimes}}}\right)^{-1}a||_{{{a^{\otimes } } } } } \\delta + ( 1 + \delta ) ( \rho^\otimes_a)^\mu .\label{eqn : richardsonconvergencerate}\end{aligned}\ ] ] thus , for an arbitrary the convergence rate is less than 1 , provided the number of v - cycles . on the other hand , if we only apply one v - cycle ( ) , then a convergence rate can still be guaranteed provided .similar results can also be proved for the convergence of krylov solvers , such as bicgstab , preconditioned with multigrid v - cycle cycles for .in practise , and as we demonstrate in the following , the tensor product preconditioners will be efficient for a wider range of problems not covered by the formal theory .we now describe the discretisation and dune implementation of the solvers we used in our numerical experiments .for simplicity we use a simple finite volume discretisation for all numerical experiments in this work .more complex schemes such as mimetic mixed finite elements are also currently under consideration for the development of dynamical cores and might require the solution of the equation in a different pressure space , such as higher order dg space . 
however , the basic ideas described in this work can still be applied .grids used in meteorological applications ( and also in many ocean models ) usually have a tensor - product structure .they consist of a semi - structured two dimensional horizontal grid on the surface of the sphere and a one - dimensional vertical grid which is often graded to achieve higher resolution near the surface .in particular each three dimensional grid cell can be uniquely identified by the corresponding horizontal cell and a vertical index .this tensor - product structure in itself has important implications for the performance of any implementation : while it might be necessary to use indirect indexing for the horizontal grid , the vertical grid can always be addressed directly . as typically the number of vertical levels is large with , the cost of indirect addressing in the horizontal direction can be `` hidden '' , a phenomenon which we have confirmed numerically for our solvers in section [ sec : indirectaddressing ] .furthermore fields can be stored such that the levels in each column are stored consecutively in memory , which leads to efficient cache utilisation ( however , as discussed in a different memory layout has to be used on gpu architectures where the vertically - consecutive storage would prevent coalesced memory access in the tridiagonal solve ) . to be able to use the geometric multigrid solvers described in this work, we also assume that the horizontal grid has a natural hierarchy ; this is true for the icosahedral grids which are used in our numerical tests where each triangular coarse grid cell consist of four smaller triangles on the next - finer multigrid level .in contrast to a simple longitude - latitude grid , these semi - structured grids have no pole - problem , i.e. the ratio between the size of the largest and smallest grid spacing is bounded .this implies that there is no additional horizontal anisotropy which would further complicate the construction of a solver ( however , as has been shown in , the tensor - product multigrid approach can still be applied for longitude - latitude grids if the horizontal coarsening strategy is adapted appropriately ) . in the finite volume discretisationany continuous field is approximated by its average value in a grid cell .in particular , for each _ horizontal _ grid cell we store one vector of length representing the field in the vertical column .in this cell the discrete equation ( [ eqn : algebraicequation ] ) for the -vector can be written as where the sum runs over all horizontal neighbours of . in this expression and tridiagonal- and diagonal matrices of the form 2 a_t & = ( _ t,_t,_t ) , & a_tt & = ( _ tt ) .[ eqn : tridiagonalsystem ] both matrices can be reconstructed on - the - fly from a number of scalar quantities , which are obtained from a discrete approximation of the profiles in ( [ eqn : profiles ] ) and geometric factors .this reduces the amount of main memory access , in particular if the factorising profiles in the preconditioner are used . for each horizontal cell the explicit expressions of the diagonals , and upper- and lower- subdiagonals , depend on whether the profiles can be factorised or not and are given explicitly in the next section . 
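to illustrate the structure of the column equation above, the following sketch applies the operator to a field in a matrix-free fashion. the flat, index-based representation of the horizontal grid and all names are placeholders and not taken from the actual dune code, where the horizontal cells are traversed through the grid interface and the matrix entries are reconstructed from the stored profiles and geometric factors:
\begin{verbatim}
#include <cstddef>
#include <vector>

using Column = std::vector<double>;   // nz values in one vertical column
using Field  = std::vector<Column>;   // one column per horizontal cell

// schematic application y = A u.  per horizontal cell T, the vectors sub/diag/sup
// hold the three diagonals of the tridiagonal block A_T, and offdiag[T][e] holds
// the diagonal of the coupling block A_TT' to the e-th horizontal neighbour.
// the caller is assumed to have sized y to the number of horizontal cells.
void apply_operator(const std::vector<std::vector<std::size_t>>& neighbours,
                    const Field& sub, const Field& diag, const Field& sup,
                    const std::vector<Field>& offdiag,
                    const Field& u, Field& y)
{
  for (std::size_t T = 0; T < u.size(); ++T) {
    const std::size_t nz = u[T].size();
    y[T].assign(nz, 0.0);
    // tridiagonal couplings within the vertical column (block A_T)
    for (std::size_t k = 0; k < nz; ++k) {
      y[T][k] = diag[T][k] * u[T][k];
      if (k > 0)      y[T][k] += sub[T][k] * u[T][k - 1];
      if (k + 1 < nz) y[T][k] += sup[T][k] * u[T][k + 1];
    }
    // diagonal couplings to the columns of the horizontal neighbours (blocks A_TT')
    for (std::size_t e = 0; e < neighbours[T].size(); ++e) {
      const Column& uN = u[neighbours[T][e]];
      for (std::size_t k = 0; k < nz; ++k)
        y[T][k] += offdiag[T][e][k] * uN[k];
    }
  }
}
\end{verbatim}
the block-sor smoother described next works on exactly the same data, but instead of multiplying by the tridiagonal block it subtracts the neighbour contributions from the right hand side and then performs a tridiagonal solve in the column (the thomas kernel sketched earlier), blended with the previous iterate through the overrelaxation factor.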
a block - sor iteration with overrelaxation factor then be written as and requires a tridiagonal solve in each vertical column to apply the inverse of the matrix to the residual .this can be implemented using the thomas algorithm .all code was implemented using the dune library , which provides a set of c++ classes for solving pdes using grid based methods . in particularit provides interfaces to ( parallel ) grid implementations such as alugrid and uggrid .the implementation of the grids is separated from data which is attached to the grid by the user via mapper functions between different grid entities ( cells , edges , vertices ) and the local data arrays . in our casewe used the dune - grid module to implement a two dimensional host grid and then attached a whole column of length to each horizontal grid cell .we represent the matrix as follows : in the non - factorising case ( ) , we store a vector of length at each horizontal cell to represent the zero order term , two vectors and of length to represent the vertical diffusion and advection terms , and one vector of length at each horizontal edge .the explicit form of these vectors is obtained by a standard finite volume discretisation of the problem and is written down in equation ( [ eqn : hatfunctionsnonfac ] ) in the appendix .the vectors , , and in ( [ eqn : tridiagonalsystem ] ) are in the factorising case ( ) it is only necessary to store _ scalars _ , , and on the horizontal cells and edges .in addition to this , four vectors of length and ( , , and ) which arise from the vertical discretisation need to be stored once for the entire grid .the explicit form of these quantities is given in ( [ eqn : hatfunctionsfac ] ) in the appendix .similarly to ( [ eqn : tridiagonalnonfactorising ] ) the matrix entries in ( [ eqn : tridiagonalsystem ] ) can be calculated on the fly as the scalars , , and only need to be read once per vertical column and the associated cost can be hidden together with the cost of indirect addressing on the horizontal grid for large enough . moreover , the vectors , , and require only a small amount of memory and can be cached . in summary , the cost of memory access for the matrix is likely to be significantly smaller than the cost of accessing field vectors such as and when solving the tridiagonal system in ( [ eqn : blockjacobi ] ) or in the matrix vector product .the dune - grid interface provides iterators over the horizontal grid cells and over the neighbours of each cell . to implement for example the sparse matrix vector product ( spmv ) in ( [ eqn : columnequation ] )we iterate over all horizontal grid cells , and then in each cell we loop over the edges for all neighbours to read the profiles stored on the cells and edges from memory and construct the matrices and .these are then applied to the local vectors and to evaluate , which requires inner loops over the vertical levels .of all grids that are currently available through the dune interface we found that only alugrid can be used to represent a two - dimensional sphere embedded in three dimensional space .unfortunately the scalability of alugrid is very limited because in a parallel implementation the entire grid is stored on each processor .alternatively we used a three dimensional uggrid implementation for a thin spherical shell consisting of one vertical layer to represent the unit sphere .based on the coarsest grid , finer multigrid levels can be constructed by refinement in the horizontal direction only . 
any geometric quantities in this thin three dimensional grid can then be related to the corresponding values on the two dimensional grid by simple scaling factors .we implemented both a gnomonic cubed sphere grid and an icosahedral grid , for which the grid points are projected onto the sphere , and all numerical results reported in this work were obtained with the icosahedral grid . as is typical in atmospheric applications ,parallel domain decomposition is in the horizontal direction only .as the dune host grids that we used are already inherently parallel , parallelisation of the code was straightforward by calling the relevant halo exchange routines when necessary .load balancing was achieved by choosing the problem size such that the number of cells on the coarses level is identical to the number of processors and each processor `` owns '' one coarse grid cell and the corresponding child cells .while at first sight this might cause a problem for large core counts because the coarsest level still has a relatively large number of degrees of freedom and the multigrid hierarchy is very shallow , it turns out that the zero order term in the helmholtz equation ( [ eqn : helmholtzvector ] ) averts potential problems .this is because relative to the zero order term the importance of the horizontal diffusion term decreases with a factor of four on each coarse level , and so after a small number of coarsening steps the problem is well conditioned and can be solved by a very small number of smoothing iterations .an alternative and more physical explanation is that any interactions in the continuous pde in ( [ eqn : helmholtzvector ] ) are exponentially damped with an intrinsic length scale and hence it is not necessary to coarsen the grid beyond this scale . this has been confirmed numerically for a simplified test problem in , where it has been shown that as little as four multigrid levels still give very good convergence for typical grid spacings and time step sizes . in the parallel scaling tests in this workwe typically used 6 or 7 multigrid levels and one iteration of the smoother to solve the coarse grid problem .in the following we study the performance of the two tensor - product preconditioners and described in section [ sec : tpmgs ] applied to two test cases in atmospheric flow simulation .we confirm the optimality and robustness of even for non - factorising profiles , compare the performance of the two variants and study their parallel scalability .all runs ( including the sequential tests ) were carried out on the phase 3 configuration of the hector supercomputer , which consists 2816 compute nodes with two 16-core amd opteron 2.3ghz interlagos processors each .the entire cluster contains 90,112 cores in total .the code was compiled with version 4.6.3 of the gnu c compiler . unless stated otherwise we always used 6 multigrid levels with two vertical line - sor pre- and post- smoothing steps on each level ( ) ; the overrelaxation factor in the smoother was set to .one smoother iteration is used to solve the coarse grid problem .we use linear interpolation to prolongate the solution to the next - finer grid ( ) . 
the right hand side , which in each cell represent a cell integral of a field , is restricted to the next - coarser level ( ) by summing the fine grid values of all four fine grid cells comprising the coarse grid cell .recall that these integrid - operations only require interpolation and summation in the horizontal direction .the tolerance in the iterative solver was set to , i.e. we iterate until the residual has been reduced by at least five orders of magnitude .the number of vertical levels was set to , which is typical for current meteorological applications .we note , however , that all runtimes should be directly proportional to ( and this is confirmed in the following section ) . while data in one vertical column is stored consecutively in memory and can be addressed directly , in general indirect addressing has to be used in the horizontal directions .however , as the horizontal lookup is only required once per column , the relative penalty for this will be very small provided is large enough . as discussed in , in this case the overhead from indirect addressingcan be `` hidden '' behind work in the vertical direction . to verify this we ran our solver with two different dune grid implementations and measured the time per iteration for different numbers of vertical levels .we expect this time to depend on as follows where is the overhead of indirect addressing and depends on the grid implementation . the constant encapsulates any other work which is only done once per column and both and the slope are independent of the horizontal grid .figure [ fig : nzdependency ] shows the results for the alugrid and uggrid implementation and confirms the linear dependency in ( [ eqn : nzdependency ] ) . of vertical levels for two grid implementations ( uggrid in red , open squares and alugrid in blue , filled circles ) on an icosahedral grid ; results for shown both for the ( dashed lines ) and ( solid lines ) preconditioner . ]as can be seen from this plot , for both preconditioners and the overhead from indirect addressing and the additional overhead together are at the order of less than as soon as . incidentally both dune grid implementations that we tested are equally efficient .we stress that in both grids data in adjacent vertical columns is not necessarily stored consecutively in memory .not surprisingly , the slope is larger for the more expensive preconditioner . the results in this section also confirm that performance tests carried out on a directly addressed horizontal grid , such as the results in , can be generalised to indirectly addressed grids .we first test our solver with the profiles from a simplified meteorological test problem which corresponds to a balanced atmosphere with constant buoyancy frequency and zonal flow with one jet in each hemisphere .the advantage of this test case is that the deviation of the atmospheric profiles from a perfect factorisation can be controlled by varying a single parameter . in is shown that under the assumption that the velocity field points in the longitudinal direction and the buoyancy frequency defined in ( [ eqn : buoyancyfrequency ] ) is constant , a solution of the euler equations is given by ^{\gamma}{{e^{{{{\mathcal{s}}}}}(\phi)}}{{e^r(r ) } } , & u({\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}},r ) & = u_{{{\mathcal{s}}}}(\phi ) \end{aligned } \label{eqn : balancedflow}\ ] ] where the functions and are defined as 2 e^()&= , & e^r(r)&= . 
in the horizontal direction the profiles only vary in the latitudinal direction $ ] .the parameter is related to the buoyancy frequency by with .the function is related to the velocity field as with angular velocity . for our numerical experimentswe choose the velocity such that it corresponds to two jets with peak velocity in the mid latitudes ( , ) : \label{eqn : velocityfield}\ ] ] as plotted together with the corresponding in figure [ fig : exnerpressure ] . and jet function defined in eqns .( [ eqn : jetfunction ] ) and ( [ eqn : velocityfield ] ) for ( ) .right : exner pressure and relative difference in the -plane for the same value of .the height above ground is measured in units of the depth of the atmosphere . ] and jet function defined in eqns .( [ eqn : jetfunction ] ) and ( [ eqn : velocityfield ] ) for ( ) .right : exner pressure and relative difference in the -plane for the same value of .the height above ground is measured in units of the depth of the atmosphere . ]if we fix the reference pressure and temperature to physically realistic values and , the only free parameter in ( [ eqn : balancedflow ] ) is the buoyancy frequency .in particular if is identical to , i.e. , the first term in the expression for the exner pressure in ( [ eqn : balancedflow ] ) vanishes and all profiles factorise exactly . in the following we present numerical results for a range of buoyancy frequencies between and .as a preconditioner we use both a multigrid algorithm with the full model operator and the tensor - product multigrid algorithm with an approximate factorisation of the exner pressure which reduces to the expression in ( [ eqn : balancedflow ] ) for . both the exner pressure and the relative difference , which is an indicator of the quality of the factorisation , are plotted for in the plane in figure [ fig : exnerpressure ] . as can be seen from this figure, the relative difference between the profiles can be larger than 15% . , blue columns and dashed curves ) and the approximate factorisation ( , hatched green columns and solid curves ) in ( [ eqn : facapproxbalancedflow ] ) . in all casesa problem with and total degrees of freedom was solved sequentially on hector . ] , blue columns and dashed curves ) and the approximate factorisation ( , hatched green columns and solid curves ) in ( [ eqn : facapproxbalancedflow ] ) . in all casesa problem with and total degrees of freedom was solved sequentially on hector . 
] the time per iteration is shown in figure [ fig : titerniterbreakdown ] ( left ) for two grid implementations .both a preconditioned richardson iteration and bicgstab are used with one multigrid v - cycle as a preconditioner .it is important to note that bicgstab requires two applications of the preconditioner and two sparse matrix - vector products per iteration , while the richardson iteration only requires one of each , and not surprisingly the figure demonstrates that most of the time is taken up by the multigrid preconditioner in all cases .the number of iterations for each of the combinations is plotted in figure [ fig : titerniterbreakdown ] ( right ) for a range of .first of all we note the almost perfect robustness of the full preconditioner for this test problem where the profiles strongly deviate from the factorising case , but the convergence of preconditioned richardson iteration and preconditioned bicgstab are essentially not affected .the practically observed convergence rate for the v - cycle ( in the richardson iteration ) is around .this confirms the theoretical results in sections [ sec : prooffactorising ] and [ sec : proofnonfactorising ] .bicgstab converges in approximately half the number of iterations than richardson , as expected . in terms of time per iteration , the multigrid preconditioner with factorised profiles ( ) can be up to faster than the algorithm with non - factorising profiles ( ) .however , this comes at the expense of an increase in the number of iterations for larger values of that can be seen in figure [ fig : titerniterbreakdown ] ( right ) .while for the richardson iteration the increase is almost threefold if is used , this is much less dramatic for bicgstab where only requires twice as many iterations as for the largest . finally , the total solution time is shown in figure [ fig : tsolvebalancedflow ] . , dashed curves ) and the approximate factorisation ( , solid curves ) in ( [ eqn : facapproxbalancedflow ] ) .in all cases a problem with and total degrees of freedom was solved sequentially on one node of the hector supercomputer . ]as expected , the total solution time for solvers with preconditioner grows as increases .however , as the time per iteration is about 25% smaller for this preconditioner , for small the total solution time is also reduced by a similar factor .the most robust solver appears to be bicgstab , which gives the best overall performance for large , even with the factorising preconditioner .while the runs in the previous section were carried out under idealised and not necessarily realistic conditions , we also tested our solver for profiles obtained from common meteorological test cases .we first obtained the profiles , and from an aquaplanet run of the met office unified model .while these fields contain significantly more variation than the idealised profiles described in section [ sec : balancedflow ] and also describe phenomena such as convection near the ground and baroclinic instabilities , they are largely `` well behaved '' in the sense that most of them can be factorised approximately into a horizontal and a vertical variation . 
to quantify this further , we plot for each of the profiles the average , minimum and maximum over the horizontal grid on each vertical level in figure [ fig : verticalprofiles ] ( right ) ., , and ) .the horizontal variation is also represented by gray bands between the minimum and maximum value on each grid level ( dashed curves ) .right : zero - order term on the lowest grid level .the horizontal variation in the field is at the order of . ] , , and ) .the horizontal variation is also represented by gray bands between the minimum and maximum value on each grid level ( dashed curves ) .right : zero - order term on the lowest grid level .the horizontal variation in the field is at the order of . ] for most profiles the horizontal variation is small and the average value decays exponentially with height ; see for example figure [ fig : verticalprofiles ] ( left ) , which shows the profile on the lowest grid level .the only exception is which shows significant horizontal variation in the lower atmosphere .this is mainly due to the fact that , as can be seen from the explicit expressions in ( [ eqn : profiles ] ) , this profile contains the buoyancy frequency and hence vertical derivatives of the potential temperature , which can vary significantly from column to column due to convection in the lower atmosphere .we found that for these more typical profiles the factorising preconditioner causes both solvers to diverge .an easy fix for this is to factorise all profiles except .we denote the resulting preconditioner with partial factorisation , where we keep the full non - factorising profile for , as .as table [ tab : titeraquaplanet ] demonstrates , this increases the time per iteration by just over relative to the fully factorising case ( ) , but it is still significantly smaller than in the non - factorised case ( ) .[ tab : titeraquaplanet ] .time per iteration and speedups relative to for different solvers and preconditioners . in all cases a problem with and total degrees of freedom was solved sequentially on one node of hector using the alugrid implementation .[ cols= " < , > , > , > , > , > , > " , ] in addition to studying the sequential performance of the solvers , and in particular ensuring that they are algorithmically efficient , it is crucial to guarantee their parallel scalability on large computer clusters . for thiswe carried out scaling tests of our solvers for the balanced flow testcase described in section [ sec : balancedflow ] with ; in contrast to the previous runs we always used 7 multigrid levels so that on the coarsest level each processor stores one vertical column of data . in figure[ fig : weakscaling ] ( left ) the weak scaling of the time per iteration on the hector supercomputer is shown for up to 20,480 cores , the largest problem that was solved has just over degrees of freedom .we find that the number of iterations does not increase with the core count , and even drops in some cases .the richardson solver requires seven iterations to reduce the residual by five orders of magnitude for both preconditioners , whereas bicgstab requires 4 ( ) and 3 ( ) iterations for the same residual reduction .consequently the total solution time in figure [ fig : weakscaling ] ( right ) shows the same excellent weak scaling .in this work we discussed several multigrid preconditioners for anisotropic problems in flow simulations in `` flat '' domains with high aspect ratio . 
the algorithms are based on the tensor - product multigrid approach proposed and analysed for two - dimensional problems with separable coefficients in .we extended the method and its analysis to three dimensional problems and via a perturbation argument also to non - separable coefficients .we demonstrated the excellent performance of tensor - product multigrid for two model pdes arising in semi - implicit semi - lagrangian time stepping in atmospheric modelling .the numerical tests confirm the theoretically predicted optimality and effectivity of the method .the practically observed convergence rates are around .the tests also show that under certain conditions a preconditioner based on an approximate factorisation of the atmospheric profiles can reduce the total solution time .we found this to be the case both for an idealised flow scenario and for a more realistic aquaplanet test case .we also demonstrated the excellent weak parallel scaling on up to 20,480 cores of the hector supercomputer .overall our work demonstrates that bespoke multigrid preconditioners are highly efficient for solving the pressure correction equation encountered in nwp models .there are several ways to further improve this work : so far all tests have been carried out without any orography .it is known that steep gradients can lead to deteriorating performance of the non - linear iteration and we plan to study this by looking at the full non - linear solve for more realistic model problems . for simplicitywe used a finite volume discretisation , but more advanced approaches such as higher - order mixed finite elements can also be used in this framework .this will require the solution of a suitable pressure correction equation in higher order fem spaces .the parallel performance can also be further improved by , for example , overlapping calculations and communications and strong scaling tests should also be carried out .finally , the performance gains from approximate factorisations of the matrix are expected to be significantly higher on gpu systems and hence on such architectures its use may be more justified and more efficient for a wider class of profiles .+ this work was funded as part of the nerc project on next generation weather and climate prediction ( ngwcp ) , grant numbers ne / j005576/1 and ne / k006754/1 .we gratefully acknowledge input from discussions with our collaborators in the met office dynamics research group and the gungho ! 
project , in particular tom allen , terry davies , markus gross and nigel wood .ian boutle kindly provided the aquaplanet um output used in section [ sec : resultsaquaplanet ] .we would like to thank all dune developers and in particular oliver sander for his help with extending the parallel scalability of the ug grid implementation .this work made use of the facilities of hector , the uk s national high - performance computing service , which is provided by uoe hpcx ltd at the university of edinburgh , cray inc and nag ltd , and funded by the office of science and technology through epsrc s high end computing programme .to use a finite volume discretisation we assume that the horizontal grid is divided into elements of area where the centre of each cell is denoted by .the vertical grid is defined by grid levels each three dimensional grid cell is defined by a cell of the horizontal grid and a vertical index , such that cell with is bound by and and we write .any continuous field can be approximated by its average value in a grid cell as where we have used as all degrees of freedom in one vertical column are stored consecutively in memory , we also introduce the vector with . then it is straightforward to write down the finite volume discretisation of the individual terms in ( [ eqn : helmholtzvector ] ) .after integration by part in the horizontal direction the first term becomes where the sum runs over the neighbours of the horizontal grid cell . is the length of the edge between the cells and and is an outward normal vector on this edge and tangential to the sphere .note that is a field that `` lives '' on the vertical faces of a three dimensional grid cell .the second term in ( [ eqn : discrdiffusion ] ) is treated similarly by a vertical integration by parts to obtain often the vertical boundary conditions are fixed by requiring that the vertical velocity is zero at the top and bottom of the atmosphere .this implies mixed boundary conditions of the form for the pressure . to simplify the discussion, we use homogeneous neumann boundary conditions , i.e. . because all degrees of freedom in a vertical column are relaxed simultaneously in our solver this should not have any impact on the performance .furthermore , due to the presence of the zero - order term in ( [ eqn : helmholtzvector ] ) both the individual tridiagonal systems in one column and the elliptic operator are non - singular .in ( [ eqn : verticaldiffusion ] ) neumann boundary conditions are enforced by setting and otherwise .the field is defined on the horizontal faces of each three dimensional grid cell .based on the results in the previous sections we we can now give explicit expressions for the quantities which are needed in ( [ eqn : tridiagonalnonfactorising ] ) and ( [ eqn : tridiagonalfactorising ] ) to construct the entries of the ( tri- ) diagonal matrices in ( [ eqn : tridiagonalsystem ] ) . in the most general caseall profile functions depend on the radial coordinate and the horizontal coordinate . 
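Whatever form the coefficients below take, the computational kernel of the vertical part is always the same: one symmetric tridiagonal solve per column. The sketch below assembles such a column operator with homogeneous Neumann conditions (zero coupling through the bottom and top faces) plus a positive zero-order term, and solves it with the Thomas algorithm; the coefficient arrays are generic placeholders, not the actual hatted quantities defined in the following equations.

```python
import numpy as np

def assemble_column(alpha_r, xi_r):
    """Tridiagonal column operator with homogeneous Neumann boundary conditions.

    alpha_r : vertical-coupling coefficients on the n+1 horizontal faces;
              the bottom and top entries are zeroed (no flux through the boundary).
    xi_r    : positive zero-order coefficients on the n cell centres.
    Returns the three diagonals (sub, main, super).
    """
    n = len(xi_r)
    a = np.asarray(alpha_r, dtype=float).copy()
    a[0] = 0.0      # no flux through the bottom face
    a[-1] = 0.0     # no flux through the top face
    sub = -a[1:n]                       # coupling to the level below
    sup = -a[1:n]                       # coupling to the level above (symmetric)
    main = a[:n] + a[1:n + 1] + np.asarray(xi_r, dtype=float)
    return sub, main, sup

def thomas_solve(sub, main, sup, rhs):
    """Standard Thomas algorithm for a tridiagonal system."""
    n = len(main)
    c, d = np.empty(n - 1), np.empty(n)
    c[0] = sup[0] / main[0]
    d[0] = rhs[0] / main[0]
    for i in range(1, n):
        denom = main[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / denom
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# toy column: 40 levels, unit face couplings, small zero-order term
sub, main, sup = assemble_column(np.ones(41), 0.1 * np.ones(40))
x = thomas_solve(sub, main, sup, np.ones(40))
```

As noted above, the zero-order term keeps each column system non-singular, so no special treatment of the constant mode is needed in the line relaxation.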
in this casedefine ( \hat{\alpha}_{{{{\mathcal{s}}}}})_{tt',k } & \equiv \omega^2(r_{k+1}-r_{k } ) \frac{|s_{tt'}|{\ensuremath{\boldsymbol{n}}}_{tt'}\cdot({\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_{t'}-{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_t)}{|{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_{t'}-{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_t|^2}(\alpha_{{{{\mathcal{s}}}}})_{tt',k } , & ( \hat{\alpha}_{{{\mathcal{s}}}})_{t , k } & \equiv \omega^2\sum_{t'\in{\mathcal{n}(t)}}(\hat{\alpha}_{{{\mathcal{s}}}})_{tt',k}\\[1ex ] ( \hat{\alpha}_r)_{t , k } & \equiv \omega^2 \sigma_k|t| \frac{2r_k^2}{r_{k+1}-r_{k-1 } } ( \alpha_r)_{t , k } , & ( \hat{\xi}_{r})_{t , k } & \equiv \omega^2\sigma_k |t|\frac{r_k^2(r_{k+1}-r_{k})}{r_{k+1}-r_{k-1}}(\xi_{r})_{t , k } \end{aligned } \label{eqn : hatfunctionsnonfac}\ ] ] which can all be precomputed .if the profile functions can be written as in this case define ( \hat{\alpha}_{{{\mathcal{s}}}}^r)_k & \equiv ( r_{k+1}-r_{k } ) ( \alpha_{{{\mathcal{s}}}}^r)_k , & ( \hat{\alpha}_{{{\mathcal{s}}}}^{{{\mathcal{s}}}})_{tt ' } & \equiv \omega^2 \frac{|s_{tt'}|{\ensuremath{\boldsymbol{n}}}_{tt'}\cdot({\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_{t'}-{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_t)}{|{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_{t'}-{\ensuremath{\hat{{\ensuremath{\boldsymbol{r}}}}}}_t|^2}(\alpha_{{{{\mathcal{s}}}}}^{{{{\mathcal{s}}}}})_{tt'}\\[1ex ] & & ( \hat{\alpha}_{{{\mathcal{s}}}}^{{{\mathcal{s}}}})_{t } & \equiv \sum_{t'\in{\mathcal{n}(t)}}(\hat{\alpha}_{{{\mathcal{s}}}}^{{{\mathcal{s}}}})_{tt'}\\[1ex ] ( \hat{\alpha}_r^r)_k & \equiv \sigma_k \frac{2r_k^2}{r_{k+1}-r_{k-1 } } ( \alpha_r^r)_k , & ( \hat{\alpha}_r^{{{\mathcal{s}}}})_t & \equiv \omega^2|t| ( \alpha_r^{{{\mathcal{s}}}})_t \\[1ex ] ( \hat{\xi}_r^r)_k & \equiv \sigma_k \frac{r_k^2(r_{k+1}-r_{k})}{r_{k+1}-r_{k-1}}(\xi_r^r)_k , & ( \hat{\xi}_r^{{{\mathcal{s}}}})_t & \equiv \omega^2|t|(\xi_r^{{{\mathcal{s}}}})_t \end{aligned } \label{eqn : hatfunctionsfac}\ ] ] t. davies , m. j. p. cullen , a. j. malcolm , m. h. mawson , a. staniforth , a. a. white , and n. wood . a new dynamical core for the met office s global and regional modelling of the atmosphere ., 131(608):17591782 , 2005 .n. wood , a. staniforth , a. white , t. allen , m. diamantakis , m. gross , t. melvin , c. smith , s. vosper , m. zerroukat , and j. thuburn . an inherently mass - conserving semi - implicit semi - lagrangian discretisation of the deep - atmosphere global nonhydrostatic equations . ,2013 . published online december 4th 2013 .j. r. bates , f. h. m. semazzi , r. w. higgins , and s. r. m. barros .integration of the shallow - water equations on the sphere using a vector semi - lagrangian scheme with a multigrid solver ., 118(8):16151627 , 1990 .a. qaddouri and j. ct .preconditioning for an iterative elliptic solver on a vector processor . in j. palma , a. sousa , j. dongarra , and v. hernndez , editors ,_ high performance computing for computational science ( vecpar 2002 ) _ , volume 2565 of _ lecture notes in computer science _ , pages 451455 .springer , berlin , 2003 .s. d. buckeridge , m. j. p. cullen , r. scheichl , and m. wlasak . a robust numerical method for the potential vorticity based control variable transform in variational data assimilation . , 137(657):10831094 , 2011 .r. d. falgout and u. meier - yang .hypre : a library of high performance preconditioners . in p.m. a. sloot , c. j. k. tan , j. j. dongarra , and a. g. 
hoekstra , editors , _ lecture notes in computer science _ ,volume 2331 , pages 632641 .springer , 2002 .p. bastian , m. blatt , a. dedner , c. engwer , r. klfkorn , r. kornhuber , m. ohlberger , and o. sander . a generic grid interface for parallel and adaptive scientific computing .part ii : implementation and tests in dune ., 82(2 - 3):121138 , 2008 .p. bastian , m. blatt , a. dedner , c. engwer , r. klfkorn , m. ohlberger , and o. sander .a generic grid interface for parallel and adaptive scientific computing .part i : abstract framework . , 82(2 - 3):103119 , 2008 .a. burri , a. dedner , r. klfkorn , and m. ohlberger . an efficient implementation of an adaptive and parallel grid in dune . in _ proceedings of 2nd russian - german advanced research workshop on computational science and high performance computing , stuttgart
many problems in fluid modelling require the efficient solution of highly anisotropic elliptic partial differential equations ( pdes ) in `` flat '' domains . for example , in numerical weather- and climate - prediction an elliptic pde for the pressure correction has to be solved at every time step in a thin spherical shell representing the global atmosphere . this elliptic solve can be one of the computationally most demanding components in semi - implicit semi - lagrangian time stepping methods which are very popular as they allow for larger model time steps and better overall performance . with increasing model resolution , algorithmically efficient and scalable algorithms are essential to run the code under tight operational time constraints . we discuss the theory and practical application of bespoke geometric multigrid preconditioners for equations of this type . the algorithms deal with the strong anisotropy in the vertical direction by using the tensor - product approach originally analysed by brm and hiptmair [ numer . algorithms , 26/3 ( 2001 ) , pp . 219 - 234 ] . we extend the analysis to three dimensions under slightly weakened assumptions , and numerically demonstrate its efficiency for the solution of the elliptic pde for the global pressure correction in atmospheric forecast models . for this we compare the performance of different multigrid preconditioners on a tensor - product grid with a semi - structured and quasi - uniform horizontal mesh and a one dimensional vertical grid . the code is implemented in the distributed and unified numerics environment ( dune ) , which provides an easy - to - use and scalable environment for algorithms operating on tensor - product grids . parallel scalability of our solvers on up to 20,480 cores is demonstrated on the hector supercomputer . * keywords * : * ams classifiers * : 65n55 , 65y20 , 65f08 , 65y05 , 35j57 , 86a10
collective motion ( or `` flocking '' ) is a ubiquitous phenomenon , observed in a wide array of different living systems and on an even wider range of scales , from fish schools and mammal herds to bacteria colonies and cellular migrations , down to the cooperative behavior of molecular motors and biopolymers at the subcellular level .the aerial displays of starling flocks and other social birds are of course among the most spectacular examples , and have attracted the interest of speculative observers for quite a long time . to the physicist eye ,these phenomena are also highly nontrivial because they occur _ far from equilibrium _ , as single constituent particles in a flock ( whether they are birds , bacteria or cells ) are _ active _ , i.e. they continuously dissipate free energy to perform systematic ( i.e. non - thermal ) motion .also , collective motion often arises _ spontaneously _ , without any leader , external field or geometrical constraint guiding the process . in a more technical language, we may say that ordered motion follows from the _ spontaneous breaking of a continuous symmetry _ , viewing a collectively moving flock as an orientationally ordered phase of _ active matter _ .collective motion phenomena , of course , are not restricted to living matter , and in recent times they have been studied in various experimental systems , such as active colloids and driven granular matter .the ubiquity of collective motion phenomena at all scales , from groups of large vertebrates to subcellular collective dynamics , strongly hints at the existence of some universal features , possibly shared among the many different situations , regardless of many individual - level details .one way of approaching these problems is to construct and study minimal models of collective motion , that is models stripped of as many details as possible and only equipped with the basic features that we believe characterize the problem , typically its fundamental symmetries and conservation laws .this approach is fundamentally justified by hydrodynamics considerations , by which a great deal of microscopic details may be ignored , at least if we are interested in the large wavelength and long time behavior of our system . in any case , even if one is interested in finer , non - asymptotic details , it is surely good practice , before starting toying with your favourite model , to first understand the underlying , long wavelength physics inevitably shared by all systems with the same fundamental features . in these notes , i will introduce and discuss in details the properties of the vicsek model the simplest off - lattice model describing a flocking state and of the related vicsek class .approaching the study of collective motion , it is important to understand that all physical systems and models sharing the same basic features with this class will also display the same asymptotic properties . the only way to escape this , is to alter some fundamental property of the system , like changing the broken symmetry ( for instance from polar to nematic symmetry . 
] ) or adding a further conservation law ( for instance momentum conservation , which is relevant for most active suspensions ) .this is likely going to be the main message of this lecture .the vicsek model ( vm ) is perhaps the simplest model displaying a transition to collective motion ; in the study of active matter plays a prototypical role , similar to the one played by the ising model for equilibrium ferromagnetism .its simple dynamical rule has been adopted as the starting point for many generalizations and variations which have been applied to a wide range of different problems . the vicsek model has been originally introduced 20 years ago by the pioneering work of vicsek and coworkers .subsequent numerical studies ( see for instance refs . and ) greatly helped in clarifying its properties .the model describes the _ overdamped _ dynamics of a collection of self - propelled particles ( spps ) characterized by their off - lattice position and direction of motion ( or heading ) , a unit vector , . here is the particle index , , and labels time .all particles move with the same constant speed , according to the time - discrete dynamics so that orientation and particle velocity coincide but for a multiplicative constant ( and often the term _ velocity _ is also used for the orientation * s * ) .+ particles tend to align their direction of motion with the one of their _ local _ neighbours , and depends on the average direction of all particles ( included ) in the spherical neighborhood of radius centered on .indeed , in the vicsek algorithm the alignment with ones neighborhood is almost perfect , only hampered by a white noise term which plays a role analogous to the one of a temperature in equilibrium systems . in two spatial dimensions ( ) , the direction of motion is defined by a single angle , with , and one may simply write the orientation dynamics as + \eta \ , \xi_i^t \label{eq:2}\ ] ] where is a zero average , delta - correlated scalar noise uniformely distributed in ] .thus the case completely dominates alignment and just gives a collection of independent random walkers .] . such a noiseis often called _ white _ , since it has a flat fourier spectrum .( [ eq:2 ] ) , the function arg returns the angle defining the orientation of the average vector , and is the connectivity matrix , this way of chosing neighbours is sometimes defined as _ metric _ , being based on the metric notion of distance .the dynamics ( [ eq:1])-([eq:2 ] ) , depicted in fig . [ fig:1]a , is _ synchronous _ , meaning that all particles positions and headings are adjusted at the same time . + in studying this model , one can always chose a convenient set of space and time units , such that and the model behavior only depends on three _ control parameters _ : the noise amplitude , the particles speed and the total density of particles , where is the volume of the system .being interested in the bulk properties of a system , one typically assumes periodic boundary conditions , so that , with being the linear system size . in numerical simulations ,periodic boundary conditions help to minimize finite size effects due to finite boundaries , and in the following we will implicitly assume them unless stated otherwise . in the literature , one may find a number of slightly different flavours of the algorithm defined above . for instance , the noise in eq . 
( [ eq:2 ] ) may be distributed according to a gaussian , a small , short ranged repulsion force between particles may be included to account for volume exclusion , or the position at time , as defined in eq .( [ eq:1 ] ) , may be determined by the direction of motion at time and not at time ( indeed , this was actually the choice made in the original paper by vicsek and coworkers ) .however , typically all these differences do not matter much , and do not change the physical properties of the vicsek model .+ on the other hand , there are some features which are essential , and define what we call the _ vicsek class_. it is worth discussing them explicitly : * _ spontaneous symmetry breaking to polar order_. eqs .( [ eq:1])-([eq:2 ] ) are isotropic in space , as no preferred direction is given a priori .however , eq . ( [ eq:2 ] ) contains an explicit polar ( or ferromagnetic ) alignment term . if this alignment term is strong enough to overcome the effect of the noise ( or to put it differently , if the noise amplitude is low enough ), the system may develop global orientational order and thus collective motion , signaled by a finite _ polar order parameter _ ( or center of mass velocity ) an analogous of the total magnetization in spin systems .the value of the modulo of the polar order parameter is essentially determined ( minus fluctuations and finite size effects ) by the three control parameters , and . its stationary time average typically used to describe the spontaneous symmetry breaking phenomenon , with in the ordered phase do not cancel exactly one with each other , leading to . ] .+ its orientation in the ordered phase , on the other hand , is not determined a priori , and all directions are equally likely ( the one picked up at a given time being chosen by fluctuations ) . since the orientation can change continuously in space .] , in the transition to collective motion a _ continuous _ symmetry is spontaneously broken .this has a number of important consequences that will be explored in this notes . *_ self - propulsion and local alignment interactions_. particles are self - propelled , that is , they move according to eq .( [ eq:1 ] ) .in particular , they change their relative position according to their velocity fluctuations .thus , the connectivity matrix in eq .( [ eq:2 ] ) is not static , but it changes in time in a nontrivial way .this is exactly where the far from equilibrium nature manifests in the vicsek model , as it will be discussed in section [ mw ] . of course, the connectivity matrix will change as a consequence of particle motion only if the interactions are local , that is , if . *_ conservation laws_. the only conservation law of the vm is the conservation of the total number of particles , that is , our birds do not die or get born on the fly .there are no other conservation laws , and in particular it should be noted that momentum is not conserved .our self - propelled particles are thought to be moving over a dissipative substrate ( or in a viscous medium ) which acts as a momentum sink .this is of course not the case of a particle swimming in a three - dimensional suspensions , where momentum is transferred from the swimmers ( typically exerted as a force dipole ) to the surrounding fluid , and long - range hydrodynamic interactions are probably relevant ( and for man - made micro swimmers or self - propelled nano - rods they are typically the only interactions ! 
) .+ as a consequence of the lack of momentum conservation , also galileian invariance is broken .in fact , the vm is explicitly formulated in the reference frame in which the dissipative substrate is at rest , and it is not invariant under any arbitrary velocity shift . all together ,the features discussed above define the vicsek class .finally , we have to remark another obvious feature of the vicsek model : all particles move with the same speed .however , to a certain extent it is possible to relax this conditions staying inside the vicsek class .for instance , one can let the individual speeds fluctuate in some bounded interval without changing the model asymptotic properties . in the literature , it is possible to find different ways of implementing the noise in the equation for the orientation dynamics .in the past , some attention has been given to the so called _ vectorial _ noise ( as opposed to the noise used in eq .( [ eq:2 ] ) which is sometimes called _ scalar _ or _ angular _ ) .one may replace eq .( [ eq:2 ] ) by where is the number of interacting neighbours , and is a random unit vector , delta - correlated in time and in the particle index .the denominator in the r.h.s . of eq .( [ eq:2bis ] ) is a normalization term to ensure that . if one interprets the scalar noise of eq .( [ eq:2 ] ) as an error the spp makes trying to take the ( perfectly determined ) mean direction of motions of his neighbours , the vectorial version of the noise can be thought as the sum of the errors made while trying to assess the direction of motion of the interacting neighbours would be more appropriate than one directly proportional to , but here i will stick to the latter mainly for historical reasons . ] .certain literature , also refers to these two noises implementation as ( respectively ) intrinsic and extrinsic , but it is important to stress that _ these two implementations do not yield different asymptotic properties _ , even if their finite size behavior may be slightly different ( more later on this ) .it is however worth remarking that eq .( [ eq:2bis ] ) can be directly extended to any spatial dimension , while starting from eq .( [ eq:2 ] ) requires some more care . in order to write a vicsek dynamics with scalar noise in ,one has to introduce a rotation operator performing a random ( and of course delta - correlated ) rotation uniformly distributed around the argument vector , \label{eq:2 - 3d}\ ] ] in , for instance , $ ] will lay in the solid angle subtended by a spherical cap of amplitude and centered around .it is instructing to consider the relations between the vicsek model and some well - known models of equilibrium statistical physics .obviously , the vm may be seen as an xy ( or heisenberg in ) ferromagnet in which particles are not fixed in some lattice positions but can actually move along the spin direction .indeed , the xy or heisenberg equilibrium models can be formally recovered in the case , where particles do not move at all and is fixed once for all .if the local connectivity of the static connection network its dense enough , its dynamics converges to the equilibrium distribution of an xy or heisenberg model , with a temperature that is a monotonic function of the noise amplitude .this is however a_ singular limit _case is radically and qualitatively different from the behavior at any small but finite . 
] .another way of looking at the vicsek model is to see it as a _ persistent random walk _ in which particles may align their directions of motion one with each other by some local interaction rule . in continuous time and ,the persistent random walk can be written as ( with being some white noise ) which is just a vicsek dynamics without the alignment interaction term ( starting from vicsek dynamics , eq .( [ eq : pers ] ) can be formally obtained by taking the limit ) . once again , this is a singular limit , and a collection of non - interacting persistent random walker has an equilibrium distribution with some temperature given by the noise term .the opposite limits , and , also correspond to singular cases .as already mentioned , if the interactions are long ranged , the system is globally coupled and the connectivity matrix is trivially static . in this way ,motion is completely decoupled from long - ranged alignment , and most ( if not all ) of the fascinating vicsek model properties are lost .the infinite speed limit , on the other hand , just produces a random rewiring of the connectivity network : if , any small fluctuation in the orientation will push nearby particles infinitely apart . in a system with periodic boundary conditionsthis is equivalent to random rewiring of interactions , another trivial case in which motion decouples from alignment .the bottom line is that , while is interesting to understand the relations between the vm and its limiting cases , it is not possible in general to deduce properties of the former from the study of the latter ( singular ) limiting cases .the vicsek model is extremely simple and particularly well suited for numerical studies , as eqs .( [ eq:1])-([eq:2 ] ) can be easily implemented on a computer .however , it should be noted that a straightforward implementation of the metric neighbouring condition ( [ eq : metric ] ) would require testing the distance of all couples , an operation scaling with system size as order .this approach would quickly become unmanageable as the number of spps grows , making practically impossible to run simulations with more than a few thousands particles .there is of course a way around this problem , based on techniques originally developed for the study of molecular dynamics .the idea is rather simple , even if its algorithmic interpretation may not be so straightforward .one should ideally divide the system volume in boxes of linear size ( remember that one can always rescale space so that ) , assigning at each timestep each particle to a given box .once this is done , it is clear that for any given particle , all other particles laying outside the box containing and its next neighbouring boxes can not be closer than .therefore , one immediately and effortlessly reduces its search to a handful of boxes per particle . in has to only look into 9 boxes ( the general formula in spatial dimension of course gives boxes ) . a sketch for this algorithmis depicted in fig .[ fig:1]b . at any fixed total density , and the total volume in such a way that the total density stays constant . ]the mean number of particles contained in these boxes does not grow with , so that the number of operation needed to find all the interacting couples grows only linearly with .since also assigning particles to boxes is an order operation , it is immediate to conclude that the entire molecular dynamics algorithm computational time is of order rather than as the naive algorithm . 
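A minimal two-dimensional implementation of one synchronous update using exactly this boxing strategy might look as follows. The conventions are illustrative rather than canonical: angular noise uniform in [-pi, pi] multiplied by the amplitude eta (so that eta = 1 gives independent random walkers), interaction range rescaled to one, and streaming with the freshly updated heading.

```python
import numpy as np

def vicsek_step(pos, theta, L, v0, eta, rng):
    """One synchronous update of the 2D Vicsek model with scalar angular noise.

    pos   : (N, 2) positions in a periodic box of linear size L (assumes L >= 3)
    theta : (N,)   headings; the interaction range is rescaled to 1
    """
    n_cells = max(int(L), 1)                 # boxes of linear size L / n_cells >= 1
    box = L / n_cells
    cells = (pos // box).astype(int) % n_cells

    # linked-cell structure: which particles sit in which box
    members = {}
    for i, c in enumerate(map(tuple, cells)):
        members.setdefault(c, []).append(i)

    ux, uy = np.cos(theta), np.sin(theta)
    new_theta = np.empty_like(theta)
    for i in range(len(theta)):
        ci, cj = cells[i]
        sx = sy = 0.0
        for di in (-1, 0, 1):                # only the 3 x 3 block of boxes around i
            for dj in (-1, 0, 1):
                for j in members.get(((ci + di) % n_cells, (cj + dj) % n_cells), []):
                    d = pos[j] - pos[i]
                    d -= L * np.rint(d / L)  # minimum-image convention
                    if d @ d <= 1.0:         # metric rule: distance within interaction range
                        sx += ux[j]
                        sy += uy[j]
        new_theta[i] = np.arctan2(sy, sx) + eta * rng.uniform(-np.pi, np.pi)

    step = v0 * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + step) % L, new_theta

def polar_order(theta):
    """Scalar order parameter: modulus of the mean heading vector."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

Whether the streaming step uses the updated or the previous heading is one of the inessential variants discussed above and does not change the asymptotic behaviour.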
a huge improvement if one is interested in asymptotic ( i.e. long time and large ) properties .any serious numerical study should employ molecular dynamics algorithms .current state of the art simulations of vicsek model involve from a few millions to a few tens of millions of particles .we now proceed to discuss the main physical properties exhibited by the vicsek class .as we shall see , they mostly emerge from the intriguing interplay between particles self propulsion and the spontaneous symmetry breaking characterizing the ordered state .numerical simulations easily show that the vicsek model display a transition from disorder to ordered collective motion .for instance , as the noise amplitude is decreased below a certain threshold ( and both and kept fixed ) , particles start to synchronize their heading and to move together . starting from disordered initial condistions ,this coarsening process is relatively fast , and the size of ordered domains grows linearly in time , . the easiest way to capture the transition to collective motion is to monitor the order parameter ( the center of mass velocity ) defined in eq .( [ op ] ) . at high noise amplitudes ,spps are unable to synchronize their headings , which tend to cancel out in the sum .it can be shown that the sum of randomly oriented unit vectors has a modulo of order , so that in the disordered phase the scalar order parameter , or essentially zero for any large number of spps . at lower noise amplitudes , below a certain threshold ,the system undergoes a spontanous symmetry breaking phase transition as spps synchronize their heading .the scalar order parameter becomes finite and roughly of order one ( note that perfect order exactly implies ) .this is resumed in fig .[ fig:2]a , where the long - time ( or stationary ) average is shown for different noise amplitudes .the parameter that is varied as the system goes through the symmetry breaking is referred to as _control parameter_. the threshold noise amplitude value for the onset of collective motion is , of course , not independent from the other model parameters , and one has . one simple way to understand the onset of collective motion is to consider that , in order to synchronize the heading of all spps , information should be able to propagate through the entire system . while alignment interactions between particles produce such information , noise clearly destroys it . a simple mean - field like argumentcan then be put forward for low densities . to simplify things ,lets rescale our units so that the interaction range is one , .if particles are often isolated , and their relatively rare interactions can be treated as _ instantaneous _ collisions . ] from which particles emerge agreeing on their headings .the distance that a particle travels between collisions , the _ mean free path _ , scales as .information can propagate through the system only if the mean free path is larger than the spp _ persistence length _ , that is the distance a particle can travel before losing its out - of - collision heading . at the onset of order one expects these two quantities to have the same magnitude , .given that the persistence length is inversely proportional to noise variance , , we have immediately a relation that has been numerically verified for ( at least in ) , and that defines a critical line in the plane .this implies that one can also use the total density as a control parameter , keeping the noise amplitude fixed . 
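Using the `vicsek_step` and `polar_order` helpers sketched above, the role of the density as a control parameter can be illustrated by a small sweep at fixed noise amplitude; the box size, speed, noise value and run lengths below are deliberately small and purely illustrative (the pure-Python step is slow), so the resulting curve is strongly affected by the finite-size effects discussed later.

```python
import numpy as np

# assumes vicsek_step and polar_order from the sketch above; eta is held fixed
L, v0, eta = 16.0, 0.5, 0.3
rng = np.random.default_rng(1)

for rho in (0.1, 0.3, 0.5, 1.0, 2.0, 4.0):       # density as the control parameter
    N = int(rho * L * L)
    pos = rng.uniform(0.0, L, size=(N, 2))
    theta = rng.uniform(-np.pi, np.pi, size=N)
    for _ in range(500):                          # crude relaxation towards the steady state
        pos, theta = vicsek_step(pos, theta, L, v0, eta, rng)
    samples = []
    for _ in range(200):                          # time average of the order parameter
        pos, theta = vicsek_step(pos, theta, L, v0, eta, rng)
        samples.append(polar_order(theta))
    print(f"rho = {rho:.1f}   <phi> = {np.mean(samples):.3f}")
```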
in this case, one crosses to collective motion as the density is increased . at a first glance , one may think that the symmetry breaking transition to collective motion should be similar to the transition to order in an equilibrium spin system , leading directly to some homogeneously ordered state .this is however not the case , due to the interplay between local order and local density induced by motion . moving particles , indeed ,may gather in high density patches , increasing in turn the number of interacting neighbors , i.e. particles with a mutual distance smaller than .locally high density has a positive feedback on the efficiency of the alignment interaction , so that high density patches may be able to locally align while the rest of the systems does not ; this is something that can not happen in an equilibrium spin system ! one can indeed show that this feedback mechanism inevitably leads to a long wavelength instability near the onset of order , that destabilizes the homogeneous ordered phase and leads to ( spontaneous ) phase separation .for the polar symmetry of the vicsek class , these phase separation takes the form of high - density ordered bands that travel in a low - density sea of disordered particles ( see fig . [fig:2]c ) .bands extend transversally to the direction of motion and are characterized by a well - defined width , so that it is possible to accomodate several in the same system .indeed , on very large timescales they seem to settle in a regularly spaced pattern , leading to a smectic arrangement of traveling ordered bands . in ,simple symmetry considerations imply that these structures manifest as sheets , again extending transversally to the direction of motion ( fig .[ fig:2]d ) . at lower noise values or larger densities , the long wavelength instability disappears , and a second transition leads to a homogeneous ordered phase .the resulting vicsek class phase diagram , sketched qualitatively in fig .[ fig:2]e , is thus composed of three phases . a disordered one , akin to a collection of persistent random walkers , a phase - separated ordered regime , characterized by high density ordered bands , and finally an homogeneous ordered phase .in the latter two phases , the rotational symmetry is spontaneously broken and the system exhibit collective motion . as a consequence of phase separation , the symmetry breaking transition to polar order is a first order one , rather than a second order critical one . 
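A crude way to see the phase separation numerically is to histogram the particle positions projected on the instantaneous mean direction of motion: travelling bands show up as pronounced peaks, whereas both homogeneous phases give an essentially flat profile. The sketch below is only a rough diagnostic (the projection with periodic wrapping is approximate unless the band travels along a box axis), and the number of bins is arbitrary.

```python
import numpy as np

def density_profile_along_motion(pos, theta, L, bins=100):
    """Particle density projected on the mean heading, relative to the mean density.

    Bands extend transversally to the mean direction of motion, so they appear
    as peaks in this one-dimensional profile.
    """
    n = np.array([np.cos(theta).mean(), np.sin(theta).mean()])
    n /= np.linalg.norm(n)
    s = (pos @ n) % L                       # projected coordinate, folded into [0, L)
    hist, edges = np.histogram(s, bins=bins, range=(0.0, L))
    rho_rel = hist / (len(pos) / bins)      # 1 corresponds to the homogeneous density
    return 0.5 * (edges[1:] + edges[:-1]), rho_rel
```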
at the onset of order , the system is bistable , and alternates between the disordered regime and the appearance of a single ordered band , with the order parameter showing the corresponding jumps characteristics of phase coexistence and of first order phase transtions ( fig .[ fig:2]b ) .the transition to collective motion can also be interpreted as a liquid - gas transition , albeit in a non - equilibrium context and with no accessible supercritical region .this phase diagram , with phase separation and a first order transition characterizing the onset of order , is rather generic ; it is indeed common not only to the entire vicsek class , but also to systems whose broken state is characterized by a different symmetry ( although details of the phase separated regime may change with different symmetries ) .however , the existence of this phase separated regime has proven rather elusive , and it took a decade from the first introduction of the vicsek model to discover it .in fact , the long - wavelength instability leading to phase separation is characterized by a rather large instability wavelength , so that in systems not too large , where , phase separation can not be observed and the transition may be mistakenly thought to be continuous and critical .it is only when is sufficiently larger than that the true asymptotic behavior of the vicsek model emerge .the instability wavelength of course depends on model parameters and , to make things worse , also on non - universal details such as the noise implementation .in particular , it is rather larger in systems with a scalar noise ( as in eq .[ eq:2 ] ) than in systems with a vectorial one ( as in eq .[ eq:2bis ] ) , so that it is not uncommon to be able to observe bands only in systems with several hundred thousands of particles or more .moreover , seem to diverge both in the low density and in the low speed limits . these difficulties ( one can say that the vm is characterized by very strong finite size effects ) have fueled a long debate on the order of the phase transition , on the genericity of the phase separated band regime and on the eventual difference between scalar and vectorial noise models in the thermodynamic limit .careful finite size analisyis and large scale simulations , together with the study of hydrodynamic theories for the vicsek class however , have gathered convincing evidence over the last decade .nowadays there is a general consensus for the scenario detailed above : no asymptotic difference between scalar and vectorial models , first order transition to collective motion and genericity of the phase separation scenario .we conclude this section noting that moving bands quite similar to vm ones have been observed in _ in vitro _experiments with motility assays , i.e. 
in a mixture of molecular motors and actin filaments which are among the constituents of cellular cytoskeleton .such a systems is of course much more complicated than the vm , but is still characterized by self - propulsion ( due to the molecular motors ) and may undergo a spontaneous symmetry breaking thanks to filament interactions which are effectively aligning .these experimental result demonstrate the power of the minimal model approach .a relevant change of the vicsek rule ( [ eq:2 ] ) is given by topological interactions .in topological models , one choses interacting neighbours not as the spps lying inside a metric range , but on the basis of some local topological ( or metric - free ) rule , such as the nearest neighbours or the voronoi neighbours are then chosen to be the particles forming the first shell around particle in the voronoi tessellation . to simulate this algorithm , one can not use the molecular dynamics techniques of metric models , but should resort to libraries optimized for geometric tessellations .a good ( and freely available ) example is the cgal library , http://www.cgal.org/ ] it is important to stress that this is still a local interaction rule , albeit in the topological rather than metric sense .these choices are motivated by experimental evidence , gathered in starling flocks and in other social vertebrates , that individual do not interact with neighbours chosen inside a certain fixed range , but rather with a more or less fixed number of neighbours regardless of local density .one can think that visual perception , limited by occlusions to the first shell of neighbours , is better modeled by topological rather than metric interactions . in topological models , fluctuations in local densitydo not affect the interaction frequency or the number of interacting neighbours , so that there is no positive feedback on the efficiency of the alignment interaction . in the absence of an interaction range, it is indeed possible to rescale lengths in order to always have a unit total density , .moreover , the long - wavelength instability which destabilizes the homogeneous ordered phase at the onset of order is not present in models with topological interactions , and phase separation is removed .the corresponding phase diagram is much simpler and independent from total density .as the noise is lowered , one directly crosses from disorder to an homogeneously ordered , collectively moving , phase . in the absence of phase separation ,the transition is a second order , continuous one , characterized by a novel set of critical exponents .we now turn our attention to the homogeneously ordered phase .one interesting and , to a certain extent , surprising property emerging from numerical simulations of the vicsek models , is its ability to display true collective motion in , that is , to have a true long range ordered phase in which the order parameter is finite for any system size .this is in apparent contradiction with a well - known theorem due to mermin and wagner ( mw ) , stating that no system breaking a continuous symmetry in two spatial dimensions may achieve long range order ( lro ) .a classical example of this theorem is given by the xy model in .in this case , the system may only achieve a lesser kind of order , called _ quasi long range order _( qlro ) , where the order parameter decays algebraically with the number of spins , albeit with a very small exponent .and decreases monotonously from at the kt transition to at .] 
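Going back to the metric-free interaction rule described a few paragraphs above, a k-nearest-neighbour variant of the update is easy to sketch with a periodic k-d tree; the value k = 7 is purely illustrative, and Voronoi neighbours would instead require a tessellation library such as CGAL.

```python
import numpy as np
from scipy.spatial import cKDTree

def topological_step(pos, theta, L, v0, eta, k, rng):
    """Vicsek-style update where each particle aligns with its k nearest
    neighbours (plus itself), irrespective of how far away they are."""
    tree = cKDTree(pos % L, boxsize=L)             # periodic (toroidal) k-d tree
    _, idx = tree.query(pos % L, k=k + 1)          # k neighbours plus the particle itself
    sx = np.cos(theta)[idx].sum(axis=1)
    sy = np.sin(theta)[idx].sum(axis=1)
    new_theta = np.arctan2(sy, sx) + eta * rng.uniform(-np.pi, np.pi, size=len(theta))
    step = v0 * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + step) % L, new_theta
```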
while this means that , strictly speaking , no order is present asymptotically , a trace of order can still be found in the algebraically decaying spin - spin correlation function , and it is thus possible to formally define a phase transition the kosterlitz - thouless ( kt ) transition .there is of course a caveat , since the mermin - wagner theorem only applyes to equilibrium systems , and the vicsek model is of course out - of - equilibrium .it is however interesting to understand why the ability of vicsek particles to move can beat the mw theorem .it is instructive to consider a simplified argument first introduced by john toner in his lecture notes on flocking showing that the vm does so thanks to more efficient information transfer mechanisms .first consider xy spins on a dimensional lattice .suppose they all point in the same direction , with the exception of a single `` mistaken '' spin , that lies at an angle from all the others .how this mistake will evolve in time on our lattice ?ferromagnetic alignment can not simply reset to zero : all it can do is to `` iron it out '' , spreading it to nearby lattice sites . on a lattice ,this propagation mechanism is purely diffusive , , and in a time the original error will spread out over a distance , or a volume .since the total error inside the volume is conserved , the error per spin decays as .this is what happens to a single mistake .however , noise fluctuations constantly produces local errors with a number of errors per spin proportional to time . in a propagation volume one has errors .their combined root mean square , according to central limit theorem , is .we are finally in the position to compute the total error amplitude per spin , that is if , eq . ( [ eq : mw ] ) predicts that fluctuation errors per spin should decay algebraically in space .this means that order is resistant to fluctuations , and the system displays long range order . on the other hand ,if , fluctuations grows algebraically in space , so that no global order is possible .the case is marginal , with a zero algebraic exponent but a logarithmic divergence in the system size .. ] in this case , fluctuations are still unbounded , but only logarithmically , so that the order is destroyed extremely slowly and the equilibrium system displays qlro .note that the fact that we are breaking a _ continuous _ symmetry is essential to this argument . only in this case ,in fact , arbitrary small fluctuations can induce an arbitrarily small mistake in spin orientation . in the vm ,however , orientation fluctuations are coupled to motion .indeed , fluctuations induce a separation between particles of order in the directions transversal to the mean direction of motion , and in the longitudinal direction , so that two different mechanisms compete to transport orientation information : particle motion and standard diffusion .the propagation volume is readily decomposed in its transversal and longitudinal directions ( see fig .[ fig:3]a ) , where we have so that the error per spin in the vicsek model is given by the three equations ( [ eq : w1])-([eq : theta ] ) , where we have introduced the three unknown exponents , and , should be solved simultaneously .they yield a system of three linear equations in the three unknown exponents which can be readily solved .the explicit solution depends on the dimension .three different cases are in order . for onehas so that above the _ upper critical dimension _ sistem is fully diffusive and . 
for transversal propagationis superdiffusive and we have with again a negative .finally , for our simple argument also predicts superdiffusion propagation also in the longitudinal direction : which gives for any , so that orientation fluctuations are suppressed on large scales and the vm can attain long ranger order in any , thanks to the non - equilibrium , self propelled nature of its particles)-([eq : w2 ] ) only holds if the system shows lro , and therefore are invalid for .] the fact that , below the upper critical dimension , particle motion dominates over simple diffusion resulting in a superdiffusive propagation is related to the so called _ breakdown of linearized hydrodynamics_. this phenomenon can be studied more rigorously by a dynamical renormalization group ( drg ) study of the hydrodynamic equations for the vicsek universality class , first obtaines by toner & tu by by symmetry arguments .their detailed analysis clearly lies out of the scope of this notes , but it is worth mentioning that drg calculations suggest that it is only in the transversal direction that particle motion dominates over simple diffusion .this consideration forces in the above argument .this invalidates eq .( [ pip3 ] ) and extends eq .( [ pip2 ] ) below , yelding and thus lro in any dimension larger than .finally , we also note that generically so that fluctuations propagate much slower in the longitudinal directions than in the transversal ones ( see fig .[ fig:3]a ) .this spatial anisotropy is of course due to the symmetry breaking process . once a direction of motion is picked up , spatial isotropy is broken and the longitudinal direction can have different scaling properties from the transversal ones .the homogeneous ordered phase of the vicsek class is sometimes referred to as the _ toner & tu phase _ , after the authors of the pioneering papers that first discussed its hydrodynamic behavior . in this sectionwe briefly discuss its most important properties , which hold for both metric and topological interactions .it is well known that in systems where a continuous symmetry is spontaneously broken , the entire ordered phase is characterized by an algebraic decay of its connected correlation functions ( i.e. the corresponding fluctuations correlation function) .this is also true for the vicsek model ; moreover , by virtue of the coupling between orientation and local particle density , both the density - density and the orientation - orientation connected correlation functions show an algebraic decay . in particular , it is instructive to consider orientation fluctuations .their equal time , two points correlation function is defined as where is the distance between particle and and is an average over realizations ( or time in a stationary states ) .it can be shown that one has . in systems of finite linear size , due to the global constraint , the correlation function has a zero , which can be used as a finite - size definition of the correlation length , . as a consequence of the spontaneous symmetry breaking one has , i.e. the correlation length scales with the system size . in finite systemsone can thus write where is a universal scaling function with .we have just shown that in the vicsek class orientation ( or velocity ) fluctuations are _scale free_. 
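In practice the connected correlation function of the heading fluctuations, and the finite-size correlation length defined by its first zero, can be estimated from a single snapshot as sketched below. The all-pairs computation is meant for modest particle numbers only, and the binning is arbitrary.

```python
import numpy as np

def connected_correlation(pos, theta, L, n_bins=60):
    """Equal-time connected correlation of heading fluctuations versus distance.

    The fluctuation of particle i is its heading vector minus the instantaneous
    mean heading; C(r) averages the dot product of fluctuation pairs at distance r.
    The first zero of C(r) gives a finite-size estimate of the correlation length.
    """
    u = np.column_stack((np.cos(theta), np.sin(theta)))
    du = u - u.mean(axis=0)                       # heading (velocity) fluctuations
    i, j = np.triu_indices(len(pos), k=1)         # all pairs: fine for modest N only
    d = pos[i] - pos[j]
    d -= L * np.rint(d / L)                       # minimum-image distances
    r = np.hypot(d[:, 0], d[:, 1])
    dots = np.einsum('ij,ij->i', du[i], du[j])
    bins = np.linspace(0.0, L / 2, n_bins + 1)
    idx = np.digitize(r, bins) - 1
    C = np.array([dots[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(n_bins)])
    r_mid = 0.5 * (bins[1:] + bins[:-1])
    neg = np.where(C < 0)[0]
    xi = r_mid[neg[0]] if neg.size else np.nan    # first sign change ~ correlation length
    return r_mid, C, xi
```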
while a rigorous demonstration is beyond the scope of these notes , it is important to remark that this is just a consequence of the spontaneous breaking of a continuous symmetry .the concept of scale free correlations in collective motion , in fact , received a certain attention after they have been measured in starling flocks observed in the wild .the algebraic nature of correlation functions has a number of other non - trivial consequences .the so called giant particles number fluctuations are one of the most relevant .we begin giving an operative , computational definition .define a box of linear size inside your system , containing particles at time .one can then measure the mean number of particles contained in the box by taking a mean in time over different countings . in the homogeneous phase , this will be simply given by .together with the mean , one can also measure root mean square fluctuations . by considering boxes of different size ( see fig .[ fig:3]b ) , one can then explore numerically the relation between the mean and its fluctuations in equilibrium systems , away from critical point one , has generally in agreement with the central limit theorem , but numerical simulations show that in the entire toner & tu phase one has in both two and three spatial dimensions , as shown in fig .[ fig:3]c .fluctuations in number density are anomalously large in the vicsek class ! this is indeed another manifestation of the slow power - law decay of correlations .a slow enough decay in space of local density can be measured through a suitable space coarse - graining over a volume . ]fluctuations correlations , corresponds indeed to an algebric divergence behavior at small frequencies in fourier space ( where ) , as opposed to ordinary equilibrium systems where .the small frequency behavior of the stationary density structure factor gives indeed the fluctuations to mean ratio in the limit of a large particle number , {n \to \infty}\ ] ] remembering that , and that transforming back into fourier space one has ( the box linear extension being a small frequency cutoff ) , one obtains or as anticipated , the equilibrium result is recovered when the structure factor is finite for , that is for . the argument given above is slightly simplified in implicitly assuming spatial isotropy of correlation functions and of the corresponding structure factor .we indeed know that this is not the case : due to symmetry breaking spatial isotropy is broken , and correlation functions show different algebraic behaviors in the transversal and longitudinal directions .in fact , by measuring means and fluctuations in square boxes , we are taking an average over the different directions .correspondingly , in the above argument , one should average over all directions . 
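The box-counting measurement described at the beginning of this discussion can be sketched as follows: for each box size one records the time series of the occupation number, and the giant-number-fluctuation exponent is the slope of log(rms fluctuation) versus log(mean occupation). Here `snapshots` is assumed to be a list of position arrays sampled in the stationary state, and the box is anchored at the origin; both choices are illustrative.

```python
import numpy as np

def number_fluctuations(snapshots, box_sizes):
    """Mean occupation <n> and RMS fluctuation dn for boxes of different linear size.

    snapshots : list of (N, 2) position arrays (in [0, L)^2) sampled in the stationary state
    box_sizes : iterable of linear box sizes l < L; the box sits at the origin
    """
    means, rms = [], []
    for l in box_sizes:
        counts = np.array([np.sum(np.all(pos < l, axis=1)) for pos in snapshots])
        means.append(counts.mean())
        rms.append(counts.std())
    return np.array(means), np.array(rms)

def gnf_exponent(means, rms):
    """Slope of log(dn) vs log(<n>): 0.5 for normal fluctuations, larger for GNF."""
    good = (means > 0) & (rms > 0)
    return np.polyfit(np.log(means[good]), np.log(rms[good]), 1)[0]
```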
in the appendixwe carry on this procedure in detail making use of toner & tu theory predictions for .it yields an estimate of for and for .note that the estimates for and are very close to each other and in substantial agreement with current numerical data as shown in fig .[ fig:3]c .it is finally worth noticing that the alignment rule alone is not able to maintain the cohesion a of a finite flock in open space .fluctuations , in fact , will inevitably pull apart particles one from each other , finally disintegrating the flock .as already mentioned , in numerical simulations this problem is usually solved by introducing periodic boundary conditions , an appropriate choice when one is interested in the bulk , asymptotic properties of the vicsek class . however , if one wants to simulate a finite group in open space , some attractive interaction should be added to introduce a surface tension and stabilize the finite flock .attraction ( together with short range repulsion ) was already present in ref . , where a pioneering flocking model has been proposed in the context of computer graphics , but the first study of a vm model with cohesion in a statistical physics context has been performed in ref . , where eq .( [ eq:2bis ] ) has been modified by adding an attraction / repulsion term .one has where is the unit vector going from particle to is the reciprocal distance and in topological interaction were used . here is a two body force , repulsive at short range and attracting further away .for instance , one can chose , where is the equilibrium distance . by increasing the cohesion parameter , it has been shown that the finite flock can pass from a gas phase where the group disintegrates in open space to a ( moving ) liquid one and eventually to a ( moving ) crystal phase .the effect of strong repulsion alone added to alignment has been discussed in .in these notes , we have discussed the vicsek model and its relative `` universality class '' by making use of numerical experiments and of a number of illustrative but somehow simplified arguments . a more rigorous analytical treatment of the vm asymptotic properties is given by hydrodynamic theories , but their detailed discussions clearly lies out of the scope of this lecture notes .the interested reader should consult the original work of toner & tu on phenomenological hydrodynamics , where an rg approach to the study of the homogeneous ordered phase is carried on , and the boltzmann - ginzburg - landau approach developed in .+ while the vicsek universality class is robust to many variations , such as changes in the way the noise is implementes ( as long as no long - range correlations are introduced ) or the details of the local alignment interaction ( but relevant changes can be introduced switching the interaction from metric to topological as discussed in section [ topo ] ) , changes in some fundamental features are typically relevant . modifying the nature of broken symmetry , for instance , is a typical example of such a change .for example , one may consider nematic rather than ferromagnetic alignment , without altering the polar self - propelled nature of particles ( the so called self propelled rods model ) , or consider altogether completely nematic particles ( which have a preferred axis of motion but not a well defined direction ) such as in active nematics .these models are relevant to the modelling of elongated active particles interacting by volume exclusion forces , which typically induce an effective nematic interaction . 
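A minimal sketch of the two-body cohesion force f(r) mentioned a few paragraphs above is given below; the linear spring around an equilibrium distance with a finite cut-off is just one convenient choice made for illustration, not the form used in the cited work.

```python
import numpy as np

def cohesion_force(r, r_e=1.0, r_cut=3.0, strength=1.0):
    """Pairwise radial force: repulsive for r < r_e, attractive for r_e < r < r_cut.

    Positive values push the particles apart, negative values pull them together;
    the force vanishes beyond the cut-off distance r_cut.
    """
    r = np.asarray(r, dtype=float)
    f = -strength * (r - r_e)          # spring-like: repulsion below r_e, attraction above
    return np.where(r < r_cut, f, 0.0)
```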
in general , these so called _ vicsek - like _ models constitute different universality classes , but share a very similar phase diagram structure with the vicsek class : the phase diagram of all metric vicsek - like models , for isntance , exhibit a phase separated regime ( possibly with different symmetries / properties w.r.t .the vm ) taking place at the onset of order and separating the disordered from the homogeneously ordered phase .other relevant changes include violation of particles number conservation , as discussed in ref . , or as previously discussed the inclusion of momentum conservation and long - ranged hydrodynamic interactions .i acknowledge support from the marie curie career integration grant ( cig ) pcig13-ga-2013 - 618399 .i am also indebited with h. chat , j. toner and s. ramaswamy for many lectures and spirited discussions that found their way into these notes .in this appendix , we compute explicitly the anomalous density fluctuations exponent making use of the results of toner & tu theory .the density structure factor has an anysotropic structure and it is given by ( in units of the interaction distance ) where and are ( respectively ) the projection of the fourier space vector in the longitudinal and transversal directions w.r.t .the direction of motion .the two exponents and are scaling exponents for which the drg flows to a fixed point . according to a conjecturefirst put forward in , in any dimension they are while this conjecture has never been proven rigorously , there is a reasonable numerical evidence supporting the above scaling exponent values for and , to a lesser extent , . in the followingwe will assume the above values hold .we can visualize the three different sectors which in eq .( [ sq1 ] ) determines the scaling of the density structure factor as in fig .in particular , we are interested in the scaling behavior as one approaches along different paths in the plane ( or moves towards infinity in the real axis representation .it is easy to see that moving towards infinity along the line the structure factor picks up a divergence this is actually the strongest possible divergence in any . moving to infinity along the line , for instance , gives while chosing other paths towards lying in the sector i , ii or iii fig .[ figa ] also produces weaker divergences or no divergences at all .this can be checked by chosing a family of paths .the value of the exponent determines the chosen sector for our path , with corresponding to sector i , to sector ii and to sector iii . to summarize , the structure factor is dominated by divergences along the line , by eqs .( [ alpha ] ) this finally gives
in these lecture notes , prepared for the microswimmers summer school 2015 at forschungszentrum jülich , i discuss the well - known vicsek model for collective motion and its main properties . in particular , i discuss its algorithmic implementation and the basic properties of its universality class . i present results from numerical simulations and emphasise the role played by symmetries and conservation laws . analytical arguments are presented in an accessible and simplified way , but ample references are given for more advanced reading .
let be a poisson process of constant intensity , and let be independent and identically distributed ( i.i.d . ) -valued random vectors defined on the same probability space and having a common distribution function , which is assumed to be absolutely continuous with respect to the lebesgue measure with density .assume that and are independent and define the -valued process by the process is called a compound poisson process ( cpp ) and forms a basic stochastic model in a variety of applied fields , such as , for example , risk theory and queueing ; see .suppose that , corresponding to the true parameter pair , a sample , from is available , where the sampling mesh is assumed to be fixed and thus independent of .the problem we study in this note is nonparametric estimation of ( and of ) .this is referred to as decompounding and is well studied for one - dimensional cpps ; see .some practical situations in which this problem may arise are listed in .however , the methods used in the above papers do not seem to admit ( with the exception of ) a generalization to the multidimensional setup .this is also true for papers studying nonparametric inference for more general classes of lvy processes ( of which cpps form a particular class ) , such as , for example , .in fact , there is a dearth of publications dealing with nonparametric inference for multidimensional lvy processes .an exception is , where the setup is however specific in that it is geared to inference in lvy copula models and that , unlike the present work , the high - frequency sampling scheme is assumed ( and ) . in this work, we will establish the posterior contraction rate in a suitable metric around the true parameter pair .this concerns study of asymptotic frequentist properties of bayesian procedures , which has lately received considerable attention in the literature ( see , e.g. , ) , and is useful in that it provides their justification from the frequentist point of view .our main result says that for a -hlder regular density , under some suitable additional assumptions on the model and the prior , the posterior contracts at the rate , which , perhaps up to a logarithmic factor , is arguably the optimal posterior contraction rate in our problem .finally , our bayesian procedure is adaptive : the construction of our prior does not require knowledge of the smoothness level in order to achieve the posterior contraction rate given above .the proof of our main theorem employs certain results from but involves a substantial number of technicalities specifically characteristic of decompounding .we remark that a practical implementation of the bayesian approach to decompounding lies outside the scope of the present paper .preliminary investigations and a small scale simulation study we performed show that it is feasible and under certain conditions leads to good results .however , the technical complications one has to deal with are quite formidable , and therefore the results of our study of implementational aspects of decompounding will be reported elsewhere .the rest of the paper is organized as follows . in the next section ,we introduce some notation and recall a number of notions useful for our purposes .section [ main ] contains our main result , theorem [ mainthm ] , and a brief discussion on it .the proof of theorem [ mainthm ] is given in section [ proofs ] . 
finally , section [ pr.lem.1 ] contains the proof of the key technical lemma used in our proofs .assume without loss of generality that , and let , .the -valued random vectors are i.i.d .copies of a random vector where are i.i.d . with distribution function , whereas , which is independent of , has the poisson distribution with parameter .the problem of decompounding the jump size density introduced in section [ intro ] is equivalent to estimation of from observations , and we will henceforth concentrate on this alternative formulation .we will use the following notation : : : law of : : law of : : law of we will first specify the dominating measure for , which allows us to write down the likelihood in our model . define the random measure by \bigr)\otimes\mathcal{b}\bigl ( \mathbb{r}^d\setminus\{0\}\bigr).\ ] ] under , the random measure is a poisson point process on \times(\mathbb{r}^d\setminus\{0\}) ] and to possess a density with respect to the lebesgue measure .the prior for will be specified as a dirichlet process mixture of normal densities .namely , introduce a convolution density where is a distribution function on , is a positive definite real matrix , and denotes the density of the centered -dimensional normal distribution with covariance matrix .let be a finite measure on , and let denote the dirichlet process distribution with base measure ( see or , alternatively , for a modern overview ) . recall that if , then for any borel - measurable partition of , the distribution of the vector is the -dimensional dirichlet distribution with parameters .the dirichlet process location mixture of normals prior is obtained as the law of the random function , where and for some prior distribution function on the set of positive definite matrices .for additional information on dirichlet process mixtures of normal densities , see , for example , the original papers and , or a recent paper and the references therein .let denote the class of probability densities of the form . by bayes theorem , the posterior measure of any measurable set given by the priors and indirectly induce the prior on the collection of densities .we will use the symbol to signify both the prior on and the density .the posterior in the first case will be understood as the posterior for the pair , whereas in the second case as the posterior for the density .thus , setting , we have in the bayesian paradigm , the posterior encapsulates all the inferential conclusions for the problem at hand .once the posterior is available , one can next proceed with computation of other quantities of interest in bayesian statistics , such as bayes point estimates or credible sets .the hellinger distance between two probability laws and on a measurable space is given by assuming that , the kullback leibler divergence is we also define the -discrepancy by in addition , for positive real numbers and , we put using the same symbols , , and is justified as follows .suppose that is a singleton and consider the dirac measures and that put masses and , respectively , on .then , and similar equalities are valid for the -discrepancy and the hellinger distance . for any , by denote the largest integer strictly smaller than , by the set of natural numbers , whereas stands for the union . for a multiindex , we set . 
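before turning to the main result, it may help to see the observation scheme written out explicitly. the sketch below simulates one such sample of increments (call them z here): a poisson number of jumps per sampling interval with i.i.d. jump sizes. the intensity, mesh, dimension and the gaussian jump density are illustrative placeholders rather than quantities tied to the assumptions of the paper.

```python
import numpy as np

# illustrative simulation of discretely observed increments of a d-dimensional
# compound poisson process: each increment is a sum of n_i jumps with
# n_i ~ poisson(lambda * delta) and i.i.d. jump sizes.  all parameter values
# and the gaussian jump density are placeholders chosen for illustration only.
rng = np.random.default_rng(1)
lam, delta, d, n_obs = 1.5, 1.0, 2, 1000   # intensity, sampling mesh, dimension, sample size

counts = rng.poisson(lam * delta, size=n_obs)          # number of jumps per interval
z = np.zeros((n_obs, d))
for i, n_jumps in enumerate(counts):
    if n_jumps:
        jumps = rng.normal(loc=0.5, scale=1.0, size=(n_jumps, d))
        z[i] = jumps.sum(axis=0)

# sanity check: the fraction of intervals without jumps estimates exp(-lambda*delta)
print("empirical p(no jump):", np.mean(counts == 0),
      "   exp(-lam*delta):", np.exp(-lam * delta))
```

recovering the jump size density and the intensity from such increments alone, without ever observing the individual jumps, is exactly the decompounding problem studied above.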
the usual euclidean norm of a vector is denoted by .let and be constants , and let be a measurable function .we define the class of locally -hlder regular functions as the set of all functions such that all mixed partial derivatives of up to order exist and , for every with , satisfy see p. 625in for this class of functions .define the complements of the hellinger - type neighborhoods of by where is a sequence of positive numbers .we say that is a posterior contraction rate if there exists a constant such that as in -probability .the -covering number of a subset of a metric space equipped with the metric is the minimum number of -balls of radius needed to cover it .let be a set of cpp laws .furthermore , we set we recall the following general result on posterior contraction rates . [ thm2.1ghosal01 ] suppose that for positive sequences such that , constants , and sets , we have then , for and a constant large enough , we have that as in -probability , assuming that the i.i.d .observations have been generated according to . in order to derive the posterior contraction rate in our problem ,we impose the following conditions on the true parameter pair . [ass : truth ] denote by the true parameter values for the compound poisson process . 1 . is in a compact set \subset(0,\infty) ] and is such that ,\ ] ] for some constants and ; 2 .the base measure of the dirichlet process prior is finite and possesses a strictly positive density on such that for all sufficiently large and some strictly positive constants , and , ^d\bigr)\leq b_1 \exp \bigl(-c_1 x^{a_1}\bigr),\ ] ] where 3 . thereexist strictly positive constants , , , , , , , , such that for all large enough , for all small enough , and for any and , here denotes the smallest eigenvalue of the matrix .this assumption comes from , to which we refer for an additional discussion .in particular , it is shown there that an inverse wishart distribution ( a popular prior distribution for covariance matrices ) satisfies the assumptions on with . as far as is concerned, we can take it such that its rescaled version is a nondegenerate gaussian distribution on .[ cond_pi1 ] assumption requiring that the prior density is bounded away from zero on the interval ] .we now state our main result .[ mainthm ] let assumptions [ ass : truth ] and [ ass : prior ] hold .then there exists a constant such that , as , in -probability . herewe conclude this section with a brief discussion on the obtained result : the logarithmic factor is negligible for practical purposes . if , then the posterior contraction rate obtained in theorem [ mainthm ] is essentially , which is the minimax estimation rate in a number of nonparametric settings .this is arguably also the minimax estimation rate in our problem as well ( cf .theorem 2.1 in for a related result in the one - dimensional setting ) , although here we do not give a formal argument .equally important is the fact that our result is adaptive : the posterior contraction rate in theorem [ mainthm ] is attained without the knowledge of the smoothness level being incorporated in the construction of our prior . finally , theorem [ mainthm ] , in combination with theorem 2.5 and the arguments on pp .506507 in , implies the existence of bayesian point estimates achieving ( in the frequentist sense ) this convergence rate .after completion of this work , we learned about the paper that deals with nonparametric bayesian estimation of intensity functions for aalen counting processes . 
although cpps are in some sense similar to the latter class of processes , they are not counting processes .an essential difference between our work and lies in the fact that , unlike , ours deals with discretely observed multidimensional processes . also uses the log - spline prior , or the dirichlet mixture of uniform densities , and not the dirichlet mixture of normal densities as the prior .the proof of theorem [ mainthm ] consists in verification of the conditions in theorem [ thm2.1ghosal01 ] .the following lemma plays the key role . [lem : ineq ] the following estimates are valid : moreover , there exists a constant , depending on and only , such that for all ] with -intervals of size .let be centers of the balls from a minimal covering of with -balls of size . by lemma [ lem : ineq ] , for any , by appropriate choices of and .hence , ,\overline { h}_1\bigr ) \times n ( \overline{c } \varepsilon_n , { \mathcal{f}}_n , \overline{h}_2 ) , \ ] ] and so ,\overline { h}_1\bigr)+\log n(\overline{c } \varepsilon_n,\mathcal{f}_n,\overline{h}_2).\ ] ] by proposition 2 and theorem 5 in , there exists a constant such that for all large enough , on the other hand , ,\overline{h}_1\bigr ) & = \log n\bigl ( \varepsilon_n,[\underline{\lambda } , \overline{\lambda}],|\cdot|\bigr ) , \\ & \lesssim\log \biggl ( \frac{1}{\varepsilon_n } \biggr ) \\ & \lesssim\log \biggl ( \frac{1}{\overline{\varepsilon}_n } \biggr).\end{aligned}\ ] ] with our choice of , for all large enough , we have so that for all large enough , we can simply rename the constant in this formula into , and thus is satisfied with that constant .we first focus on .introduce suppose that . from weobtain furthermore , using , we have combination of these inequalities with the definition of the set in yields consequently , by assumption [ ass : prior](i ) , furthermore , theorem 4 in yields that for some and all sufficiently large , we substitute with and write to arrive at now , since , for all large enough , we have consequently , for all large enough , choosing , we have verified ( with ) . for the verification of , we use the constants and as above .note first that by theorem 5 in ( see also p. 627there ) , for some and any constant , we have provided that is large enough .thus , without loss of generality , we can take the positive constant greater than .this gives which is indeed .we have thus verified conditions , and the statement of theorem [ mainthm ] follows by theorem [ thm2.1ghosal01 ] since ( eventually ) .we start with a lemma from , which will be used three times in the proof of lemma [ lem : ineq ] . consider a probability space .let be a probability measure on and assume that with radon nikodym derivative .furthermore , let be a sub--algebra of . the restrictions of and to are denoted and , respectively . then and =:\zeta' ] and , and so in the lemma should be taken as and as . in the proof of lemma [ lem : ineq ] , for economy of notation ,a constant depending on and may differ from line to line .we also abbreviate and to and , respectively .the same convention will be used for , , , and .application of lemma [ lem : convexnew ] with gives .using and the expression for the mean of a stochastic integral with respect to a poisson point process ( see , e.g. , property 6 on p. 
68 in ) , we obtain that \biggr ) \\ & = \lambda_0 \mathrm{k}(\mathbb{p}_{0},\mathbb{p})+ \mathrm{k}(\lambda _ 0,\lambda).\end{aligned}\ ] ] now where is some constant depending on and .the result follows .we have + { \mathbb{e}}_{\mathbb{q}_0 } \biggl [ \log^2 \biggl ( \frac{\mathrm{d}\mathbb { q}_0}{\mathrm{d}\mathbb{q } } \biggr ) 1 _ { \ { \frac{\mathrm{d}{\mathbb { q}}_0}{{\mathrm{d}}{\mathbb { q } } } < 1 \ } } \biggr ] \\ & = \mathrm{i}+\mathrm{ii}.\end{aligned}\ ] ] application of lemma [ lem : convexnew ] with ( which is a convex function ) gives } \biggr ] \le\mathrm{v } ( { \mathbb { r}}_0 , { \mathbb { r}}).\ ] ] as far as is concerned , for , we have the inequalities the first inequality is trivial , and the second is a particular case of inequality ( 8.5 ) in and is equally elementary .the two inequalities together yield applying this inequality with ( which is positive on the event ) and taking the expectation with respect to give \\ & \le4 \int \biggl ( \sqrt{\frac{{\mathrm{d}}{\mathbb { q}}_0}{{\mathrm{d}}{\mathbb { q } } } } -1 \biggr)^2 { \mathrm{d}}{\mathbb { q } } \\ & = 4 h^2({\mathbb { q}}_0 , { \mathbb { q } } ) \le4\mathrm{k } ( { \mathbb { q}}_0 , { \mathbb { q } } ) . \ ] ] for the final inequality , see , p. 62, formula ( 12 ) . combining the estimates on and obtain that after some long and tedious calculations employing and the expressions for the mean and variance of a stochastic integral with respect to a poisson point process ( see , e.g. , property 6 on p. 68 in and lemma 1.1 in ) , we get that by the -inequality we have from which we deduce for some constant depending on and only . as far as is concerned , the -inequality and the cauchy schwarz inequality give that \biggr)^2\nonumber \\& \leq2 \lambda_0 ^ 2 \mathrm{v}(\mathbb{p}_{0 } , \mathbb{p})+2\mathrm { k}(\lambda_0,\lambda)^2,\label{ineqiv}\end{aligned}\ ] ] from which we find the upper bound for some constant depending on and . combining estimates and on and with inequalities and yields .similarly , the upper bounds and , combined with and , yield .first , note that for }$ ] , = { \mathbb{e}}_{{\mathbb { q } } } \biggl [ g \biggl ( \frac{{\mathrm{d}}{\mathbb { q}}_0}{{\mathrm{d}}{\mathbb { q } } } \biggr ) \biggr].\ ] ] since is convex , an application of lemma [ lem : convexnew ] yields . using and invoking lemma 1.5 in , in particular , using formula ( 1.30 ) in its statement, we get that where denotes the -norm .this proves .furthermore , from this we obtain the obvious upper bound which yields .the authors would like to thank the referee for his / her remarks .the research leading to these results has received funding from the european research council under erc grant agreement 320637 .
given a sample from a discretely observed multidimensional compound poisson process , we study the problem of nonparametric estimation of its jump size density and intensity . we take a nonparametric bayesian approach to the problem and determine posterior contraction rates in this context , which , under some assumptions , we argue to be optimal posterior contraction rates . in particular , our results imply the existence of bayesian point estimates that converge to the true parameter pair at these rates . to the best of our knowledge , the construction of nonparametric density estimators for inference in the class of discretely observed multidimensional lévy processes , and the study of their rates of convergence , are new contributions to the literature . keywords : decompounding , multidimensional compound poisson process , nonparametric bayesian estimation , posterior contraction rate . msc classification : 62g20 , 62m30 .
symmetry occurs in many combinatorial search problems .for example , in the magic squares problem ( prob019 in csplib ) , we have the symmetries that rotate and reflect the square .eliminating such symmetry from the search space is often critical when trying to solve large instances of a problem .symmetry can occur both _ within _ a single solution as well as _ between _ different solutions of a problem .we can also _ apply _ symmetry to the constraints in a problem .we focus here on constraint satisfaction problems , though there has been interesting work on symmetry in other types of problems ( e.g. planning , and model checking ) .we summarize recent work appearing in .a symmetry is a bijection on assignments . given a set of assignments and a symmetry , we write for .a special type of symmetry , called _ solution symmetry _ is a symmetry _ between _ the solutions of a problem .more formally , we say that a problem has the _ solution symmetry _ iff of any solution is itself a solution . the _ magic squares _ problem is to label a by square so that the sum of every row , column and diagonal are equal ( prob019 in csplib ) . a _normal _ magic square contains the integers 1 to .we model this with variables where iff the column and row is labelled with the integer .`` lo shu '' , the smallest non - trivial normal magic square has been known for over four thousand years and is an important object in ancient chinese mathematics : the magic squares problem has a number of solution symmetries .for example , consider the symmetry that reflects a solution in the leading diagonal . this map `` lo shu '' onto a symmetric solution : any other rotation or reflection of the square maps one solution onto another .the 8 symmetries of the square are thus all solution symmetries of this problem .in fact , there are only 8 different magic square of order 3 , and all are in the same symmetry class .one way to factor solution symmetry out of the search space is to post symmetry breaking constraints .see , for instance , .for example , we can eliminate by posting a constraint which ensures that the top left corner is smaller than its symmetry , the bottom right corner .this selects ( [ loshu ] ) and eliminates ( [ loshu2 ] ) .symmetry can be used to transform such symmetry breaking constraints .for example , if we apply to the constraint which ensures that the top left corner is smaller than the bottom right , we get a new symmetry breaking constraints which ensures that the bottom right is smaller than the top left .this selects ( [ loshu2 ] ) and eliminates ( [ loshu ] ) .symmetries can also be found within individual solutions of a constraint satisfaction problem .we say that a solution _ contains _ the internal symmetry ( or equivalently is a internal symmetry _ within _ this solution ) iff .consider again `` lo shu '' .this contains an internal symmetry . 
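the solution symmetries and the symmetry-breaking constraint above are easy to check mechanically. the brute-force sketch below (not part of the original paper) enumerates all normal magic squares of order 3, verifies that reflection in the leading diagonal maps solutions onto solutions, and shows that the constraint requiring the top-left corner to be smaller than the bottom-right corner keeps exactly half of them, one from each pair related by a symmetry that exchanges those two corners (e.g. the half-turn rotation).

```python
from itertools import permutations
import numpy as np

# brute-force illustration of solution symmetries for 3x3 normal magic squares
def is_magic(sq):
    s = sq[0].sum()
    return (all(row.sum() == s for row in sq) and all(col.sum() == s for col in sq.T)
            and np.trace(sq) == s and np.trace(np.fliplr(sq)) == s)

# the magic constant of a normal order-3 square is 15, so filter on the first row
candidates = (np.array(p).reshape(3, 3)
              for p in permutations(range(1, 10)) if p[0] + p[1] + p[2] == 15)
solutions = [sq for sq in candidates if is_magic(sq)]
print("normal magic squares of order 3:", len(solutions))        # 8, a single symmetry class

# reflection in the leading diagonal (transpose) is a solution symmetry:
# it maps every solution onto another solution
assert all(any(np.array_equal(sq.T, other) for other in solutions) for sq in solutions)

# symmetry breaking: keep only squares whose top-left entry is smaller than the
# bottom-right entry; this eliminates, for each kept solution, its image under
# the symmetry exchanging those two corners
kept = [sq for sq in solutions if sq[0, 0] < sq[2, 2]]
print("solutions surviving the symmetry-breaking constraint:", len(kept))   # 4
```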
to see this , consider the solution symmetry that inverts labels , mapping onto solution symmetry maps `` lo shu '' onto a different ( but symmetric ) solution .however , if we now apply the solution symmetry that rotates the square , we map back onto the original solution : in general , there is no relationship between the solution symmetries of a problem and the internal symmetries within a solution of that problem .there are solution symmetries of a problem which are not internal symmetries within any solution of that problem , and vice versa .however , when all solutions of a problem contain the same internal symmetry , we can be sure that this is a solution symmetry of the problem itself .the exploitation of internal symmetries involves two steps : finding internal symmetries , and then restricting search to solutions containing just these internal symmetries .we have explored this idea in two applications where we have been able to extend the state of the art . in the first, we found new lower bound certificates for van der waerden numbers .such numbers are an important concept in ramsey theory . in the second application, we increased the size of graceful labellings known for a family of graphs .graceful labelling has practical applications in areas like communication theory . before our work , the largest double wheel graph that we found graceful labelled in the literature had size 10 . using our method , we constructed the first known labelling for a double wheel of size .katsirelos , g.,narodytska , n. , walsh , t. : static constraints for breaking row and column symmetry . in : 16th int . conf . on principles and practices of constraint programming ( cp-2010 ) , ( 2010 ) . under review .flener , p. , frisch , a. , hnich , b. , kiziltan , z. , miguel , i. , pearson , j. , walsh , t. : breaking row and column symmetry in matrix models . in : 8th int .conf on principles and practices of constraint programming ( cp-2002 ) , ( 2002 )
symmetry can be used to help solve many problems . for instance , einstein s famous 1905 paper ( `` on the electrodynamics of moving bodies '' ) uses symmetry to help derive the laws of special relativity . in artificial intelligence , symmetry has played an important role in both problem representation and reasoning . i describe recent work on using symmetry to help solve constraint satisfaction problems . symmetries occur within individual solutions of problems as well as between different solutions of the same problem . symmetry can also be applied to the constraints in a problem to give new symmetric constraints . reasoning about symmetry can speed up problem solving , and has led to the discovery of new results in both graph and number theory .
a quantum computer is expected to solve some of computationally hard problems for a conventional digital computer .the realization of a practical quantum computer is , however , still challenging in many respects .one of the obstacles to any realization is a phenomenon known as `` decoherence '' .the number of gate operations is severely limited since a quantum state is vulnerable due to interactions with the surroundings .many strategies to overcome internal and external decoherence have been proposed , such as ( 1 ) quantum error correcting codes ( 2 ) decoherence - free subspaces ( 3 ) holonomic quantum computation , among others . in a conventional design of a quantum circuit , the so - called elementary set of gates such as single - qubit rotations and cnot gates , are utilized .this design is motivated by the universality theorem proved in ref .suppose we are to implement an -qubit unitary matrix .in the conventional implementation this matrix is decomposed into a product of matrices acting bewteen a pair of basis vectors .then a cnot gate transforms one of the basis vectors to a new vector , so that the pair of the vectors forms a subspace corresponding to a single qubit on which the matrix acts .quantum algorithm acceleration is a totally new approach to the decoherence issue .this principle is originally proposed in the context of the holonomic quantum computation and of josephson junction qubits . what is required to implementis not individual elementary gates but rather the -qubit matrix realizing the given quantum algorithm . this matrix is directly implemented by properly choosing the control parameters in the hamiltonian .the variational principle tells us that the gate execution time reduces , in general , compared to the conventional construction since the conventional gate sequence belongs to the possible solutions in direct implementation .the proposal has been made for fictitious josephson charge qubits , which are still beyond reach .it is the purpose of this paper to demonstrate the acceleration of a quantum algorithm using an nmr quantum computer at our hand . for this demonstration ,we employ two - qubit grover s search algorithm whose initial state is generated as a pseudopure state by cyclic permutations of the state populations . in this process , we need to prepare three different initial states using transformations acting on a thermal equilibrium ensemble .it is found that the optimized pulse sequence reduces the gate operation time to 25% of the conventional pulse sequence operation time in two ensembles while it remains unchanged in one ensemble , which is already optimized using the conventional pulse sequence .the next section is devoted to the formalism of our approach .the hamiltonian for the two - qubit molecule is introduced and the time - evolution operator is defined .section iii is the main part of this paper , where the exact solutions for grover s algorithm are obtained .these solutions are experimentally verified with our nmr computer .section iv is devoted to summary and discussion .in the present paper , we are concerned with an nmr quantum computer with a two - qubit heteronucleus molecule . 
to be more specific , we use carbon-13 labeled chloroform as a computational resource throughout our theoretical and experimental analyses .the hamiltonian of the molecule is \nonumber\\ & & -\omega_{12 } \left[\cos \phi_2 ( i_2 \otimes \sigma_x /2 ) + \sin \phi_2 ( i_2 \otimes \sigma_y/2 ) \right]\nonumber\\ & & + 2\pi j \sigma_z \otimes \sigma_z/4 , \label{eq : ham}\end{aligned}\ ] ] in the rotating frame of each nucleus . here is the unit matrix of order 2 while is the pauli matrix .the parameter is the amplitude of the rf pulse for the spin while is its phase .these four independent control parameters are collectively denoted as .the time - evolution operator = { \mathcal t } \exp\left[-i \int_0^t h(\gamma(t ) ) dt\right]\ ] ] associated with the hamiltonian ( [ eq : ham ] ) is a functional of , where is the time - ordered product and we employ the natural units in which is set to unity . note that \in su(4) ] as = \|u[\gamma(t ) ] -u_{\rm target}\|_{\rm f},\ ] ] where is the frobenius norm of a matrix .note that ] . to find the absolute minima ,we have generated 512 initial conditions and searched for the optimal solutions with the polytope algorithm on a parallel computer with 512 cpus .to our surprise , the execution time is discrete and assumes the values for all four cases .the ambiguity corresponds to the path leaving from and traversing the compact group times before hitting the destination .the result shows that fig .1 of is misleading at least in the present case : the distance between the cosets and is _ unique _ in the sense that the only degree of freedom left is how many times the path traverses before arriving at the target . the optimal execution time obtained here , however , is the same as that for the conventional pulse sequence in all cases . in other words ,the conventional pulse sequences are already time - optimal .for such a simple algorithm , time - optimization may be carried out by inspection by experts .therefore , we look for more complicated cases to demonstrate the power of this method .suppose we would like to execute grover s search algorithm with a room - temperature liquid state nmr computer .the sample is in a thermal equilibrium state and we will use the temporal averaging by cyclic permutations of state populations to obtain a pseudopure initial state .this is carried out by applying the unitary operators and to the initial thermal state before is executed . 
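as an aside, a toy version of the numerical search described above can be sketched as follows: piecewise-constant control parameters, the frobenius-norm cost to a target gate, and scipy's nelder-mead routine standing in for the polytope method. the segment count, durations, coupling value and the target gate are illustrative placeholders, not the settings used for the results quoted here, and the residual need not reach zero for such an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# toy sketch of the direct gate search: piecewise-constant controls
# (omega_1, phi_1, omega_2, phi_2) per segment, frobenius cost to a placeholder
# target, minimised with the nelder-mead ("polytope") method from random starts.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
i2 = np.eye(2, dtype=complex)

J = 215.0                    # hz, illustrative coupling constant
n_seg, dt = 4, 1.0e-3        # number of control segments and their duration in seconds
u_target = expm(-1j * (np.pi / 4) * np.kron(sz, sz))   # placeholder target gate

def hamiltonian(w1, p1, w2, p2):
    h = -w1 * (np.cos(p1) * np.kron(sx, i2) + np.sin(p1) * np.kron(sy, i2)) / 2
    h -= w2 * (np.cos(p2) * np.kron(i2, sx) + np.sin(p2) * np.kron(i2, sy)) / 2
    return h + 2 * np.pi * J * np.kron(sz, sz) / 4

def cost(gamma):
    u = np.eye(4, dtype=complex)
    for w1, p1, w2, p2 in gamma.reshape(n_seg, 4):
        u = expm(-1j * hamiltonian(w1, p1, w2, p2) * dt) @ u
    return np.linalg.norm(u - u_target)      # frobenius distance to the target

starts = [np.random.default_rng(s).normal(scale=200.0, size=4 * n_seg) for s in range(4)]
best = min((minimize(cost, g0, method="Nelder-Mead",
                     options={"maxiter": 5000, "fatol": 1e-9}) for g0 in starts),
           key=lambda r: r.fun)
print("smallest frobenius distance found:", best.fun)
```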
here stands for the cnot gate with the control bit and the target bit .we now present our search result for and , the other producing similar results .* : an example of a typical time - optimal solution for is the execution time is , which simply reproduces that for the conventional pulse sequence .therefore , the conventional pulse sequence is already optimized .we will employ the conventional pulse sequence for in the following .* : for this case , an example of the time - optimal pulse sequence is ,\nonumber\end{aligned}\ ] ] where the second line of shows the pulse sequence made of the existing terms in the hamiltonian .it should be noted that this pulse sequence requires the execution time in spite of an additional gate , which is composed of two cnot gates and costs of time to execute in the conventional pulse sequence .the execution time of the time - optimal pulse sequence is 25% of that for the conventional pulse sequence .the execution time for the other solutions takes discrete values .this corresponds to a path traversing times before arriving at the .* : an example of the time - optimal pulse sequence is \otimes i_2 \nonumber\\ u_j&= & e^{-i ( \pi/4)\sigma_z \otimes \sigma_z}\\ k_2 & = & e^{i ( \pi/4 ) \sigma_x } \otimes i_2.\nonumber\end{aligned}\ ] ] note again that the execution time of the gate for the time - optimal pulse sequence is 25% of that for the conventional pulse sequence . in general ,the execution time is as in the above case .the generators in the pulse sequences , that do not exist in the hamiltonian ( [ eq : ham ] ) , are rewritten in favor of the existing terms by making use of the conjugate transformations ( [ eq:1qubit ] ) and ( [ eq:2qubit ] ) .the results are summarized in table i. also shown in the table are the results according to the conventional pulse sequence for the respective gate .gate&pulse sequence&execution time + & ` 1 : -y -(1/2j)-ym - xm-(1/2j)-ym - xm- ` & 1/j + & ` 2 : -y -(1/2j)-ym - x -(1/2j)-ym - xm- ` & + & ` 1 : -x-(1/2j)-x --------------y -(1/2j)-ym - xm-(1/2j)-ym - xm- ` & 2/j + & ` 2 : --------------x -(1/2j)-x -y -(1/2j)-ym - x -(1/2j)-ym - xm- ` & + & ` 1 : --------------x -(1/2j)-x-y -(1/2j)-ym - xm-(1/2j)-ym - xm- ` & 2/j + & ` 2 : -x -(1/2j)-x --------------y -(1/2j)-ym - x -(1/2j)-ym - xm- ` & + + gate&pulse sequence&execution time + & ` 1 : -x -(1/2j)-xm - ym-(1/2j)-y -pi(45)- ` & 1/j + & ` 2 : -x -(1/2j)-xm - y -(1/2j)-x -ym - ` & + & ` 1 : -x -(1/2j)-xm - ym- ` & 1/2j + & ` 2 : -ym - ym- ` & + & ` 1 : ` & 1/2j + & ` 2 : -y -x -(1/2j)-xm- ` & + in our experiments , we used 0.6 milliliter , 200 millimolar sample of carbon-13 labeled chloroform ( cambridge isotopes ) in d-6 acetone .data were taken at room temperature with a jeol eca-500 ( the hydrogen larmor frequency being approximately 500 mhz ) spectorometer .the measured coupling strength is hz and the transverse relaxation time is s for the hydrogen nucleus while s for the carbon nucleus .the longitudinal relaxation time is measured to be s for both nuclei .the spin 1 and 2 in table [ table : ps ] correspond to carbon-13 and h , respectively .our experimental results are shown in fig .[ fig : s ] , which shows the spectra corresponding to the state .the spectra in fig .[ fig : s ] ( a ) were obtained by using hard pulses whose duration is 25 for pulses .we have intentionally introduced longer pulses with 250 duration to see the effect of imperfections , see fig .[ fig : s ] ( b ) . 
in the first case, we can well ignore the time evolution due to the j - coupling while pulses are applied since the characterist time for the -coupling is . in the latter case , however , six 250 pulses amounts to the total duration of 1.5ms , which is comparable to and we expect that the difference in the number of pulses will manifest .figure [ fig : s ] ( b ) clearly demonstrates that time - optimal pulse sequences produce sharper main peak compared to the conventional pulse sequences and less unwanted signal , showing the superiority of our solutions .we also expect that quantum algorithm accreleration should be effective to fight against decoherence .negative signal amplitudes indicate that the carbon nucleus is in the state , while appearance of the main signal at the frequency of 77.5 ppm ( instead of 79.2 ppm ) implies that the hydrogen nucleus is in the state .the insets show the signals in the vicinity of 79.2 ppm where a signal may appear if the hydrogen nulceus has component .the scales in the insets are the same as in the main panels .( a ) the upper panel shows the spectra obtained with conventional ( dotted line ) and optimized ( solid line ) pulse sequences .each -pulse duration is set to 25 .the main peak produced by the optimized pulse sequences is slightly sharper than that of the conventional one .( b ) the lower panel shows spectra with conventional ( dotted line ) and optimized ( solid line ) pulse sequences , in which the duration of the -pulse is now set to 250 .the signal produced by the optimized pulse sequences is clearly better than that by the conventional ones .note also that unwanted signal in the inset is weaker for the time - optimal pulse sequences ., width=302 ]in summary , we have demonstrated both theoretically and experimentally that quantum algorithms may be accelerated if the unitary matrix realizing an algorithm is directly implemented by manipulating the control parameters in the hamiltonian .we have verified this by implementing grover s algorithm which picks out the `` file '' starting from the pseudopure state generated by cyclic permutations of the state populations .we obtained the time - optimal pulse sequences and compared the results with those obtained by the conventional pulse sequences .it turns out that the gate is already optimized in the conventional pulse sequence while the gates and required for the cyclic permutations are accelerated so that the execution time is 25% of that for the conventional pulse sequence in both cases .the number of the pulses required for ( ) is 4 ( 3 ) in the time - optimal pulses sequence , while it is 14 in both cases if the conventional pulse sequences are employed .the smallness in the number of pulses required leads to a higher - quality spectrum .we would like to thank manabu ishifune for sample preparation , toshie minematsu for assistance in nmr operations and katsuo asakura and naoyuki fujii of jeol for assistance in nmr pulse programming .parallel computing for the present work has been carried out with the cp - pacs computer under the `` large - scale numerical simulation program '' of center for computational physics , university of tsukuba .we would like to thank martti salomaa for drawing their attention to refs . and careful reading of the manuscript .mn is grateful for partial support of a grants - in - aid for scientific research from ministry of education , culture , sports , science and technology ( no . 
13135215 ) and japan society for promotion of science ( jsps ) ( no .14540346 ) .st would like to thank jsps for partial support ( no .15540277 ) .99 m. a. nielsen , and i. l. chuang , _ quantum computation and quantum information _ , ( cambridge university press , cambridge , 2000 ) .f. de .martini and c. monroe ( eds . ) , _ experimental quantum computation and information _ , ( ios press , amsterdam , 2002 ) . a. r. calderbank and p. w. shor , phys . rev .a * 54 * , 1098 ( 1996 ) .d. a. lidar , i. l. chuang , and k. b. whaley , phys .lett . * 81 * , 2594 ( 1998 ) .p. zanardi and m. rasetti , pjysa , * 264 * , 94 ( 1999 ) . j. j. vartiainen , m. mttnen , and m. m. salomaa , phys .92 * , 177902 ( 2004 ) .m. mttnen , j. j. vartiainen , v. bergholm , and m. m. salomaa , quant - ph/0404089 ( 2004 ) .a. barenco , c. h. bennett , r. cleve , d. p. divincenzo , n. margolus , p. shor , t. sleator , j. a. smolin , and h. weinfurter , phys .rev . a * 52 * , 3457 ( 1995 ) .a. o. niskanen , m. nakahara , and m. m. salomaa , quantum inf .* 2 * , 560 ( 2002 ) .a. o. niskanen , m. nakahara , and m. m. salomaa , phys .a * 67 * , 012319 ( 2003 ) .s. tanimura , d. hayashi , and m. nakahara , phys .a * 325 * , 199 ( 2004 ) .a. o. niskanen , j. j. vartiainen , and m. m. salomaa , phys .* 90 * , 197901 ( 2003 ) . j. v. vartiainen , a. o. niskanen , m. nakahara and m. m. salomaa , int .. inf . * 2 * , 1 ( 2004 ) j. v. vartiainen , a. o. niskanen , m. nakahara and m. m. salomaa , phys . rev .a , to be published . i. l. chuang , n. gershenfeld , and m. kubinec , phys .lett . * 80 * , 3408 ( 1998 ) .n. khaneja , r. brockett , and s. j. glaser , phys .a * 63 * , 032308 ( 2001 ) .n. khaneja , harvard thesis ( 2000 ) .l. grover , in _ proceedings of the 28th annual acm symposium on the theory of computation _( acm press , new york , 1996 ) , 212 , l. k. grover , phys . rev .lett . * 79 * , 325 ( 1997 ) .
in general , a quantum circuit is constructed with elementary gates , such as one - qubit gates and cnot gates . it is possible , however , to speed up the execution time of a given circuit by merging those elementary gates together into larger modules , such that the desired unitary matrix expressing the algorithm is directly implemented . we demonstrate this experimentally by taking the two - qubit grover s algorithm implemented in nmr quantum computation , whose pseudopure state is generated by cyclic permutations of the state populations . this is the first exact time - optimal solution , to our knowledge , obtained for a self - contained quantum algorithm .
over a decade of research in network analysis has revealed a number of common properties of complex real - world networks ._ community structure _ the occurrence of cohesive modules of nodes is of particular interest as it provides an insight into not only structural organization but also functional behavior of various real - world systems .the analysis of communities has thus been the focus of many recent endeavors , while community structure analysis is also considered as one of the most prominent areas of network science .however , most of the past work was constrained to communities characterized by higher density of links_link - density communities _ ( fig .[ fig_comms_zkc ] ) .in contrast to the latter , recent studies reveal that networks comprise even more sophisticated modules than classical cohesive communities .in particular , real - world networks can also be naturally partitioned according to common patterns of connections among nodes into _ link - pattern communities _ ( fig .[ fig_comms_swc ] ) .link - pattern communities can in fact be related to relevant functional roles in various complex systems , moreover , they also provide a further comprehension of real - world network structure that is obscure under classical frameworks .note that link - density communities could be seen as a special case of link - pattern communities , although several fundamental differences exist .in particular , link - pattern communities do not correspond to densely connected groups of nodes , while generally do not even feature connectedness .the latter actually implies low transitivity clustering coefficient for the nodes in link - pattern communities , which contradicts with small - world phenomena .however , recent work suggests that best link - pattern communities might indeed emerge in parts of networks that exhibit low values of clustering ( e.g. , technological networks ) , where small - world property does not generally hold .recently , ubelj and bajec have proposed a general propagation algorithm that can reveal arbitrary network modules ranging from link - density to link - pattern communities .their algorithm does not require any prior knowledge of the true structure , though they introduce a community parameter that models the nature of each community according to the measure of network bottlenecks conductance .we advance the latter by proposing a more adequate modeling strategy based on node clustering coefficient .the resulting algorithm is evaluated on various synthetic benchmark networks with planted partition , on random graphs and also resolution limit examples .it is shown to be comparable to current state - of - the - art , whereas , the proposed strategy also greatly improves on the approach of ubelj and bajec ( on these networks ) . furthermore , to demonstrate its generality , we also employ the algorithm for community detection in different unipartite and bipartite real - world networks , for generalized community detection and predictive data clustering . the rest of the paper is structured as follows . 
in section [ sec_rw ]we briefly review relevant related work , with emphasis on the community detection literature .section [ sec_alg ] introduces the proposed algorithm , while the empirical evaluation with formal discussion is done in section [ sec_eval ] .the performance on various real - world examples is presented in section [ sec_rwe ] , and conclusions are made in section [ sec_conc ] .despite the wealth of the literature on classical communities in recent years , only a small number of authors have considered more general link - pattern communities .nevertheless , authors have recently proposed different algorithms based on stochastic blockmodels , mixture models , model selection , data clustering and other . however , in contrast to the propagation algorithm proposed in this paper , and that in , all other approaches require some prior knowledge of the true structure ( e.g. , the number of communities ) .the latter indeed seriously limits their use in practice .note that authors have also analyzed vertex similarity based on common patterns of connections commonly referred to as _structural equivalence_whereas , some of the research on classical communities also apply for link - pattern counterparts .it ought to be mentioned that link - pattern communities are known as _blockmodels _ in social networks literature .these have been extensively studied in the past , however , their main focus and employed formulation differs from ours .let the network be represented by an undirected graph , where is the set of nodes of the graph and is the set of its links ( edges ) .furthermore , let be the weight of the link between nodes . moreover , let denote the community ( label ) of node , and let be the set of its neighbors .the proposed model - based propagation algorithm is , as the algorithm in , based on the label propagation principle of raghavan et al . . in the following ,we thus first introduce the latter . [[ label - propagation . ] ] label propagation .+ + + + + + + + + + + + + + + + + + label propagation algorithm ( lpa ) reveals link - density communities by exploiting the following procedure .first , each node is labeled with a unique label ( i.e. , ) .then , at each iteration , the node adopts the label shared by most of its neighbors ( with respect to link weights ) .hence , where is the set of neighbors of node that share label ( ties are broken uniformly at random ) .due to the existence of many intra - community links , relative to the number of inter - community links , cohesive modules of nodes form a consensus on some label after a few iterations . thus ,when the algorithm converges a local equilibrium is reached disconnected sets of nodes sharing the same label are classified into the same community . due to extremely fast structural inference of label propagation ,the algorithm exhibits near linear complexity and can easily scale to networks with millions of nodes and links .note that , to address issues with oscillations of labels in some networks ( e.g. , bipartite networks ) , label updates in eq .( [ eq_lpa ] ) occur in a random order .[ [ general - propagation . 
] ] general propagation .+ + + + + + + + + + + + + + + + + + + + ubelj and bajec have argued that label propagation can not be directly applied for the detection of link - pattern communities , as the bare nature of propagation requires connected ( and cohesive ) groups of nodes .however , when one considers second order neighborhoods , and propagates labels through nodes neighbors , link - pattern communities indeed correspond to cohesive modules of nodes ( see fig . [ fig_comms_swc ] ) . based on the above theyhave proposed general propagation algorithm ( gpa ) that is presented in the following .let be a community parameter that models the nature of community , ] .then , node balancers are set according to where and are parameters of the algorithm .intuitively , we fix to , while is set to based on some preliminary experiments ( see section [ sec_eval ] ) .node balancers can also be modeled with a linear function as , however , introduction of the above parameters allows for a distinct control over the algorithm .in particular , analysis in section [ sec_eval ] reveals that increasing improves the stability of the algorithm , although the computational time thus also increases .note also that setting to yields a classical label propagation where all are equal . to further boost the community detection strength of the algorithm , defensive preservation of communitiesis employed through diffusion values , .here higher diffusion values propagation preferences are given to core nodes of each ( current ) community , while lower values are given to their border nodes .the latter results in an immense ability of detecting communities , even when they are only weakly depicted in the network s topology . at each iteration ,diffusion values are estimated by means of a random walker utilized on each ( current ) community .hence , and where is the intra - community degree of node ( all , are initialized to ) . besides deriving an estimate of the core and border of each community, the main rationale here is to formulate propagation diffusion within each community , to estimate the current state of label propagation , and then to adequately alter the dynamics of the process .analysis in section [ sec_eval ] reveals that defensive preservation of communities significantly improves the detection strength of the algorithm , while for further discussion and analysis see . despite the discussion above, the core of the algorithm is in fact represented by a community modeling strategy implemented through parameters .ubelj and bajec have proposed to measure the conductance of each community , to determine whether it better conforms with link - density or link - pattern regime .conductance of community is defined as a relative size of the corresponding network cut ratio of inter - community links thus it is a measure of network bottlenecks .hence , at each iteration , they simply set , while all are initialized to .the main weakness of their strategy is that each community is considered independently of other .thus , in the following , we propose a more adequate community modeling strategy based on the properties of complex real - world networks .[ [ model - based - propagation . 
] ] model - based propagation .+ + + + + + + + + + + + + + + + + + + + + + + + community modeling strategy of ubelj and bajec considers merely the nature of each respective community , whereas all other communities are disregarded .although no proper empirical study exists , in an ideal case , link - pattern communities would link to other link - pattern communities rather than to other link - density communities .the latter follows from the fact that the concerned links would else obviously decrease the quality of the respective link - density community make it a link - pattern community .thus , we propose a community model based on the hypothesis that the neighbors communities should be of the same type either link - density or link - pattern as the concerned node s community .hence , where is the degree of node and is the set of nodes in community .we also argue that an adequate initialization of community parameters is of vital importance ( exact results are omitted ) .otherwise , the algorithm can easily get trapped in some local stable probably suboptimal fixed point that is hard to escape from .however , eq . ( [ eq_delta ] ) can not be directly employed at the beginning , as all nodes still reside in their own communities .we thus refine the above hypothesis such that the node s neighbors should not only reside in the same type of the community , but in the same respective community .the latter immediately implies that the neighbors of the nodes in link - density communities should also link to each other , whereas the opposite holds for the nodes in link - pattern communities .hence , for each node , one could initially set to , where is a node clustering coefficient defined as the probability that two neighbors of node also link to each other network transitivity .it ought to be mentioned that recent work suggests that transitivity rather than homophily gives rise to the modular structure in real - world networks .however , consider a node with very high degree a hub node .hubs commonly appear in link - density communities , still , due to a large number of links , they would only rarely experience high values of clustering coefficient ( the opposite would in fact imply a large clique ) . also , as most networks are disassortative by degree ,hubs tend to link to low degree nodes that can not provide for high clustering of the hub node .indeed , in many real - world networks node clustering coefficient roughly follows , where is the degree of node .hence , we model initial communities as ( assume ) [ eq_init]_c_n = 1 & for , + & otherwise , where and are estimated from the network using ordinary least squares , and is a parameter .we set to based on some preliminary experiments .( [ eq_init ] ) and eq . ( [ eq_delta ] ) define the proposed model - based propagation algorithm ( mpa ) , which is else ( almost ) identical to the algorithm in ( see alg . [ alg_mpa ] ). however , the evaluation on synthetic and real - world networks in section [ sec_eval ] and section [ sec_rwe ] , respectively , reveals that the proposed approach significantly outperforms that in . 
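for reference, the nmi just defined can be computed in a few lines. the sketch below uses the widely adopted normalisation by the average of the two entropies, stated here as an assumption since other normalisations of the mutual information are also in common use, and the two toy partitions are placeholders.

```python
import numpy as np

# illustrative nmi between an extracted partition and the planted one,
# normalised as 2*i(x;y) / (h(x) + h(y)); the toy partitions are placeholders
def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nmi(a, b):
    a, b = np.asarray(a), np.asarray(b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)            # contingency table of the two labelings
    joint /= len(a)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz]))
    h = entropy(a) + entropy(b)
    return 2.0 * mi / h if h > 0 else 1.0

planted = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
extracted = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 0]
print("nmi:", round(nmi(planted, extracted), 3))   # equals 1 only for identical partitions
```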
for a thorough evaluation ,we also analyze two variations of the basic approach that fix all community parameters to either or .the approaches thus result in a fully link - density or link - pattern community detection algorithms , and are denoted mpa(d ) and mpa(p ) , respectively .graph and parameters , , communities * * shuffle** and the following we evaluate the proposed algorithm on different synthetic benchmark networks with planted partition , and also on random networks . the results are assessed in terms of three different measures of community significance , borrowed from the field of information theory and community detection literature .let be a partition extracted by an algorithm and let be the known partition of the network ( corresponding random variables are and , respectively ) . first normalized mutual information ( nmi)has become a de facto standard in the recent literature .nmi of and is defined as , where is the mutual information , and , and are standard and conditional entropies .nmi of identical partitions equals , and is for independent ones , ] .last , for a better comprehension , we also adopt a more intuitive measure fraction of correctly classified nodes ( fcc)that is commonly adopted within community detection literature . the node is considered correctly classified , if it resides in the same community as at least one half of the nodes in its true community .again , ] .when is , all links are set according to the designed community structure , while for equal , the networks are completely random .the results are shown in fig .[ fig_eval_gn2 ] . observe that for small values of only mpa and mpa(p ) can accurately reveal the planted structure in these networks .however , when increases , the performance of mpa is similar to that of a classical community detection algorithm ( e.g. , mo(g ) or mpa(d ) ) .mm(em ) can detect communities to some extent until ( dashed lines in figs .[ fig_eval_gn2 ] , [ fig_eval_sb])when , for the nodes within link - density communities , there are twice as many links that conform with the planted structure than randomly placed links . note also that twice as many links are needed to define a link - pattern community , compared to a respective link - density community , which would yield the same threshold at for these networks ( solid lines in figs .[ fig_eval_gn2 ] , [ fig_eval_sb ] ) .thus , mpa accurately extracts planted link - density and link - pattern communities in these networks , as long as they are clearly depicted in the network s topology .note also that community modeling strategy within mpa seems more adequate than that of gpa .[ [ sb - benchmark . 
] ] sb benchmark .+ + + + + + + + + + + + + gn2 benchmark provides a rather unrealistic testbed due to homogeneous degree and community size distributions .we address the latter by proposing a class of simple benchmark networks with heterogeneous community sizes .networks comprise three communities of , and nodes , respectively ( see network in fig .[ fig_sb ] ) .the latter two again form a bipartite structure of link - pattern communities , while the third community corresponds to a classical cohesive module .links are placed according to the designed community structure such that the average degree of the nodes in the first and third community is fixed to .the latter implies an average degree of for the nodes in the second community .furthermore , we also add some number of links uniformly at random for each node denoted node confusion degree , .the results appear in fig .[ fig_eval_sb ] .the performance of the algorithms is rather similar to that on gn2 benchmark ( note different scales in figs .[ fig_eval_gn2 ] , [ fig_eval_sb ] ) . only mpa can accurately reveal the planted structure for small values of , while the model within gpa again seems to fail .observe that mm(em ) can extract communities equally well , even when equals of the links for the nodes in the second community still agrees with the intrinsic structure , thus , the communities are only marginally defined .the latter clearly demonstrates that knowing an exact number of communities indeed presents a significant advantage .[ [ lfr - benchmark . ] ] lfr benchmark .+ + + + + + + + + + + + + + to enable easier comparison with previous literature on community detection , we also apply the algorithms to a class of standard benchmark networks with scale - free degree and community size distributions proposed by lancichinetti et al . .the size of the networks is set to , while community sizes range between and nodes .note that all communities here correspond to a link - density regime .as before , the quality of the planted structure is controlled by a mixing parameter , ] .the nodes of the network are linked with the probability associated with the lowest common ancestor in the community dendrogram .varying the values of can infer ( almost ) arbitrary hierarchical structure of either link - density or link - pattern communities .however , due to simplicity , we associate each level of the nodes with the same probability . thus , denote ]-normalization . in order to obtain a sparse network , links must also be thresholded accordingly .( due to simplicity , we consider only unweighted versions of the algorithms . )note that the resulting network thus commonly decomposes into several connected components , however , community detection algorithm can still be employed to further partition these components ( see table [ tbl_rw_clus ] ) .crcrcccc & & & & & & & + ' '' '' & & & & & & & + & & & & & & & + & & & & & & & + & & & & & & & + we employ community detection to predict class variables of two famous datasets iris plants dataset introduced by fisher , and ecoli protein localization sites dataset . 
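the predictive data clustering pipeline described above can be sketched compactly. in the snippet below a gaussian similarity, a fixed quantile threshold and networkx's plain label propagation stand in for the actual normalisation, thresholding and the proposed algorithm, so all of these choices are assumptions made purely for illustration.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import label_propagation_communities
from sklearn.datasets import load_iris

# schematic data-clustering pipeline: features -> similarity network -> threshold
# -> community detection.  the gaussian similarity, the 90% quantile threshold and
# plain label propagation are illustrative stand-ins only.
x = load_iris().data
x = (x - x.mean(axis=0)) / x.std(axis=0)                  # simple normalisation

d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)  # squared pairwise distances
sim = np.exp(-d2 / d2.mean())                             # gaussian similarity
np.fill_diagonal(sim, 0.0)

thr = np.quantile(sim[sim > 0], 0.90)                     # keep only the strongest links
rows, cols = np.where(np.triu(sim, k=1) >= thr)
g = nx.Graph()
g.add_edges_from(zip(rows.tolist(), cols.tolist()))

communities = list(label_propagation_communities(g))
print("nodes with links:", g.number_of_nodes(), " communities found:", len(communities))
```

nodes left without any link after thresholding simply do not appear in the graph, consistent with the observation above that the resulting network commonly decomposes into several connected components.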
for comparison , in table [ tbl_rw_clus ]we also report the results for a classical partitional clustering algorithm k - means ( denoted km ) .observe that mpa obtains extremely promising results on these datasets , while it also significantly outperforms mm(em ) and km that are both advised about the number of communities .still , the results could be further improved in various ways .( note that low nmi for mm(em ) on ecoli dataset is not entirely evident . )the paper proposes an enhanced community modeling strategy for a recently introduced general propagation algorithm .the resulting algorithm can detect arbitrary network modules ranging from link - density communities to link - pattern communities while , in contrast to most other approaches , it requires no apriori knowledge about the true structure ( e.g. , the number of communities ) .the algorithm was evaluated on various benchmark networks with planted partition , on random graphs and resolution limit test networks , where it is shown to be at least comparable to current state - of - the - art . moreover , to demonstrate its generality , the algorithm was also employed for community detection in different unipartite and bipartite social networks , for generalized community detection and data clustering .the results imply that the proposed community model provides an adequate approximation of the real - world network structure , although , recent work suggests that network clustering and degree mixing could be even further utilized within the model .the latter will be considered for future work .( for supporting website see http://lovro.lpt.fri.uni - lj.si/. )this work has been supported by the slovene research agency arrs within research program no .p2 - 0359 .blondel , v.d . ,gajardo , a. , heymans , m. , senellart , p. , dooren , p.v . : a measure of similarity between graph vertices : applications to synonym extraction and web searching .46(4 ) , 647666 ( 2004 ) horton , p. , nakai , k. : a probabilistic classification system for predicting the cellular localization sites of proteins . in : proceedings of the international conference on intelligent systems for molecular biology . pp . 109115 ( 1996 )lin , c. , koh , j. , chen , a.l.p . : a better strategy of discovering link - pattern based communities by classical clustering methods . in : proceedings of the pacific - asia conference on knowledge discovery and data mining .5667 ( 2010 )
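as a brief aside to the evaluation above , the two partition - comparison measures can be sketched in a few lines of python . this is only an illustration : it uses one common nmi normalization , twice the mutual information divided by the sum of the two entropies , and a direct reading of the fcc rule ( a node counts as correct when at least half of its true community shares its detected community ) ; the function names are ours and need not match the original implementation .

```python
# illustrative sketch (not the authors' code): nmi with the 2*I/(H(c)+H(p))
# normalization and a direct reading of the fcc rule described in the text.
import math
from collections import Counter

def nmi(true_labels, found_labels):
    n = len(true_labels)
    pc, pf = Counter(true_labels), Counter(found_labels)
    joint = Counter(zip(true_labels, found_labels))
    mi = sum(m / n * math.log(m * n / (pc[c] * pf[f]))
             for (c, f), m in joint.items())
    hc = -sum(m / n * math.log(m / n) for m in pc.values())
    hf = -sum(m / n * math.log(m / n) for m in pf.values())
    return 1.0 if hc + hf == 0 else 2 * mi / (hc + hf)

def fcc(true_labels, found_labels):
    n = len(true_labels)
    hits = 0
    for i in range(n):
        peers = [j for j in range(n) if true_labels[j] == true_labels[i]]
        same = sum(1 for j in peers if found_labels[j] == found_labels[i])
        hits += same >= len(peers) / 2   # the node itself is counted here
    return hits / n

# identical partitions (up to relabeling) give nmi = fcc = 1
print(nmi([0, 0, 1, 1], [1, 1, 0, 0]), fcc([0, 0, 1, 1], [1, 1, 0, 0]))
```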
community structure is largely regarded as an intrinsic property of complex real - world networks . however , recent studies reveal that networks comprise even more sophisticated modules than classical cohesive communities . more precisely , real - world networks can also be naturally partitioned according to common patterns of connections between the nodes . recently , a propagation based algorithm has been proposed for the detection of arbitrary network modules . we here advance the latter with a more adequate community modeling strategy based on network clustering . the resulting algorithm is evaluated on various synthetic benchmark networks and random graphs . it is shown to be comparable to current state - of - the - art algorithms ; however , in contrast to other approaches , it does not require any prior knowledge of the true community structure . to demonstrate its generality , we further employ the proposed algorithm for community detection in different unipartite and bipartite real - world networks , for generalized community detection , and for predictive data clustering .
signal processing and signal transmission play an important role in many different areas .typical problems include the reconstruction of a signal by its discretely sampled values as well as the detection of changes from a given reference signal . for univariate signals , sampled equidistantly using a sampling period and disturbed by additive noise , such that one obtains a block of noisy samples where a nonparametric joint reconstruction / detection algorithm has been proposed in the paper of .their approach has several appealing features .firstly , the algorithm can detect changes while reconstructing the signal at the same time .secondly , it is a nonparametric approach , i.e. no further information about the exact class to which the observed signal belongs is necessary .lastly , the procedure works in a sequential way such that changes can be detected on - line , in contrast to off - line detection schemes which can first detect changes in retrospect , i.e. when the whole data set is already available .a natural question arises whether this approach also works for high - dimensional signals .one answer to this problem is given in , where the authors treat matrix - valued signals and apply results from by considering quadratic forms . in the present paper , however , we consider a more general framework by focusing our attention on signals for .examples for such signals are multifaceted , including geographic and climatic data as well as image data , that are observed over a fixed time horizon . in order to simplify the notation we fix as this case also covers the most interesting applications .however , our results also hold true for and arbitrary and the corresponding proofs can easily be completed along the same lines .thus , in the following we are interested in reconstructing three - dimensional signals and , at the same time , in detecting changes from a given reference signal . here , one component represents the time and the other two the location .the application that we have in mind are video signals , i.e. sequences of image frames over time .the basis on which we now want to establish our investigations is a finite block of noisy samples that in accordance with model ( [ basicmodelpreface ] ) is obtained from the model here , is the unknown signal depending on time ( ) and location ( and ) , is a zero mean random field and , are the sampling periods .we assume that they fulfill and , as . as in want to base our approaches on classical reconstruction procedures from the signal sampling theory , leading to sequential partial sum processes as detector statistics , see section [ detalg ] . in order to make these detector statistics applicable we need to determine proper control limits ; thus , in section [ limdist ] we will show that we can generalize the two main weak convergence results in to our multidimensional context , i.e. we show weak convergence of the detection process towards gaussian processes under different assumptions on the dependence structure of the noise processes where either the null hypothesis or the alternative holds true . in section [ ext ] we present extensions to weighting functions , which allow to detect the location of the change as well , and discuss how to treat the case of an unknown but time - constant reference signal. finally , in section [ sim ] we present some simulation results concerning the rejection rates and the power of the detection algorithm .we now want to extend the main results of to signals with . 
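before turning to the estimator , the sampling model above can be mimicked numerically ; the sketch below uses an arbitrary smooth test signal and i.i.d. gaussian noise , which is only the simplest case covered by the dependence assumptions introduced later , and all grid sizes are placeholder choices .

```python
# sketch of the sampling model: noisy observations of a signal f(t, x2, x3)
# on an equidistant grid; test signal and noise level are arbitrary choices.
import numpy as np

n1, n2, n3 = 64, 32, 32                             # samples per axis
tau1, tau2, tau3 = 1.0 / n1, 1.0 / n2, 1.0 / n3     # sampling periods

def f(t, x2, x3):
    # smooth reference signal (placeholder)
    return np.sin(2 * np.pi * t) * np.exp(-((x2 - 0.5) ** 2 + (x3 - 0.5) ** 2))

t = tau1 * np.arange(1, n1 + 1)
x2 = tau2 * np.arange(1, n2 + 1)
x3 = tau3 * np.arange(1, n3 + 1)
T, X2, X3 = np.meshgrid(t, x2, x3, indexing="ij")

rng = np.random.default_rng(0)
noise = 0.2 * rng.standard_normal((n1, n2, n3))     # i.i.d. special case
y = f(T, X2, X3) + noise                            # observed block of samples
print(y.shape)
```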
as in base our estimator of on results of the signal sampling theory like the shannon - whittaker theorem .this theorem has generalizations to signals with several variables . in three dimensions we have for band - limited functions on \times[-\omega_2,\omega_2]\times[-\omega_3,\omega_3] ] .as we receive our data in a sequential way over time as a sequence of image frames , we want to be able to detect changes as early as possible , i.e. we want to give an alarm as soon as we have enough evidence in our samples corresponding to the first image frames , to reject the null hypothesis . to achieve this aimwe consider a sequential partial sum process over time which is defined as \notag\\ & \hspace{4.3cm}\varphi\left(t - l_1\tau_1,r_2-l_2\tau_2,r_3-l_3\tau_3\right),\end{aligned}\ ] ] for , r_2\in[0,\overline{\tau}_2 ] , r_3\in[0,\overline{\tau}_3] ] with .the symbol then denotes the point ] which is defined as now , let be a subset of ^q ] set and respectively .we call a _ block _ if it is of the form ,\ ] ] where each , j=1,\ldots , q ] .we now define the increment of a stochastic process ^q\} ] , cf . , p. 709 , where stands for the space of all real - valued continuous functions on ^q ] on ^q ] , then the increments are independent normal random variables with means zero and variances being the -dimensional lebesgue measure on ^q ] as for some constant . here ,the constant equals the long - run variance of the random field , i.e. there exist several results in the literature about the weak invariance principle ( [ ass1 ] ) under specific conditions on the random field .in particular , in the i.i.d .case we get the functional central limit theorem under the sole assumptions that see corollary 1 in . more generally , a functional central limit theorem for strictly stationary and -mixing random fields can be found in , cf .theorem 1 .further results on weak invariance principles for random fields include weakly stationary associated as well as weakly stationary and -mixing random fields , cf . , p. 2906 , and , theorem 1 , respectively .the latter obtain a strong approximation of the partial sum field by a brownian motion from which one can deduce a weak invariance principle quite directly .other results on functional central limit theorems for random fields include the ones of , cf . theorem 1.1 , and , cf . theorem 2 .these authors consider random fields of the form where is a measurable function and the are i.i.d . random variables . introduce the notion of a -stable random field and then obtain a weak invariance principle for the so - called smoothed partial sum process . with the help of assumption 1 we are now in a position to formulate the following theorem stating the asymptotic behaviour of the process .[ fcltnull ] suppose the noise process meets assumption 1 .assume that the sampling periods fulfill for as .then , under the null hypothesis , we have as for , ] and ] .the weak convergence takes place in a higher dimensional skorohod space and the last integral is interpreted as multivariate riemann - stieltjes integral , see appendix [ details ] for more details .the next lemma is a characterization of the correlation structure of the limit process .[ covstructure ] * the process is a nonstationary multivariable gaussian process with and covariance function for , , , and . 
* the process has continuous sample paths .now that we have the limit distribution of under the null hypothesis at our disposal , we can easily derive central limit theorems for the local and global maximum norm detector defined in ( [ locmvb ] ) and ( [ globmvb ] ) .[ cltdet ] assume that condition ( [ learningsample ] ) holds true .then , under the conditions of theorem [ fcltnull ] the detectors satisfy the following central limit theorems : : \sup_{r_2\in[0,\overline{\tau}_2],\atop r_3\in[0,\overline{\tau}_3]}\left|\mathcal{f}(s , s\overline{\tau}_1,r_2,r_3)\right|>c_l\right\},\\ & \mathcal m_n / n_1\rightarrow\mathcal{m}\coloneqq\inf\left\{s\in[s_0,1 ] : \sup_{0\leq t\leq s\overline{\tau}_1}\sup_{r_2\in[0,\overline{\tau}_2],\atop r_3\in[0,\overline{\tau}_3]}\left|\mathcal{f}(s , t , r_2,r_3)\right|>c_m\right\},\end{aligned}\ ] ] as .we now investigate the behaviour of our statistic under a general class of alternatives , i.e. in the situation when the observed signal and the reference signal differ .we assume that our observed data obey the following model : with the true signal depending on the sample size and as .the process is again the zero mean noise random field fulfilling assumption 1 .it turns out that the process converges to a well - defined and non - degenerate limit process under general conditions on the variation of the difference , similar as in .however , whereas in dimension the vitali variation suffices , in higher dimensions one has to consider the variation in the sense of hardy and krause .let with .a _ ladder _ on ] , we have with such that is the successor of for all and it is for .if we now define as the set of all ladders on ] as in order to generalize the one - dimensional variation to the multidimensional case we need the concept of multidimensional ladders .we now consider a hyperrectangle ] as , where is a ladder on ] , where denotes the set of all ladders on ] , in the sense of vitali , is }(f)\coloneqq\sup_{\mathcal y\in\mathbb y}v_{\mathcal y}(f).\end{aligned}\ ] ] the variation of on the hyperrectangle ] ) if .note that for the variation in the sense of vitali corresponds with the common definition of the variation of a univariate function .we say that the function is of bounded variation in the sense of hardy and krause ( and we write or ] and }f\left(\bm{x}_u:\bm z_{-u}\right)<\infty ] , which is the original definition of bounded variation of hardy , see .this means , that in the above definition could be replaced by an arbitrary fixed point of the hyperrectangle ] .this is a local alternative with a change point at time .this means , that up to time the observed data obey and after this point in time they get disturbed by .this disturbance depends on the function which assigns different weights at the locations changing with the time . in the following we require a more general model for local alternatives ,namely we consider for some and a deterministic function .we assume that meets the following assumption .* assumption 2 : * let be a nonzero function defined on \times[0,\overline{\tau}_2]\times[0,\overline{\tau}_3] ] , ] .the limit stochastic process is given by similarly as above we obtain central limit theorems for our detectors , defined in ( [ locmvb ] ) and ( [ globmvb ] ) , under the alternative . 
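in practical terms , the global maximum norm detector behind these limit theorems amounts to a simple monitoring loop : re - evaluate the statistic after each new image frame , track its maximum over the monitored grid and raise an alarm at the first crossing of the control limit . the sketch below is illustrative only ; the array layout , the handling of the learning period and the function names are our choices , not part of the paper .

```python
# illustrative monitoring loop for the global maximum norm detector.
# `stat` is assumed to be a callable returning the grid of statistic values
# available after the first k frames; s0 is the learning-period fraction and
# c_m the control limit obtained from calibration.
import numpy as np

def first_alarm(stat, n1, c_m, s0=0.2):
    start = int(np.ceil(s0 * n1))
    running_max = 0.0
    for k in range(1, n1 + 1):
        running_max = max(running_max, np.max(np.abs(stat(k))))
        if k >= start and running_max > c_m:
            return k          # alarm: change signalled at frame k
    return None               # no alarm within the monitored block
```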
[ cltdetalt ]let the condition in ( [ learningsample ] ) hold .then , under the assumptions of theorem [ detalt ] we obtain the asymptotic distribution of the local and global maximum norm detector by replacing by in corollary [ cltdet ] .we now consider random disturbances , i.e. we require that our data obey the model where now a.s . as . to be more precise we require that for , and being a random function that is independent of the random field .moreover , we assume that a.s .we require that meets the following assumption .* assumption 3 : * let be an a.s .nonzero random function defined on \times[0,\overline{\tau}_2]\times[0,\overline{\tau}_3]\times\omega ] , ] .the limit stochastic process is given by this section we want to demonstrate that we can easily extend some of the results of the previous section into several directions with only slight modifications .we give two examples that shall show the great flexibility and applicability of our results .we point out , among other things , that the detector statistic in ( [ detectionprocess ] ) can serve with small changes not only as a detector for changes in time , but also as a detector for the concrete location where a change takes place .moreover , we can extend the result in theorem [ fcltnull ] to allow for unknown reference signals , by appropriate centering of the observations .we begin with a generalization of the detector statistic in ( [ detectionprocess ] ) in order to be able to detect the position of a change .this can be achieved by adding a suitable weighting function for the different pixels of the image and leads to the sequential monitoring process \\ & \varphi\left(t - l_1\tau_1,r_2-l_2\tau_2,r_3-l_3\tau_3\right)w(l_2\tau_2,l_3\tau_3,r_2,r_3)\end{aligned}\ ] ] for , r_2\in[0,\overline{\tau}_2 ] , r_3\in[0,\overline{\tau}_3] ] , ] .the limit stochastic process is of the form where is the standard brownian motion on ^ 3 ] . in this caseone may center the spatial - temporal observations at appropriately defined averages of previous observations . hereone can either use the learning sample or include all observations available at the current time instant . for us define and consider we then get the following theorem .[ unknownrefsignal ] under the conditions of theorem [ fcltnull ] and if the reference signal fulfills ( [ timeconstant ] ) , we have under the null hypothesis as for , ] , and ] , ^ 2 ] .moreover , we assume that at the point in time a jump occurs over the whole image sequence which leads to an alternative signal of the form with ^ 3 ] if there is no change in the signal .if we have , however , a change - point at the detection process directly reacts and crosses the control limit a short while later , namely for corresponding to the point in time . applied to the signal with a change - point at .the change is detected at .,width=453 ] in the following simulation study we investigate the accuracy of the global maximum norm detector .moreover , we evaluate the influence of different sampling periods and different correlation structures of the noise process . we also want to find out the influence of the asymptotic variance and its estimator developed in on the proper selection of the control limit .we begin by analyzing the influence of different sampling periods in the spatial and time domain with respect to the rejection rates . for that , we calculate the corresponding control limit with the help of the monte carlo algorithm described above .thus , we evaluate the process on the grid with . 
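schematically , the calibration just mentioned , together with the quantile step described next , can be summarized as follows ; in this sketch the limit process is replaced by a crude brownian - sheet approximation built from i.i.d. standard normals with unit long - run variance , whereas the actual study evaluates the limit process on the stated grid with the estimated variance .

```python
# schematic monte carlo calibration of the control limit: approximate a
# brownian sheet on [0,1]^3 by normalized partial sums of i.i.d. standard
# normals, take the maximum per replicate, and return the 95% quantile.
import numpy as np

def control_limit(n_grid=30, n_rep=10000, level=0.95, seed=1):
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_rep)
    for r in range(n_rep):
        z = rng.standard_normal((n_grid, n_grid, n_grid))
        sheet = np.cumsum(np.cumsum(np.cumsum(z, 0), 1), 2) / n_grid ** 1.5
        maxima[r] = np.abs(sheet).max()
    return np.quantile(maxima, level)

# print(control_limit(n_grid=10, n_rep=200))   # small run for illustration
```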
after calculating the required maxima of we use the -quantile of 10000 simulation replicates to estimate . in the followingwe adapt the setting of the illustrative example . as the noise processis modelled by an i.i.d .-distributed random field , we obtain for the asymptotic variance . table [ taberror1kind1000 ] shows the simulated type one errors for various sampling periods , and for 1000 repetitions .we can see that the simulated rejection rates lie between 0.055 and 0.084 and thus that there is only a small influence of the sampling periods on the accuracy of the detector ..simulated rejection rates for various sampling periods and for 1000 repetitions .[ cols="^,>,>,>,>,>,>,>",options="header " , ]the functional and its variations that were introduced in the previous sections can be viewed as an element of the skorohod space ^q ] with .this function space is the generalisation of the well - known skorohod space ] contains all real - valued functions that are ` continuous from above , with limits from below ' . to be more precise ,let ^q ] , where is a ladder on ] for each , where denotes the component - by - component successor of , see above .define a norm on ] .now , for each ladder consider the so - called riemann - stieltjes sum .\end{aligned}\ ] ] analogously to the one - dimensional case we now define the riemann - stieltjes integral of with respect to as if the latter exists .the integral on the left is understood as a multivariate integral , namely as similar to the one - dimensional case this integral and all ` lower dimensional ' integrals exist if is a continuous function on the hyperrectangle ] .moreover , there exists a generalization of the integration by parts formula for multivariate riemann - stieltjes integrals , see , p. 287 .this allows us to define the multivariate riemann - stieltjes integral even with respect to functions that are not of bounded variation in the sense of vitali or in the sense of hardy and krause . in this casewe take the integration by parts formula as a definition for the integral , i.e. we put {\bm{a}_{1:q}}^{\bm{b}_{1:q}}+(-1)^q\int_{\bm{a}_{1:q}}^{\bm{b}_{1:q}}\!h(\bm{x})\,\mathrm{d}f(\bm{x})\notag\\ & + \sum_{v\subseteq\{1,\ldots , q\},\atop 1\leq |v|\leq q-1}(-1)^{|v|}\int_{\bm{a}_v}^{\bm{b}_v}\!\left[h(\bm{x})\,\mathrm{d}f(\bm{x})\right]_{\bm{a}_{-v}}^{\bm{b}_{-v}}\end{aligned}\ ] ] whenever the right - hand side exists .the square bracket notation is used for the evaluation , respectively , the increment of a multivariate antiderivative over a hyperrectangle \subset\mathbb r^q ] , we write {\bm{a}_{-v}}^{\bm{b}_{-v}} ] , ] and ] and let be a function of bounded variation in the sense of hardy and krause on ] with }v_{hk}\left(\psi(\bm{\cdot},\bm y)\right)<\infty.\ ] ] let be a sequence of functions such that ] .moreover , suppose that {\bm a_{-v}}^{\bm s_{-v}}\ ] ] exists for ] and for all and .then , }\sup_{\bm y\in[\widetilde{\bm a},\widetilde{\bm b}]}\left|\int_{\bm a}^{\bm s}\!\psi(\bm x,\bm y)\,\mathrm{d}f_n(\bm x)-\int_{\bm a}^{\bm s}\!\psi(\bm x,\bm y)\,\mathrm{d}f(\bm x)\right|=0.\ ] ] we consider the function space ^ 3 ] , ] on it , whenever the integral exists. 
here is defined as in ( [ defphi_w ] ) .let be a sequence of functions such that ^ 3 ] implies }\sup_{t\in[0,\overline{\tau}_1],\atop r_2\in[0,\overline{\tau}_2],r_3\in[0,\overline{\tau}_3]}\left|\int_0^s\int_0 ^ 1\int_0 ^ 1 \!\varphi_w(v_1,v_2,v_3,t , r_2,r_3)\,\mathrm{d}f_n(v_1,v_2,v_3)\right.\notag\\&\left.\hspace{3.5 cm } -\int_0^s\int_0 ^ 1\int_0 ^ 1\!\varphi_w(v_1,v_2,v_3,t , r_2,r_3)\,\mathrm{d}f(v_1,v_2,v_3)\right|\to 0\end{aligned}\ ] ] in \times[0,\overline{\tau}_1]\times[0,\overline{\tau}_2]\times[0,\overline{\tau}_3]) ] .on the other hand we have we now interpret this last integral as a multivariate riemann - stieltjes integral .let be a ladder on ] and let and be ladders on ] and consider now , without loss of generality , a ladder with for all and all , respectively , and write for points , i=1,2,3 ] .now , for an arbitrary subset of ^q ] as this leads to the following notion of discrepancy , cf . definition 2.1 . in .[ hkinequality ] if has bounded variation on ^q ] , we have ^q}\!f(\bm u)\,\mathrm{d}\bm u\right|\leqv_{hk}(f)d_n^{\star}\left(\bm x^{(1)},\ldots,\bm x^{(n)}\right).\ ] ] the local alternative is given by with for .our test statistic is therefore defined as equals under the null hypothesis which converges to the process of theorem [ fcltnull ] for . by assumption on the sampling periods we obtain for the second process we now fix and set if we can show that }\sup_{t\in[0,\overline{\tau}_1],\atop r_2\in[0,\overline{\tau}_2],r_3\in[0,\overline{\tau}_3 ] } \left| \frac{\overline{\tau}_1\overline{\tau}_2\overline{\tau}_3}{n^3 } \sum_{k_1=1}^{\lfloor ns\rfloor}\sum_{k_2=1}^{n}\sum_{k_3=1}^{n}\varphi_{\delta}\left(t , r_2,r_3,k_1\frac{\overline{\tau}_1}{n},k_2\frac{\overline{\tau}_2}{n},k_3\frac{\overline{\tau}_3}{n}\right)\right.\notag\\&\left .\hspace{4.1cm}- \int_0^{s\overline{\tau}_1}\int_0^{\overline{\tau}_2}\int_0^{\overline{\tau}_3 } \!\varphi_{\delta}\left(t , r_2,r_3,z_1,z_2,z_3\right)\,\mathrm{d}z_3\mathrm{d}z_2\mathrm{d}z_1\right|\end{aligned}\ ] ] tends to zero as the assertion follows , since uniform convergence always implies convergence in the skorohod topology .if is continuous we can proceed in an analogous way as in the proof of theorem 3 of .thus , it remains to treat the case that is of bounded variation in the sense of hardy and krause .our aim is to apply the hwlaka - koksma inequality of lemma [ hkinequality ] .as this inequality is formulated for integrals over the unit cube we first observe that put and for and .then we obtain thus we can reformulate ( [ tn2 ] ) as }\sup_{t\in[0,\overline{\tau}_1],\atop r_2\in[0,\overline{\tau}_2],r_3\in[0,\overline{\tau}_3 ] } \left| \frac{1}{sn^3 } \sum_{k_1=1}^{\lfloor ns\rfloor}\sum_{k_2=1}^{n}\sum_{k_3=1}^{n}s\overline{\tau}_1\overline{\tau}_2\overline{\tau}_3\varphi_{\delta}\left(t , r_2,r_3,s\overline{\tau}_1\widetilde{x}_{k_1},\overline{\tau}_2\widetilde{x}_{k_2},\overline{\tau}_3\widetilde{x}_{k_3}\right)\right.\\&\notag\left . \hspace{4cm}- \int_0^{1}\int_0^{1}\int_0^{1 } \!s\overline{\tau}_1\overline{\tau}_2\overline{\tau}_3\varphi_{\delta}\left(t , r_2,r_3,s\overline{\tau}_1z_1,\overline{\tau}_2z_2,\overline{\tau}_3z_3\right)\,\mathrm{d}z_3\mathrm{d}z_2\mathrm{d}z_1\right|.\end{aligned}\ ] ] an upper bound for this expression without the suprema is we first consider . as both and are of bounded variation in the sense of hardy and krause , is bounded .thus , for some constant we have which tends to zero as , uniformly for all ] . 
to estimate we apply the hwlaka - koksma inequality .if we put and where , we have ^ 3\right)d^{\star}_{n_s}\left(\widetilde{x}_{k_1},\widetilde{x}_{k_2},\widetilde{x}_{k_3}\right).\end{aligned}\ ] ] by proposition 11 in we can estimate the variation by ^ 3\right)\\ & \leq\overline{\tau}_1\overline{\tau}_2\overline{\tau}_3v_{hk}\left(\varphi_{\delta}\left(t , r_2,r_3,\cdot,\cdot,\cdot\right),[0,\overline{\tau}_1]\times[0,\overline{\tau}_2]\times[0,\overline{\tau}_3]\right).\end{aligned}\ ] ] since , by assumption , and , by a similar argument as in lemma [ phi_wbvhk ] , are of bounded variation in the sense of hardy and krause uniformly in , we obtain }\sup_{t\in[0,\overline{\tau}_1],\atop r_2\in[0,\overline{\tau}_2],r_3\in[0,\overline{\tau}_3]}v_{hk}\left(s\overline{\tau}_1\overline{\tau}_2\overline{\tau}_3\varphi_{\delta}\left(t , r_2,r_3,s\overline{\tau}_1\cdot,\overline{\tau}_2\cdot,\overline{\tau}_3\cdot\right),[0,1]^3\right)<\infty.\end{aligned}\ ] ] it remains to verify that the discrepancy is . as for arbitrary have points with , points with , and points with we obtain for appropiately chosen .this finally leads to uniformly for all $ ] .now , combining ( [ vhkphidelta ] ) and ( [ diskpart ] ) with ( [ hkdisk ] ) it follows that as , uniformly in .thus , assertion ( [ tn2 ] ) also follows for the case that is a function of bounded variation in the sense of hardy and krause which finally completes the proof .prause , a. and steland , a. ( 2015 ) . , in _stochastic models , statistics and their applications , _ steland , a. , szajowski , k. , and rafajlowicz , e. , heidelberg : springer proceedings in mathematics and statistics , 139147 .
* abstract : * we study detection methods for multivariable signals under dependent noise . the main focus is on three - dimensional signals , i.e. on signals in the space - time domain . examples of such signals are manifold ; they include geographic and climatic data as well as image data that are observed over a fixed time horizon . we assume that the signal is observed as a finite block of noisy samples , and we are interested in detecting changes from a given reference signal . our detector statistic is based on a sequential partial sum process , related to classical signal decomposition and reconstruction approaches applied to the sampled signal . we show that this detector process converges weakly under the no - change null hypothesis that the signal coincides with the reference signal , provided that the spatial - temporal partial sum process associated with the random field of the noise terms disturbing the sampled signal converges to a brownian motion . more generally , we also establish the limiting distribution under a wide class of local alternatives that allow for smooth as well as discontinuous changes . our results also cover extensions to the case in which the reference signal is unknown . we conclude with an extensive simulation study of the detection algorithm . + + * keywords : * change - point problems ; correlated noise random fields ; image processing ; multivariate brownian motion ; sampling theorems ; sequential detection .
when an object is imaged , variations of the refractive index in the medium , as well as optical alignment and manufacturing errors , distort the recorded image .this problem is typically solved using active or adaptive optics , where a deformable mirror , spatial light modulator ( slm ) or a comparable device corrects the propagating wavefront .typically , such systems are built with a separate optical arm to measure the distorted wavefront , because extracting the wavefront information from only focal - plane images is not trivial .however , focal - plane wavefront sensing is an active topic not only to simplify the optical design but also to eliminate the non - common path aberrations limiting the performance of high - contrast adaptive optics systems .the most popular method for the focal - plane wavefront sensing is perhaps the gerchberg - saxton ( gs ) error reduction algorithm and their variations , for instance .these are numerically very efficient algorithms , and it is easy to modify them for different applications .however , they suffer from accuracy , in particular because their iterative improvement procedure often stagnates at a local minimum .various alternatives have been proposed , and a popular approach is to use general numerical optimization techniques to minimize an error function ; examples include .however , when the number of optimization parameters is increased , the computational requirements generally rise unacceptably fast .the high computational costs are problematic for instance in astronomy ; the largest future adaptive optics system is envisioned to have a wavefront corrector of a size of .the numerical issues can be significantly reduced , if the unknown wavefront is sufficiently small .this is the case , for example , when calibrating the non - common path aberrations .previous works have exploited the small - phase approximations , but the implementations are generally not easily extended to the wavefront correction at extremely large resolution , such as over elements . in this paper , we present two algorithms capable of extremely fast control of a wavefront correcting device with 20 00030 000 degrees of freedom .the first algorithm , fast & furious ( ff ) , has been published before .it relies on small wf aberrations , pupil symmetries and phase - diversity to achieve very fast wf reconstruction .however , ff approximates the pupil amplitudes as an even function that not necessarily matches exactly the real situation . to improve the wf correction beyond the accuracy of ff , a natural way is to use approaches similar to the gs algorithm .however , the standard modifications of the algorithm are sensitive to the used phase diversities , in particular when the pupil amplitudes are not known , and they do not work with iterative wavefront correction as in ff .therefore , our second algorithm combines ff and gs in a way that can be used not only to correct the wavefront , but also to estimate the pupil amplitudes for which we make no assumptions .this comes at a cost in terms of noise sensitivity and instabilities as well as more demanding computational requirements . 
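since the gerchberg - saxton error reduction is referred to repeatedly , a textbook - style sketch of its basic iteration is given below ; this is the classical single - image version with known pupil amplitudes , not the ff - gs variant developed in this paper .

```python
# textbook gerchberg-saxton error reduction (illustrative only): estimate a
# pupil phase from the known pupil amplitude and a measured focal amplitude.
import numpy as np

def gerchberg_saxton(pupil_amplitude, focal_amplitude, n_iter=200):
    phase = np.zeros_like(pupil_amplitude)
    for _ in range(n_iter):
        pupil_field = pupil_amplitude * np.exp(1j * phase)
        focal_field = np.fft.fft2(pupil_field)
        # keep the focal-plane phase, impose the measured focal amplitude
        focal_field = focal_amplitude * np.exp(1j * np.angle(focal_field))
        pupil_field = np.fft.ifft2(focal_field)
        # keep the pupil phase, impose the known pupil amplitude
        phase = np.angle(pupil_field)
    return phase
```

in practice the focal - plane amplitude is taken as the square root of a measured psf ; stagnation in local minima , mentioned above , is the main weakness of this basic form .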
at first, we illustrate the motivation and principles of the ff algorithm in section [ sec : ff ] .then , section [ sec : ffgs ] describes the fast & furious gerchberg - saxton ( ff - gs ) algorithm in detail .section [ sec : hardware ] describes the used hardware , section [ sec : results ] shows simulation and experimental results , and section [ sec : conclusions ] draws the conclusions .the fast & furious algorithm is based on iteratively applying a weak - phase approximation of the wavefront .the main principle of the weak - phase solution is presented in , but we found slight modifications leading to significantly better performance .the algorithm uses focal - plane images and phase - diversity information to solve the wavefront , and the estimated wavefront is corrected with a wavefront correcting device .the correction step produces phase - diversity information and a new image that are again used to compute the following phase update .the schematic illustration of the algorithm is shown in fig .[ fg : algoscemaff ] .an important aspect of the algorithm is to maximize the use of the most recent psf denoted as image 1 in fig .[ fg : algoscemaff ] . in the weak - phase regime ,a single image is sufficient to estimate both the full odd wavefront component and the modulus of the even component of the focal - plane electric field .the phase - diversity is needed only for the sign determination since we assume the wavefront aberrations are small .this makes the ff substantially less prone to noise and stability issues as compared to approaches relying more on the phase diversity information such as the ff - gs .section [ sec : ffdet ] explains the details of the weak - phase solution , and section [ sec : ffpractical ] discusses the practical aspects when implementing the algorithm .a monochromatic psf can be described by fraunhofer diffraction and is given by the squared modulus of the fourier transform of the complex electric field in the pupil plane , where is the pupil amplitude describing transmission and is the wavefront in the pupil plane .the second order approximation of the psf , in terms of the wavefront expansion , can be written as the phase can be represented as a sum of even and odd functions , and eq .can then be written as we make the assumption that is even , and therefore all the terms here are either even or odd .therefore , the corresponding fourier transforms are then either purely real or imaginary with the same symmetries ; we list the corresponding terms in table [ tb : sym ] . .notations and symmetries [ cols="<,^,^,^,<,^,^ " , ] + residual wf rms errors ( rad ) at spatial frequencies falling within the used images . 
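as a quick numerical illustration of the imaging model at the beginning of this section ( the psf as the squared modulus of the fourier transform of the pupil field ) , the snippet below builds a psf for a circular pupil with a weak random aberration and checks how little is lost when the pupil field is truncated at second order in the phase ; the pupil shape , grid size and aberration level are arbitrary choices , and the check is a rough stand - in for , not a reproduction of , the intensity - level expansion used by the algorithm .

```python
# sketch of the imaging model: psf = |FT(a * exp(i*phi))|^2 for a circular
# pupil and a small random wavefront (phi in radians).
import numpy as np

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
a = (x ** 2 + y ** 2 <= (n // 4) ** 2).astype(float)   # even pupil amplitude

rng = np.random.default_rng(0)
phi = 0.05 * a * rng.standard_normal((n, n))            # weak aberration

psf_exact = np.abs(np.fft.fftshift(np.fft.fft2(a * np.exp(1j * phi)))) ** 2
psf_trunc = np.abs(np.fft.fftshift(np.fft.fft2(a * (1 + 1j * phi - phi ** 2 / 2)))) ** 2

rel_err = np.abs(psf_exact - psf_trunc).max() / psf_exact.max()
print(rel_err)   # small for weak aberrations
```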
in theory , both algorithms should reach zero wavefront error in the perfect case .however , in the case of ff , we still have to use numerical regularization to maintain stability , and this compromises the performance in the error - free case .this could be improved by optimizing the codes , but it is not done here ; the codes are optimized for the performance with all the error sources present .the most severe error source for the ff algorithm , as expected , is indeed the amplitude aberrations : instead of the ideal rms error of 0.03 rad , we are limited to an error of 0.11 rad .similar errors are also seen if the imaging model does not exactly match the actual hardware ; this was tested when simulating the wavefront and psf with double sampling ( case 2 in table [ tb : errbud ] ) ; the double sampling was also used in the misalignment simulation .the different error sources are coupled , so they do not add up quadratically . in the presence of all the error sources, we end up having a residual wf error of .12 rad . with the ff - gs algorithm, we can radically reduce the problems of the unknown pupil aberrations .the transmission we used in simulations , however , had significant fluctuations creating speckles similar to what the wavefront aberrations do .therefore , the wavefront reconstruction problem is difficult to make unambiguous , and we saw a small residual rms error of 0.02 rad .the ff - gs is limited by the combined effect of read - out noise ( 0.05 rad ) , the fact that the slm couples the transmission and phase change ( 0.04 rad ) and the tt instability ( 0.04 rad ) .all the error sources add up quadratically , which indicates that they are largely independent .when comparing the ff and ff - gs , we see that a significant improvement can be obtained with the ff - gs algorithm ; the residual wavefront rms error is reduced from 0.12 rad to 0.08 rad. however , the method is more sensitive to uncertainties and noise : the tip - tilt jitter in our hardware has no influence on the ff while being a major error source with the ff - gs algorithm .we have demonstrated the performance of two numerically efficient focal - plane wavefront sensing algorithms : the fast & furious and its extension fast & furious gerchberg - saxton .both algorithms do an excellent job in calibrating static aberrations in an adaptive or active optics system : we demonstrated an increase in the strehl ratio from .75 to 0.980.99 with our optical setup .although the ff - gs algorithm is more prone to noise , we observed a clear improvement . with our hardware a high - resolution spatial light modulator as the wavefront corrector we estimate the remaining residual wavefront rms error to be .15 rad with ff and .10 rad with ff - gs .the difference occurs mostly at spatial frequencies corresponding to the 20th and further airy rings .simulations with error sources comparable to our hardware show very similar results .this increases our confidence that the estimated performance indicators are reliable , and the simulated error budget also confirms the unknown amplitude aberrations as the main limitation of the ff algorithm in the considered framework . 
to our knowledge, this is the first time that such focal - plane sensing methods have been demonstrated with 000 degrees of freedom and in the case of ff - gs , with twice the number of free parameters to estimate the pupil amplitudes .the sampling at the detector was such that the controlled wavefront of pixels would have been enough to correct all spatial frequencies inside an image of pixels .however , as we recorded only an image of pixels , we had no direct observations of the higher controlled spatial frequencies .simulations indicate that this resulted in a small amount of light being scattered outside the recorded field , but this amount was too small to be easily detected in our optical setup .we put no particular effort into optimizing the codes ; all the software was implemented in matlab , and it was run on a standard windows pc .still , the required computation time was negligible compared to the time of s we needed to collect data for a single hdr image .we implemented the ff algorithm with two ffts per iteration step ( one fft transferring the phase - diversity information into the focal plane could likely be replaced by a convolution , as explained in ) .our ff - gs implementation used 8 ffts per iteration , and that could also potentially be optimized .as with all focal - plane wavefront sensing techniques , the algorithms work best if a monochromatic light source is available . with a chromatic light sourcehaving a sufficiently small bandwidth , perhaps % , the algorithms would still work , but only with a limited corrected field .with special chromatic optics ( such as in ) or an integral field unit , it is potentially possible to use the algorithms with even wider bandwidth .currently , we have only demonstrated a case where an unobstructed psf is detected , and the wavefront is driven to be flat . to make the algorithms more interesting for astronomical applications in the extreme adaptive optics or ultra - high contrast imaging, a few extensions would be necessary .first , we should consider how coronagraphs and diffraction suppression optics will affect the techniques . in practice , this would mean that the core of the psf would not be detected , and we would need to consider also the moduli in a part of the focal - plane field as free parameters .second , instead of flattening the wavefront , we should optimize the contrast at a certain part of the field .this would mean calculating a wavefront shape that , in the same way as in , minimizes the light in certain regions of the field at the cost of increasing it in other parts ; the updated algorithm should then drive the wavefront to this desired shape .a similar problem is faced , if phase plates are used to create diffraction suppression , for instance as in .also in such a case , it is necessary to drive the wavefront to a particular shape that is far from flat .another , potentially interesting application is a real - time application , for instance as a high - order , second - stage sensor in an adaptive optics system .the computational load is manageable , and a successful system would greatly simplify the hardware design compared to a conventional ao approach .however , issues such as the requirement for small aberrations , chromaticity , temporal lag in the phase diversity and the limited dynamic range of the camera and therefore photon noise are major challenges .j. j. green , d. c. redding , s. b. shaklan , and s. a. 
basinger , `` extreme wave front sensing accuracy for the eclipse coronagraphic space telescope , '' in `` high - contrast imaging for exo - planet detection , '' , vol .4860 of _ proc .spie _ , a. b. schultz , ed .( 2003 ) , vol .4860 of _ proc .spie _ , pp .266276 .r. s. burruss , e. serabyn , d. p. mawet , j. e. roberts , j. p. hickey , k. rykoski , s. bikkannavar , and j. r. crepp , `` demonstration of on sky contrast improvement using the modified gerchberg - saxton algorithm at the palomar observatory , '' ( 2010 ) , vol .7736 of _ proc .spie_. p. riaud , d. mawet , and a. magette , `` nijboer - zernike phase retrieval for high contrast imaging .principle , on - sky demonstration with naco , and perspectives in vector vortex coronagraphy , '' astron .astrophys.**545 * * , a150 ( 2012 ) .b. paul , l. m. mugnier , j .- f .sauvage , m. ferrari , and k. dohlen , `` high - order myopic coronagraphic phase diversity ( coffee ) for wave - front control in high - contrast imaging systems , '' optics express * 21 * , 31751 ( 2013 ) . c. vrinaud , m. kasper , j .- l .beuzit , r. g. gratton , d. mesa , e. aller - carpentier , e. fedrigo , l. abe , p. baudoz , a. boccaletti , m. bonavita , k. dohlen , n. hubin , f. kerber , v. korkiakoski , j. antichi , p. martinez , p. rabou , r. roelfsema , h. m. schmid , n. thatte , g. salter , m. tecza , l. venema , h. hanenburg , r. jager , n. yaitskova , o. preis , m. orecchia , and e. stadler , `` system study of epics : the exoplanets imager for the e - elt , '' in `` adaptive optics systems ii , '' , vol .7736 of _ proc .spie _ ( 2010 ) , vol .7736 of _ proc ., pp . 77361n77361n12 . c. s. smith , r. marinic , a. j. den dekker , m. verhaegen , v. korkiakoski , c. u. keller , and n. doelman , `` iterative linear focal - plane wavefront correction , '' journal of the optical society of america a * 30 * , 2002 ( 2013 ) .c. u. keller , v. korkiakoski , n. doelman , r. fraanje , r. andrei , and m. verhaegen , `` extremely fast focal - plane wavefront sensing for extreme adaptive optics , '' ( 2012 ) , vol .8447 of _ proc .spie _ , pp .844721844721 .v. korkiakoski , c. u. keller , n. doelman , r. fraanje , r. andrei , and m. verhaegen , `` experimental validation of optimization concepts for focal - plane image processing with adaptive optics , '' ( 2012 ) , vol .8447 of _ proc .spie _ , pp . 84475z84475z .l. c. roberts , jr ., m. d. perrin , f. marchis , a. sivaramakrishnan , r. b. makidon , j. c. christou , b. a. macintosh , l. a. poyneer , m. a. van dam , and m. troy , `` is that really your strehl ratio ? '' in `` advancements in adaptive optics , '' , vol .5490 of _ proc .spie _ ( 2004 ) , vol .5490 of _ proc ., pp . 504515 .
we present two complementary algorithms suitable for using focal - plane measurements to control a wavefront corrector with an extremely high spatial resolution . the algorithms use linear approximations to iteratively minimize the aberrations seen by the focal - plane camera . the first algorithm , fast & furious ( ff ) , uses a weak - aberration assumption and pupil symmetries to achieve fast wavefront reconstruction . the second algorithm , an extension of ff , can deal with an arbitrary pupil shape ; it uses a gerchberg - saxton style error reduction to determine the pupil amplitudes . simulations and experimental results are shown for a spatial light modulator controlling the wavefront with a resolution of pixels . the algorithms increase the strehl ratio from about 0.75 to 0.98 - 0.99 , and the intensity of the scattered light is reduced throughout the whole recorded image of pixels . the remaining wavefront rms error is estimated to be about 0.15 rad with ff and 0.10 rad with ff - gs .
how do metabolites distribute among species ?metabolite distributions across species or comprehensive species - metabolite relationships are important to understand design principles for metabolism in addition to metabolic networks because living organisms produce metabolic compounds of many types via their metabolisms , which adaptively shape - shift with changing environment in a long evolutionary history .especially , since living organisms have specific metabolite compositions due to metabolisms adaptively changing with respect to the environment , we can estimate environmental adaptation ( adaptive evolution ) using metabolite distributions . toward this end , we used flavonoids to investigate structures of metabolite distributions among plant species in the previous work .flavonoids are especially interesting examples when considering such metabolite distributions .plant species have secondary metabolites of many types including flavonoids , alkanoids , terpenoids , phenolics , and other compounds .these metabolites are not essential for preserving life unlike basic metabolites such as bases , amino acids , and sugars ; however , they play additional roles aiding survival in diverse environments .therefore , distributions of secondary metabolites are believed to be significantly different among species due to environmental adaptation , implying high species specificity .for this reason , secondary metabolites help us to understand adaptation and evolution .metabolite distributions are represented as bipartite networks ( or graphs ) in which nodes of two types correspond to plant species and flavonoids and links denote species - flavonoid relationships . in the previous work , we found heterogeneous connectivity ( degree distribution ) in the flavonoid distributions : the number of flavonoids in a plant species and the number of plant species sharing a flavonoid follow power - law - like distributions .moreover , a bipartite network model was proposed by considering simple evolution processes in order to explain a possible origin of the heterogeneous connectivity .we showed that the model is in good agreement with real data with analytical and numerical solutions .bipartite relationships such as the above metabolite distributions among species are observed in other fields .a good example is plant - animal mutualistic networks , which occupy an important place in theoretical ecology and are important to understand cooperation dynamics and biodiversity . in networks of this type, we can also observe the heterogeneous connectivity , or diversified patterns of interaction among species ( both plants and animals ) .in addition to this , non - random structural patterns such as nested structure and modular structure were found in the mutualistic networks .the nested structure means that animals ( pollinators or seed dispersers ) of a certain plant form a subset of those of another plant in a hierarchical fashion . such non - random patterns often strongly control dynamics of ecological systems .the modular structure represents that the subsets of species ( modules ) , in which species are strongly interconnected , are weakly connected .thus , this structural property helps to understand coevolution of two objects ( i.e. 
plants and animals ) .to reveal the origin of non - random patterns , proposed the bipartite cooperation ( bc ) model inspired by food - web models based on traits of species and external factors [ reviewed in ] .although it agrees well with real plant - animal mutualistic networks , the bc model is a non - growth model in which the number of species ( plants and pollinators ) is fixed ( i.e. this model is not an evolutionary model ) .the structure of model - generated networks is determined by intrinsic parameters of three types drawn from exponential or beta distributions : foraging traits ( e.g. efficiency and morphology ) , reward traits ( e.g. quantity and quality ) and external factors such as environmental context ( e.g. geographic and temporal variation ) . according to the above mechanism, the model network is generated with three observable parameters : the numbers of nodes of two types ( e.g. the number of plants and the number of animals ) and the number of interactions , and it is in good agreement with real mutualistic networks .furthermore , these structural properties are also observed in manufacturer - contractor interactions , and the bc model could reproduce them .therefore , it is believed that the bc model is a general model for bipartite relationships .taken together , several striking structural properties ( i.e. heterogeneous connectivity , nested structure , and modular structure ) are widely observed in bipartite networks , and there are two models to explain design principles for such bipartite networks : trait - based ( non - evolutionary ) model and evolutionary model . due to this ,we had the following questions : ( i ) do metabolite distributions additionally show nested and modular structures in analogy with ecological networks and organizational networks ?( ii ) can our model reproduce nested and modular structures in addition to heterogeneous connectivity ?in other words , are these structural properties acquired in evolutionary history ? suggest that the structure of mutualism between plants and animals is affected by not only traits of species and external factors but also evolution processes .thus , it is expected that our model ( i.e. evolutionary model ) also can reproduce such non - random patterns .( iii ) which is appropriate to our model ( evolution process ) and the bc model ( trait - based mechanism ) to describe the formation of metabolite distributions ? in this paper , we represent that metabolite distributions across species have nested structure and modular structure , and numerically investigate whether our model and the bc model can reproduce such non - random structures or not .furthermore , the prediction of network connectivity ( degree distribution ) is also evaluated between our model and the bc model . from these results, we show that formation mechanisms of metabolite distributions across plant species are governed by simple evolution processes rather than traits of metabolites and plant species and external factors .we utilized the data in in which a total of 14,378 species - flavonoid pairs were downloaded from metabolomics.jp ( http://metabolomics.jp/wiki/category:fl ) . 
in this dataset, there are 4725 species and 6846 identified flavonoids .the taxonomy ( family ) of a species was assigned according to the taxonomicon ( http://taxonomicon.taxonomy.nl ) .the six largest families in terms of the number of reported flavonoids are considered : fabaceae ( bean family ) , asteraceae ( composite family ) , lamiaceae ( japanese basil family ) , rutaceae ( citrus family ) , moraceae ( mulberry family ) , and rosaceae ( rose family ) .we extracted species - flavonoid pairs from the dataset based on these six families , and constructed the metabolite distribution of each family using bipartite networks .we here review our model proposed in . in this model , a small initial metabolite distributions ( fig .[ fig : model ] a ) are first prepared , and it evolves according to two simple evolutionary mechanisms as follows : \(i ) metabolite compositions of new species are inherited from those of existing ( ancestral ) species .we assume that new species emerge due to mutation of ancestral species . in our model, this event occurs with the probability at time , and new species are born from randomly selected existing species .flavonoid compositions of new species are inherited from that of ancestral species because new species are similar to the ancestral species due to mutation ( fig .[ fig : model ] b ) . by considering divergence , however , we model that each flavonoid is inherited from that of ancestral species with the probability ( fig . [fig : model ] c ) . independently of our model , in addition , a bipartite network model generated based on the above inheritance ( or copy ) mechanism was proposed in to describe evolution of protein domain networks around the same time .\(ii ) new flavonoids are generated by variation of existing flavonoids . in evolutionary history ,living organisms accordingly obtain new metabolic enzymes via gene duplications and horizontal gene transfers , and the metabolic enzymes synthesize new metabolites through modification of existing flavonoids with substituent groups and functional groups .we model that this event occurs with the probability at time and a species - flavonoid pair is selected at random ( fig .[ fig : model ] d ) , and its species obtains a new flavonoid ( fig .[ fig : model ] e ) .our model have two parameters and . we can generate the model network through the estimation of the parameters and using observable parameters of real metabolite distributions : the number of plant species , the number of metabolites ( flavonoid ) , and the number of interactions .the parameter is estimated as because and in our model . to obtain the parameter , we need to consider the time evolution of .this is derived as . since , as above, the parameter is estimated as using eq.s ( [ eq : p ] ) and ( [ eq : q ] ) , we estimated the parameters and from real data , and generated corresponding model networks for comparison with real ones .we first investigated the nestedness and the modularity of metabolite ( flavonoid ) distributions . to measure the degrees of nested structure and modular structure ( i.e. 
nestedness and modularity ) of metabolite distributions, we utilized the binmatnest program and the optimization algorithm proposed in , respectively .the nestedness ranges from perfect non - nestedness ( ) to perfect nestedness ( ) , and the high modularity means a strong modular structure .we also calculated and from randomized networks generated by the null model 2 in in order to show statistical significance of the structural properties .the statistically significance is suitably evaluated because the null model 2 generates randomized networks without bias of heterogeneous connectivity .figure [ fig : significant_nq ] shows the comparison of nestedness and modularity between real data and the null model for each family . as shown in this figure , and of real dataare significantly larger than that of the null model , indicating that metabolite distributions also show nested structure and modular structure in addition to heterogeneous connectivity as ecological networks and organizational networks .in addition , the nestedness and the modularity are different structural properties because of no correlation between them ( pearson correlation coefficient with -value ) .the above result means that metabolites in a plant species is a subset of that in other plant species , and plant species are divided into several clusters based on their metabolite compositions .( a ) and modularity ( b ) in metabolite distributions across plant species . the dark gray bars and the light gray bars correspond to real values and the null model , respectively . and obtained from the null model are averaged over 100 realizations .all -values for the difference are lower than 0.0001 .the -value is derived using the -score defined as , where corresponds to real values ( nestedness or modularity ) . and are the average of values from the null model and its standard error , respectively . ]next , the prediction of nestedness and modularity by our model and the bc model was mentioned .figure [ fig : nestedness ] shows the comparison of and between models and real data . for comparison, we also computed and calculated from the null model .( a ) and modularity ( b ) between models and real data .the dashed line represents the perfect agreement between predicted values ( or ) and observed ones .the nestedness and the modularity from models are averaged over 100 realizations . ]we evaluated the prediction accuracy of our model and the bc model using the pearson correlation coefficient ( cc ) and the root mean square error ( rmse ) between predicted values and observed values , defined as where is the number of samples ( i.e. the number of families ) .the cc and the rmse represent the degrees of agreement and error between observed values and predicted ones , respectively .our model showed the higher ccs and the lower rsmes ( see table [ table : compari_nest ] ) , indicating that our model has the higher prediction accuracy than the bc model ..prediction accuracy for nestedness and modularity : the correlation coefficient ( cc ) and the root mean square error ( rmse ) .the emphasized values correspond to the best accuracy . 
[ cols="<,^,^,^,^ " , ] we finally considered the frequency of the number of interactions per nodes ( degree distribution ) .figure [ fig : degree ] shows the degree distributions of metabolite distributions ( symbols ) and the models ( lines ) .the degree distributions of the only three metabolite distributions as representative examples due to the space limitation .we could observe degree distributions of two types [ and , where and denote the degrees of nodes corresponding to plant species and metabolites ( flavonoids ) , respectively ] because metabolite distributions are represented as bipartite graphs . in the both cases ,the degree distributions follow a power law with an exponential truncation , and model - generated degree distributions are in good agreement with real ones. however , the bc model seems to have bad predictions in the case of . to quantitatively verify goodness of fits between models and real data , we calculated the tail - weighted kolmogorov - smirnov ( wks ) statistics ( distance ) , defined as }},\ ] ] between empirical distributions and predicted distributions for species nodes ( wks ) and metabolite ( flavonoid ) nodes ( wks ) .figure [ fig : ks_distance ] a shows the comparison of wks distances between our model and the bc model . in the case of ( i.e. wks ) , the prediction accuracy ( wks distance ) is almost similar between our model and the bc model . in the case of , however , we can find the critical difference of the prediction between our model and the bc model .our model could more highly predict than the bc model .figure [ fig : ks_distance ] b shows the correlation between the network size ( i.e. ) and prediction accuracy , defined as wks . as shown in this figure , the prediction accuracy of our model tends to decrease with the network size ( with ) , suggesting better predictions of our model for degree distributions in the case of larger networks .however , there is no correlation between the prediction accuracy and network size in the case of the bc model ( with ) .in summary , metabolite distributions across plant species also show nested structure and modular structure in addition to heterogeneous connectivity in analogy with plant - animal mutualistic networks and organizational networks , suggesting that such structural properties are universal among bipartite networks in wide - ranging fields .moreover , we found that our model can also reproduce these structural properties in addition to the bc model , indicating an alternative way to obtain these structural properties .in other words , we showed that there are two different ways to obtain the structural properties : the trait - based way ( the bc model ) and the evolutionary way ( our model ) .either one of these two way ( i.e. the bc model or our model ) might become significant due to the types of bipartite relationship and observation condition . in particular , metabolite distributions might be different from ecological networks and organizational networks in perspective of design principles despite the same structural properties .as above , we showed that our model could better reproduce such structural properties of metabolite distributions than the bc model .this finding implies that these structural properties of metabolite distributions are acquired through evolution processes , considered in our model , rather than trait - based mechanisms ( i.e. 
the bc model ) , believed to be a general formation mechanism of bipartite networks .compared to ecological networks and organizational networks , metabolite distributions might be hardly influenced by traits of elements ( i.e. plant species and metabolites ) and external factors .this might because we can observe comprehensive species - metabolite relationships . in the case of ecological networks and organizational networks, such observations might be difficult .for example , we assume that a plant species can interact to pollinators a , b , c and d in plant - animal mutualistic networks . however , these interactions are limited because of several conditions such as geography and pollinators properties ( e.g. environmental fitness ) . supposing that the pollinator a only lives in area i , and the rest ( i.e. b d ) is in area ii due to such conditions , we can find the different mutualistic networks between areas i and ii . because of such restraints , ecological networks might be different from metabolite distributions . however , we speculate that our model can be also applied to ecological networks of this type if plant - animal relationships are comprehensively obtained under ideal conditions ( e.g. environmentally homogeneous islands ) .in fact , our model could reproduce the structure of plant - animal mutualistic networks in a limited way .when we consider the global tendency of bipartite relationships such as nested structure , modular structure , and heterogeneous connectivity , our model can explain its origin more simply than the bc model .this is an advantage of our model . in the case of the bc model ,the formation mechanisms are relatively complicated because we need to consider the interaction rule based on traits between elements and external factors .however , we believe that elements traits and external factors are important . especially such factors might play crucial roles for the formation of local interaction patterns . using our model ,the formation mechanisms of the structural properties in metabolite distributions are described as follows .the nested structure means that a plant s flavonoid composition is a subset of other plants flavonoid compositions , and its origin is explained using our model as follows . in our model ,metabolites of a new plant are inherited from those of an ancestral plant because these plants tend to be similar due to mutation .however , new plants obtain the part of metabolites by considering divergence ( elimination of interactions ) . as a result ,metabolites of an offspring plant become a subset of those of their parent plant , and produce nested structure .the modular structure implies that plant species are divided into several clusters in which they are strongly interconnected through common metabolites and these clusters interact loosely . in short , the modular structureis obtained by strong interconnections in clusters and weak interactions among these clusters .emergence of weak and strong interactions is also described by inherence and divergence of metabolite compositions .as above , metabolite compositions are inherited from ancestral species in our model . 
then, new species and ancestral species are connected because of common metabolites , and interactions of this type correspond to strong interconnections .due to divergence , on the other hand , new species indirectly connect to the other species via metabolites of ancestral species that were not inherited by new species , and this results weak interactions .regarding the origin of heterogeneous connectivity , we have already discussed in .easily speaking , the duplication mechanism and the randomly selection of species - flavonoid pairs result preferential attachments ( ` rich - gets - richer ' mechanisms ) because nodes with many neighbors tend to obtain more neighbors when considering such mechanisms .these are strongly related to the duplication - divergence model and the dorogovtsev - mendes - samukhin model , respectively . in the case of metabolite distributions , as above , we believe that nested structure , modular structure , and heterogeneous connectivity are dominantly acquired in evolutionary history . thus , these structural properties might provide novel classification schemes of plant species based on metabolite compositions such as chemotaxonomy . for example , we might be able to extract hierarchical organization of plant species based on their metabolite compositions from nested structure .moreover , modular structure might reveal classified characteristic species - metabolite relationships , and heterogeneous connectivity helps to find useful metabolites ( i.e. hub metabolites ) for taxonomic classification and characterization of plant species at higher levels ( e.g. family and order ) . as a result , these structural properties might provide insights into metabolite diversity and plant evolution . for simplicity , we did not consider a number of important evolution processes ( especially deletions of nodes and interactions ) at present . in particular , the degree distributions may become different due to such extinctions .however , such mechanisms might contribute only negligible effects on the above structural properties ( the grobal tendency ) according to our result .this might be because such mechanisms tend to be nonessential in plant evolution . in plant species ,genome doubling ( polyploidity ) is a major driving force for increasing genome size and the number of genes .duplicated genes typically diversify in their function , and some acquire the ability to synthesize new compounds .indeed , plants acquire metabolites of many types ( mostly secondary metabolites ) , compared to a few thousand primary metabolites in higher animals .the population of flavonoids , a type of secondary metabolites , is therefore expected to increase , indicating that we can roughly dismiss the effect of node losses when we consider the global tendency of metabolite distributions . however, this does not mean that the deletions of nodes and interactions are unnecessary .such evolutionary mechanisms might play important roles to determine partial ( or local ) interaction patterns of bipartite relationships .thus , we need to focus on such evolution processes in the future to fully understand the formation of metabolite distributions across species .this work was supported by a presto program of the japan science and technology agency .bastolla , u. , fortuna , m.a . , pascual - grca a. , ferrera , a. , luque , b. , bascompte , j. , 2009 .the architecture of mutualistic networks minimizes competition and increases biodiversity .nature 458 , 10181020 .shinbo , y. , nakamura , y. 
, altaf-ul-amin, m., asahi, h., kurokawa, k., arita, m., saito, k., ohta, d., shibata, d., kanaya, s., 2006. knapsack: a comprehensive species-metabolite relationship database. biotechnology in agriculture and forestry 57, 165-181.
living organisms produce metabolites of many types through their metabolism. the flavonoids of plant species, a class of secondary metabolites, are a particularly interesting example. since plant species are believed to carry specific flavonoids adapted to diverse environments, elucidating the design principles of metabolite distributions across plant species is important for understanding metabolite diversity and plant evolution. in previous work, we found heterogeneous connectivity in metabolite distributions and proposed a simple model that explains a possible origin of this heterogeneous connectivity. in this paper, we report further structural properties of the metabolite distributions among families, inspired by the analogy with plant-animal mutualistic networks: nested structure and modular structure. an earlier model holds that these structural properties of bipartite relationships are determined by traits of the elements and by external factors. by comparing our model with this earlier model, however, we find that the architecture of metabolite distributions is described by simple evolution processes without trait-based mechanisms. our model predicts nested structure and modular structure, in addition to heterogeneous connectivity, better than the earlier model, both qualitatively and quantitatively. this finding implies an alternative possible origin of these structural properties and suggests simpler formation mechanisms of metabolite distributions across plant species than expected. nestedness, modularity, heterogeneous connectivity, bipartite graph model, evolution
today s exponentially growing mobile data traffic is mainly due to video applications such as content - based video streaming .the skewness of the video traffic together with the ever - growing cheap on - board storage memory suggests that the quality of experience can be boosted by caching popular contents at ( or close to ) the end - users in wireless networks .a number of recent works have studied such concept under different models and assumptions ( see and references therein ) .most of existing works assume that caching is performed in two phases : _ placement phase _ to prefetch users caches under their memory constraints ( typically during off - peak hours ) prior to the actual demands ; _ delivery phase _ to transmit codewords such that each user , based on the received signal and the contents of its cache , is able to decode the requested file . in this work, we study the delivery phase based on a coded caching model where a server is connected to many users , each equipped with a cache of finite memory . by carefully choosing the sub - files to be distributed across users , codedcaching exploits opportunistic multicasting such that a common signal is simultaneously useful for all users even with distinct file requests .a number of extensions of coded caching have been developed ( see e.g. ( * ? ? ? *section viii ) ) .these include the decentralized content placement , online coded caching , non - uniform popularities , more general networks such as device - to - device ( d2d ) enabled network , hierarchical networks , heterogeneous networks , as well as the performance analysis in different regimes .further , very recent works have attempted to relax the unrealistic assumption of a perfect shared link by replacing it by wireless channels ( e.g. ) . if wireless channels are used only to multicast a common signal , naturally the performance of coded caching ( delivery phase ) is limited by the user in the worst condition of fading channels as observed in .this is due to the information theoretic limit , that is , the multicasting rate is determined by the worst user ( * ? ? ?* chapter 7.2 ) .however , if the underlying wireless channels enjoy some degrees of freedom to convey simultaneously both private messages and common messages , the delivery phase of coded caching can be further enhanced . in the context of multi - antenna broadcast channel and erasure broadcast channel ,the potential gain of coded caching in the presence of channel state feedback has been demonstrated .the key observation behind is that opportunistic multicasting can be performed based on either the receiver side information established during the placement phase or the channel state information acquired via feedback . in this work , we model the bottleneck link between the server with files and users equipped with a cache of a finite memory as an erasure broadcast channel ( ebc ) .the simple ebc captures the essential features of wireless channels such as random failure or disconnection of any server - user link that a packet transmission may experience during high - traffic hours , i.e. during the delivery phase . in this work, we consider a memoryless ebc in which erasure is independent across users with probabilities and each user can cache up to files .moreover , the server is assumed to acquire the channel states causally via feedback sent by the users . 
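as an aside, the memoryless erasure broadcast channel with causal state feedback described above can be sketched in a few lines; the function below is only an illustration of the channel model (per-user erasure probabilities, per-slot states fed back to the server) and is not part of the paper:

# illustration of the memoryless k-user erasure broadcast channel with state feedback:
# in each slot the transmitted packet is erased at user k independently with
# probability delta[k]; the server learns the realized state (the set of users that
# received the packet) causally, i.e. from the next slot on.
import random

def simulate_ebc(delta, n_slots, seed=0):
    rng = random.Random(seed)
    K = len(delta)
    feedback = []                       # S_t: set of users who received slot t
    for t in range(n_slots):
        received = {k for k in range(K) if rng.random() > delta[k]}
        feedback.append(received)       # available to the server from slot t+1 on
    return feedback

# example: 3 users with asymmetric erasure probabilities
states = simulate_ebc(delta=[0.1, 0.3, 0.5], n_slots=10)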
assuming that users fill the caches randomly and independently according to the decentralized content placement scheme as proposed in , we study the achievable rate region of the ebc with cache and state feedback . our main contribution is the characterization of the rate region in the cache - enabled ebc with state feedback for the case of the decentralized content placement ( theorem 1 ) .the converse proof builds on the genie - aided bounds exploiting two key lemmas , i.e. a generalized form of the entropy inequalities ( lemma 1 ) as well as the reduced entropy of messages in the presence of receiver side information ( lemma 2 ) . for the achievability, we present a multi - phase delivery scheme extending the algorithm proposed independently by wang and by gatzianas et al . to the case with receiver side information and prove that it achieves the optimal rate region for special cases of interest .we provide , as a byproduct of the achievability proof for the symmetric network , an alternative proof for the sum capacity of the ebc with state feedback and without cache .more specifically , we characterize the order- capacity defined as the maximum transmission rate of a message intended to users and express the sum capacity in a convenient manner along the line of .this allows us to characterize the rate region of the symmetric cache - enabled ebc with state feedback easily , since as such all we need is to incorporate the packets generated during the placement phase .however , such proof exploits the specific structure of the rate region of symmetric networks , and unfortunately can not be applied to a general network setting considered here .our current work provides a non - trivial extension of to such networks .furthermore , we show that our results can be extended in a straightforward manner to the centralized content placement as well as the multi - antenna broadcast channel with state feedback .finally , we provide some numerical examples to quantify the benefit of state feedback , the relative merit of the centralized caching to the decentralized counterpart , as well as the gain due to the optimization of memory sizes , as a function of other system parameters . the rest of the paper is organized as follows . in section [ section :mainresult ] , we describe the system model together with some definitions and then summarize the main results . section [ section : upperbound ] gives the converse proof of the achievable rate region of the cache - enabled ebc with state feedback .after a high - level description of the well - known algorithm by wang and gatzianas et al . in section [ section :revisiting ] , section [ section : achievability2 ] presents our proposed delivery scheme and provides the achievability proof for some special cases of interest .section [ section : extensions ] provides the extensions of the previous results and section [ section : examples ] shows some numerical examples . throughout the paper ,we use the following notational conventions .the superscript notation represents a sequence of variables . 
is used to denote the set of variables .the entropy of is denoted by .we let =\{1,\dots , k\} ] , where is the average size of the files .under such a setting , consider a discrete time communication system where a packet is sent in each slot over the -user ebc .the channel input belongs to the input alphabet of size bits .the erasure is assumed to be memoryless and independently distributed across users so that in a given slot we have where denotes the channel output of receiver , stands for an erased output , denotes the erasure probability of user .we let denote the state of the channel in slot and indicate the set of users who received correctly the packet .we assume that all the receivers know instantaneously , and that through feedback the transmitter only knows the past states during slot .the caching network is operated in two phases : the placement phase and the delivery phase . in the content placement phase, the server fills the caches of all users , , up to the memory constraint . as in most works in the literature, we assume that the placement phase incurs no error and no cost , since it takes place usually during off - peak traffic hours . once each user makes a request , the server sends the codewords so that each user can decode its requested file as a function of its cache contents and received signals during the delivery phase .we provide a more formal definition below .a caching scheme consists of the following components .* message files independently and uniformly distributed over with for all . * caching functions defined by that map the files into user s cache contents .\end{aligned}\ ] ] * a sequence of encoding functions which transmit at slot a symbol , based on the requested files and the state feedback up to slot for , where denotes the message file requested by user for .* decoding functions defined by , ] , it is reduced to which coincides with the one - sided fairness originally defined in . focusing on the case of most interest with and distinct demands , we present the following main results of this work . [theorem : region ] for , or for the symmetric network with , or for the one - sided fair rate vector with , the achievable rate region of the cached - enabled ebc with the state feedback under the decentralized content placement is given by for any permutation of .the above region has a polyhedron structure determined by inequalities in general .it should be remarked that theorem [ theorem : region ] covers some existing results . for the symmetric network, the above region simplifies to for the case without cache memory , i.e. for all , theorem [ theorem : region ] boils down to the capacity region of the ebc with state feedback given by which is achievable for or the symmetric network or the one - sided fair rate vector where implies for any . comparing and , we immediately see that the presence of cache memories decreases the weights in the weighted rate sum and thus enlarges the rate region . in order to gain some further insight , fig .[ fig : capacity ] illustrates a toy example of two users with and . according to theorem [ theorem : region ] ,the rate region is given by which is characterized by three vertices , and .the vertex , achieving the sum rate of , corresponds to the case when the requested files satisfy the ratio . 
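as a small illustrative aside before the comparison with the cache-free region, the decentralized placement behind these rates can be sketched as follows; here the caching probability of user k is taken, as an assumption for this illustration, to be the fraction of each file stored in its cache, so that a given bit ends up cached exactly by the user subset j with the product-form probability used later in the analysis:

# sketch of decentralized placement: user k stores each bit of every file
# independently with probability p[k]; a bit is then cached exactly by the user
# subset J with probability prod_{j in J} p[j] * prod_{k not in J} (1 - p[k]).
import random

def decentralized_placement(n_bits, p, seed=0):
    rng = random.Random(seed)
    K = len(p)
    subfile_fraction = {}
    for _ in range(n_bits):
        J = frozenset(k for k in range(K) if rng.random() < p[k])
        subfile_fraction[J] = subfile_fraction.get(J, 0.0) + 1.0 / n_bits
    return subfile_fraction             # empirical fraction of each sub-file known by exactly J

fractions = decentralized_placement(n_bits=100000, p=[0.5, 0.25])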
on the other hand ,the region of the ebc without cache is given by which is characterized by three vertices , , .the sum capacity of is achievable for the ratio .the gain due to the cache is highlighted even in this toy example .theorem [ theorem : region ] yields the following corollary .[ cor : rate ] for , or for the symmetric network with , or for the one - sided fair rate vector with , the transmission length to deliver requested files to useres in the cached - enabled ebc under the decentralized content placement is given by as .the corollary covers some existing results in the literature . for the symmetric network with files of equal size ( ), the transmission length simplifies to as . for the case with files of equal size and without erasure , the transmission length in corollary [ cor : rate ] normalized by coincides with the `` rate - memory tradeoff '' here .] under the decentralized content placement for asymmetric memory sizes given by , \ ] ] where the maximum over all permutations is chosen to be identity by assuming .if additionally we restrict ourselves to the case with caches of equal size , we recover the rate - memory tradeoff given in in fact , the above expression readily follows by applying the geometric series to the rhs of .in this section , we prove the converse of theorem [ theorem : region ] .first we provide two useful lemmas .the first one is a generalized form of the entropy inequality , while the second one is a simple relation of the message entropy in the presence of receiver side information .although the former has been proved in , we restate it for the sake of completeness .* lemma 5)[lemma : erasure - ineq ] for the erasure broadcast channel , if is such that , , for any sets such that .we have , for , {h(y^n_{\mathcal{i } } { \,\vert\,}u , s^n)}\\ & = \sum_{l=1}^n h(y_{\mathcal{i } , l } { \,\vert\,}y^{l-1}_{\mathcal{i } } , u , s^n ) \\ & = \sum_{l=1}^n h(y_{\mathcal{i},l } { \,\vert\,}y^{l-1}_{\mathcal{i } } , u , s^{l-1 } , s_l ) \\ & = \sum_{l=1}^n \mathrm{pr}\{s_l\cap\mathcal{i}\ne\emptyset\ } \ , h(x_l { \,\vert\,}y^{l-1}_{\mathcal{i } } , u , s^{l-1 } , s_l\cap\mathcal{i}\ne\emptyset ) \\ & = \sum_{l=1}^n \bigl(1-\prod_{i\in{{\cal i}}}\delta_i\bigr ) h(x_l { \,\vert\,}y^{l-1}_{\mathcal{i } } , u , s^{l-1 } ) \\ & \le \bigl(1-\prod_{i\in{{\cal i}}}\delta_i\bigr ) \sum_{l=1}^n h(x_l { \,\vert\,}y^{l-1}_{\mathcal{j } } , u , s^{l-1 } ) \label{eq : tmp821 } \end{aligned}\ ] ] where the first equality is from the chain rule ; the second equality is because the current input does not depend on future states conditioned on the past outputs / states and ; the third one holds since is deterministic and has entropy when all outputs in are erased ( ) ; the fourth equality is from the independence between and ; and we get the last inequality by removing the terms in the condition of the entropy . 
following the same steps , we have {h(y^n_{\mathcal{j } } { \,\vert\,}u , s^n ) = \bigl(1-\prod_{i\in{{\cal j}}}\delta_i\bigr ) \sum_{l=1}^n h(x_l { \,\vert\,}y^{l-1}_{\mathcal{j } } , u , s^{l-1 } ) } , \end{aligned}\ ] ] from which and , we obtain .[ lemma : decentralized ] under the decentralized content placement , the following inequality hold for any and ] .it is readily shown that .this implies that \setminus{{\cal k } } } \prod_{j\in{{\cal j}}}p_j\prod_{k\in[k]\setminus{{\cal j}}}(1-p_k)h ( w_{i } ) \label{eq : tmp893 } \\ & = \prod_{k\in{{\cal k}}}(1-p_k)\sum_{{{\cal j}}:{{\cal j}}\subseteq [ k]\setminus{{\cal k } } } \prod_{j\in{{\cal j}}}p_j\prod_{k\in[k]\setminus{{\cal k}}\setminus{{\cal j}}}(1-p_k)h ( w_{i } ) \\ & = \prod_{k\in{{\cal k}}}\left(1-p_k\right)h(w_{i } ) \end{aligned}\ ] ] where the last inequality is obtained from the basic property that we have for a subset \setminus{{\cal k}} ] . here is a high - level description of the broadcasting algorithm : 1 .broadcasting phase ( phase ) : send each message of packets sequentially for .this phase generates overheard symbols to be transmitted via linear combination in multicasting phase , where \setminus k ] of the cardinality .the achievability result of theorem [ theorem : ebc ] implies the following corollary .[ lemma : ggt ] for , or for the symmetric channel with , or for the one - sided fair rate vector with , the total transmission length to convey to users , respectively , is given by the proof is omitted because the proof in section [ subsection : proof1 ] covers the case without user memories ..notations for the erasure broadcast channel . [ cols="^,^ " , ] for , we have by counting the total number of order- packets and the transmission length from phase to phase , the sum rate of order- messages achieved by the algorithm is given by it remains to prove that coincides with the rhs expression of .we notice that the transmission length from phase to can be expressed in the following different way , i.e. where we let by following similar steps as ( * ? ? ?* appendix c ) , we obtain the recursive equations given by for .since we have and using the equality and the binomial theorem , it readily follows that we have by plugging the last expression into using , we have which coincides the rhs of for .this establishes the achievability proof . as a corollary of theorem [ theorem : order ] , we provide an alternative expression for the sum capacity .[ cor : sheng ] the sum capacity of the -user symmetric broadcast erasure channel with state feedback can be expressed as a function of by where is the duration of phase 1 , corresponds to the total number of order- packets generated in phase 1 . by letting denote the rhs of , we wish to prove the equality by proving .if it is true , from the achievability proof of theorem [ theorem : order ] that proves for all , the proof is complete . in the rhs of , we replace by the expression in by letting for .then , we have comparing the desired equality with the above expression and noticing that , we immediately see that it remains to prove the following equality . prove this relation recursively . for ,the above equality follows from and . now suppose that holds for and we prove it for . 
from wehave \label{eq : e1}\\ & = \frac{1}{1-\delta^{k - j+1}}\left[n_{1\rightarrow j } + \sum_{l=2}^{j-1}{j-1 \choose l-1 } \sum_{i=2}^{l}t^{i}_{l } \delta^{k - j+1}(1-\delta)^{j - l } \right]\label{eq : e2}\\ & = \frac{1}{1-\delta^{k - j+1}}\left[n_{1\rightarrow j } + \sum_{l=2}^{j-1}{j-1 \choose l-1 } \sum_{i=2}^{l } n^{i}_{l\rightarrow j}\right]\label{eq : e3}\\ & = \frac{1}{1-\delta^{k - j+1}}\left[n_{1\rightarrow j } + \sum_{i=2}^{j-1 } \sum_{l = i}^{j-1 } { j-1 \choose l-1}n^{i}_{l\rightarrow j}\right]\label{eq : e4}\\ & = t_j^j + \sum_{i=2}^{j-1}t^{i}_j , \ ] ] where follows from ; follows from our hypothesis ; follows from ; is due to the equality ; the last equality is due to .therefore , the desired equality holds also for .this completes the proof of corollary [ cor : sheng ] .we provide the achievability proof of theorem [ theorem : region ] for the case of one - sided fair rate vector as well as the symmetric network .the proof for the case of is omitted , since it is a straightforward extension of ( * ? ? ? * section v ) .we describe the proposed delivery scheme for the case of assuming that user requests file of size packets for without loss of generality .compared to the algorithm revisited previously , our scheme must convey packets created during the placement phase as well as all previous phases in each phase .here is a high - level description of our proposed delivery scheme .1 . placement phase ( phase 0 ) : fill the caches according to the decentralized content placement ( see subsection [ subsection : decentralized ] ) .this phase creates `` overheard '' packets for ] , the proposed delivery scheme achieves the optimal rate region only in two special cases .we provide the proof separately in upcoming subsections .we assume without loss of generality , , and . under this setting, we wish to prove the achievability of the following equality . by replacing and further assuming for all without loss of generality , the above equality is equivalent to the rest of the subsection is dedicated to the proof of the total transmission length .we start by rewriting in by incorporating the packets generated during the placement phase .namely we have for ] is given by \setminus{{\cal j}}\cup\{k\}}(1-p_j)}{1-\prod_{j\in[k]\setminus{{\cal j}}\cup\{k\}}\delta_j}f_k.\end{aligned}\ ] ] we have an alternative expression for which is useful as will be seen shortly .the length of sub - phase needed by user such that ] , where the symmetric rate is given by this means that when only users are active in the system , each of these users achieves the same symmetric rate as the reduced system of dimension .then , it suffices to prove the achievability of the symmetric rate for a given dimension .as explained in subsection [ subsection : delivery ] , the placement phase generates `` overheard packets '' for ] with cardinality .namely , the size of any sub - file of file is given by which satisfies the memory constraint for user in analogy to lemma [ lemma : decentralized ] for the decentralized content placement , we can characterize the message entropy given the receiver side information .[ lemma : centralized ] for the centralized content placement , the following equalities hold for any and ] with . in subsequent phases to , we repeat for ] . using weobtain \setminus{{\cal j}}\cup\{k\}}. 
\end{aligned}\ ] ] we first need to prove the following lemma \setminus{{\cal i}}\cup{{\cal h}}}&=\sum_{{{\cal i}}:{{\cal i}}\subseteq{{\cal j}}}\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal i}}}(-1)^{|{{\cal h}}|}w_{[k]\setminus({{\cal i}}\setminus{{\cal h}})}\\ & = \sum_{{{\cal i}}:{{\cal i}}\subseteq{{\cal j}}}\sum_{{{\cal h}}':{{\cal h}}'\subseteq{{\cal i}}}(-1)^{|{{\cal i}}\setminus{{\cal h}}'|}w_{[k]\setminus{{\cal h}}'}\label{eq : var1}\\ & = \sum_{{{\cal h}}':{{\cal h}}'\subseteq{{\cal j}}}\sum_{{{\cal i}}:{{\cal h}}'\subseteq{{\cal i}}\subseteq{{\cal j}}}(-1)^{|{{\cal i}}\setminus{{\cal h}}'|}w_{[k]\setminus{{\cal h}}'}\\ & = \sum_{{{\cal h}}':{{\cal h}}'\subseteq{{\cal j}}}w_{[k]\setminus{{\cal h}}'}\sum_{{{\cal i}}:{{\cal h}}'\subseteq{{\cal i}}\subseteq{{\cal j}}}(-1)^{|{{\cal i}}\setminus{{\cal h}}'|}\\ & = \sum_{{{\cal h}}':{{\cal h}}'\subseteq{{\cal j}}}w_{[k]\setminus{{\cal h}}'}\sum_{{{\cal i}}':{{\cal i}}'\subseteq{{\cal j}}\setminus{{\cal h}}'}(-1)^{|{{\cal i}}'|}\label{eq : var2}\\ & = w_{[k]\setminus{{\cal j}}}+\sum_{{{\cal h}}':{{\cal h}}'\subset{{\cal j}}}w_{[k]\setminus{{\cal h}}'}\sum_{{{\cal i}}':{{\cal i}}'\subseteq{{\cal j}}\setminus{{\cal h}}'}(-1)^{|{{\cal i}}'|}\\ & = w_{[k]\setminus{{\cal j}}}.\end{aligned}\ ] ] we prove by induction on .for we have and \setminus{{\cal j}}\cup\{i\}\cup{{\cal h}}}=w_{[k]\setminus{{\cal j}}\cup\{i\}} ] such that and we prove in the following that it holds for too .we have \setminus{{\cal j}}\cup\{i\}}\\ & = g_{{{\cal j}}}^{\{i\}}+\sum_{{{\cal i}}:i\in { { \cal i}}\subset { { \cal j}}}g^{\{i\}}_{{{\cal i}}}.\end{aligned}\ ] ] thus , we obtain \setminus{{\cal j}}\cup\{i\}}-\sum_{{{\cal i}}:i\in { { \cal i}}\subset { { \cal j}}}g^{\{i\}}_{{{\cal i}}}}\\ & = w_{[k]\setminus{{\cal j}}\cup\{i\}}-\sum_{{{\cal i}}:i\in { { \cal i}}\subset { { \cal j}}}\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal i}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal i}}\cup\{i\}\cup{{\cal h}}}\\ & = w_{[k]\setminus{{\cal j}}\cup\{i\}}-\sum_{{{\cal i}}:i\in { { \cal i}}\subseteq { { \cal j}}}\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal i}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal i}}\cup\{i\}\cup{{\cal h}}}+\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal j}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal j}}\cup\{i\}\cup{{\cal h}}}\\ & = w_{[k]\setminus{{\cal j}}\cup\{i\}}-\sum_{{{\cal i}}:{{\cal i}}\subseteq { { \cal j}}\setminus\{i\}}\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal i}}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal i}}\cup{{\cal h}}}+\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal j}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal j}}\cup\{i\}\cup{{\cal h}}}\\ & = w_{[k]\setminus{{\cal j}}\cup\{i\}}-w_{[k]\setminus({{\cal j}}\setminus\{i\})}+\sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal j}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal j}}\cup\{i\}\cup{{\cal h}}}\label{eq : eq1}\\ & = \sum_{{{\cal h}}:{{\cal h}}\subseteq{{\cal j}}\setminus\{i\}}(-1)^{|{{\cal h}}|}w_{[k]\setminus{{\cal j}}\cup\{i\}\cup{{\cal h}}},\end{aligned}\ ] ] in this section , we prove that the worst user under the one - sided fair rate vector is determined by , namely .\end{aligned}\ ] ] we set for any subset ] and . since , then it holds \setminus{{\cal j}}\cup\{m\}}r_m\geq p_{m}\bar{p}_{[k]\setminus{{\cal j}}\cup\{i\}}r_i ] and , thus we obtain for .suppose that holds for any ] . since , it holds + \setminus{{\cal j}}\cup\{m\}}r_m\geq p_{{{\cal j}}\setminus\{i\}}\bar{p}_{[k]\setminus{{\cal j}}\cup\{i\}}r_i ] . 
as a resultwe obtain \setminus{{\cal j}}\cup\{m\ } } \geq r_i\sum_{{{\cal i}}\subset{{\cal j}}\setminus\{m , i\}}g_{{{\cal i}}\cup\{i\}}^{\{i\}}\bar{\delta}_{{{\cal j}}\setminus{{\cal i}}\setminus\{i\}}\delta_{[k]\setminus{{\cal j}}\cup\{i\}}. \end{aligned}\ ] ] hence the proof is completed .suppose that there exists such that and that holds for some ] , it holds .it suffices to show that equivalent to equivalent to where . by replacing the weight by its expression we obtain \\ & = \bar{p}_{{{\cal i}}k}\left [ \frac{(1-\delta_{{{\cal i}}kk'})-(1-\delta_{{{\cal i}}k})}{(1-\delta_{{{\cal i}}k})(1-\delta_{{{\cal i}}kk'})}+\frac{p_{k'}}{1-\delta_{{{\cal i}}kk'}}\right ] \\ & = \frac{\bar{p}_{{{\cal i}}k}}{1-\delta_{{{\cal i}}kk'}}\left [ \frac{\delta_{{{\cal i}}k}(1-\delta_{k'})}{(1-\delta_{{{\cal i}}k})}+p_{k'}\right ] , \end{aligned}\ ] ] and similarly thus , is equivalent to since then , so it is sufficient to prove that }_{a } + \underbrace{\left ( \bar{p}_{k}p_{k'}r_{k}-\bar{p}_{k'}p_{k}r_{k'}\right)}_{b}&\geq0.\end{aligned}\ ] ] this is satisfied if and . the condition b holds thanks to the definition of one - sided fair rate vector , and it is equivalent to * case + in this case we have , or .condition a reduces to : * case + in this case we have or . then we have this means that b implies a so that the desired inequality holds once b holds . since a is inactive , we can then consider a looser bounds which holds by the definition of one - sided fair rate vector .n. golrezaei , k. shanmugam , a. g. dimakis , a. f. molisch , g. caire , `` femtocaching : wireless video content delivery through distributed caching helpers '' , _ ieee trans .inf . theory _12 , pp . 84028413 , 2013 .m. maddah - ali and u. niesen , `` decentralized coded caching attains order - optimal memory - rate tradeoff '' , _ ieee / acm trans . on networking _ , vol .4 , pp . 10291040 , 2015 . m. maddah - ali and u. niesen , `` coded caching with nonuniform demands'',in _ proceedings of the ieee conference on computer communications workshops ( infocom ) , toronto , canada , 2014 _ , http://arxiv.org/abs/1308.0178v3 , 2015 . j. hachem , n. karamchandani , and s. diggavi , `` effect of number of users in multi - level coded caching '' , in _ proceedings of the ieee international symposium on information theory ( isit2015 ) _ , hong - kong , china , 2015 .j. zhang , x. lin , c. c. wang , and x. wang , `` coded caching for files with distinct file sizes '' , in _ proceedings of the ieee international symposium on information theory ( isit2015 ) _ , hong - kong , china , 2015 .s. yang and m. kobayashi , `` secrecy communications in -user multi - antenna broadcast channel with state feedback '' , in _ proceedings of the ieee international symposium on information theory ( isit2015 ) _ , hong - kong , china , 2015 .p. piantanida , m. kobayashi , and g. caire , `` analog index coding over block - fading miso broadcast channels with feedback '' , in _ proceedings of the ieee information theory workshop ( itw ) , 2013 _ , seville , spain , 2013 .a. ghorbel , m. kobayashi , and s. yang , `` cache - enabled broadcast packet erasure channels with state feedback '' , in _ proceedings of the 53rd annual allerton conference on communication , control , and computing ( allerton ) _, il , usa , 2015 .s. karthikeyan , m. ji , a. tulino , j. llorca and a. 
dimakis, `` finite length analysis of caching - aided coded multicasting '' , in _ proceedings of the 52nd annual allerton conference on communication , control , and computing ( allerton ) _ , il , usa , 2014 . j. zhang , f. engelmann , and p. elia , `` coded caching for reducing csit - feedback in wireless communications '' , in _ proceedings of the 53rd annual allerton conference on communication , control , and computing ( allerton ) _ , il , usa , 2015 .
we study a content delivery problem in a k-user erasure broadcast channel, in which a content-providing server wishes to deliver requested files to users, each equipped with a cache of finite memory. assuming that the transmitter has state feedback and that the user caches can be filled reliably during off-peak hours by decentralized content placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities. the proposed delivery scheme, based on the broadcasting scheme of wang and of gatzianas et al., exploits the receiver side information established during the placement phase. our results extend to centralized content placement as well as to multi-antenna broadcast channels with state feedback.
the focus of our research is the accuracy of state estimation in the so - called _ continuous - discrete stochastic state - space _ systems .this means that their process models are presented in the form of the following it-type _ stochastic differential equation _ ( sde ) : where , is a nonlinear sufficiently smooth drift function , is a time - invariant matrix of dimension and is a brownian motion with a fixed square diffusion matrix of size .the initial state of sde ( [ eq1.1 ] ) is supposed to be a random variable , i.e. with , where the notation stands for the normal distribution with mean and covariance .we point out that only _ additive - noise _ sde models are studied in this paper . at the same time , the utilized measurement models are discrete - time and given by the formula where stands for a discrete time index ( i.e. means ) , is a sufficiently smooth function and the measurement noise is with .we remark that the _ sampling period _ ( or _ waiting time _ ) , i.e. when the additional information comes to consideration , is assumed to be constant , below .in addition , all realizations of the noises , and the initial state are assumed to be taken from mutually independent gaussian distributions .the continuous - discrete stochastic system ( [ eq1.1 ] ) , ( [ eq1.2 ] ) is a usual state estimation problem arisen in many areas of study as diverse as target tracking , navigation , stochastic control , chemistry and finance .strong arguments for using the discussed mathematical model in practical estimation tasks are outlined in .the key issue of our research is to identify changes in performances of filters , which are core state estimation tools in practice , when they are applied for treating stiff sde models of the form ( [ eq1.1 ] ) .our first state estimator is the traditional ekf designed long ago and presented , for example , in . despite its simplicity and the old - fashioned nature it has been a successful filtering means in the realm of nonlinear stochastic systems for decades .nevertheless , the first - order approximation provided by the ekf has been criticized in many studies , which have resulted in the development of more accurate ukf and ckf methods .we point out that a lot of evidence confirming the superiority of the latter filters towards the ekf in estimating various continuous- and discrete - time stochastic state - space systems have been presented in and in other literature .however , all published proofs in the cited papers relate to estimation of nonstiff stochastic systems and , hence , the success of the ukf and ckf for treating stiff ones is questionable .below , we address this issue on two stochastic models whose dynamic behavior may be both nonstiff and stiff . 
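to fix ideas, the continuous-discrete setting of eqs. ( [ eq1.1 ] ) and ( [ eq1.2 ] ) can be sketched as follows; the drift, observation function and dimensions below are placeholders, and the euler-maruyama integration on a fine internal grid is used here only to generate synthetic data, not as part of any filter:

# generic sketch of the continuous-discrete model: the additive-noise sde is advanced
# with the euler-maruyama scheme on a fine internal grid, and a discrete noisy
# measurement is produced once per sampling period delta.
# f (drift), h (observation), G, Q, R and x0 are user-supplied placeholders.
import numpy as np

def simulate_cd_model(f, h, G, Q, R, x0, delta, n_samples, substeps=100, seed=1):
    rng = np.random.default_rng(seed)
    tau = delta / substeps                          # internal integration step size
    x = np.array(x0, dtype=float)
    xs, zs = [], []
    sqrtQ = np.linalg.cholesky(Q)                   # Q assumed positive definite here
    sqrtR = np.linalg.cholesky(R)
    for _ in range(n_samples):
        for _ in range(substeps):                   # continuous-time process dynamics
            dw = np.sqrt(tau) * rng.standard_normal(Q.shape[0])
            x = x + f(x) * tau + G @ (sqrtQ @ dw)
        z = h(x) + sqrtR @ rng.standard_normal(R.shape[0])   # discrete measurement
        xs.append(x.copy())
        zs.append(z)
    return np.array(xs), np.array(zs)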
to estimate these nonstiff and stiff examples of the form ( [ eq1.1 ] ) , ( [ eq1.2 ] ) , we utilize filters designed in the frame of the so - called _ discrete - discrete _ approach , also known as_ linearized discretization _ in .nevertheless , all methods used below are abbreviated to : _ continuous - discrete extended kalman filter _ ( cd - ekf ) , _ continuous - discrete unscented kalman filter _ ( cd - ukf ) and _ continuous - discrete cubature kalman filter _ ( cd - ckf ) , because of the continuous - discrete fashion of the state estimation task under consideration .it is also worthwhile to emphasize that the time - invariant form of the matrices and in sde ( [ eq1.1 ] ) is crucial in our research .this requirement is imposed by the cd - ckf and cd - ukf method developments in , which are grounded in the it-taylor expansion of order 1.5 ( it-1.5 ) . in the case of the variable matrices ,the underlying stochastic discretization method it-1.5 has a much more complicated form in comparison to that utilized in the latter papers ( see further details in ( * ? ? ?10.4 ) ) . thus , the cited cd - ckf and cd - ukf methods are not applicable to such models .in this section , we present all technical particulars of the used state estimators . they allow for an independent inspection of all calculations presented in sec .[ sect3 ] .we begin with the classical ekf .as said in sec .[ sect1 ] , all filters considered in our study are obtained within the discrete - discrete ( or linearized discretization ) approach and , hence , they are fixed - stepsize .the latter means that an -step equidistant mesh is introduced in the sampling interval ] of the euler - maruyama discretization ( [ eq1.3 ] ) .this is done by means of the first - order taylor expansion of the drift function around the state mean vector computed at the time , i.e. where is the corresponding partial derivative ( jacobian ) of the function with respect to and evaluated at , stands for higher - order terms of this taylor expansion . now neglecting in ( [ eq1.6 ] ) and substituting it into eqs ( [ eq1.4 ] ) and ( [ eq1.5 ] )yields the classical ekf in the form of the following -step state estimation algorithm . ' '' '' * initialization . *set the initial state mean and covariance as follows : , . + * loop body . * for ,where is the number of sampling instants in the simulation interval of sde model ( [ eq1.1 ] ) , do : + : given the filtering solution and at time , compute the predicted state mean and covariance matrix at the next sampling instant . for that , set the local initial values and and fulfil the following -step time - update recursive procedure with : + for do ; * ; * evaluate the jacobian matrix ; * , where is the identity matrix of size ; * ; * : having computed the predicted state expectation and the predicted covariance matrix , one finds then the filtering solution and based on measurement received at the time , as follows : * evaluate the jacobian at the state mean ; * * * * ' '' '' the presented cd - ekf calculates the _ linear least - square estimate _ of the system s state based on measurements .the cd - ckf method is invented by arasaratnam et al . and presented in the cited paper in great detail .again , this filter is fixed - stepsize and , hence , sde ( [ eq1.1 ] ) should be discretized on an equidistant mesh at first .however , arasaratnam et al . recommend the higher - order it-1.5 discretization for constructing their cd - ckf algorithm . it-1.5converges with order 1.5 ( * ? ? 
?that is why it is expected to provide a more accurate approximation in comparison to the euler - maruyama discretization ( [ eq1.3 ] ) on the same mesh .it results in the following discrete - time stochastic state - space model : with , where denotes the square root of , and here , the vector stands for the it-1.5 approximation to the state of sde ( [ eq1.1 ] ) at the time , , and is the drift function in the given sde model .again , implies the step size of our subdivision of the sampling interval ] of the cubature nodes , , where the vectors are defined in ( [ eq2.2 ] ) ; * create the matrix ] , where is any orthogonal rotation that lower triangularizes the right - hand matrix and produces the square root . : having computed the predicted state expectation and the predicted covariance square root , one finds then the filtering solution and based on measurement received at the time , as follows : * create the matrix ] of the propagated nodes , , where the function is from the measurement equation ( [ eq1.2 ] ) ; * ; * ; * ; * apply the cholesky decomposition for finding the measurement noise covariance lower triangular factor ; * compute the cross - covariance matrix , the square root of the innovations covariance and the square root of the filtering covariance as follows : \theta_k = \left [ \begin{array}{cc } r_{e , k}^{1/2 } & 0 \\ \bar p_{xz , k } & p_{k|k}^{1/2 } \end{array } \right]\ ] ] where is any orthogonal rotation that lower triangularizes the left - hand matrix of this formula , i.e. and are lower triangular matrices . the square root of the filtering covariance matrix at the sampling time appears as the result of this triangularization ; * ; * . ' '' '' we remark that the cholesky decomposition of the noise covariance in the measurement - update step of the cd - ckf may be optional and absent in the situation when this measurement covariance matrix is time - invariant or possesses such a trivial structure that its square root is known in advance . in all such cases , the square root is set in the beginning of the filtering procedure , i.e. in the * initialization*. our examples presented in sec . [ sect3 ] satisfy this condition .the _ unscented kalman filtering _( ukf ) originates from the paper of julier et al . , which constructs the method for discrete - time nonlinear stochastic systems .later on , various issues related to the ukf have been explored by many authors , including and so on . at the heart of the unscentedfiltering is the _ unscented transform _ ( ut ) introduced by julier et al . .the ut implies that the set of deterministically selected sigma points ( smaller sigma sets are also possible ) is taken by the rule , where , as customary , stands for the -th coordinate vector in , and means the lower triangular cholesky factor ( square root ) of the covariance matrix of a given -dimensional random variable , i.e. . 
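for illustration, the deterministic node sets used by these derivative-free filters can be sketched as follows; the snippet shows the 2n third-degree cubature nodes with equal weights, generated from the lower-triangular cholesky factor of the covariance and propagated through a generic nonlinearity (the process-noise terms of the actual filter recursions are omitted, and the analogous sigma-point set of the ut uses the parametrization discussed next in the text):

# sketch of cubature-point generation and propagation for a gaussian with mean m
# (1-d numpy array) and covariance P: nodes m + S*xi_i with xi_i = sqrt(n)*(+/- e_i)
# and equal weights 1/(2n), where S is the lower-triangular cholesky factor of P.
import numpy as np

def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                              # lower triangular square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])     # 2n cubature directions
    return mean[:, None] + S @ xi                            # columns are the nodes

def propagate_mean_cov(mean, cov, g):
    X = cubature_points(mean, cov)
    Y = np.column_stack([g(X[:, i]) for i in range(X.shape[1])])
    w = 1.0 / X.shape[1]                                     # equal weights 1/(2n)
    m_out = w * Y.sum(axis=1)
    P_out = w * (Y - m_out[:, None]) @ (Y - m_out[:, None]).T  # noise terms not added here
    return m_out, P_out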
in this paper , we utilize the classical parametrization of the mentioned ut and use the following ut coefficients : with the fixed parameters , and ( see the cited papers ) .however , other parameterizations have also been considered in literature and an exhaustive study of this issue is published in .the sigma vectors ( [ eq2.4 ] ) and weights ( [ eq2.5 ] ) allow the mean and covariance of given gaussian distribution to be calculated as follows : the main property of the above - defined ut is that if one changes the gaussian distribution with the mean and covariance by a sufficiently smooth nonlinear mapping then the mean and covariance of the transformed random variable will be calculated approximately by the same formulas ( [ eq2.6 ] ) but with the sigma vectors replaced by the transformed ones and with the mean replaced by the mean evaluated for the transformed distribution by the first formula in ( [ eq2.6 ] ) . for treating continuous - time nonlinear stochastic systems ,the continuous - discrete variant of the ukf is developed in .again , it is grounded the in the stochastic it-1.5 discretization of sde ( [ eq1.1 ] ) and uses the the additive ( zero - mean ) noise case ukf algorithm ( * ? ? ?* table 7.3 ) for estimating the discretized process model ( [ eq2.1 ] ) .eventually , one arrives at the following cd - ukf method , which is tested on nonstiff and stiff models in sec .[ sect3 ] , numerically . ' '' '' * initialization .* set the initial state expectation and covariance and also the ut coefficients ( [ eq2.5 ] ) as follows : , , ^\top ] of the sigma points computed by formulas ( [ eq2.4 ] ) with and ; * create the matrix ] of the sigma points computed by formulas ( [ eq2.4 ] ) with and ; * create the matrix ] .the initial data of all the filters are fixed as follows : ^\top=[1 , 1,\exp(-25)]^\top$ ] and .the formulated sde model is observed partially , i.e. we exploit the measurement equation with the measurement noise .[ fig:2 ] exposes the outcome of our numerical simulation of example [ ex:2 ] in nonstiff and stiff scenarios .our nonstiff scenario corresponds to , whereas the stiff one is obtained by increasing its stiffness to , as in example [ ex:1 ] .[ fig:2](a ) is scaled logarithmically . despite its reverse matter in the accuracies of the state estimation , example [ ex:2 ]undoubtedly confirms our observation on the better performance of the traditional ekf in comparison to the contemporary ckf and ukf methods for estimating stiff continuous - discrete stochastic systems .this is clearly seen in fig .[ fig:2](b ) .in contrast , the accuracies of all the filters under examination are in line with the commonly accepted opinion on their performances within our nonstiff scenario presented in fig . [ fig:2](a ) . here , the armse s of the cd - ekf are about 2% larger in average than those of the cd - ckf and cd - ukf .thus , stiff continuous - discrete stochastic systems constitute a special class of state estimation problems for which the contemporary ckf and ukf are ineffective .a theoretical exploration of this unsatisfactory performance of the modern filtering techniques on stiff continuous - time stochastic models will be an interesting topic of future research .this paper has revealed quite an interesting and counterintuitive phenomenon of better performance of the traditional ekf in comparison to the contemporary ckf and ukf methods when applied to stochastic systems modeled by sdes whose drift functions expose stiff behaviors . 
in other words, we have shown numerically that the lower-order filter outperforms the higher-order methods in the accuracy of estimation of stiff continuous-discrete stochastic systems. a theoretical justification of this phenomenon will therefore be an interesting issue for future research in filtering theory. in addition, our numerical examples suggest that stiff stochastic systems constitute a family of sde models which are much more difficult to estimate accurately than traditional nonstiff ones, and that special filtering methods still need to be devised for their effective numerical treatment. the authors acknowledge the support from portuguese national funds through the _ fundação para a ciência e a tecnologia _ ( fct ) within project uid/multi/04621/2013 and the _ investigador fct 2013 _ programme. s. j. julier and j. k. uhlmann. reduced sigma point filters for the propagation of means and covariances through nonlinear transformations. in _ proceedings of the american control conference _ , pages 887-892, may 2002. r. van der merwe and e. a. wan. the square-root unscented kalman filter for state and parameter estimation. in _ 2001 ieee international conference on acoustics, speech, and signal processing proceedings _ , volume 6, pages 3461-3464, may 2001.
this brief technical note examines three well-known state estimators that are used extensively in practice: the rather old-fashioned extended kalman filter ( ekf ) and the recently designed cubature kalman filter ( ckf ) and unscented kalman filter ( ukf ) algorithms. nowadays it is commonly accepted that the contemporary techniques always outperform the traditional ekf in the accuracy of state estimation, because of the higher-order approximation of the mean of the propagated gaussian density in the time- and measurement-update steps of these filters. the present paper qualifies this commonly accepted opinion and shows that, despite the mentioned theoretical fact, the ekf may outperform the ckf and ukf methods in the accuracy of state estimation when the stochastic system under consideration exhibits stiff behavior. this is why stiff stochastic models are difficult to deal with and still require effective state estimation techniques to be designed. * keywords : * continuous-discrete stochastic state-space system, stiff model, continuous-discrete extended kalman filter, continuous-discrete cubature kalman filter, continuous-discrete unscented kalman filter.
let be the complex plane .since can not be handled like an ordinary point here , as deserved by many complex functions , this lack is fulfilled by the compactification , namely the _ extended complex plane _ ; anyway the representation of the neighborhood of is still impracticable here .riemann cracked the problem by a stereographic projection of onto a sphere of radius 1 : this is the riemann sphere .holomorphic dynamics collect the studies on the iterates of the function of given type ( entire , meromorphic , transcendental , ) and in one or several complex variables ( depending on ) .the map is of finite degree .we will deal with the case of one variable , where .the questions in this field may rise to high degrees of complication and many ones deserve a multilateral attack rooting into complex analysis , topology , theory of numbers , uniformization theory .one of the most important goals is the study of elements not changing under iteration : the so - called _ invariants _ , showing up to dimension at most : they might be points ( dim 0 ) , lines ( dim 1 ) or surfaces ( dim 2 ) .the collection of invariants with a same property is the _invariant set_. it is convenient to split results into the ` _ _ local _ _ ' and into the ` _ _ global _ _ ' branch , in order to have a good picture of the whole corpus .a local study investigates on the properties which hold up to a finite distance from the invariant set , thus inside a bounded domain ; while a global approach wants those properties enjoyed by points being even at infinite distance , thus all over the .branches are not disjoint and related concepts act in mutual cooperation : in fact , either locally or globally , the fate of ( inverse and forward is negative or positive respectively . ] ) orbits closely relates to the nature of the fixed point and its neighborhood .results rely on the study of the _ orbits _ , i.e. the set of points generated by iterating the given function , , \dots ,\ f_n\equiv f[f_{n-1}].\ ] ] approaching to limit sets which may consist of finitely ( dim 0 ) , or of infinitely and uncountably many ( dim 1 and 2 ) points . for such setsfinitely many points of order , one speaks of _ limit cycle of periodic points _ : if the period is , then . if , we have the _ limit fixed point _ and the expression boils down to : so the fixed point can be re - framed as a cycle of period .other cycles may belong to invariants of dimension 0 ( infinitely many points ) , or 1 and , exceptionally , of dimension . ] 2 ( both , uncountably many ) . in the economy of dynamics over ,cycles of finitely or of infinitely and of uncountably many points play different roles which are explained locally in the former case , while the latter cycles are object of the global investigation . in the second case , we speak of ` _ _ julia sets _ _ ' : when they include infinitely many points , their topology is totally disconnected ; when uncountably many , they are continuous ( jordan curves or not ) . with the caution to the casuistry of related local dynamics , a same limit cycle of finitelymany points groups all orbits converging to it into macro - sets , said ` _ _ the basins _ _ _ of attraction _ ' , also defined ` _ _ fatou sets _ _ ' , in honor of pierre fatou ( 18781929 ) , who co - pioneered these investigations in the same times ( 191820 ) and independently from gaston julia ( 18931978 ) , who is credited as the first official discoverer of sets in 1918 . 
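as a small numerical aside, the forward orbit defined above can be computed for a concrete map as follows; the quadratic map, the escape radius and the iteration budget are arbitrary choices made only for this sketch:

# forward-orbit computation f_n = f(f_{n-1}) for a one-variable complex map;
# iteration stops when the orbit leaves a large disk (a practical stand-in for
# divergence) or when the iteration budget is exhausted.
def forward_orbit(f, z0, n_max=200, escape_radius=1e6):
    orbit = [z0]
    z = z0
    for _ in range(n_max):
        z = f(z)
        orbit.append(z)
        if abs(z) > escape_radius:       # orbit has left every bounded domain
            break
    return orbit

orbit = forward_orbit(lambda z: z * z - 0.75 + 0j, z0=0.1 + 0.1j)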
while the role of the basins is to include all those orbits to a same limit cycle , is the boundary between the basins .julia sets are well - known objects at all levels today , sparkling the imagination a wide range of people , from artists to mathematicians .the details of their suggestive and very complicated fractal shapes were disclosed to human eyes by the early computer experiments during late 1970s : machines revealed to be the indispensable aid for overcoming the long standing barriers of hand - written , rough plots available to those ancient mathematicians .the technologic run is continuously opening to finer results , thanks to higher screen resolutions and faster cpu saving from long time consuming computations , as required to render these images .anyway this side role shows that technology is not a priority and does not fully belong to the road - map of this field : it is just a good tool to develop those methods mastering both analytical and geometrical relations which are most wanted today ,together with more accurate numerical precision , especially for the question we are going to deal with . even at the graphical level ,we differ local from global methods , whether they focus on the previous limit sets related to the two branches . after some introductory theory culminating into the presentation of what the invariant sets , said ` _ _ hedgehogs _ _ ' , are meant in holomorphic dynamics , we will illustrate the related problem by showing how most available techniques fail to display adequately ; finally , we will discuss how to solve this question and to code an approach via pseudo - c language code .we are going to sketch out the mathematical terms of our local environment : here complex dynamics are mostly interested in the orbits behavior induced by iterates near limit cycles of the map and focuses on cycles of order 1 , i.e. fixed points .for an easier approach , we will assume fixed points from now on .iterates are operators defined as follows in the forward sense ( ) : ,\dots\ , f_n(z)\equiv f[f_{n-1}(z)];\ ] ] or backwards by a composition of inverse maps ( ) : ,\dots\ .\ ] ] the classification of is achieved by computing the modulus of the first derivative at , and it is essential to understand the nature of local invariant sets , grouped here into four main classes : 1 . _super - attracting _ fixed point , when ;[fixedpointsuper ] 2 ._ attracting _ fixed point , when ;[fixedpointattracting ] 3 . _ indifferent _ or _ neutral _ fixed point , when ;[fixedpointneutral ] 4 ._ repelling _ fixed point , when ;[fixedpointrepelling ] reader s understanding can be lessened by splitting this basic classification into two extremal cases , ( super)attracting and repelling entries , their directions are opposite but dynamical properties are very similar . with it ,one understand that the indifferent case stands at the middle way and it is the conjunction point among those ones , either in a _ inclusive _ way by presenting subcases where all dynamical features are shown at the same time and in an _ exclusive _ way , i.e. showing features which are not enjoyed by both extremal cases .in fact ( super)attracting fixed points can be reached by forward iterates , whereas julia sets include cycles of repelling points and reached by backward iterates . 
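the four-way classification above translates directly into a small routine; the numerical tolerance standing in for the exact comparisons with 0 and 1 is an implementation choice for this illustration only:

# classification of a fixed point z0 by the modulus of the first derivative f'(z0);
# a small tolerance replaces the exact comparisons with 0 and 1.
def classify_fixed_point(df_at_z0, tol=1e-12):
    m = abs(df_at_z0)
    if m < tol:
        return "super-attracting"        # |f'(z0)| = 0
    if m < 1.0 - tol:
        return "attracting"              # |f'(z0)| < 1
    if m <= 1.0 + tol:
        return "indifferent"             # |f'(z0)| = 1, the angle of f'(z0) decides further
    return "repelling"                   # |f'(z0)| > 1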
obviously , the dynamical characters ( contraction , repulsion ) of entries [ fixedpointsuper ] ) , [ fixedpointattracting ] ) and [ fixedpointrepelling ] ) hold for the direct function , while they are inverted for .indifferent points deserve a separate discussion , where even the concept of ` limit ' shall be carefully applied , because not conventionally intended like in the other cases . while entries [ fixedpointsuper ] ) , [ fixedpointattracting ] ) and [ fixedpointrepelling ] ) just need the modulus value , case [ fixedpointneutral ] ) requires a thorough investigation : the modulus is insufficient to distinguish the more complex casuistry here , as illustrated in table [ indifferenttable ] ; another parameter is demanded and our next ( and immediate ) choice is the angle .its numerical properties rule out the dynamical characters of neighboring orbits about indifferent points .the main separation , into the two great classes of ( rationally indifferent , _ parabolic _ case ) and of ( irrationally indifferent , _ elliptic _ case ) , is followed by a number of sub - cases .the former relates to one only invariant set , namely the fatou - leau flower ( see def . at p. ) , and does not branch out ; while generates a richer variety whose local dynamics get far more complicate : here a second level opens to local invariant sets , distinguishing for the numerical properties of , some of which are extremely weak and shunning the machine finite digits computation .the goal of this work is understand how and if such latter cases might be actually attackable in particular graphical terms which evince their local dynamics .one crucial tool to study the local dynamics is the _ schrder functional equation _ ( sfe ) : =a[\psi(z)],\ ] ] where is an invertible map . without loss of generalization ,let the origin be fixed for .if we replace with the meaning of the first derivative strengthens the application of sfe to local problems , then turns into this version : =\lambda\psi(z).\ ] ] girshick a. , interrante v. , haker s. , lemoine t. , _ line direction matters : an argument for the use of principal directions in 3d line drawings _ , non - photorealistic animation and rendering , proceedings of the 1st international symposium on non - photorealistic animation and rendering , annecy , france , 2000 , pp .
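As a complement to this classification, the following sketch (Python; the tolerance, the example multipliers and the helper name are my own choices, not the article's) classifies a fixed point from its multiplier lambda = f'(z0), splitting the neutral case |lambda| = 1 into the parabolic (rational rotation number) and elliptic (irrational rotation number) subcases discussed above. A finite-precision test can of course only decide rationality up to a tolerance, which is exactly the kind of numerical weakness the article is concerned with.

```python
import cmath
import math
from fractions import Fraction

def classify_fixed_point(lam, tol=1e-9, max_den=64):
    """Classify a fixed point from its multiplier lam = f'(z0)."""
    r = abs(lam)
    if abs(r - 1.0) < tol:                        # indifferent / neutral case
        alpha = (cmath.phase(lam) / (2 * math.pi)) % 1.0
        guess = Fraction(alpha).limit_denominator(max_den)
        if abs(alpha - float(guess)) < tol:       # only a finite-precision test!
            return "neutral, parabolic (alpha ~ %s)" % guess
        return "neutral, elliptic (alpha apparently irrational)"
    if r < tol:
        return "super-attracting"
    return "attracting" if r < 1.0 else "repelling"

golden = (math.sqrt(5.0) - 1.0) / 2.0             # a classical irrational rotation number
print(classify_fixed_point(0.5 + 0.1j))                        # attracting
print(classify_fixed_point(2.0))                               # repelling
print(classify_fixed_point(cmath.exp(2j * math.pi / 3)))       # parabolic, alpha = 1/3
print(classify_fixed_point(cmath.exp(2j * math.pi * golden)))  # elliptic
```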
In the field of holomorphic dynamics in one complex variable, a hedgehog is the local invariant set arising about a Cremer point; it has a very complicated shape and is governed by extremely weak numerical conditions. We give a solution to the open problem of its digital visualization, featuring both a time-saving approach and a far-reaching insight.
thermal noise is a fundamental limit to the sensitivity of gravitational wave detectors , such as the ones being built in the use by the ligo project .thermal noise is associated with sources of energy dissipation , following the fluctuation - dissipation theorem .thermal noise comes in at least two important kinds : one due to the brownian motion of the mirrors , associated with the losses in the mirrors material ; and another due to the suspension of the mirrors , due to the losses in the wires material .the limits following from these assumptions ( losses due to elastic properties of materials ) are a lower limit to the noise in the detector , since there may always be other sources of energy dissipation in imperfect clamps , mirror attachments , etc .but the correct calculation of the thermal noise limit is essential to the design of detectors and diagnostics of the already - built detectors .we will deal in this article with thermal noise of suspensions ( not of internal modes of the mirrors themselves ) , and assume only losses due to the elasticity of the suspension wires .the calculation of thermal noise can be done in several ways ,,,, .all of these follow the fluctuation - dissipation theorem ( fdt ) , but a complication arises because in suspensions there are two sources of energy ( gravitational and elastic ) , but only one of them is `` lossy '' ( elastic energy ) .moreover , the losses in the suspension wires are associated with their bending , and seems to be localized at the top and bottom of the wires .the ways to include these features into the thermal noise calculations are different enough that they have led to some confusion among the gravitational wave community . also , attention has been paid mostly to the horizontal motion of the suspension , although all modes ( angular , transverse , and vertical ) appear to some degree into the detector s noise .we present a method to calculate thermal noise that allows the prediction of the suspension thermal noise in all its 6 degrees of freedom , from the energy dissipation due to the elasticity of the suspension wires .we also show how the contributions of thermal noise in different directions can be sensed by the interferometer through the laser beam position and direction .the results will follow from the consideration of the coupled equations of the suspension and the continuous wire , first presented in for just the horizontal degree of freedom .we show how this approach encompasses and explains previous ways to approximate the thermal noise limit in gravitational wave detectors .we show how this approach can be extended to more complicated suspensions to be used in future ligo detectors . to our knowledge , this is the first time the thermal noise of angular degrees of freedom is presented , and that all suspension degrees of freedom are calculated in an unified approach .since the full treatment of the problem is somewhat involved , we present first the problem without considering the elasticity of the wire , but adding a second , lossy , energy source to the gravitational energy in the treatment of the mechanical pendulum , and introduce the concepts of `` dilution factors '' , and `` effective '' quality factors .we also start with one and two - degrees of freedom suspensions instead of 6-dof . 
with these tools ,most of the issues can be clearly presented and then we follow to the full treatment of the ligo suspensions , presenting the implications for ligo .the full treatment of this case , considering the elastic coupling of the wire to the suspension , was presented in . here , we will present the simpler `` mechanical '' treatment of this case , which will introduce the concepts of `` dilution factors '' , and measured vs. effective quality factors .we first recapitulate the calculation of thermal noise in the simplest case , a suspended point mass .the potential energy is and .the kinetic energy is .the admittance to an external force is given by the admittance has a pole at the system eigenfrequency .if is real , the resonance has an infinite amplitude and zero width .if the spring constant has an imaginary part representing an energy loss , , then the amplitude is finite , and the peak has a width determined by the complex part of the eigenfrequency .the width of the peak is characterized with a quality factor , and it is usually measured from the free decay time of the natural oscillation at the frequency : . the thermal noise is proportional to the real part of the admittance , and thus to : =\frac{wk\phi}{(k - m{\omega}^2)^2+k^2\phi^2}\ ] ] we are usually interested in frequencies well above , since the pendulum frequency in gravitational wave detectors is usually below 1 hz , and the detectors have their maximum sensitivity at 100 hz . at those frequencies ,the thermal noise is \sim\frac{4k_bt_0{\omega}_0 ^ 2\phi}{m{\omega}^5 } \label{simpleosc}\ ] ] this how we see that the measured decay of the pendulum mode can be used to predict the suspension thermal noise at gravitational wave frequencies .some beautiful examples of these difficult measurements and their use for gravitational wave detectors are presented in , for example .next , we consider a suspended point mass , but we now assume there two sources of energy , gravitational and elastic , each with its own spring constant .the potential energy is then , and =\frac{{\omega}(k_g\phi_g+k_e\phi_e)}{((k_g+k_e)-m{\omega}^2)^2+(k_g+k_e)^2\phi^2}\ ] ] if we assume that , then \sim\frac{{\omega}(k_g\phi_g+k_e\phi_e)}{(k_g - m{\omega}^2)^2+k_g^2\phi^2}\ ] ] and at high frequencies \sim \frac{4k_bt_0(k_g\phi_g+k_e\phi_e)}{m^2{\omega}^5}\ ] ] if ( `` gravity is lossless '' ) , or at least , then where .we see that is the same expression as if we had just one energy source with a complex spring constant .this is why we call the factor the `` dilution factor '' : the elastic energy is the one contributing the loss factor to the otherwise loss - free , but `` diluted '' by the small factor .the dilution factor is also equal to the ratio of elastic energy to gravitational energy .the concept of a dilution factor is very useful because it is usually easier to measure the loss factor associated with the elastic spring constant than the quality factor of the pendulum mode .this is because is usually a function of the complex young modulus , and the imaginary part of the young modulus is easily measurable for most fiber materials , and can even be found in tables of material properties .( of course , there are subtleties to this argument , in particular with thermoelastic or surface losses , but we are assuming the minimum material loss ) .this case is a particular case of the one treated in , and here we just mention it to present the approach taken to the full problem , and present some new relevant aspects .we want to include 
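The following Python sketch evaluates the fluctuation-dissipation expression above for a suspended point mass with a lossless gravitational spring k_g and a lossy elastic spring k_e(1 + i*phi_e); the numerical values (mass, pendulum frequency, dilution ratio, temperature, wire loss) are illustrative placeholders and not the article's parameters.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]

def thermal_noise_psd(omega, m, k_g, k_e, phi_e, T0=300.0, phi_g=0.0):
    """Displacement PSD S_x(omega) of a suspended point mass with a
    (nearly) lossless gravitational spring k_g and a lossy elastic spring
    k_e*(1 + i*phi_e), via S_x = (4*kB*T0/omega^2) * Re[Y],
    Y = i*omega / (k - m*omega^2)."""
    k = k_g * (1 + 1j * phi_g) + k_e * (1 + 1j * phi_e)
    Y = 1j * omega / (k - m * omega**2)       # admittance to an external force
    return 4 * kB * T0 * np.real(Y) / omega**2

# Illustrative numbers only: 10 kg mass, 0.74 Hz pendulum, elastic spring
# 1/200 of the gravitational one ("dilution"), wire loss angle 1e-4.
m, f0 = 10.0, 0.74
k_g = m * (2 * np.pi * f0)**2
k_e, phi_e = k_g / 200.0, 1e-4
f = np.logspace(-1, 3, 500)
S_x = thermal_noise_psd(2 * np.pi * f, m, k_g, k_e, phi_e)
print("sqrt(S_x) at 100 Hz: %.2e m/rtHz" % np.sqrt(np.interp(100.0, f, S_x)))
```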
the elasticity of the wire in the equations of motion , so we treat the suspension wire as an elastic beam , and then we have pendulum degree of freedom , plus the wire s infinite degrees of freedom of transverse motion .we define a coordinate , that starts at the top of the wire , and ends at the attachment point to the mirror , .correspondingly , we will have an eigenfrequency , associated with the pendulum mode , and an infinite series of `` violin '' modes .the potential energy is , and the kinetic energy is .the solutions to the wire equation of motion , with boundary conditions and are , and the equation of motion for the mass subject to an external force is .the admittance has a pole at the pendulum frequency , where , and an infinite number of poles at the violin mode frequencies , at frequencies , where .the spring `` constant '' associated with the wire and gravity s restoring force is in fact a function of frequency , although it is the usual constant for frequencies below the violin modes , where . at frequencies above the first violin mode ,the spring function is not even positive definite , or finite .the function is real at all frequencies because we havent added any source of energy loss yet .we introduce energy loss in the system by adding the wire elastic energy to the system , and then assuming a complex young modulus .the potential energy is now .the equation of motion for the wire is a fourth order equation with boundary conditions , , and . the wire slope at the bottom , , is a free parameter ( since we are assuming a point mass ) , and the variation of the lagrangian with respect to provides the fourth boundary condition for the wire : .we can find an exact solution for the wire shape as a function of , trigonometric functions of , and hyperbolic functions of , where are solutions to which approximate at low frequencies the perfect string wavenumber , and a constant `` elastic '' wavenumber , .the distance is the characteristic elastic distance over which the wire bends , especially at top and bottom clamps . in ligo test mass suspensions , mm , a small fraction of m .the approximations and are valid for frequencies that satisfy , about 12 khz for ligo , so we will use them in the remainder of this article .it is also equivalent to .we also use an approximate solution for the wire shape , good to order ( ( ! ) for ligo ) : the coefficients are functions of and , and thus , functions of frequency : in the limit , we recover the perfect wire solution , .the ratio measures how much more ( or less ) the wire bends at the bottom than at the top .the elastic energy is well approximated by the contribution of the exponential terms in the wire shape , at top and bottom : . at low frequencies where , the ratio , indicating that the wire bends much more at the top than at the bottom ( recall this is a point mass ) .the equation of motion for the mass when there is an external force is the ratio of the elastic force to the gravitational force , , is of order , and thus it was dropped . if we now consider complex , then the spring function is also complex , and the admittance will have a non - zero real part . at frequencies below the violin modeswhere , we have , an expression that suggests a split between a real gravitational spring constant and a complex elastic spring constant .however , this distinction can only be done in the approximation , and low frequencies . 
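A quick way to get a feel for the elastic length over which the wire bends at the clamps is the standard beam-under-tension expression lambda_e = sqrt(E*I/T), with I = pi*r^4/4 for a round wire; the sketch below assumes that expression (my assumption of what the elided wavenumber stands for) and uses rough single-wire numbers (steel Young's modulus, ~10 kg load) only meant to land on the millimetre scale indicated in the text.

```python
import math

def elastic_bending_length(E, r, T):
    """Characteristic flexure length lambda_e = sqrt(E*I/T) over which a
    stiff wire under tension T bends near its clamps; I = pi*r^4/4 for a
    round wire (assumed standard beam-under-tension expression)."""
    I = math.pi * r**4 / 4.0
    return math.sqrt(E * I / T)

# Rough, illustrative single-wire numbers (not the article's exact values).
E = 1.65e11            # Pa, steel
r = 0.31e-3            # m, wire radius
T = 10.3 * 9.81        # N, full load carried by one wire
print("lambda_e = %.1f mm" % (1e3 * elastic_bending_length(E, r, T)))
```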
in general , however , we _ can not _ strictly derive separate gravitational and elastic spring constants from their respective potential energy expressions : notice that the total , complex spring constant was derived from the variation of the _ gravitational _ potential energy term , which becomes complex because we use a wire shape involving the complex distance , satisfying the boundary conditions .where the approximations are valid , we can consider the case of two separate spring constants and thus a `` dilution factor '' for the pendulum loss , , where the numerical value corresponds to ligo parameters in appendix 1 .however , if we numerically calculate the _ exact _ pendulum mode quality factor , we get : this would be the _ measured _ q from a decay time of the pendulum mode , if there are no extra losses .did we make a `` factor of 2 '' mistake ?in fact , this factor of 2 has haunted some people in the community ( including myself) , but there is a simple explanation .since the elastic complex spring constant is proportional to , and is proportional to the _ square root _ of the young modulus , then when we make complex , we get .that is , we get an extra dilution factor of two between the wire loss and the pendulum loss : , very close to the actual value .this teaches us that if the spring constant of the dissipative force is not just proportional to , we will get correction factors .the thermal noise below the violin modes is well approximated by the thermal noise of a simple oscillator , as in eqn.[simpleosc ] , with natural eigenfrequency and loss .we then call the `` effective '' loss , in this case equal to the pendulum loss ( but we will see this is not always the case ) .we saw that the complex spring constant split into gravitational and elastic components .however both were derived from the _ gravitational _ force , since the _ elastic _force , , was negligible .it is because the wire _ shape _ is different due to elasticity , that the function is different from the pure gravitational expression .the way we split gravitational and elastic contributions to the spring constant and then got a dilution factor , is only valid at low frequencies .so the argument we posed in the previous section about a dilution factor applied to the calculation in the thermal noise in the gravitational wave band is in priciple not applicable here , especially when taking into account that the total force was contributed by the variation of just the _ gravitational _ potential energy , with the elasticity in the wire shape .however , using the wire shape without low frequency approximations , we can numerically evaluate the integrals that make up the potential and elastic energies ( using a _ real _ ) , and compare the ratio with the `` low frequency '' dilution factor .we show the calculation of elastic and gravitational potential energies , and their ratio , in fig.[ptmass ] . at low frequencies ,the ratio is constant , and equal to : this is the dilution factor between the wire loss and the pendulum loss , also the one to use for a simple - oscillator approximation of the thermal noise .it is _ not _ the ratio of the `` gravitational '' and `` elastic '' spring constants at low frequencies , but as we explained , we had no reason to expect that , since . 
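The factor of two can be made explicit with a first-order expansion, consistent with the argument just given (here phi_w denotes the loss angle of the Young's modulus, and k_e the elastic part of the spring constant):

\[
k_e \propto \sqrt{E\,(1+i\phi_w)} \;\approx\; \sqrt{E}\,\Bigl(1+\tfrac{i\phi_w}{2}\Bigr)
\quad\Longrightarrow\quad
\phi_{\mathrm{pend}} \;\approx\; \frac{k_e}{k_g}\,\frac{\phi_w}{2}.
\]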
at higher frequencies ,the ratio is not constant , and it gives correctly the dilution factors for the quality factors of the violin modes .notice that the loss at the violin modes increases with mode number , as noted in : , and this anharmonic behavior is well followed by the energy ratio . in summary , the concept of dilution factor is strictly true only when the total restoring _ force _ can be split into two forces , one lossless and one dissipative , both represented with spring constants . in the general case ,if we can only split the _ potential energy _ into two terms , one lossless and another dissipative , then the ratio of the energies calculated as a function of driving frequency is the exact dilution `` factor '' .moreover , this ratio can be calculated as a function of frequency , and then we get the different dilution factors for all the modes in the system . this is an important lesson that also we will use more extensively in suspensions with more coupled degrees of freedom .we now consider an extended mass instead of a point mass , with a single generic dissipative energy source .the pendulum motion is described with the horizontal displacement of its center of mass , and the pitch angle of the mass , , as in the side view of a ligo test mass , in fig.[pendulum ] .the kinetic energy is . instead of a spring constant, we have a 2x2 spring _matrix_. the potential energy is where and are the normal coordinates .the point is the point where the wire is attached to the mirror , and the angle is the angle the wire makes with the vertical . if we only consider gravitational forces , and .however , we will assume that the elements of the spring matrix can be complex , and each has its own different imaginary part .the eigenfrequencies are the solutions to the equation , or in order to calculate ( the brownian motion of the center of mass ) , we need to calculate the admittance to a horizontal force applied at the center of mass . in order to calculate , we need the admittance to a torque applied around the pitch axis .if the spring constants are complex , then the admittances are complex and we can calculate their real parts , and the thermal noise determined by them : \ ] ] \ ] ] where the eigenfrequencies are now complex : .the quality factors measurable from the free decay of each of the eigenfrequencies are . at frequencies larger than any of the eigenfrequencies ,we obtain the system may have the two eigenfrequencies close in value if ( see fig.[wpmqpm ] ) , but for , we have and ; and for , and . in both limits ,two terms cancel in the sum of loss factors in the formulas above ( for small , for example ) and we see that thus , even though it is a coupled system , thermal noise in is always associated mostly to and the pendulum eigenfrequency ; and thermal noise in is always associated with and the pitch eigenfrequency . for both degrees of freedom and , we obtain the thermal noise of single - dof systems . 
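For a numerical check of these coupled-mode expressions, the sketch below builds the 2x2 mass and (complex) spring matrices, computes the velocity admittance Y(omega) = i*omega*(K - omega^2*M)^(-1), and evaluates the x and pitch thermal-noise spectra from its real part. All parameter values (pitch inertia, pitch frequency, coupling strength, loss angles) are illustrative assumptions rather than the article's numbers.

```python
import numpy as np

kB = 1.380649e-23

def admittance_matrix(omega, M, K):
    """Velocity admittance Y(omega) = i*omega*(K - omega^2*M)^(-1) for the
    coupled x-pitch pendulum with mass matrix M and complex spring matrix K."""
    return 1j * omega * np.linalg.inv(K - omega**2 * M)

def coupled_thermal_psd(freqs, M, K, T0=300.0):
    """Return S_x, S_theta and the cross term (4kBT/w^2)*Re(Y_xtheta)."""
    Sx, St, Sxt = [], [], []
    for f in freqs:
        w = 2 * np.pi * f
        Y = admittance_matrix(w, M, K)
        pref = 4 * kB * T0 / w**2
        Sx.append(pref * Y[0, 0].real)
        St.append(pref * Y[1, 1].real)
        Sxt.append(pref * Y[0, 1].real)
    return np.array(Sx), np.array(St), np.array(Sxt)

# Illustrative parameters: 10.3 kg mirror, pendulum at 0.74 Hz, pitch at
# 0.60 Hz, a weak lossy coupling; loss angles are placeholders.
m, J = 10.3, 0.05                                # kg, kg m^2
kxx = m * (2 * np.pi * 0.74)**2 * (1 + 1e-7j)
ktt = J * (2 * np.pi * 0.60)**2 * (1 + 1e-6j)
kxt = 0.05 * np.sqrt(kxx.real * ktt.real) * (1 + 1e-6j)
M = np.diag([m, J])
K = np.array([[kxx, kxt], [kxt, ktt]])
f = np.logspace(-1, 3, 400)
Sx, St, Sxt = coupled_thermal_psd(f, M, K)
```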
unfortunately ,neither limit ( small or large with respect to ) applies to the suspension parameters in ligo test masses , and , more importantly , even though the approximation for the eigenfrequencies is relatively good for most values of , the approximation we used for the losses is not ( fig.[wpmqpm ] ) .the measurable quality factors give us , but we need and to use in the thermal noise of the pendulum , and these can not be precisely calculated from unless we know , or a way to relate it to the other loss factors .we will do this in the next section , using the elasticity of the wire .notice that the forces and torques we have used to calculate the admittances and , will each produce _ both _ displacement and rotation of the pendulum .this means that the thermal noise in displacement and angle are * not * uncorrelated .this can be exploited to find a point other than the center of mass where the laser beam in the interferometer would be sensing less displacement thermal noise than at the center of the mirror , as was done following a somewhat different logic in . if we were to calculate the thermal noise at a point a distance above the center of mass , we then need to calculate the admittance of the velocity of that point ( ) to a horizontal force applied at that point .the equations of motion are and then the thermal noise is \nonumber\\ & = & \frac{4k_bt_0}{{\omega}^2}\re\left[y_{xx}+d^2y_{{\theta\theta}}+ 2dy_{x{\theta}}\right ] \nonumber \\ & = & x^2({\omega})+d^2{\theta}^2({\omega})+2d\frac{4k_bt_0}{{\omega}^2}\re(y_{x{\theta } } ) \label{chi2}\end{aligned}\ ] ] where is the admittance of to a pure force , is the admittance of to a pure torque , and is the admittance of a displacement to a pure torque , equal to the admittance of to a pure force .there is an optimal distance below the center of mass for which the thermal noise is a minimum : this distance is .the resulting thermal noise is which is _ less _ than the thermal noise observed at the center of mass .however , the expression obtained for the distance is frequency - dependent : that means we have to choose a frequency at which to optimize the sampling point . summarizing, we have shown that whenever there are coupled motions , the thermal noise sensed at a point whose position depends on both coordinates is _ not _ the sum in quadrature of the two thermal noise ( and in our case ) , but a combination that depends on the `` cross - admittance '' . moreover ,the thermal noise of each degree of freedom can not in general be calculated just from the measured quality factors if the modes are coupled to each other strongly enough .we add to the previous 2-dof pendulum a continuum wire , to be able to add the losses due to the wire s elasticity , and calculate modal and effective quality factors , as well as the point on the mirror at which we can sense the minimum thermal noise . if we add elasticity to the problem , as in , the potential energy is .the boundary conditions for the wire equation are , and .the equations of motion for the pendulum are in order to complete the equations of motion of the pendulum , we need the shape of the wire at the bottom end . for this, we use the shape given by the expression in eqn.[shape ] , but this time the top and bottom weights are given by with and as we used earlier . 
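Building on the previous sketch (it reuses admittance_matrix, kB, M and K from there), the next few lines evaluate the sensed spectrum for a beam offset d and the offset that minimises it, d_opt = -Re(Y_xtheta)/Re(Y_thetatheta), which follows from minimising the quadratic expression above; the 50 Hz evaluation frequency is an arbitrary choice for illustration.

```python
import numpy as np

def sensed_psd(omega, M, K, d, T0=300.0):
    """PSD sensed by a beam hitting the mirror a distance d above the centre
    of mass: chi^2 = S_x + d^2*S_theta + 2*d*(4kBT/w^2)*Re(Y_xtheta)."""
    Y = admittance_matrix(omega, M, K)
    pref = 4 * kB * T0 / omega**2
    return pref * (Y[0, 0].real + d**2 * Y[1, 1].real + 2 * d * Y[0, 1].real)

def optimal_offset(omega, M, K):
    """Frequency-dependent offset minimising the sensed PSD."""
    Y = admittance_matrix(omega, M, K)
    return -Y[0, 1].real / Y[1, 1].real

w50 = 2 * np.pi * 50.0
d_opt = optimal_offset(w50, M, K)
print("optimal beam offset at 50 Hz: %.2f mm" % (1e3 * d_opt))
print("reduction factor:", sensed_psd(w50, M, K, d_opt) / sensed_psd(w50, M, K, 0.0))
```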
with the shape known , we can write the equations for with a spring matrix : where the spring functions are at this point , even though the expressions are complicated , we can calculate the complex admittances using a complex and .the analytical expressions for the admittances are quite involved , but we can always calculate numerically the thermal noise associated with any set of parameters .we can also calculate the widths of the peaks in the admittance , which would correspond to measurable quality factors for the pendulum , pitch , and violin modes .the plots presented in figs . [ wpmqpm ] and [ thnsasds ] were calculated using these solutions .fig.[wpmqpm ] shows that the frequency and quality factor of the pendulum and pitch modes vary significantly with the pitch distance . at frequencies close to the pendulum eigenfrequencies ,the thermal noise spectral densities show peaks at both frequencies . however , at higher frequencies , the thermal noise can always be approximated by the thermal noise of a simple oscilaltor as in eqn.[simpleosc ] , with an `` effective '' quality factor that fits the amplitude at high frequencies to the position of the single peak .we can similarly define an effective quality factor for the thermal noise in .we show in fig.[thnsasds ] the actual and approximated thermal noise , with their corresponding effective quality factors found to fit best at 50 hz .the effective quality factor at 50 hz can be calculated as a function of pitch distance , and we show this calculation in fig . [ wpmqpm ] .the effective pitch quality factor is well approximated , for any pitch distance , by the measurable pitch quality factor , while the pendulum effective quality factor is close to the measurable quality factor of the pendulum mode only at very small , or very large pitch distances .for the ligo pitch distance of 8 mm , the measurable pendulum quality factor is 10 times lower than the effective quality factor , and would then give a pessimistic estimate of thermal noise amplitude ._ low frequency approximation ._ at low frequencies , where , we have expressions that can help us understand how the elasticity loss factor contributes to the effective quality factors , as well as to the pendulum and pitch modes .we trade this gain in simplicity for the loss of expressions valid at or above violin mode resonances . the low frequency limit of the spring constants in eqns .[ kxxtt ] is if we assume has an imaginary part related to the material : , then we get complex spring constants . if we use these complex spring constants in eq.[wpm ]we can calculate the loss factors of the pendulum and pitch mode , and ; and if we use them in eqns .[ xtthns ] we can get the effective quality factors . using and , we get and .this represents a `` dilution factor '' in displacement of and in pitch of .these approximations fit very well the values shown in fig .[ wpmqpm ] . 
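The "effective" quality factor used here can be extracted numerically by matching the exact spectrum at a chosen reference frequency to the high-frequency single-oscillator law quoted earlier; a minimal helper, assuming that law and a reference frequency well above the pendulum modes and below the violin modes, might look like this.

```python
import numpy as np

kB = 1.380649e-23

def effective_q(S_exact_at_fref, f_ref, f0, m, T0=300.0):
    """Q_eff such that the single-oscillator high-frequency law
    S(omega) ~ 4*kB*T0*omega0^2 / (m*Q*omega^5) reproduces the exact PSD
    value S_exact_at_fref at the reference frequency f_ref."""
    w, w0 = 2.0 * np.pi * f_ref, 2.0 * np.pi * f0
    return 4.0 * kB * T0 * w0**2 / (m * w**5 * S_exact_at_fref)
```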
at low frequencies ,the equations of motion for can be derived from a potential energy using a complex , this gives us the complex spring constants that we can use to get mode and effective quality factors .we would like to break up this potential energy into a gravitational part and an elastic part , corresponding to a real gravitational spring constant ( independent of ) and a lossy elastic spring constant .we know that if we take the limit in , we obtain the regular potential energy in eq.[pe2dof ] for a 2 dof pendulum without elasticity .thus , we are tempted to say that the elastic energy is the remainder , proportional to , and thus having a complex spring constant when we consider a complex . according to this argument, we get and .this would then give us a dilution factor for the displacement loss .this is the factor by which the imaginary part of is diluted ; however , as explained before , we pick up another factor of two due to being proportional to .thus , the dilution factor between the effective quality factor and the wire quality factor is ._ energy ratios and the dilution factor._we have seen that there is another way of identifying the `` gravitational '' and `` elastic '' terms in the potential energy , using the actual potential energy expressions from which the equations of motion were derived : and . for any given applied force , or torque , we can solve the equations of motion for and .( we do nt need to invoke a low frequency approximation to do this calculation . ) then , if we calculate the ratio of elastic to gravitational potential energy for a unit applied force , we get a function which is frequency dependent , and is equal to the dilution factors for the pitch mode at the pitch frequency , for the pendulum mode at the pendulum frequency , and for the effective quality factor at high frequencies , .we show the energy values and ratios for different values of the pitch distance in fig .[ energyratios ] . _ potential energy densities._there is another interesting calculation we can do with the solution obtained for the wire shape , and that is to find out where in the wire the elastic potential energy is concentrated . in other words , we want to find a relationship between the variation of the dilution factor with frequency , and the curvature of the wire , mostly at the top and bottom clamps . since both the gravitational energy and the elastic energy involve integrals over the wire length , we can define energy densities along the wire , and calculate a cumulative integral from top to bottom .the gravitational potential energy has also a term : we define a ratio , indicating the relative contribution of this `` pitch '' term . from fig.[cumulativeenergies ], we observe that the gravitational potential energy density is distributed quite homogenously along the wire , even at the first violin mode .however , the pitch term , which can be considered a `` bottom '' contribution , contributes most of the gravitational energy when the system is excited at the pitch eigenfrequency , but also in several other cases at the pendulum frequency and at low frequencies .the elastic energy density is concentrated at top and bottom portions of length . 
at low frequencies, the top contributes the most ; at the pendulum eigenfrequency , the relative contributions depend strongly on , but the bottom contributes at least half the energy ; at the pitch eigenfrequency , the bottom contributes more than 99% of the energy ; at higher frequencies , including the violin modes , top and bottom contribute equally ._ motion of points away from center of mass_. we discussed previously how it was possible to find a point whose thermal noise displacement was smaller than the thermal noise displacement of the center of mass . now that we have expressions for the complex spring functions, we can find the optimal point and discuss the differences .the cross admittance in eq.[chi2 ] is and then the optimal point ( otimized at frequencies in the gravitational wave band , above pendulum modes ) is .notice that even though the optimal distance was deduced from the thermal noise expressions , which all involve loss factors , the optimal distance only depends on mechanical parameters . as first explained in , the interpretation of this distance is that when the pendulum is pushed at that point by a horizontal force , the wire does nt bend at the bottom clamp , producing less losses .the fact that we can recover the result from the fdt is another manifestation of the deep relationship between thermal fluctuations and energy dissipation .we show in fig[dmin ] the dependence of the thermal noise at 50 hz on the point probed by the laser beam on the mirror , and the ratio of the thermal noise for and at all frequencies .as expected , since the integrated rms has to be the the same for any distance at which we sense the motion , the fact that the spectral density is smaller at 50 hz if means that the noise will be increased at some other frequencies : this happens mainly at frequencies below the pendulum modes .there are many lessons to be learned from this exercise , but perhaps the most important one is that the explicit solutions to the equations of motion have many different important results : * using the solutions to calculate the elastic and gravitational potential energies allows us to calculate a `` dilution function '' of frequency , equal to the dilution factor at _ each _ of the resonant modes of the system , as well as to the most important effective dilution factor at frequencies in the gravitational wave band ; * we can calculate energy densities along the wire to identify the portions of the wire most responsible for the energy loss and thus the thermal noise ; * we can use low frequency approximations to find out expressions for dilution factors that can be found using other methods , explaining in this way subtleties like factors of two ; * we can calculate the admittance of an arbitrary point in the mirror surface to the driving force , and thus find out improvements or degradation of observed noise due to beam misalignments .we will now calculate the solutions to the equations of motion for the six degrees of freedom of a ligo suspended test mass , and then use the solutions to calculate the thermal nosie of all degrees of freedom , as well as the observed tehrmal noise in the gravitational wave detector .the mirrors at ligo are suspended by a single wire looping around the cylindrical mass , attached at the top at a distance smaller than the mirror diameter , to provide a low yaw eigenfrequency .this is equivalent to having a mass suspended by two wires , attached slightly above the horizontal plane where the center of mass is .the mirror s 6 
degrees of freedom are the longitudinal and transverse horizontal and , and the vertical , displacements of the center of mass ; the pitch and yaw rotations around the and axis , respectively , and the roll around the longitudinal axis .we show the coordinate system used and the relevant dimensions in fig .[ pendulum ] .the parameters used in the calculation presented are those for ligo test mass suspensions ( large optics suspensions ) .the mass of the cylindrical mirror is 10.3 kg , the diameter is 25 cm , and the thickness 10 cm .the cylindrical wires are made of steel with density and 0.62 mm diameter .we assumed a complex young modulus .the vertical distance between the center of mass and the top clamps is cm , the wires are attached to the mass a distance mm above the center of mass .the distance between the top attachment points is mm .( in the previous examples where one wire was used , we assumed the same wire material and the same test mirror , but we used a mm radius , so the stress in the wires remained constant . )each wire element has displacement in a 2-dimensional plane transverse to the wire , and a longitudinal displacement along the wire , .the kinetic energy is given by the potential energy is given by the sum of the axial strain energy and the bending ( transverse ) strain energy in each wire : plus the energy involved in rotating the mass : the wires will be attached at the top ( ) at the coordinates , where is the vertical distance of the top support from the equilibrium position of the center of mass .the wires transverse slopes at the top will be zero . at the bottomthe wires are clamped to the mass a distance above the center of mass , and a distance on the y - direction between the wires on each side of the mass .the angle is the angle at which the wires are slanted from top to bottom when looking at the mass along the optical axis . if , the wires hang vertically .the length of the wires is .the tension in each wire is .the position of the bottom attachments when the mirror is moving with a motion described by are , and the slopes at the bottom are . if we express the wire transverse and longitudinal displacements in the coordinate system , we have , and and the equations of motion become non - linear . in order to keep the problem simple , without losing any degree of freedom , we will then consider two different cases : ( i ) the wire only has displacements in the direction , and the mirror moves in degrees of freedom ; and ( ii ) the wire only has displacements in the directions , and the mirror moves in degrees of freedom .we analyze these cases separately .the boundary conditions for the wires at the top are zero displacements and slopes , and at the bottom attachment to the mass , .we combine the wires transverse displacements into , then the boundary conditions at the bottom are . 
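Some derived quantities used repeatedly below (tension per wire, stress, pendulum frequency, violin-mode fundamental) follow directly from the parameter list. In the sketch the steel density and the wire length are placeholders supplied by me, because their values are elided in this copy of the text; the printed numbers are therefore indicative only.

```python
import math

M_mirror = 10.3          # kg (as listed)
d_wire = 0.62e-3         # m, wire diameter as printed in the text
rho_steel = 7800.0       # kg/m^3  -- assumed placeholder, value elided in the text
l_wire = 0.45            # m       -- assumed placeholder, value elided in the text
g = 9.81

A = math.pi * (d_wire / 2)**2           # wire cross-section
T_per_wire = M_mirror * g / 2.0         # two wires share the load
stress = T_per_wire / A
mu = rho_steel * A                      # linear mass density
f_pend = math.sqrt(g / l_wire) / (2 * math.pi)
f_violin1 = math.sqrt(T_per_wire / mu) / (2 * l_wire)

print("tension per wire  : %.1f N" % T_per_wire)
print("stress            : %.0f MPa" % (stress / 1e6))
print("pendulum mode     : %.2f Hz" % f_pend)
print("first violin mode : %.0f Hz" % f_violin1)
```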
the solutions to the wire equations of motion and the boundary conditions are ( up to order ) will then be as in eqn.[shape ] , with where .the equations for the mass transverse dof subject to a force and torques , are we see that the combination is associated with the degrees of freedom just as for the single wire case , while the combination is associated with the yaw degree of freedom .this is easily understood when imagining the wires moving back and forth `` in phase '' ( ) , producing displacement and pitch but nt yaw ; while if they move back and forth in opposition ( ) , then the only effect is into mirror s yaw .thus , we can solve the equations for and separately from the equations for and : we will do so in the next parapgraphs . * yaw angular thermal noise . *the admittance of yaw to a torque is with . as usual , when is complex , the admittance is complex and using the fluctuation - dissipation theorem we obtain .the effect of the tilted wires with is to lower the restoring force , and thus the resonance frequency : in ligo suspensions , the frequency is 0.48 hz instead of 1.32 hz if .however , since decreases the gravitational restoring force but not the elastic force , the dilution factor increases , and so does the thermal noise . at frequencies where , . the thermal noise at frequencies below the violin modes , where , is well approximated by the thermal noise of a single oscillator with resonance frequency and quality factor , where .thus , the `` dilution factor '' is ( = 1/72 for ligo parameters , where ) .as in the case of a simple pendulum , this dilution factor is half of the ratio of the elastic spring constant to the gravitational spring constant , because the elastic spring constant has an `` extra '' dilution factor of 2 : if , then and .however , the ratio of elastic potential energy to gravitational potential energy is equal to the `` right '' dilution factor at low frequencies , as shown in fig .[ yawdilutionfactor ] .the energy ratio also gives us the right dilution factor at the violin frequencies ( ) .the yaw angular thermal noise may be seen in the detectors gravitational wave signal if the beam hits a mirror at distance to either side of the center of mass , or if it hits the mirror in a direction an angle away from longitudinal .considering both cases , the sensed thermal noise will be given by , where is the thickness of the mirror : where the approximation is valid between the pendulum mode and the first violin mode , 1hz-50hz . 
at 160 hz , where the maximum sensitivity of is expected, the yaw thermal noise is .if the yaw thermal nose is to be kept an order of magnitude below the dominant noise source , then it is required that cm and .notice that the mirror will always be aligned normal to the laser beam to make the optical cavities resonant ; however , what matters is the beam direction with respect to the coordinate system defined by the local vertical and the plane defined by the mirror in equilibrium .presumably there will be forces applied to align the mirror , but in principle they have no effect on the response of the mirror to an oscullatory driving force such as the one we imagine in the beam s direction , to calculate the admittance .thus the requirement on is on the position of the mirror _ when there are no bias forces acting _ , with respect to the ultimate direction of the beam .the beam s direction must be within 1 of the normal to the _ aligned _ mirror to keep the beam aligned on mirrors 4 km apart , but that does nt mean that the mirror has not been biased by less than to get it to the final position .* pitch and displacement thermal noise . *we now solve the equations for and .the equations for these degrees of freedom in eqns . [ 2wiresxteom ] are exactly the same as for the pendulum suspended on a single wire ( eqn .[ 1wirexteom ] ) , except for the addition of a softening term to the torque equation , due to the tilted wires ; and factors of two due to the two wires ( with about half the tension ) instead of a single wire .the extra term in the torque equation is a negligible contribution to the real part of , at the level of 1% for ligo parameters .therefore , the conclusions we obtained , with respect to the optimization of the beam location on the mirror , and the difference between effective and measurable quality factors , are equally valid here .the spring constants we obtained in eqns .[ kxxtt ] involve now a factor , instead of for a single wire .the elastic distance is however determined by the tension in _ each _ wire , . 
to keep the stress in the wires constant , the cross section area of a single supporting wireis twice the area of two each of two supporting wires .thus , the effective quality factor determining the thermal noise for the displacement thermal noise has a smaller dilution factor of , instead of 1/231 for a single wire .this is the well - know effect of reducing thermal noise by increasing the number the wires .the dilution factor for pitch is , considerably higher than the dilution factor for yaw , .the displacement thermal noise at 160 hz is , limiting the detector sensitivity to .this is expected to be lower than the thermal noise due to the internal modes of the mirror mass , not considered here .the pendulum thermal noise could be reduced by a factor , or about 40% , if the beam spot was positioned at the optimal position on the mirror .since pendulum thermal noise is not the dominant source noise , but the detectors shot noise would increase due to diffraction losses , it is not advisable for ligo to proceed this way .however , these considerations should be taken into account for future detectors , where thermal noise may be a severe limitation at low frequencies .the pitch angular noise at 160 hz , is .its contribution to the sensed motion has to take into account the coupling with displacement , and we will do this in detail inthe last section .we are now concerned with the mirror motion in its and degrees of freedom .the potential energy is for each wire , plus .notice that due to the tilting of the wires , the `` transverse '' and `` axial '' directions are not and , but rotations of these directions by the wire tilt angle .we define for each wire pointing `` out '' ( and thus in opposite directions if ) , and pointing down along the wire , from top to bottom .the boundary conditions at top are , and at the bottom , , , and , where we defined two new distances and . if we define as earlier , sums and differences of the two wires shape functions , , then the equations of motion for the mirror degrees of freedom , when subject to external forces and a torque , are the solution for the wires transverse motion satisfying the boundary conditions up to order and are of the same form as in eqn . [ shape ] , with top and bottom weights equal to the axial wire motion is where and .the wavenumber functions are , . even though the tilting of the wires produces more complicated formulas than in the pendulum - pitch - yaw case , the equations for the vertical motion decouple from the equations from the transverse pendulum displacement and roll , similar to yaw decoupling from pendulum and pitch .as before , if the wires move in phase ( transverse or axially or both ) , they produce only vertical motion ; but if they move in opposition , they produce side to side motion plus rotation around the optical axis .we analyze the two decoupled systems separately .* vertical thermal noise*. once we have solved the wire shape ( from eqns[shape],[ab+ ] , and [ wpar ] ) , we can write the equation of motion of the wire vertical displacement as or where we defined as the spring constant that was used in the pendulum - pitch case , and the spring constant of the wire . 
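For the vertical degree of freedom, a rough bounce-mode estimate follows from the usual axial wire stiffness E*A/l per wire; interpreting the elided wire spring constant this way is my assumption. With the placeholder numbers of the earlier parameter sketch this lands in the tens of Hz, well above the ~1 Hz pendulum mode, consistent with the remark below about the higher vertical mode frequency.

```python
import math

def vertical_bounce_frequency(E, A, l, M, n_wires=2):
    """Lowest vertical ('bounce') mode frequency, assuming an axial wire
    stiffness k_w = E*A/l per wire and a rigid upper support."""
    k_total = n_wires * E * A / l
    return math.sqrt(k_total / M) / (2.0 * math.pi)
```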
for ligo parameters , and for usual wires , .if the wires are not tilted and , we recover the simple case of vertical modes of a mirror hanging on a single wire .the restoring force is elastic and proportional to , so there is no dilution factor .the term added because of the wire tilting is a gravitational restoring force , much smaller than the elastic restoring force . since it is also mostly real when considering a complex young modulus , it will not change significatively the loss terms , and thus the thermal noise .the wire tilting does add , however , the violin modes to the vertical motion , and it slightly decreases ( by a factor ) the frequency of the lowest vertical mode . the vertical thermal noise at 160 hz is , 260 times the pendulum thermal noise .this is due to the lower quality factor , and the higher mode frequency .however , vertical noise is sensed in the gravitational wave interferometer through the angle of the laser beam and the normal to the mirror surface , which is not less than the earth s curvature over 4 km , ( 0.6 mrad ) . at the minimum coupling ( 0.3 mrad for each mirror in the 4 km cavity ) ,the contribution due to vertical thermal noise is 10% of the pendulum thermal noise . in advanced detectors ,vertical modes are going to be at lower frequencies due to soft vertical supports , like in the suspensions used in the geo600 interferometer , but the ratio of quality factors is just the mechanical dilution factor , so the contribution of vertical thermal noise to sensed motion will be of order , not necessarily a small number !* side pendulum and roll*. the side motion and roll of the pendulum are not expected to appear in the interferometer signal , but it is usually the case that at least the high quality - factor resonances do appear through imperfect optic alignment . the equations for the system can be written as with within the ( very good ) approximation , we can prove that the eigenfrequencies are hz and hz . the corresponding loss factors are and .if the wires are perfectly straight , the thermal noise of the side - to - side pendulum motion below the violin modes is well approximated by that of a simple oscillator with eigenfrequency and a dilution factor .the thermal noise of the roll angular motion does not depend much on the wire tilt , and is well approximated ( below violin modes ) by the thermal noise of a simple oscillator with eigenfrequency and quality factor equal to the free wire s quality factor ( there is no dilution factor ) .for any small wire tilt , however , the spring constant in eqn.[kyy ] has a large contribution of , and thus the fluctuations increase as , as seen in fig .[ sideroll ] .since the roll eigenfrequency is higher than the side pendulum eigenfrequency , and its quality factor is lower , this is a significant increase in the thermal noise spectral density , about a factor of 100 for ligo parameters .however , the gravitationalw ave detectors are mostly immune to the roll degree of freedom , as we will see later . also , the tilting of the wires introduces violin modes harmonics of apart from the usual harmonics .the violin modes that are most visible in the roll thermal noise are the harmonics of ( and are strictly the only ones present if the wires are not tilted ) .we have explored the relationship between quality factors of pendulum modes and the `` effective '' quality factor needed to predict the thermal noise of any given degree of freedom ( seen only in sensitive interferometers ) . 
for the most important longitudinal motion ,we ve seen that the effective quality factor is approximately equal to , where is the quality factor of the free wire , related to the imaginary part of the young modulus . under ideal conditions where the pendulum mode is far away from the pitch mode in frequency ,its quality factor is close to the effective quality factor , but as we have seen , the errors may be as large as 50% .the violin modes , approximately equal to , appear in the horizontal motion of the pendulum in both directions , along the optical axis and transverse to it .the violin modes show some anharmonicity , as pointed in , with the frequencies slightly higher than , and the quality factors degrading with mode number .the complex eigenfrequencies are the solutions to the equation with .this means that if we measure the quality factors of violin modes , and they follow the predicted anharmonic behavior in both directions , we can assume the losses are only limited by wire losses .we can then predict the thermal noise in the gravitational wave band with more confidence , having also consistency checks with the quality factors measured at the pendulum modes .the thermal amplitude of the peaks at the violin modes follows a simple law , corrected by the change in longitudinal qs .we show all these features in fig.[violinmodes ] . the right way to calculate the total thermal noise observed in the interferometer signal is to calculate the pendulum response to an applied oscillating force in the direction of the laser beam .the pendulum responds to a force in all its six degrees of freedom , but the motion we are sensitive to is the motion projected on the laser beam s direction .if the laser beam is horizontal , and its direction passes through the mirror center of mass , it will be sensitive to only longitudinal displacement and pitch motion . if the beam is not horizontal , and for example is tilted up or down by an angle , ( but still going through the center of mass ), we imagine an applied force applied in the beam s direction , with a horizontal component and a vertical component .the motion we are interested in is , since it is the direction sensed by the laser beam .the admittance we need to calculate is then , where is the response of the horizontal displacement to a horizontal force and is the vertical response to a vertical force .a force applied in the beam s direction , with magnitude , will have components in all 3 axes , and torques around all 3 axes too : and .the motion we are interested is , in general , .if the motion in all 6-dof was uncoupled , then each dof responds to just one component of the force or the torque , and the admittance we need would be just the sum of the admittances , each weighted by the square of a factor or a distance .however , only and are decoupled dof from the rest , and and form two coupled systems , for which we need to solve the response to a forces and torques together .we define an admittance as the admittance of displacement to an applied torque , and as the admittance of pitch to an applied horizontal force , and similar quantities . then , the total thermal noise sensed by the laser beam is or so it is a weighted sum in quadrature of the thermal noise of different degrees of freedom , plus some cross - terms .these terms may be negative , so it is possible to choose an optimal set of parameters to minimize the sensed motion , as shown in . 
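The structure of that weighted combination can be written compactly as below. This is only a schematic of how the per-degree-of-freedom admittances and the geometric weighting factors (beam offsets and beam tilt) enter; the exact weights and the full set of cross terms are those of the article's expressions, and the dictionary keys and parameter names here are mine.

```python
def sensed_total_psd(omega, Y, d_pitch, d_yaw, theta_v,
                     T0=300.0, kB=1.380649e-23):
    """Schematic combination of per-degree-of-freedom admittances into the
    PSD sensed along the beam: a weighted sum of diagonal terms plus the
    x-pitch cross term.  Y maps labels to complex admittances evaluated at
    omega; d_pitch and d_yaw are beam offsets [m], theta_v the beam tilt
    from horizontal [rad].  The article's exact weights (wire tilt, mirror
    thickness, ...) are more involved."""
    pref = 4.0 * kB * T0 / omega**2
    return pref * (Y["x,x"].real
                   + d_pitch**2 * Y["pitch,pitch"].real
                   + 2.0 * d_pitch * Y["x,pitch"].real      # coupled x-pitch term
                   + d_yaw**2 * Y["yaw,yaw"].real
                   + theta_v**2 * Y["z,z"].real)            # vertical leakage via tilt
```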
the weighting factors and distances are typical distances are few mm at most , and typical angles in the order of microradians , since an angle of such magnitude produces displacements of the order of millimeters at the beam at the other end of the arm , 4 km away .however , the angle can not be less than half the arm length divided by the curvature of earth , or .we show in fig[finalplot ] the thermal noise sensed by a beam with mm , and .we have shown several general results , and then calculated predicted thermal motions for ligo suspensions .first , we showed that if there is a single oscillator with two sources of potential energy , one with a spring constant and a dominant real part , and the other with a complex spring constant , with a dominant loss factor , then the dilution factor gives us the ratio between the oscillator s quality factor ( determining its thermal noise spectral density ) and the loss factor of .however , the elastic loss might itself have a small dilution factor with respect to the loss factor of the young modulus , for example if , then there is a dilution factor of 1/2 . we also show that when two or more degrees of freedom are coupled , the measurable quality factors at resonance may not be as useful to predict the `` effective '' quality factor used in the thermal noise spectral density , unless the eigenfrequencies are far from each other .we showed that by using approximations of order , we can easily obtain wire shapes and equations of motion for the 6 degrees of freedom of the mirror , as a function of applied oscillating forces and torques .we think that this method will be most useful when applied to multiple pendulum systems such as those used in geo600 and planned for advanced ligo detectors .however , even in the simple pendulum case this approach allows us to calculate the gravitational and elastic potential energy as a linear energy density along the wire , and the total energy .we showed that , for an applied horizontal force , the gravitational potential energy ( ) is homogeneously distributed along the wires , while the elastic potential energy ( ) is concentrated at the top and bottom , but in different proportions depending on the frequency of the applied force ( fig .[ cumulativeenergies ] .we also calculate the ratio of total elastic potential energy to gravitational energy using the solutions for the wire shape , and show that this function of frequency corresponds to the dilution factor for the eigenmode loss factors _ as well _ as for the effective quality factor that allows us to calculate the thermal noise at gravitational wave frequencies ( see figs.[ptmass],[energyratios],[yawdilutionfactor ] ) . applying our calculation of wire shapes and equations of motion that include elasticity to ligo suspensions , we show in figs [ disptn ] and [ angtn ] the resulting spectral densities of displacement and angular degrees of freedom of the mirror .more importantly , we show in fig[finalplot ] the resulting contribution of pendulum thermal noise to the ligo sensitivity curve , assuming small misalignments in the sensing laser beam ( 5 mm away from center of mass , 1 away from horizontal ) . 
as expected , the displacement degree of freedom is the one that dominates the contribution , but pitch noise contributes significantly ( 29% at 100 hz ) if the beam is 5 mm above center .but this is not added in quadrature to the horizontal noise , which makes 81% : the coupled displacement - pitch thermal motion makes up 99% of the total thermal noise .a misplacement _ below _ the center of mass will _ reduce _ the observed thermal noise , as first noted in .the contribution of yaw thermal noise ( 11% at 100hz ) is smaller than that of pitch , but very comparable .the contribution of vertical noise due to the 4 km length of the interferometer is 8% at 100 hz .the side and roll motions are coupled , but the roll contribution dominates ( due to the large angle and the assumed mm ) and is 0.7% , much smaller than the contributions of pitch , yaw and vertical degrees of freedom . when added in quadrature , the total thermal noise is 23% higher than the contribution of just the horizontal thermal noise . however ,if the vertical misplacement of the beam is 5 mm below the center of mass , instead of above , the total contribution of thermal noise is 89% of the horizontal thermal noise of the center mass .much of this work was motivated by many discussions held with jim hough , sheila rowan and peter saulson , and i am very glad to thank their insights .i want to especially thank p. saulson for carefully reading the original manuscript and making important suggestions .i also want to thank p. fritschel and mike zucker , who first asked about thermal noise of angular modes in pendulums .this work was supported by nsf grants 9870032 and 9973783 , and by the pennsylvania state university .recent reviews of the status of interferometric gravitational waves detectors around the world can be found in the proceedings of the third edoardo amaldi conference on gravitational waves , in press : m. coles , _ the status of ligo _ ; f. marion , _ the status of the virgo experiment _ ; h. lck , p. aufmuth , o.s .brozek et al.,_the status of geo600 _ ; m. ando , k. tsubono , _ tama project : design and current status _ ; and d. mcclelland , m.b .gray et al , _ status of australian consortium for interferometric gravitational astronomy_. electronic files can be found at http://131.215.125.172/info/paperindex/
We present a calculation of the maximum sensitivity achievable by the LIGO gravitational wave detector under construction, as limited by the thermal noise of its suspensions. We present a method to calculate thermal noise that allows the prediction of the suspension thermal noise in all its 6 degrees of freedom, starting from the energy dissipation due to the elasticity of the suspension wires. We show how this approach encompasses and explains previous ways to approximate the thermal noise limit in gravitational wave detectors, and how it can be extended to the more complicated suspensions to be used in future LIGO detectors.
_ low - density parity - check ( ldpc ) _ codes , combined with iterative _ belief - propagation ( bp ) _ decoding , have emerged in recent years as the most promising method of achieving the goal set by shannon in his landmark 1948 paper : to communicate reliably over a noisy transmission channel at a rate approaching channel capacity .indeed , many applications have recently adopted ldpc codes as industry standards - such as wireless lans ( ieee 802.11n ) , wimax ( ieee 802.16e ) , digital video broadcasting ( dvb - s2 ) , 10gbase - t ethernet ( ieee 802.3an ) , and the itu - t standard for networking over power lines , phone lines , and coaxial cable ( g.hn/g.9960 ) .the key feature that sets ldpc codes apart from other capacity approaching codes is that , with suboptimal iterative bp decoding , complexity grows only linearly with code block length , resulting in practically realizable decoder implementations for powerful ( long block length ) codes .( the decoding complexity of optimum _ maximum likelihood ( ml ) _ decoding , on the other hand , grows exponentially with block length , making it impractical for large block lengths . )_ ldpc block code ( ldpc - bc ) _ designs can be classified in two types : regular and irregular ._ regular _ codes , as originally proposed by gallager in 1962 , are _ asymptotically good _ in the sense that their _ minimum distance _ grows linearly with block length .this guarantees , with ml decoding , that the codes do not suffer from the _ error floor _ phenomenon , a flattening of the _ bit error rate ( ber ) _ curve that results in poor performance at high _ signal - to - noise ratios ( snrs ) _ , and similar behavior is observed with iterative bp decoding as well . however , the iterative decoding behavior of regular codes in the so - called _ waterfall _ , or moderate ber , region of the performance curve falls short of capacity , making them unsuitable for severely power - constrained applications , such as uplink cellular data transmission or digital satellite broadcasting systems , that must achieve the best possible performance at moderate bers . on the other hand , _ irregular _ codes , pioneered by luby et al . in 2001 , exhibit capacity approaching performance in the waterfall but are normally subject to an error floor , making them undesirable in applications , such as data storage and optical communication , that require very low decoded bers .typical performance characteristics of regular and irregular ldpc - bcs on an _ additive white gaussian noise channel ( awgnc ) _ are illustrated in fig .[ fig : ldpcsketch ] , where the channel snr is expressed in terms of , the _ information bit signal - to - noise ratio_. in this paper , we highlight a particularly exciting new class of ldpc codes , called _ spatially - coupled ldpc ( sc - ldpc ) _ codes , which promise robustly excellent performance over a broad range of channel conditions , including both the waterfall and error floor regions of the ber curve .we also show how sc - ldpc codes can be viewed as a type of _ ldpc convolutional code ( ldpc - cc ) _ , since spatial coupling is equivalent to introducing memory into the encoding process . 
in channel coding parlance ,the key feature of sc - ldpc codes that distinguishes them from standard ldpc codes is their ability to combine the best features of regular and irregular codes in a single design : ( 1 ) capacity approaching iterative decoding __ thresholds _ _ , characteristic of optimized irregular codes , thus promising excellent performance in the waterfall , and ( 2 ) linear growth of minimum distance with block length , characteristic of regular codes , thus promising the elimination of an error floor .as will be discussed in more detail in section [ sec : scstructure ] , this is achieved by introducing a slight _ structured irregularity _ into the tanner graph representation of a regular ldpc code .an added feature of the sc - ldpc code design is that the resulting graph retains the essential implementation advantages associated with the structure of regular codes , compared to typical irregular designs .the research establishing the performance characteristics of sc - ldpc codes relies on ensemble average asymptotic methods , _i.e. _ , the capacity approaching thresholds and asymptotically good minimum distance behavior are shown to hold for typical members of sc - ldpc code ensembles as the block length tends to infinity .( following the lead of shannon , coding theorists often find it easier and more insightful to analyze the average asymptotic behavior of code ensembles than to determine the exact performance of specific codes . ) these research results are summarized in section [ sec : scstructure ] .section [ sec : scproblems ] discusses issues related to realizing the exceptional promise of sc - ldpc codes with specific code and decoder designs suitable for low - complexity implementation at block lengths typically employed in practice : 1 ) the use of high - throughput , parallel , pipeline decoding and 2 ) the use of _ sliding - window _ decoding strategies for reduced latency and computational complexity , and section [ sec : op ] contains a short summary of several open research problems .finally , section [ sec : conc ] includes some concluding remarks along with a brief discussion of the promising use of the spatial coupling concept beyond the realm of channel coding .a -regular _ _ ldpc - bc of _ rate _ and _ block length _ is defined as the null space of an binary parity - check matrix , where each row of contains exactly ones , each column of contains exactly ones , and both and are small compared with the number of rows in . an ldpc code is called _ irregular _ if the row and column weights are not constant . it is often useful to represent the parity - check matrix using a bipartite graph called the _tanner graph_. in the tanner graph representation , each column of corresponds to a _ code bit _ or _variable node _ and each row corresponds to a _ parity - check _ or _ constraint node_. if position of is equal to one , then constraint node is connected by an _ edge _ to variable node in the tanner graph ; otherwise , there is no edge connecting these nodes .[ fig : tannergraph ] depicts the parity - check matrix and associated tanner graph of a -regular ldpc - bc . in this example, we see that all variable nodes have _ degree _ , since they are connected to exactly constraint nodes , and similarly all constraint nodes have degree . 
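the correspondence between a parity-check matrix and its tanner graph is easy to make concrete in code. the short sketch below builds the adjacency lists of the graph for a small (2,4)-regular matrix and verifies the node degrees; the matrix is an illustrative toy example of our own, not the one shown in fig. [ fig : tannergraph ].

```python
import numpy as np

# illustrative (2,4)-regular parity-check matrix: 4 checks, 8 code bits,
# every column of weight 2 and every row of weight 4 (design rate 1/2)
H = np.array([[1, 1, 0, 1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0, 1, 1, 0],
              [1, 0, 1, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 1, 0, 1]])

# tanner graph as adjacency lists: constraint node i and variable node j
# are joined by an edge exactly when H[i, j] = 1
check_neighbors = {i: np.flatnonzero(H[i, :]).tolist() for i in range(H.shape[0])}
var_neighbors   = {j: np.flatnonzero(H[:, j]).tolist() for j in range(H.shape[1])}

print([len(var_neighbors[j]) for j in var_neighbors])      # [2, 2, 2, 2, 2, 2, 2, 2]
print([len(check_neighbors[i]) for i in check_neighbors])  # [4, 4, 4, 4]
```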
in the case of irregular codes , the notion of _ degree distribution _ is used to characterize the variations of constraint and variable node degrees ( see ) .-regular ldpc - bc with block length and ( b ) the associated -regular tanner graph .the filled circles represent code bits , or variable nodes , the open circles represent parity - checks , or constraint nodes , and the darkened edges represent a cycle of length .,width=480 ] using the tanner graph , iterative bp decoding can be viewed as passing messages back and forth between variable and constraint nodes ( see , _e.g. _ , ) . on an awgnc ,for example , the messages are typically _ log - likelihood ratios ( llrs ) _ associated with the ( in general soft - valued ) received symbols , which serve as indicators of the probability that a particular code bit is a `` 1 '' or a `` 0 '' .these llrs are then passed across the graph and adjusted iteratively to reflect the parity constraints until some stopping condition is satisfied , indicating that the received symbols can be reliably decoded .certain properties of the tanner graph can serve as useful indicators of the performance characteristics of iterative decoding . in fig .[ fig : tannergraph](b ) , the darkened edges indicate a _ cycle _ of length . in general , codes with short cycles do not perform well in the waterfall , due to the build up of correlation in the iterative process . hence it is desirable to choose codes with large _ girth _ , the length of the shortest cycle , for good waterfall performance . in terms of the error floor performance ,minimum distance is the best indicator for ml decoding , and asymptotically good codes are typically not subject to an error floor . for iterative decoding , however , certain substructures of the tanner graph , such as _ trapping sets _ and _ _ absorbing sets _ _ ( an important subclass of trapping sets ) , can cause iterative decoders to fail apart from minimum distance considerations , resulting in the emergence of an error floor . hence it is desirable to select graphs without problematic trapping or absorbing sets for good error floor performance .sc - ldpc codes can be viewed as a type of ldpc - cc , first introduced in the open literature by jimenez - felstrom and zigangirov in 1999 .a rate ldpc - cc can be represented by a bi - infinite parity - check matrix ,\ ] ] composed of a diagonal band of submatrices , , , where the rows and columns of are sparse , _i.e. _ , they contain a small number of non - zero entries .if contains only zeros and ones , the code is binary ; otherwise , it is non - binary . is called the _syndrome former memory _, where is the width of each row in submatrices , and , the width of each row in symbols , is called the _ decoding constraint length_. if contains a fixed number of ones in each column and a fixed number of ones in each row , it represents a -regular ldpc - cc ; otherwise , the code is irregular . in general, describes a _ time - varying _ ldpc - cc , and if the rows of vary periodically , the code is _ periodically time - varying_. if the rows of do not vary with time , the code is _ time - invariant_. 
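as a concrete illustration of this banded structure, the sketch below assembles a finite section of the parity-check matrix of a time-invariant ldpc-cc with syndrome-former memory 1 from two sparse submatrices; the submatrices themselves are illustrative toy choices, not taken from any code discussed here.

```python
import numpy as np

def band_section(H_subs, T):
    """assemble T time units of the diagonal-band parity-check matrix
       [[H0          ],
        [H1 H0       ],
        [   H1 H0    ],
        [      ...   ]]
    for a time-invariant ldpc-cc described by H_subs = [H0, H1, ..., Hm]."""
    c, b = H_subs[0].shape                    # checks / symbols per time unit
    H = np.zeros((c * T, b * T), dtype=int)
    for t in range(T):                        # block row t (time unit t)
        for i, Hi in enumerate(H_subs):       # H_i acts on symbols from time t - i
            if t - i >= 0:
                H[c * t : c * (t + 1), b * (t - i) : b * (t - i + 1)] = Hi
    return H

# illustrative sparse submatrices (toy choices) for syndrome-former memory 1
H0 = np.array([[1, 1, 0, 1],
               [0, 1, 1, 0]])
H1 = np.array([[0, 0, 1, 0],
               [1, 0, 0, 1]])
H_section = band_section([H0, H1], T=5)       # a 10 x 20 window of the band
```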
using a technique termed _ unwrapping _ in , it is possible to take any good ldpc - bc and _ unwrap _ it to form an ldpc - cc with improved ber performance .the unwrapping procedure applies cut - and - paste and diagonal matrix extension operations to the parity - check matrix of an ldpc - bc to produce a bi - infinite parity - check matrix of an ldpc - cc , as illustrated in fig .[ fig : unwrapping](a ) , where represents a -regular block code with block length and represents a -regular convolutional code with constraint length .the bi - infinite ( convolutional ) tanner graph representation of is shown in fig .[ fig : unwrapping](b ) , and we see that the unwrapping procedure preserves the graph structure of the underlying ldpc - bc , _i.e. _ , all node degrees remain the same and the local connectivity of nodes is unchanged .( 480,341 ) ( 30,0)-regular ldpc - bc , ( b ) the tanner graph associated with the unwrapped -regular ldpc - cc , and ( c ) the terminated tanner graph associated with the unwrapped -regular ldpc - cc.,title="fig:",width=480 ] ( 0,111) ( 0,288) ( 155,288) ( 400,118)(diagonal matrix ( 400,105)extension ) ( 300,288)(cut - and - paste ) + -regular ldpc - bc , ( b ) the tanner graph associated with the unwrapped -regular ldpc - cc , and ( c ) the terminated tanner graph associated with the unwrapped -regular ldpc - cc.,title="fig:",width=556 ] extensive computer simulation results ( see , _e.g. _ , ) have verified that , for practical code lengths , ldpc - ccs obtained by unwrapping an ldpc - bc achieve a substantial _ convolutional gain _ compared to the underlying ldpc - bc , where both codes have the same computational complexity with iterative decoding and the block length of the ldpc - bc equals the constraint length of the ldpc - cc .an example illustrating this convolutional gain is shown in fig .[ fig : convgain ] .even though the tanner graph representation of an ldpc - cc extends infinitely both forward and backward in time , in practice there is always some finite starting and ending time , _i.e. _ , the tanner graph is _ terminated _ at both the beginning and the end ( see fig .[ fig : unwrapping](c ) ) .a remarkable feature of this graph termination , first noted numerically in the paper by lentmaier et al . for both the _ binary erasure channel ( bec ) _ and the awgnc and then shown analytically ( for the bec ) by kudekar et al . , is the so - called _ threshold saturation _ effect .consider for purposes of illustration the -regular ldpc - bc ensemble with awgnc iterative bp decoding threshold , which is also the threshold of the associated ( unterminated ) ldpc - cc ensemble .as the graph termination length becomes large , the threshold of the ( terminated ) ldpc - cc ensemble improves all the way to , the threshold of the -regular ldpc - bc ensemble with ml decoding ., the terminated ldpc - cc suffers a rate loss compared to the underlying ldpc - bc , but this rate loss vanishes for large . ] in other words , terminated ldpc - ccs with bp decoding are capable of achieving the same performance as comparable ldpc - bcs with ( much more complex , and impractical ) ml decoding! 
this `` step - up '' of the bp threshold to the ml threshold is referred to as threshold saturation .note that , after termination , the ldpc - cc code ensemble can be viewed as an ldpc - bc ensemble with block length .however , compared to typical ldpc - bc designs that have no restrictions on the location of the ones in the parity - check matrix and hence allow connections across the entire graph , the ldpc - cc code ensemble has a highly _ localized _ graph structure , since the non - zero portion of the parity - check matrix is restricted to a diagonal band of width .we will see later that this structure , in addition to yielding excellent iterative decoding thresholds , also gives rise to an efficient decoder implementation .threshold saturation is a result of the termination , which introduces a slight structured irregularity in the graph .termination has the effect of introducing lower constraint node degrees , _i.e. _ , a structured irregularity , at each end of the graph ( see fig .[ fig : unwrapping](c ) ) . in the context of iterative bp decoding ,the smaller degree constraint nodes pass more reliable messages to their neighboring variable nodes , and this effect propagates throughout the graph as iterations increase .this results in bp thresholds for terminated ldpc - cc ensembles that , for large enough degree densities ( and for regular codes ) , actually _ achieve capacity _ as the constraint length and the termination length go to infinity .in addition , for regular ldpc - ccs , the terminated ( slightly irregular ) ensembles are still asymptotically good , in the sense that their minimum distance grows linearly with block length .the net result of these effects is captured in fig .[ fig : tradeoff ] , which illustrates the tradeoffs between the awgnc bp decoding threshold ( in ) , the minimum distance growth rate ( ) , and the code rate ( ) for several -regular terminated ldpc - cc ensembles as a function of the termination length . we observe that , in general , as the termination length increases , the ldpc - cc rate approaches the rate of the underlying ldpc - bc and the bp thresholds of the terminated ldpc - cc ensembles approach capacity as increases . increases . ]also , linear distance growth is maintained for any finite value of .in addition to regular ensembles , fig . [fig : tradeoff ] also includes terminated ldpc - cc ensembles based on the irregular arja codes designed by divsalar et al . , an irregular ldpc - bc ensemble with linear distance growth and better thresholds than comparable regular ensembles .( irregular ldpc - bc ensembles with optimized degree profiles already have thresholds close to capacity , and they do not possess linear distance growth , so little is to be gained by applying the terminated ldpc - cc construction in these cases . )the major advantage of the regular terminated ldpc - cc constructions highlighted above is that they can achieve the same thresholds as the optimized irregular designs _ without sacrificing _ linear distance growth , while maintaining the desirable structural features of regular codes .-regular ldpc - cc ensembles , terminated arja - based ldpc - cc ensembles , and the underlying ldpc - bc ensembles .the shannon limit ( lower bound on ) and the gilbert - varshamov bound ( upper limit on ) are plotted for comparison . 
] an insightful way of viewing the design of terminated ldpc - ccs is to use a _ protograph _ representation of the code ensemble .a block code protograph is a small bipartite graph , with variable nodes and constraint nodes , that is used to represent the parity - check matrix of a rate block code with block length , where and are typically small integers .an example of a block code protograph with variable nodes of degree and constraint node of degree is shown in fig .[ fig : coupling](a ) .the corresponding parity - check matrix in this case is given by ] so the bi - infinite convolutional base matrix becomes .\ ] ] if the graph lifting operation is now applied to the convolutional protograph by placing randomly selected permutations of size on each edge of the graph , an unterminated -regular ldpc - cc ensemble with constraint length results .the coupled convolutional protograph can then be terminated , resulting in reduced constraint node degrees at both ends , as shown in fig .[ fig : coupling](d ) , and the terminated convolutional base matrix becomes .\ ] ] now applying the graph lifting operation results in a terminated -regular ldpc - cc ensemble , which can also be viewed as an ldpc - bc with block length .note that , because of the reduced constraint node degrees at each end , the graph is not quite -regular , and the code rate associated with the terminated ldpc - cc ensemble is less than the rate of the underlying ldpc - bc .however , as the termination length , the terminated ldpc - cc ensemble becomes -regular and the associated code rate .because the memory employed in the convolutional code design has the effect of coupling together several identical block code protographs , the above graphical construction of terminated ldpc - cc ensembles , also denoted as sc - ldpc code ensembles , is referred to as _ spatial coupling _while it is the asymptotic threshold and minimum distance properties of sc - ldpc code ensembles , summarized above , that have generated so much interest in these codes , some basic questions having to do with how best to employ them for practical code lengths still must be solved before they can realize their exceptional promise as a robust , near - optimal solution to the channel coding problem .the following section describes some of these practical issues .an important contribution of was the introduction of a parallel , high - speed , pipeline - decoding architecture for ldpc - ccs based on the same iterative bp decoding algorithm used to decode ldpc - bcs .this is illustrated in terms of the convolutional protograph associated with an example -regular rate ldpc - cc with and in fig .[ fig : tanner](a ) . given some fixed number of decoding iterations , the pipeline decoding architecture employs identical copies of a message - passing processor operating in parallel .each processor covers a span of variable nodes , so that during a single decoding iteration messages are always passed within a single processor . as each new set of ( in generalsoft - valued ) symbols ( represented by -ary vectors and in fig .[ fig : tanner](a ) ) enters the decoder from the channel , new llrs are computed and each processor updates ( in parallel ) exactly one set of variable nodes and one set of constraint nodes . and graph lifting factor .for example , for a given constraint length ( which determines the code strength ) , large and small result in high - speed processing , whereas the processing is slower for small and large .] 
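before continuing with the pipeline decoder, the graph lifting operation used throughout this construction can be made concrete: every 1 in the (coupled) base matrix is replaced by a permutation matrix whose size equals the lifting factor, and every 0 by an all-zero block. the sketch below does this for a coupled base matrix chosen to be consistent with the edge-spreading construction described above (a terminated chain of length 4 built from a (3,6)-regular protograph spread over three consecutive time units); it is an illustrative choice, not copied from the text.

```python
import numpy as np
rng = np.random.default_rng(0)

def lift(B, M):
    """lift a protograph base matrix B into a parity-check matrix H by
    replacing every 1 with a random M x M permutation matrix and every
    0 with the M x M all-zero block (entries > 1, i.e. parallel edges,
    would instead become sums of distinct permutation matrices)."""
    rows = []
    for r in B:
        blocks = [np.eye(M, dtype=int)[rng.permutation(M)] if e else
                  np.zeros((M, M), dtype=int) for e in r]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# coupled base matrix of a terminated chain of length 4: every variable node
# has degree 3, interior checks have degree 6, and the checks at the two ends
# have reduced degree (the structured irregularity responsible for saturation)
B_coupled = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                      [1, 1, 1, 1, 0, 0, 0, 0],
                      [1, 1, 1, 1, 1, 1, 0, 0],
                      [0, 0, 1, 1, 1, 1, 1, 1],
                      [0, 0, 0, 0, 1, 1, 1, 1],
                      [0, 0, 0, 0, 0, 0, 1, 1]])
H = lift(B_coupled, M=64)   # node degrees and local connectivity are preserved
```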
when the next set of symbols arrives , the decoding window , containing variable nodes and processors , shifts by one time unit ( corresponding to a set of received symbols ) to the right and another decoding iteration is performed . in this fashion, the decoder continuously accepts new symbols from the channel and produces ( with a delay of time units , or received symbols ) decoding estimates of symbols ( represented by -ary vectors and in fig .[ fig : tanner](a ) ) at each time unit .-regular ldpc - cc .( b ) example of a sliding window decoder with window size operating on the protograph of a -regular sc - ldpc code at times ( left ) , and ( right).,width=672 ] as noted above , for the parallel pipeline decoder architecture illustrated in fig .[ fig : tanner](a ) , the hardware processor includes only one constraint length , _i.e. _ , variable nodes . on the other hand , in the case of sc - ldpc codes (terminated ldpc - ccs ) , the standard ldpc - bc decoder architecture includes all the variables nodes in a block , which equals , _ i.e. _ , the total block length ( see fig . [fig : coupling](d ) ) .since , typically , , the pipeline architecture achieves a large saving in processor size compared to the standard ldpc - bc architecture , while , for the same number of iterations , the performance of the two decoders is identical . the latency and memory requirements of the pipeline architecture , however , involve an additional factor of the number of iterations so that represents the total decoding latency in received symbols and the total number of soft received values that must be stored in the decoder memory at any given time .this equals the length of the decoding window in fig .[ fig : tanner](a ) , where the factor of is included to account for the size of the permutation matrix . in some applications , since capacity - approaching performance can require a large number of iterations , these latency and storage requirements may be unacceptably high .this fact has spurred interest in a modified _ sliding window _decoding architecture for sc - lpdc codes with much reduced latency and memory requirements .rather than maintaining a full window size of symbols , like the pipeline decoder , the sliding window decoder uses a much smaller window , typically just a few constraint lengths .the concept of a sliding window decoder is illustrated in fig .[ fig : tanner](b ) . assuming a window size of symbols , where is the number of protograph sections within the window ,decoding iterations proceed until some stopping criterion is met or a fixed number of iterations has been performed , after which the window shifts and the symbols shifted out of the window are decoded .the key feature of a sliding window decoder is that , for , its latency and memory requirements are much less than for the pipeline decoder .values of roughly to times as large as have been shown ( see ) to result in significant savings in latency and memory with minimal performance degradation , compared to using the full window size for the ( typically large ) fixed number of iterations needed to optimize the performance of the pipeline decoder . 
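the sliding window schedule described above can be summarized by the following schematic sketch. it is only a skeleton: initialize_window_state, bp_iteration, target_symbols_reliable, and hard_decision are hypothetical placeholders standing in for a bp node-update sweep over the windowed subgraph, a stopping rule (for example a partial syndrome check or an llr threshold on the oldest section), and a bit decision; they are not functions of any particular implementation.

```python
def sliding_window_decode(llr_sections, W, max_iters):
    """schematic sliding window decoding schedule for an sc-ldpc code.
    llr_sections : list of channel-llr arrays, one per protograph section
                   (time unit) of the coupled chain.
    W            : window size, measured in protograph sections.
    the helpers called below are hypothetical placeholders (see text)."""
    decided = []
    L = len(llr_sections)
    for t in range(L):                                   # window position t
        window = llr_sections[t : min(t + W, L)]         # W sections, oldest first
        state = initialize_window_state(window, past_decisions=decided)
        for _ in range(max_iters):
            state = bp_iteration(state)                  # node updates inside the window
            if target_symbols_reliable(state):           # stopping rule on oldest section
                break
        decided.append(hard_decision(state, section=0))  # decode oldest, then shift
    return decided
```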
because the initial few ( depending on the coupling depth , or code memory ) positions of the window include a part of the graph with reduced constraint node degrees ( see fig .[ fig : tanner](b ) ) , the information passed to variable nodes during the iterations associated with these initial window positions is highly reliable .the design of the sliding window decoder insures that this highly reliable information then propagates down the graph as the window is shifted .this phenomenon is responsible for the threshold saturation effect associated with sc - ldpc codes .the same phenomenon manifests itself with the standard ldpc - bc or pipeline decoding architectures , but recent work shows that the propagation of reliable information through the graph occurs more efficiently with the sliding window architecture .thus , besides reducing latency and memory , another motivation for considering a sliding window decoder is to reduce the number of node updates ( computational complexity ) required to achieve a given level of performance .below is a partial list of open research problems related to the practical realization of sc - ldpc codes . *a topic of signficant current research interest involves a detailed performance / complexity comparison of sc - ldpc codes with ldpc - bcs . in particular , can sc - ldpcs with sliding window decoding achieve better performance than standard ldpc - bcs with less computational complexity ?a fair comparison must consider decoders with the same latency and memory requirements , _i.e. _ , the sliding window decoder for an sc - ldpc code must be compared to an ldpc - bc whose block length is equal to rather than . in addition , the degree profiles of the two codes should be the same , _e.g. _ , a -regular sc - ldpc code should be compared to a -regular ldpc - bc .some of the factors to be considered include choosing the most efficient ( message passing ) _ node update schedules _ for decoding and designing appropriate _ stopping rules _ for deciding when enough iterations have been performed to allow reliable decoding .for example , practical stopping rules for the sliding window decoder could be based on a partial syndrome check , analogous to the stopping rule normally applied to the decoding of ldpc - bcs , or on the llr statistics associated with the next set of symbols to be decoded , for example by using a threshold criterion . * the results of imply tradeoffs favorable to sc - ldpc codes ( assuming sliding window decoding ) compared to ldpc - bcs in the waterfall region of the ber curve. 
there has been only limited investigation , however , regarding the error floors that might result from the use of sc - ldpc codes .since error floor performance is a major factor in selecting codes for applications that require very low bers , such as data storage and optical communication , it is important to consider those factors that contribute to decoding failures for sc - ldpc codes in the error floor .an important aspect of such an analysis involves establishing the precise connection between the problematic graphical substructures ( trapping sets , absorbing sets ) that can cause decoding failures and various decoding parameters , including scheduling choices ( parallel or serial node updates , for example ) , amount of quantization for the stored llrs , and window size ( for sliding window decoding ) .* sc - ldpc codes are known to have excellent asymptotic properties , but there are many open questions regarding code design , in particular for short - to - moderate block lengths : scaling the lifting factor and the termination length to achieve the best possible performance for sc - ldpc codes of finite length ; studying puncturing as a means of obtaining sc - ldpc codes with high rates , so as to provide rate flexibility for standards applications ; finding ways to mitigate the rate loss associated with short - to - moderate block length sc - ldpc codes , such as puncturing and partial termination , without affecting performance ; and exploiting the potential of connecting together multiple sc - ldpc code chains .in addition , analyzing and designing more powerful sc - ldpc codes , such as non - binary or generalized sc - ldpc codes , may be attractive in areas such as coded modulation and flash memories .* members of an ldpc code ensemble that are _ quasi - cyclic _ ( qc ) are of particular interest to code designers ,since they can be encoded with low complexity using simple feedback shift - registers and their structure leads to efficiencies in decoder design .the practical advantages of qc - ldpc - bc designs also carry over to the design of qc - sc - ldpc codes. it should be noted , however , that once the qc constraint is applied , the asymptotic ensemble average properties noted in sec .[ sec : scstructure ] do not necessarily hold ( since we are now choosing a code from a restricted sub - ensemble ) , and particular qc - sc - ldpc codes must be carefully designed to insure good performance .in this paper we have attempted to provide a brief tutorial overview of the exciting new field of spatially coupled low - density parity - check codes .capacity approaching iterative decoding thresholds and asymptotically good minimum distance properties make these codes potentially very attractive for future industry standard applications .we traced the origins of sc - ldpc codes to the development of low - density parity - check convolutional codes in , we used a visually convenient protograph representation to describe their construction , we discussed several issues related to practical decoder implementations , and we summarized a few still remaining open research problems . finally , it has recently been shown that the improved thresholds associated with spatial coupling apply generally regardless of the particular physical situation or communication channel ( see , _ e.g. 
_ , ) .further , although space limitations do not allow us to provide details , we note that the concept of spatial coupling of a sequence of identical copies of a small structured graph ( a protograph ) is applicable to a wide variety of problems and has been shown to lead to improved system performance in areas as diverse as multiterminal source and channel coding , cooperative relaying , compressed sensing , secure communication , and statistical physics .m. g. luby , m. mitzenmacher , m. a. shokrollahi , and d. a. spielman , `` improved low - density parity - check codes using irregular graphs , '' _ ieee transactions on information theory _ , vol .47 , no . 2 ,585598 , feb .2001 . a. jimnez felstrm and k. sh .zigangirov , `` time - varying periodic convolutional codes with low - density parity - check matrices , '' _ ieee transactions on information theory _ , vol . 45 , no . 6 , pp .21812191 , sept .a. e. pusane , r. smarandache , p. o. vontobel , and d. j. costello , jr ., `` deriving good ldpc convolutional codes from ldpc block codes , '' _ ieee transactions on information theory _ , vol .57 , no . 2 ,835857 , feb .m. lentmaier , a. sridharan , d. j. costello , jr . , and k. sh .zigangirov , `` iterative decoding threshold analysis for ldpc convolutional codes , '' _ ieee transactions on information theory _ , vol .56 , no .10 , pp . 52745289 , oct .s. kudekar , t. j. richardson , and r. l. urbanke , `` threshold saturation via spatial coupling : why convolutional ldpc ensembles perform so well over the bec , '' _ ieee transactions on information theory _ , vol .57 , no . 2 ,803834 , feb . 2011 .m. lentmaier , d. g. m. mitchell , g. p. fettweis , and d. j. costello , jr ., `` asymptotically good ldpc convolutional codes with awgn channel thresholds close to the shannon limit , '' in _ proc .6th international symposium on turbo codes and iterative information processing _ , brest , france , sept .2010 .a. r. iyengar , m. papaleo , p. h. siegel , j. k. wolf , a. vanelli - coralli , and g. e. corazza , `` windowed decoding of protograph - based ldpc convolutional codes over erasure channels , '' _ ieee transactions on information theory _ , vol .58 , no . 4 , pp . 23032320 , apr . 2012 .m. lentmaier , m. m. prenda , and g. fettweis , `` efficient message passing scheduling for terminated ldpc convolutional codes , '' in _ proc .ieee international symposium on information theory _ , st .petersburg , russia , aug .2011 .s. kumar , a. j. young , n. macris , and h. d. pfister , `` a proof of threshold saturation for spatially - coupled ldpc codes on bms channels , '' in _ proc .fiftieth annual allerton conference _ , monticello ,il , oct . 2012 .
since the discovery of turbo codes 20 years ago and the subsequent re - discovery of low - density parity - check codes a few years later , the field of channel coding has experienced a number of major advances . up until that time , code designers were usually happy with performance that came within a few decibels of the shannon limit , primarily due to implementation complexity constraints , whereas the new coding techniques now allow performance within a small fraction of a decibel of capacity with modest encoding and decoding complexity . due to these significant improvements , coding standards in applications as varied as wireless mobile transmission , satellite tv , and deep space communication are being updated to incorporate the new techniques . in this paper , we review a particularly exciting new class of low - density parity - check codes , called spatially - coupled codes , which promise excellent performance over a broad range of channel conditions and decoded error rate requirements .
hospitals throughout the world are facing a unique problem , as the aged population is increased , health - care population is decreased .telecommunication community is not doing much work in the field of medicine however , there is a need of remote patient monitoring technology . to fulfill this task , it is required to build communication network between an external interface and portable sensor devices worn on and implemented within the body of the user which can be done by basns .basns is not only useful for remote patient monitoring , but can also establishes within the hospitals ; like in operation theaters and intensive care units. it would enhance patient comfort as well as provide ease to doctors and nurses to perform their work efficiently .ban is used for connecting body to wireless devices and finds applications in various areas such as entertainment , defense forces and sports .the basic step in building any wireless device is to study the transmission channel and to model it accurately .channel modeling is a technique that has been initiated by a group of researchers throughout the world [ 1 ] .they have studied path loss and performed measurement campaigns for wireless node on the body [ 2 - 8 ] .some researchers have taken into account , the implanted devices which are the area of ban called as intra - body communication [ 9 ] .for the short range low data rate communication in ban , measurement groups have considered ultra - wide band ( uwb ) as the appropriate air interface .the models developed by measurement campaigns are only path loss models and do not provide any description of propagation channel .it is important to study the propagation mechanism of radio waves on and inside the body in order to develop an accurate ban channel model .this study will show the underlying propagation characteristics .it would help in the development of ban transceivers which are much suited to the body environment .for a given position of the transmitter on or inside the body it is required to find out the electromagnetic field on or inside the body for a ban channel model .this is quite a critical problem that requires a large amount of computational power .therefore , it is necessary to derive an analytical expression which will perform this objective . in shortthis determines which propagation mechanism takes place , that is reflection , diffraction and transmission [ 10 ] .an appropriate method of doing this task is by using dyadic green s function .the solution of canonical problems , such as cylinder , multi layer and sphere have been solved in electro magnetic ( em ) theory , using dyadic green s functions [ 11 - 13 ] .recently , wbasns shows potential due to increasing application in medical health care . in wbasns , each sensor in the body sends it s data to antenna , both sensors and antenna are worn directly on the body .examples include sensors which can measure brain activity , blood pressure , body movement and automatic emergency calls .we require simple and generic body area propagation models to develop efficient and low power radio systems near the human body . 
to achieve better performance and reliability, wave propagation needs to be modeled correctly .few studies have focused on analytic model of propagation around a cylinder ( as human body resembles a cylinder ) using different functions .these functions involve mathieu function , dyadic green s function , maxwell s equations , finite difference time domain ( fdtd ) and uniform theory of diffraction ( utd ) .some of these approaches have already proven effective for evaluating body area communication system proposals .finite difference time domain had successfully measured the communication scenarios .complete ultra - wide band models have been developed using measurements and simulations , however they do not consider the physical propagation mechanism .so , the researchers have to rely on ad - hoc modeling approaches which can result in less accurate propagation trends and inappropriate modeling choices [ 14 , 15 ] .uniform theory of diffraction depends on a ray tracing mechanism allowing propagation channel to be explained in terms of ray diffraction around the body .it typically based on high - frequency approximations which is not valid for low frequencies , also not useful when antenna is very close to the body [ 16 ] .a generic approach is proposed to understand the body area propagation by considering the body as a lossy cylinder and antenna as a point source by using maxwell s equation . a solution for a line source near lossy cylinderis derived using addition theorem of hankel functions then the line source is converted into the point source by taking inverse fourier transform .the model accurately predicts the path loss model and can be extended to all frequencies and polarities but this is limited in scope and not always physically motivated [ 17 ] .mathieu functions are also used for body area propagation model .the human body is treated as a lossy dielectric elliptic cylinder with infinite length and a small antenna is treated as three - dimensional ( 3-d ) polarized point source .first the three - dimensional problem of cylinder is resolved into 2-d problem by using fourier transform and then this can be expanded in terms of eigen functions in cylindrical coordinates . by using mathieu function exact expression of electric field distribution near the human body is deduced [ 18 ] . the propagation characteristics of cylindrical shaped human body have been derived using dyadic green s functions .the model includes the cases of transmitter and receiver presents either inside or outside of the body and also provides simulation plots of electric field with different values of angle .all the above proposals describe the propagation characteristics of cylindrically shaped human model [ 19 ] .we have developed a simple but generic approach to body area propagation derived from dyadic green s function ( dgf ) .this approach is for arm motion of human body . when the human arm is moved in direction , propagation characteristics of spherical shapedhave been derived using dgf .first , we use spherical vector eigen functions for finding the scattering superposition .four cases are considered for either transmitter or receiver is located inside or outside the body .finally , simulated results of electric field distribution with different values of angle have shown .in this paper , spherical symmetry is used to represent in and around the arm of the human body . 
a point on body is a sensor , denoted by x which represents ( ,, ) coordinates in the spherical coordinate system and is the location of transmitting antenna .( ,, ) are unit vectors along radial , angle of elevation from z - axis and azimuthal angle from x - axis as shown in figure 1 .let be electric field at point due to current source .the general formula for electric field can be written as : is volume of source , is the current source , is the dyadic green s function is the radian frequency of transmission and is magmatic permeability of the medium .a dyadic green s function is a type of function used to solve inhomogeneous differential equations subject to specific initial conditions or boundary condition .as we are considering arm motion of human body , so spherical symmetry is used by taking shoulder as center . for this , spherical eigen functions are used to write the dyadic green s function .dyadic green s function is basically depends on the spherical vector eigen functions [ 14 ] .these eigen functions are , and , where is the wave number of medium , is an integer , is a real number and is a point in space .these all are the solutions to the helmholtz equation having three components in , and .these vector eigen functions are given by [ 19 ] : \end{aligned}\ ] ] \end{aligned}\ ] ] \end{aligned}\ ] ] in above eigen functions , laplacian operator in the spherical coordinate system is .it s mathematical expression is given as : represents the point in space having components , and .solution of helmoltz equation is which is the scalar eigen function [ 19 ] .= z_{n}(\eta r ) p^{h}_{n}(\cos\theta)_{\sin}^{\cos } h \phi\end{aligned}\ ] ] is a general spherical function of order . for spherewe use hankle function of first and second order which are defined as : = ( -1)^{n}(\eta r)(\frac{d}{dr\eta^{2 } r})^n(\frac{\sin(\eta r)}{\eta r})^{n}\end{aligned}\ ] ] is the propagation constant in direction of , whereas . the laplace operator is applied and find the eigen values , and by using eigen function . the vector eigen function in ( 2 ) , ( 3 ) and ( 4 ) becomes : these three vector eigen function are perpendicular among themselves as well as with respect to each other [ 11 ] . in the form of matrices ,vector eigen functions can be written in this form , in scattering problems , it is desirable to determine an unknown scattered field that is due to a known incident field . using the principle of scattering superposition we can write dyadic green s equation as superposition of direct wave and scattering wave . 
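the building blocks of these eigenfunctions are standard special functions and can be evaluated numerically. the sketch below, written against the usual spherical-harmonic conventions, evaluates the spherical hankel function of the first kind and the scalar generating function from which the vector wave functions are derived; the order, frequency, and observation point are arbitrary illustrative choices, and a complex-exponential azimuthal factor is used in place of the separate cosine/sine forms in the text.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, lpmv

def spherical_hankel1(n, x):
    """spherical hankel function of the first kind, h_n^(1)(x) = j_n(x) + i*y_n(x),
    used for outgoing (radiating) waves."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def scalar_eigenfunction(n, m, eta, r, theta, phi, outgoing=True):
    """scalar generating function  psi = z_n(eta*r) * P_n^m(cos(theta)) * exp(i*m*phi)
    from which the vector wave functions M and N are obtained; z_n is the
    spherical bessel function j_n for standing waves or h_n^(1) for outgoing waves."""
    z = spherical_hankel1(n, eta * r) if outgoing else spherical_jn(n, eta * r)
    return z * lpmv(m, n, np.cos(theta)) * np.exp(1j * m * phi)

# example: evaluate psi for n = 3, m = 1 at 1 ghz in free space, 20 cm from the origin
c0   = 2.998e8                       # speed of light, m/s
eta0 = 2 * np.pi * 1e9 / c0          # free-space wavenumber at 1 ghz
val  = scalar_eigenfunction(3, 1, eta0, r=0.20, theta=np.pi / 3, phi=np.pi / 4)
```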
in figure 2 ,concept of scattering superposition is shown in which there is a sensor located inside the arm of body considered as sphere .the sensor transmits the wave to antenna which is divided in two parts as direct wave and scattered wave .the direct wave is considered as wave directly transmits from sensor to transmitter and scattered wave is composed of reflection and transmission waves .therefore , general equation of scattering superposition is illustrated as : dyadic green s equation is divided in to two parts as direct wave ] .the direct wave corresponds to direct from source to measuring point and scattered is the reflection and transmission waves due to presence of dielectric interface .the direct component of dgf is given as [ 11 ] : in the above equation of dgf , is for first case and is second case.the denotes the conjugation and is for the dyadic product .here we introduces superscript ( 1 ) for outgoings wave and other for standing waves .if the vector eigen function has the superscript ( 1 ) then , is chosen for and should be used otherwise . herewe discuss four different scenarios for the scattering components of dgf along with boundary conditions .( i ) both receiver and transmitter are inside the body .( ii ) the receiver is located outside and transmitter is located inside the body .( iii ) the receiver is located inside and transmitter is outside the body .( iv ) both transmitter and receiver are located outside the body .receiver and transmitter are in the order : denotes the medium inside human body and is for free space medium .in this case , receiver and transmitter both located inside the body so we can write dyadic green s equation as , r_12 \times \begin{cases } n_{nhk1}(x_0)^t\\ m_{nhk1}(x_0)^t \end{cases } \end{split}\end{aligned}\ ] ] contains reflection coefficients . is calculated in literature using boundary conditions , its matrix is given by [ 16 ] : ^ -1\\ \times[h_n(\eta_2d)h_n(\eta_1d)-h_n(\eta_1d)j_n(\eta_2d)]^-1 \end{split}\end{aligned}\ ] ] in the above equation of reflection coefficient d represents radius of spherical body model , .the matrices for and are expressed as : is either , is the derivative of w.r.t the whole argument , and p=1,2 in this case dgf can be written as : {12 } \begin{pmatrix } n^{\ast}_{nhk1}(x_0)^{t}\\ m^{\ast}_{nhk1}(x_0)^{t } \\\end{pmatrix } \end{split}\end{aligned}\ ] ] in the above equation is a transmission coefficient matrix and given as : ^ -1\\ \times \begin{pmatrix } \varepsilon_{1 } & 0\\ 0&\varepsilon \\\end{pmatrix } \end{split}\end{aligned}\ ] ] r_21\\ \begin{cases } n_nhk(x_0)^t m_nhk(x_0)^t \end{cases } \end{split}\end{aligned}\ ] ] similarly as , is the reflection coefficient matrix and it is given as : ^ -1\\ \times[j_n(\eta_2d)j_n(\eta_1d)-j_n(\eta_1d)j_n(\eta_2d ) ] \end{split}\end{aligned}\ ] ] in this case , we can write dgf as : t_{21}\\ \begin{pmatrix } n^{\ast}_{nhk1}(x_0)^{t}\\ m^{\ast}_{nhk1}(x_0)^{t } \\\end{pmatrix } \end{split}\end{aligned}\ ] ] is the transmission coefficient matrix , given as : ^ -1\\ \times \begin{pmatrix } \varepsilon_{2 } & 0\\ 0&-\mu_{2}\\ \end{pmatrix } \end{split}\end{aligned}\ ] ]in this section we presents the equation which is required for simulation . with the help of simulation it will be easy to study the propagation characteristics of arm motion making spherical pattern . 
is stated as : we have defined earlier , arm motion at different angles are presenting spherical pattern .therefore , we simulate the radio propagation environment having radius , megnatic permeability for human body ( assume that permeability of human body is approximately equal to air ) , similarly electric permittivity .the dielectric constant is mean value of all tissues of human body .we take the surrounding homogeneous medium to be air with megnatic permeability and electric permittivity .frequency up to ghz is used for ban communication , which is for ism band .the transmission frequency for simulation is 1ghz .we assumed that the transmitter is acting as point source at .the radial distance of receiver is from the central spherical axis of shoulder . for the simulation, we assumed that receiver move along the azimuthal angle for varying values of and different heights from the center of shoulder . for simulation ,we consider equation ( 25 ) in which is used in matrix form of eigen functions .this equation has an integration which is not possible so we approximate it to summation .thus , we approximate equation ( 25 ) in to this form : and are the truncation limits and are the step size of integration . and are so small that could be ignored and has no effect on calculations .we only presents electric propagation of multi - path reflection and transmission waves of scattering dgf.this is more significant to represent the attribute of arm motion as compared to the direct dgf .figure 2,3 and 4 show the scattering dgf ( simulation ) of electric field with the change in .versus angle ,with different values of and the angle is , width=340,height=340 ] using equation ( 27 ) , we have three components in , and direction . every component of electric field is plotted as a function of azimuthal angle .the values of is ( 0 to 2 ) , whereas at z coordinate different values of receiver has been plotted .the electric field is plotted , which is vector addition of three components .these all parameters are shown in the simulation graph . by taking the value of , figure shows that magnitude of electric field is decreasing as the distance of receiving antenna is increasing from the sensor ( transmitting antenna ) .the plot shows electric field component at different values of , varying from to . in this case, is decreasing from ( to )db by replacing the receiving antenna from cm to cm . versus angle ,with different values of and the angle is , width=340,height=340 ] in figure , when we take value of , magnitude of electric field again decreases as the antenna moves away from sensor .for the values of from to , has different values from ( from )db . by changing position of receiving antenna from cm to cm .versus angle ,with different values of and the angle is , width=340,height=340 ] the values of distance and are same , as described in the above graphs by only replacing the parameter . 
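stepping back from the individual plots, the truncation just described, in which the integral of equation (25) is replaced by a finite double sum, can be sketched generically as follows. the callable term(n, k) is a hypothetical placeholder that would return the integrand (the dyadic green's function contracted with the source current) for one pair of summation indices; it is not part of any particular implementation.

```python
import numpy as np

def field_truncated_sum(term, n_max, k_max, dk):
    """approximate a field expression of the form
         E  ~  integral over k of ( sum over n of term(n, k) ) dk
    by truncating n at n_max and replacing the k integral by a riemann sum
    with step dk, as done to generate the simulation results.
    term(n, k) is a hypothetical callable supplied by the caller."""
    total = 0.0
    for k in np.arange(dk, k_max + dk, dk):
        for n in range(n_max + 1):
            total = total + term(n, k) * dk
    return total

# a convergence check: once n_max and k_max are large enough and dk is small
# enough, further refinement should leave the result essentially unchanged,
# which justifies ignoring the discarded tail of the expansion
```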
similarly in figure values of change from ( to )db by moving the position of receiver away from transmitting antenna , which in return decreases the electric field intensity .we have proposed a generic approach to derive an analytical channel modeling and propagation characteristics of arm motion as spherical model .to predict the electric field around body , we have formulated a two step procedure based on dyadic green s function .first , we derive eigen functions of spherical model then calculated the scattering superposition to come across reflection and transmission waves of antenna .the model includes four cases where transmitter or receiver is located inside or outside of the body .this model is presented to understand complex problem of wave propagation in and around arm of human body .simulation shows that electric field decreases when receiver moves away from the shoulder with change of angle .t. zasowski , f. althaus , m. stager , a. wittneben , and g. troster , `` uwb for noninvasive wireless body area networks : channel measurements and results , '' proc .ieee conf .on ultra wideband systems and technologies , pp .285 - 289 , nov 2003 . a. fort , j. ryckaert , c. desset , p.d .doncker , p. wambacq , and l.v .biesen , `` ultra - wideband channel model for communication around the human body , '' ieee journal on selected areas in communications , vol .927 - 933 , april 2006 .a. alomainy , y. hao , x. hu , c.g .parini , and p.s .hall , `` uwb on- body radio propagation and system modelling for wireless body - centric networks , '' iee proc .107 - 114 , 2006 .y. zhao , y. hao , a. alomainy , and c. parini , `` uwb on - body radio channel modelling using ray theory and sub - band fdtd method , '' ieee trans . on microwave theory and techniques , special issue on ultra- wideband , vol . 54 , no .1827 - 1835 , 2006 .kovacs , g.f .pedersen , p.c.f .eggers , and k. olesen , `` ultra wideband radio propagation in body area network scenarios , '' ieee 8th intl . symp . on spread spectrum techniques and applications , pp .102 - 106 , 2004 .ruiz , j. xu , and s. shiamamoto , `` propagation characteristics of intra - body communications for body area networks , '' 3rd ieee conf . on consumer communications and networking , vol .509 - 503 , 2006 .cottis , g.e .chatzarakis , and n.k .uzunoglu , `` electromagnetic energy deposition inside a three - layer cylindrical human body model caused by near-?eld radiators , '' ieee trans . on microwave theory and techniques , vol .38 , no . 8 , pp .415 - 436 , 1990 .t.zasowski , f. althaus , m. stager , a. wittneben and g. troster , `` uwb for noninvasive wireless body area networks : channel measurement and results.''in 2003 ieee conference on ultra - wide band system and technologies,2003.pp.285 - 289 .a. alomainy , y. hao , x.hu,c.g . parini and p.s. hall , `` uwb on - body radio propagation and system modeling for body centric networks , '' in ieee communication proceeding , vol .1 , february 2006 , pp .107 - 114 .d. a. macnamara , c , pistorius and j. malherbe , in troduction to the uniform geometrical theory of diffraction .artech house : boston , 1991 .
monitoring health information using wireless sensors worn on the body is a promising new application. the human body acts as a transmission channel for wearable wireless devices, so electromagnetic propagation modeling is needed to characterize the transmission channel in a wireless body area sensor network (wbasn). in this paper we present wave propagation in a wbasn in which the antenna is modeled as a point source close to the arm of the human body. four possible cases are presented, in which the transmitter and receiver are located inside or outside of the body. the dyadic green's function is used to propose a channel model for the arm motion of a human body model. this function is expanded in terms of spherical vector wave functions and the scattering superposition principle. the paper gives the analytical derivation of the spherical electric field distribution model and simulations of the derived expressions. keywords: wireless body area networks, dyadic green's function